
AI in Insurance: Navigating Regulatory Scrutiny and Ethical Challenges

Updated: September 10, 2025

Reading Time: 4 minutes

Insurance companies are increasingly integrating AI into underwriting, pricing, claims handling, and customer service. This significantly speeds up data processing, improves the accuracy of risk assessment, and enhances interactions with policyholders. Along with these technological advantages, however, regulatory concerns are growing. In this article, we explore this topic and share what insurers need to know.

The use of AI poses new challenges for insurers: ensuring the transparency of algorithms, preventing unfair discrimination, complying with fair-pricing requirements, and building sustainable internal control systems. The legal and ethical aspects of AI are becoming an integral part of corporate governance in the insurance business.

Regulatory pressure is shaping new rules of the game

The widespread adoption of artificial intelligence has not gone unnoticed by regulators. The United States, the most developed market for AI technologies, has already taken concrete steps toward regulating it. To date, several US states, including California, Colorado, and New York, have adopted laws or guidance governing the use of AI in insurance. Moreover, 24 states have adopted their own versions of the 2023 National Association of Insurance Commissioners (NAIC) Model Bulletin on the Use of AI by Insurers.

The main goals of the new regulations are to minimize the risk of unfair discrimination and to ensure fairness, transparency, and accountability in the use of intelligent systems.

The model requirements adopted by these states include:

• creating systems for internal testing of AI;

• implementing corporate governance and control structures;

• mandatory written policies and procedures;

• the need for transparency with consumers;

• requirements for certification and quality control of algorithms.

These measures are aimed at ensuring that AI technologies serve the public interest and comply with established standards of insurance regulation.

Fair and unfair discrimination: an eternal problem in a new context

While the use of artificial intelligence opens up new horizons for insurers in the field of risk assessment, the fundamental principles of insurance regulation remain unchanged. The National Association of Insurance Commissioners (NAIC) emphasizes that insurance is fundamentally built on the principle of objective risk discrimination. This means that differences between policyholders are permissible and even necessary, but only to the extent that they are based on objective, actuarially sound data about the probability of an insured event.

However, the introduction of AI into insurance processes brings new risks, primarily related to the possibility of unfair discrimination. Such situations can arise when algorithms make decisions based on data directly or indirectly related to protected characteristics of policyholders, such as race, gender, age, or ethnicity. Even without explicit intent to discriminate, the use of correlated variables can lead to results that are considered unfair and violate the principles of equal access to insurance products.
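The kind of internal testing described above can be illustrated with a simple fairness check. The sketch below is a minimal, hypothetical example of flagging potential disparate impact in a model's approval decisions; the column names, data, and the 0.8 threshold (the common "four-fifths rule" heuristic) are assumptions for illustration, not a regulatory standard.

```python
# Illustrative sketch: compare a model's approval rates across groups and
# flag large gaps for further actuarial and legal review. Field names,
# sample data, and the 0.8 threshold are hypothetical.

def approval_rates_by_group(records, group_key, approved_key):
    """Compute the approval rate for each group in the records."""
    totals, approved = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + (1 if r[approved_key] else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

# Toy decision log from a hypothetical underwriting model.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = approval_rates_by_group(records, "group", "approved")
ratio = disparate_impact_ratio(rates)
if ratio < 0.8:
    print(f"Flag for review: disparate impact ratio {ratio:.2f}")
```

A real testing program would go much further (statistical significance, proxy-variable analysis, actuarial justification for any gap), but even a check this simple shows the principle: differences in outcomes must be surfaced and explained, not discovered by regulators first.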

In this context, the "AI Principles" adopted by the NAIC in 2020 provide important guidance for all entities using AI in insurance. Insurers are expected to make AI-assisted decisions in a fair and ethical manner, which means actively working to minimize algorithmic bias. In addition, it is critical to ensure the transparency of models, i.e. the ability to explain to customers how decisions affecting them are made. Accountability for the performance of AI systems means that insurers cannot shirk responsibility by pointing to the autonomy or complexity of algorithms. Finally, the sustainability and reliability of the technologies used must be guaranteed throughout the entire life of the AI systems.

At the same time, the main regulatory benchmark for assessing the legality of using artificial intelligence in insurance activities remains the Unfair Trade Practices Act. This law prohibits discriminatory or misleading methods of conducting insurance business, and its provisions are fully applicable to the new technological realities associated with the use of intelligent systems.

Thus, even in the era of digital transformation of the insurance market, the basic legal logic remains unchanged: technology should serve to enhance justice and protect public interests, and not undermine them.

Corporate governance: new AI literacy

The introduction of artificial intelligence into insurance processes requires companies to seriously review their corporate governance. One expectation for boards of directors is what is known as "AI literacy": the skills, knowledge, and awareness of the opportunities, limitations, and risks associated with the use of intelligent systems in insurance. Without an understanding of how AI works and the threats it poses, directors will not be able to fully perform their responsibilities to oversee technology solutions and manage the associated risks.

One of the key requirements is to ensure that the use of AI technologies is aligned with the goals and values of the organization. This means that when implementing new solutions, insurers should consider not only economic feasibility, but also compliance with their core principles, including protecting the interests of clients and complying with regulatory standards. The second important element is to increase the technological competence of board members. This means that corporate governance participants must have a sufficient level of understanding of technological processes to effectively evaluate proposed strategies, manage risks, and make informed decisions.

In addition, companies must develop clear criteria for assessing the effectiveness of AI systems. Such criteria are necessary for an objective assessment of how intelligent solutions contribute to achieving the organization's goals, what results they bring, and whether they meet the stated expectations for transparency, fairness, and accuracy.

Strategic integration of AI into the company's long-term plans is also becoming an important area of work. The use of intelligent technologies should not be viewed as a temporary initiative, but as an element of a sustainable corporate strategy aimed at increasing competitiveness in the context of digital transformation.

Finally, the creation of a written program for the responsible use of AI, the so-called AIS (Artificial Intelligence System Program), is becoming a mandatory requirement for insurers, especially where AI makes or significantly influences decisions about the provision or cost of insurance services. Such a program should regulate the processes of development, implementation, control, and audit of AI systems; establish requirements for transparency, fairness, and accountability; and assign responsibility for the management of AI systems at the level of the company's top management.



Joey Mazars

Contributor & AI Expert