
The EU Artificial Intelligence Act: implications for the insurance industry

Artificial Intelligence (AI) is poised to play a pivotal role in the ongoing digital transformation across all industries and society as a whole. In the insurance sector, the combination of AI and the Internet of Things (IoT) is opening up a wide range of opportunities for future growth and advancement. A notable indicator of this shift was EIOPA’s 2018 thematic review of the use of Big Data Analytics in motor and health insurance.[1] The review found that 31% of European insurance companies were already incorporating AI across their value chains, with an additional 24% at the “proof of concept” stage. This trend has persisted and was underscored in a 2021 report on AI governance principles by EIOPA’s Stakeholder Group on Digital Ethics.[2]

AI is progressively revolutionizing the insurance industry, granting insurers a powerful advantage in bolstering operational efficiency and enhancing customer-centric services. AI innovations are enabling better use of the ever-expanding pool of data, including visual and sensor data. By enabling real-time data analysis, AI is elevating various aspects of the insurance domain, from refining underwriting decisions and expediting claims settlements to fortifying defenses against fraud.

According to McKinsey, AI and machine learning (ML) techniques are poised to automate 25% of insurance industry processes by 2025, resulting in substantial cost reductions for companies.[3] Juniper Research’s projections suggest that within property, health, life, and motor insurance, the cumulative annual savings are anticipated to surpass $1.2 billion by 2023, marking a five-fold increase compared to 2018.[4]

These accrued savings are cascading down to consumers, empowering insurers to offer more tailored, precise, and competitively priced insurance products and services. However, the escalating adoption of AI in the insurance sector has raised corresponding concerns about the transparency and explainability of AI systems.

Background on the EU AI Act

In April 2021, the European Commission introduced a draft of the EU AI Act, which sets forth regulations governing the development, commercialization, and utilization of AI-driven products and services. This legislation will apply to all businesses operating within the European Union, spanning various industries.[5]

The EU AI Act establishes a framework for categorizing AI systems into four distinct groups based on the level of “risk” associated with their applications.[6] The primary aim is to promote the creation of responsible and trustworthy AI systems, beginning with the inception of the software. Additionally, the act will prohibit the use of AI applications that pose an “unacceptable risk.” Presently, the category of “unacceptable risk” includes AI systems involved in biometric identification within public spaces and social scoring applications.
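
The four-tier framework above can be illustrated with a short sketch. The tier names follow the Act’s structure (unacceptable, high, limited, minimal), but the example use cases and the `classify_use_case` helper below are hypothetical illustrations only, not an official mapping; actual classification depends on the final legal text and supervisory guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g. social scoring)
    HIGH = "high"                  # subject to strict technical/governance obligations
    LIMITED = "limited"            # subject to transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "life insurance eligibility scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "internal document search": RiskTier.MINIMAL,
}

def classify_use_case(name: str) -> RiskTier:
    """Look up a use case in the illustrative table (defaults to minimal risk)."""
    return EXAMPLE_USE_CASES.get(name, RiskTier.MINIMAL)

print(classify_use_case("life insurance eligibility scoring").value)  # high
```

The point of the sketch is that obligations attach to the *use* of a system, not to the underlying technology: the same model could sit in different tiers depending on how it is deployed.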

Although the implementation of the EU AI Act is expected to be gradual, some countries, such as Spain, planned to test the risk framework in a sandbox environment as early as October 2022.[7] This proactive approach signals a willingness to assess and refine the regulatory framework before its full enactment.

On December 9, 2023, the European Parliament and the Council reached a political agreement on the AI Act. This marks a crucial step in the development and approval of this legislation. The next stages in the process involve the formal voting on the AI Act by both the European Parliament and the Council. Once these formal votes are completed and the AI Act is officially adopted, the finalized text will be published in the Official Journal. This publication will mark the completion of the legislative process and the AI Act’s entry into force as law within the European Union.

Implications for the insurance industry

Although AI is already making its presence felt across the insurance value chain, recent innovations, such as the emergence of generative AI, indicate that we are just scratching the surface of AI’s potential impact on the sector. While the widespread adoption of generative AI by insurers is still in its early stages, insurance companies are actively exploring its potential applications. These range from offering consumer advice and guiding policyholders through claims processes to enhancing pricing and underwriting procedures.

The AI Act carries specific implications for the insurance sector. Firstly, it is a “horizontal” regulation designed to cover all relevant sectors simultaneously. This cross-sectoral nature presents challenges in integrating its provisions and oversight into highly regulated and supervised industries like insurance. Secondly, the AI Act envisions the development of harmonized standards by European Standardisation Organizations and the provision of guidance and compliance tools to assist both providers and users in meeting the requirements. Thirdly, the AI Act adopts a risk-based approach, with the majority of its requirements applying to AI systems identified as high risk.

Given the unique characteristics of the insurance sector, the development of standards and guidance to facilitate AI Act implementation is vital to ensure a smooth application and prevent potential conflicts with existing insurance legislative and supervisory frameworks. While some use cases in life and health insurance are likely to be considered high-risk, many other potential use cases will be subject to transparency obligations and voluntary codes of conduct. Ensuring coherence between sectoral requirements and AI Act standards, while maintaining proportionality, may require additional effort.

The AI Act, with its focus on high-risk insurance activities and General Purpose AI Systems (GPAI), introduces a comprehensive regulatory framework that insurance companies and service providers in the sector need to adhere to.

High-Risk Insurance Activities

Under the AI Act, high-risk AI systems in insurance include those that make decisions affecting individuals’ eligibility for health and life insurance coverage. Insurance companies involved in such activities are subject to a wide range of technical and governance measures. These measures include:

  1. Risk Mitigation Systems: Implementing systems to minimize the risks associated with high-risk AI.
  2. High-Quality Data Sets: Using reliable and accurate data sets in AI systems.
  3. Activity Logging: Keeping detailed records of AI system activities.
  4. Comprehensive Documentation: Creating thorough documentation related to AI systems.
  5. Clear User Information: Providing clear and transparent information to users.
  6. Human Oversight: Ensuring human oversight of AI processes.
  7. Robustness, Accuracy, and Cybersecurity: Ensuring a high level of robustness, accuracy, and cybersecurity in AI systems.
  8. Fundamental Rights Impact Assessment: Conducting mandatory assessments of the impact of AI on fundamental rights.
  9. Complaint Mechanism: Establishing a mechanism for individuals to raise complaints and request explanations regarding decisions made by high-risk AI systems that affect their rights.

Insurance companies may incur these obligations whether they are deployers (users) or providers (developers) of AI systems for high-risk activities.

General Purpose AI Systems (GPAI)

Many activities in the insurance sector, such as dialogue generation for virtual assistants, optimized underwriting and pricing through historical data analysis, and marketing/sales content generation, can fall under the category of General Purpose AI (GPAI).

The AI Act proposes a two-tier regulation system for GPAI, depending on their impact:

  1. Low Impact GPAI: For GPAI models with low impact, various transparency requirements apply. These include creating technical documentation, complying with EU copyright law, and publishing detailed summaries of the training data used.
  2. High Impact GPAI: For high-impact GPAI models, especially those posing systemic risk, more stringent obligations apply. These may include model evaluations, risk assessment and mitigation measures, adversarial testing requirements, and cybersecurity protections and reporting.

The AI Act’s approach aims to strike a balance between fostering innovation and ensuring transparency, accountability, and cybersecurity in AI systems used within the insurance sector.
