AI regulation to shake up liability insurance

Developments in artificial intelligence (AI) and related regulation will require an overhaul of liability insurance policies and the creation of new AI-specific coverages, experts have told Commercial Risk.

The use of AI by business has accelerated in recent years, going well beyond automating back-office processes. Increasingly, AI is being used to support research, product design and manufacturing, to power autonomous vehicles, to hire talent and to create media content. The use of AI, especially in products and services, raises important questions for liability, such as who is responsible when AI causes physical harm or property damage.

AI is likely to shift where liability lands. This will have implications for most liability lines, in particular product liability and professional indemnity insurance, explained Lisa Williams, global head of casualty at Zurich Insurance.

“The risk landscape is certainly changing, but in terms of coverage it is more about shifting of responsibility, such as professional indemnity to product liability or motor to product liability. The landscape for product liability will change significantly, with upcoming changes to EU regulation, collective redress and third-party litigation funding, and I do foresee an increase in frequency and potential severity as class actions have the opportunity to be enacted,” she said.

Regulations such as the EU’s proposed AI Act and Products Liability Directive will play a big role in shaping AI liability and insurance going forward, according to Lisa Williams. The AI Act would classify AI systems by risk and mandate various development and use requirements, while a proposed AI Liability Directive aims to establish a framework for AI-related liability claims and damages. The update to the EU’s product liability framework will also include digital and AI products.

Going forward, liability insurance policies will need to adapt to reflect AI and changes in regulation, said Lisa Williams. “We map out as many scenarios as possible to work out where there is existing coverage today and where [liability] is shifting, as well as to identify any gaps in cover and to create the natural wordings and language. That is a process we are in now for both the Product Liability Directive and AI,” she said.

The Product Liability Directive, for example, will change previous definitions and introduce a new liability for “loss or corruption of data”. Such cover is not standard in product liability policies but can be taken out under a professional indemnity tech or cyber policy, Lisa Williams explained.

“I can see the need to look at AI from a product liability and professional indemnity perspective and start to innovate so that AI regulations and challenges can be dealt with, and so that there are no areas of greyness between these two products,” she said.

Insurance policies were mostly drafted before AI developed to where it is today, and many do not explicitly deal with the new technology, according to Chris Williams, partner at law firm Clyde & Co. For insurers, this raises important questions around pricing, coverage and future claims, he explained.

“Insurers are having to deal with insureds that are using AI technology that is not within the scope of the policy when it was written. There are also questions around which policy the risk would fall under, or how many policies would potentially cover an AI risk. Even when insurers write new policies to account for AI, which policy is most suited to its inclusion?” said Chris Williams.

“Insurers will need to establish whether AI risks are covered under current insurance arrangements, and where they are not, how they can integrate and write that cover for a future [should they wish to do so] when the AI landscape, from both a technological and regulatory perspective, is changing rapidly,” he added.

Over the next year, insurers will need to review professional indemnity and product liability policies, and either create specific policies for AI liability or combine existing policies to ensure they keep pace with regulatory changes, according to Lisa Williams.

“Traditionally, clients have purchased separate product liability and professional indemnity policies, which have different triggers, deductibles and coverage. It is important as professional underwriters that we are clear around clarification of where AI liability needs to sit, and this is something we are doing at Zurich. We have working groups looking at this from the professional indemnity and product liability perspective, and I am sure other insurers will be doing the same,” she said.

Despite developments in generative AI, there are currently no completely new categories of insurable risk, said Jaymin Kim, director of commercial strategy at Marsh McLennan. New technology does not always correspond to new insurable risk categories, she told Commercial Risk.

Aspects of generative AI may give rise to new insurable risks over time as the technology develops further. For example, the potential for generative AI systems to develop emergent capabilities they were not intentionally designed to have makes it much harder to identify not only beneficial use cases but also potential vulnerabilities and attack vectors, Kim explained.

“In our analysis, there are no new insurable risk categories that have emerged because of generative AI as a technology so far, but we are in the early days. Rather, what we are seeing is the manifestation of existing and familiar risk categories in new ways, which may generally be addressed by existing cyber, media, casualty and first-party insurance products,” she said.

In many cases, risks associated with AI – like bodily injury, financial loss or physical damage – are already covered by insurers. “These risks already exist but the question here is which, if any, insurance policy will cover them and where will the liability with AI ultimately fall? Insurers need to get up to speed and start asking customers how they are using AI and what they are doing to mitigate the risks,” Chris Williams said.

How insurers respond to emerging generative AI exposures and regulatory changes is an important consideration, believes Kim. “So far, we do not see wholesale exclusions or limitations when it comes to generative AI exposures. From a technological standpoint, it is not creating new insurable risk categories. But if carriers start excluding or limiting AI-specific exposures – for example as the result of legal/regulatory uncertainty or the emergence of large claims – that might create a [coverage] gap when it comes to how companies assess and mitigate liability,” she said.

“Existing insurance policies are broad enough, we believe, to address many of the exposures that might emerge from generative AI. That said, as the legal and regulatory landscape changes, there may be new liability considerations for companies that develop and/or use AI systems,” said Kim.

If buyers meet disclosure requirements, liability insurance should cover AI-related risks, according to Lisa Williams. But companies that develop or co-develop AI, or those that incorporate AI into manufacturing, may want to consider buying tech liability, cyber or specific AI covers, she added.

Clyde’s Chris Williams also noted that AI can prevent and mitigate losses, which will need to be reflected in underwriting. “AI enables insurers to price risk more accurately and competitively; it will help to spot fraudulent claims and reduce some of the administrative burden. For the insured, the effective adoption of AI can help to reduce claims arising in the first place and when claims do arise, the value might be lower,” he said.

For example, virtual reality training could help reduce the risks employees face when carrying out hazardous tasks, and AI could even be used to replace humans altogether in certain dangerous situations. “Embracing the opportunities presented by AI in this way could result in a highly favourable impact on both the number and the value of claims, while enabling the insurer to more accurately assess and price the risk,” said Chris Williams.

Generally, insurers are not excluding AI risks, but companies will need to disclose AI-related activities and governance to insurers, according to Zurich’s Lisa Williams.

“Most insurers will research a client’s products and the technology behind them, but they will need to declare and explain in their submission and presentation what they are doing from an AI perspective. It’s important that we get a good understanding of their part in that, and how much is contracted out. There are many things we will need to understand to underwrite that and to give advice,” she said.

Kim believes risk managers, brokers and insurers need to work together to understand the risks of AI and implications for insurance cover.

“As with any evolving business practice, when using generative AI it is important to partner with your broker and other insurance stakeholders to make sure you understand whether existing policies are likely to be sufficient to address corresponding risk and insurance considerations. It needs to be an ongoing discussion, not a one-off conversation about generative AI,” she said.
