Buyers told to prepare for ‘probing’ AI questions at renewal
Insurers are likely to ask more probing questions about artificial intelligence (AI) at future renewals as underwriters seek to better understand the risks and coverage implications, Chris Williams, partner at law firm Clyde & Co in London, told Commercial Risk.
Over the past year, insurers have moved from thinking broadly about AI to wanting to understand how insureds are using the technology and the potential implications for risks and coverage, said Williams, who specialises in intellectual property and technology disputes. As a result, many insurers are now preparing questions on AI as part of the underwriting process for property and casualty policies, he explained.
“Insurers will be more active in asking questions and engaging with insureds as to what AI they are using, what they are using it for and what governance structures they have by way of risk mitigation,” said Williams, as France hosted the Artificial Intelligence Action Summit, where representatives from more than 100 countries discussed a host of AI-related issues.
Understanding an insured’s governance will be key to assessing AI risks, continued Williams.
“It’s not just a case of asking whether a company uses AI, but how it is using it and how it is embedded. A lot of [underwriting questions] will focus on what governance and policies there are within the organisation,” said Williams. “If you are an insurer looking to provide cover, and the insured can show they have a board member with oversight of AI, and that they have rolled out an AI policy and surrounded it with training and guardrails, the insurer can take a great deal of confidence,” he said.
And insurers will be looking for buyers to proactively offer up information on AI and governance, he continued. “Insurers do not want to be chasing insureds for information and getting limited answers in response. Insurers really like open dialogue, which is good when you start talking premiums and renewals,” Williams said.
“So insurers will increasingly expect companies to have given thought to their use of AI even before they are asked the question, and to have implemented demonstrable measures to identify and mitigate at least the anticipated risks. Some insurers and brokers are already receiving packs of information from some clients, including the AI policy, the identification of an AI-responsible individual within the organisation, and a schedule of what AI systems are being used, what the likely risks of that use will be and the steps that have been taken to mitigate those risks,” said Williams.
He advises risk managers to be proactive and prepare in advance.
“Take steps now. If you can, get to a position where you can provide insurers with comfort that you have taken even the most basic steps, such as implementing a written governance framework and clear AI policy, ideally with a senior individual with overall responsibility for that organisation’s use of AI. These relatively unburdensome steps, which are in any event best practice absent insurance considerations, will invariably help when engaging with insurers,” he said.
To date, only a small number of insurers have moved to provide affirmative cover for certain AI risks in their policies, mainly under cyber insurance. Williams does not expect the insurance market to introduce broad exclusions for AI in property and casualty policies in the foreseeable future, but with the technology evolving so rapidly, he stressed that position may change.
Currently, insurers are taking a different approach to AI than they took with cyber, according to Williams. With cyber, in order to address so-called ‘silent’ or non-affirmative cover for cyberattacks, insurers applied broad exclusions to P&C policies, pushing cyber cover into standalone cyber insurance or bespoke policy extensions.
“Insurers are asking lots of questions [on AI], not to be difficult with insureds but to find out if they are already covered,” said Williams. “Through this process, they can identify novel uses of AI where insureds may not be covered, which makes it easier to identify whether bespoke cover is needed,” he added.
“It would appear that some insurers are taking a measured approach – not because they are behind the curve, but as a conscious decision to first properly understand how insureds are using AI, what risks this use presents, what mitigation and guardrails can be built in, and ultimately how they can best obtain this information at the underwriting stage,” continued Williams.
Insurers are currently developing AI-specific policies, but these address particular risks arising from AI development and systems rather than the wider risks, explained Williams. “Perhaps the reason for that is that no one yet fully understands the wider risks, and that by asking questions they will gain a much clearer understanding of what these risks are, how they can be mitigated and whether they are already covered or not,” he said.
AI could also be a source of future litigation, in areas such as breach of intellectual property rights, including copyright, and data rights, according to Williams. LinkedIn customers have filed a class action in the US alleging that the networking platform shared their personal data with third parties to train generative AI models without their consent.
There is also the potential for litigation when companies find the performance of AI systems falls short of expectations, said Williams.
“Businesses are starting to realise the limitations of what AI, in particular generative AI, can do, especially when it comes to improving the bottom line. Some companies have invested in highly expensive and sophisticated AI systems on the promise that they could deliver a number of benefits, when it is often clear these benefits are in fact illusory. This is likely to be due to a combination of businesses having purchased systems for fear of missing out and also some bold promises having been made by sales teams,” he said.
Negligence by directors and officers and the role of AI in decision-making are other areas to watch, explained Williams. Companies may want to keep some form of “version control” or record to show the rationale behind certain AI-assisted decisions, as the sketch below illustrates.
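Such a record need not be elaborate. As a purely illustrative sketch of what a decision log might capture, assuming a simple append-only JSON-lines file (the field names, model name and file path below are hypothetical, not an industry standard or anything Williams prescribes):

    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class AIDecisionRecord:
        # Hypothetical audit entry for one AI-assisted decision.
        decision_id: str
        model_name: str        # which AI system produced the recommendation
        model_version: str     # the "version control" element Williams mentions
        inputs: str            # what the system was asked
        model_output: str      # what it recommended
        human_reviewer: str    # who signed off on the decision
        rationale: str         # why the recommendation was accepted or overridden
        timestamp: str

    def log_decision(record: AIDecisionRecord,
                     path: str = "ai_decision_log.jsonl") -> None:
        """Append the record to an append-only JSON-lines audit log."""
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    # Example entry: a human accepting a model's recommendation.
    log_decision(AIDecisionRecord(
        decision_id="2025-0142",
        model_name="claims-triage-model",  # hypothetical system name
        model_version="v3.2.1",
        inputs="Claim ref 88410: water damage, commercial property",
        model_output="Recommend fast-track settlement",
        human_reviewer="j.smith",
        rationale="Recommendation consistent with policy terms; accepted.",
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))

A contemporaneous log of this kind would give a company something concrete to show insurers, and potentially courts, about how and why an AI recommendation was acted on.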