Beazley has no plans to exclude AI
Cyber and technology errors and omissions insurance can cover most current uses of artificial intelligence, according to London-based specialty insurer Beazley, which told Commercial Risk that it currently has no plans to exclude AI.
Insurers have not moved to exclude artificial intelligence, although a small number have changed wording to explicitly cover some AI-related exposures. As yet, Beazley, a key player in the cyber market, has not felt the need to go down this route, explained Bob Wice, head of underwriting management for Beazley’s global cyber risks division.
“We haven’t come out with anything to clarify that a definition in our policy needed to be expanded to include artificial intelligence, although a couple of insurers have done so… Overall, there has not been anything like an artificial intelligence exclusion at this point,” he said.
Wice does not expect the market to widely exclude AI for now, but he does anticipate growing interest from underwriters in insureds’ AI activities and governance.
“It’s a work in progress. We are in the early stages of understanding how claims might come through, and we are in a position where we have to evaluate. This is a dialogue with brokers, insureds and the technology companies that we have close relationships with: what are the risks they are worried about, what are the risks they are working to manage against, and how do they think their insurance coverage should work,” he said.
“We as the risk bearers need to think about how you price for that, how you set a deductible or retention that is adequate, and how you make sure you can underwrite against the management of those risks. It’s early days, and we haven’t seen books of business affected by this so much that there has been [the need for] an artificial intelligence exclusion deployed across the market,” added Wice.
He said that AI presents a number of risks for companies depending on how it is used. For example, exposures tend to be higher for companies that monetise artificial intelligence by incorporating AI systems into products and services, he explained.
Key risks associated with AI include data privacy and infringement of intellectual property, as well as liability from hallucinations, where AI systems generate false information. AI also has implications for cybersecurity, with criminals potentially using the technology to generate malware, more effective phishing emails and deepfakes.
For the most part, these risks would be covered by existing insurance products, according to Wice.
“Whether it’s copyright and intellectual property, cybersecurity or the provision of an AI service… At this point I do not see why there wouldn’t be [cover], unless an underwriter does not like what they are seeing and puts in an AI exclusion. It hasn’t happened yet, and I don’t see it happening [at present]. That could change with use cases that go beyond what a cyber insurance policy should cover,” Wice said.
For example, cybersecurity risks related to AI should be covered under standalone cyber insurance, according to Wice.
“Talking about the cybersecurity threat, most of the time that coverage will flow through if there is a use of AI that results in a security breach or a privacy breach, whether that is data exfiltration or information that is being fed into these machines. The coverage for the use of AI by threat actors, for the most part, is pretty broad,” he said.
“It’s simply the way the coverage works. If AI was used by the threat actor to execute a [cyberattack], it should be business as usual [from a cyber insurance perspective],” he added.
There could also be cover for AI-related IP infringement under sub-sections of cyber insurance, explained Wice. “Cyber policies are a package of coverages, and one of those ubiquitously provided is media liability, which includes intellectual property infringement, like copyright and trade mark infringement,” he said.
Technology errors and omissions insurance, which includes cybersecurity and privacy cover, also provides protection for organisations that offer a professional or technology service, or product, to their customers.
“That is where we get into whether there is a monetisation of artificial intelligence embedded into the technology product or service. Is AI or a large language model a technology product and are they charging for it? Those are the dynamics that you have to punch through to each individual use case,” said Wice.
“But for the most part, if there is a situation where AI falls down in what it is supposed to do, you could see a situation where it would be covered under the policy, if it’s part of the provision of a technology product or service offered for a fee,” he said.
Most AI use cases are insurable, as long as appropriate governance is in place, explained Wice.
“We need to be cognisant of the fundamentals of what we are doing – insuring fortuitous or unforeseeable risk. We don’t want to be insuring risks that are known to likely result in liability claims, or where organisations are not conducting good business practices or are not meeting their compliance requirements. If an organisation chooses to do something to generate revenue and monetise the use of artificial intelligence, key considerations are whether they are doing that knowing they are violating the law or third-party intellectual property rights, or whether there is a foreseeable risk of something going wrong,” he said.
“If we do come to a point where there is a flouting of compliance obligations, that becomes a risk that is very difficult to insure,” said Wice.