AI will shift liability rather than create new risks

Lines between product and professional liability likely to blur

Artificial intelligence raises challenging questions for liability as the technology continues to outpace regulatory and legal developments, experts have told Commercial Risk.

The release of generative AI like ChatGPT has accelerated the use of the technology by business, from aiding medical and scientific research to product design and manufacturing. AI is increasingly being incorporated into products and services, for example to power autonomous vehicles, provide advice through chatbots, and create media content and art.

AI brings a raft of opportunities but also potential risks, including financial and reputational issues, according to Chris Williams, partner at UK law firm Clyde & Co. “AI simply cannot be ignored. It will play an increasingly important role for businesses, that’s inevitable. AI is the future and all companies will use it one way or another. The legal issues surrounding AI are varied, complex and rapidly evolving – these considerations are here to stay,” he said.

Yet insured companies using this new technology do not always fully understand these risks, he added.

“AI systems can be something of a black box, using large amounts of third-party data. But where does that data come from and how is the AI trained? This could give rise to a range of liability, such as breach of data privacy or copyright rules. There is also increasing scrutiny in terms of transparency and explainability, and the need for the business to be able to explain ‘what’s in the black box’. This will only increase,” he said.

Companies are also incorporating AI into their operations and products despite nascent and diverging regulation around AI safety and liability. The EU is developing a comprehensive package of AI regulations, while the UK is taking a pro-innovation wait-and-see approach. In October last year, President Biden issued an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which introduces new reporting requirements for AI developers in the US.

“When things go wrong with AI, there is the potential for a range of losses including financial and reputational, as well as third-party liability. For example, if an algorithm causes financial or other loss, where does that liability fall? Where does the buck stop – would it be the business that is using it, the AI developer or licensor?” said Chris Williams.

Currently, the “buck stops with the human expert” using AI technology – such as a doctor using AI to aid diagnosis, according to Jaymin Kim, senior vice-president, emerging technologies, at Marsh in the US.

But the lines between product and professional liability may shift in the future as regulations come online, according to Kim. Emerging regulations that focus on what companies and boards must and must not do could also impact directors’ and officers’ liability, she said.

“Regulation will play a significant role for the liability considerations of companies – both developers and end-users – because ultimately it is the regulatory landscape that will inform companies in terms of what they need to do,” said Kim.

The ability of companies to pass liability to AI technology developers will also depend on regulation and contractual agreements. However, the relationship between AI developers and users is blurring, as companies co-develop AI systems, or use proprietary data to train or fine-tune an AI model, explained Kim.

AI will have significant implications from a liability perspective, with rapid changes in regulation, legal interpretation and technology, explained Lisa Williams, global head of casualty at Zurich Insurance.

“AI adds complexity and uncertainty and is developing so fast that it is hard to anticipate and understand. But it is important that customers have the right governance, checks and balances around AI to ensure it is performing and not causing harm,” she said.

A notable shift in recent years has been the use of AI in products and services, which raises the stakes when it comes to risk and insurance, according to Lisa Williams. “With [the development of large language model AI like ChatGPT] people are starting to really understand what AI could do, and it is being used more and more in operations, but also in products. And that changes the landscape considerably,” she said.

A big driver shaping AI liability going forward will be regulation, Lisa Williams explained.

“EU AI and product liability regulation will be a catalyst… AI risks will be seen through a very different lens,” she said. “The modernisation of the EU Product Liability Directive will start to bring product liability up to speed and into the digital economy. It’s coming, and it is one of the questions reinsurers are now asking insurers – how they are handling AI and the types of conversations they are having with customers as well,” she added.

AI is a big challenge for businesses and policymakers alike. “It is rare that a new technology is in the hands of the retail customer before corporations have had time to wrap their arms around specific business use cases, and ensure it aligns with their business strategies and practices,” said Kim.

“It’s here and it is here to stay. We are still in a relatively nascent stage of exploring how generative AI systems can be used for commercial use cases. At the same time, we are seeing the emergence of new regulatory frameworks, such as the EU’s AI Act, and in the US the Biden administration’s executive order specific to AI,” said Kim.

“It’s still early days, but globally there is a focus from regulators and legislators thinking about how they regulate the AI space, including generative AI. But it remains to be seen how different jurisdictions will respond, and it seems likely that in numerous jurisdictions we will see new disclosure requirements on companies, in particular for those developing generative AI systems with the potential to impact critical infrastructure,” she said.

Courts are already starting to look at complex liability cases involving technology differently. Combined with new product liability and AI regulation, the growth in litigation funding, and the expansion of collective redress in Europe, this could result in a “considerable shift” in liability further down the line, said Lisa Williams.

There are also questions around intellectual property rights for products or research generated purely by AI, explained Chris Williams.

“If a company uses AI, for example in research and development for life sciences, it may be difficult to obtain copyright or patent ownership. Regimes differ around the world but there tends to be a requirement for human involvement in the authorship/inventive process,” he said.

There are also a number of copyright infringement lawsuits and class actions under way in the US as creatives and media companies allege that technology companies are using their material without permission to train AI. In the UK, Getty Images is suing Stability AI in the High Court for copyright infringement.