UK risk managers say AI cybersecurity code should be mandatory

A majority (74%) of UK risk managers want the government’s planned cybersecurity code for AI technologies to be compulsory, according to a poll conducted by risk management association Airmic. The draft code of practice on AI cybersecurity, published by the Department for Science, Innovation and Technology (DSIT) earlier this year, is aimed at stakeholders across the AI supply chain, particularly developers and system operators. It is intended to protect users from cybersecurity risks in AI products and services and to set a baseline for security.

The government said the code would be voluntary and would support the UK’s other AI regulations and data protection law.

But only 10% of Airmic members backed the government’s proposal that the code should be voluntary. The remaining 16% of respondents to the poll, conducted this week, said they were unsure.

Airmic said its members believe the code needs to be mandatory because AI is already being used in misinformation and disinformation campaigns that threaten democratic societies.

Hoe-Yeong Loke, head of research at Airmic, explained: “There always tends to be a tension between regulation and innovation when it comes to emerging technologies such as AI. While recognising that codes of practice such as DSIT’s may need to be updated as AI continues to develop, risk professionals believe the ethical risks from AI call for some measure of standard practice across industry.”

The government said its voluntary code of practice is a “starting point”, with a view to developing the code into a global standard with a relevant standard-setting body.

“The proposed voluntary code sets baseline security requirements for all AI technologies and distinguishes actions that need to be taken by different stakeholders across the AI supply chain,” the government said. “The cybersecurity of AI requires a global approach, as the risks cross international borders, and so international engagement has been a key element of our approach.”

Julia Graham, CEO of Airmic, said: “The code will provide much-needed steer in AI as sought by Airmic members and the UK and international organisations they serve. Airmic is also supportive of the UK government’s efforts to align AI regulations and standards with international standards, though that should not come at the expense of a pro-business, pro-innovation approach to AI for the UK.”

A consultation on the code closed earlier this month; Airmic made its own submission.

Leigh-Anne Slade, head of media, communications and interest groups at Airmic, said: “Airmic wants to be a voice for the profession, especially on a topic such as AI that has dominated discussions with our members through our special interest groups. We will continue our engagement with the government and with standard setters, as their work in this critically important space develops.”
