Terrorists ‘experimenting’ with artificial intelligence, warns Pool Re

Terrorist and violent extremists (TVEs) are “actively experimenting” with using artificial intelligence (AI) to help plan, facilitate and execute violent attacks, warns the UK’s terrorism reinsurance mutual Pool Re. In a report written with the Royal United Services Institute (RUSI), Pool Re says there is “clear evidence” of such actors’ interest in misusing AI.

Dr Simon Copeland, a research fellow in the terrorism and conflict research group at RUSI and author of the report, says terrorists’ exploitation of AI remains in an experimental phase. The report says AI tools offer significant efficiencies in generating and distributing propaganda to radicalise individuals and groups, but this remains a small part of terrorist activity today.

As well as generating content, AI-powered models can be used to distil information, such as instructions for producing explosives, the report says. “The integration of AI technology and computerised 3D model simulations may allow TVEs to conduct accurate testing to hone the lethality of weapons without the need to learn complex programming skills.”

It adds that in future, “AI-powered modelling will only increase in sophistication and may provide opportunities for TVEs to simulate how certain weapons or attack methodologies might work in specific settings”.

“Though potential TVE exploitation of AI will overlap with those of other nefarious actors (for example, AI-facilitated scams, fraud, or other forms of cybercrime for fundraising), other uses are likely to be unique and inherently linked to the goal of advancing a political, religious, racial, or ideological cause,” the report explains.

But it notes that AI presents ideological and religious conflicts for some terrorist groups, which may influence its adoption. For example, pro-Islamic State supporters have questioned whether AI-generated images are un-Islamic, while some far-right groups are influenced by conspiracy theories about AI.

“As a result, overcoming the safeguards or guardrails designed to prevent AI models from producing harmful content has become an act of resistance in itself,” the report notes. It adds that the adoption of AI-powered tools is likely to be an incremental process, rather than sudden and systemic.

“The next ten years are likely to continue as a period of trial and error. TVEs will adopt elements that work for them while abandoning others,” the report says.
