How terrorist groups are leveraging AI to recruit and finance their operations


Counter-terrorism authorities have, for years, characterized keeping up with terrorist organizations and their use of digital tools and social media apps as a game of Whac-a-Mole.

Jihadist terrorist groups such as Islamic State and its predecessor, al-Qaida, as well as the neo-Nazi group the Base, have leveraged digital tools to recruit, finance themselves covertly via cryptocurrency, circulate downloadable designs for 3D-printed weapons and spread tradecraft to their followers, all while leaving law enforcement and intelligence agencies playing catch-up.

Over time, the work of thwarting attacks and maintaining a technological edge over these groups has evolved, as more and more open-source resources have become available.

Now, with artificial intelligence – both on the horizon as a rapidly developing technology and in the here and now as free, accessible apps – agencies are scrambling.

Sources familiar with the US government’s counterterrorism efforts told the Guardian that multiple security agencies are very concerned about how AI is making hostile groups more efficient in their planning and operations. The FBI declined to comment on this story.

“Our research predicted exactly what we’re observing: terrorists deploying AI to accelerate existing activities rather than revolutionise their operational capabilities,” said Adam Hadley, the founder and executive director of Tech Against Terrorism, an online counterterrorism watchdog, which is supported by the United Nations Counter-Terrorism Committee Executive Directorate (CTED).

“Future risks include terrorists leveraging AI for rapid application and website development, though fundamentally, generative AI amplifies threats posed by existing technologies rather than creating entirely new threat categories.”

So far, groups such as IS and other adjacent entities have begun using AI – namely OpenAI’s chatbot, ChatGPT – to amplify recruitment propaganda across multimedia in new and expansive ways. Much as AI threatens to upend modern workforces across dozens of job sectors while enriching some of the wealthiest people on earth, it is poised to create new public safety problems.

“You take something like an Islamic State news bulletin, you can now turn that into an audio piece,” said Moustafa Ayad, the executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue. “Which we’ve seen supporters do and support groups, too, as well as photo arrays that they produce centrally.”

Ayad continued, echoing Hadley: “A lot of what AI is doing is enabling what’s already there. It’s also supporting their capacity in terms of propaganda and dissemination – it’s a key part of that.”

IS isn’t hiding its fascination with AI and has now openly recognized the opportunity to capitalize on what it currently offers, even providing a “Guide to AI Tools and Risks” to its supporters over an encrypted channel. In one of its latest propaganda magazines, IS outlined the future of AI and how the group needs to embrace it as part of its operations.

“For every individual, regardless of their field or expertise, grasping the nuances of AI has become indispensable,” it wrote in an article. “[AI] isn’t just a technology, it’s becoming a force that shapes war.” In the same magazine, an IS author explains that AI services can be “digital advisors” and “research assistants” for any member.

In an always-active chat room that IS uses to communicate with its followers and recruits, users have begun discussing the many ways AI can be a resource, though some were wary. One user asked whether it was safe to use ChatGPT to look up “how to do explosives”, but wasn’t sure whether agencies were keeping tabs on the chatbot – which has become one of the broader privacy concerns surrounding it since its inception.

“Are there any other options?” asked an online IS supporter in the same chat room. “Safe one.”

But another user found a less obvious way to avoid setting off alarms if they were being watched: posting the schematics and instructions for a “simple blueprint for Remote Vehicle prototype according to chatgpt”. Truck ramming has become a method of choice for IS in recent attacks involving followers and operatives alike. In March, an IS-linked account also released an AI-generated bomb-making video, fronted by an avatar, for a recipe that can be made with household items.

Far-right groups have also been curious about AI, with one advising followers on how to create disinformation memes, while others have turned to AI to generate Adolf Hitler graphics and propaganda.

Ayad said some of these AI-driven tools have also been a “boon” to terror groups’ operational security – techniques for communicating securely away from prying eyes – such as encrypted voice modulators that can mask audio, which, altogether, “can assist with them further cloaking and enhancing their opsec” and day-to-day tradecraft.

Terror groups have always been at the forefront of embracing digital spaces for their growth; AI is just the latest example. In June 2014, IS, then still coming into the global public consciousness, live-tweeted imagery and messages of its mass executions of more than 1,000 men as it stormed Mosul, prompting soldiers in the Iraqi army to flee in fear. After the eventual establishment of the so-called Caliphate and its increasing cyber operations, a concerted and coordinated effort followed across government and Silicon Valley to crack down on IS accounts online. Since then, western intelligence agencies have singled out crypto, encrypted messaging apps and sites hosting files for 3D-printed guns, among others, as spaces to police and surveil.

But recent cuts to counterterrorism operations across world governments, including some by Doge in the US, have degraded these efforts.

“The more pressing vulnerability lies in deteriorating counter-terrorism infrastructure,” said Hadley. “Standards have significantly declined with platforms and governments less focused on this domain.”

Hadley explained that this deterioration is coinciding with “AI-enabled content sophistication”, and urged companies such as Meta and OpenAI to “reinforce existing mechanisms including hash sharing and traditional detection capabilities” and to develop more “content moderation” around AI.

“Our vulnerability isn’t new AI capabilities but our diminished resilience against existing terrorist activities online,” he added.
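Hash sharing, the mechanism Hadley refers to, works in principle like this: platforms compute digital fingerprints of known terrorist media and compare new uploads against a shared list, so a file identified on one service can be caught on another. The sketch below is illustrative only – the hash value and function name are hypothetical, and real-world systems such as the GIFCT hash-sharing database pair exact hashes with perceptual ones (for example, Meta’s open-source PDQ) so that re-encoded or lightly edited copies still match.

```python
import hashlib

# Hypothetical stand-in for an industry-shared hash database of
# known terrorist content (the real GIFCT list is not public).
SHARED_HASH_LIST = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_shared_hash_list(file_path: str) -> bool:
    """Return True if the file's SHA-256 digest appears on the shared list.

    An exact cryptographic hash only matches byte-identical files;
    production moderation systems add perceptual hashing so altered
    copies of the same media are still detected.
    """
    digest = hashlib.sha256()
    with open(file_path, "rb") as f:
        # Read in chunks so large video files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() in SHARED_HASH_LIST
```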
