Microsoft is partnering with Tech Against Terrorism to develop an AI-powered tool for detecting terrorist or violent extremist content online.
Tech Against Terrorism, an independent nonprofit launched by the United Nations in 2016, will work with Microsoft on a tool that detects potentially harmful content and surfaces it for human review.
The tool will first be used to strengthen Tech Against Terrorism's Terrorist Content Analytics Platform (TCAP), a repository of verified and classifiable terrorist content from designated terrorist and violent extremist organizations.
The organizations will then use the TCAP to improve the ability of Microsoft's Azure AI Content Safety service to flag potential terrorist content for further review. The service uses next-generation multimodal models to detect harmful material across text, images and video.
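Microsoft hasn't published details of how the pilot wires TCAP data into the service, but for orientation, here is a minimal sketch of calling the publicly documented Azure AI Content Safety text API via its Python SDK (`azure-ai-contentsafety`). The endpoint and key are placeholders, and the public service scores severity across built-in categories such as Hate and Violence rather than a terrorism-specific label, so treat this as illustration only, not the pilot's actual pipeline.

```python
# pip install azure-ai-contentsafety
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Placeholder endpoint and key for an Azure Content Safety resource.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a piece of user-generated text; the service returns a severity
# score per built-in category (Hate, SelfHarm, Sexual, Violence).
response = client.analyze_text(AnalyzeTextOptions(text="Text to screen goes here."))

# Route anything at or above a chosen severity threshold to human review,
# mirroring the detect-then-review workflow described above.
REVIEW_THRESHOLD = 2
for result in response.categories_analysis:
    if result.severity and result.severity >= REVIEW_THRESHOLD:
        print(f"Flag for review: {result.category} (severity {result.severity})")
```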
During the pilot, Tech Against Terrorism and Microsoft will first establish a framework for assessing the accuracy of terrorist content detection: whether content is correctly flagged without perpetuating bias, and whether the tool under-detects content or produces false positives. If the pilot is successful, the two plan to make the tool available to smaller platforms and nonprofits.
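The article doesn't specify how that framework will be built, but the questions it raises map onto standard moderation metrics: false positives drive precision down, under-detection drives recall down. A self-contained sketch, using made-up review outcomes, of how those numbers are computed:

```python
# Hypothetical labeled sample: (model_flagged, human_verdict) pairs, where
# human_verdict True means a reviewer confirmed the item is terrorist content.
samples = [
    (True, True), (True, False), (False, True),
    (False, False), (True, True), (False, False),
]

tp = sum(1 for flagged, actual in samples if flagged and actual)
fp = sum(1 for flagged, actual in samples if flagged and not actual)
fn = sum(1 for flagged, actual in samples if not flagged and actual)
tn = sum(1 for flagged, actual in samples if not flagged and not actual)

precision = tp / (tp + fp)             # how many flags were correct
recall = tp / (tp + fn)                # how much real content was caught;
                                       # under-detection shows up as low recall
false_positive_rate = fp / (fp + tn)   # benign content wrongly flagged

print(f"precision={precision:.2f} recall={recall:.2f} fpr={false_positive_rate:.2f}")
```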
“This joint project aims to understand and demonstrate the potential for AI technologies to transform the way we challenge complex digital safety risks while upholding human rights,” says Adam Hadley, executive director of Tech Against Terrorism and CEO of QuantSpark.
“AI systems, designed and deployed with rigorous safeguards for reliability and trustworthiness, could power a leap forward in detecting harmful content—including terrorist content created by Generative AI—in a nuanced, globally scalable way, enabling more effective human review of such content,” he adds.
Tech Against Terrorism says it has archived more than 5,000 pieces of AI-generated content shared in terrorist and violent extremist spaces, and that it discovers far more every year. The organization says it has identified users exploiting generative AI tools to bolster the creation and dissemination of propaganda in support of both violent Islamist and neo-Nazi ideologies.
And while such use of generative AI is still in its infancy, it says, it is likely to pose a real threat in the medium to long term.
Examples uncovered by Tech Against Terrorism include a pro-Islamic State tech support group publishing a guide advising IS supporters on how to use ChatGPT without compromising operational and personal security. Another IS supporter claimed to have used an open-source AI tool to transcribe and translate a leadership message published by official IS propaganda outlets. A pro-al Qaeda propaganda outlet has been using images that are very likely AI-generated as the basis for propaganda posters.
One emerging issue, says Tech Against Terrorism, is the undermining of hash-based detection tools, which match new uploads against fingerprints of already-identified material. AI-enabled variations of preexisting content, and wholly new generations, produce fingerprints those databases have never seen, risking rendering the tools obsolete.
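The source doesn't elaborate, but the fragility is easy to demonstrate with exact (cryptographic) hash matching, the bluntest form of such tooling: any change to the bytes yields an unrelated digest, so a regenerated or lightly varied file no longer matches the list. Perceptual hashes such as Microsoft's PhotoDNA or Meta's PDQ tolerate small edits, but wholly new AI-generated content defeats those too. A minimal illustration using Python's standard library:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of the kind a simple hash list stores."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for a known piece of propaganda already in a hash database.
known_item = b"bytes of previously identified propaganda image"
hash_database = {fingerprint(known_item)}

# An AI-assisted variant: the content is near-identical, the bytes are not.
variant = known_item.replace(b"image", b"image v2")

print(fingerprint(known_item) in hash_database)  # True  -> caught
print(fingerprint(variant) in hash_database)     # False -> evades exact matching
```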
“The use of digital platforms to spread violent extremist content is an urgent issue with real-world consequences,” says Brad Smith, vice chair and president at Microsoft. “By combining Tech Against Terrorism’s capabilities with AI, we hope to help create a safer world both online and off.”