Assessing the Threat of AI Misuse in Disinformation Campaigns
Published in AEI Ideas
Last year saw a remarkable series of advancements in artificial intelligence (AI). The latest image generation models, such as DALL-E, Midjourney, and Stable Diffusion, demonstrated unprecedented capabilities in creating highly realistic and stylistically diverse images. And ChatGPT captured the broader public’s interest as it pushed the boundaries of what was previously thought possible in natural language processing.
The potential of these technologies is vast: they could revolutionize industries ranging from education to health care to the creative arts. Alongside these benefits, however, come potential negative consequences and ethical dilemmas. Addressing those issues proactively is crucial to ensuring these technologies are used in a responsible and beneficial manner.
We already have recent experience with technology-enabled misinformation campaigns. The US intelligence community found that foreign governments, including Russia and Iran, conducted influence operations during the 2016 and 2020 US presidential elections, primarily using technology platforms. The FBI also recently stated that it has “national security concerns” about TikTok, warning that the Chinese government could use the popular video-sharing app to influence American users.
Generative AI could supercharge these challenges. Bad actors could amplify polarization with entirely new AI-generated content, such as persuasive text and misleading images, or use the technology to generate highly realistic deepfake videos and audio recordings in which leaders appear to say things they never actually said. Cybercriminals could use these tools to clone people’s voices for scams and identity fraud, and nation-states could use them to create and spread disinformation during an election or to sow suspicion and weaken public trust.
OpenAI, to its credit, launched an important effort through a partnership with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory to surface these risks and identify potential solutions.
The resulting report provides a thoughtful framework for thinking about the threat of AI-enabled influence operations and some of the steps that could minimize those risks, along with their associated trade-offs. For example, language models could drive down the cost of running influence operations, making them accessible to new and different types of actors. The authors also suggest that influence operations built on language models will become easier to scale and that currently expensive tactics may become cheaper.
The paper delves into potential solutions by focusing on four key stages of the influence operation process: the creation of AI models, access to those models, distribution of generated content, and the formation of beliefs in the target audience. Each response is assessed through a framework exploring its technical feasibility, social feasibility, downside risks (including heightened forms of censorship, the risk of the mitigation itself being politicized, and the risk of bias), and, finally, its effectiveness.
The paper is worth reading in its entirety, but it also resurfaces four broader concerns:
1. There is an urgent need for more conversations around not just the benefits of AI but also the potential harms. OpenAI deserves credit for teeing up some of these questions before releasing ChatGPT and for using the limited release as an opportunity to surface potential issues. Meta and Google have similar efforts underway. Other companies should consider such initiatives as they explore AI in their domains.
2. State and federal agencies should prioritize hiring AI experts as part of their leadership teams to more effectively design the policy frameworks that maximize the benefits of AI while mitigating the risks.
3. AI providers, social media platforms, policymakers, civil society organizations, and national security entities need to establish new institutions and coordination mechanisms to effectively address the diverse threats posed by AI.
4. These recent developments in AI make the new National Science Foundation (NSF) Directorate for Technology, Innovation and Partnerships, established under the CHIPS and Science Act, all the more important. The legislation tasked the NSF with supporting research and development in key technology areas, including AI, machine learning, automation, and quantum computing. However, the omnibus bill fell several billion dollars short of providing the needed appropriations. Fully funding this office will be crucial to answering some of these difficult questions while also ensuring America’s continued leadership in AI, particularly against the growing threats from China.
The biggest near-term risk is that AI is outpacing the ability of policymakers and regulators to understand it, much less to develop the smart regulatory regimes needed to drive economic growth and maximize the benefits while minimizing the harms. It is important for these discussions to begin in earnest to prepare for the policy challenges that lie ahead.