According to a recent survey by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy, most American adults are worried about AI tools contributing to the proliferation of misinformation in the upcoming 2024 elections.
These concerns cut across party lines: both Republicans and Democrats expressed unease about presidential candidates using AI to fabricate false images or videos (85% of Republicans and 90% of Democrats) or to respond to voter queries (56% of Republicans and 63% of Democrats). Pessimism about AI's role in elections has grown, particularly after its use in the 2020 US election and in this year's Republican presidential primary.
The survey also found that 66% of Americans support the idea of the federal government prohibiting false or misleading AI-generated images in political advertisements.
Worries about AI's potential to spread misinformation are not baseless: AI-generated disinformation has already gained traction online in the run-up to the 2024 election. Examples include a manipulated video that appeared to show Biden delivering a speech attacking transgender individuals, and AI-generated images purporting to show children learning satanism in libraries.
Why does it matter?
Experts from academia, civil society, and Big Tech broadly agree that malicious uses of AI will affect elections, and the rapid advancement of AI tools raises concerns that misinformation in the 2024 presidential election could be amplified to an unprecedented degree. The survey shows that a substantial majority of US adults share these concerns, and that there is clear public demand for action: most Americans support government regulation targeting AI-generated content in political advertisements.