AI experts and tech-inclined political scientists are sounding the alarm on the unregulated use of AI tools going into an election season.
Generative AI can not only rapidly produce targeted campaign emails, texts or videos; it could also be used to mislead voters, impersonate candidates and undermine elections at a scale and speed not yet seen.
“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”
Among the many capabilities of AI, here are a few that will have significant ramifications for elections and voting: automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; and fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.
“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. “A lot of people would listen. But it’s not him.”
Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas, has predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media to erode trust.
“What happens if an international entity — a cybercriminal or a nation state — impersonates someone? What is the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.”
AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.
AI images appearing to show Trump’s mug shot also fooled some social media users, even though no mug shot was taken when the former president was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.
Rep. Yvette Clarke, D-N.Y., has introduced legislation that would require candidates to label campaign advertisements created with AI. Clarke has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact.
Some states have offered their own proposals for addressing concerns about deepfakes.
Clarke said her greatest fear is that generative AI could be used before the 2024 election to create a video or audio that incites violence and turns Americans against each other.
“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We’ve got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don’t have the time to check every piece of information. AI being weaponized, in a political season, it could be extremely disruptive.”
The Associated Press contributed to this report.