It’s election day in Arizona, and elderly voters in Maricopa County are getting phone calls that local polling stations have been closed due to threats from militia groups.
Meanwhile, numerous photos and videos on social media in Miami show poll workers throwing out ballots.
The phone calls in Arizona and the videos in Florida were “deepfakes” created with artificial intelligence tools. But by the time local and federal authorities realized what they were dealing with, the false information had spread across the country.
This simulated scenario was part of a recent exercise in New York that brought together dozens of former U.S. and state officials, civil society leaders and tech company executives to practice for the 2024 election.
The results were sobering.
“It was jarring to the people in the room to see how quickly a few of these threats could get out of hand and really dominate an election cycle,” said Miles Taylor, a former senior official at the Department of Homeland Security who helped organize the exercise for The Future US, a Washington, D.C.-based nonprofit.
The exercise, dubbed “The Deepfake Dilemma,” demonstrated how AI-powered tools threaten to spread disinformation in an already polarized society and wreak havoc on the 2024 election, multiple participants told NBC News. Rather than examining a single attack by a hostile group or regime, the exercise explored a range of scenarios in which domestic and foreign actors launch disinformation, exploit rumors and seize on political divisions.
Organizers and participants in the war game spoke exclusively to NBC News about how it played out.
They said it raises troubling questions about whether federal and local officials and the technology industry are prepared to counter disinformation, both foreign and domestic, designed to undermine public confidence in the election results.
Current U.S. officials privately say they share those concerns, and that some state and local election agencies will struggle to keep the election process on track.
The exercise highlighted uncertainty about the roles of federal and state agencies and tech firms seven months before what is expected to be one of the most divisive elections in U.S. history. Is the federal government capable of detecting AI deepfakes? Should the White House or a state election office publicly declare that a particular report is false?
Unlike a natural disaster, where government agencies operate through a central command, America’s decentralized election system is entering uncharted territory without a clear understanding of who is in charge, said Nick Penniman, CEO of a bipartisan organization that promotes political reform and electoral integrity.
“Now, in the last few years, we have had to defend our elections in America from both domestic and foreign forces. We just don’t have the infrastructure or the history to do it at scale because we’ve never faced threats this serious in the past,” said Penniman, who participated in the exercise.
“We know a hurricane will eventually hit our polls,” Penniman said. But in the exercise, because patterns of working together had never formed, few participants understood exactly how to coordinate with the others.
Around a long table in a simulated White House Situation Room, participants played assigned roles, including the directors of the FBI, the CIA and the Department of Homeland Security, and sifted through alarming reports from Arizona and Florida along with numerous other unconfirmed threats, including a reported theft at a postal processing center for mail-in ballots.
In consultation with technology companies, the players acting as government officials struggled to determine the facts: who was spreading the deepfakes and how government agencies should respond. (MSNBC anchor Alex Witt also participated in the exercise, playing the president of the National Association of Broadcasters.)
At first, it was not clear to participants in the exercise that the photos and video of poll workers throwing away ballots in Miami were fake. The footage went viral in part because of a Russian bot campaign that amplified it.
Eventually, officials were able to determine that the entire episode was staged and enhanced by artificial intelligence to make it look more believable.
In that case and others, including the bogus calls to Arizona voters, the players wavered over who should make a public announcement telling voters that polling places were safe and ballots were valid. The federal officials worried that any public statement would be seen as an attempt to boost President Joe Biden’s re-election chances.
“There was a lot of discussion and uncertainty about whether the White House and the president were engaged,” Taylor said.
“One of the big debates in the room was whose job it is to say something is real or fake,” he said. “Is it state-level election officials who say we’ve identified a fake? Is it the private companies? Is it the White House?”
“That’s something we think we’ll see in this election cycle as well,” Taylor said.
While the war game imagined tech executives in a room with federal officials, the reality is that engagement between the federal government and private firms over how to combat foreign propaganda and disinformation has decreased sharply in recent years.
The once-close cooperation among federal officials, tech companies and researchers that emerged after the 2016 election has eroded amid sustained attacks by Republicans in Congress and court rulings preventing federal agencies from consulting with companies about moderating online content.
The result is a potentially risky gap in securing the 2024 election.
Former officials and experts said state governments lack the resources to detect AI deepfakes or to quickly counter them with accurate information, and tech companies and some federal agencies are now reluctant to take a leading role.
“Everybody is terrified of lawsuits and … accusations of suppression of free speech,” said former Pennsylvania Secretary of State Kathy Boockvar, who attended the exercise.
The New York war game and similar sessions in other states are part of a broader effort to encourage more communication between tech executives and government officials, Taylor said.
But in the world outside of the war game, social media platforms have cut teams that regulate fake election content, and there are no signs that these companies are willing to cooperate closely with the government.
State and local election offices face a shortage of experienced personnel. A wave of physical and cyber threats has caused a record exodus of election workers, leaving election agencies ill-prepared for November.
Concerned about understaffed and inexperienced state election agencies, a coalition of nonprofits and good-government groups plans to organize a bipartisan, nationwide network of former officials, technology experts and others to help local governments detect deepfakes in real time and respond with accurate information.
“We have to do everything we can to try to fill the void, independent of the federal government and social media platforms,” he said.
Boockvar, the former Pennsylvania secretary of state, said she hopes nonprofits can help maintain communication channels by acting as a bridge between technology companies and the federal government.
Some of the largest artificial intelligence technology firms say they are adding security measures to their products and are contacting government officials to beef up election security ahead of the November vote.
“Ahead of the upcoming election, OpenAI has implemented policies to prevent abuse, launched new features to increase transparency around AI-generated content, and developed partnerships to connect people with authoritative voting data sources,” a spokesperson said. “We continue to work with governments, industry partners and civil society towards our shared goal of protecting the integrity of elections around the world.”
However, the internet is full of smaller generative AI companies that don’t follow the same rules, as well as open source tools that allow people to build their own generative AI software.
An FBI spokesman declined to comment on the hypothetical situation, but said the bureau’s Foreign Influence Task Force remains the federal leader in “identifying, investigating and disrupting foreign influence operations that target our democratic institutions and values in the United States.”
The US Cybersecurity and Infrastructure Security Agency said it is working closely with state and local agencies to protect the country’s elections.
“CISA is proud to continue to stand shoulder-to-shoulder with state and local election officials as they defend our election process against a variety of cyber, physical and operational security risks, including the risk of foreign influence operations,” said CISA’s Cait Conley.
For many in the room, the scenarios drove home the need to teach voters to recognize deepfakes and to mount an ambitious public education campaign to inoculate Americans against the coming onslaught of foreign and domestic disinformation.
The Future US and other groups are now in talks with Hollywood writers and producers to create a series of public service videos raising awareness of fake video and audio clips during the election campaign.
But if public awareness campaigns and other efforts fail to stem the spread of misinformation and potential violence, the country could face an unprecedented stalemate over who wins the election.
If there is enough doubt about what happened during the election, there is a danger that the outcome of the vote will become a “deadlock” with no clear winner, said Danny Crichton of Lux Capital, a venture capital firm focused on emerging technologies that hosted the exercise.
If enough things “go wrong or people get stuck in elections, then you get a draw,” Crichton said. “And to me, that’s the worst case scenario. … I don’t think our system is strong enough to handle it.”