For most of 2023, only a handful of states had adopted laws to address the challenges that artificial intelligence and deepfakes pose to political campaigns.
But now that the 2024 election cycle is in full swing, state legislators across the country are moving to address the difficult, fast-moving issue.
In the first three weeks of 2024 alone, lawmakers from both major parties in at least 14 states have introduced legislation to combat the misinformation and disinformation that AI and deepfakes could create in elections.
The issue returned to the spotlight on Monday after it emerged that a fake robocall featuring a voice impersonating President Joe Biden told Democratic voters in New Hampshire not to vote in Tuesday’s primary.
It remains unclear whether the voice – an apparent impersonation of the president or a digital manipulation – was generated using artificial intelligence, although the New Hampshire attorney general’s office, which is investigating, said the message was “artificially generated based on initial indications.”
The episode underscores the kind of threat that the use of AI and deepfakes poses as the campaign season gets underway in earnest.
“The political deepfake moment is here. Policymakers must rush to put protections in place or we face electoral chaos. The New Hampshire deepfake is a reminder of the many ways that deepfakes can sow confusion and perpetuate fraud,” Weissman said in a statement.

“The good news is that states are rushing to fill the void,” Weissman said.
State bills that attempt to address the problem typically fall into two categories: disclosure requirements and prohibitions.
Disclosure measures typically mandate a disclaimer on any media created using artificial intelligence that is distributed to influence an election within a certain time frame.
Prohibitions often carry nuanced exceptions. For example, a Michigan law passed last year does not apply its ban if the person responsible for sharing the media does not know that it “falsely represents” the people it depicts.
Since the beginning of the year, bills requiring disclosure have been introduced by Republican lawmakers in Alaska and Florida and by Democrats in Colorado.
Meanwhile, Democrats in Hawaii, South Dakota, Massachusetts, Oklahoma and Nebraska, as well as Republicans in Indiana and Wyoming, have introduced bills to ban AI-generated media within certain time frames before an election unless the media includes a disclosure.
Democrats in Nebraska also filed a bill that would prohibit the spread of all deepfakes within 60 days of an election.
In Arizona, Republican lawmakers have proposed a bill that would allow any candidate for public office appearing on the ballot, or any Arizona resident, to sue for relief or damages against anyone who publishes a “digital impersonation” of that person.
Idaho Republicans proposed a bill to ban the distribution of undisclosed “synthetic media.” The bill would also allow the people depicted to sue those who posted it.
A Republican-sponsored bill in Kentucky would create definitions for deepfakes and prohibit their distribution unless the people depicted in them consent, while giving them the ability to sue for relief and damages.
And a Democratic-led bill in Virginia would make it a Class 1 felony for anyone to create “any deceptive audio or visual media” with intent to commit a “criminal offense.”
In the final days of 2023, lawmakers in three other states — Ohio, South Carolina and New Hampshire — introduced similar bills, none of which advanced.
The New Hampshire proposal, introduced by Democrats, would require “disclosure of the use of deceptive artificial intelligence in political advertising.”
The introduction of such bills does not necessarily mean that any of them will become law.
Last year, for example, only three states passed laws regulating the use of artificial intelligence and deepfakes in political campaigns — even as the size, scope, and potential dangers they pose became clearer over the course of the year.