‘Deepfakes’ and AI content: Colorado lawmakers dive into new tech ahead of November elections

A panel of House legislators on Monday considered several bills that sponsors say will protect the integrity of this year’s elections, as well as future ones.
Notably, lawmakers are targeting AI-generated content, arguing that, used improperly, the new technology poses a “threat to free and fair elections.”
The House State, Civic, Military and Veterans Affairs Committee advanced the measure on a party-line 8-3 vote.
One such bill, House Bill 1147, seeks to regulate the use of content generated by artificial intelligence, such as “deepfakes,” in communications about election candidates.
A “deepfake” refers to AI-generated images or videos that look real, a phenomenon further popularized by a viral TikTok video of a deepfake “Tom Cruise” goofing around.
The bill defines a deepfake as “an image, video, audio or multimedia AI-generated content that falsely appears to be authentic or truthful and which features a depiction of an individual appearing to say or do something the individual did not say or do without the person’s consent.”
Supporters fear such AI-generated content, which they note has become increasingly pervasive, would open a Pandora’s box, if it hasn’t already, of deception, particularly in a major election season.
But some worry the proposal could be misused in its enforcement, raising scenarios in which a candidate is disqualified based on a complaint that turns out to be false.
Rep. Brianna Titone, D-Golden, one of the bill’s sponsors, said 20,000 New Hampshire residents received robocalls featuring an AI-generated voice mimicking President Joe Biden, urging them not to vote just days before the primary election.
Similar deepfake controversies have occurred in elections globally, such as a case in Turkey, where a fake video depicted members of a terrorist organization endorsing incumbent President Recep Tayyip Erdoğan’s opponent, Kemal Kılıçdaroğlu.
Titone acknowledged the potential First Amendment implications inherent in the bill but argued that First Amendment protections do not apply to statements made with malice, such as an intentionally deceitful deepfake.
“There’s a lot of leeway in favor of the First Amendment over the candidate or the elected official; it’s a much harder standard for libel and things like that, but a deepfake is the next level of trying to incriminate somebody or compromise them or make them seem like they’ve incriminated themselves by saying things that they may not have said,” she said.
The bill would require all such content featuring a candidate to include a disclosure statement indicating that the communication is a deepfake. Under the proposal, candidates suspecting a violation of the law can file a complaint with the Secretary of State’s Office and, if a violation is determined, an administrative hearing officer may impose civil penalties.
Andrew Brandt, a cybercrime investigator from Boulder, said he has seen a significant increase in “misinformation” in recent years, both in and outside of the political realm.
“Election disinformation used to be a hand-crafted artisanal process, but now it can be created in minutes, and the harm spread around the world before the truth can even tie its shoes,” he said.
Brandt said he also hopes further legislation is enacted to protect civilians from the potential harms posed by generative AI.
Ruthie Barko of TechNet, an organization actively involved in similar bills in other states, proposed an amendment to align the bill’s language more closely with that of other states’ election deepfake legislation. She also suggested including an exemption for deepfakes created for cybersecurity and anti-fraud purposes, something she said is “critical.”
“Coloradans and U.S. voters at large deserve to know whether or not campaign content is authentic,” she said.
This is the first serious election in which artificial intelligence plays such a prominent role, noted Craig Holman of Public Citizen.
“Since this is a new type of technology, there is hardly any regulation of the use of AI in campaign communications, not even disclosure,” he said. “We are frequently now subject to what could be false images, false voices, and most voters really can’t tell the difference anymore. (This legislation) is so necessary as we’re about to enter one of the most contentious presidential elections.”
Aly Belknap with Colorado Common Cause echoed Holman’s concerns, highlighting the pervasiveness of deepfakes and other avenues for spreading “disinformation,” such as fake social media profiles. These issues, she said, combined with declining trust in the media among Americans, pose a threat to the future of fair elections.
“The availability of generative AI tools makes it easier than ever to spread false information and propaganda with little resources and at a large scale, leaving voters confused and further questioning what they see or hear,” she said. “We can’t let generative AI go unchecked in political campaigns. In this increasingly complex information ecosystem, it’s critical that citizens have the tools to determine whether an image, video, or audio representation made by a candidate campaign is authentic and truthful.”
While no one testified in opposition to the bill, several committee members raised concerns about potential misuse of the legislation.
Rep. Brandi Bradley, R-Larkspur, raised the possibility that candidates could create an intentionally false deepfake of themselves and blame it on their opponent in an effort to manipulate an election.
Rep. Scott Bottoms, R-Colorado Springs, raised a scenario in which the Secretary of State’s Office might disqualify a candidate based on an untrue complaint.
At the national level, several groups also cautioned against rushing to regulate deepfakes, arguing that policymakers should consider “countervailing considerations,” notably that such content can “sometimes serve legitimate, nondeceptive purposes, such as the creation of satire or other forms of commentary or art.”
“These types of expression have inherent value and, in the United States, merit considerable legal protection under the First Amendment. U.S. law specifies that even outright deception with no redeeming artistic or other licit purpose, while typically entitled to less constitutional protection, cannot be prohibited simply for its own sake,” Daniel Weiner and Lawrence Norden wrote for the Brennan Center for Justice. “The government must still provide an independent justification for any restriction and demonstrate that the restriction in question is appropriately tailored to its stated goal.”
HB 1147 carves out an exemption for “satire” or “parody,” or for content whose production is “substantially dependent” on an individual’s ability to physically or personally impersonate a candidate, rather than on generative AI.
Several amendments to HB 1147 sought to clarify its intent and specify that a candidate can seek an injunction to halt further dissemination of a deepfake in question.
Additionally, an amendment accounted for potential new regulations from the Federal Communications Commission. Since the incident in New Hampshire, the FCC has implemented several guidelines on deepfakes, though they are geared more toward AI-generated phone calls than photos and videos.
While the bill won’t put a stop to “bad actors” who create false content, Titone said it’s a step forward in the continued efforts to guarantee fair elections.
“It is an opportunity to ensure that our democracy is protected and we put some guardrails around the use of deepfakes in our election system,” added her co-sponsor, Rep. Junie Joseph, D-Boulder.

luige.delpuerto@coloradopolitics.com