AI companies should face fines for developing tools that fuel fraud, experts say
As artificial intelligence technology makes it easier to scam people, create fake images and videos and even ruin reputations, a policy debate swirls around who should be penalized for such behavior: the technology’s developer or the person who deploys it.
Drawing those boundaries is complicated, particularly when dealing with minors.
During a Nov. 19 Senate hearing chaired by Colorado’s U.S. Sen. John Hickenlooper, Hany Farid, a professor at the University of California, Berkeley School of Information, offered the example of a 12-year-old boy who could create nonconsensual fake nude images of his female classmates.
While the boy should be held accountable, the government must start enforcing stiff punishments against the AI company that put those tools in the kid’s hands, he said.
Farid was adamant that punishing teenagers for misbehavior would not have a major national impact, but that stiff, expensive penalties levied on the AI companies developing the technology would.
Alvin McBorrough, founder and managing partner of OGx, a Denver consulting firm focusing on technology and analytics, said the developer builds the AI tools while the deployer uses them.
“It does become the responsibility for the developer and the deployer to put trusted safeguards in place,” McBorrough said.
As the AI industry grows at a record pace, those pushing for regulation say there is no real accountability because state and federal governments are lagging in passing legislation to hold both developers and deployers responsible.
They note that realistic-looking videos and images are increasingly used to victimize teenagers and adults. Here’s one problem they identified: When a high school student creates a fake porn video or image of classmates, the student faces little to no punishment. Schools say their districts have no policies covering the behavior, and no state or federal law regulates it.
Meanwhile, scams targeting older people and other consumers have become increasingly widespread, aided in part by AI technology.
Last year, consumers lost $10 billion to scams and fraud, a major increase from $3.5 billion in 2020, according to the Federal Trade Commission.
Justin Brookman, the director of technology policy for Consumer Reports, said advances in AI have made it easier and cheaper to defraud people: Creating a believable fake image or video, which used to cost a scammer about $4, now costs about 12 cents.
Discriminatory practices will also continue to grow, Farid said. He argued that major companies are still using flawed algorithms created in the past and merely updating them with advanced AI technology, rather than building new systems from scratch.
U.S. Rep. Brittany Pettersen said the areas most affected by what she and others described as “bias” in the technology are the housing and finance industries.
McBorrough said concerns about “bad” technology are reasonable if companies continue to piggyback on already flawed technology that discriminates against particular demographics. That, he said, is why he applauded the action the Colorado legislature took in 2024.
The Colorado legislature passed Senate Bill 205, which supporters said would establish a framework to lower the risk of “unwanted bias” in AI-driven decision-making. The bill is slated to go into effect in February 2026.
Opponents, meanwhile, argued that innovation should be “encouraged and not stifled,” and that policy should strike a balance between protecting consumers and fostering technological advancement. They added that the legislation pushes provisions “that may not be feasible or effective.”
In the meantime, the Attorney General’s Office has been tasked with implementing the law by creating audit policies and identifying high-risk AI practices, as well as forming a task force to tackle the bill’s flaws in next year’s legislative session.
McBorrough, who told Colorado Politics he works with major AI development companies, such as Google, said the industry is committed to protecting the public from the “bad actors” who are taking AI technology in the wrong direction.
“The intention is good, and some developers are being cautious in making decisions and carefully planning what we do,” he said.
On Nov. 19, Farid warned Hickenlooper and other members of Congress that unless laws are passed that hit major companies hard, the “bad” side of AI will only continue to get worse.
Farid said the solution centers on money.
As long as major technology companies develop AI tools that enable scams, nonconsensual imagery and other abuses, people will continue to misuse them.
To stop large technology companies such as Google and Microsoft from developing those technologies, Farid said, they must face fines significant enough to change their behavior.