‘Lawyers haven’t gotten the message’: Colorado justices hear debate about proposed AI rule
The Colorado Supreme Court heard comments from members of the legal profession last week about a proposed rule change intended to remind attorneys that misuse of new artificial intelligence tools can violate longstanding professional obligations.
Court of Appeals Judge Lino S. Lipinsky de Orlov, who chairs the Rules of Professional Conduct Standing Committee, told the justices that instances of lawyers and judges inadvertently including faulty, AI-generated references in their legal filings continue to make headlines.
“The reason why we need a rule, in the opinion of the standing committee, is that the legal profession hasn’t yet gotten the message that there are risks,” he said at the Dec. 17 hearing.
“Yes, I think it is coming. It is upon us. We need to be ready for it,” agreed Chief Justice Monica M. Márquez.

The proposal before the Supreme Court would add language to the rules of professional conduct clarifying that increased use of AI in the law “does not diminish a lawyer’s responsibilities” under the existing rules. Consequently, attorneys should be aware of “the benefits and risks associated with” such technology.
The rule change also would alert lawyers that misuse of AI could implicate the professional rules governing communication, client confidentiality, candor toward the court and use of a non-lawyer’s assistance.
“There’s not a get-out-of-jail-free card because you used a particular type of technology. The rules still apply,” said Julia Martinez, who chaired the subcommittee that produced the rule change.
She elaborated that the subcommittee did not want to discourage responsible uses of AI that benefit clients, nor did it want the rule to be so specific that rapid technological change would quickly require further revisions.
“These proposals, frankly, they’re modest. They don’t create new rules and new obligations,” said Martinez.
Written comments submitted in advance illustrated a divide among attorneys.
Denver attorney Casey Frank wrote that the changes are “unobjectionable because they contain no enforceable imperatives.” Frank suggested that lawyers be required to attest in their filings that they have independently verified the accuracy of any AI-generated output.
Some of Colorado’s federal judges already impose that requirement: U.S. District Court Judge S. Kato Crews has such a directive in place for his cases, and U.S. District Court Judge Nina Y. Wang adopted a similar policy this month.
U.S. Magistrate Judge Maritza Dominguez Braswell, who has taken the lead on AI issues in Colorado alongside Lipinsky and Justice Maria E. Berkenkotter, has also issued detailed guidance explaining her approach to problems arising from negligent use of AI.

Public defender River B. Sedaka argued to the Supreme Court that the rule should acknowledge “ethical concerns triggered by the use of AI.” Those include the potential for plagiarism and the technology’s impact on water use and emissions.
“As lawyers, nuanced and critical thinking are essential to our job,” she said. “More troublingly, the increased use of AI, I believe, is encouraging our communities to abdicate that role of critical thinking and appreciating nuance because you can so easily get this whole brief essay, memo out of a machine.”
Justice William W. Hood III asked if Sedaka’s warnings about systemic impacts were appropriate to include in the rule, “given that we usually don’t showcase anything that might strike some as political in nature.”
Sedaka clarified that lawyers are capable of making their own decisions about whether or not to use AI tools, but the rules “already require us to work to overcome the economic and social barriers for people who cannot afford” legal services.
M. David Johnson, a San Francisco-based attorney who is CEO of Modern Family Law, worried that the language could scare attorneys away from using AI for fear of violating the rules. He suggested an initiative akin to the Washington Supreme Court’s practice of permitting AI-based projects on a proposal-by-proposal basis.
“Why do you think it’s necessarily true that this is equivalent to taking out a billboard and saying folks should stay away from AI?” asked Hood.
“It really does look at AI as an enemy,” responded Johnson. “It’s a hammer. And a hammer can be used to build a house for the poor or it can be used to build a concentration camp. But you don’t blame the hammer.”
Lipinsky, the Court of Appeals judge, called the proposal “modest” and “a wake-up call” for lawyers about the potential dangers of relying uncritically on AI tools. He pointed to a database compiled by researcher and legal advisor Damien Charlotin that has tracked, to date, 709 cases with fake, AI-generated references in court filings worldwide.
“If Prof. Charlotin hadn’t found any cases where lawyers were misusing this technology, we probably wouldn’t be here. But the problem is the lawyers haven’t gotten the message,” said Lipinsky.
He also mentioned that in recent months, U.S. Sen. Chuck Grassley, R-Iowa, who chairs the Senate Judiciary Committee, has corresponded with two federal judges about AI-based errors in their orders.
“Now that we’re beginning to see cases where judges have issued orders or rulings that contain hallucinations, there’s a potential for erosion of public trust in that context, as well,” said Berkenkotter.
Justice Melissa Hart was not present for the hearing. She has been on a health-related leave of absence since late October and, two days after the hearing, announced she will leave the court in January.