‘AI won’: Judges caution lawyers to educate themselves about artificial intelligence in the law
A panel of federal and state judges last week told a group of attorneys, in explicit terms, that artificial intelligence is here to stay and that they must educate themselves about how to use it responsibly.
“Here’s how I’ve been trying to think about this: Most of us drive cars. I have a car that has a combustion engine and I have a car that’s electric,” said Justice Maria E. Berkenkotter. “I have to understand how to be safe — change the tires, change the oil, check the brakes. I don’t have to understand the combustion engine. I don’t have to understand regenerative braking.”
Similarly, she continued, attorneys do not have to know everything about how AI models operate, “but you do have to understand enough to be careful.”
The Feb. 23 discussion on AI, sponsored by the Colorado Bar Association’s Judicial Liaison Section, emphasized the double-edged sword presented by new forms of artificial intelligence.
Judges warn about bias in courtrooms, legal profession
On the one hand, AI legal tools can improve access to justice for self-represented litigants. For years, Utah has maintained the “Sandbox,” where providers may offer unconventional legal services under relaxed regulations, with an eye toward gathering data and assessing consumer complaints. To date, 24,000 individual customers have used tools that include legal chatbots and document generators.
On the other hand, AI generators that pull from a vast catalogue of legal writings may end up replicating systemic biases.
“There’s concern that lawyers are gonna end up with fancy AI and fancy products,” added Berkenkotter, “and self-represented folks at the end of the day — because there’s no money in it — are going to end up with products that are lesser.”
“There isn’t a lot of regulation in this space,” said U.S. Magistrate Judge Maritza Dominguez Braswell. “That is because there is this desire to promote competition. And also, because our Congress doesn’t move very fast, we’re not gonna see a lot of regulation and laws constraining it.”
Currently, Colorado lawmakers are considering regulations on the use of AI in elections, with one proposed bill requiring disclosure of any “deepfakes” — AI-generated images, voices or videos of candidates — in election communications. In addition, one federal judge in Colorado has enacted his own disclosure requirement for the use of AI.
U.S. District Court Judge S. Kato Crews, whom President Joe Biden appointed in January, has a standing order requiring parties in civil cases to certify that they have not used generative AI in preparing their filings. If they do use AI, even if a human later reviews the work, they must attest to that, as well.
Through the court’s clerk, Crews told Colorado Politics it is too early to assess the directive’s impact. However, his goal is to promote the “thoughtful use” of AI by having lawyers confirm they checked the output of their AI generator before turning in their work.
Judge Lino S. Lipinsky de Orlov of the Court of Appeals, who also spoke on the AI panel, recalled that when he first began discussing AI in the spring of 2023, lawyers were getting in trouble for using ChatGPT and unwittingly including fake legal citations generated by the model in their filings.
“I asked ChatGPT recently, ‘Why are lawyers obtaining these hallucinations in the text?’” he said. “ChatGPT took no responsibility and said, ‘Well, lawyers should be checking the citations.’”
Lipinsky added that he recently attended a demonstration for a generative AI model that responded to a prompt by producing genuine legal citations.
“The current AI revolution is over in many ways. AI won,” he said. “We all need to learn how to use it effectively. How to avoid the pitfalls.”
Dominguez Braswell said AI models can be a “black box,” with little transparency for how they draw on information and make connections to generate their output. She suggested the way to think about AI’s potential is: “AI’s not going to replace me, but someone who’s really good at using AI might.”
“I don’t love the blanket initiatives you’ve seen some companies take, which is AI: not allowed. Because people are gonna use it, especially the younger generations,” said Dominguez Braswell. “So, it’s better to talk about it and adopt responsible use policies.”
That sentiment echoed an acknowledgement by Justice Melissa Hart last fall that the Colorado Supreme Court must be mindful of emerging AI issues, even in the absence of a court case specifically implicating artificial intelligence.
“I have an obligation to understand and to educate myself regularly and keep up to date, both because, as the state Supreme Court, we are responsible for regulating the legal system … and there are huge numbers of legal-ethical issues that come with AI,” Hart said at a University of Denver-sponsored event.
Berkenkotter emphasized that lawyers are already using AI in their personal lives, whether by unlocking their phones through facial recognition technology or using music streaming services to recommend playlists. One audience member responded that AI, in reality, “is using us.”
“I think that’s going to be a recurring theme,” Berkenkotter said.