Colorado Politics

Is AI plagiarism? | BIDLACK

During my 17 years on the political science faculty of the U.S. Air Force Academy, I graded thousands of papers. And let me tell you, when you have been grading a hundred papers on the same topic for five hours, and still have a couple hours to go, it becomes a bit tedious. Yet these papers are important in a cadet’s education, in that cadets are forced to actually think about the implications of, say, the separation of church and state, or federalism, rather than just memorize a set of facts. And sometimes you get surprises, such as the time a cadet turned in a paper so good I wrote “this is the best paper I’ve ever graded” on it. He was thrilled. During the next class, I asked him if he had called his mom to tell her what I wrote. He sheepishly said yes, he did call.

In college settings, writing papers is often an assignment students dislike (a few love it), but writing is a vital part of the learning process. And after college, there are careers that depend upon an ability to write coherently, from physicians writing up interesting cases from which they learned something to lawyers, who write almost continuously about the cases they are working on. So, writing continues to be vital for many people, including, say, columnists.

Which is why I’m so worried about AI.

Artificial Intelligence (AI) is transforming how students approach academic work, particularly in writing college papers. Though AI-powered tools can assist with research, grammar and organization, their use presents significant risks colleges and students must carefully consider.

One of the primary dangers of using AI in college papers is the increased risk of plagiarism. AI writing tools can generate content students may submit as their own, blurring the lines between original work and machine-generated text. This undermines academic integrity and can result in severe consequences, including failing grades and disciplinary action.


That makes sense, right? Those last two paragraphs rather succinctly sum up the issue. Well, they should, because those two paragraphs are from the AI generator called Copilot that comes with MS Word. I asked for a brief essay on the dangers of AI in the classroom, and Copilot produced an essay I likely would have graded as a B+ or A- were I still teaching; those two paragraphs were plucked straight from it. You might have noticed a slight change in tone, as I stuck an AI bit into my own writing, but the entire AI essay hangs together nicely. I can’t imagine what teachers are going to do now, especially given AI is going to get better and better.

Yet it comes with real dangers.

I’m writing about this topic because of a recent Colorado Politics story, in which Colorado attorneys are being instructed on the challenges of AI in their line of work. U.S. Magistrate Judge Maritza Dominguez Braswell is one of three Colorado-based judges tasked with studying the implications of AI in the law and with sharing that knowledge with the Colorado Bar Association and others.

One of the most significant issues with AI-produced court documents also shows the limits of the technology: AI bots sometimes create fake citations. Remember, please, when you fire up an AI application, you are asking a very capable web surfer to seek out anything and everything related to the topic at hand. The problem is the computer is a very literal machine that cannot think independently. Thus, it may create a citation out of a short story it found, not realizing it was a work of fiction, or it may misunderstand and invent a new “citation” entirely on its own.

Unlikely, you say? Well, consider a recent report on chronic disease in children issued by our nation’s worst ever Health and Human Services Secretary, Robert F. Kennedy Jr. When reporters checked the sources that “proved” the worth of the lunatic medical suggestions Kennedy is so devoted to, they found a number of them were entirely fictitious. It looks like Kennedy (and of course, I’m referring to him and his staff, as I’m sure he didn’t write the thing) told an AI tool to generate a report, and poof, it did so. Unfortunately, when it hit a bump in the road, it filled that bump with a fake report, a medical citation that referred to a medical study that doesn’t exist. That is bad anywhere but is especially evil in the field of childhood medicine. When caught, of course, the Trump administration just called it a few formatting typos, and they stand by the report, sigh.

I guess we shouldn’t be too surprised when the current Secretary of Education referred to AI as “A1,” like the steak sauce. Given that Linda McMahon was previously a senior executive at WWE wrestling and never actually taught or held any educational role, and that her mandate is only to do as much damage to the Department of Education as she can, perhaps mixing up a vital educational tool and a food topping isn’t too surprising. She can always use AI to write her speeches.

Though AI offers powerful tools for academic writing, its use in college papers, in the law, medicine and beyond is fraught with potential dangers. It is essential for students and educators to understand these risks, promote responsible use of technology, and uphold the values of academic integrity and personal growth.

Do you like that concluding paragraph? Well, it was AI too, except for a minor edit.

Caveat emptor, dear readers.

Hal Bidlack is a retired professor of political science and a retired Air Force lieutenant colonel who taught more than 17 years at the U.S. Air Force Academy in Colorado Springs.
