Senate Republicans are pushing a provision in a major tax-and-spending bill that would bar states from regulating AI for 10 years, in the name of avoiding a patchwork of conflicting local rules. Colleges and universities are facing a parallel crisis: uncertainty over whether and how students should be allowed to use generative AI. In the absence of clear institutional guidance, individual instructors are left to make their own calls, leading to inconsistent expectations and student confusion.
This policy vacuum has triggered a reactive response across higher education. Institutions are rolling out detection software, writing syllabus policies that crack down on AI use, and encouraging faculty to read student work like forensic linguists. But the reality is that we cannot reliably detect AI writing. And if we’re being honest, we never could detect effort, authorship, or intent with any precision in the first place.
That’s why I’ve stopped trying. In my classroom, it doesn’t matter whether a student used ChatGPT, the campus library, or help from a roommate. My policy is simple: You, the author, are responsible for everything you submit.
That’s not the same as insisting on authorial originality, some imagined notion that students should produce prose entirely on their own, in a vacuum, untouched by outside influence. Instead, I teach authorial responsibility. You are responsible for ensuring that your work isn’t plagiarized, for knowing what your sources are, and for the quality, accuracy, and ethics of the writing you turn in, no matter what tools you used to produce it.
This distinction is more important than ever in a world where large language models are readily accessible. We conflate linguistic polish with effort, or prose fluency with moral character. But as Adam Grant argued last year in The New York Times, we cannot grade effort; we can only grade outcome.
This has always been true, but AI has made it undeniable. Instructors might believe they can tell when a student has put in “genuine effort,” but those assumptions are often shaped by bias. Does a clean, structured paragraph indicate hard work? Or just access to better training, tutoring, or now, machine assistance? Does a clumsy but heartfelt draft reflect authenticity? Or limited exposure to academic writing? Our ability to detect effort has always been flawed. Now, it’s virtually meaningless.
That’s why it doesn’t matter if students use AI. What matters is whether they can demonstrate understanding, communicate effectively, and meet the goals of the assignment. If your grading depends on proving whether a sentence came from a chatbot or a person, then you don’t know what the target learning outcome was in the first place. And if our assessments are built on presumed authorship, they’re no longer evaluating learning. They’re evaluating identity.
There are already cracks in the AI-detection fantasy. Tools like GPTZero and Turnitin’s AI checker routinely flag the writing of multilingual students, disabled students, and those who write in non-standard dialects as machine-generated. In these systems, the less a student “sounds like a college student,” the more likely they are to be accused of cheating. Meanwhile, many students, especially those who are first-generation, disabled, or from under-resourced schools, use AI tools to fill in gaps that the institution itself has failed to address. What looks like dishonesty is often an attempt to catch up.
Insisting on originality as a condition of academic integrity also ignores how students actually write. The myth of the lone writer drafting in isolation has always been a fiction. Students draw from templates, search engines, notes from peers, and yes, now from generative AI. If we treat all of these as violations, we risk criminalizing the ordinary practices of learning.
This requires a shift in mindset that embraces writing as a process rather than a product. It means designing assignments that can withstand AI involvement by asking students to revise, explain, synthesize, and critique. Whether a sentence was AI-generated matters far less than whether the student can engage with what it says, revise it, and place it in context. We should be teaching students how to write with AI, not how to hide from it.
I’m not arguing for a free-for-all. I’m arguing for transparency, accountability, and educational clarity. In my courses, I don’t treat AI use as taboo technology. I treat it as a new literacy. Students learn to engage critically with AI by revising in response to its suggestions, critiquing its assumptions, and making conscious choices about what to accept and what to reject. In other words, they take responsibility.
We cannot force students to write “original” prose without any external help. But we can teach them to be responsible authors who understand the tools they use and the ideas they put into the world. That, to me, is a far more honest and useful version of academic integrity.
Annie K. Lamar is an assistant professor of computational classics and, by courtesy, of linguistics at the University of California, Santa Barbara. She specializes in low-resource computational linguistics and machine learning. At UC Santa Barbara, she is the director of the Low-Resource Language (LOREL) Lab. Lamar holds a PhD in classics from Stanford University and an MA in education from the Stanford Graduate School of Education. Lamar is also a Public Voices fellow of The OpEd Project.
The views expressed in this article are the writer’s own.