Feb 12, 2026
The Ghost in the Machine: Why AI Cheating is a Crisis of Education, Not Just Morality
Eng. Abdirahman Said Abubakar
The scene has become archetypal across university campuses. A student stares at a blinking cursor on a blank document. The clock is ticking. In a separate tab, ChatGPT sits patiently, ready to generate 1,500 words on Proust or supply the Python code for a data science problem in seconds.
For most of history, the barrier to cheating was effort. You had to find a willing peer or a paid ghostwriter, and hope they were competent. Today, that barrier has evaporated. We now have a ghost in the machine—ubiquitous, free, and eerily competent.
The immediate reaction from academia has been one of panic and prohibition. Turnitin detectors have been weaponized. Proctoring software has become more intrusive. Some departments have reverted to handwritten blue-book exams in an attempt to barricade the gates.
But viewing this purely as an arms race against dishonesty misses the point. The mass adoption of AI to complete assessments is not merely a failure of student ethics; it is a referendum on the value of the assessment itself.
The “Rational” Cheater
We must first acknowledge the context. University education, particularly in the humanities and social sciences, is currently burdened by a crisis of cost and vocational anxiety. When a student pays $40,000 in tuition, the assessment is no longer viewed as a formative intellectual journey; it is viewed as a transaction. “I paid for the credential,” the logic goes, “therefore I must clear these hurdles.”
If a student can clear a hurdle by using a tool that is faster than their own brain, why wouldn’t they? To dismiss them as simply immoral is to ignore the structural pressures that have redefined a degree as a product rather than a process. When the system treats the student as a customer buying a grade, the student treats the assessment as a box to be ticked. AI is simply the most efficient ticker.
The Invisible Curriculum
The deeper tragedy of AI-assisted cheating is what the student loses without realizing it.
We often speak of university as teaching “content”—knowing the date of a treaty or the formula for a derivative. But the true value of a university education is the invisible curriculum: the ability to hold a complex thought in your head for weeks, to wrestle with ambiguity, to argue with yourself, and to tolerate the frustration of not knowing the answer.
When a student outsources a 2,000-word essay to an LLM, they don’t just steal a grade; they skip the intellectual workout. They avoid the cognitive friction required to build stamina for complex problems. It is akin to riding an e-scooter through a marathon: you technically reach the finish line, but you gain none of the cardiovascular strength.
The Flawed Prosecutor
Universities have responded with a mix of technological surveillance and paranoia, yet this approach is fraught with peril.
AI detectors have proven to be statistically biased, frequently flagging non-native English speakers whose “pattern” of language deviates from the statistical norm. We are now in the absurd position of accusing international students of cheating because their writing is too “rigid,” or accusing neurodivergent students because their syntax is too “patterned.”
Furthermore, the obsession with catching cheaters distorts the role of the professor. An educator should be a mentor, not a parole officer. When the primary interaction between student and teacher is mediated by suspicion and plagiarism hearings, the educational alliance is broken.
Redefining the “Open Book”
The uncomfortable truth is that we have entered a post-proctoring era. We cannot put the toothpaste back in the tube. We cannot un-invent the calculator, and we cannot un-invent the large language model.
The universities that survive this transition will be those that stop asking questions AI can answer.
If a generic prompt regarding “The causes of WWI” or “The marketing mix of Starbucks” can be answered perfectly by an LLM, the question is not “How do we stop them using it?” but rather, “Why are we still asking this?”
The future of assessment lies in process over product. Why grade the final code when you can grade the buggy commits on GitHub that show the student struggling to fix it? Why grade the literary essay when you can grade the student’s reflection on why they chose that particular thesis?
We need assessments that are:
· Personalized: Connect the topic to the student’s own life, experiences, or local context.
· Oral: Viva voce examinations, where the student must defend their work in real time, revealing immediately whether they wrote it or merely downloaded it.
· Collaborative: If we allow AI as a collaborator, we can assess how well the student directed the AI. Prompt engineering is now a legitimate literacy.
Conclusion
We are currently in the “moral panic” phase of this technology. But history shows us that banning tools rarely works; adapting how we use them does.
The student who uses AI to cheat is not a villain. They are a symptom. They are telling us, in the only language the system understands, that the currency of education—the assessment—has become so disconnected from actual learning that a machine can do it better.
If a machine can do it better, perhaps we shouldn’t be teaching it.
Recommendations
1. Abolish Generic, Reproducible Assessments
Departments should audit their current assessment inventory and identify any task that can be completed satisfactorily by a competent LLM with a single prompt. These tasks—generic essays, standard problem sets, formulaic code exercises—should be either substantially redesigned or retired entirely. If a machine can do it, it is not measuring human intelligence.
2. Adopt Process-Oriented Assessment
Shift weight from the final product to the intellectual journey:
- Require version histories: In coding assignments, assess commit logs and debugging narratives, not merely working code.
- Require annotated drafts: In writing assignments, assess outlines, multiple drafts, and marginalia showing how thinking evolved.
- Require failure documentation: Award credit for intelligent failures—well-reasoned approaches that did not work, with analysis of why.
3. Restore Oral Examination
The viva voce (oral exam) is the oldest and most reliable method of verifying authentic understanding. It cannot be faked by AI. Institutions should reintegrate oral defense components into major assessments, not as punitive interrogations but as intellectual conversations. A five-minute discussion of a student’s thesis reveals more about their comprehension than a thirty-page essay ever could.
4. Rebrand AI Literacy as a Core Competency
Rather than policing AI use, teach students how to direct it critically. Assessment should include:
- Evaluation of AI-generated output (identifying errors, biases, gaps)
- Prompt engineering as a form of research methodology
- Reflection on when AI assistance augmented versus diminished the student’s own thinking
This reframes the technology from a cheating tool to a professional instrument, which is what it actually is.
5. End the Surveillance Arms Race
Institutions should decommission AI detectors and deprioritize proctoring software. These tools damage the educational alliance, disproportionately penalize non-native and neurodivergent students, and create a culture of suspicion. Replace detection resources with assessment redesign resources.
6. Reconnect Assessment to the Student’s Lived Context
The most AI-resistant assessments are those that require personal, local, or experiential knowledge. Assignments that ask students to connect course concepts to their own community, family history, workplace, or observations cannot be outsourced. This also restores intrinsic motivation by making academic work feel meaningful rather than transactional.
7. Train Faculty in Assessment Design
Most academics have never received formal training in how to design assessments. Institutions must invest in professional development that equips faculty to create authentic, AI-resistant tasks. This is not about making assessments “harder” but making them truer—better mirrors of genuine intellectual work.