The emergence of generative AI has the potential to transform how students research and write, but it has also made it easier to include plausible, manufactured reference lists in submitted coursework. Fabricated sources have always been a risk; the difference now is that AI-generated citations often convincingly mimic real academic sources, complete with realistic author names, journal titles, and even summaries that appear in search results.
Recently, a colleague marking some first-year essays discovered that two submissions contained over 60% hallucinated references. The citations were convincing enough to pass initial scrutiny, but upon closer inspection, they simply didn’t exist. This isn’t just sloppy referencing: it’s a shortcut now readily available to students inclined to take one, and a fundamental challenge to academic integrity, particularly in evidence-based disciplines like healthcare, law, and the sciences.
Undermining Evidence-Based Work
Accurate sourcing is a key aspect of the academic principles that underpin Higher Education. False references compromise the validity of student work.
Students may unknowingly build arguments on nonexistent evidence, weakening their learning.
Higher Education trains the next generation of professionals. In fields where accurate sourcing is critical (e.g., clinical practice, policy research), habits of false referencing will compromise the validity of future professional work.
Increased Marking Burden
Spot-checking references is time-consuming, and verifying every citation is impractical at scale.
Current AI-detection tools (e.g., Turnitin) do not flag fabricated references, leaving markers to uncover them manually.
Blurring the Line Between Poor Practice and Misconduct
Is fabricating references with AI malpractice (requiring academic misconduct procedures), or simply poor academic skill, in this case referencing (needing feedback and resubmission)?
Without clear guidance, responses risk inconsistency across departments.
This issue isn’t entirely new: students have long fabricated or misrepresented sources. AI, however, has amplified the scale and sophistication of the problem.
Key factors include:
Overreliance on AI for Research: Students may use ChatGPT to "find" sources without verifying them.
Lack of Awareness: Many students don’t realise AI hallucinates references, assuming outputs are reliable.
Gaps in Detection: Plagiarism software checks for copied text, not invented citations.
To address this challenge, we need a proactive, multi-layered approach that balances education, policy, and practical marking strategies. Here’s where I think we should focus:
Reinforce Core Academic Principles
"Always read what you cite" must be non-negotiable. Even a quick skim ensures a source exists and is relevant.
Teach students to verify every AI-generated reference using library databases or Google Scholar.
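For illustration, here is a minimal sketch of what that verification can look like when a reference includes a DOI. It asks the public Crossref API whether the DOI is registered and whether it resolves to the claimed title. The function name and the contact email in the User-Agent header are my own illustrative assumptions; library databases or Google Scholar serve the same purpose for references without DOIs.

```python
# Minimal sketch: verify a cited DOI against the public Crossref API.
# Assumes the reference includes a DOI; 'requests' is the only dependency.
import requests

def check_doi(doi: str, claimed_title: str) -> str:
    """Look up a DOI on Crossref and compare the registered title
    with the title the student actually cited."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # Hypothetical contact address; Crossref asks polite users to identify themselves.
        headers={"User-Agent": "reference-checker (mailto:marker@example.ac.uk)"},
        timeout=10,
    )
    if resp.status_code == 404:
        return "DOI not registered: likely fabricated"
    resp.raise_for_status()
    registered = (resp.json()["message"].get("title") or [""])[0]
    if claimed_title.strip().lower() in registered.lower():
        return f"DOI resolves to: {registered}"
    return f"DOI exists but points to a different work: {registered}"

# Example: a real paper (LeCun, Bengio & Hinton, Nature, 2015)
print(check_doi("10.1038/nature14539", "Deep learning"))
```

A script like this is a teaching aid, not a verdict: a missing DOI or a mismatched title is a prompt to check the library catalogue, not proof of misconduct.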
Update Guidance on AI Use
Clarify that fabricating references with AI is academic misconduct, not just poor practice.
Consider* requiring students to declare AI use and/or submit copies of cited sources (or links) when requested.
*I struggle with this advice and will write a blog post on it at some point. In short: how does a declaration change grading? How is it perceived? Is it an extra burden on those who struggle? And we shouldn’t be citing AI as a source in a reference list, because we shouldn’t really be using it that way.
Adapt Marking Practices
Spot-check high-risk references (e.g., key claims, unusual journals) rather than verifying every citation; a sketch of one automated approach follows this list.
For essays with widespread fabrication:
Withhold feedback and escalate to misconduct procedures if intentional.
Require resubmission (with capped grades if warranted) if the core argument relies on false evidence.
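As an illustration of what an automated spot-check might look like when only a title is available, the sketch below searches the public Crossref API for the claimed title and flags citations whose best match falls below a similarity threshold. The 0.9 threshold and function name are illustrative assumptions; a low score means "verify manually", not "fabricated".

```python
# Minimal sketch: spot-check a high-risk citation by its claimed title.
from difflib import SequenceMatcher
import requests

def spot_check_title(claimed_title: str) -> str:
    """Search Crossref for the claimed title and report the closest match."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.title": claimed_title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    best_title, best_score = "", 0.0
    for item in resp.json()["message"]["items"]:
        title = (item.get("title") or [""])[0]
        score = SequenceMatcher(None, claimed_title.lower(), title.lower()).ratio()
        if score > best_score:
            best_title, best_score = title, score
    if best_score >= 0.9:  # illustrative threshold, not a standard
        return f"Likely real: '{best_title}' (similarity {best_score:.2f})"
    return f"No close match found; verify manually (closest: '{best_title}')"

print(spot_check_title("Attention Is All You Need"))
```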
Invest in Student and Staff Support
Workshops on AI pitfalls: Highlight how hallucinations work and how to spot them.
Promote better AI tools: e.g., Perplexity.ai, which provides traceable sources, though verification is still essential.
Advocate for Policy Consistency
Develop university-wide guidelines on handling AI-generated references, ensuring fairness across programs.
Share case studies (like the 60% fabrication example) to highlight the risks and solutions.
AI is here to stay, but its misuse doesn’t have to become the norm. By reinforcing academic standards, clarifying consequences, and equipping staff and students with better tools, we can uphold research integrity even as the technology evolves.