Why I’ve Never Bought into AI Detection—and What Educators Should Be Doing Instead
by Caitlin Cloyd
When ChatGPT first launched, many in education panicked. I didn’t.
I watched the rush to install AI detectors in K–12 classrooms with a growing sense of unease—not because I didn’t understand the fear, but because I knew we were asking the wrong questions. I’ve never been on the "detection treadmill," and the more I’ve learned, the more confident I am that it's time to step off it entirely.
This post is based on a research report I created using Gemini and refined with ChatGPT. I'm sharing it here in my own words because this conversation is too important to stay buried in a doc. You can read the full report at the link in the comments.
The uncomfortable truth: AI detectors don’t work (and probably never will)
Let's get this out of the way: AI detectors are not reliable. Not slightly unreliable; wildly so. False positives show up consistently in independent testing, and false negatives are trivial to produce with a few light edits or a cheap "humanizer" tool. Even OpenAI pulled its own AI text classifier in 2023, citing its low rate of accuracy (source).
So why are we still relying on these tools to make high-stakes decisions?
They’re not just ineffective—they’re harmful
False positives aren't just technical errors; they're accusations aimed at real students, and a single misfire can put a student's academic future at risk. The burden doesn't fall evenly, either. English learners, neurodivergent kids, and Black students are disproportionately flagged by these tools (EdWeek, Stanford); the Stanford researchers found that popular detectors flagged the majority of essays written by non-native English speakers as AI-generated. That's not just an algorithmic flaw. It's a serious equity issue.
And teachers? We're left interpreting opaque percentages and "likelihood" scores with no explanation of how they were produced. Trust between students and educators erodes, and instead of focusing on learning, we're chasing ghosts.
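A quick aside for anyone who thinks a 1% error rate sounds acceptable: false positives compound at scale. Here's a minimal back-of-the-envelope sketch in Python. The enrollment, essay count, and error rate below are my hypothetical assumptions for illustration, not measurements of any particular tool:

```python
# Back-of-the-envelope math: what a "small" false-positive rate means at school scale.
# Every number here is a hypothetical assumption, chosen for illustration only.

students = 1000             # students whose essays go through the detector (assumption)
essays_per_student = 10     # detector-checked essays per student per year (assumption)
false_positive_rate = 0.01  # 1% per essay, in the ballpark of vendor claims;
                            # independent tests have measured higher

essays_checked = students * essays_per_student
expected_false_flags = essays_checked * false_positive_rate

# Probability an honest student gets falsely flagged at least once during the year
p_flagged_at_least_once = 1 - (1 - false_positive_rate) ** essays_per_student

print(f"Essays checked per year: {essays_checked}")                        # 10000
print(f"Expected false accusations per year: {expected_false_flags:.0f}")  # 100
print(f"Chance an honest student is flagged this year: {p_flagged_at_least_once:.0%}")  # 10%
```

Under those assumptions, a detector performing exactly as advertised still produces about a hundred false accusations a year in a single school, and roughly one honest student in ten gets accused. That's the arithmetic behind the equity concerns above.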
The real issue isn’t AI—it’s outdated assessment
Let’s be honest: AI didn’t break writing instruction. It exposed what was already broken.
If your assessment can be completed in seconds by ChatGPT, the problem isn’t the tech. It's the task. We've clung to product-based, take-home essays for decades—despite knowing they’re vulnerable to cheating, ghostwriting, and yes, AI.
This is our chance to evolve.
💡 So what do we do instead?
We stop playing AI whack-a-mole and start designing learning that matters.
Here’s what that looks like in practice:
🔹 1. Leverage student-facing AI tools that support the writing process
MagicSchool Student Rooms let teachers create customized spaces where students can access curated tools like:
- Idea Generator and Sentence Starters to overcome blank-page anxiety
- Writing Feedback and Text Proofreader to guide revision and clarity
- Assignment Scaffolders to break complex prompts into manageable steps
- Exemplar & Non-Exemplar comparisons to model expectations
These tools build student confidence, reinforce writing as a process, and keep teachers in control. When used transparently, they shift the conversation from plagiarism to progress.
🔹 2. Shift to process-based assessment
Use in-class drafting, reflections, and revision tracking tools (like Brisk's document playback) to capture thinking, not just final products. Teach metacognition and invite students to reflect on how they write—not just what they write.
🔹 3. Explore “AI-resistant” assessment formats
Incorporate oral defenses, performance tasks, multimodal projects, and group discussions—formats that prioritize higher-order thinking and human nuance over formulaic outputs.
🔹 4. Teach AI literacy
If students don’t understand what AI is and how it works, they can’t use it ethically. Teach them to analyze outputs critically, use AI as a tool (not a crutch), and cite or reflect on when and why they used it.
🔁 Shift the mindset: AI isn’t the enemy—it’s the invitation
We’re not in a battle to outsmart students. We’re in a moment of transformation. AI gives us a real chance to rethink the purpose of assessment and move beyond tired, one-size-fits-all approaches.
If we lean into this shift, we can finally prioritize thinking over typing, process over product, and trust over tech-policing.
📌 This article and the research report it’s based on were created with the help of both Gemini and ChatGPT. The ideas, edits, and opinions are my own—and I believe in modeling transparent, ethical AI use.