The Architecture of Exclusion - Part 3 of 3
The Ivory Tower Strikes Back
Let’s start with the objection you’ve heard a thousand times: “If we let neurodivergent students use AI tools, it’s not fair to everyone else.” This argument sounds reasonable. It invokes equity. It appeals to meritocracy. It positions itself as defending standards.
It’s also deeply, fundamentally ableist.
Because here’s what that argument actually says: “Fairness means everyone must navigate the same barriers, even when those barriers are arbitrary, even when they measure nothing essential, even when they systematically exclude entire categories of minds.”
The stairs are fair because everyone has to climb them. Never mind that some people can’t climb stairs. Never mind that the stairs aren’t actually necessary—they’re just traditional. Never mind that installing a ramp doesn’t make stairs easier; it provides an alternative route to the same destination.
“It’s not fair” translates to: “The system was designed for my cognitive architecture, and I’m threatened by the possibility that others might access success through different means. I do not understand, therefore it is not valid.”
Consider academia’s institutional attachment to the word “rigor.” It signals seriousness, standards, intellectual heft. And it’s the first defense deployed when neurodivergent students request accommodations or access to assistive technology.
“We can’t lower our standards.”
“Rigor requires certain baseline capabilities.”
“If students can’t meet these demands, maybe they’re not ready for higher education.”
But what does rigor actually measure?
[Foley et al, 2025] demonstrate how AI analytics in education embed ableist assumptions that fail non-linear learners. The systems designed to assess “merit” and “capability” are calibrated to neurotypical patterns. They don’t measure intelligence; they measure conformity to a specific way of thinking.
When a professor bans AI tools in their classroom—claiming it undermines learning—what are they actually protecting? Often, it’s assessment methods designed around neurotypical processing speeds, neurotypical executive function, neurotypical communication styles. The “rigor” being defended isn’t intellectual depth; it’s architectural gatekeeping.
Consider timed examinations. What do they measure?
In theory: knowledge, comprehension, analytical capability. In practice: processing speed, the ability to perform under pressure, the capacity to retrieve information rapidly in high-stress contexts.
For many neurodivergent students, timed exams don’t measure what they know—they measure how quickly their brain can access what it knows under artificial constraints that have nothing to do with real-world application of knowledge.
A dyslexic student may understand complex theoretical frameworks brilliantly, and yet struggle to demonstrate that understanding within arbitrary time limits. Is the time limit the “rigor”? Or is it just a design feature we’ve naturalized as essential?
When institutions resist AI tools that could remove these barriers, they’re not protecting academic standards. They’re protecting the specific architecture through which those standards happen to be assessed. And that architecture systematically excludes neurodivergent minds.
And this is where the ableism is most reliably visible: in the framing of assistive technology as academic dishonesty.
[Thomas, 2024] documents how anti-AI rhetoric dismisses tools that aid communication and organization as “cheating,” completely ignoring the needs they address. A student using AI to organize thoughts or translate ideas into expected formats is accused of outsourcing their intelligence—as though the organizational demand was the actual point of the assignment.
But nobody accuses neurotypical students of “cheating” when their brains perform executive function automatically. Nobody suggests it’s “unfair” that some students can initiate tasks without external scaffolding, or that some students process time in ways that make deadline management intuitive.
The cognitive labor that neurotypical brains handle invisibly becomes “part of the assignment” when neurodivergent students need external tools to accomplish it. Suddenly, executive function isn’t just background processing; it’s a core competency being assessed.
Except it was never listed in the learning outcomes. It was never taught. It’s just assumed that everyone functions this way at more or less the same level.
This exposes the underlying ableism: treating as “essential” whatever the dominant cognitive architecture does automatically, then calling it “cheating” when others use tools to accomplish the same outcome through different means.
[Mankoff et al, 2024] argue that pushback against AI ignores its equity potential entirely. The biases embedded in current systems deny access without informed consent. But the solution isn’t to ban the tools; it’s to design them better and ensure they serve diverse needs rather than replicating existing exclusions.
The irony is that much of the resistance to AI as assistive technology comes from the same institutions already using AI in ways that actively harm neurodivergent students.
[Kornilov, 2025] documents emergent bias in assessment tools like ADEPT-15, where algorithmic decision-making perpetuates discrimination without human oversight. These systems need ADA amendments because they’re automating ableism at scale.
[Gallegos, 2023] describes how educational technology surveillance and content filters function as “digital bans” that disproportionately target disabled and LGBTQ students. The same institutions that ban students from using AI for cognitive support deploy AI for monitoring and control.
Filters and algorithmic systems restrict access to diverse content and lack civil rights integration—yet they face little resistance from the academic integrity offices concerned about “cheating.” [Wang et al, 2024] further illustrate how these same systemic biases affect racialized people in the justice system.
The double standard is stark: AI used to surveil, control, and exclude neurodivergent students = acceptable. AI used by neurodivergent students to access education = cheating.
[O’Grady, 2024] emphasizes that fairness conversations about AI in academics must include disabled people, noting that voice AI systems exclude non-standard speech patterns—perpetuating ableism in education and employment access. Yet these same exclusionary systems face less institutional resistance than tools that would provide access.
[Rangnekar, 2021] identifies gaps in AI research that hinder inclusion, calling for holistic audits of educational and workplace systems. But institutions resist precisely these audits when they might reveal how deeply ableist assumptions are embedded in what we call “standards.”
I won’t pretend that there aren’t legitimate concerns about unregulated AI systems causing harm. There very much are.
[Morrin et al, 2025] document cases of AI-amplified psychosis, where systems like Character.AI and companion chatbots contributed to psychiatric deterioration in vulnerable users—through sycophantic design, emotional manipulation, and algorithmic engagement maximization.
OpenAI’s recent disclosure of extremely high rates of suicidal ideation among ChatGPT users only escalates the sense of danger [Times of India, 2025]. These harms are real and present, and they require serious attention.
But the solution isn’t blanket prohibition of all AI tools.
The harms documented by Morrin and confirmed by OpenAI arise from predatory chatbot design: systems engineered to maximize engagement through affirmation of delusional content, romantic entanglement, and 24/7 availability without epistemic safeguards.
A teenager developing romantic attachment to a chatbot programmed to reciprocate is fundamentally different from a dyslexic student using speech-to-text to write an essay.
These are not the same category of tool, and conflating them is intellectually dishonest.
Banning AI in education because Character.AI caused psychiatric harm is like banning students with pacemakers because some medical devices malfunction. It’s simply a way to avoid taking responsibility for that student’s well-being.
Both problems need solving—separately.
Morrin’s research team proposes evidence-based interventions for vulnerable users: digital advance statements, personalized instruction protocols, reflective check-ins, and escalation safeguards. These are targeted approaches that distinguish between harmful design and helpful tools. They recognize that AI systems can be designed to support epistemic security instead of undermining it.
But institutions are not implementing nuanced safeguards.
They’re implementing blanket bans that:
Ignore the difference between predatory entertainment chatbots and educational accessibility tools
Punish neurodivergent students who need AI support to access education
Fail to protect anyone from actual AI harms, which occur outside institutional control in the first place
Use psychiatric harm as a rhetorical weapon to preserve exclusionary architecture
If institutions actually cared about protecting vulnerable students from AI harms, they would:
Distinguish between tool categories (accessibility vs. entertainment vs. surveillance)
Implement Morrin’s proposed safeguards for at-risk populations
Audit their own AI surveillance systems for psychiatric impact
Center disabled and neurodivergent voices in policy development
Instead, they ban the tools that provide access while deploying the tools that cause harm.
The refusal to make these distinctions hints at the real motive: preservation of exclusionary architecture, not protection of student well-being. So what’s actually being protected when institutions ban AI tools?
Not academic integrity—that’s a post-hoc justification, not a reason.
Not educational outcomes—neurodivergent students using AI tools often demonstrate deeper learning because cognitive resources go to understanding rather than wrestling with arbitrary barriers.
Not student safety—blanket bans don’t address the actual AI harms Morrin documents, which occur in unregulated consumer applications, not educational accessibility tools.
Not fairness—excluding neurodivergent students from accessing education isn’t fair; it’s discriminatory. This should not require explaining.
What’s being protected is architecture. The specific design of academia that privileges neurotypical minds and treats that privilege as neutral, natural, and necessary.
[Pitt, 2024] describes how AI can counter the “steep steps” of academic ableism beyond basic accommodations, but this requires institutions to acknowledge that their steps were never essential in the first place. That the architecture itself is the problem.
The resistance to AI as assistive technology reveals an uncomfortable truth: many academics are more invested in maintaining the systems that validated their own success than in dismantling barriers that exclude brilliant minds who think differently.
Current disability law treats accommodations as individualized modifications to existing systems [Dolmage, 2017]. Get diagnosed. Document your disability. Request specific accommodations. Wait for approval. Navigate bureaucracy.
Justify your needs repeatedly.
This framework locates disability in individuals rather than in architectural design. It treats access as a favor granted to those who can prove they deserve it, rather than a baseline requirement for educational spaces.
AI as assistive technology exposes the inadequacy of this model. When tools can bypass barriers at scale, the question shifts: why are the barriers there at all?
If AI can provide real-time transcription, why are lectures designed as pure auditory experiences? If AI can organize information visually, why is linear note-taking treated as essential? If AI can handle executive function demands, why are those demands embedded in every aspect of academic life?
The resistance to AI reveals that institutions aren’t interested in removing barriers; they’re interested in maintaining control over who gets access and on what terms.
So what does it look like to integrate AI as assistive technology in ways that center neurodivergent access rather than institutional control?
Policy changes that recognize AI tools as legitimate accommodations without requiring individualized medical documentation. Universal design for learning that assumes cognitive diversity rather than treating it as deviation requiring special permission.
Faculty training that addresses implicit ableism in assessment design. Education about how “rigor” and “standards” often measure conformity to neurotypical architecture rather than actual learning outcomes.
Student-led advocacy that names ableism when it appears—even when it masquerades as concern about academic integrity. Collective organizing that refuses to accept exclusionary architecture as inevitable.
Transparent audits of educational technology to identify where algorithmic systems perpetuate discrimination, and proactive design that centers disabled and neurodivergent users from the start rather than treating accessibility as an afterthought.
Nuanced harm reduction that distinguishes between tool categories and implements evidence-based safeguards where needed, rather than using psychiatric harm as justification for blanket exclusion.
Cultural shift toward understanding that cognitive diversity is richness, not deficit. That multiple paths to the same learning outcome don’t undermine education—they expand it.
This isn’t idealistic dreaming; it’s practical and it’s necessary.
As AI tools proliferate and become increasingly sophisticated, institutions face a choice: adapt their architecture to accommodate cognitive diversity, or double down on exclusionary practices that will become increasingly transparent as ableist gatekeeping.
The resistance to AI as assistive technology for neurodivergent students reveals a fundamental question about what education is for:
Is it about gatekeeping—ensuring only certain kinds of minds can access knowledge and credentialing?
Or is it about learning—creating conditions where diverse minds can engage deeply with ideas, develop capabilities, and contribute their unique perspectives?
If it’s the latter, then AI tools aren’t a threat to academic integrity; they’re an opportunity to finally align institutional architecture with stated values of inclusion and equity.
But that requires naming the ableism currently defended as “standards.” It requires acknowledging that much of what we call “rigor” is actually just neurotypical privilege, naturalized as a universal requirement.
The future where AI serves neurodivergent access is possible. But it requires institutions to stop asking “how do we maintain our current systems?” and start asking “why are our systems designed to exclude in the first place?”
The ghosts in these machines aren’t artificial intelligences gaining consciousness. They’re the brilliant neurodivergent minds we’ve been excluding all along—visible because technology is finally making the barriers impossible to ignore.
The question is whether we’ll dismantle those barriers, or whether we’ll keep insisting that everyone has so much potential—if only they could learn to climb stairs.

