Design education has no coherent position on AI.
The responses across institutions range from prohibition to uncritical adoption, with most programmes occupying an uneasy silence between the two. Faculty use AI privately and restrict it publicly. Students use it regardless and learn to hide it. Institutions publish AI policies that are procedural — what is permitted, what is penalised — without ever articulating what AI is actually doing when it is used in a design context.
This variance is not healthy pluralism. It is the absence of a shared frame. When a design student at one institution is told “never use AI” and a design student at another is told “use AI for everything,” neither institution has made a pedagogical argument. They have made an administrative decision dressed as a position.
The result: a generation of design practitioners who either fear AI or depend on it, with no structural understanding of where it belongs and where it does not.
The confusion stems from a category error. AI is treated as a single capability — a tool that either “helps” or “replaces.” Both the prohibitionists and the enthusiasts share this assumption. They disagree about the consequence but agree about the premise: that AI does one kind of thing.
It does not. AI does at least two fundamentally different things, and conflating them is where design education loses its footing.
AI is good at language. It reads patterns in text. It extracts signals from unstructured input. It generates fluent, coherent natural language. These are language operations — pattern recognition and pattern generation across linguistic material.
AI is poor at judgment. It cannot reliably determine whether a design concept is coherent, whether evidence supports a claim, whether a scope is bounded, whether assumptions are acknowledged. When asked to judge, it produces plausible-sounding assessments that vary between requests, resist auditing, and cannot be reproduced.
The category error: asking AI to judge — to evaluate quality, to assess coherence, to determine sufficiency — and treating its language fluency as evidence of judgment competence. The fluency is real. The judgment is simulated.
This is not a slogan. It is an architecture — a structural separation that determines where AI belongs in any tool, any workflow, any pedagogical context.
| Layer | What operates here | Why this allocation |
|---|---|---|
| Qualification | AI reads input and extracts structured signals — what is present, what is absent, what is ambiguous | AI excels at pattern recognition across language |
| Rules | Deterministic code converts signals into evaluations — applying thresholds, mapping relationships, producing three-state assessments (present / unclear / absent) | Code is auditable, reproducible, and explicit. The same input produces the same judgment every time. |
| Language | AI narrates the evaluation in plain language — explaining what the rules determined, not introducing new judgments | AI excels at translating structured verdicts into readable, contextual prose |
When AI is asked to do all three — read the input, judge its quality, and explain its judgment — the result is fluent, confident, and unreliable. The language quality masks the judgment instability. A student receives feedback that sounds authoritative but cannot be reproduced, inspected, or challenged.
This is not a minor technical distinction. It determines whether AI tools in design education build critical capacity or erode it.
This position is not theoretical. It is demonstrated through working tools.
Coherence Diagnostic — evaluates design concept statements across five dimensions: claim, evidence, scope, assumptions, gaps. A trained classifier (DeBERTa, 98.38% accuracy) qualifies the input. Deterministic rules produce three-state evaluations. AI narrates the result. The same concept statement, submitted twice, receives the same structural evaluation. Free, no login required. Open source under MIT licence.
Play Shape Diagnostic (forthcoming) — analyses play patterns using human selection as the qualification layer and embedding-based similarity as the relationship engine. Demonstrates that qualification need not be AI classification — human judgment can serve as the first layer when the domain supports it.
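Embedding-based similarity, as a relationship engine, reduces to a deterministic computation such as cosine similarity over embedding vectors. A sketch with placeholder vectors — a real tool would obtain embeddings from a model, and nothing here reflects the tool's actual internals:

```python
import math

# Cosine similarity: a deterministic relationship measure over embeddings.
# The vectors below are placeholders standing in for model-produced
# embeddings of two play patterns.
def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

play_a = [0.9, 0.1, 0.3]  # placeholder embedding of one play pattern
play_b = [0.8, 0.2, 0.4]  # placeholder embedding of another
print(round(cosine(play_a, play_b), 3))
```

The point of the allocation holds here too: the similarity score is reproducible arithmetic, while the qualification of what counts as a play pattern is left to human judgment.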
The architecture is the argument. The tools are the evidence.
The separation of language from judgment is itself teachable. Students who understand where AI excels and where it fails develop a structural literacy that survives the next model release, the next capability leap, the next institutional policy revision.
This is not “AI literacy” as commonly framed — learning to write prompts, learning which tools exist, learning to cite AI use. It is architectural literacy: understanding what kind of operation is being performed, and whether the tool performing it is suited to the task.
AI-generated feedback in design education is either a language operation (narrating an evaluation that was made elsewhere) or a judgment operation (assessing quality directly). The first is appropriate. The second is not — unless the judgment layer is explicit, deterministic, and auditable.
Any institution using AI for student feedback should be able to answer: where is the judgment made, and can it be inspected? If the answer is “the AI judges and explains simultaneously,” the tool is unreliable regardless of how fluent its output appears.
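One way to make that answer concrete is for the rules layer to return not just a verdict but the rule that produced it, so the judgment can be inspected and challenged after the fact. A hypothetical sketch, with invented thresholds:

```python
# Hypothetical audit record: each verdict carries the threshold rule that
# fired, making the judgment inspectable rather than opaque.
def evaluate(dim: str, score: float) -> dict:
    if score >= 0.7:
        verdict, rule = "present", f"{dim} >= 0.7"
    elif score >= 0.2:
        verdict, rule = "unclear", f"0.2 <= {dim} < 0.7"
    else:
        verdict, rule = "absent", f"{dim} < 0.2"
    return {"dimension": dim, "verdict": verdict, "rule": rule, "score": score}

record = evaluate("evidence", 0.42)
print(record["verdict"], "via", record["rule"])
```

A student who disagrees with the verdict can see exactly which rule fired and argue with it — something no fluently narrated AI judgment permits.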
Design programmes are adopting AI tools without a framework for evaluating them. The question is not “does this tool use AI?” but “where in this tool does AI operate, and is that the right allocation?”
A tool that uses AI to read student work and extract signals — appropriate. A tool that uses AI to evaluate whether a design is good — structurally unreliable. A tool that uses AI to explain an evaluation made by inspectable rules — appropriate. The architecture determines the answer, not the presence or absence of AI.
A cross-institutional gathering of design educators and students who recognise that the field needs a coherent position on AI — not a policy, but a position. Not rules for what is permitted, but a framework for what AI is doing when it is used.
Participants are individuals. They may teach at NID, Srishti, CEPT, IIT, Anant, Karnavati, or elsewhere. Their institutional affiliation is context, not credential. No institutional endorsement is needed or sought.
Membership is practice, not affiliation.
Koher Architecture contributed the founding frame: the three-layer separation (qualification, rules, language), two working tools as evidence, and this position statement as the opening argument. The roundtable exists to challenge, extend, and strengthen this frame — not to protect it.
This document is open for co-authorship.
If you teach design — at any level, at any institution — and you recognise that the field’s current response to AI is incoherent, this is an invitation to articulate something better.
Not a committee. Not a conference. A position, grounded in practice, demonstrated through tools, held by people who build things.
To sign this position, fill out the form below. You will receive a verification email.
| Name | Affiliation | Date |
|---|---|---|
| Prayas Abhinav | Anant National University | 24 February 2026 |
Institutional affiliation is context, not credential.