============================================================
nat.io // BLOG POST
============================================================
TITLE: When Progress Breaks Identity: The No True Scotsman Problem in AI
DATE: February 25, 2026
AUTHOR: Nat Currier
TAGS: AI, Technology, Leadership, Business Strategy
------------------------------------------------------------

If you spend time in engineering Slack channels, product leadership meetings, hiring loops, or LinkedIn comment wars, you have seen this argument before. Someone says AI cannot write real code. Then an AI-assisted engineer ships something useful. The reply changes: real engineers do not rely on AI. Then senior engineers demonstrate that they do use it, and often productively. The reply changes again: that is not real engineering, that is just assembly, or glue, or review, or autocomplete, or scaffolding.

The same shape shows up in art, research, design, and strategy work. "That is not real creativity." "That is not real thinking." "That is not real intelligence." A capability appears, a counterexample lands, and the category is redefined so the old hierarchy survives. The goalposts do not move once. They roll continuously.

This matters because these arguments are rarely just internet semantics. They shape hiring standards, promotion logic, tool adoption, budget decisions, and how people protect their status when the work itself changes. If you are a founder, manager, operator, or experienced individual contributor, this pattern affects what your organization adopts, what it delays, and which people it trusts.

In this post, I am not arguing whether AI is good or bad, predicting AGI timelines, or proposing policy. I am mapping a pattern: how the No True Scotsman fallacy behaves as an identity-defense mechanism when disruptive capabilities threaten expertise, status, and labor value.
By the end, you will be able to spot goalpost-shifting behavior more quickly in public discourse, in organizations, and in your own reactions.

> **Key idea / thesis:** Many AI debates that sound technical are actually status and identity defenses that use No True Scotsman reasoning to redefine what counts.
> **Why it matters now:** Capability shifts are already changing workflows, hiring signals, and labor bargaining, and misreading identity defense as pure technical disagreement leads to bad decisions.
> **Who should care:** Leaders, hiring managers, builders, and knowledge workers who need to separate valid quality concerns from status-preserving goalpost shifts.
> **Bottom line / takeaway:** Move from purity debates to capability framing: ask what works, where it fails, and how value creation shifts.
> **Boundary condition:** This applies to category-policing behavior under capability disruption, not to every legitimate standards debate.

[ The argument pattern that keeps regenerating itself ]
------------------------------------------------------------

The No True Scotsman fallacy is usually explained as a logic mistake. That is true, but it is incomplete. In practice, it is often a defensive maneuver. A category that once gave people status, identity, or bargaining power gets threatened by a counterexample. Instead of updating the claim, the category boundary is tightened.

The classic structure is simple:

1. A person makes a broad category claim.
2. A counterexample appears.
3. The category is redefined to exclude the counterexample.
4. Category purity is restored, at least rhetorically.

The important part is not just the logic. It is the function. The maneuver preserves identity at the expense of truth.

> When a category carries status, redefinition becomes emotional self-defense, not just bad reasoning.

That is why this pattern gets stronger when capability shifts arrive quickly. AI is not just introducing new tools.
It is destabilizing categories people used to organize competence, legitimacy, and pay.

[ Quick concept primer (for non-specialists) ]
------------------------------------------------------------

Before we go deeper, here are the terms that do most of the work in this essay.

| Term | Plain-language meaning | Why it matters here | What it is not |
| --- | --- | --- | --- |
| No True Scotsman | Moving the definition of a group after a counterexample appears | Explains how "real engineer" and "real intelligence" claims keep mutating | Normal quality standards by themselves |
| Goalpost shifting | Changing success criteria after evidence shows up | Makes debate feel endless even when capabilities improve | Updating criteria transparently before new evidence |
| Motivated reasoning | Reasoning toward a preferred conclusion instead of from evidence | Helps explain why smart people defend identity narratives | Intentional dishonesty in every case |
| Legibility | How easily a person or skill fits familiar categories | Explains hiring and promotion bias toward recognizable archetypes | Actual capability or output quality |

So far, this sounds like an internet argument pattern. It is not. It is a workplace pattern, a labor-market pattern, and often a self-protection pattern.

[ Why AI is such a strong trigger ]
------------------------------------------------------------

AI is a perfect trigger for No True Scotsman behavior because it disrupts more than task execution. It disrupts how people define themselves. When a tool shortens time-to-output, some people experience it as convenience. Others experience it as identity compression. If a skill took years to build and becomes easier to approximate, the threat is not only practical. It is existential.

We now have enough data to show that this reaction happens in the middle of real adoption, not at the edges.
GitHub's September 7, 2022 Copilot research post (updated May 21, 2024) reported a controlled experiment where developers using Copilot completed a task 55% faster on average. The same post also documented strong self-reported gains in flow and reduced frustration, which matters because subjective productivity changes how people experience work, not just how quickly they type.

Microsoft and LinkedIn's Work Trend Index report, published May 8, 2024, reported that 75% of global knowledge workers were using AI at work, 78% of AI users were bringing their own tools to work, and more than half of AI users worried that using AI on important tasks made them look replaceable. That is an adoption signal and an identity threat signal in the same dataset.

Gallup's January 25, 2026 workplace update reported that frequent AI use (a few times a week or more) among U.S. employees rose to 26%, with daily use at 12%, while total usage was uneven across roles and industries. In other words, the capability shift is real, but it is not evenly distributed, which increases local status tension.

McKinsey's November 5, 2025 global AI survey adds another layer: broad organizational use is rising, but most companies are still stuck in pilot or experimentation phases. That combination produces exactly the kind of discourse instability that breeds category policing. Capabilities are visible enough to threaten identity, but implementation outcomes are still mixed enough to support denial.

Now we need to separate two things that often get collapsed together: valid skepticism and identity defense.

[ Valid skepticism vs identity defense ]
------------------------------------------------------------

Some AI criticism is correct and necessary. Models hallucinate. AI-generated code can create subtle defects. Reliability, attribution, and verification matter. None of that is in dispute. The pattern this essay names is narrower.
It shows up when the argument changes shape mainly to preserve identity status rather than to improve the quality of standards. Here is a practical test.

| What the argument says | What changes after counterexamples appear | What may actually be getting protected |
| --- | --- | --- |
| "AI cannot do meaningful coding" | "That output does not count as real coding" | Craft identity, gatekeeping authority, compensation logic |
| "AI art is not real art" | "Real art requires effort, suffering, or manual craft" | Authorship identity, legitimacy, scarcity value |
| "LLMs are useless because they hallucinate" | "Any useful workflow that uses them still does not count" | Expertise monopoly, existing workflow prestige |
| "This is not intelligence" | "Intelligence now means whatever AI still cannot do" | Human exceptionalism under pressure |
| "That candidate is too broad" | "Real leaders are specialists" (or the reverse) | Hiring legibility, status templates, risk aversion |

A standards debate becomes identity defense when the criteria tighten only after a threatening counterexample appears, and the new criteria track status preservation more than outcome quality.

> Capability abundance does not erase status. It forces status to migrate.

[ The recurring pattern across domains ]
------------------------------------------------------------

> Programming: "Real engineers do not use AI"

This is one of the clearest cases because the status structure is so visible. The strongest critiques are not wrong on the surface. AI-generated code can be sloppy, overconfident, and context-blind. It can produce convincing garbage. Experienced engineers are often right to be skeptical of unreviewed outputs.

But watch what happens when AI-assisted work starts producing acceptable results under senior supervision. The debate often moves from quality to legitimacy. The claim quietly changes from "this is bad" to "this does not count." That is the tell.
The real shift is not that code quality no longer matters. It is that the center of gravity of expertise moves from raw code production toward system framing, decomposition, review, verification, and integration. People whose identity is anchored to visible difficulty often experience that shift as devaluation.

Tool use has always been part of engineering. We celebrate abstraction until it becomes common. Then we call it cheating.

> Tool use is not the opposite of expertise. In many cases, it is the mechanism through which expertise compounds.

The uncomfortable implication is that some prestige in software was attached not only to correctness, but to the scarcity of being able to produce syntax and structure at speed. When tools lower that scarcity, status has to move somewhere else. Review quality, judgment, architecture, reliability, and problem framing become more valuable. People who adapt feel leverage. People who do not often defend the old definition.

> Art and creativity: "It is not real art"

The art version is more emotionally intense because identity is usually more personal. Arguments here often revolve around authorship, authenticity, effort, and a moralized version of legitimacy. Some of these concerns are valid. Credit, consent, training data provenance, and compensation are real issues. But identity defense enters when the definition of art itself is narrowed only after hybrid workflows produce work the audience responds to.

The pattern is familiar across media history. Photography was once framed as mechanical and therefore inferior. Synthesizers were framed as cheating. Digital tools were framed as less authentic than analog processes. The boundary moved, then stabilized, then history forgot the earlier purity test.

AI intensifies this because it compresses visible effort. Many people use effort as a proxy for value.
When the effort becomes less visible, the value feels illegitimate even if the end result is useful, moving, or commercially relevant. This is one reason AI debates in creative fields often feel moral before they feel technical. They are often arguments about who gets to retain the identity of "artist" when production mechanics change.

> Research and knowledge work: "If it hallucinates, it is useless"

Knowledge work produces a subtler version of the same pattern because the outputs are harder to inspect at a glance. A common move is to point out a real failure mode, such as hallucination, and then use that failure mode as a totalizing category judgment. Because LLMs can be wrong, they are framed as useless. But in practice, many professionals use them as drafting, synthesis, brainstorming, translation, and scaffolding tools where human verification remains central. That is not replacement. It is workflow reconfiguration.

The identity threat appears when the social signal of being able to produce a first-pass synthesis, outline, or summary quickly gets weakened. If part of your prestige came from owning that stage of work, AI does not need to replace your whole job to trigger defensive redefinition. It only needs to compress a visible segment of your value chain.

At this point, the pattern is visible: the sharper the identity attachment to a subtask, the stronger the pressure to redefine the category when that subtask gets easier.

> AGI debates and intelligence goalpost drift

AGI discourse makes the pattern even easier to see because the disputed category is intelligence itself. Historically, machine success in domains like chess and Go was often reclassified as "narrow" after the fact. That can be a legitimate technical distinction. But the broader public pattern often goes further: intelligence becomes defined as whatever machines cannot yet do. Then language competence appears, and it is "just statistics."
Multimodal systems improve, and the line moves to grounding. Agents improve, and the line moves to autonomy, then self-modeling, then embodiment, then something else. Some of these refinements are analytically useful. Some are retrospective category defense. The problem is not that definitions evolve. The problem is when definitions evolve only to preserve a threatened status claim.

[ The symmetry problem: hype and skepticism both do this ]
------------------------------------------------------------

If you want to see this clearly, you have to admit the pattern is symmetrical.

Skeptics use No True Scotsman reasoning when they redefine "real impact" every time a capability crosses a threshold. But maximalists do something similar when they redefine "real expertise" downward to claim that domain knowledge no longer matters. On one side, the move is: "Nothing meaningful has changed." On the other side, the move is: "Everything meaningful has changed."

Both positions protect identity. The skeptic may be protecting status and existing labor value. The maximalist may be protecting a new identity as early adopter, prophet, or winner of a new hierarchy. In both cases, inconvenient reality gets excluded. AI tools can be useful and unreliable. They can change labor markets without replacing all workers. They can compress some skills while increasing the value of others.

> Most bad AI discourse is not wrong because it has a side. It is wrong because it protects a side by excluding reality.

[ The hiring market version: legibility over capability ]
------------------------------------------------------------

This is where the pattern becomes expensive. Organizations say they hire for capability. Most of the time, they hire for legibility. If someone fits a familiar template, their competence is interpreted as real. If someone spans categories, their competence is often treated as suspicious.
AI is making this worse and more visible at the same time because it increases the value of hybrid work while many hiring systems still evaluate for stable category identity. A candidate who can orchestrate AI-assisted workflows, review outputs critically, communicate with stakeholders, and ship across functions may create enormous value. But if they do not look like the company's internal template of a "real" engineer, "real" strategist, or "real" operator, their strengths are often reclassified as shallowness.

This is Scotsman logic applied to people. You see it in phrases like "too technical for leadership," "too strategic for engineering," "too broad to be deep," or "overqualified, but not specialized enough for this role." None of these statements are always wrong. Sometimes they are accurate. The problem is how often they function as template defense when capability no longer fits old category boundaries.

Here is what this means in a hiring loop: when capability becomes easier to produce but harder to categorize, institutions often tighten labels instead of improving evaluation.

> Organizations do not just protect standards. They often protect the social map that makes compensation and status feel stable.

[ The psychological core: why smart people do this ]
------------------------------------------------------------

You do not need to assume bad faith to explain most of this behavior. Several well-established mechanisms help explain why people defend category purity under capability shocks.

Cognitive dissonance is the discomfort people feel when reality conflicts with self-concept. If you built your identity around a skill monopoly and a new tool weakens that monopoly, one way to reduce dissonance is to downgrade the category itself. The skill you protected is still valuable, but now the thing the machine did "does not count." Motivated reasoning then does the cleanup work.
You start with the conclusion that preserves identity, and your reasoning backfills the argument. Because the person is often intelligent and articulate, the justification can sound rigorous while still being defensive.

Status threat adds social pressure. In most professions, expertise is not just internal confidence. It is a public rank signal. When rank signals become noisy, people defend definitions more aggressively because definitions stabilize status.

Effort justification is another major force. People tend to value things partly because they were hard to obtain. When AI reduces visible effort, perceived value can fall even if outcome value remains high. This is why some debates become moralized around process, suffering, or difficulty.

The final layer is scarcity collapse. When a capability becomes more abundant, status cannot stay attached to the same bottleneck forever. It has to move. The conflict is often not about whether value exists. It is about who gets to claim it next.

Next, we move from psychology to the economic subtext that people often argue around instead of naming directly.

[ The economic subtext: labor value and bargaining power ]
------------------------------------------------------------

Many No True Scotsman arguments in AI discourse are not only identity defense. They are bargaining behavior. People are often asking economic questions in moral or technical language because those questions feel safer to argue indirectly.

If AI makes me faster, am I worth more because I can produce more, or less because my output is easier to substitute? If a junior worker can now produce work that looks senior at first pass, who captures the surplus? Does the firm pay for judgment, output volume, or category membership?

These are labor-value questions. They are not resolved by proving whether a system is "real intelligence." This is why some arguments become strangely intense around labels. Labels help determine compensation logic.
If AI-assisted output is framed as lesser work by definition, then incumbent status and pay can remain anchored to older categories longer. If it is framed as legitimate work, compensation logic and career ladders have to be renegotiated.

McKinsey's November 5, 2025 survey underscores the instability here: organizations report rising AI use and case-level benefits, but many remain in pilot mode and only a minority report enterprise-level EBIT impact. That creates a political gap between visible capability and unclear surplus capture. In that gap, category fights thrive. No True Scotsman reasoning becomes a cultural immune system partly because it buys time in a labor renegotiation.

[ Why this matters beyond annoying internet debates ]
------------------------------------------------------------

This pattern has real operational consequences. When leaders mistake identity defense for purely technical disagreement, they make poor decisions about adoption, training, hiring, and change management. They over-index on argument content and under-index on what the argument is protecting.

The costs show up in multiple places. Teams delay useful tools for performative reasons. Organizations polarize into pro-AI and anti-AI camps instead of building capability-specific workflows. Hiring markets misclassify strong hybrid operators. Workers experience real distress because a change in task mechanics feels like a referendum on personal worth. Policy discussions can also degrade when public narratives are driven by identity signaling instead of observed capability boundaries.

You do not need culture-war framing to get culture-war dynamics inside a company. You only need status uncertainty plus weak language for discussing identity threat. The practical mistake is assuming resistance is mainly ignorance. Often it is grief, bargaining, or rank defense wearing the language of standards.
[ Common objections ]
------------------------------------------------------------

> "Sometimes the category really should be redefined"

Correct. Categories should be refined when new evidence reveals the old definition was sloppy. The difference is procedural. Legitimate redefinition clarifies criteria in a way that improves prediction and evaluation. Scotsman-style redefinition appears mainly after a threatening counterexample and usually preserves status more than it improves accuracy. A useful test is whether the new definition would have been stated in advance and applied consistently to all cases, including human ones.

> "This reduces real technical concerns to psychology"

It does not. Technical concerns remain technical concerns. The argument here is that many AI debates are multi-layered. A person can have valid reliability concerns and still be defending identity. Those motivations can coexist. Ignoring the psychological layer does not make it disappear. It just makes organizational responses worse.

> "Skepticism is rational because AI is overhyped"

Also correct, often. Skepticism becomes Scotsman behavior only when evidence thresholds move selectively to keep the conclusion fixed. Rational skepticism updates with new capabilities and new failure modes. Identity defense updates mainly to preserve category purity.

[ A healthier response: move from purity to capability framing ]
------------------------------------------------------------

If the goal is better decisions, the right shift is from purity debates to capability framing. Instead of asking whether something counts as "real" intelligence, "real" engineering, or "real" creativity, ask what the system can do reliably, under what constraints, and how that changes the human workflow. That does not lower standards. It makes standards operational.

Use questions like these:

- What can this system now do reliably enough to matter in our workflow?
- Where does it fail, and what verification or guardrails are required?
- Which human skills become more valuable because the bottleneck moved?
- Who captures the productivity surplus, and how do we make that transition fair enough to sustain adoption?

That last question is especially important because it names the economic issue directly instead of forcing it into a category-purity fight.

Here's what this means in practice: if you are leading a team through AI adoption, treat resistance as a combined technical, identity, and incentive problem. Solve only one layer, and the other two will keep recreating the conflict.

[ What to do when this shows up in your team ]
------------------------------------------------------------

Leaders who understand this pattern can reduce internal culture war without pretending every concern is irrational. Start by naming the distinction between standards and status. Make it acceptable to say, "This tool may help, and I am worried about what it means for how my contribution is valued." That one sentence often surfaces the real issue faster than another round of technical debate.

Then restructure evaluation around observed capability and decision quality, not category purity. Reward people for verification, judgment, workflow redesign, and outcome quality, not only for preserving older forms of visible effort.

A useful operating sequence is:

1. Separate reliability objections from identity or role-value concerns.
2. Run bounded trials tied to specific workflows and success criteria.
3. Update role expectations explicitly as bottlenecks move.
4. Revisit compensation, promotion, and hiring signals before resentment hardens.

This is slower than posting a take and faster than organizational denial.

[ What progress is actually exposing ]
------------------------------------------------------------

If you skimmed to this point, here is the shortest version: many AI arguments are not primarily about capability.
They are about whether identity, status, and labor value survive contact with changing capability.

People do not resist technology in the abstract. They resist identity displacement, status compression, and unclear surplus capture. When a capability shifts, communities often redefine what counts as real expertise, intelligence, or innovation. Sometimes that protects standards. Sometimes it protects a hierarchy that no longer maps cleanly to reality. The skill is learning to tell the difference.

Progress does not destroy value. It forces us to decide what value moves to next.

> The strategic advantage is not winning the purity argument. It is seeing which bottleneck moved before your organization starts defending the old one.