============================================================
nat.io // BLOG POST
============================================================
TITLE: Why Smart People Struggle to Write Simple Things
DATE: March 4, 2026
AUTHOR: Nat Currier
TAGS: Communication, Systems Thinking, Leadership, Decision Science
------------------------------------------------------------

You can hold a whole architecture in your head. You can map dependencies across teams, anticipate where incentives will break, and predict second-order effects before they show up in production. In conversation, you can walk someone through failure modes, tradeoffs, and non-obvious constraints with precision. You are not confused about the idea. You can see it at multiple resolutions.

Then the meeting starts. Someone says, "Can you put this into three bullet points for one slide?" And suddenly your brain stalls.

You are not blank. You are overloaded by accuracy. The moment you compress the idea, you can feel what gets deleted: the assumptions that hold it together, the boundary conditions that make it safe, the feedback loops that make it true, and the failure paths that will matter the second reality deviates from plan.

So you stare at the slide longer than everyone else. You keep rewriting the headline because each version sounds cleaner but less honest. You trim context and the point becomes fragile. You add context and the slide becomes "too dense." What looks like communication friction from outside feels like epistemic conflict from inside.

This tension is common among smart practitioners, especially systems thinkers, engineers, architects, operators, and technical leaders. They are often told they "overcomplicate" ideas when what they are actually doing is protecting causal integrity. That distinction matters. If we treat this as a personality flaw, we train people to speak with confidence before they have preserved truth.
If we treat it as a compression problem, we can design better communication systems that keep fidelity while still enabling speed.

In this post, I will separate systems thinking from summary thinking, explain why compression friction appears in high-capability minds, and offer a practical workflow for translating complex models without destroying meaning.

> **Thesis:** Many highly capable thinkers struggle with simple explanations because they think in systems, not summaries.
> **Why now:** Modern organizations optimize for communication speed, while system complexity in AI, software, and operations keeps increasing.
> **Who should care:** Builders, technical leaders, operators, founders, and anyone translating complex reality into decisions.
> **Bottom line:** The problem is not intelligence. The problem is compression.

[ Systems thinking and summary thinking are different cognitive modes ]
-----------------------------------------------------------------------------

Some minds think in summaries. Other minds think in systems.

Summary thinking is optimized for fast communication under limited attention. It extracts the apparent center of gravity and collapses detail into digestible claims. That makes it useful in meetings, executive updates, and decision forums where time is scarce.

Systems thinking is optimized for truth under uncertainty. It models relationships, not just statements. It keeps track of dependencies, feedback loops, interactions across layers, and failure behavior when assumptions break.

Neither mode is universally better. They are optimized for different outcomes.
| Cognitive mode | Primary unit | Optimization target | Typical failure mode |
| --- | --- | --- | --- |
| Summary thinking | Isolated claim | Speed and clarity | Oversimplification and false confidence |
| Systems thinking | Interdependent relationships | Reliability and truth preservation | Communication drag and over-contextualization |

In most organizations, summary thinking wins by default because it fits the operating cadence. But in complex environments, reliability and sound strategy require systems cognition. That is why many strong thinkers feel chronically misread. They are being judged by a format optimized for one objective while doing work optimized for another.

> Some minds think in summaries. Other minds think in systems.

[ The compression problem ]
------------------------------------------------------------

At this point, it helps to name what communication is doing computationally: it is compressing information. Compression is not optional. Every explanation is a lossy transform from full mental model to constrained channel. A slide compresses more aggressively than a memo. A memo compresses more aggressively than a design document. A design document compresses more aggressively than the actual system.

For many people this feels normal. For systems thinkers, compression often feels dangerous because they can immediately see what was removed. When they compress too early, they can feel the deletions in real time: hidden assumptions that should have been explicit, edge conditions that change the decision, interactions that reverse the expected outcome, failure modes that appear only at scale, and timing dependencies that make a good strategy fail operationally.

From the outside, this can look like "they keep adding complexity." From the inside, it often feels like context restoration. They are not inflating the idea for status. They are trying to put back the relationships that made the compressed version misleading.
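To make the lossy-transform framing concrete, here is a toy sketch of my own (all names and values are hypothetical illustrations, not any real tool or dataset): a "system model" holds headline claims plus the constraints they silently rest on, and a summary keeps only the claims. A decision rule that reads the summary reaches the opposite conclusion from one that reads the full model.

```python
# Toy illustration: summarizing a system model is a lossy transform.
# Every key and value here is hypothetical.

model = {
    "claims": {
        "model_quality_improving": True,
        "latency_trending_down": True,
    },
    # Dependencies the headline claims silently rest on.
    "constraints": {
        "upstream_data_discipline": False,  # not yet in place
        "fallback_behavior_tested": False,
    },
}

def compress(full_model):
    """Keep only the headline claims; drop the constraint structure."""
    return dict(full_model["claims"])

def ready_to_ship(view):
    """A naive decision rule: go if every statement in view is positive."""
    return all(view.values())

summary = compress(model)
full_view = {**model["claims"], **model["constraints"]}

print(ready_to_ship(summary))    # True  -> the summary invites the wrong call
print(ready_to_ship(full_view))  # False -> the constraints reverse it
```

The deletion is not noise; it is exactly the relationship that flips the decision, which is why compressing early feels dangerous to anyone who can still see the dropped keys.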
> When you compress a complex idea too early, you do not clarify it. You erase the relationships that make it true.

So far, this explains the internal friction. Next we need to explain the external pressure.

[ Why business culture rewards oversimplification ]
------------------------------------------------------------

Most modern organizations run on communication latency constraints. Meetings are short. Attention is fragmented. Multi-threaded teams need fast alignment. Executive channels often reward confidence and brevity over precision and uncertainty handling.

That creates a default communication template: one idea, three bullets, one recommendation, next slide. This template is efficient for low-interaction topics. It becomes risky for high-interaction systems where behavior emerges from relationships, constraints, and second-order effects.

A complex decision usually looks less like a slogan and more like interacting system behavior, explicit constraint boundaries, feedback loops, failure paths, and tradeoff regimes. If this is flattened into a headline without boundary conditions, organizations can mistake legibility for validity.

History gives us painful examples. The Columbia Accident Investigation Board documented how risk communication practices, including slide formatting choices, obscured critical uncertainty and degraded decision quality. The issue was not only data availability. The issue was representational form.

| Communication artifact | What it optimizes | What it often hides |
| --- | --- | --- |
| Executive slide | Decision velocity | Constraint interactions, uncertainty structure |
| Status update summary | Narrative coherence | Drift, weak signals, unresolved dependencies |
| One-line strategy slogan | Cross-functional alignment | Boundary conditions and implementation fragility |

That pattern is still alive in AI programs today.
A clean summary can make a system look production-ready before governance, data quality, and failure handling are mature. Simplicity is powerful. But unearned simplicity is dangerous.

[ The curse of seeing the whole system ]
------------------------------------------------------------

Systems thinkers tend to run a fast internal simulation loop whenever they hear a simplified claim. Their mind automatically asks what changes if an input distribution shifts, what fails when we scale by an order of magnitude, which assumptions are still implicit, where the plan depends on ideal behavior, and what happens if the model is wrong rather than merely noisy.

This simulation reflex is useful in engineering, safety, and strategy. It is less useful in communication formats that punish branching context.

Now we get the paradox: the same cognition that reduces downstream failure can increase upstream communication friction. This is why high-capability people can feel "slow" in presentation prep and "fast" in live debugging. In debugging, they are allowed to follow system behavior. In slide prep, they are asked to collapse that behavior into a static artifact before interaction happens.

If they comply too early, they feel inaccurate. If they preserve nuance, they are told they are unclear. That tension is not imaginary. It is structural.

[ Why presentations can feel like intellectual violence ]
---------------------------------------------------------------

For some people, slide work is easy because the format matches their natural cognition. For others, especially system-oriented thinkers, it feels like flattening a three-dimensional object into a shadow.

The format usually demands: Idea -> headline -> minimal support.

But system ideas often require: Context -> interaction -> boundary conditions -> implications.

When those are skipped, the output may be visually clean and operationally wrong. This is why many deep thinkers prefer writing over slides.
Writing supports progressive revelation. You can layer definitions, surface assumptions, branch into edge cases, and then re-synthesize into a decision frame without pretending the middle layer does not exist. Presentations can still be useful, but they work better as a final extraction artifact, not the first container for thought.

[ A recurring real-world scene ]
------------------------------------------------------------

Consider a composite scenario drawn from typical technical leadership environments.

A platform lead is asked to present "AI readiness" to executives in eight slides. The team has real progress: retrieval is improving, failure audits exist, and latency is trending down. But there are unresolved dependencies across data quality, evaluation design, and escalation governance.

The first draft slide says "Model quality improving and deployment on track." It is concise and reassuring. It is also incomplete enough to invite the wrong decision.

The lead rewrites the slide three times because each short version compresses away a critical relationship: model quality is improving in controlled tests, but production reliability depends on upstream data discipline and downstream fallback behavior. Remove that relation and leadership hears confidence where there should be conditional commitment.

The room interprets the delay as "difficulty communicating simply." The lead experiences it as "difficulty communicating honestly under compression constraints." Both perceptions are understandable. Only one addresses the root cause.

At this point, we can move from diagnosis to method.

[ The workaround: delay compression until the system exists ]
-------------------------------------------------------------------

The most effective pattern I have seen is simple: **depth first, compression later**.

Instead of starting with slides, start with full-system articulation. Write the model in long form first.
Capture the core objective and scope boundary, assumptions and non-assumptions, key dependencies and interactions, failure modes with re-open triggers, and decision implications by audience.

Once the full system exists, extract communication layers in order: full technical articulation for builders, structured operating notes for owners, a decision memo for leaders, and finally the slide summary for executive bandwidth.

This sequence preserves truth while still delivering speed where needed. It also makes the summary stage faster because you are extracting from a stable model, not inventing certainty under deadline pressure.

> Systems thinkers expand explanations because they see the edges of the system.

[ Why this matters more in the AI era ]
------------------------------------------------------------

AI systems amplify the cost of hidden assumptions. A compressed narrative like "the model works" can hide decisive realities: dataset drift risk, silent failure modes, policy non-compliance, ambiguous human override paths, and confidence calibration errors that break trust in production.

Oversimplified communication in AI programs often creates predictable failures: misaligned expectations between leadership and delivery teams, incorrect deployment timing relative to risk posture, and trust collapse after avoidable incidents.

Resilient AI operations need people who can see interaction structure, not just benchmark outputs. The minds that struggle with simplistic communication formats are often the same minds that detect fragility before it becomes headline risk. Treating that cognitive profile as a communication defect is an organizational self-own. The real move is to build translation infrastructure around it.

[ A layered communication model organizations can adopt ]
---------------------------------------------------------------

Instead of forcing every concept into one compression level, design multi-resolution communication as a standard operating model.
| Layer | Primary audience | Core question answered | Artifact type |
| --- | --- | --- | --- |
| Layer 1: System integrity | Builders and architects | "How does this actually behave?" | Design memo / architecture narrative |
| Layer 2: Operating control | Operators and managers | "How do we run this safely and consistently?" | Runbook / risk and ownership brief |
| Layer 3: Decision compression | Executives and sponsors | "What should we decide now and why?" | Summary memo / slide deck |

This is how good maps work. You do not use a global political map for street-level navigation. You switch resolution based on the decision task. Communication should do the same.

[ Common objections and sharper answers ]
------------------------------------------------------------

> "If you cannot explain it simply, you do not understand it"

This is partially true and often misused. Inability to simplify can indicate weak understanding. But forced simplification can also remove causal structure that determines whether the idea is true in the real world. The right test is not minimal word count. The right test is whether the chosen level of compression preserves decision-relevant relationships.

> "Executives do not have time for nuance"

Correct, and incomplete. Executives usually do not need all nuance, but they do need the specific constraints that change decision risk. The translation challenge is to preserve those constraints while stripping non-essential detail.

> "Detailed people are just avoiding commitment"

Sometimes yes. But often the opposite is happening. People are trying to prevent false commitment built on missing assumptions. Distinguishing stalling from fidelity protection is a leadership skill.

> "Slides need to be clean, period"

Clean is good. Clean and wrong is expensive. If slide clarity is achieved by deleting core dependencies, the cost appears later as rework, incident response, or strategic reversal.
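One way to picture the three-layer model is a single source of truth with multiple renderers: every audience reads a view derived from the same model, so no layer can silently contradict another. A minimal sketch, with every field name and string invented for illustration:

```python
# Toy sketch of multi-resolution communication: one source model,
# three renderers. All names and contents here are hypothetical.

SYSTEM_MODEL = {
    "objective": "Ship retrieval upgrade",
    "behavior": "Quality improving in controlled tests",
    "constraints": ["upstream data discipline", "fallback behavior untested"],
    "decision": "Conditional go: pilot cohort only",
}

def render_builder(m):
    # Layer 1: full behavior plus every constraint.
    return f"{m['behavior']}; constraints: {', '.join(m['constraints'])}"

def render_operator(m):
    # Layer 2: what must be watched to run this safely.
    return f"Watch: {', '.join(m['constraints'])}"

def render_executive(m):
    # Layer 3: the decision, still carrying its condition.
    return m["decision"]

for render in (render_builder, render_operator, render_executive):
    print(render(SYSTEM_MODEL))
```

The design point is that the executive view is an extraction, not a separate document: if a constraint changes in the model, every layer re-renders from the same state, which is exactly what "compress later, from a stable model" buys you.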
[ What to change this week if this is your pattern ]
------------------------------------------------------------

If you are the person who freezes at "just make it three bullets," do not fight your cognition directly. Design a translation pipeline that uses it.

1. Start with a one-page systems draft before opening slides.
2. Mark non-negotiable constraints and assumptions explicitly.
3. Extract the executive version only after system integrity is documented.
4. Label conditional claims so confidence is not misread as certainty.
5. Keep a "lost in compression" appendix for details that may become critical.

If you lead teams with mixed cognitive styles, protect both speed and truth.

1. Ask for a two-layer output by default: summary plus constraints.
2. Separate "decision now" from "known unknowns" in every review.
3. Reward people for surfacing dependency risk early, not only for clean slides.
4. Use re-open rules when new evidence changes the model.
5. Judge communication quality by downstream decision outcomes, not deck aesthetics.

[ Communication is not reduction, it is resolution control ]
------------------------------------------------------------------

Many people who struggle to simplify are told they are weak communicators. In high-complexity environments, that is often backward.

They may be the ones protecting the idea from premature collapse. They may be the ones noticing that the summary is technically elegant and operationally false. They may be the ones refusing to trade causal integrity for performative clarity.

The goal is not to abandon simplicity. The goal is to earn it. You earn simplicity by modeling the system fully, then compressing with intent, boundary conditions, and audience-specific resolution. That is not overthinking. That is responsible translation.

> The problem is not intelligence. The problem is compression.
[ For builders and leaders: a practical invitation ]
------------------------------------------------------------

If this post describes your team, run one experiment this week. Pick one strategic update that usually becomes a polished but fragile slide. Build it in two passes: first the full system memo, then the executive compression. Compare decision quality, rework, and follow-up confusion two weeks later.

If you want a second set of eyes on where your communication pipeline is losing system truth, send me the structure. I can help you redesign it so speed and fidelity stop fighting each other.

[ The line worth keeping ]
------------------------------------------------------------

Complex ideas do not become useful because they are short. They become useful when they are represented at the right resolution for the decision.