If Taiwan's exports are up about 70 percent in January 2026, TSMC is guiding roughly $52 to $56 billion in 2026 capex, and everyone from cloud providers to startups is buying compute as if the future depends on it, should software teams celebrate, or should they brace for impact?
This is the right question in February 2026.
The headlines are loud, and they are not fake. Taiwan's AI-linked acceleration has been extraordinary. Reuters reported Taiwan raising its 2026 growth forecast on AI demand. Bloomberg reported the same expansion wave and tied it directly to semiconductor momentum. TSMC posted major profit growth in 2025, and local property markets around key AI facilities are already reacting as if this is a multiyear supercycle.
At the same time, TSMC Chairman C.C. Wei publicly warned that if customer demand turns out to be unreal, this level of investment could become a disaster for TSMC. That statement matters because it comes from the center of global AI supply, not from a skeptical outsider.
The fast answer is this.
The boom is real, and bubble dynamics are also real.
Those statements can both be true without contradiction.
The real strategic question for software leaders is not whether AI demand exists. It obviously does. The strategic question is whether your product economics still work when demand growth normalizes, procurement slows, and customers stop paying for experimental velocity.
If you are deciding strategy, architecture, or execution priorities in this area right now, this essay is meant to function as an operating guide rather than commentary. It gives founders, operators, and technical leaders a constraint-first decision model they can apply this quarter. By the end, you should be able to identify the dominant constraint, evaluate the common failure pattern that follows from it, and choose one immediate action that improves reliability without slowing meaningful progress. The scope is practical: what to do this quarter, what to avoid, and how to reassess before assumptions harden into expensive habits.
- Key idea: Durable advantage comes from disciplined operating choices tied to real constraints.
- Why it matters now: 2026 conditions reward teams that convert AI narrative into repeatable execution systems.
- Who should care: Founders, operators, product leaders, and engineering teams accountable for measurable outcomes.
- Bottom line: Use explicit decision criteria, then align architecture, governance, and delivery cadence to that model.
Three things this piece will help you pin down:
- The constraint that matters most right now.
- The operating model that avoids predictable drift.
- The next decision checkpoint to schedule.
| Decision layer | What to decide now | Immediate output |
|---|---|---|
| Constraint | Name the single bottleneck that will cap outcomes this quarter. | One-sentence constraint statement |
| Operating model | Define the cadence, ownership, and guardrails that absorb that bottleneck. | 30-90 day execution plan |
| Decision checkpoint | Set the next review date where assumptions are re-tested with evidence. | Calendar checkpoint plus go/no-go criteria |
Direction improves when constraints are explicit.
What this boom actually is
A lot of commentary treats "the AI boom" as one thing. It is not one thing. It is several demand curves stacked together:
1. Hyperscaler infrastructure race.
2. Sovereign and enterprise compute positioning.
3. Corporate fear-driven AI catch-up spending.
4. Genuine workflow automation demand.
The first three curves can produce extraordinary short-term numbers even before the fourth curve is mature.
That distinction matters because software durability comes from curve four, not from curve one.
If your revenue model depends on hyperscaler capex expansion continuing at emergency pace forever, you are not building a software business. You are surfing an infrastructure spending anomaly.
If your product directly reduces cycle time, error rate, or labor intensity inside a workflow customers cannot ignore, you have a chance to outlive the capex spike.
Taiwan is a useful signal source because it sits at the manufacturing and supply intersection. When Taiwan moves this fast, global demand is not imaginary. But Taiwan is also where overbuild risk appears early, because fabrication, packaging, power, and logistics are physical and expensive.
That is why this moment is both exciting and dangerous.
So far, the core tension is clear. The next step is pressure-testing the assumptions that usually break execution.
Why bubble fears are rational, not cynical
Bubble talk often sounds lazy, but dismissing bubble risk in 2026 is equally lazy.
Three mechanisms create legitimate downside risk.
First, speculative demand contamination.
Some customers are buying future optionality, not present necessity. That behavior is rational in a race, but it can reverse suddenly when boards ask for near-term ROI proof. When optionality spending drops, suppliers feel it first, and downstream software valuations adjust fast.
Second, utilization illusion.
Many organizations bought model capacity before they had operational readiness: no stable data contracts, no evaluation discipline, no ownership model for AI incidents. Compute can be fully provisioned while value creation remains low. That gap can persist for quarters, then trigger budget correction.
Third, financing reflex loops.
When asset prices rise around an industry cluster, confidence can feed leverage, and leverage can feed fragile planning assumptions. Real estate jumps near AI hubs are a useful mood indicator, but they are not proof of software cash-flow quality.
Bubble risk is not a moral judgment. It is a cycle diagnosis.
Now we need to move from framing into operating choices and constraint-aware design.
Momentum without control is usually delayed failure.
Why "it is all a bubble" is also wrong
Now the other side.
Calling this entire wave fake ignores structural demand that is unlikely to disappear by 2028.
Enterprise software still has massive unautomated coordination work: triage, reconciliation, exception handling, compliance checks, procurement matching, contract intelligence, multilingual support, and domain retrieval under uncertainty. Those workloads did not vanish after the first LLM hype cycle. They got more visible.
Model quality, tool reliability, and deployment patterns have also improved materially versus early 2023 and 2024 deployments. Agentic workflows remain uneven, but retrieval-grounded and policy-gated systems are already producing durable value in support, operations, and risk-sensitive analysis tasks.
There is also a geopolitical driver that extends beyond pure hype. Nations and major firms do not want strategic dependence on one vendor or one geography for AI-critical compute. That pressure supports continued investment, even when short-term spending intensity cools.
So yes, some budgets are speculative.
No, the underlying transformation is not imaginary.
At this point, the question is less what we believe and more what we can run reliably in production.
The software layer where collapse risk is highest
Not all software categories are exposed equally. Risk concentrates where products are farthest from hard operational outcomes.
High-risk zone one is feature theater.
If the product sells "AI capability" as a label rather than outcome, retention will decay when procurement teams demand proof. Demos convert curiosity. They do not guarantee renewal.
High-risk zone two is token-cost arbitrage with no moat.
Teams that built business models around temporary model-pricing gaps may discover that margin disappears when platform pricing shifts or competitors replicate the same wrapper pattern.
High-risk zone three is labor substitution promises with weak workflow ownership.
The sales story is often "replace X percent of people." The real-world requirement is "own failure modes when the system is wrong in production." Without accountability architecture, savings claims collapse under incident pressure.
Lower-risk zone one is workflow-critical reliability software.
Products that improve decision quality, reduce exception backlog, or lower compliance risk in measurable ways can survive demand normalization because they tie directly to operating metrics.
Lower-risk zone two is integration and governance infrastructure.
As model stacks diversify across open-weight, hosted frontier, and on-prem deployments, organizations need orchestration, auditability, and policy enforcement layers. That need grows when volatility increases.
Lower-risk zone three is domain-specific edge intelligence.
Where latency, data locality, or network constraints are non-negotiable, software tied to physical operations can retain value even if cloud spending enthusiasm slows.
Here's what this means: if decision rules are implicit, execution drift is all but inevitable.
A practical sustainability test for 2026 to 2028
If you lead a software team and want to avoid narrative-driven planning, run this five-part test.
1) Revenue reality test
What percentage of your top line is tied to pilots or innovation budgets versus contractually embedded operational programs?
Pilot-heavy portfolios are fragile in tightening cycles. If you cannot show clear conversion paths from pilot to system-of-record usage, treat projected growth as speculative.
2) Unit economics stress test
Can you sustain gross margin if inference cost does not improve as fast as expected, or if customer usage becomes bursty and unpredictable?
Too many models assume linear cost decline and linear demand growth. Neither is guaranteed.
3) Workflow indispensability test
If your product disappeared for two weeks, what concrete operational pain appears at the customer?
If the honest answer is "mostly inconvenience," you are still in optional tooling territory.
4) Trust and governance test
Can your system explain what it did, why it did it, and what evidence supported it?
In high-consequence environments, trust is not a brand adjective. It is an operational capability.
5) Talent durability test
Are you building a team that can ship reliable systems through changing model paradigms, or a team optimized for one short-lived prompt era?
Sustainable organizations invest in evaluation engineering, data contracts, and domain operations fluency, not only in prompt craftsmanship.
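To make the five tests operational rather than rhetorical, a team could turn them into a quarterly pass/fail scorecard. The sketch below is a minimal illustration: every field name and threshold is a hypothetical assumption, not a prescription, and the real inputs would come from your own finance and operations data.

```python
# Hypothetical sketch: the five durability tests as a quarterly scorecard.
# All field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DurabilityInputs:
    pilot_revenue_share: float      # fraction of revenue from pilots / innovation budgets
    margin_under_stress: float      # gross margin if inference cost stops improving
    outage_pain: str                # "critical", "painful", or "inconvenient"
    evidence_coverage: float        # fraction of decisions with auditable rationale
    eval_engineering_ratio: float   # eval/data-contract engineers vs prompt-only roles

def durability_flags(d: DurabilityInputs) -> list[str]:
    """Return the tests the portfolio currently fails (illustrative thresholds)."""
    flags = []
    if d.pilot_revenue_share > 0.5:
        flags.append("revenue reality: pilot-heavy top line")
    if d.margin_under_stress < 0.6:
        flags.append("unit economics: margin breaks if costs plateau")
    if d.outage_pain == "inconvenient":
        flags.append("indispensability: still optional tooling")
    if d.evidence_coverage < 0.8:
        flags.append("trust: weak auditability for high-consequence actions")
    if d.eval_engineering_ratio < 0.5:
        flags.append("talent: over-indexed on prompt craft")
    return flags

print(durability_flags(DurabilityInputs(0.6, 0.55, "inconvenient", 0.9, 1.0)))
```

The specific cutoffs matter less than the discipline: an empty flag list each quarter is the target, and any flag that persists across two quarters is a roadmap item, not a footnote.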
Scenario map through 2028
Nobody can forecast this cycle with precision, but you can prepare for plausible paths.
Scenario A: Sustained expansion with moderation
Demand remains strong but no longer manic. Capex growth slows from emergency pace to strategic pace. Winners are teams with workflow depth, integration strength, and clear governance posture.
Scenario B: Air pocket and recovery
Procurement pauses after overbuying. Weak products fail. Strong operators survive and gain share as customers consolidate spend toward trusted vendors.
Scenario C: Hard correction
Multiple sectors cut AI budgets sharply after ROI disappointments. Infrastructure overbuild becomes visible, and software multiples compress. Only products tied to critical operations and clear cash savings remain resilient.
A disciplined strategy should work in Scenario A and still survive Scenario C.
That is the standard.
What Taiwan teaches the rest of the software market
Taiwan is not the whole world, but it is a high-resolution mirror for this cycle.
When export growth, fabrication expansion, and global AI demand align, Taiwan captures upside quickly. When demand assumptions wobble, Taiwan also feels pressure quickly because physical supply commitments are heavy.
For software leaders outside Taiwan, that signal is useful. Do not copy the headline enthusiasm. Copy the planning seriousness.
Use this period to lock in durable product value while demand is abundant. Do not confuse abundant demand with permanent demand.
For teams in Taiwan, there is an additional strategic advantage. You are close to the real constraints: packaging lead times, power and transformer limits, equipment turnover risk, and customer forecast volatility. That proximity can produce better software strategy if you use it as information, not anxiety.
What procurement behavior will reveal in 2026 and 2027
Most teams watch macro demand numbers and miss the stronger signal hiding inside procurement behavior. Cycle durability becomes visible first in how buying committees change their questions.
In acceleration phases, procurement conversations usually center on capability breadth. Buyers ask whether you support the newest models, the fastest integrations, or the largest deployment footprint. In normalization phases, procurement shifts from breadth to survivability. Buyers ask what your contract protects, how quickly you can isolate failures, and whether outcomes remain stable when they reduce exploratory spend.
That shift matters because many AI vendors built pitch narratives for the first conversation and never rebuilt their product and commercial posture for the second. When committees start asking for governance commitments, model fallback guarantees, and explicit pricing predictability under usage volatility, companies with thin operational substance are forced into discounting. They may still close deals, but they do so with deteriorating margin and rising support burden.
A practical read of 2026 is that many enterprises are entering mixed mode. They are still willing to fund selected AI expansion, but they are adding finance, risk, and operations stakeholders much earlier in deal cycles. This is the moment where software teams should treat procurement friction as product feedback, not only as sales friction. If objections cluster around explainability, auditability, or failure ownership, the problem is rarely a weak sales deck. The problem is usually that the system does not yet express operational trust in terms customers can buy.
For founders and product leaders, the tactical implication is straightforward. Instrument your pipeline by objection class and by stage. Track which objections appear before technical evaluation versus after legal review versus before final signature. Those distributions tell you whether your durability gap is feature depth, governance depth, or commercial structure. Teams that do this can correct architecture and packaging before a broader demand cooldown exposes the weakness at scale.
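The objection instrumentation described above can start as something very simple. This sketch assumes hypothetical stage and objection-class labels; the point is the shape of the data, not the specific taxonomy.

```python
# Illustrative sketch: instrument the pipeline by objection class and stage.
# Stage names and objection classes below are assumptions for the example.
from collections import Counter

# Each record: (deal_stage, objection_class), captured from sales/CS notes.
objections = [
    ("pre_technical_eval", "explainability"),
    ("pre_technical_eval", "pricing_predictability"),
    ("post_legal_review", "failure_ownership"),
    ("post_legal_review", "explainability"),
    ("pre_signature", "explainability"),
]

by_class = Counter(cls for _, cls in objections)
by_stage = Counter(stage for stage, _ in objections)

# If one class dominates across multiple stages, treat it as product
# feedback (a durability gap), not just sales friction.
dominant_class, dominant_count = by_class.most_common(1)[0]
print(dominant_class, dominant_count)  # explainability 3
```

Even a spreadsheet-grade version of this tells you whether your durability gap is feature depth, governance depth, or commercial structure, which is exactly the distinction the paragraph above asks for.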
Capital discipline for software teams inside hardware-led booms
When a hardware-driven supercycle expands quickly, software teams often inherit optimism assumptions from the surrounding ecosystem. Hiring plans stretch, customer success capacity is backloaded, and roadmap scope expands as if conversion and retention risk are secondary.
In this cycle, that is a dangerous pattern.
Hardware booms can create temporary demand abundance in software adjacent categories, but temporary abundance is exactly when cost structures become fragile. A disciplined operating model in 2026 should include explicit scenario-weighted planning. Build one case where demand remains elevated, one where growth moderates, and one where a procurement pause lasts two to four quarters. Then stress each case against payroll commitments, model inference obligations, and support expectations already embedded in signed contracts.
The goal is not pessimism. The goal is avoiding a forced strategic retreat when market conditions change. Teams that hire and scope against only the top scenario often discover that a moderate slowdown leaves them with too many partially adopted features and too few deeply embedded workflows. They then cut in panic and remove the very reliability and integration investments that would have preserved renewals.
Capital discipline at the software layer is mostly portfolio design. Protect budget for reliability engineering, integration durability, and customer outcome instrumentation even when top-line growth is strong. Treat those as non-discretionary. Make experimental exploration explicit and capped. The companies that emerge stronger from cycle transitions are rarely the ones that spent the least. They are the ones that allocated with a clear boundary between durable core and speculative edge.
Another overlooked lever is contract design. If a significant share of your revenue is tied to annual innovation budgets, use the current demand window to migrate accounts toward multi-year operational programs with adoption milestones. This is not a legal detail. It is cycle insurance. You are converting narrative-sensitive spend into operationally defended spend, and that conversion changes survival odds when sentiment resets.
Architecture decisions that determine cycle resilience
In AI-heavy products, architecture is now a financial decision as much as a technical one. Two products can deliver similar demo outcomes and have radically different resilience when demand normalizes.
Resilient architectures share a few traits. They separate critical-path decisions from optional enrichment. They maintain explicit model routing logic rather than hardwiring one provider assumption. They include degraded operation modes that preserve core workflow continuity under latency or cost pressure. They record evidence chains so humans can verify decisions when trust is questioned.
Fragile architectures tend to maximize peak capability without protecting routine reliability. They route every step through the most expensive model path, assume stable response-time behavior from external dependencies, and blur boundary lines between deterministic system logic and probabilistic model judgment. In growth phases this can feel efficient because everything works "well enough" under generous budgets. Under tighter constraints, failure costs compound quickly.
For product teams in 2026, one practical method is to classify every workflow step by consequence class. High-consequence steps should include deterministic checks, policy gates, and escalation controls even if that slows nominal throughput. Low-consequence steps can use cheaper or faster paths with lighter controls. This consequence-aware architecture usually improves both margin and trust because expensive safeguards are concentrated where they matter, not sprayed across the stack.
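A consequence-aware router can be surprisingly small. The sketch below is an assumption-laden illustration: the path names, the policy check, and the step fields are placeholders, not real provider APIs, but the control flow shows the idea of concentrating deterministic gates on high-consequence steps only.

```python
# Sketch of consequence-aware routing: expensive safeguards concentrate on
# high-consequence steps; cheap paths handle the rest. Names are placeholders.

def policy_gate(step: dict) -> bool:
    """Deterministic check before any high-consequence action (placeholder rule)."""
    return step.get("amount", 0) <= step.get("approval_limit", float("inf"))

def route(step: dict) -> str:
    """Choose an execution path based on the step's consequence class."""
    if step["consequence"] == "high":
        if not policy_gate(step):
            # Deterministic logic overrides model judgment on the critical path.
            return "escalate_to_human"
        return "frontier_model_with_evidence_log"
    # Low-consequence steps take the cheaper, faster path with lighter controls.
    return "small_model_fast_path"

print(route({"consequence": "high", "amount": 50_000, "approval_limit": 10_000}))
# escalates because the deterministic gate fails, regardless of model quality
```

The design choice worth noting is that the gate runs before any model call on the high-consequence path, so throughput cost is only paid where failure cost justifies it.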
A second method is to track model-path utilization by business outcome, not by engineering convenience. Many teams discover that a large share of inference cost sits in steps with weak outcome contribution. Once visible, this creates room to simplify routing and protect gross margin without degrading value. In cycle transitions, that optionality is strategic. Teams with routing flexibility can adapt to cost and latency shifts in weeks. Teams without it are locked into painful tradeoffs between reliability and margin.
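Tracking model-path utilization by outcome can begin with a table this small. The step names, costs, and outcome shares below are invented for illustration; the useful output is the gap between a step's cost share and its measured contribution.

```python
# Sketch: attribute inference cost to business outcomes, not engineering
# convenience. All records and outcome scores below are hypothetical.

# Each entry: workflow step, monthly inference cost, measured outcome
# contribution (e.g., share of error-cost reduction attributed to the step).
steps = [
    {"step": "final_answer_generation", "cost": 40_000, "outcome_share": 0.55},
    {"step": "query_rewriting",         "cost": 25_000, "outcome_share": 0.05},
    {"step": "retrieval_reranking",     "cost": 15_000, "outcome_share": 0.30},
    {"step": "style_polish",            "cost": 20_000, "outcome_share": 0.10},
]

total_cost = sum(s["cost"] for s in steps)
for s in steps:
    cost_share = s["cost"] / total_cost
    # A large positive gap flags a routing-simplification candidate:
    # the step consumes more cost share than outcome share.
    s["gap"] = cost_share - s["outcome_share"]

candidates = sorted(steps, key=lambda s: s["gap"], reverse=True)
print([s["step"] for s in candidates[:2]])
```

In this invented example, query rewriting and style polish surface as the candidates to downgrade to cheaper paths, which is the kind of margin-protecting optionality the paragraph above describes.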
The talent pattern behind durable versus fragile AI products
It is tempting to frame cycle risk as a market problem and miss the organizational cause. In practice, product fragility often reflects team-shape fragility.
Durable teams usually have strong translation layers between domain operators, product owners, and engineering. They build decision contracts before scaling automation. They evaluate systems with scenario-based tests tied to real failure costs. They treat incident analysis as input to roadmap prioritization, not as support cleanup.
Fragile teams often over-index on model experimentation skill while under-investing in workflow ownership and verification discipline. They can prototype quickly but struggle to sustain reliable behavior across changing data, policy, and infrastructure conditions. During boom phases, this gap is masked by enthusiasm and budget tolerance. During normalization, it becomes visible as renewal friction and support escalation.
A practical intervention for 2026 is to redesign performance metrics for AI product teams. Reward measured error-cost reduction, escalation quality, and workflow completion reliability, not only feature throughput or model novelty. These incentives shape architecture choices and operational behavior long before financial pressure arrives.
Another intervention is to institutionalize pre-mortem reviews for major AI launches. Ask what could break if customer demand halves, if model cost rises, if legal review tightens, or if latency budgets shrink. Teams that run these reviews early are not less ambitious. They are less surprised.
If leaders want one simple test, it is this. Can your team explain, in plain language and with evidence, why each major AI workflow remains valuable under conservative spending assumptions? If the answer is vague, the risk is organizational, not merely macroeconomic.
A board-facing scorecard for bubble resilience
Many organizations talk about resilience but do not operationalize it for governance. Board conversations stay at headline level and miss execution reality.
A useful board-facing scorecard should track four families of indicators over rolling quarters. The first family is adoption depth: percentage of customers running AI workflows in business-critical paths versus pilot environments. The second is economic quality: gross margin stability under inference volatility and support-load variation. The third is trust performance: incident severity distribution, mean-time-to-containment, and auditability coverage for high-consequence actions. The fourth is exposure concentration: revenue share tied to discretionary innovation budgets versus operational programs.
These metrics do not predict the macro cycle perfectly, but they detect fragility early enough to act. If adoption depth is shallow while cost exposure is high, growth is likely sentiment-dependent. If trust performance degrades as volume rises, scale is creating hidden liability. If concentration in discretionary budgets remains high late in the cycle, renewal risk is likely understated.
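The detection rules in the paragraph above are mechanical enough to encode directly. This is a hedged sketch: the metric names and thresholds are assumptions, and a real scorecard would calibrate them against your own history.

```python
# Illustrative board-scorecard check for the four indicator families.
# Metric names and thresholds are assumptions for this sketch.

def fragility_signals(q: dict) -> list[str]:
    """Map one quarter of indicators to early-warning signals."""
    signals = []
    if q["critical_path_adoption"] < 0.3 and q["inference_cost_share"] > 0.4:
        signals.append("growth likely sentiment-dependent")
    if q["incident_severity_trend"] > 0 and q["volume_trend"] > 0:
        signals.append("scale creating hidden liability")
    if q["discretionary_revenue_share"] > 0.5:
        signals.append("renewal risk likely understated")
    return signals

quarter = {
    "critical_path_adoption": 0.2,       # customers in business-critical paths
    "inference_cost_share": 0.5,         # inference cost as share of COGS
    "incident_severity_trend": 0.1,      # quarter-over-quarter change
    "volume_trend": 0.3,
    "discretionary_revenue_share": 0.6,  # revenue tied to innovation budgets
}
print(fragility_signals(quarter))
```

A quarter that trips all three rules, as this invented one does, is exactly the situation where additional feature spend has lower durability return than workflow embedding and failure containment.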
A scorecard like this also improves decision quality around capital allocation. Instead of debating "are we bullish or bearish on AI," leadership can ask where additional spend has highest durability return. In many cases, the answer is not more feature surface. It is stronger workflow embedding, better failure containment, and clearer contract alignment with customer operations.
Teams that treat this as governance overhead will miss the point. In volatile cycles, governance clarity is execution speed. It reduces wasted debate, aligns product and finance decisions, and prevents reactive pivots that damage customer trust.
How to convert today's demand into 2028 durability
If this cycle remains positive through 2027, teams still face a strategic trap: scaling output without scaling defensibility. Durability requires converting demand energy into system assets that keep paying off when growth normalizes.
The first asset is workflow ownership depth. For each high-value use case, teams should know where value is created, where failure cost concentrates, and where human escalation is non-negotiable. Ownership depth prevents roadmap drift toward broad feature sprawl with weak adoption.
The second asset is evaluation memory. Organizations that store only metrics but not failure context repeat the same errors under new model versions. Durable teams preserve incident narratives, edge-case traces, and mitigation rationale in reusable form. This improves future rollout quality and reduces relearning cost.
The third asset is commercial memory. Teams should record which contract structures preserved renewals during budget scrutiny and which structures collapsed when exploratory spending tightened. Commercial memory helps sales and finance avoid repeating fragile deal patterns.
The fourth asset is architecture optionality. Products designed with explicit model-routing flexibility, deterministic controls for high-consequence decisions, and clear evidence chains can adapt to pricing, latency, and regulatory shifts without full redesign.
Put differently, long-cycle winners are not merely the fastest builders during expansion. They are the fastest adapters when conditions change. Adaptation speed is built before the shock, not during it.
Consolidation risk and opportunity as the cycle matures
As markets move from acceleration to consolidation, software strategy needs to account for counterparties changing shape. Some customers will consolidate vendors to reduce governance overhead. Some competitors will be acquired for specific capabilities. Some integration partners will disappear or merge.
Teams that prepare for this can turn consolidation into an advantage.
Preparation starts with interface and data portability. If customers rationalize vendor stacks, products that can migrate data cleanly, preserve audit history, and maintain workflow continuity have stronger retention leverage. If migration is painful or opaque, procurement teams may choose alternatives even when product quality is high.
Consolidation also changes partnership strategy. During boom phases, broad partner networks can help rapid coverage. During consolidation, partner quality variance becomes a larger risk. Teams should tighten partner governance and prioritize integration pathways that are maintainable under fewer, stronger relationships.
Another consequence is pricing pressure on undifferentiated layers. As buyers reduce overlap, they demand clearer value attribution. Products with weak outcome linkage often face discount pressure. Products tied to measurable operational impact can hold pricing power better.
Leaders should therefore model consolidation as a baseline scenario, not as an edge case. Ask which capabilities remain critical when customers cut tool count, which integrations remain essential when partner ecosystems contract, and which workflows keep budget when experimentation spend declines.
Companies that answer these questions early are less likely to experience consolidation as a threat and more likely to capture share from weaker, narrative-dependent competitors.
Common Objections
"If demand is this strong, why not maximize growth and worry about sustainability later?"
Because cycle peaks punish overconfident cost structures. Growth is not the enemy. Fragility is the enemy.
Maximize growth where value is proven, and keep flexibility where assumptions are still uncertain. Fast growth with no downside plan is not bold strategy. It is deferred risk recognition.
"Is this just another version of the dot-com warning that misses the upside?"
Only if you misread the argument.
The claim here is not "AI is fake." The claim is that market-level transformation and company-level failure can coexist in the same cycle. The internet transformed everything, and most early internet companies still failed. Both facts are historically true.
"Software should not care about semiconductor cycles that much"
In 2026, this is no longer true for many categories.
Model availability, inference cost, latency behavior, procurement timing, and deployment architecture are all affected by semiconductor realities. Software teams that ignore this coupling end up surprised by constraints their customers already feel.
Evaluate exposure, not headlines
Run a 90-day durability sprint.
Pick your top three AI product workflows and force each through a recession-grade test: conservative demand assumptions, stricter ROI threshold, slower customer decision cycles, and higher trust requirements. If the workflow still clears the bar, double down. If it fails, redesign now while budgets are still open.
At the same time, shift internal metrics away from vanity velocity. Track production adoption depth, error-cost reduction, escalation quality, and renewal-linked outcomes. Those indicators tell you whether your value is infrastructural or fashionable.
If your team is wrestling with this exact transition from hype-aligned roadmap to cycle-resilient roadmap, I am open to advisory conversations focused on operating models, architecture tradeoffs, and go-to-market sequencing.
Clear decision contracts beat role-based debate.
Before closing, run this three-step check this week:
- Name the single constraint that is most likely to break execution in the next 30 days.
- Define one decision trigger that would force redesign instead of narrative justification.
- Schedule a review checkpoint with explicit keep, change, or stop outcomes.
Sustainability depends on infrastructure reality
Taiwan's 2026 surge is not a hallucination. It is one of the clearest signals that AI demand is materially reshaping industrial and software priorities.
The bubble question is still valid because parts of this expansion are speculative, and speculation always creates fragile edges.
The winning software posture is neither denial nor euphoria.
Treat the boom as real, treat cycle risk as real, and build products that remain indispensable when customers stop paying for ambition and start paying only for durable outcomes.
