Design systems don't crash. They fade. The Figma file stops getting updates. The documentation goes stale. The Slack channel goes quiet. Teams quietly go back to designing one-offs because "the system doesn't have what I need." Six months later nobody's maintaining it. A year later nobody remembers what was in it.
The 2026 data finally gives this decay pattern a name. Per zeroheight's 2026 Design Systems Report, buy-in satisfaction for design systems dropped from 42% to just 32% year over year. Gartner's 2025 Hype Cycle moved design systems from the "Peak of Inflated Expectations" into the "Trough of Disillusionment" — the phase where early enthusiasm meets the hard reality of maintenance, adoption, and organizational buy-in. Adoption is what the zeroheight team called "the existential challenge." The resource crisis is accelerating. Teams are doing more with less.
You've probably watched a design system die. This post is about why it happens — grounded in the 2025 and 2026 zeroheight reports, Gartner's analysis, NN/g's DesignOps maturity research, and the seven failure patterns that show up consistently across organizations. It covers the early-warning signals to watch for, a revival playbook if your system is already decaying, and — importantly — the new 2026 AI-era failure modes that didn't exist two years ago. Figma MCP integration gaps, AI-generated code that bypasses the system, and token drift between Figma variables and codebase are killing design systems in new ways that most teams haven't named yet.
One note on a specific number that circulates widely: the "80% of design systems abandoned within two years" figure is cited in dozens of design blogs without a rigorous primary source. I searched and could not verify it. What's rigorously sourced is this: design system buy-in satisfaction fell 42%→32% in a year, adoption is the single most-cited challenge, and Gartner explicitly places the category in the Trough of Disillusionment. That's the evidence. The exact abandonment rate is directional industry consensus, not a peer-reviewed statistic.
TL;DR — Key Takeaways
- Design systems are maturing and struggling at the same time. zeroheight's 2026 report (nearly 300 respondents) shows buy-in satisfaction dropped 42%→32% YoY, while dedicated design system teams grew 5% and design tokens hit 84% adoption. Mass adoption of the foundations; collapse of executive confidence.
- The dominant failure pattern isn't bad components — it's weak ownership, big-bang launches, and broken feedback loops. A design system treated as a project will decay by default.
- Seven early-warning signals: no curator, no feedback loop, no versioning discipline, Figma-code drift, stalled adoption metrics, surprise breaking changes, ghost backlog.
- Three new 2026 failure modes: MCP integration gaps (AI agents bypassing the system), AI-generated code that doesn't pull from the component library, and design token drift between Figma Variables and code.
- The mindset shift that saves systems: treat the design system as a product, not a project. Products have owners, roadmaps, metrics, release cycles. Projects end.
- Revival is possible but expensive: audit what's actually in use, prune aggressively, reclaim named ownership, rebuild trust with small wins.
What the 2026 Data Actually Says
Before the failure modes, the verified state of the category in April 2026. Data from zeroheight's 2026 Design Systems Report and the 2025 edition (survey n≈300 in each year), plus supporting data from Gartner, UX Tools, and Nielsen Norman Group.
The good news:
- Dedicated design system teams are growing: from 72% to 79% of organizations between 2024 and 2025, with another 5% jump in 2026. Even among companies with fewer than 100 employees, 71% have a dedicated team.
- Design tokens went mass-market. 56% adoption in 2024 → 84% in 2025. Tokens are now the default infrastructure for design systems.
- Accessibility concerns dropped sharply — from 46% to 10% year over year in 2026. The industry is genuinely getting better at WCAG discipline.
- Consistency and communication concerns dropped almost 20% YoY. The teams that survive the first few years are getting better at the basics.
The bad news:
- Buy-in satisfaction dropped from 42% to 32% between the 2025 and 2026 zeroheight reports. This is a 10-point collapse in how satisfied design system teams feel about executive/stakeholder buy-in.
- Gartner's 2025 Hype Cycle places design systems in the "Trough of Disillusionment." Early enthusiasm has collided with the reality of ongoing maintenance, adoption friction, and ROI justification.
- Automation is down 8% from 2025. Teams want automation. They cite resource constraints as the main barrier to implementing it.
- 31% of teams are still gatekeeping contributions. 69% encourage open contribution, but the gatekeeping minority creates adoption friction at scale.
- Team sizes top out at 20-25 people — unchanged since 2022, despite systems being three years more mature. Design system teams are being asked to do more with the same headcount.
- Only 64% of teams document UI patterns. Patterns are the most opinionated part of a design system. A third of teams don't document them.
- Only 10% actively use AI for design system work (per the 2025 report). AI adoption at the systems level lags AI adoption in adjacent design work.
The picture is a maturing category with a confidence crisis. Teams know design systems are essential. They're struggling to prove value to stakeholders who want numbers, not vibes.
NN/g's DesignOps Maturity Research (n=557 UX/design practitioners) adds related context: practitioners reported an average of only 7.5 of 34 recommended DesignOps items at their organization, roughly 22% maturity across the category. Design systems are one component of DesignOps, and that low overall maturity effectively caps how well design systems can thrive inside most organizations.
The Core Failure Pattern
Before the specific failure modes, the macro pattern that drives most of them.
Design systems are usually created as projects. A team — often design leadership plus a few senior engineers — gets funded for a quarter to "build a design system." They work on it, ship version 1.0, and then that project ends. The people who built it return to product work. Nobody owns version 1.1.
Products evolve. Projects end. A design system that's treated as a project will be abandoned by default, because there's no ongoing mechanism to keep it alive.
The single most important mindset shift in design systems work is: the system is a product. Products have owners, roadmaps, release cycles, user feedback, and metrics. When your design system doesn't have those, you're staring down the decay curve.
Every specific failure mode below is downstream of this core pattern. When zeroheight's 2026 report says buy-in satisfaction dropped 42%→32%, it's largely because the systems that were launched with project-mindset executive sponsorship two years ago are now in their decay phase, and the executives are noticing.
The Seven Failure Modes That Kill Design Systems
Failure 1: No owner (or ownership by committee)
A design system without a single accountable owner is a design system that will be abandoned. Committees don't maintain products; individuals do. When everyone is responsible, no one is.
The zeroheight 2026 report correlates dedicated design system teams with higher trust, adoption, and team happiness; it's the single strongest correlation in their data. The fix is a named owner, often a design system lead or design ops manager, with dedicated time (per zeroheight's data, 55% of surveyed teams have a full-time owner, 22% part-time, and 24% none) and the authority to decide what goes into the system, what gets deprecated, and what gets prioritized.
Watch for the anti-pattern: a system "owned by the design team" (everyone owns it, no one leads it) or "owned by engineering" (maintained at the component-library level with no design governance).
Failure 2: Big-bang launch followed by neglect
The "we'll spend a quarter building this and then ship it to everyone" pattern sounds reasonable but almost never works. Big-bang launches create a massive adoption cliff — teams are expected to migrate overnight. The system inevitably has gaps. Teams hit friction, form negative opinions, and either abandon the system or route around it. Meanwhile, the system team is drowning in support requests and can't iterate fast enough. Trust erodes. Adoption plateaus.
This is the pattern zeroheight's drop in buy-in satisfaction largely tracks. Systems launched in 2023 with big-bang energy are now in their 2026 decay phase.
The fix is incremental launch. Start with the five or ten most-used components. Get them rock-solid. Launch with one team, iterate based on their feedback, then expand. It feels slower but the adoption curve is dramatically better.
Failure 3: Built in isolation from real product work
Design systems designed in a vacuum — by a team that isn't shipping products themselves — are usually wrong about what product teams actually need. The components don't match the patterns product teams are building. The documentation answers questions nobody asked. The API is awkward in real use.
This is specifically what zeroheight's 2026 report points to when it says "adoption remains the existential challenge." Teams aren't adopting the system because the system wasn't built with them in mind.
The fix is co-location with product work. The people building the system should also be shipping products using the system. If that's not possible structurally, at minimum the system team should have regular user research sessions with the product teams that will consume the system, and those insights should drive the roadmap.
Failure 4: No feedback loop
Feedback loops are the difference between a design system that adapts and one that decays. Without a clear way for product teams to report bugs, request components, and see their feedback acted on, the system feels like a top-down mandate instead of a shared resource.
Communication is one of the top concerns in zeroheight's 2026 report across almost all disciplines. The industry knows communication is the problem. Teams still deprioritize it when resource-constrained, which accelerates decay.
The fix is treating feedback like customer support. Acknowledge issues promptly. Triage them visibly (public backlog, not private). Communicate what's being prioritized. Close the loop — when a fix ships, tell the person who reported the issue. People will tolerate a system with limitations if they feel heard. They'll abandon a system that ignores them.
Failure 5: Breaking changes without migration paths
A team updates to the latest version of the design system. Their build breaks. They dig through the changelog and find that a prop was renamed, or a component was deprecated, or the entire API changed. Nobody warned them. Nobody helped them migrate.
After one or two incidents of this, that team stops updating. They pin to an old version forever, and the system is effectively dead to them.
The fix is versioning discipline. Semantic versioning. Deprecation notices before breaking changes. Migration guides when APIs change. Codemods where possible. Every breaking change that goes out without support is a trust withdrawal from your adoption bank.
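As a sketch of what "deprecation notices before breaking changes" can look like in component code, here is one common pattern: keep honoring the old prop for a release cycle while warning consumers. The names ("Button", "kind", "variant") are hypothetical, not from any real system.

```typescript
// Hedged sketch: warn once per deprecated prop instead of breaking consumers.
// "Button", "kind", and "variant" are hypothetical names for illustration.
const warned = new Set<string>();

function deprecate(component: string, oldProp: string, newProp: string): void {
  const key = `${component}.${oldProp}`;
  if (warned.has(key)) return; // warn once, not on every render
  warned.add(key);
  console.warn(
    `[design-system] ${component}: "${oldProp}" is deprecated; use "${newProp}". ` +
      `It will be removed in the next major version.`
  );
}

// The component keeps honoring the old prop through the deprecation window.
function resolveVariant(props: { kind?: string; variant?: string }): string {
  if (props.kind !== undefined) {
    deprecate("Button", "kind", "variant");
    return props.kind;
  }
  return props.variant ?? "primary";
}
```

A warn-and-honor window like this, paired with a migration guide, is what lets consuming teams update on their own schedule instead of pinning to an old version forever.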
Failure 6: Documentation that's wrong or missing
Only 64% of teams include UI patterns in their documentation per zeroheight's 2025 report. Patterns are the most opinionated, most valuable part of a design system: guidance on how components go together to solve specific problems. When a third of teams skip documenting them, the system becomes a component library, not a design system, and the unique value fades.
The fix is treating documentation as product. Ship docs alongside components. Write guidance for patterns, not just specs for components. Run accessibility audits of docs the same way you run them for UIs. If it's not documented, it doesn't exist — and more importantly, if the docs are wrong, the system loses credibility with every misuse.
Failure 7: Ghost backlog
A backlog of requested components, feature requests, and bug reports piling up with no visible progress. Teams see their requests sit for months. They stop asking. Then they stop using.
The fix is radical transparency. Public backlog. Quarterly planning documents. Weekly changelog posts. Report what didn't ship and why. Treat the roadmap as a conversation, not a gift.
Early-Warning Signals Your System Is Dying
A checklist for diagnosing a design system before it collapses. If three or more of these apply, you're in Gartner's Trough of Disillusionment and heading for abandonment.
1. No single name answers "who owns this?" If asking about the design system's direction leads to "the design team" or "engineering" or silence, ownership is effectively null.
2. Component usage metrics are flat or declining. Track actual production adoption (not Figma file views). If new UI isn't pulling from the system, new UI isn't using the system.
3. The system's Slack channel has <1 message per day. Healthy systems have constant low-level chatter about usage questions, feature requests, and bug reports. Silent channels mean the system is invisible.
4. Product teams are building duplicate components. When a button, card, or input shows up in three product teams' code without coming from the system, your system has failed at coverage or communication.
5. The last Figma file update was more than 30 days ago. Live systems get touched constantly. Dead systems don't get touched at all.
6. Documentation is out of sync with the component library. If a prop in code doesn't appear in docs, or vice versa, trust is already compromised.
7. Breaking changes ship without migration guides. This is the single fastest way to lose a team.
8. Design and code tokens have drifted. A color value in Figma Variables doesn't match the CSS custom property in code. Any drift is a sign of broken governance.
The Revival Playbook
If your system shows three or more of the warning signals above, it needs revival. This is expensive, takes 6-12 months minimum, and requires organizational commitment. But it works.
Step 1: Audit what's actually in use
Instrument your system to see which components are used in production and which aren't. Most systems have a long tail of components nobody's used in months. These are dead weight — they drain maintenance capacity without providing value. The first step of revival is honest measurement of current state.
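One low-tech way to start that measurement is to scan source files for imports from the system's package. This sketch assumes imports follow the common `import { X } from "<package>"` shape; "@acme/ds" is a placeholder package name, not a real library.

```typescript
// Hedged sketch: count which design-system components each source file imports.
// Assumes named-import syntax and a package name with no regex special
// characters beyond "@" and "/". "@acme/ds" is a placeholder.
function countSystemImports(
  sources: string[],
  systemPackage: string
): Map<string, number> {
  const counts = new Map<string, number>();
  const pattern = new RegExp(
    `import\\s*\\{([^}]+)\\}\\s*from\\s*["']${systemPackage}["']`,
    "g"
  );
  for (const src of sources) {
    for (const match of src.matchAll(pattern)) {
      for (const name of match[1].split(",")) {
        // Handle "Button as DSButton" aliases; skip empty trailing commas.
        const component = name.trim().split(/\s+as\s+/)[0];
        if (!component) continue;
        counts.set(component, (counts.get(component) ?? 0) + 1);
      }
    }
  }
  return counts;
}
```

Even a crude count like this, run over the repo weekly, separates the components doing real work from the long tail that nobody imports. Bundler stats or telemetry give better data later; this gives you data today.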
Step 2: Prune aggressively
Deprecate anything that isn't being used. Reduce your component library to the 20-30 components that do 80% of the work. "Fewer, better components" is the 2026 pattern; zeroheight's report on contributions suggests most teams are drowning in sprawl, not starved for variety. Pruning creates space to rebuild trust.
Step 3: Reclaim named ownership
Pick an owner. Give them time (50%+ of their role minimum). Give them authority. Announce the change publicly. If your organization can't fund ownership, the system can't be saved — which is also a signal worth surfacing to leadership. Per zeroheight's data, organizations with dedicated teams see higher trust and adoption; this correlation isn't coincidence.
Step 4: Rebuild the feedback loop
Create a single visible channel for feedback (Linear issues, a dedicated Slack channel, whatever works). Commit to responding to every new request within a week. Do the work publicly. Reporting on a fix is almost as valuable as shipping the fix, because it rebuilds trust.
Step 5: Ship small wins to rebuild trust
Don't announce a "v2 relaunch" — nobody will believe you. Instead, ship small, visible improvements weekly for three months. A new form component with better validation. A dark mode pass. An accessibility audit that ships real fixes. The rhythm matters more than any single release. zeroheight's data on accessibility concerns dropping 46%→10% YoY shows that incremental, sustained accessibility work compounds — use that pattern.
Step 6: Measure adoption and report publicly
Instrument your system. Track which components are used, which teams have adopted, which patterns are most-requested. Publish a monthly update. Visibility creates accountability, and accountability is the forcing function that keeps the system alive past the initial revival.
Design system tools like zeroheight explicitly pitch adoption analytics as the revival lever — for good reason. Teams that measure adoption recover. Teams that don't, don't.
The 2026 AI-Era Failure Modes
Three new failure patterns have emerged specifically in the last 18 months. None existed meaningfully two years ago. Each one kills systems faster than any traditional failure mode, because AI tools generate plausible-looking UI that's subtly inconsistent with your system.
Failure Mode 1: MCP integration gaps
Figma's Dev Mode MCP server, launched in 2025 and expanded in 2026, lets AI coding agents read Figma files directly as design context. This is powerful, but it depends on Code Connect, which maps Figma components to real code components. If your design system isn't configured with Code Connect, AI agents (Cursor, Claude Code, Copilot, Windsurf) don't know what's in your component library; they generate new code from scratch, bypassing your system.
Over weeks and months, you end up with a codebase full of one-off components that don't use your system. Each PR is small. The accumulation kills coverage.
The fix: Set up Code Connect for every critical component. Audit AI-generated PRs for system pulls versus reinventions. This is operational burden that didn't exist before 2025. Claude Design specifically takes the opposite approach — it reads your codebase's design system during onboarding and applies it to generated work. If your organization is adopting Claude Design, your Code Connect investment pays off immediately. If not, the MCP gap is live and growing.
Failure Mode 2: AI-generated code that bypasses the system
Related but distinct from MCP. Engineers use AI coding assistants (Cursor, Claude Code, Copilot, Windsurf) to ship faster. The AI, without system context, generates code that looks fine but doesn't use the component library. Nobody notices because the AI output "works." Six months later, half your new UI doesn't use the system.
This is the most common 2026 drift pattern. It's invisible in the moment and catastrophic at scale.
The fix: AI context documents in your repo that tell the AI what patterns to use. CLAUDE.md files for Claude Code. Cursor Rules for Cursor. Repo-level prompt guides. Linting rules that flag non-system component usage. This is a 2026-native discipline most teams are still figuring out. The zeroheight report notes only 10% of teams actively use AI for design system tasks, which implies that in most organizations, AI is generating code without any design-system guidance.
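A minimal sketch of the linting half, using ESLint's built-in `no-restricted-imports` rule in flat-config form. The package names and paths here are placeholders for whatever your system and its bypass routes actually are.

```typescript
// eslint.config.ts (sketch): flag imports that route around the design system.
// "@mui/material", "components/legacy", and "@acme/ds" are placeholder names.
export default [
  {
    files: ["src/**/*.{ts,tsx}"],
    rules: {
      "no-restricted-imports": [
        "error",
        {
          // Ban specific packages that duplicate system components.
          paths: [
            {
              name: "@mui/material",
              message: "Import from the design system (@acme/ds) instead.",
            },
          ],
          // Ban glob-matched internal paths, e.g. deprecated legacy folders.
          patterns: [
            {
              group: ["**/components/legacy/*"],
              message: "Legacy components are deprecated; use @acme/ds.",
            },
          ],
        },
      ],
    },
  },
];
```

The same rule catches human-authored and AI-authored PRs alike, which is the point: the guardrail has to live in CI, not in a doc the model never reads.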
Failure Mode 3: Design token drift between Figma Variables and code
Figma Variables launched in 2023 and became core to design systems by 2024-2025 (design tokens hit 84% adoption per zeroheight's 2025 data). But many teams maintain Figma variables separately from the CSS custom properties or Tailwind config in code. Over time, they drift. A color value gets updated in Figma but not in code. Or a spacing scale changes in code but not in Figma. Each drift event is small. The accumulation breaks trust in both sources.
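Drift like this is cheap to detect in CI once both sides can be exported as flat name-to-value maps. A sketch, with invented token names for illustration:

```typescript
// Hedged sketch: diff tokens exported from Figma Variables against the tokens
// the codebase ships. Token names and value formats are illustrative only.
type Tokens = Record<string, string>;

function findDrift(figma: Tokens, code: Tokens): string[] {
  const drifted: string[] = [];
  // Union of names from both sides catches missing tokens, not just mismatches.
  const names = new Set([...Object.keys(figma), ...Object.keys(code)]);
  for (const name of names) {
    if (figma[name] !== code[name]) {
      drifted.push(
        `${name}: figma=${figma[name] ?? "missing"} code=${code[name] ?? "missing"}`
      );
    }
  }
  return drifted;
}
```

Run as a CI check, a non-empty result fails the build, so drift gets caught at the PR that introduces it instead of six months later.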
The W3C Design Tokens Community Group published the DTCG 2025.10 format spec that standardizes token structure across tools. Tools like Style Dictionary, Tokens Studio, and Figma's own variable sync are the countermeasure.
The fix: Generate both from a single source. One commit, both platforms update. Manual double-maintenance is not sustainable at any organization size. See our deep dive on color systems that scale for the specific implementation patterns.
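With Style Dictionary, for example, one config can emit both CSS custom properties and a JS token module from a single token source. This is a sketch; the paths are placeholders, and the transform groups and formats should be checked against the Style Dictionary docs for your version.

```typescript
// style-dictionary config (sketch): one token source, two build targets.
// "tokens/**/*.json" and the build paths are placeholder locations.
export default {
  source: ["tokens/**/*.json"], // the single source of truth
  platforms: {
    css: {
      transformGroup: "css",
      buildPath: "build/css/",
      files: [{ destination: "variables.css", format: "css/variables" }],
    },
    js: {
      transformGroup: "js",
      buildPath: "build/js/",
      files: [{ destination: "tokens.js", format: "javascript/es6" }],
    },
  },
};
```

Wire the build into CI so a token change lands in Figma export, CSS, and JS in the same commit; neither designers nor engineers hand-edit the generated outputs.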
Failure Mode 4 (emerging): Generative design tool output that doesn't match the system
When designers use Claude Design, Figma Make, Lovable, or Google Stitch to generate exploratory designs, the output is often inconsistent with the team's design system. Claude Design mitigates this by reading the codebase during onboarding. Figma Make is improving at system adherence. But default behavior across the category still produces generic-looking UI that requires manual reconciliation with the system.
The fix: Use AI design tools that can ingest your design system (Claude Design's codebase extraction, Figma Make's file-scoped generation, Stitch's DESIGN.md import). For tools that don't, treat output as exploration-only and require system-conformance before merging to production.
For more on the tool landscape specifically, see Claude Design vs Figma vs Lovable vs v0. For the broader handoff reality, see Design Handoff in 2026: Dev Mode, MCP, and Why Screenshots in Slack Need to Die.
The Mindset Shift That Saves Systems
All of the fixes above are tactical. The deeper shift is philosophical.
Treat your design system as a product, not a project. Products have owners. They have roadmaps. They have users (your product teams). They measure success. They evolve in response to feedback. They get end-of-lifed only through deliberate decisions, not through neglect.
The design systems that survive five years or more all have this characteristic: someone in the organization treats them as a product they're responsible for, forever. The ones that die all have the opposite — they were a project someone shipped and then moved on from.
If you want your system to outlive you, you need to design succession. Write the documentation you'd want if you were taking over someone else's system. Train a backup owner. Make the system observable and understandable from the outside. These are not system-building activities; they are product stewardship activities, and they're the difference between a system that lives and a system that fades.
The zeroheight 2026 report closes with a question: design systems are essential infrastructure, but can teams secure the resources and executive support to sustain them? The data says 32% feel their buy-in is strong enough. 68% don't. That's the work to do.
Frequently Asked Questions
Why do most design systems fail?
Most design systems fail because they're treated as projects instead of products. A team builds the system, launches it, and then disbands. Without an ongoing owner, maintenance decays. Without maintenance, components go out of sync with product needs. Without relevance, product teams stop using the system. Within 12-24 months, adoption has dropped below the point where the system is worth maintaining. Per zeroheight's 2026 Design Systems Report, buy-in satisfaction dropped from 42% to 32% year over year, and Gartner moved design systems to the "Trough of Disillusionment" on its 2025 Hype Cycle. The root cause is organizational structure, not bad components.
What percentage of design systems actually get abandoned?
There's no rigorous peer-reviewed figure. The "80% abandoned within two years" number circulates widely in design blogs without a traceable primary source. What is rigorously documented in zeroheight's 2025 and 2026 Design Systems Reports (n≈300 in each year): buy-in satisfaction dropped from 42% to 32% year over year, adoption is the "existential challenge," and Gartner's 2025 Hype Cycle places design systems in the Trough of Disillusionment. The exact abandonment rate is directional industry consensus, not a peer-reviewed statistic. Treat "most" (rather than "80%") as the honest answer.
How do I get teams to adopt a design system?
Adoption comes from trust and utility. Teams adopt when the system is clearly better than building from scratch, which requires: components that match real product needs, documentation that's accurate and complete, a responsive team that fixes bugs quickly, clear migration paths when APIs change, and visible commitment from leadership. Per zeroheight's data, organizations with dedicated design system teams see higher trust, adoption, and team happiness. The biggest mistake is mandating adoption without doing the work to make the system genuinely valuable — that creates resentment without changing behavior. Also: measure adoption rigorously. What you can't measure, you can't improve.
What is design system debt?
Design system debt is the accumulated cost of inconsistencies, out-of-date components, missing documentation, and divergence from product reality. Every design system accrues some debt naturally over time. Healthy systems pay it down through regular maintenance. Unhealthy systems let it compound until the debt exceeds the value of the system, at which point teams start building outside the system and the system fades. In 2026, a new form of debt is accumulating fast: drift between Figma Variables and code custom properties, and AI-generated PRs that bypass the component library. Both are invisible in the moment and catastrophic at scale.
How do I measure design system success?
The most important metric is adoption — what percentage of UI components in production come from the system. Other useful metrics: time-to-ship for new features (should decrease as the system matures), consistency audit scores (how much UI across the product uses shared patterns), support ticket volume about UI inconsistencies (should decrease), and direct user feedback from product teams. Vanity metrics like "number of components" or "Figma file page count" are misleading — a system with 50 well-used components is healthier than one with 300 unused ones. Tools like zeroheight explicitly pitch adoption analytics as the core measurement discipline.
Who should own a design system?
Ideally, a dedicated design system team with both design and engineering representation. At minimum, a single named owner with 50%+ of their role dedicated to the system. Per zeroheight's 2025 report, 55% of surveyed teams have a full-time dedicated owner, 22% part-time, and 24% have no dedicated resource. The no-owner category correlates with the lowest adoption and satisfaction. The anti-patterns: ownership by committee (nothing gets decided), ownership by the most senior designer in their spare time (nothing gets done), and ownership by engineering only or design only (the system skews toward whichever side owns it). The best design systems have shared ownership with a clear accountable lead.
What's the biggest mistake design system teams make?
Building in isolation from real product work. When a design system team isn't also shipping products that use the system, they don't have the signal to know what's working and what isn't. The system becomes an artifact of design thinking rather than a tool for product teams. The fix is tight feedback loops — either the system team ships alongside product teams, or they have mandatory weekly touchpoints with product teams that drive the roadmap. The second-biggest mistake (2026-specific): not configuring Code Connect or AI context files, so AI coding tools bypass the system silently.
How are AI tools killing design systems in 2026?
Three main ways. First, MCP integration gaps: AI coding agents that can read Figma files will generate code that bypasses your component library unless Code Connect is configured. Second, AI-generated code outside the system: Cursor, Claude Code, Copilot, Windsurf (now Cognition) generate components from scratch unless guided by repo-level rules and CLAUDE.md-style context files. Third, design token drift: Figma Variables and code tokens get out of sync because manual maintenance doesn't scale. All three are new 2026 patterns; all three are solvable with operational discipline, but most teams haven't named them yet. For more on AI design tools specifically, see Claude Design vs Figma vs Lovable vs v0.
How long does design system revival take?
Six to twelve months minimum for a system in significant decay. Two quarters of audit, pruning, and ownership reclamation before trust starts to rebuild. Visible wins need to ship weekly for three months before adoption metrics move. If organizational commitment is genuine, revival is achievable. If the organization isn't willing to fund dedicated ownership, the system can't be saved — and that's a signal worth surfacing to leadership. Ending a design system deliberately is better than letting it decay indefinitely.
Is my design system worth saving?
Run the early-warning signal checklist above. If you have three or fewer warning signals, investment in tactical fixes is likely worth it. If you have five or more, consider whether revival is better than a deliberate sunset plus a smaller, more focused rebuild. A fresh 20-component system with a dedicated owner often outperforms a revived 300-component legacy system. The honest question: does your organization have the resources to support a design system at the scale you currently have, or would a smaller system serve better?
For tokens specifically (which hit 84% adoption in 2025), see [The Color Systems Guide: How Stripe, Linear, and Radix Scale Color in 2026](https://mantlr.com/blog/color-systems-that-scale). For handoff patterns that keep the system connected to production, see [Design Handoff in 2026: Dev Mode, MCP, and Why Screenshots in Slack Need to Die](https://mantlr.com/blog/design-handoff-2026-dev-mode-mcp). For the AI design tool landscape including how Claude Design extracts design systems from codebases, see [Claude Design vs Figma vs Lovable vs v0](https://mantlr.com/blog/claude-design-vs-figma-lovable-v0). For spacing systems that connect to tokens, see [The Spacing System Cheat Sheet](https://mantlr.com/blog/spacing-system-cheat-sheet).
Browse Mantlr's curated [design system resources](https://mantlr.com/categories/design-systems), [component libraries](https://mantlr.com/categories/component-libraries), and [Figma resources](https://mantlr.com/categories/figma-resources) — all vetted by working designers.
Primary source references (all retrieved April 24, 2026):
- zeroheight Design Systems Report 2026 — nearly 300 respondents, buy-in satisfaction 42%→32% YoY
- zeroheight Design Systems Report 2025 (PDF) — n≈300, design tokens 56%→84%, dedicated teams 72%→79%
- Gartner 2025 Hype Cycle — design systems moved to Trough of Disillusionment
- NN/g DesignOps Maturity Research — n=557 practitioners, 22% DesignOps maturity average
- UX Tools Design Systems Survey — n=2,220 designers, Nov 2024–Jan 2025
- Sparkbox Design Systems Survey 2022
- State of DesignOps 2022 — 444 respondents, 45 countries
- zeroheight: AI in Design Systems — What's Changing in 2026 — Elyse Holladay on MCPs, skills, and design guidelines
- W3C Design Tokens Format Module (DTCG 2025.10)
- Figma Code Connect documentation
Methodology note: Numbers cited in this post are primary-source data retrieved on April 24, 2026, primarily from zeroheight's 2025 and 2026 Design Systems Reports (each survey ~300 respondents) and NN/g's DesignOps research (n=557). The "80% abandoned" figure commonly cited in design blogs was searched extensively; no rigorous peer-reviewed source was found. The honest framing — that most design systems get abandoned, per adoption-satisfaction data, but the exact rate is not rigorously established — is the defensible claim this post stands behind.