Glossary

Definitions that do not suck.

Tight, practical definitions for design systems, governance and adoption.

Term index


The feedback cycle that turns system usage into sustained behavior.
Without an adoption loop, teams revert to custom UI under pressure.
Inconsistencies and gaps that slow delivery and increase rework.
Debt compounds and becomes operational risk.
When token usage diverges from the intended semantic roles.
Breaks theming and erodes consistency.
A defined API and behavior spec for a component.
Contracts reduce interpretation and bugs.
Rules and rituals that keep a system stable and trustworthy.
Without it, systems become snapshots.
A request-for-change template that documents decisions.
Prevents ad hoc changes and rework.
The planned retirement of an API or component.
Avoids breaking teams unexpectedly.
How teams propose, build and ship changes to the system.
Without a model, adoption stalls.
Reusable interaction patterns beyond components.
Stops repeated design decisions.
Minimum a11y behaviors enforced by the system.
Prevents late-stage fixes and rework.
Predictable rhythm for shipping system updates.
Builds trust and reduces surprise.
A short log of the what, why and tradeoffs of a decision.
Makes decisions testable and reversible.
Operational systems that let design scale.
Without it, delivery slows as teams grow.
Docs structured so tools and humans can consume them reliably.
Makes AI assistance accurate and safe.
A map of what exists, what is missing and what is drifting.
Makes scope and priorities visible.
Signals that teams are using the system as intended.
Lets you see if the system is working.
The structure of core, semantic and component tokens.
Separates raw values from intent.
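A minimal sketch of the layering, in TypeScript with hypothetical token names:

```ts
// Core tokens: raw values with no meaning attached.
const core = {
  blue600: "#1d4ed8",
  gray900: "#111827",
  space4: "16px",
} as const;

// Semantic tokens: intent, mapped onto core values.
const semantic = {
  colorActionPrimary: core.blue600,
  colorTextDefault: core.gray900,
  spaceInsetDefault: core.space4,
} as const;

// Component tokens: per-component decisions, mapped onto semantics.
const buttonTokens = {
  buttonBackground: semantic.colorActionPrimary,
  buttonPaddingX: semantic.spaceInsetDefault,
} as const;
```

Product code touches only the semantic and component layers, so core values can change without any component edits.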
Named decisions for visual and behavioral design values.
They are the public API for design.
A list of components in use across the product.
Shows drift and duplication.
The number of decisions a team must make to ship.
High decision load slows teams.
AI workflows that reduce repetitive design tasks safely.
Compresses cycles without losing quality.
Constraints that keep AI outputs aligned with system rules.
Prevents drift and unsafe automation.
The agreed scope, sources and limits of context given to AI tools.
Better context reduces hallucinations and rework.
Rules, review and versioning for prompts used in production.
Prompts are product logic; they need change control.
The curated data set used for retrieval in AI workflows.
Quality of retrieval defines quality of answers.
Service level indicators that describe system health.
Turns adoption and quality into measurable signals.
The acceptable level of UI drift before intervention.
Lets teams move fast without losing consistency.
Automated checks that block invalid token usage.
Prevents raw values from leaking into product code.
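A minimal sketch of such a check, assuming a CI step that scans product source for raw hex colors (a real setup would use a Stylelint or ESLint rule):

```ts
// Flag raw hex colors outside the token definition files.
const RAW_HEX = /#[0-9a-fA-F]{3,8}\b/g;

function lintForRawValues(file: string, source: string): string[] {
  if (file.includes("tokens/")) return []; // token files may hold raw values
  const errors: string[] = [];
  source.split("\n").forEach((line, i) => {
    for (const match of line.matchAll(RAW_HEX)) {
      errors.push(`${file}:${i + 1} raw color "${match[0]}", use a semantic token`);
    }
  });
  return errors;
}

// Failing the build keeps raw values out of product code.
const errors = lintForRawValues("src/Button.css", ".btn { color: #1d4ed8; }");
if (errors.length > 0) throw new Error(errors.join("\n"));
```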
Theme changes driven by semantic tokens, not raw values.
Enables fast rebrands and dark mode without refactors.
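A minimal sketch, assuming themes are objects that remap semantic tokens to different raw values:

```ts
// Components read only semantic custom properties, never raw values.
type SemanticTheme = {
  colorSurface: string;
  colorTextDefault: string;
};

const light: SemanticTheme = { colorSurface: "#ffffff", colorTextDefault: "#111827" };
const dark: SemanticTheme = { colorSurface: "#111827", colorTextDefault: "#f9fafb" };

// Switching themes rewrites the semantic layer; component code is untouched.
function applyTheme(theme: SemanticTheme): void {
  for (const [name, value] of Object.entries(theme)) {
    document.documentElement.style.setProperty(`--${name}`, value);
  }
}

applyTheme(dark); // dark mode without a refactor
```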
Instrumentation that tracks component usage and overrides.
Shows where the system is used and where it leaks.
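A minimal sketch of what such instrumentation could record; the event shape is an assumption:

```ts
// One event per component render, noting whether the consumer overrode it.
type ComponentEvent = {
  component: string;
  surface: string;     // where in the product it rendered
  overridden: boolean; // true when custom styles replaced system styling
};

const events: ComponentEvent[] = []; // a real system batches to an analytics endpoint

function trackUsage(event: ComponentEvent): void {
  events.push(event);
}

// Override rate per component shows where the system leaks.
function overrideRate(component: string): number {
  const all = events.filter((e) => e.component === component);
  return all.length === 0 ? 0 : all.filter((e) => e.overridden).length / all.length;
}
```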
Structured review points where humans approve AI outputs.
Keeps quality and accountability intact.
A structured prompt pack that defines scope and acceptance.
Clear constraints reduce waste and rework.
Scoring criteria to judge AI outputs for quality and compliance.
Turns subjective review into repeatable checks.
A workflow where an AI agent plans steps, calls tools and verifies results.
Autonomy speeds delivery but increases risk without constraints.
Logic that selects the right tool for each subtask.
Wrong tool choice creates errors and wasted cycles.
A controlled way for AI to invoke tools with structured inputs.
Keeps actions deterministic and auditable.
A formal structure for AI outputs that enforces required fields.
Reduces parsing errors and rework.
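A minimal sketch covering this entry and tool calling above: the tool is invoked through a typed envelope, and its output is rejected unless required fields are present. Tool and field names are hypothetical:

```ts
type ToolCall = { tool: "searchDocs"; input: { query: string; limit: number } };
type ToolResult = { status: "ok" | "error"; items: string[]; source: string };

// Structured input keeps the action deterministic and auditable.
function validateCall(call: ToolCall): void {
  if (call.input.query.trim() === "") throw new Error("query must be non-empty");
  if (call.input.limit < 1 || call.input.limit > 20) throw new Error("limit out of range");
}

// Schema enforcement: missing fields fail fast instead of breaking parsers downstream.
function validateResult(raw: unknown): ToolResult {
  const r = raw as Partial<ToolResult>;
  if (r.status !== "ok" && r.status !== "error") throw new Error("missing field: status");
  if (!Array.isArray(r.items) || typeof r.source !== "string") {
    throw new Error("missing fields: items, source");
  }
  return r as ToolResult;
}
```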
Tracking prompt changes like code with history and ownership.
Prompts are logic; they need traceability.
A restricted environment that limits tool side effects.
Prevents destructive or unsafe actions.
A score that estimates whether a tool call will succeed.
Lets agents choose safer paths under uncertainty.
A constraint on how much compute or time an AI can spend.
Controls cost and latency.
The maximum amount of information an AI can consider at once.
Overfill causes truncation and missed details.
How up to date the retrieved content is.
Stale data creates bad decisions.
Percent of answers supported by retrieved sources.
Low coverage increases hallucination risk.
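A minimal sketch of the metric, assuming each answer records its cited sources:

```ts
type Answer = { text: string; citedSources: string[] };

// Share of answers backed by at least one retrieved source.
function groundingCoverage(answers: Answer[]): number {
  if (answers.length === 0) return 1;
  const supported = answers.filter((a) => a.citedSources.length > 0).length;
  return supported / answers.length; // e.g. 0.92 means 92% coverage
}
```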
AI output constrained to retrieved or approved sources.
Protects accuracy and compliance.
The allowed level of unsupported output before failure.
Sets acceptable risk for AI assistance.
A curated set of prompts with owners and use cases.
Stops ad hoc prompt sprawl.
Turning research and requirements into a structured spec.
Makes design and engineering alignment fast.
Using AI to find issues, gaps or inconsistencies.
Catches mistakes before release.
Splitting large tasks into smaller promptable units.
Improves precision and control.
Rules that prevent unsafe or unintended tool actions.
Protects production systems and data.
Automated checks that enforce content or compliance rules.
Prevents risky outputs at scale.
Selecting a prompt based on intent and context.
Improves accuracy and reduces drift.
The ability to trace outputs back to inputs and decisions.
Required for governance and auditability.
Different safety rules for different classes of tasks.
Balances speed and risk.
Content authored to match a known schema from the start.
Prevents reformatting and ambiguity.
A fixed set of tasks used to test AI changes.
Prevents silent regressions.
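A minimal sketch of such a suite, assuming exact-match scoring (real suites use richer scorers) and a recorded baseline pass rate:

```ts
type EvalTask = { id: string; input: string; expected: string };

// Fixed, versioned cases: the same tasks run against every prompt or model change.
const suite: EvalTask[] = [
  { id: "summarize-1", input: "...", expected: "..." },
];

async function runSuite(
  generate: (input: string) => Promise<string>,
  baselinePassRate: number,
): Promise<void> {
  let passed = 0;
  for (const task of suite) {
    const output = await generate(task.input);
    if (output.trim() === task.expected.trim()) passed++;
  }
  const passRate = suite.length === 0 ? 1 : passed / suite.length;
  if (passRate < baselinePassRate) {
    throw new Error(`regression: ${passRate} below baseline ${baselinePassRate}`);
  }
}
```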
A concise brief that defines system scope, goals and non-goals.
Aligns teams on what the system is for.
A prioritized list of system improvements and requests.
Keeps work visible and aligned to outcomes.
A stage model for component readiness and stability.
Sets expectations for adoption and support.
A readiness model for patterns and workflows.
Prevents premature standardization.
Metrics that show system usage, quality and drift.
Lets you manage the system like a product.
Targets for system reliability, support and quality.
Defines what teams can expect.
The degree of alignment between design and production.
Low parity creates QA churn and rework.
Rules about when and how components can be customized.
Protects consistency while allowing flexibility.
Unexpected UI changes detected via snapshots.
Prevents UI drift and accidental changes.
Automated visual diffs of UI states.
Catches unintended changes early.
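A minimal sketch of the gate, hashing rendered output against a stored baseline (real tools diff pixels, but the pass/fail logic is the same):

```ts
import { createHash } from "node:crypto";
import { existsSync, readFileSync, writeFileSync } from "node:fs";

function checkSnapshot(name: string, rendered: Buffer): void {
  const digest = createHash("sha256").update(rendered).digest("hex");
  const baselinePath = `snapshots/${name}.sha256`;

  if (!existsSync(baselinePath)) {
    writeFileSync(baselinePath, digest); // first run records the baseline
    return;
  }
  if (readFileSync(baselinePath, "utf8") !== digest) {
    throw new Error(`visual diff in "${name}": review, then update the baseline`);
  }
}
```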
The rituals, templates and tools that scale reviews.
Keeps quality without slowing delivery.
How components move from proposal to deprecation.
Sets expectations for support and stability.
How tokens are created, used and retired.
Prevents token sprawl and confusion.
The stages for introducing, validating and standardizing patterns.
Prevents early standardization of unproven ideas.
A cross-functional group that approves system decisions.
Ensures decisions reflect product reality.
A rubric for choosing between competing options.
Speeds decisions and reduces conflict.
A dashboard that summarizes system health and impact.
Makes value visible to leadership.
A dashboard showing system usage across products.
Reveals where adoption is blocked.
Operational practices for running a design system.
Keeps the system stable and trustworthy.
The help and enablement provided to teams using the system.
Adoption requires support, not just documentation.
The portion of product UI covered by system assets.
Low coverage signals hidden drift risk.
Clear accountability for component maintenance.
Prevents orphaned components.
Accountability for token definitions and changes.
Prevents token conflicts and drift.
Accountability for pattern quality and maintenance.
Keeps patterns current and usable.
The risk created by inconsistent UI and unclear standards.
Risk shows up as bugs, delays and churn.
Missing a11y behaviors that accumulate over time.
Debt becomes legal and reputational risk.
Automated checks that enforce system rules.
Keeps standards at scale without manual policing.
A time based view of system priorities and outcomes.
Aligns leadership and teams on direction.
The process of teaching teams to use the system.
Reduces misuse and accelerates adoption.
Workshops that build shared system literacy.
Makes adoption faster and more consistent.
A specific area of a product where UI patterns are applied.
Surfaces define adoption priorities.
A model that ranks surfaces by criticality.
Sets adoption requirements by impact.
Where a component originated and why it exists.
Stops redundant variants and clarifies intent.
A record of releases, changes and migrations.
Keeps teams aligned and reduces surprises.
A comparison that highlights token changes across releases.
Prevents unintentional breaking changes.
A report that shows component behavior changes.
Prevents regressions and breakage.
A systematic review of system health and product drift.
Keeps reality visible and actionable.
A focused audit to quantify inconsistency and gaps.
Turns subjective problems into measurable action.
A periodic review of system rules and outcomes.
Keeps governance relevant as the product evolves.
A staged plan to expand system usage across teams.
Adoption requires sequencing and enablement.
Initial setup and education for new teams.
First impressions shape adoption outcomes.
Shared rules for UI behavior, content and accessibility.
Standards remove ambiguity and reduce drift.
Rules for copy, tone and microcopy in the system.
Protects brand voice and clarity.
Usage rules that prevent misuse of components.
Stops components from drifting into bad patterns.
Rules that define how tokens should be applied.
Prevents raw values and ad hoc styling.
Rules for applying patterns consistently.
Prevents pattern drift and confusion.
Long term care for system quality and relevance.
Systems fail without stewardship.
The transfer of system knowledge between teams.
Prevents knowledge loss during org changes.
Planned changes as the product and org scale.
Systems must evolve without breaking teams.
A coordinated move from old patterns to new ones.
Unmanaged migrations stall adoption.
Reusable AI interaction patterns and behaviors.
Prevents inconsistent AI experiences.
A user facing area where AI features appear.
AI surfaces need consistent behavior and safety.
Copy that explains AI behavior and limitations.
Sets user expectations and builds trust.
Signals that show how confident AI outputs are.
Helps users decide when to trust results.
Clear notice that a response is AI generated or assisted.
Required for trust and compliance.
A feedback cycle that improves AI behavior over time.
Prevents model drift and quality decay.
A gradual change in AI outputs as context or models evolve.
Creates inconsistent experiences over time.
A failure where AI output causes a user or business issue.
Incidents demand governance and prevention.
Records of prompts, sources and outputs for compliance.
Required for accountability.
Operational workflows for AI content creation and review.
Keeps AI content consistent and safe.
Docs that explain AI features, limits and safe use.
Reduces confusion and support load.
A set of patterns and components for AI interactions.
Ensures consistent AI UX and safety.
Choosing a model based on task requirements and risk.
Different tasks require different tradeoffs.
Testing model performance on standard tasks.
Validates capability and safety.
Aligning model confidence with actual accuracy.
Prevents overconfidence in outputs.
Operational practices for managing AI models in production.
Prevents outages and regressions.
Constraints that prevent prompts from causing harm.
Prompts can bypass safety if uncontrolled.
Rules about which sources can be used for a task.
Prevents leakage and hallucination.
Testing prompts against expected outcomes.
Prevents prompt regressions.
A drop in output quality after a prompt change.
Uncontrolled changes break workflows.
A measurable indicator of design quality or consistency.
Signals help leadership see value.
A composite view of adoption, quality and stability.
Keeps the system accountable to outcomes.
How clearly the system's value and usage are communicated.
Visibility drives adoption and funding.
A guide that explains how to design and build components.
Standardizes quality and decisions.
A guide for designing and validating patterns.
Prevents ad hoc pattern creation.
A guide for defining and rolling out tokens.
Avoids token sprawl and misuse.
A model that describes operational maturity of design.
Shows what to improve next.
A staged model of system capability and impact.
Aligns investment with outcomes.
A model that describes how safely AI is used in workflows.
Prevents premature automation.
How well system decisions match product reality.
Misalignment leads to rejection and drift.
A short briefing that aligns stakeholders on scope.
Keeps expectations realistic.
Guiding principles for system decisions.
Keeps choices consistent over time.
Keeping AI behavior aligned with product goals and ethics.
Misaligned AI damages trust.
Evaluating potential harms before deploying AI features.
Prevents legal and user harm.
Clear pass criteria for AI outputs.
Makes review consistent and fast.
Policies and processes for AI features and outputs.
Keeps AI safe and consistent.
A checklist for shipping AI features safely.
Reduces risk at launch.
Measures that show AI performance and impact.
Lets teams improve and justify investment.
Instrumentation for AI usage and outcomes.
Reveals usage patterns and failure modes.
Mechanisms for users to rate and correct AI outputs.
Improves accuracy and trust.
A record of AI model or prompt changes.
Keeps teams aware of behavior changes.
The minimum quality level required for AI outputs.
Prevents shipping low quality AI features.
A safe alternative when AI output is uncertain.
Keeps users moving when AI fails.
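A minimal sketch, assuming the AI result carries a confidence score and the threshold is a product decision:

```ts
type AiResult = { text: string; confidence: number }; // confidence in 0..1

const QUALITY_BAR = 0.75; // hypothetical minimum confidence to show AI output

function renderAssist(result: AiResult): string {
  if (result.confidence >= QUALITY_BAR) return result.text;
  // The fallback keeps the user moving instead of showing a weak answer.
  return "We could not generate a confident answer. Try searching instead.";
}
```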
A mode where AI suggests but does not decide.
Reduces risk while still helping users.
A mode where AI acts with minimal user input.
High speed but high risk.
Understanding user intent to select the right action.
Intent errors cause wrong outputs.
Rules that define which sources AI can access.
Protects sensitive data and accuracy.
Rules that control tone, length and structure of AI outputs.
Keeps responses consistent with brand and UX.
A standardized UI block for AI answers with sources.
Creates predictable interaction and trust.
A visual cue that indicates the origin of information.
Transparency improves trust.
A signal that the model is unsure about an answer.
Helps users make safe decisions.
When a user or system replaces an AI output.
Overrides show where AI fails.
Rules for when AI can be used in workflows.
Protects sensitive tasks.
A bounded interaction period with an AI system.
Session data affects context and privacy.
Clearing or reducing context to avoid contamination.
Prevents unwanted carryover.
A recurring pattern of AI mistakes or breakdowns.
Knowing modes helps prevent incidents.
A standardized test for AI performance on tasks.
Enables consistent comparisons.
A matrix of task risk vs automation level.
Determines when to allow autonomy.
A group that oversees AI policies and releases.
Keeps oversight accountable.
Metrics that describe how AI is used in practice.
Highlights adoption and failure points.
The time it takes for AI to respond.
Slow responses reduce adoption.
A limit on spending for AI usage.
Controls operational cost.
An automated test that runs a full AI flow.
Catches integration issues and drift.
Consistency of AI outputs under similar inputs.
Unreliable output breaks trust.
Alignment of AI tone and structure across outputs.
Inconsistent output feels unprofessional.
How understandable AI outputs are to users.
Clarity drives adoption and trust.
Principles that guide responsible AI behavior.
Ethical failures damage trust and brand.
Steps to reduce biased AI outputs.
Bias creates harm and legal risk.
Ensuring AI outputs and UI are accessible to all users.
Accessibility is required, not optional.
Adapting AI outputs to language and cultural context.
Localization affects clarity and trust.
The handoff from AI to a human when needed.
Prevents user frustration in complex cases.
Actions that require extra checks before execution.
Reduces risk from autonomous actions.
Rules that define what AI can access and do.
Prevents data leaks and unsafe actions.
Providing only the minimum context needed for a task.
Reduces leakage and improves focus.
Visibility into AI behavior and outputs in production.
Without observability, issues go unseen.
Ensuring AI objectives match user and business goals.
Misaligned goals create harmful outcomes.
Rules that limit the shape and scope of AI outputs.
Prevents unsafe or verbose responses.
The point in time after which the model lacks training data.
Surfacing it prevents users from trusting outdated info.
UI cues that communicate reliability and transparency.
Trust signals reduce user doubt.
The set of tokens, components and rules exposed to teams.
Defines what teams can rely on.
The shared expectations between system and product teams.
Keeps adoption and support clear.
Key metrics that reflect system impact.
KPIs justify investment and focus.
The measurable benefit created by the system.
Value perception drives funding.
How much UI is built from system components.
Coverage shows adoption strength.
Percent of UI values sourced from tokens.
Low coverage signals drift and hardcoded values.
Percent of workflows using standard patterns.
Shows consistency at the workflow level.
Consistency and stability of system outputs.
Teams rely on the system as infrastructure.
Time from request to delivered system change.
High latency drives teams to bypass the system.
The speed at which system improvements ship.
Velocity builds trust and adoption.
The perceived and measured quality of system assets.
Quality drives usage and reduces rework.
How well system outputs align with product goals.
Misalignment reduces adoption and trust.
How updates and decisions are shared with teams.
Poor communication slows adoption.
Active promotion and support for system adoption.
Advocacy builds trust and momentum.
The group of teams and users engaged with the system.
Community creates shared ownership.
Storytelling that explains system value and usage.
Narrative drives adoption and funding.
Resourcing the system as a product.
Without funding, systems degrade.
Quantitative measures of system impact.
Metrics prove value and guide priorities.
The collection of system assets and patterns.
Portfolio clarity helps adoption and planning.
The boundaries of what the system includes.
Clear scope prevents expectation gaps.
The long term plan for system impact and evolution.
Strategy aligns work with outcomes.
The confidence teams have in system quality and support.
Trust drives adoption and reduces bypassing.
The usability of system documentation and tooling.
Bad UX reduces adoption.
A clear statement of the system's future state.
Vision guides long term decisions.
The end-to-end process for system changes.
Clear workflows reduce delays and confusion.