Harshit Purwar

How to Track Design System Adoption

A design system is a library of reusable components with defined guidelines that enable product consistency across teams. But launching a design system is only half the battle — you need to know if teams are actually using it. Without systematic tracking, you’re flying blind.

Within the first year of launching our design system, more than 25 projects adopted it. But we had no clear way to measure adoption, identify underused components, or track migration progress from the legacy library. Manual audits across dozens of repositories weren’t sustainable.

We needed automation.

Four Ways to Measure Adoption

Before building anything, we identified four dimensions of tracking:

  1. Quantitative data — component usage frequency, project counts, package versions, and the ratio of design system components vs. legacy ones
  2. Survey-based feedback — user experience assessments, feature satisfaction, and pain points from adopters
  3. Adoption rate monitoring — new user onboarding metrics, project-level adoption percentages, and trend analysis over time
  4. Stakeholder engagement — collecting input from both design and development teams

The first dimension — quantitative data — is the one we automated.

The Automated Pipeline

We built a four-job workflow using GitHub Actions, scheduled as a monthly cron job to batch-process all frontend repositories.

Job 1: Repository Collection

Gather details of all frontend repositories and store them in a matrix output variable. This becomes the list of projects to scan.

Job 2: Asynchronous Cloning

Using GitHub Actions’ matrix strategy, every project is cloned in parallel. The results are uploaded as GitHub artifacts, making the process significantly faster than cloning repositories one by one.
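Jobs 1 and 2 can be sketched as a workflow like the one below. This is a minimal illustration, not our exact setup: the schedule, the org name `my-org`, and the helper script `list-frontend-repos.sh` (assumed to print a JSON array of repository names) are all placeholders.

```yaml
name: design-system-adoption-scan
on:
  schedule:
    - cron: "0 3 1 * *" # monthly: 03:00 on the 1st

jobs:
  collect-repos:
    runs-on: ubuntu-latest
    outputs:
      repos: ${{ steps.list.outputs.repos }}
    steps:
      - uses: actions/checkout@v4
      - id: list
        # Hypothetical script that prints a JSON array, e.g. ["web-app","admin"]
        run: echo "repos=$(./scripts/list-frontend-repos.sh)" >> "$GITHUB_OUTPUT"

  clone:
    needs: collect-repos
    runs-on: ubuntu-latest
    strategy:
      matrix:
        repo: ${{ fromJSON(needs.collect-repos.outputs.repos) }}
    steps:
      - uses: actions/checkout@v4
        with:
          repository: my-org/${{ matrix.repo }} # private repos need a PAT here
      - uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.repo }}
          path: .
```

The matrix fans out one `clone` job per repository, so the wall-clock time stays roughly constant as the project count grows.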

Job 3: Parsing and Report Generation

All artifacts are downloaded into a single folder. react-scanner, an npm package that statically analyzes React codebases, then scans the code to identify which components are used, how often, and in what combinations. The parsed results are recorded in the repository.

Job 4: Database Integration

The parsed data is written, with Prisma as the ORM, to a database that Metabase (an open-source business intelligence platform) connects to. This transforms raw scan data into queryable, visualizable dashboards.
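Before the database write, the scan report has to be flattened into rows. A sketch of that step is below; the report shape (component names keyed to an `instances` count, roughly what react-scanner’s counting processors emit) and the field names are illustrative, not our exact schema.

```javascript
// Flatten a per-project scan report into rows ready for a database insert.
// Example report: { Button: { instances: 12 }, Modal: { instances: 3 } }
function reportToRows(projectName, scannedAt, report) {
  return Object.entries(report).map(([component, { instances }]) => ({
    project: projectName,
    scannedAt,
    component,
    instances,
  }));
}
```

With Prisma, such rows could then be bulk-inserted via something like `prisma.componentUsage.createMany({ data: rows })`, where `componentUsage` is a hypothetical model name.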

Key Metrics and Dashboards

Once the data lands in Metabase, we track four key metrics:

Component Instance Count

A line chart showing the total number of design system component instances across the entire organization over time. This is the clearest indicator of overall adoption growth.

Component Distribution per Project

Charts revealing which components each project uses and how many. This helps identify popular components (validating good API design) and underutilized ones (which may need better documentation or promotion).

Adoption Ratio

A comparison of design system components versus legacy frontend library components. This visualization shows migration progress — how quickly teams are replacing old components with new ones.
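The ratio itself is simple arithmetic: the share of all component instances that come from the design system. A small sketch:

```javascript
// Adoption ratio: design system instances as a share of all instances
// (design system + legacy). Returns a value between 0 and 1.
function adoptionRatio(designSystemInstances, legacyInstances) {
  const total = designSystemInstances + legacyInstances;
  return total === 0 ? 0 : designSystemInstances / total;
}
```

A project at 0.5 is halfway through its migration; tracking this per project and per month shows who is progressing and who has stalled.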

Package Version Tracking

A table recording which version of the design system package is installed on each project. This surfaces outdated implementations and helps prioritize upgrade efforts.
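The installed version can be read straight from each project’s `package.json` during the scan. A minimal sketch, where the package name `@acme/design-system` is a placeholder:

```javascript
// Return the version range of the design system package declared in a
// project's package.json object, or null if it is not installed.
function installedVersion(pkgJson, packageName = "@acme/design-system") {
  // dependencies takes precedence over devDependencies if both declare it
  const deps = { ...pkgJson.devDependencies, ...pkgJson.dependencies };
  return deps[packageName] ?? null;
}
```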

What We Learned

The automated tracking revealed insights we couldn’t have gathered manually.

Why Automate

Manual tracking doesn’t scale. With 25+ projects and growing, an automated pipeline gives you consistent, repeatable measurements without the cost of manual audits.

Key Takeaways

  1. Define your metrics early — know what you want to measure before building the pipeline
  2. Automate the scan — GitHub Actions + react-scanner + a matrix strategy handles parallel repo scanning efficiently
  3. Visualize the data — raw numbers are useless without dashboards that tell a story
  4. Track the ratio — design system vs. legacy component usage is the single most important migration metric
  5. Run it regularly — monthly scans give you trend data that reveals whether adoption is accelerating or stalling
