axe-core vs WAVE vs Lighthouse: which automated WCAG engine to trust
If you have ever run two automated accessibility scans on the same page and gotten different results, you've discovered why this comparison matters. The three most-used automated engines have different rule sets, different sensitivities, and different blind spots. Knowing which to use for which job saves you weeks of confusion.
The three engines
axe-core (Deque Systems)
Open-source rule engine. It powers the axe DevTools browser extension and the axe-core CLI, and it is integrated into Cypress, Playwright, Pa11y, Storybook, and many other tools. Site Brace's audit pipeline uses axe-core. Recent versions ship around 90 distinct rules covering WCAG 2.0, 2.1, and 2.2.
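All of those integrations wrap the same small API. Here is a minimal sketch of calling the engine directly in a page (or jsdom environment) where axe-core is available; the tag filter and logging are illustrative choices, but the `axe.run()` call and the results shape follow axe-core's documented API.

```ts
// Sketch: running axe-core directly against the current document.
// Assumes the axe-core package (or script) is loaded in a DOM context.
import axe from 'axe-core';

const results = await axe.run(document, {
  // Limit the run to rules tagged for WCAG 2.0/2.1 level A and AA.
  runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'] },
});

for (const violation of results.violations) {
  // Each violation carries a rule id, an impact bucket (critical, serious,
  // moderate, minor), the affected nodes, and a help URL.
  console.log(violation.id, violation.impact, violation.nodes.length, violation.helpUrl);
}
```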
WAVE (WebAIM)
Browser-based engine. The WAVE browser extension (free) is widely used by developers for interactive single-page debugging. The same engine is commercialized by Pope Tech for multi-page enterprise scanning. WAVE has its own rule set and labels findings as "errors," "alerts," "features," "structural elements," "ARIA," and "contrast errors." The taxonomy is different from axe-core's impact buckets.
Lighthouse (Google)
Google's open-source auditing tool, built into Chrome's developer tools. It reports an Accessibility category alongside Performance, Best Practices, SEO, and PWA. Under the hood, the accessibility category runs a subset of axe-core (currently around 50 rules). Lighthouse ships with every Chrome browser via DevTools and also powers PageSpeed Insights.
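Outside DevTools, the same engine can be driven from Node. A hedged sketch, assuming the lighthouse and chrome-launcher npm packages; the URL and the score handling are illustrative:

```ts
// Sketch: running only Lighthouse's accessibility category from Node.
// Assumes `lighthouse` and `chrome-launcher` are installed; the URL is a placeholder.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com', {
  port: chrome.port,
  onlyCategories: ['accessibility'],
});

// Lighthouse reports the category score as 0 to 1; multiply by 100 for the
// familiar DevTools number.
console.log('Accessibility score:', (result?.lhr.categories.accessibility.score ?? 0) * 100);
await chrome.kill();
```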
How much do they overlap?
The Deque team has published comparison data over the years; an oft-cited statistic is that automated tools as a class catch roughly 30 to 57 percent of WCAG issues, depending on the page and the rule set. For a single page, expect:
- axe-core to find a superset of Lighthouse findings, because Lighthouse runs a subset of axe-core's rule library.
- WAVE and axe-core to overlap on most critical findings (color contrast, missing alt text, missing labels) but to disagree on edge cases (some ARIA usages, some heading structure rules).
- None of the three to catch every issue. Form keyboard traps, screen-reader-specific announcement issues, semantic heading order mistakes that look correct in code but read wrong, and most cognitive-accessibility issues require manual review.
Side-by-side comparison
| Feature | axe-core | WAVE | Lighthouse |
|---|---|---|---|
| Open source | Yes (MIT-licensed core, separate commercial axe DevTools Pro) | No (proprietary) | Yes (Apache 2.0) |
| Number of rules (WCAG 2.1 AA) | ~90 | ~70 (different taxonomy) | ~50 (axe-core subset) |
| Output taxonomy | Impact buckets: critical, serious, moderate, minor | Errors / Alerts / Features / Structural / ARIA / Contrast | Pass / Fail with score 0 to 100 |
| Single-page interactive | axe DevTools browser extension | WAVE extension (best-in-class for this) | Chrome DevTools (built in) |
| Multi-page batch | axe CLI, Pa11y, Site Brace, Tenon, etc. | Pope Tech (commercial) | Lighthouse CI (with config) |
| Best for | Programmatic CI integration, comprehensive WCAG rule coverage | Interactive single-page debugging by developers | Quick free scan in a browser tab |
When they disagree, who is right?
This happens often enough to be worth a strategy. The honest answer: they are usually both right, looking at different aspects.
Common cases:
- WAVE flags a structural element that axe-core does not. WAVE has more "structural" notes (heading hierarchy, list semantics, regions); axe-core focuses on testable WCAG success criteria. The WAVE finding is informational; the page may still pass WCAG.
- axe-core flags an ARIA usage that WAVE does not. axe-core's ARIA rule set is more comprehensive. The axe-core finding is usually a real issue.
- Lighthouse gives a 100 score and axe-core finds 5 issues. Lighthouse runs a subset; a 100 score from Lighthouse means "passes the subset," not "fully WCAG conformant." Trust axe-core's broader rule set.
- Color contrast disagreement. Engines compute contrast against the actual rendered pixels. If one engine is rendering the page differently (different fonts loaded, different cascade resolution), the contrast ratio can differ. Re-test with the same browser, same viewport, and same fonts loaded; one way to pin those conditions is sketched below.
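A sketch of that re-test using Playwright with the @axe-core/playwright package, assuming both are installed; the URL, viewport, and rule filter are illustrative choices:

```ts
// Sketch: re-running only axe-core's color-contrast rule under pinned
// rendering conditions. The URL and viewport are placeholders.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('contrast re-check under pinned rendering conditions', async ({ page }) => {
  await page.setViewportSize({ width: 1280, height: 800 }); // same viewport every run
  await page.goto('https://example.com/pricing');

  // Wait for web fonts to finish loading so contrast is computed against
  // the same rendered text on every run.
  await page.evaluate(() => document.fonts.ready.then(() => null));

  const results = await new AxeBuilder({ page })
    .withRules(['color-contrast']) // re-test only the contrast rule
    .analyze();

  expect(results.violations).toEqual([]);
});
```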
Which one should you actually use?
Three answers depending on the job:
For a developer fixing one page right now
The WAVE browser extension. It is free and fast, and its visual overlay (an icon placed next to each issue on the page) is the most efficient way to debug a single page interactively.
For a CI/CD pipeline catching issues on every pull request
axe-core, integrated via Playwright, Cypress, or jest-axe. Rule coverage is broader than Lighthouse's, and the JSON output is well-structured for CI assertions.
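A minimal sketch of the Playwright route, assuming the @playwright/test and @axe-core/playwright packages; the URL, test name, and tag list are illustrative, not prescriptive:

```ts
// Sketch: failing a pull request when axe-core detects WCAG A/AA violations.
// Assumes @playwright/test and @axe-core/playwright are installed; the URL
// is a placeholder for whatever the CI environment serves.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('http://localhost:3000/');

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
    .analyze();

  // results.violations is the structured JSON mentioned above: one entry per
  // failed rule, each listing the affected nodes.
  expect(results.violations).toEqual([]);
});
```

Cypress (via cypress-axe) and jest-axe follow the same basic pattern: run the engine, then assert that the list of violations is empty.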
For a one-time multi-page audit (the question Site Brace exists to answer)
axe-core, run against your top pages. Site Brace audits use axe-core under Playwright; the report you receive is the same engine output a developer would see, packaged with remediation guidance and copy-paste LLM prompts. Start an audit, $149.
For a quick free check from a browser tab
Lighthouse, in Chrome DevTools. Treat the score as directional, not authoritative. A 100 in Lighthouse does not mean conformance; a 70 in Lighthouse is a starting point, not a verdict.
The most important caveat
All three engines are complements to manual review, not substitutes. Automated tools catch roughly half of WCAG issues. If your site is high-stakes (regulated industry, post-lawsuit remediation, complex single-page application with custom widgets), pair the automated audit with a human reviewer. Site Brace audits are automated; for high-stakes situations we recommend pairing our report with an hour or two of consulting from a certified accessibility specialist.