Analysis of ‘Two-Part Audit’ Combining ‘Manual Testing’ and ‘Automated Scanning’

An accessibility audit is a fully manual evaluation. There is no two-part audit where we combine “manual testing” and “automated testing.”

Scans can only reliably flag 13% of WCAG success criteria, so automated scan results should never be accepted as conclusive. Rather, scans should be viewed as a review layer in the audit process, used to ensure that all issues correctly flagged by a scan are included in the audit report.

In effect, we are not combining scan results and an accessibility audit report; we are conducting an accessibility audit using fully manual evaluation methodologies and then using a scan as a review layer.

If we were able to combine an audit and a scan, then we could effectively copy and paste scan results for some issues and only need to audit for the remaining issues. This is not the case; we must manually evaluate the entire digital asset.

Audit vs. Automated Scan

  • Audit: Evaluates 100% of WCAG success criteria through manual evaluation methodologies using screen readers, keyboard testing, and code inspection.
  • Automated Scan: Only reliably flags 13% of WCAG 2.2 AA success criteria, and all results require manual review.

What is an Accessibility Audit?

An accessibility audit is a formal evaluation of a digital asset conducted by a technical accessibility expert. An audit must be conducted fully manually using multiple evaluation methodologies including:

  • screen reader testing
  • keyboard testing
  • code inspection
  • visual inspection
  • audio inspection
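
To see why these manual methods are necessary, consider a control that most automated scans will pass but that fails the moment a human tests it with a keyboard. The sketch below is hypothetical; the widget and the submitOrder handler are invented for illustration:

```typescript
// A clickable <div> styled as a button. It has a visible label and no
// structural rule violations, so rule-based scanners generally pass it.
const fauxButton = document.createElement('div');
fauxButton.textContent = 'Submit order';
fauxButton.className = 'btn btn-primary';

// Mouse users can activate it...
fauxButton.addEventListener('click', () => submitOrder());

// ...but there is no tabindex, no role="button", and no keydown handler,
// so keyboard and screen reader users cannot reach or activate it.
// Manual keyboard testing flags this under WCAG 2.1.1 (Keyboard); a
// scanner cannot reliably infer the missing behavior from the code.
document.body.appendChild(fauxButton);

// Stand-in for the page's real submit handler.
declare function submitOrder(): void;
```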

What Role Does a Scan Play in an Audit?

As a best practice, it is recommended that an automated scan also be used as a review of the evaluation. The scan should not be used as a primary means of identifying issues; rather, the evaluation is fully manual and the scan serves as a secondary review to ensure that all issues correctly flagged by a scan are included in the audit report.

Thus, we do not combine “manual testing” and “automated testing”; we layer them:

  1. Conduct an accessibility audit using diverse evaluation methodologies
  2. Review audit report with an automated scan

An automated scan is a review layer, not a primary means of identifying accessibility issues.
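
As a rough illustration of what that review layer might look like in practice, here is a minimal sketch using the open-source axe-core engine. The AuditIssue shape and reviewAuditWithScan function are hypothetical, invented for this example; the point is the direction of the check: the scan verifies the report, not the other way around.

```typescript
import axe from 'axe-core';

// Hypothetical shape for issues already logged in the manual audit report.
interface AuditIssue {
  wcagCriterion: string; // e.g. "1.1.1 Non-text Content"
  selector: string;      // CSS selector for the affected element
}

// Step 2 of the sequence above: after the manual audit is complete, run a
// scan and confirm each correctly flagged result is already in the report.
async function reviewAuditWithScan(reportedIssues: AuditIssue[]): Promise<void> {
  const results = await axe.run(document);

  for (const violation of results.violations) {
    for (const node of violation.nodes) {
      const selector = node.target.join(' ');
      const covered = reportedIssues.some((issue) => issue.selector === selector);
      if (!covered) {
        // Not an automatic addition to the report: a human verifies the
        // finding first, because scans produce false positives.
        console.warn(`Verify scan finding "${violation.id}" at ${selector}`);
      }
    }
  }
}
```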

Why Can’t We Combine Automated Scan Results?

Because automated scans are extremely limited and all scan results must be manually reviewed. Scans can only reliably flag 13% of WCAG 2.2 AA success criteria. They can partially flag 45% of success criteria. And they can’t detect 42% of WCAG success criteria at all.

Automated scans are also subject to false positives (where errors are indicated where none exist) and false negatives (where an issue exists, but no error is detected). For example, a scan might flag a color contrast failure on text that is never visible to users (a false positive) while passing an image whose alt text is present but meaningless (a false negative).

Automated scans are limited because they are based on programmed rule sets that can detect certain accessibility issues from code structure. However, because there is much more to WCAG conformance than what can be detected in the code, only a very small percentage of issues can be reliably flagged, and even those issues still need to be manually reviewed.
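
As a concrete (and deliberately oversimplified) illustration of what a rule set can and cannot see, consider alt text. The hasAltAttribute check below is a made-up stand-in for a real scanner rule:

```typescript
// What a rule can verify: the alt attribute exists in the code structure.
function hasAltAttribute(img: HTMLImageElement): boolean {
  return img.hasAttribute('alt');
}

// What no rule set can verify: whether the alt text actually conveys the
// image's meaning (WCAG 1.1.1 Non-text Content). Both images below pass
// the rule above, but only the first passes a manual audit:
//
//   <img src="chart.png" alt="Q3 revenue grew 12% over Q2">
//   <img src="chart.png" alt="chart.png">
//
// Judging the second requires a human who can see the image and read the
// text, which is why even reliably flagged criteria need manual review.
```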

Key Insights

  • Accessibility audits require complete manual evaluation by technical experts using multiple methodologies
  • Automated scans can only reliably detect 13% of WCAG success criteria and cannot replace manual testing
  • Scans serve as a review layer after manual evaluation, not as a primary testing method
  • False positives and false negatives make scan results unreliable without manual verification
  • Proper sequencing places manual evaluation first, followed by scan review for quality assurance
  • Organizations need comprehensive manual audits to identify all accessibility barriers and reduce risk

Frequently Asked Questions

Why isn’t an automated scan enough?

Scans only detect a small fraction of accessibility issues. They reliably flag about 13% of WCAG success criteria, partially detect others, and completely miss the rest. That means most critical barriers—especially those affecting real users—go unreported. A scan by itself is never conclusive.

How does a (manual) audit differ from a scan?

An audit evaluates every aspect of accessibility with multiple evaluation methodologies: screen reader testing, keyboard navigation, code inspection, and visual/audio review. Unlike a scan, it confirms issues in context and captures barriers no machine can detect. A scan may be used as a review layer, but it is not a substitute.

How long does an audit take?

Most accessibility audits take 1-3 weeks to complete.

Are all audits of the same quality?

No. The highest-quality accessibility audits are conducted by technical accessibility experts who are fluent in WCAG and genuinely care about accessibility.

Why can’t we just combine the two and save time?

Because combining implies you can replace part of the manual process with a scan. That isn’t possible. Every scan result still has to be checked manually, and the issues that scans miss still need to be found through full evaluation. In practice, combining doesn’t save time; we still need to audit everything.

If scans are limited, why use them at all?

Scans are helpful as a review layer, ensuring that no issue correctly flagged by a scan is left out of our audit report. Scans are also helpful tools for accessibility practitioners and can add value during the development process, as sketched below.
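
For example, a development team might run a scan on every build so the detectable subset of issues never piles up before an audit. The sketch below assumes a Playwright test setup with the @axe-core/playwright package; the URL is a placeholder:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Catch the scan-detectable subset of WCAG A/AA issues on every build,
// well before the manual audit begins.
test('home page has no scan-detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // limit rules to WCAG A and AA tags
    .analyze();

  // An empty violations array means the scan found nothing, not that the
  // page is accessible: most criteria still require manual evaluation.
  expect(results.violations).toEqual([]);
});
```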

How can we confirm our audit is truly manual?

A genuine audit report will contain detailed information about every single accessibility issue identified in the report. Your report should contain all or most of the information listed below; a sketch of how one issue entry might be structured follows the list.

  • Issue Description: Each accessibility issue is described in detail.
  • Location: Specifies where on the webpage the issue occurs.
  • Page URL: Lists the exact URL where the issue was found for direct access.
  • Environments: Identifies the environments (e.g., Windows, Chrome, NVDA) that produced the issue.
  • Applicable Code: Includes specific code snippet for the issue.
  • WCAG Success Criterion: Names the relevant WCAG success criteria for the issue.
  • Recommendations: Provides detailed remediation steps, often with code examples, to guide fixes.
  • Screenshots: Adds screenshots or screen recordings showing the issue for better understanding.
  • Notes: Includes any additional relevant information, best practices, etc.

In contrast, if your audit report contains more generalized information and looks like “errors” have been copied and pasted, then your auditor has used scan results for your audit report.

Get Started

If you’d like to get started with an accessibility audit for your digital asset, we’d love to help. Contact us and we’ll reply very soon.
