ACR Real Audit vs Scan: How to Tell the Difference

An Accessibility Conformance Report (ACR) built from a real audit will contain detailed, page-specific observations written by a human auditor. An ACR generated from an automated scan will be vague, repetitive, and missing entire categories of accessibility issues. The difference matters because procurement teams, legal reviewers, and product buyers rely on ACRs to verify WCAG conformance. A scan-based ACR misrepresents the accessibility status of a product.

The VPAT is the template. The ACR is the completed document. When someone hands you an ACR, the first question is whether a qualified auditor actually evaluated the product or whether software populated the fields. Here is how to tell.

Key Differences Between Audit-Based and Scan-Based ACRs
Indicator: What it tells you

Issue specificity: Audit-based ACRs reference specific pages, components, and user interactions. Scan-based ACRs use generic language across all criteria.

Evaluation methods: A real ACR lists the assistive technologies used, browser and device combinations, and the auditor's methodology. Scan-based ACRs reference only automated tools.

Criteria coverage: Automated scans flag only approximately 25% of accessibility issues. A scan-based ACR will show "Supports" for criteria that can only be evaluated by a human, which is a red flag.

Remarks quality: Audit-based remarks describe the issue, its location, and often its impact. Scan-based remarks are either empty or copied from tool output.

Issuing party: A credible ACR is issued by an independent accessibility company or auditor with verifiable credentials.

What Does a Credible ACR Look Like?

A credible ACR reflects a thorough evaluation of the digital asset against a specific WCAG version, typically WCAG 2.1 AA or WCAG 2.2 AA. The Evaluation Methods Used section will describe the auditor’s approach, including screen reader evaluation with NVDA, JAWS, or VoiceOver, keyboard-only navigation, and color contrast analysis using verified tools.
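As an aside for readers who want to sanity-check a contrast claim themselves: WCAG 2.x defines contrast ratio with a published formula based on relative luminance, which the verified tools mentioned above implement. A minimal Python sketch of that formula (the 4.5:1 threshold shown is the Level AA minimum for normal-size text):

```python
def srgb_to_linear(c: float) -> float:
    """Convert one sRGB channel (scaled 0-1) to linear light, per WCAG 2.x."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG relative luminance: L = 0.2126*R + 0.7152*G + 0.0722*B."""
    r, g, b = (srgb_to_linear(v / 255) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter luminance first."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Mid-gray #777777 on white lands just under the 4.5:1 AA threshold.
print(contrast_ratio((119, 119, 119), (255, 255, 255)) >= 4.5)  # False
```

This is the one part of contrast evaluation that automation handles reliably; judging whether the right foreground/background pairs were tested still takes a human.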

The remarks column is where the real evidence lives. For each WCAG criterion, the auditor documents what they observed. An audit-based ACR might note that a modal dialog on the checkout page traps keyboard focus, or that data table headers on the reporting dashboard lack programmatic associations. These are observations a scan cannot produce.

Accessible.org ACRs are always built from full manual audits, which means every criterion is evaluated by a human auditor rather than populated from software output.

Red Flags That an ACR Came from a Scan

Several patterns indicate an ACR was generated from automated scan results rather than a genuine audit.

Blanket “Supports” ratings across keyboard and screen reader criteria. Criteria like 2.1.1 Keyboard, 2.4.3 Focus Order, and 4.1.2 Name, Role, Value require a human to interact with the product. If all of these show “Supports” with no remarks, the criteria were never properly evaluated.

Identical remarks across multiple criteria. Scans produce repetitive output. If the remarks for 1.1.1 Non-text Content and 1.3.1 Info and Relationships read nearly the same, that content was likely copied from a scan report.
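If you have an ACR's remarks in machine-readable form, this check is easy to automate. The sketch below assumes a hypothetical dict mapping criterion numbers to remarks text (real ACRs are usually Word or PDF tables, so extraction is up to you):

```python
from collections import defaultdict

def find_duplicate_remarks(remarks: dict[str, str]) -> list[list[str]]:
    """Group criteria whose remarks are identical after normalizing case/whitespace."""
    groups = defaultdict(list)
    for criterion, text in remarks.items():
        key = " ".join(text.lower().split())
        if key:  # ignore empty remarks; those are a separate red flag
            groups[key].append(criterion)
    return [crits for crits in groups.values() if len(crits) > 1]

# Hypothetical excerpt: two criteria sharing verbatim scan output.
acr = {
    "1.1.1": "Images are missing alternative text.",
    "1.3.1": "Images are missing alternative text.",
    "2.1.1": "Modal dialog on checkout traps keyboard focus.",
}
print(find_duplicate_remarks(acr))  # [['1.1.1', '1.3.1']]
```

Duplicated remarks across unrelated criteria, as flagged here, are exactly the pattern that suggests copied tool output rather than per-criterion human evaluation.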

No mention of assistive technology in the evaluation methods. If the ACR does not reference screen readers, keyboard navigation, or mobile accessibility evaluation, there is a strong chance no human auditor was involved.

Missing or vague scope definition. A real ACR clearly defines which pages, screens, or workflows were evaluated. Scan-based ACRs often list the product name without specifying what was actually covered.

Why Does the Distinction Matter for Procurement?

Organizations that purchase software, web apps, or SaaS products increasingly require ACRs as part of procurement. Government agencies evaluating Section 508 conformance, companies subject to EN 301 549 requirements, and educational institutions all depend on accurate ACRs to make informed decisions.

A scan-based ACR creates a false picture. Because scans only flag approximately 25% of issues, a product could have dozens of unidentified accessibility issues that the ACR never mentions. The buyer believes the product conforms. Their users with disabilities experience something different.

For vendors, an inaccurate ACR is also a liability. If a buyer discovers the product does not match what the ACR claims, trust erodes quickly. Procurement teams talk to each other. A misleading ACR can close doors across an entire sector.

How Should You Evaluate an ACR You Receive?

Start with the Evaluation Methods Used section. This is typically on page one or two of the ACR. Look for specific assistive technologies, browser and device combinations, and whether the evaluation was conducted by an identified auditor or company.

Next, read the remarks for criteria that require human judgment. Check 1.3.1 Info and Relationships, 2.4.6 Headings and Labels, and 3.3.2 Labels or Instructions. These criteria involve semantic structure and contextual meaning that scans cannot assess. If the remarks are empty or generic, that is a signal.

Then look at the conformance level ratings. An honest ACR will have a mix of “Supports,” “Partially Supports,” and “Does Not Support” ratings. A product that shows “Supports” for every single criterion is either flawless (rare) or was not thoroughly evaluated (common).
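The ratings-mix heuristic can also be expressed in a few lines. Again assuming a hypothetical criterion-to-rating dict extracted from the ACR:

```python
from collections import Counter

def rating_profile(ratings: dict[str, str]) -> Counter:
    """Tally conformance ratings across all criteria."""
    return Counter(ratings.values())

def all_supports(ratings: dict[str, str]) -> bool:
    """Flag the red-flag pattern: every single criterion rated 'Supports'."""
    return bool(ratings) and set(ratings.values()) == {"Supports"}

acr = {"1.1.1": "Supports", "1.3.1": "Partially Supports", "2.1.1": "Supports"}
print(all_supports(acr))  # False: a mix of ratings, consistent with a real audit
```

A True result here does not prove the ACR is scan-based, but per the reasoning above it warrants asking the vendor for the underlying audit evidence.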

Accessible.org recommends requesting the underlying audit report alongside the ACR. The audit report contains the full detail behind each rating. If the vendor cannot provide one, that tells you something about how the ACR was produced.

Can a Vendor Self-Report an ACR Without an Audit?

Yes, and many do. There is no governing body that certifies ACRs or requires them to be independently produced. Any organization can fill in a VPAT template and publish the resulting ACR. This is exactly why buyers need to know what to look for.

Self-reported ACRs are not inherently inaccurate. A vendor with strong internal accessibility expertise can produce an honest self-assessment. But the risk is higher. Without independent evaluation, confirmation bias and knowledge gaps tend to skew the results toward “Supports.”

The strongest ACRs come from independent accessibility companies that conduct a full manual audit and then populate the VPAT based on their evaluation. This separation between the product team and the auditor produces more accurate documentation. Accessible.org makes it possible to convert audit results into ACR documentation efficiently, keeping the process grounded in real audit data.

What About the VPAT Edition?

The VPAT comes in four editions: WCAG, Section 508, EN 301 549, and INT (International). The WCAG edition is the most common for SaaS companies and commercial products. Section 508 is required for U.S. federal procurement. EN 301 549 maps to European Accessibility Act (EAA) compliance requirements.

The edition itself does not tell you whether the ACR is audit-based or scan-based. But the edition does indicate which standards the product was evaluated against, and you can cross-reference that with the evaluation methods and remarks to assess credibility.

What is the fastest way to verify an ACR is legitimate?

Read the remarks for WCAG criteria 2.1.1 Keyboard and 4.1.2 Name, Role, Value. These require hands-on evaluation with assistive technology. If the remarks are empty, generic, or show “Supports” without explanation, the ACR likely was not produced from a real audit. Then check the Evaluation Methods section for references to screen readers and keyboard navigation.

Should I request the audit report that supports the ACR?

Yes. A credible vendor will have a detailed accessibility audit report behind their ACR. The audit report contains page-by-page observations, severity ratings, and WCAG criterion-level documentation. If the vendor cannot produce one, the ACR was likely generated from a scan or self-assessment without structured evaluation. Accessible.org always delivers the audit report alongside the completed ACR.

How often should a vendor update their ACR?

ACRs do not have a formal expiration date. The best practice is to update the ACR after any significant product changes, such as a major redesign, new feature launch, or platform migration. An ACR from two years ago on a product that ships monthly updates is not a reliable indicator of current WCAG 2.1 AA or WCAG 2.2 AA conformance.

Does a “Supports” rating on every criterion mean the product is fully accessible?

Not necessarily. A wall of “Supports” ratings with no accompanying remarks is a red flag. It could mean the product is genuinely accessible, but it more commonly indicates the ACR was not produced through a thorough evaluation. Real audits almost always identify at least some issues, even in well-built products.

Knowing how to read an ACR is becoming a core procurement skill. The difference between an ACR grounded in a real audit and one generated from scan output is the difference between accurate documentation and a liability.

Contact Accessible.org for audit-based VPAT services and ACR documentation you can trust.


Kris Rivenburgh

I've helped thousands of people around the world with accessibility and compliance. You can learn everything in 1 hour with my book (on Amazon).