The evaluation methods section of a VPAT should explain how the product was evaluated for accessibility. Cover the evaluation type, the assistive technologies and browsers used, the WCAG version and conformance level, and who performed the work. Evaluation types include manual review, automated scanning, assistive technology compatibility checks, and user evaluation. Together, these details give the reader a clear picture of the rigor behind the conformance claims in the report.
This section is short, but it carries weight. Procurement teams read it to judge whether the ACR was produced through serious evaluation or filled in without proper review.
| Element | What to Include |
|---|---|
| Evaluation type | Manual review against WCAG, automated scans, assistive technology checks, user evaluation |
| Standard applied | WCAG 2.1 AA or WCAG 2.2 AA, plus Section 508 or EN 301 549 if relevant to the edition |
| Assistive technology | Screen readers (NVDA, JAWS, VoiceOver, TalkBack), magnification, voice input |
| Browsers and operating systems | Chrome, Firefox, Safari, Edge paired with Windows, macOS, iOS, Android |
| Evaluator | Internal team, third-party auditor, or both, with credentials noted |
| Date of evaluation | The date or date range the evaluation was conducted |

What is the purpose of the evaluation methods section?
The evaluation methods section tells readers how the conformance claims in the rest of the ACR were reached. Without it, the report is a list of assertions with no backing.
A buyer reviewing two ACRs side by side will look at this section to compare credibility. One report may describe a thorough manual review against WCAG 2.1 AA across multiple assistive technologies. The other may reference an automated scan and nothing else. The conformance claims that follow carry very different weight.
Evaluation Types to List
Most strong evaluation methods sections describe a combination of approaches. Each one contributes something the others cannot.
- **Manual review against WCAG:** An auditor working through each success criterion at the target conformance level. This is the foundation of any credible ACR.
- **Automated scanning:** Tools used to flag issues that scanners can detect. Scans only flag approximately 25% of issues, so this is a supporting input, not a substitute for manual review.
- **Assistive technology compatibility:** Verifying the product works with screen readers, magnifiers, and voice input.
- **User evaluation:** Sessions with people who rely on assistive technology day to day. Optional, but adds significant credibility when included.
Standards and Versions Applied
Name the WCAG version and level the evaluation was conducted against. WCAG 2.1 AA remains the most common standard in the marketplace, though a growing number of buyers now ask for WCAG 2.2 AA.
If the VPAT edition covers Section 508 or EN 301 549, the evaluation methods section should confirm those frameworks were applied during the review. The International edition includes all three standards—WCAG, Section 508, and EN 301 549—so its methods section should address each one.
Assistive Technology, Browsers, and Operating Systems
List the specific combinations used. “Evaluated with screen readers” is too vague to be useful. A credible entry looks like NVDA with Chrome on Windows, VoiceOver with Safari on macOS, VoiceOver with Safari on iOS, TalkBack with Chrome on Android.
This level of detail confirms the auditor accounted for the major assistive technology environments end users actually rely on.
Who Conducted the Evaluation
Name the evaluator. If the work was done internally, say so. If a third-party auditor was engaged, name the company. Independently issued ACRs carry more weight in procurement because the evaluator has no commercial interest in the conformance outcome.
Credentials add useful context. Auditors with backgrounds in accessibility consulting, DHS Trusted Tester certification, or years of WCAG audit experience signal the work was done by qualified hands.
Date of Evaluation
Include the date or range. ACRs do not have a formal expiration, but readers want to know how recent the evaluation is. After significant product changes, the ACR should be updated and the new evaluation date reflected.
Common Mistakes in This Section
The most frequent issue is vagueness. “Evaluated for accessibility” tells a reader nothing. So does “reviewed for WCAG” without naming the version or level.
The second issue is overstating the rigor. If the evaluation was an automated scan and a quick manual sweep, the methods section should reflect that, not describe a full WCAG conformance evaluation that did not happen. Inflated methods sections collapse under procurement scrutiny.
The third issue is omitting the evaluator. Anonymous ACRs raise immediate questions.
Does the evaluation methods section affect whether a buyer accepts the ACR?
Yes. Procurement reviewers often read this section before the conformance tables. A weak or vague methods section can lead the buyer to discount the rest of the report or request a stronger one. A specific, well-documented methods section builds trust in everything that follows.
Should the evaluation methods section reference an accessibility audit?
If the VPAT was completed using audit findings, the methods section should reference the audit, including the standard applied and the date. An ACR backed by a recent WCAG audit is the strongest version of this document. Without an audit, the conformance claims rest on much thinner ground.
How long should this section be?
Long enough to cover the elements above clearly. A few short paragraphs or a structured list works well. The section is meant to be informative, not exhaustive, and a reader should be able to take in the full evaluation approach in under a minute.
A well-written evaluation methods section is one of the cleanest signals that an ACR was produced with care. It sets the tone for the conformance tables that follow and gives buyers a reason to trust what they read.
Contact Accessible.org to request a VPAT backed by a thorough WCAG audit.