How Auditors Evaluate Screen Reader Compatibility

During an accessibility audit, auditors evaluate screen reader compatibility by navigating every interactive element, reading order sequence, and content structure with one or more screen readers. This process determines whether assistive technology users can perceive, operate, and understand digital content the way sighted users do.

Screen reader evaluation is one of the most revealing parts of an accessibility audit. Automated scans cannot replicate this process (scans only flag approximately 25% of issues). A human auditor using a screen reader identifies issues that no automated tool can detect, including illogical reading order, missing context, and interactive elements that do not announce their state or purpose.

Screen Reader Evaluation in an Accessibility Audit
Reading Order: Content is announced in a logical, meaningful sequence.
Interactive Controls: Buttons, links, and form fields announce their name, role, and state.
Dynamic Content: Live regions, modals, and updates are communicated without requiring a page refresh.
Images and Media: Alt text is accurate, decorative images are hidden, and media controls are operable.
Navigation: Landmarks, headings, and skip links allow efficient movement through the page.

Which Screen Readers Do Auditors Use?

Most auditors evaluate with NVDA on Windows and VoiceOver on macOS and iOS. These two cover the vast majority of assistive technology usage. Some auditors also evaluate with JAWS, particularly when the audience includes enterprise or government users where JAWS remains common.

The browser pairing matters too: NVDA is typically tested with Firefox and Chrome, and VoiceOver with Safari. Each combination can produce different behavior for the same code, which is why experienced auditors do not rely on a single screen reader and browser pairing.

Accessible.org audits are always fully manual, and screen reader evaluation is built into every engagement. The auditor is not running a checklist against a tool output. They are experiencing the content the way a screen reader user would.

What Does a Screen Reader Evaluation Actually Cover?

The auditor moves through the page the way an assistive technology user would: linearly, using keyboard commands to jump between headings, landmarks, links, and form fields. Every element is evaluated for whether it communicates its purpose without visual context.

Here is what that looks like in practice:

Heading structure. The auditor checks that headings follow a logical hierarchy and that the heading text accurately describes the section content. A screen reader user navigating by headings relies on this structure to understand page organization.
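As a small illustrative sketch (the page content here is hypothetical), a logical hierarchy descends one level at a time:

```html
<!-- A screen reader user can jump between headings with a single
     keystroke (e.g., H in NVDA) and infer the page outline from levels. -->
<h1>Checkout</h1>
  <h2>Shipping address</h2>
  <h2>Payment</h2>
    <h3>Saved cards</h3>
<!-- Jumping from <h1> straight to <h3>, or choosing heading levels
     for their visual size, breaks this outline for non-visual users. -->
```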

Form inputs. Every field needs a programmatically associated label. The auditor tabs through each field and confirms the screen reader announces the label, any required status, error messages, and input format expectations.
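A minimal sketch of a programmatically associated label (the field names and ids are hypothetical):

```html
<!-- The for/id pairing ties the visible label to the input, so a
     screen reader announces roughly "Email, edit, required". -->
<label for="email">Email</label>
<input id="email" type="email" required aria-describedby="email-hint">
<!-- aria-describedby surfaces the format expectation as well. -->
<span id="email-hint">Format: name@example.com</span>
```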

ARIA usage. ARIA attributes add semantic meaning when native HTML falls short. But incorrect ARIA is worse than no ARIA. Auditors verify that roles, states, and properties are applied correctly and that they match the visual behavior of the component.
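A sketch of the kind of mismatch an auditor listens for, using a toggle button as a hypothetical example:

```html
<!-- Correct: native button with its pressed state exposed.
     Script must keep aria-pressed in sync with the visual state. -->
<button type="button" aria-pressed="false">Mute notifications</button>

<!-- Incorrect: the role is announced but the on/off state never is,
     so the component sounds like a plain button to a screen reader. -->
<div role="button" tabindex="0">Mute notifications</div>
```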

Dynamic content. When content changes on the page without a full reload, the screen reader needs to be notified. Auditors evaluate whether live regions, toast notifications, and modal dialogs are announced at the right time and with the right level of urgency.
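One common mechanism is a live region; this is a hypothetical sketch (the id and message are invented for illustration):

```html
<!-- A polite status region: screen readers announce text inserted
     into it without moving focus or interrupting the user. -->
<div id="cart-status" role="status" aria-live="polite"></div>
<script>
  // After some action completes (e.g., add to cart), update the region:
  document.getElementById('cart-status').textContent = 'Item added to cart';
</script>
```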

How Does This Map to WCAG Conformance?

Screen reader evaluation directly maps to multiple WCAG 2.1 AA and WCAG 2.2 AA success criteria. Several of the most commonly cited issues in audit reports originate from screen reader evaluation.

WCAG 1.3.1 (Info and Relationships) requires that information conveyed through presentation is also available programmatically. If a visual layout implies a relationship that the screen reader does not announce, that is a WCAG conformance issue.
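For example, radio buttons that are visually grouped under a shared caption need that grouping exposed programmatically; a minimal sketch with hypothetical field names:

```html
<!-- fieldset/legend exposes the group relationship, so each radio is
     announced in context, e.g. "Shipping speed, Standard, radio button". -->
<fieldset>
  <legend>Shipping speed</legend>
  <label><input type="radio" name="speed" value="standard"> Standard</label>
  <label><input type="radio" name="speed" value="express"> Express</label>
</fieldset>
```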

WCAG 4.1.2 (Name, Role, Value) requires that all user interface components have an accessible name and that their role and state are exposed to assistive technology. This is where the auditor identifies buttons coded as divs, toggle switches without state announcements, and custom components missing accessible names.
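The div-as-button problem can be sketched in two lines (the save() handler is a hypothetical placeholder):

```html
<!-- Fails 4.1.2: no role is exposed, and the element is not
     keyboard-focusable, so a screen reader user may never find it. -->
<div class="btn" onclick="save()">Save</div>

<!-- Passes: a native button exposes its name ("Save") and role
     (button), and is focusable and keyboard-operable by default. -->
<button type="button" onclick="save()">Save</button>
```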

WCAG 1.3.2 (Meaningful Sequence) requires that the reading order makes sense when the visual layout is removed. The screen reader reveals whether the DOM order matches the intended content flow.
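CSS reordering is a common way this surfaces; a hypothetical sketch:

```html
<!-- CSS 'order' changes the visual sequence, but the screen reader
     still follows the DOM: "Step one" is announced first even though
     it is displayed second. -->
<ol style="display: flex;">
  <li style="order: 2;">Step one</li>
  <li style="order: 1;">Step two</li>
</ol>
```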

A thorough accessibility audit from a qualified provider evaluates all of these criteria through real screen reader interaction, not code inspection alone.

Why Can't Scans Replace Screen Reader Evaluation?

Automated scans can detect whether an image has an alt attribute. They cannot determine whether that alt text accurately describes the image. Scans can flag a missing form label. They cannot tell you whether an ARIA label on a custom dropdown correctly describes its options after a user makes a selection.
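For instance, both of these images would pass an automated alt-attribute check (the filenames and text are hypothetical); only a human can tell that the first conveys nothing useful:

```html
<!-- Passes a scan, fails a human review: the alt text is the filename. -->
<img src="chart.png" alt="chart.png">

<!-- Passes both: the alt text describes what the image communicates. -->
<img src="chart.png" alt="Bar chart: signups doubled from Q1 to Q2">
```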

Screen reader compatibility is inherently experiential. The only way to know if a page works with a screen reader is to use a screen reader. This is why a manual accessibility audit is the only way to determine WCAG conformance.

Accessible.org evaluates every page in scope with actual assistive technology. The audit report identifies each issue with its WCAG criterion, severity, location, and recommended remediation path.

What Issues Come Up Most Often?

Across web apps, ecommerce sites, and informational websites, certain screen reader compatibility issues appear frequently:

Custom components (tabs, accordions, carousels) that are visually functional but completely silent to a screen reader.

Focus management issues where opening a modal does not move focus into the modal, or closing it does not return focus to the trigger.

Missing or inaccurate alt text on images that carry meaningful content.

Tables without proper header markup, making data relationships invisible to assistive technology.

Links and buttons with duplicate or generic accessible names like “read more” repeated across the page.
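The modal focus management issue above can be sketched in a few lines; the element ids and dialog content are hypothetical:

```html
<button id="open-dialog">Delete account</button>

<div id="dialog" role="dialog" aria-modal="true"
     aria-labelledby="dialog-title" hidden>
  <h2 id="dialog-title">Confirm deletion</h2>
  <button id="close-dialog">Cancel</button>
</div>

<script>
  const trigger = document.getElementById('open-dialog');
  const dialog = document.getElementById('dialog');

  trigger.addEventListener('click', () => {
    dialog.hidden = false;
    // Move focus into the dialog so the screen reader announces it.
    document.getElementById('close-dialog').focus();
  });

  document.getElementById('close-dialog').addEventListener('click', () => {
    dialog.hidden = true;
    // Return focus to the trigger so the user is not stranded.
    trigger.focus();
  });
</script>
```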

These are the types of issues that accessibility services built around manual evaluation are designed to identify. Remediation guidance in the audit report gives developers the specifics they need to address each one.

How Does Screen Reader Evaluation Fit into the Full Audit Process?

Screen reader evaluation is one component of a comprehensive accessibility audit. The auditor also evaluates keyboard operability, color contrast, content resizing behavior, and cognitive accessibility considerations. Each of these evaluation methods reinforces the others.

For example, a keyboard evaluation might reveal that a component is operable via Tab and Enter. But the screen reader evaluation might reveal that same component announces nothing when focused. Both evaluations are necessary to get the full picture.

Do All Accessibility Auditors Evaluate with Screen Readers?

Not all of them. Some providers rely heavily on automated scans with minimal manual review. Others inspect code without ever launching a screen reader. An audit that does not include screen reader evaluation will miss a significant portion of conformance issues. When selecting a provider, ask specifically whether the auditor uses NVDA, VoiceOver, or JAWS during the evaluation. The answer tells you a lot about the depth of the audit.

Is Screen Reader Evaluation Needed for a VPAT?

Yes. A VPAT is the template; the completed document, an Accessibility Conformance Report (ACR), is based on an accessibility audit. If that audit does not include screen reader evaluation, the ACR will contain gaps. Procurement teams reviewing your ACR may ask about evaluation methods. An ACR backed by real assistive technology evaluation carries more weight than one produced from scan data alone.

How Often Should Screen Reader Compatibility Be Evaluated?

Every time a new accessibility audit is conducted. For most organizations, that means after significant product changes, major redesigns, or on an annual cycle. Screen reader compatibility can shift with code updates, new features, or browser and assistive technology version changes.

Screen reader evaluation is where accessibility audits prove their value. It is the part of the process that no scan, checker, or automated tool can replicate. And it is the part that matters most to the people who depend on assistive technology every day.

Contact Accessible.org to schedule an accessibility audit that includes full screen reader evaluation against WCAG 2.1 AA or WCAG 2.2 AA.


Kris Rivenburgh

I've helped thousands of people around the world with accessibility and compliance. You can learn everything in 1 hour with my book (on Amazon).