Notes for JAWS Screen Reader Testing in Audits

JAWS screen reader testing notes are the running record an auditor keeps while evaluating a website or app with JAWS in a Windows environment. The notes capture what JAWS announced, what it failed to announce, where focus moved unexpectedly, and which WCAG success criteria each observation maps to. Good notes turn a live evaluation session into clear, defensible audit findings. They include the page or screen evaluated, the JAWS and browser version, the command used, the actual output, the expected output, and a short reproduction path. Without disciplined notes, screen reader observations lose context by the time the audit report is written.

Core Elements of JAWS Screen Reader Testing Notes
Environment: JAWS version, Windows version, browser and version.

Location: URL, page title, and the specific component or region.

Action: the exact JAWS command or keystroke used.

Output: what JAWS actually announced, verbatim where possible.

Expected: what a conforming experience would have announced.

WCAG Mapping: the success criterion the observation falls under.

Setting Up the Environment Before You Start

Record the JAWS version, Windows version, and browser at the top of the notes file. JAWS pairs most reliably with Chrome and Firefox on current Windows builds. If the audit covers more than one browser, note the pairing for each session separately.

Confirm verbosity is set to a level that surfaces semantic detail. Many auditors keep punctuation at Most and verbosity at Intermediate during evaluation so role, state, and label announcements come through clearly.

What Should Each Note Contain?

A useful note is short, specific, and reproducible. It tells a developer exactly where to go, what to do, and what they should hear once the issue is fixed.

Capture the URL, the component, the keystroke or command used, and the announcement verbatim. Quote JAWS output the way it speaks it, including pauses and missing words. If JAWS said nothing, write “silent” rather than leaving the field blank.

Pair the observation with the relevant WCAG 2.1 AA or WCAG 2.2 AA success criterion. A button with no accessible name is a 4.1.2 Name, Role, Value issue. A modal that traps focus incorrectly maps to 2.1.2 No Keyboard Trap. The mapping is what turns an observation into an audit finding.
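The fields above can be captured as a structured record so every entry stays complete and reproducible. A minimal sketch in Python, assuming an illustrative structure (field names and values are hypothetical, not a prescribed Accessible.org format):

```python
from dataclasses import dataclass

# One JAWS testing note entry. Field names mirror the elements
# described above; this is an illustrative sketch only.
@dataclass
class JawsNote:
    url: str        # page or screen evaluated
    component: str  # specific control or region
    command: str    # exact JAWS keystroke used
    output: str     # verbatim announcement ("silent" if nothing)
    expected: str   # what a conforming experience would announce
    wcag: str       # mapped success criterion

note = JawsNote(
    url="https://example.com/checkout",
    component="Apply coupon button",
    command="Tab",
    output="button",  # JAWS announced only the role, no name
    expected="Apply coupon, button",
    wcag="4.1.2 Name, Role, Value",
)
print(note.wcag)  # → 4.1.2 Name, Role, Value
```

A record like this maps one-to-one onto a finding in the final report, which is what makes the note defensible later.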

Key JAWS Commands Worth Documenting

Auditors rely on a small set of commands repeatedly. Noting the command alongside the result removes ambiguity later.

Tab and Shift+Tab: move through interactive elements and reveal focus order issues.

H and Shift+H: navigate by heading and expose heading structure problems.

R: jumps between landmarks and regions.

F: moves through form fields.

Insert+F7: opens the links list, useful for evaluating link text quality.

Insert+F6: opens the headings list for structural review.

Insert+F5: opens the form fields list.

Down Arrow: reads the next line in browse mode and surfaces reading order issues.

When a command produces something unexpected, the note should show the command, the page state, and the announcement together. That triplet is what makes the issue reproducible.

Common Issues to Watch For

Certain patterns appear repeatedly during JAWS evaluation. Knowing them in advance speeds up the audit and sharpens the notes.

Unlabeled buttons and icon-only controls are the most frequent. JAWS will often announce “button” with no name, or read raw class names when an aria-label is missing. Custom dropdowns and comboboxes built without proper ARIA roles often announce as a generic group rather than a combobox with expanded state.

Modal dialogs are another recurring source of issues. Focus may not move to the dialog on open, the dialog may not be announced as a dialog, and Escape may not close it. Each of these gets its own note.

Reading order in browse mode often diverges from visual order when CSS positioning is used heavily. Capture the actual sequence JAWS reads and compare it to the visual layout.

How Do You Translate Notes Into an Audit Report?

Notes are the raw material. The audit report is the structured output. Each note that represents a real issue becomes a finding with a description, the WCAG criterion, the location, severity, and a recommended fix.

The audit identifies the issue, references the JAWS output as evidence, and gives the development team enough detail to reproduce and remediate. Automated scans cannot capture screen reader behavior and flag only approximately 25% of issues, which is why a manual audit is the only way to determine WCAG conformance.
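With structured notes, the move from session entries to candidate findings is largely a grouping step. A hypothetical sketch in Python (the note tuples below are invented examples):

```python
from collections import defaultdict

# Hypothetical raw notes: (WCAG criterion, location, observation).
notes = [
    ("4.1.2 Name, Role, Value", "/checkout",
     "icon button announced as 'button' with no name"),
    ("2.1.2 No Keyboard Trap", "/checkout",
     "focus cannot leave coupon modal with Tab or Escape"),
    ("4.1.2 Name, Role, Value", "/account",
     "custom combobox announced as a generic group"),
]

# Cluster observations under their success criterion so each
# criterion becomes one candidate finding with its evidence.
findings = defaultdict(list)
for criterion, location, observation in notes:
    findings[criterion].append(f"{location}: {observation}")

for criterion in sorted(findings):
    print(criterion, "-", len(findings[criterion]), "observation(s)")
```

Each group then gets a description, severity, and recommended fix when the report is written.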

Accessible.org audits include screen reader evaluation across JAWS, NVDA, and VoiceOver depending on scope. The notes from each session feed directly into the final audit report so findings are grounded in observed behavior rather than assumed behavior.

Tips for Cleaner Notes

Use a consistent template across sessions. A spreadsheet or a structured document with fixed columns prevents drift and makes the notes easier to convert into report findings.
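A fixed-column template can be as simple as a CSV the auditor appends to during the session. A sketch using Python's standard csv module (the column names and sample row are illustrative assumptions):

```python
import csv
import io

# Fixed columns keep every session's notes in the same shape,
# so entries convert cleanly into report findings later.
COLUMNS = ["environment", "location", "action", "output", "expected", "wcag"]

buffer = io.StringIO()  # stands in for a real notes file
writer = csv.DictWriter(buffer, fieldnames=COLUMNS)
writer.writeheader()
writer.writerow({
    "environment": "JAWS 2024 / Windows 11 / Chrome 126",
    "location": "https://example.com/signup - email field",
    "action": "F",
    "output": "edit",  # JAWS gave the role but no label
    "expected": "Email address, edit",
    "wcag": "4.1.2 Name, Role, Value",
})
print(buffer.getvalue().splitlines()[0])  # → environment,location,action,output,expected,wcag
```

The same columns work equally well in a spreadsheet; what matters is that no session invents its own layout.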

Timestamp entries when evaluating dynamic content. If a live region announces too late or not at all, the timestamp helps tie the observation to the page state.

Record what worked, not only what failed. Confirming that a heading structure or a form label reads correctly is part of the conformance picture and supports the final report.

How long should JAWS testing notes be per page?

Length depends on page complexity. A static informational page may produce a handful of entries. A checkout flow or a data-heavy dashboard can generate several dozen. Focus on completeness and reproducibility, not word count.

Should JAWS notes include screenshots?

Yes, when the issue involves visual focus indicators, layout, or where a control is located on the page. A screenshot with the focused element highlighted clarifies the note for developers who did not sit through the session.

Do JAWS notes replace NVDA or VoiceOver evaluation?

No. JAWS, NVDA, and VoiceOver each behave differently. An audit that covers desktop and mobile environments needs notes from each relevant screen reader. JAWS notes alone do not represent the full screen reader experience.

Can AI assist with organizing JAWS testing notes?

AI can help structure and cluster raw notes by WCAG criterion or by component, which speeds up the move from session notes to audit findings. Accessible.org Labs is actively researching how AI can support auditing and remediation workflows without replacing human evaluation.

Disciplined notes are the difference between a screen reader session that sharpens an audit and one that gets reconstructed from memory. The auditor who writes clearly during the session writes a stronger report after it.

Contact Accessible.org to discuss a manual accessibility audit that includes JAWS evaluation.


Kris Rivenburgh

I've helped thousands of people around the world with accessibility and compliance. You can learn everything in 1 hour with my book (on Amazon).