To conduct an accessibility audit, manually evaluate a digital asset against WCAG standards using keyboard testing, screen reader testing, visual inspection, and code inspection to find all accessibility issues.
The 5 Steps:
- Define Technical Standards – Choose WCAG 2.0 AA, 2.1 AA, or 2.2 AA as your technical standard
- Define Environments – Select browsers, operating systems, devices, and assistive technologies to test with
- Define Scope – Identify the specific pages, screens, and user flows to audit (typically 7-15 pages)
- Conduct Evaluation – Evaluate the full scope of the digital asset using diverse evaluation methodologies
- Compile Report – Document all issues with descriptions, WCAG criteria, locations, and fix recommendations
What is an Accessibility Audit?
An accessibility audit is a formal, manual evaluation of a digital asset. During the audit, the asset is graded against a technical accessibility standard.
What is a Digital Asset?
A digital asset is any digital content, application, or system that an organization creates or maintains for users to access.
Examples of digital assets include:
- Websites
- Platforms
- Mobile applications
- Web apps
- Desktop software
- Digital kiosks
- Social media
- Learning management systems (LMS)
Documents can also be categorized as digital assets, but documents are rarely audited. Rather, they usually undergo remediation directly.
| Step | Action Item | Description |
|---|---|---|
| Step 1 | Define Technical Standards | Select the WCAG version for evaluation (2.0 AA, 2.1 AA, or 2.2 AA). Higher versions include more success criteria to evaluate. |
| Step 2 | Define Environments | Choose browsers, operating systems, devices, and assistive technologies. Common combinations: Windows + Chrome + NVDA for desktop; iOS + Safari + VoiceOver for mobile. |
| Step 3 | Define Scope | Specify all URLs, pages, screens, domains, and subdomains to audit. Focus on 7-15 pages for websites, including primary templates and user flows. |
| Step 4 | Evaluation | Systematically test against all success criteria using screen reader testing, keyboard testing, visual inspection, and code inspection. |
| Step 5 | Report | Compile all identified issues into a comprehensive report in Excel format with issue details and remediation guidance. |
Step 1: Define Technical Standards
Before we audit any digital asset, we must define the standards to be used during evaluation. In most cases, we use one of the three versions of the Web Content Accessibility Guidelines (WCAG):
- WCAG 2.0 AA
- WCAG 2.1 AA
- WCAG 2.2 AA
The higher the WCAG version, the more success criteria we will need to account for.
Although other digital assets may have different interfaces and functionality than websites, WCAG principles and success criteria still provide the foundation for evaluation. While not every success criterion may apply directly, the core principles of perceivable, operable, understandable, and robust (POUR) remain relevant across all digital formats.
Step 2: Define the Environments
Next we must define the environments under which the audit will be conducted. Environments include:
- Browsers
- Operating Systems
- Devices
- Assistive Technologies
The more environments we use, the more robust our audit is, because we’re verifying that no accessibility issues exist across many different environment combinations.
Adding more environments is beneficial in theory, but in practice every additional environment increases the time and cost of the audit. Because of the expense involved, it’s best to use a limited set of environments that yields the most ROI.
Common Environment Combinations
An audit using the desktop environment combination of Windows Desktop, Google Chrome, and NVDA screen reader is quite common. Also, including a mobile environment combination of Safari browser, VoiceOver screen reader, and an iPhone makes a lot of sense due to Apple’s popularity.
For reference, here are other widely used environments:
Browsers: Google Chrome, Mozilla Firefox, Microsoft Edge, and Safari
Operating Systems: Windows, macOS, iOS, and Android
Devices: Desktop/Laptop, Smartphone, Tablet
Assistive Technologies: Screen readers (JAWS, NVDA, VoiceOver, TalkBack), screen magnifiers, speech recognition software (Dragon NaturallySpeaking)
Step 3: Define Scope
The scope of an accessibility audit specifies all URLs/pages, screens, domains, and subdomains that the audit will span. In most cases, the audit should only include one digital asset (e.g., a website, mobile app, or software). If audits for multiple assets are needed, they should be segmented into separate audits.
Scope for Websites
For websites, the scope will typically be the primary pages/page layouts. When defining the scope, we want to make sure that all of the pages/layouts that receive the most views and/or have the most activity are audited. Most websites have 7-15 pages in scope.
For example, the following pages of an ecommerce website would be audited:
- Homepage
- Product search page
- Product page
- Checkout
- Registration page
- Account page
Scope for Other Digital Assets
For other digital assets, we’ll define a scope that involves the primary page templates, user flows, important pages, and unique content.
When the audit report is received, the digital team can then apply changes to other similar pages, layouts, or content. Outside of unique content, most changes will apply sitewide because the underlying page templates will be updated.
Step 4: Evaluation
With all of the parameters defined, it’s time for the auditor to begin evaluating the digital asset.
Evaluation Starting Points
There are many different approaches and starting points when it comes to evaluation, including:
- Working from top to bottom and left to right of each page
- Working from a WCAG checklist
- Working from scan results first (scan results must always be manually reviewed)
- Looking for certain accessibility issues first
- Starting with the header and footer of a website
- Starting with keyboard navigability
Whatever method is chosen, the most important result is that all of the accessibility issues are found and reported on.
How to Technically Audit
The ability to find accessibility issues is mostly intuitive if you’re familiar with the Web Content Accessibility Guidelines (WCAG) because those are the technical standards that we’re grading a digital asset against.
Keyboard Testing
For success criteria 2.1.1 Keyboard and 2.1.2 No Keyboard Trap, test every interactive element using only the keyboard. Navigate through the entire page using Tab, Shift+Tab, Enter, Spacebar, and arrow keys. Ensure all functionality is accessible and there are no keyboard traps.
Start at the top of the page and tab through every element. Document the tab order and watch for any place where focus disappears or gets stuck. Pay special attention to modal dialogs, dropdown menus, and custom widgets.
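For teams that want to supplement manual tabbing, the traversal above can be roughed out in a script. The sketch below is illustrative only: it assumes Playwright as the automation tool and uses a placeholder URL, it merely logs the focus path and flags spots where focus falls back to the body, and it does not replace hands-on keyboard testing.

```ts
import { chromium } from 'playwright';

// Minimal sketch: press Tab repeatedly and log which element receives focus,
// so the recorded tab order can be compared against the visual reading order.
async function logTabOrder(url: string, maxStops = 40): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  for (let i = 1; i <= maxStops; i++) {
    await page.keyboard.press('Tab');
    const focused = await page.evaluate(() => {
      const el = document.activeElement;
      if (!el) return '(none)';
      return el.tagName.toLowerCase() + (el.id ? `#${el.id}` : '');
    });
    console.log(`Stop ${i}: ${focused}`);
    // Focus landing back on <body> often means a widget dropped it.
    if (focused === 'body') console.warn('Focus returned to <body> at this stop.');
  }

  await browser.close();
}

// Placeholder URL for illustration only.
logTabOrder('https://example.com');
```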
Mobile keyboard testing requires connecting an external keyboard to your device. On iOS, use Control+Option with arrow keys for VoiceOver navigation. On Android, use Alt+Shift with arrow keys for TalkBack navigation.
What to check:
- Tab through every interactive element on the page
- Verify focus indicators are visible on all interactive elements
- Test all dropdown menus, modals, and overlays for keyboard access
- Ensure Escape key closes modals and dropdowns
- Check that skip links work and go to the correct location
- Test form submission using Enter key
- Verify custom components (sliders, date pickers, carousels) work with keyboard
- Check that keyboard focus doesn’t get trapped anywhere
- Test keyboard shortcuts don’t conflict with screen reader commands
- Ensure focus order matches visual reading order
Screen Reader Testing
Screen reader testing requires proficiency with at least one screen reader. Navigate through each page listening to how content is announced. Check that all images have appropriate alternative text, form fields have labels, and the reading order makes sense.
Test with multiple screen readers when possible, as they interpret content differently. NVDA and JAWS behave differently on Windows, while VoiceOver has distinct behaviors on macOS versus iOS. Each screen reader has different commands and interaction modes that reveal different issues.
For mobile testing, VoiceOver on iOS uses swipe gestures to navigate. TalkBack on Android uses similar swipe patterns but announces content differently. Test with both touch exploration and linear navigation through swiping.
What to check:
- Navigate using headings (H key) and verify heading structure is logical
- Check all images announce meaningful alternative text
- Verify form fields announce their labels, required status, and errors
- Test that buttons and links announce their purpose clearly
- Check data tables announce headers and relationships correctly
- Verify lists are announced with count and position
- Test landmark regions are properly announced
- Ensure dynamic content updates are announced
- Check that decorative images are hidden from screen readers
- Test interactive elements announce their state (expanded/collapsed, selected, checked)
Visual Inspection
Visually examine the page for color contrast issues, text sizing, focus indicators, and visual presentation of information. Check that information isn’t conveyed by color alone and that text can be resized to 200% without loss of functionality.
Use browser zoom to test text resizing. Look for text that gets cut off, overlaps, or becomes unreadable. Check horizontal scrolling at 200% zoom on desktop and verify content reflows properly at 400% zoom for text-only scaling.
On mobile devices, test with system-wide text size settings at maximum. Verify that pinch-to-zoom works unless the content is responsive and readable without zooming.
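Contrast ratios can be measured with any contrast checker, but the underlying math comes straight from the WCAG definitions of relative luminance and contrast ratio. The sketch below implements those formulas; the function names are illustrative.

```ts
// Relative luminance per the WCAG 2.x definition (sRGB channels 0-255).
function relativeLuminance(r: number, g: number, b: number): number {
  const linearize = (channel: number): number => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05).
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: grey #767676 text on white just passes 4.5:1 for normal text.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2));
```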
What to check:
- Measure color contrast ratios for all text (4.5:1 for normal text, 3:1 for large text)
- Test color contrast for interactive elements and focus indicators (3:1 minimum)
- Verify error messages don’t rely on color alone
- Check that links are distinguished by more than just color
- Test page at 200% browser zoom for functionality
- Verify no horizontal scrolling at 1280px width and 400% zoom
- Check focus indicators are visible against all backgrounds
- Test with Windows High Contrast Mode
- Verify text spacing can be adjusted without loss of content (see the sketch after this list)
- Check that status indicators use icons or text, not just color
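For the text-spacing item above, the override values come directly from WCAG 2.1 SC 1.4.12. One way to apply them is a small console snippet (or bookmarklet) along the lines of the sketch below; after running it, look for clipped, overlapping, or truncated text.

```ts
// Apply the SC 1.4.12 text-spacing values to the current page.
// Line height 1.5x, paragraph spacing 2x, letter spacing 0.12x, word spacing 0.16x.
const style = document.createElement('style');
style.textContent = `
  * {
    line-height: 1.5 !important;
    letter-spacing: 0.12em !important;
    word-spacing: 0.16em !important;
  }
  p {
    margin-bottom: 2em !important;
  }
`;
document.head.appendChild(style);
```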
Code Inspection
Inspect the HTML code to verify semantic markup, ARIA implementation, and proper heading structure. Check for programmatically determinable relationships between content and ensure proper language attributes are set.
Look for form field labels in the code, verify landmark regions, and check that error messages are properly associated with their fields. Review ARIA usage to ensure it’s necessary and correctly implemented. Remember that no ARIA is better than bad ARIA.
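Some of these checks can be roughed out from the browser console before the manual code review. The sketch below is illustrative only; it flags form fields with no programmatic label and heading levels that skip, which are two of the items in the checklist that follows.

```ts
// Illustrative only: flag form fields with no programmatic label and
// heading levels that skip (e.g., an h2 followed directly by an h4).
document
  .querySelectorAll<HTMLInputElement | HTMLSelectElement | HTMLTextAreaElement>(
    'input:not([type="hidden"]), select, textarea'
  )
  .forEach((field) => {
    const labelled =
      (field.labels !== null && field.labels.length > 0) ||
      field.hasAttribute('aria-label') ||
      field.hasAttribute('aria-labelledby') ||
      field.hasAttribute('title');
    if (!labelled) console.warn('Form field with no programmatic label:', field);
  });

let previousLevel = 0;
document.querySelectorAll('h1, h2, h3, h4, h5, h6').forEach((heading) => {
  const level = Number(heading.tagName.charAt(1));
  if (previousLevel > 0 && level > previousLevel + 1) {
    console.warn(`Heading level skips from h${previousLevel} to h${level}:`, heading);
  }
  previousLevel = level;
});
```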
Mobile apps require different inspection techniques. For iOS, use Xcode’s Accessibility Inspector. For Android, use Android Studio’s Layout Inspector. Check that native controls are used where possible and custom controls have proper accessibility properties.
What to check:
- Verify heading hierarchy (h1-h6) is logical and doesn’t skip levels
- Check all form inputs have associated labels (for/id or aria-labelledby)
- Verify page has one h1 element
- Check lang attribute is set on html element
- Inspect ARIA roles, states, and properties for correct usage
- Verify required fields use aria-required="true"
- Check error messages use aria-describedby to associate with fields
- Check that buttons are actual button elements, not divs with onclick handlers
- Verify lists use proper ul/ol/dl markup
- Check tables use th elements with scope attributes
Listening for Audible Cues
Play any media content and check for audio descriptions and closed captions. Ensure that audio doesn’t play automatically and that controls are provided to pause, stop, or adjust volume.
Evaluation Methodologies
Overall, the comprehensive evaluation of the accessibility of a digital asset involves a combination of the following methodologies:
- Keyboard testing
- Screen reader testing
- Visual inspection
- Listening for audible cues
- Code inspection
- Automated scans
Note: automated scans act as a useful cross-check to ensure that issues correctly flagged by automation are included in the audit report. However, scan results should not be relied on by themselves; they may contain false positives and false negatives and are thus not conclusive.
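As one example of folding scan results into the workflow, the sketch below assumes the open-source axe-core library has been loaded on the page under test and simply lists its reported violations. Every result it prints still has to be manually confirmed before it goes into the report.

```ts
// Assumes the axe-core script has been loaded on the page, exposing a global `axe`.
declare const axe: typeof import('axe-core');

// Restrict the run to WCAG 2.1 AA rule tags and list whatever is reported.
axe
  .run(document, { runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa', 'wcag21aa'] } })
  .then((results) => {
    for (const violation of results.violations) {
      console.log(`${violation.id}: ${violation.help}, ${violation.nodes.length} instance(s)`);
    }
  });
```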
Technical Expertise Required
Simply knowing what accessibility requirements to look for doesn’t mean you will be able to identify and test for them. This is because checking for some accessibility issues requires more advanced knowledge.
If you do not know how to use a screen reader, then you can’t fully evaluate some success criteria. If you’re not experienced in evaluating code, you will not be able to fully evaluate for some of the more technical accessibility considerations.
The best way to start learning how to audit is to attempt to grade your digital asset against all success criteria in the Web Content Accessibility Guidelines (WCAG). You will be able to identify some accessibility issues just through common sense (here’s the requirement, does the asset meet the requirement). However, for other issues you will need a development background and/or experience using a screen reader.
Thus, the material aspect of learning how to audit comes down to understanding the different WCAG success criteria. As a primer to start learning WCAG, we highly recommend our WCAG 2.1 AA checklist.
Step 5: Report
After all accessibility issues have been identified, they are compiled into a final report that will be delivered to the client.
The final report is typically in Excel spreadsheet format and includes all important details, such as the standards used, the environments used, the audit date, and other information pertinent to the report.
What to Include in the Report
Our audits give clients all of the practical information they need to take action, including:
- What issue exists
- A description of the issue
- Relevant success criterion (WCAG requirement)
- Location of issue
- Applicable code, if any
- Recommendation for fixing issue
- Screenshot or screen recording
These additional details are all designed to make it as easy as possible for the client or remediation team to fix the issues in the final audit report.
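As an illustration only, the fields above can be captured consistently with a simple record structure like the sketch below before being exported to the spreadsheet; the field names are hypothetical, not a required format.

```ts
// Hypothetical shape for one row of the audit spreadsheet, mirroring the fields listed above.
interface AuditIssue {
  issue: string;            // Short name of the issue
  description: string;      // What is wrong and who it affects
  successCriterion: string; // e.g. "1.1.1 Non-text Content (Level A)"
  location: string;         // Page URL or screen, plus an element reference
  code?: string;            // Offending markup, if applicable
  recommendation: string;   // How to fix the issue
  evidence?: string;        // Link to a screenshot or screen recording
}
```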
Audit vs. Automated Scan
The term audit is sometimes used to describe an automated scan, with the scan referred to as an “audit” or “automated audit.” This usage is incorrect, as an audit always indicates a manual evaluation of a website or other asset.
Although scans are sometimes used during the course of an audit, audits are never based on automation; all scan results must be manually reviewed.
Audit vs. User Testing
The terms audit and user testing are often conflated, but they are two different methods of assessing the accessibility of a digital asset.
Even though an audit involves screen reader testing and keyboard testing, it is not user testing. Rather, an audit is the formal, technical evaluation of a website against WCAG or other standards by one or more technical accessibility experts.
In contrast, user testing is conducted by an accessibility professional with one or more disabilities, typically using assistive technology. In the context of websites, user testing is typically conducted by a professional who is blind or visually impaired and using a screen reader.
During user testing, the professional relays their practical experience using the website for a set duration of time (e.g., 30 minutes). Compare this to an audit, which is not complete until every page within the scope has been evaluated.
Key Differences
The key differences between an audit and user testing are:
- Conducted by experts with or without disabilities vs. conducted only by professionals with disabilities
- Technical vs. practical evaluation
- Based on standards vs. based on experience
- Comprehensive vs. limited by parameters
Summary
Because an audit is a formal evaluation, a regimented and consistent process must be followed. When you’re auditing, the objective isn’t to find some or even most of the accessibility issues, it’s to identify all of them.
This is partly why experience and background knowledge, along with a diverse set of evaluation methodologies, are so crucial when conducting an accessibility audit.
Help
Does your organization need an accessibility audit?
We’d love to help. Just send us a message below or contact us and we’ll be right with you.