How to Conduct an Audit for a Native Mobile App

A native mobile app audit is a fully manual evaluation of an iOS or Android app against WCAG 2.1 AA, conducted by a trained auditor using assistive technology on real devices. The auditor reviews every screen, state, and user flow, then identifies issues with severity ratings, location references, and remediation guidance. Scans cannot determine conformance because they only flag approximately 25% of issues, and many mobile-specific issues require human judgment. The output is an audit report your developers can act on.

The process below covers scope, environments, screen reader evaluation, gesture and touch target checks, and how to package findings into a report.

Native Mobile App Audit Overview

Standard: WCAG 2.1 AA (or 2.2 AA on request)
Method: Fully manual evaluation by a trained auditor
iOS environment: VoiceOver, Zoom, Dynamic Type, Voice Control
Android environment: TalkBack, Magnification, Font Size, Switch Access
Output: Audit report with issues, severity, location, recommendations
Scan role: Scans flag approximately 25% of issues and cannot determine conformance

Define the Scope Before You Start

Scope drives everything that follows. For a native mobile app, scope is defined by screens and unique states, not URLs. A login screen, a logged-in dashboard, a settings panel, and a checkout flow each count as separate screens.

List every screen the user can reach. Include modals, error states, empty states, permission prompts, and any flows that appear only after specific actions. If the app supports both iOS and Android, each platform is audited separately because the assistive technology and platform conventions differ.

A typical mobile app audit covers 15 to 40 screens per platform. Larger apps with deep flows may go higher.

Set Up the Evaluation Environment

Audits are conducted on real devices, not simulators. Simulators do not reproduce screen reader behavior accurately, and gesture handling differs from physical hardware.

For iOS, use a recent iPhone running the current iOS version. Enable VoiceOver, Zoom, Dynamic Type at its largest setting, and Voice Control, each during its own evaluation pass. For Android, use a current Pixel or comparable device and enable TalkBack, Magnification, large font scaling, and Switch Access, again one pass per technology.

Connect the device to a computer for screen recording. Recordings document issues for the report and give developers visual reference during remediation.

What Does the Auditor Evaluate?

The auditor works through WCAG 2.1 AA success criteria as they apply to native mobile contexts. Some criteria translate directly from web. Others require mobile-specific interpretation.

Core areas of evaluation include:

- Screen reader output: every interactive element must announce its name, role, state, and value.
- Focus order: focus must move logically and predictably through the screen with VoiceOver or TalkBack.
- Touch target size: a minimum of 44×44 points on iOS and 48×48 dp on Android.
- Color contrast: text and meaningful UI elements must meet the 4.5:1 or 3:1 ratios.
- Text resizing: layouts must hold up at 200% Dynamic Type or large font scale.
- Orientation: content must work in both portrait and landscape unless one orientation is essential.
- Gestures: any complex gesture must have a single-pointer alternative.
- Forms and errors: labels, instructions, and error messages must be programmatically associated.
- Motion: parallax and auto-playing animation must be reducible or pausable.
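As an illustration, the contrast and touch target thresholds above are numeric and can be checked directly. The sketch below implements the WCAG 2.x relative luminance and contrast ratio formulas, plus a size check against the platform minimums; the function names are ours, not part of any platform API.

```python
def _linearize(channel):
    """Convert one sRGB channel (0-255) to linear light per the WCAG formula."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """WCAG relative luminance of an (R, G, B) color."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """WCAG contrast ratio, always >= 1 (lighter luminance over darker)."""
    la, lb = relative_luminance(color_a), relative_luminance(color_b)
    lighter, darker = max(la, lb), min(la, lb)
    return (lighter + 0.05) / (darker + 0.05)

def meets_touch_target(width, height, platform):
    """Check the minimum touch target: 44x44 pt on iOS, 48x48 dp on Android."""
    minimum = 44 if platform == "ios" else 48
    return width >= minimum and height >= minimum

# Black text on a white background: the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # 21.0
```

The same two functions cover both AA text thresholds: compare the result against 4.5 for body text and 3.0 for large text and meaningful UI elements.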

Each issue is mapped to the specific success criterion it fails.

Evaluate Each Screen With the Screen Reader

The screen reader pass is the longest part of the audit. The auditor swipes through each screen element by element, listening to what is announced and comparing it to what is visible.

Common issues identified during this pass include unlabeled buttons announcing as “button” with no name, decorative images that should be hidden but are exposed to the screen reader, custom controls that announce only their visible label without role or state, headings not marked as headings, and form fields missing accessible names.

The auditor also evaluates focus order, gesture compatibility with screen reader navigation, and whether dynamic content updates are announced.
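To make the labeling checks concrete, here is a sketch that flags the problems listed above against a simplified, hypothetical model of a screen's element tree. The field names (`name`, `role`, `decorative`, `hidden`) are assumptions for illustration, not a real iOS or Android accessibility API.

```python
def find_label_issues(elements):
    """Flag common screen reader problems in a simplified element tree.

    Each element is a dict with: id, name (accessible name), role,
    decorative (bool), and hidden (hidden from assistive technology).
    """
    issues = []
    for el in elements:
        if el.get("decorative"):
            # Decorative images should be hidden from the screen reader.
            if not el.get("hidden"):
                issues.append((el["id"], "decorative image exposed to screen reader"))
            continue
        if not el.get("name"):
            issues.append((el["id"], "announces with no accessible name"))
        if not el.get("role"):
            issues.append((el["id"], "announces label without role or state"))
    return issues

# A mock screen exhibiting all three problems from the pass above.
screen = [
    {"id": "save_btn", "name": "", "role": "button", "decorative": False},
    {"id": "divider_img", "name": "", "role": "", "decorative": True, "hidden": False},
    {"id": "toggle", "name": "Notifications", "role": "", "decorative": False},
]
for element_id, problem in find_label_issues(screen):
    print(element_id, "-", problem)
```

An automated scan can approximate checks like these; what it cannot do is judge whether the announced name actually describes the control, which is why this pass stays manual.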

Document Each Issue

Every issue gets a record. The record includes:

- The screen and location, such as “Settings, Notifications row.”
- The WCAG success criterion, for example “1.3.1 Info and Relationships, Level A.”
- A description of the issue and what the user experiences.
- A severity rating (critical, high, medium, low) using a defined Risk Factor or User Impact prioritization formula.
- A recommended fix written for the platform (iOS or Android).
- A screenshot or short video clip showing the issue.

Mobile remediation guidance is platform-specific. An iOS recommendation references UIAccessibility properties. An Android recommendation references contentDescription, AccessibilityNodeInfo, or Compose semantics.
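As a sketch of how these records can be structured and prioritized, the snippet below models one issue record and sorts a batch most-severe-first. The field names and numeric weights are illustrative assumptions, not a published Accessible.org schema.

```python
from dataclasses import dataclass

# Ordering used to prioritize remediation; the labels match the severity
# scale above, the numeric weights are an assumption for sorting only.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Issue:
    screen: str          # e.g. "Settings, Notifications row"
    criterion: str       # e.g. "1.3.1 Info and Relationships, Level A"
    description: str     # what the user experiences
    severity: str        # critical | high | medium | low
    recommendation: str  # platform-specific fix (iOS or Android)
    evidence: str        # screenshot or clip reference

def prioritize(issues):
    """Return issues ordered most severe first."""
    return sorted(issues, key=lambda i: SEVERITY_RANK[i.severity])

issues = [
    Issue("Home", "1.4.3 Contrast (Minimum), Level AA",
          "Body text fails 4.5:1", "medium", "Darken text color", "shot2.png"),
    Issue("Login", "4.1.2 Name, Role, Value, Level A",
          "Icon button has no accessible name", "critical",
          "Set an accessible label on the control", "clip1.mov"),
]
print([i.screen for i in prioritize(issues)])  # ['Login', 'Home']
```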

Package the Audit Report

The final report consolidates findings into a document the development team can work from. Issues are grouped by screen, by severity, or by WCAG criterion depending on what is most useful for the team.
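Those three groupings are one-key pivots over the same findings, which a short sketch makes concrete. The issue records here are plain dicts with assumed field names; only the grouping logic matters.

```python
from collections import defaultdict

def group_issues(issues, key):
    """Group issue records (dicts) by 'screen', 'severity', or 'criterion'."""
    groups = defaultdict(list)
    for issue in issues:
        groups[issue[key]].append(issue)
    return dict(groups)

# Hypothetical findings from a two-screen slice of an audit.
findings = [
    {"screen": "Login", "severity": "critical", "criterion": "4.1.2"},
    {"screen": "Login", "severity": "low", "criterion": "1.4.3"},
    {"screen": "Checkout", "severity": "critical", "criterion": "2.5.1"},
]
by_severity = group_issues(findings, "severity")
print(sorted(by_severity))  # ['critical', 'low']
```

The same function yields the by-screen view for developers working flow by flow, and the by-criterion view for teams mapping findings back to a conformance claim.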

A strong audit report gives developers enough context to fix each issue without going back to the auditor for clarification. That includes the platform-specific code area to look at, the expected behavior after the fix, and the assistive technology output that should result.

Once fixes are made, a validation pass confirms the issues are resolved and that no new issues were introduced.

Where Scans Fit

Automated mobile scanning tools exist (Accessibility Scanner on Android, Accessibility Inspector on iOS), and they catch a portion of issues like missing labels and low contrast. But they only flag approximately 25% of issues and cannot evaluate context, logic, focus order through a flow, gesture alternatives, or whether a screen reader announcement actually makes sense.

A scan is useful as a quick pre-check before development hands the build to an auditor. It is not a substitute for the audit.

Frequently Asked Questions

How long does a native mobile app audit take?

Turnaround depends on scope. A 20-screen single-platform app typically takes about two weeks from kickoff to delivered report. Two platforms double the work because each is evaluated independently. Accessible.org provides a firm timeline as part of the quote.

Should I get WCAG 2.1 AA or 2.2 AA?

Most mobile app audits use WCAG 2.1 AA because it remains the reference standard for most legal frameworks and procurement requirements. WCAG 2.2 AA adds criteria worth meeting, and some clients request it. Either is defensible. The audit report states which version was used.

Can the same auditor evaluate both iOS and Android?

Yes, when the auditor is trained in both platforms and their respective screen readers. Each platform is still evaluated as its own pass with its own report section, because the underlying APIs and conventions differ.

What happens after the audit?

The development team works through the report, prioritizing by severity. After fixes ship, a validation pass confirms resolution. Many teams then request an ACR built from the audit to share with enterprise customers or procurement teams.

A native mobile app audit is the only way to determine WCAG conformance for an iOS or Android product. The work is hands-on, platform-specific, and detailed by design.

Contact Accessible.org for a mobile app audit quote or reach the team directly.


Kris Rivenburgh

I've helped thousands of people around the world with accessibility and compliance. You can learn everything in 1 hour with my book (on Amazon).