What an Accessibility Auditor Should Document

An accessibility auditor should document the WCAG success criterion violated, the conformance level, the exact location of the issue, a clear description of what’s wrong, evidence (screenshot or code snippet), the user impact, a severity rating, and recommended remediation guidance. This documentation makes each issue traceable, actionable, and verifiable during fix validation.

Without consistent documentation, developers guess at fixes, project managers can’t prioritize, and validators can’t confirm closure. The audit report becomes a list of complaints rather than a working document.

Required Documentation Fields per Issue
Each field and what it captures:

WCAG Criterion: The specific success criterion, such as 1.4.3 Contrast (Minimum), and its conformance level (A or AA)
Location: Page URL, screen name, or component path where the issue appears
Description: Plain-language explanation of what is wrong and why it conflicts with WCAG
Evidence: Screenshot, code snippet, or screen reader output showing the issue
User Impact: Who is affected and how the issue blocks or degrades their experience
Severity: Critical, high, medium, or low, based on user impact and frequency
Remediation: Specific guidance for the fix, including code patterns where useful
Status: Open, in progress, fixed, validated, or closed

Why every issue needs structured documentation

An audit report is a working document, not a one-time deliverable. The development team uses it to make fixes. The project manager uses it to track progress. The auditor uses it again during validation to confirm each issue is resolved.

If a single field is missing, that workflow breaks. A description without a location forces developers to hunt. A WCAG reference without evidence makes the issue unverifiable. Severity without user impact reduces prioritization to guesswork.

The WCAG criterion and conformance level

Every issue maps to a specific WCAG success criterion. The auditor records the criterion number, name, and conformance level (A or AA). This anchors the issue to an objective standard rather than personal preference.

If an issue could be cited under multiple criteria, the auditor selects the most precise one. Listing five related criteria for a single issue inflates the count and confuses remediation.
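As a concrete illustration, a citation for low-contrast body text might pair the criterion with the offending markup. The colour values here are hypothetical stand-ins, chosen because #777777 on white computes to roughly 4.48:1, just under the 4.5:1 minimum that 1.4.3 sets for normal-size text:

<!-- Cited under WCAG 1.4.3 Contrast (Minimum), Level AA -->
<p style="color: #777777; background-color: #ffffff;">
  Helper text rendered at 14px regular weight
</p>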

Location and reproduction steps

Location includes the URL or screen name plus the specific component. “Homepage” isn’t enough. “Homepage, primary navigation, third menu item” is.

For interactive issues, the auditor adds reproduction steps. Open the menu, focus the third item, press Enter. Without reproduction steps, the developer may not see what the auditor saw.

Evidence that holds up under review

Evidence makes an issue defensible. A screenshot showing the failing contrast ratio. A code snippet showing the missing label. A screen reader transcript showing what was announced (or not announced).

Evidence also speeds validation. When the auditor returns to verify a fix, the original evidence shows exactly what the prior state looked like.
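A code-snippet evidence entry can be as small as the offending markup copied from the page. In this hedged sketch, the field id and placeholder text are hypothetical; what matters is that the snippet shows an input with no label element or aria-label associated with it:

<!-- Evidence: text input exposed to assistive technology with no accessible name -->
<input type="text" id="promo-code" placeholder="Promo code">

Paired with a screenshot of the rendered field, a snippet like this leaves little room for debate during validation.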

Severity and user impact

Severity ratings drive prioritization. A keyboard trap on a checkout button is critical. A decorative image missing its empty alt attribute is low. The auditor assigns severity based on how the issue affects real users and how often it occurs.

User impact describes who is affected: screen reader users, keyboard-only users, users with low vision, users with cognitive disabilities. This context turns severity into a defensible rating rather than an arbitrary label.

Remediation guidance

The auditor writes remediation guidance specific to the issue, not generic WCAG advice. “Add an aria-label reading ‘Close menu’ to the icon-only button” is useful. “Make buttons accessible” is not.

Where useful, the auditor includes a code pattern or a reference to a known accessible component. The goal is to give developers enough to act without requiring them to research the criterion themselves.
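Continuing the close-button example above, a minimal before-and-after sketch could accompany the guidance; the class name is a hypothetical placeholder, and the decorative SVG icon is collapsed for brevity:

<!-- Before: icon-only button with no accessible name -->
<button type="button" class="menu-close">
  <svg aria-hidden="true" focusable="false">…</svg>
</button>

<!-- After: the icon stays decorative and the button gets an accessible name -->
<button type="button" class="menu-close" aria-label="Close menu">
  <svg aria-hidden="true" focusable="false">…</svg>
</button>

A pointer to an existing accessible component in the project's design system works just as well, and avoids a one-off pattern.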

Status tracking through remediation and validation

Each issue moves through states: open, in progress, fixed, validated, closed. Documentation should support that lifecycle from day one. A static PDF can’t do this. A spreadsheet can. A platform built for issue tracking does it best.

Without status tracking, teams lose visibility into what’s been fixed, what’s pending validation, and what’s been deferred. Conformance becomes a moving target.

FAQ

Should an auditor document every instance of the same issue or just one?

Document the issue type once with all locations where it appears. If a missing form label affects 12 fields across 4 pages, the issue is described once and the locations are listed. This keeps the report readable and the count accurate.

What severity rating system should an auditor use?

A four-tier system works well: critical, high, medium, low. Critical blocks core tasks. High degrades a primary function. Medium affects secondary content. Low is cosmetic or edge-case. The rating should be tied to user impact and frequency, not personal judgment.

Does an auditor need to document issues that are out of scope?

If an issue is observed but falls outside the audit scope (a different domain, a third-party widget, a future feature), the auditor notes it as an observation rather than a scored issue. This keeps the conformance count accurate while still surfacing risk.

How long should remediation guidance be per issue?

Long enough to be actionable, short enough to be read. Two to four sentences covers most issues. Complex patterns (custom widgets, ARIA composites) may need a code example or a link to a reference implementation.
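For example, guidance for a custom disclosure widget might include a short reference pattern like the one below. The id and label text are hypothetical, and the project's own script is assumed to toggle aria-expanded and the hidden attribute together when the button is activated:

<button type="button" aria-expanded="false" aria-controls="filters-panel">
  Filter results
</button>
<div id="filters-panel" hidden>
  <!-- panel content -->
</div>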

Documentation quality is what separates an audit report developers can act on from one that sits unread. Every field exists because someone downstream needs it.

Contact the Accessibility Base directory to find auditors who document issues thoroughly.
