Automated and Manual Tests

A comprehensive approach to web accessibility testing benefits from both automated and manual tests.

The Strengths of Automated Tests

Automated tests with FireEyes speed up the process of identifying errors and help increase the reliability of the testing process, because FireEyes will always find the same kinds of errors under the same circumstances. Humans, on the other hand, are inconsistent, even when they are accessibility experts; that's simply human nature. People may accidentally overlook some types of issues or forget to perform some types of tests. FireEyes has the edge in terms of consistency and reliability.

The Strengths of Manual Tests

Manual testing offers a high level of accuracy for documenting individual violations, dependent, of course, on the skill of the tester. Not only is manual testing necessary to review issues that automated testing flags as possible errors (also called "potential issues"), it also finds issues that automated testing cannot find. Additionally, manual testing allows the tester not only to find an error but also to provide the necessary repair guidance.

Definite Accessibility Violations

FireEyes can identify many types of errors with near 100% accuracy. When FireEyes flags these types of issues, they are almost always true accessibility errors that need to be fixed.

Example: Missing alt text
Images without alt text are automatic accessibility violations because screen readers rely on alt text as a substitute for the visual image.
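A minimal illustration (the file name and alt text here are hypothetical):

    <!-- Violation: no alt attribute; a screen reader may announce the file name instead -->
    <img src="logo.png">

    <!-- Fixed: the alt text serves as a text substitute for the image -->
    <img src="logo.png" alt="Acme Corporation logo">
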
Example: Missing form labels
Screen readers cannot reliably communicate the purpose of form elements to users unless the form elements have labels explicitly associated with them. There is more than one way to provide a label for a form element, but form elements without labels are automatic accessibility violations.
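One common way to provide an explicit label is to match the label's for attribute to the form element's id, as in this hypothetical sketch:

    <!-- Violation: the input has no associated label -->
    <input type="text" id="email" name="email">

    <!-- Fixed: the for attribute explicitly ties the label to the input -->
    <label for="email">Email address:</label>
    <input type="text" id="email" name="email">
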
Example: Missing page titles
One of the first things a screen reader announces when a user visits a web page is the title. If the title is empty, screen reader users must listen to the content of the web page to figure out what the page is about, or whether they've landed on the wrong page, which wastes their time.
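A brief illustration (the page and site names are made up):

    <!-- Violation: the title is empty, so it conveys nothing to screen reader users -->
    <title></title>

    <!-- Fixed: a descriptive title identifies the page immediately -->
    <title>Order History - Acme Store</title>
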
Example: Missing a primary language identification
When the primary language of a web page is not identified, screen readers may read the page incorrectly. If no language is specified in a web page, the screen reader will read the page according to the pronunciation rules of the user's default language settings. For example, if a person speaks both French and English and sets the default language of the screen reader to French, the screen reader will read all web pages using French pronunciation rules, even if the page is in English. Such mispronunciation makes the content almost impossible to understand. Documents with lang="en", on the other hand, will be read with English pronunciation rules, no matter what the user's default language is. FireEyes can find this error and flag it accurately.
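For example, on an English-language page:

    <!-- Violation: no primary language identified -->
    <html>

    <!-- Fixed: the lang attribute tells screen readers to use English pronunciation rules -->
    <html lang="en">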

FireEyes excels at these kinds of tasks and will save you a lot of time compared to identifying all of these types of errors manually.

Possible Errors

In other cases, FireEyes will recognize suspicious circumstances that probably indicate accessibility errors, but which require human intervention to determine if they are true errors or not.

Example: Suspicious alt text
If the alt text for an image is alt="image.jpg", the alt text is probably not accurate or descriptive enough. But in some rare circumstances, the image might actually depict the words "image.jpg," so the alt text might be appropriate. FireEyes cannot read the image itself, so a human must determine whether the issue is a real problem or can be safely ignored.
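The flagged markup might look like this:

    <!-- Flagged as a possible error: the alt text merely repeats the file name -->
    <img src="image.jpg" alt="image.jpg">
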
Example: Failure to mark headings
If FireEyes finds a paragraph consisting of a short phrase of all bold text, it will flag the paragraph as a possible error, because the text visually resembles a heading even though it is only a paragraph in the markup. Chances are it should be marked as <h1>, <h2>, <h3>, or some other level of heading. Marking it as a heading makes the content easier to understand and navigate for screen reader users. But it is also possible that the phrase is not meant to be a heading. FireEyes can't know for sure. It can flag the suspicious markup, but a human must decide whether the paragraph should be marked as a heading.
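A sketch of the flagged markup and its likely fix (the heading text is hypothetical):

    <!-- Flagged as a possible error: a bold phrase that visually resembles a heading -->
    <p><b>Contact Information</b></p>

    <!-- If the phrase really is a heading, mark it as one -->
    <h2>Contact Information</h2>
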
Example: Identifying device-dependent event handlers
Mouse-dependent JavaScript triggers, such as onmouseover, may not work at all for people who can't use a mouse. A better approach is to use triggers that work for both mouse and keyboard users. Mouse-specific event handlers are not recommended, but they do not always break the accessibility of the script, depending on its purpose and functionality. FireEyes can flag mouse-specific event handlers as likely accessibility errors, but it can't interpret the intent of the script, so human judgment is required to determine whether the event handlers really do break the script's accessibility.
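One possible pattern (the link target and the showPreview function here are hypothetical):

    <!-- Flagged as a possible error: onmouseover works only for mouse users -->
    <a href="details.html" onmouseover="showPreview()">Product details</a>

    <!-- More robust: pair the mouse handler with its keyboard equivalent, onfocus -->
    <a href="details.html" onmouseover="showPreview()" onfocus="showPreview()">Product details</a>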

False Positives

Issues that the tool mistakenly reports as violations, even though they are not true violations, are considered false positives.

Missed Errors

Errors that exist on a page but that the tool fails to detect are considered missed errors (also called false negatives).

Example: Inaccurate alt text
If an image has alt text that appears legitimate, such as alt="a house with a white picket fence", FireEyes will not flag the alt text as an error, even if the image is really a drawing of a llama in the Andes mountains. In this case, the alt text is wrong, which makes it an accessibility violation, but FireEyes has no way of knowing that. A human must compare the alt text to the image to ensure the alt text is accurate.

Human judgment is an important part of the web accessibility testing process.