
Issue severity evaluations #103

Open
katekalcevich opened this issue Jul 12, 2024 · 1 comment

Comments

@katekalcevich

In response to the Editor's note: https://www.w3.org/TR/wcag-3.0/#issue-container-generatedID-68

As we continue developing this content, we seek input on the following:

  • Is every issue critical to someone, making this concept invalid?
  • How best to assign severity, particularly if testers have different ideas on what is critical?
  • How do we incorporate context/process/task? Is that part of scoping, or issue severity? Both are important to the end result.
  • What to do with non-critical issues?
  • If included, how will situations where severity depends on context be handled?
  • How will issue severity fit into levels? For example:
    “Bronze” could be an absence of any critical or high issues;
    “Silver” could be an absence of any critical, high, or medium issues.
  • How to account for cumulative issues becoming critical?
  • Would another approach be more effective, for example assigning critical issues after testing is complete based on task or type of task rather than by test?
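The Bronze/Silver example above can be sketched as a small decision function. This is purely illustrative: the severity names, level names, and thresholds are taken from the example in the list, not from any actual WCAG 3.0 draft rule.

```typescript
// Hypothetical sketch of the Bronze/Silver example from the editor's note.
// Names and thresholds are illustrative only, not part of WCAG 3.0.
type Severity = "critical" | "high" | "medium" | "low";

function conformanceLevel(issues: Severity[]): "Silver" | "Bronze" | "None" {
  const has = (s: Severity) => issues.includes(s);
  // Bronze requires the absence of any critical or high issues
  if (has("critical") || has("high")) return "None";
  // Silver additionally requires the absence of any medium issues
  if (has("medium")) return "Bronze";
  return "Silver";
}
```

One thing the sketch makes visible: under this scheme, low-severity issues never affect the level at all, which connects back to the open question above about what to do with non-critical issues.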

Amber Knabl, Elana Chapman, and I suggest that issue severity will always be subjective, but that doesn't make it invalid. Testers will have their own ideas on severity, but guidance on what constitutes high, medium, and low severity has always been part of accessibility testing tools and processes. We think about severity in terms of:

  • High - blocks the completion of a task flow
  • Medium - the user can't complete the task as expected (for example, it's hard to do, takes a long time, or requires a workaround)
  • Low - the user suggests a fix based on preference or ease of use; has to think about how to complete the task; or makes more than one attempt before succeeding
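The rubric above can be read as a decision function over what a tester observes during a session. A minimal sketch, with invented field names (nothing here is a real tool's API):

```typescript
// Hypothetical sketch of the High/Medium/Low rubric, expressed as a
// decision over tester observations. Field names are invented for
// illustration and are not from any real testing tool.
interface Observation {
  blocksTaskFlow: boolean;        // the user cannot complete the task flow at all
  completedAsExpected: boolean;   // completed without workarounds or unusual effort
}

function severity(obs: Observation): "high" | "medium" | "low" {
  if (obs.blocksTaskFlow) return "high";
  if (!obs.completedAsExpected) return "medium"; // hard, slow, or needed a workaround
  return "low"; // preference-level feedback, hesitation, or retries
}
```

The ordering of the checks encodes the rubric's priority: a blocked task flow dominates everything else, and only an otherwise-successful completion falls through to low.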

Prioritization of issues, including non-critical ones, will depend on a variety of factors, including team capacity and whether a product is developed in house, outsourced, or built on open source components or frameworks. Expanding WCAG beyond evaluating accessibility into how to action findings doesn't feel appropriate for a technical standard. Many other frameworks (maturity models, inclusive design practices) discuss what to do to improve accessibility.

Representative sampling could also be factored into issue severity, but that increases the complexity of WCAG, something we'd caution against. If an accessibility issue exists, it likely impacts more than one user even if you only have evidence of the issue from one user engagement.

@tlees

tlees commented Oct 16, 2024

A large problem with accessibility issues is that, as it stands today under the WCAG 2.2 guidelines, every issue is a critical issue if you must legally meet the conformance level. That doesn't help the people who are trying to prioritize which issues to fix. An important distinction is that all issues fail the guidelines in some way. Not every issue falls neatly into the Level A, AA, or AAA conformance levels, but the conformance level can be used as a metric to help inform the issue's priority, adjusted for your use case.

The standard that our team has developed is very similar to the one that @katekalcevich suggested. We hold our site to the A and AA conformance level as much as possible.

Critical: These issues prevent the user from completing the primary task. For example, on a retail website, this would mean issues that prevent the user from checking out, such as an add-to-bag button that doesn't work with a keyboard, which every keyboard user would encounter. On a website for a podcast, it would mean issues that prevent the user from accessing the podcast in an accessible manner. These issues almost certainly fail the Level A guidelines.

High: These issues prevent the user from continuing forward or block the completion of secondary tasks that not all users may encounter, and no alternative is provided. For example, on a forum, the user may be able to create a new post successfully, but the editing feature may not work, preventing the user from editing their post. These issues almost certainly fail A or AA guidelines, but are not in the primary user flow.

Medium: These are issues where the user has a bad experience, but functionality is not broken, and they don't prevent the user from using the site or tool. As @katekalcevich noted, these are issues where something is hard to do, takes a long time, or requires a workaround. These issues might have alternative content that passes using some accessibility techniques but fails with others. Usually, these issues will pass A guidelines but may fail AA or AAA guidelines.

Low: These are issues that are cosmetic or a poor experience but don't hinder the user. They are usually programmatic issues detected with automated tools that browsers or screen readers correct for automatically; they might be technically incorrect, but actual users would not even notice that they're wrong. These issues will likely pass A and AA guidelines but fail AAA guidelines.
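The four tiers above combine two signals: which WCAG 2.x level an issue fails against, and where the issue sits relative to the primary task flow. A hedged sketch of that combination, with an invented mapping that only approximates the prose descriptions:

```typescript
// Hypothetical sketch of the four-tier rubric above. The mapping is
// illustrative, not a formal rule; real triage would weigh context too.
type WcagLevel = "A" | "AA" | "AAA";

function tier(
  failsLevel: WcagLevel,   // strictest level the issue fails against
  inPrimaryFlow: boolean,  // is the affected task in the primary user flow?
  blocksTask: boolean      // does it block the task with no alternative?
): "critical" | "high" | "medium" | "low" {
  if (blocksTask && inPrimaryFlow) return "critical"; // e.g. checkout unusable via keyboard
  if (blocksTask) return "high";                      // secondary task blocked, no alternative
  if (failsLevel === "A" || failsLevel === "AA") return "medium"; // degraded but workable
  return "low";                                       // AAA-only or cosmetic
}
```

Making the two signals explicit is what lets a team separate "fails a guideline" from "how urgently it must be fixed", which is the gap the comment describes in WCAG 2.2.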

The main difference from the suggested prioritization technique is the inclusion of a critical severity. The critical severity was necessary for our team to explain that some users were not able to use the site at all, and that the issue needed to be fixed immediately rather than waiting for a new version.
