Add Outcome about Marking Up AI #90
I think we should think of AI in many ways like we would think of CODE. Code isn't accessible or inaccessible in itself; it is what is created with or by it that is. Accessibility is about interface, not function or origin. So AI itself is neither accessible nor inaccessible, and we don't need accessibility guidelines about AI, just about what it creates, which is what we are already doing. Other than AI fairness or bias (which are important, but not accessibility issues), what exactly are we seeing as an interface issue with AI?
So other than a fear of AI (of which we should all have a healthy dose), what exactly is the problem we see it creating for the accessibility of web content?
This outcome would not be for AI-generated content; it would be for an AI that is acting the way a human acts. For example, it could be a name in IRC talking to you, or it could be an avatar talking in a Zoom meeting. For a while, most people will be able to identify the "bot," but people with disabilities may have fewer hints. For example, the avatar's hands on Zoom would look a bit off at the moment, but a blind user wouldn't have that extra hint. Eventually, no one will be able to consistently tell, and this is a problem because all sorts of pranks, phishing schemes, etc. could take place. (For example, someone could discover that the person they've spent countless hours chatting with and worrying about is not real at all.)
The Introduction to the May 2024 draft of WCAG 3.0 asked "What outcomes needed to make web content accessible are missing?"
The idea of indicating something that is an AI is included in:
But AI might not always be a third party. Since an AI can provide unlimited attention to any one user, unlike anything we've seen before, tools should be able to block, flag, or warn users/guardians/educators about AI. An outcome like this might help:
Identify AI
AI (chat, avatar, voice) is programmatically marked as AI.
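As a rough illustration, here is a minimal sketch of what "programmatically marked as AI" could look like in a web chat interface. The `data-agent` attribute and the `ChatMessage` shape here are hypothetical, not an existing standard; the point is that assistive technology or a guardian's filtering tool could detect the marker without relying on visual hints like an avatar's hands.

```typescript
// Hypothetical sketch: marking a chat message as AI-authored so that
// assistive technology or filtering tools can detect it programmatically.
// The data-agent attribute is an assumption, not an existing standard.

interface ChatMessage {
  author: string;
  text: string;
  isAI: boolean; // set by the platform, not self-reported by the sender
}

function renderMessage(msg: ChatMessage): HTMLElement {
  const item = document.createElement("li");
  item.textContent = `${msg.author}: ${msg.text}`;

  if (msg.isAI) {
    // Machine-readable marker that survives even when visual cues don't.
    item.setAttribute("data-agent", "ai");
    // Expose the fact in the accessible name as well, so screen reader
    // users hear it without needing any visual hint.
    item.setAttribute("aria-label", `${msg.author} (AI): ${msg.text}`);
  }
  return item;
}

// A user agent, browser extension, or parental tool could then block or
// flag AI senders uniformly, whatever the specific AI behind them is:
function flagAIMessages(root: ParentNode): void {
  root.querySelectorAll('[data-agent="ai"]').forEach((el) => {
    el.classList.add("flagged-ai"); // styled or announced by the tool
  });
}
```

Whatever the exact mechanism, putting the marker in the markup rather than in the visible presentation is what makes it equally available to screen readers, filters, and other user agents.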
Another benefit is that unmarked AI can be treated as a bad practice regardless of the details of the AI. In some cases this eliminates the need to prove that whatever the AI was doing was itself a bad practice, which could be much more difficult to pinpoint and address quickly.