
Add Outcome about Marking Up AI #90

Open
SuzanneKTaylor opened this issue May 25, 2024 · 2 comments

Comments

@SuzanneKTaylor

The Introduction to the May 2024 draft of WCAG 3.0 asked "What outcomes needed to make web content accessible are missing?"

The idea of indicating something that is an AI is included in:

Indicate 3rd party content (Exploratory)
Third party content (AI, Advertising, etc.) is visually and programmatically indicated.

But AI might not always be third-party content. Because an AI can give unlimited, sustained attention to any single user, unlike anything we've seen before, tools should be able to block, flag, or warn users, guardians, and educators about AI. An outcome like this might help:

Identify AI
AI (chat, avatar, voice) is programmatically marked as AI.
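One way such an outcome might be satisfied is a machine-readable marker in the markup. This is purely a sketch: neither HTML, ARIA, nor any WCAG draft currently defines an attribute for identifying AI participants, so the `data-agent` attribute name below is hypothetical (HTML's `data-*` mechanism itself is standard).

```html
<!-- Hypothetical sketch: "data-agent" is an illustrative name only,
     not part of any current HTML, ARIA, or WCAG specification.
     role="log" and aria-label are standard ARIA. -->
<div role="log" aria-label="Chat with support agent" data-agent="ai">
  <p>Hello! How can I help you today?</p>
</div>
```

A programmatic marker like this would let assistive technologies, parental controls, or browser extensions detect, flag, or block AI participants without having to analyze the content itself.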

Another benefit is that unmarked AI could be treated as a bad practice regardless of what the AI is doing. In some cases this removes the need to prove that a specific AI behavior is harmful, which can be much harder to pinpoint and address quickly.

@GreggVan

I think we should think of AI in many ways like we would think of code. Code isn't accessible or inaccessible in itself; what matters is what is created with or by it. Accessibility is about the interface, not the function or origin. So AI is neither accessible nor inaccessible, and we don't need accessibility guidelines about AI itself, just about what it creates, which is what we are already doing. Other than AI fairness or bias (which are important, but not accessibility issues), what exactly are we seeing as an interface issue with AI?

  • We can say AI shouldn't be used to create text alternatives, but soon AI will create better ones (on average) than the average human author does. Later it may create better ones than nearly all human authors, and in between it could probably generate alternative text that a particular person prefers over text written by humans for "everyone".

So other than a fear of AI (which we should all have a healthy dose of) what exactly is the problem we see it creating for accessibility of web content?

@SuzanneKTaylor
Author

SuzanneKTaylor commented May 28, 2024

This outcome would not be for AI-generated content; it would be for an AI that is acting the way a human acts. For example, it could be a name in IRC talking to you, or an avatar speaking in a Zoom meeting. For a while, most people will be able to identify the "bot," but people with disabilities may have fewer hints. For example, the avatar's hands on Zoom would look a bit off at the moment, but a blind user wouldn't have that extra hint. Eventually, no one will be able to consistently tell, and this is a problem because all sorts of pranks, phishing schemes, etc., could take place. (For example, someone could discover that the person they've been chatting with for countless hours, and worrying about, is not real at all.)
