ML model to debias biased content #20
Comments
👋 Hi! This issue has been marked stale due to inactivity. If no further activity occurs, it will automatically be closed in 14 days.
Not stale or done.
👋 Hi! This issue has been marked stale due to inactivity. If no further activity occurs, it will automatically be closed in 14 days.
This is a very interesting related paper for this issue.
Background on the problem the feature will solve/improved user experience
People who use TakeTwo might want not only to detect racially biased content but also to receive help with debiasing it.
Describe the solution you'd like
Machine learning models that can propose debiased rewrites of content flagged as racially biased.
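As a rough illustration of one possible direction (not a committed design for this issue), the sketch below uses an off-the-shelf masked language model to suggest neutral replacement words for a term that the detection step has already flagged. The model choice (`bert-base-uncased`) and the `suggest_replacements` helper are assumptions for illustration; a fuller solution would likely need a sequence-to-sequence rewriting model trained on biased/debiased sentence pairs.

```python
from transformers import pipeline

# Minimal sketch, not TakeTwo's actual pipeline: use a generic masked
# language model to propose replacement words for a term that the
# detection step has already flagged as biased.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def suggest_replacements(text: str, flagged_term: str, top_k: int = 5):
    """Mask the flagged term and return (candidate, score) pairs."""
    # Replace only the first occurrence with the model's mask token.
    masked = text.replace(flagged_term, fill_mask.tokenizer.mask_token, 1)
    # The fill-mask pipeline returns dicts with 'token_str' and 'score'.
    return [(c["token_str"].strip(), c["score"])
            for c in fill_mask(masked, top_k=top_k)]

if __name__ == "__main__":
    # 'whitelist' stands in for any term the detector flagged.
    sentence = "The committee keeps a whitelist of approved vendors."
    for word, score in suggest_replacements(sentence, "whitelist"):
        print(f"{word}\t{score:.3f}")
```

Ranking candidates by the model's score gives the user a short list of suggested edits rather than a single automatic rewrite, which keeps a human in the loop.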
Tasks
Acceptance Criteria
Standards we believe this issue must reach to be considered complete and ready for a pull request, e.g. precisely what the user should be able to do with this update, plus performance and security requirements as appropriate.