Great repo! I have a question, and I hope someone can help.
Are the toxicity scores provided by the Unitary models probability scores, in the same way that the Perspective API returns these values?
"The only score type currently offered is a probability score. It indicates how likely it is that a reader would perceive the comment provided in the request as containing the given attribute. For each attribute, the scores provided represent a probability, with a value between 0 and 1. A higher score indicates a greater likelihood that a reader would perceive the comment as containing the given attribute. For example, a comment like “You are an idiot” may receive a probability score of 0.8 for attribute TOXICITY, indicating that 8 out of 10 people would perceive that comment as toxic. "
Or do they represent the extent of the toxicity?
Thanks so much!
I'm particularly interested in the thresholds you recommend for analysing whether something is 'toxic'. Do you recommend anything above 0? Or something more in line with the Perspective API, i.e. anything above 0.70 for social scientists?
The scores are probability scores, similar to the Perspective API ones. 0.7 sounds like a good starting point for a threshold, although this will vary depending on the use case and the tolerance for either false positives or false negatives.
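In case it helps future readers, here is a minimal sketch of how such a threshold could be applied to the scores, assuming the `detoxify` package's `Detoxify` class; the 0.7 cut-off is only the starting point mentioned above and should be tuned per use case:

```python
from detoxify import Detoxify

# Load one of the Unitary models (e.g. the "original" checkpoint
# trained on the Jigsaw toxic comment data).
model = Detoxify("original")

# predict() returns a dict of probability scores between 0 and 1,
# one per attribute (toxicity, severe_toxicity, obscene, ...).
scores = model.predict("You are an idiot")

# Flag a comment as toxic if the probability exceeds a chosen threshold.
# 0.7 is a starting point; lower it to catch more (more false positives),
# raise it to be stricter (more false negatives).
THRESHOLD = 0.7
is_toxic = scores["toxicity"] >= THRESHOLD

print(scores["toxicity"], is_toxic)
```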