Coherence Evaluator
| | |
| --- | --- |
| Score range | Integer [1-5], where 1 is bad and 5 is good |
| What is this metric? | Measures how well the language model can produce output that flows smoothly, reads naturally, and resembles human-like language. |
| How does it work? | The coherence measure assesses the ability of the language model to generate text that reads naturally, flows smoothly, and resembles human-like language in its responses. |
| When to use it? | Use it when assessing the readability and user-friendliness of your model's generated responses in real-world applications. |
| What does it need as input? | Query, Generated Response |
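
The sketch below shows one way to run this evaluator programmatically, assuming the `azure-ai-evaluation` Python SDK's `CoherenceEvaluator` class and an Azure OpenAI deployment acting as the judge model; the endpoint, API key, and deployment name are placeholders, not values from this page.

```python
# Minimal sketch: scoring a query/response pair with the Coherence evaluator
# via the azure-ai-evaluation SDK. Endpoint, key, and deployment values are
# placeholders that must be replaced with your own Azure OpenAI settings.
from azure.ai.evaluation import CoherenceEvaluator

# Configuration for the judge model (placeholder values).
model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "api_key": "<your-api-key>",
    "azure_deployment": "<your-deployment-name>",
}

coherence = CoherenceEvaluator(model_config=model_config)

# The evaluator takes the two inputs listed in the table above:
# the query and the generated response.
result = coherence(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
)

# The result contains a coherence score on the 1-5 scale described above,
# e.g. {"coherence": 5.0, ...}.
print(result)
```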
Version: 2
Preview
hiddenlayerscanned
View in Studio: https://ml.azure.com/registries/azureml/models/Coherence-Evaluator/version/2
is-promptflow: True
is-evaluator: True
show-artifact: True
_default-display-file: ./evaluator/prompt.jinja2