
github-actions[bot] edited this page May 17, 2024 · 7 revisions

# Coherence-Evaluator

## Overview

| Aspect | Description |
| --- | --- |
| Score range | Integer [1-5], where 1 is the worst and 5 is the best |
| What is this metric? | Measures how well the language model produces output that flows smoothly, reads naturally, and resembles human-like language. |
| How does it work? | The evaluator rates the generated answer in the context of the question: whether it reads naturally, flows smoothly, and resembles human-like language, returning an integer score from 1 to 5. |
| When to use it? | Use it when assessing the readability and user-friendliness of your model's generated responses in real-world applications. |
| What does it need as input? | Question, Generated Answer |
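To illustrate the score contract above, here is a minimal, hypothetical sketch (not part of the evaluator itself; its actual parsing lives inside the promptflow definition) that extracts a judge model's raw reply into the documented integer range [1, 5]:

```python
import re

def parse_coherence_score(raw_reply: str) -> int:
    """Extract the first integer from a judge model's reply and
    validate it against the documented score range [1, 5].

    Hypothetical helper for illustration only.
    """
    match = re.search(r"\d+", raw_reply)
    if match is None:
        raise ValueError(f"No score found in reply: {raw_reply!r}")
    score = int(match.group())
    if not 1 <= score <= 5:
        raise ValueError(f"Score {score} is outside the valid range [1, 5]")
    return score
```

For example, `parse_coherence_score("Score: 4")` returns `4`, while a reply with no in-range integer raises `ValueError`.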

Version: 1

## Tags

`Preview` `hiddenlayerscanned`

View in Studio: https://ml.azure.com/registries/azureml/models/Coherence-Evaluator/version/1

## Properties

- **is-promptflow**: True
- **is-evaluator**: True
- **show-artifact**: True
- **_default-display-file**: `./evaluator/flow/prompt.jinja2`
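The default display file is a Jinja2 prompt template that combines the question and generated answer into the judge prompt. The template's actual contents are not shown on this page; the sketch below is a hypothetical stand-in that uses the standard library's `string.Template` (rather than Jinja2) to keep it dependency-free, with invented prompt wording:

```python
from string import Template

# Hypothetical stand-in for ./evaluator/flow/prompt.jinja2; the real
# template text is not reproduced here. Placeholders mirror the
# evaluator's documented inputs (question, generated answer).
PROMPT = Template(
    "Rate the coherence of the answer on an integer scale of 1 to 5.\n"
    "question: $question\n"
    "answer: $answer\n"
    "score:"
)

def build_prompt(question: str, answer: str) -> str:
    """Fill the template with one question/answer pair."""
    return PROMPT.substitute(question=question, answer=answer)
```

In a promptflow evaluator, the rendered prompt is sent to a judge model, whose reply is then parsed into the score.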
