
# Relevance-Evaluator

## Overview

| | |
| --- | --- |
| Score range | Integer [1-5], where 1 is bad and 5 is good |
| What is this metric? | Measures the extent to which the model's generated responses are pertinent and directly related to the given questions. |
| How does it work? | The relevance measure assesses how well answers capture the key points of the context. High relevance scores signify the AI system's understanding of the input and its ability to produce coherent, contextually appropriate outputs. Conversely, low relevance scores indicate that generated responses may be off-topic, lack context, or fall short of addressing the user's intended queries. |
| When to use it? | Use the relevance metric when evaluating the AI system's performance in understanding the input and generating contextually appropriate responses. |
| What does it need as input? | Query, Context, Generated Response |
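
For context, here is a minimal sketch of how an evaluator like this is typically invoked programmatically via the promptflow-evals SDK. The import paths, argument names, placeholder connection details, and output key below are assumptions based on that SDK, not details taken from this page:

```python
# Minimal sketch, assuming the promptflow-evals SDK's RelevanceEvaluator.
from promptflow.core import AzureOpenAIModelConfiguration
from promptflow.evals.evaluators import RelevanceEvaluator

# Hypothetical Azure OpenAI connection details; substitute your own deployment.
model_config = AzureOpenAIModelConfiguration(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    azure_deployment="<your-deployment-name>",
)

relevance_eval = RelevanceEvaluator(model_config)

# The three inputs listed above: the user's query, the grounding context,
# and the model's generated response.
result = relevance_eval(
    question="What is the capital of France?",
    context="France is a country in Western Europe. Its capital is Paris.",
    answer="The capital of France is Paris.",
)

print(result)  # e.g. {"gpt_relevance": 5.0}
```

The evaluator prompts a judge model with the query, context, and generated response and returns a relevance score in the 1-5 range described above.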

Version: 2

## Tags

- Preview
- hiddenlayerscanned

View in Studio: https://ml.azure.com/registries/azureml/models/Relevance-Evaluator/version/2

## Properties

- is-promptflow: True
- is-evaluator: True
- show-artifact: True
- _default-display-file: ./evaluator/prompt.jinja2
