

github-actions[bot] edited this page Sep 8, 2023 · 10 revisions

# classification-accuracy-eval

## Overview

**Description**: The "Classification Accuracy Evaluation" is a model designed to assess the effectiveness of a data classification system. It matches each prediction against the ground truth and assigns a "Correct" or "Incorrect" grade to that sample. The cumulative results are then used to compute performance metrics, such as accuracy, giving an overall measure of the system's proficiency at data classification.

### Inference samples

Inference type|Python sample (Notebook)|CLI with YAML
--|--|--
Real time|deploy-promptflow-model-python-example|deploy-promptflow-model-cli-example
Batch|N/A|N/A

### Sample inputs and outputs (for real-time inference)

#### Sample input

```json
{
  "inputs": {
    "groundtruth": "App",
    "prediction": "App"
  }
}
```

#### Sample output

```json
{
  "outputs": {
    "grade": "Correct"
  }
}
```
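The grading and aggregation logic described above can be sketched as follows. This is a minimal illustration of the evaluation idea, not the flow's actual implementation; the function names and exact-match comparison are assumptions.

```python
def grade(groundtruth: str, prediction: str) -> str:
    """Grade one sample: 'Correct' if the prediction matches the
    ground truth exactly, 'Incorrect' otherwise (assumed comparison)."""
    return "Correct" if prediction == groundtruth else "Incorrect"


def accuracy(grades: list[str]) -> float:
    """Aggregate per-sample grades into an overall accuracy metric."""
    if not grades:
        return 0.0
    return grades.count("Correct") / len(grades)


if __name__ == "__main__":
    # Three labeled samples: (groundtruth, prediction)
    samples = [("App", "App"), ("PDF", "App"), ("Web", "Web")]
    grades = [grade(gt, pred) for gt, pred in samples]
    print(grades)            # ['Correct', 'Incorrect', 'Correct']
    print(accuracy(grades))  # 0.666...
```

The sample input/output pair above corresponds to a single call to `grade`; the accuracy metric is produced only after all samples have been graded.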

**Version**: 2

**View in Studio**: https://ml.azure.com/registries/azureml/models/classification-accuracy-eval/version/2

## Properties

**is-promptflow**: True

**azureml.promptflow.section**: gallery

**azureml.promptflow.type**: evaluate

**azureml.promptflow.name**: Classification Accuracy Eval

**azureml.promptflow.description**: Measuring the performance of a classification system by comparing its outputs to groundtruth.

**inference-min-sku-spec**: 2|0|14|28

**inference-recommended-sku**: Standard_DS3_v2
