Since a rating is linked to a category of products, and the selection of which category a product lives in is mostly subjective (is AirTable a Database or a Spreadsheet? Is Grammarly an AI Copilot or a Word Processor?), you might re-categorize your product into one where your rating for the same score is more favorable.
This is also hard. How do we decide the buckets without data? New products come on the market with overlapping feature sets, which makes like-for-like comparison very challenging.
Equivalence
Publishing a book on Amazon under an obscure category so you can claim to be top of the charts with minimal sales.
This is an area where there is a big difference between the digital and physical worlds. It's very hard to claim a Fridge Freezer is an Oven, so Energy Star does not have this problem. The categorization of a software product can be a very subjective decision. Most organizations spend significant time trying to differentiate from their competitors, so they are well placed to put forward arguments for recategorization or category creation.
Counter
First, gather data by encouraging transparent reporting. If the label starts out as just a record of transparency, like a food ingredients label, with no ratings, then there is no need for categorization. Once we have a large body of public, open, transparent data, we can find useful clusterings, and perhaps other forms of categorization will surface from the data itself.
Categorize based on objective traits rather than features.
Link categories directly to Functional Units. For instance, if in the LLM space the Functional Unit is "Prompt" as per the SCI for AI specification, then moving to, say, the Word Processor category would change the Functional Unit to perhaps "Character"; a rough sketch of the effect is below.
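To make that last point concrete, here is a minimal sketch of how switching category, and therefore Functional Unit, changes the figure a rating would be computed from. The numbers and function name are purely illustrative assumptions, not taken from the SCI for AI or SCER specs:

```python
# Minimal sketch: the same total emissions normalised by two different
# Functional Units produce figures in different units, so the product would
# be ranked against a different peer group. All values are invented.

def carbon_per_functional_unit(total_emissions_g: float, units: int) -> float:
    """Total operational emissions divided by the number of Functional Units."""
    return total_emissions_g / units

total_emissions_g = 500.0       # hypothetical emissions for a reporting period
prompts_served = 10_000         # Functional Unit if rated as an LLM ("Prompt")
characters_served = 4_000_000   # Functional Unit if rated as a Word Processor ("Character")

print(carbon_per_functional_unit(total_emissions_g, prompts_served))     # 0.05 g per prompt
print(carbon_per_functional_unit(total_emissions_g, characters_served))  # 0.000125 g per character
```

If the rating thresholds are defined per category, recategorizing the product changes both the unit and the peer group the same measurement is judged against.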
This is related to the definition of the first step of the SCER specification: Categorization. What does Categorization mean? What constitutes a software category? What are the key aspects or components that a category of software or AI models is composed of, so that apples are compared with apples, not oranges? The base SCER spec identified a number of key aspects/components of a software categorization, such as purpose, function, platform, and end user. The SCER for LLMs spec identified a number of aspects as well, such as model type (text, image, voice, etc.), parameter size (3b, 7b, etc.), and use/tasks (multi-modal, etc.).
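To illustrate how those aspects could hang together, here is a rough sketch of a category captured as a structured record. The field names follow the aspects listed above; everything else (types, defaults, the comparability check) is an assumption for illustration, not something defined by the SCER spec:

```python
# Sketch: a software category as a record of the aspects named above.
from dataclasses import dataclass, field

@dataclass
class SoftwareCategory:
    purpose: str    # e.g. "text generation"
    function: str   # e.g. "chat assistant"
    platform: str   # e.g. "cloud API"
    end_user: str   # e.g. "developer"

@dataclass
class LLMCategory(SoftwareCategory):
    model_type: str = "text"     # text, image, voice, ...
    parameter_size: str = "7b"   # 3b, 7b, ...
    tasks: list[str] = field(default_factory=list)  # use/tasks

# Two products are only compared "apples with apples" if their category
# records match on every aspect.
def comparable(a: SoftwareCategory, b: SoftwareCategory) -> bool:
    return a == b
```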
The implementers of the SCER framework may decide what a category concretely is. For example, in the case of greencoding.ai, users can choose which LLMs to issue the prompt to, and an SCER rating/label can be returned based on all the prompts the user has issued, across whatever LLMs were used (llama2, llama3, mistral, gemma, tinyllama, etc.).
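A hedged sketch of that flow: collect per-prompt measurements across whichever LLMs the user chose, then derive a single rating. The measurement values, thresholds, and letter bands are all hypothetical assumptions, not taken from greencoding.ai or the spec:

```python
# Sketch: aggregate per-prompt emissions across models into one rating.
from statistics import mean

# (model, grams CO2e for one prompt) -- values invented for illustration
prompt_log = [
    ("llama3",    0.042),
    ("mistral",   0.037),
    ("gemma",     0.029),
    ("tinyllama", 0.011),
    ("llama3",    0.045),
]

def rate(grams_per_prompt: float) -> str:
    """Map average g CO2e per prompt onto a letter band (thresholds invented)."""
    if grams_per_prompt < 0.02:
        return "A"
    if grams_per_prompt < 0.04:
        return "B"
    return "C"

average = mean(g for _, g in prompt_log)
print(f"{average:.3f} g CO2e per prompt -> rating {rate(average)}")
```

Here the "category" is effectively defined by the Functional Unit (the prompt) rather than by what kind of product the user thinks they are using, which lines up with the counter-argument above about tying categories to Functional Units.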