
Add zero-shot evaluation results #4

Open
LeeShiyang opened this issue Apr 21, 2023 · 1 comment
Comments

@LeeShiyang

Hi all, I read the code and realized that the results were obtained with 3-shot demonstrations. However, some models were trained to follow instructions without demonstrations. These models may have better relative zero-shot performance (i.e., rank higher in the result table) than under the current few-shot setting. It would be great if you could add zero-shot evaluation results. Thank you.
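For context, the difference between the two settings comes down to how the prompt is built: a k-shot prompt prepends k solved demonstrations before the test question, while a zero-shot prompt contains only the test question. A minimal sketch (this is illustrative, not the repository's actual evaluation code; the question/choice data are made-up examples):

```python
def format_question(question, choices):
    """Render one multiple-choice question with lettered options."""
    lines = [question]
    for label, choice in zip("ABCD", choices):
        lines.append(f"{label}. {choice}")
    lines.append("Answer:")
    return "\n".join(lines) + " "

def build_prompt(question, choices, demos=(), k=0):
    """Build an evaluation prompt with the first k demonstrations prepended.

    k=0 yields a zero-shot prompt (test question only); k=3 corresponds
    to the 3-shot setting discussed in this issue.
    """
    parts = []
    for demo_q, demo_choices, demo_answer in list(demos)[:k]:
        parts.append(format_question(demo_q, demo_choices) + demo_answer + "\n\n")
    parts.append(format_question(question, choices))
    return "".join(parts)
```

An instruction-tuned model can be scored with `k=0`, while the few-shot setting simply passes the same demonstrations with `k=3`; everything after the final "Answer:" is what the model is asked to complete.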

@chiayewken
Collaborator

Good point, it may be worth investigating the zero-shot performance as well. We will try to add the zero-shot results for MMLU in the next few weeks. For now, we have reported the zero-shot results for HumanEval in the readme table (last column).
