I think it would be really interesting to collect a list of prompts for each "text->image" task (e.g. …) and then turn that into a benchmark, so we would know what each model's strengths really are.
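As a rough illustration of what "a list of prompts per task, turned into a benchmark" could look like, here is a minimal sketch. The task names, prompt lists, and the `generate` / `score` callables are all placeholder assumptions, not a settled design; the real prompts and metrics are exactly what this issue proposes to collect.

```python
# Minimal sketch of a prompt-list benchmark, assuming per-task prompts and a
# per-image scoring function (e.g. CLIP similarity or a human rating) are
# supplied by the user. All names here are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class TextToImageTask:
    """One "text->image" task: a named list of prompts plus a scoring function."""
    name: str
    prompts: List[str]
    # score(prompt, image) -> float, higher is better
    score: Callable[[str, object], float]


def run_benchmark(tasks: List[TextToImageTask],
                  generate: Callable[[str], object]) -> Dict[str, float]:
    """Average each task's score over its prompts for one model's generate()."""
    results: Dict[str, float] = {}
    for task in tasks:
        scores = [task.score(p, generate(p)) for p in task.prompts]
        results[task.name] = sum(scores) / len(scores)
    return results
```

Reporting one averaged score per task (rather than a single global number) is what would let us see where each model is strong or weak.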
I'm sure I'm not the first one to have this idea, so the first task would be collecting papers that have already worked on this topic.
similar to https://github.com/EleutherAI/lm-evaluation-harness
here’s the plan https://docs.google.com/document/d/18uAhB_99lTsfhKkBd6tiKbMhZcQ9hErLgJqEWuTOcQY/edit
That seems to be a plan for evaluating the retrieval task from #13. This issue is about generative models.
This would be super cool. I haven't read it yet, but this paper seems very relevant: https://arxiv.org/abs/2202.04053