adding initial code drop for llm finetune #698
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
…(c) adding support for mllogger
Fix eval batch size, add Dockerfile, improve logging, remove unused code, freeze requirements
```
git clone https://github.com/mlperf/logging.git mlperf-logging
pip install -e mlperf-logging
```
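For reference, emitting compliance events with the `mlperf-logging` package installed above typically looks like the sketch below. The log filename and the benchmark value are illustrative assumptions, not values taken from this PR.

```python
# Minimal sketch of MLPerf-compliant logging with mlperf-logging.
# The filename and benchmark value here are illustrative assumptions.
from mlperf_logging import mllog
from mlperf_logging.mllog import constants

mllog.config(filename="benchmark.log")  # hypothetical log path
mllogger = mllog.get_mllogger()

# Record a metadata event, then bracket the run with start/stop markers.
mllogger.event(key=constants.SUBMISSION_BENCHMARK, value="llm_finetune")
mllogger.start(key=constants.RUN_START)
# ... training loop would run here ...
mllogger.end(key=constants.RUN_STOP, metadata={"status": "success"})
```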
## Download Data and Model
@itayhubara could you please include a short description of the dataset used by the benchmark (training & eval) and some text capturing its size (e.g., number of samples or tokens, storage size in GB)?
Merging initial code drop for fine-tuning
No description provided.