Added other dependencies and clarification about HF models #11
base: main
Conversation
Fix cases where non-standard llama model path names get bypassed in the tokenizer check (a sketch of the check follows this commit list). The tokenizer is initialized with use_fast=True, and qlora requires transformers >4.29.2, so the only possible tokenizer is LlamaTokenizerFast.
Fixes a copy-paste error where per_device_train_batch_size was set twice.
Check for LlamaTokenizerFast rather than inferring the type from the path name.
Fix link to the inference notebook.
Set per_device_eval_batch_size in finetune.sh
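For illustration only, a minimal Python sketch of the type-based check the first commit describes; the checkpoint name and the placeholder action are assumptions, not the PR's actual diff:

```python
from transformers import AutoTokenizer, LlamaTokenizerFast

# Hypothetical checkpoint name; any local or Hub LLaMA path works here.
model_name_or_path = "huggyllama/llama-7b"

# qlora loads the tokenizer with use_fast=True, and on transformers >4.29.2
# a LLaMA checkpoint then always resolves to LlamaTokenizerFast.
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# Branching on the concrete type catches non-standard path names that a
# substring test such as `"llama" in model_name_or_path` would bypass.
if isinstance(tokenizer, LlamaTokenizerFast):
    print("LLaMA tokenizer detected; apply LLaMA-specific token handling here")
```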
Thank you for contributing! A small suggestion, but otherwise this would be very helpful.
README.md
```diff
@@ -37,11 +37,12 @@ pip install -q -U bitsandbytes
 pip install -q -U git+https://github.com/huggingface/transformers.git
 pip install -q -U git+https://github.com/huggingface/peft.git
 pip install -q -U git+https://github.com/huggingface/accelerate.git
+pip install -U datasets evaluate scipy nltk
```
I think these might change. It would be better to put these in a requirements.txt file. Also, note that we removed the nltk dependency.
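As a sketch of that suggestion, a requirements.txt built from the README snippet above might look like the following (nltk omitted per the note that it was removed; the git URLs keep the development-version installs):

```text
bitsandbytes
git+https://github.com/huggingface/transformers.git
git+https://github.com/huggingface/peft.git
git+https://github.com/huggingface/accelerate.git
datasets
evaluate
scipy
```

It would then be installed with `pip install -U -r requirements.txt`.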
When trying to run the fine-tuning example, I noticed this library needs some additional dependencies not mentioned in the README.