Adding unit tests for each tuning type #79
Conversation
Merge first init
Signed-off-by: Sukriti-Sharma4 <[email protected]>
…ck/minor_fixes allow disabling flash attn and torch dtype param
* first refactor on train
* fix: None peft type for fine tuning
* accept None as tuningconfig
* correct docstrings
* refactor of get_hf_peft_config
* semantics for python 3.9
* support python 3.9
* check accumulate steps > 0

Signed-off-by: Sukriti-Sharma4 <[email protected]>
Signed-off-by: Sukriti-Sharma4 <[email protected]>
…ack/fix_parse_args fix: the way args are passed
…ack/lchu-ibm-patch-1 fix full param tuning
* Add PR template
* Add Makefile
* Update .github/ISSUE_TEMPLATE/user_story.md
* Update .github/pull_request_template.md

Signed-off-by: ted chang <[email protected]>
Co-authored-by: Sukriti Sharma <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]> Co-authored-by: Kelly A <[email protected]> Co-authored-by: Anh-Uong <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
* add sample unit test
* Update tests/utils/test_data_type_utils.py
* Update tests/utils/test_data_type_utils.py
* re-raise ValueError instead of exit

Signed-off-by: ted chang <[email protected]>
Co-authored-by: Sukriti Sharma <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
…ner_image Initial commit for trainer image
Signed-off-by: Thara Palanivel <[email protected]>
requirements.txt (Outdated)

```diff
@@ -1,7 +1,7 @@
 numpy
 accelerate>=0.20.3
 packaging
-transformers>=4.34.1
+transformers>=4.34.1,<4.38.0
```
Specifying an upper bound due to the error below, caused by a recent change in the transformers `trainer.py` library:

```
self.is_fsdp_xla_v2_enabled = args.fsdp_config["xla_fsdp_v2"]
KeyError: 'xla_fsdp_v2'
```

Is `xla_fsdp_v2` a required field in the config? If so, do we update ours?
The reason you are seeing this error is that `TrainingArguments` is not initialized properly: in the code we inherit from the dataclass but never call `.__post_init__()`. PR #53 fixes this issue.
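A minimal, self-contained sketch of that failure mode (toy dataclasses, not this repo's actual code): a subclass that overrides `__post_init__` without calling the parent's skips the normalization that populates derived fields such as `fsdp_config`, which is exactly where the `KeyError` comes from.

```python
from dataclasses import dataclass, field


@dataclass
class BaseArgs:
    # stands in for transformers.TrainingArguments, whose __post_init__
    # normalizes fsdp_config and fills in keys such as "xla_fsdp_v2"
    fsdp_config: dict = field(default_factory=dict)

    def __post_init__(self):
        self.fsdp_config.setdefault("xla_fsdp_v2", False)


@dataclass
class BrokenArgs(BaseArgs):
    def __post_init__(self):
        pass  # never calls super().__post_init__(), so fsdp_config stays empty


@dataclass
class FixedArgs(BaseArgs):
    def __post_init__(self):
        super().__post_init__()  # parent defaults are applied


print("xla_fsdp_v2" in BrokenArgs().fsdp_config)  # False -> later KeyError
print(FixedArgs().fsdp_config["xla_fsdp_v2"])     # False (key present, no crash)
```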
Signed-off-by: Thara Palanivel <[email protected]>
Not sure it's a good idea to commit 17 MB binary files into a git repo.
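One possible alternative, sketched under assumptions (a pytest fixture and a tiny Llama config; the fixture name and config values are illustrative, not this PR's actual code): generate a small randomly initialized model at test time instead of committing a binary.

```python
import pytest
from transformers import LlamaConfig, LlamaForCausalLM


@pytest.fixture(scope="session")
def dummy_model_path(tmp_path_factory):
    # a few-hundred-KB random model instead of a committed 17 MB binary
    cfg = LlamaConfig(
        vocab_size=128,
        hidden_size=16,
        intermediate_size=32,
        num_hidden_layers=2,
        num_attention_heads=2,
    )
    path = tmp_path_factory.mktemp("models") / "tiny-causal-lm"
    LlamaForCausalLM(cfg).save_pretrained(path)
    return str(path)
```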
tests/helpers.py (Outdated)

```python
from tuning.config import configs, peft_config


def causal_lm_train_kwargs(train_kwargs):
```
This has come up a few times now: this code is duplicated in the launch_training script and is now duplicated again in the tests. It may be worthwhile to refactor the argument parsing into its own method that can be unit-tested on its own and called from both places.
Good point @anhuong, I'll move that fn outside of tests/ for now but leave the launch_training.py refactoring for a separate PR.
@anhuong, is `tuning/utils/config_utils.py` a good place for this function?
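For illustration, a hedged sketch of what that shared helper could look like in `tuning/utils/config_utils.py` (the function name `parse_train_kwargs` and the exact dataclass tuple are assumptions, not the PR's final code):

```python
from transformers import HfArgumentParser

from tuning.config import configs


def parse_train_kwargs(train_kwargs: dict):
    """Parse a raw kwargs dict into the tuning config dataclasses,
    so launch_training.py and the tests share one code path."""
    parser = HfArgumentParser(
        (configs.ModelArguments, configs.DataArguments, configs.TrainingArguments)
    )
    model_args, data_args, training_args = parser.parse_dict(
        train_kwargs, allow_extra_keys=True
    )
    return model_args, data_args, training_args
```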
Signed-off-by: Thara Palanivel <[email protected]>
f9a703f to 6d110f5 (Compare)
Description of the change
Adding unit tests for `pt` and `lora` tuning methods using a dummy model.

Related issue number
Closes #74
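For context, a sketch of the shape such a parametrized test might take. This assumes the repo's `sft_trainer.train` entry point and `peft_config` dataclasses; the `model_args`, `data_args`, and `train_args` fixtures are hypothetical stand-ins for the dummy-model setup.

```python
import pytest

from tuning import sft_trainer
from tuning.config import peft_config


@pytest.mark.parametrize(
    "tune_config",
    [
        None,  # full fine tuning
        peft_config.LoraConfig(r=8),
        peft_config.PromptTuningConfig(num_virtual_tokens=8),
    ],
)
def test_train_runs_for_each_tuning_type(
    tune_config, model_args, data_args, train_args
):
    # smoke test: training on the dummy model should complete for every
    # tuning type without raising
    sft_trainer.train(model_args, data_args, train_args, tune_config)
```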
How to verify the PR
Was the PR tested