
Adding unit tests for each tuning type #79

Closed

Conversation

@tharapalanivel (Collaborator) commented Mar 5, 2024

Description of the change

Adds unit tests for the prompt tuning (pt) and LoRA tuning methods using a dummy model; a sketch of the test pattern appears after the checklist below.

Related issue number

Closes #74

How to verify the PR

Was the PR tested

  • I have added >=1 unit test(s) for every new method I have added.
  • I have ensured all unit tests pass.
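For reference, a minimal sketch of the test pattern (the dummy-model path, data path, config field names, and the train() signature are assumptions for illustration, not the PR's actual code):

```python
# Illustrative sketch of a LoRA tuning unit test -- paths and field
# names are hypothetical placeholders.
import os
import tempfile

from tuning import sft_trainer
from tuning.config import configs, peft_config

MODEL_PATH = "tests/fixtures/tiny-llama"      # hypothetical dummy model
DATA_PATH = "tests/fixtures/train_data.json"  # hypothetical dataset


def test_run_causallm_lora():
    with tempfile.TemporaryDirectory() as tmpdir:
        model_args = configs.ModelArguments(model_name_or_path=MODEL_PATH)
        data_args = configs.DataArguments(training_data_path=DATA_PATH)
        train_args = configs.TrainingArguments(
            output_dir=tmpdir,
            num_train_epochs=1,
            per_device_train_batch_size=1,
        )
        tune_config = peft_config.LoraConfig(r=8, lora_alpha=32)

        sft_trainer.train(model_args, data_args, train_args, tune_config)

        # A successful run should leave checkpoint artifacts behind.
        assert os.listdir(tmpdir)
```

The prompt tuning case follows the same shape with peft_config.PromptTuningConfig swapped in for the LoRA config.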

raghukiran1224 and others added 30 commits December 4, 2023 10:14
…ck/minor_fixes

allows disable flash attn and torch dtype param
* first refactor on train

Signed-off-by: Sukriti-Sharma4 <[email protected]>

* fix:None peft type for fine tuning

Signed-off-by: Sukriti-Sharma4 <[email protected]>

* accept None as tuningconfig

Signed-off-by: Sukriti-Sharma4 <[email protected]>

* correct docstrings

Signed-off-by: Sukriti-Sharma4 <[email protected]>

* refactor of get_hf_peft_config

Signed-off-by: Sukriti-Sharma4 <[email protected]>

* semantics for python 3.9

Signed-off-by: Sukriti-Sharma4 <[email protected]>

* support python 3.9

Signed-off-by: Sukriti-Sharma4 <[email protected]>

* check accumulate steps>0

Signed-off-by: Sukriti-Sharma4 <[email protected]>

---------

Signed-off-by: Sukriti-Sharma4 <[email protected]>
Signed-off-by: Sukriti-Sharma4 <[email protected]>
…ack/fix_parse_args

fix : the way args are passed
…ack/lchu-ibm-patch-1

fix full param tuning
tedhtchang and others added 15 commits February 29, 2024 11:50
* Add PR template

Signed-off-by: ted chang <[email protected]>

* Add Makefile

Signed-off-by: ted chang <[email protected]>

* Update .github/ISSUE_TEMPLATE/user_story.md

Co-authored-by: Sukriti Sharma <[email protected]>
Signed-off-by: ted chang <[email protected]>

* Update .github/pull_request_template.md

Co-authored-by: Sukriti Sharma <[email protected]>
Signed-off-by: ted chang <[email protected]>

---------

Signed-off-by: ted chang <[email protected]>
Co-authored-by: Sukriti Sharma <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>

Co-authored-by: Kelly A <[email protected]>
Co-authored-by: Anh-Uong <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
* add sample unit test

Signed-off-by: ted chang <[email protected]>

* Update tests/utils/test_data_type_utils.py

Co-authored-by: Sukriti Sharma <[email protected]>
Signed-off-by: ted chang <[email protected]>

* Update tests/utils/test_data_type_utils.py

Co-authored-by: Sukriti Sharma <[email protected]>
Signed-off-by: ted chang <[email protected]>

* re-raise valueError instead of exit

Signed-off-by: ted chang <[email protected]>

---------

Signed-off-by: ted chang <[email protected]>
Co-authored-by: Sukriti Sharma <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
…ner_image

Initial commit for trainer image
Signed-off-by: Thara Palanivel <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
requirements.txt Outdated
```diff
@@ -1,7 +1,7 @@
 numpy
 accelerate>=0.20.3
 packaging
-transformers>=4.34.1
+transformers>=4.34.1,<4.38.0
```
@tharapalanivel (Collaborator, Author) commented Mar 5, 2024

Specifying an upper bound because of the error below, caused by a recent change in transformers' trainer.py:

```
self.is_fsdp_xla_v2_enabled = args.fsdp_config["xla_fsdp_v2"]
KeyError: 'xla_fsdp_v2'
```

Is xla_fsdp_v2 a required field in the config? If so, do we update ours?

A collaborator replied:

The reason you are seeing this error is that TrainingArguments is not initialized properly: in the code we inherit from the dataclass but never call .__post_init__. PR #53 fixes this issue.
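For illustration, a minimal sketch of the fix described above (the subclass and its extra field are hypothetical; the point is only the super().__post_init__() call):

```python
# Hypothetical subclass of transformers' TrainingArguments.
from dataclasses import dataclass

import transformers


@dataclass
class TrainingArguments(transformers.TrainingArguments):
    # Hypothetical extra field added on top of the upstream arguments.
    packing: bool = False

    def __post_init__(self):
        # Without this call the parent's __post_init__ never runs, so
        # derived state such as fsdp_config (which newer transformers
        # populates with "xla_fsdp_v2") is left unset, and the trainer
        # later fails with the KeyError shown above.
        super().__post_init__()
```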

Signed-off-by: Thara Palanivel <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
@fabianlim (Collaborator)

Not sure if it's a good idea to commit 17 MB binary files into a git repo.
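One common alternative, sketched here under assumed names (this is not the PR's code): generate a tiny random-weight model in a session-scoped pytest fixture instead of committing binaries.

```python
# tests/conftest.py -- hypothetical fixture, shown as an alternative to
# checking model binaries into the repo.
import pytest
from transformers import LlamaConfig, LlamaForCausalLM


@pytest.fixture(scope="session")
def dummy_model_path(tmp_path_factory):
    path = tmp_path_factory.mktemp("models") / "tiny-llama"
    # Deliberately tiny config: random weights are enough for smoke tests
    # that only check the training loop runs end to end.
    config = LlamaConfig(
        hidden_size=16,
        intermediate_size=32,
        num_hidden_layers=1,
        num_attention_heads=2,
        num_key_value_heads=2,
    )
    LlamaForCausalLM(config).save_pretrained(path)
    return str(path)
```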

tests/test_sft_trainer.py — 5 review threads (outdated, resolved)
tests/helpers.py (outdated):

```python
from tuning.config import configs, peft_config


def causal_lm_train_kwargs(train_kwargs):
```
@anhuong (Collaborator)

This has come up a few times now: this code is duplicated in the launch_training script and is now duplicated in tests. It seems worthwhile to refactor the argument parsing into its own method that can be tested separately in unit tests and called in both places.

@tharapalanivel (Collaborator, Author)

Good point @anhuong, I'll move that function outside of tests/ for now but leave the launch_training.py refactoring for a separate PR.

@tharapalanivel (Collaborator, Author)

@anhuong is tuning/utils/config_utils.py a good place for this function?
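For context, a rough sketch of what such a shared helper could look like (the use of HfArgumentParser and the exact dataclass names are assumptions, not the repo's confirmed code):

```python
# tuning/utils/config_utils.py -- illustrative sketch only.
from transformers import HfArgumentParser

from tuning.config import configs, peft_config


def causal_lm_train_kwargs(train_kwargs: dict):
    """Parse a flat kwargs dict into the dataclasses the trainer expects."""
    parser = HfArgumentParser(
        (
            configs.ModelArguments,
            configs.DataArguments,
            configs.TrainingArguments,
            peft_config.LoraConfig,
            peft_config.PromptTuningConfig,
        )
    )
    # allow_extra_keys=True lets tests pass a superset of fields, with
    # irrelevant ones ignored rather than raising.
    return parser.parse_dict(train_kwargs, allow_extra_keys=True)
```

Both launch_training.py and the tests could then call this one function, keeping the parsing logic under a single set of unit tests.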

tests/helpers.py — review thread (outdated, resolved)
Signed-off-by: Thara Palanivel <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
Signed-off-by: Thara Palanivel <[email protected]>
Successfully merging this pull request may close these issues.

Add unit tests for tuning/sft_trainer.py