full dpo #1966
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1966
Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
```python
self._model = self._setup_model(
    cfg_model=cfg.model,
    enable_activation_checkpointing=self._enable_activation_checkpointing,
    enable_activation_offloading=self._enable_activation_offloading,
    custom_sharded_layers=cfg.get("custom_sharded_layers", None),
    fsdp_cpu_offload=cfg.get("fsdp_cpu_offload", False),
    reshard_after_forward=cfg.get("fsdp_reshard_after_forward", True),
    model_state_dict=checkpoint_dict[training.MODEL_KEY],
    ac_mode=cfg.get("ac_mode", None),
    ac_option=cfg.get("ac_option", None),
)
log.info("Loading reference model")
self._ref_model = self._setup_model(
```
You might need to make sure dropout is explicitly disabled in the policy model so it doesn't cause issues when comparing logprobs between the reference and policy models. See:

```python
# disabling dropout if found - non-determinism leads to issues in e.g. comparing logprobs
```
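For reference, a minimal sketch of that kind of sweep, assuming standard `torch.nn.Dropout` modules (the `disable_dropout` helper name here is illustrative, not an existing torchtune API):

```python
import torch.nn as nn

def disable_dropout(model: nn.Module) -> None:
    # Zero out every dropout probability so policy/reference logprob
    # comparisons are deterministic across forward passes.
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.p = 0.0
```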
```python
if isinstance(self._loss_fn, SimPOLoss):
    loss, chosen_rewards, rejected_rewards = self._loss_fn(
        policy_chosen_log_probs, policy_rejected_log_probs
    )
else:
```
We're going to be deprecating non-DPO losses in our DPO recipes, so we can take this out and simplify this whole section.
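For illustration, a rough sketch of what the simplified call site might look like once only DPO-style losses remain (assuming the four-logprob signature used elsewhere in the recipe; variable names mirror the diff):

```python
# All remaining DPO-style losses consume both policy and reference
# logprobs, so the SimPO isinstance branch can go away entirely.
loss, chosen_rewards, rejected_rewards = self._loss_fn(
    policy_chosen_log_probs,
    policy_rejected_log_probs,
    reference_chosen_log_probs,
    reference_rejected_log_probs,
)
```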
```python
# see :class:`~torchtune.modules.rlhf.loss.dpo.SimPOLoss`
return_average_logprobs=isinstance(self._loss_fn, SimPOLoss),
```
Similar to my comment below about SimPO
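For context, this flag exists because SimPO scores sequences by length-averaged logprobs while DPO-style losses use summed logprobs. A standalone sketch of the distinction (tensor names are illustrative):

```python
import torch

# Per-token logprobs [batch, seq_len] and a mask of valid (non-padding) tokens.
per_token_logps = torch.randn(2, 8)
mask = torch.ones(2, 8, dtype=torch.bool)

sum_logps = (per_token_logps * mask).sum(-1)        # DPO-style: summed over tokens
avg_logps = sum_logps / mask.sum(-1).clamp(min=1)   # SimPO-style: length-normalized
```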
Hey @jxmsML! Thanks so much for opening this. Overall it looks great - just a couple comments for things to look out for. I know this is still in draft so happy to revisit when you're ready for review - it'd be great to see some training outputs and an example config or two (I see you've added one in the recipe registry) at some point.
Context
What is the purpose of this PR? Is it to
* add a new feature
* fix a bug
* update tests and/or documentation
* other

Please link to any issues this PR addresses.
Changelog
What are the changes made in this PR?
*
Test plan
Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.
* run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
* run unit tests via pytest tests
* run recipe tests via pytest tests -m integration_test
UX
If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.