
Fix issues for saving checkpointing steps #9891

Open
wants to merge 8 commits into main

Conversation

leisuzz
Contributor

@leisuzz leisuzz commented Nov 8, 2024

What does this PR do?

  1. These modifications allow checkpoint steps to be saved while training with DeepSpeed. Without them, saving the training state gets stuck for too long and times out.
    Fixes get stuck when save_state using DeepSpeed backend under training train_text_to_image_lora #2606

  2. Bug fix for popping a weight from an empty list (see the short sketch below).
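
A minimal sketch of the second fix, assuming the save_model_hook structure used in these scripts (surrounding code abbreviated). Under DeepSpeed the weights list passed to the hook can arrive empty, which is what triggers the "pop from empty list" error, so the pop is guarded:

for model in models:
    # ... save the model as before ...
    # make sure to pop weight so that corresponding model is not saved again;
    # the list can be empty under DeepSpeed, so only pop when there is something to pop
    if weights:
        weights.pop()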

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@leisuzz
Contributor Author

leisuzz commented Nov 13, 2024

@sayakpaul Please take a look at this PR, thanks for your help!

Member

@sayakpaul sayakpaul left a comment

Thanks for your PR. Can you please modify a single file first so we can discuss the changes?

@leisuzz
Contributor Author

leisuzz commented Nov 13, 2024

When I ran DreamBooth Flux without LoRA and saved a checkpoint, it got stuck for a while and then broke. So I think all of the scripts need these modifications, but I can limit the changes to the Flux ones if you want.

@sayakpaul
Member

Yeah, let's change a single file first and then we can discuss the changes.

@leisuzz
Contributor Author

leisuzz commented Nov 13, 2024

Sure

@leisuzz
Contributor Author

leisuzz commented Nov 14, 2024

@sayakpaul I've now applied the modifications to the FLUX scripts only.

Comment on lines 1670 to 1671
if global_step % args.checkpointing_steps == 0:
# _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
Member

Well, there is a better way to handle it:

if accelerator.is_main_process or accelerator.distributed_type == DistributedType.DEEPSPEED:
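
A minimal sketch of how that guard would wrap the quoted checkpointing branch (variable names taken from the script; the elided checkpoints_total_limit handling stays unchanged):

if accelerator.is_main_process or accelerator.distributed_type == DistributedType.DEEPSPEED:
    if global_step % args.checkpointing_steps == 0:
        # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
        # ... existing total-limit handling ...
        save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
        accelerator.save_state(save_path)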

Member

@sayakpaul sayakpaul left a comment

Thanks for your PR!

You can refer to the following scripts to see how we handle saving and loading from checkpoints when using DeepSpeed. Search for DistributedType.DEEPSPEED.
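
For reference, DistributedType is importable from accelerate, and the scripts compare it against the running accelerator, roughly:

from accelerate import DistributedType

if accelerator.distributed_type == DistributedType.DEEPSPEED:
    ...  # DeepSpeed-specific save/load handling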

@leisuzz
Contributor Author

leisuzz commented Nov 15, 2024

@sayakpaul I've changed it based on the reference

Member

@sayakpaul sayakpaul left a comment

Thanks! Left more comments. LMK if they're clear.

model.load_state_dict(load_model.state_dict())
except Exception:
elif isinstance(unwrap_model(model), (CLIPTextModelWithProjection, T5EncoderModel)):
Member

We don't support fine-tuning the T5 model. So, this seems wrong. It should just be CLIPTextModelWithProjection, no?
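
A sketch of the narrowed branch under that suggestion (structure assumed from the quoted hunks; the T5 loading path would be dropped since the T5 encoder isn't trained):

elif isinstance(unwrap_model(model), CLIPTextModelWithProjection):
    try:
        load_model = CLIPTextModelWithProjection.from_pretrained(input_dir, subfolder="text_encoder")
        model.load_state_dict(load_model.state_dict())
    except Exception:
        raise ValueError(f"Couldn't load the model of type: ({type(model)}).")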

Comment on lines 1211 to 1218
    try:
        load_model = T5EncoderModel.from_pretrained(input_dir, subfolder="text_encoder_2")
        model(**load_model.config)
        model.load_state_dict(load_model.state_dict())
    except Exception:
        raise ValueError(f"Couldn't load the model of type: ({type(model)}).")
else:
    raise ValueError(f"Unsupported model found: {type(model)=}")
Member

Same for this.

try:
load_model = CLIPTextModelWithProjection.from_pretrained(input_dir, subfolder="text_encoder")
model(**load_model.config)
if not accelerator.distributed_type == DistributedType.DEEPSPEED:
Member

We also need to handle the case when we're actually doing DeepSpeed training. Similar to:
https://github.com/a-r-r-o-w/cogvideox-factory/blob/d63a826f37758eccf226710f94f6c3a4d4ee7a25/training/cogvideox_text_to_video_sft.py#L385
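
A rough sketch of that pattern adapted to this hook (FluxTransformer2DModel and the "transformer" subfolder are assumptions based on this script family; the branching follows the linked example). The text encoder would be handled analogously:

def load_model_hook(models, input_dir):
    transformer_ = None

    if not accelerator.distributed_type == DistributedType.DEEPSPEED:
        while len(models) > 0:
            model = models.pop()
            if isinstance(unwrap_model(model), FluxTransformer2DModel):
                transformer_ = unwrap_model(model)
            else:
                raise ValueError(f"Unexpected model found: {type(model)=}")
    else:
        # with DeepSpeed, rebuild the module directly from the checkpoint directory
        # instead of popping it from `models` (as in the linked example)
        transformer_ = FluxTransformer2DModel.from_pretrained(input_dir, subfolder="transformer")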

@@ -1262,15 +1263,16 @@ def load_model_hook(models, input_dir):
transformer_ = None
text_encoder_one_ = None

while len(models) > 0:
model = models.pop()
if not accelerator.distributed_type == DistributedType.DEEPSPEED:
Member

@@ -1187,7 +1187,8 @@ def save_model_hook(models, weights, output_dir):
            raise ValueError(f"Wrong model supplied: {type(model)=}.")

            # make sure to pop weight so that corresponding model is not saved again
-           weights.pop()
+           if weights:
+               weights.pop()

    def load_model_hook(models, input_dir):
Member

Seems like we're not handling the loading case appropriately here. I have repeated this multiple times now, but please refer to the changes here to get an idea of what is required.

In summary, we're not dealing with the changes required to load the state dict into the models being trained when DeepSpeed is enabled.

Successfully merging this pull request may close these issues.

get stuck when save_state using DeepSpeed backend under training train_text_to_image_lora