
any plan to support cogvideox? #91

Open
trouble-maker007 opened this issue Dec 18, 2024 · 3 comments

@trouble-maker007

No description provided.

@foreverpiano
Collaborator

CogVideoX is listed in our development plan. Please stay tuned.

@trouble-maker007
Author

@foreverpiano If I want to train a distilled I2V model, do I need to cache the I2V VAE latents as in the existing training process? Can I still train without caching?

@jzhang38
Collaborator

You can refer to this page for preprocessing: https://github.com/hao-ai-lab/FastVideo/blob/main/docs/data_preprocess.md
For now, preprocessing is required because it saves a large amount of memory by removing the text encoder and VAEs from the training loop (and saves compute if you train for multiple epochs).
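
For anyone landing here later: the sketch below illustrates the idea behind that preprocessing step, not FastVideo's actual script (see docs/data_preprocess.md for the real pipeline). The checkpoint names, tensor layout, and output format here are placeholder assumptions; the point is only that latents and text embeddings are encoded once offline, so neither the VAE nor the text encoder has to be resident during training.

```python
import torch
from diffusers import AutoencoderKL              # stand-in VAE for illustration
from transformers import T5EncoderModel, T5Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumption: any diffusers VAE / T5 encoder works to show the pattern;
# FastVideo's preprocessing uses its own video VAE and text encoder.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device).eval()
tokenizer = T5Tokenizer.from_pretrained("t5-base")
text_encoder = T5EncoderModel.from_pretrained("t5-base").to(device).eval()

@torch.no_grad()
def cache_sample(pixels: torch.Tensor, caption: str, out_path: str) -> None:
    """Encode one sample offline so training never touches the VAE/text encoder.

    pixels: (B, C, H, W) tensor scaled to [-1, 1].
    """
    latents = vae.encode(pixels.to(device)).latent_dist.sample()
    latents = latents * vae.config.scaling_factor
    tokens = tokenizer(caption, padding="max_length", max_length=77,
                       truncation=True, return_tensors="pt").to(device)
    text_emb = text_encoder(**tokens).last_hidden_state
    # For I2V distillation you would additionally cache the conditioning
    # (first-frame) latent here -- same idea, one more tensor in the file.
    torch.save({"latents": latents.cpu(), "text_emb": text_emb.cpu()}, out_path)

# Training then loads only the small cached tensors:
#   batch = torch.load("sample_0000.pt")
#   loss = student(batch["latents"], batch["text_emb"])  # no VAE/T5 in memory
```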

@foreverpiano foreverpiano self-assigned this Dec 27, 2024