
How to run video-based reenactment demo #3

Open
xiao-keeplearning opened this issue Apr 30, 2024 · 11 comments

Comments

@xiao-keeplearning

Thanks for your excellent work!
How can I use a source image and a driving video to generate video-based reenactment? The demo you provided only covers image-based head reenactment.

@YuDeng
Owner

YuDeng commented Apr 30, 2024

Hi, you can simply use consecutive frames from a video clip as the driving images. Our method extracts per-frame motion embedding from the images for reenactment.
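A minimal sketch of that workflow (this assumes ffmpeg is available and that the preprocessing step consumes a folder of numbered frames; the frame-naming convention here is an assumption, not the repo's documented interface):

```python
import subprocess
from pathlib import Path

def driving_frame_paths(out_dir, num_frames):
    """Build the numbered per-frame filenames (frame_0001.png, ...)."""
    out = Path(out_dir)
    return [out / f"frame_{i:04d}.png" for i in range(1, num_frames + 1)]

def extract_frames(video_path, out_dir, fps=25):
    """Dump consecutive frames from a driving video with ffmpeg."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", str(video_path), "-vf", f"fps={fps}",
         str(Path(out_dir) / "frame_%04d.png")],
        check=True,
    )

# Each extracted frame is then treated as an independent driving image,
# from which the model extracts a per-frame motion embedding.
```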

@johndpope

FYI - this repo wraps this up nicely:
https://github.com/Zejun-Yang/AniPortrait
python -m scripts.vid2vid --config ./configs/prompts/animation_facereenac.yaml -W 512 -H 512 -acc

https://github.com/Zejun-Yang/AniPortrait/blob/main/scripts/vid2vid.py

@xiao-keeplearning
Author

@YuDeng
I followed the commands in the Data Preprocessing for Custom Images section and got the following result, which didn't work well. The face shakes abnormally, and its size fluctuates.
https://github.com/YuDeng/Portrait-4D/assets/26853334/5ba96e4d-34ec-4158-a69e-873185131876
Also, I noticed a use_smooth parameter in the image preprocessing code; does enabling it improve the result? Do you have any suggestions on how to reproduce what you showed in the demo?

@YuDeng
Owner

YuDeng commented May 7, 2024


Setting use_crop_smooth=True in the preprocessing entry point:

def inference(self, input_dir, save_dir, video=True, use_crop_smooth=False):

can reduce the face-jittering problem to some extent.

Furthermore, you can adjust the smoothing level by tuning the smoother's hyper-parameters:

s_smoother = SmootherHighdim(min_cutoff=0.0001, beta=0.005)
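For context, min_cutoff and beta are the standard knobs of a one-euro-style filter: a lower min_cutoff smooths slow motion more aggressively, while a lower beta makes the filter less reactive to fast motion. Assuming SmootherHighdim follows the usual one-euro formulation (an assumption; check the repo's implementation), a minimal pure-Python sketch looks like this:

```python
import math

class OneEuroFilter:
    """Minimal 1D one-euro filter: adaptive low-pass smoothing."""

    def __init__(self, freq=30.0, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.freq = freq            # sampling rate in Hz (frames per second)
        self.min_cutoff = min_cutoff
        self.beta = beta
        self.d_cutoff = d_cutoff    # cutoff for the derivative estimate
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # Exponential-smoothing coefficient for a given cutoff frequency.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        te = 1.0 / self.freq
        return 1.0 / (1.0 + tau / te)

    def __call__(self, x):
        if self.x_prev is None:     # first sample: nothing to smooth against
            self.x_prev = x
            return x
        # Smooth the derivative, then adapt the cutoff to the motion speed.
        dx = (x - self.x_prev) * self.freq
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```

With min_cutoff=0.0001 and beta=0.005 the filter damps slow, noisy drift very strongly while still letting large, fast motions through, which matches its use for stabilizing per-frame crop parameters.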

@xiao-keeplearning
Author

@YuDeng Hi, I set use_crop_smooth=True and kept the original smoothing level, but the face-jittering problem doesn't look relieved.

@YuDeng
Owner

YuDeng commented May 8, 2024


Does this also happen with the final synthesis results of Portrait4D? It should be able to tolerate a certain degree of face jittering in the driving frames.

@xiao-keeplearning
Author

The final synthesis result also shows face jittering.

@YuDeng
Owner

YuDeng commented May 10, 2024

Can you provide a specific example?

@xiao-keeplearning
Author

driving video:

cctv_align_video.mp4

synthesis result:

leonardo_cctv_drive_video.mp4

I've found that in the synthesized video, the face gets bigger every time it blinks.

@YuDeng
Owner

YuDeng commented May 10, 2024

Well, we also observed a similar problem in our experiments; it can happen for certain subjects and driving frames. It is mainly due to fluctuation in the reconstructed camera extrinsics used for training. The currently provided checkpoint was trained on our original pre-processed data, which may inherit this issue. We've re-trained our model on better pre-processed data and will try to provide the updated checkpoint in the future.
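One inference-side mitigation (not the author's stated fix, just a common workaround) is to temporally smooth the per-frame camera parameters before rendering, e.g. with a centered moving average over the camera translation:

```python
def smooth_extrinsics(translations, window=5):
    """Centered moving average over per-frame camera translations.

    translations: list of (tx, ty, tz) tuples, one per driving frame.
    Returns a new list of the same length with reduced frame-to-frame jitter;
    windows are truncated at the clip boundaries.
    """
    half = window // 2
    n = len(translations)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        win = translations[lo:hi]
        # Average each coordinate (tx, ty, tz) over the window.
        out.append(tuple(sum(c) / len(win) for c in zip(*win)))
    return out
```

This only masks jitter in the estimated extrinsics; it does not address the underlying reconstruction fluctuation the author describes.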

@BoyiZhao

BoyiZhao commented Aug 20, 2024


Thanks for your excellent work! Could you please tell me how you improved the fluctuation of the reconstructed camera extrinsics?

4 participants