How to run video-based reenactment demo #3
Hi, you can simply use consecutive frames from a video clip as the driving images. Our method extracts a per-frame motion embedding from the images for reenactment.
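For reference, here is a minimal sketch of that workflow, assuming you drive the existing image-based demo frame by frame; the file names, directory layout, and helper functions below are illustrative, not part of the repo.

```python
# Sketch: turn a driving video into consecutive frames so they can be fed to
# the image-based reenactment demo one by one, then re-assemble the per-frame
# results into a video. Paths and names here are just examples.
import os
import cv2

def video_to_frames(video_path: str, out_dir: str) -> float:
    """Dump every frame of the video as a PNG; return the source FPS."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:06d}.png"), frame)
        idx += 1
    cap.release()
    return fps

def frames_to_video(frame_dir: str, video_path: str, fps: float) -> None:
    """Re-assemble per-frame reenactment results into a video."""
    names = sorted(f for f in os.listdir(frame_dir) if f.endswith(".png"))
    first = cv2.imread(os.path.join(frame_dir, names[0]))
    h, w = first.shape[:2]
    writer = cv2.VideoWriter(video_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for name in names:
        writer.write(cv2.imread(os.path.join(frame_dir, name)))
    writer.release()

# 1) Extract consecutive driving frames:
#    fps = video_to_frames("driving.mp4", "driving_frames")
# 2) Run the image-based reenactment demo with your source image on each frame,
#    writing the outputs to e.g. "results/000000.png", "results/000001.png", ...
# 3) Re-assemble the outputs:
#    frames_to_video("results", "reenactment.mp4", fps)
```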
FYI, this code wraps this up nicely: https://github.com/Zejun-Yang/AniPortrait/blob/main/scripts/vid2vid.py
@YuDeng |
Set `use_smooth=True` in the code.
Furthermore, you can adjust the smoothing level by tuning the hyper-parameters of the smoother:
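The repo's actual smoother and its hyper-parameters are not shown in this thread; as an illustration of the trade-off being discussed, here is a simple Gaussian temporal smoother over per-frame motion embeddings, where `sigma` plays the role of the smoothing level. This is an assumed stand-in, not the repository's implementation.

```python
# Illustrative temporal smoother over per-frame motion embeddings.
# NOTE: this is not the repo's actual smoother; it only demonstrates the kind
# of hyper-parameter (sigma) one would tune to trade jitter for temporal lag.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_embeddings(embeddings: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Gaussian-smooth a (num_frames, dim) sequence along the time axis.

    Larger sigma -> stronger smoothing (less jitter, but more lag and
    potentially damped expressions such as blinks).
    """
    return gaussian_filter1d(embeddings, sigma=sigma, axis=0)

# Example: smooth a sequence of 200 frames with 512-d motion embeddings.
motions = np.random.randn(200, 512).astype(np.float32)
smoothed = smooth_embeddings(motions, sigma=3.0)
```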
@YuDeng Hi, I set `use_smooth=True` and kept the original smoothing level, but the face jittering problem doesn't look relieved.
Does this also happen with the final synthesis results of Portrait4D? It should be able to tolerate a certain degree of face jittering in the driving frames.
The final synthesis results also show face jittering.
Can you provide a specific example?
Driving video: cctv_align_video.mp4
Synthesis result: leonardo_cctv_drive_video.mp4
I've found that in the synthesis video, the face gets bigger every time the face blinks.
Well, we also observe a similar problem in our experiments, and it can happen for certain subjects and driving frames. This is mainly due to the fluctuation of the reconstructed camera extrinsics used for training. The currently provided checkpoint is trained on our original pre-processed data, which may inherit this issue. We've re-trained our model on better pre-processed data, and we will try to provide the updated checkpoint in the future.
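As a rough illustration of what "fluctuation of the reconstructed camera extrinsics" means in practice, one could measure the frame-to-frame jitter of the reconstructed camera translations as sketched below. The (num_frames, 4, 4) extrinsics layout is an assumption for illustration and may not match the repo's pre-processing output format.

```python
# Sketch: quantify frame-to-frame jitter in reconstructed camera extrinsics.
# `extrinsics` is assumed to be a (num_frames, 4, 4) array of world-to-camera
# matrices produced during data pre-processing; adapt to the actual format.
import numpy as np

def translation_jitter(extrinsics: np.ndarray) -> float:
    """Mean L2 difference between consecutive camera translations."""
    t = extrinsics[:, :3, 3]                              # (num_frames, 3)
    return float(np.linalg.norm(np.diff(t, axis=0), axis=1).mean())

# High values indicate unstable per-frame reconstructions, which can show up
# as scale/size fluctuations (e.g., around blinks) in the synthesized video.
```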
Thank you for your excellent work! Could you please tell me how you reduced the fluctuation of the reconstructed camera extrinsics?
Thank you for your excellent work!
How can I use a source image and a driving video to generate video-based reenactment? The demo you provided is image-based head reenactment.