ExpAvatar: High-Fidelity Avatar Generation of Unseen Expressions with 3D Face Priors (Official)

Setup & Preparation

Refer to our Colab notebook for a quick validation of inference.

Environment Setup

conda create -n expavatar_fpdiff python=3.8
conda activate expavatar_fpdiff
conda install pytorch=1.11 cudatoolkit=11.3 torchvision -c pytorch
conda install mpi4py dlib scikit-learn scikit-image tqdm -c conda-forge
pip install lmdb opencv-python kornia yacs blobfile chumpy face-alignment==1.3.4 pandas lpips
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1110/download.html
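
If the installation succeeds, a quick import check should run without errors. This is an optional sanity check, not part of the original instructions:

# Verify that PyTorch sees the GPU and that pytorch3d imports cleanly.
python -c "import torch, pytorch3d, kornia, face_alignment; print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available()); print('pytorch3d', pytorch3d.__version__)"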

Download Dataset

Download the demo dataset from Yandex and organize it as follows:

----./
  |-ExpAvatar
  |-ExpAvatar.zip

Then unzip the archives with:

unzip ExpAvatar.zip -d ./
cd ./ExpAvatar/stepII/
unzip baselines.zip -d ./baselines
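
After unzipping, ./ExpAvatar/stepII/ should contain the inference script and the extracted baselines directory; a quick listing can confirm this (the exact contents may vary):

# Confirm the expected layout before running inference.
ls ./ExpAvatar/stepII/    # expect inference.sh and the unpacked baselines/ directory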

Run Inference

Run inference from ./ExpAvatar/stepII/:

bash inference.sh
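
For longer runs it can help to keep a copy of the console output; a simple usage sketch (inference.sh itself determines where results are written):

# Run from the stepII directory and save the log to inference.log.
cd ./ExpAvatar/stepII/
bash inference.sh 2>&1 | tee inference.log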

TODO:

  • Environment setup.
  • Release the inference code of ExpAvatar.
  • Release the metrics calculation of ExpAvatar.
  • Release the processed data.
  • Release the training code of ExpAvatar Step I.
  • Release the training code of ExpAvatar Step II.

Acknowledgements

We thank these works for their publicly released code: DiffusionRig, INSTA, MICA's face tracker, IMavatar, PointAvatar.
