# PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling

Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, Yebin Liu

Tsinghua University

Project Page · Paper · Video
We propose PoseVocab, a novel pose encoding method that captures dynamic human appearance under various poses for human avatar modeling.
*Teaser video: `teaser.mp4`*
## Installation

Clone this repo, then run the following scripts.

```
cd ./utils/posevocab_custom_ops
python setup.py install
cd ../..
```
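A quick way to confirm the ops installed is to import them. The module name below is a guess derived from the directory name; check `setup.py` for the name it actually registers.

```python
# Hypothetical import check: `posevocab_custom_ops` is inferred from the
# directory name utils/posevocab_custom_ops; if this import fails, look at
# the name= field in its setup.py for the actual module name.
import posevocab_custom_ops  # noqa: F401

print("custom ops import OK")
```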
## SMPL-X & Pretrained Models

- Download SMPL-X files and place the pkl files in `./smpl_files/smplx` (a sanity-check sketch follows this list).
- Download the pretrained models and unzip them to `./pretrained_models`.
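To verify the SMPL-X files are in place, you can load the body model with the `smplx` PyPI package. This is our own sanity-check sketch, not part of PoseVocab; the repo loads these files through its own code.

```python
# Optional sanity check: load the SMPL-X body model to confirm the pkl files
# sit under ./smpl_files/smplx. Assumes `pip install smplx` (plus PyTorch);
# the gender is an assumption, so pick whichever model file you downloaded.
import smplx

model = smplx.create(
    "./smpl_files",    # root folder that contains the smplx/ subfolder
    model_type="smplx",
    gender="neutral",
    ext="pkl",         # the files placed here are pkl, not the default npz
)
print(model)           # prints the SMPLX module if loading succeeded
```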
## Data Preprocessing

- Download the THuman4.0 dataset. We take "subject00" as an example and denote its root data directory as `SUBJECT00_DIR`.
- Specify the data directory and training frame list in `gen_data/main_preprocess.py` (see the sketch after this list), then run the following scripts.

```
cd ./gen_data
python main_preprocess.py
cd ..
```
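The variable names below are hypothetical placeholders for whatever `gen_data/main_preprocess.py` actually exposes; open the script and edit the corresponding settings.

```python
# Hypothetical sketch of the edit in gen_data/main_preprocess.py; the real
# variable names in the script may differ. Point the data directory at the
# downloaded subject and list the frame indices used for training.
data_dir = "/path/to/SUBJECT00_DIR"          # hypothetical name
training_frame_list = list(range(0, 2000))   # hypothetical name and range
```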
## Training

Note: in the first training stage, our method reconstructs depth maps for the depth-guided sampling in the later stages. If you want to skip the first stage, you can download our provided depth maps from this link, unzip them to `SUBJECT00_DIR/depths`, and directly run

```
python main.py -c configs/subject00.yaml -m train
```

until the network converges.
- Stage 1: train to obtain depth maps. Set `end_epoch` in `configs/subject00.yaml#L15` to 10, then run:

  ```
  python main.py -c configs/subject00.yaml -m train
  ```

- Stage 2: render the depth maps.

  ```
  python main.py -c configs/subject00.yaml -m render_depth_sequences
  ```

- Stage 3: continue training. Set `start_epoch` in `configs/subject00.yaml#L14` to 11, `end_epoch` in `configs/subject00.yaml#L15` to 100, and `prev_ckpt` in `configs/subject00.yaml#L12` to `./results/subject00/epoch_latest` (a scripted alternative follows this list), then run:

  ```
  python main.py -c configs/subject00.yaml -m train
  ```
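If you would rather script the config edits between stages, a minimal sketch follows. It assumes PyYAML and that `start_epoch`, `end_epoch`, and `prev_ckpt` are top-level keys in `configs/subject00.yaml`; the real file may nest them differently, and note that round-tripping through PyYAML drops any YAML comments.

```python
# Hedged helper: switch configs/subject00.yaml between stage 1 and stage 3
# settings. Assumes PyYAML (pip install pyyaml) and a flat key layout; if the
# config nests these keys, adjust the cfg[...] lookups accordingly.
import yaml

CONFIG = "configs/subject00.yaml"

def set_stage(stage: int) -> None:
    with open(CONFIG) as f:
        cfg = yaml.safe_load(f)
    if stage == 1:
        cfg["end_epoch"] = 10
    elif stage == 3:
        cfg["start_epoch"] = 11
        cfg["end_epoch"] = 100
        cfg["prev_ckpt"] = "./results/subject00/epoch_latest"
    with open(CONFIG, "w") as f:
        yaml.safe_dump(cfg, f, sort_keys=False)

set_stage(1)  # before stage 1; call set_stage(3) before stage 3
```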
## Testing

- Download the testing poses from this link and unzip them somewhere, denoted as `TESTING_POSE_DIR`.
- Specify `prev_ckpt` in `configs/subject00.yaml#L78` as the pretrained model `./pretrained_models/subject00`, or as a model trained by yourself.
- Specify `data_path` in `configs/subject00.yaml#L60` as the testing pose path, e.g., `TESTING_POSE_DIR/thuman4/pose_01.npz` (an inspection snippet follows this list).
- Run the following script.

  ```
  python main.py -c configs/subject00.yaml -m test
  ```

- The output results can be found in `./test_results/subject00`.
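If you want to check what a testing pose file contains before pointing `data_path` at it, a standard NumPy `.npz` inspection works; the key names printed are whatever the archive actually stores.

```python
# Inspect a testing pose archive; .npz files are plain NumPy archives, so
# this only assumes NumPy. Adjust the path to your TESTING_POSE_DIR.
import numpy as np

poses = np.load("TESTING_POSE_DIR/thuman4/pose_01.npz")
for key in poses.files:
    print(key, poses[key].shape, poses[key].dtype)
```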
## License

MIT License. SMPL-X related files are subject to the license of SMPL-X.
## Citation

If you find our code or paper useful for your research, please consider citing:

```
@inproceedings{li2023posevocab,
  title={PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling},
  author={Li, Zhe and Zheng, Zerong and Liu, Yuxiao and Zhou, Boyao and Liu, Yebin},
  booktitle={ACM SIGGRAPH Conference Proceedings},
  year={2023}
}
```