Qi Fang*, Qing Shuai*, Junting Dong, Hujun Bao, Xiaowei Zhou
CVPR 2021 Oral
- The basic version of our dataset used for training has been released.
- The video clips are released as a part of the ZJU-MoCap dataset.
- We built a website for a quick preview of our dataset.
In this paper, we introduce the new task of reconstructing 3D human pose from a single image in which we can see the person and the person’s image through a mirror.
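The key geometric fact behind this task is that the mirrored person is the reflection of the real person across the mirror plane. As a minimal illustration (a sketch with assumed inputs, not code from this repo), reflecting a 3D point across a plane through a point p0 with normal n:

```python
import numpy as np

def reflect_across_mirror(P, n, p0):
    """Reflect a 3D point P across the mirror plane.

    The plane passes through p0 and has normal n. Each joint of the
    mirrored person is the reflection of the corresponding joint of
    the real person, which is the constraint the mirror provides.
    """
    n = n / np.linalg.norm(n)               # ensure unit normal
    return P - 2.0 * np.dot(P - p0, n) * n

# example: reflect a point across the plane z = 0
print(reflect_across_mirror(np.array([0.0, 0.0, 1.0]),
                            np.array([0.0, 0.0, 1.0]),
                            np.zeros(3)))   # -> [0. 0. -1.]
```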
This implementation:
- includes a demo of our optimization-based approach, implemented purely in PyTorch.
- provides a method to estimate the surface normal of the mirror from vanishing points (see the sketch after this list).
- provides an annotator to label the mirror edges used to compute the vanishing points.
- can estimate the focal length of Internet mirror images.
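For the mirror-normal estimate mentioned above, here is a hedged sketch of the underlying geometry (not the repo's exact implementation): lines connecting each point on the real person to its mirrored counterpart are all parallel to the mirror normal, so their common vanishing point back-projects through the intrinsics to the normal direction.

```python
import numpy as np

def mirror_normal_from_vp(vp, K):
    """Back-project a vanishing point to the mirror normal direction.

    Lines joining corresponding points on the real and mirrored person
    are parallel to the mirror normal; their vanishing point vp
    (pixel coordinates) back-projects to that 3D direction.
    vp: (2,), K: (3, 3) intrinsic matrix.
    """
    d = np.linalg.inv(K) @ np.array([vp[0], vp[1], 1.0])
    return d / np.linalg.norm(d)
```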
This repo is closely related to EasyMocap. Please refer to our EasyMocap project for installation instructions.
Download our zju-m-demo.zip and run the following commands:
ATTN: The following commands are outdated; please see easymocap-doc-quickstart for the current startup instructions.
# set the data path
data=<path_to_sample>/zju-m-demo
out=<path_to_sample>/zju-m-demo-output
# extract the video frames
python3 scripts/preprocess/extract_video.py ${data}
# Run demo on videos
[old, not used!!] python3 apps/demo/1v1p_mirror.py ${data} --out ${out} --vis_smpl --video
Due to license limitations, we cannot share the raw data directly. The video clips, including URL links and timestamps, are released as part of the ZJU-MoCap dataset. The basic version of our dataset used for training has also been released in the same place. Note that we do not hold the license for those images, so the dataset cannot be used for commercial applications.
We also provide the annotator mentioned in our paper.
The first row shows that we label the edges of the mirror and automatically compute the vanishing point from the human body. The intrinsic camera parameters can then be calculated from these two vanishing points.
The second row shows that, to obtain more accurate vanishing points and camera parameters, we can label parallel lines in the scene, for example the door and the grid on the ground.
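As a sketch of how two orthogonal vanishing points determine the focal length (assuming zero skew, square pixels, and the principal point at the image center; this follows the standard calibration result, not necessarily the annotator's exact code):

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, c):
    """Estimate the focal length from two orthogonal vanishing points.

    With zero skew, unit aspect ratio, and principal point c, the
    orthogonality constraint v1^T * omega * v2 = 0 reduces to
        f^2 = -(v1 - c) . (v2 - c)
    v1, v2, c: (2,) pixel coordinates.
    """
    f2 = -np.dot(np.asarray(v1, float) - c, np.asarray(v2, float) - c)
    return float(np.sqrt(f2)) if f2 > 0 else None  # None if degenerate
```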
See EasyMocap/apps/annotator for more instructions.
See doc/internet.md for more instructions.
This part is provided for researchers who want to:
- capture more accurate human motion with multiple cameras and a mirror
- build a different evaluation dataset
See doc/custom.md for more instructions.
To evaluate the reconstruction part in our paper, see doc/evaluation.md.
Please open an issue if you have any questions (issues are preferred over emails so that other people can also see them). We appreciate all contributions that improve this project.
If you find mirror-human videos that we missed, please let us know.
@inproceedings{fang2021mirrored,
  title={Reconstructing 3D Human Pose by Watching Humans in the Mirror},
  author={Fang, Qi and Shuai, Qing and Dong, Junting and Bao, Hujun and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2021}
}
This project is built on our EasyMocap. We would also like to thank Jianan Zhen and Yuhao Chen for their advice on the paper. Sincere thanks to the performers (Yuji Chen and Hao Xu) in the evaluation dataset and to the people who uploaded mirror-human videos to the Internet.