Video to 3DPose and Bvh motion file

This project integrates work from several other projects, including VideoPose3D, video-to-pose3D, video2bvh, AlphaPose, Higher-HRNet-Human-Pose-Estimation, and OpenPose. Thanks to all of the projects mentioned above.

The project first extracts 2D joint keypoints from the video using AlphaPose, HRNet, or a similar detector, then lifts the 2D keypoints to 3D joint positions using VideoPose3D, and finally converts the 3D joint positions into a BVH motion file.
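Conceptually, the pipeline is three stages chained together. The sketch below is purely illustrative: detect_2d_keypoints, lift_to_3d, and write_bvh are hypothetical placeholder names standing in for the AlphaPose/HRNet, VideoPose3D, and video2bvh components, not this repository's actual API.

# Illustrative pipeline sketch -- the three stage functions below are
# hypothetical placeholders for the AlphaPose/HRNet, VideoPose3D, and
# video2bvh components, not this repository's actual API.
import numpy as np

def detect_2d_keypoints(video_path: str) -> np.ndarray:
    """Run a 2D detector (e.g. AlphaPose) over every frame.
    Returns an array of shape (num_frames, num_joints, 2)."""
    raise NotImplementedError  # provided by the 2D joint detector

def lift_to_3d(keypoints_2d: np.ndarray) -> np.ndarray:
    """Lift 2D keypoints to 3D with a temporal model (VideoPose3D).
    Returns an array of shape (num_frames, num_joints, 3)."""
    raise NotImplementedError  # provided by VideoPose3D

def write_bvh(joints_3d: np.ndarray, out_path: str) -> None:
    """Convert per-frame 3D joint positions into a BVH motion file."""
    raise NotImplementedError  # provided by the video2bvh component

def video_to_bvh(video_path: str, out_path: str) -> None:
    keypoints_2d = detect_2d_keypoints(video_path)  # stage 1: 2D detection
    joints_3d = lift_to_3d(keypoints_2d)            # stage 2: 2D -> 3D lifting
    write_bvh(joints_3d, out_path)                  # stage 3: BVH export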

Environment

  • Windows 10
  • Anaconda
  • Python > 3.6

Dependencies

You can refer to the dependencies of the video-to-pose3D project for setup.

The list below is based on the dependencies of video-to-pose3D, modified by me to fix some bugs (a quick sanity-check sketch follows the list).

  • Packages
    • PyTorch >= 1.1.0 (I use PyTorch 1.1.0 with GPU support)
    • torchsample
    • ffmpeg (note: you must copy ffmpeg.exe into the Python installation directory)
    • tqdm
    • pillow
    • scipy
    • pandas
    • h5py
    • visdom
    • nibabel
    • opencv-python (install with pip)
    • matplotlib
  • 2D Joint detectors
    • Alphapose (Recommended)
      • Download duc_se.pth from (Google Drive | Baidu pan) and place it in ./joints_detectors/Alphapose/models/sppe
      • Download yolov3-spp.weights from (Google Drive | Baidu pan) and place it in ./joints_detectors/Alphapose/models/yolo
    • HR-Net (poor 3D joint performance in my testing environment)
      • Download pose_hrnet* from Google Drive and place it in ./joints_detectors/hrnet/models/pytorch/pose_coco/
      • Download yolov3.weights from here and place it in ./joints_detectors/hrnet/lib/detector/yolo
  • 3D Joint detectors
    • Download pretrained_h36m_detectron_coco.bin from here and place it in the ./checkpoint folder
  • 2D Pose trackers (Optional)
    • PoseFlow (Recommended): no extra dependencies
    • LightTrack (poor 2D tracking performance in my testing environment): see the original README and perform the same getting-started steps in ./pose_trackers/lighttrack
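Once the weights are downloaded, a quick way to confirm everything landed in the right place is a small check script. This is a minimal sketch, not part of the repository; the paths come from the download instructions above, and it assumes you run it from the repository root.

# Sanity check for the setup described above. Not part of the repository,
# just a convenience sketch; the paths are taken from this README's
# download instructions and are relative to the repository root.
from pathlib import Path

import torch

EXPECTED_FILES = [
    Path("joints_detectors/Alphapose/models/sppe/duc_se.pth"),
    Path("joints_detectors/Alphapose/models/yolo/yolov3-spp.weights"),
    Path("checkpoint/pretrained_h36m_detectron_coco.bin"),
]

print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
for f in EXPECTED_FILES:
    status = "OK" if f.is_file() else "MISSING"
    print(f"{status:8} {f}")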

How to Use It

Place your video in the .\outputs\inputvideo directory and set its path in videopose.py, like this:

inference_video('outputs/inputvideo/kunkun_cut.mp4', 'alpha_pose')

After a few minutes, you will find the output video in the \outputs\outputvideo directory and the BVH file in the \outputs\outputvideo\alpha_pose_kunkun_cut\bvh directory.
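If you have several clips to convert, the same entry point can be looped over the input folder. A minimal sketch, assuming inference_video can be imported from videopose.py and takes the (video_path, detector_name) arguments shown above:

# Batch-convert every video in outputs/inputvideo. A minimal sketch:
# it assumes inference_video can be imported from videopose.py and takes
# the (video_path, detector_name) arguments shown in the example above.
from pathlib import Path

from videopose import inference_video  # assumption: videopose.py exposes this

for video in sorted(Path("outputs/inputvideo").glob("*.mp4")):
    # Use forward slashes, matching the call in the example above.
    inference_video(video.as_posix(), "alpha_pose")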
