The experiments were conducted on Ubuntu 18.04 with Python 3.7.8 and PyTorch 1.7.1.
To set up the environment:
```bash
cd Safety-Pose
conda create -n safety-pose python=3.7.11
conda activate safety-pose
pip install -r requirements.txt
```
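Optionally, verify the environment before training. This is a minimal sanity-check sketch; the expected versions come from the setup above, and CUDA availability depends on your local driver:

```python
# Sanity check: confirm the environment matches the tested versions
# (Python 3.7.x, PyTorch 1.7.1). CUDA availability depends on your setup.
import sys
import torch

print(sys.version.split()[0])     # expect 3.7.x
print(torch.__version__)          # expect 1.7.1
print(torch.cuda.is_available())  # True if a usable GPU is visible
```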
- Please refer to DATASETS.md for the preparation of the dataset files (a typical file layout is sketched after this list).
- There are 8 experiments in total (4 for baseline training, 4 for PoseAug training), one per 2D pose setting (Ground Truth, CPN, DET, HR-Net).
- You can also train other pose estimators (SemGCN, SimpleBaseline, ST-GCN, VideoPose); please refer to PoseAug.
- The training procedure has two steps: pretrain the baseline models, then train them with PoseAug.
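For orientation, the `--keypoints` arguments used below map onto VideoPose3D-style `.npz` files. The layout below is an assumption based on the PoseAug conventions; DATASETS.md is authoritative:

```
data/
├── data_3d_h36m.npz                      # 3D poses (Human3.6M)
├── data_2d_h36m_gt.npz                   # --keypoints gt
├── data_2d_h36m_cpn_ft_h36m_dbb.npz      # --keypoints cpn_ft_h36m_dbb
├── data_2d_h36m_detectron_ft_h36m.npz    # --keypoints detectron_ft_h36m
└── data_2d_h36m_hr.npz                   # --keypoints hr
```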
To pretrain the baseline models:
```bash
python3 run_baseline.py --note pretrain --dropout 0 --lr 2e-2 --epochs 100 --posenet_name 'transformer' --checkpoint './checkpoint/pretrain_baseline' --keypoints gt
python3 run_baseline.py --note pretrain --dropout 0 --lr 2e-2 --epochs 100 --posenet_name 'transformer' --checkpoint './checkpoint/pretrain_baseline' --keypoints cpn_ft_h36m_dbb
python3 run_baseline.py --note pretrain --dropout 0 --lr 2e-2 --epochs 100 --posenet_name 'transformer' --checkpoint './checkpoint/pretrain_baseline' --keypoints detectron_ft_h36m
python3 run_baseline.py --note pretrain --dropout 0 --lr 2e-2 --epochs 100 --posenet_name 'transformer' --checkpoint './checkpoint/pretrain_baseline' --keypoints hr
```
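Equivalently, all four settings can be launched from one shell loop (the same pattern works for `run_poseaug.py` below):

```bash
# Pretrain the baseline once per 2D pose setting.
for kp in gt cpn_ft_h36m_dbb detectron_ft_h36m hr; do
    python3 run_baseline.py --note pretrain --dropout 0 --lr 2e-2 --epochs 100 \
        --posenet_name 'transformer' --checkpoint './checkpoint/pretrain_baseline' \
        --keypoints "$kp"
done
```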
To train the baseline models with PoseAug:
```bash
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'transformer' --lr_p 1e-3 --checkpoint './checkpoint/poseaug' --keypoints gt
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'transformer' --lr_p 1e-3 --checkpoint './checkpoint/poseaug' --keypoints cpn_ft_h36m_dbb
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'transformer' --lr_p 1e-3 --checkpoint './checkpoint/poseaug' --keypoints detectron_ft_h36m
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'transformer' --lr_p 1e-3 --checkpoint './checkpoint/poseaug' --keypoints hr
```
All checkpoints, evaluation results, and logs are saved to `./checkpoint`. You can use TensorBoard to monitor the training process:
```bash
cd ./checkpoint/poseaug
tensorboard --logdir=/path/to/eventfile
```
- For simplicity, the hyper-parameters are the same across all four 2D pose settings. If you want better performance on a specific setting, try tuning them.
- GAN training may occasionally collapse; changing a hyper-parameter (e.g., the random seed) and re-training the models usually resolves the problem (see the sketch after this list).
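As an illustration, a retry might look like the command below. The `--random_seed` flag name and the lowered `--lr_p` value are assumptions for illustration; check `python3 run_poseaug.py --help` for the exact flags your checkout supports:

```bash
# Retry a collapsed PoseAug run with a different seed and a smaller
# pose-net learning rate. NOTE: '--random_seed' is an assumed flag name.
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'transformer' \
    --lr_p 5e-4 --random_seed 1 --checkpoint './checkpoint/poseaug' --keypoints gt
```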
To evaluate a trained model:
```bash
python3 run_evaluate.py --posenet_name 'transformer' --keypoints gt --evaluate '/path/to/checkpoint'
```
To visualize the predictions:
```bash
python3 run_visualization.py --posenet_name 'transformer' --keypoints gt --evaluate '/path/to/checkpoint'
```
To run the real-time demo on a webcam stream (`--video 0`):
```bash
python3 run_demo.py --posenet_name 'transformer' --keypoints gt --evaluate '/path/to/checkpoint' --video 0
```
The real-time demo uses the Lightweight-OpenPose network to detect 2D keypoints.
To run inference on safety-test images recorded with a RealSense camera, add the `--track` option with argument 1. The `--track` option keeps tracking the same target across subsequent images or video frames:
```bash
python3 run_demo.py --posenet_name 'transformer' --keypoints gt --evaluate '/path/to/checkpoint' --track 1 --images data_extra/test_set/testsets/RGB/*.png
```
To plot 3D coordinates relative to the thorax, add the `--thorax_relative` option with argument 1. With this option, relative distances are computed from the thorax rather than the hip (see the sketch below):
```bash
python3 run_demo.py --posenet_name 'transformer' --keypoints gt --evaluate '/path/to/checkpoint' --thorax_relative 1 --track 1 --video 0
```
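Conceptually, the option re-centers the predicted skeleton on a different root joint. Below is a minimal sketch of the idea; the joint indices assume the common 16-joint Human3.6M convention and should be checked against this repo's skeleton definition:

```python
import numpy as np

# Re-center a predicted 3D pose on the thorax instead of the hip (root).
# Joint indices assume the common 16-joint Human3.6M layout: 0 = hip,
# 8 = thorax. Verify against the skeleton definition in this repo.
HIP, THORAX = 0, 8

def to_thorax_relative(joints_3d: np.ndarray) -> np.ndarray:
    """joints_3d: (num_joints, 3) array of hip-relative coordinates."""
    return joints_3d - joints_3d[THORAX]

# Example with a dummy 16-joint pose.
pose = np.random.randn(16, 3).astype(np.float32)
pose_thorax = to_thorax_relative(pose)
assert np.allclose(pose_thorax[THORAX], 0.0)
```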
This repo was created for collaboration on the Hyundai Motor Group AI Competition project and is not for commercial use. The repo is forked from PoseAug, and our model uses SemGCN as its backbone. We thank the authors for releasing their code.