
EPH: Ensembling Prioritized Hybrid Policies for Multi-agent Pathfinding

News: EPH has been accepted at IROS 2024!

Usage

Installation

[Optional] create a virtual environment:

conda create -n eph python=3.11
conda activate eph

Install the repo locally (with requirements listed in pyproject.toml):

pip install -e '.[all]'

Note: remove [all] if you don't want to install the optional dependencies.
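
To sanity-check the environment, you can try importing the main deep-learning dependency. This is only a quick check, under the assumption that PyTorch is among the requirements listed in pyproject.toml:

python -c "import torch; print('torch', torch.__version__)"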

Configuration

To train and test, we need to load a configuration file. Under configs/ you can find the default configuration file eph.py. To change the configuration or create a new one, export the CONFIG environment variable set to the desired configuration name without the .py extension:

export CONFIG=eph
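
For instance, a new configuration can be created by copying the default one and overriding a few fields. The file below is a hypothetical sketch (the field names are illustrative and not taken from the repository); in practice, copy configs/eph.py and edit the values you need:

# configs/my_experiment.py -- hypothetical example
from configs.eph import *    # assumption: configuration files are plain Python modules under configs/

num_agents = 8               # illustrative override
max_episode_length = 256     # illustrative override

Then select it before running any script:

export CONFIG=my_experiment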

Training

To train the model, you can use the following command:

python train.py
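
For example, to train with a specific configuration you can also set the environment variable inline for a single run, which is equivalent to exporting it first:

CONFIG=eph python train.py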

Testing

To test the model, you can use the following command:

python test.py

Configurations

Configuration loading is dynamic, so multiple configuration files for different experiments can live under configs/.

Before running any script, you can change which configuration to load by changing the CONFIG_NAME variable in the config.py file:

CONFIG_NAME = 'eph'

For example, the above will load the default configuration file configs/eph.py.
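
Under the hood, dynamic loading of this kind is typically done with importlib. The snippet below is a minimal sketch of the idea, not the repository's exact config.py; it assumes configuration files are importable as configs.<name> and that the CONFIG environment variable can override the default:

# Minimal sketch of dynamic configuration loading (illustrative, not the repo's exact code)
import importlib
import os

CONFIG_NAME = os.environ.get("CONFIG", "eph")               # env var overrides the default name
config = importlib.import_module(f"configs.{CONFIG_NAME}")  # e.g. loads configs/eph.py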

Changing model

The model class is loaded from a target path specified in the configuration file, so you can swap models by editing that entry.

You can change the target by setting:

model_target = "model.Network"

This will load the Network class from the model.py module.
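
Resolving a dotted target string like this is usually a small helper around importlib. The following is an illustrative sketch rather than the repository's actual loader:

# Illustrative helper for resolving "module.ClassName" targets (not the repo's exact code)
import importlib

def load_target(target: str):
    module_name, class_name = target.rsplit(".", 1)   # "model.Network" -> ("model", "Network")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

Network = load_target("model.Network")  # equivalent to: from model import Network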

Data generation

Go to src/data/ and follow the instructions in the README.md to generate the MovingAI test set.

Acknowledgements

Our codebase is heavily based on DHC (https://github.com/ZiyuanMa/DHC) and DCC (https://github.com/ZiyuanMa/DCC). We drew inspiration from SCRIMP (https://github.com/marmotlab/SCRIMP) for our communication block and reimplemented the structured-map experiments on the MovingAI datasets from SACHA (https://github.com/Qiushi-Lin/SACHA).

We are also looking into implementing MAPF on modern platforms (e.g., TorchRL environments and integration with RL4CO) once we have the bandwidth to do so!


eph-video.mp4

Citation

If you find our code or work (or hopefully both!) helpful, please consider citing us:

@inproceedings{tang2024eph,
  title={Ensembling Prioritized Hybrid Policies for Multi-agent Pathfinding},
  author={Tang, Huijie and Berto, Federico and Park, Jinkyoo},
  booktitle={2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  organization={IEEE},
  year={2024},
  note={\url{https://github.com/ai4co/eph-mapf}}
}