This repository implements Allegro-Legato, an extension of Allegro that provides a neural-network molecular dynamics (MD) simulator with enhanced robustness, i.e., it can run MD simulations for a long time without failure. The legato extension is developed by Hikaru Ibayashi.
If you use Allegro-Legato in your paper, please cite it with the following BibTeX entry:
```bibtex
@inproceedings{ibayashi2023allegro,
  title={Allegro-Legato: Scalable, Fast, and Robust Neural-Network Quantum Molecular Dynamics via Sharpness-Aware Minimization},
  author={Ibayashi, Hikaru and Razakh, Taufeq Mohammed and Yang, Liqiu and Linker, Thomas and Olguin, Marco and Hattori, Shinnosuke and Luo, Ye and Kalia, Rajiv K and Nakano, Aiichiro and Nomura, Ken-ichi and others},
  booktitle={International Conference on High Performance Computing},
  pages={223--239},
  year={2023},
  organization={Springer}
}
```
If you have questions about this repository, feel free to contact me (ibayashi[at]alumni.usc.edu).
Note: This implementation assumes an HPC environment. Required environment (versions confirmed to work):
- gcc: 8.3.0
- git: 2.25.0
- cmake: 3.16.2
- cuda: 11.3.0
- Python: 3.10.8
- wandb: 0.13.5
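On clusters that use environment modules, loading the toolchain above might look like the following sketch. The module names and versions are site-specific assumptions, not part of this repository; adjust them to what `module avail` shows on your system.

```shell
# Hypothetical module loads matching the versions listed above;
# actual module names vary by site.
module load gcc/8.3.0
module load git/2.25.0
module load cmake/3.16.2
module load cuda/11.3.0
module load python/3.10.8

# wandb is a Python package rather than an environment module:
pip install wandb==0.13.5
```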
With those external modules and libraries installed, run the following commands to install the nequip and allegro libraries with the legato extension:

```shell
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
cd nequip && pip install . && cd ..
cd allegro && pip install . && cd ..
```
Then, run the following to compile the LAMMPS environment:

```shell
bash compile_lammps.sh
```
Please refer to the Allegro installation instructions for the detailed dependencies.
We provide the following three features. Simply execute the corresponding Python script to run each feature in parallel on HPC:
- Training: `python train.py`
- Measuring sharpness: `python measure_sharpness.py`
- Measuring $t_\text{failure}$: `python measure_t_failure.py`
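As background for the sharpness measurement, the quantity that sharpness-aware minimization (SAM) targets is the largest loss increase within a small neighborhood of the weights, $\max_{\|\epsilon\| \le \rho} L(w + \epsilon) - L(w)$. The sketch below illustrates this definition on a toy one-dimensional loss; it is not the repository's implementation, and the function names and the value of `rho` are only illustrative.

```python
# Illustrative sketch of the sharpness quantity behind SAM:
#   sharpness(w) = max over ||e|| <= rho of  L(w + e) - L(w)
# Here L is a toy 1-D loss, and the maximization is a brute-force
# scan over the interval [-rho, rho].

def loss(w):
    # toy loss landscape: a simple quadratic
    return w ** 2

def sharpness(w, rho=0.05, samples=1001):
    # brute-force search for the largest loss increase in the rho-ball
    base = loss(w)
    best = 0.0
    for i in range(samples):
        e = -rho + 2 * rho * i / (samples - 1)
        best = max(best, loss(w + e) - base)
    return best

print(round(sharpness(1.0), 4))
```

For this quadratic the maximum increase occurs at the boundary of the interval, so the scan recovers $(w + \rho)^2 - w^2$ at $w = 1$.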
Note: To check whether these three features work in your environment, first run the following commands as a sanity check:
```shell
python train.py --sanity-check;
python measure_sharpness.py --sanity-check;
python measure_t_failure.py --sanity-check;
```
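For reference, a `--sanity-check` flag of this kind is typically a boolean switch that selects a short, cheap configuration. The sketch below shows one hypothetical way to wire such a flag with `argparse`; the actual scripts' command-line interface may differ.

```python
# Hypothetical sketch of a --sanity-check flag; the real scripts'
# CLI options may differ from this.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description="Allegro-Legato driver (illustrative sketch)")
    parser.add_argument(
        "--sanity-check", action="store_true",
        help="run a short configuration to verify the setup")
    return parser

# Parsing a hardcoded argument list for demonstration:
args = build_parser().parse_args(["--sanity-check"])
print(args.sanity_check)
```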
Note that this code is intended for short-term simulations used to tune hyper-parameters, so the simulation length is capped. If you want your simulation to run for as long as possible, comment out this line and recompile LAMMPS.