


ExaMPI@SC 2020: Atlanta, GA, USA
Workshop on Exascale MPI, ExaMPI@SC 2020, Atlanta, GA, USA, November 13, 2020. IEEE 2020, ISBN 978-1-6654-1561-3

- Nathan Hanford, Ramesh Pankajakshan, Edgar A. León, Ian Karlin: Challenges of GPU-aware Communication in MPI. 1-10
- Bharath Ramesh, Kaushik Kandadi Suresh, Nick Sarkauskas, Mohammadreza Bayatpour, Jahanzeb Maqbool Hashmi, Hari Subramoni, Dhabaleswar K. Panda: Scalable MPI Collectives using SHARP: Large Scale Performance Evaluation on the TACC Frontera System. 11-20
- Noah Evans, Jan Ciesko, Stephen L. Olivier, Howard Pritchard, Shintaro Iwasaki, Ken Raffenetti, Pavan Balaji: Implementing Flexible Threading Support in Open MPI. 21-30
- Jonathan Lifflander, Phil Miller, Nicole Lemaster Slattengren, Nicolas M. Morales, Paul Stickney, Philippe P. Pébaÿ: Design and Implementation Techniques for an MPI-Oriented AMT Runtime. 31-40
- Sri Raj Paul, Akihiro Hayashi, Matthew Whitlock, Seonmyeong Bak, Keita Teranishi, Jackson R. Mayo, Max Grossman, Vivek Sarkar: Integrating Inter-Node Communication with a Resilient Asynchronous Many-Task Runtime System. 41-51
- Derek Schafer, Ignacio Laguna, Anthony Skjellum, Nawrin Sultana, Kathryn M. Mohror: Extending the MPI Stages Model of Fault Tolerance. 52-61