Rateless Codes for Near-Perfect Load Balancing in Distributed Matrix-Vector Multiplication

Published: 09 July 2020

Abstract

Large-scale machine learning and data mining applications require computer systems to perform massive matrix-vector and matrix-matrix multiplication operations that need to be parallelized across multiple nodes. The presence of stragglers - nodes that unpredictably slow down or fail - is a major bottleneck in such distributed computations. We propose a rateless fountain coding strategy to address this issue. Our idea is to create linear combinations of the m rows of the matrix and assign these encoded rows to different worker nodes. The original matrix-vector product can be decoded as soon as slightly more than m row-vector products are collectively finished by the nodes. We show that our approach achieves optimal latency and performs zero redundant computations asymptotically. Experiments on Amazon EC2 show that rateless coding gives as much as a 3× speed-up over uncoded schemes.
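The sketch below illustrates the encode-and-peel idea described in the abstract: rows of A are combined into encoded rows using an LT-style degree distribution, workers compute dot products of the encoded rows with x, and the master recovers b = Ax by iterative peeling once slightly more than m products have arrived. This is a minimal single-machine simulation, not the authors' implementation; the function names, the robust soliton parameters, and the 1.4m collection threshold are illustrative assumptions.

```python
# Minimal single-machine sketch of rateless (LT-style) coded matrix-vector
# multiplication. Not the authors' implementation; names and parameters are
# illustrative, and the "distributed" part is simulated by computing all
# encoded row-vector products locally.
import numpy as np

rng = np.random.default_rng(0)

def robust_soliton(m, c=0.1, delta=0.5):
    """Robust soliton degree distribution used by LT codes (Luby, 2002)."""
    R = c * np.log(m / delta) * np.sqrt(m)
    K = int(round(m / R))                     # location of the spike
    rho = np.array([1.0 / m] + [1.0 / (d * (d - 1)) for d in range(2, m + 1)])
    tau = np.zeros(m)
    for d in range(1, K):
        tau[d - 1] = R / (d * m)
    tau[K - 1] = R * np.log(R / delta) / m
    dist = rho + tau
    return dist / dist.sum()

def encode(A, num_encoded):
    """Each encoded row is the sum of a randomly chosen subset of A's rows."""
    m, _ = A.shape
    dist = robust_soliton(m)
    rows, supports = [], []
    for _ in range(num_encoded):
        d = rng.choice(np.arange(1, m + 1), p=dist)   # degree of this symbol
        idx = rng.choice(m, size=d, replace=False)    # which rows it combines
        rows.append(A[idx].sum(axis=0))
        supports.append(set(idx.tolist()))
    return np.array(rows), supports

def peel_decode(values, supports, m):
    """Recover b = A @ x from encoded row-vector products by peeling."""
    b = np.full(m, np.nan)
    values = [float(v) for v in values]
    supports = [set(s) for s in supports]
    progress = True
    while progress:
        progress = False
        for i, s in enumerate(supports):
            for j in list(s):                 # subtract already-decoded rows
                if not np.isnan(b[j]):
                    values[i] -= b[j]
                    s.remove(j)
            if len(s) == 1:                   # degree-1 symbol decodes a row
                b[s.pop()] = values[i]
                progress = True
    return b

if __name__ == "__main__":
    m, n = 500, 100
    A = rng.standard_normal((m, n))
    x = rng.standard_normal(n)

    # Collect slightly more than m encoded products (over-provisioned here
    # because m is small; asymptotically the overhead vanishes).
    A_enc, supports = encode(A, int(1.4 * m))
    products = A_enc @ x                      # in practice, done by workers

    b_hat = peel_decode(products, supports, m)
    print("fully decoded:", not np.isnan(b_hat).any())
    print("max abs error:", np.nanmax(np.abs(b_hat - A @ x)))
```

Because the code is rateless, the master can keep collecting encoded products from whichever workers happen to be fast until peeling completes, which is the source of the near-perfect load balancing across heterogeneous nodes.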


Published In

ACM SIGMETRICS Performance Evaluation Review, Volume 48, Issue 1
June 2020, 110 pages
ISSN: 0163-5999
DOI: 10.1145/3410048
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 09 July 2020
Published in SIGMETRICS Volume 48, Issue 1


Author Tags

  1. erasure coded computing
  2. large-scale parallel computing
  3. rateless fountain codes

Qualifiers

  • Research-article
