International Journal of Computational Intelligence Systems

Volume 14, Issue 1, 2021, Pages 1373 - 1387

Communication-Efficient Distributed SGD with Error-Feedback, Revisited

Authors
Tran Thi Phuong1,2,3,*, Le Trieu Phong3
1Faculty of Mathematics and Statistics, Ton Duc Thang University, No.19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City, Vietnam
2Meiji University, 1-1-1 Higashi-Mita, Tama-ku, Kawasaki-shi, Kanagawa, 214-8571, Japan
3National Institute of Information and Communications Technology (NICT) 4-2-1, Nukui-Kitamachi, Koganei, Tokyo, 184-8795, Japan
*Corresponding author. Email: tranthiphuong@tdtu.edu.vn
Corresponding Author
Tran Thi Phuong
Received 15 July 2020, Accepted 31 March 2021, Available Online 20 April 2021.
DOI
10.2991/ijcis.d.210412.001
Keywords
Optimizer; Distributed learning; SGD; Error-feedback; Deep neural networks
Abstract

We show that the convergence proof of a recent algorithm called dist-EF-SGD for communication-efficient distributed stochastic gradient descent with error-feedback, proposed by Zheng et al. (Communication-efficient distributed blockwise momentum SGD with error-feedback, in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019 (NeurIPS 2019), 2019, pp. 11446–11456), is mathematically problematic. Concretely, the original error bound for arbitrary sequences of learning rates is incorrect, which invalidates the upper bound in the convergence theorem for the algorithm. As evidence, we explicitly provide several counter-examples, for both convex and nonconvex cases, showing that the error bound fails. We fix the issue by providing a new error bound and its corresponding proof, leading to a new convergence theorem for the dist-EF-SGD algorithm and thereby recovering its mathematical analysis.
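For readers unfamiliar with the error-feedback mechanism on which dist-EF-SGD builds, the following is a minimal single-worker sketch in Python/NumPy. The top-k compressor, the quadratic objective, the constant learning rate, and all variable names are illustrative assumptions made here for exposition; this is not the dist-EF-SGD implementation of Zheng et al. nor the exact setting analyzed in the paper.

```python
# Minimal sketch of error-feedback compressed SGD (illustrative only).
# The top-k compressor, quadratic objective, and constant step size are
# assumptions for demonstration, not the authors' algorithm or settings.
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def grad(x):
    """Gradient of a simple quadratic f(x) = 0.5 * ||x||^2 (assumed objective)."""
    return x

d, k, T = 10, 2, 200
x = np.random.randn(d)      # model parameters
e = np.zeros(d)             # error (residual) memory
for t in range(T):
    lr = 0.1                # learning rate (constant here for simplicity)
    g = grad(x)             # a stochastic gradient would be used in practice
    p = lr * g + e          # add back the previously accumulated error
    delta = top_k(p, k)     # compress: only delta would be communicated
    e = p - delta           # remember what the compressor dropped
    x = x - delta           # apply the compressed update
print("final ||x|| =", np.linalg.norm(x))
```

The residual e accumulates whatever the compressor discards at each step; the error bound discussed in the abstract is precisely a control on the size of this accumulated residual under arbitrary learning-rate sequences, which is what the paper's counter-examples and corrected proof address.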

Copyright
© 2021 The Authors. Published by Atlantis Press B.V.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).


ISSN (Online)
1875-6883
ISSN (Print)
1875-6891

Cite this article

TY  - JOUR
AU  - Tran Thi Phuong
AU  - Le Trieu Phong
PY  - 2021
DA  - 2021/04/20
TI  - Communication-Efficient Distributed SGD with Error-Feedback, Revisited
JO  - International Journal of Computational Intelligence Systems
SP  - 1373
EP  - 1387
VL  - 14
IS  - 1
SN  - 1875-6883
UR  - https://doi.org/10.2991/ijcis.d.210412.001
DO  - 10.2991/ijcis.d.210412.001
ID  - Phuong2021
ER  -