Distributed and Private Coded Matrix Computation with Flexible Communication Load

Document Type

Conference Proceeding

Publication Date

7-1-2019

Abstract

Tensor operations, such as matrix multiplication, are central to large-scale machine learning applications. These operations can be carried out on a distributed computing platform with a master server on the user side and multiple workers in the cloud operating in parallel. For distributed platforms, it has recently been shown that coding over the input data matrices can reduce the computational delay, yielding a tradeoff between recovery threshold and communication load. In this work, we impose an additional security constraint on the data matrices and assume that workers can collude to eavesdrop on the content of these data matrices. Specifically, we introduce a novel class of secure codes, referred to as secure generalized PolyDot codes, that generalize previously published non-secure versions of these codes for matrix multiplication. These codes extend the state of the art by allowing a flexible trade-off between recovery threshold and communication load for a fixed maximum number of colluding workers.
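The following sketch illustrates the general idea behind secure polynomial codes for matrix multiplication (here in the MatDot-style special case, a relative of the PolyDot family; the block sizes, mask counts, and evaluation points are illustrative choices, not the paper's construction). A is split into p column blocks and B into p row blocks; t random mask matrices are folded into each encoding polynomial so that any t colluding workers see only masked shares. Each worker multiplies its two encoded shares, and the master interpolates the product polynomial from 2p + 2t - 1 worker results and reads off the coefficient of x^(p-1), which equals AB. Real information-theoretic security requires working over a finite field; this float-based demo only shows the encode/compute/decode mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

p, t = 2, 1              # p blocks per matrix, tolerate t colluding workers
n = 2 * p + 2 * t - 1    # recovery threshold: 5 workers needed

# Small integer data matrices; split A into p column blocks, B into p row blocks
A = rng.integers(0, 5, (4, 4))
B = rng.integers(0, 5, (4, 4))
A_blocks = np.hsplit(A, p)                            # A = [A1 A2]
B_blocks = np.vsplit(B, p)                            # B = [B1; B2]
R = [rng.integers(0, 5, (4, 2)) for _ in range(t)]    # random masks for A
S = [rng.integers(0, 5, (2, 4)) for _ in range(t)]    # random masks for B

# Encoding polynomials (0-indexed blocks):
#   pA(x) = sum_i A_i x^i       + sum_k R_k x^(p+k)
#   pB(x) = sum_j B_j x^(p-1-j) + sum_k S_k x^(p+k)
def pA(x):
    return sum(Ai * x**i for i, Ai in enumerate(A_blocks)) \
         + sum(Rk * x**(p + k) for k, Rk in enumerate(R))

def pB(x):
    return sum(Bj * x**(p - 1 - j) for j, Bj in enumerate(B_blocks)) \
         + sum(Sk * x**(p + k) for k, Sk in enumerate(S))

# Worker i evaluates both encodings at its point x_i and multiplies them
xs = np.arange(1, n + 1)
worker_results = [pA(x) @ pB(x) for x in xs]          # each result is 4x4

# Master interpolates the degree-(n-1) product polynomial; the coefficient
# of x^(p-1) collects exactly sum_i A_i B_i = A @ B (mask terms land on
# higher powers of x and drop out).
V = np.vander(xs, n, increasing=True).astype(float)   # V[i, j] = x_i**j
Y = np.stack([W.reshape(-1) for W in worker_results]).astype(float)
coeffs = np.linalg.solve(V, Y)                        # one row per power of x
AB = np.rint(coeffs[p - 1]).astype(int).reshape(4, 4)

assert np.array_equal(AB, A @ B)
```

Increasing p lowers the per-worker communication load at the cost of a higher recovery threshold, which is the flexible trade-off the abstract refers to.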

Identifier

85073151077 (Scopus)

ISBN

978-1-5386-9291-2

Publication Title

IEEE International Symposium on Information Theory Proceedings

External Full Text Location

https://doi.org/10.1109/ISIT.2019.8849606

ISSN

2157-8095

First Page

1092

Last Page

1096

Volume

2019-July

Grant

1525629

Fund Ref

National Science Foundation
