Understanding and modeling lossy compression schemes on HPC scientific data

Document Type

Conference Proceeding

Publication Date

8-3-2018

Abstract

Scientific simulations generate large amounts of floating-point data, which are often not very compressible using traditional reduction schemes such as deduplication or lossless compression. The emergence of lossy floating-point compression holds promise to satisfy the data reduction demand of HPC applications; however, lossy compression has not been widely adopted in production science. We believe a fundamental reason is a lack of understanding of the benefits, pitfalls, and performance of lossy compression on scientific data. In this paper, we conduct a comprehensive study of state-of-the-art lossy compressors, including ZFP, SZ, and ISABELA, using real and representative HPC datasets. Our evaluation reveals the complex interplay between compressor design, data features, and compression performance. The impact of reduced accuracy on data analytics is also examined through a case study of fusion blob detection, offering domain scientists insight into what to expect from fidelity loss. Furthermore, the trial-and-error approach to understanding compression performance involves substantial compute and storage overhead. To this end, we propose a sampling-based estimation method that extrapolates the reduction ratio from data samples, guiding domain scientists toward more informed data reduction decisions.
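
The sampling-based estimation idea from the abstract can be illustrated with a minimal sketch: compress a small random sample of fixed-size blocks and extrapolate the observed ratio to the whole dataset. The sketch below uses `zlib` as a stand-in compressor purely for illustration (the paper targets lossy compressors such as ZFP and SZ); the function name, block size, and sample fraction are all assumptions, not the authors' implementation.

```python
import zlib
import numpy as np

def estimate_ratio(data, sample_frac=0.05, block=4096, seed=0,
                   compress=zlib.compress):
    """Estimate the compression ratio of a 1-D float array by compressing
    a random sample of fixed-size blocks and extrapolating.

    Illustrative sketch only: `compress` can be swapped for any bytes-in,
    bytes-out compressor; zlib is a lossless stand-in for a lossy codec.
    """
    rng = np.random.default_rng(seed)
    n_blocks = len(data) // block
    k = max(1, int(n_blocks * sample_frac))       # number of sampled blocks
    picks = rng.choice(n_blocks, size=k, replace=False)
    raw = compressed = 0
    for b in picks:
        chunk = data[b * block:(b + 1) * block].tobytes()
        raw += len(chunk)
        compressed += len(compress(chunk))
    return raw / compressed                        # extrapolated ratio

# Highly regular data compresses far better than incompressible noise.
constant = np.zeros(1_000_000)
noise = np.random.default_rng(1).standard_normal(1_000_000)
print(estimate_ratio(constant), estimate_ratio(noise))
```

Because only a small fraction of the blocks is ever compressed, the estimate avoids the full-dataset trial-and-error cost the abstract describes, at the price of some extrapolation error on data whose compressibility varies across the domain.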

Identifier

85052190849 (Scopus)

ISBN

9781538643686

Publication Title

Proceedings of the 2018 IEEE 32nd International Parallel and Distributed Processing Symposium (IPDPS 2018)

External Full Text Location

https://doi.org/10.1109/IPDPS.2018.00044

First Page

348

Last Page

357

Grant

CCF-1717660

Fund Ref

National Science Foundation
