Scheduling memory access on a distributed cloud storage network
Document Type
Conference Proceeding
Publication Date
5-29-2012
Abstract
Memory-access speed continues to fall behind the growing speeds of network transmission links. High-speed network links provide a means to connect memory placed in hosts located in different corners of the network. These hosts are called storage system units (SSUs), where data can be stored. Cloud storage provided by a single server can offer a user large amounts of storage, but only at low access speeds. A distributed approach to cloud storage is an attractive alternative. In a distributed cloud, small high-speed memories at SSUs can potentially increase the memory-access speed for data processing and transmission. However, the latencies of the SSUs may differ, so the selection of SSUs affects the overall memory-access speed. This paper proposes a latency-aware scheduling scheme to access data from SSUs. The scheme determines the minimum latency requirement for a given dataset and selects available SSUs with the required latencies. Furthermore, because the latencies of some selected SSUs may be large, the proposed scheme notifies SSUs in advance of the expected time to perform data access. Simulation results show that the proposed scheme achieves faster access speeds than a scheme that randomly selects SSUs and another that greedily selects SSUs with small latencies. © 2012 IEEE.
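The abstract describes the scheme only at a high level: derive a latency requirement for a dataset, pick available SSUs that satisfy it, and notify slower SSUs ahead of time. The following Python sketch illustrates that idea under stated assumptions; the names (SSU, schedule_access), the single per-SSU latency value, and the "notify early enough to meet the deadline" rule are illustrative choices, not the authors' actual algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SSU:
    """A storage system unit with a measured round-trip access latency (seconds)."""
    name: str
    latency: float
    available: bool = True

def schedule_access(ssus: List[SSU], blocks_needed: int,
                    deadline: float) -> Optional[List[Tuple[str, float]]]:
    """Select enough available SSUs whose latency meets the dataset's
    deadline, and compute how far in advance each SSU must be notified
    so its data arrives by that deadline."""
    candidates = [s for s in ssus if s.available and s.latency <= deadline]
    if len(candidates) < blocks_needed:
        return None  # not enough SSUs can satisfy the latency requirement
    # Among qualifying SSUs, prefer the faster ones.
    chosen = sorted(candidates, key=lambda s: s.latency)[:blocks_needed]
    # Notify each SSU early enough that its response lands at the deadline.
    return [(s.name, deadline - s.latency) for s in chosen]

if __name__ == "__main__":
    pool = [SSU("ssu-a", 0.8), SSU("ssu-b", 2.5),
            SSU("ssu-c", 1.7, available=False), SSU("ssu-d", 1.2)]
    # Hypothetical request: 2 data blocks, must complete within 2.0 s.
    print(schedule_access(pool, blocks_needed=2, deadline=2.0))
```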
Identifier
84861427601 (Scopus)
ISBN
9781467309394
Publication Title
2012 21st Annual Wireless and Optical Communications Conference (WOCC 2012)
External Full Text Location
https://doi.org/10.1109/WOCC.2012.6198152
First Page
71
Last Page
76
Recommended Citation
Rojas-Cessa, Roberto; Cai, Lin; and Kijkanjanarat, Taweesak, "Scheduling memory access on a distributed cloud storage network" (2012). Faculty Publications. 18243.
https://digitalcommons.njit.edu/fac_pubs/18243
