
The third Workshop on Challenges and Opportunities of Efficient and Performant Storage Systems (CHEOPS'23)

Held in conjunction with EuroSys 2023 on May 8th, 2023, in Rome, Italy

Agenda

The workshop took place in an in-person format. The proceedings are available in the ACM Digital Library. Two selected papers were also published in ACM SIGOPS Operating Systems Review (Volume 58, Issue 1).

Time Content
8:30-9:00 Registration
9:00-9:10 Welcome - Shadi Ibrahim
  Keynote Session - Chair: Jalil Boukhobza
9:10-10:00 Keynote Speaker - Angelos Bilas (FORTH and University of Crete)
Efficient key-value store design and implementation for fast storage devices (More) Slides
  Session 1 - Chair: Kira Duwe
10:00-10:30 Lethe: Secure Deletion by Addition Slides
Eugene Chou (UC Santa Cruz), Leo Conrad-Shah (UC Santa Cruz), Austen Barker (Sandia National Laboratories), Andrew Quinn (UC Santa Cruz), Ethan L. Miller (UC Santa Cruz / Pure Storage), Darrell D. E. Long (UC Santa Cruz)
10:30-11:00 Coffee break
  Session 2 - Chair: Michael Kuhn
11:00-11:30 An Evaluation of DAOS for Simulation and Deep Learning HPC Workloads Slides
Luke Logan (Illinois Institute of Technology), Jay Lofstead (Sandia National Laboratories), Xian-He Sun (Illinois Institute of Technology), Anthony Kougkas (Illinois Institute of Technology)
11:30-12:00 Intelligent Data Migration Policies in a Write-Optimized Copy-on-Write Tiered Storage Stack Slides
Johannes Wünsche (Otto von Guericke University Magdeburg), Sajad Karim (Otto von Guericke University Magdeburg), Michael Kuhn (Otto von Guericke University Magdeburg), Gunter Saake (Otto von Guericke University Magdeburg), David Broneske (German Center for Higher Education Research and Science Studies)
12:00-12:30 Is Bare-metal I/O Performance with User-defined Storage Drives Inside VMs Possible?: Benchmarking libvfio-user vs. Common Storage Virtualization Configurations Slides
Sebastián Rolón (McGill University), Oana Balmau (McGill University)
12:30-13:00 Performance Characterization of Modern Storage Stacks: POSIX I/O, libaio, SPDK, and io_uring Slides
Zebin Ren (Vrije Universiteit Amsterdam), Animesh Trivedi (Vrije Universiteit Amsterdam)
13:00-14:30 Lunch
  Invited talk Session 1 - Chair: Jean-Thomas Acquaviva
14:30-15:00 Oana Balmau (McGill University, Canada)
Characterizing I/O Patterns in Machine Learning (More) Slides
15:00-15:30 Radu Stoica (IBM Zurich, Switzerland)
Next-Gen Cloud Storage: Leveraging DPUs to Virtualize File System Services (More) Slides
15:30-16:00 Jay Lofstead (Sandia National Laboratories, USA)
Making Data Useful Through Metadata Techniques (More) Slides
16:00-16:30 Coffee break
  Invited talk Session 2 - Chair: Shadi Ibrahim
16:30-17:00 Kira Duwe (EPFL, Switzerland)
Efficient management of self-describing data formats (More) Slides
17:00-17:20 Discussion
17:20-17:30 Closing Remarks

Keynote Speaker

Angelos Bilas, FORTH and University of Crete

Keynote: Efficient key-value store design and implementation for fast storage devices

Abstract
In this talk I will present our work on two aspects of key-value stores for fast storage devices:
(a) Indexing and hybrid key-value placement for reducing I/O amplification. In our work, we propose hybrid KV placement to reduce GC overhead significantly and increase the benefits of using key-value separation. Our Parallax prototype classifies KV pairs into three categories: small, medium, and large. Parallax then uses a different approach for each category: it always places large values in a log and small values in place. For medium values it uses a mixed strategy that combines the benefits of using a log while eliminating GC overhead.
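As a rough illustration of the size-based routing described above, consider the following Python sketch; the thresholds and data structures are illustrative assumptions, not Parallax's actual implementation:

# Illustrative sketch of hybrid key-value placement (not Parallax code).
SMALL_MAX = 128    # bytes; threshold values are made-up assumptions
LARGE_MIN = 4096

def place(key, value, index, value_log):
    """Route a KV pair to a placement strategy based on value size."""
    if len(value) <= SMALL_MAX:
        # Small values are stored in place: no log, hence no GC.
        index[key] = ("inline", value)
    elif len(value) >= LARGE_MIN:
        # Large values always go to a log; the index keeps a pointer,
        # which keeps the index small and reduces I/O amplification.
        value_log.append(value)
        index[key] = ("log", len(value_log) - 1)
    else:
        # Medium values use a mixed strategy: logged for fast ingestion,
        # later rewritten in place during compaction so the log does not
        # need garbage collection (simplified here).
        value_log.append(value)
        index[key] = ("medium", len(value_log) - 1)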
(b) Using memory-mapped I/O for reducing caching overhead. Memory-mapped I/O provides several potential advantages over explicit read/write I/O, especially for low-latency devices: (1) it does not require a system call, (2) it incurs almost zero overhead for data in memory (I/O cache hits), and (3) it removes copies between kernel and user space. We first show that the performance of Linux memory-mapped I/O does not scale beyond 8 threads on a 32-core server. To overcome these limitations, we propose FastMap, an alternative design for the memory-mapped I/O path in Linux that provides scalable access to fast storage devices in multi-core servers by reducing synchronization overhead in the common path.
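The API-level difference between the two I/O paths can be sketched in a few lines of Python; this merely illustrates the contrast drawn above, not FastMap itself, and the file name is a hypothetical placeholder:

# Explicit I/O vs. memory-mapped I/O (illustrative only).
import mmap, os

fd = os.open("data.bin", os.O_RDONLY)  # hypothetical input file

# Explicit I/O: every access pays a system call and a kernel-to-user copy.
buf = os.pread(fd, 4096, 0)

# Memory-mapped I/O: once mapped, cached pages are accessed directly,
# with no system call and no extra copy; only page faults enter the kernel.
with mmap.mmap(fd, 0, prot=mmap.PROT_READ) as mm:
    first_page = mm[:4096]

os.close(fd)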
Finally, I will briefly present the current direction of our work on key-value stores.

Bio
Angelos Bilas is a Professor in the Department of Computer Science, University of Crete, where he also held an Associate Professor position between 2002 and 2011, and a collaborating researcher at FORTH-ICS. Between 2016 and 2020 he served as Chair of the Department of Computer Science. Prof. Bilas received his diploma in Computer Engineering from the University of Patras in 1993, and his Masters and Ph.D. degrees in Computer Science from Princeton University, NJ in 1995 and 1998, respectively. Between 1998 and 2002 he was an Assistant Professor with the ECE Department at the University of Toronto. His current interests include systems software for efficient storage systems, computing and storage architectures, low-latency high-bandwidth communication protocols, and runtime-system support for parallel systems. His work has been published in prestigious conferences in computer architecture and systems, including USENIX ATC, ACM SoCC, and EuroSys.

He has been the recipient of a Marie Curie Excellent Teams Award (2005-2009) and has published more than 100 papers in peer-reviewed conferences, workshops, and journals. He has served on the Program Committees of more than 90 conferences and workshops in his area, as Program Chair for IEEE Cluster 2010, and on the Editorial Boards of IEEE Computer Architecture Letters (CAL) [2007-2011] and the Journal of Parallel and Distributed Computing (JPDC) [May 2011-May 2018]; he currently serves on the Editorial Board of ACM Transactions on Storage [2016-]. He has (co)supervised or is (co)supervising 35 Masters and 8 Ph.D. students.

He has participated in more than 45 EU or nationally funded projects, is the scientific director for the H2020 project EVOLVE [2019-2021] (HPC and Big Data enabled Large-scale Test-beds and Applications), and has been the coordinator of the FP7 EU project IOLanes [2010-2013]. He has collaborated extensively with international industry, has been awarded 4 patents and 1 patent application, and his research has been licensed to startups (two rounds of research results) and used in products by a Fortune50 company. [http://www.csd.uoc.gr/~bilas]

Invited Speakers

Oana Balmau, McGill University

Invited Talk: Characterizing I/O Patterns in Machine Learning

Abstract
Data is the driving force behind machine learning (ML) algorithms. The way we ingest, store, and serve data can impact end-to-end training and inference performance significantly. For instance, as much as 50% of the power can go into storage and data cleaning in large production settings. The amount of data that we produce is growing exponentially, making it expensive and difficult to keep entire training datasets in main memory. Increasingly, ML algorithms will need to access data directly from persistent storage in an efficient manner. To address this challenge, this work sets out to characterize I/O patterns in ML, with a focus on data pre-processing and training.
We use trace collection to understand storage impact in ML. Key factors we are investigating include the workload type, the software framework used (e.g., PyTorch, TensorFlow), the accelerator type (e.g., GPU, TPU), the dataset-size-to-memory ratio, and the degree of parallelism. The trace collection is done mainly through eBPF and other system monitoring tools such as mpstat and NVIDIA Nsight. Our traces include VFS-layer calls such as read, write, open, and create, as well as mmap calls, block I/O accesses, CPU use, memory use, and accelerator use. Based on the trace analysis, we plan to build a synthetic I/O workload generator. The workload generator will accurately reproduce I/O patterns for representative ML workloads while simulating the computation time.
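A generator of this kind could, in its simplest form, replay a recorded trace while sleeping between I/Os to mimic computation. The following Python sketch is a hypothetical illustration; the trace format and file name are assumptions, not the actual tool:

# Hypothetical trace replay with simulated compute time (illustrative).
import os, time

def replay(path, trace):
    """Re-issue reads from a trace of (offset, size, think_time) tuples."""
    fd = os.open(path, os.O_RDONLY)
    try:
        for offset, size, think_time in trace:
            os.pread(fd, size, offset)  # reproduce the I/O pattern
            time.sleep(think_time)      # simulate computation between I/Os
    finally:
        os.close(fd)

# Example: two 1 MiB reads separated by 5 ms of simulated compute.
# replay("dataset.bin", [(0, 1 << 20, 0.005), (1 << 20, 1 << 20, 0.005)])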

Bio
Oana Balmau is an Assistant Professor in the School of Computer Science at McGill University, where she leads the DISCS Lab. Her research focuses on storage systems and data management systems, with an emphasis on machine learning, data science, and edge computing workloads. She completed her PhD at the University of Sydney, advised by Prof. Willy Zwaenepoel. Before her PhD, Oana earned her Bachelors and Masters degrees in Computer Science from EPFL, Switzerland. Oana won the CORE John Makepeace Bennett Award 2021 for the best computer science dissertation in Australia and New Zealand, as well as an Honorable Mention for the ACM SIGOPS Dennis M. Ritchie Doctoral Dissertation Award 2021. She is also part of MLCommons, an open engineering consortium that aims to accelerate machine learning innovation, where she leads the effort for storage benchmarking.

Radu Stoica, IBM Zurich

Invited Talk: Next-Gen Cloud Storage: Leveraging DPUs to Virtualize File System Services

Abstract
As we move towards hyper-converged cloud solutions, the efficiency and overheads of distributed file systems at the cloud tenant side (i.e., the client) become of paramount importance. Often, the client-side driver of a cloud file system is complex and CPU intensive, deeply coupled with the backend implementation, and requires optimizing multiple intrusive knobs. We present our approach to decoupling the file system client from its backend implementation by virtualizing it with an off-the-shelf DPU using the Linux virtio-fs/FUSE framework. The decoupling allows us to offload the file system client execution to a DPU, which is managed and optimized by the cloud provider, while freeing up host CPU cycles. DPFS, our proposed framework, is 4.4x more CPU efficient per I/O and delivers comparable performance to the tenant with zero configuration or modification to their host software stack, while allowing workload-specific backend optimizations.

Bio
Radu Stoica is a Staff Research Scientist and Manager at the IBM Research Lab in Zurich, Switzerland. His current research interests lie in the area of cloud infrastructure software.
Radu received his Ph.D. from EPFL in 2014. His thesis work focused on adapting relational database systems to take full advantage of emerging non-volatile storage technologies such as flash memory. After graduation, he joined IBM Research Zurich, working first on designing enterprise-grade SSD controllers for multiple generations of IBM all-flash array storage appliances. In 2022, he was promoted to Research Manager of the Hybrid Cloud / Infrastructure Software group, which focuses on developing cloud-native storage systems, serverless data processing frameworks, and software infrastructure to support large-scale machine learning training and inference.
Radu’s accomplishments include co-authoring over 30 scientific publications, holding more than 40 US and European patents in various stages of approval, and being recognized with several IBM technical awards, including the Corporate Technical Award.

Jay Lofstead, Sandia National Laboratories

Invited Talk: Making Data Useful Through Metadata Techniques

Abstract
As science data volumes continue to grow at rates faster than network bandwidth, the challenge shifts to finding and analyzing the right data rather than simply searching data on demand. For many years, different tool generations have offered different kinds of functionality to aid in making data useful, justifying storage costs. This talk will describe the three existing tool generations as well as an emerging fourth generation aimed at making scientists more productive when working with their vast data sets.

Bio
Jay Lofstead is a Principal Member of Technical Staff in the Scalable System Software Group at Sandia National Laboratories. He works on issues related to scientific data for high performance computing applications. Most of his work has been related to I/O and data movement, while his more recent work investigates infrastructure to support workflows, storage systems, energy use of data, and resilience techniques that enable reducing data movement.

Kira Duwe, EPFL

Invited Talk: Efficient management of self-describing data formats

Abstract
In times of growing data sizes, performing insightful analysis is a major challenge, especially in HPC. The software stack in these systems and the hardware hierarchy used complicate data analysis. While data can be filtered using data formats like NetCDF, HDF5, and ADIOS2, querying metadata in these libraries is difficult since it is stored within the corresponding data formats on the data servers.
Therefore, we propose dissecting the file formats into file metadata and data. This approach enables novel and efficient data management techniques by using dedicated backends, such as key-value and object stores, to manage the different data categories transparently. Furthermore, we develop a novel data analysis interface (DAI) for the JULEA storage framework that allows the pre-computation of post-processing operations and the tagging of specific features to reduce data access during analysis. The evaluation shows that the DAI speeds up metadata queries by up to a factor of 60,000 compared to the native ADIOS2 API.
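The core idea of separating metadata from bulk data can be shown with a toy Python sketch; the names and structures here are hypothetical stand-ins for dedicated key-value and object-store backends, not the JULEA/DAI implementation:

# Toy illustration of splitting self-describing formats into metadata
# and data, so metadata queries never touch the bulk data (illustrative).
metadata_kv = {}   # stands in for a key-value backend
object_store = {}  # stands in for an object-store backend

def put_variable(name, attrs, payload):
    metadata_kv[name] = attrs      # small, query-friendly records
    object_store[name] = payload   # bulk data, fetched only on demand

def query(predicate):
    """Answer a metadata query from the KV store alone (no data access)."""
    return [n for n, attrs in metadata_kv.items() if predicate(attrs)]

put_variable("temperature", {"unit": "K", "max": 310.2}, b"<bulk data>")
hot = query(lambda a: a.get("max", 0) > 300)  # -> ["temperature"]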

Bio
Kira Duwe is a postdoc in the DIAS lab at EPFL, Switzerland. She researches HPC storage systems and the efficient management of self-describing data formats. She received her PhD from Otto von Guericke University Magdeburg, supervised by Jun.-Prof. Michael Kuhn. Kira studied Computer Science at the University of Hamburg and worked at the German Climate Computing Center.

Workshop Description

The CHEOPS workshop is aimed at researchers, developers of scientific applications, engineers, and everyone interested in the evolution of storage systems. As the developments of computing power, storage, and network technologies continue to diverge, the bandwidth performance gap between them widens. This trend, combined with ever-growing data volumes and data-driven computing such as machine learning, results in I/O and storage limitations that impact the scalability and efficiency of current and future computing systems. Some of these challenges are quantitative, such as scaling to match exascale system requirements or reducing the latency of the software stack to efficiently integrate new generations of hardware like storage class memory (SCM). Other issues are more subtle and arise with the increased complexity of storage solutions, such as smarter and more capable data management tools, monitoring systems, and interoperability between I/O components or data formats.

The main objective of this workshop is to discuss state-of-the-art research, innovative ideas and experiences that focus on the design and implementation of storage systems in both academic and industrial worlds.

Important Dates

Submission Guidelines

In order to guarantee the quality of the submissions, we have formed a globally distributed, diverse program committee. All submissions will be reviewed by the program committee. We will use HotCRP to manage the submissions. The reviewing process will be double-blind, with at least 3 reviews for each submission. An online discussion will determine which papers to accept.

Only original and novel work not currently under review in other venues will be considered for publication. Submissions can be either full papers (6 pages) or short papers (4 pages). The page count includes the title, text, figures, and appendices, but excludes the references. Submissions must be made electronically as PDF files formatted according to the submission rules of EuroSys. Accepted submissions will have to comply with the EuroSys proceedings format. One author of each accepted paper is required to register for the workshop and present the paper. Extended versions of selected papers will be considered for publication in the ACM SIGOPS Operating Systems Review journal.

Presentations can be given either in-person or via pre-recorded videos. In the latter case, the organizers will collect questions during the presentation and perform a live Q&A session with the presenter, who should be available via video conference.

Rights Forms

You will find a link to the ACM copyright form on your paper’s HotCRP page. Once you have completed it, ACM will send out the information and LaTeX directives (DOI, ISBN, etc.) needed to complete the camera-ready version of your paper.

Camera-Ready Format

You should use the acmart document class (https://www.acm.org/publications/proceedings-template, the same as for submission), as follows: \documentclass[sigplan,10pt]{acmart}

As mentioned above, you will receive instructions regarding some LaTeX directives (\setcopyright, \acmConference, \acmDOI, etc.) after completing the copyright form.
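For orientation, a minimal camera-ready preamble might look as follows; every value in the copyright block below is a placeholder and must be replaced with the directives ACM sends you:

\documentclass[sigplan,10pt]{acmart}

% Placeholder values -- replace with the directives provided by ACM.
\setcopyright{acmlicensed}
\acmConference[CHEOPS '23]{Workshop on Challenges and Opportunities of
  Efficient and Performant Storage Systems}{May 8, 2023}{Rome, Italy}
\acmDOI{10.1145/nnnnnnn.nnnnnnn}
\acmISBN{978-x-xxxx-xxxx-x/yy/mm}

\begin{document}
\title{Your Paper Title}
\author{First Author}
\affiliation{\institution{Your Institution} \city{City} \country{Country}}
\maketitle
% paper body
\end{document}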

All accepted papers can use up to 2 additional pages for the camera-ready version, for a final limit of 8 (full papers) or 6 (short papers) pages, references not included.

Note that Type 1 fonts (scalable) should be used, not Type 3 (bitmapped), and that all fonts must be embedded. The type and embedding of fonts can be checked with various tools, including pdffonts. Page numbers should be suppressed. Also make sure that the PDF is searchable by testing the search function in a PDF reader.

Uploading Final Versions

The camera-ready version of your paper and its LaTeX sources have to be uploaded via HotCRP. As a reminder, the camera-ready deadline for all papers is April 2nd, 2023.

Topics of Interest

Submissions may be more hands-on than research papers; we therefore explicitly encourage submissions on work in its early stages. Topics of interest include, but are not limited to:

Work in Progress (WIP) Session

There will be a WIP session where presenters give brief (5-minute) talks on their ongoing work, presenting fresh problems and solutions. WIP presentations will not be included in the proceedings. A one-page abstract is required.

Deadlines

Submissions by email: Please email your submission as a PDF attachment of the one-page abstract to Shadi Ibrahim (shadi.ibrahim@inria.fr) and Suren Byna (byna.1@osu.edu). Put “Cheops 2023 WIP” as the first part of the message subject.

Organization

Steering Committee

General Chair

Program Chairs

Program Committee (TBC)