SNIA Fosters Industry Knowledge of Collaborative Standards Engagements

November 2024 was a memorable month for engaging with audiences at SC24, the International Conference for High Performance Computing, Networking, Storage, and Analysis, and at Technology Live!, providing the latest on collaborative standards development and discussing high performance computing, artificial intelligence, and the future of storage.

At SC24, seven industry consortiums participated in an Open Standards Pavilion to discuss their joint activities in memory and interconnect standards, storage standards, networking fabric standards, and management and orchestration. Technology leaders from DMTF, Fibre Channel Industry Association, OpenFabrics Alliance, SNIA, Ultra Accelerator Link Consortium, Ultra Ethernet Consortium, and Universal Chiplet Interconnect Express™ Consortium shared how these organizations are collaborating to foster innovation as technology trends accelerate. CXL® Consortium, NVM Express®, and PCI-SIG® joined these groups in a lively panel discussion moderated by Richelle Ahlvers, Vice Chair of the SNIA Board of Directors, on their cooperation in standards development. Read More

Storage for AI Q&A

Our recent SNIA Data, Networking & Storage Forum (DNSF) webinar, “AI Storage: The Critical Role of Storage in Optimizing AI Training Workloads,” was an insightful look at how AI workloads interact with storage at every stage of the AI data pipeline, with a focus on data loading and checkpointing. Attendees gave this session a 5-star rating and asked a lot of wonderful questions. Our presenter, Ugur Kaynar, has answered them here. We’d love to hear your questions or feedback in the comments field.

Q. Great content on file and object storage. Are there any use cases for block storage in AI infrastructure requirements?

A. Today, by default, AI frameworks cannot directly access block storage and need a file system to interact with block storage during training. Block storage provides raw storage capacity, but it lacks the structure needed to manage files and directories. Like most AI frameworks, PyTorch depends on a file system to manage and access data stored on block storage.

Q. Do high-speed networks make significant enhancements to the I/O and checkpointing process? Read More
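To make the file-system dependency concrete, here is a minimal sketch of a PyTorch data-loading and checkpointing loop. The mount points, file layout, and model are hypothetical; the point is simply that both training reads and checkpoint writes go through ordinary file I/O on a file system layered over block storage.

```python
# Minimal sketch: PyTorch reads training data and writes checkpoints through a
# file system mounted on top of block storage. Paths and data layout are hypothetical.
import os
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader

DATA_DIR = "/mnt/training_data"   # hypothetical: file system (e.g., xfs) on a block device
CKPT_DIR = "/mnt/checkpoints"     # hypothetical: checkpoints are also written as files
os.makedirs(CKPT_DIR, exist_ok=True)

class TensorFileDataset(Dataset):
    """Loads one .pt tensor file per sample via ordinary file I/O."""
    def __init__(self, root):
        self.paths = sorted(
            os.path.join(root, f) for f in os.listdir(root) if f.endswith(".pt")
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        sample = torch.load(self.paths[idx])   # file read, not raw block access
        return sample["x"], sample["y"]

model = nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = DataLoader(TensorFileDataset(DATA_DIR), batch_size=32, num_workers=4)

for step, (x, y) in enumerate(loader):
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 1000 == 0:   # periodic checkpoint written to the file system
        torch.save(
            {"model": model.state_dict(),
             "optimizer": optimizer.state_dict(),
             "step": step},
            os.path.join(CKPT_DIR, f"ckpt_{step}.pt"),
        )
```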

Solving Cloud Object Storage Incompatibilities in a Multi-Vendor Community

The SNIA Cloud Storage Technologies Initiative (CSTI) conducted a poll early in 2024 during the live webinar “Navigating Complexities of Object Storage Compatibility,” in which 72% of organizations reported encountering incompatibility issues between various object storage implementations. These results prompted a call to action for SNIA to create an open expert community dedicated to resolving these issues and building best practices for the industry. Since then, SNIA CSTI has partnered with the SNIA Cloud Storage Technical Work Group (TWG) and successfully organized, hosted, and completed the first SNIA Cloud Object Storage Plugfest (multi-vendor interoperability testing), co-located with the SNIA Developer Conference (SDC) in September 2024 in Santa Clara, CA. Participating Plugfest companies included engineers from Dell, Google, Hammerspace, IBM, Microsoft, NetApp, VAST Data, and Versity Software. Three days of Plugfest testing discovered and resolved issues, and included a Birds of a Feather (BoF) session to gain consensus on next steps for the industry. Plugfest contributors are now planning two 2025 Plugfest events: Denver in April and Santa Clara in September. It’s a collaborative effort that we’ll discuss in detail on November 21, 2024 at our next live SNIA CSTI webinar, “Building a Community to Tackle Cloud Object Storage Incompatibilities.” At this webinar, we will share insights into industry best practices, explain the benefits your implementation may gain from improved compatibility, and provide an overview of how a wide range of vendors is uniting to address real customer issues, discussing: Read More
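For a sense of what this kind of interoperability testing looks like in practice, below is an illustrative sketch (not the actual Plugfest test suite) that runs the same S3-style operations against multiple vendor endpoints and compares the results. The endpoint URLs, bucket name, and credentials are placeholders, and the bucket is assumed to already exist on each endpoint.

```python
# Illustrative only: exercise identical S3-style operations against multiple
# vendor endpoints to spot behavioral differences. All endpoints/credentials
# below are placeholders, not real services.
import boto3

ENDPOINTS = {
    "vendor_a": "https://s3.vendor-a.example.com",
    "vendor_b": "https://objects.vendor-b.example.com",
}

def basic_compat_check(name, endpoint_url, bucket="plugfest-test"):
    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id="PLACEHOLDER_KEY",
        aws_secret_access_key="PLACEHOLDER_SECRET",
    )
    key = "hello.txt"
    # Write, read back, and list the same object on each implementation.
    s3.put_object(Bucket=bucket, Key=key, Body=b"hello object storage")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    listed = [o["Key"] for o in s3.list_objects_v2(Bucket=bucket).get("Contents", [])]
    print(f"{name}: round-trip ok={body == b'hello object storage'}, listed={key in listed}")

for name, url in ENDPOINTS.items():
    basic_compat_check(name, url)
```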

Computing in Space: Pushing Boundaries with Off-the-Shelf Tech

Can commercial off-the-shelf technology survive in space? This question is at the heart of Hewlett Packard Enterprise’s Spaceborne Computer-2 Project. Dr. Mark Fernandez (Principal Investigator for the Spaceborne Computer Project, Hewlett Packard Enterprise) and Cameron T. Brett (Chair, SNIA STA Forum) discuss the project with SNIA host Eric Wright; you can watch the full video here, listen to the podcast, or read on to learn more. By utilizing enterprise SAS and NVMe SSDs, the project is revolutionizing edge computing in space on the International Space Station (ISS). This breakthrough is accelerating experiment processing, such as DNA analysis, from months to minutes, and significantly improving astronaut health monitoring and safety protocols.

The Role of SAS in Space Read More

Unveiling the Power of 24G SAS: Enabling Storage Scalability in OCP Platforms

By Cameron T. Brett & Pankaj Kalra

In the fast-paced world of data centers, innovation is key to staying ahead of the curve. The Open Compute Project (OCP) has been at the forefront of driving innovation in data center hardware, and its latest embrace of 24G SAS technology is a testament to this commitment. Join us as we delve into the exciting world of 24G SAS and its transformative impact on OCP data centers.

OCP’s Embrace of 24G SAS

The OCP datacenter SAS-SATA device specification, a collaborative effort involving industry giants like Meta, HPE, and Microsoft, was first published in 2023. This specification laid the groundwork for the integration of 24G SAS technology into OCP data centers, marking a significant milestone in storage innovation.

The Rise of SAS in Hyperscale Environments Read More

The Evolution of Congestion Management in Fibre Channel

The Fibre Channel (FC) industry introduced Fabric Notifications in 2021 as a key resiliency mechanism for storage networks, designed to combat congestion, link integrity issues, and delivery errors. Since then, numerous manufacturers of FC SAN solutions have implemented Fabric Notifications and enhanced the overall user experience when deploying FC SANs. On August 27, 2024, the SNIA Data, Networking & Storage Forum is hosting a live webinar, “The Evolution of Congestion Management in Fibre Channel,” for a deep dive into Fibre Channel congestion management. We’ve convened a stellar, multi-vendor group of Fibre Channel experts with extensive Fibre Channel knowledge and different technology viewpoints to explore the evolution of Fabric Notifications and the available solutions that implement this exciting new technology. You’ll learn: Read More

Three Truths About Hard Drives and SSDs

An examination of the claim that flash will replace hard drives in the data center

“Hard drives will soon be a thing of the past.”

“The data center of the future is all-flash.”

Such predictions foretelling hard drives’ demise, perennially uttered by a few vocal proponents of flash-only technology, have not aged well.

Without question, flash storage is well suited to support applications that require high performance and speed. And flash revenue is growing, as is all-flash-array (AFA) revenue. But not at the expense of hard drives.

We are living in an era where the ubiquity of the cloud and the emergence of AI use cases have driven up the value of massive data sets. Hard drives, which today store by far the majority of the world’s exabytes (EB), are more indispensable to data center operators than ever. Industry analysts expect hard drives to be the primary beneficiary of continued EB growth, especially in enterprise and large cloud data centers—where the vast majority of the world’s data sets reside. Read More

Ceph Q&A

In a little over a month, more than 1,500 people have viewed the SNIA Cloud Storage Technologies Initiative (CSTI) live webinar, “Ceph: The Linux of Storage Today,” with SNIA experts Vincent Hsu and Tushar Gohad. If you missed it, you can watch it on-demand at the SNIA Educational Library. The live audience was extremely engaged with our presenters, asking several interesting questions. As promised, Vincent and Tushar have answered them here. Given the high level of interest in this topic, the CSTI is planning additional sessions on Ceph. Please follow us @SNIACloud or at SNIA LinkedIn for dates.

Q: How many snapshots can Ceph support per cluster?

A: There is no per-cluster limit. In the Ceph filesystem (cephfs) it is possible to create snapshots on a per-path basis, and currently the configurable default limit is 100 snapshots per path. Ceph block storage (rbd) does not impose limits on the number of snapshots. However, when using the native Linux kernel rbd client there is a limit of 510 snapshots per image.

Q: Does Ceph provide deduplication? If so, is it across object, file, and block storage? Read More
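As a small illustration of the per-path snapshot model described above, the sketch below creates and lists a cephfs snapshot by manipulating the special .snap directory. The mount point, directory, and snapshot name are hypothetical, and snapshots are assumed to be enabled on the file system.

```python
# Minimal sketch of cephfs per-path snapshots, assuming a cephfs mount at the
# hypothetical path /mnt/cephfs with snapshots enabled. Names are placeholders.
import os

data_dir = "/mnt/cephfs/projects/training-run-42"   # hypothetical subtree to snapshot
snap_name = "before-cleanup"

# Creating a directory inside the special ".snap" directory takes a snapshot
# of that subtree; removing that directory deletes the snapshot.
os.mkdir(os.path.join(data_dir, ".snap", snap_name))

# List the snapshots currently held for this path.
print(os.listdir(os.path.join(data_dir, ".snap")))

# Each snapshot appears as a read-only copy of the tree under .snap/<name>/,
# so individual files can be recovered with an ordinary copy.
```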

30 Speakers Highlight AI, Memory, Sustainability, and More at the May 21-22 Summit!

SNIA Compute, Memory, and Storage Summit is where solutions, architectures, and community come together. Our 2024 Summit – taking place virtually on May 21-22, 2024 – is the best example to date, featuring a stellar lineup of 30 speakers in sessions on artificial intelligence, the future of memory, sustainability, critical storage security issues, the latest on CXL®, UCIe™, and Ultra Ethernet, and more. “We’re excited to welcome executives, architects, developers, implementers, and users to our 12th annual Summit,” said David McIntyre, Compute, Memory, and Storage Summit Chair and member of the SNIA Board of Directors. “Our event features technology leaders from companies like Dell, IBM, Intel, Meta, Samsung – and many more – to bring us the latest developments in AI, compute, memory, storage, and security in our free online event. We hope you will attend live to ask questions of our experts as they present and watch those you miss on-demand.” Read More

Power Efficiency Measurement – Our Experts Make It Clear – Part 4

Measuring power efficiency in data center storage is a complex endeavor. A number of factors play a role in assessing individual storage devices or system-level logical storage for power efficiency. Luckily, our SNIA experts make the measuring easier! In this SNIA Experts on Data blog series, our experts in the SNIA Solid State Storage Technical Work Group and the SNIA Green Storage Initiative explore factors to consider in power efficiency measurement, including the nature of application workloads, IO streams, and access patterns; the choice of storage products (SSDs, HDDs, cloud storage, and more); the impact of hardware and software components (host bus adapters, drivers, OS layers); and access to read and write caches, CPU and GPU usage, and DRAM utilization. Join us for the final installment in our journey to better power efficiency – Part 4: Impact of Storage Architectures on Power Efficiency Measurement. And if you missed our earlier segments, click on the titles to read them: Part 1: Key Issues in Power Efficiency Measurement, Part 2: Impact of Workloads on Power Efficiency Measurement, and Part 3: Traditional Differences in Power Consumption: Hard Disk Drives vs Solid State Drives. Bookmark this blog series and explore the topic further in the SNIA Green Storage Knowledge Center.

Impact of Storage Architectures on Power Efficiency Measurement

Ultimately, the interplay between hardware and software storage architectures can have a substantial impact on power consumption. Optimizing these architectures based on workload characteristics and performance requirements can lead to better power efficiency and overall system performance. Read More
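As a simple illustration of the kind of metric this series discusses (and not the formal SNIA test methodology), storage power efficiency is commonly expressed as performance per watt averaged over a measurement interval. The sketch below uses hypothetical sample values for a transactional workload.

```python
# Illustrative calculation only (not the formal SNIA test methodology):
# power efficiency expressed as performance per watt, averaged over a
# steady-state measurement interval. All sample values are hypothetical.

def power_efficiency(perf_samples, power_samples_watts):
    """Average performance (e.g., IOPS or MiB/s) divided by average power (W)."""
    avg_perf = sum(perf_samples) / len(perf_samples)
    avg_power = sum(power_samples_watts) / len(power_samples_watts)
    return avg_perf / avg_power

# Hypothetical measurements taken once per second during steady state.
iops = [118_000, 121_500, 119_800, 120_200]
watts = [11.8, 12.1, 12.0, 11.9]

print(f"~{power_efficiency(iops, watts):,.0f} IOPS/W")
```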