Unveiling the Power of 24G SAS: Enabling Storage Scalability in OCP Platforms

By Cameron T. Brett & Pankaj Kalra

In the fast-paced world of data centers, innovation is key to staying ahead of the curve. The Open Compute Project (OCP) has been at the forefront of driving innovation in data center hardware, and its latest embrace of 24G SAS technology is a testament to this commitment. Join us as we delve into the exciting world of 24G SAS and its transformative impact on OCP data centers.

OCP’s Embrace of 24G SAS

The OCP datacenter SAS-SATA device specification, a collaborative effort involving industry giants like Meta, HPE, and Microsoft, was first published in 2023. This specification laid the groundwork for the integration of 24G SAS technology into OCP data centers, marking a significant milestone in storage innovation.

The Rise of SAS in Hyperscale Environments

New Standard Brings Certainty to the Process of Proper Eradication of Data

A wide variety of data types are recorded on a range of data storage technologies, and businesses need to ensure that data residing on storage devices and media is disposed of in a way that ensures compliance, with verification that the data has actually been eradicated.

When media are repurposed or retired from use, the stored data often must be eliminated (sanitized) to avoid potential data breaches. Depending on the storage technology, specific methods must be employed to ensure that the data is eradicated, in a verifiable manner, from both the logical/virtual storage and the underlying media-aligned storage.
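As a rough illustration of the "verifiable" part, here is a minimal sketch of a post-sanitization spot check in Python. It assumes a Linux block device that should read back as all zeros after a purge; the device path, sample count, and block size are hypothetical, and a spot check like this is not a substitute for the device-level verification the standards call for.

```python
# verify_wipe.py -- a minimal post-sanitization spot check (illustrative only).
# Assumes a Linux block device expected to read back as all zeros after a purge.
# The device path, sample count, and block size below are hypothetical.
import os
import random

DEVICE = "/dev/sdX"   # hypothetical path; replace with the sanitized drive
SAMPLES = 1024        # number of random 4 KiB regions to spot-check
BLOCK = 4096

def verify_zeroed(path: str, samples: int) -> bool:
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)   # total device size in bytes
        for _ in range(samples):
            offset = random.randrange(0, max(size - BLOCK, 1))
            data = os.pread(fd, BLOCK, offset)
            if any(data):                     # any non-zero byte means residual data
                print(f"non-zero data found at offset {offset}")
                return False
        return True
    finally:
        os.close(fd)

if __name__ == "__main__":
    ok = verify_zeroed(DEVICE, SAMPLES)
    print("spot check passed" if ok else "spot check FAILED")
```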

Existing published standards such as NIST SP 800-88 Revision 1 (Media Sanitization) and ISO/IEC 27040:2015 (Information technology – Security techniques – Storage security) provide guidance on sanitization covering storage technologies from the last decade, but they have not kept pace with current technology or legislative requirements.

New standard makes conformance clearer


Storage Implications of Doing More at the Edge

In our SNIA Networking Storage Forum webcast series, “Storage Life on the Edge,” we’ve been examining the many ways the edge is impacting how data is processed, analyzed, and stored. I encourage you to check out the sessions we’ve done to date. On June 15, 2022, we continue the series with “Storage Life on the Edge: Accelerated Performance Strategies,” where our SNIA experts will discuss the need for faster computing, faster access to storage, and faster movement of data at the edge, as well as between the edge and the data center.

Storage for AI Q&A

What types of storage are needed for different aspects of AI? That was one of the many topics covered in our SNIA Networking Storage Forum (NSF) webcast “Storage for AI Applications.” It was a fascinating discussion, and I encourage you to check it out on-demand. Our panel of experts answered many questions during the live roundtable Q&A. Here are answers to those questions, as well as to the ones we didn’t have time to address.

Q. What are the typical data set sizes and workloads in AI/ML, in terms of data set size, sequential/random access, and write/read mix?

A. Data sets vary incredibly from use case to use case. They may range from GBs to possibly hundreds of PBs. In general, the workloads are very read-heavy, perhaps 95%+ reads. While sequential reads would be preferable, the access patterns tend to be closer to random. In addition, different use cases will have very different data sizes; some items may be GBs large, while others may be under 1 KB. These different sizes have a direct impact on storage performance and may change how you decide to store the data.
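To make that workload description a bit more concrete, here is a toy generator for the kind of access pattern described in the answer: heavily random, roughly 95% reads, with request sizes that vary widely by use case. The request sizes and data set size below are purely illustrative assumptions, not numbers from the webcast.

```python
# A toy generator for a read-heavy, mostly random access pattern.
# Request sizes and data set size are hypothetical, for illustration only.
import random

def synthetic_ai_workload(num_ops: int, dataset_bytes: int, read_fraction: float = 0.95):
    """Yield (op, offset, length) tuples approximating a read-heavy random workload."""
    sizes = [512, 4096, 65536, 1 << 20]   # assumed request sizes: <1 KB up to 1 MiB
    for _ in range(num_ops):
        op = "read" if random.random() < read_fraction else "write"
        length = random.choice(sizes)
        offset = random.randrange(0, max(dataset_bytes - length, 1))
        yield op, offset, length

if __name__ == "__main__":
    for op, off, length in synthetic_ai_workload(num_ops=10, dataset_bytes=1 << 30):
        print(f"{op:5s} offset={off:>12d} len={length}")
```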

Storage for Applications Webcast Series

Everyone enjoys having storage that is fast, reliable, scalable, and affordable. But it turns out different applications have different storage needs in terms of I/O requirements, capacity, data sharing, and security. Some need local storage, some need a centralized storage array, and others need distributed storage, which itself could be local or networked. One application might excel with block storage while another does better with file or object storage.

For example, an OLTP database might require small amounts of very fast flash storage; a media or streaming application might need vast quantities of inexpensive disk storage with extra security safeguards; while a third application might require a mix of different storage tiers with multiple servers sharing the same data.

This SNIA Networking Storage Forum “Storage for Applications” webcast series will cover the storage requirements for specific uses such as artificial intelligence (AI), database, cloud, media & entertainment, automotive, edge, and more. With limited resources, it’s important to understand the storage intent of the applications in order to choose the right storage and storage networking strategy, rather than discovering the hard way that you’ve chosen the wrong solution for your application.

We kick off this series on October 5, 2020 with “Storage for AI Applications.” AI is a technology which itself encompasses a broad range of use cases, largely divided into training and inference.
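To show how that "different applications, different storage" reasoning might be captured in practice, here is a deliberately simplified decision helper. The profile fields, thresholds, and suggested categories are invented for illustration and echo the OLTP, streaming media, and shared multi-tier examples above; real planning involves far more variables.

```python
# A deliberately simplified mapping from an application's I/O profile to a
# storage approach. All fields, thresholds, and categories are illustrative
# assumptions, not recommendations from the webcast series.
from dataclasses import dataclass

@dataclass
class AppProfile:
    latency_sensitive: bool      # e.g. an OLTP database
    capacity_tb: float           # rough working-set size
    shared_across_servers: bool  # multiple servers accessing the same data

def suggest_storage(profile: AppProfile) -> str:
    if profile.shared_across_servers:
        return "networked file or object storage (possibly tiered)"
    if profile.latency_sensitive and profile.capacity_tb < 10:
        return "local or array-based flash block storage"
    if profile.capacity_tb >= 100:
        return "high-capacity disk or object storage with extra safeguards"
    return "general-purpose block or file storage"

if __name__ == "__main__":
    oltp = AppProfile(latency_sensitive=True, capacity_tb=2, shared_across_servers=False)
    media = AppProfile(latency_sensitive=False, capacity_tb=500, shared_across_servers=False)
    print("OLTP: ", suggest_storage(oltp))
    print("Media:", suggest_storage(media))
```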

An FAQ on Data Reduction Fundamentals

There’s a fair amount of confusion when it comes to data reduction terminology and techniques. That’s why the SNIA Networking Storage Forum (NSF) hosted a live webcast, “Everything You Wanted to Know About Storage But Were Too Proud to Ask: Data Reduction.” It was a 101-level lesson on the fundamentals of data reduction, which can be performed in different places and at different stages of the data lifecycle. The goal was to clear up confusion around different data reduction and data compression techniques and to set the stage for deeper-dive webcasts on this topic (see the end of this blog for info on those).

As promised during the webcast, here are answers to the questions we didn’t have time to address during the live event.

Q. Does block-level compression have any direct advantage over file-level compression?
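To see the trade-off behind that question, here is a quick sketch comparing compressing a buffer as one stream ("file level") versus compressing fixed-size blocks independently ("block level"). It uses zlib and a 4 KiB block size purely for illustration; real storage systems differ in formats and algorithms. Independently compressed blocks can be read and rewritten without touching the rest of the data, typically at some cost in compression ratio.

```python
# Compare whole-stream ("file level") compression with independent
# fixed-size block ("block level") compression. zlib, the 4 KiB block
# size, and the sample data are illustrative choices only.
import zlib

def file_level(data: bytes) -> int:
    return len(zlib.compress(data))

def block_level(data: bytes, block_size: int = 4096) -> int:
    total = 0
    for i in range(0, len(data), block_size):
        total += len(zlib.compress(data[i:i + block_size]))
    return total

if __name__ == "__main__":
    sample = b"some moderately repetitive log line, level=INFO status=ok\n" * 2000
    print("original:   ", len(sample))
    print("file level: ", file_level(sample))
    print("block level:", block_level(sample))
```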