Storage Trends: Your Questions Answered

At our recent SNIA SCSI Trade Association Forum webinar, “Storage Trends 2024,” our industry experts discussed the new storage trends developing in the coming year and the applications and other factors driving them, and shared market data illustrating those trends. If you missed the live event, you can watch it on-demand in the SNIA Educational Library. Questions from the audience ranged from projections about the split between on-prem and public cloud to queries about technologies and terms such as NVMe, LTO tape, EDSFF, and Cyber Storage. Here are answers to the audience’s questions.

Q1: What is the future of HDDs?

A1: HDDs are not in any hurry to depart the market. Despite the availability of high-capacity flash, the cost per terabyte for HDDs is still very much in the platters’ favor. Not every job needs to be done on flash.

Q2: Are hard drives out in 2024? Is the world ready for full flash?

A2. We are seeing a lot of flash for backup, not necessarily to make backups faster, but for capabilities like instant restores or Veeam Instant Recovery, where being able to run those on robust infrastructure is interesting. Also, from the AI perspective, being able to run AI jobs against backup data, without hitting production and without standing up a big batch of copy data that leaves multiple copies lying around, is an interesting use case. This is very much in the minority, but it is something we’re seeing.

Q3: What will be the role of LTO tape in this future?

A3: We’re seeing object deployed on tape. Quantum is doing this, a lot of clouds are doing this, despite how they may market it. Tape is still very much a thing, outside of today’s scope, but it is storage. When you look at some of these big libraries like DiamondBack from IBM, SpectraLogic has got several – both were very popular booths & technologies at the SC’23 industry event. The HPC installments are using tape, and I would imagine a massive chunk of the Fortune 500 are as well.

Q4: What about NVMe HDDs?

A4: The IDC chart about NVMe penetration in the market didn’t even have NVMe HDDs on a line, but they do show up at OCP in the storage sessions there. NVMe HDDs have been shown at industry events before, and StorageReview.com has written about them a couple of times. While they are not yet a large presence in the market, they are certainly interesting and fun to think about. As with EDSFF, where the industry is unifying on a connector, it is interesting to think about a universal connector for drives.

There’s certainly some interest. When you really step back, it is an interface change, but one of the reasons that’s a little slow to adopt, if you look at the earlier IDC chart about amount of capacity and the amount of infrastructure that’s currently in place, that is SAS-leveraged, it’s hard to displace that. While obviously there are some benefits around trying to unify around NVMe, over the long term there could be some TCO benefits. There is a very large installed SAS base, and the market continues to see value in that. That said, is it interesting, is it exciting – yes, you can see how some of these technologies may continue to evolve, and as flash does continue to become a bigger percentage of the ecosystem, it may help to further that along. There’s a lot of ecosystem that’s still built around SAS.

Q5. Does it make sense for HDDs to stay on a SAS interface when SSDs are moving to NVMe? Wouldn’t it make more sense to have HDDs with an NVMe interface to leverage a single interface?

A5. This is an active project within OCP. It simplifies design but also introduces new challenges, such as PCIe lane allocation in system designs.
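
To make the lane-allocation concern concrete, here is a minimal back-of-the-envelope sketch. The lane counts and drive quantities are illustrative assumptions, not figures from the webinar: an NVMe device negotiates its own link to the host (assumed x1 here for an HDD), while a SAS HBA typically occupies a single x8 PCIe slot and fans out to many drives through expanders.

```python
# Illustrative PCIe lane budgeting for a hypothetical 60-bay HDD enclosure.
# All numbers below are assumptions for the sake of arithmetic, not vendor specs.

TOTAL_CPU_LANES = 128        # e.g., a modern single-socket server CPU
DRIVES = 60                  # hypothetical high-density HDD enclosure

# NVMe HDDs attached directly: even at a narrow x1 link per drive,
# every drive consumes host lanes (or switch ports that must be designed in).
lanes_per_nvme_hdd = 1
nvme_lanes_needed = DRIVES * lanes_per_nvme_hdd   # 60 lanes

# SAS today: one x8 HBA plus expanders reaches all 60 drives.
sas_lanes_needed = 8

print(f"NVMe-attached: {nvme_lanes_needed} of {TOTAL_CPU_LANES} CPU lanes "
      f"({nvme_lanes_needed / TOTAL_CPU_LANES:.0%})")
print(f"SAS HBA + expanders: {sas_lanes_needed} of {TOTAL_CPU_LANES} CPU lanes "
      f"({sas_lanes_needed / TOTAL_CPU_LANES:.0%})")
```

PCIe switches can restore that fan-out on the NVMe side, but they add cost, power, and design complexity, which is part of what the lane-allocation discussion is about.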

Q6: This question is for Jeff Janukowicz (IDC): any projections on the split between public cloud and on-prem AI storage? Can you share any trends with regard to on-prem vs. cloud-based data archival?

A6: IDC has done some work but hasn’t published anything around that quite yet. Obviously, the initial wave of AI is being driven by the public cloud vendors. IDC survey work suggests that a lot of folks will ultimately look to customize or build upon some of the publicly available models that are out there, and those workloads are likely to move back on-prem, for compliance or security reasons, or simply because people want to keep that data in-house. When we say AI, it doesn’t mean that everything is going to the cloud. In this AI evolution, there will be a place for it in the cloud, on-prem, and at the edge as well. The edge is where a lot of the data will be collected, and when we think about inferencing and more, much of that will be done better at the edge. Client devices are next: Apple and Microsoft are pushing to integrate AI features directly into the PC, and mobile will follow. The idea of “AI everywhere” will proliferate, and we are confident in saying yes, there will be a strong AI presence in the on-prem data center as well.

Q7: It seems “Cyber Storage” is a trend, is this just a feature of storage, or an entirely new product category?

A7: It’s a term coined by Gartner, which is defined as doing threat detection and response in storage software or hardware. The SNIA Cloud Storage Technologies Initiative did a webinar on it.

Q8: Is the storage demand trend mainly driven by hyperscalers? If so, what do you expect for enterprise on-premises infrastructure?

A8: Demand from hyperscalers continues to be very large and represents most of the overall demand; even the IDC chart shown earlier around capacity optimization reflects hyperscaler demand. That doesn’t mean on-prem data centers are going away. People tend to keep infrastructure on-prem for security, compliance, or legacy reasons, so on-prem isn’t disappearing. We see both continuing to co-exist and continuing to have value for different customers with different needs.

Additionally, particularly in the SSD development process, there is a strong desire by SSD manufacturers to make fewer variants of their drives. We’re seeing a desire to manufacture drives for hyperscalers, who typically buy in volume and, as a by-product, dictate what gets made for the enterprise. We are seeing more interest in having one SKU, or possibly one SKU with different firmware, serve both hyperscalers and the enterprise; if we get there, that efficiency should be positive for the overall market, enabling SSD vendors to make that one product at scale and then tune it a little for an enterprise versus a hyperscale use. There are potential efficiencies coming.

Q9: There are so many form factors as part of “EDSFF,” how can we really call it a standard when it seems like there are 10 to choose from, plus more to come? How are drive suppliers going to focus efforts on commonality with so many choices?

A9: The standard aims to accommodate as many use cases as possible, so many aspects get defined, with a lot of those aspects being optional. Just a handful of them are defined as required. A couple more become de facto standards, but there are several optional outliers that are choices for companies to develop around for more specific use cases. Initially, E1.S and E3.S are most likely to be the most prevalent form factor versions, with E1.L and E3.S 2T perhaps the next most common. Many variants are defined simply so companies have options for specific use cases.

Q10. Some have asserted that QLC means the end of HDDs. Given the projections shown today, is the death of HDDs realistic?

A10. QLC SSDs offer tremendous capacity gains over TLC SSDs and HDDs. They’re not less expensive per TB though, and that’s where HDDs will continue to hold a very critical spot. And while QLC SSDs are of course faster than HDDs, there are plenty of workloads where that speed simply isn’t needed.

Q11. While QLC has been around for years, QLC chatter and activity seems to have really picked up over the last 6 months or so. Is QLC on the precipice of having meaningful shipping volume compared to TLC? What are the drivers?

A11. For workloads that are heavily read-dependent (arguably, most workloads), QLC performance is on par with TLC. Even the endurance of QLC is more robust than most think; we’ve proven that out with our various Pi world record calculations. Density, of course, is another major benefit for QLC: 61.44TB in a U.2 form factor, and even more in E1.L form factors. TLC will remain the go-to for mixed or heavy write workloads, however.
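
As a rough illustration of what that density means at the system level, here is a quick calculation. It assumes the 61.44TB capacity point mentioned above and a 1U E1.L chassis with 32 drive slots, a typical slot count for that form factor; the result is raw capacity only, before any RAID or erasure-coding overhead.

```python
# Back-of-the-envelope rack-density math for high-capacity QLC drives.
# Assumes a 32-slot 1U E1.L chassis and 61.44TB drives (illustrative, not a product spec).

drive_capacity_tb = 61.44
slots_per_1u = 32

raw_tb_per_1u = drive_capacity_tb * slots_per_1u
print(f"Raw capacity per 1U: {raw_tb_per_1u:.0f} TB (~{raw_tb_per_1u / 1000:.2f} PB)")
# -> Raw capacity per 1U: 1966 TB (~1.97 PB)
```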

Q12. What do you think about composable memory systems? Are they going to be a trend?

A12. Future versions of CXL promise composability and sharing of certain resources like DRAM, but this is still very fluid.

Q13. It’s not just capacity. Performance is important. Why purchase a bunch of expensive GPUs just to have them idle while waiting for data? Speed is about keeping these assts highly utilized.

A13. In our experience exploring and utilizing AI, the bottleneck in keeping GPUs busy is the fabric, not storage performance. Once 800GbE gains proliferation, that math may change somewhat and push Gen5/Gen6 SSDs to become the default choice, but for now it’s networking that limits GPU utilization. Also remember, not all AI is created equal, and different use cases will have different storage performance needs; edge inferencing, for instance, follows a different model than data center training.
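
A rough bandwidth comparison shows why the fabric tends to saturate before the SSDs do. The figures below are ballpark assumptions (Ethernet line rate converted to bytes and a nominal sequential-read number for a PCIe Gen5 x4 SSD), not measurements from any specific system.

```python
# Rough comparison of network line rate vs. aggregate SSD throughput.
# All figures are approximate and for illustration only.

def ethernet_gbs(gigabits: int) -> float:
    """Convert an Ethernet line rate in Gbit/s to GB/s (ignoring protocol overhead)."""
    return gigabits / 8

fabric_gbs = ethernet_gbs(400)   # a 400GbE link is roughly 50 GB/s
ssd_gbs = 14                     # nominal Gen5 x4 NVMe sequential read, ~14 GB/s
ssds_to_saturate = fabric_gbs / ssd_gbs

print(f"400GbE link: ~{fabric_gbs:.0f} GB/s")
print(f"SSDs needed to saturate it: ~{ssds_to_saturate:.1f}")
# A small handful of Gen5 SSDs can already fill the pipe; with 800GbE (~100 GB/s)
# the balance starts to shift, which is the point made above.
```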

Q14. We are seeing NAND tightening from our suppliers on availability of the larger-capacity SSDs on SAS and NVMe; what is worse is non-FIPS vs. FIPS. What’s happening from your perspective? Thanks.

A14. The NAND flash industry continues to recover from the recent memory downturn and is ramping production back up. Until the industry fully recovers, NAND flash (and SSD) supply will remain tight.

As for FIPS vs. non-FIPS, there is a long lead time for getting FIPS certification, and that is affecting the whole industry. Certification against FIPS 140-3 is taking a long time. The storage industry generally values FIPS certification highly and is following the process closely; NIST is the organization managing it.

Q15. What power limit trends are you seeing for devices being deployed in the E3 form factor? The trend I see is that the drives are continuing to consume more and more power.

A15. E3 offers a variety of power envelopes based on each form factor, ranging up to 70 watts. As you go up in wattage, you are gaining in capacity and performance, but this needs to be managed carefully to create a sustainable balance with data center efficiency. Efficient performance per watt is the goal.
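
As a quick illustration of why raw wattage alone isn’t the metric, here is a small performance-per-watt calculation. The drive figures are entirely hypothetical placeholders, not measurements of any product; the point is only that a higher power envelope is not automatically a win unless performance scales at least as fast.

```python
# Hypothetical performance-per-watt comparison for two E3 power envelopes.
# Throughput numbers are made-up placeholders used only to show the arithmetic.

drives = {
    "25W envelope (hypothetical)": {"watts": 25, "read_gbs": 14},
    "70W envelope (hypothetical)": {"watts": 70, "read_gbs": 28},
}

for name, d in drives.items():
    efficiency = d["read_gbs"] / d["watts"]   # GB/s delivered per watt consumed
    print(f"{name}: {efficiency:.2f} GB/s per watt")

# Here the 70W device delivers more absolute performance but less performance
# per watt, which is the balance data center designers have to manage.
```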

Q16. Is AI accelerating the need for SSDs, or is something new needed? What’s the biggest unique SSD requirement that isn’t in all the other existing applications? It seems higher density, lower power, and performance improvements are not new?

A16. There is not much increased demand that we can directly tie to AI today, but as AI becomes more widely deployed, we will likely see an increase in overall SSD demand, as well as in demand for high-capacity SSDs.

Q17. I have heard that storage as a percentage of total IT spend has dropped significantly. As a lot of dollars are now going toward GPUs, how will this trend respond?

A17. There is an evolution underway, with GPU spend in turn supporting increased storage spend.

About the Authors

By Jeff Janukowicz, Research Vice President at IDC; Brian Beeler, Owner and Editor In Chief, StorageReview.com; and Cameron T. Brett, SNIA STA Forum Chair.

Note to our readers: We had quite a few questions regarding SSDs, forecasting for SSDs, and comparing that with the future of HDDs. As a result, we are preparing a blog addressing these questions which will go into more detail.
