Programming Frameworks Q&A

Last month, the SNIA Networking Storage Forum made sense of the “wild west” of programming frameworks, covering xPUs, GPUs and computational storage devices at our live webcast, “You’ve Been Framed! An Overview of xPU, GPU & Computational Storage Programming Frameworks.” It was an excellent overview of what’s happening in this space. There was a lot to digest, so our stellar panel of experts has taken the time to answer the questions from our live audience in this blog. Q. Why is it important to have open-source programming frameworks? A. Open-source frameworks enable community support and partnerships beyond what proprietary frameworks support. In many cases they allow ISVs and end users to write one integration that works with multiple vendors. Q. Will different accelerators require different frameworks or can one framework eventually cover them all? Read More

You’ve Been Framed! An Overview of Programming Frameworks

With the emergence of GPUs, xPUs (DPU, IPU, FAC, NAPU, etc.) and computational storage devices for host offload and accelerated processing, a panoramic wild west of frameworks is emerging, all vying to be one of the preferred programming software stacks that best integrates the application layer with these underlying processing units. On October 26, 2022, the SNIA Networking Storage Forum will break down what’s happening in the world of frameworks in our live webcast, “You’ve Been Framed! An Overview of xPU, GPU & Computational Storage Programming Frameworks.” We’ve convened an impressive group of experts that will provide an overview of programming frameworks that support: Read More

Kubernetes is Everywhere Q&A

Earlier this month, the SNIA Cloud Storage Technologies Initiative hosted a fascinating panel discussion “Kubernetes is Everywhere: What About Cloud Native Storage?”  where storage experts from SNIA and Kubernetes experts from the Cloud Native Computing Foundation (CNCF) discussed storage implications for Kubernetes. It was a lively and enlightening discussion on key considerations for container storage. In this Q&A blog, our panelists Nick Connolly, Michael St-Jean, Pete Brey and I elaborate on some of the most intriguing questions during the session. Q. What are the additional/different challenges for Kubernetes storage at the edge – in contrast to the data center?   Read More

SmartNICs to xPUs Q&A

The SNIA Networking Storage Forum kicked off its xPU webcast series last month with “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” where SNIA experts defined what xPUs are, explained how they can accelerate offload functions, and cleared up confusion on many other names associated with xPUs such as SmartNIC, DPU, IPU, APU, NAPU. The webcast was highly-rated by our audience and already has more than 1,300 views. If you missed it, you can watch it on-demand and download a copy of the presentation slides at the SNIA Educational Library. The live audience asked some interesting questions and here are answers from our presenters. Q. How can we have redundancy on an xPU? Read More

Storage Implications of Doing More at the Edge

In our SNIA Networking Storage Forum webcast series, “Storage Life on the Edge” we’ve been examining the many ways the edge is impacting how data is processed, analyzed and stored. I encourage you to check out the sessions we’ve done to date: On June 15, 2022, we continue the series with “Storage Life on the Edge: Accelerated Performance Strategies” where our SNIA experts will discuss the need for faster computing, access to storage, and movement of data at the edge as well as between the edge and the data center, covering: Read More

SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?

As applications continue to increase in complexity and users demand more from their workloads, there is a trend to again deploy dedicated accelerator chips to assist by offloading work from the main CPU. These new accelerators (xPUs) have multiple names such as SmartNIC (Smart Network Interface Card), DPU, IPU, APU, NAPU. How are these different from the GPU, TPU and the venerable CPU? xPUs can accelerate and offload functions including math, networking, storage functions, compression, cryptography, security and management. It’s a topic that the SNIA Networking Storage Forum will spotlight in our 3-part xPU webcast series. The first webcast on May 19, 2022 “SmartNICs to xPUs – Why is the Use of Accelerators Accelerating?” will cover key topics about, and clarify questions surrounding, xPUs, including… Read More

5G, Edge, and Industry 4.0 Q&A

The confluence of 5G networks, AI and machine learning, industrial IoT, and edge computing are driving the fourth industrial revolution – Industry 4.0. The impact of the industrial edge and how it is being transformed were among the topics at our SNIA Cloud Storage Technologies Initiative (CSTI) webcast “5G Industrial Private Network and Edge Data Pipelines.” If you missed it, you can view it on-demand along with the presentation slides in the SNIA Educational Library. In this blog, we are sharing and clarifying answers to some of the intriguing questions from the live event. Q. What are some of the key challenges to support the agility and flexibility requirements of Industry 4.0? Read More

Scaling Storage to New Heights

Earlier this month, the SNIA Cloud Storage Technologies Initiative (CSTI) presented a live webcast called “High Performance Storage at Exascale” where our HPC experts, Glyn Bowden, Torben Kling Petersen and Michael Hennecke talked about processing and storing data in shockingly huge numbers. The session raised some interesting points on how scale is quickly being redefined, and how what was cost-prohibitive a few years ago for most may be in reach for all sooner than expected.

Q. Is HPC a rich man’s game? The scale appears to have increased dramatically over the last few years. Is the cost increasing to the point where this is only for wealthy organizations, or has the cost decreased to the point where small to medium-sized enterprises might be able to indulge in HPC activities?

A. [Torben] I would say the answer is both. To build these really super big systems you need hundreds of millions of dollars because the sheer cost of infrastructure goes beyond anything that we’ve seen in the past, but on the other hand you also see HPC systems in the most unlikely places, like a web retailer that mainly sells shoes. They had a Lustre system driving their back end, and HPC out-competed a standard NFS solution. So, we see this going in different directions. Price has definitely gone down significantly; essentially, the cost of a large storage system now is the same as it was 10 years ago. It’s just that now it’s 100 times faster and 50 times larger. That said, it’s not cheap to do any of these things because of the amount of hardware you need.

A. [Michael] We are seeing the same thing. We like to say that these types of HPC systems are like a time machine that shows you what will show up in the general enterprise world a few years later. The cloud space is a prime example. All of the large HPC parallel file systems are now being adopted in the cloud, so we get a combination of the deployment mechanisms coming from the cloud world with the scale and robustness of the storage software infrastructure. Those are married together in very efficient ways. So, while not everybody will build a 200 petabyte all-flash system for those types of use cases, the same technologies and the same software layers can be used at all scales. I really believe that this is like the research lab for what will become mainstream pretty quickly. On the cost side, another aspect that we haven’t covered is this old notion that tape is dead, disk is dead, and the next technology is always replacing the old ones. That hasn’t happened, but certainly as new technologies arrive and cost structures change, you get shifts in dollars per terabyte, or dollars per terabyte per second, which is more the HPC metric. So, how do we get QLC drives in to lower the price of flash and then build larger systems out of that? That’s also a technology exploration done at this level that then benefits everybody.

A. [Glyn] Being the consultant of the group, I guess I should say it depends. It depends on how you want to define HPC. I’ve got a device on my desk in front of me at the moment that I can fit in the palm of my hand; it has more than a thousand GPU cores in it, and it costs under $100. I can build a cluster of 10 of those for under $1,000. If you look back five years, that would absolutely be classified as HPC based on the number of cores and the amount of processing it can do. So, these things are shrinking and becoming far more affordable and far more commodity at the low end, meaning that we can take what was traditionally an HPC cluster and run it on things like Raspberry Pis at the edge somewhere. You can absolutely get the architecture, and what was previously seen as that kind of parallel batch processing against many cores, for next to nothing. As Michael said, it’s really the time machine, and this is where we’re catching up with what was HPC. The big stuff is always going to cost the big bucks, but I think it’s affordable to get something that you can play on and work as an HPC system.
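The shift in cost metrics that Torben and Michael describe can be made concrete with a little arithmetic. The sketch below is purely illustrative (the prices, capacities, and throughputs are hypothetical, not vendor figures); the point is that one system can win on dollars per terabyte while losing on dollars per terabyte per second, the more HPC-relevant metric.

```python
# Illustrative comparison of two storage cost metrics:
# $/TB (capacity cost) vs. $ per TB/s (throughput cost, the HPC metric).
# All figures are hypothetical, chosen only to show how the two metrics diverge.

systems = {
    # name: (price in $, capacity in TB, throughput in TB/s)
    "HDD-based": (2_000_000, 20_000, 0.10),
    "QLC flash": (5_000_000, 20_000, 1.00),
}

for name, (price, capacity_tb, throughput_tb_s) in systems.items():
    cost_per_tb = price / capacity_tb          # capacity metric
    cost_per_tb_s = price / throughput_tb_s    # throughput metric
    print(f"{name}: ${cost_per_tb:,.0f}/TB, ${cost_per_tb_s:,.0f} per TB/s")
```

With these made-up numbers the HDD system is cheaper per terabyte ($100 vs. $250), but the flash system is four times cheaper per terabyte per second, which is exactly why falling QLC flash prices change the calculus for large HPC deployments.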

We also had several questions on persistent memory. SNIA covers this topic extensively. You can access a wealth of information here. I also encourage you to register for SNIA’s 2022 Persistent Memory + Computational Storage Summit which will be held virtually May 25-26. There was also interest in CXL (Compute Express Link, a high speed cache-coherent interconnect for processors, memory and accelerators). You can find more information on that in the SNIA Educational Library.

Multi-cloud Use Has Become the Norm

Multiple clouds within an organization have become the norm. This strategy enables organizations to reduce risk and dependence on a single cloud platform. The SNIA Cloud Storage Technologies Initiative (CSTI) discussed this topic at length at our live webcast last month “Why Use Multiple Clouds?” We polled our webcast attendees on their use of multiple clouds and here’s what we learned about the cloud platforms that comprise their multi-cloud environments: Read More