Storage at the Edge Q&A

The ability to run analytics everywhere from the data center to the Edge, where the data is generated and lives, creates new use cases for nearly every business. The impact of Edge computing on storage strategy was the topic at our recent SNIA Cloud Storage Technologies Initiative (CSTI) webcast, “Extending Storage to the Edge – How It Should Affect Your Storage Strategy.” If you missed the live event, it’s available on-demand. Our experts, Erin Farr, Senior Technical Staff Member, IBM Storage CTO Innovation Team, and Vincent Hsu, IBM Fellow, VP & CTO for Storage, received several interesting questions during the live event. As promised, here are answers to them all.

Q. What is the core principle of Edge computing technology?

A. Edge computing is an industry trend rather than a standardized architecture, though organizations like LF Edge have the objective of establishing an open, interoperable framework. Edge computing is generally about moving workloads closer to where the data is generated and creating innovative new workloads made possible by that proximity. Common principles include the ability to manage Edge devices at scale, using open technologies to create portable solutions, and ultimately doing all of this with enterprise levels of security. Reference architectures exist for guidance, though implementations can vary greatly by industry vertical.

Q. We all know connectivity is not guaranteed – how does that affect these different use cases? What are the HA implications? Read More

Next-generation Interconnects: The Critical Importance of Connectors and Cables

Modern data centers consist of hundreds of subsystems connected with optical transceivers, copper cables, and industry standards-based connectors. As data demands escalate, the throughput of these interconnects must increase rapidly, which shrinks the maximum reach of copper cabling. At the same time, data centers are expanding in size, with nodes stretching further apart, making longer-reach optical technologies much more popular. However, optical interconnect technologies are more costly and complex than copper, and they bring many new buzzwords and technology concepts. The vast uptick in data demand accelerates new product development at an incredible pace. While much of the enterprise is still on 10/40/100GbE and 128GFC speeds, the optical standards bodies are beginning to deliver 800G, with 1.6Tb transceivers in discussion! The introduction of new technologies creates a paradigm shift that requires changes and adjustments throughout the network. Read More

Genomics Compute, Storage & Data Management Q&A

Everyone knows data is growing at exponential rates. In fact, the numbers can be mind-numbing. That’s certainly the case with genomic data, where 40,000PB of storage will be needed each year by 2025. Understanding, managing and storing this massive amount of data was the topic at our SNIA Cloud Storage Technologies Initiative webcast “Moving Genomics to the Cloud: Compute and Storage Considerations.” If you missed the live presentation, it’s available on-demand along with the presentation slides. Our live audience asked many interesting questions during the webcast, but we did not have time to answer them all. As promised, our experts, Michael McManus, Torben Kling Petersen and Christopher Davidson have answered them all here.

Q. Human genomes differ only by 1% or so, so there’s an immediate 100x improvement in terms of data compression: 2743EB could become 27,430PB, which is 2.743M HDDs of 10TB each. We have ~200 countries for the 7.8B people, and if each country had 10 sequencing centers on average, each center would need a mere 1.4K HDDs. Is there really a big challenge here?

A. Unfortunately, the problem is not that simple. The location of genetic differences and the size of the genetic differences vary a lot across people. Still, there are compression methods like CRAM and PetaGene that can save a lot of space. Also consider all of the sequencing for rare disease, cancer, single cell sequencing, etc., plus sequencing for agricultural products.

Q. What’s the best compression ratio for human genome data? Read More
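For readers who want to retrace the questioner’s arithmetic, here is a quick back-of-the-envelope sketch. Note that the 100x compression factor, 10TB drives, and 200 countries × 10 centers are the questioner’s assumptions, not figures endorsed by the panel:

```python
# Back-of-the-envelope check of the questioner's storage arithmetic.
# All inputs are the questioner's assumptions, used here only to
# verify the numbers quoted in the question.

RAW_EB = 2743            # projected raw genomic data, in exabytes
COMPRESSION = 100        # assumed ~100x compression from ~1% variation
HDD_TB = 10              # terabytes per hard drive
CENTERS = 200 * 10       # ~200 countries x 10 sequencing centers each

compressed_pb = RAW_EB * 1000 / COMPRESSION   # 2743 EB -> 27,430 PB
drives = compressed_pb * 1000 / HDD_TB        # 27,430 PB -> 2.743M drives
drives_per_center = drives / CENTERS          # ~1.4K drives per center

print(f"{compressed_pb:,.0f} PB compressed")
print(f"{drives / 1e6:.3f}M drives total")
print(f"{drives_per_center:.0f} drives per center")
```

The arithmetic checks out; as the answer notes, the real challenge lies in where genetic differences occur and in the many additional sequencing workloads, not in the raw drive count.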

Demystifying the Fibre Channel SAN Protocol

Ever wonder how Fibre Channel (FC) hosts and targets really communicate? Join the SNIA Networking Storage Forum (NSF) on September 23, 2021 for a live webcast, “How Fibre Channel Hosts and Targets Really Communicate.” This SAN overview will dive into details on how initiators (hosts) and targets (storage arrays) communicate and will address key questions, like:
  • How do FC links activate?
  • Is FC routable?
  • What kind of flow control is present in FC? (A toy model of FC’s credit-based approach follows this list.)
  • How do initiators find targets and set up their communication?
  • Finally, how does actual data get transferred between initiators and targets, since that is the ultimate goal?
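On the flow-control question: Fibre Channel famously uses credit-based flow control, where a port may transmit only while it holds buffer-to-buffer credits (BB_Credit), and the receiver returns R_RDY primitives to replenish them. The Python snippet below is a toy model of that credit loop, not real FC code; the credit and frame counts are invented for illustration:

```python
# Toy model of Fibre Channel buffer-to-buffer (BB_Credit) flow control:
# each frame sent consumes one credit, each R_RDY returned by the
# receiver restores one, and the sender stalls when credits hit zero.

from collections import deque

BB_CREDIT = 4                    # credits granted at login (illustrative)
frames = deque(range(1, 11))     # ten frames waiting to be sent
in_flight = deque()              # frames sent but not yet acknowledged
credits = BB_CREDIT

while frames or in_flight:
    # Transmit while credits remain and frames are queued.
    while credits > 0 and frames:
        frame = frames.popleft()
        credits -= 1
        in_flight.append(frame)
        print(f"TX frame {frame} (credits left: {credits})")
    # Receiver frees a buffer and returns R_RDY, restoring one credit.
    if in_flight:
        done = in_flight.popleft()
        credits += 1
        print(f"R_RDY for frame {done} (credits now: {credits})")
```

Running the model shows transmission proceeding in credit-limited bursts, which is exactly why a port never overruns its partner’s receive buffers.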
Read More

Storage for Applications Webcast Series

Everyone enjoys having storage that is fast, reliable, scalable, and affordable. But it turns out different applications have different storage needs in terms of I/O requirements, capacity, data sharing, and security. Some need local storage, some need a centralized storage array, and others need distributed storage—which itself could be local or networked. One application might excel with block storage while another with file or object storage. For example, an OLTP database might require small amounts of very fast flash storage; a media or streaming application might need vast quantities of inexpensive disk storage with extra security safeguards; while a third application might require a mix of different storage tiers with multiple servers sharing the same data. This SNIA Networking Storage Forum “Storage for Applications” webcast series will cover the storage requirements for specific uses such as artificial intelligence (AI), database, cloud, media & entertainment, automotive, edge, and more. With limited resources, it’s important to understand the storage intent of the applications in order to choose the right storage and storage networking strategy, rather than discovering the hard way that you’ve chosen the wrong solution for your application. We kick off this series on October 5, 2021 with “Storage for AI Applications.” AI is a technology that itself encompasses a broad range of use cases, largely divided into training and inference. Read More
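To make the “match the application to the storage” idea concrete, here is a minimal, hypothetical sketch. The workload profiles, thresholds, and suggestions are invented purely for illustration and are not recommendations from the webcast series:

```python
# Illustrative sketch (all profiles and thresholds are invented):
# mapping an application's I/O profile to a storage choice, in the
# spirit of "understand the storage intent before choosing".

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    iops_sensitive: bool   # needs low latency / high IOPS?
    capacity_tb: float     # how much data it stores
    shared: bool           # do multiple servers share the same data?

def suggest_storage(w: Workload) -> str:
    if w.shared:
        return "networked file or object storage (shared access)"
    if w.iops_sensitive and w.capacity_tb < 10:
        return "local NVMe flash (small amounts, very fast)"
    if w.capacity_tb > 500:
        return "high-capacity disk tier (inexpensive at scale)"
    return "centralized block array (general purpose)"

for w in [
    Workload("OLTP database", iops_sensitive=True, capacity_tb=2, shared=False),
    Workload("media streaming", iops_sensitive=False, capacity_tb=800, shared=False),
    Workload("shared analytics", iops_sensitive=False, capacity_tb=50, shared=True),
]:
    print(f"{w.name}: {suggest_storage(w)}")
```

Real sizing involves far more dimensions (security, cost, data protection), which is precisely what each webcast in the series will unpack per application.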

Can Cloud Storage and Big Data Live Happily Ever After?

“Big Data” has pushed the storage envelope, creating a seemingly perfect relationship with Cloud Storage. But local storage is the third wheel in this relationship, and it won’t go down easy. Can this marriage survive when Big Data is being pulled in two directions? Should Big Data pick one, or can the three of them live happily ever after? This will be the topic of discussion on October 21, 2021 at our live SNIA Cloud Storage Technologies webcast, “Cloud Storage and Big Data, A Marriage Made in the Clouds.” Join us as our SNIA experts cover: Read More

What’s New in Computational Storage? A Conversation with SNIA Leadership

The latest revisions of the SNIA Computational Storage Architecture and Programming Model Version 0.8 Revision 0 and the Computational Storage API v0.5 rev 0 are now live on the SNIA website. Interested to know what has been added to the specifications, SNIAOnStorage met “virtually” with Jason Molgaard, Co-Chair of the SNIA Computational Storage Technical Work Group, and Bill Martin, Co-Chair of the SNIA Technical Council and editor of the specifications, to get the details. Both SNIA volunteer leaders stressed that they welcome ideas about the specifications and invite industry colleagues to join them in continuing to define computational storage standards. The two documents are working documents – continually being refined and enhanced. If you are not a SNIA member, you can submit public comments via the SNIA Feedback Portal. To learn if your company is a SNIA member, check the SNIA membership list. If you are a SNIA member, go here to join the Computational Storage Technical Work Group member work area. The Computational Storage Technical Work Group chairs also welcome your emails. Reach out to them at computationaltwg-chair@snia.org. Read More

Deploying Confidential Computing Q&A

The third live webcast in our SNIA Cloud Storage Technologies Initiative confidential computing series focused on real-world deployments of confidential computing and included case studies and demonstrations. If you missed the live event, you can watch it on demand here. Our live audience asked some interesting questions; here are our expert presenters’ answers.

Q. What is the overhead in CPU cycles for running in a trusted enclave?

A. We have been running some very large machine learning applications in secure enclaves using the latest available hardware and seeing very close to “near-native” performance, with no more than 5% performance overhead compared to normal non-secure operations. This is significantly better than older generations of hardware, and with the new hardware we are ready to take on bigger workloads with minimal overhead. It is also important to note that encryption and isolation are done in hardware at memory access speeds, so that is not where you will tend to see a performance issue. Regardless of which secure enclave hardware capability you choose, each uses a different technology to manage the barrier between secure enclaves. The important thing is to look at how often an application crosses the barrier, since that is where careful attention is needed. Read More
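To see why crossing frequency matters more than the cost of any single crossing, consider a toy amortization model; every number below is invented purely to illustrate the shape of the trade-off, not measured on any particular enclave hardware:

```python
# Toy model (all numbers invented) of enclave-boundary overhead:
# total overhead scales with how often the application transitions
# in and out of the enclave, so batching work per crossing matters.

CROSSING_US = 8.0        # assumed cost of one enclave transition, microseconds
COMPUTE_US = 1_000_000   # useful compute time for the whole job, microseconds

for crossings in (10, 1_000, 100_000):
    overhead = crossings * CROSSING_US
    pct = 100 * overhead / (COMPUTE_US + overhead)
    print(f"{crossings:>7} crossings -> {pct:5.2f}% of runtime in transitions")
```

With few crossings the overhead is negligible; with very chatty designs it dominates, which is why the presenters advise looking at how often an application crosses the barrier.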

Q&A (Part 1) from “Storage Trends for 2021 and Beyond” Webcast

It was a great pleasure for Rick Kutcipal, board director, SCSI Trade Association (STA), to welcome Jeff Janukowicz, Research Vice President at IDC, and Chris Preimesberger, former editor-in-chief of eWEEK, for a roundtable discussion of the prominent data storage technologies shaping the market. If you missed this webcast, titled “Storage Trends for 2021 and Beyond,” it’s now available on demand here.

The well-attended event generated a lot of questions – so many, in fact, that we’re authoring a two-part blog series with the answers. In part one, we recap the questions that were asked and answered during the webcast; since we ran out of time to answer them all, please watch for part two, where we tackle the rest.

Read More

What is eBPF, and Why Does it Matter for Computational Storage?

Recently, a question came up in the SNIA Computational Storage Special Interest Group on new developments in a technology called eBPF and how they might relate to computational storage. To learn more, SNIA on Storage sat down with Eli Tiomkin, SNIA CS SIG Chair with NGD Systems; Matias Bjørling of Western Digital; Jim Harris of Intel; Dave Landsman of Western Digital; and Oscar Pinto of Samsung.

SNIA On Storage (SOS): The eBPF.io website defines eBPF, extended Berkeley Packet Filter, as a revolutionary technology that can run sandboxed programs in the Linux kernel without changing kernel source code or loading kernel modules. Why is it important?

Dave Landsman (DL): eBPF emerged in Linux as a way to do network filtering, and enables the Linux kernel to be programmed. Intelligence and features can be added to existing layers, and there is no need to add additional layers of complexity.

SNIA On Storage (SOS): What are the elements of eBPF that would be key to computational storage? Read More
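For readers who have not seen eBPF in action, here is a minimal sketch using the BCC toolkit’s Python bindings. Assumptions on our part: the bcc package is installed, root privileges are available, and the clone() syscall is chosen as a probe target purely for illustration. A tiny C program is verified and JIT-compiled into the kernel’s sandbox at runtime, with no kernel module and no kernel rebuild:

```python
#!/usr/bin/env python3
# Minimal eBPF example via the BCC toolkit (requires bcc and root).
# The embedded C program is loaded into the in-kernel sandbox and
# attached to the clone() syscall, printing a trace line whenever a
# new process is created.

from bcc import BPF

prog = r"""
int trace_clone(void *ctx) {
    bpf_trace_printk("clone() called\n");
    return 0;
}
"""

b = BPF(text=prog)                    # compile, verify, and load in-kernel
b.attach_kprobe(event=b.get_syscall_fnname("clone"),
                fn_name="trace_clone")
b.trace_print()                       # stream trace output until Ctrl-C
```

The same load-verify-attach pattern is what makes eBPF interesting for computational storage: small, sandboxed programs placed close to where the work happens, without modifying the underlying system.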