xPU Accelerator Offload Functions

In our first xPU webcast, “SmartNICs and xPUs: Why is the Use of Accelerators Accelerating,” we discussed the trend of deploying dedicated accelerator chips to assist or offload the main CPU. These new accelerators (xPUs) go by multiple names, such as SmartNIC, DPU, IPU, APU, and NAPU. If you missed the presentation, I encourage you to check it out in the SNIA Educational Library, where you can watch it on-demand and access the presentation slides. The second webcast in this SNIA Networking Storage Forum xPU series is “xPU Accelerator Offload Functions,” where our SNIA experts will take a deeper dive into the accelerator offload functions of the xPU. We’ll discuss which problems xPUs are designed to solve, where in the system they live, and the functions they implement, focusing on: Read More

Storage Implications of Doing More at the Edge

In our SNIA Networking Storage Forum webcast series, “Storage Life on the Edge,” we’ve been examining the many ways the edge is impacting how data is processed, analyzed, and stored. I encourage you to check out the sessions we’ve done to date. On June 15, 2022, we continue the series with “Storage Life on the Edge: Accelerated Performance Strategies,” where our SNIA experts will discuss the need for faster computing, access to storage, and movement of data at the edge, as well as between the edge and the data center, covering: Read More

Storage Life on the Edge

Cloud-to-Edge infrastructures are rapidly growing. It is expected that by 2025, up to 75% of all data generated will be created at the Edge. However, “Edge” is a tricky word, and you’ll get a different definition depending on who you ask. The physical edge could be in a factory, retail store, hospital, car, or plane, at a cell tower, or on your mobile device. The network edge could be a top-of-rack switch, a server running host-based networking, or a 5G base station. The Edge means putting servers, storage, and other devices outside the core data center and closer to both the data sources and the users of that data; both edge sources and edge users could be people or machines. Read More

A Q&A on Discovery Automation for NVMe-oF IP-Based SANs

To fully unlock the potential of NVMe® IP-based SANs, we first need to address the manual and error-prone process that is currently used to establish connectivity between NVMe Hosts and NVM subsystems. Several leading companies in the industry have joined together through NVM Express to collaborate on innovations to simplify and automate this discovery process. This was the topic of discussion at our recent SNIA Networking Storage Forum webcast “NVMe-oF: Discovery Automation for IP-based SANs,” where our experts, Erik Smith and Curtis Ballard, took a deep dive into the work being done to address these issues. If you missed the live event, you can watch it on demand here and get a copy of the slides. Erik and Curtis did not have time to answer all the questions during the live presentation. As promised, here are answers to them all. Q. Is the Centralized Discovery Controller (CDC) highly available, and is this visible to the hosts? Do they see a pair of CDCs on the network and retry requests to a secondary if a primary is not available? Read More

Cabling, Connectors and Transceivers Questions Answered

Our recent live SNIA Networking Storage Forum webcast, “Next-generation Interconnects: The Critical Importance of Connectors and Cables,” provided an outstanding tutorial on the latest in the impressive array of data center infrastructure components designed to address expanding requirements for higher bandwidth and lower power. It covered common pluggable connectors and media types, copper cabling and transceivers, and real-world use cases. If you missed the live event, it is available on-demand. We ran out of time to answer all the questions from the live audience. As promised, here are answers to them all. Q. For 25GbE, is the industry consolidating on one of the three options? Read More

Revving Up Storage for Automotive

Each year cars become smarter and more automated. In fact, the automotive industry is effectively transforming the vehicle into a data center on wheels. Connectedness, autonomous driving, and media & entertainment all bring more and more storage onboard and into networked data centers. But all the storage in (and for) a car is not created equal. There are tens if not hundreds of different processors in a car today. Some are attached to storage, some are not, and each application demands different characteristics from the storage device. The SNIA Networking Storage Forum (NSF) is exploring this fascinating topic on December 7, 2021 at our live webcast “Revving Up Storage for Automotive,” where industry experts from both the storage and automotive worlds will discuss: Read More

Storage for AI Q&A

What types of storage are needed for different aspects of AI? That was one of the many topics covered in our SNIA Networking Storage Forum (NSF) webcast “Storage for AI Applications.” It was a fascinating discussion and I encourage you to check it out on-demand. Our panel of experts answered many questions during the live roundtable Q&A. Here are answers to those questions, as well as the ones we didn’t have time to address. Q. What are the different data set sizes and workloads in AI/ML in terms of data set size, sequential/random access, and write/read mix? A. Data sets will vary incredibly from use case to use case. They may range from GBs to possibly 100s of PB. In general, the workloads are very heavily read-oriented, perhaps 95%+ reads. While it would be better to have sequential reads, in general the access patterns tend to be closer to random. In addition, different use cases will have very different item sizes: some may be multiple GB, while others may be <1 KB. The different sizes have a direct impact on storage performance and may change how you decide to store the data. Read More
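
As a rough illustration of the access pattern described above (overwhelmingly read-heavy, closer to random than sequential, with item sizes ranging from under 1 KB to several GB), here is a minimal Python sketch of a toy workload generator. The file name, item sizes, and the 95/5 read/write split are illustrative assumptions, not measurements from the webcast.

```python
import os
import random

# Illustrative sketch only: a toy generator for a read-heavy, mostly random,
# AI-style I/O pattern. The 95/5 read/write split, item sizes, and file name
# are assumptions chosen to mirror the rough numbers discussed above.
DATA_FILE = "sample_dataset.bin"                 # hypothetical data file
ITEM_SIZES = [512, 4 * 1024, 1024 * 1024]        # <1 KB up to 1 MB items (illustrative)
READ_FRACTION = 0.95                             # workloads are very heavily reads


def run_workload(num_ops: int = 1000) -> None:
    file_size = os.path.getsize(DATA_FILE)
    with open(DATA_FILE, "r+b") as f:
        for _ in range(num_ops):
            size = random.choice(ITEM_SIZES)
            offset = random.randrange(0, max(1, file_size - size))  # ~random access
            f.seek(offset)
            if random.random() < READ_FRACTION:
                f.read(size)                     # the common case: a random read
            else:
                f.write(os.urandom(size))        # the occasional write


if __name__ == "__main__":
    # Create a small sample file first, e.g.:
    #   with open(DATA_FILE, "wb") as f: f.write(os.urandom(64 * 1024 * 1024))
    run_workload()
```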

Automating Discovery for NVMe IP-based SANs

NVMe® IP-based SANs (including transports such as TCP, RoCE, and iWARP) have the potential to provide significant benefits in application environments ranging from the Edge to the Data Center. However, before we can fully unlock the potential of the NVMe IP-based SAN, we first need to address the manual and error-prone process that is currently used to establish connectivity between NVMe Hosts and NVM subsystems. This process requires administrators to explicitly configure each Host to access the appropriate NVM subsystems in its environment. In addition, any time an NVM subsystem interface is added or removed, a Host administrator may need to explicitly update the configuration of impacted hosts to reflect this change. Due to the decentralized nature of this configuration process, using it to manage connectivity for more than a few Host and NVM subsystem interfaces is impractical, and it adds complexity when deploying an NVMe IP-based SAN in environments that require a high degree of automation. Read More
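
To make the manual process concrete, the sketch below shows roughly what a host administrator ends up scripting today with the standard nvme-cli tool: each host carries its own explicit list of NVM subsystems, and that list must be edited by hand on every affected host whenever a subsystem interface is added or removed. The NQNs and addresses are placeholders, and exact nvme-cli option names can vary by version; this is an assumption-laden illustration, not a recommended tool.

```python
import subprocess

# Hypothetical per-host connectivity table. Today each host maintains a
# hand-edited list like this, which must be updated whenever an NVM subsystem
# interface is added or removed. NQNs and addresses below are placeholders.
SUBSYSTEMS = [
    {"nqn": "nqn.2014-08.org.example:subsys-01", "traddr": "192.0.2.10", "trsvcid": "4420"},
    {"nqn": "nqn.2014-08.org.example:subsys-02", "traddr": "192.0.2.11", "trsvcid": "4420"},
]


def connect_all() -> None:
    """Explicitly connect this host to each configured subsystem over NVMe/TCP."""
    for s in SUBSYSTEMS:
        subprocess.run(
            ["nvme", "connect",
             "--transport", "tcp",
             "--traddr", s["traddr"],
             "--trsvcid", s["trsvcid"],
             "--nqn", s["nqn"]],
            check=True,
        )


if __name__ == "__main__":
    connect_all()
```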

Demystifying the Fibre Channel SAN Protocol

Ever wonder how Fibre Channel (FC) hosts and targets really communicate? Join the SNIA Networking Storage Forum (NSF) on September 23, 2021 for a live webcast, “How Fibre Channel Hosts and Targets Really Communicate.” This SAN overview will dive into the details of how initiators (hosts) and targets (storage arrays) communicate and will address key questions, like:
  • How do FC links activate?
  • Is FC routable?
  • What kind of flow control is present in FC?
  • How do initiators find targets and set up their communication?
  • Finally, how does actual data get transferred between initiators and targets, since that is the ultimate goal?
Read More

Storage for Applications Webcast Series

Everyone enjoys having storage that is fast, reliable, scalable, and affordable. But it turns out different applications have different storage needs in terms of I/O requirements, capacity, data sharing, and security. Some need local storage, some need a centralized storage array, and others need distributed storage, which itself could be local or networked. One application might excel with block storage while another works best with file or object storage. For example, an OLTP database might require small amounts of very fast flash storage; a media or streaming application might need vast quantities of inexpensive disk storage with extra security safeguards; while a third application might require a mix of different storage tiers with multiple servers sharing the same data. This SNIA Networking Storage Forum “Storage for Applications” webcast series will cover the storage requirements for specific uses such as artificial intelligence (AI), database, cloud, media & entertainment, automotive, edge, and more. With limited resources, it’s important to understand the storage needs of your applications in order to choose the right storage and storage networking strategy, rather than discovering the hard way that you’ve chosen the wrong solution for your application. We kick off this series on October 5, 2020 with “Storage for AI Applications.” AI itself encompasses a broad range of use cases, largely divided into training and inference. Read More
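
As a toy illustration of matching storage to application needs as described above, here is a minimal Python sketch that maps a coarse I/O profile to a storage choice. The profile fields, thresholds, and recommendation strings are invented for illustration only and are not a SNIA recommendation.

```python
from dataclasses import dataclass

# Toy illustration only: map a coarse application I/O profile to a storage
# choice, echoing the examples above (OLTP -> small amounts of fast flash,
# streaming -> large quantities of inexpensive media, shared mixed workloads
# -> tiered shared storage). Thresholds and labels are invented assumptions.
@dataclass
class AppProfile:
    capacity_tb: float       # rough working-set size
    latency_sensitive: bool  # e.g. OLTP transactions
    shared_by_servers: bool  # multiple servers need the same data


def recommend_storage(p: AppProfile) -> str:
    if p.latency_sensitive and p.capacity_tb < 10:
        return "local or networked block storage on fast flash"
    if p.capacity_tb >= 100 and not p.latency_sensitive:
        return "high-capacity file or object storage on inexpensive media"
    if p.shared_by_servers:
        return "shared file/object storage tiered across flash and disk"
    return "general-purpose block storage"


# Example: a small, latency-sensitive OLTP database.
print(recommend_storage(AppProfile(capacity_tb=2, latency_sensitive=True, shared_by_servers=False)))
```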