A Q&A on Discovery Automation for NVMe-oF IP-Based SANs

To fully unlock the potential of NVMe® IP-based SANs, we first need to address the manual and error-prone process that is currently used to establish connectivity between NVMe Hosts and NVM subsystems. Several leading companies in the industry have joined together through NVM Express to collaborate on innovations to simplify and automate this discovery process. This was the topic of discussion at our recent SNIA Networking Storage Forum webcast “NVMe-oF: Discovery Automation for IP-based SANs” where our experts, Erik Smith and Curtis Ballard, took a deep dive into the work being done to address these issues. If you missed the live event, you can watch it on demand here and get a copy of the slides. Erik and Curtis did not have time to answer all the questions during the live presentation. As promised, here are answers to them all. Q. Is the Centralized Discovery Controller (CDC) highly available, and is this visible to the hosts? Do they see a pair of CDCs on the network and retry requests to a secondary if a primary is not available? Read More
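To make the question concrete, here is a minimal host-side sketch in Python that shells out to nvme-cli and simply retries discovery against a secondary address when the primary does not respond. The CDC addresses are hypothetical placeholders and the discovery port is an assumption; this illustrates the idea behind the question, not the high-availability mechanism the specification or the webcast describes.

```python
# Hypothetical sketch: try a primary discovery controller, then a secondary.
# Addresses and port are illustrative assumptions, not values from the webcast.
import subprocess

CDC_ADDRESSES = ["192.168.10.5", "192.168.20.5"]   # primary, secondary (placeholders)
DISCOVERY_PORT = "8009"                            # assumed NVMe/TCP discovery port

def discover(addr: str) -> bool:
    """Run 'nvme discover' against one discovery controller; return True on success."""
    result = subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", addr, "-s", DISCOVERY_PORT],
        capture_output=True, text=True)
    if result.returncode == 0:
        print(result.stdout)
        return True
    print(f"discovery via {addr} failed: {result.stderr.strip()}")
    return False

for cdc in CDC_ADDRESSES:
    if discover(cdc):
        break
```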

Automating Discovery for NVMe IP-based SANs

NVMe® IP-based SANs (including transports such as TCP, RoCE, and iWARP) have the potential to provide significant benefits in application environments ranging from the Edge to the Data Center. However, before we can fully unlock the potential of the NVMe IP-based SAN, we first need to address the manual and error-prone process that is currently used to establish connectivity between NVMe Hosts and NVM subsystems. This process requires administrators to explicitly configure each Host to access the appropriate NVM subsystems in their environment. In addition, any time an NVM Subsystem interface is added or removed, a Host administrator may need to explicitly update the configuration of impacted hosts to reflect this change. Because this configuration process is decentralized, using it to manage connectivity for more than a few Host and NVM subsystem interfaces is impractical and adds complexity when deploying an NVMe IP-based SAN in environments that require a high degree of automation. Read More
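To see why this does not scale, consider a minimal sketch (assuming a Linux host with nvme-cli installed) of the explicit per-host configuration described above. Every NQN and address below is a hypothetical placeholder, and every host in the environment would need its own copy of this list kept current by hand.

```python
# Minimal sketch of the manual approach: each host carries an explicit list of
# NVM subsystem interfaces and connects to each one with nvme-cli.
# All NQNs and addresses are hypothetical placeholders.
import subprocess

SUBSYSTEMS = [
    {"nqn": "nqn.2014-08.com.example:subsys-01", "traddr": "192.168.10.21", "trsvcid": "4420"},
    {"nqn": "nqn.2014-08.com.example:subsys-02", "traddr": "192.168.10.22", "trsvcid": "4420"},
    # Adding or removing a subsystem interface means editing this list on every host.
]

for s in SUBSYSTEMS:
    subprocess.run(
        ["nvme", "connect", "-t", "tcp",
         "-n", s["nqn"], "-a", s["traddr"], "-s", s["trsvcid"]],
        check=False)
```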

Q&A: Security of Data on NVMe-oF

Ensuring the security of data on NVMe over Fabrics was the topic of our SNIA Networking Storage Forum (NSF) webcast “Security of Data on NVMe over Fabrics, the Armored Truck Way.” During the webcast our experts outlined industry trends, potential threats, security best practices and much more. The live audience asked several interesting questions and here are answers to them. Q. Does use of strong authentication and network encryption ensure I will be compliant with regulations such as HIPAA, GDPR, PCI, CCPA, etc.? A. Not by themselves. Proper use of strong authentication and network encryption will reduce the risk of data theft or improper data access, which can help achieve compliance with data privacy regulations. But full compliance also requires establishment of proper processes, employee training, system testing and monitoring. Compliance may also require regular reviews and audits of systems and processes plus the involvement of lawyers and compliance consultants. Q. Does using encryption on the wire such as IPsec, FC_ESP, or TLS protect against ransomware, man-in-the-middle attacks, or physical theft of the storage system? Read More

How SNIA Swordfish™ Expanded with NVMe® and NVMe-oF™

The SNIA Swordfish™ specification and ecosystem are growing in scope to include full enablement and alignment for NVMe® and NVMe-oF™ client workloads and use cases. By partnering with other industry-standard organizations including DMTF®, NVM Express, and OpenFabrics Alliance (OFA), SNIA’s Scalable Storage Management Technical Work Group has updated the Swordfish bundles (version 1.2.1 and later) to cover an expanding range of NVMe and NVMe-oF functionality, including NVMe device management and storage fabric technology management and administration. The Need: Large-scale computing designs are increasingly multi-node and linked together through high-speed networks. These networks may be composed of different types of technologies that are fungible and continually evolving. Over time, many different types of high-performance networking devices will evolve to participate in these modern, coupled-computing platforms. New fabric management capabilities, orchestration, and automation will be required to deploy, secure, and optimally maintain these high-speed networks. Read More
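As a rough illustration of what this looks like to a management client, the sketch below queries a Swordfish service over its Redfish-style REST interface. The endpoint, credentials, and the assumption that the service root links to a top-level Storage collection are placeholders; actual resource paths depend on the Swordfish version and implementation.

```python
# Sketch only: endpoint, credentials, and resource layout are assumptions.
import requests

BASE = "https://swordfish.example.com"   # hypothetical management endpoint
AUTH = ("admin", "password")             # placeholder credentials

# Fetch the service root and list the collections it exposes.
root = requests.get(f"{BASE}/redfish/v1", auth=AUTH, verify=False).json()
print("Service root collections:", [k for k, v in root.items() if isinstance(v, dict)])

# If the service exposes a Storage collection, walk it and print each member.
storage_link = root.get("Storage", {}).get("@odata.id")
if storage_link:
    storage = requests.get(f"{BASE}{storage_link}", auth=AUTH, verify=False).json()
    for member in storage.get("Members", []):
        print(member["@odata.id"])
```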

Protecting NVMe over Fabrics Data from Day One, The Armored Truck Way

With ever-increasing threat vectors both inside and outside the data center, a compromised customer dataset can quickly result in a torrent of lost business data, eroded trust, significant penalties, and potential lawsuits. Potential vulnerabilities exist at every point when scaling out NVMe® storage, which requires data to be secured every time it leaves a server or the storage media, not just when leaving the data center. NVMe over Fabrics is poised to be one of the most dominant storage transports of the future, and securing and validating the vast amounts of data that will traverse this fabric is not just prudent, but paramount. Read More

A Q&A on NVMe-oF Performance Hero Numbers

Last month, the SNIA Networking Storage Forum (NSF) hosted a live webcast “NVMe-oF: Looking Beyond Performance Hero Numbers.” It was extremely popular; in fact, it has been viewed almost 800 times in just two weeks! If you missed it, it’s available on-demand, along with the presentation slides, at the SNIA Educational Library. Our audience asked several great questions during the live event and our expert presenters, Erik Smith, Rob Davis and Nishant Lodha, have kindly answered them all here. Q. There are initiators for Linux but not for Windows? What are my options to connect NVMe-oF to Windows Server? Read More

Beyond NVMe-oF Performance Hero Numbers

When it comes to selecting the right NVMe over Fabrics™ (NVMe-oF™) solution, one should look beyond test results that demonstrate NVMe-oF’s dramatic reduction in latency and consider other, more important questions such as “How does the transport really impact application performance?” and “How does the transport holistically fit into my environment?” To date, the focus has been on specialized RDMA fabrics (e.g., RoCE) because they provide the lowest possible latency, as well as Fibre Channel because it is generally considered to be the most reliable. However, with the introduction of NVMe-oF/TCP this conversation must be expanded to also include considerations regarding scale, cost, and operations. That’s why the SNIA Networking Storage Forum (NSF) is hosting a webcast series that will dive into answering these questions beyond the standard answer “it depends.” Read More

Optimizing NVMe over Fabrics Performance Q&A

Almost 800 people have already watched our webcast “Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors” where SNIA experts covered the factors impacting different Ethernet transport performance for NVMe over Fabrics (NVMe-oF) and provided data comparisons of NVMe over Fabrics tests with iWARP, RoCEv2 and TCP. If you missed the live event, watch it on-demand at your convenience. The session generated a lot of questions, all answered here in this blog. In fact, many of the questions have prompted us to continue this discussion with future webcasts on NVMe-oF performance. Please follow us on Twitter @SNIANSF for upcoming dates. Q. What factors will affect the performance of NVMe over RoCEv2 and TCP when the distance between host and target is longer than in a typical data center environment, i.e., RTT > 100ms? Read More
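One way to see why a long RTT matters for any transport is the bandwidth-delay product: the host must keep enough commands and data in flight to cover the round trip, or throughput collapses while the link sits idle. The sketch below uses purely illustrative numbers, not anything measured in the webcast, to show how quickly the required number of outstanding I/Os grows at RTT = 100 ms.

```python
# Back-of-the-envelope sketch: bandwidth-delay product at long RTT.
# All numbers are illustrative assumptions.
link_gbps   = 25      # link speed in Gbit/s
rtt_ms      = 100     # round-trip time in milliseconds
io_size_kib = 4       # I/O size in KiB

bdp_bytes = (link_gbps * 1e9 / 8) * (rtt_ms / 1e3)    # bytes in flight to fill the pipe
outstanding_ios = bdp_bytes / (io_size_kib * 1024)    # queue depth needed at that I/O size

print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB in flight")
print(f"Outstanding 4 KiB I/Os needed to keep the link busy: {outstanding_ios:,.0f}")
```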

Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors

NVMe over Fabrics technology is gaining momentum and getting more traction in data centers, but there are three kinds of Ethernet-based NVMe over Fabrics transports: iWARP, RoCEv2 and TCP. How do we optimize NVMe over Fabrics performance with different Ethernet transports? That will be the discussion topic at our SNIA Networking Storage Forum webcast, “Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors” on September 16, 2020. Setting aside the considerations of network infrastructure, scalability, security requirements and the complete solution stack, this webcast will explore the performance of different Ethernet-based transports for NVMe over Fabrics at the detailed benchmark level. We will show three key performance indicators: IOPS, Throughput, and Latency with different workloads including: Read More
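The three indicators are not independent: for a given number of outstanding I/Os they are tied together by Little's Law, which is part of why they are best reported side by side. The sketch below uses illustrative numbers only, not results from the presentation, to show the relationship.

```python
# Little's Law sketch: IOPS ~= outstanding I/Os / average latency,
# and throughput = IOPS * I/O size. Numbers are illustrative only.
queue_depth   = 32
avg_latency_s = 100e-6       # 100 microseconds average completion latency
io_size_bytes = 4 * 1024     # 4 KiB I/Os

iops       = queue_depth / avg_latency_s
throughput = iops * io_size_bytes

print(f"IOPS:       {iops:,.0f}")                  # ~320,000
print(f"Throughput: {throughput / 1e6:.0f} MB/s")  # ~1,311 MB/s
```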

Notable Questions on NVMe-oF 1.1

At our recent SNIA Networking Storage Forum (NSF) webcast, “Notable Updates in NVMe-oF™ 1.1” we explored the latest features of NVMe over Fabrics (NVMe-oF), discussing what’s new in the NVMe-oF 1.1 release, support for CMB and PMR, managing and provisioning NVMe-oF devices with SNIA Swordfish™, and FC-NVMe-2. If you missed the live event, you can watch it here. Our presenters received many interesting questions on NVMe-oF and here are answers to them all: Read More