Optimizing NVMe over Fabrics Performance Q&A

Almost 800 people have already watched our webcast “Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors,” where SNIA experts covered the factors impacting the performance of different Ethernet transports for NVMe over Fabrics (NVMe-oF) and provided data comparisons of NVMe-oF tests with iWARP, RoCEv2 and TCP. If you missed the live event, watch it on-demand at your convenience. The session generated a lot of questions, all answered here in this blog. In fact, many of the questions have prompted us to continue this discussion with future webcasts on NVMe-oF performance. Please follow us on Twitter @SNIANSF for upcoming dates. Q. What factors will affect the performance of NVMe over RoCEv2 and TCP when the network between host and target is longer than in a typical data center environment, i.e., RTT > 100ms? Read More

Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors

NVMe over Fabrics technology is gaining momentum and traction in data centers, but there are three kinds of Ethernet-based NVMe over Fabrics transports: iWARP, RoCEv2 and TCP. How do we optimize NVMe over Fabrics performance with different Ethernet transports? That will be the discussion topic at our SNIA Networking Storage Forum webcast, “Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors” on September 16, 2020. Setting aside the considerations of network infrastructure, scalability, security requirements and the complete solution stack, this webcast will explore the performance of different Ethernet-based transports for NVMe over Fabrics at a detailed benchmark level. We will show three key performance indicators: IOPS, throughput, and latency with different workloads including: Read More

Notable Questions on NVMe-oF 1.1

At our recent SNIA Networking Storage Forum (NSF) webcast, “Notable Updates in NVMe-oF™ 1.1” we explored the latest features of NVMe over Fabrics (NVMe-oF), discussing what’s new in the NVMe-oF 1.1 release, support for CMB and PMR, managing and provisioning NVMe-oF devices with SNIA Swordfish™, and FC-NVMe-2. If you missed the live event, you can watch it here. Our presenters received many interesting questions on NVMe-oF and here are answers to them all: Read More

The Latest on NVMe-oF 1.1

Since its introduction, NVMe over Fabrics (NVMe-oF™) has not been resting on its laurels. Work has been ongoing, and several updates are worth mentioning. That’s exactly what the SNIA Networking Storage Forum will be covering on June 30th, 2020 at our live webcast, “Notable Updates in NVMe-oF 1.1.” There is more to a technology than its core standard, of course, and many different groups have been hard at work improving and fleshing out the capabilities related to NVMe-oF. In this webcast, we will explore a few of these projects and how they relate to implementing the technology. In particular, this webcast will cover:
  • A summary of new items introduced in NVMe-oF 1.1
  • Updates regarding enhancements to FC-NVMe-2
  • How SNIA’s provisioning model helps NVMe-oF Ethernet Bunch of Flash (EBOF) devices
  • Managing and provisioning NVMe-oF devices with SNIA Swordfish
Register today for a look at what’s new in NVMe-oF. We hope to see you on June 30th.

Intro to Incast, Head of Line Blocking, and Congestion Management

For a long time, the architecture and best practices of storage networks have been relatively well understood. Recently, however, advanced capabilities have been added to storage that could have broader impacts on networks than we think. The three main storage network transports – Fibre Channel, Ethernet, and InfiniBand – all have mechanisms to handle increased traffic, but they are not all affected or implemented in the same way. For instance, a protocol such as NVMe over Fabrics will use very different methodologies for congestion avoidance, burst handling, and queue management on one network transport compared to another. Read More

Dive into NVMe at Storage Developer Conference – a Chat with SNIA Technical Council Co-Chair Bill Martin

The SNIA Storage Developer Conference (SDC) is coming up September 24-27, 2018 at the Hyatt Regency Santa Clara, CA. The agenda is now live!

SNIA on Storage is teaming up with the SNIA Technical Council to dive into major themes of the 2018 conference.  The SNIA Technical Council takes a leadership role to develop the content for each SDC, so SNIA on Storage spoke with Bill Martin, SNIA Technical Council Co-Chair and SSD I/O Standards at Samsung Electronics, to understand why SDC is bringing NVMe and NVMe-oF to conference attendees.

SNIA On Storage (SOS): What is NVMe and why is SNIA emphasizing it as one of their key areas of focus for SDC?

Bill Martin (BM): NVMe™, also known as NVM Express®, is an open collection of standards and information to fully expose the benefits of non-volatile memory (NVM) in all types of computing environments, from mobile to data center.

SNIA is very supportive of NVMe. In fact, earlier this year, SNIA, the Distributed Management Task Force (DMTF), and the NVM Express organizations formed a new alliance to coordinate standards for managing solid state drive (SSD) storage devices. The alliance brings together multiple standards to address the scale-out management of SSDs. It’s designed to enable an all-inclusive management experience by improving the interoperable management of information technologies.

With interest from architects, developers, and implementers both within and outside of SNIA in how these standards work, the SNIA Technical Council decided to bring even more sessions on this important area to the SDC audience this year. We are proud to include 16 sessions on NVMe topics over the four days of the conference.

SOS:  What will I learn about NVMe at SDC? Read More

Comparing iSCSI, iSER, and NVMe over Fabrics (NVMe-oF): Ecosystem, Interoperability, Performance, and Use Cases

iSCSI is one of the most broadly supported storage protocols, but traditionally has not been associated with the highest performance. Newer protocols like iSER and NVMe over Fabrics promise extreme performance but are still maturing and lack the broad feature and platform support of iSCSI. Storage vendors and customers face interesting tradeoffs and options when evaluating how to achieve the highest block storage performance on Ethernet networks, while preserving the major software and hardware investment in iSCSI. Read More

Ethernet Networked Storage – FAQ

At our SNIA Ethernet Storage Forum (ESF) webcast “Re-Introduction to Ethernet Networked Storage,” we provided a solid foundation on Ethernet networked storage, the move to higher speeds, challenges, use cases and benefits. Here are answers to the questions we received during the live event.

Q. Within the iWARP protocol there is a layer called MPA (Marker PDU Aligned Framing for TCP) inserted for storage applications. What is the point of this protocol?

A. MPA is an adaptation layer between the iWARP Direct Data Placement Protocol and TCP/IP. It provides framing and CRC protection for Protocol Data Units.  MPA enables packing of multiple small RDMA messages into a single Ethernet frame.  It also enables an iWARP NIC to place frames received out-of-order (instead of dropping them), which can be beneficial on best-effort networks. More detail can be found in IETF RFC 5044 and IETF RFC 5041.
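
For readers who want to picture the framing, below is a deliberately simplified C sketch of the FPDU layout described in RFC 5044. The periodic markers, byte-order handling, and exact wire packing are omitted, and the struct is illustrative rather than taken from any iWARP implementation.

```c
/*
 * Simplified sketch of an iWARP MPA Framed PDU (FPDU) per IETF RFC 5044.
 * Markers (4-octet fields inserted every 512 octets of the TCP stream)
 * and network byte-order handling are omitted for clarity.
 */
#include <stdint.h>
#include <stdio.h>

struct mpa_fpdu {
    uint16_t ulpdu_length;  /* length of the DDP segment (ULPDU) that follows */
    uint8_t  ulpdu[];       /* DDP segment carrying the RDMA message          */
    /* ...followed on the wire by 0-3 pad octets (to a 4-octet boundary)      */
    /* and a 4-octet CRC-32c protecting the whole FPDU                        */
};

int main(void)
{
    printf("MPA prepends %zu octets of framing before each DDP segment\n",
           sizeof(struct mpa_fpdu));
    return 0;
}
```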

Q. What is the API for RDMA network IPC?

A. The general API for RDMA is called verbs. The OpenFabrics Verbs Working Group oversees the development of the verbs definition and functionality in the OpenFabrics Software (OFS) code. You can find the training content from the OpenFabrics Alliance here. General information about RDMA over Converged Ethernet (RoCE) is available at the InfiniBand Trade Association website. Information about the Internet Wide Area RDMA Protocol (iWARP) can be found at the IETF: RFC 5040, RFC 5041, RFC 5042, RFC 5043, RFC 5044.
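
As a concrete, heavily abbreviated illustration of the verbs API, the sketch below uses libibverbs to open the first RDMA device, allocate a protection domain and a completion queue, and register a memory buffer. Queue pair creation, connection establishment, and posting of work requests are left out, error handling is minimal, and the build command in the comment is an assumption about a typical Linux setup.

```c
/* Minimal verbs (libibverbs) sketch -- illustrative only.
 * Assumed build: gcc verbs_sketch.c -o verbs_sketch -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);         /* open first device */
    if (!ctx) {
        fprintf(stderr, "ibv_open_device failed\n");
        return 1;
    }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                      /* protection domain */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);  /* completion queue  */

    void *buf = malloc(4096);                                   /* buffer the NIC may */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,               /* access directly    */
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("device %s: pd, cq, and mr allocated\n", ibv_get_device_name(devs[0]));

    /* ...create a queue pair, exchange keys/QPNs out of band or via rdma_cm, */
    /* then post work requests with ibv_post_send()/ibv_post_recv()...        */

    ibv_dereg_mr(mr);
    ibv_destroy_cq(cq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```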

Q. RDMA requires TCP/IP (iWARP), InfiniBand, or RoCE to operate on with respect to NVMe over Fabrics. Therefore, what are the advantages and disadvantages of iWARP vs. RoCE?

A. Both RoCE and iWARP support RDMA over Ethernet. iWARP uses TCP/IP while RoCE uses UDP/IP. Debating which one is better is beyond the scope of this webcast, but you can learn more by watching the SNIA ESF webcast, “How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics.”
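
One practical point worth adding: from the application’s perspective the two transports look the same, because both are driven through the same verbs/RDMA-CM interface; only the encapsulation underneath (TCP/IP versus UDP/IP) differs. The hedged sketch below, with a placeholder address and with event handling and data transfer omitted, would run unchanged on an iWARP or a RoCEv2 NIC.

```c
/* RDMA connection-setup sketch using librdmacm -- illustrative only.
 * The same code path applies to iWARP and RoCEv2 adapters.
 * Assumed build: gcc rdma_cm_sketch.c -o rdma_cm_sketch -lrdmacm -libverbs
 */
#include <stdio.h>
#include <netdb.h>
#include <rdma/rdma_cma.h>

int main(void)
{
    struct rdma_event_channel *ec = rdma_create_event_channel();
    struct rdma_cm_id *id = NULL;
    if (!ec || rdma_create_id(ec, &id, NULL, RDMA_PS_TCP)) {   /* reliable-connected */
        perror("rdma_create_id");
        return 1;
    }

    struct addrinfo *res = NULL;
    if (getaddrinfo("192.0.2.10", "4420", NULL, &res)) {  /* placeholder target;    */
        fprintf(stderr, "getaddrinfo failed\n");          /* 4420 is the usual      */
        return 1;                                         /* NVMe-oF port           */
    }

    rdma_resolve_addr(id, NULL, res->ai_addr, 2000);  /* binds the id to a local RDMA device */
    /* ...wait for RDMA_CM_EVENT_ADDR_RESOLVED on 'ec', then call                  */
    /* rdma_resolve_route(), rdma_create_qp(), and rdma_connect() as usual.        */

    printf("rdma_cm id created; identical code for iWARP and RoCEv2\n");

    freeaddrinfo(res);
    rdma_destroy_id(id);
    rdma_destroy_event_channel(ec);
    return 0;
}
```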

Q. What about 100Gb Ethernet optical data center solutions?

A. 100Gb Ethernet optical interconnect products were first available around 2011 or 2012 in a 10x10Gb/s design (100GBASE-CR10 for copper, 100GBASE-SR10 for optical), which required thick cables and the large CXP or CFP MSA housings. These were generally used only for switch-to-switch links. Starting in late 2015, the more compact 4x25Gb/s design (using the QSFP28 form factor) became available in copper (DAC), optical cabling (AOC), and transceivers (100GBASE-SR4, 100GBASE-LR4, 100GBASE-PSM4, etc.). The optical transceivers allow 100GbE connectivity up to 100m, 2km, or 10km distances, depending on the type of transceiver and fiber used.

Q. Where is FCoE being used today?

A. FCoE is primarily used in blade server deployments where there could be contention for PCI slots and only one built-in NIC. These NICs typically support FCoE at 10Gb/s speeds, passing both FC and Ethernet traffic over a single connection to a Top-of-Rack FCoE switch, which forwards the traffic to the respective fabrics (FC and Ethernet). However, FCoE has not gained much acceptance outside of the blade server use case.

Q. Why did iSCSI start out mostly in lower-cost SAN markets?

A. When it first debuted, iSCSI packets were processed by software initiators, which consumed CPU cycles and showed higher latency than Fibre Channel. Achieving high performance with iSCSI required expensive NICs with iSCSI hardware acceleration, and iSCSI networks were typically limited to 100Mb/s or 1Gb/s while Fibre Channel was running at 4Gb/s. Fibre Channel is also a lossless protocol, while TCP/IP is lossy, which caused concerns for storage administrators. Now, however, iSCSI can run on 25, 40, 50 or 100Gb/s Ethernet with various types of TCP/IP acceleration or RDMA offloads available on the NICs.

Q. What are some of the differences between iSCSI and FCoE?

A. iSCSI runs SCSI protocol commands over TCP/IP (except iSER, which is iSCSI over RDMA), while FCoE runs the Fibre Channel protocol over Ethernet. iSCSI can run over Layer 2 and Layer 3 networks, while FCoE is Layer 2 only. FCoE requires a lossless network, typically implemented using DCB (Data Center Bridging) Ethernet and specialized switches.
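
To make that layering concrete, the simplified sketch below shows the shape of an iSCSI SCSI-Command PDU header (the 48-byte Basic Header Segment defined in RFC 7143), which an initiator writes to an ordinary TCP connection, conventionally on port 3260. Byte-order handling, digests, and most flag details are glossed over; this is an illustration of “SCSI over TCP,” not a working initiator. FCoE, by contrast, encapsulates whole Fibre Channel frames directly in Ethernet frames.

```c
/* Simplified view of the iSCSI SCSI-Command Basic Header Segment (48 bytes),
 * per RFC 7143. Fields are shown in wire order; byte-order conversion,
 * optional AHS/digests, and flag details are omitted.
 */
#include <stdint.h>
#include <stdio.h>

struct iscsi_scsi_cmd_bhs {
    uint8_t  opcode;              /* 0x01 = SCSI Command                   */
    uint8_t  flags;               /* final/read/write bits, task attribute */
    uint8_t  reserved[2];
    uint8_t  total_ahs_length;    /* additional header segments            */
    uint8_t  data_segment_len[3]; /* length of the data that follows       */
    uint8_t  lun[8];              /* logical unit number                   */
    uint32_t initiator_task_tag;  /* ties the response back to the request */
    uint32_t expected_xfer_len;   /* expected data transfer length         */
    uint32_t cmd_sn;              /* command sequence number               */
    uint32_t exp_stat_sn;         /* expected status sequence number       */
    uint8_t  cdb[16];             /* the SCSI CDB itself, e.g. READ(10)    */
};

int main(void)
{
    /* The whole header (plus any data) is simply written to a TCP socket. */
    printf("iSCSI SCSI-Command BHS is %zu bytes\n",
           sizeof(struct iscsi_scsi_cmd_bhs));
    return 0;
}
```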

Q. You pointed out at least twice that people incorrectly predicted the end of Fibre Channel, but it didn’t happen. What makes you say Fibre Channel is actually going to decline this time?

A. Several things are different this time. First, Ethernet is now much faster than Fibre Channel instead of the other way around. Second, Ethernet networks now support lossless and RDMA options that were not previously available. Third, several new solutions–like big data, hyper-converged infrastructure, object storage, most scale-out storage, and most clustered file systems–do not support Fibre Channel. Fourth, none of the hyper-scale cloud implementations use Fibre Channel and most private and public cloud architects do not want a separate Fibre Channel network–they want one converged network, which is usually Ethernet.

Q. Which storage protocols support RDMA over Ethernet?

A. The Ethernet RDMA options for storage protocols are iSER (iSCSI Extensions for RDMA), SMB Direct, NVMe over Fabrics, and NFS over RDMA. There are also storage solutions that use proprietary protocols supporting RDMA over Ethernet.

2017 Ethernet Roadmap for Networked Storage

When SNIA’s Ethernet Storage Forum (ESF) last looked at the Ethernet Roadmap for Networked Storage in 2015, we anticipated a world of rapid change. The list of advances in 2016 is nothing short of amazing:

  • New adapters, switches, and cables have been launched supporting 25, 50, and 100Gb Ethernet speeds, with support from major server vendors and storage startups
  • Multiple vendors have added or updated support for RDMA over Ethernet
  • The growth of NVMe storage devices and release of the NVMe over Fabrics standard are driving demand for both faster speeds and lower latency in networking
  • The growth of cloud, virtualization, hyper-converged infrastructure, object storage, and containers is increasing the popularity of Ethernet as a storage fabric

The world of Ethernet in 2017 promises more of the same. Now we revisit the topic with a look ahead at what’s in store for Ethernet in 2017. Join us on December 1, 2016 for our live webcast, “2017 Ethernet Roadmap for Networked Storage.”

With all of these incredible advances, SNIA ESF has assembled a great team of experts to help you keep up. Here are some of the things to keep track of in the upcoming year:

  • Learn what is driving the adoption of faster Ethernet speeds and new Ethernet storage models
  • Understand the different copper and optical cabling choices available at different speeds and distances
  • Debate how other connectivity options will compete against Ethernet for the new cloud and software-defined storage networks
  • And finally look ahead with us at what Ethernet is planning for new connectivity options and faster speeds such as 200 and 400 Gigabit Ethernet

The momentum is strong with Ethernet, and we’re here to help you stay informed of the lightning-fast changes. Come join us to look at the future of Ethernet for storage and join this SNIA ESF webcast on December 1st. Register here.