Category: NVMe over Fabrics
Optimizing NVMe over Fabrics Performance with Different Ethernet Transports: Host Factors
Notable Questions on NVMe-oF 1.1
The Latest on NVMe-oF 1.1
- A summary of new items introduced in NVMe-oF 1.1
- Updates regarding enhancements to FC-NVMe-2
- How SNIA’s provisioning model helps NVMe-oF Ethernet Bunch of Flash (EBOF) devices
- Managing and provisioning NVMe-oF devices with SNIA Swordfish
Intro to Incast, Head of Line Blocking, and Congestion Management
Dive into NVMe at Storage Developer Conference – a Chat with SNIA Technical Council Co-Chair Bill Martin
The SNIA Storage Developer Conference (SDC) is coming up September 24-27, 2018, at the Hyatt Regency Santa Clara, CA. The agenda is now live!
SNIA on Storage is teaming up with the SNIA Technical Council to dive into major themes of the 2018 conference. The SNIA Technical Council takes a leadership role to develop the content for each SDC, so SNIA on Storage spoke with Bill Martin, SNIA Technical Council Co-Chair and SSD I/O Standards at Samsung Electronics, to understand why SDC is bringing NVMe and NVMe-oF to conference attendees.
SNIA On Storage (SOS): What is NVMe and why is SNIA emphasizing it as one of their key areas of focus for SDC?
Bill Martin (BM): NVMe™, also known as NVM Express®, is an open collection of standards and information to fully expose the benefits of non-volatile memory (NVM) in all types of computing environments from mobile to data center.
SNIA is very supportive of NVMe. In fact, earlier this year, SNIA, the Distributed Management Task Force (DMTF), and the NVM Express organizations formed a new alliance to coordinate standards for managing solid state drive (SSD) storage devices. This alliance brings together multiple standards to address the challenge of scale-out management of SSDs. It’s designed to enable an all-inclusive management experience by improving the interoperable management of information technologies.
With interest from architects, developers, and implementers both within and outside of SNIA in how these standards work, the SNIA Technical Council decided to bring even more sessions on this important area to the SDC audience this year. We are proud to include 16 sessions on NVMe topics over the four days of the conference.
SOS: What will I learn about NVMe at SDC?
Does Your World Include Storage? Don’t Miss SDC!
Whether storage is already a main focus of your career or is heading your way, you’ll definitely want to attend the flagship event for storage developers – and those involved in storage operations, decision making, and usage – SNIA’s 19th annual Storage Developer Conference (SDC), September 11-14, 2017 at the Hyatt Regency Santa Clara, California.
Comparing iSCSI, iSER, and NVMe over Fabrics (NVMe-oF): Ecosystem, Interoperability, Performance, and Use Cases
Ethernet Networked Storage – FAQ
At our SNIA Ethernet Storage Forum (ESF) webcast “Re-Introduction to Ethernet Networked Storage,” we provided a solid foundation on Ethernet networked storage, covering the move to higher speeds, challenges, use cases, and benefits. Here are answers to the questions we received during the live event.
Q. Within the iWARP protocol there is a layer called MPA (Marker PDU Aligned Framing for TCP) inserted for storage applications. What is the point of this protocol?
A. MPA is an adaptation layer between the iWARP Direct Data Placement Protocol and TCP/IP. It provides framing and CRC protection for Protocol Data Units. MPA enables packing of multiple small RDMA messages into a single Ethernet frame. It also enables an iWARP NIC to place frames received out-of-order (instead of dropping them), which can be beneficial on best-effort networks. More detail can be found in IETF RFC 5044 and IETF RFC 5041.
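For readers who want to see the framing concretely, here is a rough sketch (not from the webcast) of how an MPA FPDU is laid out per RFC 5044: a 2-byte ULPDU length, the ULPDU itself, padding to a 4-byte boundary, and a trailing CRC32c. It omits the periodic markers and the exact CRC coverage rules, so treat it as a simplified illustration rather than a conformant implementation.

```c
/* Simplified MPA FPDU framing sketch (markers omitted):
 * [2-byte ULPDU length][ULPDU][pad to 4-byte boundary][4-byte CRC32c] */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bit-wise CRC-32C (Castagnoli), the checksum family MPA uses. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/* Build one FPDU into 'out' (must hold ulpdu_len + 9 bytes); returns framed length. */
static size_t mpa_frame(const uint8_t *ulpdu, uint16_t ulpdu_len, uint8_t *out)
{
    size_t pos = 0;
    out[pos++] = (uint8_t)(ulpdu_len >> 8);   /* MPA length, big-endian */
    out[pos++] = (uint8_t)(ulpdu_len & 0xFF);
    memcpy(out + pos, ulpdu, ulpdu_len);
    pos += ulpdu_len;
    while (pos % 4 != 0)                      /* pad to a 4-byte boundary */
        out[pos++] = 0;
    uint32_t crc = crc32c(out, pos);          /* CRC over length + ULPDU + pad */
    out[pos++] = (uint8_t)(crc >> 24);
    out[pos++] = (uint8_t)(crc >> 16);
    out[pos++] = (uint8_t)(crc >> 8);
    out[pos++] = (uint8_t)(crc & 0xFF);
    return pos;
}

int main(void)
{
    uint8_t payload[] = "example DDP segment";
    uint8_t fpdu[256];
    size_t n = mpa_frame(payload, (uint16_t)sizeof(payload), fpdu);
    printf("FPDU length: %zu bytes\n", n);
    return 0;
}
```

The length field and CRC are what let a receiving iWARP NIC delineate and validate each FPDU directly in the TCP byte stream, which is what makes out-of-order placement possible.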
Q. What is the API for RDMA network IPC?
A. The general API for RDMA is called verbs. The OpenFabrics Verbs Working Group oversees the definition and development of verbs functionality in the OpenFabrics Software (OFS) code, and training content is available from the OpenFabrics Alliance. General information about RDMA over Converged Ethernet (RoCE) is available at the InfiniBand Trade Association website. Information about the Internet Wide Area RDMA Protocol (iWARP) can be found in IETF RFC 5040, RFC 5041, RFC 5042, RFC 5043, and RFC 5044.
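As a concrete illustration of what verbs look like, below is a minimal sketch using the libibverbs C library from the OpenFabrics/rdma-core stack (compile with -libverbs). It only opens the first RDMA device, allocates a protection domain, and registers a buffer, which is the groundwork every RDMA application does before creating completion queues and queue pairs. It assumes an RDMA-capable NIC is present, and error handling is trimmed for brevity.

```c
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    /* Open the device and allocate a protection domain. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the NIC can DMA to/from it directly. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* A real application would go on to create completion queues and
     * queue pairs (ibv_create_cq, ibv_create_qp) and post work requests. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```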
Q. With respect to NVMe over Fabrics, RDMA requires TCP/IP (iWARP), InfiniBand, or RoCE as a transport to run over. What, then, are the advantages and disadvantages of iWARP vs. RoCE?
A. Both RoCE and iWARP support RDMA over Ethernet. iWARP uses TCP/IP while RoCE uses UDP/IP. Debating which one is better is beyond the scope of this webcast, but you can learn more by watching the SNIA ESF webcast, “How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics.”
Q. What 100Gb Ethernet optical data center solutions are available?
A. 100Gb Ethernet optical interconnect products first became available around 2011 or 2012 in a 10x10Gb/s design (100GBASE-CR10 for copper, 100GBASE-SR10 for optical), which required thick cables and bulky CXP or CFP MSA housings. These were generally used only for switch-to-switch links. Starting in late 2015, the more compact 4x25Gb/s design (using the QSFP28 form factor) became available in copper (DAC), optical cabling (AOC), and transceivers (100GBASE-SR4, 100GBASE-LR4, 100GBASE-PSM4, etc.). The optical transceivers allow 100GbE connectivity at distances up to 100m, 2km, or 10km, depending on the type of transceiver and fiber used.
Q. Where is FCoE being used today?
A. FCoE is primarily used in blade server deployments where there can be contention for PCI slots and only one built-in NIC. These NICs typically support FCoE at 10Gb/s speeds, passing both FC and Ethernet traffic over a single connection to a Top-of-Rack FCoE switch, which forwards the traffic to the respective fabrics (FC and Ethernet). However, FCoE has not gained much acceptance outside of the blade server use case.
Q. Why did iSCSI start out mostly in lower-cost SAN markets?
A. When it first debuted, iSCSI packets were processed by software initiators, which consumed CPU cycles and showed higher latency than Fibre Channel. Achieving high performance with iSCSI required expensive NICs with iSCSI hardware acceleration, and iSCSI networks were typically limited to 100Mb/s or 1Gb/s while Fibre Channel was running at 4Gb/s. Fibre Channel is also a lossless protocol, while TCP/IP is lossy, which caused concerns for storage administrators. Now, however, iSCSI can run on 25, 40, 50, or 100Gb/s Ethernet with various types of TCP/IP acceleration or RDMA offloads available on the NICs.
Q. What are some of the differences between iSCSI and FCoE?
A. iSCSI runs SCSI protocol commands over TCP/IP (except iSER, which is iSCSI over RDMA), while FCoE runs the Fibre Channel protocol over Ethernet. iSCSI can run over Layer 2 and Layer 3 networks, while FCoE is Layer 2 only. FCoE requires a lossless network, typically implemented using DCB (Data Center Bridging) Ethernet and specialized switches.
Q. You pointed out at least twice that people incorrectly predicted the end of Fibre Channel, but it didn’t happen. What makes you say Fibre Channel is actually going to decline this time?
A. Several things are different this time. First, Ethernet is now much faster than Fibre Channel instead of the other way around. Second, Ethernet networks now support lossless and RDMA options that were not previously available. Third, several newer solutions, such as big data, hyper-converged infrastructure, object storage, most scale-out storage, and most clustered file systems, do not support Fibre Channel. Fourth, none of the hyper-scale cloud implementations use Fibre Channel, and most private and public cloud architects do not want a separate Fibre Channel network; they want one converged network, which is usually Ethernet.
Q. Which storage protocols support RDMA over Ethernet?
A. The Ethernet RDMA options for storage protocols are iSER (iSCSI Extensions for RDMA), SMB Direct, NVMe over Fabrics, and NFS over RDMA. There are also storage solutions that use proprietary protocols supporting RDMA over Ethernet.
2017 Ethernet Roadmap for Networked Storage
When SNIA’s Ethernet Storage Forum (ESF) last looked at the Ethernet Roadmap for Networked Storage in 2015, we anticipated a world of rapid change. The list of advances in 2016 is nothing short of amazing:
- New adapters, switches, and cables have been launched supporting 25, 50, and 100Gb Ethernet speeds, with support from major server vendors and storage startups
- Multiple vendors have added or updated support for RDMA over Ethernet
- The growth of NVMe storage devices and release of the NVMe over Fabrics standard are driving demand for both faster speeds and lower latency in networking
- The growth of cloud, virtualization, hyper-converged infrastructure, object storage, and containers is increasing the popularity of Ethernet as a storage fabric
The world of Ethernet in 2017 promises more of the same, so we’re revisiting the topic with a look ahead at what’s in store. Join us on December 1, 2016 for our live webcast, “2017 Ethernet Roadmap for Networked Storage.”
With all these incredible advances to keep track of, SNIA ESF has assembled a great team of experts to help you keep up. Here is some of what we’ll cover for the upcoming year:
- Learn what is driving the adoption of faster Ethernet speeds and new Ethernet storage models
- Understand the different copper and optical cabling choices available at different speeds and distances
- Debate how other connectivity options will compete against Ethernet for the new cloud and software-defined storage networks
- And finally look ahead with us at what Ethernet is planning for new connectivity options and faster speeds such as 200 and 400 Gigabit Ethernet
The momentum is strong with Ethernet, and we’re here to help you stay informed of the lightning-fast changes. Join this SNIA ESF webcast on December 1st to look at the future of Ethernet for storage with us. Register here.