When NVMe™ over Fabrics Meets TCP

In the storage world, NVMe™ is arguably the hottest thing going right now. Go to any storage conference – vendor-related or vendor-neutral – and you’ll see NVMe billed as the latest and greatest innovation. It stands to reason, then, that when you want to run NVMe over a network, you must understand NVMe over Fabrics (NVMe-oF). Meanwhile, TCP is by far the most popular networking transport protocol for both storage and non-storage traffic. TCP – the long-standing mainstay of networking – is the newest transport technology to be approved by the NVM Express® organization, enabling NVMe/TCP. This can mean really good things for storage and storage networking – but what are the tradeoffs? Read More

Introducing the Networking Storage Forum

At SNIA, we are dedicated to staying on top of storage trends and technologies to fulfill our mission as a globally recognized and trusted authority for storage leadership, standards, and technology expertise. For the last several years, the Ethernet Storage Forum has been working hard to provide high quality educational and informational material related to all kinds of storage.

From our “Everything You Wanted To Know About Storage But Were Too Proud To Ask” series, to the absolutely phenomenal (and required viewing) “Storage Performance Benchmarking” series, to the “Great Storage Debates” series, we’ve produced dozens of hours of material.

Technologies have evolved, and we’ve come to a point where there’s a need to understand how these systems and architectures work – beyond just the type of wire that is used. Today, there are new systems that are bringing storage to completely new audiences. From scale-up to scale-out, from disaggregated to hyperconverged, from RDMA to NVMe-oF – there is more to storage networking than just your favorite transport. Read More

Dive into NVMe at Storage Developer Conference – a Chat with SNIA Technical Council Co-Chair Bill Martin

The SNIA Storage Developer Conference (SDC) is coming up September 24-27, 2018 at the Hyatt Regency in Santa Clara, CA. The agenda is now live!

SNIA on Storage is teaming up with the SNIA Technical Council to dive into major themes of the 2018 conference.  The SNIA Technical Council takes a leadership role to develop the content for each SDC, so SNIA on Storage spoke with Bill Martin, SNIA Technical Council Co-Chair and SSD I/O Standards at Samsung Electronics, to understand why SDC is bringing NVMe and NVMe-oF to conference attendees.

SNIA On Storage (SOS): What is NVMe and why is SNIA emphasizing it as one of their key areas of focus for SDC?

Bill Martin (BM): NVMe™, also known as NVM Express®, is an open collection of standards and information to fully expose the benefits of non-volatile memory (NVM) in all types of computing environments, from mobile to data center.

SNIA is very supportive of NVMe. In fact, earlier this year, SNIA, the Distributed Management Task Force (DMTF), and the NVM Express organizations formed a new alliance to coordinate standards for managing solid state drive (SSD) storage devices. This alliance brings together multiple standards to address the issue of scale-out management of SSDs. It’s designed to enable an all-inclusive management experience by improving the interoperable management of information technologies.

With interest from architects, developers, and implementers both within and outside of SNIA in how these standards work, the SNIA Technical Council decided to bring even more sessions on this important area to the SDC audience this year. We are proud to include 16 sessions on NVMe topics over the four days of the conference.

SOS:  What will I learn about NVMe at SDC? Read More

Storage Controllers – Your Questions Answered

The term controller is used constantly, but often has very different meanings. When you have a controller that manages hardware, there are very different requirements than for a controller that manages an entire system-wide control plane. You can even have controllers managing other controllers. It can all get pretty confusing very quickly. That’s why the SNIA Ethernet Storage Forum (ESF) hosted our 9th “Too Proud to Ask” webcast. This time it was “Everything You Wanted to Know about Storage but were Too Proud to Ask: Part Aqua – Storage Controllers.” Our experts from Microsemi, Cavium, Mellanox and Cisco did a great job explaining the differences between the many types of controllers, but of course there were still questions. Here are answers to all the questions we received during the live event, which you can now view on-demand. Read More

The Alphabet Soup of Storage Networking Acronyms Explained

At our most recent webcast, “Everything You Wanted to Know About Storage But Were Too Proud To Ask: Part Turquoise – Where Does My Data Go?” our panel of experts dove into what really happens when you hit “save” and send your data off. It was an alphabet soup of acronyms as they explained the nuances of, and the differences between:
  • Volatile v Non-Volatile v Persistent Memory
  • NVDIMM v RAM v DRAM v SLC v MLC v TLC v NAND v 3D NAND v Flash v SSDs v NVMe
  • NVMe (the protocol)
As promised during the live event, here are answers to all the questions we received.

Q. Is SRAM still used today?

A. SRAM is still in use today as embedded cache (Level 1/2/3) within a CPU, and in very limited external standalone packaging due to cost and size/capacity. Read More

Q&A – When Compute, Networking and Storage Intersect

In Part Vermillion of our SNIA Ethernet Storage Forum (ESF) “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series, we examined the terms and concepts that are at the heart of where compute, networking and storage intersect. That’s why we called it “What if Programming and Networking Had a Storage Baby.” If you missed the live webcast, you can watch it on-demand. The discussion from our panel of experts generated a lot of good questions. As promised, here are answers to them all. Read More

The Too Proud to Ask Train Makes Another Stop: Where Does My Data Go?

By now, we at the SNIA Ethernet Storage Forum (ESF) hope you are familiar with (perhaps even a loyal fan of) the popular “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series. On August 1st, the “Too Proud to Ask” train will make another stop. In this seventh session, “Everything You Wanted to Know About Storage But Were Too Proud To Ask: Turquoise – Where Does My Data Go?” we will take a look into the mysticism and magic of what happens when you send your data off into the wilderness. Once you click “save,” for example, where does it actually go? Read More

Attend Live – or Live Stream – SNIA’s Persistent Memory Summit January 18

by Marty Foltyn

SNIA’s Persistent Memory Summit makes its fifth annual appearance in Silicon Valley next Wednesday, January 18, and if you are in the vicinity of the Westin San Jose, you owe it to yourself to check it out.

SNIA is well known for its technology-focused, no vendor-hype conferences, and this one-day event will feature 12 presentations and two panels that will “level set” the discussion, review persistent memory usage, describe applications incorporating PM available today, discuss the infrastructure and implementation, and provide a vision of the “next generation” of persistent memory.

You’ll meet speakers from SNIA member companies Intel, Micron, Microsemi, VMware, Red Hat, Microsoft, AgigA Tech, Western Digital, and Spin Transfer.  Live demonstrations of persistent memory solutions will be featured from Summit underwriters Intel and the SNIA Solid State Storage Initiative, and Summit sponsors Microsemi, VMware, AgigA Tech, SMART Modular, and Spin Transfer.

Registration is complimentary but limited – visit http://www.snia.org/pm-summit for the complete agenda and how to sign up. And, if your travels don’t permit you to attend in person, the Persistent Memory Summit will be live-streamed on the SNIAvideo channel at https://www.youtube.com/user/SNIAVideo.

Questions on the 2017 Ethernet Roadmap for Networked Storage

Last month, experts from Dell EMC, Intel, Mellanox and Microsoft convened to take a look ahead at what’s in store for Ethernet Networked Storage this year. It was a fascinating discussion of anticipated updates. If you missed the webcast, “2017 Ethernet Roadmap for Networked Storage,” it’s now available on-demand. We had a lot of great questions during the live event and we ran out of time to address them all, so here are answers from our speakers.

Q. What’s the future of twisted pair cable? What is the new speed being developed with twisted pair cable?

A. By twisted pair I assume you mean UTP CAT5, 6, 7, etc. The problem going forward with high-speed signaling is that the “U” in UTP stands for Unshielded, so the signal radiates off the wire very quickly. At 25G and 50G this is a real problem and forces the line card end to have a big, power-consuming and costly chip to dig the signal out of the noise. Anything can be done, but at what cost? 25GBASE-T is being developed, but the reach is somewhere around 30 meters. Cost, size, and power consumption are all going up while reach is going down – all opposite to the trends in modern high-speed data centers. BASE-T will always have a place for those applications that don’t need the faster rates.

Q. What do you think of RCx standards and cables?

A. So far, Amphenol, JAE and Volex are the suppliers who are members of the MSA. Very few companies have announced or discussed RCx. In addition to a smaller connector, not having an EEPROM eliminates steps in cable assembly manufacture, hence helping to lower the cost when compared to traditional DAC cabling. The biggest advantage of RCx is that it can help eliminate bulky breakout cables within a rack, since a single RCx4 receptacle can accept a number of combinations of single-lane, 2-lane or 4-lane cables with the same connector on the host. RCx ports can be connected to existing QSFP/SFP infrastructure with appropriate cabling. It remains to be seen, however, whether it becomes a standard, popular product or remains a custom solution.

Q. How long does an AOC normally reach – 3m or 30m?

A. AOCs pick up where DACs drop off, at about 3m. The most popular reaches are 3, 5, and 10m, and volume drops rapidly after 15, 20, 30, 50, and 100m. We are seeing Ethernet-connected HDDs at 2.5GbE x 2 ports, and Ceph touting this solution. This seems to play well into the 25/50/100GbE standards with the massive parallelism possible.

Q. How do we scale PCIe lanes to support NVMe drives at scale, and to replace the capacity we see in storage arrays populated completely with HDDs?

A. With the advent of PCIe Gen 4, the per-lane rate of PCIe is going from 8 GT/s to 16 GT/s. Scaling of PCIe is already happening.
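
To put rough numbers on that, here is a minimal sketch; the approximate per-lane throughput figures and the x4 drive width are assumptions about typical NVMe drives, not figures from the webcast:

    #include <stdio.h>

    int main(void) {
        /* Approximate usable bandwidth per PCIe lane, in GB/s,
           after 128b/130b encoding overhead. */
        double gen3_lane_GBps = 0.985;   /* 8 GT/s Gen3 lane, ~1 GB/s  */
        double gen4_lane_GBps = 1.969;   /* 16 GT/s Gen4 lane, ~2 GB/s */
        int lanes_per_drive   = 4;       /* assumed x4 NVMe drive      */

        printf("Gen3 x4 drive: ~%.1f GB/s\n", gen3_lane_GBps * lanes_per_drive);
        printf("Gen4 x4 drive: ~%.1f GB/s\n", gen4_lane_GBps * lanes_per_drive);
        return 0;
    }

In other words, a Gen 4 x4 slot offers roughly twice the per-drive bandwidth of Gen 3, so the same lane count can feed faster drives, or fewer lanes can feed the same drive.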

Q. How many NVMe drives does it take to saturate 100GbE?

A. 3 or 4, depending on the individual drives.
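
As a back-of-the-envelope check on that answer (a sketch only; the 3.2 GB/s per-drive sequential read rate is an assumed figure, and protocol overhead is ignored):

    #include <stdio.h>

    int main(void) {
        double link_gbps  = 100.0;             /* 100GbE line rate, Gb/s */
        double link_GBps  = link_gbps / 8.0;   /* ~12.5 GB/s of payload  */
        double drive_GBps = 3.2;               /* assumed NVMe SSD sequential read, GB/s */

        printf("Drives needed to fill 100GbE: %.1f (round up)\n",
               link_GBps / drive_GBps);
        return 0;
    }

With those assumptions the math works out to roughly 3.9 drives, which lines up with the “3 or 4” answer above.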

Q. How about the reliability of Ethernet? A lot of people think Fibre Channel has better reliability than Ethernet.

A. It’s true that Fibre Channel is a lossless protocol and that Ethernet frames are sometimes dropped by a switch; however, network storage using TCP has a built-in error-recovery facility that retransmits lost packets. TCP was designed at a time when networks were less robust than today, and Ethernet networks these days are far more reliable.

Q. Do the 2.5GbE and 5GbE refer to the client side Ethernet port or the server Ethernet port?

A. It can exist on both the client-side and the server-side Ethernet ports.

Q. Are there any 25GbE or 50GbE NICs available on the market?

A. Yes, there are many that are on the market from a number of vendors, including Dell, Mellanox, Intel, and a number of others.

Q. Commonly used Ethernet speeds are either 10GbE or 40GbE. Do the new 25GbE and 50GbE require new switches?

A. Yes, you need new switches to support 25GbE and 50GbE. This is, in part, because the SerDes rate per lane at 25 and 50GbE is 25Gb/s, which is not supported by the 10 and 40GbE switches with a maximum SerDes rate of 10Gb/s.
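
The lane arithmetic behind that answer looks like this; a small illustrative sketch listing the common configurations:

    #include <stdio.h>

    int main(void) {
        /* Common Ethernet speeds expressed as SerDes lanes x per-lane rate. */
        struct { const char *speed; int lanes; int gbps_per_lane; } cfg[] = {
            { "10GbE",  1, 10 },
            { "40GbE",  4, 10 },
            { "25GbE",  1, 25 },
            { "50GbE",  2, 25 },
            { "100GbE", 4, 25 },
        };
        for (int i = 0; i < 5; i++)
            printf("%-7s = %d lane(s) x %dGb/s\n",
                   cfg[i].speed, cfg[i].lanes, cfg[i].gbps_per_lane);
        return 0;
    }

A switch built around 10Gb/s SerDes can form 10GbE and 40GbE ports, but it has no 25Gb/s lanes to offer, which is why 25GbE and 50GbE require new switch silicon.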

Q. With a certain number of SerDes lanes coming off the switch ASIC, which would you prefer to use: 100G or 40G, assuming both are at the same cost?

A. Certainly 100G. You get 2.5X the bandwidth for the same cost under the assumptions made in the question.

Q. Are there any 100G/200G/400G switches and modulation available now?

A. There are many 100G Ethernet switches available on the market today, including Dell’s Z9100 and S6100, Mellanox’s SN2700, and a number of others. The 200G and 400G IEEE standards are not yet complete. I’m sure all switch vendors will come out with switches supporting those rates in the future.

Q. What does lambda mean?

A. Lambda is the symbol for wavelength.

Q. Is the 50GbE standard ratified now?

A. IEEE 802.3 just recently started development of a 50GbE standard based upon a single-lane 50 Gb/s physical layer interface. That standard is probably about 2 years away from ratification. The 25G Ethernet Consortium has a ratified specification for 50GbE based upon a dual-lane 25 Gb/s physical layer interface.

Q. Are there any parallel options for using 2 or 4 lanes like in 128GFCp?

A. Many Ethernet specifications are based upon parallel options. 10GBASE-T is based upon 4 twisted pairs of copper cabling. 100GBASE-SR4 is based upon 4 lanes (8 fibers) of multimode fiber. Even the industry MSA for 100G over CWDM4 is based upon four wavelengths on a duplex single-mode fiber. In some instances, the parallel option is based upon additional medium (extra wires or fibers), but with fiber optics, parallelism can be created by using different wavelengths that don’t interfere with each other.

Ethernet Networked Storage – FAQ

At our SNIA Ethernet Storage Forum (ESF) webcast “Re-Introduction to Ethernet Networked Storage,” we provided a solid foundation on Ethernet networked storage, the move to higher speeds, challenges, use cases and benefits. Here are answers to the questions we received during the live event.

Q. Within the iWARP protocol there is a layer called MPA (Marker PDU Aligned Framing for TCP) inserted for storage applications. What is the point of this protocol?

A. MPA is an adaptation layer between the iWARP Direct Data Placement Protocol and TCP/IP. It provides framing and CRC protection for Protocol Data Units.  MPA enables packing of multiple small RDMA messages into a single Ethernet frame.  It also enables an iWARP NIC to place frames received out-of-order (instead of dropping them), which can be beneficial on best-effort networks. More detail can be found in IETF RFC 5044 and IETF RFC 5041.

Q. What is the API for RDMA network IPC?

A. The general API for RDMA is called verbs. The OpenFabrics Verbs Working Group oversees the development of the verbs definition and functionality in the OpenFabrics Software (OFS) code. You can find the training content from the OpenFabrics Alliance here. General information about RDMA over Converged Ethernet (RoCE) is available at the InfiniBand Trade Association website. Information about the Internet Wide Area RDMA Protocol (iWARP) can be found at the IETF: RFC 5040, RFC 5041, RFC 5042, RFC 5043, and RFC 5044.
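
For a concrete feel of what the verbs API looks like, here is a minimal sketch that simply enumerates RDMA-capable devices using libibverbs from rdma-core; it assumes the verbs headers are installed and at least one RDMA NIC is present, and it is built with -libverbs:

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);  /* enumerate RDMA devices */
        if (!list) { perror("ibv_get_device_list"); return 1; }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(list[i]);
            if (!ctx) continue;
            struct ibv_device_attr attr;
            if (!ibv_query_device(ctx, &attr))                  /* returns 0 on success */
                printf("%s: max_qp=%d max_cq=%d\n",
                       ibv_get_device_name(list[i]), attr.max_qp, attr.max_cq);
            ibv_close_device(ctx);
        }
        ibv_free_device_list(list);
        return 0;
    }

Real applications go on to create protection domains, queue pairs, and completion queues through the same verbs interface, but this device query is the usual first step.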

Q. RDMA requires TCP/IP (iWARP), InfiniBand, or RoCE to operate on with respect to NVMe over Fabrics. Therefore, what are the advantages and disadvantages of iWARP vs. RoCE?

A. Both RoCE and iWARP support RDMA over Ethernet. iWARP uses TCP/IP while RoCE uses UDP/IP. Debating which one is better is beyond the scope of this webcast, but you can learn more by watching the SNIA ESF webcast, “How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics.”

Q. 100Gb Ethernet Optical Data Center solution?

A. 100Gb Ethernet optical interconnect products were first available around 2011 or 2012 in a 10x10Gb/s design (100GBASE-CR10 for copper, 100GBASE-SR10 for optical), which required thick cables and large CXP or CFP MSA housings. These were generally used only for switch-to-switch links. Starting in late 2015, the more compact 4x25Gb/s design (using the QSFP28 form factor) became available in copper (DAC), optical cabling (AOC), and transceivers (100GBASE-SR4, 100GBASE-LR4, 100GBASE-PSM4, etc.). The optical transceivers allow 100GbE connectivity up to 100m, 2km, or 10km distances, depending on the type of transceiver and fiber used.

Q. Where is FCoE being used today?

A. FCoE is primarily used in blade server deployments where there could be contention for PCI slots and only one built-in NIC. These NICs typically support FCoE at 10Gb/s speeds, passing both FC and Ethernet traffic by connecting to a Top-of-Rack FCoE switch, which parses traffic to the respective fabrics (FC and Ethernet). However, it has not gained much acceptance outside of the blade server use case.

Q. Why did iSCSI start out mostly in lower-cost SAN markets?

A. When it first debuted, iSCSI packets were processed by software initiators, which consumed CPU cycles and showed higher latency than Fibre Channel. Achieving high performance with iSCSI required expensive NICs with iSCSI hardware acceleration, and iSCSI networks were typically limited to 100Mb/s or 1Gb/s while Fibre Channel was running at 4Gb/s. Fibre Channel is also a lossless protocol, while TCP/IP is lossy, which caused concerns for storage administrators. Now, however, iSCSI can run on 25, 40, 50 or 100Gb/s Ethernet with various types of TCP/IP acceleration or RDMA offloads available on the NICs.

Q. What are some of the differences between iSCSI and FCoE?

A. iSCSI runs SCSI protocol commands over TCP/IP (except iSER which is iSCSI over RDMA) while FCoE runs Fibre Channel protocol over Ethernet. iSCSI can run over layer 2 and 3 networks while FCoE is Layer 2 only. FCoE requires a lossless network, typically implemented using DCB (Data Center Bridging) Ethernet and specialized switches.

Q. You pointed out at least twice that people incorrectly predicted the end of Fibre Channel, but it didn’t happen. What makes you say Fibre Channel is actually going to decline this time?

A. Several things are different this time. First, Ethernet is now much faster than Fibre Channel instead of the other way around. Second, Ethernet networks now support lossless and RDMA options that were not previously available. Third, several new solutions–like big data, hyper-converged infrastructure, object storage, most scale-out storage, and most clustered file systems–do not support Fibre Channel. Fourth, none of the hyper-scale cloud implementations use Fibre Channel and most private and public cloud architects do not want a separate Fibre Channel network–they want one converged network, which is usually Ethernet.

Q. Which storage protocols support RDMA over Ethernet?

A. The Ethernet RDMA options for storage protocols are iSER (iSCSI Extensions for RDMA), SMB Direct, NVMe over Fabrics, and NFS over RDMA. There are also storage solutions that use proprietary protocols supporting RDMA over Ethernet.