Q&A Summary from the SNIA-ESF Webcast – “How VN2VN Will Help Accelerate Adoption of FCoE”

Our VN2VN Webcast last week was extremely well received. The audience was large and highly engaged. Here is a summary of the questions attendees asked and the answers from my colleague, Joe White, and me. If you missed the Webcast, it's now available on demand.

Question #1: We are an extremely large FC shop with well over 50K native FC ports. We are looking to bridge this to the FCoE environment for the future. What does VN2VN buy the larger company? Seems like SMB is a much better target for this.

Answer #1: It's true that VN2VN is not the best choice for large port-count SAN deployments, but the split is not strictly along SMB/large-enterprise lines. Many enterprises have multiple smaller special-purpose SANs or satellite sites with small SANs, and VN2VN can be a good choice for those parts of a large enterprise. Also, VN2VN can be used in conjunction with VN2VF to provide high-performance local storage, as we described in the Webcast.

Question #2: Are there products available today that incorporate VN2VN in switches and storage targets?

Answer #2: Yes. A major storage vendor announced support for VN2VN at Interop Las Vegas 2013. As for switches, any switch supporting Data Center Bridging (DCB) will work, and most, if not all, new data center switches support DCB today. FIP Snooping support in the switch is also recommended, and it too is available today.

Question #3: If we have an iSNS kind of service for VN2VN, do you think VN2VN can scale beyond the current anticipated limit?

Answer #3: That is certainly possible. This sort of central service does not exist today for VN2VN and is not part of the T11 specifications, so we are talking in principle here. If you follow SDN (Software Defined Networking) ideas and thinking, then having each endpoint configured through interaction with a central service would allow for very large potential scaling. The size and bandwidth of the L2 (local Ethernet) domain may restrict you, but fabric and distributed switch implementations with large, flat L2 can remove that limitation as well.

Question #4: Since VN2VN uses different FIP messages to do login, a unique FSB implementation must be provided to install ACLs. Have any switch vendors announced support for a VN2VN FSB?

Answer #4: Yes, VN2VN FIP Snooping bridges will exist. Covering VN2VN requires only a small addition to the filter/ACL rules on the FSB Ethernet switch. Small software changes are needed to handle the slightly different information, but the same logic and interfaces within the switch can be used, and the way the ACLs are programmed is the same.
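To make the "same logic, slightly different information" point concrete, here is a minimal, purely illustrative sketch of the gating an FSB performs. The class and method names are hypothetical, and real switches implement these checks as hardware ACL/TCAM entries rather than in software like this.

```python
from dataclasses import dataclass, field

FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol frames
FCOE_ETHERTYPE = 0x8906  # FCoE data frames

@dataclass
class FipSnoopingBridge:
    """Conceptual model of an FSB: snoop FIP exchanges, then gate FCoE frames."""
    permitted_macs: set = field(default_factory=set)

    def snoop(self, ethertype: int, src_mac: str, login_accepted: bool) -> None:
        # Watch FIP traffic; once an endpoint's login/claim succeeds, permit its MAC.
        if ethertype == FIP_ETHERTYPE and login_accepted:
            self.permitted_macs.add(src_mac)

    def forward(self, ethertype: int, src_mac: str) -> bool:
        # FCoE data frames pass only if the source MAC completed a FIP exchange.
        if ethertype == FCOE_ETHERTYPE:
            return src_mac in self.permitted_macs
        return True  # non-FCoE traffic is handled by the normal L2 rules
```

For VN2VN, the frames being snooped are the point-to-point FIP exchanges between ENodes rather than logins to an FCF, but the permit/deny structure of the installed rules is unchanged.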

Question #5: Broadcasts are a classic limiter in Layer 2 Ethernet scalability. VN2VN control is very broadcast intensive, on the default or control plane VLAN. What is the scale of a data center (or at least data center fault containment domain) in which VN2VN would be reliably usable, even assuming an arbitrarily large number of data plane VLANs? Is there a way to isolate the control plane broadcast traffic on a hierarchy of VLANs as well?

Answer #5: VLANs are an integral part of VN2VN within the T11 FC-BB-6 specification. You can configure the endpoints (servers and storage) to do all discovery on a particular VLAN or set of VLANs. You can use VLAN discovery for some endpoints (mostly envisioned as servers) to learn the VLANs on which to do discovery from other endpoints (mostly envisioned as storage). Using VLANs in this manner contains the FIP broadcasts to the FCoE-dedicated VLANs. VN2VN is initially envisioned as enabling small to medium SANs of a couple hundred ports or so, although in principle the addressing, combined with login controls, allows for much larger scaling.

Question #6: Please explain the difference between VN2VN and VN2VF.

Answer #6: The currently deployed version of FCoE, T11 FC-BB-5, requires that every endpoint, or Enode in FC-speak, connect with the “fabric,” a Fibre Channel Forwarder (FCF) more specifically. That’s VN2VF. What FC-BB-6 adds is the capability for an endpoint to connect directly to other endpoints without an FCF between them. That’s VN2VN.

Question #7: In the context of VN2VN, do you think it places a stronger demand for QCN to be implemented by storage devices now that they are directly (logically) connected end-to-end?

Answer #7: The QCN story is the same for VN2VN, VN2VF, I/O consolidation using an NPIV FCoE-FC gateway, and even high-rate iSCSI. Once discovery completes and sessions (FLOGI + PLOGI/PRLI) are set up, we are dealing with the inherent traffic pattern of the applications and storage.

Question #8: Your analogy that VN2VN is like private loop is interesting. But it does make VN2VN sound like a backward step – people stopped deploying AL tech years ago (for good reasons of scalability etc.). So isn’t this just a way for vendors to save development effort on writing a full FCF for FCoE switches?

Answer #8: This is a logical private loop with a lossless packet switched network for connectivity. The biggest issue in the past with private or public loop was sharing a single fiber across many devices. The bandwidth demands and latency demands were just too intense for loop to keep up. The idea of many devices addressed in a local manner was actually fairly attractive to some deployments.

Question #9: What is the sweet spot for VN2VN deployment, considering iSCSI allows direct initiator and target connections, and most networks are IP-enabled?

Answer #9: The sweet spot for VN2VN FCoE is SMB or dedicated SAN deployments where FC-like flow control and data flow are needed for up to a couple hundred ports. You could implement this with iSCSI using PFC flow control, but if TCP/IP is not needed because PFC already provides lossless priorities, why pay the TCP/IP processing overhead? In addition, the FC encapsulation/serialization and the FC exchange protocols and models are preserved, if that is important or useful to the applications. The configuration and operation of a local SAN using the two models is comparable.

Question #10: Has iSCSI become irrelevant?

Answer #10: Not at all. iSCSI serves a slightly different purpose from FCoE (including VN2VN). iSCSI allows connection across any IP network, and thanks to TCP/IP you get end-to-end lossless, in-order delivery of data. The drawback is that under high loss rates, burst drops, or heavy congestion, TCP/IP performance will suffer due to congestion avoidance and retransmission timeouts ("slow starts"). So the choice really depends on the data flow characteristics you are looking for, and there is no one-size-fits-all answer.

Question #11: Where can I watch this Webcast?

Answer #11: The Webcast is available on demand on the SNIA website here.

Question #12: Can I get a copy of these slides?

Answer #12: Yes, the slides are available on the SNIA website here.

SSSI Sheds Light on Solid State Storage at Flash Memory Summit

If you attended Flash Memory Summit along with thousands of other business professionals, you soon learned that solid state keynotes, breakouts, and show floor booths, while informative, could be mystifying. Fortunately, the SNIA Solid State Storage Initiative (SSSI) was at FMS to provide enlightenment in the form of an SSS Reception, four new publications on SSS, and three SSSI member demonstrations.

SSSI's second annual SSS Reception was attended by over 90 individuals and featured presentations by SSSI Governing Board members Paul Wassenberg of Marvell, Walt Hubis of Fusion-io, and Eden Kim of Calypso Systems on SSSI key programs and technical work. SSSI Education Chair Tom Coughlin of Coughlin Associates delivered a market update on ubiquitous flash memory. All SSSI members are eligible and encouraged to join specification development, education, and outreach programs, and new companies are welcome to join SSSI activities.

At the SSSI Booth, attendees snapped up new white papers and Tech Notes authored by SSSI members.  These papers are complimentary to all interested individuals and available on the SSSI education page.

  • The PCI Express (PCIe) 101 – An Overview of Standards, Markets, and Performance white paper surveys the broad landscape of emerging high performance PCIe storage and the implications of revolutionary applications of these new architectures.  Thirteen members of the SNIA SSSI PCIe SSD Committee representing eleven SNIA and SSSI member companies contributed their technical expertise to this paper, which covers standards, programming models for non-volatile memory, the PCIe SSD market, and PCIe SSD performance.
  • The SSD Performance – A Primer white paper, authored by SNIA SSS Technical Work Group chair Eden Kim of Calypso Systems, provides an introduction to solid state drive performance, evaluation, and test.  As noted in the Foreword by SSSI Founder and 2008-2010 SSSI Chair Phil Mills of IBM, “this paper is an excellent tutorial on the performance of solid state drives, which covers this topic in a very easy to understand way, yet provides detailed technical information that the reader can either dig into for a better understanding, or simply skip without missing the main points”.
  • A new PTS User Guide Tech Note delivers the hows and whys of the SNIA Solid State Storage Performance Test Specification. Authored by SNIA SSSI members Eden Kim of Calypso Systems and Chuck Paradon of HP, this Tech Note provides an easy-to-understand, step-by-step guide to using the SNIA SSS Performance Test Specification (PTS) test methodologies and tests. The Tech Note discusses four basic PTS 1.1 tests – Write Saturation (WSAT), IOPS, Throughput (TP), and Response Time (or Latency) – as updated per the SNIA draft PTS-E version 1.1.
  • The SSSI Workload I/O Capture Program (WIOCP) FAQ, authored by SNIA SSSI member Tom West of hyperI/O LLC, gives details on this project undertaken by the SNIA SSSI to collect I/O operation performance metrics. These empirical metrics reflect the actual I/O operation activity performed during normal, everyday application/workload usage spanning both consumer/client and enterprise systems.

Also in the booth, the Media, Entertainment and Scientific Storage (MESS) Meetup group chatted with end users, and SSSI members exhibited new solid state storage solutions for enterprise markets:

  • BitMicro presented a high-performance MaxIO drive incorporating the company's ultra-fast Talino Quad Core ASIC controller, which integrates embedded processors with a high-speed multi-bus design to achieve performance far beyond legacy solid state designs.
  • Fastor Systems unveiled an NVMe-compliant PCIe software-defined storage device (SDS), or post-controller SSD, in which a decoupled control and data plane, together with a non-blocking fabric and message-based architecture, provide both high throughput and low latency to address the needs of today's hyperscale data centers.
  • Micron showcased a P420m PCIe SSD featuring multilevel (MLC) NAND technology.

Ethernet is the right fit for the Software Defined Data Center

“Software Defined” is a label used to describe advances in network and storage virtualization that promise to greatly improve infrastructure management and accelerate business agility. Network virtualization itself isn't a new concept and has been around in various forms for some time (think VLANs). But the commercialization of server virtualization seems to have paved the path to extend virtualization throughout the data center infrastructure, making the data center an IT environment that delivers dynamic and even self-deployed services. The networking stack has been getting most of the recent buzz, and I'll focus on that portion of the infrastructure here.

What is driving this trend in data networking? As I mentioned, server virtualization has a lot to do with the new trend. Virtualizing applications makes a lot of things better, and makes some things more complicated. Server virtualization enables you to achieve much higher application density in your data center. Instead of a one-to-one relationship between the application and server, you can host tens of applications on the same physical server. This is great news for data centers that run into space limitations or for businesses looking for greater efficiency out of their existing hardware.

The challenge, however, is that these applications aren't stationary. They can move from one physical server to another, and this mobility can add complications for the networking guys. Networks must be aware of virtual machines in ways that they don't have to be aware of physical servers. For network admins of yesteryear, their domain was a black box of “innies” and “outies”. Gone are the days of “set it and forget it” for networking devices. Or are they?

Software defined networks (aka SDN) promise to greatly simplify the network environment. By decoupling the control plane from the data plane, SDN allows administrators to treat a collection of networking devices as a single entity and then use policies to configure and deploy networking resources more dynamically. Additionally, moving to a software defined infrastructure means that you can move control and management of physical devices to different applications within the infrastructure, which gives you the flexibility to launch and deploy virtual infrastructures in a more agile way.
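As a rough illustration of that decoupling, here is a minimal sketch showing one policy decision being fanned out by a central controller to a collection of switches that keep forwarding from their local rules. The `Controller`, `Switch`, and `FlowRule` names are hypothetical stand-ins; a real deployment would push rules through a southbound protocol such as OpenFlow or a controller's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRule:
    """A simplified match/action entry, as a controller might push to a switch."""
    match_vlan: int
    action: str  # e.g. "forward:uplink" or "drop"

class Switch:
    """Data plane: keeps forwarding based on whatever rules it has been given."""
    def __init__(self, name: str):
        self.name, self.rules = name, []

    def install(self, rule: FlowRule) -> None:
        self.rules.append(rule)

class Controller:
    """Control plane: treats a collection of switches as a single entity."""
    def __init__(self, switches: list):
        self.switches = switches

    def apply_policy(self, vlan: int, action: str) -> None:
        # One policy decision becomes identical flow rules on every device.
        rule = FlowRule(match_vlan=vlan, action=action)
        for switch in self.switches:
            switch.install(rule)

# Usage: steer storage VLAN 300 the same way on every top-of-rack switch at once.
edge = [Switch("tor-1"), Switch("tor-2"), Switch("tor-3")]
Controller(edge).apply_policy(vlan=300, action="forward:uplink")
```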

Software defined networks aren't limited to a specific physical transport. The theory, and I believe the implementations, will be universal in concept. However, the more consistently hardware can be deployed, the greater the flexibility for the enterprise. As server virtualization becomes the norm, servers hosting applications with mixed protocol needs (block and file) will be more common. In this scenario, Ethernet networks offer advantages, especially as software defined networks come into play. Following is a list of some of the benefits of Ethernet in a software defined network environment.

Ubiquitous

Ethernet is a very familiar technology and is present in almost every compute and mobile device in an enterprise. From IP telephony to mobile devices, Ethernet is a commonly deployed networking standard and, as a result, is very cost effective. The number of devices and the engineering resources focused on Ethernet drive the economics in favor of Ethernet.

Compatibility

Ethernet has been around for a long time and has proven to “just work.” Interoperability is really a non-issue, and this extends to inter-vendor interoperability. Some other networking technologies require same-vendor components throughout the data path. Not the case with Ethernet. With rare exceptions, you can mix and match switch and adapter devices within the same infrastructure. Obviously, best practice suggests that a single vendor within the switch infrastructure simplifies the environment with a common set of management tools, features, and support plans. But that might change with advances in SDN.

Highly Scalable

Ethernet is massively scalable. The use of routing technology allows for broad geographic networks. The recent adoption of IPv6 extends IP addressing way beyond what is conceivable at this point in time. As we enter the “internet of things” period in IT history, we will not lack for network scale. At least, in theory.

Overlay Networks

Overlay networks allow you to extend L2 networks beyond traditional geographic boundaries. Two proposed standards are under review by the Internet Engineering Task Force (IETF): Virtual eXtensible Local Area Networks (VXLAN) from VMware and Network Virtualization using Generic Routing Encapsulation (NVGRE) from Microsoft. Both combine L2 and L3 technologies to extend the L2 network beyond traditional boundaries, as with hybrid clouds. You can think of overlay networks as essentially a generalization of a VLAN: unlike with routing, they permit you to retain visibility and accessibility of your L2 network across larger geographies.
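To give a feel for how an overlay carries L2 traffic, here is a minimal sketch of VXLAN encapsulation based on the RFC 7348 header layout: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), transported over UDP port 4789. The function names are illustrative, and the outer UDP/IP/Ethernet headers a real VXLAN tunnel endpoint (VTEP) would add are omitted.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08 << 24        # 'I' flag set (valid VNI); remaining bits reserved
    return struct.pack("!II", flags, vni << 8)  # 24-bit VNI, low byte reserved

def encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to an inner L2 frame (outer headers omitted)."""
    return vxlan_header(vni) + inner_ethernet_frame

# Example: tag a dummy 64-byte inner frame with VNI 5001.
payload = encapsulate(b"\x00" * 64, vni=5001)
assert len(payload) == 8 + 64
```

NVGRE achieves a similar result with a GRE-based encapsulation that carries a 24-bit Virtual Subnet ID in place of the VNI.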

Unified Protocol Access

Ethernet has the ability to support mixed storage protocols, including iSCSI, FCoE, NFS, and CIFS/SMB. Mixed or unified environments can be deployed more efficiently using 10 Gigabit Ethernet (10GbE) and Data Center Bridging (required for FCoE traffic), since IP and FCoE traffic can share the same ports. 10GbE simplifies network deployment: the data center can be wired once, and protocols can be reconfigured in software rather than through hardware changes.

Virtualization

Ethernet does very well in virtualized environments. IP addresses can easily be abstracted from physical ports to facilitate port mobility. As a result, networks built on an Ethernet infrastructure that leverage network virtualization can benefit from increased flexibility and uptime, since hardware can be serviced or upgraded while applications stay online.

Roadmap

For years, Ethernet has steadily increased in performance, but the transition from Gigabit Ethernet to 10 Gigabit Ethernet was a slow one, and delays in connector standards complicated matters. Those days are over: the roadmap remains robust and product advances are accelerating. We are starting to see 40GbE devices on the market today and will see 100GbE devices in the near future. As more and more data traffic is consolidated onto a shared infrastructure, these performance increases will provide the headroom for more efficient infrastructure deployments.

Some of the benefits listed above can be found with other networking technologies. But Ethernet offers a unique combination of technology and economic value across a broad ecosystem of vendors that makes it an ideal infrastructure for next-generation data centers. And as these data centers are designed more and more around application services, software will lead the conversation. To enable the vision of a software defined infrastructure, there is no better network technology than Ethernet.

Join the SSSI at Flash Memory Summit August 12-15 in Santa Clara CA!

SSSI returns to the Flash Memory Summit in booth 808, featuring information on updates on new tests in the SNIA Solid State Storage Performance Test Specification-Enterprise 1.1, NVM programming, and Workload I/O Capture Program (WIOCP) activities; new tech notes and white papers, including a PTS User Guide Tech Note, a PCIe SSD 101 Whitepaper, and a Performance Primer Whitepaper; and PCIe SSD demonstrations from SSSI members Bitmicro, Fastor, and Micron.

flash memory summitAll current SSSI members attending FMS and individuals from companies interested in the SSSI and their activities are cordially invited to the SSSI Solid State Storage Reception Monday evening August 12 from 5:30 pm – 7:00 pm in Room 209-210 at the Santa Clara Convention Center.   At the reception, SSSI Education Chair Tom Coughlin of Coughlin Associates will provide an overview of the SSD market, and SSSI Chair Paul Wassenberg of Marvell will discuss SSD performance.  SSSI Vice Chair Walt Hubis of Fusion-io will discuss SSSI programs, including PTS, NVM Programming, Workload I/O Capture, and PCIe SSD.  Refreshments, table displays, and an opportunity drawing for SSDs provided by SSSI members Intel, Micron, and OCZ will be featured.

FMS conference activities begin August 13, and the agenda can be found here.  SSSI members speaking and chairing panels include:

Tuesday August 13

4:35 pm – Paul Wassenberg of Marvell on Standards

Wednesday August 14

8:30 am – Eden Kim and Easen Ho of Calypso Testers – PCIe Power Budgets, Performance, and Deployment

9:50 am – Eden Kim and Easen Ho of Calypso Testers – SNIA Solid State Storage Performance Test Specification

3:10 pm – Walt Hubis of Fusion-io – Revolutionizing Application Development Using NVM Storage Software

3:10 pm – Easen Ho of Calypso Testers – SSD Testing Challenges

4:30 pm – Paul von Behren of Intel – SNIA Tutorial: SNIA NVM Programming Model: Optimizing Software for Flash

Thursday August 15

3:10 pm – Jim Pappas of Intel – PCI Express and  Enterprise SSDs

3:10 pm – Jim Handy of Objective Analysis – Market Research

An open “Chat with the Experts” roundtable session Tuesday August 13 at 7:00 pm will feature Jim Pappas of Intel at a Standards table, Eden Kim of Calypso Testers at an SSD Performance table, Easen Ho of Calypso Testers at a Testing table, and Paul Wassenberg of Marvell at a SATA Express table.

The Media, Entertainment and Scientific Storage (MESS) Meetup group will hold its August Meetup at the open “Chat with the Experts” session, and will also be in SSSI Booth 808 for further discussions.

Exhibit admission is complimentary until August 8.  SNIA and SSSI members and colleagues can receive a $100 discount on either the 3-day conference or the 1-day technical program using the code SNIA at www.flashmemorysummit.com.

 

PCI Express Coming to an SSD Near You

There’s been a lot of press recently about what’s going on in the world of storage regarding the utilization of PCIe as a device interface.  Of course, PCIe has been around a long time as a system bus, while SATA and SAS have been used as storage device interfaces.  But with SSDs getting faster with every new product release, it’s become difficult for the traditional interfaces to keep up.

Some folks figure that PCIe is the solution to that problem. PCIe 3.0 operates at roughly 1GB/s per lane, which is faster than 600MB/s SATA. And with PCIe, it's possible to add lanes to increase the overall bandwidth. The SATA Express specification from SATA-IO defines a client PCIe device as having up to 2 lanes of PCIe, which brings the speed up to 2GB/s. Enterprise SSDs will have up to 4 lanes of PCIe, which provides 4GB/s of bandwidth.
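The arithmetic behind those figures is straightforward; the sketch below simply multiplies an approximate usable PCIe 3.0 per-lane rate by the lane counts mentioned above and compares the result with SATA III (the numbers are round approximations, not vendor specifications).

```python
# Approximate usable, per-direction bandwidth figures.
SATA3_GBPS = 0.6            # GB/s: SATA III after 8b/10b encoding overhead
PCIE3_PER_LANE_GBPS = 1.0   # GB/s: PCIe 3.0 per lane after 128b/130b overhead

for lanes in (1, 2, 4):     # x1, client SATA Express (x2), enterprise SSDs (x4)
    bw = lanes * PCIE3_PER_LANE_GBPS
    print(f"PCIe 3.0 x{lanes}: ~{bw:.0f} GB/s ({bw / SATA3_GBPS:.1f}x SATA III)")
```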

Some work also needed to be done on the software side to support PCIe storage devices, including NVM Express (NVMe) and SCSI over PCIe (SOP), both of which are well underway.

If you are interested in knowing more about PCIe SSDs, keep an eye on our Education page, where, sometime during the week of August 5, we will be posting a new white paper on this topic.