2013 in Review and the Outlook for 2014 – A SNIA ESF Perspective

Technology continues to advance rapidly. Making sense of it all can be a challenge. At the SNIA Ethernet Storage Forum, we focus on storage technologies and solutions enabled by and associated with Ethernet networks. Last year, we modified the charters of our two Special Interest Groups (SIGs) to address topics around file protocols and storage over Ethernet. The File Protocols SIG includes the prior focus on Network File System (NFS) related topics and adds discussions around Server Message Block (SMB/CIFS). We had our first webcast last November on the topic of SMB 3.0, and it was our best attended webcast ever. The Storage over Ethernet SIG focuses on general Ethernet storage topics as well as technologies like FCoE, iSCSI, Data Center Bridging, and virtual networking for storage. I encourage you to check out other articles on these hot topics in this SNIA-ESF blog to hear from our member experts as well as guest posts from leading analysts.

2013 was a busy year and we are already kickin’ it in 2014. This should be an exciting year in IT. Data storage continues to be a hot sector, especially in the areas of All-Flash and Hybrid arrays. This year, we expect to see new standards coming out of the T11 committee for Fibre Channel and possibly FCoE, as well as progress in high speed Ethernet networks. Lower cost network interconnects will facilitate adoption of high speed networks in the small to midsize business segment. And a new conversation around “Software Defined…” should push a lot of ink in trade rags and other news sources. Oh, and don’t forget about the “Internet of Things”, mobile solutions, and all things Cloud.

The ESF will be addressing the impact of these hot technologies on Ethernet storage solutions. Next month, on February 18th, experts from the ESF, along with industry analysts from the Dell’Oro Group, will speak to the benefits and best practices of deploying FCoE and iSCSI storage protocols. This presentation, “Use Cases for iSCSI and Fibre Channel: Where Each Makes Sense,” will be part of an upcoming BrightTalk Summit on Storage Networking. I encourage you to register for this session. Additionally, we will be publishing a couple of white papers on file-based storage and a review of FCoE and iSCSI in storage applications.

Finally, SNIA will be kicking off its first year of the new user conference, Data Storage Innovation Conference. This will be one of the few storage focused user conferences in the market and should be quite interesting.

We’re excited about our growing membership and our plans for 2014. Our goal is to advance application of innovative technologies and we encourage you to send us mail or comment below with topics that are of interest to you.

Here’s to an exciting 2014!

Fibre Channel over Ethernet (FCoE): Hype vs. Reality

It’s been a bit of a bumpy ride for FCoE, which started out with more promise than it was able to deliver. In theory, the benefits of a single converged LAN/SAN network are fairly easy to see. The problem was, as is often the case with new technology, that most of the theoretical benefit was not available in the initial product releases. The idea that storage traffic was no longer confined to expensive SANs, but instead could run on more commoditized and easier-to-administer IP equipment, was intriguing. However, the new 10 Gbps Enhanced Ethernet switches were not exactly inexpensive, few products supported FCoE initially, and those that did often did not play nicely with products from other vendors.

Keeping FCoE “On the Single-Hop”?

The adoption of FCoE to date has been almost exclusively “single-hop”, meaning that FCoE is being deployed to provide connectivity between the server and the top-of-rack switch. Beyond that point, traffic continues to be broken out one way for IP and another way for FC. This single-hop use still makes sense: by consolidating network adapters and cables, it adds value on the server access side.

A significant portion of FCoE switch ports come from Cisco’s UCS platform, which runs FCoE inside the chassis. In terms of a complete end-to-end FCoE solution, there continues to be very little multi-hop FCoE deployment, and few FCoE ports shipping on storage arrays.

In addition, FCoE connections are more prevalent on blade servers than on stand-alone servers for various reasons.

  • First, blades are used more in a virtualized environment where different types of traffic can travel on the same link.
  • Second, the migration to 10 Gbps has been very slow so far on stand-alone servers; about 80% of these servers are actually still connected with 1 Gbps, which cannot support FCoE.

What portion of FCoE-enabled server ports are actually running storage traffic?

FCoE-enabled ports comprise about a third of total 10 Gbps controller and adapter ports shipped on servers. However, we would like to bring to readers’ attention the wide difference between the portion of 10 Gbps ports that is FCoE-enabled and the portion that is actually running storage traffic. We currently believe less than a third of the FCoE-enabled ports are being used to carry storage traffic. That’s because the FCoE port, in many cases, is provided by default with the server. That’s the case with HP blade servers as well as Cisco’s UCS servers, which together are responsible for around 80% of the FCoE-enabled ports. We believe, however, that when users buy a separate adapter they will most likely use it to run storage traffic, but they will need to pay an additional premium, about 50% to 100%, for the FCoE license.
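To make the gap concrete, here is a quick back-of-the-envelope sketch in Python using the approximate shares quoted above; the normalization to 100 ports is purely illustrative and is not Dell’Oro data.

```python
# Back-of-the-envelope illustration of the port math described above
# (inputs are the approximate shares quoted in the text, not exact figures).
total_10gbe_server_ports = 100                          # normalize to 100 ports shipped
fcoe_enabled = total_10gbe_server_ports * (1 / 3)       # "about a third" are FCoE-enabled
carrying_storage = fcoe_enabled * (1 / 3)               # "less than a third" of those run storage

print(f"FCoE-enabled ports: ~{fcoe_enabled:.0f}")                      # ~33
print(f"Actually carrying storage traffic: <{carrying_storage:.0f}")   # fewer than ~11
```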

The Outlook

That said, whether FCoE-enabled ports are used to carry storage traffic or not, we believe they are being introduced at the expense of some FC adapters. If users deploy a server with an FCoE-enabled port, they most likely will not buy a FC adapter to carry storage traffic. Additionally, as Ethernet speeds reach 40 Gbps, the differential over FC will be too great and FC will be less likely to keep pace.

About the Authors

Casey Quillin is a Senior Analyst, Storage Area Network & Data Center Appliance Market Research with the Dell’Oro Group

Sameh Boujelbene is a Senior Analyst, Server and Controller & Adapter Market Research with the Dell’Oro Group

Software Defined Networks for SANs?

Previously, I’ve blogged about the VN2VN (virtual node to virtual node) technology coming with the new T11 FC-BB-6 specification. In a nutshell, VN2VN enables an “all Ethernet” FCoE network, eliminating the requirement for an expensive Fibre Channel Forwarding (FCF) enabled switch. VN2VN dramatically lowers the barrier to entry for deploying FCoE. Host software is available to support VN2VN, but so far only one major SAN vendor supports VN2VN today. The ecosystem is coming, but are there more immediate alternatives for deploying FCoE without an FCF-enabled switch or VN2VN-enabled target SANs? The answer is that full FC-BB-5 FCF services could be provided today using Software Defined Networking (SDN) in conjunction with standard DCB-enabled switches, by essentially implementing those services in host-based software running in a virtual machine on the network. This would be an alternative “all Ethernet” storage network supporting Fibre Channel protocols. Just such an approach was presented at SNIA’s Storage Developer Conference 2013 in a presentation entitled “Software-Defined Network Technology and the Future of Storage” by Stuart Berman, Chief Executive Officer, Jeda Networks. (Note, of course, that neither approach is relevant to SAN networks using Fibre Channel HBAs, cables, and switches.)

Interest in SDN is spreading like wildfire. Several pioneering companies have released solutions for at least parts of the SDN puzzle, but kerosene hit the wildfire with the $1B acquisition of Nicira by VMware. Now a flood of companies are pursuing an SDN approach to everything from wide area networks to firewalls to wireless networks. Applying SDN technology to storage, or more specifically to Storage Area Networks, is an interesting next step. See Jason Blosil’s blog below, “Ethernet is the right fit for the Software Defined Data Center.”

To review, an SDN abstracts the network switch control plane from the physical hardware. This abstraction is implemented by a software controller, which can be a virtual appliance or virtual machine hosted in a virtualized environment, e.g., a VMware ESXi host. The benefits are many: the abstraction is often behaviorally consistent with the network being virtualized, but simpler for a user to manipulate and manage. The SDN controller can automate the numerous configuration steps needed to set up a network, reducing the number of touch points required of a network engineer. The SDN controller is also network speed agnostic, i.e., it can operate over a 10Gbps Ethernet fabric and seamlessly transition to operate over a 100Gbps Ethernet fabric. And finally, the SDN controller can be given far more CPU and memory resources in the host virtual server, scaling to a much greater degree than the control planes in switches, which are powered by relatively low-powered processors.

So why would you apply SDN to a SAN? One reason is SSD technology; storage arrays based on SSDs move the bandwidth bottleneck for the first time in recent memory into the network. An SSD array can load several 10Gbps links, overwhelming many 10G Ethernet fabrics. Applying a Storage SDN to an Ethernet fabric and removing the tight coupling of speed of the switch with the storage control plane will accelerate adoption of higher speed Ethernet fabrics. This will in turn move the network bandwidth bottleneck back into the storage array, where it belongs.
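As a rough illustration of that bottleneck shift, the short Python sketch below works out how many 10GbE links a hypothetical all-flash array would need; the 5 GB/s array throughput is an assumed figure for the example, not a measurement.

```python
# Rough arithmetic behind "an SSD array can load several 10Gbps links"
# (the array throughput figure is an illustrative assumption).
link_gbps = 10
link_gbytes_per_s = link_gbps / 8              # ~1.25 GB/s of raw line rate per 10GbE link

array_throughput_gbytes_per_s = 5              # assume an all-flash array sustaining ~5 GB/s
links_needed = array_throughput_gbytes_per_s / link_gbytes_per_s
print(f"~{links_needed:.0f} x 10GbE links just to carry the array's throughput")   # ~4
```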

Another reason to apply SDN to storage networks is to help move certain application workloads into the Cloud. As compute resources increase in speed and consolidate, workloads require deterministic bandwidth, IOPS, and/or resiliency metrics that have not been well served by Cloud infrastructures. Storage SDNs would apply enterprise-level SAN best practices to the Cloud, enabling the migration of some of these applications and increasing the revenue opportunities for Cloud providers. The ability to provide a highly resilient, high-performance, SLA-capable Cloud service is a large market opportunity that is not cost-effectively realizable with today’s technologies.

So how can SDN technology be applied to the SAN? The most viable candidate would be to leverage a Fibre Channel over Ethernet (FCoE) network. An FCoE network already converges a high performance SAN with the Ethernet LAN. FCoE is a lightweight and efficient protocol that implements flow control in the switch hardware, as long as the switch supports Data Center Bridging (DCB). There are plenty of standard “physical” DCB-enabled Ethernet switches to choose from, so a Storage SDN would give the network engineer freedom of choice. An FCoE-based SDN would create a single unified, converged, and abstracted SAN fabric. To create this Storage SDN you would need to extract and abstract the FCoE control plane from the switch, removing any dependency on a physical FCF. This would include the critical global SAN services such as the Name Server table, the Zoning table, and State Change Notification. Because it contains the global SAN services, the Storage SDN would also have to communicate with initiators and targets, something a conventional SDN controller does not do. Since FCoE is a network-centric technology, i.e., configuration is performed from the network, a Storage SDN can automate large SANs from a single appliance. The Storage SDN should be able to create deterministic, end-to-end Ethernet fabric paths thanks to the global view of the network that an SDN controller typically has.
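To picture what “extracting the FCoE control plane” implies, here is a minimal, hypothetical Python sketch of the global SAN services such a controller would have to host (name server, zoning, and state change notification). The class and method names are invented for illustration and do not describe Jeda Networks’ implementation.

```python
# A purely illustrative model of the "global SAN services" a Storage SDN controller
# would host once the FCoE control plane is lifted out of the physical switch.
class StorageSDNController:
    def __init__(self):
        self.name_server = {}        # WWPN -> FC_ID assigned at fabric login
        self.zones = {}              # zone name -> set of member WWPNs
        self.subscribers = []        # callbacks to notify on state changes (RSCN-like)

    def flogi(self, wwpn: str, fc_id: int):
        """Record a fabric login and notify interested nodes about the new member."""
        self.name_server[wwpn] = fc_id
        self._state_change(f"login: {wwpn} -> {fc_id:#08x}")

    def zone_allows(self, wwpn_a: str, wwpn_b: str) -> bool:
        """Two nodes may talk only if some zone contains both of them."""
        return any({wwpn_a, wwpn_b} <= members for members in self.zones.values())

    def _state_change(self, event: str):
        for notify in self.subscribers:
            notify(event)

ctrl = StorageSDNController()
ctrl.zones["zone_db"] = {"10:00:00:00:c9:00:00:01", "50:06:01:60:00:00:00:44"}
ctrl.flogi("10:00:00:00:c9:00:00:01", 0x010101)
print(ctrl.zone_allows("10:00:00:00:c9:00:00:01", "50:06:01:60:00:00:00:44"))   # True
```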

A Storage SDN would also be network speed agnostic; since Ethernet switches already support 10Gbps, 40Gbps, and 100Gbps, this would enable extremely fast SANs not currently attainable. Imagine the workloads, applications, and consolidation of physical infrastructure possible with a 100Gbps Storage SDN SAN, all controlled by a software FCoE virtual server connecting thousands of servers with terabytes of SSD storage. SDN technology is bursting with solutions around LAN traffic; now we need to tie in the SAN and keep it as non-proprietary to the hardware as possible.

10 Gigabit Ethernet – 2H12 Results and 2013 Outlook

Seamus Crehan, President, Crehan Research Inc.

2H12 results

2012 turned out to be another very strong growth year for 10 Gigabit Ethernet (10GbE), with the data center switch market and the server-class adapter and LAN-on-Motherboard (LOM) market both growing more than 50%. Broad long-term trends such as virtualization, convergence, data center network traffic growth, cloud deployments, and price declines were helped further by more specific demand drivers, many of which materialized in the latter half of 2012. These included the adoption of Romley servers, expanded 10GBASE-T product offerings for both switches and servers, 10GbE LOM solutions for volume rack servers (which drive the majority of server shipments), and the public cloud’s migration to 10GbE for mainstream server networking access. (The SNIA Ethernet Storage Forum wrote about many of these in its July 2012 whitepaper titled 10GbE Comes of Age.)

However, despite another stellar growth year, 10GbE still remained a minority of the overall data center and server shipment mix (see Figure 1).  

[Figure 1: Crehan Research]

Furthermore, its adoption hit some turbulence in the latter half of 2012, mostly related to the initial high prices and the learning curve associated with the new Modular LOM form-factor, resulting in some inventory issues. Another drag on 2H12 10GbE growth was the lack of comprehensive 10GBASE-T offerings from many market participants. Although we saw a very significant step up in 10GBASE-T shipments in 2012, limited product offerings throughout much of 2012 capped its adoption at less than 10% of total 10GbE shipments.

But these 2H12 issues were more than offset by 10GbE entering its next major stage of volume server adoption during this time period.  Crehan Research reported a near-50% increase in 2H12 10GbE results as many public cloud, Web 2.0, and massively scalable data center companies deployed 10GbE servers and server-access data center switches. We believe this is the second of three major stages of mainstream 10GbE server adoption, the first of which was driven by blade servers. The third will be driven by the upgrade of the traditional enterprise segment’s large installed base of 1GbE rack and tower server ports to 10GbE.

2013 expectations

As we move through 2013, Crehan Research expects the following factors to have positive impacts on the 10GbE market, driving it closer to becoming the majority data center networking interconnect:

Better pricing and understanding of Modular LOMs. Initial pricing on 10GbE Modular LOMs has been relatively high, contributing to slower adoption and inventory issues. In the past, for example during the 1GbE and blade-server 10GbE transitions, end customers were given the higher-speed LOM for free. The Modular LOM is a new product form-factor, and it takes time for buyers and sellers to get comfortable with it and fully understand it. During 2013, we should see lower pricing for this class of product, driving a higher server attach rate.

Comprehensive 10GBASE-T product offerings. 2013 should finally bring complete 10GBASE-T product offerings from the major server and switch OEMs, helping drive stronger 10GBASE-T adoption and growth. More specifically, we should see more 10GBASE-T LOMs in addition to top-of-rack and end-of-row data center switches. Furthermore, we expect many of these products to be attractively priced, in order to entice the large installed base of 1GBASE-T customers to upgrade to 10GbE.

Higher-speed uplink, aggregation, and core data center switches. Servers and server-access switches likely won’t see volume deployments to 10GbE without robust and cost-effective higher-speed uplink, aggregation, and core networking options. These have now begun to arrive with 40GbE, and we are starting to see a strong ramp for this technology. Crehan Research expects 2013 to bring the advent of many 40GbE data center switches, and foresees all of the major switch vendors rolling out offerings in 2013. In contrast with the early days of 10GbE, 40GbE prices are already close to parity on a bandwidth basis with 10GbE and have settled on a single interface form factor (QSFP), which should propel 40GbE data center switches to a much stronger start than that seen by 10GbE data center switches.

Continued traction of 10GbE for storage applications. We expect that 2013 will see a continuation of the broader adoption of 10GbE as a storage protocol, in both the public cloud and traditional enterprise segments.  Although Fibre Channel remains a very important data center storage networking technology, Fibre Channel switch and Host Bus Adapter (HBA) shipments each declined slightly in 2012 and have seen flat compound annual growth rates over the past four years (see Figure 2). We expect this gradual Fibre Channel decline to continue in 2013 as more customers run Ethernet-based protocols such as NAS, iSCSI and FCoE, especially over 10GbE, for their storage needs and deployments.

[Figure 2: Crehan Research]

Resolving the Confusion around DCB (I Hope)

Storage traffic running over Ethernet-based networks has been around for as long as we have had Ethernet-based networks. Of course, it is sometimes not technically accurate to think of these protocols as fundamentally Ethernet protocols: whilst FCoE, by definition, only runs on Ethernet, iSCSI, SMB, and NFS are in reality IP-based storage protocols and, whilst most commonly run over Ethernet, could run on any network that supports IP. That notwithstanding, it is increasingly important to understand the real nature of Ethernet, and in particular, the nature of the new enhancements that we put under the umbrella of Data Center Bridging (DCB).

Although there is a great deal of information around DCB, there is also a lot of confusion, and even the best articles fail to describe a number of its elements. As such, with 10GbE ramping, now is a good time to try to clarify what DCB does and does not do.

Perhaps the first and most important point is that DCB is, in reality, a task group in the IEEE responsible for developing enhancements to the 802.1 bridge specifications that apply specifically to Ethernet switching (or, as the IEEE says, bridging) in the datacenter. As such, DCB is not in itself a standard, nor is the DCB group solely involved in those standards that apply to I/O and network convergence. The most recent work of this task group falls into two distinct areas, both of which apply to the datacenter: one is the now-completed set of standards around network and I/O convergence (802.1Qau, Qaz, Qbb); the other is the set of standards that address the impact of server virtualization technology (802.1Qbg, BR, and the now-withdrawn Qbh).

[Figure: Protocol Tree]

Also critical to understand, so that we do not overstate either the limitations of traditional Ethernet or the advantages of the new standards around I/O and network convergence, is that these new standards build on top of many well understood, well used, mature capabilities that already exist within the IEEE Ethernet standards set. Indeed, IMHO, the most important element of this is that the DCB convergence standards build on top of the 802.1p capability to specify eight different classes of service through a 3-bit PCP field in the 802.1Q header, the VLAN header. Or to say that in English: Ethernet has for some considerable time had the ability to separate traffic into eight separate categories to ensure that those different categories get different treatment; or, more bluntly, the fundamentals of I/O and network convergence are nothing new to Ethernet. Not only that, but the VLAN identification itself can be used to apply QoS to different sets of traffic, as can the Ethertype or IP socket, which usually let us identify different traffic types.
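To make the 3-bit PCP field concrete, here is a minimal Python sketch that packs and unpacks the 4-byte 802.1Q tag; using priority 3 for FCoE in the example reflects a common convention, not a requirement of the standard.

```python
import struct

def build_vlan_tag(pcp: int, dei: int, vid: int) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID (0x8100) followed by the TCI,
    where the TCI packs PCP (3 bits), DEI (1 bit), and VLAN ID (12 bits)."""
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vid <= 4095
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)

def parse_vlan_tag(tag: bytes):
    """Return (pcp, dei, vid) from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == 0x8100, "not an 802.1Q tag"
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0x0FFF

# Example: mark FCoE traffic with priority 3 on VLAN 100
tag = build_vlan_tag(pcp=3, dei=0, vid=100)
print(parse_vlan_tag(tag))   # -> (3, 0, 100)
```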

So what is all the fuss about? Although there were already some good convergence capabilities, it was recognized that these could be further enhanced.

802.1Qbb, or Priority-based Flow Control (PFC), far from adding to Ethernet a non-existent capability for lossless operation, simply takes the existing capability for lossless operation, 802.3x PAUSE, and enhances it. 802.3x, when deployed with both RX and TX pause, can give lossless Ethernet, as recognized both by many in the iSCSI community and by the FCoE specifications. However, the pause mechanism applies at the port level, which means giving one traffic class lossless behavior causes blocking of other traffic classes. All 802.1Qbb does, along with 802.3bd, is allow the pause mechanism to be applied individually to specific priorities or traffic classes; in other words, pause FCoE or iSCSI or RoCE whilst allowing other traffic to flow.

[Figure: PFC]
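As a rough illustration of how PFC extends the old port-level pause, the sketch below builds the payload of a per-priority pause frame in Python. The layout shown (opcode 0x0101, a priority-enable vector, then eight per-priority timers in 512-bit-time quanta) follows my reading of 802.1Qbb/802.3bd; treat the standards as authoritative.

```python
import struct

def build_pfc_payload(paused_priorities, quanta=0xFFFF):
    """Sketch of an 802.1Qbb PFC frame payload (carried in a MAC Control frame,
    EtherType 0x8808): opcode 0x0101, a 2-octet priority-enable vector, then
    eight 2-octet pause timers expressed in quanta of 512 bit times."""
    enable_vector = 0
    timers = [0] * 8
    for prio in paused_priorities:           # e.g. pause only priority 3 (FCoE)
        enable_vector |= 1 << prio
        timers[prio] = quanta                # 0xFFFF = pause for the maximum time
    opcode = 0x0101
    return struct.pack("!HH8H", opcode, enable_vector, *timers)

payload = build_pfc_payload({3})
print(payload.hex())   # only priority 3 is paused; all other priorities keep flowing
```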

802.1Qaz, or ETS (let’s ignore that DCBX is also part of this document and discussed in another SNIA-ESF blog), is not bandwidth allocation to your individual priorities; rather, it is the ability to create a group of priorities and apply bandwidth rules to that group. In English, it adds a new tier to your QoS schedulers so you can now apply bandwidth rules to a port, a priority or class group, an individual priority, and a VLAN. The standard suggests a practice of at least three groups, one for best-effort traffic classes, one for PFC-lossless classes, and one for strict priority, though it does allow more groups.

[Figure: ETS logical view]
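The following minimal Python sketch models the three-group practice described above; the priority-to-group mapping and bandwidth percentages are illustrative assumptions, not values taken from 802.1Qaz.

```python
# Priority groups and their ETS bandwidth shares (percent of the link bandwidth
# left over after strict-priority traffic is served). Values are illustrative.
ets_groups = {
    "best-effort":  {"priorities": [0, 1, 2, 5], "bandwidth_pct": 40},
    "pfc-lossless": {"priorities": [3, 4],       "bandwidth_pct": 60},   # e.g. FCoE/iSCSI
    "strict":       {"priorities": [6, 7],       "bandwidth_pct": None}, # not ETS-scheduled
}

def check_allocation(groups):
    """The ETS shares for the non-strict groups should sum to 100%."""
    total = sum(g["bandwidth_pct"] for g in groups.values() if g["bandwidth_pct"] is not None)
    assert total == 100, f"ETS bandwidth shares sum to {total}%, expected 100%"

def group_for_priority(groups, prio):
    for name, g in groups.items():
        if prio in g["priorities"]:
            return name
    raise ValueError(f"priority {prio} not mapped to a group")

check_allocation(ets_groups)
print(group_for_priority(ets_groups, 3))   # -> "pfc-lossless"
```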

Last but not least, 802.1Qau, or QCN, is not a mechanism to provide lossless capabilities. Where pause and PFC are point-to-point flow control mechanisms, QCN allows flow control to be applied via a message from the congestion point all the way back to the source. Being an Ethernet-level mechanism, it of course operates across multiple hops within a layer 2 domain and so cannot cross either IP routing or FCF-based FCoE forwarding. If QCN is applied to a non-PFC priority, it would most likely reduce drops by telling the source device to slow down, rather than having frames dropped and relying on the TCP congestion window to trigger slowing down at the TCP level. If QCN is applied to a PFC priority, it could reduce back-propagation of PFC pause and so congestion propagation within that priority.

[Figure: QCN]
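For a sense of what the congestion point actually computes, here is a small Python sketch of the QCN feedback arithmetic as it is commonly described for 802.1Qau (feedback is only signalled back to the source when it is negative); the queue setpoint and weight are illustrative values, and this is not the normative algorithm text.

```python
# Sketch of the 802.1Qau (QCN) congestion-point feedback calculation.
# The congestion point samples its queue and, when the result is negative,
# sends the feedback value Fb back toward the source, which reduces its rate.
Q_EQ = 26_000      # target queue occupancy in bytes (illustrative)
W = 2              # weight on the queue-growth term (illustrative)

def qcn_feedback(q_now: int, q_old: int) -> int:
    """Fb = -(Q_off + W * Q_delta): Q_off is how far the queue is above its
    setpoint, Q_delta is how fast it is growing. Negative Fb means 'slow down'."""
    q_off = q_now - Q_EQ
    q_delta = q_now - q_old
    return -(q_off + W * q_delta)

print(qcn_feedback(q_now=40_000, q_old=30_000))   # strongly negative -> source cuts its rate
print(qcn_feedback(q_now=20_000, q_old=22_000))   # non-negative -> no notification sent
```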

Although not part of the standards for DCB-based convergence, though mentioned in them, devices that implement DCB typically have some form of buffer carving or partitioning, such that the different traffic classes are not just on different priorities or classes as they flow through the network, but are queued in, and utilize, separate buffer queues. This is important because separate queuing and buffer allocation is another way in which fate sharing between the different traffic classes is limited or avoided. It also makes conversations around microbursts, burst absorption, and latency bubbles far more complex than before, when there was less or no buffer separation.

It is important to remember that what we are describing here are the layer 2 Ethernet mechanisms around I/O and network convergence, QoS, and flow control. These are not the only tools available (or in operation), and any datacenter design needs to fully consider what is happening at every level of the network and server stack, including, but not limited to, the TCP/IP layer, the SCSI layer, and indeed the application layer. The interactions between the layers are often very interesting, but that is perhaps the subject for another blog.

In summary, with the set of enhanced convergence protocols now fully standardized and fairly commonly available on many platforms, along with the many capabilities that exist within Ethernet, and the increasing deployment of networks with 10GbE or above, more organizations are benefiting from convergence – but to do so they quickly find that they need to learn about aspects of Ethernet that in the past were perhaps of less interest in a non-converged world.

 

How DCB Makes iSCSI Better

A challenge with traditional iSCSI deployments is the non-deterministic nature of Ethernet networks. When Ethernet networks only carried non-storage traffic, lost data packets were not a big issue, as they would simply get retransmitted. However, as we layered storage traffic over Ethernet, lost data packets became a “no no”: storage traffic is not as forgiving as non-storage traffic, and data retransmissions introduce I/O delays that are unacceptable for storage traffic. In addition, traditional Ethernet had no mechanism to assign priorities to classes of I/O.

Therefore, a new solution was needed. Short of creating a separate Ethernet network to handle iSCSI storage traffic, Data Center Bridging (DCB) was that solution.

The DCB standard is a key enabler of effectively deploying iSCSI over Ethernet infrastructure. The standard provides the framework for high-performance iSCSI deployments with key capabilities that include:
- Priority Flow Control (PFC)—enables “lossless Ethernet”, a consistent stream of data between servers and storage arrays. It basically prevents dropped frames and maximizes network efficiency. PFC also helps to optimize SCSI communication and minimize the effects of TCP retransmission, making iSCSI flows more reliable.
- Quality of Service (QoS) and Enhanced Transmission Selection (ETS)—support protocol priorities and allocation of bandwidth for iSCSI and IP traffic.
- Data Center Bridging Capabilities eXchange (DCBX) — enables automatic network-based configuration of key network and iSCSI parameters.

With DCB, iSCSI traffic is more balanced over high-bandwidth 10GbE links. From an investment protection perspective, the ability to support iSCSI and LAN IP traffic over a common network makes it possible to consolidate iSCSI storage area networks with traditional IP LAN traffic networks. There is also another key component needed for iSCSI over DCB. This component is part of the Data Center Bridging Capabilities eXchange (DCBX) standard, and it’s called the TCP Application Type-Length-Value, or simply “TLV”. TLV allows the DCB infrastructure to apply unique ETS and PFC settings to specific sub-segments of the TCP/IP traffic. This is done through switches, which can identify the sub-segments based on the TCP socket or port identifier included in the TCP/IP frame. In short, TLV directs servers to place iSCSI traffic on available PFC queues, which separates storage traffic from other IP traffic. PFC also prevents the packet drops that force retransmission and supports a consistent data flow with low latency. IT administrators can leverage QoS and ETS to assign bandwidth and priority for iSCSI storage traffic, which is crucial to support critical applications.
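The snippet below models, in Python, what such an application-priority mapping looks like conceptually: the well-known iSCSI TCP port (3260) and the FCoE EtherType (0x8906) are real identifiers, but the priority numbers and table layout are illustrative assumptions rather than any vendor’s configuration format.

```python
# Illustrative model of a DCBX Application Priority mapping as a DCB edge device
# might hold it: iSCSI (well-known TCP port 3260) is steered onto a PFC-enabled
# priority so it gets lossless treatment and its ETS bandwidth share.
app_priority_table = [
    {"protocol": "tcp", "port": 3260, "priority": 4},           # iSCSI -> priority 4 (assumed)
    {"protocol": "ethertype", "value": 0x8906, "priority": 3},  # FCoE -> priority 3 (common default)
]

def priority_for_tcp_port(table, port, default=0):
    """Return the 802.1p priority that would be applied to a TCP flow."""
    for entry in table:
        if entry.get("protocol") == "tcp" and entry.get("port") == port:
            return entry["priority"]
    return default

print(priority_for_tcp_port(app_priority_table, 3260))   # -> 4
```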

Therefore, depending on your overall datacenter environment, running iSCSI over DCB can improve:
- Performance, by ensuring a consistent stream of data, resulting in “deterministic performance” and the elimination of packet loss that can cause high latency
- Quality of service through allocation of bandwidth per protocol for better control of service levels within a converged network
- Network convergence

For more information on this topic or technologies discussed in this blog, please visit some of our other blog articles:
- the “What Up with DCBX” blog and “iSCSI over DCB: Reliability and predictable performance”, or check out the IEEE website on DCB

VN2VN: “Ethernet Only” Fibre Channel over Ethernet (FCoE) Is Coming

The completion of a specification for FCoE (T11 FC-BB-5, 2009) held great promise for unifying storage and LAN over a unified Ethernet network, and now we are seeing the benefits. With FCoE, Fibre Channel protocol frames are encapsulated in Ethernet packets. To achieve the high reliability and “lossless” characteristics of Fibre Channel, Ethernet itself has been enhanced by a series of IEEE 802.1 specifications collectively known as Data Center Bridging (DCB). DCB is now widely supported in enterprise-class Ethernet switches. Several major switch vendors also support the capability known as Fibre Channel Forwarding (FCF), which can de-encapsulate/encapsulate the Fibre Channel protocol frames to allow, among other things, the support of legacy Fibre Channel SANs from an FCoE host.

 
The benefits of unifying your network with FCoE can be significant, in the range of 20-50% savings in total cost of ownership depending on the details of the deployment. This is significant enough to start the ramp of FCoE, as SAN administrators have seen the benefits and successful Proofs of Concept have demonstrated reliability and delivered performance. However, the economic benefits of FCoE can be even greater than that. And that’s where VN2VN, as defined in the final draft T11 FC-BB-6 specification, comes in. This spec completed final balloting in January 2013 and is expected to be published this year. The code has been incorporated in the Open FCoE code (www.open-fcoe.org). VN2VN was demonstrated at the Fall 2012 Intel Developer Forum in two demos by Intel and Juniper Networks, respectively.

 
“VN2VN” refers to Virtual N_Port to Virtual N_Port in T11-speak, but the concept is simply “Ethernet Only” FCoE. It allows discovery and communication between peer FCoE nodes without the existence of, or dependency on, a legacy FCoE SAN fabric (FCF). The Fibre Channel protocol frames remain encapsulated in Ethernet packets from host to storage target and from storage target to host. The only switch requirement for functionality is support for DCB. FCF-capable switches and their associated licensing fees are expensive; a VN2VN deployment of FCoE could save 50-70% relative to the cost of an equivalent Fibre Channel storage network. It’s these compelling potential cost savings that make VN2VN interesting. VN2VN could significantly accelerate the ramp of FCoE. SAN admins are famously conservative, but cost savings this large are hard to ignore.

 
An optional feature of FCoE is security support through FCoE Initialization Protocol (FIP) snooping. FIP snooping, a switch function, can establish firewall filters that prevent unauthorized network access by unknown or unexpected virtual N_Ports transmitting FCoE traffic. In BB-5 FCoE, this requires FCF capabilities in the switch. Another benefit of VN2VN is that it can provide the security of FIP snooping, again without the requirement of an FCF.

 
Technically, what VN2VN brings to the party is a new T11 FIP discovery process that enables two peer FCoE nodes, say a host and a storage target, to discover each other and establish a virtual link. As part of this new discovery process they work cooperatively to determine unique FC_IDs for each other. This is in contrast to the BB-5 method, where nodes need to discover and log in to an FCF to be assigned FC_IDs. A VN2VN node can log in to a peer node and establish a logical point-to-point link with standard fabric login (FLOGI) and port login (PLOGI) exchanges.
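Purely as a logical model of that flow, the Python sketch below walks a host and a target through discover-then-login; the class, the fake FC_ID values, and the WWPNs are invented for illustration and do not reflect the actual FIP frame formats or timers defined in FC-BB-6.

```python
# Illustrative model of the VN2VN login flow described above: discover a peer,
# cooperatively settle on unique FC_IDs, then FLOGI/PLOGI over the virtual link.
class VN2VNNode:
    def __init__(self, wwpn: str):
        self.wwpn = wwpn
        self.fc_id = None
        self.peers = {}                       # peer WWPN -> peer FC_ID

    def discover(self, peer: "VN2VNNode"):
        # BB-6 nodes probe and claim FC_IDs cooperatively; the sketch just fakes
        # it with deterministic, non-clashing values.
        self.fc_id = self.fc_id or 0x010001
        peer.fc_id = peer.fc_id or 0x020001
        self.peers[peer.wwpn] = peer.fc_id
        peer.peers[self.wwpn] = self.fc_id

    def login(self, peer: "VN2VNNode") -> str:
        assert peer.wwpn in self.peers, "must complete FIP discovery before login"
        return f"FLOGI/PLOGI complete: {self.fc_id:#08x} <-> {peer.fc_id:#08x}"

host = VN2VNNode("10:00:00:00:c9:00:00:01")
target = VN2VNNode("50:06:01:60:00:00:00:44")
host.discover(target)
print(host.login(target))
```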

VN2VN also has the potential to bring the power of Fibre Channel protocols to new deployment models, most excitingly disaggregated storage. With VN2VN, a rack of diskless servers could access a shared storage target with very high efficiency and reliability. Think of this as “L2 DAS”: the immediacy of Direct Attached Storage over an L2 Ethernet network, but with the storage disaggregated from the servers so it can be managed and serviced on a much more scalable model. The future of VN2VN is bright.

How is 10GBASE-T Being Adopted and Deployed?

For nearly a decade, the primary deployment of 10 Gigabit Ethernet (10GbE) has been using network interface cards (NICs) supporting enhanced Small Form-Factor Pluggable (SFP+) transceivers. The predominant transceivers for 10GbE are Direct Attach (DA) copper, short range optical (10GBASE-SR), and long-range optical (10GBASE-LR). The Direct Attach copper option is the least expensive of the three. However, its adoption has been hampered by two key limitations:

- DA’s range is limited to 7m, and

- because of the SFP+ connector, it is not backward-compatible with existing 1GbE infrastructure using RJ-45 connectors and twisted-pair cabling.

10GBASE-T addresses both of these limitations.

10GBASE-T delivers 10GbE over Category 6, 6A, or 7 cabling terminated with RJ-45 jacks. It is backward-compatible with 1GbE and even 100 Megabit Ethernet. Cat 6A and 7 cables will support up to 100m. The advantages for deployment in an existing data center are obvious. Most existing data centers have already installed twisted pair cabling at Cat 6 rating or better. 10GBASE-T can be added incrementally to these data centers, either in new servers or via NIC upgrades “without forklifts.” New 10GBASE-T ports will operate with all the existing Ethernet infrastructure in place. As switches get upgraded to 10GBASE-T at whatever pace, the only impact will be dramatically improved network bandwidth.

Market adoption of 10GBASE-T accelerated sharply when the first single-chip 10GBASE-T controllers hit production. This integration became possible because of Moore’s Law advances in semiconductor technology, which also enabled the rise of dense commercial switches supporting 10GBASE-T. Integrating the PHY and MAC on a single piece of silicon significantly reduced power consumption. This lower power consumption made fan-less 10GBASE-T NICs possible for the first time. Also, switches supporting 10GBASE-T are now available from Cisco, Dell, Arista, Extreme Networks, and others, with more to come. You can see the early market impact single-chip 10GBASE-T had by mid-year 2012 in this analysis of shipments, in numbers of server ports, from Crehan Research:

 

[Figure: Server-class Adapter & LOM 10GBASE-T Shipments, Crehan Research]

Note, Crehan believes that by 2015, over 40% of all 10GbE adapters and controllers sold that year will be 10GBASE-T.

Early concerns about the reliability and robustness of 10GBASE-T technology have all been addressed in the most recent silicon designs. 10GBASE-T meets the bit-error rate (BER) requirements of all the Ethernet and storage-over-Ethernet specifications. As I addressed in an earlier SNIA-ESF blog, the storage networking market is a particularly conservative one. But there appear to be no technical reasons why 10GBASE-T cannot support NFS, iSCSI, and even FCoE. Today, Cisco is in production with a switch, the Nexus 5596T, and a fabric extender, the 2232TM-E, that support “FCoE-ready” 10GBASE-T. It’s coming, with all the cost-of-deployment benefits of 10GBASE-T.
