10GbE Answers to Your Questions

Our recent Webcast, "10GbE – Key Trends, Drivers and Predictions," was very well received and well attended. We thank everyone who was able to make the live event. For those of you who couldn't make it, it's now available on demand. Check it out here.

There wasn't enough time to respond to all of the questions during the Webcast, so the presentation team has consolidated answers to all of them in this blog post. Feel free to comment and provide your input.

Question: When implementing VDI (1,000 to 5,000 users), what are the best practices for architecting the enterprise storage tier to avoid peak-IOPS/boot-storm problems? How can SSD cache be used to minimize that issue?

Answer: In the case of boot storms for VDI, one of the challenges is dealing with the individual images that must be loaded and accessed by remote clients at the same time. SSDs can help when deployed either at the host or at the storage layer. And when deduplication is enabled in these instances, a single image can be loaded into either local or storage SSD cache and served to clients much more rapidly. Additional best practices include using cloning technologies to reduce the space taken up by each virtual desktop.
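To put rough numbers on why deduplication plus SSD cache helps, here is a back-of-the-envelope sketch in Python. The per-desktop figures are illustrative assumptions, not measurements:

```python
# Illustrative boot-storm arithmetic; all inputs are assumed example values.
desktops = 1000                 # concurrent VDI boots (low end of the question)
boot_read_mb = 300              # assumed OS data read per desktop during boot
window_s = 600                  # assumed acceptable boot window: 10 minutes

# Without dedup: every desktop reads its own image from shared storage.
total_mb = desktops * boot_read_mb
print(f"Aggregate reads: {total_mb / 1024:.0f} GB "
      f"({total_mb * 8 / window_s / 1000:.1f} Gb/s sustained)")

# With dedup + SSD cache: duplicate blocks collapse to one cached golden
# image, so most reads are served from flash instead of spinning disk.
cached_mb = boot_read_mb        # roughly one image's worth of unique blocks
print(f"Unique data to cache: ~{cached_mb} MB")
```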

Question: What are the considerations for 10GbE with LACP EtherChannel?

Answer: Link Aggregation Control Protocol (IEEE 802.1AX-2008) is speed agnostic. No special consideration is required when moving to 10GbE.
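One behavior worth remembering at any link speed: LACP balances traffic per flow, not per packet, so a single flow is still limited to one member link. A minimal sketch of a layer-3+4 style transmit hash (the exact hash varies by switch and OS; this is illustrative only):

```python
# Illustrative LACP-style transmit hashing: each flow maps to one member link.
import zlib

def member_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                n_links: int) -> int:
    """Pick an aggregate member for a flow (layer-3+4 style policy)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_links

# Two 10GbE links in the aggregate: distinct flows may spread across links,
# but any single flow stays on one link and tops out at 10Gb/s.
print(member_link("10.0.0.5", "10.0.1.9", 49152, 3260, n_links=2))  # iSCSI flow
print(member_link("10.0.0.5", "10.0.1.9", 49153, 3260, n_links=2))  # another flow
```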

Question: From a percentage point of view, what is the current adoption rate of 10G Ethernet in data centers vs. adoption of 10G FCoE?

Answer: As I mentioned on the Webcast, we are at the early stages of adoption for FCoE. But you can read about multiple successful deployments in case studies on the websites of Cisco, Intel, and NetApp, to name a few. The truth is no one knows how much FCoE is actually deployed today. For example, Intel sells FCoE as a "free" feature of our 10GbE CNAs, so we really have no way of tracking who uses that feature. FC SAN administrators are an extraordinarily conservative lot, and I think we all expect this to be a long transition. But the economics of FCoE are compelling and will get even more compelling with 10GBASE-T. And, as several analysts have noted, as 40GbE becomes more broadly deployed, the performance benefits of FCoE also become quite compelling.

Question: What is the difference between DCBX Baseline 1.01 and IEEE DCBX (802.1Qaz)?

Answer: There are three versions of DCBX:
- Pre-CEE (also called CIN)
- CEE (Baseline 1.01)
- IEEE 802.1Qaz

There are differences in the TLVs and the way they are encoded in all three versions. Pre-CEE and CEE are quite similar in terms of their state machines. With Qaz, the state machines are quite different: it introduces the notion of symmetric, asymmetric, and informational parameters, which changes the way parameters are passed between peers.
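On the wire, the three versions are distinguishable by the organizationally specific LLDP TLVs they use. The OUI/subtype values below reflect our reading of the CEE baseline and 802.1Qaz documents; treat them as illustrative and verify against lldpad or the standards themselves:

```python
# Illustrative classifier for DCBX LLDP TLVs. OUI/subtype values are our
# reading of the specs; verify against lldpad or the standards themselves.
DCBX_TLVS = {
    (0x001B21, 0x01): "Pre-CEE / CIN DCBX",
    (0x001B21, 0x02): "CEE DCBX (Baseline 1.01)",
    (0x0080C2, 0x09): "802.1Qaz ETS Configuration",
    (0x0080C2, 0x0A): "802.1Qaz ETS Recommendation",
    (0x0080C2, 0x0B): "802.1Qaz PFC Configuration",
    (0x0080C2, 0x0C): "802.1Qaz Application Priority",
}

def classify(oui: int, subtype: int) -> str:
    """Map an LLDP org-specific TLV (OUI, subtype) to a DCBX version/TLV."""
    return DCBX_TLVS.get((oui, subtype), "not a DCBX TLV")

print(classify(0x0080C2, 0x0B))   # -> 802.1Qaz PFC Configuration
```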

Question: I'm surprised you would suggest that 1GbE alone is OK for VDI. Do you mean just small campus implementations? What about multi-location WAN deployments for a large enterprise with 1,000 to 5,000 desktop VMs?

Answer: The reference to 1GbE in the context of VDI was to point out that enterprise applications will also rely on 1GbE in order to reach the desktop. 1GbE has sufficient bandwidth to address VoIP, VDI, etc., as each desktop connects to the central datacenter with 1GbE. We don't see a use case for 10GbE on any desktop or laptop for the foreseeable future.
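As a sanity check on the bandwidth side, consider the datacenter aggregation rather than the desktop link. The per-desktop rate below is an assumed display-protocol average, not a measurement:

```python
# Rough aggregation math for the 1,000-5,000 desktop case in the question.
kbps_per_desktop = 500          # assumption: knowledge-worker session average
for desktops in (1000, 5000):
    aggregate_gbps = desktops * kbps_per_desktop / 1_000_000
    print(f"{desktops} desktops -> ~{aggregate_gbps:.1f} Gb/s at the datacenter edge")
```

So 1GbE is ample per desktop, while the aggregation and storage layers are where 10GbE earns its keep.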

Question: When making a strategic bet as a CIO/CTO on the future (5-8 years plus) of my datacenter, storage network, etc., is there any technical or business case to keep FC and SAN, versus making the move to a 10/40GbE path with SSD and FCoE? This seems especially true given the move to object-based storage and the other things you talked about with Big Data and VMs. It seems I need to keep FC/SAN only if a vendor with structured-data apps requires block storage?

Answer: An answer to this question really requires an understanding of the applications you run, your performance and QoS objectives, and what your future applications look like. 10GbE offers the bandwidth and feature set to address the majority of application requirements and is flexible enough to support both file and block protocols. If you have an existing investment in FC and aren't ready to eliminate it, you have options to transition to a 10GbE infrastructure with the use of FCoE. FCoE at its core is FCP, so you can connect your existing FC SAN into your new 10GbE infrastructure with CNAs and switches that support both FC and FCoE. This is one of the benefits of FCoE – it offers a great migration path from FC to Ethernet transports. And you don't have to do it all at once: you can migrate your servers and edge switches first and then migrate the rest of your infrastructure later.

Question: Can I effectively emulate or outperform an FC SAN by building a VLAN network storage architecture based on 10/40GbE and NAS, using SSD cache strategically?

Answer: What we've seen – and you can see this yourself in the Yahoo case study posted on the Intel website – is that you can get to line rate with FCoE. So 10GbE outperforms 8Gbps FC by about 15% in bandwidth. FC is going to 16Gbps, but Ethernet is going to 40Gbps, so you should be able to increasingly outperform FC with FCoE – with or without SSDs.

Question: If I have a large legacy investment in FC and SAN, how do I cost-effectively migrate to 10 or 40GbE using NAS? Does it only have to be a greenfield opportunity? Is there a better way to build a business case for 10GbE/NAS, and what should the target architecture look like for a large virtualized SAN vs. a NAS storage network on IP?

Answer: The combination of 10Gb converged network adapters (CNAs) and a top-of-rack (TOR) switch that supports both FCoE and native FC allows you to preserve connectivity to your existing FC SAN assets while putting in place a 10Gb access layer that can be used for both storage and IP. By using CNAs and DCB Ethernet switches for your storage and IP access, you also help reduce your CAPEX and OPEX (there is less equipment to buy and manage on a common infrastructure). You get the added performance (throughput) benefit of 10G FCoE or iSCSI versus 4G or 8G Fibre Channel or 1GbE iSCSI. And 40GbE core switches give you greater scalability to address future growth in your data center.
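The CAPEX/OPEX point is easy to quantify at the port level. A hedged sketch, assuming a common dual-HBA-plus-dual-NIC redundant server design (your counts will differ):

```python
# Port-count arithmetic for converging FC + Ethernet onto CNAs.
# The per-server adapter counts are assumptions for a typical redundant design.
servers = 100

# Before: separate fabrics -> 2 FC HBA ports + 2 Ethernet NIC ports per server.
legacy_ports = servers * (2 + 2)

# After: 2 CNA ports per server carry both FCoE and IP over DCB Ethernet.
converged_ports = servers * 2

print(f"Legacy: {legacy_ports} server ports/cables/switch ports")
print(f"Converged: {converged_ports} (a {1 - converged_ports/legacy_ports:.0%} reduction)")
```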

Question: If I want to build an active-active multi-petabyte storage network over a WAN, with two datacenters 1,000 miles apart, primarily to support Big Data analytics, why would I want to (or not want to) do this over 10/40GbE/NAS vs. FC SAN? Does SAN vs. NAS really enter into the issue? If I have mostly file-based demand vs. block, is there a technical or business case to keep the SAN?

Answer: You're right: SAN or NAS doesn't really enter into the issue for the WAN part. Bandwidth does, for the amount of Big Data that will need to be moved, and it will be the key component in building active/active datacenters. (Note that at that distance, latency will be significant and unavoidable; applications will experience significant delay if they're at site A and their data is at site B.)
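To quantify "significant and unavoidable": light in fiber propagates at roughly two-thirds of c, about 200 km per millisecond, so a quick estimate for the 1,000-mile case looks like this:

```python
# Speed-of-light floor on latency between the two sites; no protocol overhead.
distance_km = 1000 * 1.609           # 1,000 miles
km_per_ms = 200                      # ~speed of light in fiber (2/3 of c)

one_way_ms = distance_km / km_per_ms
print(f"One-way propagation: ~{one_way_ms:.0f} ms, round trip: ~{2 * one_way_ms:.0f} ms")
```

Real paths are longer than the straight-line distance and add equipment delay, so production round trips will be worse; a synchronous active/active design has to budget for that on every write.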

Inside the data center, the choice is driven by application protocols. If you're primarily delivering file-based space, then an FC SAN is probably a luxury, and the small amount of block-based demand can be delivered over iSCSI with equal performance. With a 40GbE backbone and 10GbE to application servers, there's no downside to dropping your FC SAN.

Question: Are you familiar with VMware and Cisco plans to introduce a beta for a virtualized GPU appliance (think NVIDIA hardware GPUs) for heavy-duty 3D visualization apps on VDI, eliminating the need for expensive 3D workstations like RISC-based SGI desktops? If so, when dealing with these heavy-duty apps, what are your concerns for the network and storage network?

Answer: I'm afraid I'm not familiar with these plans. But clearly, moving graphics processing from the client to the server will add increasing load to the network. It's hard to be specific without a defined system architecture and workload. However, I think the generic remarks Jason made about VDI and how NVM storage can help with peak loads like boot storms apply here as well, though you probably can't use the trick of assuming multiple users will have a common image they're trying to access.

Question: How do I get a copy of your slides from today? PDF?

Answer: A PDF of the Webcast slides is available on the SNIA-ESF website at http://www.snia.org/sites/default/files/SNIA_ESF_10GbE_Webcast_Final_Slides.pdf

10GbE – Are You Ready?

Is 10GbE coming of age? Many of us within the SNIA-ESF think so. We have co-authored a new and objective white paper on the subject, “10GbE – Comes of Age.” You can download it at http://snia.org/sites/default/files/10GbElookto40GbE_Final.pdf

In this paper we dive deep into why we believe 2012 is the year for wide 10GbE adoption. There are numerous technical and economic justifications that will compel organizations to take advantage of the significant benefits 10GbE delivers. From virtualization and network convergence to the general availability of LOM and 10GBASE-T, there is no shortage of disruptive technologies converging to drive this protocol forward.

This paper is the foundation for much of our activity for the rest of the year. Our 10GbE live Webcast a couple of weeks ago was very well received; in fact, hundreds of people either attended the live event or have viewed it on demand. I encourage you to check it out at http://www.brighttalk.com/webcast/663/50385. We also have two more Webcasts scheduled: one on NFS in August and the other on Flash technology in September. Keep checking this blog for details.

This paper is the result of a collaboration of industry leaders from Broadcom, Dell, Emulex, Intel, and NetApp. We pride ourselves on keeping things "vendor-neutral." If you're in IT, we hope you find this cooperation refreshing. If you're a vendor, we welcome your participation and urge you to consider joining SNIA and the ESF. Get more info on joining SNIA at http://www.snia.org/member_com/join

SSSI PCIe SSD Task Force Enters Final Stretch

With opening day of the Del Mar races in my hometown on Wednesday, it seems only fitting to note that the SSSI PCIe SSD Task Force is rounding the last turn in its informational call schedule.

If you have a stake in this fast-growing technology area, you won't want to miss the final two calls on July 16 and July 30 at 7:00 pm ET/4:00 pm PT.

The July 16 call will feature a talk by Narinder Lall of eASIC on PCIe Controllers and a presentation by Walt Hubis of the SNIA Security Technical Work Group on Security and Removable NVRAM PCIe Storage.

Join the teleconference at 1-866-439-4480 (passcode 25478081#) and the WebEx at snia.webex.com (meeting ID 797-289-257, passcode pcie2012).

Finally, if you’ve missed any calls to this point, catch up by visiting http://snia.org/forums/sssi/pcie.

See you at the races!

Live Webcast: 10GbE – Key Trends, Drivers and Predictions

The SNIA Ethernet Storage Forum (ESF) will be presenting a live Webcast on 10GbE on Thursday, July 19th.  Together with my SNIA colleagues, David Fair and Gary Gumanow, we’ll be discussing the technical and economic justifications that will likely make 2012 the “breakout year” for 10GbE.  We’ll cover the disruptive technologies moving this protocol forward and highlight the real-world benefits early adopters are seeing. I hope you will join us!

The Webcast will begin at 8:00 a.m. PT/11:00 a.m. ET. Register now: http://www.brighttalk.com/webcast/663/50385

This event is live, so please come armed with your questions. We’ll answer as many as we can on the spot and include the full Q&A here in a SNIA ESF blog post.

We look forward to seeing you on the 19th!

Impressions from Cisco Live 2012

I attended Cisco Live in San Diego last week and wanted to share some of my impressions of the show.

First of all, the weather was a disappointment. I'm a native Californian (the northern state, of course) and I was looking forward to some sweet weather instead of the cool, overcast climate. It's been so nice in Boston, I have been spoiled.

Attendance was huge. I heard something north of 17,000 attendees; I don't know if that was actual attendance or registrations. But it was a significant number, and I had several engaging conversations with attendees about data center trends and applications, as well as general storage inquiries.

Presenting at the Intel Booth

My buddies at Intel asked me to make a couple of presentations at their booth, and I spoke on the current status of 10GbE adoption and the value it offers. My two presentations were in the morning of the first two full days of the show. Things didn't look good when only a few attendees were seated at the time we were about to start. My first impression on seeing the empty seats in the theater was, "the Intel employees better make a great audience."

Fortunately, the 20 or so seats filled just as I started, with more visitors standing at the back and sides. The number of attendees doubled the second day, so maybe I built a reputation. Yeah, right.

Anyway, let me share just a couple of the ideas from my presentation here:

1) 10GbE is an ideal network infrastructure that offers great flexibility and performance, with the ability to support a variety of workloads and applications. For storage, both block- and file-based protocols are supported, which is ideal for today's highly virtualized infrastructures.

2) The ability to consolidate data traffic over a shared network promises significant capital and operational benefits for organizations currently supporting data centers with mixed network technologies. These benefits include fewer ports, cables, and components, which means less equipment to purchase, manage, power, and cool. Goodness all around.

3) A couple of applications in particular are making 10GbE especially useful (a back-of-the-envelope sketch follows the list):

  1. Virtualization – high VM density drives increased bandwidth requirements from server to storage
  2. Flash/SSD – flash memory drives increased performance at both the server and the storage, which requires increased bandwidth
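To make the virtualization point concrete, here is a rough sketch; the per-VM storage bandwidth is an assumed figure, and real workloads vary widely:

```python
# Assumed per-VM storage traffic; real profiles vary widely by workload.
mbps_per_vm = 150                    # assumption: average server VM I/O in Mb/s
for vms_per_host in (5, 20, 40):
    demand_gbps = vms_per_host * mbps_per_vm / 1000
    link = "fits 1GbE" if demand_gbps <= 1 else "needs 10GbE"
    print(f"{vms_per_host:2d} VMs/host -> {demand_gbps:4.1f} Gb/s ({link})")
```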

After the presentation, I asked for questions and was pleased with their number and quality. Sure, we were giving away swag (Intel t-shirts), but the relevance of the questions was particularly interesting. Many customers were considering deploying converged networks or simply moving to Ethernet from Fibre Channel infrastructures. Some of the questions included: Where would you position iSCSI vs. FCoE? What are the ideal use cases for each? When do you expect to see 40GbE or 100GbE, and for what applications? What about other network technologies, such as InfiniBand?

Interestingly, very few if any were planning to move to 16Gb Fibre Channel. Now, this was a Cisco show, so I would expect attendees to be there because they favor Cisco's message and technology or are in the process of evaluating it. So, given Cisco's strength and investment in 10GbE, it shouldn't be a surprise that most attendees at the show, or at least at my presentation, were leaning that direction. But I didn't expect it to be so one-sided.

Conclusion

Interest in vendor technology shows is clearly surpassing interest in other industry events, and Cisco Live is no exception. Each Cisco Live event continues to reflect greater customer interest in 10GbE in the datacenter.

Updated Client Solid State Performance Test Specification Now Available

SNIA's Solid State Storage Initiative has just released a revised Client SSS Performance Test Specification (PTS-Client), which adds a new write saturation test and refines existing tests.

The Solid State Storage Performance Test Specification (PTS) is a device-level performance test suite for benchmarking and comparing performance among SAS, SATA, and PCI Express SSDs.

Revision 1.1 of the PTS-Client updates tests for IOPS, throughput, and latency to more accurately reflect the workload conditions under which Client SSDs are used. The PTS-Client v1.1 also adds a Write Saturation test that measures the initial Fresh-Out-of-Box state of SSDs and their performance evolution as data is randomly written to the device.
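For a feel of what a write saturation measurement does, here is a heavily simplified sketch. It is not the normative PTS procedure: a real run targets the raw device with direct (uncached) I/O, purpose-built tools, and the spec's preconditioning rules, and the file path and sizes below are demo assumptions:

```python
# Simplified write-saturation loop: random 4KiB writes while logging IOPS per
# window, to watch performance decay from Fresh-Out-of-Box toward steady state.
# NOT the normative PTS test; real runs use the raw device and direct I/O.
import os, random, time

PATH = "/tmp/satdemo.bin"            # hypothetical target (PTS uses the device)
SIZE = 64 * 1024 * 1024              # small demo range; PTS spans the device
BLOCK = 4096
WINDOW_S = 5

fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SIZE)
payload = os.urandom(BLOCK)
blocks = SIZE // BLOCK

written = 0
while written < 2 * SIZE:            # write ~2x the range to move past FOB
    start, ios = time.time(), 0
    while time.time() - start < WINDOW_S:
        os.lseek(fd, random.randrange(blocks) * BLOCK, os.SEEK_SET)
        os.write(fd, payload)
        os.fsync(fd)                 # crude stand-in for direct I/O
        ios += 1
        written += BLOCK
    print(f"{written / SIZE:4.2f}x range written: {ios / WINDOW_S:6.0f} IOPS")
os.close(fd)
```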

Eden Kim, Chair of SNIA's SSS Technical Working Group, describes the primary updates in PTS-Client v1.1 as adjustments to preconditioning ranges and test boundaries. Taken together, these parameters create a repeatable test stimulus that more accurately reflects the workload characteristics of SSDs used in a single-user environment. The PTS-Client v1.1 also adds an easily understandable description of each test, which helps the user understand the purpose of the test, the test flow, and how to interpret the test results.

Sample test results using the PTS-Client v1.1 have been posted to the SNIA SSSI Understanding PTS Performance webpage.

Full Steam Ahead for SNIA SSSI PCIe Task Force – It’s Not Too Late to Participate!

SSSI's PCIe SSD Task Force has covered a lot of ground since the inaugural call on April 9. Sixty-five organizations are now participating, with 125 members on the email reflector. The first four calls identified issues, with speakers from the SSSI, Agilent, Calypso, HP, LeCroy, Marvell, Micron, Seagate, STEC, Toshiba, and Virident taking a closer look at standards; discussing a PCIe test hardware RTP refresh; presenting the results of a survey on how many IOPS are enough; discussing PCIe test methodology and system integration issues; presenting on the 2.5" PCIe form factor; and reviewing SCSI Express, the PCI-SIG, and PCIe system and form factor concerns. All call notes are at http://snia.org/forums/sssi/pcie.

The open meeting roadmap continues with calls on the following topics: Big Picture – What Does It All Mean? (June 4); Deployment Strategies/Market Development (June 18); Where Do We Go From Here? (July 2); and Roadmaps and Milestones 2012 (July 16). All SSSI members are invited to attend; calls run 4:00 pm – 5:30 pm PT, and details are at http://snia.org/forums/sssi/pcie.

PCIe SSD Task Force activities will culminate in a face-to-face meeting on August 20 from 5:30 pm – 7:00 pm at the Flash Memory Summit in Santa Clara, CA (www.flashmemorysummit.com). Contact the SSSI at pciechair@snia.org if you would like to attend.

Membership in the PCIe SSD Task Force is complimentary, and all current SSSI members are welcome to participate. After July, the Task Force will change format to an SSSI Committee, and companies not already SSSI members will need to join SNIA and the SSSI to participate. For additional information, or to join, please contact the PCIe Task Force Chair at pciechair@snia.org.

New Cloud Storage Meme – “Enterprise DropBox”

In a number of recent presentations on cloud storage, I have started by asking the audience, "How many of you use DropBox?" I have seen rooms where more than half of the hands go up. Of course, the next question I ask is, "Does your corporate IT department know about this?" Sheepish grins abound.

DropBox has been responsible for a significant fraction of the growth in the number of Amazon S3 objects – that's where the files end up when you drop them onto that icon on your laptop, smartphone, or tablet. However, if a file is a corporate document, who is in charge of making sure the data and its storage meet corporate policies for protection, privacy, retention, and security? Nobody.

Thus there is now growing interest in bringing that data back in-house and on premises so that business policies for the data can be enforced. This trending meme has been termed "Enterprise DropBox." The basic idea is to offer an equivalent service and set of applications that let corporate IT users store their corporate documents where the IT department can manage them.

Is this "Private Cloud"? Well, yes, in that it uses capitalized corporate storage equipment. But it also sits "at the edge" of the corporate network so as to be accessible by employees wherever they happen to be. In reality, Enterprise DropBox needs to be part of an overall Bring Your Own Device (BYOD) strategy to enable frictionless innovation and collaboration for employees.

Who are likely to be the players in this space? Virtualization vendors such as Citrix (with its ShareFile acquisition) and VMware (with its Project Octopus initiative) look to be first movers, along with startups such as Oxygen Cloud. It's interesting that major storage vendors have not yet picked up on this.

Digging into how this works, you find that every vendor has a storage cloud with an HTTP-based object storage interface that is then exposed to the internet with secure protocols. Each interface is just different enough that there is no interoperability. In addition, each vendor develops, maintains, and distributes its own set of client "apps" for operating systems, smartphones, and tablets. A key feature is integration of authentication and authorization with the corporate LDAP directory, both for security and to reduce administrative overhead. Support for quotas and department chargeback is essential.

Looking down the road, however, this proliferation of proprietary clients and interfaces is already causing headaches for the poor device user, who may have several of these apps on their devices (all maxed out to their "free" limit). The burden on vendors is the development cost of creating and maintaining all those applications on all those different devices and operating systems. We've seen this before, however, in the early days of the Windows ecosystem. You used to have to purchase a separate FTP client for early Windows installations. Want NFS? A separate client purchase and install. Of course, now all those standard protocol clients are built into operating systems everywhere, and nobody thinks twice about it.

The same thing will eventually work itself out in the smart-device category as well, but not until a standard protocol emerges that all the applications can use (as FTP and NFS did in the Windows case). The SNIA's Cloud Data Management Interface (CDMI) is poised to meet this need as its adoption continues to accelerate. CDMI offers a RESTful HTTP object storage data path that is highly secure and has the features corporate IT departments need in order to protect and secure data while meeting business policies. It enables each smart device to have a single embedded client to multiple clouds – both public and private. No more proliferation of little icons all going to separate clouds.
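To make the "single embedded client" idea concrete, here is a hedged sketch of storing a document through CDMI's RESTful interface, using the third-party requests library; the endpoint, path, and credentials are hypothetical, and the CDMI specification is the normative reference:

```python
# Hedged sketch: creating a CDMI data object over HTTPS. Endpoint, path, and
# credentials are hypothetical; consult the CDMI spec for normative details.
import requests

ENDPOINT = "https://cloud.example.com/cdmi"      # hypothetical CDMI server

headers = {
    "X-CDMI-Specification-Version": "1.0.1",
    "Content-Type": "application/cdmi-object",
    "Accept": "application/cdmi-object",
}
body = {"mimetype": "text/plain", "value": "Q2 sales notes..."}

# LDAP-backed credentials, as discussed above (illustrative basic auth).
resp = requests.put(f"{ENDPOINT}/corporate/reports/q2.txt",
                    headers=headers, json=body, auth=("user", "secret"))
print(resp.status_code)                          # expect 201 Created
```

The same client code works against any conforming cloud, which is exactly the lock-in-free behavior the standard is meant to provide.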

What will drive this evolution? You – the corporate customer of these vendor offerings. You can simply ask the Enterprise DropBox vendors to "show me CDMI support in your roadmap." Educate your employees about choosing smart devices that support the CDMI standard natively. Only then will market forces compel the vendors to realize that there is no value in locking in their customers; instead, they can differentiate on the innovation and execution that separates them from their competitors. Adoption of a standard such as CDMI will actually accelerate the growth of the entire market as the existing friction between clouds gets ground down and smoothed out.