Does Your World Include Storage? Don’t Miss SDC!
Whether storage is already a main focus of your career or is just now advancing toward you, you’ll definitely want to attend the flagship event for storage developers – and for those involved in storage operations, decision making, and usage – SNIA’s 19th annual Storage Developer Conference (SDC), September 11-14, 2017, at the Hyatt Regency Santa Clara, California.
Architectural Principles for Networked Solid State Storage Access
There are many permutations of technologies, interconnects and application level approaches in play with solid state storage today. In fact, it is becoming increasingly difficult to reason clearly about which problems are best solved by various permutations of these. That’s why the SNIA Ethernet Storage Forum, together with the SNIA Solid State Storage Initiative, is hosting a live Webcast, “Architectural Principles for Networked Solid State Storage Access,” on June 2nd at 10:00 a.m. PT.
As our presenter, we are fortunate to have Doug Voigt, chair of the SNIA NVM Programming Technical Working Group and a member of the SNIA Technical Council. Doug will outline key architectural principles that may allow us to think about the application of networked solid state technologies more systematically, answering questions such as the following (a quick illustrative sketch of the first question appears after the list):
- How do applications see IO and memory access differently?
- What is the difference between a memory and an SSD technology?
- How do application and technology views permute?
- How do memory and network interconnects change the equation?
- What are persistence domains and why are they important?
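To give a taste of that first question, here is a small, hypothetical Python sketch (my own illustration, not material from the Webcast) contrasting the two access styles: explicit IO requests that copy data into an application buffer versus loads from a memory-mapped region. The file name is made up.

```python
import mmap

PATH = "example.dat"  # hypothetical data file, for illustration only

# IO-style access: the application issues explicit read() calls and works
# on copies of the data held in its own buffers.
with open(PATH, "rb") as f:
    buf = f.read(4096)              # one explicit IO request
    first_byte_via_io = buf[0]

# Memory-style access: the same file is mapped into the address space and
# individual bytes are touched directly, as if they were ordinary memory.
with open(PATH, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    first_byte_via_memory = m[0]    # a load, not a read() call

print(first_byte_via_io == first_byte_via_memory)   # True for the same file
```

Doug’s talk goes much further – into persistence domains and how memory and network interconnects change the picture – but the contrast between issuing IO requests and simply dereferencing memory is the starting point.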
I hope you’ll register today and join us on June 2nd for an hour that is sure to be insightful.
Find out How iSCSI is Evolving
The next Ethernet Storage Forum Webcast, “Evolution of iSCSI including iSER, iSCSI over RDMA Ethernet,” will focus on developments with iSCSI – the Internet Protocol standard for transferring SCSI commands across an Ethernet network, enabling hosts to link to storage devices wherever they may be. At this Webcast on May 24th, I will be joined by Fred Knight, Standards Technologist at NetApp, and Andy Banta, Storage Janitor at SolidFire/NetApp, who will discuss the evolution of iSCSI up to iSER, which takes advantage of Ethernet RDMA fabric technologies to enhance performance. Register now to hear:
- A brief history of iSCSI
- How iSCSI works
- IETF refinements to the specification
- Enhancing iSCSI performance with iSER
The Webcast will be live, so please bring your questions for Andy and Fred. We hope to see you there!
Ethernet Roadmap for Networked Storage Q&A
Almost 200 people attended our joint Webcast with the Ethernet Alliance: “The 2015 Ethernet Roadmap for Networked Storage.” We had a lot of great questions during the live event, but we did not have time to answer them all. As promised, we’ve compiled answers to all of the questions that came in. If you think of additional questions, please feel free to comment on this blog.
Q. What did you mean by parity of flash with HDD?
A. We were referring to the O’Reilly article in “Network Computing.” O’Reilly is predicting parity in BOTH capacity and price in 2016.
Q. When do we expect IEEE standards ratification for 25G speed?
A. 2016. You can see the exact schedule here.
Q. Do you envision the Enterprise, Cloud Providers, HPC, Financials getting rid of their 10/40GbE infrastructure and replacing that with 25/100GbE infrastructure in 2017? Will these customers deploy 100GbE/25GbE switch in the leaf layer in 2017?
A. Deployment will occur over a multi-year time span, if only because switch infrastructure is expensive to upgrade, as reflected in the Crehan Research forecast. New deployments will likely move to 25/100GbE as new switches with 100GbE downstream ports become available in 2016. Because the Cloud Service Providers are currently the most aggressive in driving new infrastructure purchases, they will represent the largest early volumes for 25/100GbE. Enterprise is still in the midst of the transition from 1GbE to 10GbE.
Q. What are some of the developments on spanning-tree derivatives vs. Dijkstra-based derivatives such as OSPF and FSPF for switches?
A. Beyond the scope of this presentation on Ethernet. Ethernet is defined by the IEEE for L1 and L2 in the ISO model. Your questions are at L3 and L4, which are handled by organizations like the IETF.
Q. With all the speeds possible, who is working on flow control?
A. Flow control at the 802.1 level is supported in the Layer 1/2 PHY and MAC by setting upper bounds on the delay through each layer, which allows higher layers to comprehend the delays and response times to pause frames. Each new speed and PHY in 802.3 is accompanied by delay constraint specifications to support this.
Q. Do you have an overlay graphic that shows the Ethernet RDMA roadmap? If so, is Ethernet storage the primary driver for that technology?
A. Beyond the scope of this presentation on Ethernet. Ethernet is defined by the IEEE for L1 and L2 in the ISO model. Your questions are at L3 and L4, which are handled by organizations like the IETF and the InfiniBand Trade Association.
Q. The adoption of faster and newer Ethernet always has to do with the costs of acquiring new technology. How long do you think it will take to adopt/acquire faster Ethernet in datacenters, now that development is happening much faster than in the last 20 years?
A. Please see the chart on slide 7 where Crehan Research predicts how fast the technology will diffuse into deployments.
Q. What do you expect as a cost comparison between Ethernet and InfiniBand going forward? Also, what work is being done to reduce latency?
A. Beyond the scope of this presentation. Latency is primarily a consequence of design methodologies and semiconductor process technology, and thus under the control of the silicon device manufacturers. Some vendors prioritize latency more than others.
Q. What’s the technical limitation as speeds go higher and higher?
A. A number of factors limit how much faster speeds can go, but the main problem is that materials attenuate signals more strongly at higher frequencies.
Q. Will 1GbE used for manageability purposes disappear from public cloud? If so, what is the expected time frame?
A. This is a choice for end users. Most equipment is managed on a separate network for security concerns, but users can eliminate these management networks at any time.
Q. What are the relative market size predictions for the expanding number of standards (25G, 50G, 100G, 200G, etc.)?
A. See the Crehan Research forecast in the presentation.
Q. What is the major difference between SMF & MMF for the not so initiated?
A. SMF has a 9um core while MMF has a 50um core. Different lasers are used for each fiber type; at speeds above 10GbE, MMF typically reaches 100 meters, while SMF reaches from 500m to 10km.
Q. Will 25G be available through both copper and fibre connectivity?
A. Yes. IEEE 802.3 work is currently underway to specify 25Gb/s over twinax (“direct attach copper”) to 5 meters, over printed circuit backplane up to ~1m, over twisted pair copper to 30m, and over multimode fiber to 100m. There is no technology barrier to 25G on SMF; a standards project to specify it simply has not started yet.
Q. This is interesting from a hardware viewpoint, but has nothing to do with storage yet. Are we going to get to how this relates to storage other than saying flash drives are fast and only Ethernet can keep up?
A. Beyond the scope of this presentation on Ethernet. Ethernet is defined by the IEEE for L1 and L2 in the ISO model. Your questions are directed at the higher layers. The key point of this webcast is that storage networking engineers need to pay much more attention to the Ethernet roadmap than they have historically, primarily because of NVM.
Q. How does “SFP28” fit in this mix? Is it required for 25G?
A. SFP28 connectors and modules are required for 25GbE because they give better performance than SFP+, which only works up to 10GbE.
Q. Can you provide the quick difference between copper & optical on speed & costs?
A. Copper and optical Ethernet links are usually standardized at the same speeds. The 400GbE project is not defining a copper link, but an active Direct Attach Cable (DAC) will probably support 400GbE. Cost depends on volume and many other factors and is beyond the scope of this presentation, but copper is usually a fraction of the cost of optical links.
Q. Do you think people will try to use multiple CAT 5e to get more aggregate bandwidth to the access points to avoid having to run Fibre to them?
A. IEEE is defining 2.5GBASE-T and 5GBASE-T to enable Cat5e to support faster wireless access points.
Q. When are higher speeds and PoE going to reach the point when copper based Ethernet will become a viable heat source for buildings thus helping the environment?
A. IEEE is defining 4-pair PoE to deliver at least 60W to end devices. You can find out more here.
Q. What are the use cases for 2.5Gb and 5.0Gb Base-T?
A. The leading use case for 2.5G/5GBASE-T is to provide the uplink for wireless LAN access points that support 802.11ac and future wireless technology. Wireless LAN technology has advanced to the point where >1Gb/s BW is needed upstream from the AP, and 2.5G/5G provide a higher speed uplink while preserving the user’s investment in Cat5e/Cat6 cabling.
Q. Why not have only CFP2 sockets right away with things disabled for lower speeds for all the intervening years leading to full-fledged CFP2?
A. CFP2 is defined for 100GbE, and 8 ports can be used on a 1U switch. 100GbE switches are shifting to QSFP28 so that 32 ports of 100GbE are supported in a 1U switch at low cost. CFP2 is much more expensive than QSFP28 and will not be used for lower speeds because of the high cost.
Relentless Advance Of Ethernet – And Ethernet Storage Networking
As one Cisco colleague once said to me, “After the nuclear holocaust, there will be two things left: cockroaches and Ethernet.” Not sure I like Ethernet’s unappealing company in that statement, but the truth it captures is that Ethernet, now entering its fifth decade (wow!), is ubiquitous and still continuing to advance at a breathtaking pace. And as it advances, it advances the capabilities of storage networking based on the Ethernet backbone, be it file storage like NFS or SMB or block storage like iSCSI or FCoE.
The most recent evidence of Ethernet’s continuing and relentless evolution is the 28 March 2014 announcement from the Ethernet Alliance congratulating the IEEE on the formation of its IEEE P802.3bs™ Task Force:
The new group is chartered with the development of the IEEE P802.3bs 400 Gigabit Ethernet (GbE) project, which will define Ethernet Media Access Control (MAC) parameters, physical layer specifications, and management parameters for the transfer of Ethernet format frames at 400 Gb/s. As the leading voice of the Ethernet ecosystem, the Ethernet Alliance is ideally positioned to support this latest move towards standardizing and advancing 400Gb/s technologies through efforts such as the launch of the Ethernet Alliance’s own 400 GbE Subcommittee.
Ethernet is in production today from multiple vendors at 40GbE and supports all storage protocols, including FCoE, at those speeds. Market forecasters expect the first 100GbE adapters to appear in 2015. Obviously, it is too early to forecast when 400GbE will arrive, but the train is assuredly in motion. And support for all the key storage protocols we see today on 10GbE and 40GbE will naturally extend to 100GbE and 400GbE. Jim O’Reilly makes similar points in his recent Information Week article, “Ethernet: The New Storage Area Network,” where he argues, “Ethernet wins on schedule, cost, and performance.”
Beyond raw transport speed, the rich Ethernet infrastructure offers techniques to catapult your performance even beyond the fastest single-pipe speed. The Ethernet world has established techniques for what is alternately referred to as link aggregation, channel bonding, or teaming. The levels available are determined by the capabilities provided in system software and what switch vendors will support. And those capabilities, in turn, are determined by what they respectively see as market demand. VMware, for example, today will let you bond eight 10GbE channels into a single 80GbE pipe. And that’s today with mainstream 10GbE technology.
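As a rough illustration of why aggregation helps (a sketch of my own, not VMware’s or any switch vendor’s algorithm), typical hash-based teaming pins each flow to one member link, so a single flow tops out at that member’s speed while many concurrent flows can fill the aggregate pipe. The IP addresses below are made up; 3260 and 2049 are the well-known iSCSI and NFS ports, and the eight 10GbE members mirror the VMware example above.

```python
import zlib

# Hypothetical bond: eight 10GbE member links, 80Gb/s aggregate.
MEMBER_LINK_SPEEDS_GBPS = [10] * 8

def pick_member_link(src_ip: str, dst_ip: str, dst_port: int) -> int:
    """Hash the flow identity to choose one member link (simplified)."""
    key = f"{src_ip}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % len(MEMBER_LINK_SPEEDS_GBPS)

print("aggregate bandwidth:", sum(MEMBER_LINK_SPEEDS_GBPS), "Gb/s")
print("iSCSI flow pinned to link", pick_member_link("10.0.0.1", "10.0.0.9", 3260))
print("NFS flow pinned to link", pick_member_link("10.0.0.2", "10.0.0.9", 2049))
```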
Ethernet will continue to evolve in many different ways to support the needs of the industry. Serving as a backbone for all storage networking traffic is just one of many such roles for Ethernet. In fact, precisely because of the increasing breadth of usage models Ethernet supports, it will also continue to offer cost advantages. The argument here is a very simple volume argument:
[Chart: Total Server-class Adapter and LOM Market Ports]
Enough said, except to also note that volume is what funds speed roadmaps.
Object Storage is a Big Deal (and Ethernet Matters)
A significant challenge in managing large amounts of data (or Big Data) is a lack of what I like to call “total data awareness”. It’s a situation where you know (or suspect) that you have data – you just can’t find it. When you think about many current IT environments, they are often not built for total data awareness. This starts with core elements of the IT infrastructure, such as file systems. Traditional file systems and access methods were not designed to store hundreds of millions or billions of files in a single namespace. This leads to admins storing data in multiple file systems, multiple shares, complex directory structures – not because the data should be logically organized in that way, but simply because of limitations in file system architectures. This issue becomes even more pressing when data sits in multiple locations, maybe even across on-premise and off-premise, cloud-based storage.
Is object-based storage the answer?
Think about how you find data on your computer. Do you navigate complex directory structures, trying to remember the file name of the file that hopefully has the data you are looking for – or have you moved on and just use search tools like Spotlight? Imagine you have hundreds of millions of files, scattered across dozens or hundreds of sites. How about just searching across these sites and immediately finding the data you are looking for? With object storage technology you have the ability to store data in objects, along with metadata that describes the object. Now you can just search for your data based on metadata tags (like a filename – or even better an account number and document type) – as well as manage data based on policies that leverage that metadata.
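To make that concrete, here is a tiny, hypothetical Python sketch of the access model (purely illustrative – the class and field names are mine, not any vendor’s API): every object carries a metadata dictionary, and retrieval is a query against that metadata rather than a walk through a directory tree.

```python
import uuid

class TinyObjectStore:
    """Toy in-memory object store: objects are addressed by ID and
    found via metadata queries, not by path names."""

    def __init__(self):
        self._objects = {}   # object_id -> (data, metadata)

    def put(self, data: bytes, **metadata) -> str:
        object_id = str(uuid.uuid4())
        self._objects[object_id] = (data, metadata)
        return object_id

    def search(self, **criteria):
        """Return IDs of objects whose metadata matches all criteria."""
        return [
            oid for oid, (_, md) in self._objects.items()
            if all(md.get(k) == v for k, v in criteria.items())
        ]

store = TinyObjectStore()
store.put(b"...scan...", account="12345", doc_type="invoice", site="london")
store.put(b"...scan...", account="12345", doc_type="contract", site="tokyo")

# Find data by what it is, not by where it lives.
print(store.search(account="12345", doc_type="invoice"))
```

A real object store adds replication, distributed indexing, and policy engines on top, but the access pattern – searching on metadata instead of navigating a namespace – is the same.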
However, this often means that you have to consider interfacing with your storage system through APIs, as opposed to NFS and CIFS – so your applications need to support whatever API your storage vendor offers.
CDMI to the rescue?
Today, storage vendors often use proprietary APIs. This means that application vendors would have to support a plethora of APIs from a number of different vendors, leading to a lack of commitment from application vendors to support more innovative, object-based storage architectures.
A key path to solving this issue is to leverage technology and standards that have been specifically developed to provide this idea of a single namespace for billions of data sets, across locations, and even across managed services that might reside off-premise.
Relatively new on the standards side you have CDMI (http://www.snia.org/cdmi), the Cloud Data Management Interface. CDMI is a standard developed by SNIA (http://www.snia.org), the Storage Networking Industry Association, with heavy involvement from a number of leading storage vendors. CDMI not only introduces a standard interface to ingest and retrieve data into and out of a large-scale repository, it also enables applications to easily manage this repository and where the data sits.
CDMI is the new NFS
Forgive the provocation, but when it comes to creating and managing large, distributed content repositories it quickly becomes clear that NFS and CIFS are not ideally suited for this use case. This is where CDMI shines, especially with an object-based storage architecture behind it that was built to support multi-petabyte environments with billions of data sets across hundreds of sites and accommodates retention policies that can reach to “forever”.
CDMI and NFS have something in common – Ethernet
One of the key commonalities between CDMI and NFS is that they both are ideally suited to be deployed in an Ethernet infrastructure. CDMI, specifically, is a RESTful HTTP interface, so it runs on standard Ethernet networks. Even for object storage deployments that don’t support CDMI, practically all of these multi-site, long-term repositories support HTTP (and thus Ethernet) through proprietary APIs based on REST or SOAP.
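To show how ordinary this looks on the wire, here is a rough Python sketch (using the requests library) of creating and reading back a CDMI data object. The endpoint URL, container path, and metadata fields are all made up for illustration; check the CDMI specification at snia.org/cdmi for the exact headers and version string your server expects.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical CDMI endpoint; a real deployment publishes its own base URL.
BASE = "https://cdmi.example.com/cdmi/repository"
CDMI_HEADERS = {
    "X-CDMI-Specification-Version": "1.1",   # confirm the version your server expects
    "Accept": "application/cdmi-object",
}

# Create a data object together with searchable metadata -- a plain HTTP PUT.
resp = requests.put(
    f"{BASE}/invoices/inv-0001",
    headers={**CDMI_HEADERS, "Content-Type": "application/cdmi-object"},
    json={
        "mimetype": "text/plain",
        "metadata": {"account": "12345", "doc_type": "invoice"},
        "value": "hello from CDMI",
    },
)
resp.raise_for_status()

# Read the object back, metadata and all -- a plain HTTP GET.
obj = requests.get(f"{BASE}/invoices/inv-0001", headers=CDMI_HEADERS).json()
print(obj["metadata"], obj["value"])
```

The point is less the particular headers than the transport: it is all ordinary HTTP over TCP/IP, which is exactly why an existing Ethernet network carries it without any special-purpose fabric.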
Why does this matter?
Ethernet infrastructure is a great foundation to run any number of workloads, including access to data that sits in large, multi-site content repositories that are based on object storage technologies. So if you are looking at object storage, chances are that you will be able to leverage existing Ethernet infrastructure.
Ethernet Storage Forum – 2012 Year in Review and What to Expect in 2013
As we come to a close of the year 2012, I want to share some of our successes and briefly highlight some new changes for 2013. Calendar year 2012 has been eventful and the SNIA-ESF has been busy. Here are some of our accomplishments:
- 10GbE – With virtualization and network convergence, as well as the general availability of LOM and 10GBASE-T cabling, this was a “breakout year” for 10GbE. In July, we published a comprehensive white paper titled “10GbE Comes of Age.” We then followed up with a Webcast, “10GbE – Key Trends, Predictions and Drivers.” We ran it live once in the U.S. and once in the U.K., and combined, the two sessions have been viewed by over 400 people!
- NFS – has also been a hot topic. In June we published a white paper “An Overview of NFSv4” highlighting the many improved features NFSv4 has over NFSv3. A Webcast to help users upgrade, “NFSv4 – Plan for a Smooth Migration,” has also been well received with over 150 viewers to date. A 4-part Webcast series on NFS is now planned. We kicked the series off last month with “Reasons to Start Working with NFSv4 Now” and will continue on this topic during the early part of 2013. Our next NFS Webcast will be “Advances in NFS – NFSv4.1 and pNFS.” You can register for that here.
- Flash – The availability of solid state devices based on NAND flash is changing the performance efficiencies of storage. Our September Webcast “Flash – Plan for the Disruption” discusses how Flash is driving the need for 10GbE and has already been viewed by more than 150 people.
We have also expanded our membership, welcoming Tonian and LSI to the ESF. We expect, with this new charter, to see an increase in membership participation as we drive incremental value and establish ourselves as a leading voice for Ethernet Storage.
As we move into 2013, we expect two hot trends to continue – the broader use of file protocols in datacenter applications, and the continued push toward datacenter consolidation with the use of Ethernet as a storage network. In order to better address these two trends, we have modified our charter for 2013. Our NFS SIG will be renamed the File Protocol SIG and will focus on promoting not only NFS, but also SMB / CIFS solutions and protocols. The iSCSI SIG will be renamed to the Storage over Ethernet SIG and will focus on promoting data center convergence topics with Ethernet networks, including the use of block and file protocols, such as NFS, SMB, FCoE, and iSCSI, over the same wire. This modified charter will allow us to have a richer conversation around storage trends relevant to your IT environment.
So, here is to a successful 2012, and excitement for the coming year.
Ethernet and IP Storage – Today’s Technology Enabling Next Generation Data Centers
I continue to believe that IP based storage protocols will be preferred for future data center deployments. The future of IT is pointing to cloud based architectures, whether internal or external. At the core of the cloud is virtualization. And I believe that Ethernet and IP storage protocols offer the greatest overall value to unlock the potential of virtualization and clouds. Will other storage network technologies work? Of course. But, I’m not talking about whether a network “works”. I’m suggesting that a converged network environment with Ethernet and IP storage offers the best combined value for virtual environments and cloud deployments. I’ve written and spoken about this topic before. And I will likely continue to do so. So, let me mention a few reasons to choose IP storage, iSCSI or NAS, for use in cloud environments.
Mobility. One of the many benefits of server virtualization is the ability to non-disruptively migrate applications from one physical server to another to support load balancing, failover or redundancy, and servicing or updating of hardware. The ability to migrate applications is best achieved with networked storage since the data doesn’t have to move when a virtual machine (VM) moves. But, the network needs to maintain connectivity to the fabric when a VM moves. Ethernet offers a network technology capable of migrating or reassigning network addresses, in this case IP addresses, from one physical device to another. When a VM moves to another physical server, the IP addresses move with it. IP based storage, such as iSCSI, leverages the built in capabilities of TCP/IP over Ethernet to migrate network port addresses without interruption to applications.
Flexibility. Most data centers require a mixture of applications that access either file or block data. With server virtualization, it is likely that you’ll require access to file and block data types on the same physical server for either the guest or parent OS. The ability to use a common network infrastructure for both the guest and parent can reduce cost and simplify management. Ethernet offers support for multiple storage protocols. In addition to iSCSI, Ethernet supports NFS and CIFS/SMB resulting in greater choice to optimize application performance within your budget. FCoE is also supported on an enhanced 10Gb Ethernet network to offer access to an existing FC infrastructure. The added flexibility to interface with existing SAN resources enhances the value of 10Gb as a long-term networking solution.
Performance. Cost. Ubiquity. Other factors that enhance Ethernet storage, and therefore IP storage adoption, include a robust roadmap, favorable economics, and near universal adoption. The Ethernet roadmap includes 40Gb and 100Gb speeds, which will support storage traffic and will be capable of addressing any foreseeable application requirements. Ethernet today offers considerable economic value as port prices continue to drop. Although GbE speeds offer sufficient bandwidth for most business applications, the cost per Gb of bandwidth with 10 Gigabit Ethernet (10GbE) is now lower than with GbE and therefore offers upside in cost and efficiency. Finally, nearly all new digital devices, including mobile phones, cameras, laptops, servers, and even some home appliances, are being offered with WiFi or wired Ethernet connectivity. Consolidating onto a single network technology means that the networking infrastructure to the rest of the world is essentially already deployed. How good is that?
Some may view moving to a shared network as kind of scary. The concerns are real. But Ethernet has been a shared networking platform for decades and continues to offer enhanced features, performance, and security to address its expanding range of applications. And just because it can carry other traffic doesn’t mean that it must. Physical isolation of Ethernet networks is just as feasible as with any other networking technology. Some may choose this option. Regardless, selecting a single network technology, even if not shared across all applications, can reduce not only capital expense, but also operational expense. Your IT personnel can be trained on a single networking technology versus multiple specialized, single-purpose networks. You may even be able to reduce maintenance and inventory costs to boot.
Customers looking to architect their network and storage infrastructure for today and the future would do well to consider Ethernet and IP storage protocols. The advantages are pretty compelling.