SMB 3.0 – Your Questions Asked and Answered

Last week we had a large and highly engaged audience at our live Webcast, “SMB 3.0 – New Opportunities for Windows Environments.” We ran out of time before we could answer every question during the event, so, as promised, here is a recap of attendees’ questions and our answers. The Webcast is now available on demand at http://snia.org/forums/esf/knowledge/webcasts. You can also download a copy of the presentation slides there.

Q. Have you tested SMB Direct over 40Gb Ethernet or using RDMA?

A. SMB Direct has been demonstrated over 40Gb Ethernet using TCP or RDMA, and over InfiniBand using RDMA.

Q. 100 iops, really?

A. If you look at the bottom right of slide 27 (Performance Test Results), you will see that the vertical axis is IOPS/sec (Normalized). This is a common method for comparing alternative storage access methods on the same storage server. I think we could have done a better job of making this clear by labeling the vertical axis as “IOPS (Normalized).”
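To make the normalization concrete, here is a tiny Python illustration; the numbers are invented for the example and are not the results from slide 27. Each access method’s measured IOPS is simply divided by a chosen baseline, so the chart compares relative efficiency on identical hardware rather than absolute throughput.

    # Hypothetical illustration of "IOPS (Normalized)"; the figures are made up.
    measured = {"FC": 120_000, "iSCSI": 110_000, "SMB3": 115_000}
    baseline = measured["FC"]                       # pick one access method as the reference
    normalized = {proto: iops / baseline for proto, iops in measured.items()}
    print(normalized)                               # e.g. FC 1.0, iSCSI ~0.92, SMB3 ~0.96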

Q. How does SMB 3.0 stack up against NFSv4.1 (with pNFS)?

A. That’s a deep question that probably deserves a webcast of its own. SMB 3 doesn’t have anything like pNFS. However, many Windows workloads don’t need the sophisticated distributed namespace that pNFS provides. If they do, the namespace is stitched together on the client using mounts and DFS-N.

Q. In the iSCSI ODX case, how does server1 (source) know about the filesystem structure being stored on the LUN (server2) i.e. how does it know how to send the writes over to the LUN?

A. The SMB server (source) does not care about the filesystem structure on the LUN (destination). The token mechanism only loosely couples the two systems: they must agree that the client has permission to do the copy, and then they perform the actual copy of a set of blocks. Updating the client file system metadata that represents the copied file on the LUN is part of the client workflow. The client drags and drops the file from the share to the mounted LUN; the client subsystem determines that ODX is available; the client modifies the file system metadata on the LUN, including the block maps, as part of the copy operation; ODX is invoked, and the servers simply move the blocks.
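For readers who want a more concrete picture of that workflow, the following minimal Python sketch mirrors the token exchange described above. The object and method names are invented for illustration; on Windows the underlying operations are issued through the FSCTL_OFFLOAD_READ and FSCTL_OFFLOAD_WRITE control codes.

    # Hypothetical sketch of the ODX token flow; names are illustrative only.
    def odx_copy(src_server, src_path, dst_server, dst_path, length):
        # 1. The client asks the source for an opaque token that represents
        #    the byte range to be copied (the "offload read").
        token = src_server.offload_read(src_path, offset=0, length=length)

        # 2. The client hands that token to the destination (the "offload write");
        #    the file data itself never flows through the client.
        written = 0
        while written < length:
            # 3. Source and destination move the blocks between themselves and
            #    report how much was actually transferred in this round.
            written += dst_server.offload_write(dst_path, token,
                                                src_offset=written,
                                                dst_offset=written)
        return written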

Q. Can ODX copies be within the same share or only between shares?

A. There is no restriction on ODX in this respect. The source and destination of the copy can be on the same share, on different shares, or even accessed via completely different protocols, as illustrated in the presentation.

Q. Does SMB 3 provide an API for integration with storage vendor snapshots other than MS VSS?

A. Each storage vendor has to support the Microsoft Remote VSS protocol, which is part of the SMB 3.0 protocol specification. In Windows Server 2012 and Windows 8, the VSS APIs were extended to support UNC share paths.

Q. How does SMB 3 compare to iSCSI rather than FC?

A. Please examine slide 27, which compares SMB 3, FC and iSCSI on the same storage server configuration.

Q. I have a question about SMB and CIFS. I know both are protocols used for sharing. But why has CIFS been adopted by most of the storage vendors? We are using CIFS shares on our NetApps, and I have seen that most of the other storage vendors are also using CIFS on their NAS devices.

A. There has been confusion between the terms “SMB” and “CIFS” ever since CIFS was introduced in the 90s. Fundamentally, the protocol that manages the data transfer between client and server is SMB. It always has been. IMO, CIFS was a marketing term created in response to Sun’s WebNFS. CIFS became popularized with most SMB server vendors calling their product a CIFS server. Usage is slowly changing, but if you have a CIFS server, it talks SMB.

Q. What is required on the client? Is this a driver with multi-path capability? Is this at the file system level in the client? What is needed in the transport layer for the failover?

A. No special software or driver is required on the client side, as long as it is running Windows 8 or a later operating environment.

Q. Are all these new features cross-platform or is it something only supported by Windows?

A. SMB 3 implementations from different storage vendors will each support some subset of these features.

Q. Are transition speeds greatly different for virtual (cloud-based) servers vs. non-virtual servers?

A. The speed of a transition, i.e., failover, depends on two steps: the time needed to detect the failure and the time needed to recover from that failure. While both a virtual and a physical server support transition, the speed can vary significantly due to different network configurations. See more in the next question.

Q. Is there latency as it fails over?

A. Traditionally, SMB timeouts were associated with lower-level, i.e., TCP, timeouts. Client behavior has varied over the years, but a rule of thumb was detection of a failure in 45 seconds; this error would then be passed up the stack to the user/application. With SMB 3 there is a new protocol called SMB Witness. A scale-out SMB server will include nodes providing SMB shares as well as nodes providing the Witness service. A client connects to both SMB and Witness. If the node hosting the SMB share fails, the Witness node will notify the client, indicating the new location for the SMB share. This can significantly reduce the time needed for detection. The scale-out SMB server can implement a proprietary mechanism to quickly detect node failure and trigger a Witness notification.
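A rough sketch of the client-side pattern is shown below. All of the names are hypothetical and purely illustrative; the real Witness protocol is specified in Microsoft’s [MS-SWN] document. The point is simply that the client holds a second session to a Witness node and reacts to a pushed notification instead of waiting for TCP timeouts.

    # Illustrative-only sketch of SMB Witness style failover handling.
    def connect_with_witness(cluster, share_name):
        smb = cluster.connect_smb(share_name)        # session to the node hosting the share
        witness = cluster.connect_witness()          # separate session to a Witness node

        def on_resource_change(new_node):
            # The Witness node tells us where the share now lives; reconnect
            # there instead of waiting for TCP/SMB timeouts to expire.
            smb.reconnect(new_node)

        witness.register(share_name, callback=on_resource_change)
        return smb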

Q. Sync or Async?

A. Whether state movement between server nodes is sync or async depends on vendor implementation. Typically all updated state needs to be committed to stable storage before returning completion to the client.

Q. How fast is this transition when passing state IDs between hosts?

A. The time taken for the transition includes the time needed to detect the failure of Client A and the time needed to re-establish things using Client B. The time taken for both is highly dependent on the nature of the clustered app as well as the supported use case.

Q. We already have FC (using VMware), why drop down to SMB?

A. If you are using VMware with FC, then moving to SMB is not an option. VMware supports the use of NFS for hypervisor storage but not SMB.

Q. What are the top applications on SMB 3.0?

A. Hyper-V, MS-SQL, IIS

Q. How prevalent is true “multiprotocol sharing” taking place with common datasets being simultaneously accessed via SMB and NFS clients?

A. True “multiprotocol sharing,” i.e., simultaneous access of a file by NFS and SMB clients, is extremely rare. The NFS and SMB locking models don’t lend themselves to that. Sharing of a multiprotocol directory is an important use case: users may want access to a common area from Linux, OS X, and Windows. But this is sequential access by one OS/protocol at a time, not all at once.

Q. Do we know growth % split between NFS and SMB?

A. There is no explicit industry tracker for the protocol split, and probably not much point in collecting one either, as the protocols aren’t really in competition. There is affinity among applications, OSes, and protocols – MS products tend to SMB (Hyper-V, SQL Server, …), and non-Microsoft products to NFS (VMware, Oracle, …). Cloud products at the point of consumption normally use HTTP/REST protocols.

SUSE Announces NFSv4.1 and pNFS Support

SUSE, founded in 1992, provides an enterprise-ready Linux distribution in the form of SLES, the SUSE Linux Enterprise Server. Late last month (October 22, 2013), SUSE announced that SLES 11 Service Pack 3 now includes Linux client support for NFSv4.1 and pNFS. This major distribution joins RedHat’s RHEL (RedHat Enterprise Linux) 6.4, giving us enterprise-quality Linux distributions with support for files-based NFSv4.1 and pNFS.

For the adventurous, block and object pNFS support is available in the upstream kernel. Most regularly maintained distributions based on a Linux 3.1 or later kernel (if not all distributions now – check with the supplier of your distribution if you’re unsure) should provide a files-, block-, and object-compliant client directly in the download.

The future of pNFS looks very exciting. We now have a fully pNFS-compliant Linux client and a number of commercial files, block, and object servers. Remember that although pNFS block and object support is available upstream, these enterprise distributions currently support only the pNFS files layout. For users who do not need pNFS block or object support and who require enterprise-quality support, SUSE and RedHat are an excellent solution.
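If you want to confirm which NFS mounts on a Linux client actually negotiated v4.1 (the minor version pNFS requires), a small script along these lines can help. It is a sketch that assumes the mount options, including vers=4.1, are visible in /proc/mounts, which is the case on recent kernels; whether a pNFS layout is actually in use still depends on the server granting one.

    # Minimal sketch: list NFS mounts that negotiated version 4.1 on Linux.
    def nfs41_mounts():
        found = []
        with open("/proc/mounts") as f:
            for line in f:
                device, mountpoint, fstype, options = line.split()[:4]
                if fstype.startswith("nfs") and "vers=4.1" in options.split(","):
                    found.append((device, mountpoint))
        return found

    if __name__ == "__main__":
        for device, mountpoint in nfs41_mounts():
            print(f"{device} mounted at {mountpoint} using NFSv4.1")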

Participate in the SSD Features Rating Project!

The SSSI has launched the SSD Features Rating project - intended to provide a better understanding of what users expect of their SSDs.

Understanding which attributes are most important in applications will help SSD manufacturers to design products more suitable for those applications, and will provide guidance to users looking for the best SSD for their application.

The first step of the project  is a survey which asks SSD users to rate the importance of various SSD attributes/features.  SSSI plans to announce the results of the survey at the Storage Visions conference on January 5-6, 2014.

The survey is now open and will close on November 28.  Survey results will be posted on this page after the conference.

Join our LinkedIn Group:  SSDs – What’s Important to You? to contribute to the discussion!

Please take the survey at  http://www.surveymonkey.com/s/LGWKWJL

Software Defined Networks for SANs?

Previously, I’ve blogged about the VN2VN (virtual node to virtual node) technology coming with the new T11 FC-BB-6 specification. In a nutshell, VN2VN enables an “all Ethernet” FCoE network, eliminating the requirement for an expensive Fibre Channel Forwarder (FCF) enabled switch. VN2VN dramatically lowers the barrier of entry for deploying FCoE. Host software is available to support VN2VN, but so far only one major SAN vendor supports VN2VN today. The ecosystem is coming, but are there more immediate alternatives for deploying FCoE without an FCF-enabled switch or VN2VN-enabled target SANs? The answer is that full FC-BB-5 FCF services could be provided today using Software Defined Networking (SDN) in conjunction with standard DCB-enabled switches, by essentially implementing those services in host-based software running in a virtual machine on the network. This would be an alternative “all Ethernet” storage network supporting Fibre Channel protocols. Just such an approach was presented at SNIA’s Storage Developers Conference 2013 in a presentation entitled “Software-Defined Network Technology and the Future of Storage” by Stuart Berman, Chief Executive Officer, Jeda Networks. (Note that neither approach is relevant to SAN networks using Fibre Channel HBAs, cables, and switches.)

Interest in SDN is spreading like wildfire. Several pioneering companies have released solutions for at least parts of the SDN puzzle, but kerosene hit the wildfire with the $1B acquisition of Nicira by VMware. Now a flood of companies are pursuing an SDN approach to everything from wide area networks to firewalls to wireless networks. Applying SDN technology to storage, or more specifically to Storage Area Networks, is an interesting next step. See Jason Blosil’s blog below, “Ethernet is the right fit for the Software Defined Data Center.”

To review, an SDN abstracts the network switch control plane from the physical hardware. This abstraction is implemented by a software controller, which can be a virtual appliance or virtual machine hosted in a virtualized environment, e.g., a VMware ESXi host. The benefits are many: the abstraction is often behaviorally consistent with the network being virtualized but simpler for a user to manipulate and manage. The SDN controller can automate the numerous configuration steps needed to set up a network, lowering the number of touch points required of a network engineer. The SDN controller is also network-speed agnostic, i.e., it can operate over a 10Gbps Ethernet fabric and seamlessly transition to operate over a 100Gbps Ethernet fabric. And finally, the SDN controller can be given an almost unlimited amount of CPU and memory resources in the host virtual server, scaling to a much greater degree than the control planes in switches, which are powered by relatively low-powered processors.

So why would you apply SDN to a SAN? One reason is SSD technology; storage arrays based on SSDs move the bandwidth bottleneck for the first time in recent memory into the network. An SSD array can load several 10Gbps links, overwhelming many 10G Ethernet fabrics. Applying a Storage SDN to an Ethernet fabric and removing the tight coupling of speed of the switch with the storage control plane will accelerate adoption of higher speed Ethernet fabrics. This will in turn move the network bandwidth bottleneck back into the storage array, where it belongs.
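A quick back-of-the-envelope calculation shows the mismatch; the array throughput figure below is purely illustrative.

    # Illustrative arithmetic: how many 10GbE links a fast SSD array can fill.
    link_10gbe = 10e9 / 8           # ~1.25 GB/s of raw line rate per 10GbE link
    array_throughput = 4e9          # hypothetical all-SSD array sustaining 4 GB/s
    print(array_throughput / link_10gbe)   # roughly 3.2 links' worth of raw bandwidth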

Another reason to apply SDN to storage networks is to help move certain application workloads into the Cloud. As compute resources increase in speed and consolidate, workloads require deterministic bandwidth, IOPS, and/or resiliency metrics that have not been well served by Cloud infrastructures. Storage SDNs would apply enterprise-level SAN best practices to the Cloud, enabling the migration of some of these applications and increasing the revenue opportunities of Cloud providers. The ability to provide a highly resilient, high-performance, SLA-capable Cloud service is a large market opportunity that is not cost-effectively realizable with today’s technologies.

So how can SDN technology be applied to the SAN? The most viable candidate would be to leverage a Fibre Channel over Ethernet (FCoE) network. An FCoE network already converges a high-performance SAN with the Ethernet LAN. FCoE is a lightweight and efficient protocol that implements flow control in the switch hardware, as long as the switch supports Data Center Bridging (DCB). There are plenty of standard “physical” DCB-enabled Ethernet switches to choose from, so a Storage SDN would give the network engineer freedom of choice. An FCoE-based SDN would create a single unified, converged, and abstracted SAN fabric. To create this Storage SDN you would need to extract and abstract the FCoE control plane from the switch, removing any dependency on a physical FCF. This would include the critical global SAN services such as the Name Server table, the Zoning table, and State Change Notification. Because it contains the global SAN services, the Storage SDN would also have to communicate with initiators and targets, something a conventional SDN controller does not do. Since FCoE is a network-centric technology, i.e., configuration is performed from the network, a Storage SDN can automate large SANs from a single appliance. The Storage SDN should be able to create deterministic, end-to-end Ethernet fabric paths thanks to the global view of the network that an SDN controller typically has.
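To make the idea of hosting those global services in software more concrete, here is a deliberately simplified, hypothetical sketch (not Jeda Networks’ design or any product’s implementation) of a name-server table, zoning, and state-change notification living inside an SDN-style controller.

    # Hypothetical, simplified sketch of the global SAN services a Storage SDN
    # controller would host in software; names and structure are illustrative.
    class StorageSdnController:
        def __init__(self):
            self.name_server = {}      # WWPN -> assigned fabric address (FC_ID)
            self.zones = []            # each zone: a set of WWPNs allowed to communicate
            self.subscribers = []      # callbacks for state-change notifications

        def register(self, wwpn, fc_id):
            # Fabric login: record the endpoint and tell interested parties,
            # analogous in spirit to name-server registration plus RSCN delivery.
            self.name_server[wwpn] = fc_id
            self._notify(f"login: {wwpn}")

        def zoned_together(self, wwpn_a, wwpn_b):
            # Zoning check consulted before allowing initiator/target sessions.
            return any(wwpn_a in zone and wwpn_b in zone for zone in self.zones)

        def _notify(self, event):
            for callback in self.subscribers:
                callback(event)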

A Storage SDN would also be network-speed agnostic; since Ethernet switches already support 10Gbps, 40Gbps, and 100Gbps, this would enable extremely fast SANs not currently attainable. Imagine the workloads, applications, and consolidation of physical infrastructure possible with a 100Gbps Storage SDN SAN, all controlled by a software FCoE virtual server connecting thousands of servers with terabytes of SSD storage. SDN technology is bursting with solutions around LAN traffic; now we need to tie in the SAN and keep it as non-proprietary to the hardware as possible.

Q&A Summary from the SNIA-ESF Webcast – “How VN2VN Will Help Accelerate Adoption of FCoE”

Our VN2VN Webcast last week was extremely well received. The audience was big and highly engaged. Here is a summary of the questions attendees asked and answers from my colleague, Joe White, and me. If you missed the Webcast, it’s now available on demand.

Question #1:

We are an extremely large FC shop with well over 50K native FC ports. We are looking to bridge this to the FCoE environment for the future. What does VN2VN buy the larger company? Seems like SMB is a much better target for this.

Answer #1: It’s true that VN2VN is not the best choice for large-port-count SAN deployments, but the split is not strictly along SMB/large-enterprise lines. Many enterprises have multiple smaller special-purpose SANs, or satellite sites with small SANs, and VN2VN can be a good choice for those parts of a large enterprise. Also, VN2VN can be used in conjunction with VN2VF to provide high-performance local storage, as we described in the webcast.

Question #2: Are there products available today that incorporate VN2VN in switches and storage targets?

Answer #2: Yes. A major storage vendor announced support for VN2VN at Interop Las Vegas 2013. As for switches, any switch supporting Data Center Bridging (DCB) will work. Most, if not all, new datacenter switches support DCB today. Recommended also is support in the switch for FIP Snooping, which is also available today.

Question #3: If we have an iSNS kind of service for VN2VN, do you think VN2VN can scale beyond the current anticipated limit?

Answer #3: That is certainly possible. This sort of central service does not exist today for VN2VN and is not part of the T11 specifications so we are talking in principle here. If you follow SDN (Software Defined Networking) ideas and thinking then having each endpoint configured through interaction with a central service would allow for very large potential scaling. Now the size and bandwidth of the L2 (local Ethernet) domain may restrict you, but fabric and distributed switch implementations with large flat L2 can remove that limitation as well.

Question #4: Since VN2VN uses different FIP messages to do login, a unique FSB implementation must be provided to install ACLs. Have any switch vendors announced support for a VN2VN FSB?

Answer #4: Yes, VN2VN FIP Snooping bridges will exist. It only requires a small addition to the filter/ACL rules on the FSB Ethernet switch to cover VN2VN. Small software changes are needed to cover the slightly different information, but the same logic and interfaces within the switch can be used, and the way the ACLs are programmed is the same.
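As a purely conceptual illustration (not any vendor’s implementation), the enforcement an FSB applies after snooping FIP can be pictured like this: frames carrying the FCoE EtherType (0x8906) are permitted only from port/MAC pairs that completed a snooped FIP login, and VN2VN simply changes which FIP exchanges are snooped.

    # Conceptual sketch of FIP-snooping-derived ACL enforcement; illustrative only.
    FCOE_ETHERTYPE = 0x8906

    class FipSnoopingBridge:
        def __init__(self):
            self.logged_in = set()     # (ingress_port, source_mac) pairs seen in FIP logins

        def on_fip_login(self, port, mac):
            # Called when the snooped FIP exchange (VN2VF or VN2VN style) completes.
            self.logged_in.add((port, mac))

        def permit(self, port, mac, ethertype):
            if ethertype != FCOE_ETHERTYPE:
                return True            # non-FCoE traffic is outside this ACL's scope
            return (port, mac) in self.logged_in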

Question #5: Broadcasts are a classic limiter in Layer 2 Ethernet scalability. VN2VN control is very broadcast intensive, on the default or control plane VLAN. What is the scale of a data center (or at least data center fault containment domain) in which VN2VN would be reliably usable, even assuming an arbitrarily large number of data plane VLANs? Is there a way to isolate the control plane broadcast traffic on a hierarchy of VLANs as well?

Answer #5: VLANs are an integral part of VN2VN within the T11 FC-BB-6 specification. You can configure the endpoints (servers and storage) to do all discovery on a particular VLAN or set of VLANs. You can use VLAN discovery for some endpoints (mostly envisioned as servers) to learn the VLANs on which to do discovery from other endpoints (mostly envisioned as storage). The use of VLANs in this manner will contain the FIP broadcasts to the FCoE-dedicated VLANs. VN2VN is initially envisioned as enabling small to medium SANs of about a couple hundred ports, although in principle the addressing, combined with login controls, allows for much larger scaling.

Question #6: Please explain the difference between VN2VN and VN2VF.

Answer #6: The currently deployed version of FCoE, T11 FC-BB-5, requires that every endpoint, or Enode in FC-speak, connect with the “fabric,” a Fibre Channel Forwarder (FCF) more specifically. That’s VN2VF. What FC-BB-6 adds is the capability for an endpoint to connect directly to other endpoints without an FCF between them. That’s VN2VN.

Question #7: In the context of VN2VN, do you think it places a stronger demand for QCN to be implemented by storage devices now that they are directly (logically) connected end-to-end?

Answer #7: The QCN story is the same for VN2VN, VN2VF, I/O consolidation using an NPIV FCoE-FC gateway, and even high-rate iSCSI. Once discovery completes and the sessions (FLOGI + PLOGI/PRLI) are set up, we are dealing with the inherent traffic pattern of the applications and storage.

Question #8: Your analogy that VN2VN is like private loop is interesting. But it does make VN2VN sound like a backward step – people stopped deploying AL tech years ago (for good reasons of scalability etc.). So isn’t this just a way for vendors to save development effort on writing a full FCF for FCoE switches?

Answer #8: This is a logical private loop with a lossless packet switched network for connectivity. The biggest issue in the past with private or public loop was sharing a single fiber across many devices. The bandwidth demands and latency demands were just too intense for loop to keep up. The idea of many devices addressed in a local manner was actually fairly attractive to some deployments.

Question #9: What is the sweet spot for VN2VN deployment, considering iSCSI allows direct initiator and target connections, and most networks are IP-enabled?

Answer #9: The sweet spot of VN2VN FCoE is SMB or dedicated SAN deployments where FC-like flow control and data flow are needed for up to a couple hundred ports. You could implement this using iSCSI with PFC flow control, but if TCP/IP is not needed thanks to the PFC lossless priorities, why pay the TCP/IP processing overhead? In addition, the FC encapsulation/serialization and the FC exchange protocols and models are preserved, if that is important or useful to the applications. The configuration and operation of a local SAN using the two models is comparable.

Question #10: Has iSCSI become irrelevant?

Answer #10: Not at all. iSCSI serves a slightly different purpose from FCoE (including VN2VN). iSCSI allows connection across any IP network, and thanks to TCP/IP you get end-to-end, in-order, reliable delivery of data. The drawback is that with high loss rates, burst drops, or heavy congestion, TCP/IP performance will suffer due to congestion avoidance and retransmission timeouts (“slow starts”). So the choice really depends on the data flow characteristics you are looking for, and there is not a one-size-fits-all answer.

Question #11: Where can I watch this Webcast?

Answer #11: The Webcast is available on demand on the SNIA website here.

Question #12: Can I get a copy of these slides?

Answer #12: Yes, the slides are available on the SNIA website here.

SSSI Sheds Light on Solid State Storage at Flash Memory Summit

If you attended Flash Memory Summit along with thousands of other business professionals, you soon learned that solid state keynotes, breakouts, and show floor booths, while informative, could be mystifying.  Fortunately, the SNIA Solid State Storage Initiative (SSSI) was at FMS to provide enlightenment in the form of a SSS Reception, four new publications on SSS, and three SSSI member demonstrations.

SSSI’s second annual SSS Reception was attended by over 90 individuals and featured presentations by SSSI Governing Board members Paul Wassenberg of Marvell, Walt Hubis of Fusion-io, and Eden Kim of Calypso Systems on SSSI key programs and technical work. SSSI Education Chair Tom Coughlin of Coughlin Associates delivered a market update on ubiquitous flash memory. All SSSI members are eligible and encouraged to join specification development, education, and outreach programs, and new companies are welcome to join SSSI activities.

At the SSSI Booth, attendees snapped up new white papers and Tech Notes authored by SSSI members.  These papers are complimentary to all interested individuals and available on the SSSI education page.

  • The PCI Express (PCIe) 101 – An Overview of Standards, Markets, and Performance white paper surveys the broad landscape of emerging high performance PCIe storage and the implications of revolutionary applications of these new architectures.  Thirteen members of the SNIA SSSI PCIe SSD Committee representing eleven SNIA and SSSI member companies contributed their technical expertise to this paper, which covers standards, programming models for non-volatile memory, the PCIe SSD market, and PCIe SSD performance.
  • The SSD Performance – A Primer white paper, authored by SNIA SSS Technical Work Group chair Eden Kim of Calypso Systems, provides an introduction to solid state drive performance, evaluation, and test.  As noted in the Foreword by SSSI Founder and 2008-2010 SSSI Chair Phil Mills of IBM, “this paper is an excellent tutorial on the performance of solid state drives, which covers this topic in a very easy to understand way, yet provides detailed technical information that the reader can either dig into for a better understanding, or simply skip without missing the main points”.
  • A new PTS User Guide Tech Note delivers the hows and whys of the SNIA Solid State Storage Performance Test Specification. Authored by SNIA SSSI members Eden Kim of Calypso Systems and Chuck Paradon of HP, this Tech Note provides an easy-to-understand, step-by-step guide to using the SNIA SSS Performance Test Specification (PTS) test methodologies and tests. The Tech Note discusses four basic PTS 1.1 tests – Write Saturation (WSAT), IOPS, Throughput (TP), and Response Time (or Latency) – as updated per the SNIA draft PTS-E version 1.1.
  • The SSSI Workload I/O Capture Program (WIOCP) FAQ, authored by SNIA SSSI member Tom West of hyperI/O LLC, gives details on this project undertaken by the SNIA SSSI to collect I/O operation performance metrics. These empirical metrics reflect the actual I/O operation activity performed during normal, everyday application/workload usage spanning both consumer/client and enterprise systems.

Also in the booth, the Media, Entertainment and Scientific Storage (MESS) Meetup group chatted with end users, and SSSI members exhibited new solid state storage solutions for enterprise markets:

  • BitMicro  presented a high performance MaxIO drive incorporating BiTMICRO’s ultra fast Talino Quad Core ASIC controller, which integrates embedded processors with a high speed multi-bus design to achieve performance far beyond legacy solid state designs.
  • Fastor Systems unveiled a NVMe compliant PCIe software defined storage device (SDS), or post-controller SSD, in which a de-coupled control & data plane together with a non-blocking fabric and message based architecture provide both high throughput and low latency to address the needs of today’s hyperscale datacenters.
  • Micron showcased a P420m PCIe SSD featuring multilevel (MLC) NAND technology.

Ethernet is the right fit for the Software Defined Data Center

“Software Defined” is a label used to describe advances in network and storage virtualization that promise to greatly improve infrastructure management and accelerate business agility. Network virtualization itself isn’t a new concept and has been around in various forms for some time (think VLANs). But the commercialization of server virtualization seems to have paved the path to extend virtualization throughout the data center infrastructure, making the data center an IT environment that delivers dynamic and even self-deployed services. The networking stack has been getting most of the recent buzz, and I’ll focus on that portion of the infrastructure here.

What is driving this trend in data networking? As I mentioned, server virtualization has a lot to do with the new trend. Virtualizing applications makes a lot of things better, and makes some things more complicated. Server virtualization enables you to achieve much higher application density in your data center. Instead of a one-to-one relationship between the application and server, you can host tens of applications on the same physical server. This is great news for data centers that run into space limitations or for businesses looking for greater efficiency out of their existing hardware.

The challenge, however, is that these applications aren’t stationary. They can move from one physical server to another. And this mobility can add complications for the networking guys. Networks must be aware of virtual machines in ways that they don’t have to be aware of physical servers. For network admins of yesteryear, their domain was a black box of “innies” and “outies”. Gone are the days of “set it and forget it” in terms of networking devices. Or is it?

Software defined networks (aka SDN) promise to greatly simplify the network environment. By decoupling the control plane from the data plane, SDN allows administrators to treat a collection of networking devices as a single entity and then use policies to configure and deploy networking resources more dynamically. Additionally, moving to a software defined infrastructure means that you can move control and management of physical devices to different applications within the infrastructure, which gives you the flexibility to launch and deploy virtual infrastructures in a more agile way.

Software defined networks aren’t limited to a specific physical transport. The theory, and I believe the implementation, will be universal in concept. However, the more that hardware can be deployed in a consistent manner, the greater the flexibility for the enterprise. As server virtualization becomes the norm, servers hosting applications with mixed protocol needs (block and file) will be more common. In this scenario, Ethernet networks offer advantages, especially as software defined networks come into play. Following is a list of some of the benefits of Ethernet in a software defined network environment.

Ubiquitous

Ethernet is a very familiar technology and is present in almost every compute and mobile device in an enterprise. From IP telephony to mobile devices, Ethernet is a commonly deployed networking standard and, as a result, is very cost effective. The number of devices and engineering resources focused on Ethernet drives the economics in favor of Ethernet.

Compatibility

Ethernet has been around for a long time and has proven to “just work.” Interoperability is really a non-issue, and this extends to inter-vendor interoperability. Some other networking technologies require same-vendor components throughout the data path. Not the case with Ethernet. With rare exceptions, you can mix and match switch and adapter devices within the same infrastructure. Obviously, best practices would suggest that sticking with a single vendor within the switch infrastructure simplifies the environment with a common set of management tools, features, and support plans. But that might change with advances in SDN.

Highly Scalable

Ethernet is massively scalable. The use of routing technology allows for broad geographic networks. The recent adoption of IPv6 extends IP addressing way beyond what is conceivable at this point in time. As we enter the “internet of things” period in IT history, we will not lack for network scale. At least, in theory.

Overlay Networks

Overlay networks allow you to extend L2 networks beyond traditional geographic boundaries, as with hybrid clouds. Two proposed standards are under review by the Internet Engineering Task Force (IETF): Virtual eXtensible Local Area Network (VXLAN) from VMware and Network Virtualization using Generic Routing Encapsulation (NVGRE) from Microsoft. Both combine L2 and L3 technologies to carry L2 traffic over an L3 network. You can think of overlay networks as essentially a generalization of a VLAN. Unlike with routing, overlay networks permit you to retain visibility and accessibility of your L2 network across larger geographies.
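To show how lightweight the encapsulation is, the sketch below builds the 8-byte VXLAN header defined in RFC 7348: a flags byte whose “I” bit marks a valid VNI, reserved bytes, and a 24-bit VXLAN Network Identifier. The encapsulated frame is then carried in an outer UDP/IP packet, conventionally to UDP port 4789.

    # Build the 8-byte VXLAN header from RFC 7348 for a given VNI.
    VXLAN_UDP_PORT = 4789

    def vxlan_header(vni: int) -> bytes:
        flags = 0x08                   # "I" flag set: the VNI field is valid
        return bytes([
            flags, 0x00, 0x00, 0x00,   # flags byte + 3 reserved bytes
            (vni >> 16) & 0xFF,        # 24-bit VXLAN Network Identifier
            (vni >> 8) & 0xFF,
            vni & 0xFF,
            0x00,                      # final reserved byte
        ])

    encapsulated = vxlan_header(5001) + b"<original Ethernet frame>"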

Unified Protocol Access

Ethernet has the ability to support mixed storage protocols, including iSCSI, FCoE, NFS, and CIFS/SMB. Support for mixed or unified environments can be more efficiently deployed using 10 Gigabit Ethernet (10GbE) and Data Center Bridging (required for FCoE traffic) as IP and FCoE traffic can share the same ports. 10GbE simplifies network deployment as the data center can be wired once and protocols can be reconfigured with software, rather than hardware changes.

Virtualization

Ethernet does very well in virtualized environments. IP addresses can easily be abstracted from physical ports to facilitate port mobility. As a result, networks built on an Ethernet infrastructure leveraging network virtualization can benefit from increased flexibility and uptime, as hardware can be serviced or upgraded while applications remain online.

Roadmap

For years, Ethernet has steadily increased in performance, but the transition from Gigabit Ethernet to 10 Gigabit Ethernet was a slow one. Delays in connector standards complicated matters. But those days are over; the roadmap remains robust and product advances are accelerating. We are starting to see 40GbE devices on the market today, and will see 100GbE devices in the near future. As more and more data traffic is consolidated onto a shared infrastructure, these performance increases will provide the headroom for more efficient infrastructure deployments.

Some of the benefits listed above can be found with other networking technologies. But Ethernet offers a unique combination of technical and economic value across a broad ecosystem of vendors that makes it an ideal infrastructure for next-generation data centers. And as these data centers are designed more and more around application services, software will lead the conversation. To enable the vision of a software defined infrastructure, there is no better network technology than Ethernet.

Join the SSSI at Flash Memory Summit August 12-15 in Santa Clara CA!

SSSI returns to the Flash Memory Summit in booth 808, featuring updates on new tests in the SNIA Solid State Storage Performance Test Specification-Enterprise 1.1, NVM programming, and Workload I/O Capture Program (WIOCP) activities; new Tech Notes and white papers, including a PTS User Guide Tech Note, a PCIe SSD 101 white paper, and a Performance Primer white paper; and PCIe SSD demonstrations from SSSI members BitMicro, Fastor, and Micron.

All current SSSI members attending FMS, and individuals from companies interested in the SSSI and its activities, are cordially invited to the SSSI Solid State Storage Reception on Monday evening, August 12, from 5:30 pm – 7:00 pm in Room 209-210 at the Santa Clara Convention Center. At the reception, SSSI Education Chair Tom Coughlin of Coughlin Associates will provide an overview of the SSD market, and SSSI Chair Paul Wassenberg of Marvell will discuss SSD performance. SSSI Vice Chair Walt Hubis of Fusion-io will discuss SSSI programs, including PTS, NVM Programming, Workload I/O Capture, and PCIe SSD. Refreshments, table displays, and an opportunity drawing for SSDs provided by SSSI members Intel, Micron, and OCZ will be featured.

FMS conference activities begin August 13, and the agenda can be found here.  SSSI members speaking and chairing panels include:

Tuesday August 13

4:35 pm – Paul Wassenberg of Marvell on Standards

Wednesday August 14

8:30 am – Eden Kim and Easen Ho of Calypso Testers – PCIe Power Budgets, Performance, and Deployment

9:50 am – Eden Kim and Easen Ho of Calypso Testers -  SNIA Solid State Storage Performance Test Specification

3:10 pm - Walt Hubis of Fusion-io – Revolutionizing Application Development Using NVM Storage Software

3:10 pm – Easen Ho of Calypso Testers –  SSD Testing Challenges

4:30 pm – Paul von Behren of Intel -  SNIA Tutorial: SNIA NVM Programming Model:  Optimizing Software for Flash

Thursday August 15

3:10 pm – Jim Pappas of Intel – PCI Express and  Enterprise SSDs

3:10 pm – Jim Handy of Objective Analysis – Market Research

An open “Chat with the Experts” roundtable session on Tuesday, August 13 at 7:00 pm will feature Jim Pappas of Intel at a Standards table, Eden Kim of Calypso Testers at an SSD Performance table, Easen Ho of Calypso Testers at a Testing table, and Paul Wassenberg of Marvell at a SATA Express table.

The Media, Entertainment and Scientific Storage (MESS) group will hold its August “Meetup” at the Open Chat with the Experts, and will also be located in SSSI Booth 808 for further discussions.

Exhibit admission is complimentary until August 8.  SNIA and SSSI members and colleagues can receive a $100 discount on either the 3-day conference or the 1-day technical program using the code SNIA at www.flashmemorysummit.com.

 

PCI Express Coming to an SSD Near You

There’s been a lot of press recently about what’s going on in the world of storage regarding the utilization of PCIe as a device interface.  Of course, PCIe has been around a long time as a system bus, while SATA and SAS have been used as storage device interfaces.  But with SSDs getting faster with every new product release, it’s become difficult for the traditional interfaces to keep up.

Some folks figure that PCIe is the solution to that problem. PCIe 3.0 operates at about 1GB/s per lane, which is faster than 600MB/s SATA. And with PCIe, it’s possible to add lanes to increase the overall bandwidth. The SATA Express spec from SATA-IO defines a client PCIe device as having up to 2 lanes of PCIe, which brings the speed up to 2GB/s. Enterprise SSDs will have up to 4 lanes of PCIe, which provides 4GB/s of bandwidth.
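The rough arithmetic behind those figures, adjusting raw line rate for encoding overhead and ignoring packet and protocol overhead, looks like this:

    # Approximate per-lane payload bandwidth from line rate and encoding overhead.
    sata3      = 6e9 * (8 / 10) / 8        # 6 Gb/s, 8b/10b encoding  -> 600 MB/s
    pcie3_lane = 8e9 * (128 / 130) / 8     # 8 GT/s, 128b/130b coding -> ~985 MB/s (~1 GB/s)

    print(sata3 / 1e6, "MB/s for SATA 3")
    print(pcie3_lane / 1e6, "MB/s per PCIe 3.0 lane")
    print(2 * pcie3_lane / 1e9, "GB/s for a 2-lane client device")
    print(4 * pcie3_lane / 1e9, "GB/s for a 4-lane enterprise device")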

There was also some work on the software side that needed to be done to support PCIe devices, including NVM Express and SCSI Over PCIe (SOP), both of which are well underway.

If you are interested in knowing more about PCIe SSDs, keep an eye on our Education page, where, sometime during the week of August 5, we will be posting a new white paper on this topic.

New Solid State Storage Performance Test Specification Available for Public Review

A new revision of the Enterprise Solid State Storage Performance Test Specification (PTS–E 1.1) is now available for public review. The PTS is an industry standard test methodology and test suite for the comparison of SSD performance at the device level. The PTS–E 1.1 updates the PTS–E 1.0 released in 2011 and adds tests with specific types of workloads common in the enterprise environment. The PTS–E 1.1 may be downloaded at http://www.snia.org/publicreview.

“The PTS–Enterprise v1.1 provides both standard testing (IOPS, Throughput, Latency, and Write Saturation) as well as new tests for specific workloads commonly found in Enterprise environments,” said Eden Kim, Chair of the SSS Technical Work Group. “These new tests also allow the user to insert workloads into the new tests while maintaining the industry standard methodology for pre conditioning and steady state determination.”
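As a rough illustration of the steady-state idea (a sketch of the concept only, not the normative PTS definition, and the 20%/10% limits reflect my reading of the specification), the check amounts to verifying that, over a fixed measurement window, the measured variable stays within a narrow excursion band around its average and that its best-fit trend line is nearly flat.

    # Rough sketch of a PTS-style steady-state check over a measurement window.
    # The 20%/10% limits are illustrative; defer to the PTS itself for the
    # normative definition.
    def is_steady_state(window, excursion_limit=0.20, slope_limit=0.10):
        n = len(window)
        avg = sum(window) / n

        # Data excursion: all rounds must stay within the band around the average.
        if max(window) - min(window) > excursion_limit * avg:
            return False

        # Slope excursion: the least-squares trend across the window must be small.
        xs = list(range(n))
        x_mean = sum(xs) / n
        slope = sum((x - x_mean) * (y - avg) for x, y in zip(xs, window)) / \
                sum((x - x_mean) ** 2 for x in xs)
        return abs(slope) * (n - 1) <= slope_limit * avg

    print(is_steady_state([41000, 40200, 40800, 40500, 40600]))   # prints True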

The new tests target workloads common to OLTP, VOD, VM, and other enterprise applications, while paying special attention to the optimization of drives for varying demand intensity, maximum IOPS, and minimal response times and latencies.

For more information, visit www.snia.org/forums/sssi