SNIA ESF Sponsored Webinar on Advances in NFSv4

Good news.

The SNIA Ethernet Storage Forum (ESF) will be presenting a live webinar on NFS version 4, including version 4.1 (RFC 5661), as well as a glimpse of what is being considered for version 4.2. The expert on the topic will be Alex McDonald, SNIA NFS SIG co-chair. Gary Gumanow, ESF Board Member, will moderate the webinar.

The webinar will begin at 8am PT / 11am ET. You can register for this BrightTalk-hosted event here: http://www.brighttalk.com/webcast/663/41389.

The webinar will be interactive, so feel free to ask questions of the guest speaker; questions will be addressed live during the webinar. Answers to any questions we can’t get to, along with the rest of the Q&A, will be posted to the SNIA ESF blog after the event.

So, get registered. We’ll see you on the 29th.

What’s the story with NFSv4? Four things you need to know.

Experts from SNIA’s Ethernet Storage Forum will discuss the key drivers for moving to NFSv4. For years, NFSv3 has been the practical standard of choice. But times have changed, and significant advances in the NFS standards are ready to address today’s challenges around massive scale, cloud deployments, performance, and management.

Join our host Steve Abbott from Emulex and our content expert, Alex McDonald from NetApp, as these SNIA representatives discuss the reasons to start planning for deployment with NFSv4.

Date: November 8, 2011
Time: 11am ET

Register for a live webinar. http://www.brighttalk.com/webcast/663/35415

Beyond Potatoes – Migrating from NFSv3

“It is a mistake to think you can solve any major problems just with potatoes.”
Douglas Adams (1952-2001, English humorist, writer and dramatist)

While there have been many advances and improvements to NFS over the last decade, some IT organizations have elected to continue with NFSv3 – like potatoes, it’s the staple filesystem protocol that just about any UNIX administrator understands.

Although adequate for many purposes, and a familiar and well-understood protocol, NFSv3 has become increasingly difficult to justify in a modern datacenter. For example, NFSv3 makes promiscuous use of ports, which makes it unsuitable, for a variety of security reasons, for use over a wide area network (WAN). In addition, increased server and client bandwidth demands and the improved functionality of Network Attached Storage (NAS) arrays have outstripped NFSv3’s ability to deliver high throughput.
NFSv4 and the minor versions that follow it are designed to address many of the issues that NFSv3 poses. NFSv4 also includes features intended to enable its use in global wide area networks, and to improve the performance and resilience of NAS:

  • Firewall-friendly single port operations
  • Internationalization support
  • Replication and migration facilities
  • Mandatory support for strong RPC security flavors that depend on cryptography, with support for access control that is compatible with both UNIX® and Windows®
  • Use of character strings instead of integers to represent user and group identifiers
  • Advanced and aggressive cache management features with delegations
  • Trunking (with pNFS, or parallel NFS, in NFSv4.1)

In April 2003, the Network File System (NFS) version 4 protocol was ratified as an Internet standard, described in RFC 3530, superseding NFS version 3 (NFSv3, specified in RFC 1813). Since the ratification of NFSv4, further advances have been made to the standard, notably NFSv4.1 (described in RFC 5661 and ratified in January 2010), which added several new features, including parallel NFS (pNFS). Further work is currently underway in the IETF on NFSv4.2.

Delegations with NFSv4

In NFSv3, clients have to function as if there is contention for the files they have opened, even though this is often not the case. As a result of this conservative approach to file locking, there are frequently many unneeded requests from the client to the server to find out whether an open file has been modified by some other client. Even worse, all write I/O in this scenario is required to be synchronous, further impacting client-side performance.
NFSv4 differs by allowing the server to delegate specific actions on a file to the client; this enables more aggressive client caching of data and of locks. Via a delegation, the server temporarily cedes control of file updates and locking state to a client, and promises to notify the client if other clients are accessing the file. While the client holds a delegation, it can perform operations on files whose data has been cached locally, thereby avoiding network latency and optimizing its use of I/O.
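
To make the caching idea behind delegations concrete, here is a minimal toy sketch in Python. This is not a real NFS client; the class and method names are purely illustrative, and the server is assumed to be any object exposing a fetch() method:

```python
# Toy model only: illustrates the caching idea behind an NFSv4-style
# read delegation. It is not an NFS implementation.

class ToyClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}          # locally cached file data
        self.delegated = set()   # files we hold a read delegation on

    def read(self, path):
        if path in self.delegated and path in self.cache:
            # Delegation held: serve from cache, no network round trip.
            return self.cache[path]
        # NFSv3-style behavior: ask the server on every access.
        data = self.server.fetch(path)
        self.cache[path] = data
        return data

    def recall_delegation(self, path):
        # The server promises to invoke this callback when another
        # client starts accessing the file.
        self.delegated.discard(path)
        self.cache.pop(path, None)
```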

Trunking with pNFS

Many additional enhancements to NFSv4 are available in NFSv4.1, of which pNFS is a part. NFSv4.1 adds the capability to perform trunking at the NFS level by adding a session layer. The client establishes a session with an NFSv4.1 server and can then create multiple TCP connections to that server, each potentially going over a different network interface on the client and arriving on a different interface on the server. Different requests sent under the same session identifier can then travel over different network paths, dramatically reducing latency and increasing bandwidth.
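
The session mechanics can be pictured with a small toy sketch in Python (again illustrative only, not an NFSv4.1 implementation): one session identifier is shared by several connections, and requests are spread across them. The connection objects are assumed to expose a submit() method:

```python
# Toy illustration of NFSv4.1-style trunking: one session, multiple
# TCP connections, requests distributed across the available paths.
import itertools

class ToySession:
    def __init__(self, session_id, connections):
        self.session_id = session_id
        # Round-robin over connections; each may use a different
        # client/server interface pair, i.e. a different network path.
        self._paths = itertools.cycle(connections)

    def send(self, request):
        conn = next(self._paths)
        # Every request carries the same session ID regardless of
        # which connection (path) it travels over.
        return conn.submit(self.session_id, request)
```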
Although client and server implementations of NFSv4.1 are available, they are in early stages of implementation and adoption. However, to take advantage of them in the future, it is important to plan now for the move to NFSv4 and beyond – and there are many servers and clients available now that support NFSv4. NFSv4 is a mature and stable protocol with many advantages in its own right over its predecessors NFSv3 and NFSv2.

Potatoes and Beyond

Now is the time to make the switchover; there really is no justification for not pursuing NFSv4 as the first NFS protocol version of choice. Although migrating from earlier versions of NFS requires some planning, as there are significant differences between the protocols, the benefits are impressive. To ensure a smooth migration to NFSv4 and beyond, the SNIA Ethernet Storage Forum NFS Special Interest Group has recently published an overview white paper, “Migrating to NFSv4.” It covers internationalization support, automatic mounting of NFSv4 filesystems on demand, and TCP protocol support, among other considerations.
NFSv4 and NFSv4.1 were developed for a reason, and NFSv4.2 is on the horizon. Like the potato, NFSv3 is a staple of the network filesystem world. But as Douglas Adams said, “It is a mistake to think you can solve any major problems just with potatoes.” NFSv4 fixes many of NFSv3’s deficiencies and represents a major advance, bringing improved availability, performance, and security: all the checklist items beyond potatoes that today’s users of network attached storage demand.

Deploying SQL Server with iSCSI – Answers to your questions

by: Gary Gumanow

Last Thursday (2/24/11), I hosted an Ethernet Storage Forum iSCSI SIG webinar with representatives from Emulex and NetApp to discuss the benefits of iSCSI storage networks in SQL application environments. You can catch a recording of the webcast on BrightTalk here.

The webinar was well attended, and we received so many great questions that we just didn’t have time to answer all of them, which brings us to this blog post. We’ve included answers to those unanswered questions below.
We’ll be hosting another webinar real soon, so please check back for upcoming ESF iSCSI SIG topics. You’ll be able to register for this event shortly on BrightTalk.com.

Let’s get to the questions. We took the liberty of editing the questions for clarity. Please feel free to comment if we misinterpreted the question.

Question: Is TRILL needed in the data center to avoid pausing of traffic while extending the number of links that can be used?

Answer: The Internet Engineering Task Force (IETF) has developed a new shortest-path frame routing protocol for multi-hop Layer 2 (L2) environments, called Transparent Interconnection of Lots of Links, or TRILL. TRILL will enable multipathing for L2 networks and remove the single-path restriction that the Spanning Tree Protocol (STP) places on data center networks.

Although TRILL may serve as an alternative to STP, it doesn’t require that STP be removed from an Ethernet infrastructure. Hybrid solutions that use both STP and TRILL are not only possible but also will be the norm for at least the near-term future. TRILL will also not automatically eliminate the risk of a single point of failure, especially in hybrid environments.

Another area where TRILL is not expected to play a role is the routing of traffic across L3 routers. TRILL is expected to operate within a single subnet. While the IETF draft standard document mentions the potential for tunneling data, it is unlikely that TRILL will evolve in a way that will expand its role to cover cross-L3 router traffic. Existing and well-established protocols such as Multiprotocol Label Switching (MPLS) and Virtual Private LAN Service (VPLS) cover these areas and are expected to continue to do so.

In summary, TRILL will enable multipathing for L2 networks.

Question: How do you calculate bandwidth when you only have IOPS?
Answer:
Bandwidth is a function of IOPS and I/O size: bandwidth = IOPS × I/O size. For example, 10,000 IOPS × a 4 KiB (4,096-byte) I/O size = 40.96 MB/sec, or about 41 MB/sec.
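
For anyone who wants to script this calculation, here is a minimal sketch in Python (the function name is ours, purely illustrative):

```python
def bandwidth_mb_per_sec(iops: float, io_size_bytes: int) -> float:
    """Bandwidth = IOPS x I/O size, reported in MB/sec (1 MB = 10**6 bytes)."""
    return iops * io_size_bytes / 1_000_000

# 10,000 IOPS at a 4 KiB (4,096-byte) I/O size:
print(bandwidth_mb_per_sec(10_000, 4096))  # 40.96 -> about 41 MB/sec
```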

Question: When deploying FCoE, must all 10GbE switches support Data Center Bridging (DCB) and FCoE? Or can some pass through FCoE?
Answer:
Today, in order to deploy FCoE, all switches in the data path must support both FCoE forwarding and DCB. Future standards include proposals to allow pass-through of FCoE traffic without having to support full Fibre Channel services, which will allow for more cost-effective networks where not every switch layer needs to support the FCoE storage protocol.

Question: iSCSI performance is comparable to FC and FCoE. Do you expect to see iSCSI overtake FC in the near future?
Answer:
FCoE deployments are still very small compared to traditional Fibre Channel and iSCSI. However, industry projections by several analyst firms indicate that Ethernet storage protocols, such as iSCSI and FCoE, will overtake traditional Fibre Channel due to the increased focus on shared data center infrastructures to address applications such as private and public clouds. But even the most aggressive forecasts don’t show this crossover happening for several years.
Customers looking to deploy new data centers are more likely today to consider iSCSI than in the past. Customers with existing Fibre Channel investments are likely to transition to FCoE in order to extend the investment of their existing FC storage assets. In either case, transitioning to 10Gb Ethernet with DCB capability offers the flexibility to do both.

Question: With 16Gb/s FC ratified, what product considerations do disk manufacturers face?
Answer:
We can’t speak to what disk manufacturers will or won’t do regarding 16Gb/s disks. But the current trend is to move away from Fibre Channel disk drives in favor of Serial Attached SCSI (SAS) and SATA disks, as well as SSDs. 16Gb Fibre Channel will be a reality and will play in the data center, but some vendors predict that its adoption rate will be much slower than that of previous generations.

Question: Why move to 10GbE if you have 8Gb Fibre Channel? The price is about the same, right?
Answer:
If your only network requirement is block storage, then Fibre Channel provides a high performance network to address that requirement. However, if you have a mixture of networking needs, such as NAS, block storage, and LAN, then moving to 10GbE provides sufficient bandwidth and flexibility to support multiple traffic types with fewer resources and with lower overall cost.
Question: Is the representation of the number of links accurate when comparing Ethernet to Fibre Channel? The overall bandwidth of the wire may be close, but once protocol overheads are included, the real bandwidths aren’t comparable. For example, FC protocol overhead is only about 5%, versus roughly 25% for TCP, and iSCSI framing adds another 4%. So the math on how many FC cables equal a 10Gbps cable isn’t a fair comparison.

Answer: As pointed out in the question, comparing protocol performance requires more than just a comparison of the wire rates of the physical transports. Based on protocol efficiency alone, one could conclude that the comparison between FC and TCP/IP is unfair as designed, because Fibre Channel should have produced greater data throughput from a comparable wire rate. However, the data in this case shows that iSCSI offers comparable performance in a real-world application environment, rather than just in a benchmark test. The focus of the presentation was iSCSI; FCoE and FC were only meant to provide reference points, and the comparisons were not intended to be exact or precise. 10GbE and iSCSI offer the performance to satisfy business-critical requirements. Customers looking to deploy a storage network should consider a proof of concept to ensure that a new solution can satisfy their specific application requirements.

Question: Two FC switches were used during this testing. Was it to address the operational risk of a single point of failure?
Answer:
The use of two switches was due to a hardware limitation: each switch had eight ports, and the test required eight ports at both the target and the host. Since this was a lab setup, we weren’t configuring for high availability. However, the recommendation for any production environment would be to use redundant switches; this applies to iSCSI storage networks as well.

Question: How can iSCSI match all the distributed management and security capabilities of Fibre Channel / FCoE, such as FLOGI, integrated name server, zoning, etc.?
Answer:
The feature lists of the two protocols don’t match exactly. The point of this presentation was to show that iSCSI is closing the performance gap and has enough high-end features to be enterprise-ready.

Question: How strong is the possibility that 40G Ethernet will be bypassed, with a move directly from 10G to 100G?
Answer: Vendors are shipping products today that support 40Gb Ethernet, so it seems clear that 40GbE will happen. Time will tell whether customers bypass 40GbE and wait for 100GbE.

Thanks again for checking out our blog. We hope to have you on our next webinar live, but if not, we’ll be updating this blog frequently.

Gary Gumanow – iSCSI SIG Co-chairman, ESF Marketing Chair

Ethernet and IP Storage – Today’s Technology Enabling Next Generation Data Centers

I continue to believe that IP based storage protocols will be preferred for future data center deployments. The future of IT is pointing to cloud based architectures, whether internal or external. At the core of the cloud is virtualization. And I believe that Ethernet and IP storage protocols offer the greatest overall value to unlock the potential of virtualization and clouds. Will other storage network technologies work? Of course. But, I’m not talking about whether a network “works”. I’m suggesting that a converged network environment with Ethernet and IP storage offers the best combined value for virtual environments and cloud deployments. I’ve written and spoken about this topic before. And I will likely continue to do so. So, let me mention a few reasons to choose IP storage, iSCSI or NAS, for use in cloud environments.

Mobility. One of the many benefits of server virtualization is the ability to non-disruptively migrate applications from one physical server to another to support load balancing, failover or redundancy, and servicing or updating of hardware. The ability to migrate applications is best achieved with networked storage since the data doesn’t have to move when a virtual machine (VM) moves. But, the network needs to maintain connectivity to the fabric when a VM moves. Ethernet offers a network technology capable of migrating or reassigning network addresses, in this case IP addresses, from one physical device to another. When a VM moves to another physical server, the IP addresses move with it. IP based storage, such as iSCSI, leverages the built in capabilities of TCP/IP over Ethernet to migrate network port addresses without interruption to applications.

Flexibility. Most data centers require a mixture of applications that access either file or block data. With server virtualization, it is likely that you’ll require access to file and block data types on the same physical server for either the guest or parent OS. The ability to use a common network infrastructure for both the guest and parent can reduce cost and simplify management. Ethernet offers support for multiple storage protocols. In addition to iSCSI, Ethernet supports NFS and CIFS/SMB resulting in greater choice to optimize application performance within your budget. FCoE is also supported on an enhanced 10Gb Ethernet network to offer access to an existing FC infrastructure. The added flexibility to interface with existing SAN resources enhances the value of 10Gb as a long-term networking solution.

Performance. Cost. Ubiquity. Other factors that enhance Ethernet storage, and therefore IP storage, adoption include a robust roadmap, favorable economics, and near-universal adoption. The Ethernet roadmap includes 40Gb and 100Gb speeds, which will support storage traffic and be capable of addressing any foreseeable application requirements. Ethernet today offers considerable economic value as port prices continue to drop. Although GbE speeds offer sufficient bandwidth for most business applications, the cost per Gb of bandwidth with 10 Gigabit Ethernet (10GbE) is now lower than that of GbE, and therefore offers upside in cost and efficiency. Finally, nearly all new digital devices, including mobile phones, cameras, laptops, servers, and even some home appliances, are being offered with Ethernet connectivity, whether wired or WiFi. Consolidating onto a single network technology means that the networking infrastructure to the rest of the world is essentially already deployed. How good is that?

Some may view moving to a shared network as kind of scary. The concerns are real. But Ethernet has been a shared networking platform for decades and continues to add features, performance, and security to address its expanding range of applications. And just because it can carry other traffic doesn’t mean that it must: physical isolation of Ethernet networks is just as feasible as with any other networking technology, and some may choose that option. Regardless, selecting a single network technology, even if it isn’t shared across all applications, can reduce not only capital expense but also operational expense. Your IT personnel can be trained on a single networking technology rather than multiple specialized, single-purpose networks, and you may even reduce maintenance and inventory costs to boot.

Customers looking to architect their network and storage infrastructure for today and the future would do well to consider Ethernet and IP storage protocols. The advantages are pretty compelling.

Ethernet Storage Market Momentum Continues

The inexorable growth of the market for Ethernet storage continued in the first half of 2010 – in fact we’re getting very close to Ethernet storage being the majority of networked storage in the Enterprise.

According to IDC’s recent Q2 2010 Worldwide Storage Systems Hardware Tracker, Ethernet Storage (NAS plus iSCSI) revenue market share climbed to 45%, up from 39% in 2009, 32% in 2008 and 28% in 2007, as shown below.

| Revenue share | 2007 | 2008 | 2009 | Q2 2010 |
|---------------|------|------|------|---------|
| FC SAN        | 72%  | 68%  | 61%  | 55%     |
| iSCSI SAN     | 6%   | 10%  | 13%  | 15%     |
| NAS           | 22%  | 22%  | 26%  | 30%     |

In terms of capacity market share, we have already seen the crossover point, with Ethernet Storage at 52% of the total PB shipped, up from 47% in 2009, 42% in 2008, and 37% in 2007, as shown in the following table.

| Capacity share (PB shipped) | 2007 | 2008 | 2009 | Q2 2010 |
|-----------------------------|------|------|------|---------|
| FC SAN                      | 62%  | 58%  | 53%  | 48%     |
| iSCSI SAN                   | 8%   | 13%  | 15%  | 18%     |
| NAS                         | 29%  | 29%  | 32%  | 34%     |
