Fibre Channel SAN Hosts and Targets Q&A

At our recent SNIA Networking Storage Forum (NSF) webcast “How Fibre Channel Hosts and Targets Really Communicate,” our Fibre Channel (FC) experts explained exactly how Fibre Channel works, starting with the basics of the FC networking stack, link initialization, port types, and flow control, and then diving into the details of host/target logins and host/target IO. It was a great tutorial on Fibre Channel. If you missed it, you can view it on-demand. The audience asked several questions during the live event. Here are answers to them all:

Q. What is the most common problem that we face in the FC protocol?

A. Much the same as in any other network protocol, congestion is the most common problem found in FC SANs. It can take a couple of forms, including, but not limited to, host oversubscription and “fan-in/fan-out” ratios of host ports to storage ports, but it is probably the single largest generator of support cases. Another common problem is the “host cannot see target” kind of issue.

The Current State of Storage in the Container World

It seems like everyone is talking about containers these days, but not everyone is talking about storage – and they should be. The first wave of adoption of container technology was focused on microservices and ephemeral workloads. The next wave of adoption won’t be possible without persistent, shared storage. That’s why the SNIA Ethernet Storage Forum is hosting a live webcast on November 17th, “Current State of Storage in the Container World.” In this webcast, we will provide an overview of Docker containers and the inherent challenge of persistence when containerizing traditional enterprise applications. We will then examine the different storage solutions available for solving these challenges and provide the pros and cons of each. You’ll hear:

  • An Overview of Containers
    • Quick history, where we are now
    • Virtual machines vs. Containers
    • How Docker containers work
    • Why containers are compelling for customers
    • Challenges
    • Storage
  • Storage Options for Containers
    • NAS vs. SAN
    • Persistent and non-persistent
  • Future Considerations
    • Opportunities for future work

This webcast should appeal to anyone interested in understanding the basics of containers and how they relate to the storage used with them. I encourage you to register today! We hope you can make it on November 17th. And if you’re interested in keeping up with all that SNIA is doing with containers, please sign up for our Containers Opt-In Email list and we’ll be sure to keep you posted.

 

Outstanding Keynotes from Leading Storage Experts Make SDC Attendance a Must!

Posted by Marty Foltyn

Tomorrow is the last day to register online for next week’s Storage Developer Conference at the Hyatt Regency Santa Clara. What better incentive to click www.storagedeveloper.org and register than to read about the amazing keynote and featured speakers at this event – I think they’re the best since the event began in 1998! Preview sessions here, and click on the title to download the full description.

Bev Crair, Vice President and General Manager, Storage Group, Intel will present Innovator, Disruptor or Laggard, Where Will Your Storage Applications Live? Next Generation Storage and discuss the leadership role Intel is playing in driving the open source community for software defined storage, server based storage, and upcoming technologies that will shift how storage is architected.

Jim Handy, General Director, Objective Analysis will report on The Long-Term Future of Solid State Storage, examining research of new solid state memory and storage types, and new means of integrating them into highly-optimized computing architectures. This will lead to a discussion of the way that these will impact the market for computing equipment.

Jim Pinkerton, Partner Architect Lead, Microsoft will present Concepts on Moving From SAS connected JBOD to an Ethernet Connected JBOD. This talk examines the advantages of moving to an Ethernet-connected JBOD, what infrastructure has to be in place, what performance requirements are needed to be competitive, and the technical issues in deploying and managing such a product.

Andy Rudoff, SNIA NVM Programming TWG, Intel will discuss Planning for the Next Decade of NVM Programming describing how emerging NVM technologies and related research are causing a change to the software development ecosystem. Andy will describe use cases for load/store accessible NVM, some transparent to applications, others non-transparent.

Richard McDougall, Big Data and Storage Chief Scientist, VMware will present Software Defined Storage – What Does it Look Like in 3 Years? He will survey and contrast the popular software architectural approaches and investigate the changing hardware architectures upon which these systems are built.

Laz Vekiarides, CTO and Co-founder, ClearSky Data will discuss Why the Storage You Have is Not the Storage Your Data Needs, sharing some of the questions every storage architect should ask.

Donnie Berkholz, Research Director, 451 Research will present Emerging Trends in Software Development drawing on his experience and research to discuss emerging trends in how software across the stack is created and deployed, with a particular focus on relevance to storage development and usage.

Gleb Budman, CEO, Backblaze will discuss Learnings from Nearly a Decade of Building Low-cost Cloud Storage. He will cover the design of the storage hardware, the cloud storage file system software, and the operations processes behind a service that currently stores over 150 petabytes and takes in 5 petabytes more every month.

You could wait and register onsite at the Hyatt, but why? If you need more reasons to attend, check out SNIA on Storage’s previous blog entries on the File Systems, Cloud, Management, New Thinking, Disruptive Technologies, and Security sessions at SDC. See the full agenda and register now for SDC at http://www.storagedeveloper.org.

Software Defined Networks for SANs?

Previously, I’ve blogged about the VN2VN (virtual node to virtual node) technology coming with the new T11-FC-BB6 specification. In a nutshell, VN2VN enables an “all Ethernet” FCoE network, eliminating the requirement for an expensive FCoE Forwarder (FCF) enabled switch. VN2VN dramatically lowers the barrier to entry for deploying FCoE. Host software is available to support VN2VN, but so far only one major SAN vendor supports VN2VN today. The ecosystem is coming, but are there more immediate alternatives for deploying FCoE without an FCF-enabled switch or VN2VN-enabled target SANs? The answer is that full FC-BB5 FCF services could be provided today using Software Defined Networking (SDN) in conjunction with standard DCB-enabled switches, essentially implementing those services in host-based software running in a virtual machine on the network. This would be an alternative “all Ethernet” storage network supporting Fibre Channel protocols. Just such an approach was presented at SNIA’s Storage Developer Conference 2013 in a presentation entitled “Software-Defined Network Technology and the Future of Storage” by Stuart Berman, Chief Executive Officer, Jeda Networks. (Note that neither approach is relevant to SAN networks using Fibre Channel HBAs, cables, and switches.)

Interest in SDN is spreading like wildfire. Several pioneering companies have released solutions for at least parts of the SDN puzzle, but kerosene hit the wildfire with the $1B acquisition of Nicira by VMware. Now a flood of companies are pursuing an SDN approach to everything from wide area networks to firewalls to wireless networks. Applying SDN technology to storage, or more specifically to Storage Area Networks, is an interesting next step. See Jason Blosil’s blog below, “Ethernet is the right fit for the Software Defined Data Center.”

To review, an SDN abstracts the network switch control plane from the physical hardware. This abstraction is implemented by a software controller, which can be a virtual appliance or virtual machine hosted in a virtualized environment, e.g., a VMware ESXi host. The benefits are many: the abstraction is often behaviorally consistent with the network being virtualized but simpler for a user to manipulate and manage. The SDN controller can automate the numerous configuration steps needed to set up a network, reducing the number of touch points required of a network engineer. The SDN controller is also network speed agnostic, i.e., it can operate over a 10Gbps Ethernet fabric and seamlessly transition to operate over a 100Gbps Ethernet fabric. And finally, the SDN controller can be given far greater CPU and memory resources in the host virtual server, scaling to a much greater degree than the control planes in switches, which are powered by relatively low-powered processors.

So why would you apply SDN to a SAN? One reason is SSD technology: storage arrays based on SSDs move the bandwidth bottleneck, for the first time in recent memory, into the network. An SSD array can load several 10Gbps links, overwhelming many 10GbE fabrics. Applying a Storage SDN to an Ethernet fabric, and removing the tight coupling between switch speed and the storage control plane, will accelerate adoption of higher-speed Ethernet fabrics. This will in turn move the network bandwidth bottleneck back into the storage array, where it belongs.
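
As a back-of-the-envelope illustration of that bottleneck, here is a minimal sketch of how many 10GbE links a single SSD array can keep busy. The array throughput and per-link efficiency figures are illustrative assumptions, not numbers from the original post:

```python
# Back-of-the-envelope sketch: how many 10GbE links an SSD array can keep busy.
# The array throughput and link efficiency below are illustrative assumptions.

def links_saturated(array_throughput_gbps: float,
                    link_speed_gbps: float = 10.0,
                    link_efficiency: float = 0.9) -> float:
    """Number of links the array can fill, assuming each link delivers only a
    fraction of its line rate as usable payload."""
    usable_per_link_gbps = link_speed_gbps * link_efficiency
    return array_throughput_gbps / usable_per_link_gbps

# A hypothetical SSD array sustaining ~5 GB/s (~40 Gb/s) of reads would keep
# more than four 10GbE links busy on its own.
print(f"{links_saturated(40.0):.1f} x 10GbE links")  # -> 4.4 x 10GbE links
```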

Another reason to apply SDN to Storage Networks is to help move certain application workloads into the Cloud. As compute resources increase in speed and consolidate, workloads require deterministic bandwidth, IOPS, and/or resiliency metrics that have not been well served by Cloud infrastructures. Storage SDNs would apply enterprise-level SAN best practices to the Cloud, enabling the migration of some of these applications, which would increase the revenue opportunities of Cloud providers. The ability to provide a highly resilient, high-performance, SLA-capable Cloud service is a large market opportunity that is not cost-effectively realizable with today’s technologies.

So how can SDN technology be applied to the SAN? The most viable candidate is to leverage a Fibre Channel over Ethernet (FCoE) network. An FCoE network already converges a high-performance SAN with the Ethernet LAN. FCoE is a lightweight and efficient protocol that implements flow control in the switch hardware, as long as the switch supports Data Center Bridging (DCB). There are plenty of standard “physical” DCB-enabled Ethernet switches to choose from, so a Storage SDN would give the network engineer freedom of choice. An FCoE-based SDN would create a single unified, converged, and abstracted SAN fabric. To create this Storage SDN you would need to extract and abstract the FCoE control plane from the switch, removing any dependency on a physical FCF. This would include the critical global SAN services such as the Name Server table, the Zoning table, and State Change Notification. Because it contains the global SAN services, the Storage SDN would also have to communicate with initiators and targets, something a conventional SDN controller does not do. Since FCoE is a network-centric technology, i.e., configuration is performed from the network, a Storage SDN can automate large SANs from a single appliance. The Storage SDN should be able to create deterministic, end-to-end Ethernet fabric paths thanks to the global view of the network that an SDN controller typically has.
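
To make the idea of a “global SAN service” concrete, here is a minimal, hypothetical sketch of one of them, the zoning table, as a controller might model it in software. The data structure, names, and WWPNs are purely illustrative and are not drawn from the Jeda Networks presentation:

```python
# Minimal sketch of one "global SAN service" a Storage SDN controller would hold:
# a zoning table mapping zone names to member WWPNs. Two ports may communicate
# only if they share at least one zone. Names and values are illustrative.

zoning_table = {
    "zone_db_servers": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:3c:e0:11:22"},
    "zone_backup":     {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:3c:e0:11:22"},
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """True if the two WWPNs are members of at least one common zone."""
    return any(wwpn_a in members and wwpn_b in members
               for members in zoning_table.values())

print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3c:e0:11:22"))  # True
print(can_communicate("10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"))  # False
```

A real controller would of course also maintain the Name Server table and State Change Notifications, and distribute that state to the initiators, targets, and switches.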

A Storage SDN would also be network speed agnostic: since Ethernet switches already support 10Gbps, 40Gbps, and 100Gbps, this would enable extremely fast SANs that are not currently attainable. Imagine the workloads, applications, and consolidation of physical infrastructure possible with a 100Gbps Storage SDN SAN, all controlled by a software FCoE virtual server connecting thousands of servers with terabytes of SSD storage. SDN technology is bursting with solutions around LAN traffic; now we need to tie in the SAN and keep it as independent of proprietary hardware as possible.

Deploying SQL Server with iSCSI – Answers to your questions

by: Gary Gumanow

Last Wednesday (2/24/11), I hosted an Ethernet Storage Forum iSCSI SIG webinar with representatives from Emulex and NetApp to discuss the benefits of iSCSI storage networks in SQL application environments. You can catch a recording of the webcast on BrightTalk here.

The webinar was well attended, and we received so many great questions during the event that we just didn’t have time to answer all of them, which brings us to this blog post. We have included answers to those unanswered questions below.
We’ll be hosting another webinar soon, so please check back for upcoming ESF iSCSI SIG topics. You’ll be able to register for the event shortly on BrightTalk.com.

Let’s get to the questions. We took the liberty of editing the questions for clarity. Please feel free to comment if we misinterpreted the question.

Question: Is TRILL needed in the data center to avoid pausing of traffic while extending the number of links that can be used?

Answer: The Internet Engineering Task Force (IETF) has developed a new shortest-path frame-routing protocol for multi-hop Layer 2 (L2) environments. The new protocol is called Transparent Interconnection of Lots of Links, or TRILL. TRILL will enable multipathing for L2 networks and remove the restrictions placed on data center environments by single-path STP networks.

Although TRILL may serve as an alternative to STP, it doesn’t require that STP be removed from an Ethernet infrastructure. Hybrid solutions that use both STP and TRILL are not only possible but also will be the norm for at least the near-term future. TRILL will also not automatically eliminate the risk of a single point of failure, especially in hybrid environments.

Another area where TRILL is not expected to play a role is the routing of traffic across L3 routers. TRILL is expected to operate within a single subnet. While the IETF draft standard document mentions the potential for tunneling data, it is unlikely that TRILL will evolve in a way that will expand its role to cover cross-L3 router traffic. Existing and well-established protocols such as Multiprotocol Label Switching (MPLS) and Virtual Private LAN Service (VPLS) cover these areas and are expected to continue to do so.

In summary, TRILL will enable multipathing for L2 networks.

Question: How do you calculate bandwidth when you only have IOPS?

Answer: Bandwidth is a function of IOPS and I/O size; the formula is simply IOPS x I/O size. Example: 10,000 IOPS x 4K block size (4,096 bytes) ≈ 41 MB/sec.
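
For readers who prefer code, the same arithmetic in a short Python sketch (here 1 MB = 10^6 bytes; adjust the divisor if you prefer MiB):

```python
# Bandwidth from IOPS: bandwidth = IOPS x I/O size.

def bandwidth_mb_per_sec(iops: int, io_size_bytes: int) -> float:
    """Throughput in MB/s (1 MB = 10**6 bytes) for a given IOPS rate and I/O size."""
    return iops * io_size_bytes / 1_000_000

print(bandwidth_mb_per_sec(10_000, 4096))  # 40.96 MB/s, i.e. roughly 41 MB/s
print(bandwidth_mb_per_sec(10_000, 8192))  # 81.92 MB/s at an 8K block size
```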

Question: When deploying FCoE, must all 10GbE switches support Data Center Bridging (DCB) and FCoE? Or can some simply pass FCoE traffic through?

Answer: Today, in order to deploy FCoE, all switches in the data path must support both FCoE forwarding and DCB. Future standards include proposals to allow pass-through of FCoE traffic without having to support Fibre Channel services. This will allow for more cost-effective networks where not every switch layer needs to support the FCoE storage protocol.
Question: iSCSI performance is comparable to FC and FCoE. Do you expect to see iSCSI overtake FC in the near future?

Answer: FCoE deployments are still very small compared to traditional Fibre Channel and iSCSI. However, industry projections by several analyst firms indicate that Ethernet storage protocols, such as iSCSI and FCoE, will overtake traditional Fibre Channel due to increased focus on shared data center infrastructures to address applications such as private and public clouds. But even the most aggressive forecasts don’t show this crossover happening for several years. Customers looking to deploy new data centers are more likely today to consider iSCSI than in the past. Customers with existing Fibre Channel investments are likely to transition to FCoE in order to extend the investment in their existing FC storage assets. In either case, transitioning to 10Gb Ethernet with DCB capability offers the flexibility to do both.

Question: With 16Gb/s FC ratified, what product considerations do you expect from disk manufacturers?

Answer: We can’t speak to what disk manufacturers will or won’t do regarding 16Gb/s disks. But the current trend is to move away from Fibre Channel disk drives in favor of Serial Attached SCSI (SAS) and SATA disks, as well as SSDs. 16Gb Fibre Channel will be a reality and will play in the data center, but some vendors predict that its adoption rate will be much slower than that of previous generations.
Question: Why move to 10GbE if you have 8Gb Fibre Channel? The price is about the same, right?

Answer: If your only network requirement is block storage, then Fibre Channel provides a high-performance network to address that requirement. However, if you have a mixture of networking needs, such as NAS, block storage, and LAN, then moving to 10GbE provides sufficient bandwidth and flexibility to support multiple traffic types with fewer resources and with lower overall cost.
Question: Is the representation of the number of links accurate when comparing Ethernet to Fibre Channel? The overall bandwidth of the wire may be close, but once protocol overheads are included, the comparison of real bandwidth isn’t accurate. For example, FC protocol overhead is only 5% vs. 25% for TCP, and iSCSI framing adds another 4%. So your math on how many FC cables equal 10Gbps cables is not a fair comparison.

Answer: As pointed out in the question, comparing protocol performance requires more than just a comparison of the wire rates of the physical transports. Based on protocol efficiency, one could conclude that the comparison between FC and TCP/IP is unfair as designed, because Fibre Channel should have produced greater data throughput from a comparable wire rate. However, the data in this case shows that iSCSI offers comparable performance in a real-world application environment, rather than just a benchmark test. The focus of the presentation was iSCSI; FCoE and FC were only meant to provide reference points, and the comparisons were not intended to be exact or precise. 10GbE and iSCSI offer the performance to satisfy business-critical requirements. Customers looking to deploy a storage network should consider a proof of concept to ensure that a new solution can satisfy their specific application requirements.
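
Purely as an illustration of the questioner’s own overhead figures, applied to nominal wire rates, here is a quick sketch of what that arithmetic looks like. The percentages are the questioner’s assumptions, not measured results:

```python
# Illustration only: effective payload rates using the overhead percentages quoted
# in the question (FC ~5%; TCP ~25% plus ~4% iSCSI framing). These are the
# questioner's figures applied to nominal wire rates, not measured results.

def effective_gbps(line_rate_gbps: float, overhead_fraction: float) -> float:
    """Payload bandwidth left after subtracting protocol overhead."""
    return line_rate_gbps * (1.0 - overhead_fraction)

fc_8g = effective_gbps(8.0, 0.05)              # nominal 8 Gb/s FC, ~5% overhead
iscsi_10g = effective_gbps(10.0, 0.25 + 0.04)  # nominal 10 Gb/s Ethernet, TCP + iSCSI framing

print(f"8G FC    : ~{fc_8g:.1f} Gb/s payload")   # ~7.6 Gb/s
print(f"10G iSCSI: ~{iscsi_10g:.1f} Gb/s payload")  # ~7.1 Gb/s
```

This kind of arithmetic deliberately ignores line encoding, offload engines, and workload behavior, which is exactly why the answer above recommends a proof of concept rather than relying on wire-rate math alone.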

Question: Two FC switches were used during this testing. Was this to address the operational risk of a single point of failure?

Answer: The use of two switches was due to a hardware limitation. Each switch had 8 ports, and the test required 8 ports at the target and 8 at the host. Since this was a lab setup, we weren’t configuring for HA. However, the recommendation for any production environment would be to use redundant switches. This applies to iSCSI storage networks as well.
Question: How can iSCSI match all the distributed management and security capabilities of Fibre Channel / FCoE, such as FLOGI, the integrated name server, zoning, etc.?

Answer: The feature lists of the two protocols don’t match exactly. The point of this presentation was to show that iSCSI is closing the performance gap and has enough high-end features to make it enterprise-ready.
Question: How strong is the possibility that 40G Ethernet will be bypassed, with a move directly from 10G to 100G?

Answer: Vendors are shipping products today that support 40Gb Ethernet, so it seems clear that 40GbE will happen. Time will tell whether customers bypass 40GbE and wait for 100GbE.

Thanks again for checking out our blog. We hope to have you on our next webinar live, but if not, we’ll be updating this blog frequently.

Gary Gumanow – iSCSI SIG Co-chairman, ESF Marketing Chair