Category: Fibre Channel
Fibre Channel vs. iSCSI – The Great Debate Generates Questions Galore
“A good and frank discussion about the two technologies that don’t always need to compete!”
“Really nice and fair comparison guys. Always well moderated, you hit a lot of material in an hour. Thanks for your work!”
“Very fair and balanced overview of the two protocols.”
“Excellent coverage of the topic. I will have to watch it again.”
If you missed the webcast, you can watch it on-demand at your convenience and download a copy of the slides. The debate generated many good questions, and our expert speakers have answered them all. Read More

The Great Debates – Our Next Webcast Series
FC vs. iSCSI – The Debate Continues
Does Your World Include Storage? Don’t Miss SDC!
Whether storage is already a main focus of your career or may be advancing toward you, you’ll definitely want to attend the flagship event for storage developers – and those involved in storage operations, decision making, and usage – SNIA’s 19th annual Storage Developer Conference (SDC), September 11-14, 2017 at the Hyatt Regency Santa Clara, California. Read More
The Alphabet Soup of Storage Networking Acronyms Explained
- Volatile v Non-Volatile v Persistent Memory
- NVDIMM v RAM v DRAM v SLC v MLC v TLC v NAND v 3D NAND v Flash v SSDs v NVMe
- NVMe (the protocol)
Clearing Up Confusion on Common Storage Networking Terms
Do you ever feel a bit confused about common storage networking terms? You’re not alone. At our recent SNIA Ethernet Storage Forum webcast “Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Mauve,” we had experts from Cisco, Mellanox and NetApp explain the differences between:
- Channel vs. Busses
- Control Plane vs. Data Plane
- Fabric vs. Network
If you missed the live webcast, you can watch it on-demand. As promised, we’re also providing answers to the questions we got during the webcast. Between these questions and the presentation itself, we hope it will help you decode these common, but sometimes confusing terms.
And remember, the “Everything You Wanted To Know About Storage But Were Too Proud To Ask” is a webcast series with a “colorfully-named pod” for each topic we tackle. You can register now for our next webcast: Part Teal, The Buffering Pod, on Feb. 14th.
Q. Why do we have Fibre and Fiber?
A. Fiber optics is the term for the optical technology used by Fibre Channel fabrics. While a common story is that the “Fibre” spelling came about to accommodate the French (FC is, after all, an international standard), in actuality it was a marketing idea to create a more unique name, and the British spelling – “Fibre” – was chosen.
Q. Will OpenStack change all the rules of the game?
A. Yes. OpenStack is all about centralizing the control plane of many different aspects of infrastructure.
Q. The difference between control and data plane matters only when we discuss software defined storage and software defined networking, not in traditional switching and storage.
A. It matters regardless. You need to understand how much each individual control plane can handle and how many control planes you have from an overall management perspective. In cases where you have too many control planes, SDN and SDS can be a benefit to you.
Q. As I’ve heard that networks use stateless protocols, would FC do the same?
A. Fibre Channel has several different Classes, which can be either stateful or stateless. Most applications of Fibre Channel use Class 3, as it is the preferred class for SCSI traffic. A connection between Fibre Channel endpoints is always stateful (as it involves a login process to the Fibre Channel fabric). The transport protocol is augmented by Fibre Channel exchanges, which are managed on a per-hop basis. Retransmissions are handled by devices when exchanges are incomplete or lost, meaning that each exchange is a stateful transmission, but the protocol itself is considered stateless in modern SCSI-transport Fibre Channel.
iSCSI, as a connection-oriented protocol, creates a nexus between an initiator and a target, and is considered stateful. In addition, SMB, NFSv4, FTP, and TCP are stateful protocols, while NFSv2, NFSv3, HTTP, and IP are stateless protocols.
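The stateful/stateless distinction above can be sketched in a few lines of code. This is a toy model (the class names and the `iqn.example` initiator name are hypothetical, not a real iSCSI library): a stateless service must receive everything it needs in each request, while a stateful service, like an iSCSI target after login, keeps a per-session nexus on the server side.

```python
class StatelessService:
    """Each request is self-contained; the server keeps no session state."""
    def handle(self, request: dict) -> str:
        # Everything needed to answer travels with the request itself.
        return f"block {request['lba']} for {request['initiator']}"

class StatefulService:
    """Loosely iSCSI-like: a login creates a nexus, and later I/O relies on it."""
    def __init__(self):
        self.sessions = {}  # server-side state that survives between calls

    def login(self, initiator: str) -> int:
        session_id = len(self.sessions) + 1
        self.sessions[session_id] = initiator
        return session_id

    def read(self, session_id: int, lba: int) -> str:
        # Fails (KeyError) if the initiator never logged in -- that
        # dependence on prior interactions is what "stateful" means.
        initiator = self.sessions[session_id]
        return f"block {lba} for {initiator}"

stateless = StatelessService()
print(stateless.handle({"initiator": "iqn.example:host1", "lba": 42}))

stateful = StatefulService()
sid = stateful.login("iqn.example:host1")
print(stateful.read(sid, 42))
```

The same split shows up in the protocol lists above: HTTP and NFSv3 behave like the first class, TCP and NFSv4 like the second.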
Q. Where do CIFS/SMB come into the picture?
A. CIFS/SMB is part of a network stack. We need to have a separate talk about network stacks and their layers. In this presentation, we were talking primarily about the physical layer of the networks and fabrics. To simplify greatly, there are multiple layers of protocols that run on top of the physical layer. In the case of FC, those protocols include the control plane protocols (such as FC-SW) and the data plane protocols. In FC, the most common data plane protocol is FCP (used by SCSI, FICON, and FC-NVMe). In the case of Ethernet, those protocols also include the control plane (such as TCP/IP) and data plane protocols. In Ethernet, there are many commonly used data plane protocols for storage (such as iSCSI, NFS, and CIFS/SMB).
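The plane/protocol groupings in that answer can be laid out as a small lookup table. This is purely illustrative (the groupings follow the answer text above; real stacks are more nuanced than a dictionary):

```python
# Illustrative mapping of the control/data-plane split described above
# to common protocols, per fabric type.
storage_stacks = {
    "Fibre Channel": {
        "control_plane": ["FC-SW"],
        "data_plane": ["FCP (SCSI)", "FICON", "FC-NVMe"],
    },
    "Ethernet": {
        "control_plane": ["TCP/IP"],
        "data_plane": ["iSCSI", "NFS", "CIFS/SMB"],
    },
}

for fabric, planes in storage_stacks.items():
    print(f"{fabric}: control={planes['control_plane']}, "
          f"data={planes['data_plane']}")
```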
Questions on the 2017 Ethernet Roadmap for Networked Storage
Last month, experts from Dell EMC, Intel, Mellanox and Microsoft convened to take a look ahead at what’s in store for Ethernet Networked Storage this year. It was a fascinating discussion of anticipated updates. If you missed the webcast, “2017 Ethernet Roadmap for Networked Storage,” it’s now available on-demand. We had a lot of great questions during the live event and we ran out of time to address them all, so here are answers from our speakers.
Q. What’s the future of twisted pair cable? What is the new speed being developed with twisted pair cable?
A. By twisted pair I assume you mean UTP CAT5, 6, 7, etc. The problem going forward with high-speed signaling is that the “U” in UTP stands for Un-Shielded, and the signal radiates off the wire very quickly. At 25G and 50G this is a real problem and forces the line-card end to have a big, power-consuming and costly chip to dig the signal out of the noise. Anything can be done, but at what cost? 25GBASE-T is being developed, but the reach is somewhere around 30 meters. Cost, size, and power consumption are all going up and reach is going down – all opposite to the trends in modern high-speed data centers. BASE-T will always have a place for those applications that don’t need the faster rates.
Q. What do you think of RCx standards and cables?
A. So far, Amphenol, JAE and Volex are the suppliers who are members of the MSA. Very few companies have announced or discussed RCx. In addition to a smaller connector, not having an EEPROM eliminates steps in cable assembly manufacture, hence helping lower the cost compared to traditional DAC cabling. The biggest advantage of RCx is that it can help eliminate bulky breakout cables within a rack, since a single RCx4 receptacle can accept a number of combinations of single-lane, 2-lane or 4-lane cable with the same connector on the host. RCx ports can be connected to existing QSFP/SFP infrastructure with appropriate cabling. It remains to be seen, however, whether it becomes a standard, popular product or remains a custom solution.
Q. How long does AOC normally reach, 3m or 30m?
A. AOCs pick up where DAC drops off, at about 3m. The most popular reaches are 3, 5, and 10m, and volume drops rapidly after 15, 20, 30, 50, and 100m. We are seeing Ethernet-connected HDDs at 2.5GbE x 2 ports, with Ceph touting this solution. This seems to play well into the 25/50/100GbE standards with the massive parallelism possible.
Q. How do we scale PCIe lanes to support NVMe drives to scale, and to replace the capacity we see with storage arrays populated completely with HDDs?
A. With the advent of PCIe Gen 4, the per-lane rate of PCIe is going from 8 GT/s to 16 GT/s. Scaling of PCIe is already happening.
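The doubling in per-lane rate translates directly into usable bandwidth. A back-of-the-envelope sketch (assuming 128b/130b line encoding, which both PCIe Gen 3 and Gen 4 use; figures are approximate):

```python
def pcie_bandwidth_gbytes(gt_per_s: float, lanes: int) -> float:
    """Approximate one-direction usable bandwidth of a PCIe link, in GB/s."""
    encoding_efficiency = 128 / 130          # 128b/130b encoding overhead
    bits_per_s = gt_per_s * 1e9 * encoding_efficiency * lanes
    return bits_per_s / 8 / 1e9              # bits -> bytes -> GB/s

# A typical NVMe drive uses a x4 link:
print(round(pcie_bandwidth_gbytes(8, 4), 2))    # Gen 3 x4: ~3.94 GB/s
print(round(pcie_bandwidth_gbytes(16, 4), 2))   # Gen 4 x4: ~7.88 GB/s
```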
Q. How many NVMe drives does it take to saturate 100GbE?
A. 3 or 4, depending on the individual drives.
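The arithmetic behind “3 or 4” is simple: 100 Gb/s is 12.5 GB/s of raw link bandwidth, and a fast NVMe drive of the era reads sequentially at roughly 3 to 3.5 GB/s. The per-drive figure below is an assumption for illustration, not a measured value:

```python
import math

link_gbytes_per_s = 100 / 8      # 100 Gb/s -> 12.5 GB/s
drive_gbytes_per_s = 3.2         # assumed per-drive sequential read rate

drives_needed = math.ceil(link_gbytes_per_s / drive_gbytes_per_s)
print(drives_needed)             # 4
```

With a slightly faster drive (around 4.2 GB/s or more), the answer drops to 3, hence the “3 or 4” range.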
Q. How about the reliability of Ethernet? A lot of people think Fibre Channel has better reliability than Ethernet.
A. It’s true that Fibre Channel is a lossless protocol. Ethernet frames are sometimes dropped by the switch; however, network storage using TCP has a built-in error-correction facility. TCP was designed at a time when networks were less robust than today. Ethernet networks these days are far more reliable.
Q. Do the 2.5GbE and 5GbE refer to the client side Ethernet port or the server Ethernet port?
A. It can exist on both the client side and the server side Ethernet port.
Q. Are there any 25GbE or 50GbE NICs available on the market?
A. Yes, there are many that are on the market from a number of vendors, including Dell, Mellanox, Intel, and a number of others.
Q. Commonly used Ethernet speeds are either 10GbE or 40GbE. Do the new 25GbE and 50GbE require new switches?
A. Yes, you need new switches to support 25GbE and 50GbE. This is, in part, because the SerDes rate per lane at 25 and 50GbE is 25Gb/s, which is not supported by the 10 and 40GbE switches with a maximum SerDes rate of 10Gb/s.
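The lane math behind that answer: an Ethernet port’s speed is its lane count times its SerDes rate, and 10/40GbE-era switch silicon tops out at 10 Gb/s per lane. A minimal sketch:

```python
def port_speed_gbps(lanes: int, serdes_gbps: int) -> int:
    """Port speed = number of lanes x SerDes rate per lane."""
    return lanes * serdes_gbps

print(port_speed_gbps(1, 10))   # 10GbE:  1 x 10G lane
print(port_speed_gbps(4, 10))   # 40GbE:  4 x 10G lanes
print(port_speed_gbps(1, 25))   # 25GbE:  1 x 25G lane -- needs 25G SerDes
print(port_speed_gbps(2, 25))   # 50GbE:  2 x 25G lanes
print(port_speed_gbps(4, 25))   # 100GbE: 4 x 25G lanes
```

Since 25GbE and 50GbE need a 25 Gb/s SerDes and the older switches only provide 10 Gb/s lanes, no combination of the old lanes yields those rates.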
Q. With a certain number of SerDes coming off the switch ASIC, which would you prefer to use 100G or 40G if assuming both are at the same cost?
A. Certainly 100G. You get 2.5X the bandwidth for the same cost under the assumptions made in the question.
Q. Are there any 100G/200G/400G switches and modulation available now?
A. There are many 100G Ethernet switches available on the market today, including Dell’s Z9100 and S6100, Mellanox’s SN2700, and a number of others. The 200G and 400G IEEE standards are not yet complete. I’m sure all switch vendors will come out with switches supporting those rates in the future.
Q. What does lambda mean?
A. Lambda is the symbol for wavelength.
Q. Is the 50GbE standard ratified now?
A. IEEE 802.3 just recently started development of a 50GbE standard based upon a single-lane 50 Gb/s physical layer interface. That standard is probably about 2 years away from ratification. The 25G Ethernet Consortium has a ratified specification for 50GbE based upon a dual-lane 25 Gb/s physical layer interface.
Q. Are there any parallel options for using 2 or 4 lanes like in 128GFCp?
A. Many Ethernet specifications are based upon parallel options. 10GBASE-T is based upon 4 twisted-pairs of copper cabling. 100GBASE-SR4 is based upon 4 lanes (8 fibers) of multimode fiber. Even the industry MSA for 100G over CWDM4 is based upon four wavelengths on a duplex single-mode fiber. In some instances, the parallel option is based upon the additional medium (extra wires or fibers) but with fiber optics, parallel can be created by using different wavelengths that don’t interfere with each other.
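All the parallel options named above follow the same pattern: multiply a per-lane rate by a lane count, where a “lane” can be a copper pair, a fiber pair, or a wavelength on a single fiber pair. A small illustrative table (rates per the examples in the answer):

```python
# (name, lane medium, lanes, per-lane Gb/s)
parallel_examples = [
    ("10GBASE-T",    "twisted pairs",              4,  2.5),
    ("100GBASE-SR4", "multimode fiber pairs",      4, 25),
    ("100G CWDM4",   "wavelengths on 1 fiber pair", 4, 25),
]

for name, medium, lanes, rate in parallel_examples:
    total = lanes * rate
    print(f"{name}: {lanes} x {rate} Gb/s over {medium} = {total:g} Gb/s")
```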
SNIA Storage Developer Conference-The Knowledge Continues
SNIA’s 18th Storage Developer Conference is officially a success, with 124 general and breakout sessions; Cloud Interoperability, Kinetic Storage, and SMB3 plugfests; ten Birds-of-a-Feather Sessions, and amazing networking among 450+ attendees. Sessions on NVMe over Fabrics won the title of most attended, but Persistent Memory, Object Storage, and Performance were right behind. Many thanks to SDC 2016 Sponsors, who engaged attendees in exciting technology discussions.
For those not familiar with SDC, this technical industry event is designed for a variety of storage technologists at various levels from developers to architects to product managers and more. And, true to SNIA’s commitment to educating the industry on current and future disruptive technologies, SDC content is now available to all – whether you attended or not – for download and viewing.
You’ll want to stream keynotes from Citigroup, Toshiba, DSSD, Los Alamos National Labs, Broadcom, Microsemi, and Intel – they’re available now on demand on SNIA’s YouTube channel, SNIAVideo.
All SDC presentations are now available for download, and over the next few months you can continue to download SDC podcasts, which combine audio and slides. The first podcast from SDC 2016 – on hyperscaler – as well as all 2015 SDC podcasts are available here, and more will be available in the coming weeks.
SNIA thanks all its members and colleagues who contributed to make SDC a success! A special thanks goes out to the SNIA Technical Council, a select group of acknowledged industry experts who work to guide SNIA technical efforts. In addition to driving the agenda and content for SDC, the Technical Council oversees and manages SNIA Technical Work Groups, reviews architectures submitted by Work Groups, and is the SNIA’s technical liaison to standards organizations. Learn more about these visionary leaders at http://www.snia.org/about/organization/tech_council.
And finally, don’t forget to mark your calendars now for SDC 2017 – September 11-14, 2017, again at the Hyatt Regency Santa Clara. Watch for the Call for Presentations to open in February 2017.
Securing Fibre Channel Storage
by Eric Hibbard, SNIA Storage Security TWG Chair, and SNIA Storage Security TWG team members
Fibre Channel is often viewed as a specialized form of networking that lives within data centers and neither has, nor requires, special security protections. Neither of these assumptions is true, but finding the appropriate details to secure Fibre Channel infrastructure can be challenging.
The ISO/IEC 27040:2015 Information technology – Security techniques – Storage Security standard provides detailed technical guidance in securing storage systems and ecosystems. However, while the coverage of this standard is quite broad, it lacks details for certain important topics.
ISO/IEC 27040:2015 addresses storage security risks and threats at a high level. This blog is written in the context of Fibre Channel. The following list is a summary of the major threats that may confront Fibre Channel implementations and deployments.
- Storage Theft: Theft of storage media or storage devices can be used to access data as well as to deny legitimate use of the data.
- Sniffing Storage Traffic: Storage traffic on dedicated storage networks or shared networks can be sniffed via passive network taps or traffic monitoring, revealing data, metadata, and storage protocol signaling. If the sniffed traffic includes authentication details, it may be possible for the attacker to replay (retransmit) this information in an attempt to escalate the attack.
- Network Disruption: Regardless of the underlying network technology, any software or congestion disruption to the network between the user and the storage system can degrade or disable storage.
- WWN Spoofing: An attacker gains access to a storage system in order to access/modify/deny data or metadata.
- Storage Masquerading: An attacker inserts a rogue storage device in order to access/modify/deny data or metadata supplied by a host.
- Corruption of Data: Accidental or intentional corruption of data can occur when the wrong hosts gain access to storage.
- Rogue Switch: An attacker inserts a rogue switch in order to perform reconnaissance on the fabric (e.g., configurations, policies, security parameters, etc.) or facilitate other attacks.
- Denial of Service (DoS): An attacker can disrupt, block or slow down access to data in a variety of ways by flooding storage networks with error messages or other approaches in an attempt to overload specific systems within the network.
A core element of Fibre Channel security is the ANSI INCITS 496-2012, Information Technology – Fibre Channel – Security Protocols – 2 (FC-SP-2) standard, which defines protocols to authenticate Fibre Channel entities, set up session encryption keys, negotiate parameters to ensure frame-by-frame integrity and confidentiality, and define and distribute policies across a Fibre Channel fabric. It is also worth noting that FC-SP-2 includes compliance elements, which is somewhat unique for FC standards.
Fibre Channel fabrics may be deployed across multiple, distantly separated sites, which make it critical that security services be available to assure consistent configurations and proper access controls.
A new whitepaper, one in a series from SNIA that addresses various elements of storage security, is intended to leverage the guidance in the ISO/IEC 27040 standard and enhance it with a specific focus on Fibre Channel (FC) security. To learn more about security and Fibre Channel, please visit www.snia.org/security and download the Storage Security: Fibre Channel Security whitepaper.
And mark your calendar for presentations and discussions on this important topic at the upcoming SNIA Data Storage Security Summit, September 22, 2016, at the Hyatt Regency Santa Clara, CA. Registration is complimentary – go to http://www.snia.org/dss-summit for details on how you can attend and get involved in the conversation.