
    Rock n’ Roll with SMB3

    February 23rd, 2017

    Server Message Block (SMB) is the core file-transfer protocol of Windows, MacOS, and Samba, and it has become ubiquitous – a widely deployed, 30-year-old family of network code.

    However, the latest iteration of SMB3 is almost unrecognizable when compared to versions only a few years old. That’s why the SNIA Ethernet Storage Forum (ESF) has invited Microsoft’s Ned Pyle, program manager of the SMB protocol, to speak at our live webcast, “Rockin’ and Rollin’ with SMB3.”

    Extensive reengineering has led to advanced capabilities that include multichannel, transparent failover, scale-out, and encryption. SMB Direct makes use of RDMA networking, creating a block-transport system that provides reliable transport for zettabytes of unstructured data worldwide.

    SMB3 forms the basis of hyperconverged and scale-out systems for virtualization and SQL Server. It is available for a variety of hardware devices, from printers and network-attached storage appliances to Storage Area Networks. It is often the most prevalent protocol on a network, providing high-performance data transfers as well as efficient end-user access over wide-area connections. Register now for the live event on April 5th to hear:

    • Brief background on SMB
    • An overview of the SMB 3.x family, first released with Windows 8, Windows Server 2012, MacOS 10.10, Samba 4.1, and Linux CIFS 3.12
    • What changed in SMB 3.1.1
    • Understanding SMB security, scenarios, and workloads
    • The deprecation and removal of the legacy SMB1 protocol
    • How SMB3 supports hyperconverged and scale-out storage

    This is a unique opportunity to “rock out” with an SMB3 expert on the front lines at Microsoft. We hope to see you on April 5th.


    SNIA Activities in Security, Containers, and File Storage on Tap at Three Bay Area Events

    February 14th, 2017

    SNIA will be out and about in February in San Francisco and Santa Clara, CA, focused on its security, container, and file storage activities.

    February 14-17, 2017, join SNIA in San Francisco at the RSA Conference in the OASIS Interop: KMIP & PKCS11 booth S2115. OASIS and SNIA member companies will be demonstrating the OASIS Key Management Interoperability Protocol (KMIP) through live interoperability across all participants. … Continue reading


    Would You Like Some Rosé with Your iSCSI?

    February 3rd, 2017

    Would you like some rosé with your iSCSI? I’m guessing that no one has ever asked you that before. But we at the SNIA Ethernet Storage Forum like to get pretty colorful in our “Everything You Wanted To Know about Storage But Were Too Proud To Ask” webcast series as we group common storage terms together by color rather than by number.

    In our next live webcast, Part Rosé – The iSCSI Pod, we will focus entirely on iSCSI, one of the most widely used technologies in data centers today. With ever-increasing Ethernet speeds, the technology is more and more appealing because of its relatively low cost to implement. However, like any other storage technology, there is more here than meets the eye.

    We’ve convened a great group of experts from Cisco, Mellanox and NetApp who will start by covering the basic elements to make your life easier if you are considering using iSCSI in your architecture, diving into:

    • iSCSI definition
    • iSCSI offload
    • Host-based iSCSI
    • TCP offload

    Like nearly everything else in storage, there is more here than just a protocol. I hope you’ll register today to join us on March 2nd and learn how to make the most of your iSCSI solution. And while we won’t be able to provide the rosé wine, our panel of experts will be on-hand to answer your questions.


    SNIA Recognizes Outstanding Individual and Group Contributors

    February 2nd, 2017

    The backbone of SNIA is its passionate and dedicated volunteers – over 3,500 from 160 companies involved in storage and technology.  At the end of each year, SNIA members vote anonymously to recognize both individuals and groups who have made significant contributions over that year to advancing SNIA’s mission to lead the storage industry worldwide in developing and promoting vendor-neutral architectures, standards, and educational services … Continue reading


    We’ve Been Thinking…What Does Hyperconverged Mean to Storage?

    February 1st, 2017

    Here at the SNIA Ethernet Storage Forum (ESF), we’ve been discussing how hyperconverged adoption will impact storage. Converged Infrastructure (CI), Hyperconverged Infrastructure (HCI), along with Cluster or Cloud In a Box (CIB) are popular trend topics that have gained both industry and customer adoption. As part of data infrastructures, CI, HCI, and CIB enable simplified deployment of resources (servers, storage, I/O networking, hypervisor, application software) across different environments.

    But what do these approaches mean for the storage environment? What are the key concerns and considerations related specifically to storage? How will the storage be connected to (or included in) the platform? Who will protect and back up the data? And most importantly, how do you know that you’re asking the right questions in order to get to the right answers?

    Find out on March 15th in a live SNIA-ESF webcast, “What Does Hyperconverged Mean to Storage.” We’ve invited expert Greg Schulz, founder and analyst of Server StorageIO, to answer the questions we’ve been debating. Join us, as Greg will move beyond the hype (pun intended) to discuss:

    • What are the storage considerations for CI, CIB and HCI
    • Why fast applications and fast servers need fast I/O
    • Networking and server-storage I/O considerations
    • How to avoid aggravation-causing aggregation (bottlenecks)
    • Aggregated vs. disaggregated vs. hybrid converged
    • Planning, comparing, benchmarking and decision-making
    • Data protection, management and east-west I/O traffic
    • Application and server north-south I/O traffic

    Register today and please bring your questions. We’ll be on-hand to answer them during this event. We hope to see you there!


    Buffers, Queues, and Caches, Oh My!

    January 18th, 2017

    Buffers and queues are part of every data center architecture and a critical part of performance – they can improve it as well as hinder it. A well-implemented buffer can mean the difference between a finely run system and a confusing nightmare of troubleshooting. Knowing how buffers and queues work in storage can help make your storage system shine.

    However, there is something of a mystique surrounding these different data center components, as many people don’t realize just how they’re used and why. Join our team of carefully-selected experts on February 14th in the next live webcast in our “Too Proud to Ask” series, “Everything You Wanted to Know About Storage But Were Too Proud To Ask – Part Teal: The Buffering Pod” where we’ll demystify this very important aspect of data center storage. You’ll learn:

    • What are buffers, caches, and queues, and why should you care about the differences?
    • What’s the difference between a read cache and a write cache?
    • What does “queue depth” mean?
    • What’s a buffer, a ring buffer, and a host memory buffer, and why does it matter? (See the short sketch after this list.)
    • What happens when things go wrong?
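    To give a flavor of one of those topics ahead of the webcast: a ring buffer is simply a fixed-size buffer whose write position wraps back to the start when it reaches the end, overwriting the oldest entries. The following is a minimal, illustrative Python sketch of the concept (a toy example, not production storage code and not material from the webcast); hardware and driver ring buffers, such as NIC receive rings, follow the same wrap-around idea.

```python
# Minimal illustrative ring (circular) buffer -- a toy sketch of the concept,
# not production storage code.
class RingBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = [None] * capacity   # fixed-size backing store
        self.head = 0                   # index of the oldest entry
        self.count = 0                  # number of valid entries

    def push(self, item):
        """Write at the tail; overwrite the oldest entry when full."""
        tail = (self.head + self.count) % self.capacity
        self.data[tail] = item
        if self.count < self.capacity:
            self.count += 1
        else:
            self.head = (self.head + 1) % self.capacity  # drop the oldest entry

    def pop(self):
        """Read and remove the oldest entry (FIFO order)."""
        if self.count == 0:
            raise IndexError("buffer is empty")
        item = self.data[self.head]
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return item

buf = RingBuffer(4)
for i in range(6):                        # write 6 items into a 4-slot buffer
    buf.push(i)
print([buf.pop() for _ in range(buf.count)])   # -> [2, 3, 4, 5]; 0 and 1 were overwritten
```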

    These are just some of the topics we’ll be covering, and while it won’t be an exhaustive look at buffers, caches and queues, you can be sure that you’ll get insight into this very important, and yet often overlooked, part of storage design.

    Register today and spend Valentine’s Day with our experts who will be on-hand to answer your questions on the spot!


    Clearing Up Confusion on Common Storage Networking Terms

    January 12th, 2017

    Do you ever feel a bit confused about common storage networking terms? You’re not alone. At our recent SNIA Ethernet Storage Forum webcast “Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Mauve,” we had experts from Cisco, Mellanox and NetApp explain the differences between:

    • Channel vs. Busses
    • Control Plane vs. Data Plane
    • Fabric vs. Network

    If you missed the live webcast, you can watch it on-demand. As promised, we’re also providing answers to the questions we got during the webcast. Between these questions and the presentation itself, we hope it will help you decode these common, but sometimes confusing terms.

    And remember, “Everything You Wanted To Know About Storage But Were Too Proud To Ask” is a webcast series with a “colorfully-named pod” for each topic we tackle. You can register now for our next webcast: Part Teal, The Buffering Pod, on Feb. 14th.

    Q. Why do we have Fibre and Fiber?

    A. “Fiber optics” is the term for the optical technology employed by Fibre Channel fabrics. While a common story is that the “Fibre” spelling came about to accommodate the French (FC is, after all, an international standard), in actuality it was a marketing idea to create a more distinctive name, and the British spelling – “Fibre” – was chosen.

    Q. Will OpenStack change all the rules of the game?

    A. Yes. OpenStack is all about centralizing the control plane of many different aspects of infrastructure.

    Q. The difference between control and data plane matters only when we discuss software defined storage and software defined networking, not in traditional switching and storage.

    A. It matters regardless. You need to understand how much each individual control plane can handle and how many control planes you have from an overall management perspective. In cases where you have too many control planes, SDN and SDS can be a benefit to you.

    Q. As I’ve heard that networks use stateless protocols, would FC do the same?

    A. Fibre Channel has several different Classes, which can be either stateful or stateless. Most applications of Fibre Channel are Class 3, as it is the preferred class for SCSI traffic. A connection between Fibre Channel endpoints is always stateful (as it involves a login process to the Fibre Channel fabric). The transport protocol is augmented by Fibre Channel exchanges, which are managed on a per-hop basis. Retransmissions are handled by devices when exchanges are incomplete or lost, meaning that each exchange is a stateful transmission, but the protocol itself is considered stateless in modern SCSI-transport Fibre Channel.

    iSCSI, as a connection-oriented protocol, creates a nexus between an initiator and a target, and is considered stateful. In addition, SMB, NFSv4, FTP, and TCP are stateful protocols, while NFSv2, NFSv3, HTTP, and IP are stateless protocols.
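    A rough, general-purpose illustration of that stateful/stateless distinction, using plain TCP and UDP sockets rather than storage protocols, is sketched below; the addresses, ports, and messages are arbitrary example values.

```python
# Toy illustration of stateful vs. stateless transports (TCP vs. UDP),
# not a storage-protocol example. Ports and messages are arbitrary.
import socket
import threading
import time

HOST, TCP_PORT, UDP_PORT = "127.0.0.1", 9001, 9002

def tcp_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, TCP_PORT))
    srv.listen(1)
    conn, _ = srv.accept()              # the accepted connection *is* the state
    print("TCP server received:", conn.recv(1024))
    conn.close()
    srv.close()

def udp_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((HOST, UDP_PORT))
    data, addr = srv.recvfrom(1024)     # no handshake, no session to track
    print("UDP server received:", data, "from", addr)
    srv.close()

threading.Thread(target=tcp_server).start()
threading.Thread(target=udp_server).start()
time.sleep(0.2)                         # give the servers a moment to bind

# Stateful: the client must establish a connection before any data flows,
# and both ends keep per-connection state until it is torn down.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect((HOST, TCP_PORT))
tcp.sendall(b"hello over TCP")
tcp.close()

# Stateless: each datagram stands alone; there is no setup or teardown.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", (HOST, UDP_PORT))
udp.close()
```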

    Q. Where do CIFS/SMB come into the picture?

    A. CIFS/SMB is part of a network stack. We need to have a separate talk about network stacks and their layers. In this presentation, we were talking primarily about the physical layer of the networks and fabrics. To overly simplify network stacks, there are multiple layers of protocols that run on top of the physical layer. In the case of FC, those protocols include the control plane protocols (such as FC-SW) and the data plane protocols. In FC, the most common data plane protocol is FCP (used by SCSI, FICON, and FC-NVMe). In the case of Ethernet, those protocols also include the control plane (such as TCP/IP) and data plane protocols. In Ethernet, there are many commonly used data plane protocols for storage (such as iSCSI, NFS, and CIFS/SMB).


    Attend Live – or Live Stream – SNIA’s Persistent Memory Summit January 18

    January 12th, 2017

    by Marty Foltyn

    SNIA’s Persistent Memory Summit makes its fifth annual appearance in Silicon Valley next Wednesday, January 18, and if you are in the vicinity of the Westin San Jose, you owe it to yourself to check it out.

    SNIA is well known for its technology-focused, no vendor-hype conferences, and this one-day event will feature 12 presentations and two panels that will “level set” … Continue reading


    Questions on the 2017 Ethernet Roadmap for Networked Storage

    January 9th, 2017

    Last month, experts from Dell EMC, Intel, Mellanox and Microsoft convened to take a look ahead at what’s in store for Ethernet Networked Storage this year. It was a fascinating discussion of anticipated updates. If you missed the webcast, “2017 Ethernet Roadmap for Networked Storage,” it’s now available on-demand. We had a lot of great questions during the live event and we ran out of time to address them all, so here are answers from our speakers.

    Q. What’s the future of twisted pair cable? What is the new speed being developed with twisted pair cable?

    A. By twisted pair I assume you mean UTP – Cat 5, 6, 7, etc. The problem going forward with high-speed signaling is that the “U” stands for unshielded, so the signal radiates off the wire very quickly. At 25G and 50G this is a real problem and forces the line-card end to have a big, power-consuming, and costly chip to dig the signal out of the noise. Anything can be done, but at what cost? 25GBASE-T is being developed, but the reach is somewhere around 30 meters. Cost, size, and power consumption are all going up while reach goes down – all opposite to the trends in modern high-speed data centers. BASE-T will always have a place for those applications that don’t need the faster rates.

    Q. What do you think of RCx standards and cables?

    A. So far, Amphenol, JAE, and Volex are the suppliers who are members of the MSA. Very few companies have announced or discussed RCx. In addition to a smaller connector, not having an EEPROM eliminates steps in cable assembly manufacturing, hence helping to lower the cost when compared to traditional DAC cabling. The biggest advantage of RCx is that it can help eliminate bulky breakout cables within a rack, since a single RCx4 receptacle can accept a number of combinations of single-lane, 2-lane, or 4-lane cables with the same connector on the host. RCx ports can be connected to existing QSFP/SFP infrastructure with appropriate cabling. It remains to be seen, however, whether it becomes a standard and popular product or remains a custom solution.

    Q. How long does an AOC normally reach – 3m or 30m?

    A. AOCs pick up where DACs drop off, at about 3m. The most popular reaches are 3, 5, and 10m, and volume drops rapidly beyond 15, 20, 30, 50, and 100m. We are seeing Ethernet-connected HDDs at 2.5GbE x 2 ports, with Ceph touting this solution. This seems to play well into the 25/50/100GbE standards with the massive parallelism possible.

    Q. How do we scale PCIe lanes to support NVMe drives as they scale, and to replace the capacity we see with storage arrays populated completely with HDDs?

    A. With the advent of PCIe Gen 4, the per-lane rate of PCIe is going from 8 GT/s to 16 GT/s. Scaling of PCIe is already happening.

    Q. How many NVMe drives does it take to saturate 100GbE?

    A. Three or four, depending on the individual drives.
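    As a rough back-of-the-envelope check on the two answers above (the exact figures depend on encoding, protocol overhead, and the specific drives, so the per-drive throughput numbers below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope bandwidth math for the two answers above. Figures are
# approximate: real throughput depends on encoding, protocol overhead, and
# the specific drives (the per-drive numbers below are assumptions).

# PCIe per-lane data rates, accounting for 128b/130b encoding (Gen3/Gen4).
gen3_lane_GBps = 8.0 * 128 / 130 / 8     # ~0.98 GB/s per lane
gen4_lane_GBps = 16.0 * 128 / 130 / 8    # ~1.97 GB/s per lane
print("PCIe x4: Gen3 ~%.1f GB/s, Gen4 ~%.1f GB/s"
      % (4 * gen3_lane_GBps, 4 * gen4_lane_GBps))

# 100GbE line rate, ignoring protocol overhead.
line_rate_GBps = 100.0 / 8               # 12.5 GB/s

# Illustrative NVMe read throughput per drive (assumed values).
for drive_GBps in (3.0, 3.5):
    print("Drives to saturate 100GbE at %.1f GB/s each: ~%.1f"
          % (drive_GBps, line_rate_GBps / drive_GBps))
```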

    Q. How about the reliability of Ethernet? A lot of people think Fibre Channel has better reliability than Ethernet.

    A. It’s true that Fibre Channel is a lossless protocol. Ethernet frames are sometimes dropped by the switch; however, network storage using TCP has a built-in error-correction facility. TCP was designed at a time when networks were less robust than today. Ethernet networks these days are far more reliable.

    Q. Do the 2.5GbE and 5GbE refer to the client side Ethernet port or the server Ethernet port?

    A. It can exist on both the client side and the server side Ethernet port.

    Q. Are there any 25GbE or 50GbE NICs available on the market?

    A. Yes, there are many that are on the market from a number of vendors, including Dell, Mellanox, Intel, and a number of others.

    Q. Commonly used Ethernet speeds are either 10GbE or 40GbE. Do the new 25GbE and 50GbE require new switches?

    A. Yes, you need new switches to support 25GbE and 50GbE. This is, in part, because the SerDes rate per lane at 25 and 50GbE is 25Gb/s, which is not supported by the 10 and 40GbE switches with a maximum SerDes rate of 10Gb/s.

    Q. With a certain number of SerDes coming off the switch ASIC, which would you prefer to use 100G or 40G if assuming both are at the same cost?

    A. Certainly 100G. You get 2.5X the bandwidth for the same cost under the assumptions made in the question.

    Q. Are there any 100G/200G/400G switches and modulation available now?

    A. There are many 100G Ethernet switches available on the market today, including Dell’s Z9100 and S6100, Mellanox’s SN2700, and a number of others. The 200G and 400G IEEE standards are not yet complete. I’m sure all switch vendors will come out with switches supporting those rates in the future.

    Q. What does lambda mean?

    A. Lambda (λ) is the symbol for wavelength.

    Q. Is the 50GbE standard ratified now?

    A. IEEE 802.3 just recently started development of a 50GbE standard based upon a single-lane 50 Gb/s physical layer interface. That standard is probably about 2 years away from ratification. The 25G Ethernet Consortium has a ratified specification for 50GbE based upon a dual-lane 25 Gb/s physical layer interface.

    Q. Are there any parallel options for using 2 or 4 lanes like in 128GFCp?

    A. Many Ethernet specifications are based upon parallel options. 10GBASE-T is based upon 4 twisted-pairs of copper cabling. 100GBASE-SR4 is based upon 4 lanes (8 fibers) of multimode fiber. Even the industry MSA for 100G over CWDM4 is based upon four wavelengths on a duplex single-mode fiber. In some instances, the parallel option is based upon the additional medium (extra wires or fibers) but with fiber optics, parallel can be created by using different wavelengths that don’t interfere with each other.



    Containers, Docker and Storage – An Expert Q&A

    December 19th, 2016

    Containers continue to be a hot topic today as is evidenced by the more than 2,000 people who have already viewed our SNIA Cloud webcasts, “Intro to Containers, Container Storage and Docker“ and “Containers: Best Practices and Data Management Services.” In this blog, our experts, Keith Hudgins of Docker and Andrew Sullivan of NetApp, address questions from our most recent live event.

    Q. What is the major challenge for storage in containerized environment?

    A. Containers move fast. Users can spin up and spin down containers extremely quickly. The biggest challenge in production-bound container environments is simply keeping up with the movement of data.

    Docker Engine does not delete base container images when the container is shut down. Likewise, Registry assumes you’ve got unlimited storage on hand. For containers that push frequent revisions (as would be the case in a continuous delivery environment), that leads to a lot of orphaned container images that can fill up all available storage if left unchecked.

    There are some community-led scripts that will help keep things under control. That’s the beauty of community-led technology.
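    As an example of the kind of housekeeping those scripts perform, here is a minimal sketch in Python that lists dangling (untagged) images via the Docker CLI and removes them. It is a hedged illustration rather than one of the community scripts; adapt and test it before pointing it at a real host (newer Docker releases also provide docker image prune for the same purpose).

```python
# Minimal sketch of the kind of cleanup the community scripts perform:
# find dangling (untagged) images and remove them via the Docker CLI.
# Adapt and test before using against a real host.
import subprocess

def dangling_image_ids():
    """Return the IDs of images that are no longer tagged by any repository."""
    out = subprocess.check_output(
        ["docker", "images", "--filter", "dangling=true", "--quiet"])
    return [line for line in out.decode().splitlines() if line.strip()]

def remove_images(image_ids):
    for image_id in image_ids:
        # This will fail (non-fatally here) if a container still uses the image.
        subprocess.call(["docker", "rmi", image_id])

if __name__ == "__main__":
    ids = dangling_image_ids()
    print("Removing %d dangling image(s)" % len(ids))
    remove_images(ids)
```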

    Q. What about the speed of retrieving the data from storage?

    A. That’s where being a solid storage architect comes in. Every storage system has different strengths and weaknesses, so it’s important to engineer your solution to fit your performance goals. Docker containers are running on the main kernel of the host system. IO is not constrained by abstraction, as in the case of virtual machines. Rather, it is constrained more by density – hundreds of containers on a host can push massive IOPS, so you want your pipes fat and data sources close to the host systems.

    Q. Can you expand on moving Docker Volumes from On-Premise bare metal to Cloud Service Providers? Data Migration? Encryption? 

    A. None of these capabilities is built into Docker Engine. We rely on external storage systems to provide those features. Private-to-cloud replication is primarily a feature of software-based companies, like Portworx, Blockbridge, or Hedvig. Encryption and migration are both common features across other companies as well. Flocker from ClusterHQ is a service broker system that provides many bolt-on features for the storage systems they support. You can also use community-supplied services like Ceph to get you there.

    Q. Are you familiar with “Flocker,” which apparently is able to copy persistent data to another container? Can you share your thoughts?

    A. Yes. ClusterHQ (makers of Flocker) provide an API broker that sits between storage engines and Docker (and other dynamic infrastructure providers, like OpenStack), and they also provide some bolt-on features like replication and encryption.

    Q. Is there any sort of feature in the volume plugins that allows a persistent volume to re-connect to a container if the container is moved across multiple hosts?

    A. There’s no feature in plugins to cover that specifically. The plugin API is very simple. In practice, what you would do is write your plugin to expose volumes to Docker Engine on every host where it’s possible to mount that volume. In your container specification, whether it’s a Compose file, a DAB file, or what have you, specify the name of your volume. Wherever that unique name is encountered, it will be mounted and attached to the container when it’s re-launched.
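    To make that pattern concrete, here is a small, hedged sketch using the Docker SDK for Python (the docker pip package); the volume name, image, and paths are made-up example values. Because the container references the volume purely by name, whichever host re-launches it can resolve and mount the same volume through the volume driver.

```python
# Sketch: reference a volume by name so that whichever host re-launches the
# container can resolve it through the volume driver. Names are illustrative.
import docker

client = docker.from_env()

# Create (or reuse) a named volume. With a third-party volume plugin you would
# pass driver="your-plugin" so the same name resolves on every host.
client.volumes.create(name="app-data")

# Run a container that mounts the volume by name at /data and returns its output.
output = client.containers.run(
    "alpine",
    command="sh -c 'echo hello >> /data/log.txt && cat /data/log.txt'",
    volumes={"app-data": {"bind": "/data", "mode": "rw"}},
)
print(output.decode())
```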

    If you have more questions on containers, Docker and storage, check out our first Q&A blog: Containers: No Shortage of Interest or Questions.

    I also encourage you to join our Containers opt-in email list. It will be a good way to keep up with everything SNIA Cloud is doing on this important technology.