
    Join SNIA-CSI at the OpenStack Summit

    October 21st, 2014

    Get the tips you need when implementing multiple cloud storage APIs. The SNIA Cloud Storage Initiative (CSI) is hosting a Birds of a Feather session – Tips to Implementing Multiple Cloud Storage APIs – at the OpenStack Summit in Paris on November 5th at 9:00 a.m. in Room 212/213.

    There are three main object storage APIs today: OpenStack’s Swift (open but not standardized), Amazon’s S3 (proprietary yet a de facto standard) and SNIA’s CDMI (an ISO standard). With three APIs to support, it might sound expensive or difficult to support all of them, yet not doing so could be costly when customers expect innovation, industry-standard solutions, and interoperability from your product.

    What about the similarities and differences between the APIs, and can they be reconciled? Can these APIs be effectively and efficiently implemented in a single product? I hope you’ll join us at this session to learn about and discuss various ways to cope with this situation. You will discover best practices and tips on how to implement these three protocols in your cloud storage solution.
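
    As a purely illustrative sketch (not something the session prescribes), one common approach is to code your product against a narrow internal object interface and write a thin adapter per API. The class and method names below are hypothetical; real S3, Swift and CDMI adapters would translate these calls into each API’s wire format:

        from abc import ABC, abstractmethod


        class ObjectStore(ABC):
            """The narrow set of operations all three APIs share."""

            @abstractmethod
            def put_object(self, container: str, name: str, data: bytes) -> None: ...

            @abstractmethod
            def get_object(self, container: str, name: str) -> bytes: ...


        class InMemoryStore(ObjectStore):
            """Stand-in backend; real S3/Swift/CDMI adapters would map these calls
            onto buckets/keys, containers/objects, or CDMI HTTP PUT/GET."""

            def __init__(self):
                self._data = {}  # (container, name) -> bytes

            def put_object(self, container, name, data):
                self._data[(container, name)] = data

            def get_object(self, container, name):
                return self._data[(container, name)]


        store = InMemoryStore()
        store.put_object("demo", "hello.txt", b"hello, object storage")
        print(store.get_object("demo", "hello.txt"))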

    Register now. I look forward to seeing you on November 5th at the OpenStack Summit.



    New Website for SSSI Highlights Key SSD Technology Activities

    October 21st, 2014

    A new, more easily navigable website is now online for the SNIA Solid State Storage Initiative (SSSI) technology community.

    Organized around the activities the SSSI focuses on – Performance, NVDIMM, Non-Volatile Memory Programming, and PCIe SSDs – and with two new tabs linking directly to “News” and “Resources”, the new format gives readers quick access to webcasts, white papers, articles, presentations, and technical materials critical to the rapidly changing world of Solid State Storage technology.

    The right navigation bar also highlights SSSI member companies and provides direct links to the SSSI blog you are reading now, the SSSI Twitter feed, and the SSSI LinkedIn group, “SSDs – What’s Important to You?”

    Check it out, and let us know what you think at asksssi@snia.org or on our social media links!


    New Webcast: Object Storage – Understanding Architectural Trade-Offs

    September 30th, 2014

    The Cloud Storage Initiative (CSI) is excited to announce a live Webcast as part of the upcoming BrightTalk Cloud Storage Summit on October 16th – Object Storage 201: Understanding Architectural Trade-Offs. It’s a follow-up to the SNIA Ethernet Storage Forum’s Object Storage 101: Understanding the What, How and Why behind Object Storage Technologies.

    Object-based storage systems are fast becoming one of the key building blocks for a cloud storage infrastructure. They address some of the shortcomings of more traditional file- and block-based storage and provide an alternative for unstructured data.

    An object storage system must accommodate growth (and yes, the rumors are true – data growth is a huge and accelerating problem), be flexible in its provisioning, support multiple geographies and legal frameworks, and cope with the inevitable issues of resilience, performance and availability.
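
    To make the growth point concrete, here is a small, purely illustrative Python sketch of consistent hashing, one common placement technique in object stores (the Webcast is not tied to any particular design). Adding a node relocates only a fraction of the objects:

        import bisect
        import hashlib


        def ring_hash(key: str) -> int:
            return int(hashlib.md5(key.encode()).hexdigest(), 16)


        class HashRing:
            """Toy consistent-hash ring with virtual nodes."""

            def __init__(self, nodes, vnodes=64):
                self._ring = []  # sorted list of (hash, node)
                for node in nodes:
                    self.add_node(node, vnodes)

            def add_node(self, node, vnodes=64):
                for i in range(vnodes):
                    bisect.insort(self._ring, (ring_hash(f"{node}#{i}"), node))

            def node_for(self, obj_name):
                idx = bisect.bisect(self._ring, (ring_hash(obj_name), "")) % len(self._ring)
                return self._ring[idx][1]


        ring = HashRing(["node-a", "node-b", "node-c"])
        before = {f"obj{i}": ring.node_for(f"obj{i}") for i in range(1000)}
        ring.add_node("node-d")
        moved = sum(1 for name, node in before.items() if ring.node_for(name) != node)
        print(f"{moved} of 1000 objects moved after adding a fourth node")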

    Register now for this Webcast. Experts from the SNIA Cloud Storage Initiative will discuss:

    • Object Storage Architectural Considerations
    • Replication and Erasure Coding for resilience
    • Pros and Cons of Hash Tables and Key-Value Databases
    • And more…

    This is a live presentation, so please bring your questions and we’ll do our very best to answer them. We hope you’ll join us on October 16th for an unbiased, deep dive into the design considerations for object storage systems.



    New Webcast: Cloud File Services: SMB/CIFS and NFS…in the Cloud

    September 18th, 2014

    Imagine evaporating your existing file server into the cloud with the same experience and level of control that you currently have on-premises. On October 1st, ESF will host a live Webcast that introduces the concept of Cloud File Services and discusses the pros and cons you need to consider.

    There are numerous companies and startups offering “sync & share” solutions across multiple devices and locations, but for the enterprise, caveats are everywhere. Register now for this Webcast to learn:

    • Key drivers for file storage
    • Split administration with sync & share solutions and on-premises file services
    • Applications over File Services on-premises (SMB 3, NFS 4.1)
    • Moving to the cloud: your storage OS in a hyperscaler or service provider
    • Accommodating existing File Services workloads with Cloud File Services
    • Accommodating cloud-hosted applications over Cloud File Services

    This Webcast will be a vendor-neutral and informative discussion on this hot topic. And since it’s live, we encourage you to bring your questions for our experts. I hope you’ll register today. We look forward to having you attend on October 1st.



    What the CSI is Up to at SDC

    September 9th, 2014

    The SNIA Storage Developer Conference (SDC) is less than a week away. We’re looking forward to the conference and in particular want to make note of some exciting news and events that pertain to work the CSI is doing to promote standards that will increase the adoption, interoperability and portability of data stored in the cloud.

    SDC Conference session: Introducing CDMI v1.1 – Tuesday, September 16th, 1:00 p.m., by David Silk. This session introduces the new CDMI 1.1 and provides an overview of the capabilities the Technical Work Group has added to the standard, and what CDMI implementers need to know when moving from CDMI 1.0.2 to CDMI 1.1.

    Cloud Interoperability Plugfest – Participants at the 12th Cloud Interoperability Plugfest will be testing the interoperability of their cloud storage interfaces based on CDMI. We always have a large showing of CDMI implementations at this event, but we are also looking for implementations of the Amazon S3, OpenStack Swift, Cinder and Manila interfaces.

    It’s not too late to register for this Plugfest. Find out how here.

    SDC 2014 is going to be exciting and educational. It’s “one stop shopping” for IT professionals who focus on the tools, technologies and developments needed for understanding and implementing efficient data storage, management and security. The CSI hopes to see you there.



    Upcoming Plugfests at SDC

    September 4th, 2014

    This year’s SNIA Storage Developer Conference (SDC) will take place in Santa Clara, CA, September 15-18. In addition to an exciting agenda with great speakers, there is an opportunity for vendors to participate in SNIA Plugfests. Two Plugfests I think are worth noting are SMB2/SMB3 and iSCSI.

    These Plugfests enable vendors to bring their implementations of SMB2/SMB3 and/or iSCSI to test, identify, and fix bugs in a collaborative setting, with the goal of providing a forum in which companies can develop interoperable products. SNIA provides and supports the networks and infrastructure for the Plugfest, creating a collaborative framework for testing. Plugfest participants work together to define the testing process, assuring that objectives are accomplished.

    Still Time to Register

    Great news! There is still time to register. Setup for the Plugfest begins on September 13, 2014, and testing begins on September 14th.

    Register here for the SMB2/SMB3 Plugfest

    Register here for the iSCSI Plugfest

    What to Expect at a Plugfest

    Learn more about what takes place at the Plugfests by watching the video interview of Jeremy Allison, Co-Creator of Samba, as he candidly talks about what to expect at an SDC Plugfest.

    Learn more about the Plugfest registration process. If you have additional questions, please contact Arnold Jones (arnold@snia.org).



    Expanding Your Data Center with FCoE – Q&A

    September 3rd, 2014

    At our recent live ESF Webcast, “Expert Insights: Expanding the Data Center with FCoE,” we examined the current state of FCoE and looked at how this protocol can expand the agility of the data center. If you missed it, it’s now available on-demand. We did not have time to address all the questions, so here are answers to them all. If you think of additional questions, please feel free to comment on this blog.

    Q. You mentioned using 40 and 100G for inter-switch links.  Are there use cases for end point (FCoE target and initiator) 40 and 100G connectivity?

    A. Today most end points are only supporting 10G, but we are starting to see 40G server offerings enter the market, and activity among the storage vendors designing these 40G products into their arrays.

    Q. What about interoperability between FCoE switch vendors?

    A. Each switch vendor has its own support matrix, which would need to be examined independently.

    Q. Is FCoE supported on copper cable?

    A. Yes, FCoE supports “Twin Ax” copper and is widely used for server-to-top-of-rack switch connections up to seven meters. In fact, Converged Network Adapters are now available that support 10GBASE-T copper cables with the familiar RJ-45 jack. At least one major switch vendor has qualified FCoE running over 10GBASE-T to 30 meters.

    Q. What distance does FCoE support?

    A. Distance limits are dependent on the hardware in use and the buffering available for Priority Flow Control. The lengths can vary from 3m up to over 80km. Top of rack switches would fall into the 3m range while larger class switch/directors would support longer lengths.

    Q. Can FCoE take part in management/orchestration by OpenStack Neutron?

    A. As of this writing there are no OpenStack extensions in Neutron for FCoE-specific plugins.

    Q. So how is this FC-BB-6 different than FIP snooping?

    A. FIP Snooping is part of FC-BB-5 (Appendix D), which allows switch devices to identify the FCoE frame format and create a forwarding ACL to a known FCF. FC-BB-6 creates additional architectural elements for deployments, including a “switch-less” environment (VN2VN) and a distributed switch architecture with a controlling FCF. Each of these cases is independent of the others, and you would choose one rather than the others. You can learn more about VN2VN from our SNIA-ESF Webcast, “How VN2VN Will Help Accelerate Adoption of FCoE.”
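
    For readers unfamiliar with FIP snooping, the toy Python sketch below illustrates the general idea only (it is not code from either standard and omits the real FIP operation checks): the bridge learns FCF addresses from FIP traffic and then forwards FCoE frames only when a known FCF is on one end. The MAC addresses are made up for the example:

        FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol frames
        FCOE_ETHERTYPE = 0x8906  # FCoE data frames


        class FipSnoopingBridge:
            """Toy model of the FIP snooping idea, greatly simplified."""

            def __init__(self):
                self.known_fcfs = set()  # MAC addresses believed to be FCFs

            def observe(self, ethertype, src_mac):
                # A real bridge inspects the FIP operation (e.g. advertisements)
                # before trusting a source; here any FIP sender is treated as an FCF.
                if ethertype == FIP_ETHERTYPE:
                    self.known_fcfs.add(src_mac)

            def allow(self, ethertype, src_mac, dst_mac):
                if ethertype != FCOE_ETHERTYPE:
                    return True  # the ACL only constrains FCoE traffic
                return src_mac in self.known_fcfs or dst_mac in self.known_fcfs


        bridge = FipSnoopingBridge()
        bridge.observe(FIP_ETHERTYPE, "0e:fc:00:00:00:01")  # learned FCF
        print(bridge.allow(FCOE_ETHERTYPE, "0e:fc:00:aa:bb:cc", "0e:fc:00:00:00:01"))  # True
        print(bridge.allow(FCOE_ETHERTYPE, "0e:fc:00:aa:bb:cc", "0e:fc:00:dd:ee:ff"))  # False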

    Q. You mentioned DCB at the beginning of the presentation. Are there other purposes for DCB? Seems like a lot of change in the network to create a DCB environment for just FCoE. What are some of the other technologies that can take advantage of DCB?

    A. First, DCB is becoming very ubiquitous. Unlike the early days of the standard, when only a few switches supported it, today most enterprise switches support the DCB protocols. As for other use cases, iSCSI benefits from DCB, since it eliminates dropped packets and the TCP/IP protocol’s backoff algorithm when packets are dropped, smoothing out response time for iSCSI traffic. There is also a protocol known as RoCE, or RDMA over Converged Ethernet, which requires the lossless fabric DCB creates to achieve consistent low latency and high bandwidth; it is basically the InfiniBand API running over Ethernet. Microsoft’s SMB Direct (the RDMA transport for its latest file serving protocol) and Hyper-V Live Migration can utilize RoCE, and there is an extension to iSCSI known as iSER, which replaces TCP/IP with RDMA for the iSCSI data mover, enabling all iSCSI reads and writes to be done as RDMA operations using RoCE.

    Q. Great point about RoCE. iSCSI RDMA (iSER) requires DCB if the adapters support RoCE, right?

    A. Agreed. Please see the answer above to the DCB question.

    Q. Did that Boeing Aerospace diagram still have traditional FC links, and if yes, where?

    A. There was no Fibre Channel storage attached in that environment. The green line in the legend was simply there to show that Fibre Channel would have its own color should there be any such links.

    Q. What is the price of a 10Gbps CNA compared to a 10Gbps NIC?

    A. Price is dependent on vendor and economics. But, there are several approaches to delivering the value of FCoE which can influence pricing:

    • Purpose-built silicon that offloads the FC and Ethernet protocol functions offers a number of advantages, including high performance, low CPU overhead, advanced features, etc., though even this depends on the vendor’s implementation. These added features come with the expectation of additional cost, but the processing of the protocols has to be done somewhere, and if you need your server CPUs to process applications instead of network protocols, then the value is justified.
    • With the introduction of Open FCoE drivers with DCB-supported NICs, new options are available for customers to deploy the value of FCoE at the host. Open FCoE offloads the FC processing onto the host CPU, and standard 10GbE NICs with DCB support can be used to manage the Ethernet transport functions. Where you have excess CPU capacity on your server, you might be in a position to reduce costs and deploy a software driver with a 10GbE or faster NIC enhanced with the limited set of hardware offloads necessary to achieve full performance with Open FCoE. However, Open FCoE isn’t available with every OS or every NIC, so you need to consider OS support and availability.
    • A third consideration is that most enterprise servers include some form of advanced 10GbE networking on the motherboard that supports either purpose-built silicon or DCB-enabled silicon. So, depending upon which server and OS you deploy, you may have several options via embedded silicon.



    What’s Happening with 25GbE

    August 25th, 2014

    In July 2014, IEEE 802.3 voted to form a Study Group for 25Gb/s Ethernet. There has been a lot of attention in the networking press lately about 25Gb/s Ethernet, but many people are asking what it is and how we got here. After all, 802.3 has already completed standards for 40Gb/s and 100Gb/s and is currently working on 400Gb/s, so from a pure speed perspective, starting a 25Gb/s project now does look like a step backwards.

    (Warning: the following discussion contains excessive physical layer jargon.)

    The Sweet Spot

    25GbE as a port speed is attractive because it makes use of 25Gb/s per lane signaling technology that has been in development for years in the industry, culminating in the recent completion of 802.3bj, the standard for 100GbE over backplane or TwinAx copper that utilizes four parallel lanes of 25Gb/s signaling to achieve the 100Gb/s port speed. Products implementing 25Gb/s signaling in CMOS technology are just starting to come to market, and the rate will likely be a sweet spot for many years, as higher rate signaling of 40Gb/s or 50Gb/s is still in early technology development phases. The ability to implement this high speed I/O in CMOS is important because it allows combining high-speed I/O with many millions of logic gates needed to implement Ethernet switches, controllers, FPGAs, and microprocessors. Thus specifying a MAC rate of 25Gb/s to utilize 25Gb/s serdes technology can enable product developers to optimize for both the lowest cost/bit and the highest overall bandwidth utilization of the switching fabric.

    4-Lane to 1-Lane Evolution

    To see how we got here and why 25Gb/s is interesting, it is useful to back up a couple of generations and look at 10Gb/s and 40Gb/s Ethernet. The earliest implementations of 10GbE relied on rather wide parallel electrical interfaces: XGMII and the 16-bit interface. Very soon after, however, 4-lane serdes-based interfaces became the norm, starting with XAUI (for chip-to-chip and chip-to-optical module use), which was then adapted to longer reaches on TwinAx and backplane (10GBASE-CX4 and 10GBASE-KX4). Preceding 10GbE’s higher-volume ramp (~2009) was the specification and technical feasibility of 10Gb/s on a single electrical serial lane: XFI was the first, followed by 10GBASE-KR (backplane) and SFI (as an optical module interface and for direct-attach TwinAx cable using the SFP+ pluggable form factor). KR and SFI started to ramp around 2009 and are still the highest-volume share of 10GbE ports in datacenter applications. The takeaway, in my opinion, is that single-lane interfaces helped the 10GbE volume ramp by reducing interconnect cost. Now look forward to 40GbE and 100GbE. The initial standard, 802.3ba, was completed in 2010. During the time that this specification was being developed, 10Gb/s serial interfaces were gaining traction, and consensus formed around the use of multiple 10Gb/s lanes in parallel to make the 40GbE and 100GbE electrical interfaces. For example, there is a great similarity between 10GBASE-KR and one lane of the 40GBASE-KR4 four-lane interface. In a similar fashion, 10Gb/s SFI for TwinAx and optics in the SFP+ form factor is similar to a lane of the 40GbE equivalent interfaces for TwinAx and optics in the QSFP+ form factor.

    But how does this get to 25Gb/s?

    Due to the similarity in the technology needed for 10GbE and 40GbE, it has become a common feature in Ethernet switch and NIC chips to implement a four-lane port for 40GbE that can be configured to use each lane separately, yielding four 10GbE ports.

    From there it is a natural extension that 100GbE ports being implemented using 802.3bj technology (4x25Gb/s) also can be configured to support four independent ports operating at 25Gb/s.  This is such a natural conclusion that multiple companies are implementing 25GbE even though it is not a standard.
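
    The lane arithmetic is simple enough to restate in a few illustrative lines of Python: the same four-lane silicon can present one high-speed port or four single-lane ports, depending on how the lanes are configured:

        # Same four-lane silicon, two ways to use it.
        for lane_rate_gbps in (10, 25):
            single_port = 4 * lane_rate_gbps   # e.g. 40GbE or 100GbE
            print(f"{lane_rate_gbps}Gb/s lanes: one {single_port}GbE port, "
                  f"or 4 x {lane_rate_gbps}GbE breakout ports")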

    In some environments, the existence of a standard is not a priority. For example, when a large-scale datacenter of compute, storage and networking is architected, owned and operated by one entity, that entity validates the necessary configuration to meet its requirements. For the broader market, however, there is typically a requirement for multi-vendor interoperability across a diverse set of configurations and uses. This is where Ethernet and IEEE 802.3 have provided value to the industry for over 30 years.

    Where’s the Application?

    Given the nature of their environment, it is the Cloud datacenter operators that are poised to be the early adopters of 25GbE. Will it also find a home in more traditional enterprise and storage markets? Time will tell, but in many environments ease of use, long shelf life, and multi-vendor interoperability are the priorities. For any environment, having the 25GbE specification maintained by IEEE 802.3 will help meet those needs.


    NVM Programming Model Recognized for Enterprise Business Innovation

    August 21st, 2014

    At the most recent Flash Memory Summit in August 2014, the SNIA NVM Programming Model was selected for a singular honor – industry recognition as a Best of Show for Enterprise Business Applications. Being recognized was quite unique, as the Model is not a product or a solution, as the other winners in the various categories were, but rather a body of work that defines new software programming models for non-volatile memory, also known as NVM.

    NVM technologies are currently advancing in such a way as to blur the line between storage and memory – which will radically transform the way software is written. The NVM Programming Model embraces this transformation, describing behavior provided by operating systems that enables applications, file systems, and other software to take advantage of new NVM capabilities.

    The Model addresses NVM extensions for existing devices such as SSDs and persistent memory, describing the differences between software written for block storage (SSDs and disks) and for persistent memory, and outlining the potential benefits of adapting software for persistent memory.
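
    To give a feel for that distinction, here is an illustrative Python sketch (not an excerpt from the specification): block-mode software issues explicit reads and writes, while persistent-memory-aware software accesses data through a memory mapping with loads and stores. The file name is hypothetical, and on ordinary hardware this is just a memory-mapped file; real persistent memory additionally needs the PM-aware file system and flush/ordering behavior the Model describes:

        import mmap
        import os

        PATH = "example.dat"  # hypothetical file standing in for a PM-backed region
        SIZE = 4096

        # Block-style access: explicit write() calls through the I/O stack.
        with open(PATH, "wb") as f:
            f.write(b"\0" * SIZE)

        # Memory-mapped, load/store-style access.
        fd = os.open(PATH, os.O_RDWR)
        try:
            buf = mmap.mmap(fd, SIZE)
            buf[0:5] = b"hello"    # a "store" into the mapped region
            buf.flush()            # ask that the update reach the medium
            print(buf[0:5])        # a "load" back out of the mapped region
            buf.close()
        finally:
            os.close(fd)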

    In presenting the Award to Doug Voigt, co-chair of the SNIA NVM Programming Technical Work Group, Jay Kramer, Chairman of the Flash Memory Summit Awards Program and President of Network Storage Advisors Inc., stated,  “Flash memory technology is currently experiencing a progression of innovations that can make a real difference in solving storage solution challenges. The industry is seeing a proliferation of new Non–Volatile Memory (NVM) functionality and new NVM technologies.  We are proud to select the SNIA NVM Programming Model for the Best of Show Award as it brings to the marketplace a new standard with a model that defines recommended behavior between various user spaces and operating system (OS) kernel components supporting NVM.”

    Congratulations to the many SNIA member company contributors to the Programming Model for this honor!  For information and to download the specification, visit http://www.snia.org/forums/sssi/nvmp.




    Upcoming Webcast: Is FCoE the Answer to Data Center Agility?

    August 4th, 2014

    Fibre Channel over Ethernet (FCoE) has been growing in popularity year after year. From access layer, to multi-hop and beyond, FCoE has established itself as a true solution in the data center.

    Interested in learning how the Data Center is expanding with FCoE? Join us on August 20th, at 4:00 pm ET, 1:00 pm PT, for our live Webcast, “Expanding the Data Center with FCoE.” It continues the conversation from our February Webcast, “Use Cases for iSCSI and FCoE,” which is now available on demand. This live SNIA Webcast examines the current state of FCoE and looks at how this protocol can expand the agility of the data center.

    We’ll take an unbiased look at the data center using FCoE, covering:
    • The history and evolution of convergence
    • Using FCoE as a storage overlay
    • Single-hop, multi-hop and beyond
    • 40G/100G – where does it fit?
    • Futures:
      • OpenStack
      • Defining Network Functions Virtualization (NFV)
      • Mapping NFV to FCoE
    • Real-world Use Cases

    This will be a vendor-neutral live presentation. Please join us on August 20th and bring your questions for our expert panel. Register now.