MRAM Topic of Open SSSI TechDev Committee Call Monday, February 2, at 2:00 pm PT

As part of its educational offering, the SNIA Solid State Storage Initiative TechDev committee will feature Barry Hoberman speaking on Spin Transfer and MRAM.

This conference call and SNIA WebEx at 2:00 pm Pacific time on February 2, 2015, are open to the public. Find the answers to your questions on Spin Transfer/MRAM, including:

  • What are the drivers pushing emergence/adoption of Spin Transfer/MRAM?
  • What are the compelling advantages of Spin Transfer/MRAM?
  • What are the key applications that will be able to take advantage of MRAM?
  • What has to happen for Spin Transfer to find traction and deployment?
  • When will Spin Transfer/MRAM market adoption take place?

Dial-in information:

  • WebEx: snia.webex.com
  • Meeting number: 794 116 066
  • Password: TechDev2015
  • Teleconference: 1-866-439-4480
  • Passcode: 57236696#

Looking forward to seeing you!

New ESF Webcast: Benefits of RDMA in Accelerating Ethernet Storage Connectivity

We’re kicking off our 2015 ESF Webcasts on March 4th with what we believe is an intriguing topic – how RDMA technologies can accelerate Ethernet Storage. Remote Direct Memory Access (RDMA) has existed for many years as an interconnect technology, providing low latency and high bandwidth in computing clusters. More recently, RDMA has gained traction as a method for accelerating storage connectivity and interconnectivity on Ethernet. In this Webcast, experts from Emulex, Intel and Microsoft will discuss:

  • Storage protocols that take advantage of RDMA
  • Overview of iSER for block storage
  • Deep dive into SMB Direct for file storage
  • Benefits of available RDMA technologies – both iWARP and RoCE – to accelerate your Ethernet storage connectivity

Register now. This live Webcast will provide attendees with a vendor-neutral look at RDMA technologies and should prove to be an interactive and informative event. I hope you’ll join us!

Volunteers Honored at SNIA Symposium

The SNIA Solid State Storage Initiative (SSSI), its affiliated Technical Work Groups (TWGs), and its individual members were honored at this week’s SNIA Member Symposium in San Jose, California.


From left, Paul Wassenberg, Marvell (SNIA SSSI Chair);
Paul von Behren, Intel (SNIA NVM Programming TWG Co-Chair);
Jim Ryan, Intel (SNIA SSSI Marketing Chair);
and Arthur Sainio, SMART Modular (SNIA NVDIMM SIG Co-Chair)

At Wednesday’s SNIA member recognition event, five awards were bestowed based on votes by colleagues in the Association:

1. MOST OUTSTANDING ACHIEVEMENTS OF A SNIA TECHNOLOGY COMMUNITY – SSSI (for the second year in a row)

2. MOST SIGNIFICANT CONTRIBUTIONS BY A COMMITTEE – NVDIMM SIG (in its first year of existence)

3. MOST SIGNIFICANT IMPACT BY A TECHNICAL WORK GROUP – NVM Programming TWG (adding to all the other awards it has received in the past two years)

4. VOLUNTEER OF THE YEAR – Jim Ryan (who, in addition to managing the Storage Industry Summit for two years in a row, is also the SSSI Marketing co-chair)

5. INDUSTRY IMPACT AWARD – Paul von Behren, NVM Programming TWG co-chair (and tireless advocate of NVM technology)

In addition, Phil Mills, SNIA SSSI Founding Chair, was inducted into the SNIA Hall of Fame.

More details are available at www.snia.org/about/awards.

Congratulations to these SNIA volunteers and groups, who are poised for a great 2015!

OpenStack Cloud Storage Q&A

More than 300 people have seen our Webcast “OpenStack Cloud Storage.” If you missed it, it’s now available on demand. It was a great session with a lot of questions from attendees. We did not have time to address them all – so here is a complete Q&A. If you think of any others, please comment on this blog. Also, mark your calendar for January 29th, when the SNIA Cloud Storage Initiative will continue its Developers Tutorial Series with a live Webcast on OpenStack Manila.

Q. Is it correct to say that one can use OpenStack on any vendor’s hardware?

A. Servers, yes, assuming the hardware is supported by Linux. Block storage requires a driver, and not all vendor systems have Cinder drivers.

Q. Is there any OpenStack investigation and/or development in the storage networking area?

A. Cinder includes support for FC and iSCSI. As of Icehouse, the FC support also includes auto-zoning. 

Q. Is there any monetization going on around OpenStack, like we see for distros of Linux?

A. Yes, there are already several commercial distributions available.

Q. Is erasure code needed to get a positive business case for Swift, when compared with traditional storage systems?

A. It is a way to reduce the cost of replication. Traditional storage systems typically already have erasure coding, in the form of RAID. Systems without erasure coding end up using more storage to achieve the same level of protection due to their use of 3-way replication.
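
As a rough illustration of the savings (the 10+4 layout below is just an example, not a Swift default):

```latex
\text{3-way replication: } \frac{\text{raw capacity}}{\text{usable capacity}} = 3.0\times
\qquad
\text{erasure code } (k{=}10,\ m{=}4): \frac{k+m}{k} = \frac{14}{10} = 1.4\times
```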

Q. Is erasure code currently implemented in the current Swift release?

A. No, it is a separate development stream, which has not been merged yet.

Q. Any limitation on the number of objects per container or total number of objects per Swift cluster?

A. Technically there are no limits. However, in practice, the fact that the containers are implemented using SQLite limits their size to a million or maybe a few million objects per container. However, due to the way that Swift partitions its metadata, each user can also have millions of containers, and there can be millions of users. So practically speaking, the total system can support an unlimited number of objects.

Q. What are some of the technical reasons for an enterprise to select Swift vs. Amazon S3? In other words, are they pretty much direct alternatives, or does each have its own preferred use cases?

A. They are more or less direct alternatives. There are some minor differences, but they are made for the same purpose. That said, S3 is only available from Amazon. There are some S3 compatible systems, but most of those also support Swift. Swift, on the other hand, is available open source or from multiple vendors. So if you want to run it in your own data center, or in a public cloud other than Amazon, you probably want Swift.

Q. If I wanted to play around with OpenStack, Cinder, and Swift in a lab environment (or in my basement), what do I need and how do I get started?

A. openstack.org is the best place to start. The “devstack” distribution is also good for playing around.
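
For a first experiment with Swift, a minimal sketch like the one below can help. It uses python-swiftclient to create a container and upload an object; the endpoint and credentials are placeholders for a typical devstack install, so adjust them to your environment.

```python
# Minimal sketch: talking to a devstack Swift endpoint with
# python-swiftclient (pip install python-swiftclient).
# The auth URL, user, and key below are placeholders.
from swiftclient import client

conn = client.Connection(
    authurl="http://192.168.0.10:5000/v2.0",  # placeholder Keystone URL
    user="demo",
    key="secret",
    tenant_name="demo",
    auth_version="2",
)
conn.put_container("test-container")                     # create a container
conn.put_object("test-container", "hello.txt",
                contents=b"hello swift")                 # upload an object
headers, objects = conn.get_container("test-container")  # list it back
print([obj["name"] for obj in objects])
```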

Q. Will you be showing any features for Kilo?

A. The “Futures” I showed will likely be Kilo features, though the final decision of what will be in Kilo won’t happen until just before release.

Q. Are there any plans to implement data encryption in Cinder?

A. I believe some of the back ends can support encryption already. Cinder is really just a provisioning and orchestration layer. Encryption is a data path feature, so it would need to be implemented in the back end.

Q. Some time back I heard OpenStack Swift is going to come up with block storage as well, any timeline for that?

A. I haven’t heard this; Swift is object storage.

Q. The performance characteristics of Cinder block services can vary quite widely. Is there any standard measure proposed within OpenStack to inform Nova or the application about the underlying Cinder block performance characteristics?

A. Volume types were designed to enable clouds to provide different levels of service. The meaning of these types is up to the cloud administrator. That said, Cinder does expose QoS features like minimum/maximum IOPS.
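
To illustrate, here is a hedged sketch of requesting a volume of a particular service level with python-cinderclient. The “gold” volume type, credentials, and endpoint are assumptions, since volume types are defined by each cloud’s administrator.

```python
# Hedged sketch: creating a volume with a specific volume type using
# python-cinderclient.  Credentials, endpoint, and the "gold" type
# are placeholders for a hypothetical cloud.
from cinderclient import client

cinder = client.Client("2", "demo", "secret", "demo",
                       "http://192.168.0.10:5000/v2.0")
vol = cinder.volumes.create(size=10, name="db-volume",
                            volume_type="gold")  # admin-defined service level
print(vol.id, vol.volume_type)
```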

Q. Is the hypervisor talking to a Cinder volume or to (for example) a NetApp or EMC volume?

A. The hypervisor talks to a volume the same way it does outside of OpenStack. For example, the KVM hypervisor can talk to volumes through LVM, or can mount SAN volumes directly.

Q. Which of these projects are most production-ready?

A. This is a hard question, and depends on your definition of production ready. It’s hard to do much without Nova, Glance, and Horizon. Most people use Cinder too, and Swift has been in production at HP and Rackspace for years. Neutron has a lot of complexity, so some people still use nova-network, but that has many limitations. For toy clouds you can avoid using Keystone, but you need it for a “production” cluster. The best way to get a “production ready” OpenStack is to get a supported commercial distribution.

Q. Are there any Plugfests?

A. No, however, the Cinder team has a fairly extensive and continuous integration process that drivers need to pass through. Swift does not because it doesn’t officially “support” any plugins.


OpenStack Manila Webcast – Shared File Services for the Cloud

On January 29th, we continue our Cloud Developer’s series by hosting a live Webcast on OpenStack Manila – the OpenStack file share service. Manila provides the management of file shares (for example, NFS and CIFS) as a core service to OpenStack. Manila currently works with a variety of vendors’ storage products, including NetApp, Red Hat, EMC, IBM, and with the Linux NFS server.
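
For a flavor of what Manila looks like to a client application, here is a minimal, hedged sketch using python-manilaclient. The credentials and endpoint are placeholders, and the exact client arguments can vary between releases, so treat this as an outline rather than a recipe.

```python
# Hedged sketch: creating an NFS share with python-manilaclient.
# Credentials and endpoint are placeholders; argument names may
# differ slightly between releases.
from manilaclient import client

manila = client.Client("1", "demo", "secret", "demo",
                       "http://192.168.0.10:5000/v2.0")
share = manila.shares.create(share_proto="NFS", size=1,
                             name="demo-share")  # a 1 GB NFS share
print(share.id, share.status)
```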

In this Webcast we will:

  • Introduce the Manila file share service
  • Review key Manila concepts
  • Describe the logical architecture of Manila and its API structure
  • Explain what’s new in Juno, the latest release of OpenStack
  • Highlight the roadmap for Manila in the next release, OpenStack Kilo, and beyond

Register now for this live event that we expect will be informative and interactive. I hope you’ll join us.

SNIA NVM Programming Technical Work Group Honored at Storage Visions Conference


On January 4, 2015, the SNIA NVM Programming Technical Work Group received the Storage Visions 2015 Professional Storage Development Visionary Award.

Storage Visions, a partner program of the Consumer Electronics Show, showcases digital storage and the entertainment content value chain as the storage conference of CES.

The Storage Visions “Visionary” Awards recognize companies advancing the state of the art in storage technologies utilized in consumer electronics, the media and entertainment industries; visionary products for the digital content value chain; and digital storage end users.


This most recent honor recognizes TWG work on creating next generation programming models, and follows an August 2014 award to the NVM Programming Model as a Best of Show at Flash Memory Summit for the Most Innovative Flash Memory Enterprise Business Application.

The SNIA Non-Volatile Memory (NVM) Programming Technical Work Group (TWG) delivers specifications describing the behavior of a common set of software interfaces that provide access to non-volatile memory (NVM). The TWG goal is to encourage a common ecosystem for NVM-enabled software without limiting the ability to innovate.

The TWG’s current work is the NVM Programming Model specification. This specification describes behavior used by applications and kernel components to access:

  • Emerging features for traditional block NVM (SSDs) and
  • A new programming model for persistent memory (PM) – NVM hardware designed to be treated by software similarly to system memory (see the sketch below).
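
To make the persistent memory idea concrete, here is a minimal sketch (with a placeholder path) of the load/store-style access the programming model describes: map a file into the address space and update it in place. On PM hardware exposed through a DAX-style file system the same pattern reaches persistent media directly; on an ordinary disk it simply exercises the page cache.

```python
# Minimal sketch of memory-style access to a (possibly persistent) file.
# The path is a placeholder for a file on PM-backed storage.
import mmap
import os

path = "/mnt/pmem/example.dat"             # placeholder PM-backed file
fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, 4096)                     # size the region
buf = mmap.mmap(fd, 4096)                  # map it like ordinary memory
buf[0:11] = b"hello, pmem"                 # store directly; no read()/write()
buf.flush()                                # request durability (msync)
buf.close()
os.close(fd)
```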

OpenStack Cloud Storage Webcast Preview

On January 14, 2015, the CSI continues its Developer Tutorial series by hosting a live Webcast on OpenStack Cloud Storage. As you likely know, OpenStack is an open source cloud operating system that provides pools of compute, storage, and networking.

OpenStack is currently being developed by thousands of developers from hundreds of companies across the globe, and is the basis of multiple public and private cloud offerings. Register now for this SNIA-CSI Webcast to hear Sam Fineberg, Distinguished Technologist at HP, discuss:

  • Storage aspects of OpenStack including the core projects for block storage (Cinder) and object storage (Swift)
  • Emerging shared file service
  • Common configurations and use cases for these technologies
  • Interaction with the other parts of OpenStack
  • New developments in Cinder and Swift that enable advanced array features, QoS, new storage fabrics, and new types of drives

I’ll be moderating this live event and Sam and I will be available to answer your specific questions. It should be an informative and interactive session. I hope you’ll join us!

Real-World FCoE Best Practices Q&A

At our recent live Webcast “Real-World FCoE Designs and Best Practices,” IT leaders from Thermo Fisher Scientific and Gannett Co. shared their experiences from their FCoE deployments – one single-hop, one multi-hop. It was a candid discussion on the lessons they learned. If you missed the Webcast, it’s now available on demand. We polled the audience to see what stage of FCoE deployment they’re in (see the poll results at the end of this blog). Just over half said they’re still in learning mode. To that end, here are answers to the questions we got during the Webcast. As you will see, many of these questions were directed to our guest end users regarding their experiences. I hope that it will help you in your journey. If you have additional questions, please ask them in the comments section in this blog and we’ll get back to you as soon as possible.

Q. Have any issues come up where the storage team needed to upgrade SAN switch firmware to solve a problem, but the network team objected to upgrading the FCFs? This assumes a shared firmware release on both network and SAN switch products (e.g., Cisco NX-OS).

A. No. We work together as a team, so as long as upgrades are planned out in advance, this has not been an issue.

Q. Is there any overhead at the host CPU level when using FCoE/CNA vs. using FC/HBA? Has anyone done any benchmarking on this?

A. To the host OS, the CNA presents an HBA and a 10G Ethernet adapter, so there is no difference from what is normally presented by separate Ethernet and FC adapters. In a software FCoE implementation there might be some host CPU overhead; check with the OS vendors for details on a particular implementation.

Q. Are there any high-level performance considerations when compared to typical FC SAN? Any obvious impact to IO latency as hosts are moved to FCoE compared to FC?

A. There is a performance increase in comparison to 8Gb Fibre Channel, since FCoE uses Ethernet’s 64b/66b encoding rather than the 8b/10b encoding that native 8Gb FC uses. On dedicated links, 10Gb FCoE can deliver around a 50% increase in throughput over 8Gb FC.
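
Behind the “around 50%” figure is a simple back-of-the-envelope calculation, assuming the standard line rates:

```latex
8\,\text{G FC: } 8.5\ \text{Gbaud} \times \tfrac{8}{10} = 6.8\ \text{Gb/s of payload}
\qquad
10\,\text{GbE: } 10.3125\ \text{Gbaud} \times \tfrac{64}{66} = 10.0\ \text{Gb/s of payload}
\qquad
\tfrac{10.0}{6.8} \approx 1.47
```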

Q. Have you planned to use 40G FCoE in your edge-core design?

A. We have purchased the hardware to go 40G if we choose to.

Q. Was DCB used to isolate the network traffic from the FC traffic at the CNA?

A. DCB is a set of technologies – including DCBX, PFC, and ETS – that are used with FCoE.

Q. Was FCoE implemented on existing hosts or just on new ones being added to the SAN?

A. Only on new hosts.

Q. Can you expand on Domain ID sprawl?

A. In FC or FCoE fabrics each storage vendor supports only a certain amount of switches per fabric. Each full FC or FCoE switch will consume a Domain ID, so it is important to consider how many switches or domain IDs are allowed in a supported fabric based on the storage vendor’s fabric recommendations. There are technologies such as NPIV and vendor specific technologies that can be helpful to limit domain ID sprawl in your fabrics.

Remember the poll I mentioned during the Webcast? Here are the results. Let us know where you are in your FCoE deployment plans.

[Poll results chart: attendees’ stage of FCoE deployment]


Take Our ESF Quick Poll

ESF has some exciting plans for 2015! We’re busy covering all things “Ethernet Storage” with topics on FCoE and iSCSI use cases, Cloud File Services, Object Storage, NVMe over Fabrics, SMB 3.0, NFS and more. We’re writing White Papers, hosting live expert Webcasts, publishing articles, and of course using this blog and Twitter to keep you updated on all that’s going on.

To help us in our mission to drive the broad adoption of Ethernet-connected storage networking technologies, we want to deliver content on the Ethernet Storage topics that matter most to you. Please take this quick poll – really it’s quick – only two questions – and help us shape the conversation for 2015. We look forward to your input and appreciate your support of SNIA-ESF. SNIA-ESF quick poll.


Webcast Preview: End Users Share their FCoE Stories

Fibre Channel over Ethernet (FCoE) has been growing in popularity year after year. From access layer, to multi-hop and beyond, FCoE has established itself as a true solution in the data center.

Are you interested in learning how customers are using FCoE? Join us on December 10th, at 3:00 pm ET, 1:00 pm PT for our live Webcast, “Real-World FCoE Designs and Best Practices.” This live SNIA Webcast examines the most widely used FCoE designs and looks at how FCoE is being used in real-world customer implementations. You will hear from two IT leaders who have implemented FCoE and why they did so. We will cover:

  • Real-world Use Cases and Customer Implementations of:
    • Single-Hop FCoE
    • Multi-Hop FCoE
    • Use of FCoE for Inter-Switch Links (ISLs)

This will be a vendor-neutral live presentation. Please join us on December 10th and bring your questions for our panel.