Join the Online Survey on Disaster Recovery

To start things off for the Disaster Recovery Special Interest Group (SIG) described in the previous blog post, the DPCO Committee has put together an online survey of how enterprises are doing data replication and Disaster Recovery and what issues they are encountering. Please join this effort by responding to this brief survey at: https://www.surveymonkey.com/r/W3DRKYD

THANK YOU in advance for doing this!  It should take less than 5 minutes to complete.

The Changing World of SNIA Technical Work – A Conversation with Technical Council Chair Mark Carlson

Mark Carlson is the current Chair of the SNIA Technical Council (TC). Mark has been a SNIA member and volunteer for over 18 years, and also wears many other SNIA hats. Recently, SNIA on Storage sat down with Mark to discuss his first nine months as the TC Chair and his views on the industry.

SNIA on Storage (SoS):  Within SNIA, what is the most important activity of the SNIA Technical Council?

Mark Carlson (MC): The SNIA Technical Council works to coordinate and approve the technical work going on within SNIA. This includes both SNIA Architecture (standards) and SNIA Software. The work is conducted within 13 SNIA Technical Work Groups (TWGs). The members of the TC are elected from the voting companies of SNIA, and the Council also includes appointed members and advisors as well as SNIA regional affiliate advisors.

SoS:  What has been your focus for these first nine months of 2016?

MC: The SNIA Technical Council has overseen a major effort to integrate a new standards organization into SNIA.  The creation of the new SNIA SFF Technology Affiliate (TA) Technical Work Group has brought in a very successful group of people and standards related to storage connectors and transceivers. This work group, formed in June 2016, carries forward the work of the longstanding SFF Committee, which operated from 1990 until mid-2016.  In 2016, SFF Committee leaders transitioned organizational stewardship to SNIA under a special membership class named Technology Affiliate, while retaining the long-standing technical focus on specifications, much as all SNIA TWGs do.

SoS:  What changes did SNIA implement to form the new Technology Affiliate membership class and why?

MC: The SNIA Policy and Procedures were changed to account for this new type of membership.  Companies can now join an Affiliate TWG without having to join SNIA as a US member.  Current SNIA members who want to participate in a Technology Affiliate like SFF can join it and pay the separate dues.  The SFF was a catalyst – we saw an organization looking for a new home as its membership evolved and its leadership transitioned.  They felt SNIA could be this home, but we needed to complete some activities to make it easier for them to continue their work seamlessly.  The SFF is now fully active within SNIA and is also working closely with T10 and T11, groups that SNIA members have long participated in.

SoS:  Is forming this Technology Affiliate a one-time activity?

MC: Definitely not.  SNIA is actively seeking organizations that are looking for the structure SNIA provides: IP policies, established infrastructure for conducting their work, and 160+ leading companies with volunteers who know storage and networking technology.

SoS:  What are some of the customer pain points you see in the industry?

MC: Critical pain points the TC has started to address with new TWGs over the last 24 months include performance of solid state storage arrays, where the SNIA Solid State Storage Systems (S4) TWG is working to identify, develop, and coordinate system performance standards for solid state storage systems; and object drives, where the Object Drive TWG is identifying, developing, and coordinating standards for object drives operating as storage nodes in scale-out storage solutions.  With the number of future disk drive interfaces emerging that add value from external storage to in-storage compute, we want to make sure they can be managed at scale and are interoperable.

SoS:  What’s upcoming for the next six months?

MC: The TC is currently working on a white paper to address data center drive requirements and the features and existing interface standards that satisfy some of those requirements.  Of course, not all the solutions to these requirements will come from SNIA, but we think SNIA is in a unique position to bring in the data center customers that need these new features and work with the drive vendors to prototype solutions that then make their way into other standards efforts.  Features that are targeted at the NVM Express, T10, and T13 committees would be coordinated with these customers.

SoS:  Can non-members get involved with SNIA?

MC:   Until very recently, if a company wanted to contribute to a software project within SNIA, they had to become a member. This was limiting to the community, and cut off contributions from those who were using the code, so SNIA has developed a convenient Contributor License Agreement (CLA) for contributions to individual projects.  This allows external contributions but does not change the software licensing. The CLA is compatible with the IP protections that the SNIA IP Policy provides to our members.  Our hope is that this will create a broader community of contributors to a more open SNIA, and facilitate open source project development even more.

SoS:  Will you be onsite for the upcoming SNIA Storage Developer Conference (SDC)?

MC: Absolutely!  I look forward to meeting SNIA members and colleagues September 19-22 at the Hyatt Regency Santa Clara.  We have a great agenda, now online, that the TC has developed for this, our 18th conference, and registration is now open.  SDC brings in more than 400 of the leading storage software and hardware developers, storage product and solution architects, product managers, storage product quality assurance engineers, product line CTOs, storage product customer support engineers, and in-house IT development staff from around the world.  If technical professionals are not familiar with the education and knowledge that SDC can provide, a great way to get a taste is to check out the SDC Podcasts now posted, and the new ones that will appear leading up to SDC 2016.

Flash Memory Summit Highlights SNIA Innovations in Persistent Memory & Flash

SNIA and the Solid State Storage Initiative (SSSI) invite you to join them at Flash Memory Summit 2016, August 8-11 at the Santa Clara Convention Center. SNIA members and colleagues receive $100 off any conference package using the code “SNIA16” by August 4 when registering for Flash Memory Summit at http://www.flashmemorysummit.com.

On Monday, August 8, from 1:00pm – 5:00pm, a SNIA Education Afternoon will be open to the public in SCCC Room 203/204, where attendees can learn about multiple storage-related topics with five SNIA Tutorials on flash storage, combined service infrastructures, VDBench, stored-data encryption, and Non-Volatile DIMM (NVDIMM) integration from SNIA member speakers.

Following the Education Afternoon, the SSSI will host a reception and networking event in SCCC Room 203/204 from 5:30 pm – 7:00 pm with SSSI leadership providing perspectives on the persistent memory and SSD markets, SSD performance, NVDIMM, SSD data recovery/erase, and interface technology. Attendees will also be entered into a drawing to win solid state drives.

SNIA and SSSI members will also be featured during the conference in the following sessions:

  • Persistent Memory (Preconference Session C)
    NVDIMM presentation by Arthur Sainio, SNIA NVDIMM SIG Co-Chair (SMART Modular)
    Monday, August 8, 8:30am- 12:00 noon 
  • Data Recovery of SSDs (Session 102-A)
    SIG activity discussion by Scott Holewinski, SSSI Data Recovery/Erase SIG Chair (Gillware)
    Tuesday, August 9, 9:45 am – 10:50 am
  • Persistent Memory – Beyond Flash, sponsored by the SNIA SSSI (Forum R-21)
    Chairperson: Jim Pappas, SNIA Board of Directors Vice-Chair/SSSI Co-Chair (Intel); papers presented by SNIA members Rob Peglar (Symbolic IO), Rob Davis (Mellanox), Ken Gibson (Intel), Doug Voigt (HP), Neal Christensen (Microsoft)
    Wednesday, August 10, 8:30 am – 11:00 am
  • NVDIMM Panel, organized by the SNIA NVDIMM SIG (Session 301-B)
    Chairperson: Jeff Chang, SNIA NVDIMM SIG Co-Chair (AgigA Tech); papers presented by SNIA members Alex Fuqa (HP), Neal Christensen (Microsoft)
    Thursday, August 11, 8:30 am – 9:45 am

Finally, don’t miss the SNIA SSSI in Expo booth #820 in Hall B and in the Solutions Showcase in Hall C on the FMS Exhibit Floor. Attendees can review a series of updated performance statistics on NVDIMM and SSD, see live NVDIMM demonstrations, access SSD data recovery/erase education, and preview a new white paper discussing erasure with regard to SSDs. SNIA representatives will also be present to discuss other SNIA programs such as certification, conformance testing, membership, and conferences.

Securing Fibre Channel Storage

by Eric Hibbard, SNIA Storage Security TWG Chair, and SNIA Storage Security TWG team members

Fibre Channel is often viewed as a specialized form of networking that lives within data centers and that neither has, nor requires, special security protections. Neither assumption is true, but finding the appropriate details to secure Fibre Channel infrastructure can be challenging.

The ISO/IEC 27040:2015 Information technology – Security techniques – Storage Security standard provides detailed technical guidance in securing storage systems and ecosystems. However, while the coverage of this standard is quite broad, it lacks details for certain important topics.

ISO/IEC 27040:2015 addresses storage security risks and threats at a high level. In the context of Fibre Channel, the following list summarizes the major threats that may confront Fibre Channel implementations and deployments.

  1. Storage Theft: Theft of storage media or storage devices can be used to access data as well as to deny legitimate use of the data.
  2. Sniffing Storage Traffic: Storage traffic on dedicated storage networks or shared networks can be sniffed via passive network taps or traffic monitoring, revealing data, metadata, and storage protocol signaling. If the sniffed traffic includes authentication details, it may be possible for the attacker to replay (retransmit) this information in an attempt to escalate the attack.
  3. Network Disruption: Regardless of the underlying network technology, any software-based or congestion-related disruption to the network between the user and the storage system can degrade or disable storage.
  4. WWN Spoofing: An attacker gains access to a storage system in order to access/modify/deny data or metadata.
  5. Storage Masquerading: An attacker inserts a rogue storage device in order to access/modify/deny data or metadata supplied by a host.
  6. Corruption of Data: Accidental or intentional corruption of data can occur when the wrong hosts gain access to storage.
  7. Rogue Switch: An attacker inserts a rogue switch in order to perform reconnaissance on the fabric (e.g., configurations, policies, security parameters, etc.) or facilitate other attacks.
  8. Denial of Service (DoS): An attacker can disrupt, block or slow down access to data in a variety of ways by flooding storage networks with error messages or other approaches in an attempt to overload specific systems within the network.

A core element of Fibre Channel security is the ANSI INCITS 496-2012, Information Technology – Fibre Channel – Security Protocols – 2 (FC-SP-2) standard, which defines protocols to authenticate Fibre Channel entities, set up session encryption keys, negotiate parameters to ensure frame-by-frame integrity and confidentiality, and define and distribute policies across a Fibre Channel fabric. It is also worth noting that FC-SP-2 includes compliance elements, which is unusual for FC standards.

Fibre Channel fabrics may be deployed across multiple, distantly separated sites, which makes it critical that security services be available to assure consistent configurations and proper access controls.

A new whitepaper, one in a series from SNIA that addresses various elements of storage security, is intended to leverage the guidance in the ISO/IEC 27040 standard and enhance it with a specific focus on Fibre Channel (FC) security.   To learn more about security and Fibre Channel, please visit www.snia.org/security and download the Storage Security: Fibre Channel Security whitepaper.

And mark your calendar for presentations and discussions on this important topic at the upcoming SNIA Data Storage Security Summit, September 22, 2016, at the Hyatt Regency Santa Clara, CA. Registration is complimentary – go to http://www.snia.org/dss-summit for details on how you can attend and get involved in the conversation.

Q&A – OpenStack Mitaka and Data Protection

At our recent SNIA Webcast “Data Protection and OpenStack Mitaka,” Ben Swartzlander, Project Team Lead OpenStack Manila (NetApp), and Dr. Sam Fineberg, Distinguished Technologist (HPE), provided terrific insight into data protection capabilities surrounding OpenStack. If you missed the Webcast, I encourage you to watch it on-demand at your convenience. We did not have time to get to all of our attendees’ questions during the live event, so as promised, here are answers to the questions we received.

Q. Why are there NFS drivers for Cinder?

 A. It’s fairly common in the virtualization world to store virtual disks as files in filesystems. NFS is widely used to connect hypervisors to storage arrays for the purpose of storing virtual disks, which is Cinder’s main purpose.
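
For context, here is a hedged sketch of what an NFS backend stanza in cinder.conf can look like; the backend name, share-list path, and export shown are illustrative placeholders, not recommendations:

    [DEFAULT]
    enabled_backends = nfs-1

    [nfs-1]
    # Cinder's in-tree NFS driver stores each volume as a file on an NFS share.
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    volume_backend_name = NFS_BACKEND
    # File listing the NFS exports to use, one per line (e.g. filer:/export/cinder)
    nfs_shares_config = /etc/cinder/nfs_shares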

 Q. What does “crash-consistent” mean?

 A. It means that the data on disk is what would be there if the system “crashed” at that point in time. In other words, the data reflects the order of the writes, and if any writes are lost, they are the most recent ones. To avoid losing data with a crash-consistent snapshot, one must force all recently written data and metadata to be flushed to disk prior to snapshotting, and prevent further changes during the snapshot operation.
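
As a sketch of that flushing step, an application can drain its own buffers and ask the OS to commit the data before the snapshot is taken. This is a hypothetical helper, not part of any OpenStack API:

    import os

    def quiesce_for_snapshot(path):
        """Illustrative only: flush one file's data to disk so a snapshot
        taken immediately afterwards is crash-consistent for that file."""
        with open(path, "r+b") as f:
            f.flush()              # drain user-space buffers
            os.fsync(f.fileno())   # push data and metadata through the OS cache

    # Filesystem-wide quiescing is typically done with tools such as fsfreeze(8).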

Q. How do you recover from a Cinder replication failover?

 A. The system will continue to function after the failover; however, there is currently no mechanism to “fail back” or “re-replicate” the volumes. This function is currently in development, and the OpenStack community will have a solution in a future release.

 Q. What is a Cinder volume type?

 A. Volume types are administrator-defined “menu choices” that users can select when creating new volumes. Each type maps to metadata, hidden from end users in the cinder.conf file, that Cinder uses to decide where to place a new volume at creation time and which drivers to use to configure it.
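
For illustration, one common linkage (though not the only one) is the volume_backend_name extra spec, which the scheduler matches against a backend stanza in cinder.conf. All names below are placeholders:

    [fast-ssd]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_backend_name = FAST_SSD

    # An admin then creates a matching volume type (CLI shown as comments):
    #   cinder type-create gold
    #   cinder type-key gold set volume_backend_name=FAST_SSD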

 Q. Can you replicate when multiple Cinder backends are in use?

 A. Yes.

 Q. What makes a Cinder “backup” different from a Cinder “snapshot”?

 A. Snapshots preserve the state of a volume at a point in time, allowing recovery from software or user errors and also allowing a volume to remain stable long enough to be backed up. Snapshots are very efficient to create, since many devices can create them without copying any data. However, snapshots are local to the primary data and typically have no additional protection from hardware failures. In other words, a snapshot is stored on the same storage devices and typically shares disk blocks with the original volume.

Backups are stored in a neutral format that can be restored anywhere, typically on separate (possibly remote) hardware, making them ideal for recovery from hardware failures.
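
As a sketch of the two operations using the python-cinderclient library (the credentials and volume UUID below are placeholders):

    from cinderclient import client

    # Illustrative credentials; '2' selects the Block Storage v2 API.
    cinder = client.Client('2', 'user', 'password', 'project',
                           'http://keystone.example.com:5000/v2.0')

    # A snapshot lives on the same backend as the volume: quick to create,
    # but it shares the volume's hardware fate.
    snap = cinder.volume_snapshots.create('VOLUME_UUID', name='pre-upgrade')

    # A backup is copied out in a neutral format, typically to separate
    # (possibly remote) hardware such as an object store.
    bkup = cinder.backups.create('VOLUME_UUID', name='nightly')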

 Q. Can you explain what “share types” are and how they work?

 A. They are Manila’s version of Cinder’s volume types. One key difference is that some of the metadata about them is not hidden; it is visible to end users. Certain APIs work only with shares whose types have specific capabilities.

 Q. What’s the difference between Cinder’s multi-attached and Manila’s shared file system?

A. Multi-attached Cinder volumes require cluster-aware filesystems or similar technology to be used on top of them. Ordinary file systems cannot handle multi-attachment and will corrupt data quickly if attached to more than one system. Therefore, Cinder’s multi-attach mechanism is only intended for filesystems or database software that is specifically designed to use it.

Manila’s shared filesystems use industry-standard network protocols, like NFS and SMB, to provide filesystems to arbitrary numbers of clients, where shared access is a fundamental part of the design.

 Q. Is it true that failover is automatic?

 A. No. Failover is not automatic for either Cinder or Manila.

 Q. A follow-up on failover: my question was about the array-loss scenario described in the block discussion. Once the admin decides the array has failed, does failover need to be performed on a VM-by-VM basis? How does each VM know to re-attach to another fabric?

A. Failover is all at once, but VMs do need to be reattached one at a time.

 Q. What about Cinder? Is unified object storage on SHV server the future of storage?

 A. This is a matter of opinion. We can’t give an unbiased response.

 Q. What about a “global file share/file system view” of a lot of Manila “file shares” (i.e. a scalable global name space…)

 A. Shares have disjoint namespaces intentionally. This allows Manila to provide a simple interface which works with lots of implementations. A single large namespace could be more valuable but would preclude many implementations.

What Do 650 of Your Colleagues Know That You Don’t Know?

Is your head spinning with all the variations in solid state storage technologies, interconnects, and application-level approaches on the market today?

Then you will want to mark your professional calendar – at YOUR convenience – to watch the SNIA BrightTALK webcast “Architectural Principles for Networked Solid State Storage Access” – one of the most successful ever produced by SNIA!

In this on-demand webcast, SNIA Ethernet Storage Forum and Solid State Storage Initiative experts J Metz, SNIA Board member from Cisco, and Doug Voigt, chair of the SNIA NVM Programming Technical Working Group and a member of the SNIA Technical Council, deliver the answers to questions like these:

  • How do applications see IO and memory access differently?
  • What is the difference between a memory and an SSD technology?
  • How do application and technology views permute?
  • How do memory and network interconnects change the equation?
  • What are persistence domains and why are they important?

Over 650 professionals have viewed this session – and now it is available for you free of charge on-demand!

Bookmark this link now and plan a great “desktop lunch” session all your own to learn the latest on the application of networked solid state technologies (and maybe you’ll even mention it to your colleagues)!

 

 

Everything You Wanted to Know about Storage, but were too Proud to Ask

Many times we know things without even realizing it, or remembering how we came to know them. In technology, this often comes from direct, personal experience rather than some systematic process. In turn, this leads to “best practices” that come from tribal knowledge, rather than any inherent codified set of rules to follow.

In the world of storage, for example, it’s very tempting to simply think of component parts that can be swapped out interchangeably. Change out your spinning hard drives for solid state, for example, and you can generally expect better performance. Change the way you connect to the storage device, and you get better performance… or do you?

Storage is more holistic than many people realize, and as a result there are often unintended consequences for even the simplest of modifications. With the ‘hockey stick-like’ growth in innovation over the past couple of years, many people have found themselves facing terms and concepts in storage that they feel they should have understood, but don’t.

This series of webcasts is designed to help you with those troublesome spots: everything you thought you should know about storage but were afraid to ask.

Here, we’re going to go all the way back to basics and define the terms, so that everyone can understand what is being talked about in those discussions. Not only are we going to define the terms, but we’re also going to talk about terms that are impacted by those concepts once you start mixing and matching.

For example, when we say that we have a “memory mapped” storage architecture, what does that mean? Can we have a memory mapped storage system at the other end of a network? If so, what protocol should we use – iSCSI? POSIX? NVMe over Fabrics? Would this be an idempotent system or an object-based storage system?

Now, if that above paragraph doesn’t send you into fits of laughter, then this series of webcasts is for you (hint: most of it was complete nonsense… but which part? Come watch to find out!).
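
To take just one of those terms seriously for a moment: memory mapping is real, and in its simplest local form it means accessing a file through ordinary loads and stores instead of read()/write() calls. Here is a minimal Python sketch (whether and how this extends across a network is exactly the kind of question the series digs into):

    import mmap

    # Assumes data.bin already exists and is at least 4 bytes long.
    with open("data.bin", "r+b") as f:
        mm = mmap.mmap(f.fileno(), 0)   # map the whole file into our address space
        mm[0:4] = b"ABCD"               # plain memory stores mutate the file
        mm.flush()                      # force dirty pages back to storage
        mm.close()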

On September 7th, we will start with the very basics – The Naming of the Parts. We’ll break down the entire storage picture and identify the places where most of the confusion falls. Join us in this first webcast – Part Chartreuse – where we’ll learn:

  • What an initiator is
  • What a target is
  • What a storage controller is
  • What a RAID is, and what a RAID controller is
  • What a Volume Manager is
  • What a Storage Stack is

With these fundamental parts, we’ll be able to place them into a context so that you can understand how all these pieces fit together to form a Data Center storage environment. Future webcasts will discuss:

Part Mauve – Architecture Pod:

  • Channel v. bus
  • Control plane v. data plane
  • Fabric v. network

Part Teal – Buffering Pod:

  • Buffering v. Queueing (with Queue Depth)
  • Flow Control
  • Ring Buffers

Part Rosé – iSCSI Pod:

  • iSCSI offload
  • TCP offload
  • Host-based iSCSI

Part Sepia – Getting-From-Here-To-There Pod:

  • Encapsulation v. Tuning
  • IOPS v. Latency v. Jitter

Part Vermillion – The What-if-Programming-and-Networking-Had-A-Baby Pod:

  • Storage APIs v. POSIX
  • Block v. File v. Object
  • Idempotent
  • Coherence v. Cache Coherence
  • Byte Addressable v. Logical Block Addressing

Part Taupe – Memory Pod:

  • Memory Mapping
  • Physical Region Page (PRP)
  • Scatter Gather Lists
  • Offset

Part Turquoise – Where-Does-My-Data-Go Pod:

  • Volatile v. Non-Volatile v. Persistent Memory
  • NVDIMM v. RAM v. DRAM v. SLC v. MLC v. TLC v. NAND v. 3D NAND v. Flash v. SSDs v. NVMe
  • NVMe (the protocol)

Part Burgundy – Orphans Pod:

  • Doorbells
  • Controller Memory Buffers

Of course, you may already be familiar with some, or all, of these concepts. If you are, then these webcasts aren’t for you. However, if you’re a seasoned professional in technology in another area (compute, networking, programming, etc.) and you want to brush up on some of the basics without judgment or expectations, this is the place for you.

Oh, and why are the parts named after colors, instead of numbered? Because there is no order to these webcasts. Each is a standalone seminar on understanding some of the elements of storage systems that can help you learn about technology without admitting that you were faking it the whole time! If you are looking for a starting point – the absolute beginning place – please start with Part Chartreuse, “The Naming of the Parts.” We look forward to seeing you on September 7th at 10:00 a.m. PT. Register today.

It’s Time for a Re-Introduction to Ethernet Networked Storage

Ethernet technology has been a proven standard for over 30 years, and there are many networked storage solutions based on Ethernet. While storage devices are evolving rapidly with new standards and specifications, Ethernet is moving toward higher speeds as well: 10Gbps, 25Gbps, 50Gbps and 100Gbps… making it time to re-introduce Ethernet networked storage.

That’s exactly what Rob Davis and I plan to do on August 4th in a live SNIA Ethernet Storage Forum Webcast, “Re-Introducing Ethernet Networked Storage.” We will start by providing a solid foundation on Ethernet networked storage and move to the latest advancements, challenges, use cases and benefits. You’ll hear:

  • The evolution of storage devices – spinning media to NVM
  • New standards: NVMe and NVMe over Fabric
  • A retrospective on traditional networked storage, including SAN and NAS
  • How new storage devices and new standards will impact Ethernet networked storage
  • Ethernet based software-defined storage and the hyper-converged model
  • A look ahead at new Ethernet technologies optimized for networked storage in the future

I hope you will join us on August 4th at 10:00 a.m. PT. We’re confident you will learn some new things about Ethernet networked storage. Register today!

Got DR Issues? Check out the new Disaster Recovery Special Interest Group

The SNIA Data Protection and Capacity Optimization Committee (DPCO) would like to announce the creation of a new Special Interest Group focusing on Data Replication for Disaster Recovery (DR) Standards. The SIG’s mission is to investigate existing ISO standards, carry out surveys, and study current guidance in order to identify whether there is a need to improve interoperability and resiliency, and/or education and best practices, in the area of data replication for disaster recovery.

Why are we doing this? There have been a number of industry observations that customers either don’t know about the standards that exist, cannot implement them, or have other DR needs that warrant exploration. The aim of this group is not to reinvent the wheel but to examine what is out there, determine what customers can use, and find out whether they are using appropriate standards, and if not, why.

What are we doing? We are starting with a survey to be sent out to as many industry members as possible. The survey will examine the replication and DR needs customers have, the systems they have implemented, their knowledge of standards, and other issues encountered in designing and operating DR, particularly in multi-site, multi-vendor environments.

What can you do? Get involved, of course! Contact the SNIA DPCO team to indicate your interest as we implement the organization structure for the Data Replication for DR Standards SIG.

John Olson and Gene Nagle
