Storage Performance Benchmarking Webcast Series Continues

Attendees cannot get enough of the SNIA Ethernet Storage Forum’s Storage Performance Benchmarking Webcast series. On March 8, 2016, our experts, Mark Rogov and Ken Cantrell, will return for the third installment of the series with “Storage Performance Benchmarking: Block Components.” This session continues our effort to bring anyone untrained in the storage performance arts up to a common base with the experts. In this Webcast, you will gain an understanding of the block components of modern storage arrays and learn key block storage terminology, including:

  • How storage media affects block storage performance
  • Integrity and performance trade-offs for data protection: RAID, erasure coding, and more
  • Terminology updates: seek time, rebuild time, garbage collection, queue depth, and service time (see the quick illustration below)
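
To give a feel for how the last two terms relate, a common rule of thumb (Little’s Law) ties the average number of outstanding I/Os (queue depth) to per-I/O response time and throughput. The short Python sketch below is a hypothetical illustration of that relationship; the numbers are invented for the example and are not taken from the webcast material.

```python
# Hypothetical illustration of Little's Law for block storage:
#   outstanding I/Os (queue depth) = throughput (IOPS) x response time (seconds)
# Rearranged, sustained IOPS can be estimated from queue depth and response time.

def estimated_iops(queue_depth: float, response_time_s: float) -> float:
    """Estimate sustained IOPS from average queue depth and per-I/O response time."""
    return queue_depth / response_time_s

# Example: 8 outstanding I/Os with an average 0.5 ms response time
print(estimated_iops(queue_depth=8, response_time_s=0.0005))  # -> 16000.0
```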

As always, the event will be live, and Mark and Ken will be on hand to answer your questions. I encourage you to register today. We hope to see you on March 8th!

Hitachi Data Systems’ Hu Yoshida Featured in Keynote at SNIA Annual Members Symposium

The Storage Networking Industry Association (SNIA) Annual Members Symposium is a must-attend event for both SNIA members and interested colleagues. Held at the Westin San Jose in San Jose, California, from January 19-22, 2016, the Symposium offers four full days of collaboration, networking, and knowledge about the latest advances in cloud storage, Ethernet storage, solid state storage, storage management, green computing, and more. Register and learn more at http://www.snia.org/events/symp.

A highlight of every SNIA Symposium is the keynotes from leaders in the industry, and 2016 is no exception. Whether you are a SNIA member or thinking about SNIA membership, you are invited to attend a very special keynote on IT Trends that Matter in 2016, presented on January 19 at 8:45 am by Mr. Hubert Yoshida, CTO of Hitachi Data Systems. Mr. Yoshida will comment on how innovative technology companies that quickly capitalize on business opportunities and satisfy the demands of today’s empowered consumer have caused a wave of disruption. In 2016, businesses will turn to IT for solutions that will keep them competitive. Chief Information Officers will invest in faster delivery of applications and analytics, and transform IT by leveraging the third platform: social, mobile, analytics, and cloud. Mr. Yoshida will advise on how to avoid distractions and remain focused on the IT trends that matter in 2016.

Hubert Yoshida is responsible for defining the technical direction of Hitachi Data Systems. Currently, he leads the company’s effort to help customers address data life cycle requirements and resolve compliance, governance and operational risk issues. He was instrumental in evangelizing the unique Hitachi approach to storage virtualization, which leveraged existing storage services within the Hitachi Universal Storage Platform® and extended them to externally-attached, heterogeneous storage systems.

Mr. Yoshida is well-known within the storage industry, and his blog has ranked among the “top 10 most influential” within the storage industry as evaluated by Network World. In October of 2006, Byte and Switch named him one of Storage Networking’s Heaviest Hitters, and in 2013 he was named one of the “Ten Most Impactful Tech Leaders” by Information Week.

Hitachi Data Systems is a valued member of SNIA. Currently, Thomas Rivera, Technical Associate – File, Content & Cloud Solutions at Hitachi Data Systems, serves as the Secretary of the SNIA Board of Directors.

For the full agenda of activities and events at the SNIA Symposium visit http://www.snia.org/events/symp.

SNIA NVM Summit Delivers the Persistent Memory Knowledge You Need

by Marty Foltyn

The discussion, use, and application of Non-Volatile Memory (NVM) has come a long way since the first SNIA NVM Summit in 2013. Significant improvements in persistent memory, with enormous capacity, memory-like speed, and non-volatility, will make the long-awaited promise of the convergence of storage and memory a reality. At this 4th annual NVM Summit, we will see how storage and memory have now converged, and learn that we are now faced with developing the needed ecosystem. Register and join colleagues on Wednesday, January 20, 2016 in San Jose, CA to learn more, or follow http://www.snia.org/nvmsummit to review presentations post-event.

The Summit day begins with Rick Coulson, Senior Fellow, Intel, discussing the most recent developments in persistent memory with a presentation on All the Ways 3D XPoint Impacts Systems Architecture.

Ethan Miller, Professor of Computer Science at UC Santa Cruz, will discuss Rethinking Benchmarks for Non-Volatile Memory Storage Systems. He will describe the challenges for benchmarks posed by the transition to NVM, and propose potential solutions to these challenges.

Ken Gibson, NVM SW Architecture, Intel, will present Memory is the New Storage: How Next Generation NVM DIMMs will Enable New Solutions That Use Memory as the High-Performance Storage Tier. This talk reviews some of the decades-old assumptions that change for suppliers of storage and data services as solutions move to memory as the new storage.

Jim Handy, General Director, Objective Analysis, and Tom Coughlin, President, Coughlin Associates will discuss Future Memories and Today’s Opportunities, exploring the role of NVM in today’s and future applications. They will give some market analysis and projections for the various NVM technologies in use today.

Matt Bryson, SVP-Research, ABR, will lead a panel on NVM Futures-Emerging Embedded Memory Technologies, exploring the current status and future opportunities for NVM technologies and in particular both embedded and standalone MRAM technologies and associated applications.

Edward Sharp, Chief, Strategy and Technology, PMC-Sierra, will present Changes Coming to Architecture with NVM. Although the IT industry has made tremendous progress innovating up and down the computing stack to enable, and take advantage of, non-volatile memory, is that progress sufficient, and where are the weakest links in fully unlocking the potential of NVM?

Don Jeanette and John Chen, VPs at Trendfocus, will review the solid state storage market and discuss what is happening in various segments, and why, as it relates to PCIe.

Dejan Vucinc, HGST San Jose Research Center, will discuss Latency in Context: Finding Room for NVMs in the Existing Software Ecosystem. HGST Research has been working diligently to find out where there is room in the existing hardware/software ecosystem for emerging NVM technology when viewed as block storage rather than main memory. Vucinc will present an update on previously published results using prototype PCI Express-attached PCM SSDs and HGST’s custom device protocol, DC Express, as well as measurements of its latency and performance through a proper device driver using several different kinds of Linux kernel block layer architecture.

Arthur Sainio, Director Marketing, SMART Modular and Co-Chair, SNIA NVDIMM SIG, will lead a panel on NVDIMM, discussing how new media types are joining NAND Flash and how enhanced controllers and networking are being developed to unlock the latency and throughput advantages of NVDIMMs.

Neal Christiansen, Principal Development Lead, Microsoft, will discuss Storage Class Memory Support in the Windows OS. Storage Class Memories (SCM) have been the topic of R&D for the last few years, and with the promise of near-term product delivery, the question is how Windows will be enabled for SCM products and how applications can take advantage of these capabilities.

Jeff Moyer, Principal Software Engineer, Red Hat will give an overview of the current state of Persistent Memory Support in the Linux Kernel.

Cristian Diaconu, Principal Software Engineer, Microsoft, will present Microsoft SQL Hekaton – Towards Large Scale Use of PM for In-memory Databases, using the example of Hekaton (the SQL Server in-memory database engine) to break down the opportunity areas for non-volatile memory in the database space.

Tom Talpey, Architect, File Server Team, Microsoft, will discuss Going Remote at Low Latency: A Future Networked NVM Ecosystem. As new ultra-low latency storage such as Persistent Memory and NVM is deployed, it becomes necessary to provide remote access – for replication, availability and resiliency to errors.

Kevin Deierling, VP Marketing, Mellanox, will discuss the role of the network in developing Persistent Memory over Fabrics, and the key goals and fabric feature requirements involved.

Data Protection in the Cloud FAQ

SNIA recently hosted a multi-vendor discussion on leveraging the cloud for data protection. If you missed the Webcast, “Moving Data Protection to the Cloud: Trends, Challenges and Strategies”, it’s now available on-demand. As promised during the live event, we’ve compiled answers to some of the most frequently asked questions on this timely topic. Answers from SNIA as well as our vendor panelists are included. If you have additional questions, please comment on this blog and we’ll get back to you as soon as possible.

Q. What is the significance of NIST FIPS 140-2 Certification?

Acronis: FIPS 140-2 certification can be a requirement for certain entities before they will use cloud-based solutions. It is important to understand the customer you are going after and whether this will be a requirement. Many small businesses do not require FIPS, but some do.

Asigra: Organizations that are looking to move to a cloud-based data protection solution should strongly consider solutions that have been validated by the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, as this certification indicates that the solution has been tested and meets the most current security requirements for cryptographic modules, or encryption. It is important to validate that the data is encrypted at rest and in flight for security and compliance purposes. NIST issues numbered certificates to solution providers as validation that their solution was tested and approved.

SolidFire: FIPS 140-2 has four levels of security, 1-4, depending on what the application requires. FIPS stands for Federal Information Processing Standard and is required by some non-military federal agencies before hardware/software is allowed in their datacenters. This standard describes the requirements for how sensitive but unclassified information is stored, focusing on how cryptographic modules secure information for these systems.

Q. How do you ensure you have real-time data protection as well as protection from human error? If the data is replicated, but the state of the data is incorrect (corrupt/deleted)… then the DR plan has not succeeded.

SNIA: The best way to guard against human error or corruption is with regular point-in-time snapshots; some snapshots can be retained for a limited length of time while others are kept for as long as the data needs to be retained.  Snapshots can be done in the cloud as well as in local storage.

Acronis: Each business needs to think through its retention plan to mitigate such cases. For example, it might keep 7 daily backups, 4 weekly backups, 12 monthly backups, and one yearly backup. In addition, it is good to have a system that allows you to test the backup with a simulated recovery to guarantee that data has not been corrupted.
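
To illustrate how such a rotation plays out, here is a minimal, hypothetical Python sketch (not part of Acronis’s answer) that computes which daily backups would survive a 7-daily / 4-weekly / 12-monthly / 1-yearly policy; the week and month boundaries chosen here are assumptions for the example.

```python
from datetime import date, timedelta

def retained_backups(today: date, days_of_history: int = 400) -> list:
    """Which daily backup dates survive a hypothetical 7-daily / 4-weekly /
    12-monthly / 1-yearly rotation (weeklies on Sundays, monthlies on the 1st)."""
    all_backups = [today - timedelta(days=n) for n in range(days_of_history)]  # newest first
    keep = set(all_backups[:7])                                    # last 7 dailies
    weeklies = [d for d in all_backups if d.weekday() == 6]        # Sundays
    keep.update(weeklies[:4])                                      # last 4 weeklies
    monthlies = [d for d in all_backups if d.day == 1]             # 1st of each month
    keep.update(monthlies[:12])                                    # last 12 monthlies
    yearlies = [d for d in all_backups if d.month == 1 and d.day == 1]
    keep.update(yearlies[:1])                                      # most recent yearly
    return sorted(keep, reverse=True)

print(retained_backups(date(2016, 1, 15)))
```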

Asigra: Organizations that are migrating to SaaS-based applications like Google Apps, Microsoft Office 365 and Salesforce.com can protect the data created and stored in those applications by using a cloud-based data protection solution that backs the data up to a third-party cloud meeting the organization’s unique data protection requirements. You need to take responsibility for protecting your data born in the cloud, much as you protect data created in traditional on-premises applications and databases. The responsibility for data protection does not move to the SaaS application provider; it remains with you.

For example, user error is one of the top ways that data is lost in the cloud. With Microsoft Office 365, by default, deleted emails and mailboxes are unrecoverable after 30 days; if you cancel your subscription, Microsoft deletes all your data after 90 days; and Microsoft’s maximum liability is $5,000 US or what a customer paid during the last 12 months in subscription fees – assuming you can prove it was Microsoft’s fault. All the more reason you need a data protection strategy in place for data born in the cloud.

SolidFire: You need a real-time asynchronous replication technology that achieves a low RPO without relying on snapshots. Application-consistent snapshots must be used alongside that real-time replication to achieve both real-time and point-in-time protection. Consider the scenario where you perform a successful failover but end up with corrupted data: with application-consistent snapshots at the DR site, you would be able to roll back instantly to a point in time when the data and the application were in a known good state.

Q. What’s the easiest and most effective way for companies to take advantage of cloud data protection solutions today? Where should we start?

SNIA: The easiest way to ease into using cloud storage is to either (1) use the direct cloud interface of your backup software, if it has one, to set up an offsite backup, or (2) use a cloud storage gateway that allows public or private cloud storage to appear as another local NAS resource.

Acronis: The easiest way is to use a solution that supports both cloud and on-premises data protection. Then you can start by backing up certain workloads to the cloud and adding more over time. Today, we see that many workloads are protected with both a cloud and an on-premises copy.

Asigra: Organizations should start with non-production, non-critical workloads to test the cloud-based data protection solution and ensure that it meets their needs before moving to critical workloads. It is important to identify and understand your corporate requirements for a public, private and/or hybrid cloud architecture, as well as the workloads that will be moved to the cloud and the timing of that transition. Organizations may also want to consult a third-party IT solutions provider with expertise and experience in cloud-based data protection to explore how others are leveraging cloud-based solutions, and to conduct a data classification exercise to understand which young data needs to be readily available versus which older data needs to be retained for longer periods for compliance purposes. It is important that organizations identify their required Recovery Time Objectives and Recovery Point Objectives when setting up the new solution to ensure that, in the event of a disaster, they are able to meet these requirements. Tip: Retain the services of a trusted IT solution provider and run a proof of concept or test drive the solution before moving to full production.

SolidFire: Find a simple and automated solution that fits into your budget. Work with your local value-added reseller of data protection services. The best thing to do is NOT wait. Even if it’s something like Carbonite, it’s better than nothing. Don’t get caught off guard. No one plans for a disaster.

Q. Is it sensible to move to a pay-as-you-go service for data that may be retained for 7, 10, 30, or even 100 years?

SNIA: Long-term retention does, of course, demand low-cost storage, and although the major public cloud storage vendors offer low pay-as-you-go rates, those costs can add up to significant amounts over a long period of time, especially if there is any regular need to access the data. An organization can keep control over the costs of long-term storage by setting up an in-house object storage system (“private cloud”) using “white box” hardware and appropriate software such as what is offered by Cloudian, Scality, or Caringo. Another way to control the costs of long-term storage is via the use of tape. Note that any of these methods — public cloud, private cloud, or tape — require an IT organization, or its service provider, to regularly monitor the state of the storage and periodically refresh it; there is always potential over time for hardware to fail, or for the storage media to deteriorate, resulting in what is called bit rot.
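
As a back-of-the-envelope illustration of how pay-as-you-go costs accumulate, the Python sketch below multiplies a purely hypothetical per-GB monthly rate over a long retention period; the rate and capacity are invented for the example and are not quotes from any provider.

```python
# All numbers below are hypothetical and for illustration only.
capacity_gb = 100 * 1024            # 100 TB retained
monthly_rate_per_gb = 0.01          # assumed pay-as-you-go rate, $/GB/month
years = 10

storage_only = capacity_gb * monthly_rate_per_gb * 12 * years
print(f"Storage alone over {years} years: ${storage_only:,.0f}")   # -> $122,880
# Retrieval/egress fees, if the data is accessed regularly, come on top of this,
# which is why long retention periods deserve their own cost model.
```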

Acronis: The cost of storage is dropping dramatically and will continue to do so. The best strategy is to go with a pay-as-you-go model with the ability to adjust pricing (downward) at least once a year. Buying your own storage will lock you into pricing over too long a period.

SolidFire: The risk of moving to a pay-as-you-go service for that long is that you lock yourself in for as long as you need to keep the data. Make sure that, contractually, you can migrate or move the data away from the provider, even if it’s for a fee. The sensible part is that you can contract that portion of your IT needs out and focus on your business and advancing it, rather than worrying about completing backups on your own.

Q. Is it possible to set up a backup so that one copy is with one cloud provider and another with a second cloud provider (replicated between them, not just doing the backup twice) in case one cloud provider goes out of business?

SNIA: Standards like SNIA’s CDMI (Cloud Data Management Interface) make replication between different cloud vendors pretty straightforward, since CDMI provides a data- and metadata-neutral way of transferring data, and provides both standard and extensible metadata to control policy as well.
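
As a rough sketch of what that looks like in practice, the Python example below reads a data object from one CDMI endpoint and writes it to another; the endpoints and paths are hypothetical placeholders, and a real deployment would also need authentication and error handling.

```python
import requests

# Hypothetical CDMI endpoints; both providers are assumed to speak CDMI 1.0.2.
CDMI_HEADERS = {
    "X-CDMI-Specification-Version": "1.0.2",
    "Accept": "application/cdmi-object",
    "Content-Type": "application/cdmi-object",
}

def copy_cdmi_object(src_url: str, dst_url: str) -> None:
    """Copy one CDMI data object (value + metadata) between two clouds."""
    src = requests.get(src_url, headers=CDMI_HEADERS)
    src.raise_for_status()
    obj = src.json()  # CDMI data objects are JSON: 'value', 'metadata', encoding, ...

    body = {"value": obj["value"], "metadata": obj.get("metadata", {})}
    if "valuetransferencoding" in obj:          # preserve base64 encoding if present
        body["valuetransferencoding"] = obj["valuetransferencoding"]
    dst = requests.put(dst_url, headers=CDMI_HEADERS, json=body)
    dst.raise_for_status()

copy_cdmi_object(
    "https://cloud-a.example.com/cdmi/backups/db.bak",
    "https://cloud-b.example.com/cdmi/backups/db.bak",
)
```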

Acronis: Yes, this is possible, but it is not a good strategy for mitigating a provider going out of business. If that is a concern, then pick a provider you trust and one where you control where the data is stored. Then you can easily switch providers if needed.

SolidFire: Yes, setting up a DR site and a tertiary site is very doable. Many data protection software vendors do this for you with integrations at the cloud providers. When looking at data protection technology, make sure the policy engine is capable of being aware of multiple targets and moving data seamlessly between them. If you’re worried about cloud service providers going out of business, make sure you bet on the big ones with proven success and revenue flow.


Mobile and Secure Healthcare: Encrypted Objects and Access Control Delegation

Healthcare privacy and data protection regulations are among the most stringent of any industry. On January 28th, SNIA Cloud Storage will host a live Webcast to discuss how healthcare organizations can securely share health data across different cloud services.

Hear experts Martin Rosner, Standardization Officer at Philips, and David Slik, Co-Chair, SNIA Cloud Storage Technical Work Group, explore how the Encrypted Objects and Delegated Access Control extensions to the Cloud Data Management Interface (CDMI) standard permit objects to move freely and securely between clouds and clients with enhanced security and auditability.

You’ll learn:

  • How to protect health data from alteration or disclosure
  • How Cloud Encrypted Objects work
  • How Delegated Access Control works
  • How CDMI supports Electronic Medical Records (EMR) applications
  • Healthcare use cases for securely sharing data in the cloud

This Webcast will be live, so please bring your questions. I encourage you to register today. We hope to see you on the 28th.

How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics

NVMe (Non-Volatile Memory Express) over Fabrics is of tremendous interest among storage vendors, flash manufacturers, and cloud and Web 2.0 customers. It offers efficient remote and shared access to a new generation of flash and other non-volatile memory storage, but it requires fast, low-latency networks, and the first version of the specification is expected to take advantage of RDMA (Remote Direct Memory Access) support in the transport protocol.

Many customers and vendors are now familiar with the advantages and concepts of NVMe over Fabrics but are not familiar with the specific protocols that support it. Join us on January 26th for this live Webcast that will explore and compare the Ethernet RDMA protocols and transports that support NVMe over Fabrics and the infrastructure needed to use them. You’ll hear:

  • Why NVMe over Fabrics requires a low-latency network
  • How the NVMe protocol is mapped to the network transport
  • How RDMA-capable protocols work
  • Comparing available Ethernet RDMA transports: iWARP and RoCE
  • Infrastructure required to support RDMA over Ethernet
  • Congestion management methods

The event is live, so please bring your questions. We look forward to answering them.