Curious about Your Storage Knowledge? It’s a Quick “Test” with SNIA Storage Foundations Certification Practice Exam

Whether you’ve recently mastered the basics or are a storage technology expert, letting the industry know you are credentialed can (and probably should) be part of your career development process. SNIA’s Storage Networking Certification Program (SNCP) provides a strong foundation of vendor-neutral, systems-level credentials that integrate with and complement individual vendor certifications. SNCP’s three knowledge “domains” – Concepts, Standards, and Solutions – each provide a standard by which your knowledge and skill set can be assessed on a consistent, industry-wide basis, without any vendor specialization.

Many storage professionals choose to begin with the SNIA Storage Foundations Certification, according to Michael Meleedy, SNIA’s Director of Education.  “The SNIA Foundations Exam (S10-110), newly revised to integrate new technologies and industry practices, is the entry-level exam within the SNIA Storage Networking Certification Program (SNCP),” Meleedy explained. “It has been widely accepted by the storage industry as the benchmark for basic vendor-neutral storage credentials.  In fact, vendors like Dell require this certification.”

Try the Practice Exam!

Spring is a great time to test your skills – and a NEW SNIA Storage Foundations Certification Practice Exam makes it easy. This practice exam is short (easy to squeeze into your busy day), and its sample of questions from the real exam will help you quickly determine whether you have the skills required to pass the industry’s only vendor-neutral certification exam. It’s open to everyone free of charge, with results available immediately. Take the practice exam.

Why Should I Explore the SNCP?

Professionals often wonder about the real value of IT-related certifications. Is it worth your time and money to become certified? “Yes, especially in today’s global marketplace,” said Paul Talbut, SNIA Global Education and Regional Affiliate Program Director. “SNIA certifications provide storage and data management practitioners worldwide with an industry-recognised uniform standard by which individual knowledge and skill sets can be judged. We’re reaching a variety of professional audiences; for example, SNIA’s Foundations Exam is available in both English and Japanese, and is offered at all Prometric testing centers worldwide.”

Learn more about the new SNIA Foundations exam (S10-110) and study materials, the entire range of SNIA Certification Testing, and the six good reasons why you should be SNIA certified! Visit http://www.snia.org/education/certification.

Ethernet RDMA Protocols Support for NVMe over Fabrics – Your Questions Answered

Our recent SNIA Ethernet Storage Forum Webcast on How Ethernet RDMA Protocols iWARP and RoCE Support NVMe over Fabrics generated a lot of great questions. We didn’t have time to get to all of them during the live event, so as promised here are the answers. If you have additional questions, please comment on this blog and we’ll get back to you as soon as we can.

Q. Are there still actual (memory based) submission and completion queues, or are they just facades in front of the capsule transport?

A. On the host side, they’re “facades,” as you call them. When running NVMe/F, host reads and writes do not actually use NVMe submission and completion queues; the data flows to and from RNIC RDMA queues. On the target side, there could be real NVMe submission and completion queues in play. But the more accurate answer is that it is “implementation dependent.”

Q. Who places the command from NVMe queue to host RDMA queue from software standpoint?

A. This is managed by the kernel host software in code written to the NVMe/F specification. The idea is that any existing application that thinks it is writing to the existing NVMe host software will in fact cause the SQE to be encapsulated and placed in an RDMA send queue.
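To make that flow concrete, below is a minimal C sketch of the encapsulation step. The struct layouts and the rdma_send() helper are simplified illustrations of the concept, not the wire format or verbs calls defined by the NVMe/F and RDMA specifications:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative 64-byte NVMe submission queue entry (SQE); the real
     * layout is defined by the NVMe specification. */
    struct nvme_sqe {
        uint8_t  opcode;
        uint8_t  flags;
        uint16_t command_id;
        uint8_t  rest[60];
    };

    /* Illustrative command capsule: one SQE plus optional in-capsule data. */
    struct nvmf_capsule {
        struct nvme_sqe sqe;
        uint8_t         data[4096];
    };

    /* Hypothetical stand-in for posting a buffer to an RNIC send queue;
     * a real driver would use a verbs call such as ibv_post_send(). */
    static void rdma_send(const void *buf, size_t len)
    {
        (void)buf;
        printf("posted %zu-byte capsule to the RDMA send queue\n", len);
    }

    int main(void)
    {
        struct nvme_sqe sqe = { .opcode = 0x01, .command_id = 7 };
        struct nvmf_capsule capsule;

        /* The host-side fabric driver encapsulates the SQE in a capsule... */
        memset(&capsule, 0, sizeof(capsule));
        capsule.sqe = sqe;

        /* ...and places the capsule on an RDMA send queue instead of
         * ringing a doorbell on a local NVMe controller. */
        rdma_send(&capsule, sizeof(capsule));
        return 0;
    }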

Q. You say “most enterprise switches” support NVMe/F over RDMA, I guess those are ‘new’ ones, so what is the exact question to ask a vendor about support in an older switch?

A. For iWARP, any switch that can handle Internet traffic will do. Mellanox and Intel have different answers for RoCE / RoCEv2. Mellanox says that for RoCE, it is recommended, but not required, that the switch support Priority Flow Control (PFC). Most new enterprise switches support PFC, but you should check with your switch vendor to be sure. Intel believes RoCE was architected around DCB. The name itself, RoCE, stands for “RDMA over Converged Ethernet,” i.e., Ethernet with DCB. Intel believes RoCE in general will require PFC (or some future standard that delivers equivalent capabilities) for efficient RDMA over Ethernet.

Q. Can you comment on when one should use RoCEv2 vs. iWARP?

A. We gave a high-level overview of some of the deployment considerations on slide 30. We refer you to some of the vendor links on slide 32 for “non-vendor neutral” perspectives.

Q. If you take RDMA out of equation, what is the key advantage of NVMe/F over other protocols? Is it that they are transparent to any application?

A. NVMe/F allows the application to bypass the SCSI stack and use native NVMe commands across a network. Most other block storage protocols require using the SCSI protocol layer, translating the NVMe commands into SCSI commands. With NVMe/F you also gain parallelism, simplicity of the command set, a separation between administrative sessions and data sessions, and a reduction of latency and processing required for NVMe I/O operations.

Q. Is ROCE v1 compatible with ROCE v2?

A. Yes. Adapters speaking RoCEv2 can also maintain RDMA connections with adapters speaking RoCEv1 because RoCEv2 ports are backwards interoperable with RoCEv1. Most of the currently shipping NICs supporting RoCE support both RoCEv1 and RoCEv2.

Q. Are RoCE and iWARP the only ways to use Ethernet as a fabric for NVMe/F?

A. Initially yes; only iWARP and RoCE are supported for NVMe over Ethernet. But the NVM Express Working Group is also targeting FCoE. We should have probably been clearer about that, though it is noted on slide 11.

Q. What about doing NVMe over Fibre Channel? Is anyone looking at, or doing this?

A. Yes. This is not in scope for the first spec release, but the NVMe WG is collaborating with the FCIA on this. So NVMe over Fibre Channel is expected as another standard in the near future, to be promoted by T11.

Q. Do RoCE and iWARP both use just IP addresses for management or is there a higher level addressing mechanism, and management?

A. RoCEv2 uses the RoCE Connection Manager, and iWARP uses TCP connection management. They both use IP for addressing.

Q. Are there other fabrics to run NVMe over fabrics? Can you do this over OmniPath or Infiniband?

A. InfiniBand is in scope for the first spec release. Also, there is a related effort by the FCIA to support NVMe over Fibre Channel in a standard that will be promoted by T11.

Q. You indicated the NVMe stack is in the kernel while RDMA is a user-level verb. How are NVMe SQ/CQ entries transferred from NVMe to RDMA and vice versa? Also, could smaller transfers in NVMe (e.g. an SGL of 512B) be combined into larger sizes before being sent to RDMA entries, and vice versa?

A. NVMe/F supports multiple scatter-gather entries to combine multiple non-contiguous transfers; nevertheless, the protocol doesn’t support chaining multiple NVMe commands in the same command capsule. A command capsule contains only a single NVMe command. Please also refer to slide 18 from the presentation.
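For illustration, a scatter-gather list is just an array of (address, length) descriptors that lets a single command reference several non-contiguous buffers. The following minimal C sketch uses hypothetical, simplified descriptor fields rather than the exact SGL formats from the NVMe specification:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified scatter-gather descriptor (illustrative only; the NVMe
     * specification defines its own SGL descriptor formats). */
    struct sgl_desc {
        uint64_t addr;  /* buffer address */
        uint32_t len;   /* buffer length in bytes */
    };

    int main(void)
    {
        uint8_t buf_a[512], buf_b[1024];  /* two non-contiguous buffers */

        /* One command may carry multiple SGL entries... */
        struct sgl_desc sgl[2] = {
            { (uint64_t)(uintptr_t)buf_a, sizeof(buf_a) },
            { (uint64_t)(uintptr_t)buf_b, sizeof(buf_b) },
        };

        /* ...but a capsule still carries exactly one NVMe command. */
        uint32_t total = 0;
        for (int i = 0; i < 2; i++)
            total += sgl[i].len;
        printf("1 command, 2 SGL entries, %u bytes total\n", total);
        return 0;
    }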

Q. 1) How do implementers and adopters today test NVMe deployments? 2) Besides latency, what other key performance indicators do implementers and adopters look for to determine whether the NVMe deployment is performing well or not?

A. 1) Like any other datacenter specification, testing is done by debugging, interop testing and plugfests. Local NVMe is well supported and can be tested by anyone. NVMe/F can be tested using pre-standard drivers or solutions from various vendors. UNH-IOL is an organization with an excellent reputation for helping here. 2) Latency, yes. But also sustained bandwidth, IOPS, and CPU utilization, i.e., the “usual suspects.”

Q. If RoCE CM supports ECN, why can’t it be used to implement a full solution without requiring PFC?

A. Explicit Congestion Notification (ECN) is an extension to TCP/IP defined by the IETF. First, it is a standard for congestion notification, not congestion management. Second, it operates at L3/L4; it does nothing to help make the L2 subnet “lossless.” Intel and Mellanox agree that, generally speaking, all RDMA protocols perform better in a “lossless,” engineered fabric utilizing PFC (or some future standard that delivers equivalent capabilities). Mellanox believes PFC is recommended but not strictly required for RoCE, so RoCE can be deployed with PFC, ECN, or both. In contrast, Intel believes that for RoCE / RoCEv2 to deliver the “lossless” performance users expect from an RDMA fabric, PFC is in general required.

Q. How involved are Ethernet RDMA efforts with the SDN/OCP community? Is there a coming example of RoCE or iWarp on an SDN switch?

A. Good question, but neither RoCEv2 nor iWARP look any different to switch hardware than any other Ethernet packets. So they’d both work with any SDN switch. On the other hand, it should be possible to use SDN to provide special treatment with respect to say congestion management for RDMA packets. Regarding the Open Compute Project (OCP), there are various Ethernet NICs and switches available in OCP form factors.

Q. Is there a RoCE v3?

A. No. There is no RoCEv3.

Q. iWARP and RoCE both fall back to TCP/IP in the lowest communication sense? So they are somewhat compatible?

A. They can speak sockets to each other. In that sense they are compatible. However, for the usage model we’re considering here, NVMe/F, RDMA is required. Because of L3/L4 differences, RoCE and iWARP RNICs cannot speak RDMA to each other.

Q. So in case of RDMA (ROCE or iWARP), the NVMe controller’s fabric port is Ethernet?

A. Correct. But it must be RDMA-enabled Ethernet.

Q. What if I am using soft RoCE, do I still need an RNIC?

A. Functionally, soft RoCE or soft iWARP should work on a regular NIC. Whether the performance is sufficient to keep up with NVMe SSDs without the hardware offloads is a different matter.

Q. How would the NVMe controller know that a command is placed in the submission queue by the Fabric host driver? Is the fabric host driver responsible for notifying the NVMe controller through remote doorbell trigger or the Fabric target driver should trigger the doorbell?

A. No separate notification by the host is required. The fabric host driver simply sends a command capsule to notify its companion subsystem driver that there is a new command to be processed. The way the subsystem side notifies the backend NVMe drive is outside the scope of the protocol.

Q. I am chair of the ETSI NFV working group on NFV acceleration. We are working on virtual RDMA and how a VM can benefit from hardware-independent RDMA. One cornerstone of this is a virtual-RDMA pseudo device, but there is not yet consensus on the minimal set of verbs to be supported. Do you think this minimal verb set can be identified? Finally, the transport address space is not consistent between InfiniBand and Ethernet. How can transport-independent RDMA be supported?

A. You know, the NVM Express Working Group is working on exactly these questions. They have to define a “minimal verb set” since NVMe/F generates the verbs. Similarly, I’d suggest looking to the spec to see how they resolve the transport address space differences.

Q. What’s the plan for Linux submission of NVMe over Fabric changes? What releases are being targeted?

A. The Linux Driver WG in the NVMe WG expects to submit code upstream within a quarter of the spec being finalized. At this time it looks like the most likely Linux target will be kernel 4.6, but it could end up being kernel 4.7.

Q. Are NVMe SQ/CQ transferred transparently to RDMA Queues or can they be modified?

A. The method defined in the NVMe/F specification entails a transparent transfer. If you wanted to modify an SQE or CQE, do so before initiating an NVMe/F operation.

Q. How common are rNICs for recent servers? i.e. What’s a quick check I can perform to find out if my NIC is an rNIC?

A. rNICs are offered by nearly all major server vendors. The best way to check is to ask your server or NIC vendor if your NIC supports iWARP or RoCE.
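If the libibverbs development package is installed, one quick programmatic check is to enumerate RDMA-capable devices with the standard verbs API, as in the minimal sketch below. Be aware that software implementations such as Soft-RoCE also appear in this list, so an entry indicates RDMA capability, not necessarily a hardware rNIC:

    #include <stdio.h>
    #include <infiniband/verbs.h>  /* from libibverbs; link with -libverbs */

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);

        if (!devs || num == 0) {
            printf("No RDMA-capable devices found.\n");
        } else {
            for (int i = 0; i < num; i++)
                printf("RDMA device: %s\n", ibv_get_device_name(devs[i]));
        }
        if (devs)
            ibv_free_device_list(devs);
        return 0;
    }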

Q. This is most likely out of the scope of this talk, but could you perhaps share a 30,000-foot view of the differences between “NVMe controller” hardware and “NVMe/F” hardware? It’s most likely a combination of RNIC + NVMe controller, but it would be great to get your take on this.

A. A goal of the NVMe/F spec is that it work with all existing NVMe controllers and all existing RoCE and iWARP RNICs. So on even a very low level, we can say “no difference.” That said, of course, nothing stops someone from combining NVMe controller and RNIC hardware into one solution.

Q. Are there any example Linux targets in the distros that exercise RDMA verbs? An iWARP or iSER target in a distro?

A. iSER allows this using a LIO or TGT SCSI target.

Q. Is there a standard or IP for RDMA NIC?

A. The various RNICs are based on IBTA, IETF, and IEEE standards, as shown on slide 26.

Q. What is the typical additional latency introduced comparing NVMe over Fabric vs. local NVMe?

A. In the 2014 IDF demo, the prototype NVMe/F stack matched the bandwidth of local NVMe with a latency penalty of only 8µs over a local iWARP connection. Other demonstrations have shown an added fabric latency of 3µs to 15µs. The goal for the final spec is under 10µs.

Q. How well is NVMe over RDMA supported for Windows?

A. It is not currently supported, but then the spec isn’t even finished. Contact Microsoft if you are interested in their plans.

Q. RDMA over Ethernet would not support Layer 2 switching? How do you deal with TCP over head?

A. L2 switching is supported by both iWARP and RoCE. Both flavors of RNICs have MAC addresses, etc. iWARP handles TCP/IP in hardware using a TCP/IP Offload Engine (TOE). The TOE used in an iWARP RNIC is significantly constrained compared to a general-purpose TOE and therefore can operate with very high performance. See the Chelsio website for proof points. RoCE does not use TCP, so it does not need to deal with TCP overhead.

Q. Does RDMA not work with fibre channel?

A. They are totally different Transports (L4) and Networks (L3). That said, the FCIA is working with NVMe, Inc. on supporting NVMe over Fibre Channel in a standard to be promoted by T11.

Are Hard Drives or Flash Winning in Actual Density of Storage?

The debate between hard drives and solid state drives goes on in 2016, particularly in the area of areal densities – the actual density of storage on a device.  Fortunately for us, Tom Coughlin, SNIA Solid State Storage Initiative Education Chair, and a respected analyst who contributes to Forbes, has advised that flash memory areal densities have exceeded those of hard drives since last year!

Coughlin Associates provides several charts in the article which map lab demos and product HDD areal density since 2000, and contrasts that to new flash product announcements.  Coughlin comments that “Flash memory areal density exceeding HDD areal density is important since it means that flash memory products with higher capacity can be built using the same surface area.”

Check out the entire article here.

RSA Conference Shows that KMIP Is “Key” To Encryption and Protection of Enterprise Data

By Marty Foltyn

In the vast exhibit halls of last week’s RSA Conference, Cyber (aka cybersecurity) was the mantra.  With customers asking for confidence in the encryption and protection of enterprise data, attendees found  proven interoperability in the OASIS booth where developers of the OASIS Key Management Interoperability Protocol (KMIP) showcased their support for new features.

OASIS (Organization for the Advancement of Structured Information Standards) is a nonprofit consortium that drives the development, convergence, and adoption of open standards for the global information society. The OASIS KMIP TC works to define a single, comprehensive protocol for communication between encryption systems and a broad range of new and legacy enterprise applications, including email, databases, and storage devices. The resulting protocol, its profiles, and test cases are defined by the OASIS KMIP Technical Committee. By removing redundant, incompatible key management processes, KMIP provides better data security while at the same time reducing expenditures on multiple products.

Tony Cox, OASIS KMIP Technical Committee Co-Chair and Interoperability Event Lead, stressed that “The OASIS 2016 Interop is a small window into the reality of proven interoperability between enterprise key managers, HSMs, cryptographic devices, storage, security and cloud products.  The interoperability demonstration helped to reinforce  the reality of choice for CIOs, CSOs and CTOs, enabling products from multiple vendors to be deployed as a single enterprise security solution that addresses both current and future requirements.”

Tony Cox is also the Chair of the SNIA Storage Security Industry Forum, and five SNIA SSIF member companies showcased interoperable products using the OASIS KMIP standard — Cryptsoft, Fornetix, Hewlett Packard Enterprise, IBM, and Townsend Security.

SNIA provides a KMIP Conformance Test Program that enables organizations with KMIP implementations in their products to test those products against test tools and other products at the SNIA Technology Center in Colorado Springs, Colorado. According to SNIA’s KMIP Test Program Manager David Thiel, the KMIP Test Program provides independent verification from a trusted third party that a given KMIP implementation conforms to the KMIP standard. Verification gives confidence to both vendors and end users of KMIP solutions that a product will interoperate with other similarly tested KMIP products. KMIP support has become a prerequisite for organizations looking to acquire storage and security key management solutions.

For vendors with a product that supports KMIP, having the product successfully complete SNIA’s KMIP Conformance Test Program is the best way to instill customer confidence. Any organization with a KMIP implementation can test in the SNIA’s vendor-neutral, non-competitive environment.  For KMIP Server testing, the vendor places the Server in the SNIA Technology Center and trains the KMIP Test Program staff on its use.  For KMIP Client testing, the vendor connects the Client over the Internet to the test apparatus at the SNIA Technology Center or installs the Client in the SNIA Technology Center.  The KMIP Test Program staff then tests the Server or Client and reports results to the vendor. All information regarding vendor testing and test results is confidential until the vendor releases successful test results for publication.

To date, products from Cryptsoft, Hewlett Packard Enterprise, and IBM have successfully passed KMIP Conformance Tests.  Test results can be found on the KMIP Conformance Testing Results page.  Visit the KMIP Test Program to learn more.

Meet Michael Oros – SNIA’s New Executive Director

SNIA is pleased to announce the appointment of its new Executive Director, Michael Oros. A 20-year industry veteran, Michael comes to SNIA from Intel, where he was instrumental in overseeing a wide range of strategic industry initiatives and in the development and deployment of storage, backup, and disaster recovery services. He also led the formation of the Open Data Center Alliance and, with its Board of Directors, established that organization’s presence and reach across six continents, with world-leading members accelerating cloud adoption and transformation of the IT landscape.

David Dale, SNIA Chairman, recently sat down with Michael to discuss his vision for the future of SNIA.

Dale: Michael, welcome to SNIA. We’re excited to have you on board.

Oros: Thank you David. I am honored and thrilled to be here! These are exciting times for the storage industry, and I strongly believe SNIA and the member companies are poised to be at the center of this transformation.

Dale: How long have you been involved with SNIA?

Oros: I’ve been involved with SNIA indirectly since 2000, when Fibre Channel interoperability was an industry challenge that I had to address for Intel’s managed storage service offerings. Since 2004, I have participated more directly, starting with my first SNW event in Phoenix.

Dale: What attracted you to the Executive Director position and what excites you the most about SNIA? 

Oros: The opportunity to lead, facilitate and be part of the storage industry transformation. The great people that make up the storage industry – an amazing SNIA Board of Directors that’s passionate and cares deeply, great staff and incredible volunteers; these were key attributes that I personally value and sought out.

Dale: What are the major changes forthcoming in the storage industry that SNIA needs to be actively involved with?

Oros: The flurry of M&A activity over the past couple of years has already changed the storage industry landscape, and over the next couple of years we can expect to see the impact and innovation coming out of these mergers and acquisitions. SNIA needs to be nimble and continue to deliver value through standards and initiatives that are of high importance and relevance to the storage industry and to the implementers and consumers of enterprise storage technologies: enterprise IT, cloud service providers and hyperscalers.

Dale: What do you think the impact of the 3rd Platform will be to the industry?

Oros: Huge! The analyst term for the third computing platform – encompassing mobile, social, cloud computing, and the Internet of Things – describes a force driving an increase in both storage demand and efficiency. As billions of users and devices and millions of apps interact on this “3rd Platform,” IT organizations have to change how they do business and manage the exponential increase in assets, the data those assets generate, and its security. The storage industry and vendors have to innovate and deliver solutions that are lower touch to deploy and manage, and more flexible and adaptable to an array of applications and security requirements.

Dale: What do you see as SNIA’s top goals for 2016? 

Oros: Continue to be relevant in our work to the industry and our member companies, execute on the technology specifications, and grow the organization.

Dale: One week in the role, what are your initial thoughts and plans?

Oros: First, a big thank you to everyone for their help and support as I’ve come on board! I’ve started working with the team to ensure the member companies have the best resources and tools available to collaborate on technology specifications and initiatives – all SNIA staff and I are here to support our members and delight our wonderful industry volunteers. Business development and outreach will see an increase in activity. And marketing programs are being planned, in addition to our events, to promote loudly and with clarity the vital work SNIA and member companies are doing!

To learn more, read the official SNIA press release.

SNIA Tutorials Highlight Industry Track at USENIX FAST ’16

by Marty Foltyn

SNIA is pleased to present seven tutorials from its SNIA Tutorials series at the 14th USENIX Conference on File and Storage Technologies (USENIX FAST) on February 24, 2016 in Santa Clara, CA.

SNIA Tutorials are educational materials developed by vendors, training companies, analysts, consultants, and end-users in the storage and information technology industry. SNIA tutorials are presented and used throughout the world at SNIA events and international conferences.

Utilizing VDBench to Perform IDC AFA Testing will be presented by Michael Ault, Oracle Guru, IBM, Inc. This SNIA Tutorial provides procedures, scripts, and examples for running the IDC test framework with the free tool VDBench on all-flash arrays (AFAs), producing a common set of results for comparing multiple AFAs’ suitability for cloud or other network-based storage.

Practical Online Cache Analysis and Optimization will be presented by Carl Waldspurger, Research and Development, CloudPhysics, Inc., and Irfan Ahmad, CTO, CloudPhysics, Inc. After reviewing the history and evolution of miss-ratio curve (MRC) algorithms, this SNIA Tutorial examines new opportunities afforded by MRCs to capture valuable information about locality that can be leveraged to guide efficient cache sizing, allocation, and partitioning in support of diverse goals such as improving performance, isolation, and quality of service.

SMB Remote File Protocol (Including SMB 3.x) will be presented by Tom Talpey, Architect, Microsoft.  This SNIA Tutorial begins by describing the history and basic architecture of the SMB protocol and its operations. The second part of the tutorial covers the various versions of the SMB protocol, with details of improvements over time. The final part covers the latest changes in SMB3, and the resources available in support of its development by industry.

Object Drives: A New Architectural Partitioning will be presented by Mark Carlson, Principal Engineer, Industry Standards, Toshiba.  This SNIA Tutorial discusses the current state and future prospects for object drives. Use cases and requirements will be examined and best practices will be described.

Fog Computing and Its Ecosystem will be presented by Ramin Elahi, Adjunct Faculty, UC Santa Cruz Silicon Valley.  This SNIA Tutorial introduces and describes Fog Computing and discusses how it supports emerging Internet of Everything (IoE) applications that demand real-time/predictable latency (industrial automation, transportation, networks of sensors and actuators).

Privacy vs. Data Protection: The Impact of EU Data Protection Legislation will be presented by Thomas Rivera, Senior Technical Associate, HDS.  This SNIA Tutorial explores the new EU data protection legislation and highlights the elements that could have significant impacts on data handling practices.

Converged Storage Technology will be presented by Liang Ming, Research Engineer, Development and Research, Distributed Storage Field, Huawei.  This SNIA Tutorial discusses the concept of key-value storage, next generation key-value converged storage solutions, and what has been done to promote the key-value standard.

Get Your Registration Discount
As a friend of SNIA, you are eligible for a $75 discount on registration for the technical sessions. Use code 75FAST15SNIA during registration to receive your discount.

FAST ’16 Program
FAST ’16 will kick off with a keynote address by Eric Brewer, VP Infrastructure at Google, on “Spinning Disks and Their Cloudy Future.” In addition to the SNIA Industry Track, the 3-day technical sessions program also includes 27 refereed paper presentations.

The full program is available here: https://www.usenix.org/conference/fast16/glance

On-Demand Cloud Storage Webcasts Worth Watching

As the SNIA Cloud Storage Initiative (CSI) starts 2016 with a new set of educational programs and webcasts on topics of interest to those developing, implementing, and managing cloud storage, I thought it might be a good time to remind everyone of the vendor-neutral educational work the CSI delivered in 2015.

I’m particularly proud of the work the CSI has done through BrightTalk (a web-based content delivery platform) in producing live hour-long tutorials on a wide variety of subjects.

What you may not know is that these are also recorded, and you can play them back when it’s convenient to you. I know that we have a global audience, and that when we deliver the live version it may be in the middle of your busy working day – or even in the middle of the night.

As part of SNIA, the CSI supports the development of technical storage standards, and that means some of our audience are developers. For those of you who are interested in more technical presentations, we had two developer-focused BrightTalks:

Hierarchical Erasure Coding: Making Erasure Coding Usable

This talk covered two different approaches to erasure coding – a flat erasure code across JBOD, and a hierarchical code with an inner code and an outer code; it compared the two approaches on different parameters that impact the IT business and provided guidance on evaluating object storage solutions.
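To give a flavor of the building blocks involved, here is a toy C example of the simplest possible erasure code – a single XOR parity fragment that survives the loss of any one fragment. Production object stores typically use Reed-Solomon or similar codes with multiple parity fragments, so treat this purely as an illustration:

    #include <stdio.h>
    #include <string.h>

    #define K    4  /* data fragments */
    #define FRAG 8  /* bytes per fragment */

    /* Compute one XOR parity fragment over K data fragments; the set
     * then survives the loss of any single fragment. */
    static void xor_parity(unsigned char d[K][FRAG], unsigned char p[FRAG])
    {
        memset(p, 0, FRAG);
        for (int i = 0; i < K; i++)
            for (int j = 0; j < FRAG; j++)
                p[j] ^= d[i][j];
    }

    /* Rebuild a lost fragment by XORing the parity with the survivors. */
    static void rebuild(unsigned char d[K][FRAG], unsigned char p[FRAG],
                        int lost, unsigned char out[FRAG])
    {
        memcpy(out, p, FRAG);
        for (int i = 0; i < K; i++)
            if (i != lost)
                for (int j = 0; j < FRAG; j++)
                    out[j] ^= d[i][j];
    }

    int main(void)
    {
        unsigned char data[K][FRAG] = { "chunk-0", "chunk-1", "chunk-2", "chunk-3" };
        unsigned char parity[FRAG], recovered[FRAG];

        xor_parity(data, parity);
        rebuild(data, parity, 2, recovered);  /* pretend fragment 2 was lost */
        printf("recovered: %s\n", recovered); /* prints "chunk-2" */
        return 0;
    }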

Expert Panel: Cloud Storage Initiatives – An SDC Preview

At the 2015 Storage Developer Conference (SDC) we presented on a variety of topics:

  • Mobile and Secure – Cloud Encrypted Objects using CDMI
  • Object Drives: A new Architectural Partitioning
  • Unistore: A Unified Storage Architecture for Cloud Computing
  • Using CDMI to Manage Swift, S3, and Ceph Object Repositories

We discussed how encrypted objects can be stored, retrieved, and transferred between clouds, how Object Drives allow storage to scale up and down by single drive increments, end-user and vendor use cases of the Cloud Data Management Interface (CDMI), and we introduced Unistore – an innovative unified storage architecture that efficiently integrates heterogeneous HDD and SCM devices for Cloud storage systems.

(As an added bonus, all these SDC 2015 presentations and others can be found here http://www.snia.org/events/storage-developer/presentations15.)

OpenStack has had a big year, and the CSI contributed to the discussion with:

OpenStack File Services for High Performance Computing

We looked at how OpenStack can consume and control file services appropriate to High Performance Computing in a cloud and multi-tenanted environment, and investigated two approaches to integration. One approach is to have OpenStack manage the storage infrastructure services using Cinder, Nova and Neutron to provide HPC Filesystem as a Service. We also reviewed a second option: using Manila file services for OpenStack to control the HPC file system deployment and manage the exports. We discussed the development of the Lustre Manila driver and its current progress.

Hybrid clouds were also in the news. We delivered two sessions, specifically targeted at end users looking to understand the technologies:

Hybrid Clouds: Bridging Private & Public Cloud Infrastructures

Every IT consumer is using cloud in one form or another, and just as storage buyers are reluctant to select a single vendor for their on-premises IT, they will choose to work with multiple public cloud providers. But this desirable “many vendor” cloud strategy introduces new problems of compatibility and integration. To provide a seamless view of these discrete storage clouds, Software Defined Storage (SDS) can be used to build a bridge between them. This presentation explored how SDS, with its ability to deploy on different hardware and support rich automation capabilities, can extend its reach into cloud deployments to support a hybrid data fabric that spans on-premises and public clouds.

Hybrid Clouds Part 2: Case Study on Building the Bridge between Private & Public

There are significant differences in how cloud services are delivered to various categories of users. The integration of these services with traditional IT operations remains an important success factor but also a challenge for IT managers. The key to success is to build a bridge between private and public clouds. This Webcast expanded on the previous Hybrid Clouds: Bridging Private & Public Cloud Infrastructures webcast where we looked at the choices and strategies for picking a cloud provider for public and hybrid solutions.

Lastly, we looked at some of the issues surrounding data protection and data privacy (no, they’re not the same thing at all!).

Privacy v Data Protection: The Impact of Int’l Data Protection Legislation on Cloud

Governments across the globe are proposing and enacting strong data privacy and data protection regulations by mandating frameworks that include noteworthy changes like defining a data breach to include data destruction, adding the right to be forgotten, mandating the practice of breach notifications, and many other new elements. The implications of this and other proposed legislation on how the cloud can be utilized for storing data are significant. This webcast covered:

  • EU “directives” vs. “regulation”
  • General data protection regulation summary
  • How personal data has been redefined
  • Substantial financial penalties for non-compliance
  • Impact on data protection in the cloud
  • How to prepare now for impending changes

Moving Data Protection to the Cloud: Trends, Challenges and Strategies

This was a panel discussion; we talked about various new ways to perform data protection using the Cloud and many advantages of using the Cloud this way.

You can access all the CSI BrightTalk Webcasts on demand at the SNIA Website. Many of you will also be happy to learn that PDFs of the Webcast slides are also available there.

We had a good 2015, and I’m looking forward to producing more great educational material during 2016. If you have a topic you’d like to see the CSI cover this year, please comment below in this blog. We value input from all.

Thanks for your support and hopefully we’ll see you some time this year at one of our BrightTalk webcasts.

Next Live Webcast: NFS 101

Need a primer on NFS? On March 23, 2016, the Ethernet Storage Forum (ESF) will present a live Webcast, “What is NFS? An NFS Primer.” The popular and ubiquitous Network File System (NFS) is a standard protocol that allows applications to store and manage data on a remote computer or server. NFS provides two services: a network part that connects users or clients to a remote system or server, and a file-based view of the data. Together these provide a seamless environment that masks the differences between local files and remote files.

At this Webcast, Alex McDonald, SNIA ESF Vice Chair, will provide an introduction to and overview of NFS, geared toward technologists and tech managers interested in understanding:

  • NFS history and development
  • The facilities and services NFS provides
  • Why NFS rose in popularity to dominate file-based services
  • Why NFS continues to be important in the cloud

As always, the Webcast will be live and Alex and I will be on hand to answer your questions. Register today. Alex and I look forward to hearing from you on March 23rd.

Exploring the Software Defined Data Center – A SNIA Cloud Webcast

SNIA Cloud is pleased to announce our next live Webcast, “Exploring the Software Defined Data Center.” A Software Defined Data Center (SDDC) is a compute facility in which all elements of the infrastructure – networking, storage, CPU and security – are virtualized and removed from proprietary hardware stacks. Deployment, provisioning and configuration as well as the operation, monitoring and automation of the entire environment is abstracted from hardware and implemented in software.

The results of this software-defined approach include maximizing agility and minimizing cost, benefits that appeal to IT organizations of all sizes. In fact, understanding SDDC concepts can help IT professionals in any organization better apply these software defined concepts to storage, networking, compute and other infrastructure decisions.

If you’re interested in Software Defined Data Centers and how such a thing might be implemented – and why this concept is important to IT professionals who aren’t involved with building data centers – then please join us on March 15th as Eric Slack, Sr. Analyst with Evaluator Group, will explain what “software defined” really means and why it’s important to all IT organizations. Eric will be joined by Alex McDonald, Chair for SNIA’s Cloud Storage Initiative who will talk about how these concepts apply to the modern data center.

Register now as we’ll explore:

  • How an SDDC leverages this concept to make the private cloud feasible
  • How we can apply SDDC concepts to an existing data center
  • How to develop your own software defined data center environment

As always, this Webcast will be live. Eric, Alex and I will be on hand to answer your questions. We hope you’ll join us on March 15th.

Storage Performance Benchmarking Webcast Series Continues

Attendees cannot get enough of the SNIA Ethernet Storage Forum’s Storage Performance Benchmarking Webcast series. On March 8, 2016, our experts, Mark Rogov and Ken Cantrell, will return for the third installment of the series with “Storage Performance Benchmarking: Block Components.” This session aims to bring anyone untrained in the storage performance arts up to a common base with the experts. In this Webcast, you will gain an understanding of the block components of modern storage arrays and learn storage block terminology, including:

  • How storage media affects block storage performance
  • Integrity and performance trade-offs for data protection: RAID, Erasure Coding, etc.
  • Terminology updates: seek time, rebuild time, garbage collection, queue depth and service time

As always, the event will be live and Mark and Ken will be on hand to answer your questions. I encourage you to register today. We hope to see you on March 8th!