Virtualization and Storage Networking Best Practices from the Experts

Ever make a mistake configuring a storage array or wonder if you’re maximizing the value of your virtualized environment? With all the different storage arrays and connectivity protocols available today, knowing best practices can help improve operational efficiency and ensure resilient operations. That’s why the SNIA Networking Storage Forum is kicking off 2019 with a live webcast “Virtualization and Storage Networking Best Practices.” In this webcast, Jason Massae from VMware and Cody Hosterman from Pure Storage will share insights and lessons learned, as reported by VMware’s storage global services, by discussing: Read More

RDMA for Persistent Memory over Fabrics – FAQ

In our most recent SNIA Networking Storage Forum (NSF) webcast Extending RDMA for Persistent Memory over Fabrics, our expert speakers, Tony Hurson and Rob Davis, outlined extensions to RDMA protocols that confirm persistence and additionally can order successive writes to different memories within the target system. Hundreds of people have seen the webcast and have given it a 4.8 rating on a scale of 1-5! If you missed it, you can watch it on-demand at your convenience. The webcast slides are also available for download. We had several interesting questions during the live event. Here are answers from our presenters: Read More

How Scale-Out Storage Changes Networking Demands

Scale-out storage is increasingly popular for Cloud, High-Performance Computing, Machine Learning, and certain Enterprise applications. It offers the ability to grow both capacity and performance at the same time and to distribute I/O workloads across multiple machines. But unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. Clients often access multiple storage servers simultaneously; data typically replicates or migrates from one storage node to another; and metadata or management servers must stay in sync with each other as well as communicate with clients. Due to these demands, traditional network architectures and speeds may not work well for scale-out storage, especially when it’s based on flash. That’s why the SNIA Networking Storage Forum (NSF) is hosting a live webcast “Networking Requirements for Scale-Out Storage” on November 14th. I hope you will join my NSF colleagues and me to learn about: Read More

Deciphering the Economics of Building a Cloud Storage Architecture

Building a cloud storage architecture requires storage vendors, cloud service providers, and large enterprises to consider new technical and economic paradigms in order to enable a flexible and cost-efficient architecture. That’s why the SNIA Cloud Storage Technologies Initiative is hosting a live webcast, “Create a Smart and More Economic Cloud Storage Architecture” on November 7th. From an economic perspective, cloud infrastructure is often procured in the traditional way – prepay for expected future storage needs and over-provision for unexpected changes in demand. This requires large capital expenditures, which slow cost recovery based on fluctuating customer adoption. Giving large enterprises and cloud service providers flexibility in the procurement model for their storage allows them to more closely align the expenditure on infrastructure resources with the cost recovery from customers, optimizing the use of both CapEx and OpEx budgets. Read More

Introducing the Networking Storage Forum

At SNIA, we are dedicated to staying on top of storage trends and technologies to fulfill our mission as a globally recognized and trusted authority for storage leadership, standards, and technology expertise. For the last several years, the Ethernet Storage Forum has been working hard to provide high quality educational and informational material related to all kinds of storage.

From our “Everything You Wanted To Know About Storage But Were Too Proud To Ask” series, to the absolutely phenomenal (and required viewing) “Storage Performance Benchmarking” series, to the “Great Storage Debates” series, we’ve produced dozens of hours of material.

Technologies have evolved and we’ve come to a point where there’s a need to understand how these systems and architectures work – beyond just the type of wire that is used. Today, there are new systems that are bringing storage to completely new audiences. From scale-up to scale-out, from disaggregated to hyperconverged, RDMA, and NVMe-oF – there is more to storage networking than just your favorite transport. Read More

Oh What a Tangled Web We Weave: Extending RDMA for PM over Fabrics

For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here Persistent Memory over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA data reads or writes from/to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. Join the Networking Storage Forum (NSF) on October 25, 2018 for our next live webcast, Extending RDMA for Persistent Memory over Fabrics. In this webcast, we will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system. Read More

Centralized vs. Distributed Storage FAQ

To date, thousands have watched our “Great Storage Debate” webcast series. Our most recent installment of this friendly debate (where no technology actually emerges as a “winner”) was Centralized vs. Distributed Storage. If you missed it, it’s now available on-demand. The live event generated several excellent questions which our expert presenters have thoughtfully answered here: Q. Which performs faster, centralized or distributed storage? A. The answer depends on the type of storage, the type of connections to the storage, and whether the compute is distributed or centralized. The stereotype is that centralized storage performs faster if the compute is local, that is if it’s in the same data center as the centralized storage. Read More

An Introduction: What is Swordfish?

Barry Kittner, Technology Initiatives Manager, Intel and SNIA Storage Management Initiative Governing Board Member

To understand Swordfish, let’s start with the basics to examine how modern data centers are managed.

A user of a PC or notebook is assumed to be in control of that PC.  What happens when there are two? Or 10? Or 1,000? Today’s data centers can have 100,000 computers (servers) or more! That requires the ability to control, or “manage,” them from a central location.  How does one do that?  It is done via a protocol that enables remote management; today that standard is IPMI, the Intelligent Platform Management Interface, which has existed for 20 years.  Among the issues with IPMI is that the scale of today’s data centers was not fully envisioned 20 years ago, so some of the components of IPMI cannot cover the tens of thousands of servers it is expected to manage.  The developers also did not foresee the stringent security and increased privacy requirements expected in modern data centers.

The DMTF created, and continues to improve upon, a modern alternative standard for remote or centralized management of data centers called Redfish®.  For those familiar with server management, Redfish is referred to as “schema-based,” meaning that engineers have carefully organized many different categories of information as well as the relationships between them.  Schema are structured to manage the millions of bits of information and operating characteristics that data centers create and report on a continuous basis and that managers monitor to understand the status of the datacenter.  In this way, information on the operational parameters of the machines in the data center is provided, when and where needed, in a consistent, organized and reliable way.

Unlike IPMI, the new Redfish standard uses modern tools, allowing it to scale to the size of today’s data centers. Redfish produces output that datacenter operators can read, works across the wide variety of servers and datacenter equipment that exists today, and is extensible for the new hardware of tomorrow.
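As a rough illustration of how that works in practice, here is a minimal sketch of pulling server inventory and health from a Redfish service over its HTTPS REST interface. The management controller address and credentials are placeholders, and a production client would use session authentication and proper certificate validation; only the /redfish/v1/ service root and the Systems collection it advertises come from the standard.

    import requests

    BMC = "https://203.0.113.10"   # hypothetical management controller address
    AUTH = ("admin", "password")   # placeholder credentials

    # The Redfish service root is always published at /redfish/v1/
    root = requests.get(f"{BMC}/redfish/v1/", auth=AUTH, verify=False).json()

    # Follow the Systems collection advertised by the service root
    systems = requests.get(BMC + root["Systems"]["@odata.id"], auth=AUTH, verify=False).json()

    for member in systems.get("Members", []):
        system = requests.get(BMC + member["@odata.id"], auth=AUTH, verify=False).json()
        # Each system resource reports inventory and health as plain JSON
        print(system.get("Name"), system.get("PowerState"), system.get("Status", {}).get("Health"))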

The Storage Networking Industry Association (SNIA) is a global non-profit organization dedicated to developing standards and education programs to advance storage and information technology. SNIA created the Storage Management Initiative Specification (SMI-S) currently in use in datacenters to manage interoperable storage. SNIA immediately recognized the value of the new Redfish standard and created SNIA Swordfish™, which is an extension to Redfish that seamlessly manages storage equipment and storage services in addition to the server management of Redfish.  Just as most PCs have one or more storage devices, so do most servers in datacenters, and Swordfish can manage storage devices and allocation across all of the servers in a datacenter in the same structured and organized fashion.
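To make the storage side concrete, the hedged sketch below walks a Swordfish-style Storage collection and reports the capacity of any volumes it finds. The service address and credentials are again placeholders, and the exact set of resources (Storage, Volumes, CapacityBytes) an implementation exposes depends on its Swordfish version, so treat this as illustrative rather than a definitive client.

    import requests

    SERVICE = "https://203.0.113.20"   # hypothetical storage management endpoint
    AUTH = ("admin", "password")       # placeholder credentials

    def get(path):
        """Fetch one Redfish/Swordfish resource and decode its JSON body."""
        return requests.get(SERVICE + path, auth=AUTH, verify=False).json()

    # Walk the Storage collection and report each volume's name and capacity
    for member in get("/redfish/v1/Storage").get("Members", []):
        storage = get(member["@odata.id"])
        volumes_link = storage.get("Volumes", {}).get("@odata.id")
        if not volumes_link:
            continue
        for vol_ref in get(volumes_link).get("Members", []):
            volume = get(vol_ref["@odata.id"])
            print(volume.get("Name"), round(volume.get("CapacityBytes", 0) / 2**30, 1), "GiB")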

A summary and additional information for the more technical readers is below. If you want to learn more, each of the terms and sites mentioned below is a good starting point; look them up, or type them into your internet browser, for more information on the terms used in this tutorial:

  • For security, Swordfish employs HTTPS, a well-known and well-tested protocol that is used for secure communications over the World Wide Web.
  • JSON (JavaScript Object Notation) and OData increase the readability, compatibility, and integration of the RESTful APIs that manage data collected from datacenter devices and cover a range of information useful for beginners through experienced engineers.
  • Interoperability exists due to the use of the Common Schema Definition Language (CSDL) and common APIs from ecosystem partners including the Open Compute Project (OCP).
  • Redfish and Swordfish were created and are maintained by industry leaders that meet weekly to tune and extend management capabilities. (See DMTF.ORG, SNIA.ORG)
  • These schemas work together to allow full network discovery, provisioning, volume mapping, and monitoring of block, file, and object storage for all the systems in a modern datacenter (a minimal discovery sketch follows this list).
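The discovery sketch below, using the same hypothetical service address and placeholder credentials as above, simply follows @odata.id links from the service root and prints each resource's schema type; because servers and storage share the same schema conventions, one generic crawler can enumerate both.

    import requests

    SERVICE = "https://203.0.113.20"   # hypothetical endpoint, as above
    AUTH = ("admin", "password")       # placeholder credentials

    def crawl(path, seen):
        """Visit each resource once, print its schema type, and follow links found at its top level."""
        if path in seen or "#" in path:   # skip repeats and fragment references into other resources
            return
        seen.add(path)
        resource = requests.get(SERVICE + path, auth=AUTH, verify=False).json()
        print(resource.get("@odata.type", "?"), path)
        for value in resource.values():
            if isinstance(value, dict) and "@odata.id" in value:
                crawl(value["@odata.id"], seen)
            elif isinstance(value, list):
                for item in value:
                    if isinstance(item, dict) and "@odata.id" in item:
                        crawl(item["@odata.id"], seen)

    crawl("/redfish/v1/", set())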

There is so much to learn beyond this brief tutorial.  Start at DMTF.ORG to learn about Redfish.  Then surf over to SNIA.ORG/SWORDFISH to see how Swordfish brings the benefits of schema-based management to all your storage devices.  You will learn how Swordfish works in hyperscale and cloud infrastructure environments and enables a scalable solution that grows as your datacenter requirements grow.

By Barry Kittner, Technology Initiatives Manager, Intel and SNIA Storage Management Initiative Governing Board Member
