Join the Online Survey on Disaster Recovery

To start things off for the Disaster Recovery Special Interest Group (SIG) described in the previous blog post, the DPCO Committee has put together an online survey on how enterprises are handling data replication and Disaster Recovery and what issues they are encountering. Please join this effort by responding to the brief survey at: https://www.surveymonkey.com/r/W3DRKYD

THANK YOU in advance for doing this!  It should take less than 5 minutes to complete.

Got DR Issues? Check out the new Disaster Recovery Special Interest Group

The SNIA Data Protection and Capacity Optimization Committee (DPCO) would like to announce the creation of a new Special Interest Group focusing on Data Replication for Disaster Recovery (DR) Standards. The mission of this SIG is to investigate existing ISO standards, carry out surveys, and study current guidance in order to identify whether there is a need to improve interoperability, resiliency, education, or best practices in the area of data replication for disaster recovery.

Why are we doing this? A number of industry observations suggest that customers either don’t know about the standards that exist, cannot implement them, or have other DR-related needs that warrant exploration. The aim of this group is not to reinvent the wheel, but to examine what is already out there, determine what customers can actually use, and find out whether they are using appropriate standards, and if not, why.

What are we doing? We are starting with a survey to be sent to as many industry members as possible. The survey will examine the replication and DR needs customers have, the systems they have implemented, their knowledge of existing standards, and other issues encountered in designing and operating DR, particularly in multi-site, multi-vendor environments.

What can you do? Get involved, of course! Contact the SNIA DPCO team to indicate your interest as we put the organizational structure for the Data Replication for DR Standards SIG in place.

John Olson and Gene Nagle

Data Storage & the Software Defined Data Center

The overall success and general acceptance of server virtualization as a way to make servers more fully utilized and agile have encouraged IT managers to pursue virtualization of the other two major data center functions, storage and networking. Most recently, the addition of external resources such as the public cloud to what the IT organization must manage has encouraged taking these trends a step further. Many in the industry believe that full software definition of all IT infrastructure, that is, a software defined data center (SDDC), should be the end goal, making all resources capable of fast adaptation to business needs and enabling the holy grail of open-API-based, application-aware, single-pane-of-glass management.

So, as data storage professionals, we must ask: is storage changing in ways that will make it ready to be a full participant in the software defined data center? And what new storage techniques are now being proven to offer the agility, scalability, and cost effectiveness sought by those embracing the SDDC?

These questions can best be answered by examining the current state of software defined storage (SDS) and how it is being integrated with other aspects of the SDDC. SDS does for storage what virtualization did for servers: it breaks down the physical barriers that bind data to specific hardware. Using SDS, storage repositories can now be built from high-volume, industry-standard hardware, where “white boxes,” typically multi-CPU Open Compute Project servers with a number of solid-state and/or spinning disks, perform storage tasks that formerly required specialized disk controller hardware. This is similar to what is beginning to happen to network switches under software defined networking (SDN). In another parallel to the SDN world, the software used in SDS is coming from open source communities (such as the OpenStack Swift design for object storage), from traditional storage vendors (such as EMC’s ViPR and NetApp’s clustered Data ONTAP), and from hypervisor vendors such as VMware and Microsoft. Industry-standard hardware is made to handle the performance and high-availability requirements of enterprise storage by applying clustering technologies, both local and geographically distributed, to storage – again with object storage in the lead, but new techniques are also making this possible for more traditional file systems. And combining geographically distributed storage clusters with snapshots may well eliminate the need for traditional forms of data protection such as backup and disaster recovery.
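To make the object storage side of this concrete, below is a minimal sketch of writing and reading an object through the OpenStack Swift API using the python-swiftclient library. The auth URL, credentials, and container/object names are placeholders for illustration only, not a reference to any particular deployment.

```python
# Minimal sketch: storing and retrieving an object in OpenStack Swift.
# The auth URL, user, key, and names below are illustrative placeholders.
from swiftclient.client import Connection

conn = Connection(
    authurl="http://swift.example.com/auth/v1.0",  # placeholder auth endpoint
    user="account:user",                           # placeholder credentials
    key="secret",
)

# Create a container (a bucket-like namespace) if it does not already exist.
conn.put_container("dr-replicas")

# Upload an object; Swift spreads replicas across the cluster's nodes
# according to its ring and storage-policy configuration.
conn.put_object(
    "dr-replicas",
    "vm-backup-001.img",
    contents=b"example payload",
    content_type="application/octet-stream",
)

# Read the object back; the response headers include an ETag for integrity checks.
headers, body = conn.get_object("dr-replicas", "vm-backup-001.img")
print(headers.get("etag"), len(body))
```

The point of the sketch is that the application talks to a simple HTTP-style API and never needs to know which white-box server or disk actually holds the data, which is exactly the decoupling SDS promises.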

Integrating storage devices, SDS or traditional, into the rest of the data center requires protocols that support either direct connections of storage to application servers or networked connections. As storage clustering gains traction, networking is the logical choice, with high-speed Ethernet such as 10 Gigabit per second (10 GbE) and 40 GbE increasingly dominant and the new 25 GbE coming along as well. Given this convergence – the use of the same basic networking standards for all networking requirements, whether SAN, NAS, LAN, or WAN – storage will integrate quite readily over time with the increasingly accepted SDN technologies that are enabling networking to become a full participant in the virtualization and cloud era. One trend that will bring SDS and SDN together is the increasing popularity of private and hybrid clouds: building a private cloud, when done right, gives an IT organization something close to a “clean slate” on which to build new infrastructure and new management techniques, and thus an opportune time to begin testing and using SDS.

Industry trends in servers, storage and networking, then, are heading in the right direction to make possible comprehensive, policy-driven management of the software defined data center.  However, despite the strong desire by IT managers and their C-level bosses for more agile and manageable data centers, a lot of money is still being spent just to maintain existing storage infrastructure, such as Fibre Channel.  So any organization that has set its sights on embracing the SDDC should start NOW to steadily convert its storage infrastructure to the kinds of devices and connectivity that are being proven in cloud environments – both by public cloud providers and by organizations that are taking a clean-slate approach to developing private and hybrid clouds.
