The success and general acceptance of server virtualization as a way to make servers more fully utilized and agile have encouraged IT managers to pursue virtualization of the other two major data center functions, storage and networking. More recently, the addition of external resources such as the public cloud to what the IT organization must manage has encouraged taking these trends a step further. Many in the industry believe the end goal should be the full software definition of all IT infrastructure, that is, a software defined data center (SDDC), so that every resource can adapt quickly to business needs and IT can pursue the holy grail of open-API-based, application-aware, single-pane-of-glass management.
So, as data storage professionals, we must ask: is storage changing in ways that will make it ready to be a full participant in the software defined data center? And what new storage techniques are proving able to deliver the agility, scalability, and cost-effectiveness sought by those embracing the SDDC?
These questions can best be answered by examining the current state of software defined storage (SDS) and how it is being integrated with the other aspects of the SDDC. SDS does for storage what virtualization did for servers: it breaks down the physical barriers that bind data to specific hardware. With SDS, storage repositories can be built from high-volume, industry-standard hardware, where “white boxes,” typically multi-CPU Open Compute Project servers holding a number of solid-state and/or spinning disks, perform storage tasks that formerly required specialized disk controller hardware. This parallels what is beginning to happen to network switches under software defined networking (SDN). In another parallel to the SDN world, the software used in SDS comes both from open source communities, such as the OpenStack Swift design for object storage, and from traditional storage vendors, such as EMC’s ViPR and NetApp’s clustered Data ONTAP, as well as from hypervisor vendors such as VMware and Microsoft. Industry-standard hardware is made to meet the performance and high-availability requirements of enterprise storage by applying clustering technologies, both local and geographically distributed, to storage; object storage is again in the lead, but new techniques are making this possible for more traditional file systems as well. And combining geographically distributed storage clusters with snapshots may well eliminate the need for traditional forms of data protection such as backup and disaster recovery.
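To make the object storage model concrete, here is a minimal sketch of how a client might store and retrieve data through a Swift-style REST API over plain HTTP, using Python’s requests library. The endpoint URL, authentication token, container name, and file name are all hypothetical placeholders, not details of any particular deployment.

```python
import requests

# Placeholder values: in a real deployment the account endpoint and token
# come from the cluster's authentication service (e.g., Keystone).
STORAGE_URL = "https://swift.example.com/v1/AUTH_demo"   # hypothetical account endpoint
HEADERS = {"X-Auth-Token": "replace-with-a-real-auth-token"}

# Create a container to hold objects. The client issues a simple PUT on the
# container path; the storage software decides where replicas actually live.
requests.put(f"{STORAGE_URL}/backups", headers=HEADERS).raise_for_status()

# Upload an object. The client never addresses a specific disk or controller;
# placement and replication across the commodity nodes is the software's job.
with open("db-snapshot.tar.gz", "rb") as f:   # hypothetical local file
    requests.put(f"{STORAGE_URL}/backups/db-snapshot.tar.gz",
                 headers=HEADERS, data=f).raise_for_status()

# Read it back with a GET against the same logical path, regardless of which
# nodes currently hold the replicas.
resp = requests.get(f"{STORAGE_URL}/backups/db-snapshot.tar.gz", headers=HEADERS)
resp.raise_for_status()
print(f"Retrieved {len(resp.content)} bytes")
```

The point of the sketch is that the application addresses data only by logical container and object name; replica placement across the commodity nodes in the cluster is entirely the concern of the storage software.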
Integrating storage devices, whether SDS or traditional, into the rest of the data center requires protocols that support either direct connections between storage and application servers or networked connections. As storage clustering gains traction, networking is the logical choice, with high-speed Ethernet, 10 Gigabit per second (10 GbE) and 40 GbE, increasingly dominant and the new 25 GbE coming along as well. Given this convergence, with the same basic networking standards serving every networking requirement (SAN or NAS, LAN, and WAN), storage will integrate quite readily over time with the increasingly accepted SDN technologies that are enabling networking to become a full participant in the virtualization and cloud era. One trend that will bring SDS and SDN together is the growing popularity of private and hybrid clouds: building a private cloud, when done right, gives an IT organization something close to a “clean slate” on which to build new infrastructure and new management techniques, and thus an opportune time to begin testing and using SDS.
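As a rough illustration of what API-driven provisioning of networked storage can look like in such a private or hybrid cloud, the sketch below requests a block volume through an OpenStack-style Block Storage (Cinder) API and waits for it to become attachable. The endpoint, project ID, token, and volume name are hypothetical placeholders, and the exact request and response formats can vary between releases and vendors.

```python
import time
import requests

# Hypothetical endpoint and credentials; in practice these come from the
# cloud's identity service and service catalog.
CINDER = "https://cloud.example.com/volume/v3/0123456789abcdef"  # placeholder project URL
HEADERS = {"X-Auth-Token": "replace-with-a-real-auth-token",
           "Content-Type": "application/json"}

# Ask the block-storage service for a 100 GB volume. The request names only
# capacity and a label; which backend, pool, or node satisfies it is decided
# by policy in the storage software, not by the requester.
body = {"volume": {"size": 100, "name": "app-data-01"}}
resp = requests.post(f"{CINDER}/volumes", json=body, headers=HEADERS)
resp.raise_for_status()
volume_id = resp.json()["volume"]["id"]

# Poll until the volume is ready to be attached to a server over the
# converged Ethernet fabric.
while True:
    status = requests.get(f"{CINDER}/volumes/{volume_id}",
                          headers=HEADERS).json()["volume"]["status"]
    if status in ("available", "error"):
        break
    time.sleep(2)
print(f"Volume {volume_id} is {status}")
```

Once the volume reports available, it would typically be attached to a server over standard Ethernet using a block protocol such as iSCSI, with no dedicated Fibre Channel fabric involved.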
Industry trends in servers, storage, and networking are therefore heading in the right direction to make comprehensive, policy-driven management of the software defined data center possible. However, despite the strong desire of IT managers and their C-level bosses for more agile and manageable data centers, a great deal of money is still being spent simply to maintain existing storage infrastructure such as Fibre Channel. Any organization that has set its sights on embracing the SDDC should therefore start now to steadily convert its storage infrastructure to the kinds of devices and connectivity being proven in cloud environments, both by public cloud providers and by organizations taking a clean-slate approach to building private and hybrid clouds.