Q&A – The Impact of International Data Protection Laws on the Cloud

The impact of international data protection legislation on the cloud is complicated and constantly changing. In our recent SNIA Cloud Storage Webcast on this topic we did our best to cover some of the recent global data privacy and data protection regulations being enacted. If you missed the Webcast, I encourage you to watch it on-demand at your convenience. We answered questions during the live event, but as promised we’re providing more complete answers in this blog. If you have additional questions, please comment here and we’ll reply as soon as we can.

The law is complex, and neither SNIA, the authors nor the presenters of this presentation are lawyers. Nothing here or in the presentation should be construed as legal advice. For that you need the services of a qualified professional.

Q. What are your thoughts on Safe Harbour being considered invalid, and the potential for a Safe Harbour 2?

A. Since 6 October 2015, when the European Court of Justice invalidated the European Commission’s Safe Harbour Decision, there’s been a lot written about Safe Harbour 2 in the press. But it was clear that a renegotiation was essential two years before that, when discussions for a replacement were started. Many think (and many hope!) that a new agreement, valid in terms of Europe’s human rights legislation, will be settled between the US and Europe sometime in March 2016.

Q. Are EU Model Clauses still available to use instead of BCRs (Binding Corporate Rules)?

A. EU-US data transfers facilitated by the use of model clauses probably fail to comply with EU law today. But as there appears to be no substitute available, the advice appears to be: use them for now until the problem is fixed. Full guidance can be found on the EC website.

Q. What does imbalance mean relative to consent?

A. An example might help. As an employee, you might agree (the “consent”) to your data being used by your employer in ways that you might not normally have accepted, perhaps because you feel you can’t refuse without risking your job or a promotion. That’s an imbalanced relationship; the consent needs to be seen in that light, and the employer needs to demonstrate that there has been, and will be, no coercion to give consent.

Come See SNIA at the Software-Defined Infrastructure Summit

Demand for software-defined infrastructure (SDI) is on the rise, and with good reason. SDI helps data centers meet the challenges of cloud computing, big data/analytics, mobility and social media, in an agile and cost-effective way.  I’m pleased to announce that SNIA will be an active participant at next week’s Software-Defined Infrastructure Summit in Santa Clara, CA, December 1-3.

My colleagues and I at the SNIA Cloud Storage Initiative have organized a “Working with OpenStack” Seminar that kicks off the Summit on Tuesday, December 1.

I will keynote an OpenStack fireside chat along with Chris DePuy, VP, at Dell’Oro Group. We’ll be discussing the SNIA Cloud Data Management Interface (CDMI) and its interface with OpenStack, OpenStack implementations, how standards play, and the future of open source in the 21st century.

My keynote will be accompanied by additional SNIA talks in the Introduction to OpenStack session and the Application Management session:

  • Sam Fineberg, PhD, SNIA Cloud Storage Initiative member and Distinguished Technologist at Hewlett Packard Enterprise Storage, will provide an overview of the storage aspects of OpenStack including the core projects for block storage (Cinder) and object storage (Swift), and the new shared file service (Manila). He’ll cover some common configurations and use cases for these technologies, and discuss how they interact with the other parts of OpenStack (a short Swift sketch follows this list).
  • Richelle Ahlvers, SNIA Open Source Task Force member and Principal Storage Management Architect at Avago Technologies, will discuss application integration in OpenStack and how SNIA-developed standards enable cross-vendor management interoperability and help open source projects interoperate with more industry solutions.
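If you’d like to try Swift before the seminar, here is a minimal upload sketch using python-swiftclient; the endpoint, credentials, and container name are placeholders, not details from the sessions.

```python
# A minimal object upload with python-swiftclient; all endpoints,
# credentials, and names below are placeholders.
from swiftclient import client

conn = client.Connection(authurl='http://controller:5000/v2.0',
                         user='demo', key='secret',
                         tenant_name='demo-project', auth_version='2')

conn.put_container('backups')            # idempotent container create
conn.put_object('backups', 'hello.txt',
                contents=b'hello world',
                content_type='text/plain')

headers, body = conn.get_object('backups', 'hello.txt')
print(body)                              # b'hello world'
```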

Tuesday’s Seminar day will include additional sessions from leaders in OpenStack, Ceph, and Software Defined Storage. SDI Summit days 2 and 3 will provide information on hardware, software, and data center technology and applications of software-defined infrastructure, featuring keynotes from IBM, Intel, Red Hat, and VMware, all SNIA member companies. It’s a must-attend event.

SNIA will also be exhibiting at the Summit. Please stop by booth #408 to learn how SNIA standards are used in open source projects including cloud data management, non-volatile memory, self-contained information retention, and storage management. We will also have information on SNIA programs such as membership, certification, conformance testing, and conferences.

SNIA members and colleagues can use the code SPGP to receive a $100 discount on any level of SDI Summit registration. I hope to see you in Santa Clara!

Moving Data Protection to the Cloud: Key Considerations

Leveraging the cloud for data protection can be an advantageous and viable option for many organizations, but first you must understand the pros and cons of different approaches. Join us on Nov. 17th for our live Webcast, “Moving Data Protection to the Cloud: Trends, Challenges and Strategies” where we’ll discuss the experiences of others with advice on how to avoid the pitfalls, especially during the transition from strictly local resources to cloud resources. We’ve pulled together a unique panel of SNIA experts, as well as perspectives from experts at leading vendors Acronis, Asigra and SolidFire, who’ll discuss and debate:

  • Critical cloud data protection challenges
  • How to use the cloud for data protection
  • Pros and cons of various cloud data protection strategies
  • Experiences of others to avoid common pitfalls
  • Cloud standards in use – and why you need them

Register now for this live and interactive event. Our entire panel will be available to answer your questions. I hope you’ll join us!


Security is Strategic to Storage Developers – and a Prime Focus at SDC and SNIA Data Storage Security Summit

Posted by Marty Foltyn

Security is critical in the storage development process – and a prime focus of sessions at the SNIA Storage Developer Conference AND the co-located SNIA Data Storage Security Summit on Thursday September 24. Admission to the Summit is complimentary – register here at http://www.snia.org/dss-summit.

The Summit agenda is packed with luminaries in the field of storage security, including keynotes from Eric Hibbard (SNIA Security Technical Work Group and Hitachi), Robert Thibadeau (Bright Plaza), Tony Cox (SNIA Storage Security Industry Forum and OASIS KMIP Technical Committee), Suzanne Widup (Verizon), Justin Corlett (Cryptsoft), and Steven Teppler (TimeCertain); and afternoon breakouts from Radia Perlman (EMC); Liz Townsend (Townsend Security); Bob Guimarin (Fornetix); and David Siles (Data Gravity). Roundtables will discuss current issues and future trends in storage security. Don’t miss this exciting event!

SDC’s “Security” sessions highlight security issues and strategies for mobile, cloud, user identity, attack prevention, key management, and encryption. Preview sessions here, and click on the title to find more details.

Geoff Gentry, Regional Director, Independent Security Evaluators, will present Hackers, Attack Anatomy and Security Trends.

David Slik, Technical Director, Object Storage, NetApp will discuss Mobile and Secure: Cloud Encrypted Objects Using CDMI, introducing the Cloud Encrypted Object Extension to the CDMI standard, which permits encrypted objects to be stored, retrieved, and transferred between clouds.

Dean Hildebrand, IBM Master Inventor and Manager, Cloud Storage Software, and Sasikanth Eda, Software Engineer, IBM, will present OpenStack Swift On File: User Identity For Cross Protocol Access Demystified. This session will detail the various issues and nuances associated with having common ID management across Swift object access and file access, and present an approach to solve them without changes to core Swift code by leveraging the powerful Swift middleware framework.
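For context, Swift middleware is plain WSGI: a filter can inspect or annotate every request without touching core Swift. Below is a minimal, hypothetical sketch of that pattern; the class and header names are illustrative, not from the session.

```python
# A minimal Swift (WSGI) middleware sketch; class and header names are
# illustrative only. Real middleware is wired into the proxy pipeline
# via proxy-server.conf.
class IdentityTagger(object):
    """Annotates each request with a common user identity header."""

    def __init__(self, app, conf):
        self.app = app      # the next WSGI app in the pipeline
        self.conf = conf

    def __call__(self, env, start_response):
        # Map the object-access identity onto a common ID that a file
        # gateway could also understand, without changing core Swift.
        env.setdefault('HTTP_X_COMMON_USER_ID', env.get('REMOTE_USER', ''))
        return self.app(env, start_response)


def filter_factory(global_conf, **local_conf):
    conf = dict(global_conf, **local_conf)

    def factory(app):
        return IdentityTagger(app, conf)
    return factory
```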

Tim Hudson, CTO and Technical Director, Cryptsoft will discuss Multi-Vendor Key Management with KMIP, offering practical experience from implementing the OASIS Key Management Interoperability Protocol (KMIP) and from deploying and interoperability testing multiple vendor implementations of KMIP.

Nathaniel McCallum, Senior Software Engineer, Red Hat will present Network Bound Encryption for Data-at-Rest Protection, describing Petera, an open source project which implements a new technique for binding encryption keys to a network.

Finally, check out previous SNIA on Storage blog entries on File Systems, Cloud, Management, New Thinking, and Disruptive Technologies. See the agenda and register now for SDC at http://www.storagedeveloper.org.

Embrace the Cloud at SNIA’s Storage Developer Conference With These Top-Notch Speakers and Sessions!

For the next two weeks, SNIA on Storage will highlight exciting interest areas in the 2015 SNIA Storage Developer Conference (SDC) agenda. If you have not registered, you need to! Visit www.storagedeveloper.org to see the four day overview and sign up.

Cloud storage is hot, and whether you are new to the cloud or an experienced developer, SDC has a great lineup of speakers and sessions. Preview sessions here, and click on the title to find more details.

If you are just dipping your toes into cloud technology, you will want to check out the SDC Pre-Conference Primer on Sunday September 20. These sessions are included with full conference registration.

Here, SNIA Cloud Storage TWG Co-Chairs David Slik and Mark Carlson will explain all you need to know on cloud storage. You will come up to speed on the concepts, conventions, and standards in this space, and even see a live demo of an operating storage cloud. And Brian Mason, MTS-SW at NetApp, will review how to use REST APIs for management integration, so that developers can integrate various storage systems into their best-in-class management tools.

At the SDC Conference, the Cloud track kicks off with David Slik, SNIA Cloud Storage TWG Co-Chair and Technical Director at NetApp, discussing how to Use SNIA’s Cloud Data Management Interface (CDMI) to Manage Swift, S3, and Ceph Object Repositories, and how the use of CDMI as a management protocol adds value to multi-protocol systems.
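CDMI rides on plain HTTP, so managing an object repository can be as simple as a REST call. Here is a hedged sketch using Python’s requests library to read a container’s metadata; the endpoint, container name, and credentials are placeholders.

```python
# Read a CDMI container's metadata and children over plain HTTP.
# The endpoint, container name, and credentials are placeholders.
import requests

headers = {
    'X-CDMI-Specification-Version': '1.1',
    'Accept': 'application/cdmi-container',
}
resp = requests.get('https://cloud.example.com/cdmi/demo-container/',
                    headers=headers, auth=('user', 'secret'))
resp.raise_for_status()

container = resp.json()
print(container.get('objectName'))   # container name
print(container.get('children'))     # objects stored in it
print(container.get('metadata'))     # data system metadata, if any
```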

Yong Chen, Assistant Professor at Texas Tech University will speak on Unistore: A Unified Storage Architecture for Cloud Computing. He will introduce an ongoing effort from Texas Tech University and Nimboxx Inc. to build an innovative unified storage architecture (Unistore) with the co-existence and efficient integration of heterogeneous HDD and SCM devices for Cloud storage.

Luke Behnke, VP of Products at Bitcasa will present The Developer’s Dilemma: Do-It-Yourself Storage or Surrender Your Data? He’ll discuss the choice between DIY or cloud storage APIs, and how this will impact future functionality and user experience.

Sachin Goswami, Solution Architect and Storage COE Head, Hi Tech, Tata Consultancy Services (TCS), will explain How to Test CDMI Extension Features Like LTFS, Data Deduplication, OVF, and Partial-Value Copy Functionality: Challenges, Solutions and Best Practices, sharing the approach TCS will adopt to overcome the challenges in testing LTFS integration with CDMI, data deduplication, partial upload to the server, and Open Virtualization Format (OVF) across CDMI and non-CDMI based cloud scenarios.

Speaking of the CDMI standard, join the Cloud Plugfest at SDC starting on September 21st to learn more about the CDMI Conformance Test Program and test your application for CDMI conformance.

And you won’t want to miss the Birds-of-a-Feather (BOF) sessions on Cloud! The first is on Tuesday evening, September 22, on Getting Started with the CDMI Conformance Test Program! Come to this OPEN TO ALL Birds-of-a-Feather session to learn what the CTP program entails, details on the testing service that is offered, and how to get the CTP process started.

On Wednesday evening, September 23, the Moving Data Protection to the Cloud: Trends, Challenges and Strategies BOF will discuss critical cloud data protection challenges, how to use the cloud for data protection, pros and cons of various cloud data protection strategies, experiences of others (good and bad) to avoid common pitfalls, and cloud standards in use – and why you need them! This BOF is open to all!


Register now at www.storagedeveloper.org. And stay tuned for tomorrow’s blog on Management topics at SDC!

New Webcast: Block Storage in the Open Source Cloud called OpenStack

On June 3rd at 10:00 a.m. SNIA-ESF will present its next live Webcast “Block Storage in the Open Source Cloud called OpenStack.” Storage is a major component of any cloud computing platform. OpenStack is one of the largest and most widely supported open source cloud computing platforms in the market today. The OpenStack block storage service (Cinder) provides persistent block storage resources that OpenStack Nova compute instances can consume.

I will be moderating this Webcast, which will be presented by Walt Boring, a core member of the OpenStack Cinder team. Join us as we dive into:

  • Relevant components of OpenStack Cinder
  • How block storage is managed by OpenStack
  • What storage protocols are currently supported
  • How it all works together with compute instances
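As a taste of what Walt will cover, here is a minimal, hedged sketch of driving Cinder from Python with python-cinderclient (block storage API v2); the credentials and endpoint are placeholders, and attaching the volume to an instance is done separately through Nova.

```python
# Create and list Cinder volumes with python-cinderclient (API v2).
# Credentials, project, and endpoint below are placeholders.
from cinderclient import client

cinder = client.Client('2', 'demo', 'secret', 'demo-project',
                       'http://controller:5000/v2.0')

# A 10 GB persistent volume; it outlives any compute instance and is
# attached to a Nova instance separately (e.g. `nova volume-attach`).
vol = cinder.volumes.create(10, name='webcast-demo')
print(vol.id, vol.status)

for v in cinder.volumes.list():
    print(v.name, v.size, v.status)
```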

I encourage you to register now to block your calendar. This will be a live and interactive Webcast, so please bring your questions. I look forward to “seeing” you on June 3rd.

LTFS Bulk Transfer Standard Q&A

Our recent live SNIA Cloud Webcast “LTFS Bulk Transfer Standard” is now available on-demand. Thanks to all the folks who attended the live event. We did not have time to address all of the questions, so here are answers to them. If you think of additional questions, please feel free to comment on this blog.

Q. The LTFS standard seems to support shared extents between files, and by extension, deduplicated files. Is this a correct assessment, and how does it play in the bulk transfer standard?

A. The LTFS Bulk Transfer Standard supports shared extents as supported by the LTFS standard, which can transparently reduce space used by having multiple references to common data stored on tape (deduplication). This typically happens below the bulk transfer layer, in the software used to read and write the LTFS volumes. At this point, few software packages support this feature due to the wear and latency consequences of the read seeks that result from its use.
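To make the idea concrete, here is a toy model (not the actual LTFS XML index schema) of two files referencing one shared extent, showing how the physical footprint on tape falls below the logical size:

```python
# Toy model of LTFS shared extents (not the real LTFS index schema):
# two files reference the same extent, so its data is stored once.
MB = 2 ** 20
extents = {
    'e1': {'startblock': 0,    'bytecount': 4 * MB},
    'e2': {'startblock': 4096, 'bytecount': 8 * MB},
}
files = {
    'a.bin': ['e1', 'e2'],
    'b.bin': ['e2'],          # shares extent e2 with a.bin
}

logical = sum(extents[e]['bytecount']
              for refs in files.values() for e in refs)
physical = sum(ext['bytecount'] for ext in extents.values())
print('logical: %d MB, physical on tape: %d MB'
      % (logical // MB, physical // MB))
# -> logical: 20 MB, physical on tape: 12 MB
```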

Q. What is the state of the standard in its lifecycle? (e.g., working group draft, public review, published, etc.)

A. The LTFS standard has been around for some time; more information can be found here at http://www.snia.org/tech_activities/standards/curr_standards/ltfs. The LTFS Bulk Transfer Standard is here at http://www.snia.org/tech_activities/publicreview#ltfsbulk, and is in public review.

Q. The standard seems to be based on the idea of moving physical tapes to the cloud. Is there a definition of a virtual LTFS image that can be moved between systems over the network?

A. Not yet, but that is a great idea we’ll be taking forward in the next versions of the proposal.

Q. One of the barriers to greater use of LTFS in the cloud is the relative lack of enterprise-grade management software that ensures that the tape media is refreshed / upgraded as it ages, that its integrity is periodically checked, and that reclamation and compaction are done. It needs open standards for support in standard volume management systems as well. Until these things are in place, LTFS will be interesting largely to specialized industries like film/entertainment, seismic, and bulk transfer & bulk storage – but not for the steady-state use of tape as a true additional layer of the cloud storage hierarchy. Tape with LTFS plus proper management could fill this role – but not until full lifecycle tape management is available and integrated.

A. The management that is always required for a physical product with a well-defined and finite lifetime is not a unique requirement of LTFS. Tape has a long history of use as a backup and archive medium, and there are a number of tape management products that are commercially available from LTO tape suppliers and independent software companies, as well as open source products. A Google search for “tape management software” will provide you with a number of alternative solutions.

Q. Do you have a list of people that sell LTFS-based solutions?

A. No we don’t, but it’s a very good idea, and we’ll investigate it further.


What’s Happening with 25GbE

In July 2014, IEEE 802.3 voted to form a Study Group for 25Gb/s Ethernet. There has been a lot of attention in the networking press lately about 25Gb/s Ethernet, but many people are asking what it is and how we got here. After all, 802.3 has already completed standards for 40Gb/s and 100Gb/s and is currently working on 400Gb/s, so from a pure speed perspective, starting a 25Gb/s project now does look like a step backwards.

(Warning: the following discussion contains excessive physical layer jargon.)

The Sweet Spot

25GbE as a port speed is attractive because it makes use of 25Gb/s per lane signaling technology that has been in development for years in the industry, culminating in the recent completion of 802.3bj, the standard for 100GbE over backplane or TwinAx copper that utilizes four parallel lanes of 25Gb/s signaling to achieve the 100Gb/s port speed. Products implementing 25Gb/s signaling in CMOS technology are just starting to come to market, and the rate will likely be a sweet spot for many years, as higher rate signaling of 40Gb/s or 50Gb/s is still in early technology development phases. The ability to implement this high speed I/O in CMOS is important because it allows combining high-speed I/O with many millions of logic gates needed to implement Ethernet switches, controllers, FPGAs, and microprocessors. Thus specifying a MAC rate of 25Gb/s to utilize 25Gb/s serdes technology can enable product developers to optimize for both the lowest cost/bit and the highest overall bandwidth utilization of the switching fabric.

4-Lane to 1-Lane Evolution

To see how we got here and why 25Gb/s is interesting, it is useful to back up a couple of generations and look at 10Gb/s and 40Gb/s Ethernet. The earliest implementations of 10GbE relied on rather wide parallel electrical interfaces: XGMII and the 16-bit interface. Very soon after, however, 4-lane serdes-based interfaces became the norm, starting with XAUI (for chip-to-chip and chip-to-optical module use), which was then adapted to longer reaches on TwinAx and backplane (10GBASE-CX4 and 10GBASE-KX4). Before 10GbE achieved higher volumes (~2009), 10Gb/s on a single electrical serial lane had already been specified and shown to be technically feasible. XFI was the first, followed by 10GBASE-KR (backplane) and SFI (as an optical module interface and for direct attach TwinAx cable using the SFP+ pluggable form factor). KR and SFI started to ramp around 2009 and are still the highest volume share of 10GbE ports in datacenter applications. The takeaway, in my opinion, is that single-lane interfaces helped the 10GbE volume ramp by reducing interconnect cost. Now look forward to 40GbE and 100GbE. The initial standard, 802.3ba, was completed in 2010. During the time that this specification was being developed, 10Gb/s serial interfaces were gaining traction, and consensus formed around the use of multiple 10Gb/s lanes in parallel to make the 40GbE and 100GbE electrical interfaces. For example, there is a great similarity between 10GBASE-KR and one lane of the 40GBASE-KR4 four-lane interface. In a similar fashion, 10Gb/s SFI for TwinAx & optics in the SFP+ form factor is similar to a lane of the 40GbE equivalent interfaces for TwinAx and optics in the QSFP+ form factor.

But how does this get to 25Gb/s?

Due to the similarity in technology needed to make 10GbE and 40GbE, it has become a common feature in Ethernet switch and NIC chips to implement a four-lane port for 40GbE that can be configured to use each lane separately, yielding four 10GbE ports.

From there it is a natural extension that 100GbE ports implemented using 802.3bj technology (4x25Gb/s) can also be configured to support four independent ports operating at 25Gb/s. This is such a natural conclusion that multiple companies are implementing 25GbE even though it is not yet a standard.
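The arithmetic behind the breakout is simple enough to show directly; this small sketch just works through the lane math described above (4x10Gb/s for 40GbE, 4x25Gb/s for 100GbE):

```python
# Lane math for breakout ports: a four-lane port can run as one
# aggregate port or as four independent single-lane ports.
PORT_LANE_CONFIGS = {
    '40GbE':  {'lanes': 4, 'lane_rate_gbps': 10},
    '100GbE': {'lanes': 4, 'lane_rate_gbps': 25},
}

for port, cfg in sorted(PORT_LANE_CONFIGS.items()):
    aggregate = cfg['lanes'] * cfg['lane_rate_gbps']
    print('%s: %d lanes x %d Gb/s = %d Gb/s aggregate, '
          'or %d independent %dGbE ports'
          % (port, cfg['lanes'], cfg['lane_rate_gbps'], aggregate,
             cfg['lanes'], cfg['lane_rate_gbps']))
```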

In some environments, the existence of a standard is not a priority. For example, when a large-scale datacenter of compute, storage and networking is architected, owned and operated by one entity, that entity validates the necessary configuration to meet its requirements. For the broader market, however, there is typically a requirement for multi-vendor interoperability across a diverse set of configurations and uses. This is where Ethernet and IEEE 802.3 have provided value to the industry for over 30 years.

Where’s the Application?

Given the nature of their environment, it is the cloud datacenter operators that are poised to be the early adopters of 25GbE. Will it also find a home in more traditional enterprise and storage markets? Time will tell, but in many environments ease of use, long shelf life, and multi-vendor interoperability are the priorities. For any environment, having the 25GbE specification maintained by IEEE 802.3 will facilitate those needs.

Ethernet is the right fit for the Software Defined Data Center

“Software defined” is a label used to describe advances in network and storage virtualization that promise to greatly improve infrastructure management and accelerate business agility. Network virtualization itself isn’t a new concept and has been around in various forms for some time (think VLANs). But the commercialization of server virtualization seems to have paved the path to extend virtualization throughout the data center infrastructure, making the data center an IT environment delivering dynamic and even self-deployed services. The networking stack has been getting most of the recent buzz, and I’ll focus on that portion of the infrastructure here.

What is driving this trend in data networking? As I mentioned, server virtualization has a lot to do with the new trend. Virtualizing applications makes a lot of things better, and makes some things more complicated. Server virtualization enables you to achieve much higher application density in your data center. Instead of a one-to-one relationship between the application and server, you can host tens of applications on the same physical server. This is great news for data centers that run into space limitations or for businesses looking for greater efficiency out of their existing hardware.

The challenge, however, is that these applications aren’t stationary. They can move from one physical server to another. And this mobility can add complications for the networking guys. Networks must be aware of virtual machines in ways that they don’t have to be aware of physical servers. For network admins of yesteryear, their domain was a black box of “innies” and “outies”. Gone are the days of “set it and forget it” in terms of networking devices. Or is it?

Software defined networks (aka SDN) promise to greatly simplify the network environment. By decoupling the control plane from the data plane, SDN allows administrators to treat a collection of networking devices as a single entity and to use policies to configure and deploy networking resources more dynamically. Additionally, moving to a software defined infrastructure means that you can move control and management of physical devices to different applications within the infrastructure, which gives you the flexibility to launch and deploy virtual infrastructures in a more agile way.
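The decoupling is easier to see in code than in prose. Below is a purely conceptual sketch, with invented names and no real controller API, of one control-plane decision being pushed to many data-plane devices:

```python
# Conceptual SDN sketch: one control-plane policy decision applied to
# many data-plane devices. No real controller API; names are invented.
class Switch(object):
    def __init__(self, name):
        self.name = name

    def configure(self, policy):
        # Data-plane device accepting config pushed from the controller.
        print('%s <- %s' % (self.name, policy))


class FabricController(object):
    """Treats a collection of switches as a single logical entity."""

    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, policy):
        # One decision, made centrally, deployed everywhere.
        for switch in self.switches:
            switch.configure(policy)


fabric = FabricController([Switch('tor-1'), Switch('tor-2')])
fabric.apply_policy({'vlan': 42, 'qos': 'gold'})
```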

Software defined networks aren’t limited to a specific physical transport. The theory, and I believe the implementation, will be universal in concept. However, the more that hardware can be deployed in a consistent manner, the greater the flexibility for the enterprise. As server virtualization becomes the norm, servers hosting applications with mixed protocol needs (block and file) will be more common. In this scenario, Ethernet networks offer advantages, especially as software defined networks come into play. Following is a list of some of the benefits of Ethernet in a software defined network environment.

Ubiquitous

Ethernet is a very familiar technology and is present in almost every compute and mobile device in an enterprise. From IP telephony to mobile devices, Ethernet is a commonly deployed networking standard and, as a result, is very cost effective. The number of devices and engineering resources focused on Ethernet drives the economics in favor of Ethernet.

Compatibility

Ethernet has been around for so long and has proven to “just work.” Interoperability is really a non-issue, and this extends to inter-vendor interoperability. Some other networking technologies require same-vendor components throughout the data path. Not the case with Ethernet. With rare exceptions, you can mix and match switch and adapter devices within the same infrastructure. Obviously, best practices would suggest that a single vendor within the switch infrastructure would simplify the environment with a common set of management tools, features, and support plans. But that might change with advances in SDN.

Highly Scalable

Ethernet is massively scalable. The use of routing technology allows for broad geographic networks. The recent adoption of IPv6 extends IP addressing way beyond what is conceivable at this point in time. As we enter the “internet of things” period in IT history, we will not lack for network scale. At least, in theory.

Overlay Networks

Overlay networks allow you to extend L2 networks beyond traditional geographic boundaries, as with hybrid clouds. Two proposed standards are under review by the Internet Engineering Task Force (IETF): Virtual eXtensible Local Area Networks (VXLAN) from VMware and Network Virtualization using Generic Routing Encapsulation (NVGRE) from Microsoft. Both combine L2 and L3 technologies to carry the L2 network across larger geographies. You can think of an overlay network as essentially a generalization of a VLAN. Unlike with routing, overlay networks permit you to retain visibility and accessibility of your L2 network across those larger geographies.
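As a concrete illustration of the encapsulation involved, here is a small sketch that packs the 8-byte VXLAN header defined in RFC 7348; in a real deployment this header (plus outer UDP/IP/Ethernet headers, not shown) is prepended to the original L2 frame:

```python
# Pack the 8-byte VXLAN header from RFC 7348: an 8-bit flags field
# (I-flag 0x08 = "VNI present"), then a 24-bit VXLAN Network
# Identifier (VNI); remaining bits are reserved.
import struct

def vxlan_header(vni):
    if not 0 <= vni < 2 ** 24:
        raise ValueError('VNI must fit in 24 bits')
    flags_word = 0x08 << 24      # I-flag set, reserved bits zero
    vni_word = vni << 8          # VNI in the top 24 bits of word 2
    return struct.pack('!II', flags_word, vni_word)

print(vxlan_header(5001).hex())  # 0800000000138900 (VNI 5001 = 0x1389)
```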

Unified Protocol Access

Ethernet has the ability to support mixed storage protocols, including iSCSI, FCoE, NFS, and CIFS/SMB. Support for mixed or unified environments can be more efficiently deployed using 10 Gigabit Ethernet (10GbE) and Data Center Bridging (required for FCoE traffic) as IP and FCoE traffic can share the same ports. 10GbE simplifies network deployment as the data center can be wired once and protocols can be reconfigured with software, rather than hardware changes.

Virtualization

Ethernet does very well in virtualized environments. IP addresses can easily be abstracted from physical ports to facilitate port mobility. As a result, networks built on an Ethernet infrastructure leveraging network virtualization can benefit from increased flexibility and uptime, as hardware can be serviced or upgraded while applications are online.

Roadmap

For years, Ethernet has increased performance, but the transition from Gigabit Ethernet to 10 Gigabit Ethernet was a slow one. Delays in connector standards complicated matters. But, those days are over and the roadmap remains robust and product advances are accelerating. We are starting to see 40GbE devices on the market today, and will see 100GbE devices in the near future. As more and more data traffic is consolidated onto a shared infrastructure, these performance increases will provide the headroom for more efficient infrastructure deployments.

Some of the benefits listed above can be found with other networking technologies. But, Ethernet technology offers a unique combination of technology and economic value across a broad ecosystem of vendors that make it an ideal infrastructure for next generation data centers. And as these data centers are designed more and more around application services, software will be the lead conversation. To enable the vision of a software defined infrastructure, there is no better network technology than Ethernet.

Ethernet and IP Storage – Today’s Technology Enabling Next Generation Data Centers

I continue to believe that IP based storage protocols will be preferred for future data center deployments. The future of IT is pointing to cloud based architectures, whether internal or external. At the core of the cloud is virtualization. And I believe that Ethernet and IP storage protocols offer the greatest overall value to unlock the potential of virtualization and clouds. Will other storage network technologies work? Of course. But, I’m not talking about whether a network “works”. I’m suggesting that a converged network environment with Ethernet and IP storage offers the best combined value for virtual environments and cloud deployments. I’ve written and spoken about this topic before. And I will likely continue to do so. So, let me mention a few reasons to choose IP storage, iSCSI or NAS, for use in cloud environments.

Mobility. One of the many benefits of server virtualization is the ability to non-disruptively migrate applications from one physical server to another to support load balancing, failover or redundancy, and servicing or updating of hardware. The ability to migrate applications is best achieved with networked storage since the data doesn’t have to move when a virtual machine (VM) moves. But, the network needs to maintain connectivity to the fabric when a VM moves. Ethernet offers a network technology capable of migrating or reassigning network addresses, in this case IP addresses, from one physical device to another. When a VM moves to another physical server, the IP addresses move with it. IP based storage, such as iSCSI, leverages the built in capabilities of TCP/IP over Ethernet to migrate network port addresses without interruption to applications.
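One concrete mechanism behind this: when a VM lands on a new host, the hypervisor typically broadcasts a gratuitous ARP so the fabric relearns which physical port now owns the VM’s address. Here is a hedged sketch of that announcement using scapy, with placeholder addresses:

```python
# Gratuitous ARP announcement, as a hypervisor might send after a VM
# migration; addresses and interface name are placeholders. Requires
# scapy and root privileges.
from scapy.all import ARP, Ether, sendp

VM_IP, VM_MAC = '10.0.0.42', '52:54:00:ab:cd:ef'

garp = Ether(dst='ff:ff:ff:ff:ff:ff', src=VM_MAC) / ARP(
    op=2,                        # ARP reply ("is-at")
    psrc=VM_IP, pdst=VM_IP,      # announcing our own IP
    hwsrc=VM_MAC, hwdst='ff:ff:ff:ff:ff:ff')

sendp(garp, iface='eth0')        # switches relearn the MAC's new port
```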

Flexibility. Most data centers require a mixture of applications that access either file or block data. With server virtualization, it is likely that you’ll require access to file and block data types on the same physical server for either the guest or parent OS. The ability to use a common network infrastructure for both the guest and parent can reduce cost and simplify management. Ethernet offers support for multiple storage protocols. In addition to iSCSI, Ethernet supports NFS and CIFS/SMB resulting in greater choice to optimize application performance within your budget. FCoE is also supported on an enhanced 10Gb Ethernet network to offer access to an existing FC infrastructure. The added flexibility to interface with existing SAN resources enhances the value of 10Gb as a long-term networking solution.

Performance. Cost. Ubiquity. Other factors that enhance Ethernet storage, and therefore IP storage adoption, include a robust roadmap, favorable economics, and near universal adoption. The Ethernet roadmap includes 40Gb and 100Gb speeds, which will support storage traffic and will be capable of addressing any foreseeable application requirements. Ethernet today offers considerable economic value as port prices continue to drop. Although Gb speeds offer sufficient bandwidth for most business applications, the cost per Gb of bandwidth with 10 Gigabit Ethernet (10GbE) is now lower than with GbE, and therefore offers upside in cost and efficiency. Finally, nearly all new digital devices, including mobile phones, cameras, laptops, servers, and even some home appliances, are being offered with WiFi or wired Ethernet connectivity. Consolidating onto a single network technology means that the networking infrastructure to the rest of the world is essentially already deployed. How good is that?
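The cost-per-Gb point is just arithmetic, shown below with purely hypothetical port prices (actual prices vary by vendor and volume):

```python
# Cost per Gb/s of bandwidth, with hypothetical port prices chosen
# only to illustrate the comparison; real prices vary widely.
gbe_port_usd = 50.0       # assumed price of a 1GbE port
tengbe_port_usd = 300.0   # assumed price of a 10GbE port

print('GbE:   $%.0f per Gb/s' % (gbe_port_usd / 1))
print('10GbE: $%.0f per Gb/s' % (tengbe_port_usd / 10))
# 10GbE has the lower cost per Gb/s whenever its port price is less
# than 10x the GbE port price.
```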

Some may view moving to a shared network as kind of scary. The concerns are real. But, Ethernet has been a shared networking platform for decades and continues to offer enhanced features, performance, and security to address its increased application. And just because it can share other traffic, doesn’t mean that it must. Physical isolation of Ethernet networks is just as feasible as any other networking technology. Some may choose this option. Regardless, selecting a single network technology, even if not shared across all applications, can reduce not only capital expense, but also operational expense. Your IT personnel can be trained on a single networking technology versus multiple specialized single purpose networks. You may even be able to reduce maintenance and inventory costs to boot.

Customers looking to architect their network and storage infrastructure for today and the future would do well to consider Ethernet and IP storage protocols. The advantages are pretty compelling.