Trends in Data Protection

Data protection hasn’t changed much in a long time.  Sure, there are slews of product announcements and incessant declarations of the “next big thing,” but how much have market shares really shifted over the past decade?  You’ve got to wonder whether new technology is fundamentally improving how data is protected or simply turning the crank to the next model year.  Are customers locked into the incremental changes proffered by traditional backup vendors, or is there a better way?

Not going to take it anymore

The major force driving change in the industry has little to do with technology.  People have started to challenge the notion that they, not the computing system, should be responsible for ensuring the integrity of their data.  If they want a prior version of their data, why can’t the system simply provide it?   In essence, customers want to rely on a computing system that just works.  The Howard Beale anchorman in the movie Network personifies the anxiety that burdens customers explicitly managing backups, recoveries, and disaster recovery.  Now don’t get me wrong; it is critical to minimize risk and manage expectations.   But the focus should be on delivering data protection solutions that can simply be ignored.

Are you just happy to see me?

The personal computer user is prone to ask “how hard can it be to copy data?”  Ignoring the fact that many such users lose data on a regular basis because they have failed to protect their data at all, the IT professional is well aware of the intricacies of application consistency, the constraints of backup windows, the demands of service levels and scale, and the efficiencies demanded by affordability.    You can be sure that application users that have recovered lost or corrupted data are relieved.  Mae West, posing as a backup administrator, might have said “Is that a LUN in your pocket or are you just happy to see me?”

In the beginning

Knowing where the industry has been is a good step in knowing where the industry is going.  When the mainframe was young, application developers carried paper tape or punch cards.  Magnetic tape was used both to store application data and as a medium to copy it to.  Over time, as magnetic disk became affordable for primary data, the economics of magnetic tape remained compelling as a backup medium.  Data protection was incorporated into the operating system through backup/recovery facilities, as well as through third-party products.

As microprocessors brought computing into the mainstream, non-mainframe computing systems gained prominence and tape became relegated to secondary storage.  Native, open source, and commercial backup and recovery utilities stored backup and archive copies on tape media and leveraged its portability to implement disaster recovery plans.  Data compression increased the effective capacity of tape media and complemented its power consumption efficiency.

All quiet on the western front

Backup to tape became the dominant methodology for protecting application data due to its affordability and portability.  Tape was used as the backup media for application and server utilities, storage system tools, and backup applications.

B2T

Backup Server copies data from primary disk storage to tape media

Customers like the certainty of knowing where their backup copies are and physical tapes are comforting in this respect.  However, the sequential access nature of the media and indirect visibility into what’s on each tape led to difficulties satisfying recovery time objectives.  Like the soldier who fights battles that seem to have little overall significance, the backup administrator slogs through a routine, hoping the company’s valuable data is really protected.

B2D phase 1

Backup Server copies data to a Virtual Tape Library

Uncomfortable with problematic recovery from tape, customers have been evolving their practices toward a backup-to-disk model.  Backing up to disk and then to tape was one approach designed to offset the higher cost of disk media, but it could increase the uncertainty of what’s on tape.  Another was to use virtual tape libraries to gain the direct-access benefits of disk while minimizing changes to current tape-based backup practices.  Both techniques helped improve recovery time but still required the backup administrator to acquire, use, and maintain a separate backup server to copy the data to the backup media.

Snap out of it!

Space-efficient snapshots offered an alternative data protection solution for some file servers. Rather than use separate media to store copies of data, the primary storage system itself would be used to maintain multiple versions of the data by only saving changes to the data.  As long as the storage system was intact, restoration of prior versions was rapid and easy.  Versions could also be replicated between two storage systems to protect the data should one of the file servers become inaccessible.

snapshot

Point in Time copies on disk storage are replicated to other disks

This procedure works, is fast, and is space efficient for data on these file servers, but it has challenges in terms of management and scale.  Snapshot-based approaches manage versions of snapshots; they lack the ability to manage data protection at the file level.  This limitation arises because the customer’s data protection policies may not match the storage system’s policies.  Snapshot-based approaches are also constrained by the scope of each storage system, so scaling to protect all the data in a company (e.g., laptops) in a uniform and centralized (labor-efficient) manner is problematic at best.
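To make the versioning idea concrete, here is a minimal, hypothetical sketch (not any vendor's implementation) of copy-on-write snapshots: each snapshot freezes the current state, and later writes store only the blocks that changed, so unchanged blocks are shared across versions.

```python
# Toy copy-on-write snapshot store: versions are overlays of changed
# blocks; reading walks back through overlays to find the newest copy.

class SnapshotStore:
    def __init__(self):
        self.versions = [{}]            # list of {block_index: data} overlays

    def write(self, index, data):
        """Write a block into the current (latest) version."""
        self.versions[-1][index] = data

    def snapshot(self):
        """Freeze the current version; later writes go to a new overlay."""
        self.versions.append({})
        return len(self.versions) - 2   # id of the frozen version

    def read(self, index, version=None):
        """Read a block as of the given version (default: latest)."""
        if version is None:
            version = len(self.versions) - 1
        for overlay in reversed(self.versions[:version + 1]):
            if index in overlay:
                return overlay[index]
        return None

store = SnapshotStore()
store.write(0, b"v1-block0")
store.write(1, b"v1-block1")
snap = store.snapshot()             # freeze version 0
store.write(1, b"v2-block1")        # only the changed block is stored
```

Restoring a prior version is just a read at an older version id, which is why snapshot restores are rapid while the storage system remains intact.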

CDP

Writes are captured and replicated for protection

Continuous Data Protection (both “near CDP” solutions which take frequent snapshots and “true CDP” solutions which continuously capture writes) is also being used to eliminate the backup window thereby ensuring large volumes of data can be protected.  However, the expense and maturity of CDP needs to be balanced with the value of “keeping everything”.

 

 

An offer he can’t refuse

Data deduplication fundamentally changed the affordability of using disk as a backup medium.  The effective cost of storing data declined because duplicate data need only be stored once.  Coupled with the ability to rapidly access individual objects, the advantages of backing up data to deduplicated storage are overwhelmingly compelling.  Originally, whether to deduplicate data at the source or the target was a decision point, but more recent products offer both approaches, so customers need not compromise on technology.  However, simply using deduplicated storage as a backup target does not remove the complexity of configuring and supporting a data protection solution that spans independent software and hardware products.  Is it really necessary that additional backup servers be installed to support business growth?  Is it too much to ask for a turnkey solution that can address the needs of a large enterprise?
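The economics described above can be illustrated with a toy sketch (an assumption-laden illustration, not any product's design) of content-addressed deduplication: data is split into fixed-size chunks, each unique chunk is stored once under its fingerprint, and each backup is just a list of fingerprints.

```python
# Toy content-addressed dedup store: duplicate chunks are detected by
# SHA-256 fingerprint and stored only once.

import hashlib

CHUNK_SIZE = 4096

class DedupStore:
    def __init__(self):
        self.chunks = {}                 # fingerprint -> chunk bytes

    def backup(self, data):
        """Store data; return the list of chunk fingerprints (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)   # duplicates stored once
            recipe.append(fp)
        return recipe

    def restore(self, recipe):
        """Reassemble the original data from its recipe."""
        return b"".join(self.chunks[fp] for fp in recipe)

store = DedupStore()
data = b"A" * 8192 + b"B" * 4096     # three chunks, two of them identical
r1 = store.backup(data)
r2 = store.backup(data)              # a second backup adds no new chunks
```

Real systems add variable-size chunking, compression, and on-disk indexing, but the cost model is the same: a second full backup of unchanged data consumes almost no additional capacity.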

The stuff that dreams are made of

 

PBBA

Transformation from a Backup Appliance to a Recovery Platform

Protection storage offers an end-to-end solution, integrating full-function data protection capabilities with deduplicated storage.  The simplicity and efficiency of application-centric data protection combined with the scale and performance of capacity-optimized storage systems stands to fundamentally alter the traditional backup market.  Changed data is copied directly between the source and the target, without intervening backup servers.  Cloud storage may also be used as a cost-effective target.  Leveraging integrated software and hardware for what each does best allows vendors to offer innovations to customers in a manner that lowers their total cost of ownership.  Innovations like automatic configuration, dynamic optimization, and using preferred management interfaces (e.g., virtualization consoles, pod managers) build on the proven practices of the past to integrate data protection into the customer’s information infrastructure.

No one wants to be locked into products because they are too painful to switch out; it’s time that products are “sticky” because they offer compelling solutions.  IDC projects that the worldwide purpose-built backup appliance (PBBA) market will grow 16.6% annually, from $1.7 billion in 2010 to $3.6 billion by 2015.  The industry is rapidly adopting PBBAs to overcome the data protection challenges associated with data growth.  Looking forward, storage systems will be expected to incorporate a recovery platform, supporting security and compliance obligations, and data protection solutions will become information brokers for what is stored on disk.

CSI Quarterly Update Q3 2011


Upcoming Activities

Get Involved Now!

A limited number of these activities are open to all; join SNIA and the CSI to participate in any of them.

July Cloud Plugfest

The purpose of the Cloud Plugfest is for vendors to bring their implementations of CDMI and OCCI to test, identify, and fix bugs in a collaborative setting with the goal of providing a forum in which companies can develop interoperable products.

The Cloud Plugfest starts on Tuesday, July 12 and runs through Thursday, July 14, 2011 at the SNIA Technology Center in Colorado Springs, CO.  The SNIA Cloud Storage Initiative (CSI) is underwriting the costs of the event, so there is no participation fee.

More Information

SNIA Cloud Burst Event

There are a multitude of events dedicated to cloud computing, but where can you go to find out specifically about cloud storage? The 2011 SNIA Cloud Burst Summit educates and offers insight into this fast–growing market segment. Come hear from industry luminaries, see live demonstrations, and talk to technology vendors about how to get started with cloud storage.

More information

Cloud Lab Plugfest at SDC

Plugfests have always been an important part of the Storage Developer Conference, and this year will feature the first Cloud Lab Plugfest, held over multiple days to test the interoperability of CDMI, OVF, and OCCI implementations.

To get involved, please contact: arnold@snia.org

Cloud Pavilion at SNW

At every SNW, one of the highlights is the Cloud Pavilion, where attendees can see public and private cloud offerings and discuss solutions. Space is limited, so get involved early to ensure your spot.

To get involved, please contact: lisa.mercurio@snia.org

Beyond Potatoes – Migrating from NFSv3

“It is a mistake to think you can solve any major problems just with potatoes.”
Douglas Adams (1952-2001, English humorist, writer and dramatist)

While there have been many advances and improvements to NFS over the last decade, some IT organizations have elected to continue with NFSv3 – like potatoes, it’s the staple filesystem protocol that just about any UNIX administrator understands.

Although adequate for many purposes and a familiar, well-understood protocol, choosing to continue deploying NFSv3 has become increasingly difficult to justify in a modern datacenter. For example, NFSv3 makes promiscuous use of ports, which is unsuitable for use over a wide area network (WAN) for a variety of security reasons; in addition, increased server and client bandwidth demands and the improved functionality of Network Attached Storage (NAS) arrays have outstripped NFSv3’s ability to deliver high throughput.
NFSv4 and the minor versions that follow it are designed to address many of the issues that NFSv3 poses. NFSv4 also includes features intended to enable its use in global wide area networks, and to improve the performance and resilience of NAS:

  • Firewall-friendly single port operations
  • Internationalization support
  • Replication and migration facilities
  • Mandatory use of strong RPC security flavors that depend on cryptography, with support of access control that is compatible with both UNIX® and Windows®
  • Use of character strings instead of integers to represent user and group identifiers
  • Advanced and aggressive cache management features with delegations
  • (with NFSv4.1 pNFS, or parallel NFS) Trunking

In April 2003, the Network File System (NFS) version 4 Protocol was ratified as an Internet standard, described in RFC-3530, which superseded NFS Version 3 (NFSv3, specified in RFC-1813). Since the ratification of NFSv4, further advances have been made to the standard, notably NFSv4.1 (as described in RFC-5661, ratified in January 2010) that included several new features such as parallel NFS (pNFS). And further work is currently underway in the IETF for NFSv4.2.

Delegations with NFSv4

In NFSv3, clients have to function as if there is contention for the files they have opened, even though this is often not the case. As a result of this conservative approach to file locking, there are frequently many unneeded requests from the client to the server to find out whether an open file has been modified by some other client. Even worse, all write I/O in this scenario is required to be synchronous, further impacting client-side performance.
NFSv4 differs by allowing the server to delegate specific actions on a file to the client; this enables more aggressive client caching of data and locks. A server temporarily cedes control of file updates and the locking state to a client via a delegation, and promises to notify the client if other clients are accessing the file. Once the client holds a delegation, it can perform operations on files whose data has been cached locally, thereby avoiding network latency and optimizing its use of I/O.

Trunking with pNFS

Many additional enhancements to NFSv4 are available with NFSv4.1, of which pNFS is a part. pNFS adds the capability to perform trunking at the NFS level by adding a session layer. The client establishes a session with an NFSv4.1 server, and can then create multiple TCP connections to the NFSv4.1 server, each potentially going over a different network interface on the client, and arriving on a different interface on the NFSv4.1 server. Now different requests sent over the same session identifier can go over different network paths, dramatically improving latency and increasing bandwidth.
Although client and server implementations of NFSv4.1 are available, they are in early stages of implementation and adoption. However, to take advantage of them in the future, it is important to plan now for the move to NFSv4 and beyond – and there are many servers and clients available now that support NFSv4. NFSv4 is a mature and stable protocol with many advantages in its own right over its predecessors NFSv3 and NFSv2.
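For readers planning the move, a minimal sketch of requesting NFSv4.1 on a Linux client follows; the server name, export path, and mount point are placeholders, and option spelling varies by kernel version and distribution, so treat this as a configuration illustration rather than a recipe.

```shell
# Hypothetical example: mount with NFSv4.1 to enable sessions
# (and pNFS/trunking against a capable server).
# "server:/export" and /mnt/data are placeholders.
mount -t nfs4 -o minorversion=1 server:/export /mnt/data

# On newer clients the same request is commonly written as:
#   mount -t nfs -o vers=4.1 server:/export /mnt/data
```

Falling back to a plain `-t nfs4` mount (NFSv4.0) still gains the single-port, delegation, and security benefits described above even where 4.1 support is not yet available.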

Potatoes and Beyond

Now is the time to make the switchover; there really is no justification for not pursuing NFSv4 as the first NFS protocol version of choice. Although migrating from earlier versions of NFS requires some planning, as there are significant differences between the protocols, the benefits are impressive. To ensure a smooth migration to NFSv4 and beyond, the SNIA Ethernet Storage Forum NFS Special Interest Group has recently published an overview white paper, “Migrating to NFSv4”. It covers internationalization support, automatic mounting of NFSv4 filesystems on demand, and TCP protocol support, among other considerations.
NFSv4 and NFSv4.1 have been developed for a reason, and NFSv4.2 is on the horizon. Like the potato, NFSv3 is a staple of the network filesystem world. But as Douglas Adams said, “It is a mistake to think you can solve any major problems just with potatoes.” NFSv4 fixes many of NFSv3’s deficiencies and represents a major advance that brings improved availability, performance, and security: all the checklist items beyond potatoes that today’s users of network attached storage demand.

Get your hands on a Storage Cloud


Building your own standards-based private storage cloud.

Tuesday, May 24th, 1–5 pm

Omni Interlocken Hotel,

Broomfield, CO

This year at Gluecon, SNIA will be conducting a hands-on lab workshop for developers.

This session will take you deeper into cloud storage than you likely have ever been. First we will explore the standard cloud storage interface called CDMI (Cloud Data Management Interface), including some of the rationale and design tradeoffs in its creation.

Learn about how to use the RESTful interface to move data into and out of a storage cloud using a common interface. Learn how CDMI enables data portability between clouds. Dig deep into features such as Data System Metadata (how you order services from the cloud), cloud-side operations, queues, query and more.

Then stick around as we load an open source Java implementation of CDMI onto your laptop to create your own private cloud. Explore the workings of the JAX-RS standard used in this implementation and the storage code working behind the scenes. Advanced users can even implement their own cloud storage features and expose them through the standard interface.
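As a flavor of the RESTful interface described above, here is a hedged sketch of preparing a CDMI container-creation request using only Python's standard library; the host name, container name, and empty JSON body are illustrative, and header names follow the CDMI 1.0.x convention. The request is only constructed here; against a live server it would be sent with `urllib.request.urlopen(req)`.

```python
# Build (but do not send) a CDMI PUT request that would create a
# container named "mycontainer" on a hypothetical CDMI endpoint.

import urllib.request

def make_cdmi_container_request(base_url, container_name):
    headers = {
        "X-CDMI-Specification-Version": "1.0.1",
        "Accept": "application/cdmi-container",
        "Content-Type": "application/cdmi-container",
    }
    url = "%s/%s/" % (base_url.rstrip("/"), container_name)
    # An empty JSON object is a minimal container-creation body.
    return urllib.request.Request(url, data=b"{}", headers=headers,
                                  method="PUT")

req = make_cdmi_container_request("http://cloud.example.com", "mycontainer")
# urllib.request.urlopen(req) would create the container on a real server.
```

The same pattern, with `application/cdmi-object` content types and GET/PUT on object paths, is how data moves into and out of the cloud through the common interface.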

CDMI Overview

(Embedded Prezi presentation: an overview of CDMI.)

Everything You Need to Know About iSCSI

Are you considering deploying an iSCSI storage network, and would like to learn some of the best practices of configuring the environment, from host to storage? Well, now you can learn from an expert. The SNIA Ethernet Storage Forum will be sponsoring a live webcast with our guest speaker, Dennis Martin from Demartek. Dennis will share first-hand expertise and actionable best practices to effectively deploy iSCSI storage networks. A live Q&A will also be included. It doesn’t matter if you have a large, medium or small environment, Dennis will provide application specific recommendations that you won’t want to miss.

When: April 21st

Time: 8:00 am PT / 11:00 am ET

Free registration: http://www.brighttalk.com/webcast/26785

The SNIA ESF has several other web events planned for the rest of this calendar year.  Let us know what topics are important to you. We want to make these events highly educational.

CDMI breaks out at SNW Spring

CDMI announcements at SNW Spring

The SNIA co-sponsors the Storage Networking World (SNW) conference twice a year. At the Spring 2011 SNW show, the CDMI specification was updated to version 1.0.1h (online at http://cdmi.sniacloud.com) and the first commercial implementation of CDMI was announced.

The SNIA also put out a press release on the latest developments and progress that CDMI has made, including some new research results:

Cloud Storage Standard
Readies for Widespread Adoption

SNIA is establishing relationships with National and
International Standards Groups; Recent Market Research Reveals
CDMI will be Mainstream in RFPs

Santa Clara, Calif. (April 4th, 2011) — The Storage Networking Industry Association (SNIA) Cloud Storage Initiative (CSI), today announced that the Cloud Data Management Interface (CDMI), released as an official SNIA Architecture one year ago, continues to make significant steps toward broad acceptance.

“A critical part of delivering an industry wide standard is building a strong ecosystem of partners, alliances and supporting programs,” said David Slik, Co–Chair of SNIA Cloud Storage Technical Work Group. “As demonstrated by initiating relationships with nationally and internationally recognized standards bodies and our forthcoming CDMI Plugfest, we are making strong progress around delivering not only a strong standard, but a widely accepted and valued one.”

SNIA’s CDMI standard has been refined over the past year and is now being readied for further de jure standardization. The SNIA has joined the DAPS38 Technical Committee (which is responsible for Cloud Computing, among other technology standards) of INCITS – the primary U.S. focus of standardization in the field of Information and Communications Technologies (ICT). The SNIA has also requested a Category A Liaison relationship with the ISO/IEC JTC 1 SC38 subcommittee for Distributed Application Platforms and Services (DAPS).

CDMI has been cited in numerous cloud roadmaps and studies, including those from the ITU–T (International Telecommunication Union), the TeleManagement Forum, SIENA (the European Standards and Interoperability for eInfrastructure Implementation Initiative), and NIST (the U.S. National Institute of Standards and Technology). The maturing CDMI Reference Implementation has been through initial testing of the NIST SAJACC (Standards Acceleration to Jumpstart Adoption of Cloud Computing) use cases.

SNIA CSI 2011 sponsored activities include Plugfests, with the first taking place April 19–21, 2011 at the SNIA Technology Center in Colorado Springs, Colorado. The Cloud Plugfest allows vendors to bring their implementations of CDMI and the Open Grid Forum’s Open Cloud Computing Interface (OCCI) to test, identify, and fix bugs in a collaborative setting with the goal of providing a forum in which companies can develop interoperable products. For additional details on participating in the Cloud Plugfest, please visit www.snia.org/cloud/cloudplugfest/.

SNIA CSI will repeat its “SNIA Cloud Burst Summit” in Santa Clara, California, on September 22, 2011 as an extended program with the SNIA Storage Developer Conference (SDC). In 2010, over 100 attendees participated in the Cloud Burst Summit, joining other cloud strategists and deployment professionals in this highly successful inaugural program that featured noted industry luminary Geoffrey Moore as the keynote speaker on the topic of clouds and IT transformation.

SNIA CSI has also partnered with Storage Strategies NOW to help bring to market research that will help inform the industry of the key insights around cloud storage. This information, which can be found in the IT Professionals Cloud Adoption Survey released today, will provide a valuable service to help users, vendors and the industry at–large track how adoption and use of cloud technologies should be considered. To learn more, visit www.ssg–now.com.

Deni Connor, principal analyst, Storage Strategies NOW, added, “Our findings include that Email (66%) is the primary application for cloud storage, followed by backup (59%) and front office applications (45%). Additionally, 53% say that SNIA’s CDMI will be part of cloud storage RFPs/proposals; and 30% of respondents say SNIA’s CDMI is very important for a public/hybrid cloud standard.”


To learn more about SNIA and CSI stop by the SNIA CSI Cloud Pavilion on Tuesday and Wednesday during SNW Expo Hall hours.

About the SNIA Cloud Storage Initiative
The SNIA Cloud Storage Initiative (CSI) fosters the growth and success of the market for cloud storage for vendors, service providers, and users. Members of the CSI work together to advance the adoption of the SNIA Cloud Data Management Interface (CDMI) standard, educate the IT communities about cloud storage, perform market outreach that highlights the virtues of cloud storage, and collaborate with other industry associations on cloud storage technical work. CSI member companies represent a variety of segments in the IT industry and include Actifio, Asigra, Broadcom, CA Technologies, Cisco, Cleversafe, CoreVault, Desktone, EMC, Hitachi Data Systems, HP, IBM, Iron Mountain, LSI Corporation, Mezeo, NetApp, Novell, Oracle, Scality, Sepaton, SpectraLogic, StorSimple, SwiftTest, Terasky, Terremark, and Xiotech. For more information on SNIA’s Cloud Storage activities, visit snia.org/cloud and get involved in the conversation at twitter.com/SNIACloud or http://groups.google.com/group/snia-cloud.

About SNIA
The Storage Networking Industry Association (SNIA) is a not–for–profit global organization, made up of some 400 member companies spanning virtually the entire storage industry. SNIA’s mission is to lead the storage industry worldwide in developing and promoting standards, technologies, and educational services to empower organizations in the management of information. To this end, the SNIA is uniquely committed to delivering standards, education, and services that will propel open storage networking solutions into the broader market. For additional information, visit the SNIA web site at www.snia.org.

Deploying SQL Server with iSCSI – Answers to your questions

by: Gary Gumanow

Last Wednesday (2/24/11), I hosted an Ethernet Storage Forum iSCSI SIG webinar with representatives from Emulex and NetApp to discuss the benefits of iSCSI storage networks in SQL application environments. You can catch a recording of the webcast on BrightTalk here.

The webinar was well attended, and we received so many great questions that we just didn’t have time to answer all of them, which brings us to this blog post. We have included answers to those unanswered questions below.
We’ll be hosting another webinar real soon, so please check back for upcoming ESF iSCSI SIG topics. You’ll be able to register for this event shortly on BrightTalk.com.

Let’s get to the questions. We took the liberty of editing the questions for clarity. Please feel free to comment if we misinterpreted the question.

Question: Is TRILL needed in the data center to avoid pausing of traffic while extending the number of links that can be used?

Answer: The Internet Engineering Task Force (IETF) has developed a new shortest path frame Layer 2 (L2) routing protocol for multi-hop environments. The new protocol is called Transparent Interconnection of Lots of Links, or TRILL. TRILL will enable multipathing for L2 networks and remove the restrictions placed on data center environments by STP single-path networks.

Although TRILL may serve as an alternative to STP, it doesn’t require that STP be removed from an Ethernet infrastructure. Hybrid solutions that use both STP and TRILL are not only possible but also will be the norm for at least the near-term future. TRILL will also not automatically eliminate the risk of a single point of failure, especially in hybrid environments.

Another area where TRILL is not expected to play a role is the routing of traffic across L3 routers. TRILL is expected to operate within a single subnet. While the IETF draft standard document mentions the potential for tunneling data, it is unlikely that TRILL will evolve in a way that will expand its role to cover cross-L3 router traffic. Existing and well-established protocols such as Multiprotocol Label Switching (MPLS) and Virtual Private LAN Service (VPLS) cover these areas and are expected to continue to do so.

In summary, TRILL will help multipathing for L2 networks.

Question: How do you calculate bandwidth when you only have IOPS?
Answer:
The formula to calculate bandwidth is a function of IOPS and I/O size: bandwidth = IOPS × I/O size. Example: 10,000 IOPS × a 4 KB block size (4,096 bytes) ≈ 40.96 MB/sec.
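The arithmetic above can be captured in a small helper; the function name and the decimal-megabyte (10^6 bytes) convention are illustrative choices, not from the webinar.

```python
# Bandwidth from IOPS and I/O size: bytes/sec = IOPS * I/O size.

def bandwidth_mb_per_sec(iops, io_size_bytes):
    """Return throughput in MB/sec (1 MB = 10**6 bytes)."""
    return iops * io_size_bytes / 1_000_000

# 10,000 IOPS at a 4,096-byte block size works out to 40.96 MB/sec.
result = bandwidth_mb_per_sec(10000, 4096)
```

Note that the answer depends directly on the I/O size: the same 10,000 IOPS at a 64 KB block size would be roughly 16 times the bandwidth.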

Question: When deploying FCoE, must all 10GbE switches support Data Center Bridging (DCB) and FCoE? Or can some pass through FCoE?
Answer:
Today, in order to deploy FCoE, all switches in the data path must support both FCoE forwarding and DCB. Future standards include proposals to allow pass through of FCoE commands without having to support Fibre Channel services. This will allow for more cost effective networks where not all switch layers are needed to support the FCoE storage protocol.

Question: iSCSI performance is comparable to FC and FCoE. Do you expect to see iSCSI overtake FC in the near future?
Answer:
FCoE deployments are still very small compared to traditional Fibre Channel and iSCSI. However, industry projections by several analyst firms indicate that Ethernet storage protocols, such as iSCSI and FCoE, will overtake traditional Fibre Channel due to increased focus on shared data center infrastructures to address applications, such as private and public clouds. But, even the most aggressive forecasts don’t show this cross over for several years from now.
Customers looking to deploy new data centers are more likely today to consider iSCSI than in the past. Customers with existing Fibre Channel investments are likely to transition to FCoE in order to extend the investment of their existing FC storage assets. In either case, transitioning to 10Gb Ethernet with DCB capability offers the flexibility to do both.

Question: With 16Gb/s FC ratified, what product considerations would be considered by disk manufacturers?
Answer:
We can’t speak to what disk manufacturers will or won’t do regarding 16Gb/s disks. But, the current trend is to move away from Fibre Channel disk drives in favor of Serial Attached SCSI (SAS) and SATA disks as well as SSDs. 16Gb Fibre Channel will be a reality and will play in the data center. But, the prediction of some vendors is that the adoption rate will be much slower than previous generations.

Question: Why move to 10GbE if you have 8Gb Fibre Channel? The price is about the same, right?
Answer:
If your only network requirement is block storage, then Fibre Channel provides a high performance network to address that requirement. However, if you have a mixture of networking needs, such as NAS, block storage, and LAN, then moving to 10GbE provides sufficient bandwidth and flexibility to support multiple traffic types with fewer resources and with lower overall cost.

Question: Is the comparison of the number of links accurate when comparing Ethernet to Fibre Channel? The overall wire bandwidth may be close, but once protocol overheads are included, the real bandwidth isn’t an accurate comparison. Example: FC protocol overhead is only 5% vs. TCP at 25%, and iSCSI framing adds another 4%. So your math on how many FC cables equal 10 Gbps cables is not a fair comparison.

Answer: As pointed out in the question, comparing protocol performance requires more than just a comparison of the wire rates of the physical transports. Based upon protocol efficiency, one could conclude that the comparison between FC and TCP/IP is unfair as designed, because Fibre Channel should have produced greater data throughput from a comparable wire rate. However, the data in this case shows that iSCSI offers comparable performance in a real-world application environment, rather than just a benchmark test. The focus of the presentation was iSCSI; FCoE and FC were only meant to provide reference points, and the comparisons were not intended to be exact or precise. 10GbE and iSCSI offer the performance to satisfy business-critical performance requirements. Customers looking to deploy a storage network should consider a proof of concept to ensure that a new solution can satisfy their specific application requirements.

Question: Two FC switches were used during this testing. Was it to solve an operation risk of no single point of failure?
Answer:
The use of two switches was due to hardware limitation. Each switch had 8-ports and the test required 8 ports at the target and the host. Since this was a lab setup, we weren’t configuring for HA. However, the recommendation for any production environment would be to use redundant switches. This would apply for iSCSI storage networks as well.

Question: How can iSCSI match all the distributed management and security capabilities of Fibre Channel / FCoE such as FLOGI, integrated name server, zoning etc?
Answer:
The feature lists between the two protocols don’t match exactly. The point of this presentation was to point out that iSCSI is closing the performance gap and has enough high-end features to make it enterprise-ready.
Question: How strong is the possibility that 40G Ethernet will be bypassed, with a move directly from 10G to 100G?

Answer: Vendors are shipping products today that support 40Gb Ethernet, so it seems clear that there will be a 40GbE market. Time will tell whether customers bypass 40GbE and wait for 100GbE.

Thanks again for checking out our blog. We hope you'll join us live for our next webinar, but if not, we'll be updating this blog frequently.

Gary Gumanow – iSCSI SIG Co-chairman, ESF Marketing Chair

SQL Server “rocks” with iSCSI – Emulex and NetApp tell why

The leading storage network technology for mission-critical applications today is Fibre Channel (FC). Fibre Channel is a highly reliable, high-performing network technology for block storage applications. But for organizations that can't afford single-purpose networks, or the added complexity of managing more than one network technology, FC may not be ideal. With the introduction of Fibre Channel over Ethernet (FCoE), it is now possible to deploy FC storage resources over a shared Ethernet network. But FCoE isn't the only available option for block storage over Ethernet.

Initially used primarily by small and medium-sized businesses or for less demanding applications, iSCSI is now finding broad adoption among larger enterprises for mission-critical applications. Drivers for increased iSCSI adoption in the enterprise include the lower cost of 10Gb Ethernet components, as well as the move toward cloud-based infrastructures, which benefit from the increased flexibility and scalability associated with IP network protocols.

On February 24th, the SNIA Ethernet Storage Forum will present a live webcast to discuss the advantages of iSCSI storage for business applications and will show test results demonstrating the performance of SQL Server deployed with 10GbE iSCSI. Hosted by Gary Gumanow, co-chair of the iSCSI SIG and ESF board member, this presentation will include content experts from Emulex and NetApp, along with a live Q&A.

Guest Speakers

Steve Abbott – Sr. Product Marketing Manager, Emulex

Wei Liu – Microsoft Alliance Engineer, NetApp

Date & Time: February 24th, 11am PT

Register today at http://www.brighttalk.com/webcast/25316

SNIA ESF

The SNIA Ethernet Storage Forum is dedicated to educating the IT community on the advantages and best use of Ethernet storage. This presentation is the first in a series of marketing activities that will primarily focus on data center applications during the calendar year 2011.

Join the Cloud Storage Movement at SNIA’s Winter Symposium 2011

Every year the Storage Networking Industry Association (SNIA) has a gathering of their members in San Jose to coordinate the work of the various Technical Work Groups, Forums and Initiatives. This year the Symposium will take place January 24th – 27th, 2011 at the Sainte Claire Hotel in San Jose, CA. SNIA opens this Symposium to non-SNIA members who are evaluating membership, so feel free to attend. Please Register for the Symposium if you plan to be there in person.

SNIA Cloud Events

The Cloud Storage Technical Work Group (TWG) kicks off a multi-day face-to-face session starting at 1:00pm PT on Monday. We will be discussing the submission of CDMI for international standardization and continuing to scope the next minor release (1.1) of CDMI. Topics include Federation and NoSQL, among others. Bring your own ideas for how to improve CDMI. The full agenda has been posted publicly.

On Wednesday, the Cloud Storage Initiative will give an overview of its activities at a breakfast session starting at 8:30am. Then at noon on Wednesday, be sure to join us for the 2011 Activities Kickoff presentation in the Grande Ballroom. We will be showcasing all of the upcoming activities that you will want to be involved with over the next year. This session will be live-streamed if you cannot make it in person. Whether you will be there in person or remote, please register for this update event (in addition to the Symposium registration above). More information.

Wednesday afternoon is the meeting of the Cloud Storage Initiative from 1-5pm (also in the Grande Ballroom). Be sure to join us and help plan the activities for the upcoming year.

Lastly, on Wednesday night there will be a Birds of a Feather (BoF) session on a new group that is forming for Archive and Preservation in the Cloud.

Whereas with Cloud Backup the cloud is simply a repository of backup data, with Cloud Archive and Preservation the cloud is where the active processes occur that ensure the long-term retention, preservation, and viability of data. CDMI is uniquely designed to accommodate these needs through the Data System Metadata that it standardizes. Cloud providers see the ability to offer more than just best-effort storage, with the promise of being the trusted steward of information for the long term. Additional services such as eDiscovery and automatic format conversion can easily be offloaded to the cloud, reducing costs.
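As a rough illustration of how retention intent is expressed through CDMI's Data System Metadata, the sketch below builds a JSON payload for a data object. The metadata field names follow CDMI's data system metadata conventions (e.g. cdmi_retention_period), but the specific values and their formats here are illustrative assumptions; consult the CDMI specification for the exact definitions:

```python
import json

# Illustrative CDMI data-object payload carrying retention metadata.
# The cdmi_* field names follow CDMI's Data System Metadata; the values,
# value formats, and the object itself are assumptions for illustration only.
archive_object = {
    "mimetype": "application/pdf",
    "metadata": {
        # Retain the object over this interval (expressed as a start/end pair)
        "cdmi_retention_period": "2011-01-24T00:00:00Z/2021-01-24T00:00:00Z",
        # Ask the cloud to delete the object automatically once retention expires
        "cdmi_retention_autodelete": "true",
    },
}

body = json.dumps(archive_object, indent=2)
print(body)  # a payload like this would accompany the object at its CDMI URI
```

The point is that the retention policy travels with the object as standardized metadata, so the cloud provider, not the client, runs the processes that enforce it.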

Please join us Wednesday evening from 5:30pm – 7:00pm in the Grande Ballroom for a Birds of a Feather session to kick off the formation of the CSI Archive/Preservation Special Interest Group (SIG). Light refreshments will be provided. If you would like to participate remotely, please use the following call-in information:
Toll Free: 866-244-8528
International:+1-719-457-0816
Passcode: 510843#
Webex: http://snia.webex.com, Meeting Name: Archive and Preservation SIG
Meeting Password: cloud2011