
    What a Solid State We Store In

    June 30th, 2014

    Note: This blog entry is authored by Gilda Foss, who serves on the SNIA Solid State Storage Initiative (SSSI) Governing Board in addition to her role as Industry Evangelist in the CTO Office at NetApp, Inc.

    Solid state drives use semiconductor chips, as opposed to magnetic media, to store data.  The chips that solid state drives use are non-volatile memory, meaning that the data remains even when the system has no power.  I’ve written about solid state drive technology in the past and I will continue to, for it represents the first major advancement in primary storage in a very long time.  Serving on the Governing Board of the SNIA Solid State Storage Initiative allows me to help foster the growth and success of the market for solid state storage in both enterprise and client environments. Our goals are to be the recognized authority for storage made from solid state devices, to determine and document the characteristics of storage made from solid state devices, and to determine and document the impact of storage made from solid state devices on system architectures.

    So what can you expect if you were to ever upgrade to an SSD?  Well, for starters, your computing experience will be transformed with screaming-fast random access speeds, multi-tasking proficiency, and fantastic reliability and durability… and you can choose between an external SSD or even a hybrid drive, so you’ve got some options.  A new SSD will make your system faster because boot times will decrease, launching apps will be lightning fast, opening and saving docs will no longer drag, file copying and duplicating speeds will improve, and overall your system will have a new ‘pep in its step’.  Furthermore, to promote being green, SSDs consume far less power than traditional hard drives, which means they also preserve battery life and stay cooler.  Who doesn’t want and need that? They’re also very quiet, with none of the spinning and clanking you get with HDDs – for obvious reasons. SSDs are cooler and quieter, all the while being faster.

    Since modern SSDs are Flash-based, there is no real hard-and-fast difference between Flash and SSD.  Rather, as mentioned previously, a solid state drive is essentially storage with no moving parts, and Flash is what makes that possible.  SSDs use Flash instead of RAM these days because it’s a type of memory that’s super fast and doesn’t require continuous power, making it non-volatile.  A match made in solid-state heaven.

    There are some fundamental aspects that folks expect from a robust flash-based storage solution.  First off, there’s I/O performance and efficiency for many applications, including database acceleration, server and desktop virtualization, and cloud infrastructure.  You should also expect to speed up overall IT performance, boost responsiveness of performance-critical applications, and reduce power costs and over-provisioning.  Furthermore, you will obviously use more high-capacity, low-cost SATA drives while improving utilization of your data center space.  If you can achieve all your flash-based goals without changing your IT infrastructure management processes, then you’ve really got it good.

    Flash storage has customarily had substantial aging issues. In a nutshell, a user could only write to the memory a certain number of times before that section of the drive was lost, and performance would degrade over time as well.  However, many of these issues have since been resolved, and companies began manufacturing SSDs out of Flash memory instead of RAM.

    I’ve stated in the past that many people in the industry believe that flash SSDs will eventually replace traditional hard drives.  By the time this happens, other drawbacks, such as slower write times and added cost, will likely have been eradicated or significantly diminished. Even today, an SSD can extend the life of a laptop battery, reduce the weight of the system, make it quieter, and increase read performance.  When properly and optimally engineered, SSDs are now at least as reliable as traditional spinning hard drives.  As for speed, think of a system starting up in seconds versus minutes. Even the slowest current SSD delivers much better real-world performance than the fastest conventional hard drive, perhaps even 100x as fast.  This allows for better user productivity, with more work getting done in a fraction of the time.  Furthermore, using flash in enterprise storage servers means you can support more users, do more work, and use less power, so it’s no wonder that it’s become an important technology for business transactions.  It’s a solid win-win-win.

    SSSI’s 2014 Mission

    This SNIA initiative was formed in September 2008 and its mission is to foster the growth and success of the market for solid state storage in both enterprise and client environments. Our goals are to be the recognized authority for storage made from solid state devices, to determine and document the characteristics of storage made from solid state devices, and to determine and document the impact of storage made from solid state devices on system architectures.  Additionally, the SSSI collects the solid state technical requirements of storage system vendors and communicates them to SSD manufacturers to encourage common features, behavior, and robustness.  The initiative collaborates with academia and the research labs of member companies to understand how advances in solid state memory will impact storage made from solid state memory, as well as to educate the vendor and user communities about storage made from solid state devices.

    The SNIA SSSI also coordinates education activities with the Education Committee, performs benchmark testing to highlight the performance advantages of solid state storage, creates peer-reviewed, vendor-neutral SNIA Tutorials, and creates vendor-neutral demonstrations.  The SSSI also leverages SNIA and partner conferences, collaborates with industry analysts, and performs market outreach that highlights the virtues of storage made from solid state devices.  The initiative determines what technical work should be performed within SNIA technical working groups to further the acceptance of storage made from solid state devices.  Furthermore, and very importantly, the SSSI determines the standards that will be necessary to support industry usage of SSDs, performing interoperability plugfests as necessary in support of standards development.

    Collaboration with other SNIA organizations is also key.  The SSSI works with the Storage Management Initiative (SMI) to understand how SMI-S can be used to manage storage made from solid state devices.  We also work with the Green Storage Initiative (GSI) to understand how storage made from solid state devices will impact energy use in computer systems.  The work that the SSSI does with the Technical Council helps create the desired technical working groups and provides external advocacy and support for these technical working groups.

    Finally, the SSSI collaborates with other industry associations via SNIA’s Strategic Alliances Committee (SAC) on SSD-related technical work in which they are involved as well as coordinates with SNIA Regional Affiliates to ensure that the impact of the SSS Initiative is felt worldwide.  For more information, please visit http://www.snia.org/forums/sssi


    It’s “All About M.2 SSDs” In a New SSSI Webcast June 10

    June 4th, 2014

    Interested in M.2, the new SSD card form factor?

    The SNIA Solid State Storage Initiative is partnering with SATA-IO and NVM Express to give you the latest information on M.2, the new SSD card form factor.  Join us “live” on Tuesday, June 10, at 10:00 am Pacific time/1:00 pm Eastern time.

    Hear from a panel of experts, including Tom Coughlin of Coughlin Associates, Jim Handy of Objective Analysis, Jon Tanguy of Micron, Jaren May of TE Connectivity, David Akerson of Intel, and Eden Kim of Calypso Systems.  You will leave this webinar with an understanding of the M.2 market, M.2 cards and connection schemes, NVM Express, and M.2 performance. You’ll also be able to ask questions of the experts.

    You can access this webcast via the internet.  Click here, or visit http://snia.org/news_events/multimedia#webcasts


    SSD Education Afternoon Monday January 27 at SNIA Symposium in San Jose

    January 24th, 2014

    Interested in the latest information on SSD technology?  Join the SNIA Solid State Storage Initiative Monday January 27 for lunch and an afternoon of the latest on:

    • Flash/SSD technology
    • SCSI Express
    • SAS
    • NVM Express
    • SATA Express
    • SSD performance
    • SSD Markets

    Lunch begins at noon, with presentations from 1:00 pm – 4:00 pm.  There is no charge to attend this session at the Sainte Claire Hotel in downtown San Jose CA. You can attend in person – register at www.snia.org/events/symp2014 or by WebEx (click here for details and the agenda).


    Add to your NVDIMM Knowledge – Attend the January 28 Summit

    January 15th, 2014

    Over 150 individuals participated in the BrightTALK Enterprise Storage Summit NVDIMM webcast.  If you are eager for more information on NVDIMM, you will want to attend an upcoming SNIA Event – the Storage Industry Summit on Non-Volatile Memory.

    This Summit will take place at the Sainte Claire Hotel in San Jose, CA on January 28th as part of the SNIA Annual Members’ Symposium, and will offer critical insights into NVM, including NVDIMMs, and the future of computing. This event is complimentary to attend and you can register here.

    The Summit will take place from 8:15 AM to 5:30 PM and speakers currently include:

    • Nigel Alvares, Senior Director of Marketing, Inphi
    • Bob Beauchamp, Distinguished Engineer and Director Hardware Technology and Architecture, EMC
    • Matt Bryson, ABR Investment Strategy, LLC, SVP-Research
    • Jeff Chang, Vice President, Marketing & Business Development, AgigA Tech
    • Tom Coughlin, Founder, Coughlin Associates
    • Mark Geenen, President, TrendFocus
    • Jim Handy, Analyst, Objective Analysis
    • Jay Kidd, CTO, NetApp
    • Eden Kim, CEO, Calypso
    • Tau Leng, VP/GM, Supermicro
    • Jeff Moyer, Principal Software Engineer, Red Hat
    • Wes Mukai, VP of Cloud Computing, System Engineering, SAP
    • Jim Pinkerton, Lead Partner Architect, Microsoft
    • Adrian Proctor, VP Marketing, Viking Technology
    • Andy Rudoff, Senior Software Engineer, Intel
    • Esther Spanjer, Director, Marketing Management, SanDisk
    • Garret Swart, Database Architect, Oracle
    • Nisha Talagala, Lead Architect, Fusion-IO
    • Doug Voigt, Distinguished Technologist in Storage, HP

    Visit http://www.snia.org/nvmsummit for more information and we hope you will join us in San Jose!


    SNIA’s Events Strategy Today and Tomorrow

    December 5th, 2013

    David Dale, SNIA Chairman

    Last month Computerworld/IDG and the SNIA posted a notice to the SNW website stating that they have decided to conclude the production of SNW.  The contract was expiring and both parties declined to renew.  The IT industry has changed significantly in the 15 years since SNW was first launched, and both parties felt that their individual interests would be best served by pursuing separate events strategies.

    For the SNIA, events are a strategically important vehicle for fulfilling its mission of developing standards, maintaining an active ecosystem of storage industry experts, and providing vendor-neutral educational materials to enable IT professionals to deal with and derive value from constant technology change.  To address the first two categories, SNIA has a strong track record of producing Technical Symposia throughout the year, and the successful Storage Developer Conference in September.

    To address the third category, IT professionals, SNIA has announced a new event, to be held in Santa Clara, CA, from April 22-24 – the Data Storage Innovation Conference. This event is targeted at IT decision-makers, technology implementers, and those expected to influence, implement and support data storage innovation as actual production solutions.  See the press release and call for presentations for more information.  We are excited to embark on developing this contemporary new event into an industry mainstay in the coming years.

    Outside of the USA, events are also critically important vehicles for the autonomous SNIA Regional Affiliates to fulfill their mission.  The audience there is typically more biased towards business executives and IT managers, and over the years their events have evolved to incorporate adjacent technology areas, new developments and regional requirements.

    As an example of this evolution, SNIA Europe’s events partner, Angel Business Communications, recently announced that its very successful October event, SNW Europe/Datacenter Technologies/Virtualization World, will be simply known as Powering the Cloud starting in 2014, in order to unite the conference program and to be more clearly relevant to today’s IT industry. See the press release for more details.

    Other Regional Affiliates have followed a similar path with events such as Implementing Information Infrastructure Summit and Information Infrastructure Conference – both tailored to meet regional needs.

    The bottom line on this is that the SNIA is absolutely committed to a global events strategy to enable it to carry out its mission.  We are excited about the evolution of our various events to meet the changing needs of the market and continue to deliver unique vendor-neutral content. IT professionals, partners, vendors and their customers around the globe can continue to rely on SNIA events to inform them about new technologies and developments and help them navigate the rapidly changing world of IT.


    10GbE Answers to Your Questions

    August 2nd, 2012

    Our recent Webcast: 10GbE – Key Trends, Drivers and Predictions was very well received and well attended. We thank everyone who was able to make the live event. For those of you who couldn’t make it, it’s now available on demand. Check it out here.

    There wasn’t enough time to respond to all of the questions during the Webcast, so we have consolidated answers to all of them in this blog post from the presentation team.  Feel free to comment and provide your input.

    Question: When implementing VDI (1,000 to 5,000 users), what are best practices for architecting the enterprise storage tier and avoiding peak IOPS/boot storm problems?  How can SSD cache be used to minimize that issue?

    Answer: In the case of boot storms for VDI, one of the challenges is dealing with the individual images that must be loaded and accessed by remote clients at the same time. SSDs can help when deployed either at the host or at the storage layer. And when deduplication is enabled in these instances, a single image can be loaded into either local or storage SSD cache and served to the clients much more rapidly. Additional best practices include using cloning technologies to reduce the space taken up by each virtual desktop.
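    As a rough illustration of why deduplication defuses a boot storm, here is a toy Python sketch (my own illustration, not any vendor's implementation): identical images hash to a single cache entry, so a thousand clients trigger one backend read.

```python
import hashlib

# Toy content-addressed cache: each unique block is stored once, keyed by
# its hash, so identical boot images cost one backend read instead of one
# read per client.
class DedupCache:
    def __init__(self):
        self.store = {}          # hash -> block data
        self.backend_reads = 0   # reads that had to go to slow storage

    def read(self, block):
        key = hashlib.sha256(block).hexdigest()
        if key not in self.store:
            self.backend_reads += 1   # cache miss: fetch from backend once
            self.store[key] = block
        return self.store[key]

cache = DedupCache()
golden_image = b"shared-desktop-boot-blocks"   # hypothetical common image

# 1,000 VDI clients all boot the same image at 9:00 am...
for _ in range(1000):
    cache.read(golden_image)

print(cache.backend_reads)   # 1 -- the storm hits the cache, not the disks
```

    The same idea shrinks the cache footprint too: one golden image held in SSD cache serves every clone.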

    Question: What are the considerations for 10GbE with LACP etherchannel?

    Answer: Link Aggregation Control Protocol (IEEE 802.1AX-2008) is speed agnostic.  No special consideration is required when going to 10GbE.
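    One related consideration worth noting (my gloss, not part of the original answer): LACP balances traffic per flow, not per packet. A header hash pins each conversation to one member link, so a single flow tops out at one link's speed no matter how many 10GbE links are aggregated. A toy sketch of such a hash:

```python
# Toy per-flow hash in the spirit of LACP/EtherChannel load balancing:
# a given (src, dst, ports) tuple always maps to the same member link,
# which avoids packet reordering but caps any one flow at one link's speed.
def pick_link(src_ip, dst_ip, src_port, dst_port, n_links):
    flow = (src_ip, dst_ip, src_port, dst_port)
    return hash(flow) % n_links

# The same iSCSI conversation always lands on the same member link:
a = pick_link("10.0.0.1", "10.0.0.2", 49152, 3260, 4)
b = pick_link("10.0.0.1", "10.0.0.2", 49152, 3260, 4)
print(a == b)   # True: deterministic placement for the life of the flow
```

    Real switches use their own hash fields and algorithms; the takeaway is only the per-flow placement behavior.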

    Question: From a percentage point of view, what is the current adoption rate of 10G Ethernet in data centers vs. adoption of 10G FCoE?

    Answer: As I mentioned on the webcast, we are at the early stages of adoption for FCoE.  But you can read about multiple successful deployments in case studies on the web sites of Cisco, Intel, and NetApp, to name a few.  The truth is no one knows how much FCoE is actually deployed today.  For example, Intel sells FCoE as a “free” feature of our 10GbE CNAs.  We really have no way of tracking who uses that feature.  FC SAN administrators are an extraordinarily conservative lot, and I think we all expect this to be a long transition.  But the economics of FCoE are compelling and will get even more compelling with 10GBASE-T.  And, as several analysts have noted, as 40GbE becomes more broadly deployed, the performance benefits of FCoE also become quite compelling.

    Question: What is the difference between DCBx Baseline 1.01 and IEEE DCBx 802.1 Qaz?

    Answer: There are three versions of DCBX:
    - Pre-CEE (also called CIN)
    - CEE
    - 802.1Qaz

    The TLVs and the ways they are encoded differ across all three versions.  Pre-CEE and CEE are quite similar in terms of their state machines.  With Qaz, the state machines are quite different: the notion of symmetric/asymmetric/informational parameters was introduced, which changes the way parameters are passed.

    Question: I’m surprised you would suggest that 1GbE is OK for VDI.  Do you mean just small campus implementations?  What about a multi-location WAN for a large enterprise with 1,000 to 5,000 desktop VMs?

    Answer: The reference to 1GbE in the context of VDI was to point out that enterprise applications will also rely on 1GbE in order to reach the desktop. 1GbE has sufficient bandwidth to address VoIP, VDI, etc… as each desktop connects to the central datacenter with 1GbE. We don’t see a use case for 10GbE on any desktop or laptop for the foreseeable future.
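    To put rough numbers behind that (the per-session rates here are my ballpark assumptions, not figures from the webcast):

```python
# Rough headroom check for a 1GbE desktop link. Assumed per-session rates:
# a G.711 VoIP call is ~0.1 Mbit/s; a rich VDI remote-display session is
# commonly a few Mbit/s.
link_mbps = 1000
voip_mbps = 0.1
vdi_mbps = 5       # assumed heavy desktop session

used = voip_mbps + vdi_mbps
print(round(used, 1))                      # 5.1 Mbit/s in use
print(round(used / link_mbps * 100, 2))    # 0.51 -- about half a percent
```

    Even with generous assumptions, a desktop's VoIP-plus-VDI load uses a tiny fraction of a 1GbE link, which is why 10GbE to the desktop has no obvious use case.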

    Question: When making a strategic bet as a CIO/CTO on the future (5-8 years plus) of my datacenter, storage network, etc., is there any technical or business case to keep FC and SAN, versus making the move to a 10/40GbE path with SSD and FC?  This seems especially true with the move to object-based storage and the other things you talked about with Big Data and VMs.  It seems I need to keep FC/SAN only if a vendor with structured data apps requires block storage?

    Answer: An answer to this question really requires an understanding of the applications you run, the performance and QOS objectives, and what your future applications look like. 10GbE offers the bandwidth and feature set to address the majority of application requirements and is flexible enough to support both file and block protocols. If you have existing investment in FC and aren’t ready to eliminate it, you have options to transition to a 10GbE infrastructure with the use of FCoE. FCoE at its core is FCP, so you can connect your existing FC SAN into your new 10GbE infrastructure with CNAs and switches that support both FC and FCoE. This is one of the benefits of FCoE – it offers a great migration path from FC to Ethernet transports. And you don’t have to do it all at once. You can migrate your servers and edge switches and then migrate the rest of your infrastructure later.

    Question: Can I effectively emulate or outperform a SAN on FC by building a VLAN network storage architecture based on 10/40GbE and NAS, using SSD cache strategically?

    Answer: What we’ve seen, and you can see this yourself in the Yahoo case study posted on the Intel website, is that you can get to line rate with FCoE.  So 10GbE outperforms 8Gbps FC by about 15% in bandwidth.  FC is going to 16 Gbps, but Ethernet is going to 40Gbps.  So you should be able to increasingly outperform FC with FCoE — with or without SSDs.
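    The raw line-rate arithmetic backs this up. A back-of-envelope sketch (the encodings and baud rates come from the published standards; the comparison is my arithmetic, not a measured result):

```python
# 8G FC runs 8b/10b encoding at 8.5 Gbaud; 10GbE runs the leaner 64b/66b
# encoding at 10.3125 Gbaud. Payload rates after encoding overhead:
fc8_data_gbps  = 8.5 * 8 / 10        # 6.8 Gbit/s of data
ge10_data_gbps = 10.3125 * 64 / 66   # 10.0 Gbit/s of data

print(round(fc8_data_gbps, 2))                     # 6.8
print(round(ge10_data_gbps, 2))                    # 10.0
print(round(ge10_data_gbps / fc8_data_gbps, 2))    # 1.47x raw headroom
```

    Measured FCoE gains (like the ~15% cited above) come in below the raw 1.47x because Ethernet and FCoE framing take their own share of the wire.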

    Question: If I have a large legacy investment in FC and SAN, how do I cost-effectively migrate to 10 or 40GbE using NAS?  Does it have to be a greenfield opportunity? Is there a better way to build a business case for 10GbE/NAS, and what mix should the target architecture look like for a large virtualized SAN vs. NAS storage network on IP?

    Answer: The combination of a 10Gb converged network adapter (CNA) and a top-of-rack (TOR) switch that supports both FCoE and native FC allows you to preserve connectivity to your existing FC SAN assets while putting in place a 10Gb access layer that can be used for both storage and IP.  By using CNAs and DCB Ethernet switches for your storage and IP access, you are also helping to reduce your CAPEX and OPEX (less equipment to buy and manage on a common infrastructure).  You get the added performance (throughput) benefit of 10G FCoE or iSCSI versus 4G or 8G Fibre Channel or 1GbE iSCSI.  Adding 40GbE core switches gives you greater scalability to address future growth in your data center.

    Question: If I want to build an active-active multi-petabyte storage network over a WAN, with two datacenters 1,000 miles apart, primarily to support Big Data analytics, why would I want to (or not) do this over 10/40GbE/NAS vs. FC on SAN?  Does SAN vs. NAS really enter into the issue?  If I’ve got mostly file-based demand vs. block, is there a technical or business case to keep SAN?

    Answer: You’re right, SAN or NAS doesn’t really enter into the issue for the WAN part; bandwidth does for the amount of Big Data that will need to be moved, and will be the key component in building active/active datacenters. (Note that at that distance, latency will be significant and unavoidable; applications will experience significant delay if they’re at site A and their data is at site B.)

    Inside the data center, the choice is driven by application protocols. If you’re primarily delivering file-based space, then a FC SAN is probably a luxury and the small amount of block-based demand can be delivered over iSCSI with equal performance. With a 40GbE backbone and 10GbE to application servers, there’s no downside to dropping your FC SAN.
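    For a sense of scale on the WAN latency point (my arithmetic, assuming signals travel at roughly two-thirds the speed of light in fiber):

```python
# Physical floor for a 1,000-mile fiber path; real routes add distance,
# routing detours, and equipment delay on top of this.
miles = 1000
km = miles * 1.609
fiber_km_per_ms = 200.0   # ~2/3 c, a common rule of thumb for fiber

one_way_ms = km / fiber_km_per_ms
rtt_ms = 2 * one_way_ms
print(round(one_way_ms, 1))   # 8.0 ms each way, minimum
print(round(rtt_ms, 1))       # 16.1 ms round trip, before protocol turns
```

    Any protocol that needs multiple round trips per operation multiplies that 16 ms accordingly, which is why applications at site A feel their data at site B.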

    Question: Are you familiar with VMware and Cisco plans to introduce a beta for a virtualized GPU appliance (think Nvidia hardware GPUs) for heavy-duty 3D visualization apps on VDI?  This would remove the need for expensive 3D workstations like RISC-based SGI desktops. If so, when dealing with these heavy-duty apps, what are your concerns for the network and storage network?

    Answer: I’m afraid I’m not familiar with these plans.  But clearly moving graphics processing from the client to the server will add increasing load to the network.  It’s hard to be specific without a defined system architecture and workload.  However, I think the generic remarks Jason made about VDI and how NVM storage can help with peak loads like boot storms apply here as well, though you probably can’t use the trick of assuming multiple users will have a common image they’re trying to access.

    Question: How do I get a copy of your slides from today?  PDF?

    Answer: A PDF of the Webcast slides is available at the SNIA-ESF Website at: http://www.snia.org/sites/default/files/SNIA_ESF_10GbE_Webcast_Final_Slides.pdf


    Quick PTS Implementation

    November 11th, 2011

    Need an abbreviated version of the SNIA SSD Performance Test Specification (PTS) in a hurry?  Jamon Bowen of Texas Memory Systems (TMS) whipped up a simple implementation of certain key parts of the PTS that can be run on a Linux system and interpreted in Excel.

    It’s a free download on his Storage Tuning blog.

    This is a boon for anyone who might want to run an internal preliminary test before pursuing a more formal route.

    The bash script uses the Flexible I/O utility (FIO) to run through part of the SSSI PTS.  FIO does the heavy lifting, and the script manages it.  The script outputs comma separated (CSV) data and the download includes an Excel pivot table that helps format the results and select the measurement window.

    Since this is a bare-bones implementation the SSD must be initialized manually before the test script is run.

    The script runs the IOPS Test from the PTS.  This test covers a range of block sizes and read/write ratios, and iterates until the device reaches steady state (with a maximum of 25 iterations).  Altogether the test takes over a day to run.
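    To see why the run is so long, here's a hypothetical sweep loosely modeled on the PTS IOPS matrix (the block-size and read/write-mix lists below are illustrative assumptions; consult the published PTS for the actual values):

```python
from itertools import product

# Illustrative test matrix: a handful of block sizes crossed with a range
# of read/write mixes, re-run each round until steady state is reached.
block_sizes_kb = [1024, 128, 64, 32, 16, 8, 4, 0.5]   # assumed list
rw_mixes = ["100/0", "95/5", "65/35", "50/50", "35/65", "5/95", "0/100"]

combos = list(product(rw_mixes, block_sizes_kb))
print(len(combos))   # 56 data points per round

# At roughly one minute per data point and up to 25 rounds, the full run
# approaches a day:
minutes = len(combos) * 1 * 25
print(round(minutes / 60, 1))   # 23.3 hours
```

    With 56 combinations per round and up to 25 rounds, even a minute per data point adds up to nearly a full day of wall-clock time.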

    Once the test is complete, the downloadable pivot tables allow users to select the steady-state measurement window and report the data in a recommended format.
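    For a flavor of what selecting the steady-state window involves, here is a sketch of a PTS-style check as I understand it (the 20%/10% thresholds follow the spec's excursion criteria; verify against the published PTS before relying on this):

```python
# Sketch of a PTS-style steady-state check: a 5-round window qualifies when
# every round is within 20% of the window average and the least-squares
# slope, scaled across the window, stays within 10% of that average.
def is_steady(iops_window, data_tol=0.20, slope_tol=0.10):
    n = len(iops_window)
    avg = sum(iops_window) / n
    # Data excursion: every point close to the window average.
    if any(abs(y - avg) > data_tol * avg for y in iops_window):
        return False
    # Slope excursion: least-squares slope across the window.
    xbar = (n - 1) / 2
    slope = sum((i - xbar) * (y - avg) for i, y in enumerate(iops_window)) \
            / sum((i - xbar) ** 2 for i in range(n))
    return abs(slope * (n - 1)) <= slope_tol * avg

print(is_steady([9800, 10050, 9900, 10100, 9950]))     # True: settled
print(is_steady([30000, 24000, 19000, 15000, 12000]))  # False: still degrading
```

    The second series mimics a fresh drive whose IOPS are still falling toward their sustained level, which is exactly the phase the measurement window is meant to exclude.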

    See Mr. Bowen’s blog at http://storagetuning.wordpress.com/2011/11/07/sssi-performance-test-specification/ for details on this valuable download.