It’s a Wrap! SNIA’s 20th Storage Developer Conference a Success!
Reviews are in for the 20th Storage Developer Conference (SDC) and they are thumbs up! The 2017 SDC was the largest ever, expanding to four full days with seven keynotes, five SNIA Tutorials, and 92 sessions. The SNIA Technical Council, which oversees conference content, compiled a rich agenda spanning 18 topic categories.
The Alphabet Soup of Storage Networking Acronyms Explained
- Volatile vs. Non-Volatile vs. Persistent Memory
- NVDIMM vs. RAM vs. DRAM vs. SLC vs. MLC vs. TLC vs. NAND vs. 3D NAND vs. Flash vs. SSDs vs. NVMe
- NVMe (the protocol)
SNIA Storage Developer Conference – The Knowledge Continues
SNIA’s 18th Storage Developer Conference is officially a success, with 124 general and breakout sessions; Cloud Interoperability, Kinetic Storage, and SMB3 plugfests; ten Birds-of-a-Feather sessions; and amazing networking among 450+ attendees. Sessions on NVMe over Fabrics won the title of most attended, but Persistent Memory, Object Storage, and Performance were right behind. Many thanks to the SDC 2016 sponsors, who engaged attendees in exciting technology discussions.
For those not familiar with SDC, this technical industry event is designed for a variety of storage technologists at various levels from developers to architects to product managers and more. And, true to SNIA’s commitment to educating the industry on current and future disruptive technologies, SDC content is now available to all – whether you attended or not – for download and viewing.
You’ll want to stream keynotes from Citigroup, Toshiba, DSSD, Los Alamos National Labs, Broadcom, Microsemi, and Intel – they’re available now on demand on SNIA’s YouTube channel, SNIAVideo.
All SDC presentations are now available for download, and over the next few months you can continue to download SDC podcasts, which combine audio and slides. The first podcast from SDC 2016 – on hyperscalers – is available here, as are all 2015 SDC podcasts, and more will be added in the coming weeks.
SNIA thanks all its members and colleagues who contributed to make SDC a success! A special thanks goes out to the SNIA Technical Council, a select group of acknowledged industry experts who work to guide SNIA technical efforts. In addition to driving the agenda and content for SDC, the Technical Council oversees and manages SNIA Technical Work Groups, reviews architectures submitted by Work Groups, and is the SNIA’s technical liaison to standards organizations. Learn more about these visionary leaders at http://www.snia.org/about/organization/tech_council.
And finally, don’t forget to mark your calendars now for SDC 2017 – September 11-14, 2017, again at the Hyatt Regency Santa Clara. Watch for the Call for Presentations to open in February 2017.
Architectural Principles for Networked Solid State Storage Access
There are many permutations of technologies, interconnects, and application-level approaches in play with solid state storage today. In fact, it is becoming increasingly difficult to reason clearly about which problems each combination solves best. That’s why the SNIA Ethernet Storage Forum, together with the SNIA Solid State Storage Initiative, is hosting a live Webcast, “Architectural Principles for Networked Solid State Storage Access,” on June 2nd at 10:00 a.m. PT.
As our presenter, we are fortunate to have Doug Voigt, chair of the SNIA NVM Programming Technical Working Group and a member of the SNIA Technical Council. Doug will outline key architectural principles that may allow us to think about the application of networked solid state technologies more systematically, answering questions such as:
- How do applications see IO and memory access differently?
- What is the difference between a memory and an SSD technology?
- How do application and technology views permute?
- How do memory and network interconnects change the equation?
- What are persistence domains and why are they important?
I hope you’ll register today and join us on June 2nd for an hour that is sure to be insightful.
Next Webcast: The 2015 Ethernet Roadmap for Networked Storage
The ESF is excited to announce our next live Webcast, “The 2015 Ethernet Roadmap for Networked Storage.”
For over three decades, Ethernet has advanced through simple “powers-of-ten” speed increases, and this model has served the industry well. Now Ethernet is changing in big ways, and the Ethernet Alliance has captured the latest changes in the 2015 Ethernet Roadmap.
On June 30th at 10:00 a.m. PT, an expert panel comprising Scott Kipp, President of the Ethernet Alliance; David Chalupsky, Chair of the IEEE P802.3bq/bz Task Forces and the Ethernet Alliance BASE-T Subcommittee; and me will present the Ethernet Alliance’s 2015 Ethernet Roadmap for the networking technology that underlies most future networked storage.
SNIA has focused on protocols and usage models and has more or less taken Ethernet for granted. The biggest technology disruption in the storage space is the emergence of Non-Volatile Memory (NVM), Flash in particular, into the mainstream. NVM increasingly moves system bottlenecks from the storage subsystem to the network. Developments in NVM, most recently 3D Flash, ensure that cost per GB will continue its aggressive decline and that demand for bandwidth will keep rising. As NVM becomes more prevalent, the Ethernet roadmap becomes increasingly important to the storage networking community.
This will be a live and interactive session. I encourage you to register now and bring your questions for our experts. I hope to see you on June 30th.
A Beginner’s Guide to NVMe
When I first started in storage technology (it doesn’t seem like that long ago, really!) the topic seemed like it was becoming rather stagnant. The only thing that seemed to be happening was that disks were getting bigger (more space) and the connections were getting faster (more speed).
More speed, more space; more space, more speed. Who doesn’t like that? After all, you can never have too much bandwidth, or too much disk space! Even so, it does get rather routine. It gets boring. It gets well, “what have you done for me lately?”
Then came Flash.
People said that Flash memory was a game changer, and though I believed it, I just didn’t understand how, really. I mean, sure, it’s really, really fast storage drives, but isn’t that just the same story as before? This is coming, of course, from someone who was far more familiar with storage networks than storage devices.
Fortunately, I kept that question to myself (well, at least until I was asked to write this blog), thus saving myself from potential embarrassment.
There is no shortage of Flash vendors out there who (rightfully) would have jumped at the chance to set my misinformed self on the straight and narrow; they would have been correct, too. Flash isn’t just “cool,” it allows the coordination, access, and retrieval of data in ways that simply weren’t possible with more traditional media.
There are different ways to use Flash, of course, and different architectures abound in the marketplace – from “All-Flash Arrays” (AFAs), to “Hybrid Arrays” (a combination of Flash and spinning disk), to more traditional systems that have simply replaced the spinning drives with Flash drives (without much change to the architecture).
Even with these architectures, though, Flash is still constrained by the very basic tenets of SCSI connectivity. While this type of connectivity works very well (and has a long and illustrious history), the characteristics of Flash memory allow for capabilities that SCSI was never built to deliver.
Enter NVMe.
What’s NVMe?
Glad you asked. NVMe stands for “Non-Volatile Memory Express.” If that doesn’t clear things up, let’s unpack this a bit.
Flash and Solid State Devices (SSDs) are a type of non-volatile memory (NVM). At the risk of oversimplifying, NVM is a type of memory that keeps its content when the power goes out. It also implies a way by which the data stored on the device is accessed. NVMe is a way that you can access that memory.
If it’s not quite clear yet, don’t worry. Let’s break it down with pretty pictures.
Starting Before the Beginning – SCSI
For this, you better brace yourself. This is going to get weird, but I promise that it will make sense when it’s all over.
Let’s imagine for the moment that you are responsible for programming a robot in a factory. The factory has a series of conveyor belts, each with things that you have to put and get with the robot.
The robot is on a track and can only move sideways along the track. Whenever it needs to get a little box from a conveyor belt, it has to move from side to side until it reaches the correct belt, and then wait for the correct orange block to arrive as the belt rotates.
Now, to make things just a little bit more complicated, each conveyor belt is a little slower than the previous one. For example, belt 1 is the fastest, belt 2 is a little slower, belt 3 even slower, and so on.
In a nutshell, this is analogous to how spinning disk drives work. The robot arm – called a read/write head – moves across a spinning disk from sector to sector (analogous, imperfect as it may be, to our trusty conveyor belts) to pick up blocks of data.
(As an aside, this is the reason why there are differences in performance between long contiguous blocks to read and write – called sequential data – and randomly placing blocks down willy-nilly in various sectors/conveyor belts – called random read/writes.)
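To put rough numbers on the conveyor-belt picture, here is a minimal sketch that models a spinning disk’s per-I/O service time as seek plus rotational latency plus transfer time. Every drive parameter in it is an assumed, illustrative value rather than a specification for any real product.

```python
# A minimal sketch of why random I/O is so much slower than sequential I/O
# on a spinning disk. All drive parameters are illustrative assumptions.

AVG_SEEK_MS = 4.0        # assumed average seek time (moving the "robot arm")
RPM = 10_000             # assumed spindle speed
TRANSFER_MB_S = 150.0    # assumed media transfer rate
IO_SIZE_KB = 4           # 4 KiB blocks

rotational_latency_ms = 0.5 * (60_000.0 / RPM)   # wait half a revolution on average
transfer_ms = (IO_SIZE_KB / 1024.0) / TRANSFER_MB_S * 1000.0

random_io_ms = AVG_SEEK_MS + rotational_latency_ms + transfer_ms
sequential_io_ms = transfer_ms                   # head is already in position

print(f"random 4K read    : {random_io_ms:.2f} ms (~{1000 / random_io_ms:.0f} IOPS)")
print(f"sequential 4K read: {sequential_io_ms:.3f} ms (~{1000 / sequential_io_ms:.0f} IOPS)")
```

The exact figures don’t matter; the point is that a random read pays milliseconds of mechanical overhead for microseconds of actual data transfer.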
Now remember, our trusty robot needs to be programmed. That is, you – in the control room – need to send instructions to the robot so that it can get/put its data. The command set that is used to do this, in our analogy, is SCSI.
Characteristics of SCSI
SCSI is a tried-and-true command structure. It is, after all, the protocol for controlling most storage devices in the world. In fact, it is so ubiquitous that most people nowadays simply take it as a given. It works on so many levels and layers as an upper-layer protocol, with so many different devices, that it’s easily the default “go-to” choice. It’s used with Fibre Channel, FCoE, iSCSI (obviously), InfiniBand – even the hard drives in your desktops and laptops.
It was also built for devices that relied heavily on the limitations of those conveyor belts. Rotational media has long been shown to be superior to linear media (i.e., tape) in terms of speed and performance, but the engineering required to compensate for the change in data rate from the inside of the platter (where it is much lower) to the outside – similar to the difference between our conveyor belts “4” and “1” – results in some pretty fancy footwork on the storage side.
Because the robot arm must move back and forth, it’s okay that it can only handle one series of commands at a time. Even if you could tell it to get a block from conveyor belt 1 and 3 at the same time, it couldn’t do it. It would have to queue the commands and get to each in turn.
So, the fact that SCSI only sent commands one-at-a-time was okay, because the robot could only do that anyway.
Then came Flash, and suddenly things seemed a bit… constricted.
The Flash Robot
Let’s continue with the analogy, shall we? We still have our mandate – control a robot to get and put blocks of information onto storage media. This time, it’s Flash.
First, let’s do away with the conveyor belts altogether. Instead, let’s lay out all the blocks (in a nice OCD-friendly way) on the media, so that the robot arm can see it all at once.
From its omniscient view, the robot can “see” all the blocks at once. Nothing moves, of course, as this is solid-state. However, as it stands, this is how we currently use Flash with a robot arm that responds to SCSI commands. We still have to address our needs one-at-a-time.
Now, it’s important to note that because we’re not talking about spinning media, the robot arm can go really fast (of course, there’s no real moving read/write head in a Flash drive, but remember we’re looking at this from a SCSI standpoint). The long and the short of it is that even though we can see all of the data from our vantage point, we still only ask for information one at a time, and we still only have one queue for those requests.
Enter NVMe
This is where things get very interesting.
NVMe is the standardized interface for PCIe SSDs (it’s important to note, however, that NVMe is not PCIe!). It was architected from the ground-up, specifically for the characteristics of Flash and NVM technologies.
Most of the technical specifics are available at the NVM Express website, but here are a couple of the main highlights.
First, whereas SCSI has one queue for commands, NVMe is designed to have up to 64 thousand queues. Moreover, each of those queues in turn can have up to 64 thousand commands. Simultaneously. Concurrently. That is, at the same time.
That’s a lot of freakin’ commands goin’ on at once!
Let’s take a look at our programmable robot. To complete the analogy, instead of one arm, our intrepid robot has 64 thousand arms, each with the ability to handle 64 thousand commands.
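To put a number on that difference, here’s a quick back-of-the-envelope sketch using the 64 thousand figures quoted above (the exact limits in the specification differ slightly, so treat these as round numbers):

```python
# Rough comparison of outstanding-command capacity, using the figures from the
# text above (64K queues x 64K commands for NVMe, a single queue for SCSI).
# Queue-depth values are illustrative; exact limits depend on spec and device.

scsi_queues, scsi_depth = 1, 32            # e.g., SATA NCQ depth (assumption)
nvme_queues, nvme_depth = 64_000, 64_000   # figures quoted in the text

scsi_outstanding = scsi_queues * scsi_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"SCSI outstanding commands: {scsi_outstanding:,}")
print(f"NVMe outstanding commands: {nvme_outstanding:,}")
print(f"ratio: ~{nvme_outstanding // scsi_outstanding:,}x")
```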
Second, NVMe streamlines those commands to only the basics that Flash technologies need: 13 to be exact.
Remember when I said that Flash has certain characteristics that allow for radically changing the way data centers store and retrieve data? This is why.
Flash is already fast. NVMe can make it even faster than it is today. How fast? Very fast.
Source: The Performance Impact of NVMe and NVMe over Fabrics
This is just an example, of course, because enterprise data centers have more workloads than just those that run 4K random reads. Even so, it’s a pretty nifty example:
- For 100% random reads, NVMe has 3x better IOPS than 12Gbps SAS
- For 70% random reads, NVMe has 2x better IOPS than 12Gbps SAS
- For 100% random writes, NVMe has 1.5x better IOPS than 12Gbps SAS
What about sequential data?
Source: The Performance Impact of NVMe and NVMe over Fabrics
Again, this is just one scenario, but the results are still quite impressive. For one, NVMe delivers greater than 2.5 GB/s read performance and ~2 GB/s write performance:
- 100% reads: NVMe has 2x performance of 12Gbps SAS
- 100% writes: NVMe has 2.5x performance of 12Gbps SAS
Of course, there is more to data center life than IOPS! Those efficiencies of command structure that I mentioned above also cut CPU cycles in half and reduce latency by more than 200 microseconds compared with 12 Gbps SAS.
I know it sounds like I’m picking on poor 12 Gbps SAS, but at the moment it is the closest thing to the NVMe type of architecture. The reason is NVMe’s relationship with PCIe.
Relationship with PCIe
If there’s one place where there is likely to be some confusion, it’s here. I have to confess, when I first started going deep into NVMe, I got somewhat confused, too. I understood what PCIe was, but I was having a much harder time figuring out where NVMe and PCIe intersected, because most of the time the conversations tend to blend the two technologies together in the discussion.
That’s when I got it: they don’t intersect.
When it comes to “hot data,” we in the industry have been seeing a progressive migration toward the CPU. Traditional hosts contain an I/O controller that sits between the CPU and the storage device. By using PCIe, however, it is possible to eliminate that I/O controller from the data path, making the flows go very, very quickly.
Because of the direct connection to the CPU, PCIe has some pretty nifty advantages, including (among others):
- Lower latency
- Scalable performance (1 GB/s per lane, and a PCIe 3.0 x8 card has 8 lanes – that’s what the “x8” means; see the quick sketch after this list)
- Increased I/O (up to 40 PCIe lanes per CPU socket)
- Low power
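Here is the quick sketch promised above: a back-of-the-envelope look at what those lane counts mean for raw bandwidth, using the roughly 1 GB/s-per-lane PCIe 3.0 figure as an assumption.

```python
# Back-of-the-envelope PCIe bandwidth, using the ~1 GB/s-per-lane PCIe 3.0
# figure from the list above (the precise value is closer to 0.985 GB/s after
# 128b/130b encoding; treat these numbers as illustrative).

GB_PER_LANE_GEN3 = 1.0   # approximate usable bandwidth per PCIe 3.0 lane

def pcie_bandwidth_gbs(lanes, gb_per_lane=GB_PER_LANE_GEN3):
    """Aggregate one-direction bandwidth in GB/s for a given lane count."""
    return lanes * gb_per_lane

print(f"x4 SSD             : ~{pcie_bandwidth_gbs(4):.0f} GB/s")
print(f"x8 add-in card     : ~{pcie_bandwidth_gbs(8):.0f} GB/s")
print(f"40 lanes per socket: ~{pcie_bandwidth_gbs(40):.0f} GB/s total to one CPU")
```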
This performance of PCIe, as shown above, is significant. Placing an SSD on that PCIe interface was, and is, inevitable. However, there needed to be a standard way to communicate with SSDs through the PCIe interface, or else there would have been a free-for-all of implementations. The interoperability matrices would be a nightmare!
Enter NVMe.
NVM Express is that standardized way of communicating with an NVM storage device, and it is backed by an ever-expanding consortium of hardware and software vendors to make your life easier and more productive with Flash technologies.
Think of PCIe as the physical interface and NVMe as the protocol for managing NVM devices using that interface.
Not Just PCIe
Just as with SCSI, interest has emerged in moving storage outside the host chassis itself. It’s possible to do this with PCIe, but it requires extending the PCIe hardware interface outside the chassis as well, so there has been growing interest in using NVMe with other interface and storage networking technologies.
Work is progressing not just on point-to-point communication (PCIe, and RDMA-friendly technologies such as RoCE and iWARP), but also on fabrics (InfiniBand-, Ethernet-, and Fibre Channel-based, including FCoE).
By expanding these capabilities, it will be possible to attach hundreds, maybe thousands, of SSDs in a network – far more than can be accommodated via PCIe-based systems. There are some interesting aspects for NVMe using Fabrics that are best left for another blog post, but it was worth mentioning here as an interesting tidbit.
Bottom Line
NVM Express has been in development since 2007, believe it or not, and has just released version 1.2 as of this writing. Working prototypes of NVMe using PCIe- and Ethernet-based connectivity have been demonstrated for several years. It’s probably the most-developed standards work that few people have ever heard about!
Want to learn more? I encourage you to listen/watch the SNIA ESF Webcast “The Performance Impact of NVMe and NVMe over Fabrics” which goes into much more technical detail, sponsored by the NVMe Promoter’s Board and starring some of the brainiacs behind the technical work. Oh, and I’m there too (as the moderator, of course).
Keep an eye open for more information about NVMe in the coming months. The progress made is remarkably rapid, and more companies are joining each year. It will be extremely interesting to see just how creative the data center architectures can get in the coming years, as Flash technology comes to realize its full potential.
Flash Webcast Q&A
Our recent Webcast: Flash – Plan for the Disruption was very well received and well attended. We thank everyone who was able to make the live event. For those of you who couldn’t make it, it’s now available on demand. Check it out here.
There wasn’t enough time to respond to all of the questions during the Webcast, so we have consolidated answers to all of them in this blog post from the presentation team. Feel free to comment and provide your input.
Q. Are you going to cache both read and writes in NetApp FlashCache?
A. Flash Cache is a level 2 Read cache and it is used to accelerate random read operations. NetApp offers an additional capability called Flash Pool which caches both random reads and random overwrites. Both technologies are part of the NetApp Virtual Storage Tier family within the Data ONTAP operating environment.
Q. Is eMLC flash available today?
A. Yes, a number of Flash vendors are shipping eMLC today.
Q. Also can you review the write cycle performance of SLC vs. MLC?
A. Write cycles for SLC are typically around 100,000. With eMLC, write cycles of around 30,000 per cell can be achieved.
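To see what those cycle counts mean in practice, here is a minimal sketch of the usual back-of-the-envelope endurance estimate (total data written before wear-out is roughly capacity times P/E cycles divided by write amplification). The capacity and write-amplification values are assumptions for illustration, not vendor specifications.

```python
# Rough flash endurance estimate. Capacity and write-amplification values are
# illustrative assumptions, not vendor figures.

def endurance_tb_written(capacity_gb, pe_cycles, write_amplification=3.0):
    """Approximate total terabytes writable over the drive's life."""
    return capacity_gb * pe_cycles / write_amplification / 1000.0

for name, cycles in [("SLC", 100_000), ("eMLC", 30_000)]:
    tbw = endurance_tb_written(capacity_gb=200, pe_cycles=cycles)
    print(f"{name}: ~{tbw:,.0f} TB written over the life of a 200 GB drive")
```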
Q. Has specific analysis been conducted on what applications and relative data can be cached at the server versus at the storage controller (tolerance for latency, user patience, etc.)?
A. This varies but server caching will typically be used for applications with the most hot spots such as databases. If there is a particular requirement for ultra low latency such as in OLTP environments, server caching may be appropriate. Server caching can also yield significant benefit to increase VM density. Generally, server caching will be deployed to accelerate a specific application while storage controller caching will be used to accelerate storage which is shared across multiple applications.
Q. Is the data running over the network as storage PDUs or as Ethernet Layer 2/IP traffic?
A. Ethernet Layer 2 in this demo, though it could have been scaled to L3 IP-routed traffic.
Q. What is the difference between flash tier and flash cache?
A. A flash tier is persistent storage whereby datasets are pinned to flash technology for some period of time (or permanently). In Automated Storage Tiering, data may be migrated to and from the flash tier based on the temperature of the data. A flash cache, on the other hand, is a caching technology in which the most frequently accessed data is copied to flash for data access and then evicted as the data cools down. Data is copied to the flash cache either on the basis of calculated data temperature or on a first-in first-out basis.
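If it helps to see the distinction in code form, below is a deliberately tiny sketch (not any vendor’s implementation) contrasting a read cache that copies and evicts data with a tier that holds data persistently until a migration policy moves it.

```python
from collections import OrderedDict

# Deliberately tiny sketch, not any vendor's implementation: a cache holds
# *copies* and evicts them as they cool; a tier is the data's actual
# placement until a migration policy moves it.

class FlashReadCache:
    """Copies hot blocks into flash; evicts first-in first-out when full."""
    def __init__(self, capacity):
        self.capacity, self.blocks = capacity, OrderedDict()

    def read(self, block_id, hdd_backend):
        if block_id in self.blocks:
            return self.blocks[block_id]        # cache hit, served from flash
        data = hdd_backend[block_id]            # miss: fetch from the HDD tier
        self.blocks[block_id] = data            # copy into flash
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # FIFO eviction as data cools
        return data

class FlashTier:
    """Data migrated to flash lives only here until it is migrated back out."""
    def __init__(self):
        self.blocks = {}

    def migrate_in(self, block_id, data):
        self.blocks[block_id] = data            # persistent placement, no eviction
```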
Q. Given the large advantages of flash on power (direct), cooling, and DC footprint, why do enterprise data centers not just completely switch out their HDDs? It seems like there is a good ROI even without considering performance. Is it the operational complexities that make this challenging?
A. For many applications, this is not cost justified given the significant price difference of the SSD and HDD devices. Since hot data typically amounts to less than 20% of total data, a small amount of flash can be deployed successfully. In the caching case, this can be around 1%.
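A rough worked example of that economics argument, using made-up $/GB figures purely for illustration:

```python
# Rough cost comparison behind the "why not go all-flash?" question.
# The $/GB prices and hot-data fractions below are illustrative assumptions.

TOTAL_DATA_TB = 100
SSD_PER_GB, HDD_PER_GB = 1.00, 0.05       # assumed prices, $/GB

all_flash = TOTAL_DATA_TB * 1000 * SSD_PER_GB
all_hdd = TOTAL_DATA_TB * 1000 * HDD_PER_GB
# Hybrid: ~1% of capacity as a flash cache serving the hot fraction of the I/O
hybrid = TOTAL_DATA_TB * 1000 * (0.01 * SSD_PER_GB + 0.99 * HDD_PER_GB)

print(f"all-flash: ${all_flash:,.0f}")
print(f"all-HDD  : ${all_hdd:,.0f}")
print(f"hybrid   : ${hybrid:,.0f} (1% flash cache in front of HDD)")
```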
Flash Webcast – Are You Ready for the Disruption?
There’s no doubt that flash is a game changer. Even a relatively small percentage of flash can drive a significant improvement in peak storage performance. How are you planning for the disruption? Join me and my SNIA colleague, Paul Feresten, for a live Webcast next week, Thursday, September 20th (11:00 a.m. ET, 8:00 a.m. PT), as we discuss the impact of flash. We’ll take a look at how flash is being deployed in storage systems, key considerations and tradeoffs, performance benefits, trends in non-volatile memory, and more. And because it’s live, we’ll take your questions on the spot. We hope to see you there. Register now.
10GbE Answers to Your Questions
Our recent Webcast: 10GbE – Key Trends, Drivers and Predictions was very well received and well attended. We thank everyone who was able to make the live event. For those of you who couldn’t make it, it’s now available on demand. Check it out here.
There wasn’t enough time to respond to all of the questions during the Webcast, so we have consolidated answers to all of them in this blog post from the presentation team. Feel free to comment and provide your input.
Question: When implementing VDI (1,000 to 5,000 users), what are the best practices for architecting the enterprise storage tier and avoiding peak IOPS / boot storm problems? How can SSD cache be used to minimize that issue?
Answer: In the case of boot storms for VDI, one of the challenges is dealing with the individual images that must be loaded and accessed by remote clients at the same time. SSDs can help when deployed either at the host or at the storage layer. When deduplication is enabled in these instances, a single image can be loaded into either the local or the storage SSD cache and served to the clients much more rapidly. Additional best practices include using cloning technologies to reduce the space taken up by each virtual desktop.
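To illustrate the deduplication point, here is a toy sketch (not any particular VDI or storage product) showing why a boot storm of cloned desktops needs only one cached copy of each unique block:

```python
import hashlib

# Toy illustration: with deduplication, 1,000 nearly identical desktop images
# reduce to ONE cached copy of each unique block, so desktops 2..N are served
# from the SSD cache instead of re-reading from disk.

unique_blocks = {}   # fingerprint -> block data (what actually sits in SSD)
logical_map = {}     # (desktop_id, block_no) -> fingerprint

def write_block(desktop_id, block_no, data: bytes):
    fp = hashlib.sha256(data).hexdigest()
    unique_blocks.setdefault(fp, data)           # stored once, referenced many times
    logical_map[(desktop_id, block_no)] = fp

def read_block(desktop_id, block_no) -> bytes:
    return unique_blocks[logical_map[(desktop_id, block_no)]]

golden_image = [b"bootloader", b"kernel", b"desktop-os"]
for desktop in range(1000):                      # 1,000 cloned desktops booting
    for i, blk in enumerate(golden_image):
        write_block(desktop, i, blk)

print(f"logical blocks referenced : {len(logical_map):,}")    # 3,000
print(f"unique blocks held in SSD : {len(unique_blocks)}")    # 3
```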
Question: What are the considerations for 10GbE with LACP etherchannel?
Answer: Link Aggregation Control Protocol (IEEE 802.1AX-2008) is speed agnostic. No special consideration is required when going to 10GbE.
Question: From a percentage point of view what is the current adoption rate of 10G Ethernet in data Centers vs. adoption of 10G FCoE?
Answer: As I mentioned on the webcast, we are at the early stages of adoption for FCoE. But you can read about multiple successful deployments in case studies on the web sites of Cisco, Intel, and NetApp, to name a few. The truth is no one knows how much FCoE is actually deployed today. For example, Intel sells FCoE as a “free” feature of our 10GbE CNAs. We really have no way of tracking who uses that feature. FC SAN administrators are an extraordinarily conservative lot, and I think we all expect this to be a long transition. But the economics of FCoE are compelling and will get even more compelling with 10GBASE-T. And, as several analysts have noted, as 40GbE becomes more broadly deployed, the performance benefits of FCoE also become quite compelling.
Question: What is the difference between DCBx Baseline 1.01 and IEEE DCBx 802.1 Qaz?
Answer: There are three versions of DCBX:
- Pre-CEE (also called CIN)
- CEE
- 802.1Qaz
There are differences in the TLVs and the ways they are encoded in all three versions. Pre-CEE and CEE are quite similar in terms of the state machines. With Qaz, the state machines are quite different: the notion of symmetric/asymmetric/informational parameters was introduced, which changes the way parameters are passed.
Question: I’m surprised you would suggest that 1GbE is OK for VDI. Do you mean just small campus implementations? What about a multi-location WAN for a large enterprise with 1,000 to 5,000 desktop VMs?
Answer: The reference to 1GbE in the context of VDI was to point out that enterprise applications will also rely on 1GbE in order to reach the desktop. 1GbE has sufficient bandwidth to address VoIP, VDI, etc., as each desktop connects to the central datacenter with 1GbE. We don’t see a use case for 10GbE on any desktop or laptop for the foreseeable future.
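As a rough sanity check on that bandwidth claim, here is a small sketch with assumed per-session figures (illustrative only, not measurements):

```python
# Rough budget for "is 1GbE enough?" at the desktop and access layer.
# Per-session bandwidth figures are illustrative assumptions, not measurements.

LINK_MBPS = 1000            # 1GbE link
VDI_SESSION_MBPS = 2.0      # assumed typical VDI display-protocol stream
VOIP_CALL_MBPS = 0.1        # assumed VoIP call including overhead
UTILIZATION = 0.7           # keep headroom on the link

per_desktop = VDI_SESSION_MBPS + VOIP_CALL_MBPS
usable = LINK_MBPS * UTILIZATION

print(f"one desktop uses ~{per_desktop:.1f} Mbps of a {LINK_MBPS} Mbps link")
print(f"a single 1GbE uplink could carry roughly {int(usable // per_desktop)} such sessions")
```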
Question: When making a strategic bet as a CIO/CTO on the future (5-8 years plus) of my datacenter, storage network, etc., is there any technical or business case to keep FC and SAN, versus moving to a 10/40GbE path with SSD and FC? This seems especially relevant with the move to object-based storage and the other things you talked about with Big Data and VMs. It seems I need to keep FC/SAN only if a vendor with structured-data apps requires block storage?
Answer: An answer to this question really requires an understanding of the applications you run, the performance and QOS objectives, and what your future applications look like. 10GbE offers the bandwidth and feature set to address the majority of application requirements and is flexible enough to support both file and block protocols. If you have existing investment in FC and aren’t ready to eliminate it, you have options to transition to a 10GbE infrastructure with the use of FCoE. FCoE at its core is FCP, so you can connect your existing FC SAN into your new 10GbE infrastructure with CNAs and switches that support both FC and FCoE. This is one of the benefits of FCoE – it offers a great migration path from FC to Ethernet transports. And you don’t have to do it all at once. You can migrate your servers and edge switches and then migrate the rest of your infrastructure later.
Question: Can I effectively emulate or outperform a SAN on FC by building a VLAN network storage architecture based on 10/40GbE and NAS, and using SSD cache strategically?
Answer: What we’ve seen, and you can see this yourself in the Yahoo case study posted on the Intel website, is that you can get to line rate with FCoE. So 10GbE outperforms 8Gbps FC by about 15% in bandwidth. FC is going to 16 Gbps, but Ethernet is going to 40Gbps. So you should be able to increasingly outperform FC with FCoE — with or without SSDs.
Question: If I have a large legacy investment in FC and SAN, how do I cost-effectively migrate to 10 or 40GbE using NAS? Does it have to be a greenfield opportunity? Is there a better way to build a business case for 10GbE/NAS, and what should the target architecture look like for a large virtualized SAN vs. a NAS storage network on IP?
Answer: The combination of a 10Gb converged network adapter (CNA) and a top-of-rack (TOR) switch that supports both FCoE and native FC allows you to preserve connectivity to your existing FC SAN assets while putting in place a 10Gb access layer that can be used for both storage and IP. By using CNAs and DCB Ethernet switches for your storage and IP access, you also help reduce your CAPEX and OPEX (less equipment to buy and manage, using a common infrastructure). You get the added performance (throughput) benefit of 10G FCoE or iSCSI versus 4G or 8G Fibre Channel or 1GbE iSCSI. Adding 40GbE core switches then gives you greater scalability to address future growth in your data center.
Question: If I want to build an Active-Active multi-Petabyte storage network over WAN with two datacenters 1000 miles apart to primarily support Big Data analytics, why would I want to (..or not) do this over 10/40GBe / NAS vs. FC on SAN? Does SAN vs. NAS really enter into the issue? If I got mostly file-based demand vs. block is there a tech or biz case to keep SAN ?
Answer: You’re right, SAN or NAS doesn’t really enter into the issue for the WAN part; bandwidth does for the amount of Big Data that will need to be moved, and will be the key component in building active/active datacenters. (Note that at that distance, latency will be significant and unavoidable; applications will experience significant delay if they’re at site A and their data is at site B.)
Inside the data center, the choice is driven by application protocols. If you’re primarily delivering file-based space, then a FC SAN is probably a luxury and the small amount of block-based demand can be delivered over iSCSI with equal performance. With a 40GbE backbone and 10GbE to application servers, there’s no downside to dropping your FC SAN.
Question: Are you familiar with VMware and Cisco plans to introduce a beta for a virtualized GPU appliance (think NVIDIA hardware GPUs) for heavy-duty 3D visualization apps on VDI? No longer needing expensive 3D workstations like RISC-based SGI desktops. If so, when dealing with these heavy-duty apps, what are your concerns for the network and storage network?
Answer: I’m afraid I’m not familiar with these plans. But clearly moving graphics processing from the client to the server will add increasing load to the network. It’s hard to be specific without a defined system architecture and workload. However, I think the generic remarks Jason made about VDI and how NVM storage can help with peak loads like boot storms apply here as well, though you probably can’t use the trick of assuming multiple users will have a common image they’re trying to access.
Question: How do I get a copy of your slides from today? PDF?
Answer: A PDF of the Webcast slides is available at the SNIA-ESF Website at: http://www.snia.org/sites/default/files/SNIA_ESF_10GbE_Webcast_Final_Slides.pdf