
    Clearing Up Confusion on Common Storage Networking Terms

    January 12th, 2017

    Do you ever feel a bit confused about common storage networking terms? You’re not alone. At our recent SNIA Ethernet Storage Forum webcast “Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Mauve,” we had experts from Cisco, Mellanox and NetApp explain the differences between:

    • Channel vs. Busses
    • Control Plane vs. Data Plane
    • Fabric vs. Network

    If you missed the live webcast, you can watch it on-demand. As promised, we’re also providing answers to the questions we got during the webcast. Between these questions and the presentation itself, we hope they will help you decode these common, but sometimes confusing, terms.

    And remember, “Everything You Wanted To Know About Storage But Were Too Proud To Ask” is a webcast series with a “colorfully-named pod” for each topic we tackle. You can register now for our next webcast: Part Teal, The Buffering Pod, on Feb. 14th.

    Q. Why do we have Fibre and Fiber?

    A. “Fiber optics” is the term for the optical technology used by Fibre Channel fabrics. A common story is that the “Fibre” spelling came about to accommodate the French (FC is, after all, an international standard), but in actuality it was a marketing idea to create a more distinctive name, and the British spelling “Fibre” was chosen.

    Q. Will OpenStack change all the rules of the game?

    A. Yes. OpenStack is all about centralizing the control plane of many different aspects of infrastructure.

    Q. The difference between control and data plane matters only when we discuss software defined storage and software defined networking, not in traditional switching and storage.

    A. It matters regardless. You need to understand how much each individual control plane can handle and how many control planes you have from an overall management perspective. In cases where you have too many control planes, SDN and SDS can be a benefit to you.

    Q. As I’ve heard that networks use stateless protocols, would FC do the same?

    A. Fibre Channel has several different classes, which can be either stateful or stateless. Most applications of Fibre Channel are Class 3, as it is the preferred class for SCSI traffic. A connection between Fibre Channel endpoints is always stateful (as it involves a login process to the Fibre Channel fabric). The transport protocol is augmented by Fibre Channel exchanges, which are managed on a per-hop basis. Retransmissions are handled by devices when exchanges are incomplete or lost, meaning that each exchange is a stateful transmission, but the protocol itself is considered stateless in modern SCSI-transport Fibre Channel.

    iSCSI, as a connection-oriented protocol, creates a nexus between an initiator and a target, and is considered stateful. In addition, SMB, NFSv4, ftp, and TCP are stateful protocols, while NFSv2, NFSv3, http, and IP are stateless protocols.
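    As a loose, transport-layer illustration of that stateful/stateless split, the short Python sketch below contrasts a connection-oriented TCP send (a handshake establishes per-connection state) with a connectionless UDP send (each datagram stands alone). The address and port are placeholders; this is only an analogy for the concepts above, not storage-protocol code.

        import socket

        def tcp_send(host: str, port: int, payload: bytes) -> None:
            """Stateful: a connection must be established before any data flows."""
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.connect((host, port))   # three-way handshake sets up per-connection state
                s.sendall(payload)        # bytes are delivered in order on that connection

        def udp_send(host: str, port: int, payload: bytes) -> None:
            """Stateless: no session is set up or torn down; each datagram stands alone."""
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.sendto(payload, (host, port))  # no handshake, no state kept between sends

        if __name__ == "__main__":
            udp_send("127.0.0.1", 9999, b"hello")      # succeeds even with nothing listening
            try:
                tcp_send("127.0.0.1", 9999, b"hello")  # fails unless a peer holds the other end
            except OSError as exc:
                print("TCP needs a peer to share the connection state:", exc)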

    Q. Where do CIFS/SMB come into the picture?

    A. CIFS/SMB is part of a network stack. We need to have a separate talk about network stacks and their layers. In this presentation, we were talking primarily about the physical layer of the networks and fabrics. To oversimplify network stacks, there are multiple layers of protocols that run on top of the physical layer. In the case of FC, those protocols include the control plane protocols (such as FC-SW) and the data plane protocols. In FC, the most common data plane protocol is FCP (used by SCSI, FICON, and FC-NVMe). In the case of Ethernet, those protocols also include the control plane protocols (such as TCP/IP) and data plane protocols. In Ethernet, there are many commonly used data plane protocols for storage (such as iSCSI, NFS, and CIFS/SMB).


    Containers, Docker and Storage – An Expert Q&A

    December 19th, 2016

    Containers continue to be a hot topic today as is evidenced by the more than 2,000 people who have already viewed our SNIA Cloud webcasts, “Intro to Containers, Container Storage and Docker“ and “Containers: Best Practices and Data Management Services.” In this blog, our experts, Keith Hudgins of Docker and Andrew Sullivan of NetApp, address questions from our most recent live event.

    Q. What is the major challenge for storage in containerized environment?

    A. Containers move fast. Users can spin up and spin down containers extremely quickly. The biggest challenge in production-bound container environments is simply keeping up with the movement of data.

    Docker Engine does not delete base container images when the container is shut down. Likewise, Registry assumes you’ve got unlimited storage on hand. For containers that push frequent revisions (as would be the case in a continuous delivery environment), that leads to a lot of orphaned container images that can fill up all available storage if left unchecked.

    There are some community-led scripts that will help to keep things in control. That’s the beauty of community-led technology.
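    As a rough sketch of the kind of housekeeping those scripts do, the Python snippet below shells out to the Docker CLI to report disk usage and remove dangling (untagged) images. It assumes a reasonably recent Docker CLI that provides docker system df and docker image prune; it is an illustration, not one of the community scripts themselves.

        import subprocess

        def docker(*args: str) -> str:
            """Run a docker CLI command and return its stdout."""
            result = subprocess.run(["docker", *args], check=True,
                                    capture_output=True, text=True)
            return result.stdout

        if __name__ == "__main__":
            # Show how much space images, containers, and volumes are consuming.
            print(docker("system", "df"))
            # Remove dangling images (untagged layers left behind by frequent rebuilds).
            print(docker("image", "prune", "--force"))
            # More aggressive option: also remove images not used by any container.
            # print(docker("image", "prune", "--all", "--force"))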

    Q. What about the speed of retrieving the data from storage?

    A. That’s where being a solid storage architect comes in. Every storage system has different strengths and weaknesses, so it’s important to engineer your solution to fit your performance goals. Docker containers are running on the main kernel of the host system. IO is not constrained by abstraction, as in the case of virtual machines. Rather, it is constrained more by density – hundreds of containers on a host can push massive IOPS, so you want your pipes fat and data sources close to the host systems.

    Q. Can you expand on moving Docker Volumes from On-Premise bare metal to Cloud Service Providers? Data Migration? Encryption? 

    A. None of these capabilities are built into Docker Engine. We rely on external storage systems to provide those features. Private-to-cloud replication is primarily a feature of software-based companies, like Portworx, Blockbridge, or Hedvig. Encryption and migration are both common features across other companies as well. Flocker from ClusterHQ is a service broker system that provides many bolt-on features for the storage systems they support. You can also use community-supplied services like Ceph to get you there.

    Q. Are you familiar with “Flocker,” which apparently is able to copy persistent data to another container? Can you share your thoughts?

    A. Yes. ClusterHQ (makers of Flocker) provide an API broker that sits between storage engines and Docker (and other dynamic infrastructure providers, like OpenStack), and they also provide some bolt-on features like replication and encryption.

    Q. Is there any sort of feature in the volume plugins that allows a persistent volume to re-connect to a container if the container is moved across multiple hosts?

    A. There’s no feature in plugins to cover that specifically. The plugin API is very simple. In practice, what you would do is write your plugin to expose volumes to Docker Engine on every host that it’s possible to mount that volume. In your container specification, whether it’s a Compose file, DAB file, or what have you, specify the name of your volume. Wherever that unique name is encountered, it will be mounted and attached to the container when it’s re-launched.
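    A minimal sketch of that pattern, using the docker-py SDK, might look like the following. The volume name, image, and the built-in "local" driver are placeholders; in a multi-host setup you would substitute the name of whichever volume plugin is installed on each host, so that the same volume name resolves everywhere the container might be relaunched.

        import docker  # pip install docker

        client = docker.from_env()

        # Create (or reuse) a named volume. With a multi-host plugin, pass its
        # driver name here instead of the built-in "local" driver.
        client.volumes.create(name="app-data", driver="local")

        # Any container that references the volume by name gets it mounted,
        # regardless of which host the container lands on, as long as the
        # driver can serve that name there.
        logs = client.containers.run(
            "alpine:3.18",
            command=["sh", "-c", "date >> /data/heartbeat && cat /data/heartbeat"],
            volumes={"app-data": {"bind": "/data", "mode": "rw"}},
            remove=True,
        )
        print(logs.decode())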

    If you have more questions on containers, Docker and storage, check out our first Q&A blog: Containers: No Shortage of Interest or Questions.

    I also encourage you to join our Containers opt-in email list. It will be a good way to keep up with all that SNIA Cloud is doing on this important technology.


    No Shortage of Container Storage Questions

    November 29th, 2016

    We covered a lot of ground in our recent SNIA Ethernet Storage Forum webcast, “Current State of Storage in the Container World.” We had a technical discussion on why containers are so compelling, how Docker containers work, persistent shared storage and future considerations for container storage. We received some great questions during the live event, and as promised, here are answers to them all.

    Q. Docker cannot be installed on bare metal and requires a base OS to operate upon, right?

    A. That is correct.

    Q. Does the application code need to be changed so that it can “fit and operate” in a container?

    A. No, the application code does not need to change. The challenge most people face when migrating an application to a container is how to maintain the application’s state. One of the motivations for this webcast was to explain how to allow applications within containers to persist data. Hopefully the Docker Volume construct will meet your needs.

    Q. Seems like containers share one OS/kernel… That suggests that there is just one OS in the “containerized” server… And yet there is still mention of hypervisor (or at least Hyper-V)… Can you clarify? If the containers share an OS, is a hypervisor needed?

    A. You are correct, containers are designed to share a single kernel; therefore a hypervisor is not required to run containers. Having said that, VMware and Microsoft both offer options that run a single container in its own virtual machine (running a minimal operating system).

    Q. Can the Docker Hub be compared to something like the GitHub?

    A. Yes, that is a great analogy. Docker Hub (hub.docker.com) is to container images as GitHub (github.com) is to source code.

    Q. What are the differences between the base and the host image?

    A. If you’re referring to the webcast slides; the box labeled “Base Image” is the first layer in an image. The box labeled “Host OS” is not a layer, but represents the hosting operating system (kernel) that is shared by the containers.

    Q. So there is a separate root per container?

    A. In most cases the image will provide a root, so each container will have a separate root. This is made possible by a kernel feature called namespaces. Docker does, however, allow you to share a directory between the host operating system and any number of containers.
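    For the shared-directory case, here is a hedged docker-py sketch: two containers bind-mount the same host path (a placeholder) and therefore see the same files, even though each still has its own root filesystem thanks to namespaces.

        import docker  # pip install docker

        client = docker.from_env()

        # Bind-mount one host directory into both containers. The host path is a
        # placeholder; Docker creates it if it does not already exist.
        shared = {"/srv/shared": {"bind": "/shared", "mode": "rw"}}

        client.containers.run("alpine:3.18",
                              ["sh", "-c", "echo written-by-container-1 > /shared/note"],
                              volumes=shared, remove=True)
        out = client.containers.run("alpine:3.18", ["cat", "/shared/note"],
                                    volumes=shared, remove=True)
        print(out.decode())  # prints "written-by-container-1"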

    Q. If Deduplication is enabled on the storage LUNs, won’t that affect the performance of the containers?

    A. Well implemented data reduction features (compression and deduplication) should have little to no effect on performance and should provide significant benefit by reducing the space required to store containers.

    Q. Can you please quickly review the concept of copy-on-write with one or two sentences to boil it down?

    A. How copy-on-write works depends on whether the driver is file or block based. For the sake of simplicity, let’s assume a file-based implementation. Since the image layers are read-only, we need an area to store the changes that the container has made. This area is the copy-on-write layer. When a process reads a file that has not been modified, the file is read from one of the read-only layers. When that file is modified and needs to be written back to disk, the new file is written to the copy-on-write layer, as is the metadata that describes the file. The next time this file is read, it is read from the copy-on-write layer. The graph driver is responsible for this functionality and varies by implementation.
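    To make that concrete, here is a purely illustrative Python model of a file-based copy-on-write stack: reads fall through the read-only image layers, and the first write lands in the writable top layer. It is a toy model of the idea, not how any particular graph driver is implemented.

        class CopyOnWriteFS:
            """Toy model of a file-based copy-on-write (union) filesystem."""

            def __init__(self, *image_layers: dict):
                self.image_layers = list(image_layers)  # read-only layers, lowest first
                self.cow_layer = {}                     # the container's writable layer

            def read(self, path: str) -> str:
                # Prefer the copy-on-write layer, then search image layers top-down.
                if path in self.cow_layer:
                    return self.cow_layer[path]
                for layer in reversed(self.image_layers):
                    if path in layer:
                        return layer[path]
                raise FileNotFoundError(path)

            def write(self, path: str, data: str) -> None:
                # Image layers are never touched; all changes land in the top layer.
                self.cow_layer[path] = data

        base = {"/etc/os-release": "NAME=Alpine"}
        app = {"/app/config": "mode=default"}
        fs = CopyOnWriteFS(base, app)

        print(fs.read("/app/config"))      # served from a read-only image layer
        fs.write("/app/config", "mode=tuned")
        print(fs.read("/app/config"))      # now served from the copy-on-write layer
        print(fs.read("/etc/os-release"))  # unmodified files still come from the image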

    Q. Can network locations be used for /data? If yes, how does the Docker Engine manage network authentication for the driver?

    A. Yes, network locations can be used. The best practice is to use the Local Volume Driver, where you can pass in the required authentication via the options (see slide 15). Alternatively, the network location can be mounted on the host operating system and exposed to containers (see slides 21 & 22).
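    As one example of passing options to the Local Volume Driver, the sketch below creates an NFS-backed volume by handing mount options to docker volume create; the server address and export path are placeholders, and credentials for other protocols (for example CIFS usernames and passwords) would be passed through the same --opt mechanism.

        import subprocess

        volume_name = "nfs-data"

        # Create a volume backed by an NFS export (placeholder address and path).
        subprocess.run([
            "docker", "volume", "create",
            "--driver", "local",
            "--opt", "type=nfs",
            "--opt", "o=addr=192.0.2.10,rw",
            "--opt", "device=:/exports/data",
            volume_name,
        ], check=True)

        # Any container can now mount the network-backed volume by name.
        subprocess.run([
            "docker", "run", "--rm",
            "-v", f"{volume_name}:/data",
            "alpine:3.18", "ls", "/data",
        ], check=True)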

    Q. Is this where VAAI like primitives would get implemented?

    A. VAAI defines several in-band primitives.  The Docker Volume plug-in framework is completely out-of-band.  There can be some overlap in features though.  For example, the XCOPY primitive can be used to offload ‘copy jobs’ to an array.  If the vendor chooses to do so, a ‘copy job’ can be offloaded through the Docker Volume plug-in as well.  For example, a plug-in might implement a “clone” option that provides this service.
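    To make the plug-in angle concrete, here is a deliberately minimal, hypothetical sketch of a volume plug-in that recognizes a clone option on create and could hand the actual copy work off to an array. Only the /Plugin.Activate and /VolumeDriver.Create endpoints follow the documented (legacy) plug-in protocol; the "clone-from" option name, the in-memory bookkeeping, and the use of Flask are illustrative assumptions, not any shipping vendor's implementation.

        from flask import Flask, jsonify, request  # pip install flask

        app = Flask(__name__)
        volumes = {}  # name -> metadata; a real plug-in would talk to the array here

        @app.route("/Plugin.Activate", methods=["POST"])
        def activate():
            # Tell Docker which plug-in APIs this endpoint implements.
            return jsonify({"Implements": ["VolumeDriver"]})

        @app.route("/VolumeDriver.Create", methods=["POST"])
        def create():
            body = request.get_json(force=True)
            name, opts = body["Name"], body.get("Opts") or {}
            source = opts.get("clone-from")  # hypothetical option name
            if source and source not in volumes:
                return jsonify({"Err": "unknown clone source: " + source})
            # A real driver would ask the array to clone the source volume here,
            # offloading the copy instead of moving bytes through the host.
            volumes[name] = {"cloned_from": source}
            return jsonify({"Err": ""})

        if __name__ == "__main__":
            # A real plug-in would listen on a unix socket registered with Docker.
            app.run(port=8080)

    With a driver like this installed, something along the lines of docker volume create -d my-driver -o clone-from=gold-volume new-volume would request the clone, with the copy itself performed by the backing storage.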

    Q. Could you share some details about Kubernetes storage? Persistent volumes and the difference from Docker volumes? Also, what is your perspective on Flocker?

    A. Kubernetes has the concept of persistent storage. This abstraction is also called a volume. In addition, Kubernetes provides a plug-in option as well. The Kubernetes implementation predates the Docker Volume plug-in and is currently not compatible with it.

    Q. Comment on mainframe: IBM runs Linux on zSeries, therefore can run Linux Docker containers.

    A. Thanks, that’s good to know.

    Q. How many operating system changes have there been on the x86 platform? How many on the mainframe platform? Can the x86 architecture run the same code/OS from 40 years ago? Docker on mainframe?

    A. The mainframe architecture has been very solid and consistent for many years.

    Q. What is a big challenge for storage in container environment?

    A. I don’t think storage has a challenge in the container environment. I think, with a properly implemented Docker Volume Plug-in, storage provides a solution to the persistent shared storage need in a container environment.

    Q. Do you ever look into RexRay or VMDK storage drivers?

    A. Yes, these are both examples of Docker Volume plug-in implementations.

     


    The Next Step for Containers: Best Practices and Data Management Services

    October 25th, 2016

    In our first SNIA Cloud webcast on containers, we provided a solid foundation on what containers are, container storage challenges and Docker. If you missed the live event, it’s now available on-demand. I encourage you to check it out, as well as our webcast Q&A blog.

    So now that we have set the stage and you’ve become acquainted with basic container technologies and the associated storage challenges in supporting applications running within containers in production, we will be back on December 7th. This time we will take a deeper dive into what differentiates this technology from what you are used to with virtual machines. Containers can both complement virtual machines and also replace them, as they promise the ability to scale exponentially higher. They can easily be ported from one physical server to another, or from one platform, such as on-premises infrastructure, to another, such as a public cloud provider like Amazon AWS.

    At our December 7th webcast, “Containers: Best Practices and Data Management Services,” we’ll explore container best practices to address the various challenges around networking, security and logging. We’ll also look at what types of applications more easily lend themselves to a microservice architecture versus which applications may require additional investments to refactor/re-architect to take advantage of microservices.

    On December 7th, we’ll be on hand to answer your questions on the spot. I encourage you to register today. We hope you can attend!


    The Current State of Storage in the Container World

    October 25th, 2016

    It seems like everyone is talking about containers these days, but not everyone is talking about storage – and they should be. The first wave of adoption of container technology was focused on micro services and ephemeral workloads. The next wave of adoption won’t be possible without persistent, shared storage. That’s why the SNIA Ethernet Storage Forum is hosting a live webcast on November 17th, “Current State of Storage in the Container World.” In this webcast, we will provide an overview of Docker containers and the inherent challenge of persistence when containerizing traditional enterprise applications.  We will then examine the different storage solutions available for solving these challenges and provide the pros and cons of each. You’ll hear:

    • An Overview of Containers
      • Quick history, where we are now
      • Virtual machines vs. Containers
      • How Docker containers work
      • Why containers are compelling for customers
      • Challenges
      • Storage
    • Storage Options for Containers
      • NAS vs. SAN
      • Persistent and non-persistent
    • Future Considerations
      • Opportunities for future work

    This webcast should appeal to those interested in understanding the basics of containers and how they relate to the storage used with them. I encourage you to register today! We hope you can make it on November 17th. And if you’re interested in keeping up with all that SNIA is doing with containers, please sign up for our Containers Opt-In Email list and we’ll be sure to keep you posted.

     


    SNIA Storage Developer Conference – The Knowledge Continues

    October 13th, 2016

    SNIA’s 18th Storage Developer Conference is officially a success, with 124 general and breakout sessions; Cloud Interoperability, Kinetic Storage, and SMB3 plugfests; ten Birds-of-a-Feather sessions; and amazing networking among 450+ attendees. Sessions on NVMe over Fabrics won the title of most attended, but Persistent Memory, Object Storage, and Performance were right behind. Many thanks to SDC 2016 Sponsors, who engaged attendees in exciting technology discussions.

    For those not familiar with SDC, this technical industry event is designed for a variety of storage technologists at various levels from developers to architects to product managers and more.  And, true to SNIA’s commitment to educating the industry on current and future disruptive technologies, SDC content is now available to all – whether you attended or not – for download and viewing.

    You’ll want to stream keynotes from Citigroup, Toshiba, DSSD, Los Alamos National Labs, Broadcom, Microsemi, and Intel – they’re available now on demand on SNIA’s YouTube channel, SNIAVideo.

    All SDC presentations are now available for download, and over the next few months you can continue to download SDC podcasts, which combine audio and slides. The first podcast from SDC 2016 (on hyperscalers) is available here, along with all 2015 SDC podcasts, and more will be added in the coming weeks.

    SNIA thanks all its members and colleagues who contributed to make SDC a success! A special thanks goes out to the SNIA Technical Council, a select group of acknowledged industry experts who work to guide SNIA technical efforts. In addition to driving the agenda and content for SDC, the Technical Council oversees and manages SNIA Technical Work Groups, reviews architectures submitted by Work Groups, and is the SNIA’s technical liaison to standards organizations. Learn more about these visionary leaders at http://www.snia.org/about/organization/tech_council.

    And finally, don’t forget to mark your calendars now for SDC 2017 – September 11-14, 2017, again at the Hyatt Regency Santa Clara. Watch for the Call for Presentations to open in February 2017.


    The Everything You Want To Know About Storage Is On Again With Part Mauve – The Architecture Pod

    October 11th, 2016

    The first installment of our “colorful” Webcast series, “Everything You Wanted To Know about Storage But Were Too Proud To Ask – Part Chartreuse,” covered the fundamental elements of storage systems. If you missed it, you can check it out on-demand. On November 1st, we’ll be back at it, focusing on the network aspect of storage systems with “Everything You Wanted To Know About Storage But Were Too Proud To Ask – Part Mauve.”

    As with any technical field, it’s too easy to dive into the jargon of the pieces and expect people to know exactly what you mean. Unfortunately, some of the terms may have alternative meanings in other areas of technology. In this Webcast, we look at some of those terms specifically and discuss them as they relate to storage networking systems.

    In particular, you’ll find out what we mean when we talk about:

    • Channel versus Busses
    • Control Plane versus Data Plane
    • Fabric versus Network

    Register now for Part Mauve of “Everything You Wanted To Know About Storage But Were Too Proud to Ask.”

    For people who are familiar with data center technology, whether it be compute, programming, or even storage itself, some of these concepts may seem intuitive and obvious… until you start talking to people who are really into this stuff. This series of Webcasts will help be your Secret Decoder Ring to unlock the mysteries of what is going on when you hear these conversations. We hope to see you there!

     

     

     


    The Changing World of SNIA Technical Work – A Conversation with Technical Council Chair Mark Carlson

    August 3rd, 2016

    Mark Carlson is the current Chair of the SNIA Technical Council (TC). Mark has been a SNIA member and volunteer for over 18 years, and also wears many other SNIA hats. Recently, SNIA on Storage sat down with Mark to discuss his first nine months as the TC Chair and his views on the industry.

    SNIA on Storage (SoS):  Within SNIA, what is the most important activity of the SNIA Technical Council?

    Mark Carlson (MC): The SNIA Technical Council works to coordinate and approve the technical work going on within SNIA. This includes both SNIA Architecture (standards) and SNIA Software. The work is conducted within 13 SNIA Technical Work Groups (TWGs). The members of the TC are elected from the voting companies of SNIA, and the Council also includes appointed members and advisors as well as SNIA regional affiliate advisors.

    SoS:  What has been your focus this first nine months of 2016?   

    MC: The SNIA Technical Council has overseen a major effort to integrate a new standards organization into SNIA. The creation of the new SNIA SFF Technology Affiliate (TA) Technical Work Group has brought in a very successful group of folks and standards related to storage connectors and transceivers. This work group, formed in June 2016, carries forward the longstanding work of the SFF Committee, which operated from 1990 until mid-2016. In 2016, SFF Committee leaders transitioned organizational stewardship to SNIA, to operate under a special membership class named Technology Affiliate, while retaining the long-standing technical focus on specifications, much as all SNIA TWGs do.

    SoS:  What changes did SNIA implement to form the new Technology Affiliate membership class and why?

    MC: The SNIA Policy and Procedures were changed to account for this new type of membership.  Companies can now join an Affiliate TWG without having to join SNIA as a US member.  Current SNIA members who want to participate in a Technology Affiliate like SFF can join a Technology Affiliate and pay the separate dues.  The SFF was a catalyst – we saw an organization looking for a new home as its membership evolved and its leadership transitioned.  They felt SNIA could be this home but we needed to complete some activities to make it easier for them to seamlessly continue their work.   The SFF is now fully active within SNIA and also working closely with T10 and T11, groups that SNIA members have long participated in.

    SoS:  Is forming this Technology Affiliate a one-time activity?

    MC: Definitely not. SNIA is actively seeking organizations that are looking for the structure SNIA provides: IP policies, established infrastructure to conduct their work, and 160+ leading companies with volunteers who know storage and networking technology.

    SoS:  What are some of the customer pain points you see in the industry?

    MC: Critical pain points the TC has started to address with new TWGs over the last 24 months include: performance of solid state storage arrays, where the SNIA Solid State Storage Systems (S4) TWG is working to identify, develop, and coordinate system performance standards for solid state storage systems; and object drives, where work is being done by the Object Drive TWG to identify, develop, and coordinate standards for object drives operating as storage nodes in scale out storage solutions. With the number of different future disk drive interfaces emerging that add value from external storage to in-storage compute, we want to make sure they can be managed at scale and are interoperable.

    SoS:  What’s upcoming for the next six months?

    MC: The TC is currently working on a white paper to address data center drive requirements and the features and existing interface standards that satisfy some of those requirements.  Of course, not all the solutions to these requirements will come from SNIA, but we think SNIA is in a unique position to bring in the data center customers that need these new features and work with the drive vendors to prototype solutions that then make their way into other standards efforts.  Features that are targeted at the NVM Express, T10, and T13 committees would be coordinated with these customers.

    SoS:  Can non-members get involved with SNIA?

    MC:   Until very recently, if a company wanted to contribute to a software project within SNIA, they had to become a member. This was limiting to the community, and cut off contributions from those who were using the code, so SNIA has developed a convenient Contributor License Agreement (CLA) for contributions to individual projects.  This allows external contributions but does not change the software licensing. The CLA is compatible with the IP protections that the SNIA IP Policy provides to our members.  Our hope is that this will create a broader community of contributors to a more open SNIA, and facilitate open source project development even more.

    SoS:  Will you be onsite for the upcoming SNIA Storage Developer Conference (SDC)?

    MC: Absolutely!  I look forward to meeting SNIA members and colleagues September 19-22 at the Hyatt Regency Santa Clara.  We have a great agenda, now online, that the TC has developed for this, our 18th conference, and registration is now open.  SDC brings in more than 400 of the leading storage software and hardware developers, storage product and solution architects, product managers, storage product quality assurance engineers, product line CTOs, storage product customer support engineers, and in–house IT development staff from around the world.  If technical professionals are not familiar with the education and knowledge that SDC can provide, a great way to get a taste is to check out the SDC Podcasts now posted, and the new ones that will appear leading up to SDC 2016.


    Linear Tape File System Now an International Standard

    June 9th, 2016

    By David Pease, Co-Chair SNIA Linear Tape File System Technical Working Group

    In 2011 the Linear Tape File System (LTFS) earned IBM an Engineering Emmy Award after being recognized by FOX Networks for “improving the ability of media companies to capture, manage and exploit content in digital form, fundamentally changing the way that audio and video content is managed and stored.”  Now, the International Organization for Standardization (ISO) has named LTFS an International Standard (ISO/IEC 20919:2016).

    LTFS’s road to standardization was a long one.  It started with IBM and the LTO (Linear Tape Open) Consortium jointly publishing the LTFS Format Specification as an open format in April, 2010, the day that LTFS was announced at the NAB (National Association of Broadcasters) show in Las Vegas.  In 2012, at the invitation of the Storage Networking Industry Association (SNIA), we formed the SNIA LTFS Technical Work Group, with a specific goal of moving towards international standardization.  The LTFS TWG and SNIA proceeded to publish several revisions of the LTFS Format Specification, inviting all interested parties to join the work group and contribute, or to comment on the specification before formal publication.  In 2014 SNIA helped the LTFS TWG format the then-current version of the specification (V2.2) to ISO standards and worked with the ISO organization to publish the specification as a draft standard and solicit comments.  After review and comments, the LTFS Format Specification was approved by ISO as an international standard in April of 2016 (just 6 years after it was first announced).

    We are thrilled by the recognition of LTFS as an ISO standard; it is one more step towards guaranteeing that the LTFS format is a truly open standard that will continue to be available and usable for the foreseeable future.  In my opinion, two of the major inhibitors to the widespread use of tape technology for data storage have been the lack of a standard format for data storage and interchange on tape, and its perceived difficulty of use. LTFS addresses both of these problems by providing a general-purpose, open format that can easily be used like any other storage medium.

    As the world’s data continues to grow at an increasing pace, and the need for affordable, large-scale storage becomes more important, the standardization of LTFS will make the use of tape for long-term, affordable storage easier and more attractive.

    Use Case: Making Digital Media Storage Open and Future-Proof

    Just as in personal photography, the last couple of decades have seen a major shift from analog and film technologies to digital ones in the Media and Entertainment industry, where modern cameras record directly to digital media. This has led to the need for new technologies to replace traditional film as a long-term storage medium for television and movies.

    Film has some specific advantages for the Media & Entertainment industry that a new technology needs to replicate, including a long shelf life, inexpensive, zero-power storage, and a format that is “future-proof.” Tape storage is a perfect match for several of these criteria, including a long (30+ years) shelf life and zero-power, inexpensive storage. However, a stumbling block for the widespread acceptance of tape for digital storage in the media and entertainment business had been the lack of an open, easy-to-use, future-proof standard for the format of the data on tape. You can imagine an entertainment company using proprietary storage software, for example, only to run into problems like the provider going out of business or increasing its software costs to an unacceptable level.

    We created LTFS to be an open and future-proof format from the beginning: open, because when we published the format, we made it publicly available at no charge, and future-proof because the format is self-documenting and can be easily accessed without the need for proprietary software.

    Being an international standard should make anyone who is considering the use of LTFS even more comfortable with the fact that it is an open standard that is not owned or controlled by any single company, and is a format that will continue to be supported in the future.  As such, becoming an international standard has the potential to increase the use, and therefore the value, of LTFS across industries.

    For more information about the work of the SNIA LTFS TWG, please visit www.snia.org/ltfs.


    Podcasts Bring the Sounds of SNIA’s Storage Developer Conference to Your Car, Boat, Train, or Plane!

    May 26th, 2016

    SNIA’s Storage Developer Conference (SDC) offers exactly what a developer of cloud, solid state, security, analytics, or big data applications is looking for – rich technical content delivered in a vendor-neutral manner by today’s leading technologists. The 2016 SDC agenda is being compiled, but you can get a “sound bite” of what to expect now by downloading SDC podcasts via iTunes, or by visiting the SDC Podcast site at http://www.snia.org/podcasts to download the accompanying slides and/or listen to the MP3 version.

    Each podcast was selected by the SNIA Technical Council from the 2015 SDC event; topics include:

    • Preparing Applications for Persistent Memory from Hewlett Packard Enterprise
    • Managing the Next Generation Memory Subsystem from Intel Corporation
    • NVDIMM Cookbook – a Soup to Nuts Primer on Using NVDIMMs to Improve Your Storage Performance from AgigA Tech and Smart Modular Systems
    • Standardizing Storage Intelligence and the Performance and Endurance Enhancements It Provides from Samsung Corporation
    • Object Drives, a New Architectural Partitioning from Toshiba Corporation
    • Shingled Magnetic Recording – the Next Generation of Storage Technology from HGST, a Western Digital Company
    • SMB 3.1.1 Update from Microsoft

    Eight podcasts are now available, with new ones added each week all the way up to SDC 2016, which begins September 19 at the Hyatt Regency Santa Clara. Keep checking the SDC Podcast website, and remember that registration is now open for the 2016 event at http://www.snia.org/events/storage-developer/registration. The SDC conference agenda will be up soon at the home page of http://www.storagedeveloper.org.

    Enjoy these great technical sessions, no matter where you may be!