
    Principles of Networked Solid State Storage – Q&A

    June 22nd, 2016

    At this month’s SNIA Ethernet Storage Forum Webcast, “Architectural Principles for Networked Solid State Storage Access,” Doug Voigt, Chair of the SNIA NVM Programming Technical Working Group, and a member of the SNIA Technical Council, outlined key architectural principles surrounding the application of networked solid state technologies. We had a flurry of questions near the end of the Webcast that we did not have enough time to answer. Here are Doug’s answers to all the questions we received during the event:

    Q. Are there wait cycles in accessing persistent memory?

    A. It depends entirely on which persistent memory (PM) technology is being accessed and how the memory interconnect is used.  Some technologies have write times that are quite different from read times.  When using tightly timed interconnects such as DDR with those technologies it may be difficult to avoid wait cycles.

    Q. How do Pmalloc and malloc share the virtual address space of the application?

    A. This is entirely up to the OS and other libraries operating within any constraints of the processor architecture-specific memory management units.  A good mental model would be fairly large regions of contiguous address space in both the physical and virtual domains, where each region will comprise a single type of memory. Capacity will be reserved for pmalloc and malloc in the appropriate regions.
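
    As a purely illustrative sketch of that mental model (pmalloc is not a standard API, and the path /mnt/pmem/heap is hypothetical): malloc serves the ordinary DRAM-backed heap, while a pmalloc-style allocator hands out space from a separate persistent-memory-backed region mapped into the same virtual address space.

```c
/* Hypothetical illustration only: a toy pmalloc over a PM-backed mapping,
 * alongside ordinary malloc.  Error handling is omitted for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

static char  *pm_region;   /* contiguous PM-backed virtual region */
static size_t pm_used;

/* toy bump allocator standing in for pmalloc() */
static void *pmalloc(size_t n)
{
    void *p = pm_region + pm_used;
    pm_used += (n + 15) & ~(size_t)15;   /* keep 16-byte alignment */
    return p;
}

int main(void)
{
    int fd = open("/mnt/pmem/heap", O_CREAT | O_RDWR, 0644);  /* hypothetical path */
    ftruncate(fd, 1 << 20);
    pm_region = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    void *dram_buf = malloc(4096);     /* lives in the DRAM heap region */
    void *pm_buf   = pmalloc(4096);    /* lives in the PM-backed region */

    printf("malloc  -> %p\npmalloc -> %p\n", dram_buf, pm_buf);
    free(dram_buf);
    return 0;
}
```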

    Q. Always flush after doing your memory-mapped IO.  Is that simply good hygiene?

    A. Not exactly. The term “Memory Mapped IO” is used to reference control plane (as opposed to data plane) access.  It is often reasonable to set up control plane memory as uncacheable. The need for strict order of access to physical control plane registers is so pervasive that caching is generally not useful. Uncacheable writes are always flushed by the processor, as opposed to the application.

    Generally with memory mapped IO devices the data plane uses direct memory access (DMA).  With memory mapped files (as opposed to memory mapped IO) Load/Store (more commonly referred to as “Ld/St”), not DMA, is used in the data plane. Disabling caching in the data plane is generally a big performance sacrifice for small byte range access.

    In the Ld/St datapath, strategically placed flushing is required to retain both performance and power failure recovery. The SNIA NVM Programming Model describes this type of functionality.
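
    As a rough sketch of that strategically placed flushing (not code from the webcast; the path /mnt/pmem/log is hypothetical and error handling is omitted): the application stores into a memory-mapped file and then explicitly flushes and fences just the bytes it wrote. On a mapping that is not backed by persistent memory, msync() would be the appropriate flush instead.

```c
/* Minimal sketch of the load/store data path with explicit flushing,
 * in the spirit of the SNIA NVM Programming Model. */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <immintrin.h>          /* _mm_clflush, _mm_sfence */

#define CACHELINE 64

static void flush_range(const void *addr, size_t len)
{
    const char *p   = (const char *)((uintptr_t)addr & ~(uintptr_t)(CACHELINE - 1));
    const char *end = (const char *)addr + len;
    for (; p < end; p += CACHELINE)
        _mm_clflush(p);         /* push each dirty cache line toward the PM */
    _mm_sfence();               /* order the flushes before later stores    */
}

int main(void)
{
    int fd = open("/mnt/pmem/log", O_CREAT | O_RDWR, 0644);   /* hypothetical path */
    ftruncate(fd, 4096);
    char *pm = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(pm, "record 1");     /* plain store into the mapped file       */
    flush_range(pm, 9);         /* flush only the bytes that were written */

    munmap(pm, 4096);
    close(fd);
    return 0;
}
```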

    Q. Once NVDIMM support becomes pervasive, with support from NVMe drives in the server box, should networked storage be more focused on SAS Flash or just SAS HDDs?

    A. Not necessarily.  NVMe over Fabric, Fibre Channel and iSCSI are also types of networked storage that will likely retain significant market share relative to SAS.

    Q. Are the ‘Big Data’ data warehouse applications starting to use persistent memory and persistence domain technologies in their applications?

    A. It is too early to see much of this yet. PM technologies might become a priority as a staging area for analytic applications with high ingest or checkpoint rates. NVDIMMs are likely to be too expensive to store anything “big” for quite a while.

    Q. Also, are persistent memory and persistence domains being used in hyper-converged and converged hardware infrastructures?

    A. Persistent memory is quintessentially (Hyper-) converged.  It wouldn’t be unreasonable to expect some traction with hyper-converged solutions that experience high storage-performance demand.

    Q. What distance would you associate with 10’s of microseconds?

    A. In terms of transmission delay, tens of microseconds align with a campus or small-city scale, but the distance itself is often not the primary factor.  Switching delays, transmission line properties and software overhead are generally bigger factors.

    Q. So latency would be the binding factor for distances…not a question, an observation.

    A. Yes, in effect, either through transmission or relay.  See above.

    Q. Aren’t there multi-threaded SSDs?

    A. Yes, but since the primary metric in this presentation is latency we ignore multi-threading.  It can enable more work to get done, but it generally increases latency rather than reducing it.

    Q. Is pmalloc in universal usage?

    A. The term is starting to be recognized among developers and has been used in research. Various similar names have been used in early research prototypes, such as pmalloc in Mnemosyne and nvmalloc in SCMFS.

    Q. So how would PM help in a (stock broking) requirement, where we currently prophesize an RDMA or iWARP solution?

    A. With PM the answer is always lower latency.  PM can be integrated like memory or like flash. RDMA network paths for both of these options were discussed in the presentation. In either case, PM is low-latency enough that networking and software overheads will completely determine performance, even when using RDMA. The performance boost from PM is greatest when it is accessed locally.  If remote access is a requirement then the new work being done in the RDMA community should help.

    Q. If data stored in memory needs to be copied to a different host’s memory (for consistency), how does PM assist, or is there an extension to PM? Coherency between multiple hosts in a cluster, if you will?

    A. PM technology does not help with this; the methods of managing consistency across hosts remain unchanged by PM.  All PM offers is low latency persistence.

    Coordination across hosts or nodes in a cluster must use existing clustering techniques such as locking and quorums. In addition, the relative timescales of memory access and network communication suggest the application of asynchronous remote replication techniques used in today’s storage solutions.

    Regarding coherency, PM brings nothing new to the known techniques for managing coherency.  Classical cluster architecture must be applied outside of symmetric multi-processing coherency domains. Within coherency domains, all of the logic is above the PM level in a processor side memory controller or a software emulation of the same algorithms.


    Podcasts Bring the Sounds of SNIA’s Storage Developer Conference to Your Car, Boat, Train, or Plane!

    May 26th, 2016

    SNIA’s Storage Developer Conference (SDC) offers exactly what a developer of cloud, solid state, security, analytics, or big data applications is looking for – rich technical content delivered in a no-vendor-bias manner by today’s leading technologists.  The 2016 SDC agenda is being compiled, but now you can get a “sound bite” of what to expect by downloading SDC podcasts via iTunes, or visiting the SDC Podcast site at http://www.snia.org/podcasts to download the accompanying slides and/or listen to the MP3 version.

    The podcasts have been selected by the SNIA Technical Council from the 2015 SDC event, and include topics like:

    • Preparing Applications for Persistent Memory from Hewlett Packard Enterprise
    • Managing the Next Generation Memory Subsystem from Intel Corporation
    • NVDIMM Cookbook – a Soup to Nuts Primer on Using NVDIMMs to Improve Your Storage Performance from AgigA Tech and Smart Modular Systems
    • Standardizing Storage Intelligence and the Performance and Endurance Enhancements It Provides from Samsung Corporation
    • Object Drives, a New Architectural Partitioning from Toshiba Corporation
    • Shingled Magnetic Recording- the Next Generation of Storage Technology from HGST, a Western Digital Company
    • SMB 3.1.1 Update from Microsoft

    Eight podcasts are now available, with new ones added each week all the way up to SDC 2016 which begins September 19 at the Hyatt Regency Santa Clara.  Keep checking the SDC Podcast website, and remember that registration is now open for the 2016 event at http://www.snia.org/events/storage-developer/registration.  The SDC conference agenda will be up soon at the home page of http://www.storagedeveloper.org.

    Enjoy these great technical sessions, no matter where you may be!


    Your Questions Answered on NVDIMM

    May 23rd, 2016

    The recent NVDIMM webcasts on the SNIA BrightTALK Channel sparked many questions from the almost 1,000 viewers who have watched them live or downloaded the on-demand casts. Now, NVDIMM SIG Chairs Arthur Sainio and Jeff Chang answer 35 of them in this blog.  Did you miss the live broadcasts? No worries, you can view NVDIMM and other webcasts on the SNIA webcast channel https://www.brighttalk.com/channel/663/snia-webcasts.

    FUTURES QUESTIONS

    What timeframe do you see server hardware, OS, and applications readily adopting/supporting/recognizing NVDIMMs?

    DDR4 server and storage platforms are ready now. There are many off-the-shelf server and/or storage motherboards that support NVDIMM-N.

    Linux version 4.2 and beyond has native support for NVDIMMs. All the necessary drivers are supported in the OS.

    NVDIMM adoption is in progress now.

    Technical Preview 5 of Windows Server 2016 has NVDIMM-N support.

    How, if at all, does the positioning of NVDIMM-F change after the eventual introduction of new NVM technologies?

    If 3DXP is successful it will likely have a big impact on NVDIMM-F. 3DXP could be seen as an advanced version of an NVDIMM-F product. It sits directly on the DDR4 bus and is byte addressable.

    NVDIMM-F products have the challenge of being made byte addressable, depending on what kind of persistent media is used.

    If NAND flash is used, it would take a lot of techniques and resources to make such a product byte addressable.

    On the other hand, if the new NVM technologies bring out persistent media that are byte addressable, then NVDIMM-F could easily use them for its backend.

    How does NVDIMM-N compare to Intel’s 3DXPoint technology?

    At this point there is limited technical information available on 3DXP devices.

    When the specifications become available the NVDIMM SIG can create a comparison table.

    NVDIMM-N products are available now. 3DXP-based products are planned for 2017–2018. Theoretically, 3DXP devices could be used on NVDIMM-N-type modules.


    PERFORMANCE AND ENDURANCE QUESTIONS

    What are the NVDIMM performance and endurance requirements?

    NVDIMM-N is no different from an RDIMM under normal operating conditions. The endurance of the Flash or NVM technology used on the NVDIMM-N is not a critical factor since it is only used for backup.

    NVDIMM-F would depend on various factors: (1) Is the backend going to be NAND Flash or some other entity? (2) What kind of access pattern is going to be generated by the application? The performance must be at least the same as that of NVDIMM-N.

    Are there endurance requirements for NVDIMM-F? Won’t the flash wear out quickly when used as memory?

    Yes. Using Flash as a random-access device with memory-like access characteristics would definitely have an impact on endurance.

    NVDIMM-F – Don’t the performance limitations of NAND vs. DRAM affect the application?

    In a device-to-device comparison, NAND Flash will never match the performance of DRAM. Looking at the whole solution, however, the traditional path from DRAM data to persistent data passes through a good number of software layers, each adding delay, whereas with NVDIMM-F the data becomes persistent almost instantly, at the cost of only a small additional latency.

    Is there extra heat being generated? Does it need any additional cooling (NVDIMM-F, NVDIMM-N)?

    No.

    In general, our testing of NVDIMM-F vs. PCIe-based SSDs has not shown the expected value of NVDIMMs.  The PCIe-based NVMe storage still outperforms the NVDIMMs.

    TBD

    What is the amount of overhead that NVDIMMs add to CPUs?

    None during normal operation.

    What can you say about the time typically required to charge the supercaps?  Is the application aware of that status before the charge is complete?

    Approximately two minutes depending on the density of the NVDIMM and the vendor.

    The NVDIMM will not be ready until charging is complete; the system BIOS waits on the charging status, and times out if the NVDIMM is not functioning.

    USE QUESTIONS

    What will happen if a system crashes and then comes back before the NVDIMM finishes its backup? How does the OS know where to continue, since the state in the registers and L1/L2/L3 caches is already lost?

    When the system comes back up, it will check whether there is valid data backed up in the NVDIMM. If so, the backed-up data will be restored before the BIOS sets up the system.

    The OS can’t depend on the contents of the L1/L2/L3 cache. Applications must do I/O fencing, use commit points, etc. to guarantee data consistency.

    The power supply should be able to hold power for at least 1 ms after the warning of AC power loss.
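
    One common commit-point pattern looks like the sketch below (an illustration only, assuming an NVDIMM-backed mapping and x86 flush/fence intrinsics; the record layout is made up): the payload is written and persisted first, and only then is a valid flag set and persisted, so a crash can never leave the flag set without a complete payload.

```c
/* Sketch of a commit-point (valid-flag) update over persistent memory. */
#include <stdint.h>
#include <string.h>
#include <immintrin.h>          /* _mm_clflush, _mm_sfence */

struct record {
    char     payload[56];
    uint64_t valid;             /* commit flag, written last */
};

static void persist(const void *p, size_t len)
{
    for (size_t off = 0; off < len; off += 64)
        _mm_clflush((const char *)p + off);
    _mm_sfence();
}

/* 'rec' is assumed to point into an NVDIMM-backed mapping set up elsewhere. */
void commit_record(struct record *rec, const char *data)
{
    strncpy(rec->payload, data, sizeof rec->payload);
    persist(rec->payload, sizeof rec->payload);   /* payload made durable first */

    rec->valid = 1;                               /* the commit point           */
    persist(&rec->valid, sizeof rec->valid);      /* then the flag              */
}
```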

    Is there garbage collection on NVDIMMs?

    This depends on individual vendors. NVDIMM-N may have overprovisioning and wear-leveling management for the NAND Flash.

    Garbage collection really only makes sense for NVDIMM-F.

    How is byte addressing enabled for NAND storage?

    By default, NAND storage can be addressed only through block-mode addressing. If byte addressability is desired, then the DDR memory at the front must provide sophisticated caching techniques to trick the host memory controller into thinking that it is actually accessing a larger-capacity DDR memory.
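
    The sketch below is a toy model of that idea (purely illustrative, not any vendor’s controller design; nand_read_block is a hypothetical back-end routine): a byte-granularity read is served from a DRAM-cached copy of the 4 KB NAND block that contains it, and the whole block is fetched on a miss.

```c
/* Toy model of byte addressability layered over block-mode NAND. */
#include <stdint.h>

#define BLOCK_SIZE 4096

/* hypothetical stand-in for the block-mode NAND back end */
extern void nand_read_block(uint64_t block_no, uint8_t *buf);

static uint8_t  cache_data[BLOCK_SIZE];    /* one-entry DRAM front cache */
static uint64_t cached_block = UINT64_MAX; /* no block cached yet        */

uint8_t read_byte(uint64_t addr)
{
    uint64_t block  = addr / BLOCK_SIZE;
    uint64_t offset = addr % BLOCK_SIZE;

    if (block != cached_block) {            /* miss: fetch the whole block  */
        nand_read_block(block, cache_data);
        cached_block = block;
    }
    return cache_data[offset];              /* hit: byte-granularity access */
}
```
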
    Is the restore command issued over the I2C bus?  Is that also known as the SMBus?

    Yes, and yes.

    Could NVDIMM-F products be used as both storage and memory within the same server?

    NVDIMM-F is by definition only block storage. NVDIMM-P is both (block) storage and memory.


    COMPATIBILITY QUESTIONS


    Is NVDIMM-N support built into the OS or do the NVDIMM vendors need to provide drivers? What OS’s (Windows version, Linux kernel version) have support?

    In Linux, generic NVDIMM-N support is available from kernel version 4.2 onward.

    All the necessary drivers are provided in the OS itself.

    Among the Linux distributions, only Fedora and Ubuntu have moved up to a 4.x kernel so far.

    The crucial aspect is the BIOS/MRC support needed for the vendor-specific NVDIMM-N to be exposed to the host OS.

    MS Windows has OS support – it needs to be downloaded.

    What OS support is available for NVDIMM-F? I’m assuming some sort of driver is required.

    Diablo has said they worked with the BIOS vendors to enable their Memory1 product. We need to check with them.

    For other NVDIMM-F vendors they would likely require drivers.

    As of now no native OS support is available.

    Will NVDIMMs work with typical Intel servers that are 2-3 years old? What are the hardware requirements?

    That depends on the CPU. For Haswell, Grantley, Broadwell, and Purley, NVDIMM-N is and/or will be supported.

    The hardware requires that the CPLD, SAVE, and ADR signals are present.

    Is RDMA compatible with NVDIMM-F or NVDIMM-N?

    RDMA (Remote Direct Memory Access) is not available by default for NVDIMM-N and NVDIMM-F.

    A software layer/extension needs to be written to accommodate that. Work is in progress by the PMEM community (www.pmem.io) to make the RDMA feature available transparently to applications in the future.

    SNIA Reference: http://www.snia.org/sites/default/files/SDC15_presentations/persistant_mem/ChetDouglas_RDMA_with_PM.pdf

    What’s the highest capacity that an NVDIMM-N can support?

    Currently 8GB and 16GB but this depends on individual vendor’s roadmaps.


    COST QUESTIONS

    What is the NVDIMM cost going to look like compared to other flash type storage options?

    This relates directly to what types and quantities of Flash, DRAM, controllers and other components are used for each type.


    MISCELLANEOUS QUESTIONS

    How many vendors offer NVDIMM products?

    AgigA Tech, Diablo, Hynix, Micron, Netlist, PNY, SMART, and Viking Technology are among the vendors offering NVDIMM products today.


    Is encryption on the NVDIMM handled by the controller on the NVDIMM or the OS?

    Encryption on the NVDIMM is under discussion at JEDEC. There has been no standard encryption method adopted yet.

    If the OS encrypts data in memory, the contents of the NVDIMM backup would already be encrypted, eliminating the need for the NVDIMM to perform encryption. However, because of the performance penalty of OS encryption, NVDIMM encryption is being considered by NVDIMM vendors.

    Are memory operations what is known as DAX?

    DAX means Direct Access, an optimization used in modern file systems (particularly ext4) to eliminate the kernel page cache for holding write data. With no intermediate cache buffers, write operations go directly to the media, which makes writes persistent as soon as they are committed.
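
    A short sketch of what this looks like to an application (assuming a file system mounted with the DAX option on a hypothetical /mnt/pmem mount point; error handling omitted): stores through the mapping reach the media without a page-cache copy, and msync() commits the written range without any page writeback.

```c
/* Sketch of writing through a mapping on a DAX-mounted file system. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/pmem/data", O_CREAT | O_RDWR, 0644);  /* hypothetical path */
    ftruncate(fd, 4096);

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    memcpy(p, "hello, dax", 11);    /* store lands directly on the media     */
    msync(p, 4096, MS_SYNC);        /* commit; no intermediate cache buffers */

    munmap(p, 4096);
    close(fd);
    return 0;
}
```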

    Can you give some practical examples of where you would use NVDIMM-N, -F, and –P?

    NVDIMM-N: load/store byte access for journaling, tiering, caching, write buffering and metadata storage

    NVDIMM-F: block access for in-memory database (moving NAND to the memory channel eliminates traditional HDD/SSD SAS/PCIe link transfer, driver, and software overhead)

    NVDIMM-P: can be used in either NVDIMM-N or –F applications

    Are reads and writes all the same latency for NVDIMM-F?

    The answer depends on what kind of persistent layer is used.   If it is the NAND flash, then the random writes would have higher latencies when compared to the reads. If the 3D XPoint kind of persistent layer is used, it might not be that big of a difference.


    I have interest in the NVDIMMs being used as a replacement for SSD and concerns about clearing cache (including credentials) stored as data moves from NVM to PM on an end user device

    The NVDIMM-N uses serialization and fencing with Intel instructions to guarantee data is in the NVDIMM before a power failure and ADR.


    I am interested in how many banks of NVDIMMs can be added to create a very large SSD replacement in a server storage environment.

    NVDIMMs are added to a system in memory module slots. The current maximum density is 16GB or 32GB. Server motherboards may have 16 or 24 slots. If 8 of these slots held 16GB NVDIMMs, that would be like a 128GB SSD.

    What are the environmental requirements for NVDIMMs (power, cooling, etc.)?

    Some components on NVDIMMs, such as flash and FPGA devices, have a lower maximum operating temperature than those on RDIMMs. Refer to each vendor’s data sheet for more information. Backup energy sources based on ultracapacitors require health monitoring and a controlled thermal environment to ensure an extended product life.

    How about data-at-rest protection management? Is the data in NVDIMM protected/encrypted? Complying with TCG and FIPS seems very challenging. What are the plans to align with these?

    As of today, encryption has not been standardized by JEDEC. It is currently up to each NVDIMM vendor whether or not to provide encryption.


    Could you explain the relationship between the NVDIMM and the IO stack?

    In PMEM mode, the kernel presents the NVDIMM as reserved memory, directly accessible by the host memory controller.

    In block mode, the kernel driver presents the NVDIMM as a block device to the I/O block layer.

    With NVDIMMs the data can be in memory or storage. How is the data fragmentation managed?

    The NVDIMM-N is managed as regular memory. The same memory allocation fragmentation issues and handling apply. The NVDIMM-F behaves like an SSD. Fragmentation issues on an NVDIMM-F are handled like an SSD with garbage collection algorithms.


    Is there a plan to support PI-type data protection for NVDIMM data? If not, E2E data protection cannot be attained.

    As of today, encryption has not been standardized by JEDEC. It is currently up to each NVDIMM vendor whether or not to provide encryption.


    Since NVDIMM is still slower than DRAM, do we still need DRAM in the system? We cannot get rid of DRAM yet?

    With NVDIMM-N, DRAM is still being used. NVDIMM-N operates at the speed of a standard RDIMM.

    With NVDIMM-F modules, DRAM memory modules are still needed in the system.

    With NVDIMM-P modules, DRAM memory modules are still needed in the system.

    Can you use NVMe over Ethernet?

    NVMe over Fabrics is under discussion within SNIA: http://www.snia.org/sites/default/files/SDC15_presentations/networking/WaelNoureddine_Implementing_%20NVMe_revision.pdf



    Architectural Principles for Networked Solid State Storage Access

    May 20th, 2016

    There are many permutations of technologies, interconnects and application level approaches in play with solid state storage today.  In fact, it is becoming increasingly difficult to reason clearly about which problems are best solved by various permutations of these. That’s why the SNIA Ethernet Storage Forum, together with the SNIA Solid State Storage Initiative, is hosting a live Webcast, “Architectural Principles for Networked Solid State Storage Access,” on June 2nd at 10:00 a.m. PT.

    As our presenter, we are fortunate to have Doug Voigt, chair of the SNIA NVM Programming Technical Working Group and a member of the SNIA Technical Council. Doug will outline key architectural principles that may allow us to think about the application of networked solid state technologies more systematically, answering questions such as:

    • How do applications see IO and memory access differently?
    • What is the difference between a memory and an SSD technology?
    • How do application and technology views permute?
    • How do memory and network interconnects change the equation?
    • What are persistence domains and why are they important?

    I hope you’ll register today and join us on June 2nd for an hour that is sure to be insightful.


    SNIA’s Persistent Memory Education To Be Featured at Open Server Summit 2016

    April 12th, 2016

    If you are in Silicon Valley or the Bay Area this week, SNIA welcomes you to join them and the Solid State Storage Initiative April 13-14 at the Santa Clara Convention Center for Open Server Summit 2016, the industry’s premier event that focuses on the design of next-generation servers, with topics on data center efficiency, SSDs, core OS, cloud server design, the future of open server and open storage, and other efforts toward combining industry-standard hardware with open-source software.

    The SNIA NVDIMM Special Interest Group is featured at OSS 2016, and will host a panel Thursday April 14 on NVDIMM technology, moderated by Bill Gervasi of JEDEC and featuring SIG members Diablo Technology, Netlist, and SMART Modular. The panel will highlight the latest activities in the three “flavors” of NVDIMM, and offer a perspective on the future of persistent memory in systems. Also, SNIA board member Rob Peglar of Micron Technology will deliver a keynote on April 14, discussing how new persistent memory directions create new approaches for system architects and enable entirely new applications involving enormous data sets and real-time analysis.

    SSSI will also be in booth 403 featuring demonstrations by the NVDIMM SIG, discussions on SSD data recovery and erase, and updates on solid state storage performance testing.  SNIA members and colleagues can register for $100 off using the code SNIA at http://www.openserversummit.com.


    Are Hard Drives or Flash Winning in Actual Density of Storage?

    March 9th, 2016

    The debate between hard drives and solid state drives goes on in 2016, particularly in the area of areal densities – the actual density of storage on a device.  Fortunately for us, Tom Coughlin, SNIA Solid State Storage Initiative Education Chair, and a respected analyst who contributes to Forbes, has advised that flash memory areal densities have exceeded those of hard drives since last year!

    Coughlin Associates provides several charts in the article which map lab demos and product HDD areal density since 2000, and contrasts that to new flash product announcements.  Coughlin comments that “Flash memory areal density exceeding HDD areal density is important since it means that flash memory products with higher capacity can be built using the same surface area.”

    Check out the entire article here.


    SNIA NVM Summit Delivers the Persistent Memory Knowledge You Need

    January 18th, 2016

    by Marty Foltyn

    The discussion, use, and application of Non-volatile Memory (NVM) has come a long way from the first SNIA NVM Summit in 2013.  The significant improvements in persistent memory, with enormous capacity, memory-like speed and non-volatility, will make the long-awaited promise of the convergence of storage and memory a reality. In this 4th annual NVM Summit, we will see how storage and memory have now converged, and learn that we are now faced with developing the needed ecosystem.  Register and join colleagues on Wednesday, January 20, 2016 in San Jose, CA to learn more, or follow http://www.snia.org/nvmsummit to review presentations post-event.

    The Summit day begins with Rick Coulson, Senior Fellow, Intel, discussing the most recent developments in persistent memory with a presentation on All the Ways 3D XPoint Impacts Systems Architecture.

    Ethan Miller, Professor of Computer Science at UC Santa Cruz, will discuss Rethinking Benchmarks for Non-Volatile Memory Storage Systems. He will describe the challenges for benchmarks posed by the transition to NVM, and propose potential solutions to these challenges.

    Ken Gibson, NVM SW Architecture, Intel, will present Memory is the New Storage: How Next Generation NVM DIMMs will Enable New Solutions That Use Memory as the High-Performance Storage Tier. This talk reviews some of the decades-old assumptions that change for suppliers of storage and data services as solutions move to memory as the new storage.

    Jim Handy, General Director, Objective Analysis, and Tom Coughlin, President, Coughlin Associates will discuss Future Memories and Today’s Opportunities, exploring the role of NVM in today’s and future applications. They will give some market analysis and projections for the various NVM technologies in use today.

    Matt Bryson, SVP-Research, ABR, will lead a panel on NVM Futures-Emerging Embedded Memory Technologies, exploring the current status and future opportunities for NVM technologies and in particular both embedded and standalone MRAM technologies and associated applications.

    Edward Sharp, Chief, Strategy and Technology, PMC-Sierra, will present Changes Coming to Architecture with NVM. Although the IT industry has made tremendous progress innovating up and down the computing stack to enable, and take advantage of, non-volatile memory, is it sufficient, and where are the weakest links to fully unlock the potential of NVM?

    Don Jeanette, VP, and John Chen, VP, of Trendfocus will review the solid state storage market and discuss what is happening in various segments, and why, as it relates to PCIe.

    Dejan Vucinc, HGST San Jose Research Center, will discuss Latency in Context: Finding Room for NVMs in the Existing Software Ecosystem. HGST Research has been working diligently to find out where there is room in the existing hardware/software ecosystem for emerging NVM technology when viewed as block storage rather than main memory. Vucinc will show an update on previously published results using prototype PCI Express-attached PCM SSDs and HGST’s custom device protocol, DC Express, as well as measurements of its latency and performance through a proper device driver using several different kinds of Linux kernel block layer architecture.

    Arthur Sainio, Director Marketing, SMART Modular and Co-Chair, SNIA NVDIMM SIG, will lead a panel on NVDIMM, discussing how new media types are joining NAND Flash, and how enhanced controllers and networking are being developed to unlock the latency and throughput advantages of NVDIMM.

    Neal Christiansen, Principal Development Lead, Microsoft, will discuss Storage Class Memory Support in the Windows OS. Storage Class Memories (SCM) have been the topic of R&D for the last few years, and with the promise of near-term product delivery, the question is how Windows will be enabled for such SCM products and how applications can take advantage of these capabilities.

    Jeff Moyer, Principal Software Engineer, Red Hat will give an overview of the current state of Persistent Memory Support in the Linux Kernel.

    Cristian Diaconu, Principal Software Engineer, Microsoft will present Microsoft SQL Hekaton – Towards Large Scale Use of PM for In-memory Databases, using the example of Hekaton (Sql Server in-memory database engine) to break down the opportunity areas for non-volatile memory in the database space.

    Tom Talpey, Architect, File Server Team, Microsoft, will discuss Going Remote at Low Latency: A Future Networked NVM Ecosystem. As new ultra-low latency storage such as Persistent Memory and NVM is deployed, it becomes necessary to provide remote access – for replication, availability and resiliency to errors.

    Kevin Deierling, VP Marketing, Mellanox, will discuss the role of the network in developing Persistent Memory over Fabrics, including the key goals and key fabric feature requirements.


    Outstanding Keynotes from Leading Storage Experts Make SDC Attendance a Must!

    September 18th, 2015

    Posted by Marty Foltyn

    Tomorrow is the last day to register online for next week’s Storage Developer Conference at the Hyatt Regency Santa Clara. What better incentive to click www.storagedeveloper.org and register than to read about the amazing keynote and featured speakers at this event – I think they’re the best since the event began in 1998! Preview sessions here, and click on the title to download the full description.

    Bev Crair, Vice President and General Manager, Storage Group, Intel will present Innovator, Disruptor or Laggard, Where Will Your Storage Applications Live? Next Generation Storage and discuss the leadership role Intel is playing in driving the open source community for software defined storage, server based storage, and upcoming technologies that will shift how storage is architected.

    Jim Handy, General Director, Objective Analysis will report on The Long-Term Future of Solid State Storage, examining research of new solid state memory and storage types, and new means of integrating them into highly-optimized computing architectures. This will lead to a discussion of the way that these will impact the market for computing equipment.

    Jim Pinkerton, Partner Architect Lead, Microsoft will present Concepts on Moving From SAS connected JBOD to an Ethernet Connected JBOD. This talk examines the advantages of moving to an Ethernet connected JBOD, what infrastructure has to be in place, what performance requirements are needed to be competitive, and examines technical issues in deploying and managing such a product.

    Andy Rudoff, SNIA NVM Programming TWG, Intel will discuss Planning for the Next Decade of NVM Programming describing how emerging NVM technologies and related research are causing a change to the software development ecosystem. Andy will describe use cases for load/store accessible NVM, some transparent to applications, others non-transparent.

    Richard McDougall, Big Data and Storage Chief Scientist, VMware will present Software Defined Storage – What Does it Look Like in 3 Years? He will survey and contrast the popular software architectural approaches and investigate the changing hardware architectures upon which these systems are built.

    Laz Vekiarides, CTO and Co-founder, ClearSky Data will discuss Why the Storage You Have is Not the Storage Your Data Needs, sharing some of the questions every storage architect should ask.

    Donnie Berkholz, Research Director, 451 Research will present Emerging Trends in Software Development drawing on his experience and research to discuss emerging trends in how software across the stack is created and deployed, with a particular focus on relevance to storage development and usage.

    Gleb Budman, CEO, Backblaze will discuss Learnings from Nearly a Decade of Building Low-cost Cloud Storage. He will cover the design of the storage hardware, the cloud storage file system software, and the operations processes that currently store over 150 petabytes, growing by 5 petabytes every month.

    You could wait and register onsite at the Hyatt, but why? If you need more reasons to attend, check out SNIA on Storage previous blog entries on File Systems, Cloud, Management, New Thinking, Disruptive Technologies, and Security sessions at SDC. See the full agenda and register now for SDC at http://www.storagedeveloper.org.


    SNIA’s Solid State Storage Initiative Advances the Industry at Flash Memory Summit

    August 28th, 2015

    A classic case of SNIA Solid State Storage Initiative (SSSI) member collaboration for industry advancement was on display in the SSSI booth’s NVDIMM-N demonstration at Flash Memory Summit (FMS) 2015. Under the direction of SSSI Chair Jim Ryan, and coordinated by NVDIMM SIG co-chairs Arthur Sainio and Jeff Chang and TechDev Committee chair Eden Kim, the SSSI was able to update the SSSI marketing collateral to include NVDIMM-N storage performance on the Summary Performance Comparison by Storage Class charts.


    Five SSSI member companies – AgigA Tech, Calypso, Micron, SMART Modular, and Viking Technology – collaborated over a four week period on the introduction of a new NVDIMM-N storage performance demonstration. While it is rare to have potential competitors collaborate in such a fashion, NVDIMM-N storage represents a new paradigm for super fast, low latency, high IO/watt storage solutions. The NVDIMM-SIG has taken a leadership position by evangelizing the technology and developing the industry infrastructure necessary for large scale deployment.

    This collaboration highlighted a classic blend of technical, marketing and industry association cooperation.

    In the weeks leading up to FMS, the NVDIMM-SIG planned for an in-booth demonstration of the NVDIMM-N storage modules. To pave the way for universal adoption, the team worked together to dial in the Intel Open Source block IO development driver to meet the standards of the SNIA Performance Test Specification (PTS). An added goal was inclusion of NVDIMM-N modules as a new line item on the Summary Performance Comparison by Storage Class chart, which lists PTS performance for various storage technologies. Under the guidance of NVDIMM-SIG, a rush project was initiated to get NVDIMM-N performance data tested to the PTS in time for the trade show.

    Micron took the lead by lending a Supermicro server with Micron NVDIMM-N to Calypso for testing. Calypso then installed CTS test software on the server to allow full testing to the PTS. Viking and SMART Modular contributed by helping dial in the drivers, as well as sending their own modules to cross-reference with the Micron modules. The test plan comprised several test iterations using single, dual and finally quad modules with each of the vendor-contributed modules.

    The early single and dual module tests ran into repeatability and stability issues. NVDIMM-SIG consulted with Intel on the nuance of the Intel block IO driver while Calypso continued testing. The team successfully completed a test run that met the PTS steady state requirements on the quad module in time to release data for the show.

    We had a solid demonstration at the SNIA SSSI Flash Memory Summit Booth on NVDIMM-N Performance complete with marketing collateral available for review and a handout. NVDIMM-SIG members responded to the many questions and interest in the NVDIMM-N storage technology.


    “Once again,” said SSSI Chair Jim Ryan, “we can see the value and benefit of SNIA SSSI to its members, the SNIA educational community and the NVDIMM industry. I believe this is a great case study in how we all can contribute and benefit from working within the SSSI for the betterment of individual companies, market development and the Solid State Storage industry at large.” SSSI provides educational and marketing materials free of charge on its public website while SNIA SSSI members may join the NVDIMM-SIG and other SSSI committees. Anyone interested to find out more about the SSSI or any of its many committees can go to the following link http://www.snia.org/sssi.



    Data Recovery and Selective Erasure of Solid State Storage a New Focus at SNIA

    July 15th, 2015

    The rise of solid state storage has been incredibly beneficial to users in a variety of industries. Solid state technology presents a more reliable and efficient alternative to traditional storage devices. However, these benefits have not come without unforeseen drawbacks in other areas. For those in the data recovery and data erase industries, for example, solid state storage has presented challenges. The obstacles to data recovery and selective erasure capabilities are not only a problem for those in these industries, but they can also make end users more hesitant to adopt solid state storage technology.

    Recently a new Data Recovery and Erase Special Interest Group (SIG) has been formed within the Solid State Storage Initiative (SSSI) within the Storage Networking Industry Association (SNIA). SNIA’s mission is to “lead the storage industry worldwide in developing and promoting standards, technologies and educational services to empower organizations in the management of information.” This fantastic organization has given the Data Recovery and Erase SIG a solid platform on which to build the initiative.

    The new group has held a number of introductory open meetings for SNIA members and non-members to promote the group and develop the group’s charter. For its initial meetings, the group sought to recruit both SNIA members and non-members that were key stakeholders in fields related to the SIG. This includes data recovery providers, erase solution providers and solid state storage device manufacturers. Aside from these groups, members of leading standards bodies and major solid state storage device consumers were also included in the group’s initial formation.

    The group’s main purpose is to be an open forum of discussion among all key stakeholders. In the past, there have been few opportunities for representatives from different industries to work together, and collaboration had often been on an individual basis rather than as a group. With the formation of this group, members intend to cooperate between industries on a collective basis in order to foster a more constructive dialogue incorporating the opinions and feedback of multiple parties.

    During the initial meetings of the Data Recovery and Erase SIG, members agreed on a charter to outline the group’s purpose and goals. The main objective is to foster collaboration among all parties to ensure consumer demands for data recovery and erase services on solid state storage technology can be performed in a cost-effective, timely and fully successful manner.

    In order to achieve this goal, the group has laid out six steps needed, involving all relevant stakeholders:

    1. Build the business case to support the need for effective data recovery and erase capabilities on solid state technology by using use cases and real examples from end users with these needs.
    2. Create a feedback loop allowing data recovery providers to provide failure information to manufacturers in order to improve product design.
    3. Foster cooperation between solid state manufacturers and data recovery and erase providers to determine what information is necessary to improve capabilities.
    4. Protect sensitive intellectual property shared between data recovery and erase providers and solid state storage manufacturers.
    5. Work with standards bodies to ensure future revisions of their specifications account for capabilities necessary to enable data recovery and erase functionality on solid state storage.
    6. Collaborate with solid state storage manufacturers to incorporate capabilities needed to perform data recovery and erase in product design for future device models.

    The success of this special interest group depends not only on the hard work of the current members, but also in a diverse membership base of representatives from different industries. We will be at Flash Memory Summit in booth 820 to meet you in person! Or you can visit our website at www.snia.org/forums/sssi for more information on this new initiative and all solid state storage happenings at SNIA.   If you’re a SNIA member and you’d like to learn more about the Data Recovery/Erase SIG or you think you’d be a good fit for membership, we’d love to speak with you.  Not a SNIA member yet? Email marty.foltyn@snia.org for details on joining.