
    Your Questions Answered on Non-Volatile DIMMs

    April 3rd, 2017

     

    by Arthur Sainio, SNIA NVDIMM SIG Co-Chair, SMART Modular

    SNIA’s Non-Volatile DIMM (NVDIMM) Special Interest Group (SIG) had a tremendous response to their most recent webcast: NVDIMM: Applications are Here! You can view the webcast on demand.

    Viewers had many questions during the webcast.  In this blog, the NVDIMM SIG answers those questions and shares the SIG’s knowledge of NVDIMM technology.

    Have a question?  Send it to nvdimmsigchair@snia.org.

    1. What about 3DXPoint? How will this technology impact the market?

    3DXPoint DIMMs will likely have a significant impact on the market. They are fast enough to use as a slower tier of memory between NAND and DRAM.  It is still too early to tell though.

    2. What are good benchmark tools for DAX and what are the differences between NVML applications and DAX aware applications?

    For benchmark tools, please see the answer for (11).

    NVML applications are written specifically for NVM (Non-Volatile Memory). They may use the open source NVML libraries (http://pmem.io/nvml).

    DAX is a file system feature that avoids the use of page cache buffers. DAX-aware applications know that their reads and writes go directly to the underlying NVM without being cached.
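
    To make the DAX path concrete, here is a minimal C sketch (the path /mnt/pmem/log is hypothetical and assumes a file that already exists on an ext4 file system mounted with the dax option): the application maps the file and stores to it directly, and because no page cache sits in between, an explicit flush such as msync is what makes the stores durable.

        #include <fcntl.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            /* Hypothetical file on an ext4 file system mounted with -o dax;
             * assumed to already exist and be at least 4 KiB long. */
            int fd = open("/mnt/pmem/log", O_RDWR);
            if (fd < 0)
                return 1;

            size_t len = 4096;
            /* With DAX the mapping points at the NVDIMM itself -- no page cache copy. */
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (buf == MAP_FAILED)
                return 1;

            /* Loads and stores go straight to the persistent media. */
            strcpy(buf, "record 42");

            /* CPU caches are still volatile, so flush before relying on durability. */
            msync(buf, len, MS_SYNC);

            munmap(buf, len);
            close(fd);
            return 0;
        }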

    3. On the slide talking about NUMA, there was a mention of accessing NVDIMMs from a CPU on a different memory bus. The part about larger access times was clear enough. However, I came away with the impression that there is a correctness issue with handling of the ADR signal as well. Please clarify.

    If this question is asking whether the NUMA-remote CPU will successfully flush ADR-protected buffers to memory connected to the NUMA-near CPU, then yes, there is the potential for a problem in this area. However, ADR is an Intel feature that is not specified in the JEDEC NVDIMM standard, so this is an Intel-specific implementation question. The question needs to be posed to Intel.

    4. How common is NVDIMM compatible BIOS? How would one check?

    They are becoming more common all the time. There are at least 8 server/storage systems from Intel and 22 from Supermicro that support NVDIMMs. Several other motherboard vendors have systems that support NVDIMMs. Most NVDIMM vendors post lists of supported systems on their websites.

    5. How does a system go into SAVE? What exactly does the BIOS have to do to prepare a system before SAVE is asserted?

    Before ARMing the NVDIMM, the BIOS checks that the NVDIMM has a backup power supply available on power loss. The BIOS also makes sure that any RESTORE of previously saved data is done properly. This involves setting the appropriate registers in the NVDIMM module, all of which happens during boot-up initialization. On AC power loss, the PCH (Platform Controller Hub) detects the condition and initiates what is called the ADR (Asynchronous DRAM Refresh) sequence, which terminates in the assertion of the SAVE signal by the CPLD. If the BIOS has not ARMed the NVDIMM module, the module will not respond to the SAVE signal in a power loss situation.

    6. Could you paint the picture of hardware costs at this point? How soon will NVDIMM-enabled systems become available to “the rest of us”?

    An NVDIMM uses DRAM, NAND Flash, a controller, and many other parts in addition to what is used on a standard RDIMM. On that basis the cost of an NVDIMM-N is higher than that of a standard RDIMM. NVDIMM-enabled systems have been available for several years and are shipping now.

    7. Does RHEL 7.3 easily support Linux Kernel 4.4?

    RHEL 7.3 is still using the 3.10 version of the Linux kernel. For RHEL-related information, please check with Red Hat.

    You can also refer to: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.3_Release_Notes/index.html

    The distribution has drivers to support persistent memory, and it has also packaged the persistent memory libraries.

    8. What are the usual sizes for NVDIMMs available today?

    4GB, 8GB, 16GB, 32GB

    9. Are there any case studies of each of the NVDIMM-N applications mentioned?

    You can find some examples of case studies at these websites:  https://channel9.msdn.com/events/build/2016/p466 and https://msdn.com/events/build/2016/p470

    10. What is the difference between pmem lib/PMFS in Linux and a DAX-enabled file system (like ext4-DAX)?

    A DAX-based file system avoids using the kernel page cache layer for caching its write data, so all of its write operations go directly to the underlying storage unit. One important thing to understand is that a DAX file system can still use block drivers to access its underlying storage.

    PMFS is a file system optimized for persistent memory that completely avoids the page cache and the block drivers. It is designed to provide efficient access to persistent memory that is directly accessible via CPU load/store instructions.

    Refer to this link for more details: https://github.com/linux-pmfs/pmfs. As of now, PMFS is only at an experimental stage.

    11. What tool is used to measure the performance?

    The performance measurement depends on what kind of Application workload is to be characterized. This is a very complex topic. No single benchmarking tool is good for all the workload characteristics.

    For file system performance, SpecFS, Bonnie++, IOzone, FFSB, FileBench, etc. are good tools.

    SysBench is good for a variety of performance measurements.

    Phoronix Test Suite (http://www.phoronix.com/scan.php?page=home) has a variety of tools for Linux based performance measurements.

    12. How similar do you expect the OS support for -P to be to the support for -N? I don’t see a lot of need for differences at this level (though there certainly will be differences in the BIOS).

    As of now, the open source libraries (http://pmem.io) are designed to be agnostic about the underlying memory types. They are simply classified as Persistent Memory, meaning it could be “-N” or “-P” or something else. The libraries are written for user space, and they assume that any underlying kernel support will be transparent.

    The “-P” type is intended to support both DRAM and persistent access at the same time. This might need a separate set of drivers in the kernel.

     

    13.  Does the PM-based file system appear to be block addressable from the Application?

    A file system creates a layer of virtualization to support logical entities such as volumes, directories, and files. Typically, an application running in user space has no knowledge of the underlying mechanisms used by a file system to access its storage units, such as persistent memory. The access a file system provides to an application is typically a POSIX file system interface: open, close, read, write, seek, etc.
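
    As a short illustration (the file path is made up), the same POSIX calls work unchanged whether the file system underneath is backed by a disk, an SSD, or persistent memory:

        #include <fcntl.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            /* The application neither knows nor cares that /mnt/pmem is PM-backed. */
            int fd = open("/mnt/pmem/journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
            if (fd < 0)
                return 1;

            const char *entry = "txn 1001 committed\n";
            if (write(fd, entry, strlen(entry)) < 0) {   /* ordinary POSIX write */
                close(fd);
                return 1;
            }
            fsync(fd);    /* ordinary POSIX durability request */
            close(fd);
            return 0;
        }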

     14. Is ADR a pin?

    ADR stands for Asynchronous DRAM Refresh. ADR is a feature supported on Intel chipsets that triggers a hardware interrupt to the memory controller, which flushes the protected write buffers and places the DRAM in self-refresh. This process is critical during a power loss event or system crash to ensure the data is in a “safe” state when the NVDIMM takes control of the DRAM to back it up to Flash. Note that ADR does not flush the processor cache. In order to do so, an NMI routine would need to be executed prior to ADR.
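
    Because ADR does not cover the CPU caches, software that needs a particular store to reach the ADR-protected domain must flush it explicitly. The following is only a rough C sketch of that technique using x86 intrinsics (CLWB requires a CPU and compiler flag, e.g. -mclwb, that support it; older parts would use CLFLUSHOPT or CLFLUSH), not a description of Intel’s ADR implementation:

        #include <immintrin.h>   /* _mm_clwb, _mm_sfence (compile with -mclwb) */
        #include <stddef.h>
        #include <stdint.h>

        #define CACHELINE 64

        /* Flush [addr, addr + len) out of the CPU caches toward the memory
         * controller, whose ADR-protected buffers can drain on power loss. */
        static void flush_to_pm(const void *addr, size_t len)
        {
            uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHELINE - 1);
            for (; p < (uintptr_t)addr + len; p += CACHELINE)
                _mm_clwb((void *)p);
            _mm_sfence();   /* order the flushes before any subsequent stores */
        }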


    How Many IOPS? Users Share Their 2017 Storage Performance Needs

    March 24th, 2017

    New on the Solid State Storage website is a whitepaper from analysts Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis which details what IT manager requirements are for storage performance. The paper examines how requirements have changed over a four-year period for a range of applications, including databases, online transaction processing, cloud and storage services, and scientific and engineering computing. Users disclose how many IOPS are needed, how much storage capacity is required, and what system bottlenecks prevent them from getting the performance they need.

    You’ll want to read this report before signing up for a SNIA BrightTalk webcast at 2:00 pm ET/11:00 am PT on May 3, 2017 where Tom and Jim will discuss their research and provide answers to questions like:

    • Does a certain application really need the performance of an SSD?
    • How much should a performance SSD cost?
    • What have other IT managers found to be the right balance of performance and cost?

    Register for the “How Many IOPS? Users Share Their 2017 Storage Performance Needs” webcast at https://www.brighttalk.com/webcast/663/252723


    Latency Budgets for Solid State Storage Access

    March 7th, 2017

    New solid state storage technologies are forcing the industry to refine distinctions between networks and other types of system interconnects.  The question on everyone’s mind is: when is it beneficial to use networks to access solid state storage, particularly persistent memory?

    It’s not quite as simple as a “yes/no” answer. The answer to this question involves application, interconnect, memory technology and scalability factors that can be analyzed in the context of a latency budget.

    On April 19th, Doug Voigt, Chair of the SNIA NVM Programming Model Technical Work Group, returns for a live SNIA Ethernet Storage Forum webcast, “Architectural Principles for Networked Solid State Storage Access – Part 2,” where we will explore latency budgets for various types of solid state storage access. These can be used to determine which combinations of interconnects, technologies and scales are compatible with Load/Store instruction access and which are better suited to IO completion techniques such as polling or blocking.

    In this webcast you’ll learn:

    • Why latency is important in accessing solid state storage
    • How to determine the appropriate use of networking in the context of a latency budget
    • Do’s and don’ts for Load/Store access

    This is a technical seminar built upon part 1 of this series. If you missed it, you can view it on demand at your convenience. It will give you a solid foundation on this topic, outlining key architectural principles that allow us to think about the application of networked solid state technologies more systematically.

    I hope you will register today for the April 19th event. Doug and I will be on hand to answer questions on the spot.


    Attend Live – or Live Stream – SNIA’s Persistent Memory Summit January 18

    January 12th, 2017

    by Marty Foltyn

    SNIA’s Persistent Memory Summit makes its fifth annual appearance in Silicon Valley next Wednesday, January 18, and if you are in the vicinity of the Westin San Jose, you owe it to yourself to check it out.

    SNIA is well known for its technology-focused, no vendor-hype conferences, and this one-day event will feature 12 presentations and two panels that will “level set” the discussion, review persistent memory usage, describe applications incorporating PM available today, discuss the infrastructure and implementation, and provide a vision of the “next generation” of persistent memory.


    SNIA Storage Developer Conference-The Knowledge Continues

    October 13th, 2016

    SNIA’s 18th Storage Developer Conference is officially a success, with 124 general and breakout sessions; Cloud Interoperability, Kinetic Storage, and SMB3 plugfests; ten Birds-of-a-Feather sessions; and amazing networking among 450+ attendees. Sessions on NVMe over Fabrics won the title of most attended, but Persistent Memory, Object Storage, and Performance were right behind. Many thanks to SDC 2016 Sponsors, who engaged attendees in exciting technology discussions.


    The Changing World of SNIA Technical Work – A Conversation with Technical Council Chair Mark Carlson

    August 3rd, 2016

    Mark Carlson is the current Chair of the SNIA Technical Council (TC). Mark has been a SNIA member and volunteer for over 18 years, and also wears many other SNIA hats. Recently, SNIA on Storage sat down with Mark to discuss his first nine months as the TC Chair and his views on the industry.

    SNIA on Storage (SoS): Within SNIA, what is the most important activity of the SNIA Technical Council?


    Flash Memory Summit Highlights SNIA Innovations in Persistent Memory & Flash

    July 28th, 2016

    SNIA and the Solid State Storage Initiative (SSSI) invite you to join them at Flash Memory Summit 2016, August 8-11 at the Santa Clara Convention Center. SNIA members and colleagues receive $100 off any conference package using the code “SNIA16” by August 4 when registering for Flash Memory Summit at http://www.flashmemorysummit.com.

    On Monday, August 8, from 1:00pm – 5:00pm, a SNIA Education Afternoon will be open to the public in SCCC Room 203/204, where attendees can learn about multiple storage-related topics with five SNIA Tutorials on flash storage, combined service infrastructures, VDBench, stored-data encryption, and Non-Volatile DIMM (NVDIMM) integration from SNIA member speakers.

    Following the Education Afternoon, the SSSI will host a reception and networking event in SCCC Room 203/204 from 5:30 pm – 7:00 pm with SSSI leadership providing perspectives on the persistent memory and SSD markets, SSD performance, NVDIMM, SSD data recovery/erase, and interface technology. Attendees will also be entered into a drawing to win solid state drives.

    SNIA and SSSI members will also be featured during the conference in the following sessions:


    • Persistent Memory (Preconference Session C)
      NVDIMM presentation by Arthur Sainio, SNIA NVDIMM SIG Co-Chair (SMART Modular)
      Monday, August 8, 8:30am- 12:00 noon 
    • Data Recovery of SSDs (Session 102-A)
      SIG activity discussion by Scott Holewinski, SSSI Data Recovery/Erase SIG Chair (Gillware)
      Tuesday, August 9, 9:45 am – 10:50 am
    • Persistent Memory – Beyond Flash, sponsored by the SNIA SSSI (Forum R-21)
      Chairperson: Jim Pappas, SNIA Board of Directors Vice-Chair/SSSI Co-Chair (Intel); papers presented by SNIA members Rob Peglar (Symbolic IO), Rob Davis (Mellanox), Ken Gibson (Intel), Doug Voigt (HP), Neal Christensen (Microsoft)
      Wednesday, August 10, 8:30 am – 11:00 am
    • NVDIMM Panel, organized by the SNIA NVDIMM SIG (Session 301-B)
      Chairperson: Jeff Chang, SNIA NVDIMM SIG Co-Chair (AgigA Tech); papers presented by SNIA members Alex Fuqa (HP), Neal Christensen (Microsoft)
      Thursday, August 11, 8:30 am – 9:45 am

    Finally, don’t miss the SNIA SSSI in Expo booth #820 in Hall B and in the Solutions Showcase in Hall C on the FMS Exhibit Floor. Attendees can review a series of updated performance statistics on NVDIMM and SSD, see live NVDIMM demonstrations, access SSD data recovery/erase education, and preview a new white paper discussing erasure with regard to SSDs. SNIA representatives will also be present to discuss other SNIA programs such as certification, conformance testing, membership, and conferences.


    Principles of Networked Solid State Storage – Q&A

    June 22nd, 2016

    At this month’s SNIA Ethernet Storage Forum Webcast, “Architectural Principles for Networked Solid State Storage Access,” Doug Voigt, Chair of the SNIA NVM Programming Technical Working Group, and a member of the SNIA Technical Council, outlined key architectural principles surrounding the application of networked solid state technologies. We had a flurry of questions near the end of the Webcast that we did not have enough time to answer. Here are Doug’s answers to all the questions we received during the event:

    Q. Are there wait cycles in accessing persistent memory?

    A. It depends entirely on which persistent memory (PM) technology is being accessed and how the memory interconnect is used.  Some technologies have write times that are quite different from read times.  When using tightly timed interconnects such as DDR with those technologies it may be difficult to avoid wait cycles.

    Q. How do Pmalloc and malloc share the virtual address space of the application?

    A. This is entirely up to the OS and other libraries operating within any constraints of the processor architecture-specific memory management units.  A good mental model would be fairly large regions of contiguous address space in both the physical and virtual domains, where each region will comprise a single type of memory. Capacity will be reserved for pmalloc and malloc in the appropriate regions.
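
    As a rough sketch of that mental model, the fragment below uses the NVML libpmemobj allocator as a stand-in for “pmalloc” (the pool path and layout name are made up for illustration): persistent allocations come out of a memory-mapped pool occupying one region of the address space, while ordinary malloc serves volatile DRAM in another.

        #include <libpmemobj.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            /* Hypothetical pool file on a DAX-mounted file system. */
            PMEMobjpool *pop = pmemobj_create("/mnt/pmem/pool", "example",
                                              PMEMOBJ_MIN_POOL, 0666);
            if (pop == NULL)
                return 1;

            /* "pmalloc": allocate 1 KiB from the persistent pool. */
            PMEMoid oid;
            if (pmemobj_alloc(pop, &oid, 1024, 0, NULL, NULL))
                return 1;
            char *pbuf = pmemobj_direct(oid);   /* virtual address inside the PM region */
            strcpy(pbuf, "persistent");
            pmemobj_persist(pop, pbuf, strlen(pbuf) + 1);

            /* Ordinary malloc: volatile DRAM, in a different region of the address space. */
            char *vbuf = malloc(1024);
            if (vbuf != NULL)
                strcpy(vbuf, "volatile");

            free(vbuf);
            pmemobj_free(&oid);
            pmemobj_close(pop);
            return 0;
        }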

    Q. Always flush after doing your memory-mapped IO.  Is that simply good hygiene?

    A. Not exactly. The term “Memory Mapped IO” is used to reference control plane (as opposed to data plane) access.  It is often reasonable to set up control plane memory as uncacheable. The need for strict order of access to physical control plane registers is so pervasive that caching is generally not useful. Uncacheable writes are always flushed by the processor, as opposed to the application.

    Generally with memory mapped IO devices the data plane uses direct memory access (DMA).  With memory mapped files (as opposed to memory mapped IO) Load/Store (more commonly referred to as “Ld/St”), not DMA, is used in the data plane. Disabling caching in the data plane is generally a big performance sacrifice for small byte range access.

    In the Ld/St datapath, strategically placed flushing is required to retain both performance and power failure recovery. The SNIA NVM Programming Model describes this type of functionality.
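
    As one hedged illustration of strategically placed flushing in the Ld/St data path, the C sketch below uses the NVML libpmem API from pmem.io (the file path is hypothetical): plain stores update the mapping, and pmem_persist flushes only the byte range that must survive a power failure.

        #include <libpmem.h>
        #include <string.h>

        int main(void)
        {
            size_t mapped_len;
            int is_pmem;

            /* Hypothetical file on persistent memory; created if it does not exist. */
            char *addr = pmem_map_file("/mnt/pmem/data", 4096, PMEM_FILE_CREATE,
                                       0666, &mapped_len, &is_pmem);
            if (addr == NULL)
                return 1;

            /* Ld/St data path: plain stores into the mapping. */
            strcpy(addr, "committed record");

            /* Flush only what must be durable -- keeps both performance and recovery. */
            if (is_pmem)
                pmem_persist(addr, strlen(addr) + 1);
            else
                pmem_msync(addr, strlen(addr) + 1);   /* fallback when not true pmem */

            pmem_unmap(addr, mapped_len);
            return 0;
        }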

    Q. Once NVDIMM support becomes pervasive, with support from NVMe drives in the server box, should network storage be more focused on SAS Flash or just SAS HDDs?

    A. Not necessarily.  NVMe over Fabric, Fibre Channel and iSCSI are also types of networked storage that will likely retain significant market share relative to SAS.

    Q. Are the ‘Big Data’ data warehouse applications starting to use persistent memory and domain technologies in their applications?

    A. It is too early to see much of this yet. PM technologies might become a priority as a staging area for analytic applications with high ingest or checkpoint rates. NVDIMMs are likely to be too expensive to store anything “big” for quite a while.

    Q. Also, are persistent memory/domains being used in the hyper-converged and converged hardware infrastructures?

    A. Persistent memory is quintessentially (Hyper-) converged.  It wouldn’t be unreasonable to expect some traction with hyper-converged solutions that experience high storage-performance demand.

    Q. What distance would you associate with 10’s of microseconds?

    A. In terms of transmission delay, 10’s of uS align with a campus or small city scale, but the distance itself is often not the primary factor.  Switching delays, transmission line properties and software overhead are generally bigger factors.

    Q. So latency would be the binding factor for distances…not a question, an observation.

    A. Yes, in effect, either through transmission or relay.  See above.

    Q. Aren’t there multi-threaded SSDs?

    A. Yes, but since the primary metric in this presentation is latency we ignore multi-threading.  It can enable more work to get done, but it generally increases latency rather than reducing it.

    Q. Is pmalloc in universal usage?

    A. The term is starting to be recognized among developers and has been used in research. Various similar names have been used in early research prototypes, such as pmalloc in Mnemosyne and nvmalloc in SCMFS.

    Q. So how would PM help in a (stock broking) requirement, where we currently prophesize an RDMA or iWARP solution?

    A. With PM the answer is always lower latency. PM can be integrated like memory or like flash. RDMA network paths for both of these options were discussed in the presentation. In either case, PM is low-latency enough that networking and software overheads will completely determine performance, even when using RDMA. The performance boost from PM is greatest when it is accessed locally. If remote access is a requirement then the new work being done in the RDMA community should help.

    Q. If data stored in memory needs to be copied to a different host’s memory (for consistency), how does PM assist, or is there an extension to PM? Coherency between multiple hosts in a cluster, if you will?

    A. PM technology does not help with this; the methods of managing consistency across hosts remain unchanged by PM.  All PM offers is low latency persistence.

    Coordination across hosts or nodes in a cluster must use existing clustering techniques such as locking and quorums. In addition, the relative timescales of memory access and network communication suggest the application of asynchronous remote replication techniques used in today’s storage solutions.

    Regarding coherency, PM brings nothing new to the known techniques for managing coherency.  Classical cluster architecture must be applied outside of symmetric multi-processing coherency domains. Within coherency domains, all of the logic is above the PM level in a processor side memory controller or a software emulation of the same algorithms.

     

     

     

     


    Podcasts Bring the Sounds of SNIA’s Storage Developer Conference to Your Car, Boat, Train, or Plane!

    May 26th, 2016

    SNIA’s Storage Developer Conference (SDC) offers exactly what a developer of cloud, solid state, security, analytics, or big data applications is looking for – rich technical content delivered in a no-vendor-bias manner by today’s leading technologists. The 2016 SDC agenda is being compiled, but now you can get a “sound bite” of what to expect by downloading SDC podcasts via iTunes, or visiting the SDC Podcast site at http://www.snia.org/podcasts to download the accompanying slides and/or listen to the MP3 version.


    Your Questions Answered on NVDIMM

    May 23rd, 2016

    The recent NVDIMM webcasts on the SNIA BrightTALK Channel sparked many questions from the almost 1,000 viewers who have watched them live or downloaded the on-demand cast. Now, NVDIMM SIG Chairs Arthur Sainio and Jeff Chang answer 35 of them in this blog. Did you miss the live broadcasts? No worries, you can view NVDIMM and other webcasts on the SNIA webcast channel https://www.brighttalk.com/channel/663/snia-webcasts.

    FUTURES QUESTIONS

    What timeframe do you see server hardware, OS, and applications readily adopting/supporting/recognizing NVDIMMs?

    DDR4 server and storage platforms are ready now. There are many off-the-shelf server and/or storage motherboards that support NVDIMM-N.

    Linux version 4.2 and beyond has native support for NVDIMMs. All the necessary drivers are supported in the OS.

    NVDIMM adoption is in progress now.

    Technical Preview 5 of Windows Server 2016 has NVDIMM-N support.
     

    How, if at all, does the positioning of NVDIMM-F change after the eventual introduction of new NVM technologies?

    If 3DXP is successful it will likely have a big impact on NVDIMM-F. 3DXP could be seen as an advanced version of an NVDIMM-F product. It sits directly on the DDR4 bus and is byte addressable.

    NVDIMM-F products face the challenge of being made BYTE ADDRESSABLE, depending on what kind of persistent media is used.

    If NAND flash is used, it would take a lot of techniques and resources to make such a product BYTE ADDRESSABLE.

    On the other hand, if the new NVM technologies bring out persistent media that are BYTE ADDRESSABLE, then NVDIMM-F could easily use them for its backend.

    How does NVDIMM-N compare to Intel’s 3DXPoint technology?

    At this point there is limited technical information available on 3DXP devices.

    When the specifications become available the NVDIMM SIG can create a comparison table.

    NVDIMM-N products are available now. 3DXP-based products are planned for 2017-2018. Theoretically, 3DXP devices could be used on NVDIMM-N type modules.

     

     

     

    PERFORMANCE AND ENDURANCE QUESTIONS

    What are the NVDIMM performance and endurance requirements?

    NVDIMM-N is no different from an RDIMM under normal operating conditions. The endurance of the Flash or NVM technology used on the NVDIMM-N is not a critical factor since it is only used for backup.

    NVDIMM-F would depend on various factors: (1) Is the backend going to be NAND Flash or some other entity? (2) What kind of access pattern will the application generate? The performance must be at least the same as that of NVDIMM-N.

    Are there endurance requirements for NVDIMM-F? Won’t the flash wear out quickly when used as memory?

    Yes, the aspect of Flash being used as a RANDOM access device with MEMORY access characteristics would definitely have an impact on endurance.

    NVDIMM-F – Don’t the performance limitations of NAND vs. DRAM affect the application?

    NAND Flash will never hit the performance of DRAM in an entity-to-entity comparison. But from the perspective of the wider solution, the traditional data path from DRAM data to persistent data incurs delays from the many software layers involved in making the data persistent, whereas on NVDIMM-F the data is persistent almost instantly, at the cost of only a short additional latency.

    Is there extra heat being generated? Does it need any additional cooling (NVDIMM-F, NVDIMM-N)?

    No.

    In general, our testing of NVDIMM-F vs. PCIe-based SSDs has not shown the expected value of NVDIMMs. The PCIe-based NVMe storage still outperforms the NVDIMMs.

    TBD.

    What is the amount of overhead that NVDIMMs add on CPUs?

    None during normal operation.

    What can you say about the time typically required to charge the supercaps? Is the application aware of that status before the charge is complete?

    Approximately two minutes depending on the density of the NVDIMM and the vendor.

    The NVDIMM will not be ready until charging is complete; the system BIOS will wait on the charging status, and it times out if the NVDIMM is not functioning.

    USE QUESTIONS

    What will happen if a system crashes and then comes back before the NVDIMM finishes its backup? How does the OS know where to continue, since the state in the registers and L1/L2/L3 caches is already lost?

    When the system comes back up, it will check whether there is valid data backed up in the NVDIMM. If yes, the backed-up data will be restored first before the BIOS sets up the system.

    The OS can’t depend on the contents of the L1/L2/L3 cache. Applications must do I/O fencing, use commit points, etc. to guarantee data consistency.

    The power supply should be able to hold power for at least 1 ms after the warning of AC power loss.
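
    To make the commit-point discipline mentioned above concrete, here is a hedged C sketch built on the libpmem flush call (the struct layout and names are purely illustrative): the payload is made durable before a small valid flag is set and made durable, so a crash at any point leaves either the old record or the complete new one, never a torn one.

        #include <libpmem.h>
        #include <string.h>

        struct record {
            char payload[256];
            int  valid;          /* commit flag: set only after the payload is durable */
        };

        /* Write a record with an explicit commit point. 'rec' is assumed to point
         * into a persistent-memory mapping (e.g. obtained from pmem_map_file). */
        void write_record(struct record *rec, const char *data)
        {
            rec->valid = 0;
            pmem_persist(&rec->valid, sizeof(rec->valid));      /* invalidate old record */

            strncpy(rec->payload, data, sizeof(rec->payload) - 1);
            rec->payload[sizeof(rec->payload) - 1] = '\0';
            pmem_persist(rec->payload, sizeof(rec->payload));   /* payload durable first */

            rec->valid = 1;
            pmem_persist(&rec->valid, sizeof(rec->valid));      /* commit point */
        }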

    Is there garbage collection on NVDIMMs?

    This depends on individual vendors. NVDIMM-N may have overprovisioning and wear-leveling management for the NAND Flash.

    Garbage collection really only makes sense for NVDIMM-F.

    How is byte addressing enabled for NAND storage?

    By default, NAND storage can be addressed only through BLOCK mode addressing. If BYTE addressability is desired, then the DDR memory at the front must provide sophisticated CACHING TECHNIQUES to trick the host memory controller into thinking that it is actually accessing a larger-capacity DDR memory.

    Is the restore command issued over the I2C bus? Is that also known as the SMBus?

    Yes and yes.

    Could NVDIMM-F products be used as both storage and memory within the same server?

    NVDIMM-F is by definition only block storage. NVDIMM-P is both (block) storage and memory.

     

    COMPATIBILITY QUESTIONS

     

    Is NVDIMM-N support built into the OS or do the NVDIMM vendors need to provide drivers? What OS’s (Windows version, Linux kernel version) have support?

    In Linux, generic NVDIMM-N support is available starting with version 4.2 of the kernel.

    All the necessary drivers are provided in the OS itself.

    Regarding the Linux distributions, only Fedora and Ubuntu have upgraded themselves to the 4.x kernel.

    The crucial aspect is the BIOS/MRC support needed for the vendor-specific NVDIMM-N to be exposed to the host OS.

    MS Windows has OS support available for download.

    What OS support is available for NVDIMM-F? I’m assuming some sort of driver is required.

    Diablo has said they worked with the BIOS vendors to enable their Memory1 product. We need to check with them.

    Other NVDIMM-F vendors would likely require drivers.

    As of now no native OS support is available.

    Will NVDIMMs work with typical Intel servers that are 2-3 years old? What are the hardware requirements?

    That depends on the CPU. For Haswell, Grantley, Broadwell, and Purley, NVDIMM-N is and/or will be supported.

    The hardware requires that the CPLD, SAVE, and ADR signals are present.

    Is RDMA compatible with NVDIMM-F or NVDIMM-N?

    RDMA (Remote Direct Memory Access) is not available by default for NVDIMM-N or NVDIMM-F.

    A software layer/extension needs to be written to accommodate that. Work is in progress by the PMEM committee (www.pmem.io) to make the RDMA feature available transparently to applications in the future.

    SNIA Reference: http://www.snia.org/sites/default/files/SDC15_presentations/persistant_mem/ChetDouglas_RDMA_with_PM.pdf

    What’s the highest capacity that an NVDIMM-N can support?

    Currently 8GB and 16GB but this depends on individual vendor’s roadmaps.

     

    COST QUESTIONS

    What is the NVDIMM cost going to look like compared to other flash type storage options?

    This relates directly to what types and quantities of Flash, DRAM, controllers, and other components are used for each type.

     

    MISCELLANEOUS QUESTIONS

    How many vendors offer NVDIMM products?

    AgigA Tech, Diablo, Hynix, Micron, Netlist, PNY, SMART, and Viking Technology are among the vendors offering NVDIMM products today.

     

    Is encryption on the NVDIMM handled by the controller on the NVDIMM or the OS?

    Encryption on the NVDIMM is under discussion at JEDEC. There has been no standard encryption method adopted yet.

    If the OS encrypts data in memory, the contents of the NVDIMM backup would be encrypted, eliminating the need for the NVDIMM to perform encryption. However, because of the performance penalty of OS encryption, NVDIMM encryption is being considered by NVDIMM vendors.

    Are memory operations what is known as DAX?

    DAX means Direct Access and is an optimization used in modern file systems – particularly ext4 – to eliminate the kernel cache for holding write data. With no intermediate cache buffers, the write operations go directly to the media. This makes the writes persistent as soon as they are committed.

    Can you give some practical examples of where you would use NVDIMM-N, -F, and –P?

    NVDIMM-N: load/store byte access for journaling, tiering, caching, write buffering and metadata storage

    NVDIMM-F: block access for in-memory database (moving NAND to the memory channel eliminates traditional HDD/SSD SAS/PCIe link transfer, driver, and software overhead)

    NVDIMM-P: can be used for either NVDIMM-N or -F applications

    Are reads and writes all the same latency for NVDIMM-F?

    The answer depends on what kind of persistent layer is used.   If it is the NAND flash, then the random writes would have higher latencies when compared to the reads. If the 3D XPoint kind of persistent layer is used, it might not be that big of a difference.

     

    I have an interest in NVDIMMs being used as a replacement for SSDs, and concerns about clearing cached data (including credentials) as data moves from NVM to PM on an end-user device.

    The NVDIMM-N uses serialization and fencing with Intel instructions to guarantee data is in the NVDIMM before a power failure and ADR.

     

    I am interested in how many banks of NVDIMMs can be added to create a very large SSD replacement in a server storage environment.

    NVDIMMs are added to a system in memory module slots. The current maximum density is 16GB or 32GB. Server motherboards may have 16 or 24 slots. If 8 of those slots hold 16GB NVDIMMs, that provides 128GB of persistent capacity, roughly like a 128GB SSD.

    What are the environmental requirements for NVDIMMs (power, cooling, etc.)?

    There are some components on NVDIMMs, such as flash and FPGA devices, that have a lower maximum operating temperature than those on RDIMMs. Refer to each vendor’s data sheet for more information. Backup energy sources based on ultracapacitors require health monitoring and a controlled thermal environment to ensure an extended product life.

    How about data-at-rest protection management? Is the data in the NVDIMM protected/encrypted? Complying with TCG and FIPS seems very challenging. What are the plans to align with these?

    As of today, encryption has not been standardized by JEDEC. It is currently up to each NVDIMM vendor whether or not to provide encryption.

     

    Could you explain the relationship between the NVDIMM and the IO stack?

    In the PMEM mode, the Kernel presents the NVDIMM as a reserved memory, directly accessible by the Host Memory Controller.

    In the Block Mode, the Kernel driver presents the NVDIMM as a block device to the IO Block Layer.

    With NVDIMMs the data can be in memory or storage. How is the data fragmentation managed?

    The NVDIMM-N is managed as regular memory. The same memory allocation fragmentation issues and handling apply. The NVDIMM-F behaves like an SSD. Fragmentation issues on an NVDIMM-F are handled like an SSD with garbage collection algorithms.

     

    Is there a plan to support PI-type data protection for NVDIMM data? If not, E2E data protection cannot be attained.

    As of today, encryption has not been standardized by JEDEC. It is currently up to each NVDIMM vendor whether or not to provide encryption.

     

    Since NVDIMM is still slower than DRAM, do we still need DRAM in the system? We cannot get rid of DRAM yet?

    With NVDIMM-N, DRAM is still being used. NVDIMM-N operates at the speed of a standard RDIMM.

    With NVDIMM-F modules, DRAM memory modules are still needed in the system.

    With NVDIMM-P modules, DRAM memory modules are still needed in the system.

    Can you use NVMe over Ethernet?

    NVMe over Fabrics is under discussion within SNIA. See http://www.snia.org/sites/default/files/SDC15_presentations/networking/WaelNoureddine_Implementing_%20NVMe_revision.pdf