Feedback Needed on New Persistent Memory Performance White Paper

A new SNIA technical work draft is now available for public review and comment – the SNIA Persistent Memory Performance Test Specification (PTS) White Paper.

A companion to the SNIA NVM Programming Model, the SNIA PM PTS White Paper (PM PTS WP) describes the relationship between traditional block IO NVMe SSD-based storage and the migration to Persistent Memory block- and byte-addressable storage.

The PM PTS WP reviews the history of and need for storage performance benchmarking, beginning with Hard Disk Drive corner-case stress tests, the widening gap between CPU/SW/HW stack performance and storage performance, and the resulting need for faster storage tiers and storage products.

Read More

Up to the Challenge!

The SNIA Persistent Memory and NVDIMM Special Interest Group announced a programming challenge for NVDIMM-based systems in Q4 of 2019. Participants get free online access to persistent memory systems based at the SNIA Technology Center using NVDIMM-Ns provided by SIG members AgigA Tech, Intel, SMART Modular, and Supermicro. The goal of the challenge is to spark developer interest in this new technology and to show more clearly how persistent memory applications can be developed and applied in 2020 environments and beyond. Response to the NVDIMM Programming Challenge has been very positive. Entrants to date have backgrounds ranging from no persistent memory programming experience to developing persistent memory applications as part of their day jobs. Read More

Judging Has Begun – Submit Your Entry for the NVDIMM Programming Challenge!

We’re 11 months into the Persistent Memory Hackathon program, and over 150 software developers have taken the tutorial and tried their hand at programming to persistent memory systems. AgigA Tech, Intel, SMART Modular, and Supermicro, members of the SNIA Persistent Memory and NVDIMM SIG, have now placed persistent memory systems with NVDIMM-Ns into the SNIA Technology Center as the backbone of the first SNIA NVDIMM Programming Challenge.

Interested in participating? Send an email to PMhackathon@snia.org to get your credentials. And do so quickly, as the first round of review for the SNIA NVDIMM Programming Challenge is now open. Any entrants who have progressed to a point where they would like a review are welcome to contact SNIA at PMhackathon@snia.org to request a time slot. SNIA will also open review times in December and January. Submissions that meet a significant amount of the judging criteria described below, as determined by the panel, will be eligible for a demonstration slot in front of the 400+ attendees at the January 23, 2020 Persistent Memory Summit in Santa Clara, CA.

Your program or results should be demonstrable visually using remote access to a PM-enabled server. Submissions will be judged by a panel of SNIA experts. Reviews will be scheduled at the convenience of the submitter and judges, and conducted via conference call.

NVDIMM Programming Challenge Judging Criteria include:

Use of multiple aspects of NVDIMM/PM capabilities, for example:

  1. Use of larger DRAM/NVDIMM memory sizes
  2. Use of the DRAM speed of NVDIMM PMEM for performance
  3. Speed-up of application shut down or restart using PM where appropriate
  4. Recovery from crash/failure
  5. Storage of data across application or system restarts

Demonstrates other innovative aspects for a program or tool, for example:

  1. Uses persistence to enable new features
  2. Appeals across multiple aspects of a system, beyond persistence

Advances the cause of PM in some obvious way:

  1. Encourages the update of systems to broadly support PM
  2. Makes PM an incremental need in IT deployments

Program or results apply to all types of NVDIMM/PM systems, though exact results may vary across memory types.

Questions? Contact Jim Fister, SNIA Hackathon Program Director, at pmhackathon@snia.org, and happy coding!

Learn the Latest on Persistence at the 2020 Persistent Memory Summit

The 2020 SNIA Persistent Memory Summit is coming to the Hyatt Regency Santa Clara on Thursday, January 23, 2020. The day before, on January 22, an expanded version of the SNIA Persistent Memory Hackathon returns, again co-located with the SNIA Annual Members Symposium. We’ll share Hackathon details in an upcoming SNIA Solid State blog. Those who have attended a previous Persistent Memory Summit will find significant changes in the makeup of the agenda; those who have never attended will find the new agenda an opportunity to learn more about development options and experiences for persistent memory. The focus of the 2020 Summit will be on tool and application development for systems with persistent memory. Read More

It’s a Wrap for SNIA and the Solid State Storage Initiative at Flash Memory Summit 2019

With a Best of Show award, over 12 hours of content, three days of demos, and a new program drawing attention to persistent memory programming now complete, Flash Memory Summit 2019 is officially a success!

SNIA volunteers were again recognized for their hard work developing standards for datacenters and storage professionals with a “Most Innovative Flash Memory Technology” FMS Best of Show award. This year, it was SNIA’s Object Drive Technical Work Group that received kudos for the SNIA Technical Position Key Value Storage API Specification. Jay Kramer, head of the FMS awards program, presented the award to Bill Martin, Chair of the Object Drive TWG, commenting, “Key value store technology can enable NVM storage devices to map and store data more efficiently and with enhanced performance, which is of paramount significance to facilitate computational storage. Flash Memory Summit is proud to recognize the SNIA Object Drive Technical Work Group (TWG) for creating the SNIA Technical Position Key Value Storage API Specification Version 1.0, defining an application programming interface (API) for key value storage devices, and making this available to the public for download.”

SNIA Sessions at FMS Now Available for Viewing and Download

SNIA Executive Director Michael Oros again took the mainstage to describe “Standards that Can Change Your Job and Your Life,” encapsulating SNIA work in three core areas: persistent memory, computational storage, and storage management.

Also at Flash Memory Summit, SNIA work and volunteers were on display in eight persistent memory sessions, sponsored by SNIA, JEDEC, and the OpenFabrics Alliance, highlighting advances in persistent memory, PM software and applications, remote persistent memory, and current PM research. A new 2019 SNIA-sponsored track on computational storage featured four sessions on controllers and technology, deploying solutions, implementation methods, and applications.

SNIA’s SFF Technology Affiliate highlighted its work on the Enterprise and Datacenter 1U Short SSD Form Factor (E1.S) specification, SFF-TA-1006, while the Object Drive TWG expanded on the key value storage interface standardization underway at SNIA and NVM Express. SNIA also presented a preconference seminar tutorial on persistent memory and NVDIMM, and a session on storage management with Swordfish APIs for Open-Channel SSDs.

A new session on programming to persistent memory featured a tutorial (video available soon) and a 2½-day Persistent Memory Programming Hackathon, where attendees programmed to persistent memory systems and discussed their applications. Next up for the Hackathon series: a two-day event at the SNIA Storage Developer Conference.

Find PDFs of these sessions by clicking on Flash Memory Summit 2019 under Associated Event in the SNIA Educational Library.

We continued our discussions on the exhibit floor featuring JEDEC-compliant NVDIMM-Ns from SNIA Persistent Memory and NVDIMM SIG members AgigA Tech, Micron, SMART Modular Technologies, and Viking in a Supermicro box running an open source performance demonstration.  If you missed it, the SIG will showcase a similar demonstration at the upcoming SNIA Storage Developer Conference September 23-26, 2019, and at the 2020 SNIA Persistent Memory Summit January 23, 2020, both at the Hyatt Regency Santa Clara.  Click on the conference names to register for both events.

The Blurred Lines of Memory and Storage – A Q&A

The lines are blurring as new memory technologies challenge the way we build and use storage to meet application demands. That’s why the SNIA Networking Storage Forum (NSF) hosted a “Memory Pod” webcast in our series, “Everything You Wanted to Know about Storage, but Were Too Proud to Ask.” If you missed it, you can watch it on-demand here, along with the presentation slides. We also promised to answer the questions we received; here is one example.

Q: Do tools exist to do secure data overwrite for security purposes?

A: The most popular tools rely on cryptographic protection (encryption) of the data, where you can effectively erase the data by throwing away the keys. There are a number of technologies available; for example, the usual ones like BitLocker (part of Windows 10, for example), where the NVDIMM-P is tied to a specific motherboard. There are others where the data is encrypted as it is moved from NVDIMM DRAM to flash for the NVDIMM-N type. Other forms of persistent memory may offer their own solutions. SNIA is working on a security model for persistent memory, and there is a presentation on our work here. Read More
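As a conceptual sketch of the crypto-erase idea described in the answer above (not any particular product's implementation; it assumes the third-party Python cryptography package purely for illustration), data is only ever stored encrypted, so discarding the key effectively erases it:

```python
# Conceptual crypto-erase sketch (illustrative only, not a product's design).
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()                        # key held separately from the media
stored = Fernet(key).encrypt(b"persistent data")   # only ciphertext lands on the device

print(Fernet(key).decrypt(stored))                 # readable while the key exists

key = None   # "secure overwrite": throw away the key; the ciphertext
             # remaining on the media can no longer be decrypted
```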

Your Questions Answered – Now You Can Be a Part of the Real World Workload Revolution!

The SNIA Solid State Storage Initiative would like to thank everyone who attended our webcast: How To Be Part of the Real World Workload Revolution.  If you haven’t seen it yet, you can view the on demand version here.  You can find the slides here.

Eden Kim and Jim Fister led a discussion of the testmyworkload (TMW) tool and data repository, explaining how a collection of real-world workload captures can revolutionize the design and configuration of hardware, software, and systems for the industry. A new SNIA white paper, available in both English and Chinese, authored by Eden Kim with an introduction by Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis, discusses how we can all benefit by sharing traces of our digital workloads through the SNIA SSSI Real-World Workload Capture program.

In an environment where workloads are becoming more complex — and the choices of hardware configuration for solid-state storage are growing — the opportunity to better understand the characteristics of data transfers to and from the storage systems is critical.  By sharing real-world workloads on the Test My Workload repository, the industry can benefit overall in design and development at every level from SSD development to system configuration in the datacenter.

There were several questions asked in and after the webcast.  Here are some of the answers.  Any additional questions can be addressed to asksssi@snia.org.

Q: Shouldn’t real world workloads have concurrent applications?  Also, wouldn’t any SQL workloads also log or journal sequential writes?

A: Yes. Each capture shows all of the IO Streams applied to each logical storage device recognized by the OS. These IO Streams are composed of IOs generated by system activities as well as a variety of drivers, applications, and OS activities. The IOProfiler toolset allows you not only to see all of the IO Stream activity that occurs during a capture, but also to parse, or filter, the capture to see just the IO Streams (and other metrics) that are of interest.

Q: Is there any collaboration with the SNIA IOTTA Technical Work Group on workload or trace uploading?

A: While the IOTTA TWG and SSS TWG work closely together, an IO Capture is fundamentally different from an IO Trace and hence cannot be presented in the IOTTA trace repository. An IO Trace collects all of the data streams that occur during the trace capture period and results in a very large file. An IO Capture, on the other hand, captures statistics on the observed IO Streams and saves those statistics to a table. Hence, no actual personal or user data is captured in an IO Capture, only the statistics on the IO Streams. Because IO Captures are a series of record tables for individual time steps, the format is not compatible with a repository for the streaming data captured in an IO Trace.

For example, an IO Trace could record a capture period in which 50,000 RND 4K Writes and 50,000 RND 4K Reads occur, resulting in 100,000 4K transfers, or roughly 400 MB of data. An IO Capture that collects statistics, on the other hand, would simply log the fact that 50,000 RND 4K Writes and 50,000 RND 4K Reads occurred: a two-item entry in a table. Of course, the IOPS, Response Times, Queue Depths, and LBA Ranges could also be tracked, resulting in a table of 100,000 entries times the above 4 metrics, but 400,000 table entries is still far smaller than 400 MB of trace data.
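To make that size difference concrete, here is a minimal Python sketch (not the TMW/IOProfiler tooling; stream names and the random LBAs are illustrative) contrasting a trace, which logs every IO, with a capture, which only accumulates per-stream statistics:

```python
# Minimal sketch: contrast logging every IO (a trace) with accumulating
# per-stream statistics (a capture).
import random
from collections import defaultdict

IOS = 100_000  # e.g. 50,000 RND 4K reads + 50,000 RND 4K writes

trace = []                                                # one record per IO
capture = defaultdict(lambda: {"count": 0, "bytes": 0})   # one row per stream

for _ in range(IOS):
    op = random.choice(["read", "write"])
    size = 4096                      # 4K transfers
    lba = random.randrange(1 << 30)
    trace.append((op, size, lba))    # the trace keeps every event
    key = (op, size, "RND")          # the capture keeps only statistics
    capture[key]["count"] += 1
    capture[key]["bytes"] += size

print(f"trace records: {len(trace):,}")                       # 100,000 entries
print(f"capture rows:  {len(capture)}")                       # 2 rows, one per stream
print(f"data moved:    {sum(r[1] for r in trace) / 1e6:.0f} MB")
```

Even with response times, queue depths, and LBA ranges added as columns, the capture remains a small table, while the trace grows linearly with the IO count.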

Both of these activities are useful, and the SNIA supports both.

Q: Can the traces capture a cluster workload, or just a single server?

A: IO Captures record the IO Streams observed going from user space to all logical storage devices recognized by the OS. Accordingly, for clusters, there will be an individual capture for each logical unit. Note that all logical device captures can be aggregated into a single capture for analysis with the advanced analytics offered by the commercial IOProfiler tools.
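As a rough illustration of that aggregation step (the device names and stream counts below are hypothetical, and this is not the commercial IOProfiler analytics), per-device capture tables can be merged into one cluster-wide view:

```python
# Illustrative merge of per-logical-device capture tables into one aggregate.
from collections import Counter

dev_captures = {
    "node1:/dev/pmem0": Counter({("RND write", "4K"): 50_000,
                                 ("RND read", "4K"): 20_000}),
    "node2:/dev/pmem0": Counter({("SEQ read", "128K"): 8_000}),
}

aggregate = Counter()
for capture in dev_captures.values():
    aggregate += capture             # sum IO Stream counts across devices

for (stream, bs), count in aggregate.most_common():
    print(f"{stream} {bs}: {count:,} IOs")
```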

Q: Have you seen situations where the IO size on the wire does not match what the application requested? For example, the application requests 256K but the driver chops the IO into multiple 16K transfers before sending them to storage. How would we verify this type of issue?

A: Yes, this is a common situation. Applications may generate a large block SEQ IO Stream, for example for video on demand. However, that large block SEQ IO Stream is often fragmented into concurrent RND block sizes. For example, in the Linux OS a 1MB file is often fragmented into random, concurrent 128K block sizes for transmission to and from storage, and then coalesced back into a single 1024K BS in user space.
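A small sketch of that fragment-and-coalesce behavior, using the 1 MB / 128K sizes from the example above (illustrative only; the real splitting happens in the OS block layer, not in application code):

```python
# Sketch: a 1 MB application request fragmented into 128K block-layer
# transfers and coalesced back into the original extent.
APP_REQUEST = 1024 * 1024        # 1 MB requested by the application
MAX_XFER = 128 * 1024            # assumed block-layer transfer size

def fragment(offset: int, length: int, max_xfer: int = MAX_XFER):
    """Split one large request into (offset, length) chunks."""
    chunks = []
    while length > 0:
        step = min(max_xfer, length)
        chunks.append((offset, step))
        offset += step
        length -= step
    return chunks

def coalesce(chunks):
    """Re-assemble contiguous chunks into a single extent."""
    chunks = sorted(chunks)
    start = chunks[0][0]
    total = sum(length for _, length in chunks)
    return (start, total)

parts = fragment(0, APP_REQUEST)
print(len(parts), "x", parts[0][1] // 1024, "KB chunks")   # 8 x 128 KB
print("coalesced:", coalesce(parts))                        # (0, 1048576)
```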

Q: Will you be sharing the costs for your tools or systems?

A: The tool demonstrated in the webcast is available free at testmyworkload.com (TMW).  This is done to build the repository of workloads at the TMW site.  Calypso Systems does have a set of Pro tools built around the TMW application.  Contact Calypso for specific details.

Q: Can the capture be replayed on different drives?

A: Yes.  In fact, this is one of the reasons that the tool was created.  The tool and repository of workloads are intended to be used as a way to compare drive and system performance, as well as tune software for real-world conditions.

Q: How are you tracking compressibility & duplication if the user does not turn on compression or dedupe?

A: The user must turn on compression or deduplication at the beginning of the capture to see these metrics.

Q: An end user can readily use this to see what their real-world workload looks like. But how could an SSD vendor mimic the real-world workload, or get a more “real-world-like” workload for use in common benchmarking tools like fio and Sysbench?

A: The benchmarking tools mentioned generate synthetic workloads and write a predictable stream to and from the drive. IO Captures are ideally run as a replay test that recreates the sequence of changing IO Stream combinations and Queue Depths observed during the capture. While the Calypso toolset can do this automatically, free benchmark tools like fio and sysbench may not be able to change QDs and IO Stream combinations from step to step in a test script. However, the IO Capture also provides a cumulative workload that lists the dominant IO Streams and their percentage of occurrence. This list of dominant IO Streams can be used with fio or sysbench to create a synthetic composite IO Stream workload.
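As one hedged example of that last step, the sketch below turns a hypothetical list of dominant IO Streams into an fio job file, using fio's rate_iops option to apportion a target IOPS load across the streams; the stream mix, percentages, and target device are made up for illustration:

```python
# Rough sketch: generate an fio job file from a list of dominant IO Streams.
# Stream list, target IOPS, and target device are hypothetical.
dominant_streams = [
    {"name": "rnd_4k_write",  "rw": "randwrite", "bs": "4k",   "pct": 0.40},
    {"name": "rnd_8k_read",   "rw": "randread",  "bs": "8k",   "pct": 0.35},
    {"name": "seq_128k_read", "rw": "read",      "bs": "128k", "pct": 0.25},
]
TARGET_IOPS = 50_000
TARGET_FILE = "/dev/nvme0n1"     # device or file under test (assumption)

lines = [
    "[global]",
    "ioengine=libaio",
    "direct=1",
    "time_based",
    "runtime=120",
    f"filename={TARGET_FILE}",
    "",
]
for s in dominant_streams:
    lines += [
        f"[{s['name']}]",
        f"rw={s['rw']}",
        f"bs={s['bs']}",
        f"rate_iops={int(TARGET_IOPS * s['pct'])}",   # stream's share of the load
        "iodepth=8",
        "",
    ]

with open("composite_workload.fio", "w") as job:
    job.write("\n".join(lines))
print("wrote composite_workload.fio -- run with: fio composite_workload.fio")
```

This composite approximates the cumulative stream mix but, as noted above, does not reproduce the step-by-step changes in stream combinations and queue depths that a true replay provides.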

Q: Is it possible to use the tool to track CPU State such as IOWAIT or AWAIT based on the various streams?

A: Yes, IO Captures contain statistics on CPU usage such as CPU System Usage %, CPU IO Wait, CPU User usage, etc.

Q: Can we get more explanation of demand intensity and comparison to queue depth?

A: Demand Intensity (DI) refers to the outstanding IOs at a given level of the software/hardware stack. It may be referred to simply as the outstanding Queue Depth (QD), or as the number of outstanding Threads (TC) and QD. The relevance of TC depends on where in the stack you are measuring the DI. The QD varies from level to level and depends on what each layer of abstraction is doing. Usually, attention is paid to the IO Scheduler and the total outstanding IOs at the block IO level. Regardless of nomenclature, it is important to understand the DI as your workload traverses the IO stack, and to be able to minimize bottlenecks due to high DI.
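For a concrete, deliberately simplified view of DI at a single layer, the sketch below computes outstanding IOs over time from per-IO submit and completion timestamps; the timestamps are invented for illustration:

```python
# Sketch: outstanding IOs (one view of Demand Intensity / queue depth) at a
# single layer, derived from per-IO submit and complete timestamps.
ios = [  # (submit_time_ms, complete_time_ms) -- illustrative values
    (0.0, 1.2), (0.1, 0.9), (0.2, 2.5), (0.9, 1.8), (1.0, 3.0), (2.6, 3.1),
]

events = []
for submit, complete in ios:
    events.append((submit, +1))      # IO enters the queue
    events.append((complete, -1))    # IO completes

depth, peak = 0, 0
for _, delta in sorted(events):
    depth += delta
    peak = max(peak, depth)

print("peak outstanding IOs:", peak)   # highest DI seen at this layer
```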

Q: Do these RWSW application traces include non-media command percentages, such as Identify and Read Log Page (SMART), sleep states, etc.? Depending on the storage interface and firmware, these can adversely affect performance/QoS.

A: IO Capture metrics cover the IO Streams at the logical storage level and thus do not include protocol-level commands. Non-performance IO commands such as TRIMs can be recorded, and SMART logs can be tracked if access to the physical storage is provided.

Q: Isn’t latency a key performance metric for these workloads, so that collecting only a 2-minute burst might not show latency anomalies?

A: IO Captures average the statistics over a selected time window. Each individual IO Stream and its metrics are recorded and tabulated, but the time-window average is what is displayed on the IO Stream map. The min and max response times over the 2-minute window are displayed, but individual IO latencies are not. To track IO bursts, the time-window resolution should be set to a narrow range, such as 100 ms or less, in order to distinguish IO bursts from host idle times.
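The effect of the window size can be seen in a small sketch: the same invented IO arrival times averaged over one long window look steady, while 100 ms windows expose the burst:

```python
# Sketch: why window size matters when looking for IO bursts.
# Arrival times (seconds) are illustrative, not captured data.
arrivals = [0.01 * i for i in range(100)]           # steady: 100 IOs over 1 s
arrivals += [5.0 + 0.0001 * i for i in range(400)]  # burst: 400 IOs in 40 ms

def iops_per_window(arrivals, window_s, span_s=10.0):
    """Bucket arrivals into fixed windows and return IOPS per window."""
    buckets = [0] * int(span_s / window_s)
    for t in arrivals:
        buckets[int(t / window_s)] += 1
    return [count / window_s for count in buckets]

print("one 10 s window :", iops_per_window(arrivals, 10.0))          # ~50 IOPS average
print("100 ms windows  :", max(iops_per_window(arrivals, 0.1)), "IOPS peak")
```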

SNIA at Flash Memory Summit 2019 – Your Guide Here!

SNIA technical work and education advances will play a prominent role in the program at the 2019 Flash Memory Summit, August 5-8, 2019, in Santa Clara, CA.  Over 40 speakers will present on key standards activities and education initiatives, including the first ever FMS Persistent Memory Hackathon hosted by SNIA.  Check out your favorite technology (or all), and learn what SNIA is doing in these sessions:

SNIA-At-A-Glance

  • SNIA Solid State Storage Reception
    Monday, August 5, 5:30 pm, Room 209/210
  • SNIA Standards mainstage presentation by Michael Oros, SNIA Executive Director
    Tuesday, August 6, 2:50 pm, Mission City Ballroom
  • Beer and Pizza with SNIA Experts on Persistent Memory/NVDIMM, Remote Persistent Memory/Open Fabrics, SNIA Swordfish, and more
    Tuesday, August 6, 7:15 pm – 9:00 pm, Ballrooms A-C
  • SNIA Solid State Storage Initiative booth #820 featuring Persistent Memory demos and Performance, Computational Storage, and SNIA Swordfish discussions
    Tuesday, August 6, 4:00 pm – 7:00 pm; Wednesday, August 7, noon – 7:00 pm; and Thursday, August 8, 10:00 am – 2:30 pm, Exhibit Hall

Persistent Memory

  • SNIA Persistent Memory Programming Tutorial and Introduction to the FMS Persistent Memory Hackathon hosted by SNIA
    Learn how programming persistent memory works and get started on your own “hacks”
    Monday, August 5, 1:00 pm – 5:00 pm, Room 209/210
  • Persistent Memory Hackathon hosted by SNIA
    Bring your laptop and drop by anytime over the two days. SNIA persistent memory experts will support software developers in a live coding exercise to better understand the various tiers and modes of persistent memory and explore existing best practices.
    Tuesday, August 6 and Wednesday, August 7, 8:30 am – 7:00 pm, Great America Ballroom Foyer
  • Persistent Memory Track sessions sponsored by SNIA, JEDEC, and Open Fabrics Alliance
    See experts speak on Advances in Persistent Memory and PM Software and Applications in sessions PMEM-101-1 and PMEM-102-1
    Tuesday, August 6, 8:30 am – 10:50 am in Ballroom E and 3:40 pm – 6:00 pm in Great America Ballroom J
  • Persistent Memory Track sessions sponsored by SNIA, JEDEC, and Open Fabrics Alliance
    The track continues with sessions on Remote Persistent Memory and the latest research in the field in sessions PMEM-201-1 and PMEM-202-1
    Wednesday, August 7, 8:30 am – 10:50 am and 3:20 pm – 5:45 pm, Great America Meeting Room 3

Computational Storage

  • Don’t miss the first ever Computational Storage track at FMS. This SNIA-sponsored day features expert presentations and panels on Controllers and Technology, Deploying Solutions, Implementation Methods, and Applications (COMP-301A-1; COMP-301B-1; COMP-302A-1; COMP-302B-1)
    Thursday, August 8, 8:30 am – 10:50 am and 3:20 pm – 5:45 pm, in Ballroom A

Form Factors

  • Learn what the SFF TA Technical Work Group has been doing in the session New Enterprise and Data Center SSD Form Factors (SSDS-201B-1)
    Wednesday, August 7, 9:45 am – 10:50 am, in Great America Ballroom K

SNIA Swordfish

  • Hear an update on Storage Management with Swordfish APIs for Open-Channel SSDs in session SOFT-201-1
    Wednesday, August 7, 9:45 am – 10:50 am, in Ballroom F

Object Drives

  • Learn about Standardization for a Key Value Interface Underway at NVM Express and SNIA in session NVME-201-1
    Wednesday, August 7, 8:30 am – 9:35 am, in Great America Meeting Room 2