    Latency Budgets for Solid State Storage Access

    March 7th, 2017

    New solid state storage technologies are forcing the industry to refine distinctions between networks and other types of system interconnects.  The question on everyone’s mind is: when is it beneficial to use networks to access solid state storage, particularly persistent memory?

    It’s not quite as simple as a “yes/no” answer. The answer to this question involves application, interconnect, memory technology and scalability factors that can be analyzed in the context of a latency budget.

    On April 19th, Doug Voigt, Chair of the SNIA NVM Programming Model Technical Work Group, returns for a live SNIA Ethernet Storage Forum webcast, “Architectural Principles for Networked Solid State Storage Access – Part 2,” where we will explore latency budgets for various types of solid state storage access. These budgets can be used to determine which combinations of interconnects, technologies, and scales are compatible with Load/Store instruction access and which are better suited to IO completion techniques such as polling or blocking.

    In this webcast you’ll learn:

    • Why latency is important in accessing solid state storage
    • How to determine the appropriate use of networking in the context of a latency budget
    • Do’s and don’ts for Load/Store access

    This is a technical seminar built upon part 1 of this series. If you missed it, you can view it on demand at your convenience. It will give you a solid foundation on this topic, outlining key architectural principles that allow us to think about the application of networked solid state technologies more systematically.

    I hope you will register today for the April 19th event. Doug and I will be on hand to answer questions on the spot.


    SNIA Activities in Security, Containers, and File Storage on Tap at Three Bay Area Events

    February 14th, 2017

    SNIA will be out and about in February in San Francisco and Santa Clara, CA, focused on its security, container, and file storage activities.

    February 14-17 2017, join SNIA in San Francisco at the RSA Conference in the OASIS Interop: KMIP & PKCS11 booth S2115. OASIS and SNIA member companies will be demonstrating OASIS Key Management Interoperability Protocol (KMIP) through live interoperability across all participants. … Continue reading


    SNIA Recognizes Outstanding Individual and Group Contributors

    February 2nd, 2017

    The backbone of SNIA is its passionate and dedicated volunteers – over 3,500 from 160 companies involved in storage and technology.  At the end of each year, SNIA members vote anonymously to recognize both individuals and groups who have made significant contributions over that year to advancing SNIA’s mission to lead the storage industry worldwide in developing and promoting vendor-neutral architectures, standards, and educational services … Continue reading


    Attend Live – or Live Stream – SNIA’s Persistent Memory Summit January 18

    January 12th, 2017

    by Marty Foltyn

    SNIA’s Persistent Memory Summit makes its fifth annual appearance in Silicon Valley next Wednesday, January 18, and if you are in the vicinity of the Westin San Jose, you owe it to yourself to check it out.

    SNIA is well known for its technology-focused, no vendor-hype conferences, and this one-day event will feature 12 presentations and two panels that will “level set” … Continue reading


    Cast Your Vote on November 8 for the Magic and Mystery of In-Memory Apps!

    November 2nd, 2016

    It’s an easy “Yes” vote for this great webcast from the SNIA Solid State Storage Initiative on the Magic and Mystery of In-Memory Apps! Join us on Election Day – November 8 – at 1:00 pm ET/10:00 am PT to learn about today’s market and the disruptions that happen when combining big data (Petabytes) with in-memory/real-time requirements. You’ll understand the interactions with Hadoop/Spark, Tachyon, SAP HANA, NoSQL, and the related infrastructure of DRAM, NAND, 3D XPoint, NVDIMMs, and high-speed networking, and learn what happens to infrastructure design and operations when “tiered memory” replaces “tiered storage”.

    Presenter Shaun Walsh of G2M Communications is an expert in memory technology – and a great speaker! He’ll share with you what you need to know about evaluating, planning, and implementing in-memory computing applications, and give you the framework to evaluate and plan for your adoption of in-memory computing.

    Register at: https://www.brighttalk.com/webcast/663/230103


    SNIA Puts the You in YouTube

    October 26th, 2016

    Did you know that SNIA has a YouTube channel? SNIAVideo is the place designed for You to visit for the latest technical and educational content – all free to download – from SNIA thought leaders and events.

    Our latest videos cover a wide range of topics discussed at last month’s SNIA Storage Developer Conference.  Enjoy The Ride Cast video playlist where industry expert Marc Farley … Continue reading


    SNIA Storage Developer Conference – The Knowledge Continues

    October 13th, 2016

    SNIA’s 18th Storage Developer Conference is officially a success, with 124 general and breakout sessions; Cloud Interoperability, Kinetic Storage, and SMB3 plugfests; ten Birds-of-a-Feather sessions; and amazing networking among 450+ attendees. Sessions on NVMe over Fabrics won the title of most attended, but Persistent Memory, Object Storage, and Performance were right behind. Many thanks to SDC 2016 Sponsors, who engaged attendees in exciting technology … Continue reading


    Flash Memory Summit Highlights SNIA Innovations in Persistent Memory & Flash

    July 28th, 2016

    SNIA and the Solid State Storage Initiative (SSSI) invite you to join them at Flash Memory Summit 2016, August 8-11 at the Santa Clara Convention Center. SNIA members and colleagues receive $100 off any conference package using the code “SNIA16” by August 4 when registering for Flash Memory Summit at http://www.flashmemorysummit.com.

    On Monday, August 8, from 1:00 pm – 5:00 pm, a SNIA Education Afternoon will be open to the public in SCCC Room 203/204, where attendees can learn about multiple storage-related topics with five SNIA Tutorials on flash storage, combined service infrastructures, VDBench, stored-data encryption, and Non-Volatile DIMM (NVDIMM) integration from SNIA member speakers.

    Following the Education Afternoon, the SSSI will host a reception and networking event in SCCC Room 203/204 from 5:30 pm – 7:00 pm with SSSI leadership providing perspectives on the persistent memory and SSD markets, SSD performance, NVDIMM, SSD data recovery/erase, and interface technology. Attendees will also be entered into a drawing to win solid state drives.

    SNIA and SSSI members will also be featured during the conference in the following sessions:

    • Persistent Memory (Preconference Session C)
      NVDIMM presentation by Arthur Sainio, SNIA NVDIMM SIG Co-Chair (SMART Modular)
      Monday, August 8, 8:30 am – 12:00 noon
    • Data Recovery of SSDs (Session 102-A)
      SIG activity discussion by Scott Holewinski, SSSI Data Recovery/Erase SIG Chair (Gillware)
      Tuesday, August 9, 9:45 am – 10:50 am
    • Persistent Memory – Beyond Flash, sponsored by the SNIA SSSI (Forum R-21)
      Chairperson: Jim Pappas, SNIA Board of Directors Vice-Chair/SSSI Co-Chair (Intel); papers presented by SNIA members Rob Peglar (Symbolic IO), Rob Davis (Mellanox), Ken Gibson (Intel), Doug Voigt (HP), and Neal Christensen (Microsoft)
      Wednesday, August 10, 8:30 am – 11:00 am
    • NVDIMM Panel, organized by the SNIA NVDIMM SIG (Session 301-B)
      Chairperson: Jeff Chang, SNIA NVDIMM SIG Co-Chair (AgigA Tech); papers presented by SNIA members Alex Fuqa (HP) and Neal Christensen (Microsoft)
      Thursday, August 11, 8:30 am – 9:45 am

    Finally, don’t miss the SNIA SSSI in Expo booth #820 in Hall B and in the Solutions Showcase in Hall C on the FMS Exhibit Floor. Attendees can review a series of updated performance statistics on NVDIMM and SSD, see live NVDIMM demonstrations, access SSD data recovery/erase education, and preview a new white paper discussing erasure with regard to SSDs. SNIA representatives will also be present to discuss other SNIA programs such as certification, conformance testing, membership, and conferences.


    Principles of Networked Solid State Storage – Q&A

    June 22nd, 2016

    At this month’s SNIA Ethernet Storage Forum Webcast, “Architectural Principles for Networked Solid State Storage Access,” Doug Voigt, Chair of the SNIA NVM Programming Technical Working Group, and a member of the SNIA Technical Council, outlined key architectural principles surrounding the application of networked solid state technologies. We had a flurry of questions near the end of the Webcast that we did not have enough time to answer. Here are Doug’s answers to all the questions we received during the event:

    Q. Are there wait cycles in accessing persistent memory?

    A. It depends entirely on which persistent memory (PM) technology is being accessed and how the memory interconnect is used.  Some technologies have write times that are quite different from read times.  When using tightly timed interconnects such as DDR with those technologies it may be difficult to avoid wait cycles.

    Q. How do pmalloc and malloc share the virtual address space of the application?

    A. This is entirely up to the OS and other libraries operating within any constraints of the processor architecture-specific memory management units.  A good mental model would be fairly large regions of contiguous address space in both the physical and virtual domains, where each region will comprise a single type of memory. Capacity will be reserved for pmalloc and malloc in the appropriate regions.
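
    To make this concrete, here is a minimal C sketch of the idea. It assumes a Linux system where persistent capacity is exposed as a DAX-capable file (the path /mnt/pmem/example is made up for illustration); the mmap of that file stands in for pmalloc, since a standard pmalloc API is not yet universal (see the later question on this):

        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            /* Ordinary volatile allocation: the OS places this in a region
             * of the virtual address space backed by DRAM. */
            char *vbuf = malloc(4096);
            if (vbuf == NULL) return 1;

            /* "Persistent" allocation, sketched as a memory-mapped file on
             * a DAX-capable filesystem (path is hypothetical).  The OS maps
             * it into a different region of the same address space, backed
             * by persistent memory. */
            int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0644);
            if (fd < 0) { perror("open"); return 1; }
            if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }
            char *pbuf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
            if (pbuf == MAP_FAILED) { perror("mmap"); return 1; }

            /* One virtual address space, two regions, each backed by a
             * single type of memory. */
            printf("volatile: %p  persistent: %p\n",
                   (void *)vbuf, (void *)pbuf);

            munmap(pbuf, 4096);
            close(fd);
            free(vbuf);
            return 0;
        }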

    Q. Always flush after doing your memory-mapped IO.  Is that simply good hygiene?

    A. Not exactly. The term “memory-mapped IO” is used to reference control plane (as opposed to data plane) access. It is often reasonable to set up control plane memory as uncacheable. The need for strict ordering of accesses to physical control plane registers is so pervasive that caching is generally not useful. Uncacheable writes are always flushed by the processor, as opposed to the application.

    Generally, with memory-mapped IO devices the data plane uses direct memory access (DMA). With memory-mapped files (as opposed to memory-mapped IO), Load/Store (more commonly referred to as “Ld/St”), not DMA, is used in the data plane. Disabling caching in the data plane is generally a big performance sacrifice for small byte-range access.

    In the Ld/St datapath, strategically placed flushing is required to retain both performance and power failure recovery. The SNIA NVM Programming Model describes this type of functionality.
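
    As a rough sketch of what such strategically placed flushing can look like in C (reusing the pbuf mapping from the sketch above; msync() is the portable flush mechanism, while libraries such as PMDK’s libpmem expose an optimized user-space flush with the same intent):

        #include <string.h>
        #include <sys/mman.h>

        /* Store a record through the Ld/St data path, then flush only the
         * dirtied range so it survives power failure.  Flushing a minimal
         * range at a deliberate point preserves both performance and
         * recoverability. */
        int persist_record(char *pbuf, const char *record, size_t len) {
            memcpy(pbuf, record, len);          /* data-plane stores */
            /* pbuf comes from mmap(), so it is page-aligned as msync()
             * requires; on true persistent memory an optimized flush
             * (cache-line flush + fence) would avoid this system call. */
            return msync(pbuf, len, MS_SYNC);   /* strategic flush point */
        }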

    Q. Once NVDIMM support becomes pervasive, with support from NVMe drives in the server box, should network storage be more focused on SAS Flash or just SAS HDDs?

    A. Not necessarily. NVMe over Fabrics, Fibre Channel, and iSCSI are also types of networked storage that will likely retain significant market share relative to SAS.

    Q. Are ‘Big Data’ data warehouse applications starting to use persistent memory and persistence domain technologies in their applications?

    A. It is too early to see much of this yet. PM technologies might become a priority as a staging area for analytic applications with high ingest or checkpoint rates. NVDIMMs are likely to be too expensive to store anything “big” for quite a while.

    Q. Also, are persistent memory/domains being used in hyper-converged and converged hardware infrastructures?

    A. Persistent memory is quintessentially (Hyper-) converged.  It wouldn’t be unreasonable to expect some traction with hyper-converged solutions that experience high storage-performance demand.

    Q. What distance would you associate with 10s of microseconds?

    A. In terms of transmission delay, 10s of microseconds align with a campus or small-city scale, but the distance itself is often not the primary factor. Switching delays, transmission-line properties, and software overhead are generally bigger factors.
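
    As a back-of-the-envelope check of the transmission-delay component alone (assuming the common approximation that light travels through optical fiber at roughly 200 meters per microsecond, about 2/3 of its speed in vacuum):

        #include <stdio.h>

        int main(void) {
            const double fiber_m_per_us = 200.0;   /* ~2/3 c in fiber */
            const double budgets_us[] = { 10.0, 50.0, 100.0 };

            for (int i = 0; i < 3; i++) {
                /* Upper bound: the entire budget spent on one-way
                 * propagation.  Real paths give much of it up to
                 * switching and software overhead. */
                double km = budgets_us[i] * fiber_m_per_us / 1000.0;
                printf("%5.0f us budget -> at most %4.0f km one way\n",
                       budgets_us[i], km);
            }
            return 0;
        }

    Even a 100-microsecond budget buys at most about 20 km of pure propagation, which is why tens of microseconds map to campus or small-city scale once real overheads are subtracted.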

    Q. So latency would be the binding factor for distances…not a question, an observation.

    A. Yes, in effect, either through transmission or relay.  See above.

    Q. Aren’t there multi-threaded SSDs?

    A. Yes, but since the primary metric in this presentation is latency we ignore multi-threading.  It can enable more work to get done, but it generally increases latency rather than reducing it.

    Q. Is pmalloc in universal use?

    A. The term is starting to be recognized among developers and has been used in research. Various similar names have been used in early research prototypes, such as pmalloc in Mnemosyne and nvmalloc in SCMFS.

    Q. So how would PM help in a (stock broking) requirement, where we currently propose an RDMA or iWARP solution?

    A. With PM the answer is always lower latency. PM can be integrated like memory or like flash. RDMA network paths for both of these options were discussed in the presentation. In either case, PM is low-latency enough that networking and software overheads will completely determine performance, even when using RDMA. The performance boost from PM is greatest when it is accessed locally. If remote access is a requirement then the new work being done in the RDMA community should help.

    Q. If data stored in memory needs to be copied to a different host’s memory (for consistency), how does PM assist, or is there an extension to PM? Coherency between multiple hosts in a cluster, if you will?

    A. PM technology does not help with this; the methods of managing consistency across hosts remain unchanged by PM.  All PM offers is low latency persistence.

    Coordination across hosts or nodes in a cluster must use existing clustering techniques such as locking and quorums. In addition, the relative timescales of memory access and network communication suggest the application of asynchronous remote replication techniques used in today’s storage solutions.

    Regarding coherency, PM brings nothing new to the known techniques for managing coherency.  Classical cluster architecture must be applied outside of symmetric multi-processing coherency domains. Within coherency domains, all of the logic is above the PM level in a processor side memory controller or a software emulation of the same algorithms.


    NVM Big at Storage Developer Conference SDC Precon

    September 19th, 2015

    I’ll be speaking at SNIA’s SDC Pre-Conference this Sunday, Sept 20, about the new Intel-Micron 3D XPoint memory.  I was surprised to find that my talk won’t be unique.  There are about 15 papers at this conference that will be discussing NVM, or persistent memory.

    What’s all this fuss about?

    Part of it has to do with the introduction by Micron & Intel of their 3D XPoint (pronounced “Crosspoint”) memory.  This new product will bring nonvolatility, or persistence, to main memory, and that’s big!

    Intel itself will present a total of seven papers to tell us all how it envisions this technology being used in computing applications.  Seven companies other than Objective Analysis (my company) will also discuss this hot new topic.

    SNIA is really on top of this new trend.  This organization has been developing standards for nonvolatile memory for the past couple of years, and has published an NVM Programming Model to help software developers produce code that will communicate with nonvolatile memory no matter who supplies it.  Prior to SNIA’s intervention the market was wildly inconsistent, and all suppliers’ NVDIMMs differed slightly from one another, with no promise that this would become any better once new memory technologies started to make their way onto memory modules.

    Now that Intel and Micron will be producing their 3D XPoint memory, and will be supplying it on industry-standard DDR4 DIMMs, it’s good to know that there will be a standard protocol to communicate with it.  This will facilitate the development of standard software to harness all that nonvolatile memory has to offer.

    As for me, I will be sharing information from my company’s new report on the Micron-Intel 3D XPoint memory.  This is new, and it’s exciting.  Will it succeed?  I’ll discuss that with you there.