An update from SNIA on Storage: Computational Storage was definitely the “buzz” at SDC! Session slides can now be downloaded. The BoF attracted over 50 attendees, and a face-to-face meeting (with WebEx) is set for October 11 at the SNIA Technology Center in Colorado Springs. Go to snia.org/computational to sign up for the meeting to plan the mission and charter for the SNIA Computational Storage Technical Work Group. SNIA members and non-members alike are welcome to participate in this phase of the TWG.
The SNIA Storage Developer Conference (SDC) is running September 24-27, 2018 at the Hyatt Regency in Santa Clara, CA. Registration is open, and the agenda is live!
SNIA On Storage is teaming up with the SNIA Technical Council to dive into major themes of the 2018 conference. The SNIA Technical Council takes a leadership role to develop the content for each SDC, so SNIA on Storage spoke with Mark Carlson, SNIA Technical Council Co-Chair and Principal Engineer, Industry Standards, Toshiba Memory America, to understand why SDC is bringing Computational Storage to conference attendees.
SNIA On Storage (SOS): Just in the last few weeks, there’s been a tremendous buzz about “computational storage”. Is this a new kid on the block?
Mark Carlson (MC): We all know the classic architecture of a computer: a host with CPU and memory, plus attached networking, storage, and peripherals like graphics cards and FPGAs, often connected by a PCI Express (PCIe) bus. These systems operate at very high speeds, with low latency and high throughput. Now, what would happen if we took some of the computational capabilities of a typical host and put them on the other side of the PCIe bus? You’d have a “computational peripheral”.
SOS: What could you do with a “computational peripheral”?
MC: One use case is to treat the computational peripheral as an enhanced storage device on the PCIe bus. For example, even though this peripheral may not have any solid state storage on it, you could place it between the traditional host and a solid state drive (SSD). The computational peripheral could act as a compressor, taking uncompressed data from the host and compressing it as it’s being sent to the SSD. The system would then require fewer, or smaller, SSDs.
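To make the data-reduction effect concrete, here is a minimal sketch in Python using the standard zlib library. It only models the transformation such a compressor peripheral would perform inline; the sample data and function name are invented for illustration:

```python
import zlib

def compress_for_ssd(uncompressed: bytes, level: int = 6) -> bytes:
    """Stand-in for the inline compression a computational peripheral
    would perform on data flowing from the host to the SSD."""
    return zlib.compress(uncompressed, level)

# Highly compressible host data (e.g., logs or telemetry).
host_data = b"sensor_reading=42;" * 10_000
ssd_payload = compress_for_ssd(host_data)

ratio = len(host_data) / len(ssd_payload)
print(f"{len(host_data)} bytes from host -> {len(ssd_payload)} bytes to SSD "
      f"({ratio:.0f}x reduction)")
```

With compression happening transparently on the data path, the host writes uncompressed data as usual while the drive stores far fewer bytes.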
SOS: What if you wanted to do compression and decompression in the SSD?
MC: A computational peripheral could be combined with the SSD to do compression/decompression within the drive itself. The advantage would be additional functionality from a single device; the disadvantage is that the computational resource could only serve that one SSD. Any data needing to be compressed would have to go to an SSD with computational capability, and that capability would only be available while that SSD was in use.
SOS: How else do you see computational storage being used?
MC: Another useful application for computational storage is offloading encryption and decryption, which could settle the long-running debates about whether this functionality belongs in the drive or the host. Secure computational storage can create a trusted “security box” around the drive by optimizing for encryption and decryption. Another application is analytics offload, where you move the computation to the data rather than pulling the data into the host. Because of the low latency, you would get results much faster.
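The analytics-offload idea can be sketched in a few lines of Python. This is purely illustrative: the record format is invented, and the "drive-side" filter is just a list comprehension standing in for computation that would run on the peripheral. The point is how much less data crosses the bus when the predicate runs at the drive:

```python
# Invented record format for illustration: 100,000 small telemetry records.
records = [f"id={i},temp={20 + i % 15}".encode() for i in range(100_000)]

# Host-side analytics: every record crosses the PCIe bus to the host.
bytes_to_host_naive = sum(len(r) for r in records)

# Computational-storage analytics: the drive applies the predicate
# itself and returns only the matching records.
matches = [r for r in records if b"temp=34" in r]
bytes_to_host_offloaded = sum(len(r) for r in matches)

print(f"naive: {bytes_to_host_naive} bytes, "
      f"offloaded: {bytes_to_host_offloaded} bytes")
```

Only a small fraction of the records satisfy the predicate, so the offloaded query moves a correspondingly small fraction of the bytes, which is where the latency and bandwidth win comes from.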
SOS: So why have we not seen computational storage in computer systems before now?
MC: What enables computational storage is the PCIe bus and the NVMe standard. The NVMe interface allows you to move massive amounts of data across the PCIe bus, so leveraging NVMe and PCIe makes this whole ecosystem of computational storage possible.
SOS: How will SNIA play into this?
MC: SNIA has created a new provisional Technical Work Group, or TWG, to understand, educate, and perhaps develop standards for this new paradigm of computer architecture. Anyone can join this group at snia.org/computational. We have an open mailing list, with over 40 companies to date, working on the charter and scope of work for the TWG. Once we form the TWG, you will need to be a SNIA member to join. You can contact Marty Foltyn for details on how to join SNIA.
SOS: How can I learn more about computational storage?
MC: If you are in Silicon Valley, the easiest way is to come to our open Birds of a Feather session at the SNIA Storage Developer Conference on Monday, September 24 at 7:00 pm in the Cypress room of the Hyatt Regency Santa Clara. We’ll have leaders from four companies involved in computational storage, along with TWG members, available for a lively discussion on where we are and where we need to go. No badge is needed, just come on by.
SOS: Will you have a talk at SDC?
MC: In my talk, Datacenter Management of NVMe Drives, I’ll envision a computational storage peripheral doing SNIA Swordfish™ management: tying into Swordfish access to all the PCIe and NVMe devices, for both computation and storage, to present a comprehensive view of any systems composed out of those peripherals. I’ll talk about what you could do with this, for example using Swordfish in-band to the NVMe device. If the drive used NVMe over Ethernet, a regular HTTP port could serve the Redfish storage schema to anyone who wants to manage it that way.
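Redfish (and Swordfish, which extends it) exposes management data as JSON over HTTP, so a management client is essentially walking a tree of JSON resources. The sketch below parses a simplified, illustrative payload loosely modeled on a Redfish Storage resource; the specific paths and values are invented for this example, not taken from a real device:

```python
import json

# Simplified, illustrative payload loosely modeled on the Redfish
# Storage schema; the resource paths and values here are invented.
redfish_storage = json.loads("""
{
  "@odata.id": "/redfish/v1/Systems/1/Storage/NVMe0",
  "Id": "NVMe0",
  "Name": "Computational NVMe Storage",
  "Drives": [
    {"@odata.id": "/redfish/v1/Systems/1/Storage/NVMe0/Drives/0"}
  ]
}
""")

# A management client discovering the drives behind this controller
# would follow the "@odata.id" links in the Drives collection.
drive_paths = [d["@odata.id"] for d in redfish_storage["Drives"]]
print(redfish_storage["Name"], "->", drive_paths)
```

In a real deployment, each of those `@odata.id` paths would be fetched over HTTP(S) from the device's management port to drill into the individual drives.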
SOS: What other talks should I make sure to see?
MC: At SDC next week, we will have six other sessions touching on various aspects of computational storage. On Monday, September 24, check out these three sessions:
- FPGA Accelerator Disaggregation using NVMe over Fabrics will cover how NVMe over Fabrics connections allow servers to share accelerators on demand.
- Accelerating Storage with NVM Express SSDs and P2PDMA will show how these systems outperform their conventional counterparts and lead to lower cost and lower power designs.
- A Comparison of In-storage Processing Architectures and Technologies will analyze the in-storage processing trend, compare different architectures, present a roadmap, and list application use cases.
On Tuesday, a general session on Compute and Storage Innovation Combine to Provide a Pathway to Composable Architecture will examine how best to tie the traditional software and hardware layers together for scale and use. Also, look for these sessions later in the day:
- FPGA-Based ZLIB/GZIP Compression Engine as an NVMe Namespace will look at how off-loading compression from processors to FPGAs can free up valuable CPU time, reduce compression time and power consumption, improve resource utilization, and lower operating costs.
- Deployment of In-storage Compute with NVMe Storage at Scale and Capacity will discuss a paradigm shift to In-Storage Compute: a simple, scalable, low-power approach that gives developers intelligent storage and breaks free of the constraints carried over from traditional rotating-media architectures.
SOS: Looks like I have my week cut out for me, keeping up with computational storage at SDC!
MC: Absolutely. I look forward to many interesting “hallway track” discussions every year!