Category: High Performance Computing
It’s a Wrap! SNIA’s 20th Storage Developer Conference a Success!
Reviews are in for the 20th Storage Developer Conference (SDC) and they are thumbs up! The 2017 SDC was the largest ever, expanding to four full days with seven keynotes, five SNIA Tutorials, and 92 sessions. The SNIA Technical Council, which oversees conference content, compiled a rich agenda of 18 topic categories focused on… Read More
SNIA Storage Developer Conference-The Knowledge Continues
SNIA’s 18th Storage Developer Conference is officially a success, with 124 general and breakout sessions; Cloud Interoperability, Kinetic Storage, and SMB3 plugfests; ten Birds-of-a-Feather sessions; and amazing networking among 450+ attendees. Sessions on NVMe over Fabrics won the title of most attended, but Persistent Memory, Object Storage, and Performance were right behind. Many thanks to the SDC 2016 sponsors, who engaged attendees in exciting technology discussions.
For those not familiar with SDC, this technical industry event is designed for storage technologists at all levels, from developers to architects to product managers and more. And, true to SNIA’s commitment to educating the industry on current and future disruptive technologies, SDC content is now available to all – whether you attended or not – for download and viewing.
You’ll want to stream keynotes from Citigroup, Toshiba, DSSD, Los Alamos National Labs, Broadcom, Microsemi, and Intel – they’re available now on demand on SNIA’s YouTube channel, SNIAVideo.
All SDC presentations are now available for download, and over the next few months you can continue to download SDC podcasts, which combine audio and slides. The first podcast from SDC 2016 – on hyperscalers – as well as all 2015 SDC podcasts, is available here, and more will be added in the coming weeks.
SNIA thanks all its members and colleagues who contributed to making SDC a success! A special thanks goes out to the SNIA Technical Council, a select group of acknowledged industry experts who work to guide SNIA technical efforts. In addition to driving the agenda and content for SDC, the Technical Council oversees and manages SNIA Technical Work Groups, reviews architectures submitted by Work Groups, and serves as SNIA’s technical liaison to standards organizations. Learn more about these visionary leaders at http://www.snia.org/about/organization/tech_council.
And finally, don’t forget to mark your calendars now for SDC 2017 – September 11-14, 2017, again at the Hyatt Regency Santa Clara. Watch for the Call for Presentations to open in February 2017.
Podcasts Bring the Sounds of SNIA’s Storage Developer Conference to Your Car, Boat, Train, or Plane!
SNIA’s Storage Developer Conference (SDC) offers exactly what a developer of cloud, solid state, security, analytics, or big data applications is looking for – rich technical content delivered in a vendor-neutral manner by today’s leading technologists. The 2016 SDC agenda is being compiled, but now you can get a “sound bite” of what to expect by downloading SDC podcasts via iTunes, or by visiting the SDC Podcast site at http://www.snia.org/podcasts to download the accompanying slides and/or listen to the MP3 version.
Each podcast was selected by the SNIA Technical Council from the 2015 SDC event, and topics include:
- Preparing Applications for Persistent Memory from Hewlett Packard Enterprise
- Managing the Next Generation Memory Subsystem from Intel Corporation
- NVDIMM Cookbook – a Soup to Nuts Primer on Using NVDIMMs to Improve Your Storage Performance from AgigA Tech and Smart Modular Systems
- Standardizing Storage Intelligence and the Performance and Endurance Enhancements It Provides from Samsung Corporation
- Object Drives, a New Architectural Partitioning from Toshiba Corporation
- Shingled Magnetic Recording – the Next Generation of Storage Technology from HGST, a Western Digital Company
- SMB 3.1.1 Update from Microsoft
Eight podcasts are now available, with new ones added each week all the way up to SDC 2016, which begins September 19 at the Hyatt Regency Santa Clara. Keep checking the SDC Podcast website, and remember that registration is now open for the 2016 event at http://www.snia.org/events/storage-developer/registration. The SDC conference agenda will be up soon on the home page of http://www.storagedeveloper.org.
Enjoy these great technical sessions, no matter where you may be!
OpenStack File Services for HPC Q&A
We got some great questions during our Webcast on how OpenStack can consume and control file services appropriate for High Performance Computing (HPC) in a cloud and multi-tenanted environment. Here are answers to all of them. If you missed the Webcast, it’s now available on-demand. I encourage you to check it out and please feel free to leave any additional questions at this blog.
Q. Presumably we can use filesystems other than ZFS for the underlying filesystems in Lustre?
A. Yes, there are plenty of other filesystems that can be used besides ZFS. ZFS was given as an example of a modern, scale-up filesystem that has recently been integrated, but essentially you can use most filesystem types, with some having more advantages than others. What you are looking for is a filesystem that addresses the weaknesses of Lustre in terms of self-healing and scale-up. So any filesystem that allows you to easily grow capacity whilst also being capable of protecting itself would be a reasonable choice. Remember, Lustre doesn’t do anything to protect the data itself; it simply places objects in a distributed fashion across the Object Storage Targets.
Q. Are there any other HPC filesystems besides Lustre?
A. Yes there are, and depending on your exact requirements Lustre might not be appropriate. Gluster is an alternative that some have found slightly easier to manage and that provides some additional functionality. IBM has GPFS, which has been implemented as an HPC filesystem, and other vendors have their scale-out filesystems too. An HPC filesystem is simply a scale-out filesystem capable of very good throughput with low latency. So under that definition a flash array could be considered a high-performance storage platform, as could a scale-out NAS appliance with some fast disks. It’s important to understand your workload’s characteristics and demands before making the choice, as each system has pros and cons.
Q. Does “embarrassingly parallel” require bandwidth or latency from the storage system?
A. Depending on the workload characteristics it could require both. Bandwidth is usually the first demand, though, as data is shipped to the nodes for processing. Obviously, the lower the latency the faster jobs can start and run, but it’s not critical, since there is little of the inter-node communication that normally drives the low-latency demand.
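To make that distinction concrete, here is a minimal Python sketch of an embarrassingly parallel job; the chunk paths and the computation are purely illustrative. Each worker streams its own chunk from storage and never talks to the other workers, so it is aggregate read bandwidth, not request latency, that the storage system must deliver.

```python
from multiprocessing import Pool

# Hypothetical input chunks; in a real HPC job these would be files or objects
# sitting on the parallel filesystem. Each task owns exactly one chunk.
CHUNKS = [f"/scratch/input/chunk_{i:04d}.dat" for i in range(64)]

def process(path):
    """Read one chunk and compute on it with no inter-task communication,
    which is what makes the job 'embarrassingly parallel'."""
    with open(path, "rb") as f:
        data = f.read()      # one large sequential read: storage bandwidth dominates
    return sum(data) % 251   # stand-in for the real computation

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(process, CHUNKS)   # tasks never exchange messages
    print(f"processed {len(results)} chunks")
```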
Q. Would you suggest using Object Storage for NFV, i.e., Telco applications?
A. I would for some applications. The problem with NFV is that it actually captures a surprising breadth of applications, some of which have very limited data storage needs. For example, there is little need for storage in a packet-switching environment beyond the OS and binaries needed to stand up the VMs. In this case, object is a very good fit, as it can easily be geographically distributed, ensuring the same networking function is delivered in the same manner everywhere. Other applications that require access to filtered data (so maybe billing-based applications or content distribution) would also be good candidates.
Q. I missed something in the middle; please clarify: your suggestion is to use ZFS (on Linux) for the local file system on OSTs?
A. Yes, this was one example, and one where some work has recently been done in the Lustre community. This affords the OSSs the capability of scaling capacity upwards as well as offering the RAID-like protection and self-healing that come with ZFS. Other filesystems can offer some of those same things, so I am not suggesting it is the only choice.
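As a rough illustration of what that looks like in practice, the sketch below formats a ZFS-backed OST with mkfs.lustre. The filesystem name, MGS NID, OST index, pool name, and device paths are all assumptions for the example, not a recipe for any particular deployment.

```python
import subprocess

# Illustrative values only: filesystem name, MGS NID, OST index, ZFS pool/dataset
# and device paths are assumptions, not a recommendation for a real deployment.
FSNAME = "hpcfs"
MGS_NID = "mgs01@tcp"
OST_INDEX = 0

def format_zfs_backed_ost():
    """Format an OST whose backing store is a mirrored ZFS pool, giving the OSS
    scale-up capacity plus the self-healing and RAID-like protection of ZFS."""
    cmd = [
        "mkfs.lustre", "--ost",
        "--backfstype=zfs",
        f"--fsname={FSNAME}",
        f"--index={OST_INDEX}",
        f"--mgsnode={MGS_NID}",
        f"ostpool/ost{OST_INDEX}",           # ZFS pool/dataset to create for this OST
        "mirror", "/dev/sdb", "/dev/sdc",    # vdev spec: mirrored pair for self-healing
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    format_zfs_backed_ost()
```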
Q. Why would someone want/need scale-up, when they can scale-out?
A. This can often come down to funding. A lot of HPC environments exist in academic institutions that rely on grant funding and sponsorship to expand their infrastructure. Sometimes it simply isn’t feasible to buy extra servers in order to add capacity, particularly if there is already performance headroom. It might also be the case that rack space, power, and cooling are factors, in which case adding drives to cope with bigger workloads might be the only option. You do need to consider whether the additional capacity would also drive the need for better performance, so we can’t just assume that adding disk is enough, but it’s certainly a good option and a requirement I have seen a number of times.
OpenStack File Services Options
How can OpenStack consume and control file services appropriate to High Performance Computing (HPC) in a cloud and multi-tenanted environment? Find out on September 22nd when SNIA Cloud hosts a live Webcast examining two approaches to integration.
One approach is to have OpenStack manage the storage infrastructure services using Cinder, Nova and Neutron to provide HPC Filesystem as a Service.
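As a rough sketch of this first approach, the snippet below uses the openstacksdk cloud layer to boot an OSS instance and attach a Cinder volume that could back an OST. The cloud profile, image, flavor, network, and volume size are illustrative assumptions only.

```python
import openstack

# Illustrative names only: the cloud profile, image, flavor, network, and volume
# size are assumptions; substitute whatever exists in your own tenant.
conn = openstack.connect(cloud="hpc-cloud")

# Nova boots the OSS node, Neutron supplies the tenant network, and Cinder
# provides the block volume that will back an OST filesystem on that node.
server = conn.create_server(
    name="oss-01",
    image="centos-lustre",
    flavor="m1.xlarge",
    network="hpc-net",
    wait=True,
)
volume = conn.create_volume(size=512, name="ost-0000", wait=True)
conn.attach_volume(server, volume, wait=True)
print(f"attached {volume.name} to {server.name}")
```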
A second option is to use Manila file services for OpenStack to control the HPC filesystem deployment and manage the exports, etc. This part also looks at the Lustre Manila driver, which is currently in development.
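A minimal sketch of the second approach, driving Manila from its CLI, is below; the share name, size, and client subnet are illustrative, and it assumes OpenStack credentials are already loaded into the environment.

```python
import subprocess

def manila(*args):
    """Run a Manila CLI command; assumes OpenStack credentials are already
    loaded into the environment (e.g. via an openrc file)."""
    subprocess.run(["manila", *args], check=True)

# Illustrative share name, size (GB), and client subnet.
manila("create", "NFS", "10",
       "--name", "hpc-scratch",
       "--share-type", "default")

# Manila manages the export; the HPC compute nodes simply mount the
# export path it hands back over NFS.
manila("access-allow", "hpc-scratch", "ip", "10.0.0.0/24",
       "--access-level", "rw")
```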
I hope you’ll join Alex McDonald and me as we discuss the pros and cons of each approach. Register today.