OpenStack Manila – A Q&A on Liberty and Mitaka

Our recent Webcast with OpenStack Manila Project Team Lead (PTL), Ben Swartzlander, generated a lot of great questions. As promised, we’ve compiled answers to all of the questions that came in. If you think of additional questions, please feel free to comment on this blog. And if you missed the live Webcast, it’s now available on-demand.

Q. Is Hitachi Data Systems contributing to the Manila project?

A. Yes, Hitachi contributed a new driver and also contributed a major new feature (migration) during Liberty. HDS was also active during the Kilo release with a different driver which is unfortunately no longer maintained.

Q. EMC has open sourced ViPR as CoprHD. Do you see any overlap between Manila/Cinder on one side and CoprHD on the other?

A. I’m not familiar enough with CoprHD to answer authoritatively, but I understand that there is definitely some overlap between it and Cinder, and I also expect there is some overlap with Manila. Assuming there is some overlap, I think that’s a great thing because competition within open source drives greater quality, and it’s confirmation that there is real demand for what we’re building.

Q. Could Manila be used stand-alone (without OpenStack) to create a fileshare server?

A. Yes, the only OpenStack service Manila depends on is Keystone (for authentication). Running Manila in a stand-alone fashion is a specific use case the team supports.
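For a stand-alone deployment, the main configuration work is pointing Manila at a Keystone endpoint. A hedged sketch of the relevant `manila.conf` section follows; the endpoint addresses and credentials are placeholders, and exact option names can vary by release:

```ini
[keystone_authtoken]
# Placeholder Keystone endpoints and service credentials
auth_uri = http://keystone.example.com:5000/v3
auth_url = http://keystone.example.com:35357/v3
auth_type = password
username = manila
password = SECRET
project_name = service
user_domain_name = Default
project_domain_name = Default
```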

Q. If we are mapping the snapshot images, what is the guarantee for data integrity?

A. Snapshots are typically crash-consistent copies of the filesystem at a point in time. In reality, the exact guarantee depends on the backend used, and that variation is something we’d like to avoid so that snapshot semantics are clear to the user. In the future, backends that cannot meet the crash-consistency guarantee will probably be forced to advertise a different capability so end users are aware of what they’re getting.

Q. Is there Manila automation with Ansible?

A. As far as I know this hasn’t been done yet.

Q. For Kilo deployed in production, does it work with all commercial drivers, or is there a chart that says which commercial drivers support Kilo?

A. The developer doc now has a table which attempts to answer this question. However, the most reliable way to see which drivers are part of the stable/kilo release would be to look at the driver directory of the code. This is an area where the docs need to improve.

Q. Could you explain consistency groups?

A. Consistency groups are a mechanism to ensure that 2 or more shares can be snapshotted in a single operation. Without CGs, you can take 2 snapshots of 2 shares but there is no guarantee that those snapshots will represent the same point in time. CGs allow you to guarantee that the snapshots are synchronized, which makes it possible to use multiple shares together for a single application and to take snapshots of that application’s data in a consistent way.
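A toy simulation makes the difference concrete. Here the shares are modeled as plain dictionaries and all names are illustrative; this is a sketch of the semantics, not Manila’s API:

```python
# Two shares used together by one application (e.g. data files and logs)
shares = {"db_data": 0, "db_logs": 0}

def write_transaction(n):
    # The application always updates both shares together
    shares["db_data"] = n
    shares["db_logs"] = n

# Without a consistency group: the shares are snapshotted one at a time,
# and an application write can land between the two snapshots.
write_transaction(1)
snap_data = shares["db_data"]        # snapshot of the first share
write_transaction(2)                 # ...application keeps writing...
snap_logs = shares["db_logs"]        # snapshot of the second share
inconsistent = snap_data != snap_logs    # True: the pair spans two states

# With a consistency group: both shares are captured in one operation,
# so the pair always reflects a single point in time.
write_transaction(3)
cg_snapshot = dict(shares)
```

The window between the two individual snapshots is exactly what a CG snapshot closes.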

Q. How is the consistency group in Manila different from Cinder? Is it similar?

A. The designs are very similar. There are some semantic differences in terms of how you modify the membership of the CGs, but the snapshot functionality is identical.

Q. Are you considering pNFS? I guess this will be hard, since it has requirements on the client as well.

A. Manila is agnostic to the data protocol so if the backend supports pNFS and Manila is asked to create an NFS share, it may very well get a share with pNFS support. Certainly Manila supports shares with multiple export locations so that on a system with multiple network interfaces, or a clustered system, Manila will tell the clients about all of the paths to the share. In the future we may want Manila to actually know the capabilities of the backends w.r.t. what version of NFS they support so that if a user requires a minimum version we can guarantee that they get that version or get a sensible error if it’s not possible.

Q. Share replication: in what mode, async and/or sync?

A. We plan to support both, and the choice of which is used will be up to the administrator. Communication about which is used and any relevant information like RPO time would be out of band from Manila. The goal of the feature in Manila is to make Manila able to configure the replication relationship, and able to initiate failovers. The intention is for planned failovers to be disruptive but with no data loss, and for unplanned failovers to be disruptive, with data loss corresponding to the RPO that the administrator configured (which would be zero for synchronous replication).
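A minimal sketch of the worst-case data-loss accounting described above; the function name and values are illustrative, not part of Manila’s API:

```python
def failover_data_loss_seconds(mode, rpo_seconds, planned):
    """Worst-case seconds of lost writes for a share-replication failover.

    Planned failovers are disruptive but lose no data; unplanned failovers
    lose up to the configured RPO, which is zero for synchronous mode.
    """
    if planned:
        return 0
    return 0 if mode == "sync" else rpo_seconds

# Async replica with a 5-minute RPO: an unplanned failover may lose
# up to 300 seconds of writes; a planned one loses none.
print(failover_data_loss_seconds("async", 300, planned=False))  # 300
print(failover_data_loss_seconds("sync", 0, planned=False))     # 0
```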

Q. Can you point me to any SNIA resources available for OpenStack? Where can I download documents, videos, etc.?

A. You can find several informative OpenStack on-demand Webcasts on the SNIA BrightTalk channel here.


Moving Data Protection to the Cloud: Key Considerations

Leveraging the cloud for data protection can be an advantageous and viable option for many organizations, but first you must understand the pros and cons of different approaches. Join us on Nov. 17th for our live Webcast, “Moving Data Protection to the Cloud: Trends, Challenges and Strategies,” where we’ll share the experiences of others with advice on how to avoid the pitfalls, especially during the transition from strictly local resources to cloud resources. We’ve pulled together a unique panel of SNIA experts as well as perspectives from leading vendor experts at Acronis, Asigra and SolidFire, who’ll discuss and debate:

  • Critical cloud data protection challenges
  • How to use the cloud for data protection
  • Pros and cons of various cloud data protection strategies
  • Experiences of others to avoid common pitfalls
  • Cloud standards in use – and why you need them

Register now for this live and interactive event. Our entire panel will be available to answer your questions. I hope you’ll join us!

 

Storage Performance Benchmarking – The Sequel

We at the Ethernet Storage Forum heard you loud and clear. You need more info on storage performance benchmarking. Our first Webcast, “Storage Performance Benchmarking: Introduction and Fundamentals,” was tremendously popular – reaching over 2x the average audience – while hundreds more have read our Q&A blog on the same topic. So, back by popular demand, Mark Rogov, Advisory Systems Engineer at EMC, Ken Cantrell, Performance Engineering Manager at NetApp, and I will move past the basics in the second Webcast in this series, “Storage Performance Benchmarking Part 2.” With a focus on the System Under Test (SUT), we’ll cover:

  • Commonalities and differences between basic Block and File terminology
  • Basic file components and the meaning of data workloads
  • Main characteristics of various workloads and their respective dependencies, assumptions and environmental components
  • The complexity of the technology benchmark interpretations
  • The importance of the System Under Test:
    • What are the elements of a SUT?
    • Why are caches so important to understanding performance of a SUT?
    • Bottlenecks and threads and why they matter

I hope you’ll join us on October 21st at 9:00 a.m. PT to learn why file performance benchmarking truly is an art. My colleagues and I plan to deliver another informative and interactive hour. Please register today and bring your questions. I hope to see you there.

See SNIA at OpenStack Summit Tokyo

Are you headed to the OpenStack Summit in Tokyo later this month? If so, I encourage you to stop by two “Birds of a Feather” (BoF) sessions I’ll be hosting on behalf of SNIA. Here’s the info on both of them:

Extending OpenStack Swift with S3 and CDMI Interfaces – Tues. Oct. 27th 11:15 a.m.

Cloud application developers using the OpenStack infrastructure are demanding implementations of not just the Swift API, but also the de facto standard S3 API and the standard CDMI API. These APIs offer some features in common, but each also offers what appear to be unique and incompatible facilities. At this BoF, we’ll discuss how to:

  • Implement a multi-API strategy simply and effectively
  • Sensibly manage the differences between each of the APIs
  • Map common features to each other
  • Take advantage of each of the APIs’ strengths
  • Avoid lowest-common-denominator implementations

Object Drive Integration with Swift – Thurs. Oct. 29th 9:00 a.m.

With the emergence of disk drives, and perhaps solid state drives, with Key/Value and other object interfaces, what are the implications for solution architectures and systems built around OpenStack Swift? One approach is termed “PACO,” where the Object Node speaks Key/Value to the drive and is hosted with other Swift services. Are there other approaches? Are you developing products or solutions based on object drives? Come to this BoF to discuss these issues with fellow developers.

I expect both of these BoFs will be full of lively discussions around standards, emerging technologies, challenges, best practices and more. If you have any questions about these sessions or about work that SNIA is doing, do not hesitate to contact me. I hope to see you in Tokyo!


OpenStack File Services for HPC Q&A

We got some great questions during our Webcast on how OpenStack can consume and control file services appropriate for High Performance Computing (HPC) in a cloud and multi-tenanted environment. Here are answers to all of them. If you missed the Webcast, it’s now available on-demand. I encourage you to check it out and please feel free to leave any additional questions at this blog.

Q. Presumably we can use filesystems other than ZFS for the underlying filesystems in Lustre?

A. Yes, there are plenty of other filesystems that can be used besides ZFS. ZFS was given as an example of a modern, scale-up filesystem that has recently been integrated, but essentially you can use most filesystem types, with some having more advantages than others. What you are looking for is a filesystem that addresses the weaknesses of Lustre in terms of self-healing and scale-up. So any filesystem that allows you to easily grow capacity whilst also being capable of protecting itself would be a reasonable choice. Remember, Lustre doesn’t do anything to protect the data itself. It simply places objects in a distributed fashion across the Object Storage Targets.

Q. Are there any other HPC filesystems besides Lustre?

A. Yes, there are, and depending on your exact requirements Lustre might not be appropriate. Gluster is an alternative that some have found slightly easier to manage and that provides some additional functionality. IBM has GPFS, which has been implemented as an HPC filesystem, and other vendors have their own scale-out filesystems too. An HPC filesystem is simply a scale-out filesystem capable of very good throughput with low latency. So under that definition a flash array could be considered a high-performance storage platform, as could a scale-out NAS appliance with some fast disks. It’s important to understand your workload’s characteristics and demands before making the choice, as each system has pros and cons.

Q. Does “embarrassingly parallel” require bandwidth or latency from the storage system?

A. Depending on the workload characteristics, it could require both. Bandwidth is usually the first demand, though, as data is shipped to the nodes for processing. Obviously, the lower the latency, the faster jobs can start and run, but it’s not critical, as there is limited communication between nodes, and that communication is what normally drives the low-latency demand.

Q. Would you suggest to use Object Storage for NFV, i.e Telco applications?

A. I would for some applications. The problem with NFV is that it actually covers a surprisingly broad range of applications, some of which have very limited data storage needs. For example, there is little need for storage in a packet-switching environment beyond the OS and binaries needed to stand up the VMs. In this case, object is a very good fit, as it can easily be geographically distributed, ensuring the same networking function is delivered in the same manner. Other applications that require access to filtered data (so maybe billing-based applications or content distribution) would also be good candidates.

Q. I missed something in the middle; please clarify, your suggestion is to use ZFS (on Linux) for the local file system on OSTs?

A. Yes, this was one example, and it is where some work has recently been done in the Lustre community. This affords the OSSs the capability of scaling capacity upwards, as well as offering the RAID-like protection and self-healing that come with ZFS. Other filesystems can offer some of those same things, so I am not suggesting it is the only choice.

Q. Why would someone want/need scale-up, when they can scale-out?

A. This can often come down to funding. A lot of HPC environments exist in academic institutions that rely on grant funding and sponsorship to expand their infrastructure. Sometimes it simply isn’t feasible to buy extra servers in order to add capacity, particularly if there is already performance headroom. It might also be the case that rack space, power and cooling could be factors in which case adding drives to cope with bigger workloads might be the only option. You do need to consider if the additional capacity would also provoke the need for better performance so we can’t just assume that adding disk is enough, but it’s certainly a good option and a requirement I have seen a number of times.

 

Congestion Control in New Storage Architectures Q&A

We had a great response to last week’s Webcast, “Controlling Congestion in New Storage Architectures,” where we introduced CONGA, a new congestion control mechanism that is the result of research at Stanford University. We had many good questions at the live event and have compiled answers for all of them in this blog. If you think of additional questions, please feel free to comment here and we’ll get back to you as soon as possible.

Q. Isn’t the leaf/spine network just a Clos network?  Since the network has loops, isn’t there a deadlock hazard if pause frames are sent within the network?

A. Clos/spine-leaf networks are based on routing, which has its own loop prevention (TTLs/RPF checks).
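The loop-prevention point can be sketched with a toy forwarding simulation; the topology and TTL value here are illustrative. A packet caught in a routing loop is dropped once its TTL decrements to zero, so loops cannot persist the way pause-frame deadlocks can at the link layer:

```python
# Toy forwarding table with an accidental loop: A -> B -> C -> A
next_hop = {"A": "B", "B": "C", "C": "A"}

def forward(src, dst, ttl=8):
    """Forward hop by hop until dst is reached or the TTL expires."""
    hops, node = 0, src
    while node != dst:
        if ttl == 0:
            return None          # TTL expired: the looping packet is dropped
        node = next_hop[node]
        ttl -= 1
        hops += 1
    return hops

print(forward("A", "C"))   # reachable in 2 hops
print(forward("A", "D"))   # unreachable: packet loops until TTL hits zero
```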

Q. Why isn’t the congestion metric subject to the same delays as the rest of the data traffic?  

A. It is, but since this is done in the data plane with 40/100g within a data center fabric it can be done in near real time and without the delay of sending it to a centralized control plane.

Q. Are packets dropped in certain cases?

A. Yes, there can be certain reasons why a packet might be dropped.

Q. Why is there no TCP reset? Is it because the Ethernet layer does the flowlet retransmission before TCP has to do a resend?

A. There are many reasons for a TCP reset; CONGA does not prevent them, but it can help with how the application responds to a loss. If a flowlet is lost, it is less detrimental to application performance, because only what was lost is resent, versus the potential for the full TCP connection to be reset.

Q. Is CONGA on an RFC standard track?

A. CONGA is based on research done at Stanford. It is not currently an RFC.

The research information can be found here.

Q. How does ECN fit into CONGA?

A. ECN can be used in conjunction with CONGA, as long as the host/networking hardware supports it.


NVM Big at Storage Developer Conference SDC Precon

I’ll be speaking at SNIA’s SDC Pre-Conference this Sunday, Sept. 20, about the new Intel-Micron 3D XPoint memory. I was surprised to find that my talk won’t be unique: there are about 15 papers at this conference that will be discussing NVM, or persistent memory.

What’s all this fuss about?

Part of it has to do with the introduction by Micron & Intel of their 3D XPoint (pronounced “Crosspoint”) memory.  This new product will bring nonvolatility, or persistence, to main memory, and that’s big!

Intel itself will present a total of seven papers telling us how they envision this technology being used in computing applications. Seven other companies, besides Objective Analysis (my company), will also discuss this hot new topic.

SNIA is really on top of this new trend. This organization has been developing standards for nonvolatile memory for the past couple of years, and has published an NVM Programming Model to help software developers produce code that will communicate with nonvolatile memory no matter who supplies it. Prior to SNIA’s intervention, the market was wildly inconsistent: all suppliers’ NVDIMMs differed slightly from one another, with no promise that this would improve once new memory technologies started to make their way onto memory modules.
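The flavor of that programming model can be approximated in ordinary Python with a memory-mapped file standing in for a persistent-memory region. Real NVM programming uses byte-addressable loads and stores plus cache-flush instructions, but the map/store/flush pattern is the same; the file path and layout here are illustrative:

```python
import mmap
import os
import struct
import tempfile

# A small file stands in for a byte-addressable persistent-memory region
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

fd = os.open(path, os.O_RDWR)
try:
    region = mmap.mmap(fd, 4096)
    # "Store": write a 64-bit counter directly into the mapped region
    struct.pack_into("<Q", region, 0, 42)
    # "Flush": force the store to durable media
    # (loosely analogous to a cache-line flush plus fence on real NVM)
    region.flush()
    region.close()
finally:
    os.close(fd)

# After a simulated restart, the value survives
with open(path, "rb") as f:
    recovered = struct.unpack("<Q", f.read(8))[0]
print(recovered)
```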

Now that Intel and Micron will be producing their 3D XPoint memory, and will be supplying it on industry-standard DDR4 DIMMs, it’s good to know that there will be a standard protocol to communicate with it.  This will facilitate the development of standard software to harness all that nonvolatile memory has to offer.

As for me, I will be sharing information from my company’s new report on the Micron-Intel 3D XPoint memory.  This is new, and it’s exciting.  Will it succeed?  I’ll discuss that with you there.

Outstanding Keynotes from Leading Storage Experts Make SDC Attendance a Must!

Posted by Marty Foltyn

Tomorrow is the last day to register online for next week’s Storage Developer Conference at the Hyatt Regency Santa Clara. What better incentive to click www.storagedeveloper.org and register than to read about the amazing keynote and featured speakers at this event – I think they’re the best since the event began in 1998! Preview sessions here, and click on the title to download the full description.

Bev Crair, Vice President and General Manager, Storage Group, Intel will present Innovator, Disruptor or Laggard, Where Will Your Storage Applications Live? Next Generation Storage and discuss the leadership role Intel is playing in driving the open source community for software defined storage, server based storage, and upcoming technologies that will shift how storage is architected.

Jim Handy, General Director, Objective Analysis will report on The Long-Term Future of Solid State Storage, examining research of new solid state memory and storage types, and new means of integrating them into highly-optimized computing architectures. This will lead to a discussion of the way that these will impact the market for computing equipment.

Jim Pinkerton, Partner Architect Lead, Microsoft will present Concepts on Moving From SAS connected JBOD to an Ethernet Connected JBOD. This talk examines the advantages of moving to an Ethernet connected JBOD, what infrastructure has to be in place, what performance requirements are needed to be competitive, and technical issues in deploying and managing such a product.

Andy Rudoff, SNIA NVM Programming TWG, Intel will discuss Planning for the Next Decade of NVM Programming describing how emerging NVM technologies and related research are causing a change to the software development ecosystem. Andy will describe use cases for load/store accessible NVM, some transparent to applications, others non-transparent.

Richard McDougall, Big Data and Storage Chief Scientist, VMware will present Software Defined Storage – What Does it Look Like in 3 Years? He will survey and contrast the popular software architectural approaches and investigate the changing hardware architectures upon which these systems are built.

Laz Vekiarides, CTO and Co-founder, ClearSky Data will discuss Why the Storage You Have is Not the Storage Your Data Needs , sharing some of the questions every storage architect should ask.

Donnie Berkholz, Research Director, 451 Research will present Emerging Trends in Software Development drawing on his experience and research to discuss emerging trends in how software across the stack is created and deployed, with a particular focus on relevance to storage development and usage.

Gleb Budman, CEO, Backblaze will discuss Learnings from Nearly a Decade of Building Low-cost Cloud Storage. He will cover the design of the storage hardware, the cloud storage file system software, and the operations processes that currently store over 150 petabytes, with 5 more petabytes added every month.

You could wait and register onsite at the Hyatt, but why? If you need more reasons to attend, check out SNIA on Storage previous blog entries on File Systems, Cloud, Management, New Thinking, Disruptive Technologies, and Security sessions at SDC. See the full agenda and register now for SDC at http://www.storagedeveloper.org.

Security is Strategic to Storage Developers – and a Prime Focus at SDC and SNIA Data Storage Security Summit

Posted by Marty Foltyn

Security is critical in the storage development process – and a prime focus of sessions at the SNIA Storage Developer Conference AND the co-located SNIA Data Storage Security Summit on Thursday September 24. Admission to the Summit is complimentary – register here at http://www.snia.org/dss-summit.

The Summit agenda is packed with luminaries in the field of storage security, including keynotes from Eric Hibbard (SNIA Security Technical Work Group and Hitachi), Robert Thibadeau (Bright Plaza), Tony Cox (SNIA Storage Security Industry Forum and OASIS KMIP Technical Committee), Suzanne Widup (Verizon), Justin Corlett (Cryptsoft), and Steven Teppler (TimeCertain); and afternoon breakouts from Radia Perlman (EMC); Liz Townsend (Townsend Security); Bob Guimarin (Fornetix); and David Siles (Data Gravity). Roundtables will discuss current issues and future trends in storage security. Don’t miss this exciting event!

SDC’s “Security” sessions highlight security issues and strategies for mobile, cloud, user identity, attack prevention, key management, and encryption. Preview sessions here, and click on the title to find more details.

Geoff Gentry, Regional Director, Independent Security Evaluators, will present Hackers, Attack Anatomy and Security Trends.

David Slik, Technical Director, Object Storage, NetApp will discuss Mobile and Secure: Cloud Encrypted Objects Using CDMI, introducing the Cloud Encrypted Object Extension to the CDMI standard, which permits encrypted objects to be stored, retrieved, and transferred between clouds.

Dean Hildebrand, IBM Master Inventor and Manager, Cloud Storage Software, and Sasikanth Eda, Software Engineer, IBM will present OpenStack Swift On File: User Identity For Cross Protocol Access Demystified. This session will detail the various issues and nuances associated with having common ID management across Swift object access and file access, and will present an approach to solving them without changes to core Swift code by leveraging the powerful Swift middleware framework.

Tim Hudson, CTO and Technical Director, Cryptsoft will discuss Multi-Vendor Key Management with KMIP, offering practical experience from implementing the OASIS Key Management Interoperability Protocol (KMIP) and from deploying and interoperability testing multiple vendor implementations of KMIP.

Nathaniel McCallum, Senior Software Engineer, Red Hat will present Network Bound Encryption for Data-at-Rest Protection, describing Petera, an open source project which implements a new technique for binding encryption keys to a network.

Finally, check out SNIA on Storage previous blog entries on File Systems, Cloud, Management, New Thinking, and Disruptive Technologies. See the agenda and register now for SDC at http://www.storagedeveloper.org.

SNIA Leads the Way with 16 Sessions on “Disruptive Technologies” at SDC!

Posted by Marty Foltyn

In the two weeks leading up to the 2015 SNIA Storage Developer Conference, which begins on September 21, SNIA on Storage is highlighting exciting interest areas in the SDC agenda. Our previous blog entries have covered File Systems, Cloud, Management, and New Thinking, and this week we continue with Disruptive Technologies. If you have not registered, you need to! Visit http://www.storagedeveloper.org/ to see the four day overview and sign up.

SDC’s “Disruptive Technologies” sessions highlight those new areas which are revolutionizing storage and the work of developers: Persistent Memory, Object Drives, and Shingled Magnetic Recording (SMR). Leading experts will do a deep dive with sixteen sessions spread throughout the conference agenda. Preview sessions here, and click on the title to find more details.

If you are just dipping your toes into disruptive technologies, you will want to check out the SDC Pre-Conference Primer on Sunday September 20. These sessions are included with full conference registration.

At the Primer, Thomas Coughlin, SNIA Solid State Storage Initiative Governing Board and President, Coughlin Associates and Edward Grochowski, Storage Consultant, will present Advances in Non-Volatile Storage Technologies, where they will address the status of NVM device technologies and review requirements in process, equipment, and innovations.

Jim Handy, SNIA Solid State Storage Initiative Member and General Director, Objective Analysis will discuss The Long-Term Future of Solid State Storage, examining research of new solid state memory and storage types and new means of integrating them into highly-optimized computing architectures. This will lead to a discussion of the way that these will impact the market for computing equipment.

David Cohen, System Architect, and Brian Hausauer, Hardware Architect, at Intel will present Nonvolatile Memory (NVM), Four Trends in the Modern Data Center, and the Implications for the Design of Next Generation Distributed Storage Platforms. They will discuss the increasing performance of network bandwidth; storage media approaching the performance of DRAM; OSVs optimizing the code paths of their storage stacks; and single processor/core performance, along with the implications of these trends for the design of distributed storage platforms.

Dr. Thomas Willhalm and Karthik Kumar, Senior Application Engineers at Intel, will present Developing Software for Persistent Memory. They will discuss how to identify which data structures are suited to this new memory tier, and which are not. They will provide developers a systematic methodology to identify how their applications can be architected to take advantage of persistence in the memory tier.

And you won’t want to miss the Wednesday evening September 23 Birds-Of-a-Feather (BOF) on Enabling Persistent Memory Applications with NVDIMMs! Come to this OPEN TO ALL IN THE INDUSTRY Birds of a Feather session for an interactive discussion on what customers, storage developers, and the industry would like to see to improve and enhance NVDIMM integration and optimization.

At the SDC Conference, sessions on “Disruptive Technologies – Persistent Memory” kick off with Doug Voigt, SNIA NVM Programming Technical Work Group Co-Chair and Distinguished Technologist, HP who will discuss Preparing Applications for Persistent Memory, using the concepts of the SNIA NVM Programming Model to explore the emerging landscape of persistent memory related software from an application evolution point of view.

Paul von Behren, SNIA NVM Programming Technical Work Group Co-Chair and Software Architect, Intel will present Managing the Next Generation Memory Subsystem, providing an overview of emerging memory device types, covering management concepts and features, and conclude with an overview of standards that drive interoperability and encourage the development of memory subsystem management tools.

SNIA NVDIMM Special Interest Group Co-Chairs Jeff Chang, VP Marketing and Business Development, AgigA Tech and Arthur Sainio, Senior Director Marketing, SMART Modular will present The NVDIMM Cookbook: A Soup-to-Nuts Primer on Using NVDIMMs to Improve Your Storage Performance. In this SNIA Tutorial, they will walk you through a soup-to-nuts description of integrating NVDIMMs into your system, from hardware to BIOS to application software, highlighting some of the “knobs” to turn to optimize use in your application as well as some of the “gotchas” encountered along the way.

Pete Kirkpatrick, Principal Engineer, Pure Storage will discuss Building NVRAM Subsystems in All-Flash Storage Arrays, including the hardware and software development of an NVDIMM using NVMe over PCIe-based NVRAM solutions and comparison of the performance of the NVMe-based solution to an SLC NAND Flash-based solution.

Tom Talpey, Architect, Microsoft, will discuss Remote Access to Ultra-low-latency Storage, exploring the issues and outlining a path-finding effort to make small, natural extensions to RDMA and upper layer storage protocols to reduce latencies to acceptable, minimal levels, while preserving the many advantages of the storage protocols they extend.

Sarah Jelinek, Senior SW Engineer, Intel, will present Solving the Challenges of Persistent Memory Programming, reviewing key attributes of persistent memory as well as outlining architectural and design considerations for making an application persistent memory aware.

Chet Douglas, Principal SW Architect, Intel will discuss RDMA with PM: Software Mechanisms for Enabling Persistent Memory Replication, reviewing key HW components involved in RDMA and introduce several SW mechanisms that can be utilized with RDMA with PM.

In the Disruptive Technology area of Object Drives, Mark Carlson, Principal Engineer, Industry Standards, Toshiba will present a SNIA Tutorial on Object Drives: A New Architectural Partitioning, discussing the current state and future prospects for object drives, including use cases, requirements, and best practices.

Abhijeet Gole, Senior Director of Engineering, Toshiba will present Beyond LBA: New Directions in the Storage Interface, exploring the paradigm shift introduced by these new interfaces and modes of operation of storage devices.

In the Disruptive Technology area of Shingled Magnetic Recording, Jorge Campello, Director of Systems – Architecture and Solutions, HGST will discuss SMR – The Next Generation of Storage Technology, articulating the differences in SMR drive architectures and performance characteristics, and illustrating how the open source community has the distinct advantage of integrating a host-managed platform that leverages SMR HDDs.

Albert Chen, Engineering Program Director, and Jim Malina, Technologist, WDC, will discuss Host Managed SMR, going over the various SW/FW paradigms that attempt to abstract away SMR behavior (e.g. user space library, device mapper, SMR aware file system, enlightened application). Along the way, they will also explore what deficiencies (e.g. ATA sense data reporting) are holding back SMR adoption in the data center.

Join your peers – register now at www.storagedeveloper.org. And stay tuned for tomorrow’s blog on Security topics at SDC!