NFS 4.2 Q&A

We received several great questions at our "What's New in NFS 4.2" Webcast. We did not have time to answer them all, so here is a complete Q&A from the live event. If you missed it, it's now available on demand.

Q. Are there commercial Linux or Windows distributions available that have adopted pNFS?

A. Yes. Red Hat RHEL 6.2, SUSE SLES 11.3, and Ubuntu 14.10 all support the pNFS-capable client. There aren't any pNFS servers on Linux so far, but commercial systems such as NetApp (file pNFS), EMC (block pNFS), Panasas (object pNFS), and perhaps others provide pNFS servers. Microsoft Windows has no client or server support for pNFS.

Q. Are we able to prevent it from going back to NFSv3 if we want to ensure file lock management?

A. An NFSv4 mount (mount -t nfs4) won't fall back to an NFSv3 mount. See man mount for details.

Q. Can pNFS metadata servers forward clients to other metadata servers?

A. No, not currently.

Q. Can pNFS provide something similar to synchronous writes, so that data is instantly safe in at least two locations?

A. No; that kind of replication is a feature of the data servers. It’s not covered by the NFSv4.1 or pNFS specification.

Q. Does hole punching depend on the underlying file system on the server?

A. If the underlying file system on the server supports it, then hole punching will be supported. The client and server do this transparently; a user of the mount isn't aware that it's happening.
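
For readers who want to see what this looks like from a Linux client, here is a minimal sketch, assuming a file on an NFSv4.2 mount at the hypothetical path /mnt/nfs/data.bin. It punches a hole with the standard fallocate(2) call via ctypes; on an NFSv4.2 mount the kernel client is expected to translate this into the protocol's DEALLOCATE operation, and the call simply fails if the server or its file system can't support it.

    # Hedged sketch: punch a hole in a file using fallocate(2) through ctypes.
    # The path below is hypothetical; run it against a file on an NFSv4.2 mount
    # (or any local file system that supports hole punching, e.g. ext4 or XFS).
    import ctypes
    import ctypes.util
    import os

    FALLOC_FL_KEEP_SIZE = 0x01    # keep the file's apparent size unchanged
    FALLOC_FL_PUNCH_HOLE = 0x02   # deallocate the byte range (needs KEEP_SIZE)

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    def punch_hole(path, offset, length):
        fd = os.open(path, os.O_RDWR)
        try:
            rc = libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                                ctypes.c_long(offset), ctypes.c_long(length))
            if rc != 0:
                err = ctypes.get_errno()
                raise OSError(err, os.strerror(err))   # e.g. EOPNOTSUPP
        finally:
            os.close(fd)

    punch_hole("/mnt/nfs/data.bin", 0, 1024 * 1024)  # deallocate the first 1 MiB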

Q. How are Ethernet trunks formed? By the OS, by the NFS client, by the NFS server, or by something else?

A. Currently, they’re not! Although trunking is specified and is optional, there are no servers that support it.

Q. How do you think vVols could impact NFS and VMware’s use of NFS?

A. VMware has committed to supporting NFSv4.1 and there is currently support in vSphere 6. vVols adds another opportunity for clients to inform the server with IO hints; it is an area of active development.

Q. In pNFS, must the callback to the client come from the originally contacted metadata server?

A. Yes, the callback originates from the metadata server (MDS).

Q. Are holes punched in block units?

A. That depends on the server.

Q. Is there any functionality like SMB continuous availability?

A. Since it's a function of the server, and much of the server's capabilities are unspecified in NFSv4, the answer is: it depends. It's a question for the vendor of your server.

Q. NFS has historically not been used in large HPC cluster environments for cluster-wide storage, for performance reasons. Do you see these changes as potentially improving this situation?

A. Yes. There's much work being done on the performance side, and the cluster parallelism that pNFS brings should let it outperform NFSv3 once clients employ more of its capabilities.

Q. Speaking of Amazon's adoption of NFSv4.0, do you have any insight into (or a guess about) why Amazon did not select NFSv4.1, which has many more performance and scalability advantages over NFSv4.0?

A. No, none at all.

New Webcast: Block Storage in the Open Source Cloud called OpenStack

On June 3rd at 10:00 a.m. SNIA-ESF will present its next live Webcast, "Block Storage in the Open Source Cloud called OpenStack." Storage is a major component of any cloud computing platform. OpenStack is one of the largest and most widely supported open source cloud computing platforms in the market today. The OpenStack block storage service (Cinder) provides persistent block storage resources that OpenStack Nova compute instances can consume.

I will be moderating this Webcast, presented by a core member of the OpenStack Cinder team, Walt Boring. Join us as we dive into:

  • Relevant components of OpenStack Cinder
  • How block storage is managed by OpenStack
  • What storage protocols are currently supported
  • How it all works together with compute instances

I encourage you to register now to block your calendar. This will be a live and interactive Webcast, so please bring your questions. I look forward to "seeing" you on June 3rd.

New SNIA SSSI Webcast May 28 on Persistent Memory Advances

Join the NVDIMM Special Interest Group for an informative SNIA BrightTALK webcast, "Persistent Memory Advances: Solutions with Endurance, Performance & Non-Volatility," on Thursday, May 28, 2015 at 12:00 noon Eastern/9:00 am Pacific. Register at http://www.snia.org/news_events/multimedia#webcasts

Mario Martinez of Netlist, a SNIA SSSI NVDIMM SIG member, will discuss how persistent memory solutions deliver the endurance and performance of DRAM coupled with the non-volatility of Flash. This webinar will also update you on the latest solutions for enterprise server and storage designs, and provide insights into future persistent memory advances. A specific focus will be NVDIMM solutions, with examples from the member companies of the SNIA NVDIMM Special Interest Group.

Swift, S3 or CDMI – Your Questions Answered

Last week's live SNIA Cloud Webcast "Swift, S3 or CDMI – Why Choose?" is now available on demand. Thanks to all the folks who attended the live event. We had some great questions from attendees; in case you missed it, here is a complete Q&A.

Q. How do you tag the data? Is that a manual operation?

A. The data is tagged as part of the CDMI API by supplying key/value pairs in the JSON object. Since it is an API, you can put a user interface in front of it to tag the data manually, but you can also develop software that tags the data automatically. We envision an entire ecosystem of software that would use this interface to better manage data in the future.
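
As an illustration (not from the Webcast itself), here is a minimal sketch of what that tagging looks like over the wire, using Python's requests library against a hypothetical CDMI server; the endpoint, container, credentials, and metadata keys are all made up for the example.

    # Hedged sketch: create a CDMI data object and tag it with user metadata.
    # Endpoint, credentials, container, and metadata keys are hypothetical.
    import json
    import requests

    CDMI = "https://cdmi.example.com/cdmi"          # hypothetical CDMI endpoint
    HEADERS = {
        "X-CDMI-Specification-Version": "1.1",
        "Content-Type": "application/cdmi-object",
        "Accept": "application/cdmi-object",
    }
    body = {
        "mimetype": "text/plain",
        "metadata": {                               # the key/value tags
            "project": "webcast-demo",
            "retention_class": "gold",
        },
        "value": "hello, CDMI",
    }

    resp = requests.put(f"{CDMI}/demo_container/notes.txt",
                        data=json.dumps(body), headers=HEADERS,
                        auth=("user", "secret"))
    resp.raise_for_status()
    print(resp.json().get("metadata"))              # echo back the stored tags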

Q. Which vendors support CDMI today?

A. We have a page that lists all the publicly announced CDMI implementations here. We also plan to start testing implementations with standardized tests to certify them as conformant. This will be a separate list.

Q. FC3 Common Services layer vs. SWIFT, S3, & CDMI – Will it fully integrate with encryption at rest vendors?

A. Amazon, for example, does offer encryption at rest, but does not (yet) allow you to choose the algorithm. CDMI allows vendors to show a list of algorithms and pick the one they want.

Q. You’d mentioned NFS, other interfaces for compatibility – but often “native” NFS deployments can be pretty high performance. Object storage doesn’t really focus on performance, does it? How is it addressed for customers moving to the object model?

A. CDMI implementations are responsible for the performance, not the standard itself, and there is nothing in an object interface that would make it inherently slower. But if the NFS interface implementation is faster, customers can use that interface for apps with those performance needs. The compatibility means they can use whatever interface makes sense for each application type.

Q. Is it possible to query the user-metadata on a container level for listing all the data objects that have that user-metadata set?

A. Yes. Metadata query is a key capability, and it can be scoped however you like. Data system metadata is also hierarchical and inherited, meaning that you can override the parent container settings.
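
As a rough illustration of that metadata model (again with a hypothetical endpoint and tag names, and ignoring CDMI's server-side query queues, which can do this filtering for you), the sketch below lists a container's children and keeps the objects whose user metadata carries a particular tag.

    # Hedged sketch: client-side filter of a container's children by a user tag.
    import requests

    CDMI = "https://cdmi.example.com/cdmi"          # hypothetical CDMI endpoint
    AUTH = ("user", "secret")
    VERSION = {"X-CDMI-Specification-Version": "1.1"}

    # Read the container to get its list of children.
    container = requests.get(f"{CDMI}/demo_container/",
                             headers={**VERSION, "Accept": "application/cdmi-container"},
                             auth=AUTH).json()

    tagged = []
    for child in container.get("children", []):
        # Ask only for the metadata field of each child object.
        obj = requests.get(f"{CDMI}/demo_container/{child}?metadata",
                           headers={**VERSION, "Accept": "application/cdmi-object"},
                           auth=AUTH).json()
        if obj.get("metadata", {}).get("project") == "webcast-demo":
            tagged.append(child)

    print(tagged)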

Q. So would it be reasonable to say that any current object storage should be expected to implement one or more of these metadata models? What if the object store wasn’t necessarily meant to play in a cloud? Would it be at a disadvantage if its metadata model was proprietary?

A. Yes, but as an add-on that would not interfere with the existing API/access method. Eventually, as CDMI becomes ubiquitous, products would be at a disadvantage if they did not add this type of interface.

New Webcast: Hierarchical Erasure Coding: Making Erasure Coding Usable

On May 14th the SNIA-CSI (Cloud Storage Initiative) will host a live Webcast, "Hierarchical Erasure Coding: Making erasure coding usable." This technical talk, presented by Vishnu Vardhan, Sr. Manager, Object Storage at NetApp, and myself, will cover two different approaches to erasure coding: a flat erasure code across JBOD, and a hierarchical code with an inner code and an outer code (a toy sketch of both appears after the list below). This Webcast, part of the SNIA-CSI developer's series, will compare the two approaches on different parameters that impact the IT business and provide guidance on evaluating object storage solutions. You'll learn:

  • Industry dynamics
  • Erasure coding vs. RAID – Which is better?
  • When is erasure coding a good fit?
  • Hierarchical Erasure Coding – The next generation
  • How hierarchical codes make growth easier
  • Key areas where hierarchical coding is better than flat erasure codes
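
To make the distinction concrete before the Webcast, here is a toy sketch (my own illustration, not material from the talk). It uses single-parity XOR strips where real systems would use stronger codes such as Reed-Solomon: the flat version computes one parity strip across all data strips, while the hierarchical version adds an inner (local) parity inside each node group plus an outer (global) parity across all strips.

    # Toy sketch only: XOR parity as a stand-in for real erasure codes.
    from functools import reduce

    def xor_parity(strips):
        """Byte-wise XOR of equal-length byte strips."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), strips)

    def flat_encode(data_strips):
        # Flat code: one parity strip protects every data strip in the JBOD.
        return list(data_strips) + [xor_parity(data_strips)]

    def hierarchical_encode(groups):
        # Inner code: a local parity strip inside each node group.
        inner = [list(g) + [xor_parity(g)] for g in groups]
        # Outer code: a global parity strip across all data strips.
        outer = xor_parity([strip for g in groups for strip in g])
        return inner, outer

    strips = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
    print(flat_encode(strips))
    print(hierarchical_encode([strips[:2], strips[2:]]))

In the hierarchical layout, a single lost strip can be rebuilt from the other strips in its own group rather than from the whole JBOD, which is the kind of practical trade-off the talk will examine.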

Register now and bring your questions. Vishnu and I look forward to answering them.

Next Webcast: The 2015 Ethernet Roadmap for Networked Storage

The ESF is excited to announce our next live Webcast, “The 2015 Ethernet Roadmap for Networked Storage.”

For over three decades, Ethernet has advanced through simple "powers-of-ten" speed increases, and this model has served the industry well. Ethernet is now changing in big ways, and the Ethernet Alliance has captured the latest changes in the 2015 Ethernet Roadmap.

On June 30th at 10:00 a.m. PT an expert panel composed of Scott Kipp, President of the Ethernet Alliance; David Chalupsky, Chair of the IEEE P802.3bq/bz Task Forces and the Ethernet Alliance BASE-T Subcommittee; and myself will present the Ethernet Alliance's 2015 Ethernet Roadmap for the networking technology that underlies most future network storage.

SNIA has focused on protocols and usage models and has more or less taken Ethernet for granted. The biggest technology disruption in the storage space is the emergence into the mainstream of Non-Volatile Memory (NVM), flash in particular. NVM increasingly moves system bottlenecks from the storage subsystem to the network. Developments in NVM, most recently 3D flash, ensure that the cost per GB will continue its aggressive decline and that demand for bandwidth will go up. NVM will become more prevalent, making the roadmap for Ethernet increasingly important to the storage networking community.

This will be a live and interactive session. I encourage you to register now and bring your questions for our experts. I hope to see you on June 30th.

Four Ways Disaster Recovery is Simplified with Storage Management Standards

Disaster recovery as a service (DRaaS) is a growing area of investment for many companies; the DRaaS market is forecast to be worth $5.7 billion by 2018. IT professionals with experience implementing a business continuity plan are painfully aware that disaster recovery can be a very manual and complex workflow. Automation and orchestration can help simplify the experience, eliminate human error, minimize complexity, and reduce downtime. However, to achieve this promise, vendors must work together to ensure maximum interoperability between software and devices in the data center.

Maximum Interoperability

The Storage Networking Industry Association (SNIA) is a consortium of storage manufacturers and management software vendors actively contributing to the Storage Management Initiative Specification (SMI-S) to ensure maximum interoperability. The Storage Management Initiative (SMI) is the set of working groups that define SMI-S. Management software vendors can manage any SMI-S–compliant storage devices in the data center using a consistent interface. Common tasks include discovering storage devices, configuring features, provisioning storage, monitoring health and operational status, and collecting performance information. With the latest version of SMI-S 1.6.1, management software can orchestrate failover of storage between two sites using synchronous or asynchronous replication.
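
To give a feel for what that consistent interface looks like in practice, here is a minimal sketch (my own illustration, assuming the open source pywbem library and a hypothetical SMI-S provider at smis.example.com). It connects to the provider's interop namespace and lists the registered profiles, which is how management software discovers what a device supports before provisioning storage or configuring replication.

    # Hedged sketch: discover the SMI-S profiles a provider advertises.
    # Requires the third-party pywbem package; host and credentials are made up.
    import pywbem

    conn = pywbem.WBEMConnection("https://smis.example.com:5989",
                                 creds=("admin", "secret"),
                                 default_namespace="interop")

    # CIM_RegisteredProfile instances describe the SMI-S profiles (e.g. Array,
    # Masking and Mapping, Replication Services) implemented by the device.
    for profile in conn.EnumerateInstances("CIM_RegisteredProfile"):
        print(profile["RegisteredName"], profile["RegisteredVersion"])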

Microsoft supports the latest version of SMI-S, 1.6.1, for managing storage across many storage devices in the data center. Windows Server and System Center manage private cloud storage. Microsoft Azure Site Recovery (ASR) orchestrates storage replication failover for disaster recovery. The primary method for SCVMM to talk to external storage is SMI-S; this is the industry default.

Consistent DR Experience

Management software that supports SMI-S–compliant devices can present a consistent experience across many devices without resorting to lowest-common-denominator capabilities. Storage manufacturers implement rich features in the devices and make them easy to manage using SMI-S. Management software can quickly identify the capabilities of each device using SMI-S and optimize the experience accordingly.

ASR is the latest Microsoft product that integrates with SMI-S–compliant devices to present a consistent experience for planned, unplanned, and test failovers of storage and workloads between sites. The ASR and System Center experiences focus on enabling protection at the workload level; under the covers, the storage is configured and replication is enabled using SMI-S.

Quality and Scale

Data centers with multiple storage devices more than likely use storage from multiple manufacturers and storage management products from multiple vendors. Although SMI-S ensures that the interfaces are well known, the quality of the implementations must be tested. Members of SMI participate in multiple plugfests every year to test product functionality and scalability. Plugfest attendees work side by side for a week in a lab to identify implementation issues, drive required updates to SMI-S, and work with companies new to SMI-S. Investing in plugfests ensures that customers receive the best quality product that works out of the box.

The SNIA SMI organization offers a comprehensive Conformance Testing Program (CTP) to test adherence to the specification; it provides independent verification of compliance that customers can view directly on the SNIA website. In addition, Microsoft worked with multiple SMI members for over a year to integrate storage replication management into System Center and Azure Site Recovery for site-to-site failover (planned, unplanned, and test) of workloads and storage. Microsoft provided each storage manufacturer with test suites to exercise the functionality, scale, and stress of the end-to-end solution.

Cost-Effective DR Solutions

The cost of implementing a disaster recovery plan includes building out a secondary data center. Customers that can afford this setup replicate storage between two sites so that workloads can fail over. With public and hosted clouds, the secondary site is provided by a different company. In this case, the local data center is primary. In both cases, by using SMI-S, the failover experience is consistent and works with all SMI-S 1.6.1–compliant storage devices.

Microsoft will demonstrate this exact scenario in Chicago at the Ignite Conference on May 4–May 8 using SMI-S–enabled storage devices. One of the advanced sessions will show how to mirror a virtual machine to a public cloud and replicate data drives to a cloud-adjacent storage device. This configuration enables DR without investing in a secondary data center, with the added benefit of offering data sovereignty for applications that require it.

This session will include NetApp® storage front and center and will show how NetApp can provide Hyper-V DR-to-cloud scenarios that maintain data sovereignty and enable elastic cloud compute. Please catch Hector Linares's and Barry Shilmover's presentations regarding ASR.