Reaching Nirvana? Maybe Not, But You Can Help In Better Understanding SSD and HDD Performance via SNIA’s Workload I/O Capture Program

SNIA’s Solid State Storage Initiative (SSSI) recently rolled out its new Workload I/O Capture Program, or WIOCP, a simple tool that captures software applications’ I/O activity by gathering statistics on workloads at the user level (IOPS, MB/s, response times, queue depths, etc.).  The WIOCP helps users identify “Hot Spots” where storage performance is creating bottlenecks. SNIA SSSI hopes that users will help the association collect real-use workload statistics by uploading their results to the SNIA website.

How it works
The WIOCP software is a safe and thoroughly tested tool that runs unobtrusively in the background, constantly capturing a large set of SSD and HDD I/O metrics that are useful both to the computer user and to SNIA. Users simply enter the drive letters for the drives whose I/O operation metrics are to be collected. The program does not record anything that might be sensitive, including details of your actual workload (for example, files you’ve accessed). Results are presented in clear and accessible report formats.
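The metrics themselves are straightforward to reason about. As a rough illustration (this is not WIOCP code — it reads the Linux /proc/diskstats counters, whereas WIOCP works against Windows drive letters), IOPS and MB/s can be derived from two snapshots of cumulative per-device I/O counters:

```python
import time

SECTOR = 512  # /proc/diskstats reports sector counts in 512-byte units

def read_diskstats(path="/proc/diskstats"):
    """Parse cumulative per-device I/O counters from /proc/diskstats."""
    stats = {}
    with open(path) as f:
        for line in f:
            fld = line.split()
            # fld[3]=reads completed, fld[5]=sectors read,
            # fld[7]=writes completed, fld[9]=sectors written
            stats[fld[2]] = (int(fld[3]), int(fld[5]),
                             int(fld[7]), int(fld[9]))
    return stats

def io_rates(before, after, interval):
    """Turn two counter snapshots into per-device (IOPS, MB/s)."""
    rates = {}
    for dev, b in before.items():
        a = after.get(dev, b)
        iops = ((a[0] - b[0]) + (a[2] - b[2])) / interval
        mbps = ((a[1] - b[1]) + (a[3] - b[3])) * SECTOR / interval / 1e6
        rates[dev] = (iops, mbps)
    return rates

if __name__ == "__main__":
    b = read_diskstats()
    time.sleep(1.0)
    a = read_diskstats()
    for dev, (iops, mbps) in sorted(io_rates(b, a, 1.0).items()):
        print(f"{dev}: {iops:.0f} IOPS, {mbps:.2f} MB/s")
```

A real capture tool samples continuously and also tracks response times and queue depths, but the snapshot-and-subtract pattern above is the core of any counter-based I/O monitor.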

How would the WIOCP help me as a user of computer systems?
Our upcoming white paper gives many reasons why you would want to download and run the WIOCP.  One reason is that empirical file and disk I/O performance metrics are invaluable for testing theories and claims about disk I/O performance. This is especially so when these metrics reflect the actual file and disk I/O activity performed by individual applications/workloads during normal use. Moreover, such empirical I/O metrics can be instrumental in uncovering and understanding performance “bottlenecks”, determining more precise I/O performance requirements, better matching disk storage purchases to particular workload needs, and designing/optimizing various disk storage solutions.
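To make the “bottleneck” idea concrete, here is a minimal sketch (the data and the threshold heuristic are hypothetical, not WIOCP output) that flags I/O targets whose average response time stands out from the rest:

```python
from statistics import mean

def find_hot_spots(samples, factor=2.0):
    """Flag I/O targets whose average response time is more than
    `factor` times the overall average (a simple hot-spot heuristic)."""
    overall = mean(t for times in samples.values() for t in times)
    return sorted(dev for dev, times in samples.items()
                  if mean(times) > factor * overall)

# Hypothetical per-drive response-time samples in milliseconds
samples = {
    "C:": [0.4, 0.5, 0.6],
    "D:": [9.0, 11.0, 10.0],   # clearly slower than the others
    "E:": [0.7, 0.8, 0.6],
}
print(find_hot_spots(samples))  # → ['D:']
```

Real hot-spot analysis would weigh percentiles and queue depths rather than simple means, but even this crude comparison shows how measured response times point directly at the drive that is holding a workload back.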

How can I help this project?
By downloading and running the WIOCP you help us collect I/O metrics, which can reveal insights into the particular ways that applications actually perform and experience I/O activity in “real-life” use. Using this information, SNIA member companies will be able to improve the performance of their solid state storage solutions, including SSDs and flash storage arrays. Help SNIA get started on this project by visiting http://www.hyperIO.com/hIOmon/hIOmonSSSIworkloadIOcaptureProgram.htm and entering the Download Key Code SSSI52kd9A8Z. The WIOCP tool will be delivered to your system with a unique digital signature. The tool takes only a few minutes to download and initialize, after which you can return to the task at hand!

If you have any questions or comments, please contact: SSSI_TechDev-Chair@SNIA.org

Red Hat adds commercial support for pNFS

Red Hat shipped its first commercially supported parallel NFS client in Red Hat Enterprise Linux on February 21st. The Red Hat ecosystem can now deploy pNFS with the confidence of engineering, test, and long-term support for the industry-standard protocol.


Red Hat Engineering has been working with the upstream community and several SNIA ESF member companies to backport code and test interoperability with RHEL6. This release supports all IO functions in pNFS, including Direct IO. Direct IO support is required for KVM virtualization, as well as to support the leading databases. Shared workloads and Big Data have performance and capacity requirements that scale unpredictably with business needs.


Parallel NFS (pNFS) enables scaling out NFS to improve performance, manage capacity and reduce complexity.  An IETF standard storage protocol, pNFS can deliver parallelized IO from a scale-out NFS array and uses out-of-band metadata services to deliver high-throughput solutions that are truly industry standard. SNIA ESF has published several papers and a webinar specifically focused on pNFS architecture and benefits. They can be found on the SNIA ESF Knowledge Center.
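To see why parallelism helps, consider this conceptual sketch (plain Python threads with hypothetical server names; it mimics only the data path, not the NFS protocol): the client uses a layout, obtained out-of-band from the metadata server, to fetch stripes from multiple data servers concurrently and reassemble them:

```python
from concurrent.futures import ThreadPoolExecutor

def read_stripe(server, offset, length, data):
    """Stand-in for one I/O request sent to a single pNFS data server."""
    return data[offset:offset + length]

def pnfs_read(layout, data):
    """Fetch all stripes concurrently and reassemble them in offset
    order, mimicking how a pNFS client drives parallel I/O using the
    layout handed to it by the metadata server."""
    with ThreadPoolExecutor(max_workers=len(layout)) as pool:
        futures = [pool.submit(read_stripe, srv, off, ln, data)
                   for srv, off, ln in layout]
        return b"".join(f.result() for f in futures)

# Layout entries: (data server, byte offset, stripe length); the server
# names are hypothetical placeholders
layout = [("ds1", 0, 4), ("ds2", 4, 4), ("ds3", 8, 4)]
print(pnfs_read(layout, b"parallelNFS!"))  # → b'parallelNFS!'
```

Because each stripe request goes to a different server, aggregate throughput scales with the number of data servers rather than being bounded by a single NFS head, which is the central performance argument for pNFS.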

Note: There is a poll embedded within this post, please visit the site to participate in this post's poll.

Take Our 10GBASE-T Quick Poll

I’ve gotten some interesting feedback on my recent 10GBASE-T blog, “How is 10GBASE-T Being Adopted and Deployed.” It’s prompted us at the ESF to learn more about your 10GBASE-T plans. Please let us know by taking our 3-question poll. I’ll share the results in a future blog post.

Note: There is a poll embedded within this post, please visit the site to participate in this post's poll.

How DCB Makes iSCSI Better

A challenge with traditional iSCSI deployments is the non-deterministic nature of Ethernet networks. When Ethernet networks carried only non-storage traffic, lost packets were not a big issue, since they would simply be retransmitted. As storage traffic was layered over Ethernet, however, lost packets became unacceptable: storage traffic is far less forgiving than non-storage traffic, and retransmissions introduce I/O delays that storage cannot tolerate. In addition, traditional Ethernet had no mechanism to assign priorities to classes of I/O.

Therefore a new solution was needed. Short of creating a separate Ethernet network to handle iSCSI storage traffic, Data Center Bridging (DCB) was that solution.

The DCB standard is a key enabler of effectively deploying iSCSI over Ethernet infrastructure. The standard provides the framework for high-performance iSCSI deployments with key capabilities that include:
- Priority Flow Control (PFC)—enables “lossless Ethernet”, a consistent stream of data between servers and storage arrays. It prevents dropped frames and maximizes network efficiency. PFC also helps optimize SCSI communication and minimizes the effects of TCP retransmissions, making the iSCSI flow more reliable.
- Quality of Service (QoS) and Enhanced Transmission Selection (ETS)—support protocol priorities and allocation of bandwidth for iSCSI and IP traffic.
- Data Center Bridging Capabilities eXchange (DCBX) — enables automatic network-based configuration of key network and iSCSI parameters.
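To make the ETS capability concrete, here is a simplified model (the traffic classes and weights are hypothetical, and this is arithmetic only, not a switch configuration) of how ETS, defined in IEEE 802.1Qaz, divides link bandwidth among traffic classes in proportion to their configured weights:

```python
def ets_allocation(link_gbps, weights):
    """Split link bandwidth among traffic classes in proportion to
    their ETS weights (a simplified model of 802.1Qaz ETS)."""
    total = sum(weights.values())
    return {tc: link_gbps * w / total for tc, w in weights.items()}

# Hypothetical split: 50% iSCSI, 30% LAN, 20% management on a 10GbE link
alloc = ets_allocation(10, {"iscsi": 50, "lan": 30, "mgmt": 20})
print(alloc)  # → {'iscsi': 5.0, 'lan': 3.0, 'mgmt': 2.0}
```

In a real switch these weights set guaranteed minimums: an idle class’s bandwidth can be borrowed by busy classes, so iSCSI is assured its share under congestion without wasting capacity when the link is quiet.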

With DCB, iSCSI traffic is more balanced over high-bandwidth 10GbE links. From an investment protection perspective, the ability to carry iSCSI and LAN IP traffic over a common network makes it possible to consolidate iSCSI storage area networks with traditional IP LAN traffic networks.

There is also another key component needed for iSCSI over DCB. This component, part of the Data Center Bridging Capabilities eXchange (DCBX) standard, is called the TCP Application Type-Length-Value, or simply “TLV”. TLV allows the DCB infrastructure to apply unique ETS and PFC settings to specific sub-segments of the TCP/IP traffic. Switches identify these sub-segments by the TCP socket or port identifier included in the TCP/IP frame. In short, TLV directs servers to place iSCSI traffic on available PFC queues, which separates storage traffic from other IP traffic. PFC also virtually eliminates data retransmission and supports a consistent data flow with low latency. IT administrators can leverage QoS and ETS to assign bandwidth and priority to iSCSI storage traffic, which is crucial for supporting critical applications.
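A toy illustration of that classification step (pure Python with a hypothetical priority value; in practice the mapping is advertised via DCBX and applied in switch hardware): frames are steered to a priority, and hence a PFC queue, by TCP port, with iSCSI’s well-known port 3260 mapped to its own class:

```python
ISCSI_PORT = 3260  # IANA well-known port for iSCSI

# Hypothetical mapping from TCP port to 802.1p priority, as an
# application TLV might advertise it; unmatched traffic falls into
# the default class
PRIORITY_MAP = {ISCSI_PORT: 4}
DEFAULT_PRIORITY = 0

def classify(dst_port):
    """Assign a frame to a priority (and PFC queue) by TCP port."""
    return PRIORITY_MAP.get(dst_port, DEFAULT_PRIORITY)

frames = [(3260, "iSCSI write"), (80, "HTTP"), (3260, "iSCSI read")]
for port, desc in frames:
    print(f"{desc}: priority {classify(port)}")
```

Once iSCSI frames carry their own priority, PFC can pause just that queue during congestion while ETS guarantees it bandwidth, which is exactly the separation the paragraph above describes.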

Therefore, depending on your overall datacenter environment, running iSCSI over DCB can improve:
- Performance by ensuring a consistent stream of data, resulting in “deterministic performance” and the elimination of packet loss that can cause high latency
- Quality of service through allocation of bandwidth per protocol for better control of service levels within a converged network
- Network convergence

For more information on this topic or technologies discussed in this blog, please visit some of our other blog articles:
- What Up with DCBX
- iSCSI over DCB: Reliability and predictable performance

Or check out the IEEE website on DCB.