Q&A from “SAS 101: The Most Widely-Deployed Storage Interface” Webcast

Questions from SAS 101 Webcast Answered

In an effort to provide ongoing educational content to the industry, the SCSI Trade Association (STA) tackled the basics of Serial Attached SCSI (SAS) in a webinar titled “SAS 101: The Most Widely-Deployed Storage Interface,” now available on demand here.

Immediately following their presentations, our experts, Jeremiah Tussey of Microchip Technology and current STA Vice President, and Jeff Mason of TE Connectivity, a former STA board member, held a Q&A session. In this blog, we've captured the questions asked and the answers given to provide extra insight into why SAS remains the protocol of choice in the data center.

Q1. You mentioned that SAS supports up to 64K devices. Has anyone been able to test a SAS topology of that scale?
A1. Although the technology is capable of supporting 64K devices, realistic implementations are bounded by the routing table designs within today's expanders and controllers. Typically, you see 1K to 2K attached devices per topology, but many of these topologies run in parallel, so the total device count can certainly scale toward that level. In practice, though, I/O flow, RAID, and the demands of different applications and data transfers mean the average topology probably supports no more than 2K devices.
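
To put rough numbers on that, here is a minimal sketch (our illustration, not from the webcast) of how routing table capacity, rather than the 64K address space, ends up capping a topology. The expander sizes and table depth below are hypothetical example values; real hardware varies by vendor.

```python
# Illustrative sketch: rough device-count math for a cascaded SAS topology.
# The fan-out and table size are hypothetical example values.

def max_end_devices(expanders: int, phys_per_expander: int,
                    uplink_phys: int, route_table_entries: int) -> int:
    """Estimate attachable end devices in a simple expander cascade."""
    # Each expander spends some PHYs on uplinks to the controller or the
    # previous expander; the rest can attach end devices.
    per_expander = phys_per_expander - uplink_phys
    raw = expanders * per_expander
    # The routing table caps how many SAS addresses can actually be
    # reached, regardless of the theoretical 64K address space.
    return min(raw, route_table_entries)

# Example: fifty 48-PHY expanders, x4 uplinks, a 2K-entry routing table.
print(max_end_devices(expanders=50, phys_per_expander=48,
                      uplink_phys=4, route_table_entries=2048))  # -> 2048
```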

Q2. What are the key gaps in NVMe technology that SAS fills?
A2. From a performance and latency standpoint, NVMe is top of its class with its typical shipping x4 interface. However, as we develop and extend our innovations in SAS, the majority of SAS market deployments still utilize rotating media, so inherent scalability and the flexibility to dynamically add a mixture of SAS and SATA targets (SSDs, HDDs, tape, etc.) to new or existing configurations are where SAS topologies excel. SATA deployments generally offer higher capacities at lower cost, though not quite the performance and reliability of SAS deployments intended for mission-critical applications or balanced workloads. Overall, SAS is a technology that's going to be implemented in the enterprise and cloud, not in the PC world or the other higher-volume, lower-cost markets where NVMe is becoming the go-to choice.

Q3. What's the difference at the device level between a SAS SSD and an NVMe SSD?
A3. Overall, on a lane-by-lane basis, SAS is the faster interface in today's typical applications. NVMe drives are deployed with a x4 interface in most applications, or sometimes x2, so they have some advantages there natively. But there are capabilities built into SAS drives that have been hardened over many years. For example, SAS supports surprise hot plug in environments that need to swap drives dynamically. Those features are natively built into SAS, while NVMe drives are still making progress there.
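
To make the lane-by-lane comparison concrete, here is a back-of-the-envelope calculation we've added (not from the webcast), assuming the published line rates and encodings: 24G SAS at 22.5 Gb/s with 128b/150b encoding, and a PCIe Gen4 NVMe lane at 16 GT/s with 128b/130b encoding. Real-world throughput also depends on protocol and controller overhead.

```python
# Per-lane payload bandwidth after line-encoding overhead.

def effective_gbytes_per_s(line_rate_gbps: float, payload_bits: int,
                           coded_bits: int) -> float:
    """Per-lane payload bandwidth in GB/s after line encoding."""
    return line_rate_gbps * payload_bits / coded_bits / 8

sas_24g   = effective_gbytes_per_s(22.5, 128, 150)  # 128b/150b -> ~2.4 GB/s
pcie_gen4 = effective_gbytes_per_s(16.0, 128, 130)  # 128b/130b -> ~1.97 GB/s

print(f"24G SAS per lane:   {sas_24g:.2f} GB/s")
print(f"PCIe Gen4 per lane: {pcie_gen4:.2f} GB/s")
```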

Q4. What’s the maximum length of external 24G SAS copper cables?
A4. In general, we're going to follow the same lengths we had in past generations: 1.5 meters. Beyond that, you might be looking at active cable technology, but overall that is the safe distance for copper cables.

Q5. What’s the real-world bandwidth of a x4 24G SAS external port?
A5. The real-world bandwidth of a x4 24G SAS external port is 9.6 GB/s, which is full bandwidth across x4 connectivity. There are new cables now that support x8, so that bandwidth can be doubled.
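
As a quick sanity check on the 9.6 GB/s figure (our arithmetic, consistent with the per-lane math above), a wide port scales linearly with lane count:

```python
PER_LANE_GB_S = 2.4  # effective 24G SAS payload bandwidth per lane, GB/s

for lanes in (4, 8):
    print(f"x{lanes} port: {lanes * PER_LANE_GB_S:.1f} GB/s")
# x4 port: 9.6 GB/s
# x8 port: 19.2 GB/s
```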

Q6. Is T10 working on the tri-mode U.3 specification?
A6. The way we define our standards, we are focused on SAS technology. The U.3 specification is standardized through the Small Form Factor (SFF) Technical Workgroup, which is now under the SNIA umbrella. Therefore, we don't directly drive the requirements for it, but we are certainly open to supporting these types of applications that allow different types of connectivity, including SAS products. STA has several members that contribute to SFF standardization work, typically supporting SAS and SATA inclusion in the standards that directly apply.

Q7. In one of your topology diagrams, you showed a mix of SAS HDDs, SAS SSDs, and SATA HDDs. Could you discuss again why someone would mix drive types like that?
A7. Certainly, it's really a choice of the implementer, but some of the reasoning behind mixing different types of media relates to data center applications requiring different tiers, with varying metrics for hot data versus warm data versus cold data. When you need higher performance and lower latency, that's typically where you would use SSDs. Depending on your performance and cost requirements, you can use SAS SSDs for the highest performance at an added cost, or SATA SSDs that give you a slightly lower performance metric at a lower cost point.

What you typically see is an overlap in some areas of the overall tiering, where you'll have SAS SSDs at the top of the line, and a mixture of SATA SSDs with SAS or SATA HDDs in a cached type of JBOD that provides more of a medium-level, warm-access data platform. Then, down the spectrum, you would have your colder data on nearline SATA and SAS HDDs, all the way down to SATA HDDs with SMR. The SMR technology provides serialization and striping of data that gives you the lowest cost per GB. There are even tiers lower than that, including tape and CD technologies, which are certainly part of the ecosystem that can be supported with SAS infrastructure.
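
As a purely illustrative sketch (not prescriptive guidance from the webcast), here is one way a designer might express that tiering in code. The tier names and access-frequency thresholds are hypothetical:

```python
# Hypothetical tier map across the media types discussed above.
TIERS = {
    "hot":    "SAS SSD",                  # highest performance, highest cost
    "warm":   "SATA SSD + SAS/SATA HDD",  # cached JBOD, medium latency
    "cold":   "nearline SAS/SATA HDD",    # high capacity, lower cost
    "frozen": "SMR SATA HDD or tape",     # lowest cost per GB, sequential
}

def place(accesses_per_day: float) -> str:
    """Pick a tier from a (hypothetical) access-frequency threshold."""
    if accesses_per_day >= 100: return TIERS["hot"]
    if accesses_per_day >= 10:  return TIERS["warm"]
    if accesses_per_day >= 1:   return TIERS["cold"]
    return TIERS["frozen"]

print(place(250))   # SAS SSD
print(place(0.01))  # SMR SATA HDD or tape
```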

Q8. What is SMR?
A8. SMR stands for Shingled Magnetic Recording. This is a technology that many of the hard drive manufacturers are deploying today in various applications, specifically cloud data center applications that need the lowest cost per GB. It allows tracks of data on the disk platters themselves to overlap to a degree, so data is packed more densely on the platters.

SMR has a specific use case: it requires more serialization of the data streams to and from the drives, so more management is needed from the host. This means a little more oversight and control over how data is placed on the drive. It's not as well-suited for random IOPS, but it certainly provides a more compact method of recording data.
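
To illustrate the host-managed constraint in code, here is a toy model we've added (not a real SMR or ZBC/ZAC interface): each shingled zone accepts writes only at its write pointer, so the host must keep each stream sequential.

```python
class Zone:
    """Toy model of a host-managed SMR zone."""

    def __init__(self, start_lba: int, length: int):
        self.start = start_lba
        self.length = length
        self.write_pointer = start_lba  # next writable LBA in this zone

    def write(self, lba: int, nblocks: int) -> None:
        if not (self.start <= lba < self.start + self.length):
            raise ValueError("LBA outside this zone")
        # Host-managed SMR rejects writes that don't land on the write
        # pointer; random writes must be redirected or buffered by the host.
        if lba != self.write_pointer:
            raise ValueError(f"non-sequential write at LBA {lba}; "
                             f"write pointer is at {self.write_pointer}")
        self.write_pointer += nblocks

zone = Zone(start_lba=0, length=524288)  # one hypothetical 256 MiB zone
zone.write(0, 128)     # OK: lands on the write pointer
zone.write(128, 128)   # OK: the host kept the stream sequential
# zone.write(1024, 8)  # would raise: random write into a shingled zone
```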
