
    What’s New in the CDMI 1.1 Cloud Storage Standard

    November 17th, 2014

    On December 2, 2014, the CSI is hosting a Developer Tutorial Webcast “Introducing CDMI 1.1” to dive into all the capabilities of CDMI 1.1.

    Register now to join David Slik, Co-Chair, SNIA Cloud Storage Technical Work Group and me, Alex McDonald, as we’ll explore what’s in this major new release of the CDMI standard, with highlights on what you need to know when moving from CDMI 1.0.2 to CDMI 1.1.

    The latest release – CDMI 1.1 – includes:

    • Support for other popular industry-supported cloud storage protocols such as OpenStack Swift and Amazon S3
    • A variety of extensions, some part of the core specification and some stand-alone, including a CIMI standard extension, support for immediate queries, an LTFS Export extension, an OVF extension, and multi-part MIME and versioning extensions. A full list can be found here.
    • 100% backward compatibility with the ISO-certified CDMI 1.0.2, to ensure continuity with existing CDMI systems
    • And more

    This event on December 2nd will be live, so please bring your specific questions. We’ll do our best to answer them on the spot. I hope you’ll join us!

     


    Implementing Multiple Cloud Storage APIs

    November 13th, 2014

    OpenStack Summit Paris

    The beauty of cloud storage APIs is that there are so many to choose from. Of course, if you are implementing a cloud storage API for a customer to use, you don’t want to have to implement too many of these. When customers ask for support of a given API, can a vendor survive if it ignores these requests? A strategy many vendors are taking is to support multiple APIs with a single implementation. Besides the Swift API, many support the de facto S3 API and the standard CDMI API in their implementation. What is needed for these APIs to co-exist in an implementation? There are basic operations that are nearly identical between them, but what about semantics that have multiple different expressions, such as metadata?
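One concrete example of those differing expressions is user metadata: S3 carries it in "x-amz-meta-" request headers, Swift in "X-Object-Meta-" headers, and CDMI in a JSON "metadata" object in the message body. A minimal Python sketch of a normalization layer (the function names are illustrative, not from any particular library):

```python
# Sketch: normalizing user metadata across three cloud storage APIs.
# S3 and Swift carry metadata in prefixed HTTP headers; CDMI carries it
# as a JSON "metadata" object in the body.

S3_PREFIX = "x-amz-meta-"
SWIFT_PREFIX = "x-object-meta-"

def metadata_from_headers(headers, prefix):
    """Extract user metadata from prefixed HTTP headers (S3/Swift style)."""
    return {k[len(prefix):].lower(): v
            for k, v in headers.items()
            if k.lower().startswith(prefix)}

def headers_from_metadata(metadata, prefix):
    """Render a normalized metadata dict back into prefixed headers."""
    return {prefix + k: v for k, v in metadata.items()}

def metadata_from_cdmi(body):
    """CDMI: metadata travels in the JSON body, not in headers."""
    return dict(body.get("metadata", {}))

s3_headers = {"Content-Type": "text/plain", "x-amz-meta-owner": "alice"}
meta = metadata_from_headers(s3_headers, S3_PREFIX)        # {'owner': 'alice'}
swift_headers = headers_from_metadata(meta, SWIFT_PREFIX)  # Swift-style headers
```

With a layer like this, the common object-storage code can work on plain dictionaries and defer the API-specific spelling to one translation step per protocol.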

    Mark Carlson, Alex McDonald and Glyn Bowden led the discussion of this at the Paris summit.


     

    For the implementers of a cloud storage solution, it is not just the semantics of the APIs that must be supported, but also the Authentication and Authorization mechanisms related to those APIs. This is typically done by hosting the required services somewhere on the network and synchronizing them with a back-end Directory service.

    Multiple APIs

    Swift leverages Keystone for authentication, so in order to support Swift clients, you would need to run a Keystone instance on your Auth Server. If you want to support S3 clients, you need a service that is compatible with Signature Version 4 from Amazon. When creating a client, you might use a common library/proxy to insulate your code from the underlying semantic differences of these APIs; jclouds is such a tool. The latest version of the CDMI API (version 1.1) has capability metadata (like a service catalog) that shows which Auth APIs any given cloud supports. This allows a CDMI client to use Keystone, for example, as its auth mechanism while using the standard HTTP-based storage operations and the advanced metadata standards from CDMI. To address the requirements for multiple APIs with the least amount of code duplication, there are some synergies that can be realized.

    Storage Operations

    • CRUD – All pretty much determined by HTTP standard (common code)
    • Headers are API unique however (handle in API specific modules)

    Security Operations

    • Client communication with Auth Server (API unique)
    • Multiple separate services running in Auth Server
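To make that split concrete, here is a minimal Python sketch of the division of labor suggested above: CRUD verbs handled by common HTTP code, headers supplied by API-specific modules. The endpoints, tokens, and function names are illustrative only.

```python
# Sketch: common CRUD code plus API-specific header modules.
# The verb mapping is shared; only the headers differ per API.

COMMON_VERBS = {"create": "PUT", "read": "GET", "update": "PUT", "delete": "DELETE"}

def cdmi_headers():
    # CDMI-style request headers (in 1.1 the content type marks a CDMI operation)
    return {"Accept": "application/cdmi-object",
            "Content-Type": "application/cdmi-object"}

def swift_headers(token):
    # Swift expects a Keystone-issued token in X-Auth-Token
    return {"X-Auth-Token": token}

def build_request(op, url, api_headers):
    """Combine the common verb with whichever API-specific headers apply."""
    return {"method": COMMON_VERBS[op], "url": url, "headers": api_headers}

# Hypothetical endpoint, just to show the shape of a dispatched request:
req = build_request("read", "https://cloud.example.com/container/obj",
                    swift_headers("token-123"))
```

The same build_request call services all three APIs; only the header module changes, which is where most of the code-sharing synergy comes from.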

    Looking at two of the interfaces in particular, this chart shows the relationship between the Swift API model and that of the CDMI standard.


     When an object with a name that includes one or more “/“ characters is stored in a cloud, the model viewed via Swift and the view that CDMI shows are similar. Using CDMI, however, the client has access to additional capabilities to manage each level of “/“ containers and subcontainers. CDMI also standardizes a rich set of metadata that is understood and interpreted by the system implementing the cloud.
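A small Python sketch of that difference in view: a Swift client sees one flat object name, while a CDMI client can address each intervening container implied by the “/” characters. This is an illustration of the model, not a client library.

```python
def cdmi_containers(object_name):
    """Return the chain of intervening CDMI containers implied by the '/'
    characters in an object name (each level is separately addressable)."""
    parts = object_name.split("/")
    prefixes = []
    for i in range(1, len(parts)):
        prefixes.append("/".join(parts[:i]) + "/")
    return prefixes

# Swift view: one flat name, "photos/2014/paris/summit.jpg".
# CDMI view: the same object plus three addressable container levels:
cdmi_containers("photos/2014/paris/summit.jpg")
# ['photos/', 'photos/2014/', 'photos/2014/paris/']
```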

    If you are looking for information that compares the Amazon S3 API with the CDMI standard one, there is a white paper available.


    The latest version of CDMI – http://www.snia.org/sites/default/files/CDMI_Spec_v1.1.pdf – makes this even easier:

    • Spec text that (in 1.0) explicitly forbade functionality required for S3/Swift integration has been removed from the spec (“/”s may create intervening CDMI containers)
    • Baseline operations (mostly governed by RFC 2616) now documented in Clause 6 (pgs. 28-35)
    • CDMI now uses content type to indicate CDMI-style operations (as opposed to X-CDMI-Specification-Version)
    • Specific authentication is no longer mandatory. CDMI implementations can now use S3 or Swift authentication exclusively, if desired.
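The third bullet can be sketched in a few lines: the same PUT against the same path becomes a CDMI-style or plain HTTP-style operation depending on content type. The path and helper name here are hypothetical.

```python
# Sketch: in CDMI 1.1, the content type (rather than an
# X-CDMI-Specification-Version header) tells the server whether a request
# is a CDMI-style operation or a plain HTTP one.
import json

def put_object(path, data, cdmi_style=True):
    """Build the two flavors of PUT against the same object path."""
    if cdmi_style:
        # CDMI-style: JSON envelope, CDMI content type
        return {"method": "PUT", "url": path,
                "headers": {"Content-Type": "application/cdmi-object"},
                "body": json.dumps({"value": data})}
    # Plain HTTP-style: raw payload, ordinary content type (S3/Swift clients)
    return {"method": "PUT", "url": path,
            "headers": {"Content-Type": "application/octet-stream"},
            "body": data}

cdmi_req = put_object("/container/obj.txt", "hello")
plain_req = put_object("/container/obj.txt", "hello", cdmi_style=False)
```

Because the switch rides on a standard HTTP header, one server endpoint can serve both styles of client without versioned URLs.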

    CDMI 1.1 now includes a standard means of discovering which auth methods are available: cdmi_authentication_methods (Data System Metadata, clause 12.1.3): “If present, this capability contains a list of server-supported authentication methods that are supported by a domain. The following values for authentication method strings are defined:

    • “anonymous” – Absence of authentication supported

    • “basic” – HTTP basic authentication supported (RFC 2617)

    • “digest” – HTTP digest authentication supported (RFC 2617)

    • “krb5” – Kerberos authentication supported, using the Kerberos Domain specified in the CDMI domain (RFC 4559)

    • “x509” – Certificate-based authentication via TLS (RFC 5246)”

    The following values are examples of other widely used authentication methods that may be supported by a CDMI server:

    • “s3” – S3 API signed header authentication supported

    • “openstack” – OpenStack Identity API header authentication supported

    Interoperability with these authentication methods is not defined by this international standard. Servers may include other authentication methods not included in the above list; in these cases, it is up to the CDMI client and CDMI server implementations themselves to ensure interoperability. When present, the cdmi_authentication_methods data system metadata shall be supported for all domains.
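Putting the capability to use, a client might fetch the capabilities object, read cdmi_authentication_methods, and pick the best mutually supported method. The sketch below inlines a sample response rather than performing the GET, and the preference order is an arbitrary example.

```python
# Sketch: discovering which auth methods a CDMI 1.1 server offers via
# cdmi_authentication_methods. A real client would GET the capabilities
# object first; here the response body is inlined for illustration.
import json

capabilities_body = json.dumps({
    "capabilities": {
        "cdmi_authentication_methods": ["anonymous", "basic", "s3", "openstack"]
    }
})

def supported_auth(body):
    """Pull the advertised auth method list out of a capabilities response."""
    caps = json.loads(body).get("capabilities", {})
    return caps.get("cdmi_authentication_methods", [])

def pick_auth(offered, preferred=("openstack", "s3", "basic", "anonymous")):
    """Choose the first mutually supported method, most preferred first."""
    for method in preferred:
        if method in offered:
            return method
    return None

chosen = pick_auth(supported_auth(capabilities_body))  # -> 'openstack'
```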


    Other resources that are available for developers include:

    • CDMI for S3 Developers

    • Comparison of S3/Swift functions

    • Implementation of CDMI filter driver for Swift

    • Implementation of S3 filter driver for Swift

    For the slides from the talk, the site snia.org/cloud has the SlideShare and PDF links.

     

     


    A Beginner’s Guide to NVMe

    November 11th, 2014

    When I first started in storage technology (it doesn’t seem like that long ago, really!) the topic seemed like it was becoming rather stagnant. The only thing that seemed to be happening was that disks were getting bigger (more space) and the connections were getting faster (more speed).

    More speed, more space; more space, more speed. Who doesn’t like that? After all, you can never have too much bandwidth, or too much disk space! Even so, it does get rather routine. It gets boring. It gets, well, “what have you done for me lately?”

    Then came Flash.

    People said that Flash memory was a game changer, and though I believed it, I just didn’t understand how, really. I mean, sure, it’s really, really fast storage drives, but isn’t that just the same story as before? This is coming, of course, from someone who was far more familiar with storage networks than storage devices.

    Fortunately, I kept that question to myself (well, at least until I was asked to write this blog), thus saving myself from potential embarrassment.

    There is no shortage of Flash vendors out there who (rightfully) would have jumped at the chance to set my misinformed self on the straight and narrow; they would have been correct, too. Flash isn’t just “cool,” it allows the coordination, access, and retrieval of data in ways that simply weren’t possible with more traditional media.

    There are different ways to use Flash, of course, and different architectures abound in the marketplace – from “All Flash Arrays” (AFAs) and “Hybrid Arrays” (a combination of Flash and spinning disk) to more traditional systems that have simply replaced the spinning drives with Flash drives (without much change to the architecture).

    Even with these architectures, though, Flash is still constrained by the very basic tenets of SCSI connectivity. While this type of connectivity works very well (and has a long and illustrious history), the characteristics of Flash memory allow for some capabilities that SCSI wasn’t built to handle.

    Enter NVMe.

    What’s NVMe?

    Glad you asked. NVMe stands for “Non-Volatile Memory Express.” If that doesn’t clear things up, let’s unpack this a bit.

    Flash and Solid State Devices (SSDs) are a type of non-volatile memory (NVM). At the risk of oversimplifying, NVM is a type of memory that keeps its content when the power goes out. It also implies a way by which the data stored on the device is accessed. NVMe is a way that you can access that memory.

    If it’s not quite clear yet, don’t worry. Let’s break it down with pretty pictures.

    Starting Before the Beginning – SCSI

    For this, you better brace yourself. This is going to get weird, but I promise that it will make sense when it’s all over.

    Let’s imagine for the moment that you are responsible for programming a robot in a factory. The factory has a series of conveyor belts, each with things that you have to put and get with the robot.

    Graphic 1

    The robot is on a track, and can only move sideways along the track. Whenever it needs to get a little box on a conveyor belt, it has to move from side to side until it reaches the correct belt, and then wait for the correct orange block (below) to arrive as the belt rotates.

    Now, to make things just a little bit more complicated, each conveyor belt is a little slower than the previous one. For example, belt 1 is the fastest, belt 2 is a little slower, belt 3 even slower, and so on.

    Graphic 2

    In a nutshell, this is analogous to how spinning disk drives work. The robot arm – called a read/write head – moves across a spinning disk from sector to sector (analogous, imperfect as it may be, to our trusty conveyor belts) to pick up blocks of data.

    (As an aside, this is the reason why there are differences in performance between long contiguous blocks to read and write – called sequential data – and randomly placing blocks down willy-nilly in various sectors/conveyor belts – called random read/writes.)

    Now remember, our trusty robot needs to be programmed. That is, you – in the control room – need to send instructions to the robot so that it can get/put its data. The command set that is used to do this, in our analogy, is SCSI.

    Characteristics of SCSI

    SCSI is a tried-and-true command structure. It is, after all, the protocol for controlling most storage devices in the world. In fact, it is so ubiquitous that most people nowadays simply take it as a given. It works as an upper layer protocol across so many levels and layers, and with so many different devices, that it’s easily the default “go-to” choice. It’s used with Fibre Channel, FCoE, iSCSI (obviously), InfiniBand – even the hard drives in your desktops and laptops.

    It was also built for devices that relied heavily on the limitations of these conveyor belts. Rotational media has long been shown to be superior to linear media (i.e., tape) in terms of speed and performance, but the engineering required to make up for the changes in speed from the inside of the drive (where the rotational speed is much slower) to the outside – similar to the difference between our conveyor belts “4” and “1” – results in some pretty fancy footwork on the storage side.

    Because the robot arm must move back and forth, it’s okay that it can only handle one series of commands at a time. Even if you could tell it to get a block from conveyor belt 1 and 3 at the same time, it couldn’t do it. It would have to queue the commands and get to each in turn.

    So, the fact that SCSI only sent commands one-at-a-time was okay, because the robot could only do that anyway.

    Then came Flash, and suddenly the things seemed a bit… constricted.

    The Flash Robot

    Let’s continue with the analogy, shall we? We still have our mandate – control a robot to get and put blocks of information onto storage media. This time, it’s Flash.

    First, let’s do away with the conveyor belts altogether. Instead, let’s lay out all the blocks (in a nice OCD-friendly way) on the media, so that the robot arm can see it all at once:

    Graphic 3

    From its omniscient view, the robot can “see” all the blocks at once. Nothing moves, of course, as this is solid-state. However, as it stands, this is how we currently use Flash with a robot arm that responds to SCSI commands. We still have to address our needs one-at-a-time.

    Now, it’s important to note that because we’re not talking about spinning media, the robot arm can go really fast (of course, there’s no real moving read/write head in a Flash drive, but remember we’re looking at this from a SCSI standpoint). The long and the short of it is that even though we can see all of the data from our vantage point, we still only ask for information one at a time, and we still only have one queue for those requests.

    Enter NVMe

    This is where things get very interesting.

    NVMe is the standardized interface for PCIe SSDs (it’s important to note, however, that NVMe is not PCIe!). It was architected from the ground-up, specifically for the characteristics of Flash and NVM technologies.

    Most of the technical specifics are available at the NVM Express website, but here are a couple of the main highlights.

    First, whereas SCSI has one queue for commands, NVMe is designed to have up to 64 thousand queues. Moreover, each of those queues in turn can have up to 64 thousand commands. Simultaneously. Concurrently. That is, at the same time.
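A toy Python model (not a driver) of why that matters: with one shared queue, every initiator funnels through the same line; with per-core submission queues, commands are submitted independently with no contention on a single queue. Queue counts here are scaled far down from the 64 thousand maximums.

```python
# Toy model contrasting SCSI's single command queue with NVMe's many
# independent submission queues. Purely illustrative of queueing, not I/O.
from collections import deque

class SingleQueueDevice:
    """SCSI-like: every initiator funnels into one shared queue."""
    def __init__(self):
        self.queue = deque()
    def submit(self, cmd):
        self.queue.append(cmd)
    def outstanding(self):
        return len(self.queue)

class MultiQueueDevice:
    """NVMe-like: e.g. one submission queue per CPU core, no shared funnel."""
    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]
    def submit(self, queue_id, cmd):
        self.queues[queue_id].append(cmd)
    def outstanding(self):
        return sum(len(q) for q in self.queues)

nvme = MultiQueueDevice(num_queues=4)  # scaled down from 64K for the sketch
for core in range(4):
    for i in range(10):
        # each "core" submits to its own queue, independently of the others
        nvme.submit(core, "read-%d-%d" % (core, i))
```

In the single-queue case those 40 commands would all contend for one queue head; in the multi-queue case each core's 10 commands sit in their own queue.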

    That’s a lot of freakin’ commands goin’ on at once!

    Let’s take a look at our programmable robot. To complete the analogy, instead of one arm, our intrepid robot has 64 thousand arms, each with the ability to handle 64 thousand commands.

    Graphic 4

     

    Second, NVMe streamlines those commands to only the basics that Flash technologies need: 13 to be exact.

    Remember when I said that Flash has certain characteristics that allow for radically changing the way data centers store and retrieve data? This is why.

    Flash is already fast. NVMe can make this even faster than we do so today. How fast? Very fast.

    Graphic 5

    Source: The Performance Impact of NVMe and NVMe over Fabrics 

    This is just an example, of course, because enterprise data centers have more workloads than just those that run 4K random reads. Even so, it’s a pretty nifty example:

    • For 100% random reads, NVMe has 3x better IOPS than 12Gbps SAS
    • For 70% random reads, NVMe has 2x better IOPS than 12Gbps SAS
    • For 100% random writes, NVMe has 1.5x better IOPS than 12Gbps SAS

    What about sequential data?

    Graphic 6

    Source: The Performance Impact of NVMe and NVMe over Fabrics 

    Again, this is just one scenario, but the results are still quite impressive. For one, NVMe delivers greater than 2.5 GB/s read performance and ~2 GB/s write performance:

    • 100% reads: NVMe has 2x performance of 12Gbps SAS
    • 100% writes: NVMe has 2.5x performance of 12Gbps SAS

    Of course, there is more to data center life than IOPS! Those efficiencies of command structure that I mentioned above also cut CPU cycles in half, as well as reduce latency by more than 200 microseconds compared to 12 Gbps SAS.

    I know it sounds like I’m picking on poor 12 Gbps SAS, but at the moment it is the closest thing to the NVMe type of architecture. The reason is NVMe’s relationship with PCIe.

    Relationship with PCIe

    If there’s one place where there is likely to be some confusion, it’s here. I have to confess, when I first started going deep into NVMe, I got somewhat confused, too. I understood what PCIe was, but I was having a much harder time figuring out where NVMe and PCIe intersected, because most of the time the conversations tend to blend the two technologies together in the discussion.

    That’s when I got it: they don’t intersect.

    When it comes to “hot data,” we in the industry have been seeing a progressive migration toward the CPU. Traditional hosts contain an I/O controller that sits in between the CPU and the storage device. By using PCIe, however, it is possible to eliminate that I/O controller from the data path, making the data flows very, very quick.

    Because of the direct connection to the CPU, PCIe has some pretty nifty advantages, including (among others):

    • Lower latency
    • Scalable performance (~1 GB/s per lane, and PCIe 3.0 x8 cards have 8 lanes – that’s what the “x8” means)
    • Increased I/O (up to 40 PCIe lanes per CPU socket)
    • Low power
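The arithmetic behind the second and third bullets, assuming roughly 1 GB/s of usable throughput per PCIe 3.0 lane:

```python
# Back-of-the-envelope PCIe 3.0 bandwidth: lane count sets the ceiling.
GB_PER_SEC_PER_LANE = 1.0  # approximate usable throughput per PCIe 3.0 lane

def pcie_bandwidth(lanes):
    """Aggregate bandwidth in GB/s for a given lane count."""
    return lanes * GB_PER_SEC_PER_LANE

x8_card = pcie_bandwidth(8)     # an x8 card: ~8 GB/s
per_socket = pcie_bandwidth(40) # 40 lanes per CPU socket: ~40 GB/s aggregate
```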

    This performance of PCIe, as shown above, is significant. Placing a SSD on that PCIe interface was, and is, inevitable. However, there needed to be a standard way to communicate with the SSDs through the PCIe interface, or else there would be a free-for-all for implementations. The interoperability matrices would be a nightmare!

    Enter NVMe.

    NVM Express is that standardized way of communicating with an NVM storage device, and is backed by an ever-expanding consortium of hardware and software vendors to make your life easier and more productive with Flash technologies.

    Think of PCIe as the physical interface and NVMe as the protocol for managing NVM devices using that interface.

    Not Just PCIe

    Just like with SCSI, an interest has emerged in moving the storage outside of the host chassis itself. It’s possible to do this with PCIe, but it requires extending the PCIe hardware interface outside as well, and as a result there has been some interest in using NVMe with other interface and storage networking technologies.

    Work is progressing on not just point-to-point communication (PCIe, RDMA-friendly technologies such as RoCE and iWARP), but also Fabrics as well (InfiniBand-, Ethernet- and Fibre Channel-based, including FCoE).

    By expanding these capabilities, it will be possible to attach hundreds, maybe thousands, of SSDs in a network – far more than can be accommodated via PCIe-based systems. There are some interesting aspects for NVMe using Fabrics that are best left for another blog post, but it was worth mentioning here as an interesting tidbit.

    Bottom Line

    NVM Express has been in development since 2007, believe it or not, and has just released version 1.2 as of this writing. Working prototypes of NVMe using PCIe- and Ethernet-based connectivity have been demonstrated for several years. It’s probably the most-developed standards work that few people have ever heard about!

    Want to learn more? I encourage you to listen/watch the SNIA ESF Webcast “The Performance Impact of NVMe and NVMe over Fabrics” which goes into much more technical detail, sponsored by the NVMe Promoter’s Board and starring some of the brainiacs behind the technical work. Oh, and I’m there too (as the moderator, of course).

    Keep an eye open for more information about NVMe in the coming months. The progress made is remarkably rapid, and more companies are joining each year. It will be extremely interesting to see just how creative the data center architectures can get in the coming years, as Flash technology comes to realize its full potential.

     

     


    Data Storage & the Software Defined Data Center

    November 3rd, 2014

    The overall success and general acceptance of server virtualization as a way to make servers more fully utilized and agile have encouraged IT managers to pursue virtualization of the other two major data center functions, storage and networking.  And most recently, the addition of external resources such as the public Cloud to what the IT organization must manage has encouraged taking these trends a step further.  Many in the industry believe that the full software definition of all IT infrastructure – that is, a software defined data center (SDDC) – should be the end goal, making all resources capable of fast adaptation to business needs and enabling the holy grail of open-API-based, application-aware, single-pane-of-glass management.

    So as data storage professionals we must ask:  is storage changing in ways that will make it ready to be a full participant in the software defined data center? And what new storage techniques are now being proven to offer the agility, scalability and cost effectiveness that are sought by those embracing the SDDC?

    These questions can best be answered by examining the current state of software defined storage (SDS) and how it is being integrated with other aspects of the SDDC. SDS does for storage what virtualization did for servers—breaking down the physical barriers that bind data to specific hardware. Using SDS, storage repositories can now be made up of high volume, industry standard hardware, where “white boxes,” typically in the form of multiple CPU, Open Compute Project servers with a number of solid-state and/or spinning disks, perform storage tasks that formerly required specialized disk controller hardware.  This is similar to what is beginning to happen to network switches under software defined networking (SDN).

    And in another parallel to the SDN world, the software used in SDS is coming both from open source communities such as the OpenStack Swift design for object storage and from traditional storage vendors such as EMC’s ViPR and NetApp’s clustered Data ONTAP, or from hypervisor vendors such as VMware and Microsoft.

    Making industry standard hardware handle the performance and high availability requirements of enterprise storage is being done by applying clustering technologies, both local and geographically distributed, to storage – again with object storage in the lead, but new techniques are also making this possible for more traditional file systems.  And combining geographically distributed storage clusters with snapshots may well eliminate the need for traditional types of data protection in the form of backup and disaster recovery.

    Integrating storage devices, SDS or traditional, into the rest of the data center requires protocols that facilitate either direct connections of storage to application servers, or networked connections.  And as storage clustering gains traction, networking is the logical choice, with high speed Ethernet, such as 10 Gigabit per second (10 GbE) and 40 GbE increasingly dominant and the new 25 GbE coming along as well.  Given this convergence – the use of the same basic networking standards for all networking requirements, SAN or NAS, LANs, and WANs – storage will integrate quite readily over time into the increasingly accepted SDN technologies that are enabling networking to become a full participant in the virtualization and cloud era.  One trend that will bring SDS and SDN together is going to be the increasing popularity of private and hybrid clouds, since development of a private cloud, when done right, gives an IT organization pretty close to a “clean slate” on which to build new infrastructure and new management techniques — an opportune time to begin testing and using SDS.

    Industry trends in servers, storage and networking, then, are heading in the right direction to make possible comprehensive, policy-driven management of the software defined data center.  However, despite the strong desire by IT managers and their C-level bosses for more agile and manageable data centers, a lot of money is still being spent just to maintain existing storage infrastructure, such as Fibre Channel.  So any organization that has set its sights on embracing the SDDC should start NOW to steadily convert its storage infrastructure to the kinds of devices and connectivity that are being proven in cloud environments – both by public cloud providers and by organizations that are taking a clean-slate approach to developing private and hybrid clouds.

     


    Cloud File Services: SMB/CIFS and NFS…in the Cloud – Q&A

    November 3rd, 2014


    At our recent live ESF Webcast, “Cloud File Services: SMB/CIFS and NFS…in the Cloud” we talked about evaporating your existing file server into the cloud. Over 300 people have viewed the Webcast. If you missed it, it’s now available on-demand. It was an interactive session with a lot of great questions from attendees. We did not have time to address them all – so here is a complete Q&A from the Webcast. If you think of additional questions, please feel free to comment on this blog.

    Q. Can your Storage OS take advantage of born-in-the-cloud File Storage like Zadara Storage at AWS and Azure?

    A. The concept presented is generic in nature.  Whichever storage OS the customer chooses to use in the cloud will have its own requirements on the underlying storage beneath it.  Most Storage OS’s used for Cloud File Services will likely use block or object backends rather than a file backend.

    Q. Regarding Cloud File Services for “client file services”: traditional file services require the client and server to be connected and on the same network, and they are tied to identities available in that network. How can the SMB/NFS protocols be used to serve data from the cloud to clients that could be coming from different networks (4G/corporate)? Isn’t REST the appropriate interface for that model?

    A. The answer depends on the use case.  There are numerous examples of SMB over the WAN, for example, so it’s not far fetched to imagine someone using Cloud File Services as an alternative to a “Sync & Share” solution for client file services.  REST (or similar) may be appropriate for some, while file-based protocols will work better for others.  Cloud File Services provides the capability to extend into this environment where it couldn’t before.

    Q. Is Manila like VMware VSAN or VASA?

    A. Please take a look at the Manila project on OpenStack’s website https://wiki.openstack.org/wiki/Manila

    Q. How do you take care of data security while moving data from on-premises to cloud (Storage OS)?

    A. The answer depends on the Storage OS you are using for your Cloud File Services platform.  If your Storage OS supports encryption, for example, in its storage-to-storage in-flight data transport, then data security in-flight would be taken care of.  There are many facets to security that need to be thought through, including security at rest, some of which may depend on the environment (private/on-premises, service provider, hyperscalar) the Storage OS is sitting in.

    Q. How do you get the data out of the cloud?  I think that’s been a traditional concern with cloud storage.

    A. That’s the beauty of Cloud File Services!  With data movement and migration provided at the storage-level by the same Storage OS across all locations, you can simply move the data between on-premises and off-premises and expect similar behavior on both ends.  If you choose to put data into a native environment specific to a hyperscalar or service provider, you run the risk of lock-in.

    Q. 1. How does one address the issue of “chatty” applications over the cloud?  2. File services have “poor” performance for small files; how does one address that issue? Block & Object do address that issue.  3. Why not expose SMB, NFS, and Object interfaces on the Compute node?

    A. 1. We should take this opportunity to make the applications less chatty! :)  One possible solution here is to operate the application and Storage OS in the same environment, in much the same way you would have on-premises.  If you choose a hyperscalar or service provider, for chatty use cases, it may be best to keep the application and storage pieces “closer” together.

    2. Newer file protocols are getting much better at this.  SMB 3.02 for instance, was optimized for 8K transactions.  With a modern Storage OS, you will be able to take advantage of new developments.

    3. That is precisely the idea: the Storage OS operating in the “compute nodes,” serving out their interfaces, while taking advantage of different backend offerings for cost and scalability.

    Q. Most storage arrays (NetApp, EMC, etc.) can provide 5 9’s of resilience; Cloud VMs typically offer 3 9’s.  How do you get to 5 9’s with CFS?

    A. Cloud File Services (CFS) as a platform can span across all of your environments, and as such, the availability guarantees will depend upon each environment in which CFS is operating.

    Q. Why are we “adding” another layer? Why can we just use powerful “NAS” devices that can have different media like NVMe, Flash SSD or HDDs?

    A. Traditional applications may not want to change, but this architecture should suit those well.  It’s worth examining that “cloud-ready” model.  Is the goal to be “cloud-ready,” or is the goal to support the scaling, failover, and on-demand-ness that the cloud has the ability to provide?  Shared nothing is a popular way of accomplishing some of this, but it may not be the only way.

    The existing interfaces provided by hyperscalars do provide abstraction, but if you are building an application, you run a strong risk of lock-in on any particular abstraction.  What is your exit strategy then?  How do you move your data (and applications) out?

    By leveraging a common Storage OS across your entire infrastructure (on-premises, service providers, and hyperscalars), you have a very simple exit strategy, and your exit and mobility strategy become very similar, if not the same, with the ability to scale or move across any environment you choose.

    Q. How do you virtualize storage OS? What happens to native storage OS hardware/storage?

    A. A Storage OS can be virtualized similar to a PC or traditional server OS.  Some pieces may have to be switched or removed, but it is still an operating system.

    Q. Why is your Storage displayed as part of your Compute layer?

    A. In the hyperscalar model, the Storage OS is sitting in the compute layer because it is, in effect, running as a virtual machine the same as any other.  It can then take advantage of different tiers of storage offered to it.

    Q. My concern is that it would be slower as a VM than a storage controller.  There’s really no guarantee of storage performance in the cloud in fact most hyperscalers won’t give me a good SLA without boatloads of money.  How might you respond to this?

    A. Of course with on-premises infrastructure, a company or service provider will have more of a guarantee in the sense that they control the hardware behind it.  However, as we’ve seen, SLA’s continue to improve over time, and costs continue to come down for the Public Cloud.

    Q. Does FreeNAS qualify as a Storage OS?

    A. I recommend checking with their team.

    Q. Isn’t this similar to Hybrid cloud?

    A. Cloud File Services (CFS) is one way of looking at Hybrid Cloud.  Savvy readers and listeners will pick up that having the same Storage OS everywhere doesn’t necessarily limit you to only File Services. iSCSI or RESTful interfaces could work exactly the same.

    Q. What do you mean by Storage OS? Can you give some examples?

    A. As I work for NetApp, one example is Data ONTAP.  EMC has several as well, such as one for the VNX platform.  Most major storage vendors will have their own OS.

    Q. I think one of the key questions is data access latency over the WAN: how I can move my data to the cloud, and how I can move it back when needed – for example, when the service is terminated?

    A. Latency is a common concern, and connectivity is always important.  Moving your data into and out of the cloud is the beauty of the Cloud File Services platform, as I mentioned in other answers.  If one of your environments goes down (for example, your on-premises datacenter) then you would feasibly be able to shift your workloads over to one of your other environments, similar to a DR situation.  That is one example of where storage replication and application awareness across sites is important.

    Q. Running applications like Oracle, Exchange through SMB/NFS (NAS), don’t you think it will be slow compared to FC (block storage)?

    A. Oracle has had great success running over NFS, and it is extremely popular.  While Exchange doesn’t currently support running directly over SMB, it’s not ludicrous to think that it may happen at some point in the future, in the same way that SQL has.

    Q. What about REST and S3 API or are they just for object storage?  What about CINDER?

    A. The focus of this presentation was only File Services, but as I mentioned in another answer, if your Storage OS supports these services (like REST or S3), it’s feasible to imagine that you could span them in the same way that we discussed CFS.

    Q. Why are SAN-based applications moving to NAS?

    A. This was discussed in one of the early slides of the presentation (slide 10, I believe).  Data mobility and granular management are the main drivers: it’s easier to move, delete, and otherwise manage files than LUNs, an admin can operate at a more granular level, and there are no HBAs to maintain.  File protocols are generally considered “easier” to use.



    Object Storage 201 Q&A

    October 29th, 2014

    Now available on-demand, our recent live CSI Webcast, “Object Storage 201: Understanding Architectural Trade-Offs,” was a highly-rated event that almost 250 people have seen to date. We did not have time to address all of the questions, so here are answers to them. If you think of additional questions, please feel free to comment on this blog.

    Q. In terms of load balancers, would you recommend a software approach using HAProxy on Linux or a hardware approach with proprietary appliances like F5 and NetScaler?

    A. This really depends on your use case. If you need HA load balancers, or load balancers that can maintain sessions to particular nodes for performance, then you probably need commercial versions. If you just need a basic load balancer, using a software approach is good enough.

    Q. With billions of objects what Erasure Codes are more applicable in the long term? Reed Solomon where code words are very small resulting in many billions of code words or Fountain type codes such as LDPC where one can utilize long code words to manage billions of objects more efficiently?

    A. Tracking erasure code fragments has a higher cost than replication, but the tradeoff is higher HDD utilization.  Using rateless coding lowers this overhead because each fragment has equal value.  Reed-Solomon requires knowledge of fragment placement for repair.
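To make that tradeoff concrete, here is a back-of-the-envelope sketch (the parameters are illustrative, not drawn from any particular product) comparing the raw-to-usable storage overhead of n-way replication against a k+m erasure code such as Reed-Solomon:

```python
def replication_overhead(copies: int) -> float:
    """Raw bytes stored per usable byte with n-way replication."""
    return float(copies)

def erasure_overhead(k: int, m: int) -> float:
    """Raw bytes stored per usable byte for a k-data + m-parity erasure code."""
    return (k + m) / k

# 3-way replication: 3.0x raw storage per usable byte
print(replication_overhead(3))   # 3.0

# Reed-Solomon 10+4: only 1.4x raw storage, but a repair must read
# k surviving fragments, and fragment placement must be tracked.
print(erasure_overhead(10, 4))   # 1.4
```

The arithmetic shows why erasure coding wins on HDD utilization at scale, while replication keeps repair and tracking simple.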

    Q. What is the impact of having HDDs of varying capacity within the object store?  Does that affect hashing algorithms in any way?

    A. The smallest logical storage unit is a volume.  Because scale-out does not stripe volumes, there is no impact.  Hashing, being used for location, is not aware of volume size, so a separate database is used, on a per-volume basis, to track open space.  Hashing algorithms can be modified to suit the underlying disk.  The problem is not so much whether they can be designed a priori for the underlying system, but rather the rigidity they introduce by tying placement very tightly to topology.  That makes failure and exception handling hard.

    Q. Do you think RAID6 is sufficient protection with these types of Object Storage Systems or do we need higher parity based Erasure codes?

    A. RAID6 makes sense for a direct-attached storage solution where all drives in the RAID set can maintain sync.  Unlike filesystems (with a few exceptions), scale-out object storage systems are “storage as a workload” systems that already have protection as part of the system.  So the question is what data protection method is used in solution x as opposed to solution y.  You must also think about what you are trying to do.  Are you trying to protect against a single disk failure, a node failure, or a site failure?  For disk failures, RAID is great, but not if you’re trying to survive node or site failures.  Site failure is an EC sweet spot, but hard to solve from a deployment perspective.

    Q. Is it possible to brief how this hash function decides the correct data placement order among the available storage nodes?

    A. Take a look at the following links: http://en.wikipedia.org/wiki/Consistent_hashing and https://swiftstack.com/openstack-swift/architecture/
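To illustrate the consistent hashing idea covered in those links, here is a minimal, self-contained sketch (the node names and virtual-node count are hypothetical): each object key is hashed onto a ring of virtual nodes, and the first virtual node at or after the key’s position owns the object.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    """Map a string to a point on the hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Minimal consistent-hash ring; virtual nodes smooth the key
    distribution and can be weighted to reflect node capacity."""
    def __init__(self, nodes, vnodes=100):
        self._ring = sorted((_hash(f"{n}#{i}"), n)
                            for n in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def node_for(self, key: str) -> str:
        """Return the node owning this key: first vnode clockwise of its hash."""
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("bucket/object-0001"))
```

When a node is added or removed, only the keys whose nearest vnode changed move, which is exactly the property that makes this scheme attractive for object placement.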

    Q. What do you consider to be a typical ratio of controller to storage nodes? Is it better to separate the two, or does it make sense to consolidate where a node is both controller and storage?

    A. The flexibility of Scale-Out Object Storage makes these two components independently scalable. The systems we test all have separate controllers and storage nodes so we can test this independence. This is also very dependent on the Object Store technology you use. We know of some object stores where there is a 1GB RAM / TB of data, while there are others that use 1/10 of that.  The compute is dependent on whether you are using erasure coding, and what codes. There is no one answer.
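As a quick illustration of how much those ratios matter when sizing the controller tier (the 2 PB cluster size is hypothetical; the 1 GB/TB and 1/10 ratios come from the answer above):

```python
def controller_ram_gb(capacity_tb: float, ram_gb_per_tb: float) -> float:
    """Total RAM needed across the controller tier for a given metadata ratio."""
    return capacity_tb * ram_gb_per_tb

# Object store needing 1 GB of RAM per TB of data, sized for 2 PB (2000 TB)
print(controller_ram_gb(2000, 1.0))   # 2000.0 GB
# A leaner object store at 1/10 of that ratio
print(controller_ram_gb(2000, 0.1))   # ~200 GB
```

A 10x difference in the RAM-per-TB ratio translates directly into a 10x difference in controller hardware, which is why there is no one answer.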

    Q. Is the data stored in the Storage depository interchangeable with other vendor’s controller units? For instance, can we load LTO tapes from vendor A’s library to Vendor B’s library and have full access to data?

    A. The data stored in these systems is part of the “storage as a workload” principle, so the system metadata used to track objects is stored as a function within the controller.  I would not expect any content stored to be interchangeable with another system architecture.

    Q. Would you consider the Seagate Kinetic Open Storage Platform a radical architectural shift in how object storage can be done?  Kinetic basically eliminates the storage server, POSIX and RAID or all of the “busy work” that storage servers are involved in today.

    A. Ethernet drives with a key-value interface provide a new approach to designing object storage solutions.  It remains to be seen how compelling they are in terms of TCO and infrastructure availability.

    Q. Will the inherent reduction in blast radius by the move towards Ethernet-interface HDDs be a major driver of the Ethernet HDD in object stores?

    A. Yes.  We define blast radius as the set of hard drives whose access is lost when a single compute element fails.  As we lower the number of hard drives connected to each compute element, the blast radius is reduced.  For Ethernet drives, you may need redundant Ethernet switches to minimize the blast radius.  Blast radius can also be minimized with intelligent data placement in software.
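A simple model of that effect (the drive counts below are hypothetical):

```python
def blast_radius(drives_per_compute: int, total_drives: int) -> float:
    """Fraction of the cluster's drives made unreachable by one compute failure."""
    return drives_per_compute / total_drives

# Traditional node: 60 drives behind one server in a 600-drive cluster
print(blast_radius(60, 600))   # 0.1 -> 10% of capacity offline

# Ethernet drives: each drive has its own network interface, so a
# single compute failure strands no drives behind it.
print(blast_radius(1, 600))    # ~0.0017
```

Shrinking the number of drives per compute element from 60 to 1 shrinks the blast radius by the same factor, which is the driver the question refers to.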


    Join SNIA-CSI at the OpenStack Summit

    October 21st, 2014

    Get the tips needed when implementing multiple cloud storage APIs. The SNIA Cloud Storage Initiative (CSI) is hosting a Birds of a Feather session – Tips to Implementing Multiple Cloud Storage APIs at the OpenStack Summit in Paris on November 5th at 9:00 a.m. Room 212/213.

    There are three main object storage APIs today: OpenStack’s Swift (open but not standardized), Amazon’s S3 (proprietary yet a de facto standard) and SNIA’s CDMI (an ISO standard). With three APIs to support, it might sound expensive or difficult to support all of them, yet not doing so could be costly when customers want innovation, industry-standard solutions, and interoperability in your product.

    What about the similarities and differences between the APIs, and can they be reconciled? Can these APIs be effectively and efficiently implemented in a single product? I hope you’ll join us at this session to learn about and discuss various ways to cope with this situation. You will discover best practices and tips on how to implement these three protocols in your cloud storage solution.

    Register now. I look forward to seeing you on November 5th at the OpenStack Summit.



    New Website for SSSI Highlights Key SSD Technology Activities

    October 21st, 2014

    A new, more easily navigable website is now online for the SNIA Solid State Storage Initiative (SSSI) technology community.

    Divided into the activities the SSSI focuses on – Performance, NVDIMM, Non-Volatile Memory Programming, and PCIe SSDs, and with two new tabs linking directly to “News” and “Resources”, the new format gives readers quick access to webcasts, white papers, articles, presentations, and technical materials critically needed in the rapidly changing world of Solid State Storage technology.

    The right navigation bar also highlights SSSI member companies and provides direct links to the SSSI blog you are reading now, the SSSI Twitter feed, and the SSSI LinkedIn Group SSDs – What’s Important to You?

    Check it out, and let us know what you think at asksssi@snia.org or on our social media links!


    New Webcast: Object Storage – Understanding Architectural Trade-Offs

    September 30th, 2014

    The Cloud Storage Initiative (CSI) is excited to announce a live Webcast as part of the upcoming BrightTalk Cloud Storage Summit on October 16th: Object Storage 201: Understanding Architectural Trade-Offs. It’s a follow-up to the SNIA Ethernet Storage Forum’s Object Storage 101: Understanding the What, How and Why behind Object Storage Technologies.

    Object-based storage systems are fast becoming one of the key building blocks for a cloud storage infrastructure. They address some of the shortcomings and provide an alternative to more traditional file- and block-based storage for unstructured data.

    An object storage system must accommodate growth (and yes, the rumors are true – data growth is a huge and accelerating problem), be flexible in its provisioning, support multiple geographies and legal frameworks, and cope with the inevitable issues of resilience, performance and availability.

    Register now for this Webcast. Experts from the SNIA Cloud Storage Initiative will discuss:

    • Object Storage Architectural Considerations
    • Replication and Erasure Encoding for resilience
    • Pros and Cons of Hash Tables and Key-Value Databases
    • And more…

    This is a live presentation, so please bring your questions and we’ll do our very best to answer them. We hope you’ll join us on October 16th for an unbiased, deep dive into the design considerations for object storage systems.



    New Webcast: Cloud File Services: SMB/CIFS and NFS…in the Cloud

    September 18th, 2014

    Imagine evaporating your existing file server into the cloud with the same experience and level of control that you currently have on-premises. On October 1st, ESF will host a live Webcast that introduces the concept of Cloud File Services and discusses the pros and cons you need to consider.

    There are numerous companies and startups offering “sync & share” solutions across multiple devices and locations, but for the enterprise, caveats are everywhere. Register now for this Webcast to learn:

    • Key drivers for file storage
    • Split administration with sync & share solutions and on-premises file services
    • Applications over File Services on-premises (SMB 3, NFS 4.1)
    • Moving to the cloud: your storage OS in a hyperscalar or service provider
    • Accommodating existing File Services workloads with Cloud File Services
    • Accommodating cloud-hosted applications over Cloud File Services

    This Webcast will be a vendor-neutral and informative discussion on this hot topic. And since it’s live, we encourage you to bring your questions for our experts. I hope you’ll register today, and we look forward to having you attend on October 1st.