Now available on-demand, our recent live CSI Webcast, “Object Storage 201: Understanding Architectural Trade-Offs,” was a highly rated event that almost 250 people have seen to date. We did not have time to address all of the questions during the live event, so here are answers to them. If you think of additional questions, please feel free to comment on this blog.
Q. In terms of load balancers, would you recommend a software approach using HAProxy on Linux or a hardware approach with proprietary appliances like F5 and NetScaler?
A. This really depends on your use case. If you need HA load balancers, or load balancers that can maintain sessions to particular nodes for performance, then you probably need commercial versions. If you just need basic load balancing, a software approach is good enough.
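As a minimal sketch of what a basic software load balancer does, here is round-robin distribution in Python; the node addresses are hypothetical, and a real deployment would of course use HAProxy or an appliance rather than application code:

```python
from itertools import cycle

# Hypothetical object storage nodes sitting behind the load balancer.
nodes = [
    "http://storage-node-1:8080",
    "http://storage-node-2:8080",
    "http://storage-node-3:8080",
]

# Round-robin: each request goes to the next node in turn. This is
# essentially what a basic software load balancer does, minus the
# health checks and session persistence the commercial products add.
next_node = cycle(nodes)

def route_request() -> str:
    """Return the node that should handle the next request."""
    return next(next_node)

if __name__ == "__main__":
    for key in ["obj-a", "obj-b", "obj-c", "obj-d"]:
        print(key, "->", route_request())
```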
Q. With billions of objects, which erasure codes are more applicable in the long term? Reed-Solomon, where code words are very small, resulting in many billions of code words, or Fountain-type codes such as LDPC, where one can utilize long code words to manage billions of objects more efficiently?
A. Tracking erasure code fragments has a higher cost than replication, but the tradeoff is higher HDD utilization. Using rateless coding lowers this overhead because each fragment has equal value, whereas Reed-Solomon requires knowledge of fragment placement for repair.
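To make the tracking-cost tradeoff concrete, here is a back-of-the-envelope comparison in Python; the object count and code parameters are illustrative assumptions, not figures from the webcast:

```python
# Back-of-the-envelope: fragment-tracking cost vs. raw storage overhead.
# All parameters below are illustrative assumptions.

objects = 1_000_000_000          # one billion objects

# 3x replication: 3 whole copies per object, 200% storage overhead.
replica_copies = 3
replica_fragments = objects * replica_copies

# Reed-Solomon RS(10, 4): 10 data + 4 parity fragments per object,
# only 40% storage overhead, but 14 placed fragments to track each.
rs_data, rs_parity = 10, 4
rs_fragments = objects * (rs_data + rs_parity)

print(f"Replication: {replica_fragments:,} copies to track, "
      f"{(replica_copies - 1) * 100:.0f}% overhead")
print(f"RS(10,4):    {rs_fragments:,} fragments to track, "
      f"{rs_parity / rs_data * 100:.0f}% overhead")
```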
Q. What is the impact of having HDDs of varying capacity within the object store? Does that affect hashing algorithms in any way?
A. The smallest logical storage unit is a volume. Because Scale-Out Object Storage does not stripe volumes, there is no impact. Hashing, which is used for location, is not aware of volume size, so a separate database is used, on a per-volume basis, to track free space. Hashing algorithms can be modified to suit the underlying disk. The problem is not so much whether they can be designed a priori for the underlying system, but the rigidity they introduce by tying placement very tightly to topology. That makes failure/exception handling hard.
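Here is a rough sketch of that two-step placement: hashing picks the node with no knowledge of volume sizes, and a per-node free-space table (standing in for the separate database) picks the volume. The nodes, volumes, and capacities are illustrative:

```python
import hashlib

# Free space in GB per volume, per node; note the volumes vary in size.
# This table is a stand-in for the per-volume database the answer mentions.
free_space = {
    "node-1": {"vol-a": 4000, "vol-b": 800},
    "node-2": {"vol-a": 6000, "vol-b": 6000},
    "node-3": {"vol-a": 2000},
}

def place(object_key: str, size_gb: int) -> tuple[str, str]:
    nodes = sorted(free_space)
    # Step 1: the hash decides the node, blind to volume sizes.
    digest = int(hashlib.md5(object_key.encode()).hexdigest(), 16)
    node = nodes[digest % len(nodes)]
    # Step 2: the free-space table picks the emptiest volume on that node.
    vol = max(free_space[node], key=free_space[node].get)
    free_space[node][vol] -= size_gb
    return node, vol

print(place("photo-123.jpg", 1))
```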
Q. Do you think RAID6 is sufficient protection with these types of Object Storage Systems or do we need higher parity based Erasure codes?
A. RAID6 makes sense for a direct-attached storage solution where all drives in the RAID set can maintain sync. Unlike filesystems (with a few exceptions), Scale-Out Object Storage systems are “Storage as a workload” systems that already have protection as part of the system. So the question is what data protection method is used on solution X as opposed to solution Y. You must also think about what you are trying to do: protect against a single disk failure, a node failure, or a site failure? For disk failures, RAID is great, but it does not help with node or site failures. Site failure is an EC sweet spot, but hard to solve from a deployment perspective.
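A small sketch of the “what are you protecting against” question: an RS(k, m) code survives any m lost fragments, so a failure domain is only safe if it holds at most m fragments. The layout below is an assumed example:

```python
# How many fragments does one failure take out? RS(k, m) survives
# any m lost fragments, so a failure domain is safe only if it
# holds at most m fragments. The 14-fragment layout is assumed.

k, m = 10, 4   # RS(10, 4): tolerates losing any 4 of 14 fragments

fragments_per_disk = 1    # one fragment per disk
fragments_per_node = 2    # 7 nodes x 2 fragments each
fragments_per_site = 7    # 2 sites x 7 fragments each

for domain, lost in [("disk", fragments_per_disk),
                     ("node", fragments_per_node),
                     ("site", fragments_per_site)]:
    ok = lost <= m
    print(f"single {domain} failure loses {lost} fragments -> "
          f"{'survivable' if ok else 'DATA LOSS'}")
```

With only two sites, a site failure takes out seven fragments, more than RS(10,4) can repair, which is exactly why site-level protection is hard from a deployment perspective.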
Q. Can you briefly explain how the hash function decides the correct data placement among the available storage nodes?
A. Take a look at the following links: http://en.wikipedia.org/wiki/Consistent_hashing and https://swiftstack.com/openstack-swift/architecture/
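In the spirit of those references, here is a minimal consistent-hashing ring in Python; it is a sketch only, as real systems such as OpenStack Swift add virtual nodes and replica placement on top of this idea:

```python
import bisect
import hashlib

def h(key: str) -> int:
    """Map a key onto the hash ring (a 128-bit integer space)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hashing ring: nodes sit at hashed positions,
    and an object belongs to the first node clockwise from its hash."""

    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)
        self.keys = [pos for pos, _ in self.ring]

    def node_for(self, object_key: str) -> str:
        # Walk clockwise to the first node at or after the key's hash,
        # wrapping around to the start of the ring if necessary.
        i = bisect.bisect(self.keys, h(object_key)) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["node-1", "node-2", "node-3"])
print(ring.node_for("bucket/photo-123.jpg"))
```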
Q. What do you consider to be a typical ratio of controller to storage nodes? Is it better to separate the two, or does it make sense to consolidate where a node is both controller and storage?
A. The flexibility of Scale-Out Object Storage makes these two components independently scalable. The systems we test all have separate controllers and storage nodes so we can test this independence. This is also very dependent on the object store technology you use. We know of some object stores that require 1 GB of RAM per TB of data, while others use a tenth of that. The compute requirement depends on whether you are using erasure coding, and on which codes. There is no one answer.
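As a quick illustration of how much that ratio matters, here is the arithmetic for a hypothetical 1 PB cluster at the two RAM ratios mentioned above; the 256 GB controller size is an assumption:

```python
# RAM needed to front a hypothetical 1 PB object store at the two
# controller memory ratios mentioned above.
capacity_tb = 1000                       # 1 PB of data

for gb_ram_per_tb in (1.0, 0.1):         # 1 GB/TB vs. a tenth of that
    total_ram_gb = capacity_tb * gb_ram_per_tb
    # Assuming hypothetical controller nodes with 256 GB of RAM each:
    controllers = -(-total_ram_gb // 256)    # ceiling division
    print(f"{gb_ram_per_tb} GB RAM/TB -> {total_ram_gb:.0f} GB total RAM "
          f"(~{controllers:.0f} controllers at 256 GB each)")
```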
Q. Is the data stored in the storage repository interchangeable with other vendors’ controller units? For instance, can we load LTO tapes from vendor A’s library into vendor B’s library and have full access to the data?
A. The data stored in these systems is part of the “Storage as a workload” principle, so the system metadata used to track objects is stored as a function within the controller. I would not expect any content stored to be interchangeable with another system architecture.
Q. Would you consider the Seagate Kinetic Open Storage Platform a radical architectural shift in how object storage can be done? Kinetic basically eliminates the storage server, POSIX, and RAID, in other words all of the “busy work” that storage servers are involved in today.
A. Ethernet drives with a key-value interface provide a new approach to designing object storage solutions. It remains to be seen how compelling they are in terms of TCO and infrastructure availability.
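To show what a key-value interface means in practice, here is a hypothetical sketch of the style of API such drives expose; the class and method names are illustrative, not the actual Kinetic client library:

```python
# Hypothetical sketch of a key-value drive interface: the application
# talks to the drive directly over the network with put/get/delete on
# keys, with no filesystem, POSIX layer, or RAID controller in between.
# Names here are illustrative, not the actual Kinetic client API.

class KeyValueDrive:
    def __init__(self, address: str):
        self.address = address      # the drive's own Ethernet address
        self._store = {}            # in-memory stand-in for the media

    def put(self, key: bytes, value: bytes) -> None:
        self._store[key] = value

    def get(self, key: bytes) -> bytes:
        return self._store[key]

    def delete(self, key: bytes) -> None:
        del self._store[key]

drive = KeyValueDrive("192.168.1.50:8123")
drive.put(b"bucket/obj-1", b"object bytes go here")
print(drive.get(b"bucket/obj-1"))
```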
Q. Will the inherent reduction in blast radius from the move toward Ethernet-interface HDDs be a major driver of Ethernet HDD adoption in object stores?
A. Yes. We define blast radius as the set of hard drives made inaccessible by a single compute failure. As we lower the number of hard drives connected to each compute node, the blast radius is reduced. For Ethernet drives, you may need redundant Ethernet switches to minimize the blast radius. Blast radius can also be minimized through intelligent data placement in software.
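A rough way to quantify this, with cluster sizes assumed purely for illustration:

```python
# Fraction of a cluster's drives made unreachable by one compute
# failure, under two assumed attachment models.
total_drives = 600

# Traditional storage server: one compute node fronts 60 drives.
drives_per_server = 60
server_blast = drives_per_server / total_drives

# Ethernet drives: each drive has its own interface, so a single
# compute failure takes out only that one drive (switch failures
# are a separate concern, hence the redundant-switch advice above).
ethernet_blast = 1 / total_drives

print(f"storage server model: {server_blast:.1%} of drives lost")
print(f"ethernet drive model: {ethernet_blast:.2%} of drives lost")
```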