
    SMB3 – These Questions Rock!

    April 24th, 2017

    Earlier this month, the SNIA Ethernet Storage Forum hosted a live webcast on Server Message Block (SMB), “Rockin’ and Rollin’ with SMB3.” Presenting was Ned Pyle, Microsoft SMB Program Manager. If you missed the live event, I encourage you to watch it on-demand. We had a lot of questions from the big audience this event drew, so as promised, here are answers to them all.

    Q. Other than that audit setup, is there a way to determine, via the OS, which SMB version is in use?

A. No. Network captures will tell you, but Windows doesn’t track this explicitly, apart from the SMB1 auditing we added specifically to help identify whether SMB1 can safely be removed.
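If you just want to spot-check what a particular client has negotiated right now (as opposed to auditing which versions are in use across an estate, which is what the answer above addresses), one option is to query the client’s live SMB sessions. Here is a minimal sketch, my own illustration rather than anything shown in the webcast, that shells out from Python to PowerShell’s built-in Get-SmbConnection cmdlet; it assumes a Windows host and only reports currently established connections:

# Hedged sketch: list the SMB dialect negotiated by each live connection on a Windows
# host by calling the built-in Get-SmbConnection cmdlet via PowerShell.
import json
import subprocess

ps_command = (
    "Get-SmbConnection | "
    "Select-Object ServerName, ShareName, Dialect | "
    "ConvertTo-Json"
)
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_command],
    capture_output=True, text=True, check=True,
)
connections = json.loads(result.stdout) if result.stdout.strip() else []
if isinstance(connections, dict):   # ConvertTo-Json emits a bare object for a single item
    connections = [connections]
for c in connections:
    print(f"{c['ServerName']}\\{c['ShareName']}: dialect {c['Dialect']}")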

    Q. Old Linux + NetApp 7-Mode + 2003 Server = Stuck with SMB1.0?

    A. You have to ask NetApp. 🙂

    Q. SMB 3.1.1 over Ethernet… can you discuss/compare with SMB 3.1.1 over Infiniband?

A. If the question is ‘what’s better, Infiniband or Ethernet’, my answer is always: it depends. I really don’t want to get into a competitive comparison under the auspices of SNIA. I simply recommend looking at the vendor stories and making an informed decision. Overall, Ethernet-based configurations like RoCE and iWARP are generally less expensive than Infiniband ones. They all have tremendous performance. They all have their various ups and downs.

    Q. Do you have statistics regarding SMB-Direct adoption?

    A. It’s tricky, as our telemetry for Server usage is quite inaccurate due to firewall rules preventing servers from reaching the Internet. I can say indirectly that we know of thousands of customer deployments.

    Q. What’s the name of the IO application?

A. DiskSpd.

    Q. I don’t believe your I/O data tests, wouldn’t you need to trunk 17 10 Gigabit Network Cards to achieve 168 gigabit I/O capability?

A. This was a misunderstanding; the NICs were 100Gb, not 10Gb. We used 100Gb RDMA NICs with RoCEv2 in this demo. At that point the bottleneck was the storage; the network had plenty of bandwidth left over.

    Q. These are great, but how many of these new features will end up locking out FOSS/GPL implementations of SMB such as SAMBA?

A. Absolutely not! We work with the Samba team and the Linux community to ensure that SMB can be broadly deployed with all of its capabilities inside open source software.

    Q. NetApp supports CA shares (which uses transparent failover) in two use cases: SQL over SMB and Hyper-V over SMB3.

A. This sounds like someone from NetApp stating a fact, so I will simply say “good!” 🙂

    Q. Can you please post links to the tools mentioned in this presentation, and I/O tests? Is there a comparison using I/O Meter?

    A. Here you go:

    • https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223
    • https://github.com/Microsoft/diskspd
    • https://github.com/Microsoft/diskspd/tree/master/Frameworks/VMFleet

    Q. You are forced to use SMB1 because of the Windows 2003 issue?

    A. Windows Server 2003 and XP (and older, like Win2000) all use SMB1. If they are still around, you will need to leave SMB1 enabled on any machines talking to them.

    Q. When will Microsoft officially drop support for SMB1?

A. Overall, for the protocol, there is no timeline. It is deprecated, however, so no further work will be done on SMB1 other than critical security patches. SMB1 will start being removed *by default* in a coming release of Windows Server and the Windows 10 client. This doesn’t mean totally removed forever, but instead “missing by default”, where you must directly opt in to adding it back. It will be done on a per-SKU basis; enterprises are likely to see it first, since they are better equipped to understand the change and less likely to need SMB1.

Q. Is there a way to change block size in SMB3?

    A. In SMB2_READ processing section 3.3.5.12 (https://msdn.microsoft.com/en-us/library/cc246729.aspx):

    The server SHOULD<296> fail the request with STATUS_INVALID_PARAMETER if the Length field is greater than Connection.MaxReadSize.

    If Connection.SupportsMultiCredit is TRUE the server MUST validate CreditCharge based on Length, as specified in section 3.3.5.2.5. If the validation fails, it MUST fail the read request with STATUS_INVALID_PARAMETER.

    There is similar text for SMB2_WRITE in 3.3.5.13 (https://msdn.microsoft.com/en-us/library/cc246730.aspx).

    Then, off to SMB2_NEGOTIATE  in 3.3.5.4 (https://msdn.microsoft.com/en-us/library/cc246768.aspx) to discover:

• MaxReadSize is set to the maximum size, in bytes, of the Length in an SMB2 READ Request (section 2.2.19) that the server will accept on the transport that established this connection. This value SHOULD<231> be greater than or equal to 65536.
• MaxWriteSize is set to the maximum size, in bytes, of the Length in an SMB2 WRITE Request (section 2.2.21) that the server will accept on the transport that established this connection. This value SHOULD<232> be greater than or equal to 65536.

<231> Section 3.3.5.4: If the underlying transport is NETBIOS over TCP, Windows servers set MaxReadSize to 65536. Otherwise, MaxReadSize is set based on the following table.

Windows version \ Connection.Dialect                                          | 2.0.2 | All other SMB2 dialects
Windows Vista SP1 \ Windows Server 2008                                       | 65536 | N/A
Windows 7 \ Windows Server 2008 R2                                            | 65536 | 1048576
Windows 8 without [MSKB-2934016] \ Windows Server 2012 without [MSKB-2934016] | 65536 | 1048576
All other SMB2 servers                                                        | 65536 | 8388608

<232> Section 3.3.5.4: If the underlying transport is NETBIOS over TCP, Windows servers set MaxWriteSize to 65536. Otherwise, MaxWriteSize is set based on the following table.

Windows version \ Connection.Dialect                                          | 2.0.2 | All other SMB2 dialects
Windows Vista SP1 \ Windows Server 2008                                       | 65536 | N/A
Windows 7 \ Windows Server 2008 R2                                            | 65536 | 1048576
Windows 8 without [MSKB-2934016] \ Windows Server 2012 without [MSKB-2934016] | 65536 | 1048576
All other SMB2 servers                                                        | 65536 | 8388608
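To make the length and credit-charge validation above concrete, here is a small Python sketch of the arithmetic. This is my own illustration of the rules quoted from MS-SMB2, not Windows’ actual code:

# One credit covers up to 64 KiB of READ/WRITE payload.
CREDIT_UNIT = 65536

def required_credit_charge(payload_len: int) -> int:
    # CreditCharge a request must carry for a payload of payload_len bytes.
    return (max(payload_len, 1) - 1) // CREDIT_UNIT + 1

def validate_read(length: int, credit_charge: int,
                  max_read_size: int = 8_388_608,
                  supports_multi_credit: bool = True) -> str:
    # 3.3.5.12: reject reads larger than Connection.MaxReadSize.
    if length > max_read_size:
        return "STATUS_INVALID_PARAMETER (Length > Connection.MaxReadSize)"
    # 3.3.5.2.5: with multi-credit support, CreditCharge must cover the payload.
    if supports_multi_credit and credit_charge < required_credit_charge(length):
        return "STATUS_INVALID_PARAMETER (CreditCharge too small for Length)"
    return "STATUS_SUCCESS"

print(validate_read(1_048_576, credit_charge=16))    # 1 MiB read needs 16 credits -> OK
print(validate_read(1_048_576, credit_charge=8))     # too few credits -> rejected
print(validate_read(16_777_216, credit_charge=256))  # exceeds an 8 MiB MaxReadSize -> rejected

In other words, the effective I/O size is negotiated per connection via MaxReadSize/MaxWriteSize, and large requests simply consume more credits rather than using a different wire block size.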

    Buffers, Queues and Caches Explained

    April 19th, 2017
Fine-tuning buffers, queues and caches can make your storage system hum. And that’s exactly what we discussed in our recent SNIA Ethernet Storage Forum webcast, “Everything You Wanted to Know About Storage But Were Too Proud To Ask – Part Teal: The Buffering Pod.” If you missed it, it’s now available on-demand. In this blog, you’ll find detailed answers from our panel of experts to all the great questions we received during the live event. I also encourage you to check out the other on-demand webcasts in this “Too Proud To Ask” series here and stay informed on upcoming events in this series by following us on Twitter @SNIAESF.

    Storage Expert Takes on Hyperconverged Questions

    April 17th, 2017

    Last month, we were fortunate enough to have Greg Schulz, analyst and founder of Server Storage IO, as a guest speaker at our SNIA Ethernet Storage Forum webcast, “What Does Hyperconverged Mean to Storage.” If you missed it, it’s now available on-demand. Greg fielded many great questions during the live event, but we didn’t have time to get to them all. So here they are:

    Q. What is the difference between Converged Infrastructure (CI) and Hyperconverged Infrastructure (HCI)?

    A. HCI is aggregated. You scale compute and storage in lock step. Converged is disaggregated. You can scale the compute independently of the storage. There are some software solutions that can support both hyper-converged (aggregated) and converged (disaggregated) deployments.

     Q. What is your definition of “Little Data”?

    A. Little Data is anything that’s not Big Data. It encompasses traditional databases, traditional structured, semi-structured and even some unstructured data.

    Q. With convergence, what is the impact on the IT organization?

A. There is an opportunity for organizations to converge how they manage data infrastructure resources and services delivery. In other words, the technology can be leveraged to help the organization itself converge. Another impact is how converged solutions are protected and backed up, and how BC/BR/DR and related management are done. Traditionally there are separate IT teams for compute, storage, and networking, especially in a large organization. New technology solutions may allow an organization to converge those teams.

    Q. Is there a hybrid strategy? Where a complete information system is composed of HCI/CI building blocks? If yes, what management tools would span these components?

    A. Sure, why not? Certainly you can converge your environment into a particular CI/HCI solution or approach, likewise, different CI/HCI solutions can co-exist along with other solutions in a given environment in hybrid ways. Have a hybrid strategy that looks at how technologies and solutions adapt to your needs and environment. Focus on how it’s going to work for you, vs. you having to work for them.

    Q. What does FUZE stand for?

A. FUZE is not an acronym. It refers to the act of fuzing, as in melding and bringing things together – literally fuzing things together.

    Q. Do HCI vendors re-balance (compute, I/O, storage) automatically as more nodes are added?

    A. Solutions vary in how they rebalance the workloads. Some are dynamic while others rebalance on intervals; it varies how, when and what they rebalance. So, as you add capacity as you make changes, you need to make sure resources are properly allocated to address performance.

    Q. Can’t you offload those CPU cycles caused by I/O to another CPU?

    A. That’s an interesting question. Yes, move the application to another CPU. There is software that will leverage the resources on another CPU. Most HCI and CI solutions are running on a stack that requires hardware somewhere.

    Q. This discussion has touched on compute and storage scaling. What about network between compute in the CI/HCI infrastructure and external to other compute, databases, or end-users?

A. Both CI and HCI need to connect to other resources, but in most cases the highest levels of network traffic are inside the CI or HCI stack because the compute and storage resources are contained within. Their connections to outside clients or servers for data exchange, application integration, or client access are important but usually not very demanding on network bandwidth. (External connections for storage remote replication or backup could be bandwidth-intensive.)

    Q. How can the current Enterprise Storage Products blend with either CI or HCI? Enterprise Storage is basically centralized storage architecture however the HCI is built mostly on ‘distributed storage architecture’. So how can current Enterprise Storage show use cases to the customer to sell their Enterprise Storage either as part of the HCI solution or exist along with HCI?

    A. Generally enterprise storage products can be included in CI but are not blended with HCI. For example Dell EMC, Cisco (with NetApp and other storage vendors), IBM and Oracle offer CI solutions that include enterprise storage arrays in the rack. Most HCI platforms do not interoperate with enterprise storage arrays because the HCI platforms include their own storage. They can co-exist with enterprise storage arrays and that’s how most customers deploy them—some workloads run on the HCI infrastructure while others continue to use enterprise storage arrays.

    Q. One of the HCI selling points is simplicity and cost reductions from a la carte. It seems that from what is being presented, that may not be the case. Can you elaborate on where HCI may become more complex, costly?

A. It comes down to value. You can buy all the components yourself and glue them all together and may come up with a lower total cost, but what is the value of your time? What is the cost of staff time to evaluate, test, deploy and maintain it? The total value must be considered. It’s possible that HCI will be more costly than a disaggregated deployment that separates compute and storage, but this depends heavily on the workload and the specific vendor product solution implementation.

    Q. Current HCI “full stack” solutions claim compute and storage convergence, but what about the network? Given the east/west traffic introduced by HCI solutions, what networking solutions should customers be looking at?

    A. Most of the common HCI solutions are packaged with server, storage, compute and most have networking included as well—typically the network adapters and sometimes also the switches. Some even have a backend software defined networking (SDN) capability as part of their stack.

    Q. Related to HCI answer, what about vendors who allow for storage growth and/or server (compute) and storage additions. This allows for aggregated and dis-aggregated…yes?

A. Most HCI vendors require compute and storage to be added simultaneously, though many support different nodes with different ratios of compute and storage. This allows customers to change the ratio of compute and storage by adding different node types. And yes, some HCI vendors also support both a hyper-converged and disaggregated model, with the disaggregated model allowing compute and storage to be added separately.

    Q. What are the tools available to make HCI work in a hybrid load environment, with different workload requirements, e.g.: VDI and Databases?

    A. There are tools for moving and migrating applications, workloads, systems and VMs into CI/HCI environments, likewise for tuning, optimizing, gaining insight, analytics and reporting. Most of the CI/HCI solutions have tools built into them for optimizing PACE (Performance, Availability, Capacity, Economics) attributes along with server compute, memory, storage, and I/O resources. Some CI/HCI solutions are optimized for VDI/workspaces, while others are able to support general workloads including databases, and some even support HPC/SC or other specialized workloads.

    Q. Does network performance affect HCI or CI performance?

    A. Sometimes. Most hybrid HCI nodes are happy with the bandwidth of 10GbE, but if the nodes are all-flash or have many disks, then a faster speed may be required to avoid a network bottleneck. Network latency could affect HCI or CI performance in some cases, especially with all-flash storage. Of course a reliable network helps ensure reliable CI/HCI operations.
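A quick back-of-envelope way to sanity-check that is to compare the aggregate throughput of a node’s local drives to the NIC speed. The numbers below are illustrative assumptions of my own, not figures from the webcast:

# Rough check: can a single NIC carry the aggregate throughput of a node's drives?
def nic_is_bottleneck(drive_count: int, per_drive_mb_s: float, nic_gbit_s: float):
    drive_gbit_s = drive_count * per_drive_mb_s * 8 / 1000.0
    return drive_gbit_s, drive_gbit_s > nic_gbit_s

# Hybrid node: 8 HDDs at ~150 MB/s each against a 10GbE NIC.
print(nic_is_bottleneck(8, 150, 10))   # (~9.6 Gbit/s, False) -> 10GbE is roughly enough
# All-flash node: 8 SATA SSDs at ~500 MB/s each against the same 10GbE NIC.
print(nic_is_bottleneck(8, 500, 10))   # (~32 Gbit/s, True) -> consider 25/40/100GbE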

     


    Managing Your Computing Ecosystem

    April 12th, 2017

      By George Ericson, Distinguished Engineer, Dell EMC; Member, SNIA Scalable Storage Management Technical Working Group, @GEricson

    Introduction

This blog is part one of a three-part series recently published on “The Data Cortex”, which represents the thoughts and opinions from members of the CTO Team of Dell EMC’s Data Protection Division. The author, George Ericson, has been actively participating on the SNIA Scalable Storage Management Technical Working Group, which has been developing the SNIA Swordfish storage management specification.


    Q&A on All Things iSCSI

    April 7th, 2017

In the recent SNIA Ethernet Storage Forum iSCSI pod webcast from our “Everything You Wanted To Know About Storage But Were Too Proud To Ask” series, we discussed all things iSCSI. If you missed the live event, it’s now available on-demand. As promised, we’ve compiled all the webcast questions with answers from our panel of experts. If you have additional questions, please feel free to ask them in the comment field of this blog. I also encourage you to check out the other on-demand webcasts in this “Too Proud To Ask” series here and stay informed on upcoming events in this series by following us on Twitter @SNIAESF.

    Q. What does SPDK stand for?

    A. SPDK stands for Storage Performance Development Kit. It is comprised of tools and libraries for developers to write high performance and scalable storage applications in user-mode. For details, see www.spdk.io.

    Q. Can you elaborate on SPDK use? A quick search seems to indicate it is a “half-baked” solution, and available only on Linux systems.

    A. SPDK isn’t a solution, per se – it’s a development kit, intended to provide common building blocks (NVMe drivers, NVMe over Fabrics targets & host/initiator, etc.) for solutions developers who care about latency, license (BSD) and efficiency.

    Q. Is iSCSI ever going to be able to work with object storage?

    A. iSCSI is a block storage protocol while object storage is normally accessed using a RESTful API such as Amazon’s S3 API or the Swift API. For this reason, iSCSI is unlikely to be used for direct access to object storage. However, an object storage system controller could use iSCSI—or other block protocols–to access remote storage enclosures or for data replication. There also could be storage systems that support both iSCSI/block and object storage access simultaneously.

    Q. Does a high-density virtualized workload represent something better served with a full or partial offload solution?

    A. The type of workload that is better served with full or partial offload will really depend more on what that workload is doing. If you are processing a lot of very large data segments, LSO or LRO might be very helpful. If you have a lot of smaller data sets, you might be able to benefit from checksum or chimney offload. Unfortunately, the best way to see is to test things out (but not on production, obviously).
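Before testing, it helps to know which offloads are currently enabled on the host. On Linux, one quick (hedged) option is to parse the output of the standard ethtool utility; this sketch assumes ethtool is installed and takes the interface name on the command line:

# List the offload-related feature flags (LSO, LRO, GSO, GRO, etc.) for a NIC by
# parsing "ethtool -k", which prints each feature with its on/off state.
import subprocess
import sys

def show_offloads(interface: str) -> None:
    result = subprocess.run(["ethtool", "-k", interface],
                            capture_output=True, text=True, check=True)
    for line in result.stdout.splitlines():
        if "offload" in line:
            print(line.strip())

if __name__ == "__main__":
    show_offloads(sys.argv[1] if len(sys.argv) > 1 else "eth0")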

    Q. How does one determine if TOE NIC cards are worth the cost?

A. This is a really tough question to answer without context. The best way to look at it is to do some digging into what your CPU and memory utilization and IO patterns look like on your servers and try to map that to TCP connections. If you have a lot of iSCSI IO and a large number of TCP connections on a server, that might be a candidate for TOE. That’s just a technical response, but then comes the really tricky part – the QUANTITY measurement of how many dollars it is worth… that’s way more challenging. For example, if I have a regular 10G NIC that costs $200 and a TOE card that costs 3x that and only saves 5% CPU, then it may not have enough value. On the other hand, if that 5% CPU can be used by your application to transact enough business to pay for the extra $400, then it’s worth it. Sorry to say that I have seen no scientific way to enumerate that value outside of specific hands-on testing of the solution with and without TOE NICs.
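Using the (purely illustrative) numbers above, a break-even calculation might look like the sketch below; the value you assign to a server’s CPU time is the assumption that drives the answer:

# Break-even sketch for the example above: a $200 standard NIC vs. a TOE NIC at 3x the
# price that frees up ~5% of the host CPU for applications.
STANDARD_NIC_COST = 200.0
TOE_NIC_COST = 3 * STANDARD_NIC_COST       # $600, i.e. a $400 premium per server
CPU_FREED_FRACTION = 0.05                  # 5% of the host CPU handed back to apps

def toe_pays_off(value_of_full_cpu_per_year: float, years: float = 3.0) -> bool:
    """True if the business value of the reclaimed CPU exceeds the price premium."""
    premium = TOE_NIC_COST - STANDARD_NIC_COST
    reclaimed_value = CPU_FREED_FRACTION * value_of_full_cpu_per_year * years
    return reclaimed_value > premium

print(toe_pays_off(1000.0))   # CPU worth $1,000/yr -> $150 reclaimed over 3 yrs: False
print(toe_pays_off(5000.0))   # CPU worth $5,000/yr -> $750 reclaimed over 3 yrs: True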

    Q. What is the difference between a stateless and stateful TCP offload? Are RSS and TSS (receive-side and transmission-side scaling) offloads a type of TCP offload or are they operating at a lower level like Layer 2?

    A. Stateless offloading is basically any offload function that can be done without the NIC needing to maintain a connection state table. Checksum offloads are an example. Stateful offloading is any offloading that requires the NIC to maintain a full state connection table. Receive Side Scaling has to do with distributing inbound connections in order to alternate connections coming into the server to different CPUs on a multi-CPU server. There are also some other performance-enhancements that can be done such as RPS, RFS, XPS and some others. These are more about how to get data from the network to the CPU, but are not really specifically TCP functions, as they have to do with uniform processing, not necessarily to do with the TCP stack.

    Q. Is using the host CPU to run iSCSI really a downside?

A. There may be applications where this is a problem, but you’re generally right; it’s not too much of an issue today. But there are iSCSI-based storage solutions coming up where a consistent few hundred nanoseconds to low microseconds of latency from the device is possible – and that’s very fast indeed. So an iSCSI stack in these circumstances needs to ensure that its consumption of CPU doesn’t increase the latency (even very efficient stacks can add hundreds of microseconds to milliseconds of latency), or cause contention for the CPU (busy CPUs mean you may queue for compute resources).

    Q. Is the term “onload” for iSCSI new – never heard this before?

    A. It was intended as a quick shorthand word to stand in contrast to iSCSI offload. It will probably not catch on!


    Your Questions Answered on Non-Volatile DIMMs

    April 3rd, 2017

     

    by Arthur Sainio, SNIA NVDIMM SIG Co-Chair, SMART Modular

SNIA’s Non-Volatile DIMM (NVDIMM) Special Interest Group (SIG) had a tremendous response to their most recent webcast: NVDIMM: Applications are Here! You can view the webcast on demand.

    Viewers had many questions during the webcast.  In this blog, the NVDIMM SIG answers those questions and shares the SIG’s knowledge of NVDIMM technology.

    Have a question?  Send it to nvdimmsigchair@snia.org.

    1. What about 3DXpoint, how will this technology impact the market?

    3DXPoint DIMMs will likely have a significant impact on the market. They are fast enough to use as a slower tier of memory between NAND and DRAM.  It is still too early to tell though.

    2. What are good benchmark tools for DAX and what are the differences between NVML applications and DAX aware applications?

    For benchmark tools, please see the answer for (11).

    NVML applications are written specifically for NVM (Non-Volatile Memory). They may use the open source NVML libraries (http://pmem.io/nvml) for their usage.

    DAX is a File System feature that avoids the usage of Page Cache buffers.  DAX aware applications are aware that the writes and reads would go directly to the underlying NVM without being cached.
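As a rough illustration of the difference, here is what a DAX-aware application’s access pattern looks like: map a file that lives on a DAX-mounted filesystem and store into it directly. This is a minimal sketch of my own; the mount point and file name are assumptions, and on a non-DAX filesystem the same code still runs but goes through the page cache:

# Illustrative DAX-style access: memory-map a file on a DAX-mounted filesystem
# (e.g. ext4 mounted with "-o dax" on /mnt/pmem0) and update it via direct stores.
import mmap
import os

PMEM_FILE = "/mnt/pmem0/example.dat"   # hypothetical file on a DAX mount
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
try:
    with mmap.mmap(fd, SIZE) as buf:
        buf[0:13] = b"hello, nvdimm"   # a direct store into the mapping
        buf.flush()                    # ask the kernel to make the update durable
finally:
    os.close(fd)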

3. On the slide talking about NUMA, there was a mention of accessing NVDIMMs from a CPU on a different memory bus. The part about larger access times was clear enough. However, I came away with the impression that there is a correctness issue with the handling of the ADR signal as well. Please clarify.

    If this question is asking whether the NUMA remote CPU will successfully flush ADR-protected buffers to memory connected to the NUMA near CPU then yes there is the potential for a problem in this area. However ADR is an Intel feature that is not specified in the JEDEC NVDIMM standard, so this is an Intel specific implementation question. The question needs to be posed to Intel.

    4. How common is NVDIMM compatible BIOS? How would one check?

    They are becoming more common all the time. There are at least 8 server/storage systems from Intel and 22 from Supermicro that support NVDIMMs.  Several other motherboard vendors have systems that support NVDIMMs.  Most of the NVDIMM vendors have the lists posted on their websites.

5. How does a system go into save? What exactly does the BIOS have to do to get a system ready before asserting save?

The BIOS does the initial check to make sure the NVDIMM has a backup power source available on power loss before it ARMs it. Also, the BIOS makes sure that any RESTORE of the previously saved data is properly done. This involves a set of operations performed by setting appropriate registers in the NVDIMM module – all of that happens during boot-up initialization. On A/C power loss, the PCH (Platform Control Hub) detects the condition and initiates what is called the ADR (Asynchronous DRAM Refresh) sequence, terminating in the assertion of the SAVE signal by the CPLD. Without the BIOS ARM-ing the NVDIMM module, the NVDIMM module will not respond to the SAVE signal in a power loss situation.

    6. Could you paint the picture of hardware costs at this point? How soon will NVDIMM-enabled systems be able to become “the rest of us”?

NVDIMMs use DRAM, NAND Flash, a controller, as well as many other parts in addition to what is used on standard RDIMMs. On that basis, the cost of an NVDIMM-N is higher than that of a standard RDIMM. NVDIMM-enabled systems have been available for several years and are shipping now.

    7. Does RHEL 7.3 easily support Linux Kernel 4.4?

RHEL 7.3 is still using the 3.10 version of the Linux Kernel. For RHEL-related information, please check with Red Hat.

    You can also refer to: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.3_Release_Notes/index.html

    The distribution has drivers to support the persistent memory. They have also packaged the libraries for the persistent memory.

    8. What are the usual sizes for NVDIMMs available today?

    4GB, 8GB, 16GB, 32GB

    9. Are there any case studies of each of the NVDIMM-N applications mentioned?

    You can find some examples of case studies at these websites:  https://channel9.msdn.com/events/build/2016/p466 and https://msdn.com/events/build/2016/p470

10. What is the difference between pmem lib/pmfs in Linux and a DAX-enabled file system (like ext-DAX)?

    A DAX based File System avoids the usage of Kernel Page Cache Layer for caching its write data. This would make all its write operations go directly to the underlying storage unit. One important thing to understand is, a DAX File System can still use BLOCK DRIVERS for accessing its underlying storage.

    PMFS is a File System that is optimized to use Persistent Memory, by completely avoiding the Page Cache and the Block Drivers. It is designed to provide efficient access to Persistent Memory that would be directly accessible via CPU load/store instructions.

    Refer to this link: https://github.com/linux-pmfs/pmfs for more details. PMFS, as of now is only in experimental stages.

    11. What tool is used to measure the performance?

    The performance measurement depends on what kind of Application workload is to be characterized. This is a very complex topic. No single benchmarking tool is good for all the workload characteristics.

    For File System performance, SpecFS, Bonnie++, IOZone, FFSB, FileBench etc., are good tools.

    SysBench is good for a variety of performance measurements.

    Phoronix Test Suite (http://www.phoronix.com/scan.php?page=home) has a variety of tools for Linux based performance measurements.

12. How similar do you expect the OS support for -P to be to the support for -N? I don’t see a lot of need for differences at this level (though there certainly will be differences in the BIOS).

    As of now, the open source libraries (http://pmem.io) are designed to be agnostic about the underlying memory types. They are simply classified as Persistent Memory, meaning, it could be “-N” or “-P” or something else. The libraries are written for User Space, and they assume that any underlying Kernel support should be transparent.

The “-P” type has been thought of as supporting both the DRAM and the persistent access at the same time. This might need a separate set of drivers in the Kernel.

     

    13.  Does the PM-based file system appear to be block addressable from the Application?

A File System creates a layer of virtualization to support the logical entities such as VOLUMES, DIRECTORIES and FILES. Typically, an Application that is running in the User Space has no knowledge of the underlying mechanisms used by a File System for accessing its storage units such as the Persistent Memory. The access provided by a File System to an Application is typically a POSIX File System interface such as open, close, read, write, seek, etc.

     14. Is ADR a pin?

    ADR stands for Asynchronous DRAM Refresh. ADR is a feature supported on Intel chipsets that triggers a hardware interrupt to the memory controller which will flush the write-protected data buffers and place the DRAM in self-refresh. This process is critical during a power loss event or system crash to ensure the data is in a “safe” state when the NVDIMM takes control of the DRAM to backup to Flash. Note that ADR does not flush the processor cache. In order to do so, an NMI routine would need to be executed prior to ADR.


    SNIA Ranked #2 for Storage Certifications – and Now You Can Take Exams at 900 Locations Worldwide

    March 29th, 2017

The SNIA Storage Networking Certification Program (SNCP) provides a strong foundation of vendor-neutral, systems-level credentials that integrate with and complement individual vendor certifications. Its four credentials – SNIA Certified Storage Professional; SNIA Certified Storage Engineer; SNIA Certified Storage Architect; and SNIA Certified Storage Networking Expert – reflect the advancement and growth of storage networking technologies, and establish a uniform standard by which individual knowledge and skill sets can be evaluated, thereby providing employers in the storage industry with an independent assessment of the individual.


    How Many IOPS? Users Share Their 2017 Storage Performance Needs

    March 24th, 2017

New on the Solid State Storage website is a whitepaper from analysts Tom Coughlin of Coughlin Associates and Jim Handy of Objective Analysis which details what IT manager requirements are for storage performance. The paper examines how requirements have changed over a four-year period for a range of applications, including databases, online transaction processing, cloud and storage services, and scientific and engineering computing. Users disclose how many IOPS are needed, how much storage capacity is required, and what system bottlenecks prevent them from getting the performance they need.

    You’ll want to read this report before signing up for a SNIA BrightTalk webcast at 2:00 pm ET/11:00 am PT on May 3, 2017 where Tom and Jim will discuss their research and provide answers to questions like:

    • Does a certain application really need the performance of an SSD?
    • How much should a performance SSD cost?
    • What have other IT managers found to be the right balance of performance and cost?

Register for the “How Many IOPS? Users Share Their 2017 Storage Performance Needs” webcast at https://www.brighttalk.com/webcast/663/252723


    Object Drives Now Have a Management Standard

    March 9th, 2017

    The growing popularity of object-based storage has resulted in the development of Ethernet-connected storage devices, also referred to as IP-Based Drives, that support object interfaces, and in some cases the ability to run applications on the drives themselves. These scale-out storage nodes consist of relatively inexpensive drive-sized enclosures with IP network connectivity, CPU, memory and storage.

    While inexpensive to deploy, these solutions require more management than a traditional drive. In order to simplify management of these drives, SNIA has developed and approved the release of the IP-Based Drive Management Specification. On April 20th, the SNIA Cloud Storage Initiative is hosting a live webcast, “Object Drives Now Have a Management Standard.” It will be a unique opportunity to learn about this specification from the authors who wrote it. In this webcast, we’ll discuss:

    • Major components of the IP-Based Drive Management Standard
    • How the standard leverages the DMTF Redfish management standard to manage IP-Based Drives
    • The standard management interface for drives that are part of JBOD (Just A Bunch Of Disks) or JBOF (Just A Bunch Of Flash) enclosures

This standard allows drive management to scale to data centers and beyond, enabling high degrees of automation and software-only management of data centers. Reserve your spot today to learn more and ask questions of the folks behind the spec. I hope to see you on April 20th.
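Because the specification leverages DMTF Redfish, management traffic is ordinary RESTful HTTP against a Redfish service. As a hedged illustration (the endpoint, credentials and resource path below are made up for this sketch; consult the specification for the real model), a query for a drive resource could look like this:

# Hedged sketch of a Redfish-style query against an IP-based drive's management service.
import requests

BASE = "https://drive-enclosure.example.com"   # hypothetical management endpoint
AUTH = ("admin", "password")                   # placeholder credentials

# Fetch the Redfish service root, then one drive resource under a chassis
# (the chassis/drive path here is illustrative only).
root = requests.get(f"{BASE}/redfish/v1/", verify=False).json()
drive = requests.get(f"{BASE}/redfish/v1/Chassis/1/Drives/0",
                     auth=AUTH, verify=False).json()

print("Redfish version:", root.get("RedfishVersion"))
print("Capacity (bytes):", drive.get("CapacityBytes"))
print("Health:", drive.get("Status", {}).get("Health"))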



    Live Stream: What’s Happening with Enterprise Hyperscaler Storage

    March 7th, 2017

    What are the issues emerging for Hyperscalers and storage? Find out on March 10th when I’ll be presenting at Storage Field Day. The brainchild of Stephen Foskett, Storage Field Day brings together thought leaders to share information and opinions in a presentation and discussion format with a carefully selected delegate panel of independent bloggers, speakers, freelance writers, and other social media tech influencers.

As large enterprise customers take a cue from Hyperscalers like Google and Amazon and build their own storage systems using software defined storage and best-in-class commodity components, assembled in racks, a range of issues around drives, APIs and tail latency are emerging.

    Join me on March 10th at 9:00 am PT via live stream to hear:

    • Increasing attention to the fast growing Hyperscaler storage market by drive and SDS vendors
    • Metrics for disk: IOPS, capacity, lower tail latency, security and lower TCO
    • The importance of tail latency and tail latency remediation
    • Current fractured approach of new features via RFP procurement does not scale
    • The impact on the supply chain
    • How the industry can respond to these technical requirements to promote adoption and standards for Hyperscaler storage
    • Case study – large global bank goes Hyperscaler

And keep an eye on Twitter during Storage Field Day for what we hope will be a lively debate on the issues raised during this Hyperscaler storage presentation.
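As a footnote on the tail-latency point above, here is a small sketch (with synthetic numbers, purely for illustration) of why percentiles matter more than the average when a small fraction of I/Os are slow:

# Compare the mean latency with the tail percentiles when 2% of I/Os are slow outliers.
import random

def percentile(samples, pct):
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(round(pct / 100.0 * (len(ordered) - 1))))
    return ordered[idx]

random.seed(42)
latencies_ms = ([random.gauss(0.5, 0.05) for _ in range(9800)] +   # fast I/Os ~0.5 ms
                [random.gauss(20.0, 2.0) for _ in range(200)])     # 2% slow ~20 ms outliers

print(f"mean  : {sum(latencies_ms) / len(latencies_ms):.2f} ms")
for p in (50, 99, 99.9):
    print(f"p{p:<5}: {percentile(latencies_ms, p):.2f} ms")

The mean looks comfortably under a millisecond, while the p99 and p99.9 values sit near the 20 ms outliers, which is exactly what the applications sitting on that storage will notice.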