Gartner’s Challenge Will Be Remodeling Its ‘Magic Quadrant’ Kingdom

The new Gartner 2015 Magic Quadrant for general-purpose disk arrays was recently published, making it clear that the transition sweeping the storage industry poses problems for everybody involved, including analysts, customers and vendors. Gartner’s Magic Quadrants are a fast and convenient way of summarizing a lot of information about technology, markets and vendors, and they have become a reasonable yardstick for narrowing product investigations. A vendor’s trajectory within an MQ signals either progress or problems. For instance, the constellation formed by Tegile, Tintri and Nimble continues to move together in the right direction (up and to the right), while past storage system bellwethers IBM and NetApp slid in the opposite direction.

But the big challenge for Gartner is not necessarily placing vendors inside an MQ, but structuring the MQs when so much is changing so quickly. How is Gartner supposed to incorporate a catalytic technology like flash and maintain the positions of large, slow-moving incumbent vendors while simultaneously highlighting the vendors who are growing fastest in the most explosive industry segments? For enterprise storage, Gartner decided to roll hybrid flash arrays together with disk-based arrays in the General Purpose Arrays category and to place all-flash array vendors in their own MQ. Fair enough, but many people believe storage arrays are more accurately segmented into three natural categories: all-flash, hybrid and the valley of disks. The problem is that all-disk arrays are still being sold in large, but sharply declining, numbers. Hence the “valley of disks” (death implied).

For what it’s worth, Tegile is happily in both the general-purpose and all-flash MQs, which is something some of our competitors (Nimble, Pure, SolidFire) can’t claim. That speaks to the breadth of our products and our ability to provide a cost-competitive solution for all customers, whatever workloads are giving them problems.

But structuring the MQs by flash integration level (hybrid and all-flash) is not going to be realistic for Gartner in the years ahead. As high-density (HD) flash prices drop to near-disk levels and storage performance expectations rise, the market for flash-plus-disk hybrids will likely tail off and be replaced by hybrid-flash (high-performance plus high-density) arrays. Within five years (1.5 refresh cycles from today), disk drives in enterprise storage arrays will be about as popular as carburetors in cars. Gartner won’t have to pick and choose its MQ categories based on anything having to do with disks anymore. By then everything else will have changed too, and most MQs will look considerably different than their current versions. The questions are: what will drive the change, and what criteria will Gartner base the new MQs on?

The most likely drivers of change will be cloud computing, big data and infrastructure containerization. The cloud is forcing the IT industry to find lower-margin business models. Dell’s acquisition of EMC is, in part, an effort to reduce the costs and margins of EMC storage products. Competitors who survive the low-margin transformation of EMC will have the technologies, architectures and business models that can adapt to the new rules of engagement. Given the financial imperatives at play, the array vendors that spend the least acquiring new customers will have a distinct advantage over those that spend the most. The market available to non-incumbent vendors will be larger than ever, and the Gartner MQs will become even more important to vendors than they are today. It’s good to be Gartner, and it’s good to be a company moving in the right direction.

The big data revolution is well underway, and it is forcing a rethink of the design and architecture of enterprise storage. Will big data applications use server-based, non-shared, scale-out approaches to data persistence, or will enterprise storage evolve to cost-effectively support big data applications?

Although container technology is relatively new, it is quickly moving to challenge hypervisors as the ultimate consolidation and cost reduction technology. A container-based compute world will reshuffle the infrastructure deck once again, just as hypervisors did a decade ago.

Tegile was founded on the philosophy that things always change, which is why we built a flexible platform that accommodates all types of environments and workloads, including containers. Containers will have nuances that Tegile will exploit effortlessly for our customers.

You can say it’s all speculation, but the next five years will bring huge changes to our industry. We will continue to move up and to the right in the MQ, whichever one it turns out to be.

