Dispelling the Hyper-Converged Myths

Comparing Hyper-Convergence to Dedicated Architectures

Increasingly, organizations are being asked to look at hyper-converged infrastructure (HCI) as a way to become more cloud-like in their delivery of IT services. IT infrastructure is essentially made up of three basic components: compute, network and storage. HCI vendors combine all of these components into a single product offering and claim that it is a better solution for the customer. Is it really? These claims have reached almost mythical proportions, so it is time to do some myth busting.

Myth #1 – HCI Is Easier to Deploy

Most HCI offerings require at least three nodes in order to function, which means the initial implementation requires racking three pieces of hardware, networking that hardware, and creating a cluster. In all fairness, the HCI vendors have done a pretty good job with establishing the cluster, but the physical work of stacking those nodes and making the connections is significant. It also requires the migration of virtual machines (VMs) to the new cluster, since most (almost all) HCI solutions cannot present storage resources to any VM outside of the cluster.

A dedicated storage system requires the implementation of one piece of hardware and then just a few connections to the network/fabric of your choice, which you probably already have. Additionally, existing servers and the VMs on those servers can immediately access that storage, with no migration required.

While it is true that some HCI vendors have recently come out with single-node solutions, these place data at risk and are problematic when the time comes to expand beyond that single node. Vendors will sell most of these HCI starter systems as two-node pairs with data mirrored between the nodes for availability and VM mobility. The first problem is that mirroring is the most wasteful protection method: every block is written twice, so only half of the raw capacity is usable.

There is also a challenge when the organization scales and wants to go to three nodes or more. The data protection method will need to change from mirroring to parity-based protection, which stripes data across the nodes. Each vendor has a method to manage this migration and none are pretty; in fact, some are downright ugly.
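To put rough numbers on the difference, here is a minimal back-of-the-envelope sketch in Python. The schemes and the 80 TB raw figure are illustrative assumptions, not any particular vendor's implementation:

    def usable_tb(raw_tb: float, scheme: str, nodes: int = 2) -> float:
        """Rough usable capacity under common protection schemes.

        'mirror' -- every block written twice (two-node starter systems)
        'parity' -- single-parity stripe across `nodes` nodes (RAID-5-like)
        Real overheads vary by vendor, layout and metadata.
        """
        if scheme == "mirror":
            return raw_tb / 2                       # two full copies: 50% usable
        if scheme == "parity":
            return raw_tb * (nodes - 1) / nodes     # one node's share lost to parity
        raise ValueError(f"unknown scheme: {scheme}")

    print(usable_tb(80, "mirror"))           # 40.0 TB usable from 80 TB raw
    print(usable_tb(80, "parity", nodes=4))  # 60.0 TB usable from the same raw

Moving from the first line to the second is the mirroring-to-parity migration described above, which is why vendors cannot simply flip a switch when the fourth node arrives.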


Myth #2 – HCI Is Easier to Scale

The second myth is that HCI is easier to scale, with customers told that they can “just add a node”. Technically this is true: the customer can just add another node to the rack, but it is still necessary to network that node into the cluster. Networking means properly configuring and setting ports. Most likely there are very specific VLAN assignments so that some network traffic is dedicated to storage only, some to cluster communications, and still other traffic connects outward so that the cluster is available to the organization.
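As a concrete illustration of the traffic separation involved, here is a hypothetical per-node VLAN plan sketched in Python. The VLAN IDs, port names and roles are invented for the example; every site will have its own standards:

    # Hypothetical traffic classes for one new HCI node (illustrative only).
    vlan_plan = {
        "storage":    {"vlan_id": 100, "ports": ["eth0"], "routed": False},
        "cluster":    {"vlan_id": 200, "ports": ["eth1"], "routed": False},
        "vm_traffic": {"vlan_id": 300, "ports": ["eth2", "eth3"], "routed": True},
    }

    for role, cfg in vlan_plan.items():
        print(f"{role}: VLAN {cfg['vlan_id']} on {', '.join(cfg['ports'])}")

Every one of those assignments has to be repeated, correctly, for each node added.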

Once the node has been mounted and properly connected to the network, the storage software has to figure out how to take advantage of the new capacity. Usually, the availability of new capacity means re-balancing current data by moving a percentage of the data to the new node. This re-balancing task can be especially problematic in the early days of cluster expansion. Let us assume the organization has a three-node HCI configuration and “just adds another node”. This means that approximately 25% of the data on the original three nodes needs to be copied to the new node. Considering that a single node can store tens of terabytes of data, we could be talking about the movement of 25 TB or more of data across the network.
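The arithmetic behind that estimate is simple enough to sketch. The function below assumes data is spread evenly and that the cluster re-balances back to an even spread; real placement policies vary by vendor, so treat the result as an approximation:

    def rebalance_volume(total_tb: float, old_nodes: int, new_nodes: int) -> float:
        """Estimate the data moved when an evenly balanced cluster grows."""
        per_node_before = total_tb / old_nodes
        per_node_after = total_tb / new_nodes
        # Each original node sheds the difference to the new node(s).
        return (per_node_before - per_node_after) * old_nodes

    # Three nodes holding ~33 TB each (about 100 TB total), adding a fourth:
    moved = rebalance_volume(total_tb=100, old_nodes=3, new_nodes=4)
    print(f"~{moved:.0f} TB crosses the network during re-balance")  # ~25 TB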

In the dedicated storage system example, if the first storage head has run out of capacity, just add another shelf. There is no external networking and no copying of data; all new data automatically lands on the new shelf, which makes sense.

Team HCI may claim that a dedicated storage system will be difficult to upgrade once it can no longer add additional shelves. Let us examine that. First, will it ever really run out of capacity? Our current systems, with modest deduplication and compression rates, can deliver effective capacities of 1 to 5 petabytes. How many nodes would it take for a hyper-converged system to reach the same capacity? For most vendors it would take dozens of nodes, if not more. Most HCI vendors also have a limit on how many nodes a single cluster can support before the organization is forced to manage multiple clusters. Isn’t having to manage multiple clusters counter to the simplicity promise of HCI?
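A quick sanity check on those node counts, again with an assumed per-node figure (40 TB of usable capacity per HCI node is hypothetical; substitute your vendor's real number):

    import math

    TB_PER_PB = 1000  # decimal units, as storage capacities are usually quoted

    def hci_nodes_needed(effective_pb: float, usable_tb_per_node: float) -> int:
        """Nodes an HCI cluster needs to match a given effective capacity."""
        return math.ceil(effective_pb * TB_PER_PB / usable_tb_per_node)

    for pb in (1, 5):
        print(f"{pb} PB -> {hci_nodes_needed(pb, 40)} nodes")
    # 1 PB -> 25 nodes; 5 PB -> 125 nodes, well past many cluster size limits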

Now let us say the customer does need more than five petabytes of capacity. We can easily manage multiple IntelliFlash arrays from within our GUI, and unlike a hyper-converged system, you won’t need dozens of systems; two or three will do.


Myth Busting Continues

There are more HCI myths to bust, but the goal here was to get the two big ones out of the way: easy to deploy and easy to scale, both of which are suspect claims. In my next entry, we will discuss the HCI Performance Myth.
