Cisco's announcement that it is ending its Hyperconverged Infrastructure (HCI) solution, HyperFlex (HX), revived a question I've had for a long time: when to HCI, and when not to HCI.

Cisco HyperFlex

HX had a unique selling point: its performance. ESG testing validated that HX delivered consistent performance to all hosted VMs, whereas other HCI solutions would provide great performance for some VMs but poor performance for others. When it comes to day 2 operations, consistent performance makes everyone's life easier and, I'd argue, is essential. This was a key differentiator for HX, and no other HCI solution delivers it.

Why HCI?

HCI was promoted as a solution to simplify the data centre.

This was ideal in theory but doesn't always translate into practice. As the solution scales, you often need multiple clusters for different workload profiles, plus a separate management cluster. So you end up managing multiple storage pools, along with multiple replication policies and configurations. And if you have a mix of storage capacity drives or nodes, which can easily happen over the lifetime of the solution, you need to ensure there's sufficient capacity elsewhere should a large-capacity drive or node fail. Not the simple solution you thought you had signed up for. Management overhead.

Want to enable deduplication? First, ensure it's supported for the workload in question, as this isn't always the case! Want data replication? Erasure coding? Encryption? Tiering? Ensure the additional CPU load doesn't degrade performance, and thoroughly test the performance hit during a drive or node failure. Management overhead.

Then do you want N+1 or N+2 redundancy in the cluster? And do you carve clusters up into node groupings to optimise failure domains? Management overhead.
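To make that trade-off concrete, here's a minimal sketch of how node redundancy and replication factor eat into usable HCI capacity. All figures below (node count, drive capacity, replication factor) are illustrative assumptions, not vendor sizing guidance:

```python
# Illustrative HCI usable-capacity sketch. All figures are assumptions
# for the sake of the example, not vendor sizing guidance.

def hci_usable_tb(nodes: int, raw_tb_per_node: float,
                  replication_factor: int, spare_nodes: int) -> float:
    """Usable capacity once replica copies and N+spare headroom are set aside."""
    # Reserve enough raw capacity to rebuild after 'spare_nodes' failures (N+1, N+2, ...).
    effective_nodes = nodes - spare_nodes
    # Each block is stored 'replication_factor' times across the cluster.
    return effective_nodes * raw_tb_per_node / replication_factor

# 8 nodes x 20 TB raw, replication factor 3, N+1 headroom:
print(f"{hci_usable_tb(8, 20.0, 3, 1):.1f} TB usable")  # 46.7 TB usable from 160 TB raw
```

Every spare node and every extra replica is raw capacity you buy, power and cool but can't fill.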

Software and hardware upgrades of clusters take significant planning. Management overhead.

It started simple. It has ended up complex, as HCI has expanded its capabilities to support pretty much any workload. Is it a case of "just because you could, doesn't mean you should"? Or square peg, round hole?

Market Development

Let's compare HCI with a leading all-flash array, such as a Pure Storage FlashArray, in a Converged Infrastructure (CI) solution.

Here we get inline dedup, compression, encryption, redundancy, etc. with zero impact on performance, for any workload, and with no management overhead. Consistent sub-millisecond latency at 100,000+ IOPS. Continuous backend deep dedup for efficiency. No tiering, no multiple node groupings, no concerns about mixed-capacity drives. Simple management. Simple non-disruptive software and hardware upgrades. Simple, full stop.

Yeah, okay, you've got separate compute in CI. However, there are advantages to deploying a blade chassis such as Cisco's UCS X-Series over the rack servers used in HCI. And with a solution such as Cisco Intersight providing a single platform for managing storage, networking and compute, automation included, CI management can now be simpler than an HCI solution.

Do the maths around the compute resource overhead of HCI controller VMs. Add those resources up across clusters and you may be looking at an additional node or two's worth! Additional licensing, power, cooling, etc.
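As a rough illustration of that maths, a minimal sketch; the per-VM reservations and cluster sizes below are assumptions for the example, not vendor specifications:

```python
# Rough controller-VM overhead tally across HCI clusters.
# Per-VM figures are illustrative assumptions, not vendor specs.

CTRL_VCPU = 8      # vCPUs reserved per controller VM (assumed)
CTRL_RAM_GB = 72   # RAM reserved per controller VM (assumed)

# Nodes per cluster; one controller VM runs on every node.
clusters = {"prod": 8, "dev": 4, "management": 3}

total_nodes = sum(clusters.values())
total_vcpu = total_nodes * CTRL_VCPU
total_ram = total_nodes * CTRL_RAM_GB

print(f"{total_nodes} controller VMs -> {total_vcpu} vCPUs, {total_ram} GB RAM")
# 15 controller VMs -> 120 vCPUs, 1080 GB RAM: roughly the compute of a
# couple of whole nodes, before licensing, power and cooling are counted.
```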

Sustainability

Which brings us to sustainability, now a business KPI that companies are being held accountable for. In a data centre, 20-25% of energy consumption goes on storage.

With HCI, each cluster has built-in storage redundancy in the event of disk or node failure: a whole node or two's worth of disk. Where deduplication and erasure coding can't be used, as is the case for many databases, more storage is required, which in turn means more energy, more cooling and more DC space, increasing your carbon footprint. Even after deduplication, the resultant data is replicated two or three times across the cluster. The overhead is therefore significant.

With CI, if we take the Pure FlashArray, the overhead after dedup is a mere 12.5%. Then factor in Pure's DirectFlash drive technology, and even less raw storage is required.
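Putting the two overhead models side by side, a minimal sketch; the 100 TB post-dedup workload and the replication factor of 3 are assumptions for illustration, while the 12.5% CI overhead is the figure quoted above:

```python
# Raw capacity needed to hold the same usable data under each model.
# Workload size is an assumption; overheads follow the figures discussed above.

usable_tb = 100.0             # post-dedup data we need to store (assumed)

hci_raw = usable_tb * 3       # replication factor 3: every block written three times
ci_raw = usable_tb * 1.125    # ~12.5% parity-style overhead on the array

print(f"HCI (RF3): {hci_raw:.1f} TB raw")  # 300.0 TB
print(f"CI:        {ci_raw:.1f} TB raw")   # 112.5 TB
```

On those assumptions, HCI needs well over twice the raw flash for the same data, and every extra terabyte has to be powered and cooled.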

There's a chasm between the two: on the storage side, CI offers a far greater sustainability benefit than HCI.

To HCI or not to HCI? That is the question.

So, is there a good rule of thumb as to where HCI is a good fit?

It comes down to scale. Tempered by performance.

If the performance of all workloads can be accommodated on a single cluster with dedup enabled, then HCI is worth considering. This is usually true in small-scale environments: at the edge, for smaller clients, or where clients have migrated most workloads to the public cloud.

When consistent performance is key, or you’re faced with multiple HCI clusters, or the quantity of required storage is high, I’d argue CI would be the answer.

Nigel Pyne – Principal Architect

 

How can Natilik help?

There are many factors that can impact the overall technology strategy decision, be it the business's priorities, budget, skillset, growth plans or having to play nice with legacy kit. That's why at Natilik we vow to act as the confident guide and work through each nuanced instance with no 'one solution fits all' approach.

It may be that HCI is the answer, or that CI fits the bill; either way, clarity and understanding of the benefits and limitations of each allow the design and implementation of the best-fit solution every time.

If you’d like to discuss your HCI or CI challenges, please get in touch today.
