How to Choose a Hyperconverged Infrastructure (HCI) Solution – Cost and Performance – by Eric Slack

By Eric Slack, Monday, January 23rd 2017

Categories: Analyst Blogs

Tags: Cost, Eval(u)Scale, Evaluation, HCI, hyperconverged, performance, TCO, Testing


Evaluating products in the IT space is a complex process. A simple “feeds and speeds” comparison isn’t enough as features and functionality proliferate. This is especially true with Hyperconverged Infrastructure (HCI) products where the evaluation now encompasses compute and management functions, not just storage.

In the last blog I started a short series on Eval(u)Scale, a new tool created by Evaluator Group that IT professionals can use to make better product decisions. In it we list what we consider the 10 most important characteristics and rate each product according to these criteria. In this blog we’ll look at Economics and Performance.

Cost can be a tough topic to cover accurately. Nobody pays list price, and the “street price” for a given piece of equipment is somewhat of a moving target, especially when vendors get into price cutting around the end of the quarter. Rather than trying to compare acquisition price, we focused on the features that can impact cost of ownership. That way the appropriate factors can be applied to comparisons at any price. These include:

Data reduction – Deduplication and compression can significantly reduce the storage capacity consumed for a given application or workload, especially with all-flash storage common in hyperconverged systems. SimpliVity performs data reduction at ingest, reducing storage costs and data handling. Atlantis Computing runs deduplication in DRAM to increase effectiveness.

Data resiliency – All HCIs create a certain amount of redundant data in the process of ensuring availability. Offering parity-based RAID or erasure coding instead of simply creating multiple copies of each data set can reduce this capacity overhead.

Hardware savings – Offering smaller models and configurations (less capacity, smaller CPUs, fewer nodes) can save on hardware in smaller environments. Some vendors claim their software requires less CPU overhead, reducing costs by supporting more VMs per node.

Software savings – A proprietary hypervisor that’s included can save money, as can node configurations that don’t run a hypervisor or systems that can run using fewer CPU cores. Nutanix and Scale Computing provide a hypervisor with each node. Data services and management features that replace third-party software can also improve economics.

Operational savings – HCI appliances that are easy to set up, configure, and reconfigure can cost less over time. Systems that can be run by users instead of IT admins also save money.
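The data resiliency point above is easy to quantify. As a minimal sketch (the 3-way replication and 4+2 erasure-coding schemes here are illustrative choices, not tied to any specific vendor), the capacity overhead of each approach works out as follows:

```python
def usable_ratio_replication(copies: int) -> float:
    """Fraction of raw capacity that is usable with N-way replication."""
    return 1.0 / copies

def usable_ratio_erasure(data_frags: int, parity_frags: int) -> float:
    """Fraction of raw capacity usable with a k+m erasure-coding scheme."""
    return data_frags / (data_frags + parity_frags)

# Hypothetical 100 TB of raw cluster capacity, for illustration only
raw_tb = 100
print(f"3-way replication: {raw_tb * usable_ratio_replication(3):.1f} TB usable")
print(f"4+2 erasure coding: {raw_tb * usable_ratio_erasure(4, 2):.1f} TB usable")
```

On the same raw capacity, 4+2 erasure coding roughly doubles the usable space compared with keeping three full copies, which is why parity-based schemes can meaningfully reduce the capacity overhead of resiliency.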


In a storage system, performance is typically expressed in terms of data throughput, IOPS or latency. These “feeds and speeds” can be measured using industry-standard benchmarks, or measured by the vendor or a third-party lab and published. Care must be taken to express the context for each test, such as data block size, the mix of read and write operations, etc.
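Block size matters because it links the two headline numbers: throughput is simply IOPS multiplied by the I/O size. A quick sketch (the 100,000 IOPS figure is a hypothetical example, not a measured result) shows why an IOPS claim is meaningless without its block-size context:

```python
def throughput_mb_s(iops: float, block_kb: float) -> float:
    """Bandwidth implied by an IOPS figure at a given I/O block size."""
    return iops * block_kb / 1024.0

# The same 100,000 IOPS means very different bandwidth at 4 KB vs 64 KB blocks
print(f"4 KB blocks:  {throughput_mb_s(100_000, 4):.1f} MB/s")
print(f"64 KB blocks: {throughput_mb_s(100_000, 64):.1f} MB/s")
```

The same IOPS number corresponds to roughly 16× more bandwidth at 64 KB blocks than at 4 KB, so two published results are only comparable when their block sizes and read/write mixes match.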

In a hyperconverged system these kinds of performance statistics are only part of the story, and a supporting part at that. HCIs are bought to run virtual server or desktop workloads and also include compute resources. A more useful measure of performance is how many virtual machines or virtual desktops a given configuration can support. And, instead of block sizes or read and write percentages, the context involves describing the workloads, typically the type of applications that are running. Login VSI is a tool that can measure performance for VDI workloads, and IOmark is a testing suite developed by Evaluator Group that provides this information for both virtual machine and virtual desktop workloads.

Just like cost, performance can be directly affected by a number of factors. These include the storage devices used, the total capacity of NAND flash and whether models are available with all-flash storage. Most HCI vendors now offer all-flash models, some over 40TB, and one, Pivot3, offers all-flash nodes that have 60TB of capacity.

NVMe is a new high-speed connection based on PCIe that’s just starting to be offered by HCI vendors. With flash replacing traditional disk storage, NVMe can be the new caching layer. Evaluator Group used this configuration for a series of IOmark tests involving VMware vSAN running on Intel servers. The results were surprising to many, in that the cost per VM was actually lower for an all-flash configuration than for a hybrid configuration.
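The cost-per-VM metric behind that result is straightforward to compute. As a sketch with entirely hypothetical node prices and VM counts (the published IOmark results, not these numbers, are the authoritative source), a pricier all-flash node can still come out cheaper per VM if it supports proportionally more VMs:

```python
def cost_per_vm(node_cost: float, nodes: int, vms_supported: int) -> float:
    """Effective hardware cost per VM for a given cluster configuration."""
    return node_cost * nodes / vms_supported

# Hypothetical figures: all-flash nodes cost more but host more VMs each
hybrid = cost_per_vm(node_cost=25_000, nodes=4, vms_supported=400)
all_flash = cost_per_vm(node_cost=32_000, nodes=4, vms_supported=640)
print(f"hybrid: ${hybrid:.0f}/VM, all-flash: ${all_flash:.0f}/VM")
```

In this made-up example the all-flash cluster costs 28% more to buy but works out 20% cheaper per VM, which mirrors the direction of the vSAN finding described above.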

For hybrid systems, the ability to pin workloads in flash can boost application speed. For all-flash nodes, some vendors use DRAM and flash in a “tiered cache” configuration.

Aside from storage, compute power obviously affects performance directly. All HCIs offer some choice of processors, with current-generation dual-CPU configurations providing dozens of cores per node. In VDI use cases, graphics processing units (GPUs) can improve performance, and many HCI vendors offer these as well.

The importance of performance and economics in a product comparison is obvious, but pulling out meaningful numbers from available data isn’t. The Evaluator Group Eval(u)Scale is designed to help with that process. In the next post we’ll look at two more criteria.

