Infinite I/O

Notes from our storage lab on write performance (Part 1)

Posted by Sheryl Koenigsberg on Jan 23, 2017

As a company that sells a read cache, we often hear from prospects and partners who ask whether we can also improve write performance. Nearly all environments are a mix of reads and writes, and IT decision makers are right to question what impact we can have on their entire I/O stream.

Fortunately, we have always been able to improve write performance by offloading substantial read I/O from arrays. This has been our narrative for a long time, and it has proven out in many customer environments over the years. More recently, however, we've put Infinio through substantial benchmarking in our performance lab. We're sharing some of the results today so you can see exactly what our read cache can do for writes.

First up, let's see what impact Infinio has on the storage system. To do this, we ran a battery of HCIBench tests against an all-flash server SAN. It's important to note that these are not the maximum IOPS or minimum response times that Infinio (or the SAN) can drive; rather, the tests were designed to measure improvement in mixed-workload environments.

This test uses an 8K block size with a 50/50 read/write ratio. Let's take a look at performance in vCenter, at both the datastore and the VMDK level.
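HCIBench drives the workload from guest VMs using a parameter file. We aren't reproducing our actual lab configuration here, but as a rough stand-in, an fio job describing the same profile (8K blocks, 50/50 random read/write mix) would look something like this:

```ini
; Hypothetical fio job approximating the test profile --
; not the actual HCIBench parameter file from our lab.
[mixed-8k]
ioengine=libaio
direct=1
rw=randrw       ; random mixed workload
rwmixread=50    ; 50% reads, 50% writes
bs=8k           ; 8K block size
size=10g
iodepth=32
numjobs=4
runtime=600
time_based=1
```

The queue depth, job count, and runtime above are illustrative; the point is the access pattern, not the exact load level.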

From 3:00 to 3:10 you can see that reads and writes are roughly equal (as you'd expect from a 50/50 ratio), at around 24,000 IOPS each. Right after that, Infinio begins caching, and reads against the datastore drop to about 5,000 IOPS. With significantly fewer reads hitting the storage, there is headroom for more writes: writes burst to nearly 40,000 IOPS before leveling off around 36,000. This is what we mean by the benefit of offload: 50% more writes, with bursts even higher.
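The arithmetic behind the offload effect can be sketched as follows. The IOPS figures come from the chart above; treating the array as having a single fixed IOPS budget is an assumed simplification for illustration, not a model of the actual SAN:

```python
# Sketch of the offload arithmetic (not Infinio code). Assumes the array has a
# fixed total IOPS budget -- a simplification for illustration only.

BUDGET = 48_000  # assumed backend budget: 24k reads + 24k writes before caching


def write_headroom(backend_reads):
    """IOPS left over for writes once the reads hitting the array are served."""
    return BUDGET - backend_reads


before = write_headroom(24_000)  # no caching: all 24k reads hit the array
after = write_headroom(5_000)    # caching: only ~5k read misses hit the array

print(before, after)                  # 24000 43000
print(round(after / before - 1, 2))   # 0.79
```

Under this simplified budget, offload frees up to ~79% more write headroom; in the test, the workload leveled off near 36,000 write IOPS (a 50% increase) before exhausting it.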

[Figure: vCenter read and write performance for the datastore]


Now let's look at it from the VMDK perspective. From 3:00 to 3:10, things look similar to the datastore chart: a 50/50 workload driving read and write IOPS evenly. The difference is that when Infinio begins caching at 3:10, the VMDK continues to see a 50/50 workload, but both reads and writes are 50% higher. The additional read performance comes from the cache, and the additional write performance comes from the headroom created by offloading reads. The VMDK sees a 50% increase in performance across the board.
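The same chart numbers, viewed from the VMDK's side: the guest still issues a 50/50 mix, but its reads are now split between the server-side cache and the array. The exact cache/array split below is approximate, inferred from the two charts:

```python
# Sketch of the VMDK-level view (not Infinio code). IOPS values are taken
# from the vCenter charts; the cache/array read split is approximate.

cache_read_iops = 31_000   # reads served from the server-side cache (approx.)
array_read_iops = 5_000    # read misses still served by the array
write_iops = 36_000        # writes, all of which reach the array

vmdk_reads = cache_read_iops + array_read_iops
read_ratio = vmdk_reads / (vmdk_reads + write_iops)

print(vmdk_reads)                 # 36000 -- reads as seen by the VMDK
print(read_ratio)                 # 0.5   -- the guest still sees a 50/50 mix
print(vmdk_reads / 24_000 - 1)    # 0.5   -- 50% more reads than before caching
```

This is why both lines on the VMDK chart rise together: reads accelerate because most of them are served locally, and writes accelerate because the array is doing less read work.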

[Figure: vCenter read and write performance for the VMDK]


This is the power of offload with a server-side cache. In our next post, we'll look more closely at the performance benefits for different read/write profiles.

Meanwhile, if you're interested in seeing what Infinio can do for you: 

Request a trial

