Infinite I/O

More RAM doesn't mean faster VMs

Posted by Vishal Misra on Oct 9, 2014
RAM is an abundant but ultimately finite resource in server environments, so part of any performance troubleshooting effort goes into deciding where that memory is best allocated. More often than not, I believe allocating available memory to Infinio delivers more value to your infrastructure than adding it to individual VMs.


Here's why:

There is an important difference between how a guest OS on a VM caches data and how Infinio's distributed, scale-out caching solution does. Infinio creates a caching layer by taking small amounts of RAM from each ESXi server and pooling them into one shared cache. That RAM could instead be given to individual guest VMs so that each can do its own file system caching. So why give it to Infinio?

There are three primary reasons; let me go through them:

  • Statistical multiplexing. Each guest VM has its own load variations and demands, so it is not obvious how the RAM that goes to Infinio (or similar products) should be partitioned across guest VMs. If the guest OSes do not hit peak demand simultaneously, it is far more efficient to share one RAM cache across the guest VMs than to silo a separate cache in each one.

Think of lines at a grocery checkout. If you have one line per clerk, some lines move faster and some slower, because both the load (the number of grocery items each customer brings) and the processing speed of individual clerks vary. A fast clerk can end up with nobody in line, and that processing capacity is wasted when it could have helped relieve the overloaded clerks.



One could constantly rebalance the lines, moving people to match each clerk's speed, but a better way is simply to have one shared line for all the clerks: whoever is free next immediately serves the next customer. No capacity is wasted, and the resource is allocated fairly across customers.

This is statistical multiplexing, and it is the same principle that made the Internet such a success: packet switching (the Internet's way) uses statistical multiplexing, whereas circuit switching (the old telephone system's way) does not. At Infinio, we leverage statistical multiplexing not only across the guest VMs on an ESX server; we go a step beyond other products by leveraging it across all the servers in a cluster.
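The checkout analogy can be sketched in a few lines of Python. This is a toy model (the job lengths, clerk count, and function names are illustrative, not anything from Infinio's product): with private per-clerk lines, one long job blocks everything behind it, while a shared line lets any idle clerk absorb the short jobs.

```python
import heapq

def siloed_makespan(jobs, n_clerks):
    """Private lines: customers are assigned round-robin to clerks."""
    totals = [0] * n_clerks
    for i, job in enumerate(jobs):
        totals[i % n_clerks] += job
    return max(totals)          # slowest line determines finish time

def shared_makespan(jobs, n_clerks):
    """One shared line: the next free clerk takes the next customer."""
    free_at = [0] * n_clerks    # min-heap of times each clerk frees up
    heapq.heapify(free_at)
    finish = 0
    for job in jobs:
        t = heapq.heappop(free_at)      # earliest-free clerk
        finish = max(finish, t + job)
        heapq.heappush(free_at, t + job)
    return finish

jobs = [9, 1, 1, 1]                     # one slow customer, three fast ones
print(siloed_makespan(jobs, 2))         # 10: one line is stuck behind the 9
print(shared_makespan(jobs, 2))         # 9: the idle clerk absorbs the rest
```

The same arithmetic applies to siloed versus pooled RAM caches: a shared pool serves whichever guest happens to be busy right now.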

  • Deduping. Most virtualized environments contain a lot of redundancy. The working sets of the guest OSes can overlap by as much as 70-90%, depending on what kind of workloads they are running. Since Infinio caches by content rather than by location, we exploit this fact elegantly. If the RAM is instead allocated individually to the guest OSes, the same block (say, the binaries of an Office application, or a PPT or PDF viewed by VDI users across an installation) is cached multiple times.


At Infinio, we cache a redundant block (redundant by content, not necessarily by location) exactly once across all ESX servers, making far more efficient use of that RAM than individual file system caching in each guest OS. Each guest OS may hold its own copy of that block in its file system (on its VMDK), and location-based approaches, whether guest OS file system caching or a shared location-based caching layer, will cache the block multiple times, wasting that RAM. Say you have a VDI installation where every user runs Office: with 500 simultaneous users in your cluster, other approaches will cache those blocks 500 times, whereas Infinio caches them once. This is the efficiency in storage that Infinio provides.
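The core idea of content-addressed caching fits in a short sketch. This is a toy illustration, not Infinio's implementation: the class and field names are invented, and a SHA-256 digest stands in for whatever content fingerprint a real product would use. Many (vm, block-address) references map to one stored copy of identical data.

```python
import hashlib

class ContentAddressedCache:
    """Toy content-addressed read cache: identical blocks are stored
    once, no matter how many guests or locations reference them."""
    def __init__(self):
        self.blocks = {}     # content hash -> block data (one copy each)
        self.index = {}      # (vm, lba) -> content hash

    def put(self, vm, lba, data):
        key = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(key, data)   # stored at most once
        self.index[(vm, lba)] = key

    def get(self, vm, lba):
        key = self.index.get((vm, lba))
        return self.blocks.get(key) if key else None

cache = ContentAddressedCache()
office_block = b"shared Office binary page"
# 500 VDI guests read the same Office page from different VMDK locations
for vm in range(500):
    cache.put(f"vm{vm}", lba=vm * 8, data=office_block)

print(len(cache.index))    # 500 logical references
print(len(cache.blocks))   # 1 physical copy in RAM
```

A location-keyed cache would instead key on (vm, lba) and hold 500 copies of the same bytes; hashing the content collapses them to one.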

  • Prefetching. This is similar to the previous point, but the effect is a little more subtle. Since we cache by content, the access patterns of similar guest OSes act as an implicit prefetch for our cache. Our shared cache is warmed by the access requests of one guest OS, so if another guest OS requests the same application content from the storage array, we already have it cached. This can provide significant benefits in situations like application loading or boot storms. And again, since our cache is shared and distributed across the cluster, the benefits extend beyond guest OS caching or siloed caching on an individual ESX server. Slightly different from the previous effect, this is the efficiency in delivery that Infinio provides.
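The implicit-prefetch effect can be sketched as follows. This is a simplified model under an assumption of my own making (that resolving a block to its content digest is cheap relative to fetching the data), and it is not Infinio's actual protocol: once one guest's read has warmed the shared content cache, a second guest reading the same application bytes never triggers an expensive array fetch.

```python
import hashlib

# Toy backing store: two guests hold the same application page
# at their own (vm, lba) locations.
ARRAY = {("vmA", 0): b"office binary page",
         ("vmB", 0): b"office binary page"}

content_cache = {}   # sha256 -> data, shared across the cluster
data_fetches = 0     # expensive array data reads actually performed

def digest(vm, lba):
    # Assumed-cheap metadata lookup of a block's content fingerprint.
    return hashlib.sha256(ARRAY[(vm, lba)]).hexdigest()

def read(vm, lba):
    global data_fetches
    key = digest(vm, lba)
    if key not in content_cache:        # first guest to touch this content
        data_fetches += 1               # only this path hits the array
        content_cache[key] = ARRAY[(vm, lba)]
    return content_cache[key]           # every later guest is a hit

read("vmA", 0)        # miss: warms the shared cache
read("vmB", 0)        # hit: vmA's access acted as a prefetch for vmB
print(data_fetches)   # 1
```

This is why a boot storm of similar guests behaves well: the first boot pays for the reads, and the rest are served from the already-warm shared cache.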

Now we can see that adding RAM in the right place can provide a significant benefit to VM performance. Hopefully this has convinced you that it is better to give that RAM to Infinio. 

Learn About Our Architecture

Topics: About Us, Talking Tech