Infinite I/O

Hidden benefits of distributed cache

Posted by Sheryl Koenigsberg on Mar 19, 2014

I’m new here at Infinio and my first few weeks have been filled with all kinds of questions. Some are technical, like “what does it mean to be content-addressable?” and “what is SHA-1?” Some are more mundane, like “where is the backspace on a Mac?” But the most interesting ones start with “why would a customer…”

  • Why would a customer need a server-side cache?
  • Why would a customer choose virtual distributed switching?
  • Why would a customer have more than one hypervisor?

One of the questions that’s really relevant for us is “Why would a customer choose a distributed cache over a 1:1, per-host cache?” It turns out there are a few reasons for this.

  1. One is that a distributed cache can be bigger - it pools resources from several servers. In the case of Infinio’s cache, it’s also deduplicated, so all the servers get the (really great) benefit of a deduplicated shared cache. For example, with Infinio, each host contributes 8GB of DRAM, so in a 5-host cluster that’s a 40GB cache that every host can access. With deduplication of 7-10X (or more!), that space effectively behaves like 280GB+ of cache available to each host. (WOW!)


  2. Another reason is that a distributed cache amortizes the cost of lookups across all the servers in the cluster. If the load were uniformly distributed, each host would make the same number of requests - but the real world isn’t uniform. Since you probably can’t predict which hosts will carry more or less load at any given time, a distributed cache provides an architecture in which all the hosts share the load of answering read requests.
  3. A third benefit of a distributed cache is more subtle but also very valuable. Consider why administrators use VMware features like vMotion: to limit the interruption to typical operations during optimization and maintenance activities. Well, the last thing any performance-enhancing tool should do is break that commitment to continuous operations. Ideally, we’d also like to keep performance from tanking because of a vMotion activity. (Especially because it might be DRS moving a virtual machine to rebalance resources and increase performance.)
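To make the arithmetic in the first point concrete, here’s a tiny Python sketch. The function name and the idea of treating the dedupe ratio as a parameter are mine, not Infinio’s - the numbers are the ones from the 5-host example above:

```python
def effective_cache_gb(hosts, dram_per_host_gb, dedupe_ratio):
    """Logical cache capacity each host can address: pooled DRAM scaled by dedupe."""
    raw_pool_gb = hosts * dram_per_host_gb  # DRAM pooled across the cluster
    return raw_pool_gb * dedupe_ratio       # effective size after deduplication

# 5 hosts x 8GB = 40GB raw; at 7X dedupe that behaves like 280GB
print(effective_cache_gb(5, 8, 7))   # 280
# ...and at 10X, 400GB
print(effective_cache_gb(5, 8, 10))  # 400
```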
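The second point depends on some scheme for deciding which host answers which lookup. Infinio’s actual placement algorithm isn’t described here, but a minimal hash-partitioning sketch in Python (the host names are made up) shows the basic idea of spreading read lookups across the cluster:

```python
import hashlib

# Hypothetical cluster members; any host can ask any other for cached data.
HOSTS = ["esx-01", "esx-02", "esx-03", "esx-04", "esx-05"]

def owner(content_hash: str) -> str:
    """Deterministically map a content hash to the host that caches that block.

    Every host computes the same answer, so lookups fan out across the
    cluster instead of piling up on one node.
    """
    digest = hashlib.sha1(content_hash.encode()).digest()
    return HOSTS[int.from_bytes(digest[:8], "big") % len(HOSTS)]
```

Because the mapping is a pure function of the content hash, no central directory is needed - each host independently knows where to send a request.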

With Infinio software, when a virtual machine is vMotioned from one host to another (whether through DRS or a user-initiated activity), the mapping between write locations and content hashes (used to detect redundancy) moves along with the virtual machine to its new location. This is a feature called “Warm Cache vMotion”. The VMs move to their new location (as per normal VMware operations), but the information about which data is cached on which node travels with them. Not only can the vMotion activity occur without interruption, but the performance advantages gained by using the cache continue without interruption as well.
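As a rough mental model of Warm Cache vMotion (my own toy sketch, not Infinio’s implementation), the per-VM map of block addresses to content hashes is handed from the source host’s cache index to the destination’s, rather than being rebuilt from scratch after the move:

```python
class HostCacheIndex:
    """Toy per-host index: for each VM, a map of block addresses to content hashes."""

    def __init__(self):
        self.vm_maps = {}  # vm_name -> {block_addr: content_hash}

    def release_vm(self, vm):
        # Hand the departing VM's map to the destination instead of discarding it.
        return self.vm_maps.pop(vm, {})

    def adopt_vm(self, vm, block_map):
        self.vm_maps[vm] = block_map

def warm_vmotion(vm, src: HostCacheIndex, dst: HostCacheIndex):
    """Move the block->hash map along with the VM so its cache stays warm."""
    dst.adopt_vm(vm, src.release_vm(vm))

# Demo: vm1's map moves from one host's index to another's intact.
src, dst = HostCacheIndex(), HostCacheIndex()
src.adopt_vm("vm1", {0x10: "a3f...", 0x20: "9bc..."})  # hypothetical entries
warm_vmotion("vm1", src, dst)
```

The point of the sketch is only that the metadata is transferred, not recomputed - which is why reads can keep hitting the cache immediately after the move.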

In general, I find that this is what our product is designed to do - increase performance with as little disruption as possible to what your systems are already doing. You can see it not just in features like Warm Cache vMotion, but also in capabilities like our installation and un-installation.

I look forward to sharing more about Infinio as I get up to speed.

Sheryl is Director of Product Marketing at Infinio

Topics: About Us, Talking Tech