Infinite I/O

Best of Both Worlds: Infinio 2.0 Architecture

Posted by Scott Davis, Infinio CTO on May 28, 2015

As Infinio Accelerator 2.0 is released into “the wild”, I’ve been reflecting on some of the architectural changes we’ve made since version 1.0. 

When our founding team was faced with the challenge of creating an efficient, integrated storage solution, they turned to a simple, well-established technique: create a virtual storage appliance that served as an NFS proxy between the VMkernel NFS client and the storage array. These virtual appliances also communicated with each other to coordinate their activities in a distributed fashion. 
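To make the proxy model concrete, here is a minimal, hypothetical sketch of the general idea, not Infinio's actual implementation: a pass-through layer that sits between a client and its backing datastore, answers repeated reads from local memory, and forwards everything else. The BackingStore and CachingProxy names are invented for illustration.

```python
# Hypothetical sketch of a caching storage proxy; names are invented for illustration.

class BackingStore:
    """Stands in for the datastore behind the proxy."""

    def __init__(self):
        self._blocks = {}

    def read(self, offset: int) -> bytes:
        return self._blocks.get(offset, b"\x00" * 4096)

    def write(self, offset: int, data: bytes) -> None:
        self._blocks[offset] = data


class CachingProxy:
    """Sits in the I/O path: answers repeated reads from memory, forwards the rest."""

    def __init__(self, backend: BackingStore):
        self._backend = backend
        self._cache = {}  # offset -> cached block

    def read(self, offset: int) -> bytes:
        if offset in self._cache:              # hit: no round trip to the array
            return self._cache[offset]
        data = self._backend.read(offset)      # miss: fetch from the array and remember it
        self._cache[offset] = data
        return data

    def write(self, offset: int, data: bytes) -> None:
        self._backend.write(offset, data)      # write through so the array stays authoritative
        self._cache[offset] = data
```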

One of the reasons this was such a popular implementation technique is that a virtual appliance can run almost any type of complex user-level code, since it has a dedicated OS and robust run-time libraries, and it can maintain efficient pass-through control of dedicated I/O devices being managed for its service. The downside is that, as a virtual machine, it is scheduled alongside other workloads even though it is actually a shared I/O service for those workloads. If not constructed carefully, such an architecture can have longer code paths and incur context switches and higher latencies compared to a kernel-mode service.

When we started building Infinio 2.0, our customers told us we needed to branch out beyond the NFS acceleration market and apply our unique technology to block-based storage systems as well. That meant intercepting Fibre Channel and iSCSI SAN traffic and communicating with VMFS, the clustered block file system built into vSphere. We were committed to providing the same seamless, transparent user experience we had pioneered with the 1.0 product. Just as importantly, we wanted to keep all the value we had built into our rich content-addressable caching engine for v1.0. And we thought it would be nice to reduce latency and increase throughput over the previous version while we were at it. (We like to set aggressive goals!)
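The content-addressable part of that engine is worth unpacking, because it is the piece we most wanted to carry forward. In a content-addressable cache, blocks are indexed by a fingerprint of their contents rather than by their location, so identical data referenced by many VMs occupies memory only once. The sketch below is an assumed, simplified illustration of that idea (the DedupCache class and its interface are invented here), not our production engine.

```python
import hashlib


class DedupCache:
    """Toy content-addressable cache: identical blocks share a single copy in memory."""

    def __init__(self):
        self._by_fingerprint = {}  # content digest -> block data, stored once
        self._by_address = {}      # (datastore, offset) -> content digest

    def put(self, address, data: bytes) -> None:
        digest = hashlib.sha256(data).digest()         # fingerprint the contents, not the location
        self._by_fingerprint.setdefault(digest, data)  # duplicate contents are stored only once
        self._by_address[address] = digest

    def get(self, address):
        digest = self._by_address.get(address)
        if digest is None:
            return None                                # miss: the caller goes to the array
        return self._by_fingerprint[digest]
```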

 

[Image: diagram of the hybrid kernel and virtual appliance approach we took with our 2.0 architecture]

 

The result is a hybrid architecture for our 2.0 product that is unique in the VMware-integrated storage product space: a VMkernel-integrated solution that also leverages a virtual appliance and the benefits that come with it. 

  • At the kernel level, we used the PSA multi-pathing framework (the only suitable kernel intercept mechanism commercially sanctioned by VMware prior to vSphere 6.0). Our multipathing-compatible plug-in serves two purposes: it provides the traditional multi-pathing policies supported by VMware, and it handles Infinio cache misses and write traffic directly. This provides the lowest possible latency and the best performance.
  • At the virtual appliance level, our implementation interfaces with our content-addressable, deduplicated memory store, which continues to run in the rich virtual appliance environment. This enables us to service cache hits from the appliance's distributed memory store and to manage our sophisticated cluster-wide cache. (A simplified sketch of this read/write split follows the list.)
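Putting the two layers together, reads and writes divide roughly as follows. This is a simplified, hypothetical sketch of the control flow described above; the real plug-in is VMkernel code, and the appliance_cache and physical_path objects here are stand-ins for illustration.

```python
def handle_read(address, appliance_cache, physical_path):
    """Hypothetical control flow for the hybrid data path:
    hits from the appliance, misses straight down the physical path."""
    data = appliance_cache.get(address)    # ask the appliance's deduplicated memory store
    if data is not None:
        return data                        # cache hit: served from appliance memory
    data = physical_path.read(address)     # cache miss: the kernel plug-in goes directly to the array
    appliance_cache.put(address, data)     # populate the cache for subsequent reads
    return data


def handle_write(address, data, appliance_cache, physical_path):
    """Writes go directly down the physical path; the cache is kept consistent."""
    physical_path.write(address, data)
    appliance_cache.put(address, data)
```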

With this architecture, we deliver version 2.0 of Infinio Accelerator knowing that it will provide excellent performance without compromising its rich scale-out capabilities. Coupled with the expanded platform support, UI enhancements, and VM-level reporting, the newest version of Infinio Accelerator is sure to add significant value to a broad variety of environments.