My Friend George’s Analysis of VM-Aware vs. ZFS Storage for Tintri

It appears our brethren over at Tintri are feeling the pain of competing against ZFS-based storage systems, and found it necessary to hire our friend George Crump at Storage Switzerland to write up a piece that differentiates Tintri’s “VM-Aware” storage from low value-add ZFS players. First and foremost, I would like to thank Tintri for educating the market on the shortcomings of ZFS-based systems whose vendors have not done the hard work, as Tegile has, to turn them into exceptionally valuable solutions for server and desktop virtualization.

Let’s run through the document paragraph by paragraph to peel away the layers of ZFS:

Protocol Flexibility
George’s analysis first takes a poke at the shortcomings of ZFS’ multi-protocol, or unified storage, approach. The issue raised is that ZFS must do a double translation of block data into ZFS’ file system before data is read or written. This is not true in Tegile’s implementation. Furthermore, Tegile’s Meta-Data Accelerated Storage System architecture enables efficient handling of very large datasets, especially when they are de-duplicated; our engineering team made the enhancements needed to handle large-scale datasets at high performance. I also want to point out that ZFS is the filesystem. The protocol stack (be it file I/O or block I/O) is completely separate from ZFS, so there is no such thing as ZFS’ protocol stack. Tegile’s engineering team built several enhancements to that separate protocol stack to optimize performance for unified storage at scale. You can see the results in the Lab Validation Report ESG ran for us in April: they found that our IOPS and latency numbers were very impressive and our block-based I/O was excellent. Tegile response – “yeah, we fixed that, and our systems are really fast.”
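To make the layering point concrete, here is a minimal toy sketch in Python. It is purely illustrative; the class and method names are mine, not ZFS’ or Tegile’s. The idea it shows: file and block protocol front ends are separate services that translate their requests into calls on a shared filesystem layer underneath them.

# Toy layering model (illustrative only; not actual ZFS or Tegile code).
# The point: the NFS/SMB and iSCSI/FC front ends sit *above* the filesystem
# layer, so "ZFS' protocol stack" isn't really a thing.

class FilesystemLayer:
    """Stands in for the pooled filesystem layer (ZFS, in this analogy)."""
    def __init__(self):
        self._store = {}

    def write(self, key, data):
        self._store[key] = data

    def read(self, key):
        return self._store.get(key, b"")


class NFSFrontEnd:
    """File-protocol front end: path-based requests become filesystem calls."""
    def __init__(self, fs):
        self.fs = fs

    def write_file(self, path, data):
        self.fs.write("file:" + path, data)


class ISCSIFrontEnd:
    """Block-protocol front end: LUN/LBA requests become filesystem calls."""
    def __init__(self, fs):
        self.fs = fs

    def write_block(self, lun, lba, data):
        self.fs.write("lun%d:lba%d" % (lun, lba), data)


if __name__ == "__main__":
    fs = FilesystemLayer()
    NFSFrontEnd(fs).write_file("/vmfs/vm1-flat.vmdk", b"file payload")
    ISCSIFrontEnd(fs).write_block(lun=0, lba=2048, data=b"block payload")
    # Both protocol front ends land on the same filesystem layer underneath.
    print(sorted(fs._store.keys()))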

SSD as a Cache
Next, the analysis moves on to the use of SSD as a cache. George and Tintri give credit to ZFS and its ability to leverage SSD with good results. Thanks, guys! That is one thing I like about my pal George; he always gives credit where credit is due. (Note – I’ve worked on projects with George for many, many years.) Tegile truly believes that flash as a medium is a game changer. Therefore, we leverage the best flash SSDs available in the marketplace to dramatically transform read and write I/O performance as well as latency. Furthermore, we uniquely leverage flash as a store for all metadata within the system, thereby accelerating every operation on the system. Once again, the engineers at Tegile took it a bit further and made the SSD pool even better than this, but I’ll get to that after the next section on deduplication (I’ll remind you when I get back to this part).
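For a back-of-envelope sense of why keeping all metadata on flash matters, here is a small Python sketch. The latency figures and the metadata-lookups-per-I/O count are my own ballpark assumptions, not Tegile measurements.

# Rough model (assumed numbers): every user I/O touches metadata, so moving
# those lookups from disk-class to flash-class latency shifts the whole curve.

SSD_LATENCY_MS = 0.1     # assumed flash read latency
DISK_LATENCY_MS = 5.0    # assumed spinning-disk read latency
META_LOOKUPS_PER_IO = 2  # assumed metadata touches per user I/O

def avg_io_latency(data_latency_ms, meta_latency_ms):
    """Rough service time for one I/O: data access plus its metadata lookups."""
    return data_latency_ms + META_LOOKUPS_PER_IO * meta_latency_ms

meta_on_disk = avg_io_latency(DISK_LATENCY_MS, DISK_LATENCY_MS)
meta_on_flash = avg_io_latency(DISK_LATENCY_MS, SSD_LATENCY_MS)

print(f"metadata on disk : ~{meta_on_disk:.1f} ms per I/O")
print(f"metadata on flash: ~{meta_on_flash:.1f} ms per I/O")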

Data Deduplication – the Counterintuitive Differentiator
I’m going to quote the Storage Switzerland article directly, just to avoid any misunderstanding, because this part is a big deal for us. It states:

“Some ZFS-based storage system vendors may strive to add some value to their systems, and interestingly the most common feature they tend to use as their added value is data deduplication. While ZFS does include deduplication and compression, these vendors reinforce the common perception that it is not suitable for primary storage. A few other vendors have done optimization of the ZFS tiering or caching technology but they are still limited to the constraints of what ZFS provides access to.”

This is where Tegile’s IntelliFlash Storage System architecture comes in. We took all the things that a storage system must do to deliver best-in-class performance and capacity optimization (deduplication, compression, RAID, and snapshot pointers, to name a few) and architected them into an isolated, really fast portion of the system, so that these traditionally performance-hurting features either don’t hurt performance at all or actually make our systems faster.

This is where I’m going to come back to the SSD caching part. Our MASS architecture lets compression and de-duplication work in conjunction with our caching engine, which uses both DRAM and SSD, to make them appear larger to the virtualization platform. Let’s say, for example, you are using our HA2100EP, which has 1.2TB of raw SSD capacity. In an average server virtualization environment, the effective SSD capacity used for caching would be 3-5 times the raw capacity. That means our cache is effectively 3-5 times bigger than the raw SSD capacity, and the VMs’ cache hit ratios go through the roof! That is how our users get about 7X the IOPS of a traditional array, run at about 1ms latencies, and purchase up to 75% less capacity than they would without compression and de-duplication. We make no hedges or warnings about de-duplication’s impact on performance. We lead with it as a massive differentiator against those other guys. Tegile response – “dedupe is good for performance!”


Tegile’s Deduplication and Compression both reduce capacity requirements and speed up our arrays!
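For a rough sense of the math, here is a quick Python sketch using the figures quoted above. The 40TB logical data set and the 4:1 reduction ratio are hypothetical round numbers chosen to match the “up to 75% less capacity” claim, not measured results.

# Quick arithmetic with the figures cited above (illustrative only):
# dedupe + compression make the same raw SSD behave like a much larger cache.

RAW_SSD_TB = 1.2              # HA2100EP raw SSD capacity cited above
REDUCTION_RANGE = (3.0, 5.0)  # dedupe+compression multiplier cited above

effective_low = RAW_SSD_TB * REDUCTION_RANGE[0]
effective_high = RAW_SSD_TB * REDUCTION_RANGE[1]
print(f"Effective cache: ~{effective_low:.1f}-{effective_high:.1f} TB "
      f"from {RAW_SSD_TB} TB of raw SSD")

# The same multiplier works on the back end: a 4:1 reduction means buying
# roughly 75% less raw capacity for the same logical data set.
logical_tb = 40.0  # hypothetical logical data set size
reduction = 4.0    # assumed 4:1 reduction, matching "up to 75% less"
print(f"{logical_tb} TB logical fits in ~{logical_tb / reduction:.0f} TB raw "
      f"({(1 - 1 / reduction):.0%} less capacity purchased)")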

VM-Aware Storage
Last, Tintri asked George to write a bit about VM-Aware storage. That makes sense; it’s Tintri’s claim to fame. We like VM-Aware storage too. That’s why we have vCenter integration, are listed on VMware’s HCL, have VAAI integration, and, most recently, were highlighted by Steve Herrod, VMware’s CTO, who recognized Tegile as a key industry partner delivering value in storage and a leader in “VMware-Awareness.” Below is a picture of Steve talking about it at the 2012 VMworld keynote. I see our logo; is someone else’s missing?

VMware’s CTO, Steve Herrod, acknowledging Tegile and others for VM-Aware leadership at VMworld 2012

Tegile also has a very powerful reporting engine that is VM-Aware. IT administrators can drill into a Tegile array’s analytics and associate capacity and performance monitoring metrics with individual VMs. This really accelerates both troubleshooting and building economic models for chargeback, two powerful tools in a virtualized environment. Now, I’ll give credit where credit is due as well: I believe Tintri has some pretty slick reporting too, and overall does a good job at VM-Awareness. I think there are others in our space that are indeed laggards here. Tegile response – “Yep, VM-Aware, too.”
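To show how per-VM metrics can feed a chargeback model, here is a hypothetical Python sketch. The rates, VM names, and usage numbers are invented for illustration, and this is not Tegile’s reporting API; it simply shows that per-VM capacity and IOPS are exactly the inputs a simple cost model needs.

# Hypothetical chargeback sketch (invented rates and VM figures).

RATE_PER_GB_MONTH = 0.10     # assumed $ per GB-month of capacity
RATE_PER_K_IOPS_MONTH = 5.0  # assumed $ per 1,000 average IOPS per month

vm_metrics = {
    "web-01":   {"capacity_gb": 120,  "avg_iops": 800},
    "sql-01":   {"capacity_gb": 500,  "avg_iops": 4500},
    "vdi-pool": {"capacity_gb": 2000, "avg_iops": 12000},
}

def monthly_chargeback(capacity_gb, avg_iops):
    """Simple two-part tariff: capacity consumed plus performance consumed."""
    return capacity_gb * RATE_PER_GB_MONTH + (avg_iops / 1000.0) * RATE_PER_K_IOPS_MONTH

for vm, m in vm_metrics.items():
    cost = monthly_chargeback(m["capacity_gb"], m["avg_iops"])
    print(f"{vm:>8}: ${cost:,.2f}/month")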

So, to sum it up, I appreciate that Tintri had my friend George write this up. It gave me a platform to clear the air on how Tegile’s wicked-smart engineers have used a ZFS-based platform for many of the boring parts of storage (who really wants to write their own NFS stack these days, anyway?) and focused their time and energy on maximizing our value and differentiation in the market. The result is an incredibly cost-effective and screaming-fast storage system that is VERY VM-Aware, and it gives me a much more fun marketing job than my peers at other storage companies have.

If you want to see Tegile’s flash storage arrays in action, contact sales@tegile.com for a demo. We’ve got lots of really good customers that will let you know how much they love Tegile too. Just ask.

