This is a post on DCIG’s site that Jerome Wendt was kind enough to permit Tegile to post here as well.
It was not that long ago, no more than five years, that a storage administrator who could configure a storage system to deliver average response times of around 2 milliseconds for any application was a hero to everyone he or she supported. Fast forward to today's hybrid and all-flash memory systems, and 2 millisecond response times are the new "slow." In this first installment of my interview series with Tegile Systems' VP of Marketing, Rob Commins, we discuss how hybrid and all-flash memory systems are redefining the "Gold" standard for performance in storage systems.
Jerome: In the past, storage system response times were measured in milliseconds (ms) with 2 ms often considered the “Gold” standard. Now I regularly hear about the new gold standard for response times being in the sub-millisecond range. How is Tegile meeting these heightened user expectations for storage system response times?
Rob: We have seen the same shift in market expectations. Last year Tegile had benchmarks run by an independent lab, which found that even our entry-level product, the HA2100, could deliver sub-2 millisecond response times. Further, depending upon the block sizes and the randomness of the IO, it could even achieve sub-1 millisecond response times. Tegile does that by placing a combination of a large pool of DRAM and solid state flash drives at the front end of our data path to deliver these very fast response times.
One key technology differentiator in Tegile's architecture is the inline deduplication and compression that we put in front of that cache. That allows us to reduce the footprint of data coming into the array.
Let's say on average we get a 5 to 1 data reduction ratio. Host applications then behave as though they have five times the amount of flash and DRAM in the system, and our hit rates on that cache go up by a factor of 5x as well. As a result, users of Tegile storage systems have a higher propensity to see sub-2 and even sub-1 millisecond response times.
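The arithmetic behind that claim is straightforward. The sketch below is purely illustrative (the capacity figures are hypothetical, not taken from any Tegile spec sheet): once inline deduplication and compression shrink incoming data, the same physical cache holds proportionally more logical data.

```python
# Illustrative sketch: how inline data reduction multiplies the effective
# capacity of a DRAM + flash cache tier. All numbers are hypothetical.

def effective_cache_gb(physical_gb, reduction_ratio):
    """Logical data that fits in cache once dedup/compression shrinks it."""
    return physical_gb * reduction_ratio

# Hypothetical hybrid array: 48 GB DRAM plus 400 GB of flash cache.
physical = 48 + 400

for ratio in (1, 3, 5):
    print(f"{ratio}:1 reduction -> {effective_cache_gb(physical, ratio)} GB effective cache")
```

With a 5:1 reduction ratio, a 448 GB physical cache behaves like roughly 2,240 GB of logical cache, which is why a working set that would otherwise spill to disk can stay in the fast tier.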
Jerome: Tegile has both hybrid and flash arrays so is it realistic for users to expect the same type of consistent performance across both of these two types of arrays?
Rob: Tegile has both all flash systems and hybrid storage systems. Purely by configuration, Tegile has three different flavors of hybrid arrays that deliver 30,000, 75,000, and 125,000 IOPS respectively. Then our all flash array delivers 200,000 IOPS.
Tegile helps customers right-size into the amount of performance they need. Across the board, however, they consistently get those sub-2 and sub-1 millisecond response times.
Jerome: You used the word “consistent” more than once when describing the sub-millisecond response. How consistent is application performance when using a Tegile system?
Rob: First and foremost, all of our writes land in high-performance memory and flash, so you are always going to get those types of response times out of the system on a write. Our caching algorithms flush a small portion of the stalest data in the cache down to disk. At the end of the day, because it is a cache and not a tier, everything eventually lands on spinning disk. Our caching algorithms keep the hottest data in cache, and because the cache is effectively multiplied by our data reduction rates, we keep a lot of content in it.
On the read side, if data is not in cache, the system goes down to disk to get it. Having a large pool of spindles and striping the data across those spindles keeps response times very low. Our customers are not coming back to us saying that they see periodic dips in performance.
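The read path Rob describes can be sketched in a few lines. This is a minimal model under stated assumptions, not Tegile's implementation: reads are served from an in-memory cache when possible, and a miss falls through to the backing disk tier and promotes the block into cache.

```python
# Minimal read-path sketch (illustrative only, not Tegile's code):
# serve reads from cache at DRAM/flash speed; on a miss, fall back to
# the disk tier, where striping across many spindles bounds latency.

class HybridReadPath:
    def __init__(self, disk):
        self.cache = {}    # block id -> data (hot, cached blocks)
        self.disk = disk   # backing store holding every block

    def read(self, block_id):
        if block_id in self.cache:      # cache hit: fast path
            return self.cache[block_id]
        data = self.disk[block_id]      # cache miss: go to spindles
        self.cache[block_id] = data     # promote hot data into cache
        return data
```

A real array would add an eviction policy that flushes the stalest blocks back to disk when the cache fills; this sketch omits that to keep the hit/miss distinction clear.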
Jerome: Which of these arrays most cleanly maps into a user environment?
Rob: What Tegile has done, from both an engineering standpoint and a selling-motion standpoint, is target aggressive virtual server consolidation, virtual desktops, and SQL database optimization. Those are the key areas we sell into.
We design our benchmarks to reflect the IO patterns of those three use cases as closely as possible. You can optimize a benchmark so a storage system is running downhill with the wind behind it. Instead, we try to synthesize or use real-world customer applications to characterize our published IO results.
Jerome: So the IO numbers you talked about earlier will closely reflect what people are going to see in the real world?
Rob: Yes. Tegile has had third parties do the testing. It is not our own technical marketing people doing it. We do not have a rudder in the water biasing the final published results.
Jerome: Since Tegile has a number of storage systems (hybrid and flash), what are some steps that Tegile takes to help customers make the right choice for their environment?
Rob: The first thing we do actually takes a little bit of work. At the front end of a customer engagement we try to estimate their aggregate appetite for IOPS and then right-size them into one of our four systems.
For example, we know virtual desktop environments want significantly more cache than, say, a file server, so we bias up the configuration for those applications. We then go one step further. Once a system is deployed on the customer's data center floor, our provisioning tools can characterize each individual volume for its particular use case.
If we know that a particular volume is being used for virtual desktop boot volumes or a database redo log, the system will automatically bias more cache to that volume. At the complete opposite end of the spectrum, if we know it is a backup or archive target volume, we will write past the cache and straight to spinning disk, freeing up that cache for the applications that want and need high performance.
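That per-volume behavior amounts to a small policy table. The sketch below uses hypothetical volume types and weights (none of these names come from Tegile's tools): latency-sensitive volumes get a larger share of cache, while backup and archive volumes bypass it entirely ("write-around").

```python
# Hedged sketch of per-volume cache policy. Volume types, weights, and
# function names are illustrative assumptions, not Tegile's actual API.

POLICIES = {
    "vdi_boot":   {"cache_weight": 3, "write_around": False},
    "db_redo":    {"cache_weight": 3, "write_around": False},
    "file_share": {"cache_weight": 1, "write_around": False},
    "backup":     {"cache_weight": 0, "write_around": True},
}

def should_bypass_cache(volume_type):
    """Backup/archive volumes write straight to spinning disk."""
    return POLICIES[volume_type]["write_around"]

def cache_share_gb(volume_type, total_cache_gb, volumes):
    """Split cache among volumes in proportion to their policy weight."""
    total_weight = sum(POLICIES[v]["cache_weight"] for v in volumes)
    if total_weight == 0:
        return 0.0
    weight = POLICIES[volume_type]["cache_weight"]
    return total_cache_gb * weight / total_weight
```

With 400 GB of cache shared by a VDI boot volume, a file share, and a backup target, the weights above give the boot volume 300 GB, the file share 100 GB, and the backup target nothing, since its writes go straight to disk.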
In part 2 of this blog interview series, Rob and I discuss when users should consider switching from a hybrid to a flash array and what role HDDs will play in storage arrays going forward.