Running applications at the speed of memory

Businesses are in a never-ending quest to make their applications run faster. It’s become human nature for us to expect instantaneous response times. After all, faster IT services ultimately improve worker productivity, enhance customer service, and increase revenue.

To eke out additional performance, IT managers buy faster CPUs, add more memory, re-architect application workflows, and so on. But chances are, it’s the aging storage system that deserves most of the blame.

Wouldn’t it be nice if your storage performed at the speed of memory?

Enter Persistent Memory

Persistent memory technology is fundamentally changing the storage industry. Persistent memory, or PM, is any non-volatile solid-state memory that can retain data for a sustained period of time without refreshing the data in the device. PM comes in various form factors, including DIMM form-factor (e.g. NVDIMMs) and SSD form-factor (e.g. NAND flash, 3D XPoint).

Today, Tegile uses a combination of DRAM, NVDIMMs, and high-density storage media in its IntelliFlash multi-tiered architecture. This gives users memory-speed performance with much better economics.

Within each of our arrays, there’s a performance tier and a capacity tier. In the performance tier we use DRAM, NVDIMMs, and NAND flash. This gives users the highest performance and lowest latency for their mission-critical applications. The capacity tier takes full advantage of the evolution in media densities. In this tier we use either HDDs (e.g. IntelliFlash Hybrid Arrays) or NAND flash (e.g. IntelliFlash All-Flash Arrays and IntelliFlash HD).

The performance tier is logically divided into three sections: the Write Cache, Read Cache, and Metadata. Let’s dive deeper into what each of these three sections does:

Write Cache

Since Tegile arrays use a combination of DRAM, NVDIMM, and NAND flash in the performance tier, they can process a write I/O extremely fast. The moment an I/O hits memory, it’s immediately synced to NVDIMM in the Write Cache, and an acknowledgement is sent from the controller to the client. This process takes hundreds of microseconds, a significant performance gain over storage systems that wait to send an acknowledgement until the data hits NAND flash.
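The acknowledge-on-NVDIMM idea can be sketched in a few lines. This is a minimal illustration, not Tegile's implementation; the class and field names are invented for clarity:

```python
# Illustrative sketch: a write is acknowledged as soon as it is safely
# captured in persistent NVDIMM, long before it reaches the NAND flash
# capacity tier. Names and structures are hypothetical.
class WriteCache:
    def __init__(self):
        self.dram_buffer = []   # volatile staging area (DRAM)
        self.nvdimm_log = []    # persistent write log (NVDIMM)

    def write(self, block):
        self.dram_buffer.append(block)  # I/O lands in memory first
        self.nvdimm_log.append(block)   # synced to NVDIMM: now persistent
        return "ACK"                    # client is acknowledged here,
                                        # not after the data reaches flash
```

The key design point is where the `return "ACK"` sits: persistence is guaranteed by the NVDIMM log, so the slower flush to flash can happen asynchronously.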

Meanwhile, the array processes the data in memory: calculating a checksum, compressing the data, checking dedupe pointers, and so on. The data is then coalesced and streamed sequentially to the capacity tier.
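A rough sketch of that in-memory pipeline, assuming a simple checksum-keyed dedupe index (the function names and the single global index are illustrative, not the product's actual data structures):

```python
import hashlib
import zlib

# Hypothetical in-memory write pipeline: checksum, dedupe check,
# compression, then coalescing into one sequential stream.
dedupe_index = {}  # checksum -> location of an existing identical block

def process_block(data, location):
    checksum = hashlib.sha256(data).hexdigest()
    if checksum in dedupe_index:
        # Duplicate data: record a pointer instead of storing it again.
        return ("pointer", dedupe_index[checksum])
    dedupe_index[checksum] = location
    # Unique data: compress it before it heads to the capacity tier.
    return ("data", zlib.compress(data))

def coalesce(blocks):
    # Join the unique payloads into one buffer so they can be
    # streamed sequentially rather than written as random I/O.
    return b"".join(payload for kind, payload in blocks if kind == "data")
```

Coalescing matters because sequential streams play to the strengths of both HDDs and NAND flash in the capacity tier.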

To ensure availability, the NVDIMMs are mirrored across controllers, and in the rare event of a controller failure, the I/O is sent to NAND flash instead.
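The mirror-with-fallback behavior can be expressed as a small sketch. This is an assumption-laden simplification (two controllers, a single fallback log), not the actual HA logic:

```python
# Illustrative sketch of NVDIMM mirroring across two controllers,
# falling back to NAND flash when the partner controller is down.
class Controller:
    def __init__(self, name):
        self.name = name
        self.nvdimm = []   # this controller's persistent write log
        self.alive = True

def mirrored_write(block, local, partner, nand_log):
    local.nvdimm.append(block)
    if partner.alive:
        partner.nvdimm.append(block)  # normal path: mirror to peer NVDIMM
    else:
        nand_log.append(block)        # rare failover path: persist to flash
```

Either way the write ends up on two independent persistent devices before it is considered safe.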


Read Cache

There are two levels of Read Cache in a Tegile array—both are self-tuning. The first level is the primary DRAM cache. The second level resides on NAND flash. The array proactively fetches and populates the Read Cache with the hottest data. This all happens dynamically in real time, using intelligent pre-fetch algorithms—without user intervention. In most customer cases, the cache-hit ratio is well over 90%, meaning most read requests are served from DRAM and flash.
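The two-level structure can be sketched as a small DRAM cache in front of a larger flash cache. This uses plain LRU eviction as a stand-in for the array's self-tuning algorithms; sizes, policy, and names are illustrative only:

```python
from collections import OrderedDict

# Hypothetical two-level read cache: a small DRAM LRU (level 1) in front
# of a larger flash LRU (level 2), with the capacity tier as the backstop.
class TwoLevelCache:
    def __init__(self, dram_size, flash_size):
        self.dram = OrderedDict()
        self.flash = OrderedDict()
        self.dram_size, self.flash_size = dram_size, flash_size
        self.hits = self.misses = 0

    def get(self, key, read_from_capacity_tier):
        if key in self.dram:                  # level 1: DRAM hit
            self.dram.move_to_end(key)
            self.hits += 1
            return self.dram[key]
        if key in self.flash:                 # level 2: flash hit
            self.hits += 1
            value = self.flash.pop(key)
        else:                                 # miss: read from HDD/flash pool
            self.misses += 1
            value = read_from_capacity_tier(key)
        self._promote(key, value)
        return value

    def _promote(self, key, value):
        self.dram[key] = value
        if len(self.dram) > self.dram_size:   # demote coldest DRAM entry
            old_key, old_val = self.dram.popitem(last=False)
            self.flash[old_key] = old_val
            if len(self.flash) > self.flash_size:
                self.flash.popitem(last=False)
    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

With a hot working set that fits in the two cache levels, `hit_ratio()` climbs toward 1.0, which is the effect behind the 90%+ figure above.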

Metadata Acceleration

IntelliFlash also dedicates a portion of the performance tier exclusively for metadata. This includes block pointers, dedupe pointers, compression type, and the like. All of the metadata is organized, aggregated, and remains in the performance tier. This performance-optimization technique stands in stark contrast to traditional storage systems, which intersperse metadata with the rest of the data on disk. Over time, as data inevitably gets modified, deleted, and rewritten, the metadata becomes very fragmented, which negatively impacts performance.
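The benefit of aggregated metadata is that locating a block never requires extra trips to the capacity tier. A minimal sketch, assuming a simple in-memory map (the field names are invented, not IntelliFlash's on-disk format):

```python
# Illustrative sketch: all block metadata (pointers, compression type,
# dedupe references) lives aggregated in the fast tier, so resolving a
# block's location is a pure fast-tier lookup.
metadata = {}  # block_id -> {"ptr": ..., "compression": ..., "dedupe": ...}

def locate(block_id):
    entry = metadata.get(block_id)   # resolved entirely in the fast tier
    if entry is None:
        raise KeyError(block_id)
    return entry["ptr"]              # exactly one capacity-tier read follows
```

In the interspersed-metadata design this sketch contrasts with, the lookup itself can require reads from fragmented regions of the slow tier before the data read even starts.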

ONE Platform, Multiple Grades of Storage

The IntelliFlash architecture is built so that we can add essentially any type of storage media to the performance and capacity tiers. We talked about using NVDIMMs today. In the near future, we will incorporate NVMe.

For more information on how IntelliFlash works, visit www.tegile.com/intelliflash.

