Hurry up and Get “In-line” to use Flash

Few of us relish rushing to get into a line only to hurry up and wait for service. Imagine arriving at the express checkout lane of the grocery store, and the cashier does not scan any prices as your food goes into the bags. You would naturally ask how you are supposed to know what this is going to cost, especially if the store is applying discounts you have earned. The diligent cashier explains that the store will go through your bags after everything is in the cart, and only then will you see the cost and the savings. Since scanning your bagged groceries may take a while, you can simply pay a flat fee now, and later on the store will try to refund the difference.

This analogy might sound completely absurd, but it is very close to what many storage vendors are offering these days. They have not built their systems to do data reduction in-line as the data is written, so they have to schedule a process to run some number of hours or days later. Now you have to wait to see how much savings you may or may not get back, and hope that this process does not slow down your applications too much.

Express-Lane

Space optimization technologies that do all the work after the data is written are fraught with several other ugly questions. When do I schedule it, and how long will it take? What will the performance impact be when it kicks off, and how much space will actually be reclaimed? What if other services such as backup or replication are running at the same time? What does this do to the longevity of my expensive flash media if I am constantly rewriting it? How do I know which data to select for this post-processing?

There are even implementations in the market from some of the established vendors that put data to be deduplicated or compressed into special constructs in the array. Normal data is not selected for processing by default; only inactive, "colder" data sets are. Not only does this result in inefficient space savings, it does nothing to reduce wear on the flash media. The most effective wear-leveling technique is simply to write less data in the first place.
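To put rough numbers on that claim, here is a quick back-of-the-envelope sketch of how an in-line reduction ratio translates directly into fewer writes hitting the flash. The ratios and volumes are assumptions chosen purely for illustration, not measurements from any particular array:

```python
# Back-of-the-envelope illustration with assumed numbers, not measured results.
daily_host_writes_tib = 10            # what the hosts send to the array per day (assumed)
assumed_inline_reduction = 4.0        # assumed combined dedupe + compression ratio (4:1)

physical_writes_tib = daily_host_writes_tib / assumed_inline_reduction
print(f"Flash absorbs {physical_writes_tib:.1f} TiB/day instead of {daily_host_writes_tib} TiB/day")

# A post-process approach lands the full 10 TiB on flash first, then rewrites
# blocks again during the later reduction pass, so the media wears faster.
```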

Arrays need to support an in-line CCDC cycle. No, this is not a special branch of the Centers for Disease Control, but rather a Compress, Checksum, De-duplicate and Coalesce processing pipeline that allows the most efficient use of the media in the array (flash and/or spinning media) in terms of performance, price and resulting usable capacity. The enabling technology behind this approach is intelligent metadata handling. The metadata itself needs preferential treatment in terms of the media it is written to. Virtually every IO operation on the array results in some metadata IO as well. The ability to place metadata on storage media separate from the data might sound obvious, but it is actually quite a profound innovation. The metadata must also have highly optimized code paths within the storage operating system; the OS should not treat every system call the same regardless of whether it involves data or metadata.
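To picture what such a pipeline does on every write, here is a minimal sketch in Python. Everything in it is hypothetical: the class and method names are invented for illustration, and it glosses over the hard parts (crash consistency, garbage collection, real media layout). It is a sketch of the general technique, not a description of any vendor's implementation:

```python
# Hypothetical sketch of an in-line CCDC write path:
# Compress, Checksum, De-duplicate, Coalesce.
import hashlib
import zlib

class InlineCCDCPipeline:
    def __init__(self, coalesce_threshold=8):
        self.dedupe_index = {}     # checksum -> physical block id
        self.metadata = {}         # logical id -> (checksum, physical id)
        self.pending = []          # compressed blocks waiting to be coalesced
        self.next_physical_id = 0
        self.coalesce_threshold = coalesce_threshold

    def write(self, logical_id, data: bytes):
        compressed = zlib.compress(data)                     # 1. Compress before anything hits media
        checksum = hashlib.sha256(compressed).hexdigest()    # 2. Checksum the compressed block
        if checksum in self.dedupe_index:                    # 3. De-duplicate: identical block exists
            physical_id = self.dedupe_index[checksum]
        else:
            physical_id = self._stage(compressed)            # 4. Coalesce: stage for one large write
            self.dedupe_index[checksum] = physical_id
        # Metadata is updated on every write; in a real array this lands on
        # media dedicated to metadata, on its own optimized code path.
        self.metadata[logical_id] = (checksum, physical_id)

    def _stage(self, compressed_block):
        physical_id = self.next_physical_id
        self.next_physical_id += 1
        self.pending.append((physical_id, compressed_block))
        if len(self.pending) >= self.coalesce_threshold:
            self._flush()
        return physical_id

    def _flush(self):
        # One sequential write of many small blocks instead of many small writes.
        batch, self.pending = self.pending, []
        # write_to_media(batch) would go here in a real system
```

The point of the sketch is the ordering: a block is compressed and fingerprinted before it ever reaches the media, duplicates are never written at all, unique blocks are coalesced into large sequential writes, and the per-write metadata update travels on a separate path from the data.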

Doing the CCDC cycle on everything, including the system cache, further optimizes performance and efficiency. Applying these processes to cache as well as to data makes the effective cache much larger than what is physically allocated, so a much higher percentage of IOs are served from cache. The net result is the ability to do in-line deduplication and compression without taking a performance hit, and in fact to increase performance thanks to more cache hits.
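A simple way to see why the cache gets "bigger" is to run the arithmetic. The cache size and reduction ratio below are assumptions chosen purely for illustration:

```python
# Illustrative only: assumed figures, not measurements.
physical_cache_gib = 64               # physical cache in the system (assumed)
assumed_reduction_ratio = 3.0         # assumed combined compression + dedupe ratio (3:1)

effective_cache_gib = physical_cache_gib * assumed_reduction_ratio
print(f"{physical_cache_gib} GiB of physical cache behaves like "
      f"{effective_cache_gib:.0f} GiB of effective cache")

# More logical data resident in the same physical cache means a higher cache
# hit rate, which is where the performance gain comes from.
```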

Getting a good spot in the express checkout line is fine so long as the items in your cart are processed in-line and very quickly. The other vendors will direct you to the section of the store where they are still going through your bags to see whether you really saved anything. Hurry up; there is a long line back there.

