The cache workflow was gradually changed in elliptics 126.96.36.199. We mean it.
The cache is a layer between the backend and user requests whose main goal is to reduce the number of disk I/O operations. All read/write/remove operations go through the cache even if the NOCACHE flag is set. NOCACHE means that the cache will not be populated from disk when the data is not already cached; if the key is already in the cache, it is updated anyway to maintain cache coherency. Mixing cache and no-cache operations for the same keys in client code is usually a bad design.
If data is written to the cache with DNET_IO_FLAGS_CACHE, it will be synced to disk after the cache_sync_timeout specified in the Elliptics configuration (30 seconds by default). Note that if the server receives a remove with the CACHE_ONLY flag, all not-yet-synced data is erased, so make sure not to mix CACHE-only and CACHE+CACHE_ONLY flags for the same keys: it may leave data unwritten to disk.
The cache layer consists of caches_number independent caches. Requests are distributed equiprobably between the caches to decrease latency caused by lock contention.
Each cache is implemented as a multi-page LRU, a simple yet powerful and flexible generalization of Segmented LRU. The cache consists of cache pages, which are lightweight LRU lists. Each cache page has a constant 'temperature' that characterizes the popularity of the records in that page, and pages are ordered by temperature. When data is first added to the cache, it gets into the top of the last ("coldest") page. Each time it is referenced again, it moves to the top of the next ("hotter") page; if it is already in the hottest page, it stays there. A similar scheme is used for eviction: when a cache page overflows, the record at the bottom of its LRU list is moved to the previous ("colder") page. When a record is evicted from the last page, it is removed from the cache.
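The promotion/demotion scheme above can be sketched as follows. This is an illustrative model, not the actual elliptics code: page count, page capacity and the string keys are assumptions made for the example.

```cpp
// Illustrative sketch of a multi-page LRU. Pages are ordered from
// coldest (index 0) to hottest (last index).
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <vector>

class multi_page_lru {
public:
    multi_page_lru(std::size_t pages, std::size_t page_capacity)
        : pages_(pages), capacity_(page_capacity) {}

    // First access puts a key at the top of the coldest page;
    // a repeated access promotes it to the top of the next hotter page.
    void touch(const std::string &key) {
        auto it = index_.find(key);
        std::size_t page;
        if (it == index_.end()) {
            page = 0; // new record starts in the coldest page
        } else {
            page = it->second.page;
            pages_[page].erase(it->second.pos);
            if (page + 1 < pages_.size())
                ++page; // promote, unless already in the hottest page
            index_.erase(it);
        }
        insert_top(key, page);
        rebalance();
    }

    bool contains(const std::string &key) const { return index_.count(key) != 0; }
    std::size_t page_of(const std::string &key) const { return index_.at(key).page; }

private:
    struct location {
        std::size_t page;
        std::list<std::string>::iterator pos;
    };

    void insert_top(const std::string &key, std::size_t page) {
        pages_[page].push_front(key);
        index_[key] = {page, pages_[page].begin()};
    }

    // On overflow a page demotes its LRU-tail record to the previous
    // (colder) page; eviction from the coldest page removes it entirely.
    void rebalance() {
        for (std::size_t p = pages_.size(); p-- > 0;) {
            while (pages_[p].size() > capacity_) {
                std::string victim = pages_[p].back();
                pages_[p].pop_back();
                index_.erase(victim);
                if (p > 0)
                    insert_top(victim, p - 1);
            }
        }
    }

    std::vector<std::list<std::string>> pages_;
    std::size_t capacity_;
    std::unordered_map<std::string, location> index_;
};
```

A record that is touched repeatedly climbs one page per access and then stays in the hottest page, while untouched records drift back toward the coldest page and finally out of the cache.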
The benefit of this strategy follows from simple logic: data that is accessed more frequently ends up in the hottest cache pages and is therefore less likely to be removed. The actual gain depends directly on the data-access pattern, which can be very specific to each application.
Elliptics works with the cache when the DNET_IO_FLAGS_CACHE or DNET_IO_FLAGS_CACHE_ONLY flag is set.
If neither of these flags is set, READ, WRITE, REMOVE and LOOKUP operations for a given ID still go through the cache if the key is present there; otherwise the commands go directly to the disk backend.
Here are examples of various operations with the DNET_IO_FLAGS_CACHE bit set.
Note that if an entry is not yet synced to disk, lookup will return the "old" data position, i.e. lookup returns information about the current on-disk location of the given object.
Once a second the cache runs a lifecheck thread that syncs data to disk and erases old data. It repeats the following sequence of actions:
This scheme is designed that way to avoid races over global cache lock.
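A single lifecheck pass can be modeled roughly like this. The structure and field names here are assumptions made for illustration; the real elliptics lifecheck operates on its internal structures, and the sync deadline corresponds to write time plus cache_sync_timeout.

```cpp
// A minimal model of one lifecheck pass: collect records due for disk
// sync and drop expired records. Illustrative only, not elliptics code.
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct cache_entry {
    std::uint64_t sync_time;   // when the record becomes due for disk sync
    std::uint64_t expire_time; // 0 means the record never expires
    bool synced = false;
};

std::vector<std::string> lifecheck_pass(std::map<std::string, cache_entry> &cache,
                                        std::uint64_t now) {
    std::vector<std::string> to_sync;
    for (auto it = cache.begin(); it != cache.end();) {
        cache_entry &e = it->second;
        if (!e.synced && e.sync_time <= now) {
            to_sync.push_back(it->first); // would be written to the backend
            e.synced = true;
        }
        if (e.expire_time != 0 && e.expire_time <= now)
            it = cache.erase(it); // old data is dropped from the cache
        else
            ++it;
    }
    return to_sync;
}
```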
There are several parameters for tweaking cache layer:
cache_size - total size of the cache layer in bytes
caches_number - number of caches (default: 16)
cache_pages_proportions - allows you to set the relative sizes of cache pages; for example, with cache_pages_proportions = 1:2:3 your pages will have sizes n, 2n and 3n, where n = cache_size / 6
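The split implied by the 1:2:3 example can be computed as below. This assumes the proportions are plain relative weights, which matches the example above; the function name is illustrative.

```cpp
// How cache_pages_proportions splits cache_size into per-page sizes,
// treating the proportions as relative weights.
#include <cstddef>
#include <numeric>
#include <vector>

std::vector<std::size_t> page_sizes(std::size_t cache_size,
                                    const std::vector<std::size_t> &proportions) {
    std::size_t total = std::accumulate(proportions.begin(), proportions.end(),
                                        static_cast<std::size_t>(0));
    std::vector<std::size_t> sizes;
    for (std::size_t p : proportions)
        sizes.push_back(cache_size / total * p); // n = cache_size / 6 for 1:2:3
    return sizes;
}
```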
While working with the cache we need to handle the following queries efficiently:
This problem could easily be solved using a search tree for find/erase requests and a priority queue (heap) to manage the nearest sync and timeout events, but that approach carries a lot of memory overhead. That is why we came up with a way to use a Cartesian tree instead. Each node of the tree is described by its key and priority, and the main property of the tree is that it is a binary search tree with respect to keys and a heap with respect to priorities. In our case the key is the record's id and the priority is the minimal event time related to that record. This way the root of the tree holds the next event to be processed, while we can still quickly find or erase any key. Moreover, this approach requires only 2 pointers per node instead of 6. A comparison of running times showed that the Cartesian tree is faster than std::set on inserts and erases and about the same on finds. The graph compares three implementations: std::set, a Cartesian tree on pointers, and a Cartesian tree on memory allocated in a vector.
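The core of the Cartesian tree can be sketched with the classic split/merge operations. This is a generic treap with a min-heap over event times, per the description above, not the elliptics implementation itself; the fixed-width types are assumptions.

```cpp
// Cartesian tree (treap): a binary search tree over record ids that is
// simultaneously a min-heap over event times, so the root always holds
// the next event to process. Only two child pointers per node.
#include <cstdint>

struct node {
    std::uint64_t key;      // record id
    std::uint64_t priority; // minimal event time for this record
    node *left = nullptr;
    node *right = nullptr;
    node(std::uint64_t k, std::uint64_t p) : key(k), priority(p) {}
};

// Split the tree into keys < k (l) and keys >= k (r).
void split(node *t, std::uint64_t k, node *&l, node *&r) {
    if (!t) { l = r = nullptr; return; }
    if (t->key < k) { split(t->right, k, t->right, r); l = t; }
    else            { split(t->left,  k, l, t->left);  r = t; }
}

// Merge two trees where every key in l is smaller than every key in r.
node *merge(node *l, node *r) {
    if (!l) return r;
    if (!r) return l;
    if (l->priority <= r->priority) { // min-heap on event time
        l->right = merge(l->right, r);
        return l;
    }
    r->left = merge(l, r->left);
    return r;
}

node *insert(node *t, node *n) {
    node *l, *r;
    split(t, n->key, l, r);
    return merge(merge(l, n), r);
}

node *erase(node *t, std::uint64_t key) {
    if (!t) return nullptr;
    if (t->key == key) {
        node *rest = merge(t->left, t->right);
        delete t;
        return rest;
    }
    if (key < t->key) t->left = erase(t->left, key);
    else              t->right = erase(t->right, key);
    return t;
}

node *find(node *t, std::uint64_t key) {
    while (t && t->key != key)
        t = key < t->key ? t->left : t->right;
    return t;
}
```

Because the heap is a min-heap on event time, popping the next sync/timeout event is just reading the root, while find and erase by record id use the ordinary binary-search-tree path.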