Reverbrain wiki

Elliptics layers and cache

Layered design

Elliptics long ago became not just a storage system but a processing engine. It consists of several layers, and only the lowest one is the actual storage backend (elliptics has 3 different storage backends to date, and it is rather trivial to create your own).


The highest is the routing layer, where the notion of a group exists. A group is basically a data replica: a set of nodes logically bound to each other (usually by the admin's decision) and identified by a group number.

A group may consist of several nodes, which together form the well-known hash ring. This is the layer where the DHT lives. If your groups are small enough (we have production installations with hundreds of single-node groups, where each node is not even a whole server but a smaller instance), then no DHT is involved in your setup.

That is the routing layer: a client issues a command (read, write, execute, stat or anything else) with an ID, which consists of a group number and an object ID. The latter is usually the sha512 of some string you used when calling the high-level API, but it is possible to specify it manually (object IDs are limited to 512 bits by default). A command with a given ID is routed to the appropriate node in the storage and executed there.
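The lookup itself can be illustrated with a minimal consistent-hashing sketch. This is not elliptics's real route table (which is richer and updated via dedicated commands); the node names and ring layout below are purely hypothetical, but the idea - sha512 the key, then walk the ring to the owning node - is the same:

```python
import hashlib
from bisect import bisect_right

class HashRing:
    """Toy hash ring: each node sits at the sha512 of its address."""

    def __init__(self, nodes):
        self.ring = sorted(
            (int.from_bytes(hashlib.sha512(n.encode()).digest(), "big"), n)
            for n in nodes
        )
        self.points = [p for p, _ in self.ring]

    def route(self, oid):
        # Walk clockwise to the first node at or after the object ID,
        # wrapping around to the start of the ring.
        idx = bisect_right(self.points, oid) % len(self.ring)
        return self.ring[idx][1]

def object_id(key):
    # Elliptics derives the object ID as sha512 of the key (512 bits).
    return int.from_bytes(hashlib.sha512(key.encode()).digest(), "big")

# Hypothetical single group of three nodes.
ring = HashRing(["node-a:1025", "node-b:1025", "node-c:1025"])
node = ring.route(object_id("user/session/42"))
```

The same key always hashes to the same point on the ring, so every client with the same route table picks the same node without any coordination.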


Several commands are reserved for the elliptics core, such as route table updates and the low-level node joining protocol, but the rest are directed to the IO layer or the backend layer. Basically, all other commands (including those not listed in the elliptics API - you can create your own commands) are pushed down to the backend, which decides what to return.

In-memory cache

We use a multi-page generalization of Segmented LRU in elliptics. It gives you a flexible way to tune the cache for your specific I/O workload pattern. The core intercepts read/write/del commands and, if the IO request contains cache flags, also updates records in the cache. Read and del commands always go to the cache first, while a write may go to both cache and disk or to the cache only (depending on the IO flags).
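To make the eviction policy concrete, here is a minimal two-segment LRU sketch (a "probationary" segment for entries seen once and a "protected" segment for entries hit at least twice). Elliptics generalizes this to several pages with adjustable sizes; the class below only illustrates the promotion/demotion idea and is not elliptics code:

```python
from collections import OrderedDict

class SegmentedLRU:
    """Two-segment LRU: one hit keeps you probationary, two protect you."""

    def __init__(self, prob_size, prot_size):
        self.prob = OrderedDict()   # entries seen once
        self.prot = OrderedDict()   # entries hit at least twice
        self.prob_size = prob_size
        self.prot_size = prot_size

    def get(self, key):
        if key in self.prot:
            self.prot.move_to_end(key)           # refresh recency
            return self.prot[key]
        if key in self.prob:
            value = self.prob.pop(key)           # promote on second hit
            self.prot[key] = value
            if len(self.prot) > self.prot_size:  # demote LRU protected entry
                old_key, old_val = self.prot.popitem(last=False)
                self.prob[old_key] = old_val
                self._evict_prob()
            return value
        return None

    def put(self, key, value):
        if key in self.prot:
            self.prot[key] = value
            self.prot.move_to_end(key)
            return
        self.prob[key] = value
        self.prob.move_to_end(key)
        self._evict_prob()

    def _evict_prob(self):
        while len(self.prob) > self.prob_size:
            self.prob.popitem(last=False)        # evict least recent
```

The benefit over plain LRU is scan resistance: a burst of one-shot reads churns only the probationary segment, while frequently re-read records stay protected.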

It is also possible to specify a lifetime timeout in seconds for every object written into the cache. By default it equals zero, which means infinite lifetime.

It is also possible to remove a cached entry not only from the in-memory cache but also from disk, either when the expiration timeout fires or when you remove the object by hand. The C++/Python APIs were extended to support this kind of operation.
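The cache-plus-disk expiry behavior can be sketched as follows. The `CachedStore` class and its `write`/`read`/`remove` names are hypothetical (a plain dict stands in for the disk backend); only the semantics - timeout zero means infinite lifetime, and expiry deletes the record from both cache and disk - follow the description above:

```python
import time

class CachedStore:
    """Sketch: cache entries carry a deadline; expiry also deletes from disk."""

    def __init__(self, disk):
        self.disk = disk            # dict standing in for the disk backend
        self.cache = {}             # key -> (value, deadline or None)

    def write(self, key, value, timeout=0):
        # timeout == 0 means infinite lifetime, as in elliptics.
        deadline = time.time() + timeout if timeout else None
        self.cache[key] = (value, deadline)
        self.disk[key] = value

    def read(self, key):
        entry = self.cache.get(key)
        if entry is None:
            return self.disk.get(key)      # cache miss falls through to disk
        value, deadline = entry
        if deadline is not None and time.time() >= deadline:
            self.remove(key)               # expiry drops cache AND disk copy
            return None
        return value

    def remove(self, key):
        # Manual removal behaves like expiry: both copies go away.
        self.cache.pop(key, None)
        self.disk.pop(key, None)
```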

Use case

The ability to expire or remove cached entries and drop them not only from memory but also from disk storage is extremely useful for cases like a session store. You generally do not want to lose sessions and want them backed up to disk (otherwise you could just use a plain in-memory cache), but you also know that the in-memory pool is only large enough for the sessions of frequently active users.

elliptics/layers.txt · Last modified: 2013/11/26 19:43 by acid