I disagree with the other reply indicating that something like this should not be used in production. For most of the history of practical disk I/O, it was observed (and assumed) that disk reads would be much faster than disk writes. That assumption rested on other assumptions, chiefly that most reading and writing would be "random I/O": a physical head over an actual spinning platter might have to seek to an arbitrary position at any moment to read or update some data.
Riak (the inspiration for this project) and other projects came out at a time when engineers were exploring how to make disk writes fast, and potentially even faster than reads, for practical applications. Two of the tradeoffs used to get there: force all writes to be sequential ("log-structured", in Riak, Kafka, and Cassandra parlance) and embrace the model of "eventual consistency".
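To make "log-structured" concrete, here's a minimal Go sketch of an append-only write path. The function name appendRecord and the record layout are mine for illustration, not from Riak or this project. The point is that an update never overwrites anything in place: every record lands after the previous one, so the disk only ever does sequential writes, and an in-memory index remembers where each key lives.

    package main

    import (
        "encoding/binary"
        "fmt"
        "io"
        "os"
    )

    // appendRecord appends a length-prefixed key/value pair to the end of
    // the log file and returns the offset where the record starts. An
    // in-memory index (what Bitcask calls the "keydir") would remember
    // that offset so reads can jump straight to it later.
    func appendRecord(f *os.File, key, value []byte) (int64, error) {
        off, err := f.Seek(0, io.SeekEnd) // always the end: never seek back
        if err != nil {
            return 0, err
        }
        var hdr [8]byte
        binary.BigEndian.PutUint32(hdr[0:4], uint32(len(key)))
        binary.BigEndian.PutUint32(hdr[4:8], uint32(len(value)))
        for _, chunk := range [][]byte{hdr[:], key, value} {
            if _, err := f.Write(chunk); err != nil {
                return 0, err
            }
        }
        return off, nil
    }

    func main() {
        f, err := os.OpenFile("data.log", os.O_CREATE|os.O_RDWR, 0o644)
        if err != nil {
            panic(err)
        }
        defer f.Close()
        off, err := appendRecord(f, []byte("truck-17"), []byte("40.71,-74.00"))
        fmt.Println("record written at offset", off, "err:", err)
    }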
Eventual consistency is similar to how orders are processed at a cafe or fast-food restaurant. The cashier takes your order and passes it on to the barista or chef; we'll just say "kitchen". The kitchen might not know about your order at that moment, but it's sitting right there nearby (in our case: in a RAM buffer, queued for a disk write). Once the kitchen has finished the orders ahead of yours (the sync interval is reached), it makes your order and delivers it to the counter (the data actually gets written to disk: "committed", in DB terms).
The key point of the analogy is that the cashier station (the system's front end) doesn't wait around for your order to be made before taking other orders. It assumes all is well and that the kitchen will serve your order "soon enough".
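Here's a rough Go sketch of that sync-interval pattern; the file name and the 100ms interval are invented for illustration. The write call returns to the "cashier" as soon as the OS has buffered the bytes, and a background loop makes everything durable at the next tick.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        f, err := os.OpenFile("orders.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
        if err != nil {
            panic(err)
        }

        // The "kitchen": a background loop that flushes OS buffers to disk
        // on a fixed sync interval, making everything buffered so far durable.
        go func() {
            for range time.Tick(100 * time.Millisecond) {
                f.Sync()
            }
        }()

        // The "cashier": hands the order off and moves on immediately. The
        // write returns once the OS has buffered the bytes; if the machine
        // crashes within the next interval, this record is lost. That is
        // the tradeoff being accepted.
        if _, err := f.Write([]byte("GPS truck-17 40.71,-74.00\n")); err != nil {
            panic(err)
        }
        fmt.Println("acknowledged to the caller before fsync")

        time.Sleep(250 * time.Millisecond) // let a sync tick pass before exiting
        f.Close()
    }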
When might these tradeoffs make sense for production systems? Answer: not all data is created equal. For example, if your system stores a steady stream of GPS coordinates from package delivery trucks so customers can see when a truck is near their house, it doesn't actually matter if one or two of the coordinates aren't immediately available (or even get lost). The same goes for backend telemetry showing CPU or RAM utilization: the trend is what matters, and it's not important in any particular real-time instant whether the dashboard chart is missing the last three readings (because they haven't been written to disk yet). In cases like these, ACID guarantees (the traditional DB term) are not only not required, they get in the way of a sensible system design and implementation.