I don't think it is "focused" on the JSON part of Redis; they have implemented a lot of Redis commands, not only JSON. They are also working on vector storage, for instance.
As for compatible drivers, if you mean "clients": in our case we use the official Python Redis client with Kvrocks, and it works perfectly with the commands we use.
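A minimal sketch of what that looks like (host and port are placeholders; 6666 is Kvrocks' default port, and since Kvrocks speaks the Redis protocol, redis-py connects to it like to any Redis server):

```python
import redis

# Placeholder host/port; 6666 is Kvrocks' default listening port.
r = redis.Redis(host="localhost", port=6666, decode_responses=True)

# Plain Redis commands work unchanged against Kvrocks.
r.set("user:42:name", "alice")
print(r.get("user:42:name"))  # -> "alice"

r.hset("user:42", mapping={"name": "alice", "plan": "pro"})
print(r.hgetall("user:42"))   # -> {"name": "alice", "plan": "pro"}
```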
We've been using it in production for a year now, and so far so good. We couldn't handle having huge instances with lots of RAM just to hold data in Redis anymore.
It looks like a normal database to me: disk storage for most of the data, plus an in-memory cache to speed up read queries. Everything is customizable.
I'm still waiting for a nice way to deploy it in a Kubernetes cluster, like a Helm chart to easily set up a cluster with primaries and replicas. Also, the lack of key eviction policies like LRU is problematic for us in some cases; it would be a nice addition.
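For comparison, here is the knob we'd love an equivalent of, as it exists in stock Redis (a redis-py sketch against a plain Redis server, not Kvrocks):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# In stock Redis, LRU eviction under memory pressure is just configuration:
r.config_set("maxmemory", "4gb")
r.config_set("maxmemory-policy", "allkeys-lru")
```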
I've been testing Kvrocks as a replacement for a Scylla use case where we have TBs of data and uniform TTLs. In many of these LSM DBs, compaction is the thing that really kills throughput, making TTLs (or really any updates/deletes) difficult.
Kvrocks doesn't expose it directly right now, so it requires code changes, but so far I've had good success with FIFO compaction: when an SSTable gets old enough, it just gets dropped. Since our writes arrive with uniform TTLs, the oldest files contain only expired data, so dropping whole files replaces the rewrite-heavy compaction work.
I guess a senior engineer might be "linked" to a single kind of task (backend, frontend, etc.), while a staff engineer has knowledge across many domains and can be the "bridge" for projects that need people from many different teams.
Instead of taking an image from the video every 5 seconds and embedding it, you could detect when there are enough changes between frames to decide whether to embed. One frame, one scene, one vector.
For instance, FFmpeg can do that with the filter `select=gt(scene,0.3)`: it selects the frames whose scene-change detection score is greater than 0.3 (the score is a value between 0 and 1).
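A sketch of that extraction step driven from Python (file names are made up; assumes ffmpeg is on the PATH):

```python
import subprocess

# Keep only frames whose scene-change score exceeds 0.3, writing each
# selected frame as a JPEG. "-vsync vfr" makes ffmpeg emit only the
# selected frames instead of duplicating frames for a constant rate.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-vf", "select='gt(scene,0.3)'",
    "-vsync", "vfr",
    "scene_%04d.jpg",
], check=True)
```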
That's the only way to do it; you can't index the whole thing. The challenge is chunking: there are several different algorithms for chunking content before vectorization, each with its own pros and cons.
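The simplest of those algorithms is fixed-size chunking with an overlap, so a concept that straddles a boundary still appears whole in at least one chunk; a minimal sketch (the sizes are arbitrary):

```python
def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks; consecutive chunks share
    `overlap` characters so boundary-spanning concepts stay intact."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Smarter variants (sentence-aware, heading-aware, or semantic chunking on embedding similarity) trade that simplicity for better boundaries.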
As far as I understand it, long context degrades LLM performance, so just because an LLM "supports" a large context length doesn't mean it uses it well: in practice it tends to pick up the top and bottom of the context and skip over the middle bits.
Why would you want chunks that big for vector search? Wouldn't there be too much information in each chunk, making it harder to match a query to a concept within the chunk?
PHP is a great language for learning OOP (classes, interfaces, abstract classes, traits), dependency management, and unit testing. I'm not using it anymore, but I learned basically everything with it a decade ago. Thanks, PHP!
It performs better and makes different design choices (for example, SableDb uses a tokio local task per connection, and in general it uses green threads to keep the code readable and easy to maintain).
I will release some design documents later on (hopefully this month). Remember that it's a one-man project (hopefully not for long), so it takes time to organize everything :)
I like the idea of thread-local execution of tokio tasks; I assume that means SableDb is mostly single-threaded? Was this to reduce complexity, or for some other reason? I'm looking forward to the design doc on this!
It is multi-threaded (and configurable: you can set a specific number of workers in the configuration file, or use the magic value 0, in which case SableDb decides based on the number of cores divided by two).
Each incoming connection is assigned to a worker thread, and two tokio tasks are created for the connection (one for reading and another for writing).
Using tokio allowed me to write `async` code without "callback hell", so the code looks clean and readable at a glance, without the need to follow callbacks.
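To make the shape concrete for non-Rust folks, here's the same two-tasks-per-connection pattern sketched with Python's asyncio (an illustration of the idea only, not SableDb's actual code, which is Rust on tokio):

```python
import asyncio

async def handle_conn(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    # One queue links the two per-connection tasks: the read task parses
    # input and queues replies, the write task flushes them to the socket.
    replies: asyncio.Queue = asyncio.Queue()

    async def read_loop():
        while data := await reader.read(4096):
            # A real server would parse the protocol and run the command here.
            await replies.put(b"+OK\r\n")
        await replies.put(None)  # client disconnected

    async def write_loop():
        while (reply := await replies.get()) is not None:
            writer.write(reply)
            await writer.drain()
        writer.close()

    # The "two tasks per connection" part: read and write run concurrently.
    await asyncio.gather(read_loop(), write_loop())

async def main():
    server = await asyncio.start_server(handle_conn, "127.0.0.1", 7379)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```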
Hi SableDb. I am looking for a tech cofounder in databases. Probably not the best place to ask for a cofounder. :-) Regardless, would you be interested?
Not affiliated (and not my first comment about this), but we are using Kvrocks[1] at work for now; it's built on Meta's RocksDB and works nicely. The developers are nice and responsive, and the Redis command support is broad.
We picked this project because our RAM usage was exploding with Redis.
The only downside for us right now is the Kubernetes support. An operator and a controller are in the works, but there is no Helm chart yet to easily deploy Kvrocks with a master and replicas. That will be awesome.
For a few recruitment rounds, we asked the candidates to create a front-end app like this with React. It was quite nice, as we could quickly see how they use the library, what they know, etc.