Yes, the limits are important to keep in mind; I should have contextualized that earlier.
For my case it happened to work out because it was a CDC-based deduplicating format that compressed batches of chunks. That gives a lot of flexibility for working within the limits.
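To give a rough idea of the shape of such a format (this is a minimal sketch of the general approach, not my actual implementation; the gear table, mask, chunk-size bounds, and batch size below are all illustrative):

```python
# Sketch: content-defined chunking with a gear-style rolling hash,
# dedup by chunk digest, and compression over batches of unique chunks.
# All parameters are made up for illustration.
import hashlib
import random
import zlib

random.seed(0)
GEAR = [random.getrandbits(64) for _ in range(256)]  # per-byte-value gear table
MASK = (1 << 13) - 1            # ~8 KiB average chunk size
MIN_CHUNK, MAX_CHUNK = 2048, 65536
BATCH_SIZE = 32                 # chunks compressed together per batch

def chunks(data: bytes):
    """Yield content-defined chunks using a gear rolling hash."""
    start, h = 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFFFFFFFFFF
        length = i + 1 - start
        if (length >= MIN_CHUNK and (h & MASK) == 0) or length >= MAX_CHUNK:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]

def pack(files):
    """Return (index, batches).

    index maps chunk digest -> (batch number, offset in decompressed batch, size),
    so a reader only has to decompress one batch to fetch a chunk.
    """
    index, batches, pending = {}, [], []
    for data in files:
        for chunk in chunks(data):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in index:
                continue  # dedup: chunk already stored somewhere
            index[digest] = (len(batches), sum(map(len, pending)), len(chunk))
            pending.append(chunk)
            if len(pending) >= BATCH_SIZE:
                batches.append(zlib.compress(b"".join(pending)))
                pending = []
    if pending:
        batches.append(zlib.compress(b"".join(pending)))
    return index, batches
```

The point being that the compressor only ever sees bounded batches, and the reader just needs "look up chunk, decompress its batch, slice" to reassemble a file.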
The primary goal here was also making the reader as simple as possible whilst still having decent performance.
I think my workload is very unfair towards (typical) compressing archivers: small incremental additions, a need for random access, and frequent files that are incompressible, at least when seen in isolation.
I really only brought up 7z because it is good at what it does; it is just (ironically) too flexible for what was needed. There's probably some way of getting it to perform much better here.
zpack is probably a better comparison in terms of functionality, but I didn't want to assume familiarity with that one. (Also, I can't really keep up with it; my solution isn't tweaked to that level, even ignoring the SQLite overhead.)