“Naming a project is always difficult. Since this project is focused on representing changes between JSON documents, I naturally started thinking about names like "JSON differ, JSON changes, JSON patch, …". However, most of these names have already been used by existing projects. While I was muttering "JSON, JSON, JSON" it suddenly turned into "JSON, JSON, Jason, Jason Mendoza".”
This is a reference to The Good Place, and it is an amazing name!
This is an interesting tool for computing minimal diffs, but the result is not very human-friendly. If readability is your goal and you are looking for something better than diff, have a look at graphtage: https://github.com/trailofbits/graphtage
I know; I wasn't trying to ignore or denigrate that purpose. It seemed apt to point to a similar tool with a drastically different (and complementary) purpose: human readability.
Interesting approach! But aren't JSON arrays a pretty wasteful encoding?
Since this is an opaque serialization of an instruction set, why not try to encode more bits per number (JSON floats support lossless integers of many more bits), and move the "symbol table" (string data) to the end?
This way you could also compress redundant symbols into single strings.
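For concreteness, here's a rough sketch of the kind of packed encoding I have in mind — entirely hypothetical field names and opcodes, not anything Mendoza actually does: numeric opcodes/operands up front, with all string data deduplicated into a trailing symbol table and referenced by index.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical packed patch (not the real Mendoza format): numeric
// opcodes and operands packed together, string operands replaced by
// indices into a trailing, deduplicated symbol table.
type PackedPatch struct {
	Ops     []float64 `json:"ops"`
	Symbols []string  `json:"symbols"`
}

func main() {
	// Made-up instruction stream: e.g. "set field <sym 0> to <sym 1>",
	// then "copy field <sym 0>" — the string "karma" is stored once.
	p := PackedPatch{
		Ops:     []float64{3, 0, 1, 7, 0},
		Symbols: []string{"karma", "183"},
	}
	out, _ := json.Marshal(p)
	fmt.Println(string(out))
}
```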
The format itself is not, strictly speaking, coupled to JSON: https://github.com/sanity-io/mendoza/blob/master/docs/format.... If you can encode int8 and string values more efficiently, then you can save some bytes with a custom binary encoding. However, you always need to be able to represent JSON as well: if a part of the JSON document isn't present in the old version, then you encode that part as plain JSON.
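To illustrate that last point with hypothetical types (not the actual format's opcodes): an op either references a piece of the old document or carries the literal JSON when the old version has nothing to reuse.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical op (not the actual format's opcodes): either reference
// a part of the old document by path, or fall back to embedding the
// plain JSON that the old version simply doesn't contain.
type Op struct {
	CopyPath []string        `json:"copy,omitempty"`
	Literal  json.RawMessage `json:"json,omitempty"`
}

func main() {
	ops := []Op{
		// Present in the old document: just reference it.
		{CopyPath: []string{"title"}},
		// New content: no choice but to carry the literal JSON.
		{Literal: json.RawMessage(`{"karma":183}`)},
	}
	out, _ := json.Marshal(ops)
	fmt.Println(string(out))
}
```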
> and move the "symbol table" (string data) to the end? … you could also compress redundant symbols into single strings.
Sounds interesting, but in my experience these kinds of tricks usually don't pay off compared to just sending everything through gzip afterwards.
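If you want to check that intuition, a quick sketch like this (made-up patch data) shows how well gzip already squeezes out the repeated string operands:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/json"
	"fmt"
)

func main() {
	// Made-up patch with lots of repeated string operands.
	var patch [][]interface{}
	for i := 0; i < 50; i++ {
		patch = append(patch, []interface{}{"copy", "some/long/field/name", i})
	}
	raw, _ := json.Marshal(patch)

	// Compress the naive JSON-array encoding.
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write(raw)
	zw.Close()

	fmt.Printf("raw: %d bytes, gzipped: %d bytes\n", len(raw), buf.Len())
}
```

The repeated opcode names and field paths compress down to back-references, which is most of what a hand-rolled symbol table would have bought you anyway.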
Now that you have .TEXT and .DATA sections, you're only a few sentences away from suggesting that the compression/diff algorithm generate and send WASM's binary encoding.
I'm super interested in this topic. I recently started hashing out (and am still working on) how to diff large datasets and what that even means.
I would love to understand how the HN crowd thinks diffing large datasets (let's say >1 GB in size) should work.
Are you more interested in a "patch"-quality diff of the data that is tailored to machines? Or is a change report/summary/highlights more interesting in that case?
Currently I'm leaning more towards the understanding/human-consumption perspective, which offers some interesting tradeoffs.
Out of curiosity, what problem did you have that this approach solves?
Thinking about diffs: plain-text diffs are typically compressed for transport anyway, so you end up with something that's human-readable at the point of generation and application (where the storage/processing cost associated with legibility is basically insignificant) while being highly compressed during transport (where legibility is irrelevant and no processing beyond copying bits is necessary).
We have a JSON data store at Sanity.io where you update your data by sending in a transaction script written at quite a high level ("increment karma by 5"). While these scripts explain your intention well, they are a bit of a hassle to work with internally since the language is so rich. We therefore wanted what we've called an effect format. The effect format would explain what actually happened when applying the transaction: "Set karma to 183". Mendoza is what we ended up with. The Mendoza patches are much easier to work with from a decoder's point of view.
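As a toy illustration of the difference (this is not Sanity's actual mutation or effect syntax, just the shape of the idea): the transaction states the intent, while the effect records the concrete outcome of applying it.

```go
package main

import "fmt"

// Toy document and toy "transaction" — not Sanity's real formats.
type Doc map[string]int

// A high-level mutation: the intent ("increment karma by 5").
type IncMutation struct {
	Field string
	By    int
}

// The effect: what actually happened ("set karma to 183").
type SetEffect struct {
	Field string
	Value int
}

// Applying the mutation yields both the new document and an effect
// a decoder can replay without understanding the mutation language.
func apply(doc Doc, m IncMutation) (Doc, SetEffect) {
	newVal := doc[m.Field] + m.By
	out := Doc{}
	for k, v := range doc {
		out[k] = v
	}
	out[m.Field] = newVal
	return out, SetEffect{Field: m.Field, Value: newVal}
}

func main() {
	doc := Doc{"karma": 178}
	doc, eff := apply(doc, IncMutation{Field: "karma", By: 5})
	fmt.Printf("doc: %v, effect: set %s to %d\n", doc, eff.Field, eff.Value)
}
```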
As a side effect, we've also been able to use the Mendoza format for tracking changes in documents. The JavaScript implementation supports replaying the patches while maintaining information about where each part of the document was introduced. We use this in our Review Changes feature so that you're able to see exactly who introduced a change (in a JSON document!): https://www.sanity.io/blog/review-changes
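The attribution idea, very roughly (a conceptual sketch, not the actual JavaScript API): as each patch is replayed, every value it touches is tagged with the revision that introduced it.

```go
package main

import "fmt"

// Sketch only: a value annotated with the revision that introduced it.
type Attributed struct {
	Value interface{}
	Rev   string // which transaction/author introduced this value
}

// A simplified "set field" patch; real Mendoza patches are richer.
type SetPatch struct {
	Field string
	Value interface{}
	Rev   string
}

// Replaying patches in order keeps per-field provenance up to date.
func replay(doc map[string]Attributed, patches []SetPatch) map[string]Attributed {
	for _, p := range patches {
		doc[p.Field] = Attributed{Value: p.Value, Rev: p.Rev}
	}
	return doc
}

func main() {
	doc := map[string]Attributed{}
	doc = replay(doc, []SetPatch{
		{Field: "title", Value: "Hello", Rev: "alice@rev1"},
		{Field: "karma", Value: 183, Rev: "bob@rev2"},
	})
	for k, v := range doc {
		fmt.Printf("%s = %v (introduced in %s)\n", k, v.Value, v.Rev)
	}
}
```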
Have you looked at generating a specialized version of this stack machine for types other than JSON values? It'd be neat to have a version that works directly on data with Haskell's GHC.Generics, or a custom derive in Rust.
Ehm… As I was profiling the code base I found that most of the time was spent hashing the data. The hashing part is quite critical for correctness: any collision will cause it to generate incorrect patches. Initially I used SHA-256 and was thinking about ways to speed it up. One experiment was to keep using 256 bits internally, but only output 64 bits. This turned out to help quite a lot. Of course, later I got a bit anxious about only using 64 bits, so I increased it to 128 bits: https://github.com/sanity-io/mendoza/commit/42c7dee4b42bfb4c...
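For reference, the "compute the full digest, keep only the first N bits" trick looks roughly like this (a sketch, not the exact code from that commit):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Hash a serialized JSON value with SHA-256, but keep only the
// first 128 bits (16 bytes) as the lookup key.
func shortHash(data []byte) [16]byte {
	full := sha256.Sum256(data)
	var out [16]byte
	copy(out[:], full[:16])
	return out
}

func main() {
	h := shortHash([]byte(`{"karma":183}`))
	fmt.Println(hex.EncodeToString(h[:]))
}
```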
At this point all of this is probably completely moot.
Hugo doesn't have an easy way of making pages/paginating from an API yet, so you'd have to generate the Markdown files first. That can be done, though. Here's a demo: https://codesandbox.io/s/rj4y72j34n