Is JSON5 a thing people use? I see it's from 2012 but it's the first I've heard of it. It looks fairly sensible; I'd take it for trailing commas alone. And comments!
No, VSCode uses jsonc, not JSON5. They are very different, e.g. allowing trailing commas is an explicit design goal of JSON5, whereas in jsonc they are frowned upon (they generate a warning, and were illegal at one point). jsonc is basically JSON + comments and nothing more, IIRC.
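For concreteness, here's roughly where the two diverge (a hand-written illustration, not taken from either spec):

```json5
{
  // comments: fine in both JSON5 and jsonc
  unquotedKey: 1,           // JSON5 only: bare identifiers as keys
  hex: 0xFF,                // JSON5 only: hex number literals
  text: 'single quotes',    // JSON5 only
  trailing: [1, 2, 3,],     // JSON5: by design; jsonc: warns
}
```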
Tried several "json replacement" formats (bson, cbor, msgpack, ion...) and json5 won by being the smallest after compression (on my data) while also having the nice bonus of retaining human-readability.
How does it affect speed of parsing in JavaScript? I kinda thought the whole reason this deeply-mediocre format caught on in the first place was it was especially natural & fast to serialize/deserialize in JavaScript, on account of being a subset of that language. (XML also had the "fast" going for it thanks to browser APIs, but not so much the "natural")
Most of the "popular" publicly available formats at the time were markedly worse, even before you get to their often limited or inconvenient language support.
SOAP? ASN.1? plists? CSV? uuencode? I'll still take JSON over all of them, especially when it comes to sending shit to the browser (plists might be workable with a library isolating you from it, but it is way too capable for server-to-browser communications, or even S2S for that matter; not all languages expose a URL or an OSet type).
> it was especially natural & fast to serialize/deserialize in JavaScript, on account of being a subset of that language.
That is certainly a factor, specifically that you could parse it "natively": initially via eval, and relatively quickly[0] via built-in JSON support (for more safety, as the eval-based methods needed a few tricks to avoid full RCE).
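The eval-era trick was essentially the json2.js-style sanity check: scrub the text with a few regexes, and if anything other than structural punctuation survives, refuse to eval it. A sketch under those assumptions (the function name is mine; the guard is the well-known Crockford pattern):

```javascript
// Pre-JSON.parse parsing: guard, then eval.
function evalParse(text) {
  const guarded = text
    // 1. neutralize escape sequences inside strings
    .replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, '@')
    // 2. collapse every legal value literal (string/number/bool/null) to ']'
    .replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, ']')
    // 3. drop array-opening brackets
    .replace(/(?:^|:|,)(?:\s*\[)+/g, '');
  // whatever remains must be purely structural characters
  if (!/^[\],:{}\s]*$/.test(guarded)) {
    throw new SyntaxError('not JSON');
  }
  // parentheses force expression context, so "{...}" isn't parsed as a block
  return eval('(' + text + ')');
}
```

So `evalParse('{"a": [1, 2]}')` returns the object, while `evalParse('alert(1)')` throws instead of executing code, which is the "few tricks" part.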
But another factor was almost certainly that it's simple to parse, and the data model is a lowest common denominator for pretty much every dynamically typed language. And you didn't need to waste time on schemas and codegen, which at the time was a breath of fresh air.
> XML also had the "fast" going for it thanks to browser APIs, but not so much the "natural"
XML never had "fast" going for it in any situation: the browser XML APIs are horrible, and you had to implement whatever serialization format you wanted in JavaScript on top of them, so that was even slower (especially at a time when JS was mostly interpreted).
[0] compared to when it started being used: Crockford invented / extracted JSON in 2001, but services started using JSON with the rise of webapps / Ajax in the mid-aughts, and Firefox, Chrome, and Safari all added native JSON support by mid-2009.
> And you didn't need to waste time on schemas and codegen, which at the time was a breath of fresh air.
Everyone just wrote their own custom clients in code instead :) But it was a gradual process, so it's harder to notice the pain compared to some generator slamming a load of code into your project. *cough*gRPC*cough*
> I kinda thought the whole reason this deeply-mediocre format caught on in the first place was it was especially natural & fast to serialize/deserialize in JavaScript, on account of being a subset of that language
No, people couldn't just eval random strings, especially ones containing potentially malicious user input. They started writing parsers, as in any other language; then came the global JSON.parse and JSON.stringify per the spec.
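Those spec'd built-ins (ES5, 2009) standardized what the hand-rolled parsers had been doing, plus hooks the eval approach never had. A small illustration (the field names are made up):

```javascript
const text = '{"id": 7, "when": "2009-06-30T00:00:00.000Z"}';

// JSON.parse takes an optional reviver, called on every key/value pair
const obj = JSON.parse(text, (key, value) =>
  key === 'when' ? new Date(value) : value
);

// JSON.stringify takes an optional replacer and indentation
const pretty = JSON.stringify(obj, null, 2);
```

The reviver/replacer pair is what lets you round-trip types (here, dates) that JSON itself has no literal for.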
I can't say for sure but I doubt JSON being a JS subset helped at all.
> No, people couldn't just eval random strings, especially the ones containing potentially malicious user input.
hehehehehe
> I can't say for sure but I doubt JSON being a JS subset helped at all.
Oh it very much did, both because it Just Worked in a browser context and because the semantics fit dynamic languages very nicely; and those were quite popular when it broke through (it was pretty much the peak of Rails' popularity).