Apologies for the somewhat off-topic remark, but ... Can we please stop creating new JSON-based config files? Sane config file formats must support comments.
I wouldn't apologize for that, you're addressing the elephant in the room here. JSON has turned into the new XML. There seems to be a pervasive undercurrent that if it looks website-ish, it's got to be good. So it seems to be used quite a bit in lieu of looking carefully at the problem and figuring out the best structured format. Configuration files tend to have a lot of ambiguity in them; going to a format that makes it that much harder to pull out the *why* behind settings does nothing to improve on that.
Honestly, I don't think JSON config files are a big problem. What's shitty is when humans are expected to manually create and edit plain JSON, but what's stopping us from using whatever tools we want to generate those JSON configs?
There are some very nice tools out there for this; I have taken to using jsonnet (http://jsonnet.org/), which is excellent as a configuration language. It's pretty comparable to HashiCorp's HCL, but it's a standalone thing.
Summary: Write your config files in a language of your choice. Render them to JSON for your apps to consume. Be thankful that every app doesn't have its own special snowflake config format like they used to.
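For anyone who hasn't seen it, here's a rough sketch of what a jsonnet config can look like (all names and domains are made up for illustration); `jsonnet config.jsonnet` renders it to plain JSON, comments stripped:

```jsonnet
// Comments survive in the source; the rendered JSON has none.
local replicas = 3;

{
  service: {
    name: "web",
    replicas: replicas,  // why: matches our three-AZ layout
    port: 8080,
  },
  // One entry per environment, generated instead of copy-pasted.
  environments: {
    [env]: { domain: env + ".example.com" }
    for env in ["dev", "staging", "prod"]
  },
}
```

Variables, comments, and comprehensions live in the source file; the app only ever sees the rendered JSON.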
Unfortunately, it suffers from the huge problem of not having any comments, which was what the original poster was pointing out. Configuration is code that happens to be interpreted at runtime. Having to dig through uncommented code at two in the morning sucks, especially when you're trying to figure out why your automated code and/or config generator just failed. Yes, having to write a config file by hand sucks; however, there are still enough cases out there where you end up having to do just that, and JSON makes the problem of keeping them lucid all the more difficult.
My suggestion, in general, is to use a hermetic language like jsonnet to write config files. It's rather hard for such configs to break, because the language just isn't capable of talking to the network or doing the various other things that config files shouldn't do. I do agree with you, and I would really prefer to have apps use formats that allow some sort of commenting, but I can live with JSON - by letting the computer deal with it.
What I hate is that good tools for this like UCL go ignored or forgotten while JSON formats grow like weeds, and even excellent companies like HashiCorp wind up contributing to this serious problem.
> There seems to be a pervasive undercurrent that if it looks website-ish, it's got to be good.
I don't get that from the article. There's a stated reason, that it's a stricter format with much less risk of type confusion, which sounds plausible enough to me. There's also the benefit that tool support is good. Why does the web need to be relevant at all?
Doesn't YAML fill the same slot, allow comments, and allow much saner human interaction? And as a bonus it's 100% (?) convertible to JSON (minus comments of course)
You'd have to add a validator to disallow some of the weirder YAML syntaxes. And once you have such a validator, you have neither JSON (it won't parse) nor YAML (it's dangerous to parse the file as YAML without running it through the validator first).
YAML isn't a particularly good config format either, since it has a lot of unnecessary complexity. For example, bare words are strings, but some bare words, such as "true", are booleans. Which wouldn't be so bad, but did you know that "yes" and "no" are also booleans? I've gotten YAML files from an i18n system that screwed this up in a pretty bad way.
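For the curious, this is what the trap looks like to a YAML 1.1 parser (PyYAML and many others follow these resolution rules; field names here are made up, and the country-code case is the classic "Norway problem"):

```yaml
# All of these bare scalars resolve to booleans under YAML 1.1,
# not to the strings a human probably meant:
enabled: yes      # boolean true
country: NO       # boolean false - not Norway
debug: on         # boolean true

# Quoting forces a string:
country_code: "NO"
```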
In fact, YAML 1.2 is a superset of JSON, and any valid JSON is also valid YAML. This becomes more obvious when you realize you can put any JSON objects inside your YAML document and a YAML parser will correctly deserialize them.
As long as an app isn't strictly validating the JSON, you should be able to add keys like { "comment": "this does xxx" } to a JSON configuration object.
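A quick sketch of that trick (key and values invented for illustration) - the "comment" key is just ordinary data, so it round-trips through any JSON tooling:

```python
import json

# The "comment" key is plain data; an app that ignores unknown
# keys will carry it along harmlessly.
config = {
    "comment": "this does xxx - bump workers when the pool grows",
    "workers": 8,
}

text = json.dumps(config, indent=2)   # what lands on disk
loaded = json.loads(text)             # what the app parses back
```

The obvious caveat is the one stated above: a strict schema validator will reject the unknown key.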
As Douglas Crockford put it:

> I removed comments from JSON because I saw people were using them to hold parsing directives, a practice which would have destroyed interoperability. I know that the lack of comments makes some people sad, but it shouldn't.

> Suppose you are using JSON to keep configuration files, which you would like to annotate. Go ahead and insert all the comments you like. Then pipe it through JSMin before handing it to your JSON parser.
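The pipe-it-through-JSMin idea can be approximated in a few lines of Python. This is only a rough stand-in: it strips `//` line comments, while real JSMin also handles `/* */` blocks and is careful about slashes inside strings, which this sketch is not.

```python
import json
import re

def strip_line_comments(text):
    # Drop any line that is nothing but whitespace and a // comment.
    # Caution: this would also mangle "//" inside string values.
    return re.sub(r'^\s*//.*$', '', text, flags=re.MULTILINE)

annotated = """
{
  // retry budget for the flaky upstream
  "retries": 5,
  "timeout_s": 30
}
"""

config = json.loads(strip_line_comments(annotated))
```

The comments live only in the annotated source; the parser never sees them.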
The idea that one would install node + npm + some linter + linter configs + some method of automatically running the linter against the config before sending it to the thing being configured seems pretty insane and unnecessary to me. At the end of the day, being able to look at a config on a running system with inline comments makes debugging infra problems much easier. What the heck was wrong with # comments and the types of configs Apache et al. have been using for decades?
Nothing, but then you get to write your own parser for whatever bespoke configuration DSL you want to implement. Some people would rather focus on other aspects of functionality.
I (original commenter) second TOML. It is very simple, is instantly familiar to anyone who has used ini files or many Unix-style config files, and has many good implementations available.
The [[array]] notation mentioned below is a tad weird, but I've never had an occasion to use it.
No, you got upvoted by me. XML is great and has all kinds of well-tested tools - query languages, schema validation, namespaces, etc.! Not sure why people think it's worse than JSON, TOML or the like. Less readable? Not really. Plus, more people know HTML than JSON, so it should be easier to learn than even JSON for people who already know HTML. Maybe the only improvement would be to make it more relaxed, like HTML5.
The transition away from cloud-init and YAML is really quite odd to me. Nobody enjoys editing JSON files. Forgetting commas, no multi-line strings, no comments, escaping characters in strings, etc. Just reading the documentation of Ignition should be enough to illustrate what a pain it is to manage multiline content for unit files in a JSON string.
But why abandon the cloud-init format in general? Again, why would somebody want to learn a new configuration syntax? Using CoreOS already requires you to know and use systemd units (on most other distros this really isn't required knowledge), so that adds two steps for users learning/using CoreOS.
I'm totally with you. We're big users of CoreOS and while there are aspects of cloud-init that I find janky, it's at least mostly easy to read and understand. The use of JSON is nuts. Just looking at the one-shots consolidated to a single line with \n's gives me a headache. This is moving forwards?
What I would prefer is a compiler. You feed it a directory of unit file drop-ins and app config templates and it builds a single artifact that can be served over HTTP and pulled by the server booting up. This could allow for dynamic configuration and automation but still makes it easy for the admin to piece the config together.
A compiler. So now you have another language which compiles into JSON, and now there are at least two problems - more if we want to be able to edit these configs on the systems directly, since we'll need some kind of toolchain there too.
Why do we keep re-inventing this wheel over and over again? Very plain text configs have worked for decades.
YAML is good for simple configs. As the article says, JSON is much easier to programmatically generate than non-standard YAML. You still can store files in YAML, but compose and convert them into JSON in user data.
Here again, you now have two problems. Why not just go with a simpler config language and skip converting it and all the complexity that adds? It's possible to make this extremely complex, to the point where the config is in a database with ACLs and requires a DB server system (which itself requires config) to host (e.g. WebLogic), but what do you gain?
JSON is not a language that "makes it very easy to write tools to generate new configs or manipulate existing ones". One cannot realistically consume JSON in a shell script, and even with tools that have native JSON support, manipulating the config requires knowledge of the document's semantics, as the language itself does not define how to combine or merge two documents into one.
For example, consider an application that ships with a complex JSON file (or YAML, as that language shares these shortcomings) that essentially describes default settings. I cannot just define an extra file that specifies the couple of settings I want to alter/delete/add; I have to keep my own copy of the original file with my changes, resulting in a painful merge whenever the original is updated.
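To make the point concrete, here's a semantics-blind deep merge sketch (file contents invented). It gets the overlay case right, but notice that deletion still needs an out-of-band convention - exactly the kind of semantics JSON itself never defines:

```python
import json

def deep_merge(base, overlay):
    """Recursively merge overlay into base; overlay wins on conflicts."""
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Shipped defaults vs. a tiny local override fragment:
defaults = json.loads('{"log": {"level": "info", "file": "/var/log/app"}, "port": 80}')
local    = json.loads('{"log": {"level": "debug"}}')

config = deep_merge(defaults, local)
```

Every application that wants overlays has to invent this (and the delete convention) for itself.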
What I have found is that things like .ini files, or config files in the style of ssh_config, work much better in practice. They are easy to generate and process in any language with a notion of text IO, including shell scripts, and the merging functionality can be provided independently of semantics, so I can keep config fragments outside the main file(s).
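For instance, Python's stdlib ini parser already does fragment merging for free - later reads override earlier ones per key, no custom merge logic needed (section and key names invented here):

```python
import configparser

defaults = """
[server]
port = 80
workers = 4
"""

local_override = """
[server]
port = 8080
"""

cp = configparser.ConfigParser()
cp.read_string(defaults)
cp.read_string(local_override)  # later values win, key by key
```

The override file stays tiny, and updating the shipped defaults never collides with it.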
Alex this looks great and likely addresses a lot of the problems I had with cloud-config. I was one of those with a Bash script that I used to generate multiple files, so this is great.
My biggest issue so far is CoreOS' naming of Ethernet interfaces on VMWare ESXi. It always uses some eno* name for each interface. I have a unique case where each VM I spin up has up to 10 interfaces.
I've solved this by adding net.ifnames=0 to my grub.cfg. It requires that I reboot the machine at least once to get it to take.
If I could have predictable interface names using Ignition, then I'm set!
> My biggest issue so far is CoreOS' naming of Ethernet interfaces on VMWare ESXi. It always uses some eno* name for each interface. I have a unique case where each VM I spin up has up to 10 interfaces.
That's a systemd decision, not CoreOS, and also impacts the 7.x series of RHEL derivatives and anything that uses systemd > v197, really. Your way is one of three to revert it.
Ah, fair enough. But it looks like that introduces another problem. When you spin up a VM in ESXi, it generates a new MAC address for each interface. So it would be great to create systemd .network files that bind to those MAC addresses. Except you don't know what MAC addresses you need to match against. The only flow I see working is:
1) Create a VM with your interfaces, but don't boot it.
2) Take the MAC addresses and enter them into your .network files, created via cloud-config/Ignition
3) Mount the cloud-config/Ignition file to your VM and boot
Which is a bit painful to do manually when you have 10 VMs, with 5-10 interfaces each. I'd love to automate this, and if you happen to have a suggestion here, I'd really appreciate it.
Edit: Oh, and when I said it had an eno* number, it's more like eno16777736, which is not very predictable at all.
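For anyone following along, step 2 would mean a .network file shaped roughly like this (file name, MAC, and addresses are all hypothetical - the MAC is whatever ESXi generated for that interface):

```ini
# /etc/systemd/network/10-static.network (hypothetical example)
# Note: systemd config files only allow comments on their own lines.
[Match]
MACAddress=00:50:56:aa:bb:cc

[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
```

Which is exactly why the pre-boot MAC harvesting step is unavoidable in this flow.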
You don't have to match against MAC addresses, although there's virtually no documentation about how to do it any other way. In the pre-systemd past I've successfully created udev rules that match PCI slots so that the configuration does not depend on MAC addresses. It looks like the same thing should be possible (with trial and error) with path globbing in systemd.
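A sketch of what that might look like in a .network file - the PCI path here is hypothetical and, as noted, probably takes some trial and error to pin down on a given ESXi layout:

```ini
# Match the NIC by its persistent PCI path instead of its MAC,
# so freshly generated MACs don't matter. Path= accepts globs.
[Match]
Path=pci-0000:0b:00.0

[Network]
DHCP=yes
```

`networkctl status <iface>` (or `udevadm info`) is one way to discover the path value for a given interface.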
Yes, precisely. The comment below identified one of three possible routes to turn off the naming, one of which I'm using (net.ifnames=0 on the kernel command line).
If I could find a better way to match the interfaces, it would be fantastic.
Some useful Ignition configs can be found in the https://github.com/coreos/coreos-baremetal examples. For those wondering about the format, they're converted to JSON and served to machines.
I'm curious about how it works. I must be missing something, but if it runs before the network and file system are set up, how does it pull its configuration from an http:// URL (say, on bare metal) or oem://, which is on the FS? Does it run with a default setup for network/FS and let systemd redo everything later?
> Running before systemd helps ensure that all services established by Ignition are known to and can be managed by systemd when it subsequently starts. This allows systemd to do what it does best: concurrently start services as quickly as possible.
I think this looks like a slight jab at systemd ;-)
systemd thinks it does a lot of other things best, too.
I think that was poorly explained on their part. Digging deeper into the docs, it looks like Ignition writes unit files that systemd runs; e.g. networkd is still used to configure networking.
Affirmative. This is just a mechanism to make sure that interfaces can be properly plumbed, disks formatted with the desired states & filesystems, etc.