
Helm is probably the worst influence I've experienced on the kubernetes ecosystem. It takes something that could have been really good (metaconfig) and makes it illegible, complex, and impossible to debug.

Kustomize, too, doesn't get at the root of the needs I've found myself having.

A metaconfig language like Jsonnet, Dhall, or Cuelang seems to be where it's at. If you're using Jsonnet I can also highly recommend kubecfg [0], a tool that "just works" and is built at the perfect level of abstraction: "just give me something and I'll find all the kubernetes objects inside of it". It's remarkably simple and forgiving, and it makes it possible to describe even the most complex application configs + kube configs in a single language. You can also take it a step further and describe your CI stages (pre-merge, release, builds, etc.), kube YAML, and infrastructure provisioning (terraform) all in one language.
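To give a feel for the evaluation step, here's a minimal sketch using google/go-jsonnet from Go; the inline manifest is purely illustrative, not kubecfg's actual internals:

    package main

    import (
        "fmt"
        "log"

        "github.com/google/go-jsonnet"
    )

    func main() {
        vm := jsonnet.MakeVM()
        // Evaluate a tiny Jsonnet manifest down to plain Kubernetes JSON.
        out, err := vm.EvaluateAnonymousSnippet("deployment.jsonnet", `
    {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: { name: 'my-app' },
      spec: { replicas: 3 },
    }`)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(out) // JSON is valid YAML, so `kubectl apply -f -` accepts it
    }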

> What are the benefits of using helm?

The main benefit is redistribution. Since everyone uses helm it's easy to give someone a chart because they know what to do with it. That's pretty much the only upside I've found.

[0] - https://github.com/bitnami/kubecfg



Dhall continues to be a glimmer of hope in this space. I agree that helm caused the entire space to distort. There is so little reusability in charts. Dhall seems to be the start of reversing this, but I hesitate to mention it since the ergonomics are still rough.

If YAML is the assembly language of k8s, Dhall looks like what you would write the compiler in.


IMO, because Dhall's syntax is foreign to the vast majority of developers, who have never done anything with Haskell or Elm or whatever, it's a non-starter for large-scale use. We use Starlark, where the syntax is familiar to anyone who knows Python.

I like your analogy though. For us, the assembly language is using the k8s API directly in Golang. The "compiler" is the golang Starlark interpreter extended with our own config API, like you would implement in Dhall. The difference is that you implement it in Golang, which has much, much better tooling than Dhall does: a typed compiler, a debugger, IDE support, unit tests... so much easier to develop and maintain.
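For a concrete picture of what "extending the interpreter" means, here's a minimal sketch using go.starlark.net; the `deployment` builtin and its fields are made-up illustrations, not the actual config API described above:

    package main

    import (
        "fmt"
        "log"

        "go.starlark.net/starlark"
    )

    func main() {
        // Expose a Go function to Starlark as part of a config API.
        deployment := starlark.NewBuiltin("deployment", func(
            t *starlark.Thread, b *starlark.Builtin,
            args starlark.Tuple, kwargs []starlark.Tuple,
        ) (starlark.Value, error) {
            var name string
            var replicas int
            if err := starlark.UnpackArgs(b.Name(), args, kwargs,
                "name", &name, "replicas", &replicas); err != nil {
                return nil, err
            }
            fmt.Printf("would emit a Deployment %q with %d replicas\n", name, replicas)
            return starlark.None, nil
        })

        thread := &starlark.Thread{Name: "config"}
        env := starlark.StringDict{"deployment": deployment}
        src := `deployment(name = "my-app", replicas = 3)`
        if _, err := starlark.ExecFile(thread, "config.star", src, env); err != nil {
            log.Fatal(err)
        }
    }

UnpackArgs does the keyword-argument validation in typed Go code, so a malformed config fails fast with a clear error instead of rendering broken YAML.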


Yea, anything that allows you to think at a higher level than raw kube objects will save you a lot of time and heartache. For instance, if your application needs a redis cluster, in helm you have to mess with subcharts and other headaches. In jsonnet it becomes `local redis = import "redis.libsonnet"; redis.Cluster("my-app-name") { nodes: 10 }` or something similar.
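As a runnable sketch of that composition, evaluated through go-jsonnet (the redis "library" is inlined here and entirely hypothetical):

    package main

    import (
        "fmt"
        "log"

        "github.com/google/go-jsonnet"
    )

    func main() {
        vm := jsonnet.MakeVM()
        // Inline the "library" so the sketch is self-contained; in practice it
        // would live in redis.libsonnet and be pulled in with an import.
        out, err := vm.EvaluateAnonymousSnippet("app.jsonnet", `
    local redis = {
      Cluster(name):: { kind: 'RedisCluster', metadata: { name: name }, nodes: 3 },
    };
    // Jsonnet's object-composition sugar overrides the library defaults.
    redis.Cluster('my-app-name') { nodes: 10 }`)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(out)
    }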

DRY configs make me very happy.


Given that the Dhall type system supports things that the other configuration languages do not, isn't it very hard for a project like k8s to migrate to?

I don't see how they could support something in parallel to Dhall, and still enable all the features Dhall provides.


Dhall has many features but can still "compile down" to YAML or JSON (or some other targets, too). Depending on the target language, some Dhall features are unavailable.

I don't think OP meant Kubernetes is changing to Dhall, but that we'll see something like Dhall on top of Kubernetes YAML/JSON that catches on.


For internal k8s config at our org we built a config DSL using Starlark. The golang Starlark interpreter is super easy to use and extend. Starlark is familiar to every developer in our org because we are a Python shop. The tooling then spits out k8s YAML.

Essentially the config language implements the same logic a helm chart would, but you're writing that logic in Go instead of a text templating engine. You can easily unit test parts of it and rely on the compiler to catch basic mistakes. Way better than templating YAML.

We also provide escape hatches so people can easily patch the resources before they get serialized to YAML. People can use that to customize our standard deployment config however they want.
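A rough sketch of that shape, using the real k8s API types and sigs.k8s.io/yaml for serialization; the Render and PatchFunc names (and the default config) are invented for illustration:

    package main

    import (
        "fmt"
        "log"

        appsv1 "k8s.io/api/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    // PatchFunc is the escape hatch: a caller-supplied mutation applied to
    // the typed object before serialization.
    type PatchFunc func(*appsv1.Deployment)

    // Render builds the standard deployment config, applies any patches,
    // and serializes the result to YAML.
    func Render(name string, patches ...PatchFunc) ([]byte, error) {
        replicas := int32(3)
        dep := appsv1.Deployment{
            TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
            ObjectMeta: metav1.ObjectMeta{Name: name},
            // A real config would also set Selector and the Pod template;
            // elided for brevity.
            Spec: appsv1.DeploymentSpec{Replicas: &replicas},
        }
        for _, p := range patches {
            p(&dep)
        }
        return yaml.Marshal(dep)
    }

    func main() {
        out, err := Render("my-app", func(d *appsv1.Deployment) {
            d.ObjectMeta.Labels = map[string]string{"team": "platform"}
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }

Because the patches operate on the typed Deployment struct, the compiler catches typos that a YAML-templating escape hatch would let through.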

So far this has worked very well and been extremely easy to maintain.


Have you thought of open sourcing this?


I actually use and prefer make + kustomize. Unfortunately the version of kustomize that is bundled with kubectl is sometimes a bit behind on features, but it still does "just enough", and I manage the rest with make.

Redistribution is important (and harder with a simple setup like make + kustomize), but I think obscuring the resources you need to run an application (custom or otherwise) is actually a bad thing for everyone in the ecosystem. If you're going to run something in your cluster, intimate knowledge of the resources required and the ways it changes your system is important.



