
I'm saying that I think the value proposition for the virtual kubelet is tenuous, not multi-tenancy as a whole.

For a single cluster, "very cheap" VMs solve some of the problems, but leave others unsolved (e.g. they prevent some hardware and kernel exploits, but lots of security issues can still hit you -- like the last two big K8s CVEs). They also sacrifice a lot of the things that make containers compelling on the floor (high efficiency and density), so I don't think they should be spun as a panacea.
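For concreteness, the "very cheap VM" approach usually means selecting a VM-backed runtime per pod via a RuntimeClass. A minimal client-go sketch, assuming a Kata-style handler named "kata" is already configured on the nodes (the names here are illustrative, not a recommendation):

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	nodev1 "k8s.io/api/node/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load kubeconfig from the default location; assumes a reachable cluster.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()

    	// Register a RuntimeClass that maps to a VM-backed runtime handler.
    	// "kata" is an assumption: the handler must already be set up in the
    	// container runtime (containerd/CRI-O) on the nodes.
    	rc := &nodev1.RuntimeClass{
    		ObjectMeta: metav1.ObjectMeta{Name: "vm-isolated"},
    		Handler:    "kata",
    	}
    	if _, err := clientset.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}

    	// A pod opts into the VM boundary by naming the RuntimeClass.
    	runtimeClassName := "vm-isolated"
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "demo", Namespace: "default"},
    		Spec: corev1.PodSpec{
    			RuntimeClassName: &runtimeClassName,
    			Containers: []corev1.Container{{
    				Name:  "app",
    				Image: "nginx:1.25",
    			}},
    		},
    	}
    	if _, err := clientset.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }

The isolation this buys sits at the node/kernel boundary; it does nothing for flaws in the control plane itself, which is my point above.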

You seem to be arguing that one shouldn't bother with multi-tenancy on a single cluster, which is a fine approach, but I do think that the technologies and tools to support the single cluster model are evolving. Calling it a "multi-tenancy retrofit" seems a bit FUD-y to me. Just because there are challenges doesn't mean it's not worth doing.




> I'm saying that I think the value proposition for the virtual kubelet is tenuous, not multi-tenancy as a whole.

I was tying them together because I see the former as an effective strategy to achieve the latter.

> Calling it a "multi-tenancy retrofit" seems a bit FUD-y to me. Just because there are challenges doesn't mean it's not worth doing.

What should I call it? It's being added retrospectively to a single-tenant design. The changes have to be correctly threaded through everything, through codebases managed by dozens of working groups, without breaking thousands of existing extensions, tools and applications.

What I expect will happen instead is that it will be better than it is now -- which is a win -- but that no complete, mandatorily-secure, top-to-bottom security boundaries will be created inside single clusters. We will still be left with lots of leaks.

Our industry is replete with folks trying to wedge the business of hypervisors and supervisors into applications and services. It's possible, but it always leaks and breaks, and it diverts enormous development bandwidth away from the core thing that is meant to be achieved. Kernels and hypervisors have privileged hardware access and decades of hardening that can't be truly replicated at the application or service level, and which, when imitated, need to be designed in from the beginning.

I don't see that as FUD. I think it just is what it is. But I appreciate that my thinking is in line with the doctrine Pivotal advances to its customers, which differs from the doctrine Red Hat and others advance (One Cluster To Rule Them All).


I’m not sure who at Red Hat is advocating one cluster to rule them all, but it’s just one point on the spectrum. There are lots of places where one cluster makes sense and two would be overkill - if you want to run lots of simple workloads, or have one very large scale app. But it’s equally smart to separate clusters by security domain or regulatory zone, or to create partitions to force your teams to treat clusters as fungible.

If there’s Red Hat documentation advising silly absolutes please let me know and I’ll make sure it gets fixed.


I don't have an example to hand, so it's obvious I went on second-hand accounts. Do you have something you'd normally point customers to when describing the tradeoffs?

For myself I see the argument for fewer clusters as being about utilisation, and the argument for more clusters as being about isolation. It's the oldest tug-of-war in computing. I think shared node pools serving multiple masters will be the combination that, for most workloads, increases utilisation without greatly weakening isolation. I don't think multi-tenancy in the master will be as easily achieved or as effective.


In Red Hat OpenShift Consulting, we openly advise against “One cluster to rule them all” and the vast majority of our customers heed our advice. Our default delivery models support Sandbox, Nonprod, Prod cluster stand up. Some of us even support the idea that good IaC/EaC practices get our customers to where the cluster can be treated like cattle (much like pods and containers) in well-designed apps. My colleague Raffaele hinted as much when describing the problem as a matter of availability, disaster recovery and federation [0]. At least in OpenShift, multi-tenancy is a solved problem when cluster right-sizing has taken place. RBAC, node labels and selectors, EgressIP, quotas, requests and limits, multi-tenant or networkpolicy plug-ins go a long way.

[0] https://blog.openshift.com/deploying-openshift-applications-...
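For anyone wondering what that combination looks like concretely, here's a minimal client-go sketch of the quota + NetworkPolicy + RBAC trio for a hypothetical tenant namespace ("team-a" and the group name are placeholders; EgressIP and the multitenant SDN plug-in are OpenShift-specific and omitted):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	networkingv1 "k8s.io/api/networking/v1"
    	rbacv1 "k8s.io/api/rbac/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load kubeconfig from the default location; assumes a reachable cluster.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	ns := "team-a" // hypothetical tenant namespace

    	// Cap the tenant's aggregate CPU/memory requests and limits.
    	quota := &corev1.ResourceQuota{
    		ObjectMeta: metav1.ObjectMeta{Name: "team-a-quota", Namespace: ns},
    		Spec: corev1.ResourceQuotaSpec{
    			Hard: corev1.ResourceList{
    				corev1.ResourceRequestsCPU:    resource.MustParse("8"),
    				corev1.ResourceRequestsMemory: resource.MustParse("16Gi"),
    				corev1.ResourceLimitsCPU:      resource.MustParse("16"),
    				corev1.ResourceLimitsMemory:   resource.MustParse("32Gi"),
    			},
    		},
    	}
    	if _, err := clientset.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}

    	// Allow ingress only from pods in the same namespace; everything else is dropped.
    	netpol := &networkingv1.NetworkPolicy{
    		ObjectMeta: metav1.ObjectMeta{Name: "same-namespace-only", Namespace: ns},
    		Spec: networkingv1.NetworkPolicySpec{
    			PodSelector: metav1.LabelSelector{}, // all pods in the namespace
    			Ingress: []networkingv1.NetworkPolicyIngressRule{{
    				From: []networkingv1.NetworkPolicyPeer{{PodSelector: &metav1.LabelSelector{}}},
    			}},
    			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
    		},
    	}
    	if _, err := clientset.NetworkingV1().NetworkPolicies(ns).Create(ctx, netpol, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}

    	// Bind the built-in "edit" ClusterRole to the team's group, scoped to this namespace.
    	binding := &rbacv1.RoleBinding{
    		ObjectMeta: metav1.ObjectMeta{Name: "team-a-edit", Namespace: ns},
    		Subjects: []rbacv1.Subject{{
    			Kind:     rbacv1.GroupKind,
    			APIGroup: rbacv1.GroupName,
    			Name:     "team-a", // hypothetical group name
    		}},
    		RoleRef: rbacv1.RoleRef{
    			Kind:     "ClusterRole",
    			APIGroup: rbacv1.GroupName,
    			Name:     "edit",
    		},
    	}
    	if _, err := clientset.RbacV1().RoleBindings(ns).Create(ctx, binding, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}

    	fmt.Println("tenant guardrails applied to namespace", ns)
    }

Each of these is a separate object created per namespace, which is why the onboarding tooling (or an operator) ends up stamping them out consistently for every tenant.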


> In Red Hat OpenShift Consulting, we openly advise against “One cluster to rule them all” and the vast majority of our customers heed our advice. ... My colleague Raffaele hinted as much when describing the problem as a matter of availability, disaster recovery and federation [0].

To be honest, I should have realised this would be so.

> At least in OpenShift, multi-tenancy is a solved problem when cluster right-sizing has taken place. RBAC, node labels and selectors, EgressIP, quotas, requests and limits, multi-tenant or networkpolicy plug-ins go a long way.

Well, as you can guess, I am not convinced that this is really solved -- it looks like multiple discretionary access control mechanisms that need to be aligned properly, instead of a single mandatory access control mechanism to which other things align.


It's also about tooling.

I've seen many clusters being sold, but with no tooling to automatically build, monitor, secure and maintain these clusters, so you've got a DevOps team playing cluster whack-a-mole.

Of course the consultancies love that because it's a bespoke layer for them to build and support, but the reality is setting up a small team to run a couple of clusters eases the job of discoverability and secops, and for many orgs is "good enough".

Still, there is room for improvement, but I doubt the answer is many masters without another product on top.


> It's also about tooling. I've seen many clusters being sold, but with no tooling to automatically build, monitor, secure and maintain these clusters, so you've got a DevOps team playing cluster whack-a-mole.

Pivotal's doctrine of how to use Kubernetes is explicitly multi-cluster oriented, but that's because we come to the table with tooling that excels at this kind of problem: BOSH.



