I will preface by saying I am not a Nebula expert, and it may have changed since I last looked.

Similarities:

- Fully open source, using CAs as strong identities (rather than relying on SSO from third parties), completely self-hosted (with 3rd-party SaaS options), and providing scalable, performant overlay networking.

Differences:

- OpenZiti is focused on connecting services based on zero trust principles, whereas Nebula focuses on connecting machines – e.g., with OpenZiti you can authorize access to just a single service or port without needing to set up ACLs or firewall rules.

- OpenZiti does not require inbound ports or hole punching; it builds outbound-only connections via an overlay which looks sort of similar to DERP (but better, with app-specific encryption, routing, flow control, smart routing, etc.). This overlay also removes the need for complex FW rules, ACLs, public DNS, L4 load balancers, etc.

- As alluded to above, truly private, zero trust DNS entries with unique naming – if you want to call your service "my.secret.service", you can do that; it does not force you to use a valid top-level domain.

- OpenZiti includes SDKs (along with appliance- or host-based tunnels) to bring overlay networking and zero trust principles directly into your application (a rough Go sketch follows this list).

- FOSS Nebula does not include "provisioning new clients with identities", as this person pointed out in our public forum - https://openziti.discourse.group/t/using-openziti-in-distrib...
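
To make the SDK and private-naming points concrete, here is a rough sketch of a client dialing a privately named service with the Go SDK (github.com/openziti/sdk-golang). Treat it as illustrative only: "client-identity.json" and "my.secret.service" are placeholders for your own enrolled identity and service, and the exact function names have shifted a little between SDK versions.

    package main

    import (
        "fmt"
        "io"

        "github.com/openziti/sdk-golang/ziti"
    )

    func main() {
        // Load an enrolled OpenZiti identity (a JSON file produced by enrollment).
        cfg, err := ziti.NewConfigFromFile("client-identity.json")
        if err != nil {
            panic(err)
        }
        ctx, err := ziti.NewContext(cfg)
        if err != nil {
            panic(err)
        }

        // Dial the service by its overlay name. No IP, no public DNS record,
        // no inbound port on the far side -- authorization is decided by the
        // identity loaded above and the service policies on the overlay.
        conn, err := ctx.Dial("my.secret.service")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        fmt.Fprintln(conn, "hello over the overlay")
        reply, _ := io.ReadAll(conn)
        fmt.Printf("got: %s\n", reply)
    }

The returned connection behaves like an ordinary net.Conn, so existing client code mostly keeps working; only the dial changes.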



Sounds amazing, and like it addresses my issues with Nebula. I know that Nebula/Defined Networking was/is working on better Kubernetes integration, but it seems unlikely to become generally available. Is that something you're supporting, e.g. as a pod sidecar to authenticate services, similar to Nebula's ACLs?

What's your funding model? Are enterprises willing to sponsor the development?

I think Nebula has a lot of trust solely because it's made at/used by Slack. In a similar sense, why should enterprises trust OpenZiti? If services do not use e2ee (e.g. a service mesh with TLS) but rely on OpenZiti, that places a lot of trust in OpenZiti. How has the code been audited? Why are you confident that its cryptographic implementation is secure?


OpenZiti is developed and maintained by NetFoundry (https://netfoundry.io/). We provide a productised version which is very easy to deploy, manage, operate, and monitor, with high SLAs, support, legal/compliance, liability, security, updates, feature requests, etc.

We are not rolling our own crypto; we use well-vetted open source standards/implementations - https://openziti.io/docs/learn/core-concepts/security/connec.... If you don't trust that, you can easily roll your own - https://github.com/openziti/tlsuv/blob/main/README.md. I know people who do that. Yes, it's been audited, and it is run by many large enterprises in security-conscious use cases - e.g., 8 of the 10 largest banks, some of the largest defence contractors, and leaders in ICS/OT automation as well as grid, etc.

Yes, we support K8s in a lot of ways, both for tunnelling and deployment - https://openziti.io/docs/reference/tunnelers/kubernetes/. There are more native options being worked on, including an Admission Controller and an Ingress Controller, but I honestly don't know the exact status of either. If they interest you, feel free to ping me at philip.griffiths@netfoundry.io and I can get more info.


Sounds great. It puzzles me that Nebula hasn't done what you're doing with OpenZiti.

In my opinion, Kubernetes networking is flawed: service mesh authentication with mTLS has unnecessary overhead, Cilium network policies are clumsy with labels and work poorly with non-pod workloads (i.e. CIDR-based policies), multi-cluster is hacky, and external workloads are inconvenient to set up. So a simple plug-and-play solution that solves these problems would be great.


My guess is that this is how they want to commercialise: they make that bit harder so that more people pay for their hosted solution. I have sympathy; monetising while maintaining FOSS can be a challenge. We all have bills.

I agree with a lot of what you say. Tbh, this is also why we are advocates of app-embedded ZTNA. You get mTLS (and way, way more) out of the box, without the overhead, and it's super easy to run your K8s or non-K8s workloads anywhere. No need for VPNs, inbound FW ports, complex ACLs, L4 load balancers, public DNS, and more. It is thus much easier to build distributed systems which are secure by default against network attacks.
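
To make the app-embedded point concrete, here is a rough sketch of the hosting side with the Go SDK: the workload binds its service onto the overlay and only ever makes outbound connections, so there is no inbound firewall port, no public DNS record, and no L4 load balancer in front of the pod or VM. Again, "server-identity.json" and "my.secret.service" are placeholders, and the exact SDK call names may differ by version.

    package main

    import (
        "bufio"
        "fmt"
        "net"

        "github.com/openziti/sdk-golang/ziti"
    )

    func main() {
        // Load the enrolled identity for the workload that hosts the service.
        cfg, err := ziti.NewConfigFromFile("server-identity.json")
        if err != nil {
            panic(err)
        }
        ctx, err := ziti.NewContext(cfg)
        if err != nil {
            panic(err)
        }

        // Bind the service on the overlay. The process dials out to the fabric;
        // nothing listens on the pod/VM network, so there is nothing to port-scan.
        listener, err := ctx.Listen("my.secret.service")
        if err != nil {
            panic(err)
        }
        defer listener.Close()

        for {
            conn, err := listener.Accept()
            if err != nil {
                panic(err)
            }
            go handle(conn)
        }
    }

    // handle echoes one line back to the caller; a stand-in for your real handler.
    func handle(conn net.Conn) {
        defer conn.Close()
        line, _ := bufio.NewReader(conn).ReadString('\n')
        fmt.Fprintf(conn, "echo: %s", line)
    }

Because both ends terminate inside the application processes, the traffic is protected end to end without a sidecar proxy sitting in the data path.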


Your comment kicked off a big internal chat, which led to someone creating a document on our overlay approach vs. service meshes. I took that and added some extra details, a comparison, and a summary - https://docs.google.com/document/d/1ih-kuRvfiGrJODZ5zVjwFLC2....

TL;DR: we believe service meshes introduce complexity with control plane synchronization, service discovery challenges, and network overlays. Shifting networking to a zero trust, software-defined global overlay removes the Kubernetes service dependencies and is much simpler, more automated, and more secure.

Super curious to get your thoughts.


A couple of additional small notes (maintainer here):

> In a similar sense, why should enterprises trust OpenZiti?

You don't have to. It's open source, so you can go look at all the code and judge for yourself. But perhaps better than that (well, different anyway): OpenZiti allows you to use your own PKI for identities if you like. With third-party CA support, you can make your own key/cert and deploy them to identities if you desire. https://openziti.io/docs/learn/core-concepts/pki/#third-part...

> If services do not use e2ee

With OpenZiti you basically get this by default between OpenZiti clients (once traffic is offloaded from the OpenZiti overlay, it's up to the underlying transport protocol).


> - OpenZiti does not require inbound ports or hole punching; it builds outbound-only connections via an overlay which looks sort of similar to DERP (but better, with app-specific encryption, routing, flow control, smart routing, etc.). This overlay also removes the need for complex FW rules, ACLs, public DNS, L4 load balancers, etc.

The routers that you deploy to make up the overlay still need inbound ports though, right? I thought that's what 10080 was doing.


Yes, but the risk posture is very different. The question I like to ask is, 'what does it take to exploit a listening port on the overlay to get to a service':

- (1) bypass the mTLS requirement necessary to connect to the data plane (note, each hop uses its own mTLS with its own, separate key)

- (2) have a strong identity that authorizes them to connect to the remote service in question (or bypass the authentication layer the controller provides through exploits; note again, each app uses separate and distinct E2EE, routing, and keys)

- (3) know what the remote service name is, allowing the data to target the correct service (not easy, as OpenZiti has its own private DNS that does not need to comply with TLDs)

- (4) bypass whatever "application layer" security is also applied at the service (ssh, https, oauth, whatever)

- (5) know how to negotiate the end to end encrypted tunnel to the 'far' identity

So yes, if they can do all that, then they'd definitely be able to attack that remote service. Note, they would only gain access to a single service among hundreds, thousands, or potentially millions of services. Lateral movement is not possible. So the attacker would have to repeat each of the 5 steps for every service.

A colleague wrote this too; it's from a slightly different angle but still very relevant - https://blog.openziti.io/no-listening-ports.


Maintainer here. Yes. The routers and the controller will have a port that can accept mTLS traffic.



