HashiCorp Boundary (hashicorp.com)
582 points by yongshin on Oct 14, 2020 | 163 comments



Hello HN! I'm the founder of HashiCorp.

I'm excited to see Boundary here! I want to note a few things about Boundary, why we made it, why it is different than other solutions in the space, etc.

* Boundary is free and open source. Similar to when we built Vault, we feel the solution space for identity-based security is too commercialized. We want to provide access to this type of security to a broader set of people because we feel it's the right way to think about access control. Note: of course, as a company we plan on commercializing Boundary at some point, but we'll do this similarly to Vault: the major feature set of Boundary will remain free and open source forever.

* Dynamic resource catalogs. Other tools in this space usually require manually maintaining a catalog of servers, databases, applications, etc. We're integrating Boundary closely with Terraform, AWS/GCP/Azure, Kubernetes, etc. to give you live auto-updating catalogs based on tags. (Note: this feature is coming in 0.2, and not in this initial release, but is well planned at this point)

* Dynamic credentials. Existing tools often require static credentials. Boundary 0.1 uses static credentials, too, but we're already working on integrating Boundary with Vault and other systems to provide full end-to-end dynamic credentials. You authenticate with your identity, and instead of reusing the same credentials on the backend, we pull dynamic per-session credentials.
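
To make "dynamic per-session credentials" concrete, this is the shape of the pattern using Vault's existing database secrets engine today (the role name below is just an example):

    # Each read mints a fresh, short-lived username/password pair tied
    # to a lease, instead of handing everyone the same static login:
    vault read database/creds/readonly-app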

And more! Remember this is a 0.1 release. We have a lot of vision and roadmap laid out for this project and we are hard at work on that now. We're really excited about what's to come here.

Specifically, as a 0.1, Boundary focuses on layer 4 connections (TCP) with minimal layer 7 awareness for protocols such as SSH. This will be expanded dramatically to support multiple DB protocols, Microsoft Remote Desktop, and more.

Also, we're releasing another new product tomorrow that is more developer-focused, if security is not your cup of tea. Stay tuned.

The Boundary team and I will be around the comments to answer any questions.


Happy Nomad + Consul + Terraform user here.

Thanks a lot for the great products, but please give us managed Nomad already. Or even better: a Heroku-like app platform. I want to give you money, but I really dislike your company's enterprise offerings.

BTW I believe there's a great opportunity for Hashicorp right now. Cloud providers are good at selling building blocks, but are terrible at selling a vision of how you should build your applications. On the other hand, low code / enterprise application platforms are a disgrace as always. IMO a coherent stack of managed Nomad + Consul + Vault could provide a solid middle ground for those who want to build apps without the burden of managing K8s or navigating through the incomprehensible maze of products offered by public clouds.


Hello! Thank you :)

(1) HCP Nomad is coming. We announced HCP Consul public beta and HCP Vault private beta today (on AWS, more clouds later). HCP Nomad is planned but not quite ready to talk about beyond that yet. That is "managed Nomad."

(2) Re: Heroku-like app platform. Watch tomorrow's keynote or catch up on our announcements tomorrow. It isn't this, but I think it'll give you an idea of the vision we're heading towards and that is relevant to this idea.


>Or even better: a Heroku like app platform.

So Hashiku? :)

Seriously, Heroku seems to have stopped innovating. I wonder how much Heroku is worth now.


Argh. I already find it a nightmare to figure out how to combine hashicorp tools together. Now there's one more! ;)

E.g., if I want a Consul-backed Vault, whilst using Vault to generate TLS certs or other creds for Consul. Especially if I want to run either/both of those services using Nomad, backed by Consul. Hopefully I won't have the option of authenticating against any of these services using Boundary. Especially if Boundary is backed by Consul.


Indeed. Our recommendation with Vault now is to use the built-in storage[1] to break that dependency. If you must use Consul, we recommend separate clusters.

One way we're simplifying this a lot for people is the introduction of our managed services[2][3]. We understand not everyone can use a managed service though!

Boundary will integrate fairly deeply with Consul/Vault but these integrations will be optional.

[1]: https://www.vaultproject.io/docs/configuration/storage/raft [2]: https://www.hashicorp.com/blog/hcp-consul-public-beta [3]: https://www.hashicorp.com/blog/vault-on-the-hashicorp-cloud-...
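
As a minimal sketch of what the integrated storage config looks like (paths and addresses are placeholders):

    # vault.hcl: integrated raft storage instead of a Consul backend
    cat > /etc/vault.d/vault.hcl <<'EOF'
    storage "raft" {
      path    = "/opt/vault/data"
      node_id = "vault-1"
    }

    listener "tcp" {
      address     = "127.0.0.1:8200"
      tls_disable = true  # local testing only; use TLS in real deployments
    }

    api_addr     = "http://127.0.0.1:8200"
    cluster_addr = "https://127.0.0.1:8201"
    EOF
    vault server -config=/etc/vault.d/vault.hcl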


Thanks for the response. My comment was half in jest, but it has been a pain point for me.


This comment resonates with me so hard. Specifically TLS certs, private certificate authorities and Consul. Like I wanna run my PCA out of Vault (right?), but if using Consul as the backend how do I bootstrap? Sounds like the reply from Michael seems to suggest running the integrated backend, which I can get behind.


Yep, we use the integrated Vault storage backend.

In our case, we use Let's Encrypt to get certificates for Vault and then bootstrap a Vault cluster with internal storage. Then you have Vault, and you can use Terraform to configure a Consul TLS backend.

And then there is a little hitch, because consul-template cannot easily create multiple files from a single Vault API call, so you cannot use consul-template directly to create the necessary certificate files. We've written a small, messy tool for that. But once you have it, it's fairly straightforward to generate Consul + Nomad TLS certs for the trust, and then you're set.
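
(If anyone wants the gist of it: one pki/issue call returns the cert, key, and issuing CA together in a single JSON response, so we just split it ourselves. Roughly, with a hypothetical PKI role name:)

    vault write -format=json pki/issue/consul-server \
        common_name=server.dc1.consul ttl=72h > bundle.json
    jq -r .data.certificate bundle.json > agent.crt
    jq -r .data.private_key bundle.json > agent.key
    jq -r .data.issuing_ca  bundle.json > ca.crt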


So I actually do this today, and I use Vault. This sounds weird, but I spin up a "bootstrap PKI" Vault that is local-only, and produces, e.g.: "consul.service.dc.consul" certs with the issuer labeled as "bootstrap PKI intermediate" or some such. I generate a full suite of these for everything in a space, get it all up and running, then there's a 2nd layer of automation where self-certs are issued.

That said, I'm moving to a central distributed Vault that is mostly going to exist as a PKI so I'll only really need to repeat this process once more! Going to be using the raft internal engine for this one, and spread it physically across the globe so performance is going to be pretty terrible by design, but it should be quite resilient!


Maybe you’re not using Terraform. I suspect that your problem is an insufficient usage of HCL.


All hail Hashi-stack!


What is used to secure/encrypt the connection between the clients and the workers?

I did a quick search in the GitHub repo for WireGuard and didn't get any results so I guess you aren't using it.



Thanks! That is exactly what I was looking for.


Do you have a video showing a demo of managing a fleet of servers? Does this also address machine-to-machine ssh key trusts? Do you have a contrib repo with existing ansible, chef, puppet scripts to build your cluster and also for deploying agents to machines?


Hi Mitchell: what's your competitive landscape with Boundary?

When I first looked at the product description, I thought I might be looking at a "zero-trust identity-aware-proxy" sort of thing, but as I read more I got more of the "privileged access management" vibe with more of a focus on controlling access to infrastructure for developers vs. applications for end users.


So I've been casually doing some research into this in the past and was just updating my list so here's what I have so far. If I have missed any, please let me know.

* Azure App Proxy

* Google IAP

* Amazon WorkLink

* Cloudflare Access

* Zscaler Private Access

* Duo Beyond

* HashiCorp Boundary



* PrivX by SSH.COM

We provide a lean PAM solution for multi-cloud infrastructure access.




I believe Teleport is SSH only.



I think there may be some overlap with Amazon Systems Manager too.


Google BeyondCorp?


IAP is Google’s concrete implementation/product, BeyondCorp is the overall philosophy (not a product)


I think BeyondCorp == IAP


https://smallstep.com/

One example. I have been testing smallstep, which puts an IdP in front of SSH (with group management) and also includes a dynamic host catalog (hosts run an agent that phones home to your identity provider).

However, I am very excited about Boundary as it seems to be a much more comprehensive solution.


I hope this isn’t too big of a question but what do you see as the migration path towards these newer “zero trust” access control technologies for organizations that are all in on VPNs and are in a hybrid cloud position?


As you say, it's a big question. But one way to start is by integrating this _within your VPN_ such that network access + credentials alone are not enough. With Boundary you could do this by setting up firewalls on the end hosts to only allow ingress from Boundary worker nodes.

Eventually you can migrate towards Boundary nodes (or similar technologies) being the public ingress instead of a VPN endpoint.

(Edit: clarified that I meant firewalls on the end hosts, not on the VPN or elsewhere in the network.)
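
A minimal sketch of that host-level rule, assuming ufw and placeholder worker addresses:

    # On each end host: deny inbound by default, then allow SSH only
    # from the Boundary workers
    ufw default deny incoming
    ufw allow from 10.0.1.10 to any port 22 proto tcp
    ufw allow from 10.0.1.11 to any port 22 proto tcp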


This is awesome, thanks for making this. Boundary seems like the missing open source building block to achieve Zero Trust.

Zero Trust means authenticating per application instead of per network. For more context see https://about.gitlab.com/blog/2019/04/01/evolution-of-zero-t...

Proxying connections as Boundary does seems like the most elegant solution to achieve this in a way that doesn't require modifying the application.


Over in another thread this was compared to Google's BeyondCorp. Can you comment and compare/contrast Boundary with the concepts of BeyondCorp?


Boundary can be viewed as an implementation of some of these ideas!


Is there a simple paper that explains how this works on a technical level? I have a hard time visualizing how a connection to a remote host would be set up if it runs through Boundary. Does "without requiring direct network access" mean Boundary works as a proxy? And how does Boundary enable the connection if the host does not have direct network access?


We don't have a white paper on this yet, but we have a whiteboard video that explains how it works, both conceptually and at a more technical level of deployment architecture and data flow. https://www.youtube.com/watch?v=tUMe7EsXYBQ&feature=emb_titl...


Armon, just wanted to say your whiteboard videos are excellent. And the clarity of thought demonstrated in them over the years has been a great ad for the products too. The low tech aspect also feels more human.

But I had a chuckle at the idea of you wheeling a whiteboard into your house (if that is where it is filmed).


This is a really nice video. I appreciate the patient walkthrough of the concepts and motivation.


Wonderful video, really clear!


By "direct network access" we mean between the client and the end host. The Boundary worker node (which proxies traffic) would need to be able to make a network connection to the end host, and the client in turn would need to be able to make a network connection to the worker node.

This indirection provides a way to keep your public and private (or even private and private) networks distinct to remove "being on the same network" as a sufficient credential for access. At the same time, it ensures that the traffic is only proxied if that particular session is authenticated.


I can see how that works for an internal network. How does this work for SaaS solutions that would normally be directly on the internet? Would they have to be "shielded" to be on a private network and somehow be "Boundary enabled"?

And could this be done in a way that is completely transparent to the user (without them having to start a connection to the worker first, and then make a connection to the desired service)?


Generally speaking this is designed for accessing your own systems, not the systems of a third party being consumed as a SaaS. That said, any such provider that allows you to restrict the set of IPs allowed to make calls to the service would operate in a Boundary-friendly mode.


It would be interesting if the networking model for the end targets could also be inverted, so that an agent (or something) on the end target could make an outbound connection to establish a reverse tunnel to the proxy that user connections could then be sent over.

The use case I'm thinking of is for IoT or robotics, where you have devices you want to manage being deployed into remote networks that you don't have much control over. It's really helpful in this situation if devices make outbound connections only, so that network operators don't have to configure their firewalls to port forward or set up a VPN.

Edit: clearer language
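
(The classic low-tech version of this pattern, for comparison; hosts and ports here are made up:)

    # On the device: dial out and publish the local sshd on the proxy
    ssh -N -R 2222:localhost:22 tunnel@proxy.example.com
    # From the operations side: reach the device through the proxy
    ssh -p 2222 admin@proxy.example.com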


It seems like using WireGuard on the "end target" to automatically connect to (WireGuard on) the proxy would be an easy workaround.

I did basically the same thing years ago for remote console devices deployed inside various customer networks where I had little or no control over the network. At that time, I used OpenVPN to automatically connect back to our "VPN servers" -- providing access to the device even if it was behind two or three layers of NAT (which, unfortunately, wasn't uncommon!).
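
Sketch of the device side with wg-quick (keys and addresses are placeholders); PersistentKeepalive is what keeps the outbound NAT mapping alive:

    cat > /etc/wireguard/wg0.conf <<'EOF'
    [Interface]
    PrivateKey = <device-private-key>
    Address    = 10.100.0.2/32

    [Peer]
    PublicKey           = <proxy-public-key>
    Endpoint            = proxy.example.com:51820
    AllowedIPs          = 10.100.0.1/32
    PersistentKeepalive = 25
    EOF
    wg-quick up wg0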


Second this!

Cloudflare Access allows this, using the cloudflared daemon, which acts as a reverse proxy. It essentially means the endpoint can be closed off to incoming connections from the internet, and you don't need to maintain various firewall whitelists (and hope they don't go out of sync).

Is something like this on the roadmap for Boundary?


Without committing to any specifics, I'll say that we are very aware of use-cases where a daemon on the end host can provide enhanced benefits.

As you can imagine we did quite a bit of research with our existing users/customers while working on the design of Boundary. One thing we heard almost universally was "please don't require us to install another agent on our boxes". So we decided to focus initially on transparent use cases that only require running additional nodes (Boundary controller/worker) without requiring additional software to be installed/secured/maintained on your end hosts.

> the endpoint can be closed off to incoming connections from the internet, and you don't need to maintain various firewall whitelists

If you think about this a bit differently, a Boundary worker is also acting as a reverse proxy gating access to your non-public network resources. You can definitely use Boundary right now to take resources you have on the public Internet, put them in a private-only subnet or security group, and then use a Boundary worker to gate access. It's simply a reverse proxy running on a different host rather than one running on the end host. You wouldn't _need_ to add a firewall to ensure that only Boundary workers can make incoming calls to the end hosts; it's simply defense in depth.


Thinking of this as a means for privileged access management, would it be possible for Boundary to gather artifacts (e.g. keystroke logs and/or screen shots) from the session?

This might trigger some folks but have you explored any options for delivering some or all of the Boundary infrastructure through serverless/faas?


Yes this is on the roadmap!


From a first look this is really exciting. And cool to see you here on HN! I love your positioning and how you're first and foremost building FOSS software and tools that you build on, as opposed to building a commercial offering that you then release software for. It's a vital distinction that sets you apart from e.g. Google.

Let's say you have an org that's doing the whole Consul/Nomad/Vault thing, and starting to have their Nomad jobs use Consul Connect (and its proxies/gateways for external traffic)... that's already a proxy sidecar used for all service ports. How does Boundary fit here? Is it put before/after Connect, is the plan to integrate them, or are they supposed to not be used together?


In an immediate sense you could have targets point to services handled by Connect, so you'd have client -> Boundary worker -> local Connect entrypoint -> end service.

We'll be looking more closely at other integration possibilities going forward!


Are there any plans or a way to use existing tools? By existing tools I mean WinSCP or any other tools that use a normal SSH client, RDP, etc. I guess for SSH and RDP you can just run the Boundary CLI with the predefined target in a terminal embedded into the UI (mRemoteNG, MobaXterm, etc.), but tools like WinSCP are very much used for SFTP file transfers.

A desktop client with a list of services/targets would also be great. Especially for the less technologically inclined individuals.

I know that people have their own opinions on port knocking, but I find it a good tool to remove a lot of noise. Some pre-built tool for that would be nice, but you could always just use fwknop-2.


You can do this already. The `boundary connect ssh` stuff is just a convenience. You can spin up a local Boundary proxy to anything and connect anything that speaks TCP over it. This allows you to use all the tools you just named.
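
For example (the target ID here is a dev-mode placeholder):

    # Built-in SSH helper:
    boundary connect ssh -target-id ttcp_1234567890

    # Or open a plain local TCP listener and point any client
    # (WinSCP, psql, an RDP client, ...) at 127.0.0.1:9000:
    boundary connect -target-id ttcp_1234567890 -listen-port 9000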

A desktop client is on the way; we already have an internal build of parts of it, but it requires more work and didn't make it for 0.1.


Thanks for answering.

The boundary proxy is an OK step, but the user experience should be streamlined, especially if it's for teams and orgs and not just individuals who want to hack scripts. But I fully understand it's a 0.1 release.

Another thing I couldn't find in the docs is support for multiple installations. Let's say I have different VPCs (in different accounts), or I have one on-prem installation and one in a cloud: how do I log in/switch/configure the CLI to work seamlessly with multiple controllers?


We don't have something natively, but you can control the address via the BOUNDARY_ADDR env var or the -addr flag per call, and you can use -token-name with the CLI to switch between named tokens, which can be sourced from different accounts. Together, it'd be pretty easy to write a shell alias to do what you're looking for.
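
e.g. something like this (addresses, IDs, and token names are made up):

    alias b-prod='BOUNDARY_ADDR=https://boundary.prod.example.com boundary'
    alias b-onprem='BOUNDARY_ADDR=https://boundary.internal.example.com boundary'

    b-prod authenticate password -auth-method-id ampw_1234567890 \
        -login-name jane -token-name prod
    b-prod connect ssh -token-name prod -target-id ttcp_1234567890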


Given dynamic resource catalogs and dynamic credentials, any plans to integrate dynamic policy engines, such as Open Policy Agent? https://www.openpolicyagent.org


Yep. This is a little bit further out on the roadmap but yes, we plan on integrating dynamic policy engines.


Hey Mitchell, congrats on the new announcements, great stuff! Out of curiosity, how are you building and operating HCP? Are you running it on top of Kubernetes or Nomad, or are you doing some other custom stuff?


    - Full HashiCorp stack (Nomad, Consul, Vault, Terraform)
    - Cadence (https://temporal.io/)
    - Microservice architecture over gRPC and Consul Connect
    - All services written in Go
    - Customer clusters are created/managed by programmatically running Terraform using just-in-time cloud credentials from Vault
    - All internal TLS certs for customer clusters dynamically created using Vault
    - All external TLS certs for customer clusters dynamically created using LetsEncrypt via Terraform
    - Frontend is Ember


> Customer clusters are created/managed by programmatically running Terraform

I have soooo many questions about best practices doing this. I run a service that needs to dynamically provision AWS resources, and lacking a clear path to do this programmatically, I shell out to Terraform.

* I assume you aren't shelling out :). Do you have any additional helper libraries on top of the Terraform code base to make it more of a programmatically consumable API, as opposed to an end-user application?

* Are you still pointing at a directory with resources defined in HCL, or are the resources defined programmatically?

* What are you using for state storage?

* What is the execution environment for the programmatic Terraform process? Since Terraform uses external processes for plugins, I've hit some issues with resource constraints around max-process sysctls in containerized environments where I have multiple Terraform processes running in the same container.

edit: formatting


Yeah this isn't very easy to get right at the moment so there is not going to be any silver bullet here. We had to iterate on our runner a lot to get this right, but we have a lot of experience since we do this for Terraform Cloud too.

Answering your questions:

> * I assume you aren't shelling out :). Do you have any additional helper libraries on top of the Terraform code base to make it more of a programmatically consumable API, as opposed to an end-user application?

We in fact are. There are lots of security concerns you have to consider with this. We published a library to make this easier: https://github.com/hashicorp/terraform-exec
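
(Under the hood it's still driving the regular CLI; per customer workspace the flow is roughly this, with hypothetical paths:)

    cd /workspaces/customer-123
    terraform init -input=false
    terraform plan -input=false -var-file=customer.tfvars.json -out=tfplan
    terraform apply -input=false tfplan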

> * Are you still pointing at a directory with resources defined in HCL, or are the resources defined programmatically?

HCL mixed with the JSON flavor of HCL for programmatically generated stuff. Variables in JSON format also programmatically generated.

> * What are you using for state storage?

We output it to a file and handle this in an HCP microservice. We encrypt it using the customer-specific key with Vault and store it in a bucket that only the customer-specific credential has access to. If there is an RCE exploit somehow in our workflows, they can only access that customer's metadata.
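
The general shape of that pattern, using Vault's transit engine (the key name and paths here are illustrative, not our actual service):

    # Encrypt the state under a per-customer key before it leaves the
    # runner; only ciphertext ever lands in the bucket.
    vault write -field=ciphertext transit/encrypt/customer-123 \
        plaintext=$(base64 -w0 terraform.tfstate) > tfstate.enc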

> * What is the execution environment for the programmatic Terraform process? Since Terraform uses external processes for plugins, I've hit some issues with resource constraints around max-process sysctls in containerized environments where I have multiple Terraform processes running in the same container.

Containers in HCP and VMs in Terraform Cloud due to increased isolation requirements. HCP has less strict requirements because the Terraform configs and inputs are more tightly controlled.


>> * I assume you aren't shelling out :)...

> We in fact are.

Words cannot express the joy I feel in reading this. Thanks so much for the responses!


Just for clarification, is "Cadence" a thing you built with Temporal? I see nothing on that site called "Cadence".


Temporal is the fork of Cadence by the original creators. It is still open source under the MIT license.

I'm the former creator and tech lead of Cadence and currently the tech lead of Temporal.


Looks interesting! Couple of things:

1. It's not clear to me how you actually secure the targets. Do you just allow access only from the IP address of the controller proxy? In the video you mention a gateway, but there's no description of that in the docs.

2. Is it possible to proxy a web browser session? Or is it limited to individual requests via something like curl at the moment?


Do you think there will be any synergy or potential interaction with consul connect at some point?


Absolutely, 100%. This is already well discussed internally. :)


Looks great! A couple of questions:

Can you view logs of SSH sessions after the fact?

Can you live-view a session?

Can you require a pairing authorization like with https://github.com/square/sudo_pair?


All of the above is on the roadmap.

Our initial focus is on making the connections easy. We have some work to do there still. We'll then move on to more management features like this. They're both super important but from an initial adoption perspective we feel the latter is moot if the former (connections) don't work easily.


Makes sense. You should integrate Tailscale too, so you don't need to shunt traffic through the Boundary nodes.


Would it not be easier to replace SSH with something more modern that actually exposes that as a feature?

I've been thinking about that for the past couple of years; with today's building blocks, an SSH alternative is so easy to build. I bet if you guys were to build or back such a system, it would be of the right quality and get the adoption it needs.

Opaque SSH sessions are such a thorn in my side.


mitchellh

I'm sorry, but please cut the corporate-speak.

Reality is that your statements are different from your actions.

"similarly to Vault, the major featureset of Boundary will remain free"

Sounds great, doesn't it?

Except HashiCorp decided to hide multi-factor authentication in Vault behind the paywall.

I mean, I'll forgive you for putting a lot of the Vault features behind the paywall (e.g. replication).

But for a security product, putting a core component of 21st-century security (MFA) behind the paywall?

Pretty unforgivable.


> * Boundary is free and open source. Similar to when we built Vault, we feel like the solution-space for identity-based security is too commercialized. We want to provide access to this type of security to a broader set of people because we feel it's the right way to think about access control. Note: of course as as a company we plan on commercializing Boundary at some point, but we'll do this similarly to Vault, the major featureset of Boundary will remain free and open source forever.

I hate this corporate speak. You're breaking into the space by giving away (basic, as you will commercialize anything advanced) features under the guise of open source altruism. The products HashiCorp sells are open core, and you should be more honest about it (GitLab is!). I wish you operated more like other, real, open source companies that use subscriptions or managed service offerings and don't lock features behind various obscure pricing tiers. This is Shareware 2.0.

The difference between what HashiCorp does and what a real open source company like Rancher does is stark: HashiCorp has products, Rancher builds communities. Contributors to HashiCorp's stuff have to play in a very specific sandbox, lest they implement lucrative features. Contributors to Rancher help the community at large and have full visibility into the codebase, empowering them to fix or add functionality without restrictions.


I'm sorry, I'm not trying to use any doublespeak here.

Boundary is free and open source. There is no corporate speak here. It is FOSS licensed (MPL2) and everything announced today is completely FOSS.

We do sell open core software and if there is any place where you feel we aren't being honest about that please let me know and I'll work to address that. I added that "NOTE" at the end of the point specifically to ensure I was being honest and show I wasn't trying to hide anything.

We are also starting to offer managed services for folks who prefer to consume our software that way. The managed service offerings do unlock the typically enterprise features. Example: https://www.hashicorp.com/blog/hcp-consul-public-beta


> I wish you operated more like other, real, open source companies that use subscriptions or managed service offerings and don't lock features behind various obscure pricing tiers.

"I want all of the functionality I want without having to pay for it." I hate how discussions around software businesses so often descend into purity tests around how much a company chooses to give away. Software is indeed eating the world, but the eternal battle of who has to pay for the underlying tools of said software continues.


The problem is not unwillingness to pay for software. HashiCorp enterprise products have very interesting features which the open source code lacks (e.g. Nomad namespacing), but they are insanely expensive, so you are forced to use the open source versions, as the enterprise versions are targeted at Fortune X companies.


How is this corporate speak? If an indie dev said his/her project is going to be open source initially and then newer features would get monetized, would your first thought be that this dev is "breaking into the space under the guise of open source altruism"?


If they started out by misleadingly[0] describing it as "$THING is free and open source"? Yes!

Edit: [0]: It's (presumably) technically not false now, but the implication is that $THING is honestly intended to be FOSS, immediately followed by admitting that their actual intent is to sabotage that, embrace-extend-extinguish-style, as soon as it's commercially expedient to do so.


> their actual intent is to sabotage that as soon as it’s commercially expedient to do so.

Sabotage??? Wow, that’s quite an accusation for a company that’s, you know, a company. You might have an argument if they kept quiet about plans to monetize the product later, but that allegation is laughable.

If you’re not comfortable with the terms, don’t use the product. They’re being upfront about their plans. This anti-commercial position is hypocritical.


The issue raised is understandable, but the history of the specific company we are talking about (rather than generalizing!) must be considered.

Is HashiCorp known to do this?

All I've heard are good things about HashiCorp from people who use HashiCorp products.

Second, it can't be forgotten that these are companies. A company exists to create value for itself in some way.

It's the natural behavior of any company.

However, in my opinion, the "open core" design seems to be very much preferred amongst technologists (myself included). Essentially we are paying for additional features which we'd otherwise wait years for from a sole contributor.


Some people felt burned by Vault, where it looked like the free version could be used in production but it couldn't, and then the enterprise version is very expensive.


> it looks like the free version can be used in production

I think you might be confusing vault with another product?

We self-host vault in production, and it doesn't cost us a dime.

(other than the engineers we pay internally to operate it, of course)


Err what? Vault can absolutely be used in production for free. If you want the enterprise features, then you pay.


Why can't the free version of Vault be used in production?


Production-worthiness depends on your needs. The free edition is perfectly good for most people; however, there are several features and modules that are only available in the Enterprise edition. Notably, some of the disaster recovery, scale-out, and multi-factor authentication features cost extra.

ref: https://www.hashicorp.com/products/vault/pricing


I think the problem was that auto-unseal wasn't free (it is now, so kudos to HashiCorp for listening).


> Is HashiCorp known to do this?

HashiCorp and other companies doing "devops" tools are known for using "open core" and hijacking the spirit of open source in many ways.


Man, this really represents the rift between open source and corporate development right now. It seems like there are developers who contribute to open source because they like the mission, the impact, and the values. In contrast, there are others who contribute to open source because their job requires or mandates it. Then there are people who are a mix of both.

All three have wildly different values, and historically corporations aren't very good at listening to anyone who isn't waving a check. They use reasoning like "priorities" to close-source formerly open source projects, bend project values to reflect their own, and wedge projects with funding in exchange for representation or control. Corporate-controlled and corporate-born projects are often used as marketing or for good PR; a cursory browsing of a company's Twitter page will show how they utilize it for this type of end.

I don't really read Mitchell's speak as corporate or double speak, but I do think that referring to HashiCorp (and other) projects as "open source" is a half truth. The line that I draw here is that I don't think Mitchell is lying; rather, I think that open source is now an umbrella term that means very little, and terms like open core, free and open source software, etc. are more precise. We owe that outcome to inviting our corporate friends into the fold of open source with not enough restrictions, tracking, and accountability, but there's a piece of me that feels this outcome was largely intentional, because it has become a means to an end as I described above. These could just be feelings, but the situation is common enough that it's relatable.

I'd encourage corporations to be more transparent in their verbiage, their investments, and their representation in these projects so that they don't continue to confuse people who participate in and enjoy the "free" side of open source. When I look at an open source project, I'd love to know if a majority of the maintainers or funding comes from a corporation. If so, then as someone who strongly believes in the ideals of free software, I may want to stay far away from people who are susceptible to corporate influence and values. On the other hand, that increased transparency may help clear the air and prevent issues from being perceived as non-transparent or as outright misrepresentation.


> that referring to HashiCorp (and other) projects as "open source" is a half truth

Spot on. Corporate "open source" is often open only in terms of licensing, but not in terms of values.

Many companies use tricks to prevent successful forks and keep tight control over the development process.


I can tell you, from both an inner-source and an open source standpoint, that executives (more than engineers, it seems, but that could just be my friends) have an outright fear of forks.


so?


I think that's not a fair thing to say. HashiCorp's projects use MPL 2.0, and please correct me if I'm wrong (IANAL!), but that would allow you to create an open source fork of, say, Consul, call it OpenConsul, and continue development there. That this hasn't happened yet (or if it did, it never gained any traction) is a testament to HashiCorp being a responsible custodian of its projects and their respective communities.


There are folks who would loathe subscriptions or managed services just as much, I hope you realize that.


Looks like Google's BeyondCorp: https://cloud.google.com/beyondcorp. If you are on GCP, you can already use IAP (https://cloud.google.com/iap) to protect your HTTP and TCP backends.

This is not something new. The earliest open source project that I can recall is https://github.com/bitly/oauth2_proxy (albeit it might be missing the part where the proxy passes identity to the backend).

Pomerium is another open source project that's actively maintained. I've been using it as a reverse proxy to all my homelab websites (grafana, miniflux etc). I can now safely access all of these internal resources from outside of my home WiFi with automated SSL certificate configuration and renewal.

You can theoretically protect your SSH connection via these IAP proxies, using the Chrome SSH extension and an open source SSH relay implementation like https://github.com/zyclonite/nassh-relay (but I personally haven't tried that).

Disclaimer: I work for Google and am a casual contributor to the Pomerium project.


Also looks very much like Gravitational Teleport [0], which has been amazing to use. Teleport has a lot of advantages over Boundary right now based on its architecture. But Hashi does a good job of iterating quickly, so I'd guess that, as with most of their products, it will evolve quickly.

[0] https://gravitational.com/teleport/

Disclaimer: I have no affiliation with any of these companies.


Also similar to Cloudflare One which was just announced: https://blog.cloudflare.com/introducing-cloudflare-one/

I think moving away from VPNs is gaining more adoption and is a good thing overall.


Looks like RBAC and SSO are paid features with Teleport (but I may be misunderstanding)


RBAC is paid for, but "Enterprise SSO" is different than the SSO supported in the Community Edition - it's described on their site as: "SSO with Enterprise Identity". They list: Okta, Sailpoint, Active Directory, OneLogin, G Suite, and Auth0 as examples. But, you still get SSO in Community Edition.


My company self-hosts LDAP, so that's essentially a dealbreaker for us.


Even if they were the same, a big difference is that HashiCorp tools usually work on-prem and are OSS.

By default I expect Google to try to lock me into GCP, and I do not trust their OSS tools.


Since you mentioned you're a contributor to a similar project, I invite you to check out our recently released zero-trust service access control solution: https://github.com/seknox/trasa

It's a BeyondCorp-like, user-identity- and layer-7-aware access proxy for RDP, SSH, web, and database protocols, with privileged access management, native two-factor auth agents, and device trust policies.

Disclaimer: I am a core maintainer of this project.


I immediately thought of BeyondCorp as well, and I have only read the papers about it. At my employer, which isn't even that large, we have on-prem hardware running VMs and k8s, some stuff in AWS, some stuff in Azure, and employees all over the world with various devices coming in through a VPN.

The old distinction of "internal network" and "external network" doesn't make much sense.


Is using IAM with managed serverless products (Run, Functions) effectively the same as using IAP + VMs? Curious if there is a world in which managed Cloud Run + IAP makes sense.


...and they all look like SOCKS5 proxies...


You can use https://github.com/cloudflare/nginx-google-oauth to do this with nginx too.


I've used this before and it was great; however, both this and the bitly oauth2 proxy linked above are archived.

https://github.com/oauth2-proxy/oauth2-proxy is a maintained fork.


Seems like the BeyondCorp-ish “zero trust” remote access space is heating up. This looks similar in some ways to Cloudflare One which was announced Monday: https://blog.cloudflare.com/introducing-cloudflare-one/


Not surprising when corporate VPNs have gone from a handful of the company working from home to the entire company working from home.


It's already pretty crowded. https://telegra.ph/ZeroTrust-Vendors-04-23

Expect consolidation. That or it becomes a commodity expectation of any other purchase, and not a selling point.


I’m expecting both. Probably a standard AWS/IAM feature eventually.


Personally I’ve been a big fan of strongDM (https://www.strongdm.com/).

Lightyears ahead of teleport or any of the other solutions out there. Built for great auditing and zero trust.

Best of all it’s multi-protocol. So you can do SSH, SQL, K8s, HTTP all with one access system.

Had it in prod for almost two years. Gonna be a long time before hashicorp or anyone else can catch up with the level of depth.


StrongDM does indeed look interesting. Can it be completely self-hosted? I am asking because some of the architecture docs mentioned "app.strongdm.com" as a necessary element, which has a webpage behind a (customer?) login. This is an external dependency that is not acceptable for my use case.

I haven't found a conclusive answer in their documentation yet.


Justin here, co-founder and CTO of strongDM. The policy and audit functions of our product are hosted by us, but all the sensitive data transit - the proxies themselves - are hosted by you. Hope that helps!


Thanks for the reply, I really appreciate it. Unfortunately, that is not acceptable for what I had in mind.


> Best of all it’s multi-protocol. So you can do SSH, SQL, K8s, HTTP all with one access system.

Teleport is SSH based so you can tunnel other protocols.


I tried to set up SFTP via strongDM, hoping it would work since SFTP runs over SSH, but I failed. It just did not connect.


Interesting, seems similar to Cloudflare One that was announced the other day.

https://news.ycombinator.com/item?id=24753940


Another company to watch here is Tailscale, which is Wireguard-based:

https://tailscale.com/

(disclosure: small Tailscale investor)


I like the people behind Tailscale, but I’ve yet to figure out how they’re different than ZeroTier.


I've tried both. I ended up going with Tailscale because:

- Better throughput overall.

- better NAT holepunching. E.g. ZeroTier gives up entirely with "symmetric NAT" where each outbound connection gets a random source port, but Tailscale has a few extra tricks that it can try (including opening a whole bunch of outbound connections, trying ports at random, and hoping the birthday paradox will kick in, which I think is pretty cool.)

- But most of all, Tailscale didn't suffer from weird intermittent throughput/latency issues between different cloud providers the way that ZeroTier did. Sometimes my machines could talk to each other pretty fast, other times it was clamped down to ~10 MB/s for no apparent reason. Sometimes it only showed up in one direction, sometimes both. I gave up on trying to troubleshoot it when I discovered Tailscale.

That said, I still like ZeroTier a lot and think it's a great project. It also provides a whole LAN layer, with stuff like actual broadcast traffic, for which Tailscale has no equivalent.


Based on hearsay:

* WireGuard (faster)

* easier

* more stable


how does one go about that?


> When a user establishes a TCP session through Boundary, a Boundary worker node seamlessly proxies the connection.

Boundary sounds like the perfect mash-up of Google's bastion-less SSH access to GCE instances and actual IAM. Exciting!


> With Boundary, access is based on the trusted identity of the user, rather than their network location. The user connects and authenticates to Boundary, then based on their assigned roles they can connect to available hosts, services, or cloud resources.

Is this the main idea behind BeyondCorp and CloudFlare One, as well? If so this is the clearest explanation I've seen of it.


It is and it's something I noted, too.


I want to give a shout out to Tailscale. It relies on Wireguard and has been dead simple to setup and configure. Stability has been great as well.


I am also a big Tailscale fan. Is anyone able to do a quick comparison of how Boundary relates?


Tailscale isn't a deny-first, allow-based-on-role/condition type of product. Tailscale creates the equivalent of a wide-open LAN (it has other isolation options, but that kind of control based on the identity of the person on the network isn't its intended goal) where everyone connected can see everyone else.


From what little I know of both, Tailscale provides L2 access into a network that you might not otherwise have access to, and once you're in you can get anywhere from there, but Boundary hands out individual, already-connected TCP sockets directly to services running on endpoints.

If you're looking for something like a VPN and you're just going to SSH over it, either would probably work for you, but while Boundary can allow users to only connect to port 22 on certain hosts, I think if you wanted to do similar with Tailscale you'd be in iptables/ufw and "tagging / authz-ing traffic with unix uids" territory.


Tailscale is based on wireguard, so only does L3.


This looks like an authenticated proxy. I assume you would need to locally reconfigure your clients (ssh, browser, whatever) to use the Boundary server as a proxy.


The other way around: Boundary needs to exec your client application. They're more clear about how it works here: https://www.boundaryproject.io/docs/getting-started/connect-...

Boundary comes with built-in wrappers for ssh, rdp, and postgres, but you can "boundary exec" to run some other application inside the TCP-wrapped transport, apparently.


Some sort of LD_PRELOAD-style trickery? Or are they intercepting syscalls?

edit: seems it's nothing that complicated, more like an ssh-style tunnel where Boundary has a local listening socket which you need to point the client to. That is, if I'm understanding it correctly.


That is correct! The local proxy has a listening socket and handles all the authentication, encapsulation, and forwarding transparently.


So does it intercept all connections on that port (from the client app) and pass them along? Or do I need to reconfigure my client application to talk to localhost:whatever? Your only example is that curl call using a hostname; it's not really clear.


You would point the application at the local port. It operates very similarly to SSH port forwarding. No fancy magic to intercept all traffic.
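
The SSH analogy spelled out (hosts and ports are hypothetical):

    # Classic SSH local forwarding:
    ssh -N -L 15432:db.internal:5432 user@bastion.example.com &
    psql -h 127.0.0.1 -p 15432

    # Boundary's local proxy plays the role of the -L listener: the
    # connect command opens a local port and you aim your client at it.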


mostly copy-pasta from an earlier comment[0] of mine:

https://github.com/99designs/aws-vault/issues/578 was for an issue with remote servers accessing the localhost ec2 metadata service that aws-vault can run, which worked exactly by using DNS rebinding. It was fixed only months ago, so it seems like this is a developing area, and if I were on a red team or pen testing, I would play around with it more.

I visualize the "localhost hole" problem of blindly trusting localhost as an air gap in a pipe (like [1]); anybody could come along and either drop poison into the pipe, or redirect the water coming from the top to their own bucket, or both.

I appreciate that Boundary gives completely generic identity-aware-authenticated TCP sockets, but I don't know of a way, today, to make those not accessible to browsers through dns rebinding attacks.

This is probably much much too far in the weeds and this is unlikely to contribute to a major breach (unlike the aws-vault one where of course attackers would try to access the fake metadata service on the default port, because it's high-value and on a well-known port), but I'm interested in the space.

[0] https://news.ycombinator.com/item?id=23265509 [1] https://districtsales.ca/wp-content/uploads/2019/07/tru-gap-...


I started thinking: using network namespaces to intercept traffic from client applications would be a pretty neat trick.


Any example snippets of what the connection setup looks like on the server side?

e.g. something like a docker-compose sidecar exposing an nginx container to users via boundary would really help me understand how this is supposed to be used in practice.

Looking for an example like my comparison here between argo, wireguard, tailscale, letsencrypt, caddy, and ssh ingress: https://gist.github.com/pirate/1996d3ed6c5872b1b7afded250772...


While we don't have a docker compose example (yet), I think the diagram in our reference architecture for AWS might be useful in visualizing a HA deployment and how a client connects to targets: https://github.com/hashicorp/boundary-reference-architecture


There are a few comparisons being introduced already in this thread, and I'm tempted to ask for more, so I'd love to see documentation on this vs. other solutions, like what's presented for Terraform:

https://www.terraform.io/intro/vs/index.html


Since you asked: we have a commercial zero-trust product very similar to this. As a quick comparison: in our architecture, the worker node (extender) only needs outbound direct access to contact the master node. Unlike many of our competitors, we promote the usage of ephemeral certificates instead of secrets management or minting. We support a number of identity providers and dynamic host directories. Connections can be formed either with native clients or a web browser (SSH, RDP, HTTPS), with session recording for auditing purposes. Check it out here: https://www.ssh.com/products/privx/


I'm looking for more clarification on how I can fit this into my Cloudflare ecosystem, assuming many of your clients are consuming Cloudflare and all of their backbone, security, networking, and remote-work services.

Would I just have Boundary authenticate via Cloudflare Access and whatever identity provider Cloudflare One is integrated with, and then move to the RBAC policy phase of authentication? Is that where I'd see the additional value from Boundary: the on-demand credential rotation to various internal apps and DBs once I'm past the SSO stage? CF One is more of a vertically integrated, all-in-one service addressing all of my other networking and security needs, so it's not really going anywhere.

I think you could do well to release a Cloudflare integration paper; it might help with traction in on-boarding customers.

Thanks!



It looks like you still have to manage users on the hosts for PAM, including SSH keys (or use Vault, I suppose). It's too bad that this can't perform all of that functionality: set up a server, install a Boundary client, and manage all of the PAM things through Boundary.


We plan on integrating with Vault to perform transparent credential injection in the not-too-distant future. This is a 0.1 product after all, and we still have a lot to build!


Honest question: how is this different from or better than setting up an OpenVPN server?


VPN clients traditionally virtualize your network interface entirely. Everything acts as if you are actually physically present, because it's virtualized nicely. It's great because it "just works".

These "non-VPN" solutions seem to use a client on your machine that changes DNS lookups at the OS layer by hooking into the resolver (e.g. getaddrinfo()) and returning the same IP for all domains that are in the list of hosts that should be virtualized. Then only the traffic to the domains that are needed is virtualized; anything else is untouched. YouTube and Netflix won't get piped over your company network, as an example.

Disclaimer: I don't really know that this is how it works but this is how other providers do it.


Search terms "BeyondCorp" and "Zero trust" will get you started.


This looks awesome, great job! One thing that will slow me down from using this is that I've not settled on an ID or access management system. Being a small company, we occasionally need to grant system access to contractors or other dev teams. The problem is we don't want to grant access too widely, and specifying fine-grained controls takes a lot of time.

Armon mentions Okta and Ping, does anyone have any recommendations in this space that would work for managing a small team with occasional on/off boarding of contractors?


This looks pretty interesting both for some projects at work and for my homelab. I'm a data scientist with an amateur interest in devops so apologies if this is a silly question, but I'm trying to get a sense of the use cases.

Would it make sense to use Boundary as a way to manage access to web-based developer environments, using an IdP for authentication (e.g. GitHub, Google, Okta)? I'm thinking of tools like JupyterLab/Hub, RStudio Server, etc. Or is that outside the intended scope of Boundary?


Looks like Yahoo's Athenz https://github.com/yahoo/athenz


What exactly does this add on top of an authenticating reverse proxy like nginx? Is it the RBAC to grant access to specific resources based on their labels instead of per-hostname/service-name auth?


This looks wonderful. I spend a lot of time and energy trying to keep people from breaking modern infra with 1980s IP-based security models, and this could be another tool in the arsenal to help with that.


Can anyone explain whether this can be used to share a Linux Samba server's shares? If yes, could you point me in the right direction? Thanks.


I'm sure it can, given that at its core it just tunnels traffic from one place to another, but to be brutally honest, this is a 0.1 release, and if you can't work out how to do this from the documentation, you're going to have a really bad time working out why it broke down the line.


Is this very similar to CyberArk, but without the logging and recording of usage?


This is great; I hope where I work never implements it :) Getting access to everything in "one hop" is mighty convenient. Especially now that one hop involves 2FA and finding my phone down the back of the sofa while production has a sev one.


I think you meant to write "boundaq".


How does your system compare to Appgate?


This looks like an interesting alternative to k8s ingress, even if the goal is similar, especially when the default ingress controllers don't support e.g. SSH.

Way too much ceremony for scientific compute sites tho


With a name like HashiCorp I expected this to be a decentralized blockchain identity network similar to IBM's Sovrin. Still really cool though; managing IDs and permissions is such a pita.


HashiCorp predates most of the blockchain hype; it's named after founder Mitchell Hashimoto.


Hashicorp is behind Terraform, Vault, Consul, Nomad. Surely you have heard of at least one of those?



