Ngrok was really cool; I had never thought about reverse proxying my localhost before I tried it.
But for the price of ngrok I'm paying for a domain and a 2 GB RAM / 2 CPU VM on Hetzner, using SSH tunnels to an nginx reverse proxy.
And setting up a shared server for a team with subdomains is just 10 mins of config changes per user - no way they can justify the cost for me.
If it were some symbolic price like $20/year I wouldn't bother, but otherwise I'll take the VM I can load other random dev crap onto when I need it.
And you're using standard web tech to set this up - if you aren't familiar with something required here, you'll be better off learning it in the long run (if you're the target audience for ngrok): VM setup, nginx, reverse proxies, SSH tunneling, Let's Encrypt, domain management/DNS - all valuable fundamental skills to acquire on a small project.
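For anyone curious, the setup described above can be sketched roughly as follows; all hostnames and port numbers here are placeholders, not from the comment. On the VM, nginx proxies a subdomain to a local port, and the developer forwards that port back to their machine with a plain SSH reverse tunnel:

```nginx
# Hypothetical nginx server block on the VM (dev.example.com is made up)
server {
    listen 443 ssl;
    server_name dev.example.com;   # one subdomain per developer

    # certificates obtained via Let's Encrypt / certbot
    ssl_certificate     /etc/letsencrypt/live/dev.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dev.example.com/privkey.pem;

    location / {
        # the port the developer's SSH tunnel listens on
        proxy_pass http://127.0.0.1:9001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

From the laptop, `ssh -N -R 9001:localhost:8000 user@dev.example.com` then exposes the local app on port 8000 at https://dev.example.com. Note that `GatewayPorts` doesn't matter here, since nginx connects to the tunnel over loopback.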
As a developer and business owner, I literally can't comprehend this response. I can't imagine having to manage yet another server over running
ngrok http 8000 -subdomain=my-custom-subdomain
from the root of my application (which is wrapped in a single line shell script). For a business, paying $10/dev/month for ngrok is a rounding error.
This response only makes sense if you are not into DevOps, which I think you should be as a developer. Take a random server exposed to the internet and use Caddy or Traefik to set up a reverse proxy. It takes like 20 seconds to create a reverse proxy to a random host connected via SSH.
Even easier: create predefined ports in Caddy and connect to those via SSH. Assign domains to them like proxy<1-1000>.example.com. Connect to a free one via SSH. Done. That's 2 seconds once it is set up.
Even easier and cheaper if you do not have a "random server exposed to the internet": take one of the alternatives shown by OP, which allow you to deploy a DigitalOcean or Hetzner VM automatically. Done, in a few seconds from the command line.
$10 a month for something like this is overpriced, but I guess you prove that it is a valid business.
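The "predefined ports" idea might look something like this in a Caddyfile; the domain and port numbers are invented for illustration:

```
# Hypothetical Caddyfile: each proxyN subdomain maps to a fixed local port
proxy1.example.com {
    reverse_proxy 127.0.0.1:9001
}

proxy2.example.com {
    reverse_proxy 127.0.0.1:9002
}
```

A developer grabs a free slot with `ssh -N -R 9001:localhost:8000 user@example.com` and their app is reachable at https://proxy1.example.com, with Caddy provisioning the TLS certificates automatically.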
What you are describing is "easy" if it's something you practice regularly. It would easily take me, an advanced SWE, an hour to learn the process from scratch or re-learn it after not thinking about the system for 18 months. $120/year pays for itself very quickly.
For anyone doing this as a hobby sure, but if this is your business then it’s nuts not to just pay the cheap toll.
But once you set it up, you've set it up for everyone on your team - you just add subdomains and keys for each person, which can be done by a script (depending on your DNS it might require one manual step, but you need that with ngrok anyway).
So it's not $120/year - it's $120/year/developer.
And setting it up for yourself: I've found multiple instances where having a configurable nginx reverse proxy on a public server saved me a bunch of time; it's much more flexible than ngrok.
If I have 100 devs that's $12,000/year (ignoring any bulk discount), right?
Are you saying that in a company with 100 developers they cannot spare enough time to run their own server cheaper than that?
Additionally, while the amount is 'cheap' in isolation, when you start adding all those 'cheap' services together they quickly add up, and suddenly your per-dev costs start getting out of control. In all cases you should consider the value of the service being provided. While I love ngrok, the functionality can easily be replaced for most use cases, moving you from a per-dev cost to a relatively fixed cost instead (yes, large numbers of users will cause server scaling/cost changes).
Edit:
Just to add on, $10/month is more than I pay for G Suite or Jira on a per-dev basis. The ngrok pricing is drastically off base IMHO.
What about something going wrong on ngrok's end and your hands being tied while nothing is working? Some might prefer similar solutions on servers that they control themselves.
Actually with WireGuard, OpenVPN and the like, you can do some pretty interesting (and sometimes problematic) things as well, so there's definitely a lot of flexibility with a few footguns here and there: https://blog.kronis.dev/tutorials/how-to-publicly-access-you...
Disclaimer: everyone's setup has its own requirements and whatnot. If using external services works for you, that's great! Also, that second article of mine may serve as an example of what NOT to do (you typically want to selectively forward ports, rather than forwarding all of the traffic like I did).
Yes, but 1) I enjoy solving problems like this, and 2) I'm not working and being paid during all that extra free time anyway. There is also a non-zero intangible cost to offloading every task. By your rationale, a highly paid Google engineer would have a maid, a chef, and an Ikea furniture assembler, and would do nothing but code all day, because that's the maximum-value use of their time. Obviously most people do not do that. I'd probably pay up to $100 per day to a toilet paper monopoly so that I didn't have to use my hands every day. Luckily toilet paper isn't a monopoly, so I'm not going to hand over $100/day to Charmin even though that's the value I get from their product.
But I think that point is moot, because a service that costs $X in perpetuity must justify its cost with continuous new features and must be compared to alternatives. If I just wanted a basic proxy, I'd run one of the open source alternatives in a container. If that required too much work, I'd contribute to the project to make it easier for everyone. The value of ngrok is in its extra features, but the bulk of the cost is probably in setting up an actual VM and running it, which isn't that hard these days and can be automated even in an open source offering. When considering the value of ngrok you can't simply justify it by how much time it saves you; you have to compare it to the market, which includes free open source alternatives. So the real question is whether ngrok's added features, plus the time savings over using some open source alternative, are worth the extra cost over the time span you'd be using it. If that time span is effectively infinite, then maybe it makes sense to pay the one-time cost of learning to set up an open source alternative and pay a bit less for the actual VM. Maybe ngrok's added features are indeed worth paying for. But telling yourself your time is not worth looking into this is a lazy justification for unnecessary subscriptions.
I set this up for myself 4 years ago and it just works. Having an nginx instance online has been really useful many times. For example, just recently I had to set up custom routing rules for services configured on an AWS load balancer; I set up the same routing rules on my nginx and tunneled my local services, spending 10 minutes to get a working debugging environment on my local machine. With ngrok I would have needed to set up a local reverse proxy and tunnel that. A small time saver, but there have been a lot of instances like this, where I'm like: oh, let me just throw this on my garbage server to transfer it, let's host it there for a demo, let me install code-server there for a workshop, etc. It was there, running on my domain, and I know how it's set up - a much better value proposition than ngrok for the same price.
I also set it up for a team I worked with. It took me 2 hours to get it running again, then the rest of the day to document it, add keys for people, and explain how and why they should use it. This was a team of 9 people, so in one day I saved $1,080 (minus the cost of a small instance) for the year. That's above my daily rate, so it was worth it for the owner as well.
$10/dev/month is not a lot - but this is such a trivial service that you set up once and forget. Also, there's a lot of friction in getting things like this approved, and pretty much everyone has some discretionary cloud provider budget.
> For a business, paying $10/dev/month for ngrok is a rounding error
Yes, but every tool in the wild wants you to pay 10 bucks per user per month.
GitHub/GitLab, Jira, a random CI/CD tool, Gitpod, private repos, Tailscale/Zero Trust, Docker Hub, 1Password, Okta, Notion...
Depending on your size, those costs can add up pretty quickly.
ngrok is very low-hanging fruit for keeping expenses at bay; 10 bucks per user is ridiculously expensive for the service they are providing.
For developers, I largely agree with you, though even in that demographic it's often nice not to have to fiddle with the details yourself, even if you know how.
But I'm interested in a different demographic. Someone who wants to run their own blog, or run a Nextcloud server for their family, or host albums from their photography work shouldn't need to understand DNS, TLS certs, IP addresses, ports, etc. They should be able to install an app on their laptop (or old Android phone, or Raspberry Pi), go through a quick OAuth flow to tunnel out of their home network, and have their content available to the rest of the web.
Obviously there are UX and security concerns, but these are solvable problems.
> ...and have their content available to the rest of the web.
The issue is that DNS and registrar services are fragmented by design, and there doesn't seem to be a standard DNS API that would make it easy to simply say "select your domain registrar", use that to start an OAuth flow, and then manage DNS records for a site. Therefore, anyone who runs even a simple service like hosted WordPress (without using the host as their registrar) needs at least basic knowledge of what DNS records are and how to put them into their registrar without messing up email records or other subdomains they might have.
There's the DomainConnect protocol[0] which has been around for a few years. I didn't find it particularly well matched for open source projects, so I've also done some work on my own protocol[1]. The current draft is implemented by TakingNames.io (as a provider) and boringproxy (as a client). You can read more about it and watch a demo here[2].
That said, I think DNS might not be the right layer of abstraction to simplify this. I think we need an open tunneling protocol[3].
Do you ever run into performance issues with the shared machines at the low end? I've found the shared CPUs can be a little iffy under load, but if you're basically only doing reverse proxying it must not be an issue.
Also, do you think you'd pay $40/year? That's less than what you're paying Hetzner, for no management.
I've been running stuff like code-server [1] for a workshop I held recently, and I was building simple .NET projects there without any issues.
I wouldn't pay $40/year, because having a random instance up with a domain/SSL and nginx running all the time has been handy many times so far - worth way more than what I pay for the machine. At this point ngrok would be a downgrade for my use case, but if it had been something like $20/year when I was looking into it, I probably wouldn't have bothered setting this up.
Set up https://github.com/antoniomika/sish and get away from manually setting anything up. It uses SSH, supports HTTP(S)/TCP/WebSockets/TLS via SNI, and lets users choose their own tunnel names. You can run it on a free instance from Google or Oracle.
I feel like half of Ngrok's value prop is being undervalued here. Namely the fact that it captures requests for inspection and replay. That feature is an absolute game-changer for developing things like Webhooks.
First, it lets you easily see what the Webhooks payload looks like in real life. Second, it lets you hit your endpoint repeatedly with the same payload (while iterating on your code), without having to trigger the 3rd party event again.
That is a great feature, but a dev-centric one. If your focus is instead on self-hosting from behind a NAT, things like end-to-end encryption become more important. There are always tradeoffs.
I have to say, ngrok was one of the services I've used in life that truly made me go "ohh holy shit." This was several years ago, but it was such a pain point sharing local dev things with other folks, and this made it absolutely trivial. It felt a bit like when I used Prettier for the first time in my code: instantly I knew I couldn't live without it. I hope they're making good money from the project!
While I agree with the sentiment, there are real issues with a truly p2p internet. Even if we switched to IPv6 overnight, everything would still be locked down with firewalls. And what if someone decides to DDoS your blog or geo-locate your IP to figure out what town you live in?
In order to get to a p2p world, I think maybe the best balance point is for most people to connect to the internet through a local tunneling/VPN company which also provides them with a public inbound IP. This has many benefits including:
* Hides all your behavior from your ISP, turning them into a dumb pipe and encouraging net neutrality.
* Since VPNs don't necessarily require hardware, it's much cheaper for VPN companies to start up and compete with each other. This can also have the effect of decentralizing a lot of power into many small local companies, as opposed to large (inter)national ISPs.
* Allows new protocols like IPv6, HTTP/3, etc to be adopted more quickly because VPN companies can implement them for all their users at once.
Unfortunately this technology might need a rebranding first, as a lot of VPN companies these days feel quite scammy and make some dubious claims about what they accomplish.
A p2p internet with no open ports is pretty pointless. And the average p2p application user isn't likely to understand the ramifications of opening ports, even if good tools are provided to do so (which I'm skeptical of). So I don't really see how this improves much on the CGNAT world we're headed into.
I think tunneling is a safer and easier approach for most people. They never have to expose their IP or ports, and the application on the private end of the tunnel can run in a sandbox to protect the rest of the user's stuff.
With tunneling you never open incoming ports. Everything gets tunneled through an outgoing connection from the client to the tunnel server. Maybe we're talking about different things?
Oh p2p apps can and definitely should be sandboxing. That's not a unique feature of tunneling. The exposed IP and trickiness of educating users on opening ports are much bigger issues. That said, if you're willing to accept the risk of exposing your IP, all we really need is wider UPnP support. But just like IPv6 it's been around forever but hasn't taken off.
It's really a chicken-egg problem. We're not going to get IPv6, UPnP, fast upload internet speeds, simple sandboxing on all major OSes, etc, until we can prove that people want p2p apps and there's money to be made by supporting them. But p2p apps are difficult to build without those things, unless you use tunneling...
Note that you can get maybe 90% of the way there by using NAT traversal[0]. But you still need relays (tunneling) for the last 10%. Given the other benefits, I think going straight to tunneling makes sense.
Yeah I'm a bit confused as to what you mean by tunneling. If you mean something like CloudFlare tunnels or ssh -R, how is that different from having the application open a firewall port automatically?
I'm confused. I think we really must have different mental models of this conversation.
What do you mean by "built-in" exactly? You mean adopted standards that are widely deployed?
Assuming that, with tunneling it doesn't matter if it's built-in, because you're side-stepping the devices between you and the public internet. Any device that can open a TCP connection to a remote IP address can create a tunnel, and then it's only constrained by the limitations of the tunnel provider, not by the home router, the ISP, or any software therein.
Maybe this would be more productive and I can learn something if we reduce the scope. How would you solve the DDoS problem in a network full of true p2p IPv6 devices?
> I think we really must have different mental models of this conversation.
It sounds like it. From what I understand, you're saying that each application would connect to a service like CloudFlare to open a tunnel to itself, right?
> What do you mean by "built-in" exactly? You mean adopted standards that are widely deployed?
No, I mean "running an application or library that knows how to open a port/tunnel". Remember that in the p2p case each computer has an externally accessible IP address, so all ports are just blocked by the firewall and the user can open them (or the application can request to have them open).
> Any device that can open a TCP connection to a remote IP address can create a tunnel, and then it's only constrained by the limitations of the tunnel provider, not by the home router, the ISP, or any software therein.
That's true, but in the p2p case all ports are already open; you're using a firewall to block them for security. If you want to allow a program to become connectable, you wouldn't block its ports, and if you didn't want that, you'd block them even in the tunneling case.
The point about DynDNS stands, but you still need some DNS even in the tunneling case (unless you assume that the tunnel endpoint has a static IP but your IP is dynamic, which there's no reason to assume, as it might well be the other way around).
> How would you solve the DDoS problem in a network full of true p2p IPv6 devices?
Same as you'd solve it in the tunneling case, you'd use a service that can absorb the DDoS. There's nothing that makes a tunnel inherently DDoS-resistant, it's just that one of the providers you can use for tunneling (CloudFlare) can provide you with anti-DDoS services.
I think we've mixed up too many things in the discussion. The main benefit I see in p2p vs tunneling is that tunneling hides your IP, but p2p lets you choose the port numbers (you can't choose whatever port you want for your tunnel, because all users have to share an IP). All the other features seem equal to me in both cases.
> It sounds like it, from what I understand, you're saying that each application would connect to a service like CloudFlare to open a tunnel to itself, right?
That would be ideal. But you could also have a GUI program that manages mapping tunnels to ports for apps that don't have built-in support.
> No, I mean "running an application or library that knows how to open a port/tunnel".
Ah, that makes more sense. Yeah, you're correct that it's something that has to be added, but it's as easy as installing any other app, which is much easier than buying a new router, switching to an ISP that supports IPv6, etc.
> The point about DynDNS stands, but you still need some DNS even in the tunneling case
If the tunneling is offered by the same service that sells domains, this can be much more streamlined to the point where it's unnecessary for the user to understand DNS. That's what I'm working towards with TakingNames.io[0].
> Same as you'd solve it in the tunneling case, you'd use a service that can absorb the DDoS.
If you're routing through a 3rd-party anyway, why not tunnel and gain all the additional benefits?
> I think we've mixed up too many things in the discussion
I agree. I didn't want to leave any of your questions hanging, but if you want to continue the discussion feel free to drop anything you find uninteresting.
> p2p lets you choose the port numbers (you can't choose whatever port you want for your tunnel, because all users have to share an IP)
This is not a bad point. I do wish DNS had a way to specify ports in records. However, this is essentially solved by SNI routing. It does have the limitation of forcing you to use TLS for everything, but I don't think that's too big of a problem for now.
To be clear, I wish we had IPv6/UPnP everywhere and things worked the way you're describing, but I don't see it happening for many years to come. I'm betting on tunneling for the next decade or so.
> That would be ideal. But you could also have a GUI program that manages mapping tunnels to ports for apps that don't have built-in support.
Right, this sounds basically like what the firewall does in the p2p case.
> which is much easier than buying a new router, switching to an ISP that supports IPv6, etc.
Yes, if you can't just open a port, a tunnel is easier, true.
> If you're routing through a 3rd-party anyway, why not tunnel and gain all the additional benefits?
Well, you wouldn't be routing through a third party. DDoSes are costly and you usually don't need to protect against them if you're a home user.
> I do wish DNS had a way to specify ports in records.
That's what SRV records are for, no?
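For reference, an SRV record does carry a port (alongside priority, weight, and target); a hypothetical zone-file entry:

```
; _service._proto.name       TTL  class SRV priority weight port  target
_http._tcp.example.com.      3600 IN    SRV 10       5      8080  host.example.com.
```

The catch is that browsers never adopted SRV lookups for HTTP, so for web traffic the port in the record goes unused, which is part of why SNI-based routing comes up instead.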
> but I don't see it happening for many years to come
Hmm, I don't think we need it everywhere, just where people care to use it, but maybe. I'm not as pessimistic, as my home ISP gives me a (fairly static? I've never actually checked) IPv4 where I opened a bunch of ports, but it would certainly be better if each of my home devices could be addressable from the internet.
TakingNames is an interesting idea, though I'd have to think a bit more about it to tell if I'd find it useful :P The current method of just opening a port on my router has worked extremely well for me so far!
It was also a bit of a nightmare of abuse, and a golden age for worm attacks. If you annoyed the wrong person on IRC, suddenly your internet was DoS'ed offline.
You need to run a Tor daemon on the server side and Tor Browser (or tor) on the client side, but other than that there is no setup or intermediary service/server necessary. This solves a lot of the pain of many ngrok alternatives, which still require you to run (or pay for) some central hub server, have a public IP and DNS, etc.
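A minimal sketch of the server side, assuming a stock tor install (paths vary by distro):

```
# torrc: expose a local web app on port 8000 as a v3 onion service
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8000
```

After restarting tor, the generated .onion hostname is written to /var/lib/tor/my_service/hostname, and clients reach it through Tor Browser with no inbound ports open on either side.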
The client part is exactly why people might use ngrok to get a public hostname - the number 1 priority is access to an internal resource from anywhere in the world with minimal setup, and ngrok (or another mediator) means nobody needs any inbound ports open besides via loopback on the host.
Yeah, it depends - I use ngrok (or similar services) for access to internal stuff I run. I don't want to pay money to run a VPS in the cloud when I have a NAS, etc. running it at home. But I also don't want to pay a premium for a public IP at home, and deal with the immense security headache of securing such an entrypoint from the public internet. So a Tor hidden service works great as a no-cost VPN-like alternative.
But yeah if your goal is "expose this thing with a public IP/DNS" then there is no way around using your own server, or buying/leeching off a service like ngrok.
You don't need any incoming ports open for a tor hidden service either though. Both the server and client side connect out to tor, and then get routed to each other over it.
Disclaimer: I'm currently incubating a similar product.
Honestly I would be very interested in hearing what you think the best plan for competing with Cloudflare Tunnel is. I've been keeping a close eye on this space for a couple years now, and Tunnel seems well on its way to dominating the mindshare.
It can be very hard to compete with a bundled loss-leader offering.
happy to chat more about our future plans offline, feel free to send me a note. i’m alan at ngrok dot com. would be curious to see what project you’re incubating!
I'll need to make a PR to add https://playit.gg :). I've been working on it for the past two years. It offers UDP support, the ability to tunnel a range of ports, and a fixed IP and port. I recently purchased a /24 and moved the entire service to an anycast network; an entire datacenter can go down and connections will keep going.
The main feature I want out of any of these, and can't seem to find anywhere, is a way to expose private services online, hidden behind some sort of social auth. So think ngrok + oauth2_proxy.
Disclaimer: I'm a Cloudflare employee, but not on a team related to Tunnel.
I use Cloudflare Tunnel for all of my self-hosted applications (e.g. https://piperswe.me). Configuring it can be a bit of a pain (generating secrets, setting up systemd service, etc.), but its integration with the rest of the Cloudflare CDN products is worth it IMO.
A simple alternative: I have a new ASUS router with built-in support for their DynDNS clone. I can then buy a cheap domain, CNAME it to the ASUS domain, and have Caddy proxy-forward a subdomain to my laptop.
It will be a lot cheaper per month than ngrok, of course, but it takes more setup.
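If I'm reading the setup right, it amounts to roughly this; the names and LAN address below are hypothetical:

```
# At the registrar: blog.example.com CNAME my-router.asuscomm.com
# Caddyfile on an always-on box at home:
blog.example.com {
    reverse_proxy 192.168.1.50:3000   # the laptop on the LAN
}
```

This still needs ports 80/443 forwarded to the Caddy box on the router, which is the main setup step a tunnel service spares you.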
Greetings! Director of Eng here at ngrok. Figured I should say something (more of a lurker than I care to admit) since someone mentioned the recruiting side of things ;)
We haven't been bought and haven't taken any capital, but we have made some massive investments into our product: new ways of connecting, new ways of securing, reliability, etc. There are a couple announcements coming soon I don't want to spoil. We've been a quiet crew for a while!
On the recruiting side, we've got tons of openings in product engineering (junior, senior, management, PMs) and across the rest of the company. Please don't hesitate to reach out if you have an interest in working with us.
Personally I think they might be feeling the heat from Cloudflare Tunnel. To quote myself from a couple weeks ago[0] (note that an ngrok employee responded to some of my concerns):
> As much as it pains me to say it, Cloudflare seems well positioned to eat ngrok's lunch. AFAIK they offer everything ngrok does plus auto TLS certs, CDN, domain name registration, and tons of other features. They also have way more edge servers for terminating tunnels close to the origin devices. And they can afford to do all this for free as a loss leader product. It's the AWS bundling effect. Oh and the client source code is available.
> I don't want to see Cloudflare completely take over this space, but Cloudflare Tunnel is tough to compete with.
> One knob ngrok could still turn is adding auto TLS certs which are managed on the client side. Then you can offer e2ee which is something Cloudflare will probably never do.
I'm amazed even to hear the words "an ngrok employee", as if it were Google or something. The last time I remember reading about ngrok, it was just one guy's repo on GitHub. That doesn't even feel that long ago. I'm stunned and perplexed to hear all of this.
Ngrok is a high-quality, well-engineered, bootstrapped product. Honest question, why be stunned and perplexed that it is successful enough to build a company around? Plenty of software companies start this way. And there's a huge spectrum between "having employees" and "being Google".
Long-time lurker here. Here is one I have written in Java with Netty, inspired by ngrok and localtunnel, which allows for inspection of requests and provides replay. It's still in development with a lot of rough edges.
I would like to mention "Loophole Cloud". I'm one of the developers of Loophole and might be a little biased, but it does offer end-to-end encryption, custom hostnames, and unlimited tunnel active time at no cost. I would be happy to hear any feedback we could learn from :)
Cloudflare Tunnel is what I (maintainer of OP list) currently recommend for most people. It's an excellent free service. Main downsides are:
* You can't do end-to-end encryption, ie Cloudflare terminates TLS for all requests and can see your data.
* Cloudflare's ToS specifically says you can only use the free tier for HTML websites. Anything else (e.g. video streaming, photo albums, etc.) is technically grounds for suspension, although that seems to be rare in practice.
* Not open source. You can't self-host the server. (EDIT: Client is Apache licensed now) Client source code is available but not FOSS.
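For context, the client side of Cloudflare Tunnel is configured with a small YAML file; the tunnel name and hostname below are made up, so treat this as a sketch rather than a copy-paste recipe:

```yaml
# ~/.cloudflared/config.yml (hypothetical tunnel)
tunnel: my-tunnel
credentials-file: /home/user/.cloudflared/my-tunnel.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8000
  - service: http_status:404   # catch-all required as the last rule
```

`cloudflared tunnel run my-tunnel` then keeps the outbound connection open; no inbound ports are needed, but the TLS termination at Cloudflare's edge mentioned above still applies.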
I really like ngrok, but I keep bumping into a limitation: if I leave a development tunnel open from one machine, there is no way to shut it down remotely and open it elsewhere.
I am hoping this will be added one day, because it has blocked me a number of times already.
Maintainer of the list here. Take a look at my boringproxy project. Once you have the clients running on each machine, all the tunnels can be managed through a web UI on the server. With a little elbow grease[0], you can also SSH into any of the clients (as long as they have sshd running).
That would be a handy thing to have. A workaround could be to have a Raspberry Pi or similar as your SSH access point to the local network via ngrok, with your main ngrok service running on another machine on the local network in screen or tmux. Then just SSH into the Raspberry Pi, connect to the machine running the main ngrok service, drop into the session, and reconfigure as you like.
I'll need to open a pull request for [tolocal](https://github.com/nelsonenzo/tolocal) :). It's clunky because it requires Node, Terraform, and AWS, but all your stuff is self-hosted and can be e2e encrypted, costs almost nothing, can be used with real domain names, etc. I would like to make it all JS at some point (the actual Terraform is minimal), but it's hard to see why when Cloudflare Tunnel is a thing now.
I've been happily using localhost.run for a few years now. It's free if you don't mind randomized subdomains, and I like that I just connect with SSH instead of needing client software.
Last week I tried setting up boringproxy from this list on a server running NixOS, but gave up at the certificates step. I wish there were a simple NixOS module for something like this.
If only IPv6 hadn't been designed by network giants gripping their aging boxes built on 1940s concepts, monopoly-usurer ISPs who literally live off a centralized network and profit from NAT, and advertising behemoths who still need to attribute clicks to advertising campaigns... we wouldn't be compiling lists of dozens of reverse proxy solutions that do nothing but work around all these things getting in the way of what the network should have been already.
Nice, I didn't realize there was a C port of the ngrok client. Since this seems intended for embedded devices/routers, you might also be interested in rathole: