You can relay through any other SSH server if your target is behind a firewall or subject to NAT (for example the public service ssh-j.com). This is end-to-end encrypted (SSH inside SSH):
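Roughly, with any relay host you can SSH into (hostnames and ports here are placeholders; the exact ssh-j.com invocation is documented on their site):

    # On the machine behind NAT/firewall: open a reverse tunnel to the relay
    # (the relay's port 2222 now forwards back to this machine's sshd)
    ssh -N -R 2222:localhost:22 user@relay.example.com

    # On the other machine: hop through the relay to reach it
    ssh -J user@relay.example.com -p 2222 user@localhost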
This doesn't do most of what dumbpipe claims to do: it doesn't use QUIC, doesn't avoid using relays when possible, doesn't pick a relay for you, and doesn't keep your devices connected as network connections change. It also depends on you doing the ssh key management out-of-band, while dumbpipe appears to put the keys into random ASCII strings.
You could also set up a wg server, have both clients connect to it and then pass data between the two IPs. There's still a central relay passing data around, NAT or no NAT.
Having run servers on OpenVPN, IPSec and Wireguard.. Wireguard is very mundane software.
I still get the chills at the deep and arcane configuration litanies you have to dictate over calls to get a tunnel configured. And sometimes, if you had to integrate different implementations of IPSec with each other, it just wouldn't work and eventually you'd figure out that one or two parameters on one side are just wrong.
And if you don't want to manage iptables/nftables manually to firewall the traffic from the VPN (which is ugly, I agree), ufw and firewalld both recently introduced forwarding-rule management (route rules and policies, respectively).
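For example with ufw (interface names are placeholders; firewalld's equivalent is its policy objects):

    # allow forwarded traffic coming in from the VPN interface out to the LAN
    ufw route allow in on wg0 out on eth0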
Yes, the initial setup and troubleshooting of IPSec can be a nightmare. I've spent hours on bridges with people getting it up and running properly.
Wireguard is a damn simple breath of fresh air. There's so little to configure and go wrong. The mental model took a little bit of time to click for me (every endpoint is a peer, it's not client/server) but after that it was a breeze.
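For anyone curious, the whole "every endpoint is a peer" setup is only a handful of commands. A rough sketch with plain wg/ip; keys, addresses and endpoints are placeholders:

    # Peer A (Peer B mirrors this, swapping keys, addresses and endpoint)
    ip link add wg0 type wireguard
    ip addr add 10.0.0.1/24 dev wg0
    wg set wg0 private-key ./peer-a.key listen-port 51820
    wg set wg0 peer <PEER_B_PUBLIC_KEY> endpoint peer-b.example.com:51820 allowed-ips 10.0.0.2/32
    ip link set wg0 up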
Interested to know how you've been burnt by wireguard; what did it not do that you were expecting? What failures have you experienced with it that were the fault of wireguard?
I've been using it (fairly simply, mind you) and it's been pretty solid for a number of years, and it was an administrative relief in comparison to OpenVPN, which I'd been using before wireguard existed. The single-UDP-port usage makes me question your comment about impenetrable iptables rulesets.
(OpenVPN was great for its time too; the sales reps at the company where I introduced it loved the ability to work from the road, way back in the early 2000s.)
"Interested to know how you've been burnt by wireguard; what did it not do that you were expecting?"
Speaking just for myself, I expected it to be as easy to set up as Tailscale. Not to be set up in exactly the same manner as Tailscale, I understand they are not identical technologies, but I expected the difficulty to be within spitting distance of each other.
Instead I fussed with Wireguard for a few days without it ever working for even the simplest case and had Tailscale up and running in 5 minutes.
I think I recognize the pattern; it's one that has plagued Linux networking in general for decades. The internet is full of "this guy's configuration file that worked once", and people banging on it without understanding. 80% of these are for obsolete versions of obsolete features in obsolete kernels, and the search engines are so flooded with them that if a perfect and beautiful guide exists, one that explains exactly how it all works together and gives you the understanding to fix the problems yourself, it's too buried to ever find. It also doesn't help that these networking technologies are some of the worst when it comes to error messages and diagnosis. Was I one character away from functionality, or was my entire approach fundamentally flawed and I was miles from it working? Who's to say; it all equally silently fails to work in the end.
Tailscale changes your DNS lookups, adds a bunch of iptables rules, and then unfortunately broke features without noting it in the changelog (because security, I guess).
While wireguard has more of a maintenance overhead tracking public and private keys and ip addresses, it does less magic -- and I really just want things to work these days.
I really wish the people who feel like bringing up that comment would do a modicum of research instead of perpetuating unhelpful and wrong information.
For one, that wasn’t “HN’s reaction”. Read the thread, it’s full of praise. You’re talking about one specific comment.
For another, the commenter was incredibly respectful and even conceded some of the points after they got a reply. Again, read it, it’s right there in your link.
That is an excellent example of how to communicate, and we would be lucky if all interactions followed that example. We should be praising it, not deriding it.
Somewhat relevant: I have a list of (mostly browser-based, plus a few no-setup CLI) tools [1] to send files from A to B. I keep sharing this list here to fish for more tools whenever something like this comes up.
One limitation of iOS is the inability to use Bluetooth to transfer an image/video file to a Bluetooth receiver such as a Windows PC. Apple's documentation says to use a wired connection. https://support.apple.com/en-ca/120267
If LocalSend is running on iOS and Windows does LocalSend have the ability to send photos?
It should work, though I haven't actually tried it. That's not a limitation of iOS, just Apple's own syncing app/protocol. LocalSend is basically an http client/server with network device discovery, as far as I know.
Recently this project caught my attention. It claims to support multiple different protocols, works in various web browsers (even IE6), and is extremely easy to set up (a single Python file). I haven't given it a try, just wanted to share.
Every time someone calls a product “dumb,” I get a little excited, because it usually means it’s actually smart. The internet is drowning in “smart” stuff that mostly just spies on you and tries to sell you socks. Sometimes, I just want a pipe that does what it says on the tin: move my bits, shut up, and don’t ask for my mother’s maiden name.
I've been writing raw POSIX network code today. A lot of variables shorten "socket" to "sock". And my brain was like.. um, bad news! This is trying to sell us on their special sock(et)s!
I wonder why it's not standard that you can simply connect two PCs to each other with a USB cable and have them communicate/transfer files. With the same protocol in all OSes, of course. It seems like this should have been one of the first features of USB from the beginning, imho.
I know there's something about USB-A to USB-A cables not existing in theory, but this would have been a good reason for them to exist, and USB-C of course can do this.
Also, Android to PC can sort of do it, and is arguably two computers in some form (but this was easier when Android still acted like a mass storage device). But e.g. two laptops can't do it with each other.
You actually can connect two machines via USB-C (USB4 / Thunderbolt) and you get a network connection.
You only get Link-Local addresses by default, which I recall as somewhat annoying if you want to use SSH or whatever, but if you have something that does network discovery it should probably work pretty seamlessly.
The same thing happens with two machines connected via an Ethernet cable, which appears to be what this USB4 networking feature does: it presents an Ethernet NIC to software, but with different lower-layer protocols underneath.
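(For what it's worth, SSH does work over link-local addresses if you append the interface as a zone ID; the address and interface name below are placeholders:)

    # find neighbours on the link via the all-nodes multicast address
    ping6 ff02::1%en5          # "ping -6" on newer Linux

    # ssh to a link-local address, scoped to that interface
    ssh user@fe80::1ff:fe23:4567:890a%en5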
AIUI, most NICs these days do what is called "auto-crossover"; i.e., they'll detect the situation and just do the "crossover" in the NIC itself. A normal cable works.
The incredible technology you're describing was possible on the Nintendo DS without wires and no need for a LAN either. It's a problem that's been solved in hundreds of different ways over the last 40 years but certain people don't want that problem to ever be solved without cloud services involved.
This dumb pipe thing is certainly interesting but it will run into the same problem as the myriad other solutions that already exist. If you're trying to give a 50MB file to a Windows user they have no way to receive it via any method a Linux user would have to send it unless the Windows user has gone out of their way to install something most people have never heard of.
> It's a problem that's been solved in hundreds of different ways over the last 40 years
If we add the requirements of:
1. E2EE
2. Does not rely on Google. (Or ideally, any other for-profit corporation.)
That eliminates like 90% of the WebRTC P2P file-transfer things that have graced HN over the last decade, as all the WebRTC code seems to just copy Google's STUN/TURN server addresses from one another.
But as you say,
> but certain people don't want that problem to ever be solved without cloud services involved.
ISPs seem to be in that set. IPv6 would obsolete NAT, but my ISP was kind enough to ship an IPv6 firewall that by default drops incoming packets. It has four modes: drop everything, drop all inbound, a weird intermediate mode that is useless¹, and allow everything.
(¹this is Verizon Fios; they claim, "This feature enables "outside-to-inside" access for IPv6 services so that an "outside" Internet service (gaming, video, etc.) can access a specific "inside" home client device & port in your local area network."; but the feature, AFAICT, requires the external peer's address. I.e., I need to know what my roaming IP will be before I leave the house, somehow, and that's obviously impossible. It seems clearly slapped on so they can say "it comes with a firewall", but was never used by anyone at Verizon in the real world prior to shipping…)
starlink doesn't even give you publicly routable ipv6 unless you bypass the starlink router.
My Starlink is such that I cannot install/set up things like pfSense/OPNsense, because the connection drops sometimes, and when either of those installers fails, it fails all the way back to "format the drive y/n?" Also, things like IPCop and m0n0wall et al. don't seem to support IPv6.
I looked into managing IPv6 from an "I am making my own router" perspective, and no OS makes this simple. I tried with Debian and could not get it to route any packets. I literally wrote the guide for using a VM for IPCop and one of the "wall" distros, but something about IPv6 just evades me.
> starlink doesn't even give you publicly routable ipv6 unless you bypass the starlink router.
If you've not got an Internet[-routable] address, are you truly connected to the Internet?
> I looked into managing IPv6 from an "I am making my own router" perspective, and no OS makes this simple. I tried with Debian and could not get it to route any packets. I literally wrote the guide for using a VM for IPCop and one of the "wall" distros, but something about IPv6 just evades me.
TBH, I would think that this is just enabling v6 forwarding. That wouldn't do RA or DHCP, I don't think, but I don't think you'd want that, either. (That would be the responsibility of the upstream network.)
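(Concretely, I'd expect the forwarding part to be just the usual sysctl; a sketch, assuming a Linux box doing the routing:)

    # let the box forward IPv6 between its interfaces
    sysctl -w net.ipv6.conf.all.forwarding=1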
You would want that. The upstream network can't do it for you, because RAs can't be routed. Same deal for DHCPv6 (although personally I'd say you can probably skip that and just use SLAAC).
In order to have public IPv6 on Starlink you need to take the /56 they delegate to you and carve it into /64s (a /56 holds 256 of them); I tested it with a store-bought router, and everything worked as long as you can do PD with DHCPv6 or whatever. I returned the router because it was $200, and I will eventually figure it out on a VM.
# On the upstream interface (systemd-networkd .network file): request a /56 via DHCPv6 prefix delegation.
[Network]
DHCP=yes

[DHCPv6]
PrefixDelegationHint=::/56

# On each downstream interface (.network file): send router advertisements and assign a delegated /64.
[Network]
IPv6SendRA=yes
DHCPPrefixDelegation=yes
One frustrating part is that as far as I can tell nothing supports easy downstream DHCPv6-PD delegation, so machines on the downstream network that want their own prefix won't be able to get one automatically. OpenWRT's network config daemon supports it, but nothing on regular Linux does.
I mean, Windows users install things they've never heard of all the time.
If this was a real thing you needed to do, and it is too much work to get them to install WSL, you could probably just send them the link to install Git and have them use Git Bash to run that curl-install shell script for dumbpipe.
And if this seemed like a very useful thing, it couldn’t be too hard to package this all up into a little utility that gets windows to do it.
But alas, it remains “easier” to do this with email or a cloud service or a usb stick/sd card.
There are USB 2.0 (and probably 1.x) devices with USB-A on both sides and a small box in the middle that acts as a network crossover between two machines; I've seen them in stores. I've never used one, because I know how to set a static IP/CIDR. And, as others have mentioned, this does just work with USB-C.
"I wonder why it's not standard that you can simply connect two PC's to each other with a USB cable and them communicate/transfer files."
After TCP/IP became standard on personal computers, I used an Ethernet crossover cable to transfer large files between computers. I have always had some non-networked computers. USB sticks were not yet available.
Today the Ethernet port is removed from many personal computers perhaps in hopes computer owners will send ("sync") their files to third party computers on the internet (renamed "the cloud") as a means of transferring files between the owner's computers.
Much has changed over the years. Expect replies about those changes. There are many, many different ways to transfer files today. Expect comments advocating those other methods. But the crossover cable method still works. With a USB-to-Ethernet adapter it can work even on computers with no Ethernet port. No special software is needed. No router is needed. No internet is needed. Certainly no third party is needed. Just TCP/IP which is still a standard.
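A minimal sketch of the manual setup on two Linux machines (addresses and interface names are arbitrary; older systems used ifconfig instead of ip):

    # Machine A
    ip addr add 192.168.77.1/24 dev eth0
    ip link set eth0 up

    # Machine B
    ip addr add 192.168.77.2/24 dev eth0
    ip link set eth0 up

    # then transfer however you like, e.g. from B:
    scp bigfile.iso user@192.168.77.1: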
Not on Windows 11 you can't. They removed that for... reasons. They also removed the lovely hosted network that was added with 7 (Vista?), so now you can't network two modern Windows devices without something else (a physical cable, or a non-Windows or older Windows device to host a network). Stuck with a low-speed Wi-Fi router and USB 2 cables? It's gonna take you hours to make that one-time 200 GB transfer, unless you wanna drag it down the stairs (the only USB 3 cables I own are mini USB 3 cables for an older external hard drive that I no longer own; all my USB-C cables are USB 2/PD only... I think...).
One … can. I have a script for this myself, but I only set that up after wanting to do ad hoc and then realizing that it was basically impossible to do from scratch. Ad hoc requires an Internet connection to download the knowledge necessary to do ad hoc, and that utterly defeats the point of it all. (Except in how I've now cached that into a script.)
Ad hoc requires the machines be in "WiFi shouting range".
I was about to talk about how online help files are forgotten these days, and should guide you to the right information to set up an ad-hoc network, but I was disappointed three times over by macOS.
macOS does not have any offline documentation like pretty much every OS used to. When I turn off my WiFi and then open "Mac User Guide" or "Tips for your Mac", they both tell me they require an internet connection.
When I re-enable my internet connection, neither of those apps have information about how to set up an ad-hoc wifi network.
When I looked up how to create an ad-hoc network in other sources, I discovered that the ability to create an ad-hoc network was apparently removed from the GUI in macOS 11, and now requires CLI commands.
I hate how modern tech companies assume that everybody always has access to a high speed internet connection.
Oh so you bought two computers at a store with the operating system preinstalled and have never connected them to the Internet? And you have no Internet access whatsoever to look things up for your two 100% air gapped computers?
That's sort of a disingenuous phrasing, but yes. I'm not thinking of them as "air gapped", since I'm intentionally attempting to form an ad hoc WiFi network between them, but yes, until two laptops are connected over a network, yeah, they're effectively "air gapped" I suppose.
They have normal, consumer OSes on them. Whatever one might reasonably already have preinstalled.
I'm sitting at an macOS machine presently. If I poke around the Wi-Fi menu, and the Wi-Fi settings … IDK, I come up empty handed.
So let's cheat, and Google it. But the entire point of my post above is that needing to Google it defeats the point; if I have an Internet connection (which would be required to Google something) — I can just network the various machines using that Internet connection. In every situation I've wanted to form an ad hoc network, it is because I do not have any access to the Internet, period, but I still have the need to network two machines together.
Anyways, Gemini's answer:
> To set up an ad-hoc Wi-Fi network on macOS, you can use the "Create Network" option in the Wi-Fi menu.
Apparent hallucination, since there is no such menu item.
The first result says the same thing:
> 1. Click the wifi icon on the menu bar. 2. Click “Create network. . .”
(… I suppose I see where the training data came from).
The next result is a reddit thread; the thread is specifically about ad hoc WiFi. The only answer is a link to a macOS support article; that article tells us to go to General → Sharing, and use "Internet Sharing". But AFAICT, that's for sharing an existing WiFi connection over a secondary medium: i.e., if you have WiFi, you could share that connection over a TB cable, or some other wired medium. And "To Devices Using" conspicuously lacks "also over WiFi", or similar. I.e., this also isn't what we're looking for.
The rest of the results are mostly all similarly confused, and I've given up.
So even if I had Internet, … I still can't do it. So if I'm actually in a situation where I need an ad hoc, it definitely isn't happening.
> if I have an Internet connection (which would be required to Google something) — I can just network the various machines using that Internet connection
Wow, tell me you don’t know how computer networks work without telling me you don’t know how computer networks work.
I think there must be some misunderstanding? I think deathanatos just wants an easy way to send files between computers when the internet is down, which seems decently reasonable.
I never got around to installing NM in Linux. wpa_supplicant on its own is just … mostly good enough.
Perhaps that's mea culpa, and I suppose perhaps I should try NM again, but I also sort of thought this wouldn't be rocket science, until I tried to do it and failed.
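(For reference, the bare wpa_supplicant version of an open ad hoc (IBSS) network is roughly the following; this is from memory, driver support for IBSS varies, and the interface name, SSID and addresses are placeholders:)

    # adhoc.conf
    network={
        ssid="adhoc-net"
        mode=1
        frequency=2412
        key_mgmt=NONE
    }

    # bring it up (open network, no encryption)
    wpa_supplicant -B -i wlan0 -c adhoc.conf
    ip addr add 10.42.0.1/24 dev wlan0    # 10.42.0.2 on the other machine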
> Today the Ethernet port is removed from many personal computers perhaps in hopes computer owners will send ("sync") their files to third party computers on the internet (renamed "the cloud") as a means of transferring files between the owner's computers.
Oh come on, this isn't a conspiracy. For the last decade, every single laptop computer I've used has been thinner than an ethernet port, and every desktop has shipped with an ethernet port. I think the last few generations of MacBook Pros (which were famously thicker than prior generations) are roughly as thick as an ethernet port, but I'm not sure it'd practically fit.
And I know hacker news hates thin laptops, but most people prefer thin laptops over laptops with ethernet. My MacBook Air is thin and powerful and portable and can be charged with a USB-C phone charger. It's totally worth it for 99% of people to not have an ethernet port.
You used to be able to connect two PCs together via the parallel port. I had to do this once to re-install Windows 95 on a laptop that had only a hard drive and a floppy drive. It was painfully slow, but it worked.
You can plug an ethernet cable in between machines and send files over it! So that period where this would be useful already had a pretty good solution (I vividly remember doing this like 3 times in the same day with some family members for some reason (probably nobody having a USB drive at the moment!))
I realize you are asking for cross-OS, but Mac OS X was doing this in 2002 (and probably earlier) for PowerBook models with an ethernet cable between them. As I recall, iBooks didn't do this even if they had the port, but PowerBooks would do the auto-crossover, then Finder/AFP would support the machines showing up for each other.
I actually have a USB-A to USB-A cable. It came with proprietary Windows software on an 80mm CD-ROM. It wasn't long enough to connect two desktops in the same room if they weren't on the same table, and I just never tried with a laptop because all my laptops have run Debian or some variant thereof since 2005 or so.
Half the time I need a dumb pipe, it's from personal to work. Regrettably, work forces me to use macOS, and macOS's bluetooth implementation is just an utter tire fire, and doesn't work 90% of the time. I usually fall back to networks, for that reason.
Of course, MBPs also have the "no port" problem above.
> Or using local WiFi (direct or not)
If I'm home, yeah. But TFA is advertising the ability to hole-punch, and if I'm traveling, that'd be an advantage.
> In the iroh world, you dial another node by its NodeId, a 32-byte ed25519 public key. Unlike IP addresses, this ID is globally unique, and instead of being assigned,
ok but my network stack doesn't speak nodeID, it speaks tcp/ip -- so something has to resolve your public keys to a host and port that I can actually connect to.
this is roughly the same use case that DNS solves, except that domain names are generally human-compatible, and DNS servers are maintained by an enormous number of globally-distributed network engineers
it seems like this system rolls its own public key string to actual IP address and port mapping/discovery system, and offers a default implementation based on dns which the authors own and operate, which is fine. but the authors kind of hand-wave that part of the system away, saying hey you don't need to use this infra, you can use your own, or do whatever you want!
but like, for systems like this, discovery is basically the entire ball game and the only difficult problem that needs to be solved! if you ignore the details of node discovery and name mapping/resolution like this, then of course you can build any kind of p2p network with content-addressable identifiers or whatever. it's so easy a cave man can do it, just look at ipfs
We do use DNS, but we also have an option for node discovery that uses pkarr.org, which is using the bittorrent mainline DHT and therefore is fully decentralised.
And, as somebody else remarked, the ticket contains the direct IP addresses for the case where the two nodes are either in the same private subnet or publicly reachable. It also contains the relay URL of the listener, so as long as the listener remains in the same geographic region, dumbpipe won't have to use node discovery at all even if the listener ip changes or is behind a NAT.
we also have an option for node discovery that uses pkarr.org, which is using the bittorrent mainline DHT and therefore is fully decentralised
if users access that bittorrent mainline DHT thru a third party server then it's obviously not decentralized, right? that server is the central point to which clients delegate trust
In practice, the "ticket" provided by dumbpipe contains your machine's IP and port information. So I believe two machines could connect without any need for discovery infra, in situations that use tickets. (And have UPnP enabled or something.)
$ ./dumbpipe listen
...
To connect use: ./dumbpipe connect nodeecsxraxj...
that `nodeecsxraxj...` is a serialized form of some data type that includes the IP address(es) that the client needs to connect to?
forgive me for what is maybe a dumb question, but if this is the case, then what is the value proposition here? is it just the smushing together of some IPs with a public key in a single identifier?
The value proposition of the ticket is that it is just a single string that is easy to copy and paste into chats and the like, and that it has a stable text encoding which we aim to stay compatible with for some time.
a URL is also a single string that's easy to copy and paste, the question I have is how these strings get resolved to something that I can connect to
if you need to go thru a relay to do resolution, and relays are specified in terms of DNS names, then that's not much different than just a plain URL
if the string embeds direct IPs then that's great, but IPs are ephemeral, so the string isn't gonna be stable (for users) over time, and therefore isn't really useful as an identifier for end users
if the string represents some value that resolves to different IPs over time (like a DNS entry) but can be resolved via different channels (like thru a relay, or via a blockchain, or over mdns, or whatever) then that string only has meaning in the context of how (and when) it was resolved -- if you share "abcd" with alice and bob, but alice resolves it according to one relay system, and bob resolves it according to mdns, they will get totally different results. so then what purpose does that string serve?
The value prop is that dumbpipe handles encryption, reconnection, UPnP, hole punching, relays, etc. It's not something I could easily replicate with netcat, for example.
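To make the contrast concrete, a rough sketch of moving one file (the nc side assumes a publicly reachable listener and sends everything in the clear; the dumbpipe side is my understanding of its stdin/stdout piping, so double-check the exact flags against its docs):

    # plain netcat: listener must be reachable, nothing is encrypted
    nc -l 9000 > received.bin                 # on the reachable machine ("nc -l -p 9000" for traditional netcat)
    nc listener.example.com 9000 < file.bin   # on the sender

    # dumbpipe: the listener prints a ticket, the other side pastes it;
    # hole punching, relay fallback and encryption are handled for you
    dumbpipe listen < file.bin
    dumbpipe connect <ticket> > received.bin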
ngrok and tailscale and lots of other services offer all of these capabilities; the only unique thing about this one seems to be the opaque string identifiers + some notion of "decentralization", which is what I'm trying to understand, particularly in the realm of how discovery works
I wonder how much reimplementation there is between this and Tailscale, as it seems like there are many needs in common. One would think that there are already low level libraries out there to handle going through NATs, etc. (but maybe this is just the first of said libraries!)
Who cares at this point; Tailscale itself is the 600th reimplementation of the same idea, with predecessors like Nebula and tinc. They came at the right time, with WireGuard being on the rise, and poured millions into advertising that their community "competitors" couldn't match, since most of them aren't riding on VC money.
I've met a lot of people who think Tailscale invented what it does.
Prior to Tailscale there were companies -- ZeroTier and before it Hamachi -- and as you say many FOSS projects and academic efforts. Overlay networks aren't new. VPNs aren't new. Automated P2P with relay fallback isn't new. Cryptographic addressing isn't new. They just put a good UX in front of it, somewhat easier to onboard than their competitors, and as you say had a really big marketing budget due to raising a lot when money was cheap.
Very few things are totally new. In the past ten years LLMs are the only actually new thing I've seen.
Shill disclosure: I'm the founder of ZeroTier, and we've pivoted a bit more into the industrial space, but we still exist as a free thing you can use to build overlays. Still growing too. Don't have any ill will toward Tailscale. As I said nobody "owns" P2P and they're doing something a bit different from us in terms of UX and target market.
These "dumb pipe" tools -- CLI tooling for P2P pipes -- are cool and useful and IMHO aren't exactly the same thing as ZT or TS etc. They're for a different set of use cases.
The worst thing about the Internet is that it evolved into a client-server architecture. I remain very cautiously optimistic that we might fix this eventually, or at least enable the other paradigm to a much greater extent.
I know it wasn't a "new" idea, but still, ZT was a paradigm shift for me. I was suddenly on the same LAN with people I cared about. Thank you for making it happen.
It's good as long as everything works out of the box, but it's a nightmare when something doesn't work. Or at least that has been my experience. I'm used to always troubleshoot first when I have any issue, but with Tailscale I decided I'm done trying to fight it, next time something doesn't work I'll just open a ticket and make it the ops team problem.
This is true for all systems that hide a lot of complexity. Apple is great until something doesn't work and you get things like "Error: try again later." A car is great until it doesn't start, and there are numerous reasons that can happen.
I remember running Hamachi and NoIP's DUC (Dynamic Update Client) as a kid in the late 2000s to expose private server addresses for games, or for multiplayer through direct network addresses.
NoIP was also the recommended "easy" option for configuring RAT (Trojan) host addresses at the time IIRC.
As one of the iroh developers I must say thank you for creating ZeroTier! It absolutely was part of the inspiration, and its seamless functioning continues to amaze me daily. It's something that keeps driving me to strive for an equally seamless experience in iroh.
I love that we can make different tools that learn from each other, each approaching the problem of making p2p usable in its own way.
As others have said Hamachi was very popular in some gaming communities. I don't know quite how it fits technologically, but a similar user experience seems to come from playit.gg[1].
My friends and I used Hamachi in the early 2000s to play StarCraft and other games over the internet without involving online services. Worked great. I’ve got a soft spot for it.
Tailscale sells certificate escrow, painless SSO, high-quality integrations/co-sell with e.g. Mullvad, full-take netlogging, and "Enterprise Look and Feel" wrapped around the real technology. You can run WireGuard yourself, and sometimes I do, but certificate management is tricky to get right, the rest is a pain in the ass, and Tailscale is cheap. The hackers behind it (bfitz et al.) are world-class, and you can get it past most "Enterprise" gatekeeping.
It doesn't solve problems on my personal infrastructure that I couldn't solve myself, but it solves my work problem of getting real networking accepted by a diverse audience with competing priorities. And it's like 20 bucks a seat with all the trimmings. Idk, maybe it's 50; I don't really check, because it's the cheapest thing on my list of cloud stuff by an order of magnitude or so.
It's getting more enterprise and less hackerish with time, big surprise, and I'm glad there's younger stuff in the pipe like TFA to keep it honest. But of all the necessary evils in The Cloud? I feel rather fondly towards Tailscale, rather than with the cold rage I feel for most everything else on the Mercury card.
Iroh is much better suited for the application layer. You can multiplex multiple QUIC streams over the same connection, each for a specific purpose. All you need is access to QUIC, no virtual network interface.
It’s a bit like gRPC, except you control each byte stream and can use one for, say, a voice call while you use another for file transfer and yet another for simple RPC. It’s probably most similar to WebRTC, but you have more options than SCTP and RTP.
This is made using iroh, which aims to be a low-level framework for distributed software. It involves networking, but also various data structures that enable replication and consistency between networked nodes.
Does it include reconnection logic? I presume that's not considered "low level", but it does always annoyingly have to be reimplemented every time you deal with long-lived socket connections in production.
Yes, to an extent. It will time out if the connection completely dies for more than the timeout interval, but all connections are designed to survive network changes like a new IP address or network interface (e.g. switching from WiFi to Ethernet, or to cellular).
There's overlap, but I can see complementary uses as well. It uses some of the same STUN family of techniques. I have no plans to stop using Tailscale (or socat), but I think I'll use this every day now too.
Part of the problem with libp2p is that the canonical implementation is in Go, which isn't really well-suited to use from C++, JS, or Rust. The diversity of implementations in other languages makes for varying levels of quality and features. They really should have just picked one implementation that would be well-suited to use via C FFI and provided ergonomic wrappers for it.
After writing a response about using this for games below, it occurred to me that most tunneling solutions have one or more fatal flaws that prevent them from being "the one true" tunnel. There are enough footguns that maybe we need a checklist similar to the "Why your anti-spam idea won’t work" checklist:
Your solution..
( ) Can't punch through NAT
( ) Isn't fully cross-platform
( ) Must be installed at the OS level and can't be used standalone by an executable
( ) Only provides reliable or best-effort streams but not both
( ) Can't handle when the host or peer IP address changes
( ) Doesn't checksum data
( ) Doesn't automatically use encryption or default to using it
( ) Doesn't allow multiple connections to the same peer for channels or load balancing
( ) Doesn't contain window logic to emulate best-effort datagrams over about 1500 bytes
( ) Uses a restrictive license like GPL instead of MIT
Please add more and/or list solutions that pass the whole checklist!
You could just use streams - they are extremely lightweight. But those would then be reliable datagrams, which comes with some overhead you might not want.
So how hard would it be to implement window logic on top of RFC9221 datagrams?
I'm not sure I fully understand this window logic question. QUIC does MTU discovery, so if the link supports bigger datagrams the MTU will go up. Unreliable datagrams using RFC9221 can be sent up to the MTU size minus the QUIC packet overhead. So if your link supports >1500 bytes then you should be able to send datagrams >1500 bytes using iroh.
Fragmenting datagrams (or IP packets) is generally not a good idea. All protocol designs have been moving away from this the past few decades. If you want unreliable messages of larger than the MTU maybe taking some inspiration from Media-over-QUIC is a good idea. They use one uni-directional QUIC stream per message and include some metadata at the start of each stream to explain how old it is. If a stream takes too long to read to end-of-stream and you already have a newer message in a new uni-directional stream you can cancel the previous streams (using something like SendStream::reset or RecvStream::stop in Quinn API terms, depending on which side detects the message is no longer needed earlier). Doing this will stop QUIC from retransmitting the lost data from the message that's being slow to receive.
Right, I should have been more clear about that. Window logic was perhaps the wrong term, since I don't care about resends.
The use case I have in mind is for realtime data synchronization. Say we want to share a state larger than 1500 bytes, then we have to come up with a clever scheme to compress the state or do partial state transfer, which could require knowledge of atomic updates or even database concepts like ACID, which feels over-engineered.
I'd prefer it if the protocol batched datagrams for me. For example, if we send a state of 3000 bytes, that's 2 datagrams at an MTU of 1500. Maybe 1 of those 2 fails so the message gets dropped. When we send a state again, for example in a game that sends updates 10 times per second, maybe the next 2 datagrams make it. So we get the most recent state in 3 datagrams instead of 4, and that's fine.
I'm thinking that a large unreliable message protocol should add a monotonically increasing message number and index id to each datagram. So sending 3000 bytes twice might look like [0][0],[0][1] and [1][0],[1][1]. For each complete message, the receiver could inspect the message number metadata and ignore any previous ones, even if they happen to arrive later.
Looks like UDP datagram loss on the internet is generally less than 1%:
So I think this scheme would generally "just work" and hiccup every 5 seconds or so when sending 10 messages per second at 2 datagrams each and a 99% success rate, and the outage would only last 100 ms.
We might need more checklist items:
( ) Doesn't provide a way to get the last known Maximum Transmission Unit (MTU)
And optionally:
( ) Doesn't provide a way to get large unreliable message number metadata
Iroh will do hole punching through NATs. It will even work in many cases when there are NATs on both sides.
There are some limitations regarding some double NATs or very strictly configured corporate firewalls. This is why there is always the relay path as a fallback.
If you have a specific situation in mind and want to know if hole punching works, we have a tool, iroh-doctor, to measure connection speed and connection status (relay, direct, mixed):
There might be some confusion here: hole punching is a core piece of iroh's functionality. There are still some firewall configurations that iroh cannot yet punch through, and that can still be improved, but in general the hole punching works rather well.
I attended Rüdiger's (N0) workshop 2 weeks ago at the web3 summit in Berlin and was left super inspired. The code for building something like this is available here https://github.com/rklaehn/iroh-workshop-web3summit2025 and I highly recommend checking out the slides too :)
Thank you for the praise! It is nice to hear that people enjoy these workshops.
I would love to see what people would build if they had a little bit more time with help from the n0 team. A one hour or even three hour workshop is too short.
Well pipe.pico.sh always uses a proxy server so throughput and latency are worse, but you have your own namespace for the pipes and thus don't have to synchronize random connection strings
Does anyone know if this tech (or Iroh) is suitable for real-time networking for games? Basically, once connection is established, what's the overhead on top of UDP in terms of latency and bandwidth?
Edit: after digging a little, Iroh uses QUIC which looks like a reliable, ordered protocol as opposed to the unreliable, unordered nature of UDP which is what many games need.
Now what I'd love to figure out is if there's a way to use their relay hopping and connection management but send/receive data through a dumb UDP pipe.
> QUIC which looks like a reliable, ordered protocol as opposed to the unreliable, unordered nature of UDP which is what many games need.
This isn't right, as a sibling comment mentions. QUIC is a UDP-based protocol that handles stream multiplexing and encryption, but you can send individual, unordered, unreliable datagrams over the QUIC connection, which effectively boils down to UDP with a bit of overhead for the QUIC header. The relevant method in Iroh is send_datagram: https://docs.rs/iroh-net/latest/iroh_net/endpoint/struct.Con...
It would be nice if dumbpipe revealed the local and remote IP and UDP port numbers via something like STDERR or a signal so that apps could send UDP datagrams on them with ordinary socket calls. I'm guessing that QUIC uses a unique header in its first few bytes, so the app could choose something different and not interfere with the reliable stream.
A better solution would be to expose the iroh send_datagram and read_datagram calls somehow. Maybe if dumbpipe accepted a datagram flag like -d, then a second connection to a peer could be opened. It would recognize that the peer has already been found and maybe reuse the iroh instance. Then the app could send over either stream when it needs to be reliable or best effort.
This missing datagram feature was the first thing I thought of too when I read the post, so it's disappointing that it doesn't discuss it. Most proof-of-concept tools like this are MVPs, so they don't attempt to be feature-complete, which forces the user to either learn the entirety of the library just to use it, or fork it and build their own.
IMHO that's really disappointing and defeats the purpose of most software today, since developers are programmed to think that the "do one thing and do it well" unix philosophy is the only philosophy. It's a pet peeve of mine because nearly the entirety of the labor I'm forced to perform is about working around these artificial and unintentional limitations.
> It would be nice if dumbpipe revealed the local and remote IP and UDP port numbers via something like STDERR or a signal so that apps could send UDP datagrams on them with ordinary socket calls.
I believe this would be even more unreliable than UDP, since Iroh is also capable of using a relay server for when hole punching can't be performed, and Iroh also handles IP migration.
> it appears to be linux and macOS only
Iroh should work on Windows, IIUC, just the installer and possibly prebuilt binaries aren't provided. But dumbpipe isn't designed for UDP anyways, it's closer to a competitor for socat/nc.
This reminds me a lot of holepunch.to (previously hypercore-protocol).
What I wonder is this: is there a clever and simple way to share the secret phrase between two devices? The example is pretty long to enter manually: "nodeecsxraxjtqtneathgplh6d5nb2rsnxpfulmkec2rvhwv3hh6m4rdgaibamaeqwjaegplgayaycueiom6wmbqcjqaibavg5hiaaaaaaaaaaabaau7wmbq"
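(One low-tech option I can think of, assuming the receiving device has a camera and you have qrencode installed: render the ticket as a QR code in the terminal and scan it from the other device; ticket shortened here:)

    qrencode -t ANSIUTF8 "nodeecsxraxjtqtneathgplh6d5nb2rsnxpfulmkec2rvhwv3hh6m4rd..."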
I didn't know about this tool, that's pretty useful!
Too bad the broken nature of NAT means this approach will just ignore any firewall rules you have configured and any malicious device or program can leverage it to open inbound connections.
If you are in the mood for a slightly less dumb pipe, I’ve been building a tunnel manager CLI built on Iroh. Supports forwarding ports over TCP, UDP, and UNIX sockets. https://gitlab.com/CGamesPlay/qtm
I wonder how different it is from Wireguard + netcat. Both run encrypted channels over UDP, but somehow differently. What does QUIC offer that Wireguard does not?
Wireguard doesn't do discovery or relaying, which is why tailscale took off so much, since it offers basically that at its core (with a bunch of auxiliary features on top).
Show me some wireguard discovery/relay servers if I'm wrong.
Also, QUIC is more language-agnostic. The canonical user-space implementation of wireguard is in Go, which can't really do C FFI bindings, and the abstractions are about dealing with "wireguard devices", not "a single dumb pipe", so wireguard's userspace library also makes it surprisingly difficult to implement this simple thing without also bringing a ton of baggage (like tun devices, gateways, IP address management, etc.) along for the ride.
If you already have a robust wireguard setup, then of course you don't need this and can just use socat or whatever.
They both run over UDP and always encrypt data. Beyond that superficial similarity they are completely different.
QUIC is a transport protocol that provides a stream abstraction (like TCP), with some improvements over TCP (like built-in support for multiplexing streams on the same connection, without head-of-line blocking issues).
Wireguard provides a network interface abstraction that acts as a NIC. You can run TCP on top of a wireguard NIC (or QUIC, for that matter).
Wireguard is a tunneling protocol. Netcat lets you write things over a socket. But netcat in UDP mode doesn't implement any mechanism for guaranteeing that all your packets arrive, so you're forced to tunnel TCP over UDP for reliability.
QUIC is all UDP, handling the encryption, resending lost packets, and reordering packets if they arrive out of order. The whole point of QUIC is to make it so you can get files transferred quickly.
WireGuard doesn't know the data you're sending, and netcat+TCP is stuck with the limitations of every packet needing to be sent and acknowledged sequentially.
Wireguard is opaque about the independent streams inside its connection. So, while they both can encapsulate multiple concurrent streams in one connection, QUIC can do things like mitigate head-of-line blocking and manage encryption at the transport layer. It also uses a connection ID, which helps make transitions across network changes seamless.
If you set up multiple TCP connections over Wireguard, there is no head-of-line blocking either. And Wireguard also transitions across network changes.
In fact, it's one of the main reasons I use Wireguard. I can transition between mobile network and wifi without any of the applications noticing.
We (n0) are running one set of relays. They are rate limited, so they basically only help with the hole punching process.
Projects or companies that use iroh can either run their own relays or use our service https://n0des.iroh.computer/ , which among many other things allows spinning up a set of dedicated relays.
Thanks for the response. This statement confuses me a bit. What is a relay? Does traffic go through it at all, or is it for connection negotiation, or some of both?
sibling comment with links to docs is more accurate, but to summarize, it's some of both:
* all connections are always e2ee (even when traffic flows through a relay)
* relays are both for connection negotiation, and as a fallback when a direct connection isn't possible
* initial packet is always sent through the relay to keep a fast time-to-first-byte, while a direct connection is negotiated in parallel. typical connections send a few hundred bytes over the relay & the rest of the connection lifetime is direct
the use case here is somebody opens a web browser and types/pastes an ID into the top bar -- and it needs to resolve, correctly, without prior knowledge, in roughly the same amount of time that DNS takes today
relays are the only thing among the things you listed that even have a chance of solving this problem
It'd be nice if the Getting Started link on the n0des page went here instead of immediately asking me to sign up before I know what the hell I'm signing up for
Dumbpipe is using our set of relays. It is meant as a standalone tool as well as a showcase for what you can do with iroh.
If you use iroh as a library, you can specify your own relays.
It is important to mention that relays are interoperable, so you don't have isolated bubbles of nodes using certain relay networks. I can have the n0 relays specified and still talk to another node that is using a different set of relays.
Is there a network topology where two hosts, each behind one or more layers of NAT, can both initiate outbound connections to public internet services (e.g., google.com), but are unable to establish a direct peer-to-peer connection due to NAT traversal limitations? I understand that NAT hole punching can work with a single level of NAT, but does it still function reliably across multiple layers of LAN/NAT hierarchy?
Very handy. We've developed an industrialized variant of this in RelayKit designed for fleets of fielded devices at scale with Anycast, mTLS, multiplexing of services through a single tunnel, Bring Your Own PKI and some other fleet management features that together become a somewhat smarter pipe: https://farlight.io
The surface being http is super nice to have. It's a streams-over-http general utility, quic powered.
I'm struggling to remember the name, but there's a simple HTTP service, called patchbay or some such, that follows a store-and-forward pattern. This idea of very simple, very generic HTTP-powered services has a high appeal to me.
Looking forward to a future version that can do WebTransport
>These dumb pipes use QUIC over a magic socket. It may be dumb, but it still has all the features of a full QUIC connection: UDP-based, stream-multiplexing and encrypted.
How is multiplexing used here? On the surface it looks like a single stream. Is the file broken into chunks and the chunks streamed separately?
In this particular example there is no multiplexing. It's just one QUIC stream.
In other iroh based protocols the ability to have many cheap QUIC streams without head-of-line blocking is very useful. E.g. we got various request/response style protocols where a large number of requests can be in flight concurrently, and each request just maps to a single QUIC stream.
The marketing is brilliant. The name of the company (number0) is mad hackerish man, right up my alley in the words of Charlie Murphy. I'm going to try this in my GCE on bare metal "unvirtualizer" today (number0 is what a Linux kernel would call the first tuntap with number as its prefix if you had such a patch).
And especially don't run the script while it's downloading. The remote server can detect the timing difference (say the script has "sleep 30" in it and the buffer fills) and send a different response (really easy if it's using chunked encoding or HTTP/2 data frames).
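The usual mitigation is to save it first, actually read it, and only then run it (placeholder URL):

    curl -fsSL https://example.com/install.sh -o install.sh
    less install.sh     # read what you're about to run
    sh install.sh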
Kinda related to this, but is there something that runs a daemon on your local machine, where if a "file request document" is uploaded to Mega or Google Drive or something similar, the (polling) daemon recognizes the request and pushes the document/file to the file-store service?
I remember doing something like this with Skype many years ago (at least 15, I guess).
The old Skype, the one that was a real p2p app before it got bought by Microsoft, was very good at slicing through firewalls and NATs, and it offered a plugin API, so it was easy to implement a TCP tunnel with it.
Question: What is the security level behind this? I guess if it is "dumb" _anyone_ can input your identifier for the pipe and connect to it?? Or even listen on it?
So if it is single-point, there will be a really small window where someone could try to brute-force it (almost impossible, I know), but if it is multi-point (i.e. multiple users can connect to that endpoint), then could it be brute-forced and connected to? I couldn't see whether it is single-point or multiple-send...
Let me know if my understanding is incorrect, I don't have much experience with QUIC :)
I am not one of the cryptographers on the team, but I will try to answer to the best of my knowledge.
QUIC mandates TLS, specifically TLS 1.3 or newer. From RFC 9001 (Using TLS to Secure QUIC): "Clients MUST NOT offer TLS versions older than 1.3."
For the first request, brute forcing would mean guessing a 32 byte Ed25519 public key. That is not realistically possible.
For subsequent requests, even eavesdropping on the first request does not allow you to guess the public key, since the part of the handshake that contains the public key is already encrypted in TLS 1.3.
With all that being said, if you want to have a long running dumbpipe listen, you might want to restrict the set of nodes that are allowed to connect to it. We got a PR for this, but it is not yet merged.
10 years ago I made an "encrypted voice channel" by chaining the following 3 commands together (I don't remember exactly how it looked, this is just a sketch):
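It was roughly this shape (reconstructed from memory, so the flags are almost certainly not exactly what I ran; ssh supplies the encryption):

    # capture mic audio, push it through ssh, play it on the far end
    arecord -f S16_LE -r 16000 -c 1 | ssh user@remote.example.com 'aplay -f S16_LE -r 16000 -c 1'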
I don't remember exactly which audio device I used back then. It worked okay-ish, but there was definitely lag from somewhere. Just kind of neat that you can build something so useful without a bloated app, just chaining a few commands together.
You could describe this same project as "a smart pipe that punches through NATs & stays connected (...)" and it wouldn't be any more surprising or inaccurate than the current description. So maybe it is not that descriptive.