Hacker News

You cannot hide anything on the internet anymore; the full IPv4 range is scanned regularly by multiple entities. If you open a port on a public IP, it will get found.

If it's an obscure non-standard port it might take longer, but if it's on any of the standard ports it will get probed very quickly and included in tools like shodan.io.

The reason I'm repeating this is that not everyone knows it. People still (albeit fewer) put up Elasticsearch and MongoDB instances with no authentication on public IPs.

The second thing which isn't well known is the Certificate Transparency logs. This is the reason why you can't (without a wildcard cert) hide any HTTPS service. When you ask Let's Encrypt (or any CA actually) to generate veryobscure.domain.tld they will send that to the Certificate Transparency logs. You can find every certificate which was minted for a domain on a tool like https://crt.sh

There are many tools like subdomain.center; https://hackertarget.com/find-dns-host-records/ comes to mind. The most impressive one I've seen, which found much more than expected, is Detectify (a paid service, no affiliation); they seem to combine passive data collection (like subdomain.center) with active brute-forcing to find even more subdomains.

But you can probably get 95% of the way there by using CT and a brute-force tool like https://github.com/aboul3la/Sublist3r
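The CT half of that is easy to sketch. A minimal Python example, assuming crt.sh's JSON endpoint and its current response shape (each entry carries one or more DNS names in its "name_value" field; both are details of crt.sh, not anything in the CT standard itself):

```python
import json
from urllib.request import urlopen

# %25 is a URL-encoded "%", so this asks crt.sh for every cert
# logged under *.<domain>
CRTSH = "https://crt.sh/?q=%25.{domain}&output=json"

def names_from_entries(entries):
    """Collect the unique DNS names from crt.sh JSON entries.
    A single entry's "name_value" may hold several names, one per line."""
    names = set()
    for entry in entries:
        for name in entry["name_value"].splitlines():
            names.add(name.strip().lower())
    return sorted(names)

def ct_subdomains(domain):
    """Every name that ever appeared on a logged certificate for the domain."""
    with urlopen(CRTSH.format(domain=domain)) as resp:
        return names_from_entries(json.load(resp))
```

Brute-forcing with a wordlist (what tools like Sublist3r add) then only has to cover names that never got their own certificate.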




The Certificate Transparency log is very important. I recently spun up a service with HTTPS certs from Let's Encrypt. By coincidence I was watching the logs: within just 80 seconds of the certificate being issued I could see the first automated "attacks".

If you get a certificate, be ready for the consequences.


Were these automated "attacks" hitting you by hostname or IP? Because there's a chance you would've been getting them regardless, just from people scanning the entire IPv4 space.


They would not have been reverse proxied to the Docker container without the hostname.


This is really interesting. For my homelab I've been playing around with using Let's Encrypt rather than spinning up my own CA. "What's the worst that could happen?"

Guess I'll be looking to spin up my own CA now!


Getting a wildcard certificate from LE might be a better option, depending on how easy the extra bit of plumbing is with your lab setup.

You need to use DNS-based domain validation (DNS-01), and once you have a cert, distribute it to all your services. The former can be automated using various common tools (look at https://github.com/joohoi/acme-dns, self-hosted unless you are only securing toys you don't really care about, if you self-host DNS or your registrar doesn't have useful API access), or you can leave it as a manual job every ~ten weeks. The latter involves scripts to update your various services when a new certificate is available (either pushing from where you receive the certificate or picking it up from elsewhere). I have a little VM that holds the couple of wildcard certificates (renewing them via DNS-01 and acme-dns on a separate machine, so this one is impossible to see from the outside world); it pushes the new key and certificate out to the other hosts (simple SSH to copy them over, then restart nginx/Apache/other).
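The push-out step is just a loop over hosts. A rough sketch in Python, with hypothetical hostnames, paths, and reload commands (the real thing could equally be a short shell script):

```python
import subprocess

# hypothetical inventory: each host and the command that reloads its web server
HOSTS = {
    "web.lab.lan": "systemctl reload nginx",
    "git.lab.lan": "systemctl reload apache2",
}
# hypothetical paths on the cert-holding VM
CERT = "/srv/acme/wildcard.fullchain.pem"
KEY = "/srv/acme/wildcard.privkey.pem"

def push_commands(host, reload_cmd, cert=CERT, key=KEY):
    """The scp + ssh command pair that updates one host."""
    return [
        ["scp", cert, key, f"root@{host}:/etc/ssl/private/"],
        ["ssh", f"root@{host}", reload_cmd],
    ]

def push_all():
    """Copy the renewed wildcard cert to every host, then reload its server."""
    for host, reload_cmd in HOSTS.items():
        for cmd in push_commands(host, reload_cmd):
            subprocess.run(cmd, check=True)  # abort on the first failure
```

Pull-based variants (each host fetching from the cert VM on a timer) work just as well; the point is only that the distribution step is trivially scriptable.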

Of course you may decide that the admin of your own CA is easier than setting all this up, as you can sign long-lived certificates for yourself. I prefer the LE approach because I don't need to switch to something else if I decide to give friends/others access to something.

Your top level (sub)domain for the wildcard is still in the transparency logs of course, but nothing under it is.


If you're homelab'ing then you should be using private IPs to host your services anyway. Don't put them on a public IP unless you absolutely have to (e.g. port 25 for mail).

Use your internal DNS server (e.g. your router's) for DNS entries for each service. Or if you wish you can put them in public DNS too, e.g. gitlab.myhome.com A 192.168.33.11
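If you do publish internal names in public DNS, the thing worth checking is that their records point at private space; Python's ipaddress module knows the RFC 1918 and other non-routable ranges. A small sketch (the zone contents and the public address are hypothetical):

```python
import ipaddress

def publicly_reachable(records):
    """Return the DNS entries whose addresses are NOT private,
    i.e. the ones an outside scanner could actually reach."""
    return {name: addr for name, addr in records.items()
            if not ipaddress.ip_address(addr).is_private}

# hypothetical zone: gitlab stays internal, mail has to be public (port 25)
records = {
    "gitlab.myhome.com": "192.168.33.11",
    "mail.myhome.com": "93.184.216.34",
}
```

Anything the function returns is what you actually need to harden; the private entries only leak names, not reachability.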

You can then access your services over an always-on VPN like wireguard when you're away from home.

Then it doesn't matter if anyone knows what subdomains you have, they can't access them anyway.


Why not something like https://www.cloudflare.com/products/tunnel/ free tier?


If your exposed services use authentication and you use strong passwords, you are no worse off than any small business, and you have the advantage of being a lesser target.


Tailscale actually does all of the above for you: does the DNS, can register a LE cert, and provides the always-on VPN to allow access when you're away from home.


>Don't put them on a public IP unless you absolutely have to

Not a fan of ipv6?


> Guess I'll be looking to spin up my own CA now!

I was looking for a lazy/easy way to do this manually and settled on KeyStore Explorer, which is a GUI tool that lets you work with various keystores and do everything from making your own CA, to signing and exporting certificates in various formats: https://github.com/kaikramer/keystore-explorer (to me it feels easier than working with OpenSSL directly, provided I trust the tool)

In addition, I also set up mTLS or even basic auth at the web server (reverse proxy) level for some of my sites, which seems to help that little bit more, given that some automated attacks might choose to ignore TLS errors but won't be able to provide my client certs or the username/password. I also run fail2ban and mod_security, though that's more opinionated.


I use a wildcard certificate for my home infrastructure. For all the talk of hiding, though, it's wise not to count on hiding behind a wildcard. Properly configure your firewalls and network policy. For the services you do have exposed, implement rate limiting and privileged access. I stuck most of my LE services behind Tailscale, so they get their certificates but aren't routable outside my Tailscale network.


We have all our services deployed on an internal network in AWS. We took care to use private hosted zones and to gate access behind a VPN with SAML auth.

Turns out we're leaking our service usage by using ACM for our certificates.


Doing something similar on AWS right now, what do you mean by leaking service usage? What is ACM exposing? I assume the “fix” for this would be to host your own CA through ACM?


If I register a TLS cert for gitlab.donalmacc.ie, it's publicly logged.

From this thread it seems the fix is to register a wildcard *.donalmacc.ie and use that cert.


Pretty much, yeah. I don't know why any sysadmin would think a subdomain is a hidden thing.


Didn’t you read the original comment? It’s just a matter of time until someone starts to poke your IPs. Your own CA will be harder to get right.


Can Tailscale magic DNS + tunnel obscure things? Or only when you keep a service within the tailnet? (Still a + for selfhosters)


Recently, I opened 80 and 443 so I could use Let's Encrypt's acme-client to get a certificate (and then test it). Tightening up security a bit, I configured an HTTP relay to filter people accessing port 80 by IP address rather than domain name. Some scanners are still trying domain and subdomain names I was using weeks ago, which goes to show how organised hackers are about attacking targets.


You can use the DNS-01 challenge [1] to get a certificate. You just need to add a temporary TXT record to your DNS. It also supports wildcard certificates.

Most popular DNS providers (like Cloudflare) have APIs, so it can be easily automated.

I'm using it on my local network: I have a publicly available domain for it (intranet.domain.com) and I don't want to expose my local services to the world just to issue a certificate trusted by the root CAs on all my devices. This method allows me to issue a valid Let's Encrypt wildcard cert (*.intranet.domain.com) for all my internal services without opening any ports to the world.

[1]: https://letsencrypt.org/docs/challenge-types/#dns-01-challen...
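Under the hood, DNS-01 (RFC 8555, section 8.4) asks the client to publish exactly one TXT record. A sketch of what gets computed (the key authorization string itself is derived from the ACME challenge token and account key, which I'm treating as given here):

```python
import base64
import hashlib

def dns01_record(domain, key_authorization):
    """Return the (name, value) of the TXT record the CA will query:
    _acme-challenge.<domain> set to the unpadded base64url encoding
    of the SHA-256 digest of the key authorization."""
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    value = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return f"_acme-challenge.{domain}", value
```

For a wildcard like *.intranet.domain.com the record goes at _acme-challenge.intranet.domain.com; ACME clients with DNS plugins create and remove it for you through the provider's API.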


Once you expose something long enough to get scanned, it's going to continue to get scanned pretty much forever.

I self-host a couple of web services, but none are open; you need strong authentication to get in.

It's not ideal; ideally I'd close the HTTPS web traffic and use some form of VPN to get in. But sadly that's just not feasible in my use case, so strong auth it is.


Not to underestimate the power of Shodan, and oh god don't spin up a default Mongo with no auth, but port knocking would seem to counteract this to enough of a degree, not to mention having a service only accessible via Tor.

https://wiki.archlinux.org/title/Port_knocking


Yes, you can hide with a little bit of effort. Port knocking or Tor will stop almost anything (but don't rely on it as the sole protection, just as another layer).

I like to prefix anything "I don't want scraped" with a random prefix, like domain.com/kwo4sx_grafana/, and nobody will find it (as long as you don't link to it anywhere). I still have auth enabled, but at least I don't have to worry about automated attacks exploiting it before I have time to patch.

Something as simple as moving SSH to a non-standard port reduces the amount of noise from automated scanners by 99% (made-up number, but a lot).
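Generating such a prefix is one line with the stdlib. A sketch (the path layout is my own convention, not anything the proxied service requires):

```python
import secrets

def obscured_path(service, nbytes=3):
    """A path like /kwo4sx_grafana/ that wordlist scanners won't guess.
    An obscurity layer only: keep real auth in front of the service."""
    return f"/{secrets.token_hex(nbytes)}_{service}/"
```

The token comes from the OS CSPRNG, so with 3 random bytes there are ~16.7 million possible prefixes per service name, which is far beyond what path-guessing scanners try.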


Have you had any problems with browsers leaking the prefixed sites, as seen here?

https://news.ycombinator.com/item?id=35703789


You don't even need "multiple entities". Absolutely anyone can do that. Scanning a single port on the entire IPv4 internet takes about 40 minutes.
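The arithmetic behind that figure is mundane: one SYN probe per address over the whole 32-bit space, at a packet rate commodity hardware can sustain (this is exactly the regime tools like masscan and zmap operate in):

```python
addresses = 2 ** 32      # the full IPv4 space, ~4.3 billion
seconds = 40 * 60        # the quoted 40 minutes
rate = addresses / seconds
print(f"{rate:,.0f} packets/second")  # about 1.8 million pps
```

At 10 million packets per second, which the scanner authors claim on a fast NIC, the same sweep drops to roughly seven minutes.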


> You cannot hide anything on the internet anymore, the full IPv4 range is scanned regularly by multiple entities. If you open a port on a public IP it will get found.

Sure but you might still host multiple virtual hosts (e.g. subdomains) on the same web server. Unless an attacker knows their exact hostnames, they won't be able to access them.


There are several easy ways to skirt that.

First, you can simply try brute-forcing subdomains; second, if you are using HTTPS, you can pull the cert and look at the aliases listed there. Two ways off the top of my head.
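The second trick needs no external tooling. A sketch with Python's ssl module (getpeercert() returns subjectAltName as ("DNS", name) pairs), with the parsing split out from the connection:

```python
import socket
import ssl

def dns_sans(cert):
    """The DNS names in a certificate dict as returned by getpeercert()."""
    return [name for kind, name in cert.get("subjectAltName", ())
            if kind == "DNS"]

def server_sans(host, port=443):
    """Connect, complete the TLS handshake, and read the peer cert's
    subjectAltName entries: every listed alias is a vhost candidate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return dns_sans(tls.getpeercert())
```

A wildcard cert defeats this too, of course: the SAN list then only shows *.domain.tld rather than each individual name.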


Of course, but my point was that none of them involve IP scanning.


> This is the reason why you can't (without a wildcard cert)

Guess being security conscious pays off: testing those tools on some domains I have, they only managed to show what I want to show, since the wildcard just masks the rest.

That being said, I don't think anyone should consider a subdomain a hidden thing. It's an address, after all, and should not be treated as hidden; assume it's accessible, or put it behind a firewall or VPN with proper authentication. Security by obscurity never works.


> the full IPv4 range is scanned regularly by multiple entities

Single packet authorization. Server just drops any and all packets unless you send a cryptographically signed packet to it first. To all these observers, it's like the server is not even there.
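A toy version of the idea, assuming a pre-shared key; real implementations (fwknop, for example) add encryption and stronger replay protection, but the shape is the same: the packet carries a timestamp and an HMAC, and anything that doesn't verify is silently dropped before any port ever answers:

```python
import hashlib
import hmac
import time

SECRET = b"pre-shared-secret"  # hypothetical; provisioned out of band

def make_knock(secret=SECRET, now=None):
    """Client: a timestamp plus an HMAC-SHA256 over it, sent as one
    UDP packet before any real connection attempt."""
    ts = str(int(now if now is not None else time.time())).encode()
    mac = hmac.new(secret, ts, hashlib.sha256).hexdigest().encode()
    return ts + b"." + mac

def knock_is_valid(packet, secret=SECRET, window=30):
    """Server: drop silently unless the MAC verifies and the timestamp
    is within the replay window; only then open the firewall for the
    sender's address."""
    try:
        ts, mac = packet.rsplit(b".", 1)
        expected = hmac.new(secret, ts, hashlib.sha256).hexdigest().encode()
        fresh = abs(time.time() - int(ts)) < window
    except ValueError:
        return False
    return fresh and hmac.compare_digest(mac, expected)
```

Because the server never replies to an invalid knock, a scanner sees exactly the same thing as an unused address: nothing.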


At my company we got bit by this several months ago. Luckily the database was either empty or only had testing data, but like you said the port was exposed and someone found it.


> full IPv4 range is scanned regularly by multiple entities.

Yet another good reason to use IPv6


IPv6 won't get found by brute force, but there are a few projects which try to gather IPv6 addresses through various means and scan them as they are found.

Shodan did (and maybe still does) provide NTP servers to some NTP pools, and scanned anyone who sent them requests.

https://arstechnica.com/information-technology/2016/02/using...

So as with everything, layer the defences; don't rely on your IPv6 address being secret as the only defence.





