I don't understand why any privacy-conscious person would choose a hosted service instead of self-hosting their own solution.
Implementing the whole thing (modulo the anycast IP, which is the only thing I did not use) is easy. I have a docker-compose file that runs the whole stack:
1. Unbound provides DNS-over-TLS service on port 853 and forwards requests to my local Pi-hole on port 53.
2. Pi-hole forwards requests to my Stubby DNS server.
3. Stubby connects to Google DNS over DNS-over-TLS.
4. A separate Docker container runs certbot to renew the certificate used by the Unbound container.
5. A separate Docker container runs Pomerium as a reverse proxy so I can remotely access the Pi-hole UI.
Then you can configure your Android phone to use your Unbound server as its "Private DNS" server. I've been using this setup for more than a month and it works really well.
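To make "the whole stack" concrete, here's a minimal sketch of what such a docker-compose.yaml could look like. Treat it as illustration only: the image names, ports, and volume paths are assumptions, not the actual file from the repo linked at the end of the thread.

    # Sketch only: image names, ports, and paths are assumptions.
    version: "3"
    services:
      unbound:
        image: mvance/unbound          # serves DNS-over-TLS on 853
        ports:
          - "853:853/tcp"
        volumes:
          - ./unbound.conf:/opt/unbound/etc/unbound/unbound.conf:ro
          - ./certs:/certs:ro          # kept fresh by the certbot service
      pihole:
        image: pihole/pihole           # ad blocking; upstream points at stubby
        volumes:
          - ./pihole:/etc/pihole
      stubby:
        build: ./stubby                # assumed local build; DoT client to the upstream resolver
        volumes:
          - ./stubby.yml:/etc/stubby/stubby.yml:ro
      certbot:
        image: certbot/certbot         # renews the certificate unbound serves
        volumes:
          - ./certs:/etc/letsencrypt
      pomerium:
        image: pomerium/pomerium       # reverse proxy for remote access to the Pi-hole UI
        ports:
          - "443:443"

The actual wiring (which service forwards to which) lives in the Unbound, Pi-hole, and Stubby configs rather than in compose itself.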
I don't know how you can say that's easy with a straight face. You just mentioned at least five software projects and/or technologies that most people have never heard of.
A self-hosted solution by its nature requires some investment in learning the underlying technologies and takes more effort (that's how most open source projects make money).
Look, I'm not trying to sell my solution here. This is Hacker News; I'm simply sharing my setup in the hope that it helps someone who's capable and willing to invest the time. I understand this is not for everyone, which is why I suggest nextdns.io as a hosted solution in the README.
> A self-hosted solution by its nature requires some investment in learning the underlying technologies and takes more effort (that's how most open source projects make money).
That sounds like a pretty good reason not to run your own solution then, so I guess we can meet there.
You just answered your own question. A self-hosted solution requires a lot of domain and technical knowledge to set up. To you it might seem trivial, but that's an insurmountable barrier to many.
This project seems to occupy the same niche as products like Blokada: most of the benefits of a self-hosted solution, with a much lower barrier to entry.
It depends on your goal. If you just want an ad-blocking DNS server, then nextdns.io is fine. But if you also want some control over the privacy issues involved in using a public DNS server, you should seriously consider hosting it yourself.
As for my own goal: I really don't like the idea that ISPs can track which websites I visit (Verizon, AT&T, and the ISPs behind public WiFi). To me, my setup is a huge improvement over the status quo.
I'm amazed that on a site called "Hacker News" people are giving you hassle for building your own self-hosted solution rather than handing control of your DNS over to random people, possibly for money down the line.
The hassle is because of the implication that it's super easy to run a self-hosted solution. It's a decently complex task that the average person couldn't come close to doing, and many here would still take a bit of time to grok it all.
Hey, I've updated the README and the instructions should now be straightforward.
The docker-compose file makes everything easily reproducible, and I've included working example configs. Not sure how I can further simplify the setup, but I'm open to suggestions.
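For anyone who just wants the gist, the Unbound piece boils down to a config along these lines (a sketch with assumed paths and a made-up Pi-hole address; the real example config is in the repo):

    # unbound.conf sketch: accept DNS-over-TLS on 853, forward everything to Pi-hole.
    server:
        interface: 0.0.0.0@853
        tls-port: 853
        tls-service-key: "/certs/privkey.pem"     # renewed by the certbot container
        tls-service-pem: "/certs/fullchain.pem"

    forward-zone:
        name: "."
        forward-addr: 10.0.0.3@53                 # the Pi-hole container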
Technological proficiency is unevenly distributed too. Some people are really good at web apps but have no idea how to program in a compiled language. There is so much out there, and it's not really feasible for everyone to know about everything.
Your solution is not privacy-conscious or self-hosted as long as you send all your data to Google in exchange for resolved DNS records. Why not let Unbound resolve recursively?
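One way to do that while keeping Pi-hole in the path would be to point Pi-hole at a plain recursive Unbound instance instead of at Stubby; a sketch of that instance's config (paths assumed, not taken from the repo):

    # Recursive resolver sketch: no forward-zone, so Unbound walks the tree from
    # the root servers itself instead of handing queries to Google.
    server:
        interface: 0.0.0.0@53
        root-hints: "/var/lib/unbound/root.hints"   # optional; built-in hints also work
        qname-minimisation: yes                     # send less of each query upstream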
I think it depends on who you're trying to protect against. While using DoT to a public resolver gives the public resolver the ability to build a history of your queries, running a recursive resolver yourself means anyone who's watching the wire (ISP, local government, etc.) can build a query history instead. Some people trust Google or Cloudflare more than those other entities, or figure that Google already knows pretty well what they're up to since Analytics is pretty much everywhere and they use Gmail.
The most useful option I've seen for trying to get the benefits of both has been rotating between a list of DoT resolvers, so no single one gets all the history and each ends up with only a fragmented profile. There are issues there, since people access the same services over and over, so each resolver will see the full list over time unless the software records which resolver got which request and stickies that name to it. There's always the option of doing it over Tor, but then you're introducing multi-second latencies to your DNS queries, which isn't exactly a great experience.
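Stubby can already do a crude version of the rotation idea; a sketch of a stubby.yml with several DoT upstreams and round-robin enabled (the addresses are just the well-known public resolvers, used here as examples):

    # stubby.yml sketch: spread queries across multiple DoT upstreams.
    resolution_type: GETDNS_RESOLUTION_STUB
    dns_transport_list:
      - GETDNS_TRANSPORT_TLS
    tls_authentication: GETDNS_AUTHENTICATION_REQUIRED
    round_robin_upstreams: 1        # rotate over the servers below
    upstream_recursive_servers:
      - address_data: 1.1.1.1
        tls_auth_name: "cloudflare-dns.com"
      - address_data: 9.9.9.9
        tls_auth_name: "dns.quad9.net"
      - address_data: 8.8.8.8
        tls_auth_name: "dns.google"

Note this is plain per-query round-robin, so over time each upstream still sees most of your domains; the sticky-per-domain scheme described above would need smarter software.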
If you think someone is watching your wire, they will see what you connect to after the name is resolved. That's true whether your ISP resolved it, Google resolved it, or you resolved it yourself. If this is a problem, you need a different solution altogether.
So given that a snooping provider can see your connections regardless, when we talk only about DNS resolution the remaining choice is which parties you add to the chain of entities that can easily snoop on you. If privacy is important, adding Google or any other third-party DoT resolver to that chain is strange.
That's true if an IP only serves requests for a single domain. With ESNI it's now possible to connect to a server that hosts services for multiple domains without the domain being divulged in the clear on the wire.
How do “forward to Google DNS” and “Android” give you any privacy? Your DNS queries are still recorded, tracked, and indexed by them, linked to your IP and phone profile.
Disclaimer: I work at Google. I know our internal policies regarding PII and the tooling around protecting it, so individual employees cannot easily violate my privacy. And I know the people who work there are generally very vocal (think about Dragonfly). I trust Google more with my privacy.
If totalitarianism ever comes to the US, Google would not be able to prevent the totalitarian regime from making use of its data-collection systems. A good analogy would be building a nuclear reactor on a site which sees very rare massive earthquakes. Apple in contrast has acted responsibly by designing its systems not to centralize or concentrate the data in the first place. That is, the unencrypted version of the data and the encryption keys stay on the iPhone.
Second, Google uses personal data combined with machine learning to optimize "user engagement" (roughly, hours spent on the service) because that has been proven to be a good predictor for how resistant an internet service is to competition or disruption. This optimization of user engagement has a bad effect on the productivity and perhaps the mental health of individuals and families and has a bad effect on our public discourse.
I'm not saying my setup has lower reliability than the hosted service (does nextdns.io promise any SLA?). For the added privacy, the potentially lower reliability is a risk I'm willing to take.
Even with this setup, there are ways to increase reliability within the budget/skill set of a normal engineer, e.g. run two Raspberry Pis with keepalived and run VRRP on your routers. As a last resort, I can disable the "Private DNS" setting on my phone if my DNS is down and I can't fix it quickly enough remotely.
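A sketch of what the keepalived side of that could look like on the primary Pi (the interface name, router ID, and virtual IP are assumptions; the second Pi would use state BACKUP and a lower priority):

    # /etc/keepalived/keepalived.conf sketch for the primary Pi.
    vrrp_instance DNS_VIP {
        state MASTER
        interface eth0               # assumed interface name
        virtual_router_id 53         # must match on both Pis
        priority 150                 # the backup uses e.g. 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dnsvip         # shared secret (max 8 characters)
        }
        virtual_ipaddress {
            192.168.1.53/24          # the address clients use for DNS
        }
    }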
keepalived is never the answer; if you can run it, your services are by definition crash-only and share-nothing, or inconsistent by design, or else you wouldn't let keepalived choose when to move the "primary flag" to the other service (as there'd be no way of sending the last ACKed data from the previous primary). Since this is the case, you could just load balance across the services and have them both active.
From a networking perspective, getting VRRP working on anything but physical equipment (e.g. in the cloud) is a fool's errand; cloud networking is L7/API-based, not at the Ethernet level. Similarly with keepalived, which will get isolated from the monitored instances (thereby failing over to the other, equally "down" instance), yet might still have access to the cloud provider's API gateway and disassociate the virtual IP from both your instances; so you'll end up with more downtime with keepalived than you gain from it.
Since DNS is inconsistent by default (but eventually consistent) and therefore possible to load-balance, you could run one instance of this stack on your static home IP and another on GCP/DO/AWS, then configure multiple DNS servers in your DHCP options and on your phone to get higher availability.
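Handing out both resolvers is a one-liner if the home router runs dnsmasq (both addresses are placeholders: one for the local stack, one for the cloud instance):

    # dnsmasq sketch: advertise two DNS servers via DHCP option 6.
    dhcp-option=option:dns-server,192.168.1.53,203.0.113.53

Clients then have a second resolver to fall back to if the first stops answering, with the usual caveat that failover behaviour varies by client.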
There’s a large distinction between announcing a prefix over BGP and running a properly balanced anycast network. Vultr is not designed for this; they have limited BGP community strings, so running an anycast network there will work either only with select locations or with sinkholes pulling in traffic from far away.
UPDATE: I posted my docker-compose.yaml file at https://github.com/yegle/your-dns. I'll update the README soon.