Tor dev here! We're super excited about this launch!
The lead dev on this feature, who also wrote the blog post, is taking some well deserved r&r after getting this feature out the door. I was somewhat tangentially involved (I work on the Shadow simulator, which we used to test, evaluate, and tune this feature) but can take a stab at answering questions.
OTOH, comments on the blog post itself are likely to be seen by more experienced Tor devs than myself :)
Slightly off-topic, but do you have any pointers for someone who would like to help optimize Tor? The current documentation portal [1] has a big "OUTDATED" banner attached, and the WIP new portal contains too little information for me to make sense of how Tor works internally.
I worked on some areas of TBB before, but I still feel like I don't know enough about Tor's internals.
The spec documents linked from there are the most canonical documentation, though the gitweb link will probably be deprecated soon in favor of gitlab. https://gitlab.torproject.org/tpo/core/torspec
As always it's probably a good idea to reach out to chat about what you have in mind before getting too far with implementation. #tor-dev on OFTC IRC (bridged to Matrix)
How well would the new congestion control be able to handle UDP traffic from aggressively opportunistic protocols like BitTorrent over UDP, assuming that datagram traffic is allowed on the Tor network?
As far as I know, BitTorrent's UDP congestion control assumes the network is able to drop packets, and clients definitely act accordingly.
Isn’t Tor at this point widely suspected to be compromised by state actors? Aren’t people who use Tor for nefarious purposes arrested regularly these days?
Just curious: does Tor have anything against timing analysis by an actor with state-level resources? My impression is it's extremely hard to defend against in general, and it's employed by governments against both .onion servers and users browsing the clearnet (although I don't have concrete evidence). AFAIK some sites in .onion get DDoSed on and off routinely, possibly to locate the origin of the server.
Has the work you linked to been shown to lead to the successful deanonymization of a normal user during normal Tor browsing? It's a simple yes-or-no question. I don't follow Tor news like I used to, but I'm willing to bet a month's worth of salary that the answer is still no.
Unless the NSA, FBI, or whoever comes out and says oh we broke Tor, I don't see how you could ever get a definite answer to that question. And I don't really see that happening.
Or it could be leaked, by accident or by a whistleblower. But that's pretty uncommon.
True, but it'd be difficult to both make arrests based on info learned from having broken Tor, and keep it secret that they'd broken Tor. It's possible by obscuring how they got information (e.g. via parallel construction), but difficult to do at scale.
The application is open source [1], and it gets a lot of attention from all manner of people, ranging from nation states (both ‘good’ and ‘bad’ guys, depending on what you’re using it for and from where) to activist researchers.
It’s almost certain that some state has an application exploit sitting on a shelf somewhere, which might only be useful in some extremely niche use case, but it’s unlikely that it’s routinely ‘compromised’ in the way that sensationalised media might put it.
What’s more likely is that an exit node has been owned, or is actually operated by some nation state. Even then, you might not even see the actual traffic if it has been re-routed.
The most likely scenario is an OPSEC failure - turns out you need to be very, very good at operations and online hygiene if you want to hide your illicit activities online shocked pikachu.
Probably some, but the Tor network is designed to be robust to that.
The community does a lot of active monitoring to kick out misbehaving relays. "Misbehaving" includes running multiple relays without correctly setting the family attribute to identify them as being run by a single entity.
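For relay operators, declaring a family is just a torrc setting. A minimal sketch (the fingerprints and nickname below are placeholders, not real relays):

```
# torrc fragment (sketch): declare that these relays are run by one operator,
# so clients won't use two of them in the same circuit.
# List every family member's fingerprint, prefixed with $.
MyFamily $FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF,$EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
Nickname placeholderRelay1
ORPort 9001
```

Note that the same MyFamily line has to appear in the torrc of every relay in the family; a relay only listing others (without being listed back) doesn't form a valid mutual family declaration.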
The main danger posed by malicious exit relays, beyond that posed by other relays, is that they can perform active man-in-the-middle attacks. This is largely mitigated by end-to-end encryption. Tor Browser will soon be HTTPS-only (other than explicit manual overrides) to help avoid inadvertent connections that aren't e2e-protected.
I remember reading this article and being concerned that state actors had simply flooded the Tor nodes to allow them to perform attacks to deanonymize users. It’s possible that Ars Technica just has some agenda against Tor, because for a while it seemed like they were putting out articles like this every few months about the Tor network and people being arrested who used Tor.
This article is from 2013, and notes a huge increase in Tor clients. While the article notes we weren't able to determine the cause of the sudden increase, the primary hypothesis put forward was that it was genuine growth in usage due to new anti-piracy laws in Russia. The article doesn't point to any particular attack this might signify, and I'm not aware of deanonymization attacks that involve adding a lot of clients to the network.
The larger concern for deanonymization is typically flooding the network with relays, since that increases an attacker's ability to perform e.g. timing-based deanonymization attacks. This is a bit of an arms race. As @ajvs points out though, the known cases of Tor users being deanonymized were not due to attacks on Tor itself, but via other channels. I'm not aware of any known real-world cases of users being deanonymized by attacking or analyzing Tor itself, let alone users being "arrested regularly".