Congestion Control Arrives in Tor 0.4.7-stable (torproject.org)
149 points by haakon on May 6, 2022 | 24 comments


Tor dev here! We're super excited about this launch!

The lead dev on this feature, who also wrote the blog post, is taking some well-deserved R&R after getting this feature out the door. I was somewhat tangentially involved (I work on the Shadow simulator, which we used to test, evaluate, and tune this feature) but can take a stab at answering questions.

OTOH, comments on the blog post itself are likely to be seen by more experienced Tor devs than myself :)
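For the curious: the new scheme is Vegas-like, in that it estimates how much of your data is queued inside the network from the gap between the base (minimum) RTT and the current RTT, and steers the window to keep that queue small. Here's a toy sketch of the Vegas idea only (parameter names and values are illustrative, not Tor's actual implementation or defaults):

```python
# Toy sketch of a Vegas-style congestion window update (illustrative only;
# not Tor's code, and alpha/beta/increment here are made-up values).

def vegas_update(cwnd, base_rtt, current_rtt, alpha=3, beta=6, increment=1):
    """Return the new congestion window after one round of RTT measurements."""
    # Estimated number of our cells sitting in queues along the path:
    # if current RTT equals the base RTT, nothing of ours is queued.
    queue_use = cwnd * (1 - base_rtt / current_rtt)
    if queue_use < alpha:        # path underused: grow the window
        return cwnd + increment
    elif queue_use > beta:       # queues building up: back off
        return cwnd - increment
    return cwnd                  # in the sweet spot: hold steady

# RTT inflated well above the base: queue estimate is high, window shrinks.
print(vegas_update(100, base_rtt=0.2, current_rtt=0.4))  # -> 99
```

The nice property for Tor is that this reacts to latency rather than waiting for drops, which matters on a network where relays queue rather than drop cells.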


Thanks for working on Tor!

Slightly off-topic, but do you have any pointers for someone who would like to help optimize Tor? The current documentation portal [1] has a big "OUTDATED" banner attached, and the WIP new portal contains too little information for me to make sense of how Tor works internally.

I worked on some areas of TBB before, but still feel like I don't know enough about Tor's internals.

[1]: https://2019.www.torproject.org/docs/documentation#DesignDoc


The spec documents linked from there are the most canonical documentation, though the gitweb link will probably be deprecated soon in favor of gitlab. https://gitlab.torproject.org/tpo/core/torspec

As always it's probably a good idea to reach out to chat about what you have in mind before getting too far with implementation. #tor-dev on OFTC IRC (bridged to Matrix)

https://www.torproject.org/contact/ https://blog.torproject.org/entering-the-matrix/


Excellent write up!

How well would the new congestion controls be able to handle UDP traffic from aggressively opportunistic protocols like BitTorrent over UDP, assuming that datagram traffic is allowed on the Tor network?

As far as I know, BitTorrent's UDP congestion control assumes the network is able to drop packets, and clients definitely act accordingly.


UDP is currently not supported, but we're working on a design for it now https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/p...


The latency improvement is especially exciting for me. Thank you Tor team


[flagged]


Please don't do this here.


[flagged]


"Don't feed egregious comments by replying; flag them instead."

https://news.ycombinator.com/newsguidelines.html


Fair enough. Thank you for the reminder.


Isn’t Tor at this point widely suspected to be compromised by state actors? Aren’t people who use Tor for nefarious purposes arrested regularly these days?


People arrested for their darknet activities have always had OPSEC failures that out them, like reusing known email addresses or usernames.

Being deanonymised during normal Tor browsing is extremely difficult, and I'd challenge you to post cases where Tor itself led to it.


Just curious: does Tor have anything against timing analysis by an actor with state-level resources? My impression is it's extremely hard to defend against in general, and it's employed by governments against both .onion servers and users browsing the clearnet (although I don't have concrete evidence). AFAIK some sites in .onion get DDoSed on and off routinely, possibly to locate the origin of the server.


There has been plenty of work on how to de-anonymize Tor users over the years, e.g., see https://www.cs.princeton.edu/~jrex/papers/usenixsec15.pdf.

There's no doubt in my mind that state actors have been putting similar techniques to use.


> There has been plenty of work on how to de-anonymize Tor users over the years, e.g., see https://www.cs.princeton.edu/~jrex/papers/usenixsec15.pdf.

But that's not what gp asked.

Has the work you linked to been shown to lead to the successful deanonymization of a normal user during normal Tor browsing? It's a simple yes-or-no question. I don't follow Tor news like I used to, but I'm willing to bet a month's worth of salary that the answer is still no.


Unless the NSA, FBI, or whoever comes out and says oh we broke Tor, I don't see how you could ever get a definite answer to that question. And I don't really see that happening.

Or it could be leaked, by accident or by a whistleblower. But that's pretty uncommon.


True, but it'd be difficult to both make arrests based on info learned from having broken Tor, and keep it secret that they'd broken Tor. It's possible by obscuring how they got information (e.g. via parallel construction), but difficult to do at scale.


The application is open source [1], and it gets a lot of attention from all manner of people, ranging from nation states (both 'good' and 'bad' guys, depending on what you're using it for and from where) to activist researchers.

It’s almost certain that some state has an application exploit sitting on a shelf somewhere, which might only be useful in some extremely niche use case, but it’s unlikely that it’s routinely ‘compromised’ in the way that sensationalised media might put it.

What’s more likely is that an exit node has been owned, or is actually operated by some nation state. Even then, you might not even see the actual traffic if it has been re-routed.

The most likely scenario is an OPSEC failure: it turns out you need to be very, very good at operations and online hygiene if you want to hide your illicit activities online (shocked Pikachu).

[1] https://gitlab.torproject.org/tpo/core/tor


... No? Could you be more specific?

Disclaimer: am a Tor developer and employee.


Not op, and generally agree that the tech is safe.

One potential problem is that state actors are suspected of running a large number of exit nodes.


Probably some, but the Tor network is designed to be robust to that.

The community does a lot of active monitoring to kick out misbehaving relays. "Misbehaving" includes running multiple relays without correctly setting the family attribute to identify them as being run by a single entity.
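For reference, the family attribute is declared by the operator in each relay's configuration. A hypothetical torrc snippet (the fingerprints below are placeholders, not real relays):

```
# In the torrc of each relay run by the same operator: declare all of
# that operator's relays as one family, so clients never build a circuit
# through two of them. Fingerprints here are made-up placeholders.
MyFamily $AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA,$BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
```

An operator who omits this while running many relays is the "misbehaving" case described above, since it lets their relays appear independent to clients.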

The main danger malicious exit relays pose, beyond that of other relays, is that they can perform active man-in-the-middle attacks. This is largely mitigated by end-to-end encryption. Tor Browser will soon be HTTPS-only (other than explicit manual overrides) to help avoid inadvertent connections that aren't end-to-end protected.

More in another recent blog post: https://blog.torproject.org/malicious-relays-health-tor-netw...


> running multiple relays without correctly setting the family attribute to identify them as being run by a single entity.

How do you know who the actual owner behind a machine on the internet is?


There is a bit of a heuristics-based arms race here, for sure. https://blog.torproject.org/malicious-relays-health-tor-netw... talks about this.


I remember reading this article and being concerned that state actors had simply flooded the Tor network with nodes to let them perform attacks to deanonymize users. It's possible that Ars Technica just has some agenda against Tor, because for a while it seemed like they were putting out articles like this every few months about the Tor network and people being arrested who used Tor.

https://arstechnica.com/information-technology/2013/08/tor-u...


This article is from 2013, and notes a huge increase in tor clients. While the article notes we weren't able to determine the cause of the sudden increase, the primary hypothesis put forward was that it was a true growth in usage due to new anti-piracy laws in Russia. It doesn't note any particular attacks this may signify, and I'm not aware of deanonymization attacks that involve adding a lot of clients to the network.

The larger concern for deanonymization is typically flooding the network with relays, since that increases the ability to do, e.g., timing-based deanonymization attacks. This is a bit of an arms race. As @ajvs points out, though, the known cases of Tor users being deanonymized were not due to attacking Tor itself, but via other channels. I'm not aware of any known real-world cases of users being deanonymized by attacking or analyzing Tor itself, let alone users being "arrested regularly".
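To make "timing-based deanonymization" concrete, here's a toy illustration of the underlying idea (my own sketch, not any real attack tooling): an adversary who can observe traffic at both ends of a circuit bins packet timestamps into fixed windows and correlates the two volume series. Bursty timing patterns survive the relays, so a high correlation suggests the two observation points carry the same flow.

```python
# Toy illustration of end-to-end traffic correlation (not Tor code).
from collections import Counter

def bin_counts(timestamps, window=0.5, n_bins=20):
    """Count packets per fixed-size time window (seconds)."""
    counts = Counter(int(t / window) for t in timestamps)
    return [counts.get(i, 0) for i in range(n_bins)]

def correlation(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# A bursty flow seen at the guard, and the same bursts (shifted by network
# latency) seen at the exit: the timing pattern survives the relays.
guard = [0.1, 0.15, 0.2, 3.0, 3.1, 3.2, 3.25, 7.0, 7.05]
exit_ = [t + 0.08 for t in guard]  # ~80 ms end-to-end latency
print(correlation(bin_counts(guard), bin_counts(exit_)))  # close to 1.0
```

Adding more *relays* helps this attack because it raises the chance the adversary occupies both the guard and exit positions of a circuit; adding more *clients* does not, which is why the 2013 client surge wasn't itself alarming in this respect.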



