> The saying goes, "Anonymity loves company," which means that an anonymous communication system cannot provide strong anonymity assurances if very few people are using it.
I see this often repeated but never elucidated. It is particularly problematic in light of the stated non-goal:
> Hiding the fact that someone is utilizing an anonymity network is not an intrinsic goal of anonymity networks.
So here's a straightforward counter-example I mentioned before:
Create a PIR system where messages (let's say "tweet-sized" for this example) are sent once daily, and the only allowed participants are Debian developers. If a participant has no message to send, they instead send a "tweet-sized" message composed of randomly chosen bits.
The clients try to decode every message with their private key; whichever messages decode successfully are the ones intended for that client.
Let's even allow a single point of failure: a centralized server to which all messages are sent and from which all messages are retrieved.
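The round structure described above can be sketched in a few lines. This is a toy illustration only: it uses shared symmetric keys and an HMAC tag to stand in for the public-key trial decryption described, and all names, lengths, and parameters are assumptions, not any deployed protocol.

```python
# Toy sketch of the daily-round scheme: every participant uploads one
# fixed-size blob; clients trial-decrypt everything on the board.
import hashlib
import hmac
import secrets

MSG_LEN = 280   # "tweet-sized" payload, padded to a fixed length
NONCE_LEN = 16
TAG_LEN = 32    # HMAC-SHA256 tag used to detect a successful decode

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Hash-based stream cipher, for illustration only.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(shared_key: bytes, plaintext: bytes) -> bytes:
    padded = plaintext.ljust(MSG_LEN, b"\x00")
    nonce = secrets.token_bytes(NONCE_LEN)
    body = bytes(a ^ b for a, b in zip(padded, keystream(shared_key, nonce, MSG_LEN)))
    tag = hmac.new(shared_key, nonce + body, hashlib.sha256).digest()
    return nonce + body + tag

def cover_message() -> bytes:
    # A participant with nothing to say uploads uniformly random bits of
    # the same total length, indistinguishable from a real ciphertext.
    return secrets.token_bytes(NONCE_LEN + MSG_LEN + TAG_LEN)

def try_open(shared_key: bytes, blob: bytes):
    nonce = blob[:NONCE_LEN]
    body = blob[NONCE_LEN:NONCE_LEN + MSG_LEN]
    tag = blob[NONCE_LEN + MSG_LEN:]
    expect = hmac.new(shared_key, nonce + body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        return None  # not addressed to us, or cover traffic
    padded = bytes(a ^ b for a, b in zip(body, keystream(shared_key, nonce, MSG_LEN)))
    return padded.rstrip(b"\x00")

# One daily round: two real messages and one cover message on the server.
alice_key, bob_key = secrets.token_bytes(32), secrets.token_bytes(32)
board = [seal(alice_key, b"hello alice"), cover_message(), seal(bob_key, b"hi bob")]
print([try_open(alice_key, blob) for blob in board])  # [b'hello alice', None, None]
```

Note that every blob on the board has the same length, so the server sees nothing that distinguishes a cover message from a real one.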
Now you've got:
* sender anonymity
* receiver anonymity
* sender and receiver anonymity with respect to third party observers
* maximum practical resistance to Sybil attacks
* resistance to timing correlation attacks
* resistance to compulsion attacks
Anyway, for this example the saying "anonymity loves company" is false.
Sender anonymity means that the server (or any recipient) cannot determine who sent a particular message. In your example, the centralized server knows exactly which tweet-sized message was sent by each sender, so your proposal does not provide sender anonymity. Unless, of course, the senders use an actual anonymity system to send their messages to the server...
First, PIR systems currently do not scale well. Second, the saying "anonymity loves company" holds true even in your brittle single-point-of-failure design. Consider a PIR system used only by Alice and Bob: a network observer can see that the system is used only by Alice and Bob. Although it's true that you get all of the properties you mention, that isn't good enough.
PIR systems are interesting, and I am looking forward to seeing them practically deployed. This isn't a competition; instead of bashing something different, we could be more welcoming of anonymity network diversity. Further, perhaps you didn't consider that mixnets can be used to construct PIR systems, and that we can also use PIR systems in our mix networks. Consider the Loopix design, where users send messages to destination Providers which queue them: the retrieval of those messages could be done using a PIR system.
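To make the retrieval idea concrete, here is a minimal sketch of classic two-server information-theoretic PIR (XOR-trick style), the kind of primitive a Provider-side mailbox fetch could build on. This is not Loopix's actual mechanism; the mailbox layout and the assumption of two non-colluding servers holding identical copies are illustrative.

```python
# Two-server PIR sketch: each server XORs together a requested subset of
# records; the two subsets differ only in the wanted index, so neither
# server alone learns which record the client retrieved.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def server_answer(db, subset):
    # Run by one (non-colluding) server on its full copy of the database.
    ans = bytes(len(db[0]))
    for i in subset:
        ans = xor_bytes(ans, db[i])
    return ans

def pir_fetch(db_copy_1, db_copy_2, want: int) -> bytes:
    n = len(db_copy_1)
    s1 = {i for i in range(n) if secrets.randbits(1)}  # random subset
    s2 = s1 ^ {want}  # symmetric difference: flips membership of `want`
    # XORing the two answers cancels every index in both subsets,
    # leaving exactly db[want].
    return xor_bytes(server_answer(db_copy_1, s1), server_answer(db_copy_2, s2))

mailbox = [b"msg-0---", b"msg-1---", b"msg-2---", b"msg-3---"]
print(pir_fetch(mailbox, mailbox, 2))  # b'msg-2---'
```

The privacy here is information-theoretic against either server alone, but breaks if the two servers collude; practical deployments trade this off against single-server computational PIR.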
This is a great summary of anonymous routing networks. If you're interested in anonymous distributed computation, you might want to have a look at Feralcore (I am not the author).
Interesting. Is it possible to make a BitTorrent-like protocol that is anonymous? I mean, hiding what is being downloaded, hiding the source of the download, and replicating/caching data so that more rarely downloaded files remain available.
Using Freenet is extremely hazardous. One should not use it except through Tor: adversaries can readily discover the IPs of Freenet nodes. Once you've been arrested, you will find "plausible deniability" a useless abstraction; at best, you may negotiate a reduced sentence.