This paper is really about solving a fundamental problem in decentralized identity: combining self-sovereignty, privacy and Sybil-resistance, characteristics which seem to exclude each other.
I previously proposed formalizing this problem as Decentralized Identity Trilemma (http://maciek.blog/dit).
Of the systems listed in the paper, the only one I'd really heard of was BrightID, and it still seems like potentially the best. The FAQ on their website is a good introduction[0], but the paper seems rightly cautious:
"To control for Sybil attacks BrightID runs GroupSybilRank, a modification of the SybilRank algorithm, to estimate the anti-Sybil score of the network participants based on affinity between groups. Proposed to be used as the official BrightID anti-sybil algorithm, the effectiveness of this algorithm in the presence of multiple attack vectors, remains to be proved."
Unfortunately, proving personhood is only the first step in deciding someone's reputation, but it seems like a good basis on which to build proper decentralized trust systems, for example [1].
The problem is that the more successful the system becomes (i.e. it's used for more valuable use cases) the more incentive there is to attack it. There needs to be a dynamic analogous to Bitcoin where the resources that go into securing the network grow along with the value of the network.
But none of my friends know any of your friends, so I ignore their votes. Admittedly that means that any sort of consensus reality breaks down online, but hey, it's 2020 and we are where we are.
The key feature of Sybil attacks is that the identities are fake or semi-fake. The majority overriding the minority (regardless of correctness) is a feature of democracy, not a bug.
What you've got there is majoritarianism, not strictly democracy.
We should hope that a democracy would consider not only whether there are more in favour than against but also other factors.
For example: suppose there are two rabbits and three foxes, and the question is what's for dinner. Majoritarianism suggests that the foxes can vote "Rabbit" and then eat the rabbits, and all is fair. But I think we'd want our democracies to consider that there is an outsize cost to this choice compared to the 40% vote for "Fresh fruit and vegetables" from the rabbits. [A fox isn't an obligate carnivore; its digestive system is perfectly capable of obtaining nutrients from a meal of fruit and vegetables, although rabbit is delicious.]
The success of democracy is not that it allows 2 rabbits to convince 3 foxes to eat vegetables, but merely that it convinces 2 foxes to limit themselves to eating vegetables based on the votes of 3 rabbits.
To achieve the result that you're hoping democracy will provide, it is perhaps better to rely on the principle of Freedom of Speech / Expression.
Everyone's a "fox" or a "rabbit" about different things, and often the foxes are in the majority for each issue, but (almost) everyone would be better off with "rabbit for everything" than "fox for everything".
The most obvious cases of fox / rabbit issues are regarding GRSM rights. A majority would weakly prefer that gay people not marry (but they'd just shrug and move on if overruled), a small minority would strongly prefer that gay people not marry… and then you've got all the gay people, most of whom would very strongly prefer that gay people can marry. The weak majority who care enough to block "rabbit", but not enough to actually be upset if it happened, rule the policy – until democracy becomes indirect, anyway.
Indirect democracy should never have superior properties to direct democracy. Is there a way of fixing this?
Quickly skimming the site, it seems like Idena validates identity by having all users solve a Turing test at the same time.
I fail to see how this ever scales. Surely as the user count increases it becomes extremely difficult to get all the users to be ready to validate themselves at the same time?
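To make that "everyone at once" constraint concrete, here's a toy sketch; the session time and window below are made up for illustration, not Idena's actual parameters:

    # Toy illustration only, not Idena's real protocol: the point is that an
    # answer only counts if it arrives inside one short, globally synchronized
    # window, which is what makes juggling many accounts at once painful.
    from datetime import datetime, timedelta, timezone

    SESSION_START = datetime(2020, 6, 6, 13, 30, tzinfo=timezone.utc)  # hypothetical ceremony time
    SESSION_WINDOW = timedelta(minutes=2)                              # hypothetical answer window

    def answer_accepted(submitted_at):
        return SESSION_START <= submitted_at <= SESSION_START + SESSION_WINDOW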
It may scale when you think of it as putting yourself, or your reasoning consciousness, at stake in the system (instead of hardware or financial capital) by actively contributing human deductive capacity and taking responsibility once every 2-4 weeks, synchronously. Android and mobile validation apps may also do their part in easing and scaling this to more users, given the nature of synchronous validation parties. I reckon Idena is among the most reasonable options when considering the "Decentralized Identity Trilemma".
The current data on language-neutral AI-hard tests (analogous to the Winograd Schema Challenge, minus the textual representation) reasonably suggests that the current AI apparatus cannot sustainably achieve a human-level score (92% or above) on the FLIP-like tests used by the Idena network. FLIP creation by humans probably offers more protection against the bots and AI recognition schemes available as of now. Let's see what the future brings in this still largely unexplored realm.
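As a back-of-the-envelope check of the comparison above (the 92% figure is the human-level benchmark cited in this thread, not Idena's own qualification threshold):

    # Does a solver's accuracy on a batch of FLIPs reach the ~92% "human level"
    # figure cited above? Threshold taken from this comment, not Idena's rules.
    HUMAN_LEVEL = 0.92

    def reaches_human_level(correct, total, threshold=HUMAN_LEVEL):
        return total > 0 and correct / total >= threshold

    print(reaches_human_level(8, 10))   # a bot at 80% stays below the bar -> False
    print(reaches_human_level(23, 24))  # ~96%, at human level -> True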
Keep reading and you'll see that these are not problems, except through a "perfect is the enemy of good" rigidity.
The fact is, the internet is in need of a privacy-friendly information verification solution that scales... when it comes to approaches for achieving this, I think the more the merrier.