There were actual Wikipedians arguing not to take a wiki with a grain of salt? If I was in that discussion, I must have missed those posts. Can you link an example?
If you mean whether Wikipedia is unreliable? That's a different story: everything is unreliable. Wikipedia just happens to be potentially less unreliable than many (typically) (if used correctly) (#include caveats.h).
Sources are like power tools. Use them with respect and caution.
I think it's important to edit early and often, but it certainly can't hurt to also explain your edits on the talk page. Bonus points if the other side gives no explanation: you get to "rv unexplained edit, see talk page". Just look in on the article every couple of days for a while to see what sticks and what doesn't. When I started editing, more often than not people would improve and build on my edits rather than fight them. But you may need to be a bit (un)lucky these days?
If you encounter that, it's possible to get help to get those articles unstuck. People are not supposed to keep fiefdoms; much of policy exists to prevent it, and someone with a bit of practice can call in help and clear things up.
Fair-ish. It really depends. The last few areas I did anything in (I'm not a regular anymore) basically nothing happened except what I wrote, so I guess the quiet parts are really really quiet and you don't get into much trouble at all.
Uh... <raises hand> I might be one of the few people who actually knows a bunch of the theory on why Wikipedia works (properly). I had to do a bunch of research while working on Wikipedia mediation and policy stuff, a long time ago.
I never got around to writing it all out, though. Bits of it can be found in old policy discussions on bold-revert-discuss, consensus, and so on.
I guess the first thing to realize is that Wikipedia is split into a lot of pages, and n_editors for most pages in the long tail is very, very low, definitely below n_dunbar (Dunbar's number, roughly 150); those pages can still be edited almost the same way Wikipedia was edited back in 2002. At the same time, a small number of pages with editor counts above n_dunbar get the most attention and are the messiest to deal with.
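As a toy illustration of that split (the Zipf shape and every number here are my assumptions, not real Wikipedia data):

    # Toy model of the long-tail claim; the distribution and every
    # parameter here are assumptions, not actual Wikipedia statistics.
    import numpy as np

    N_DUNBAR = 150                     # Dunbar's number, roughly
    rng = np.random.default_rng(0)

    # Assume editors-per-page follows a Zipf-like heavy tail.
    editors_per_page = rng.zipf(a=2.0, size=1_000_000)

    quiet = (editors_per_page < N_DUNBAR).mean()
    print(f"pages below n_dunbar: {quiet:.1%}")                  # nearly all
    print(f"max editors on one page: {editors_per_page.max()}")  # the messy head

The point being: under any heavy-tailed assumption, almost every page lives in the small-group regime, and a handful live far above it.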
Aaron Swartz actually did a bunch of research into some of the base statistics too, and he DID publish stuff online... let me look that up...
If you weaken encryption so that your government can get access, now other sides can get access too. Including criminals and other governments.
No, I would not like to weaken encryption for my bank (obviously), for my personal information (if only due to spear phishing), or for cryptographic authentication like passkeys in general and SSH keys in particular; and absolutely no one gets access to any teenager's phone anywhere (unless it's their parents, maybe; that one is debatable).
PS: the term "NOBUS fallacy" is apparently not a thing yet. (I thought it was!)
You don't have to weaken your encryption to your bank though. The author proposes extending the TLS protocol so that the bank declares themselves responsible for the contents of communication and full strength encryption can be used.
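I haven't seen the proposal spelled out, so this is purely a guess at its shape; every name and field below is invented for illustration:

    # Hypothetical sketch only: the extension name and all fields are
    # made up, not from any real TLS draft or the author's actual text.
    from dataclasses import dataclass

    @dataclass
    class ContentResponsibilityExtension:
        responsible_party: str  # e.g. the bank's legal-entity identifier
        jurisdiction: str       # where lawful requests for content are served
        signature: bytes        # bound to the server certificate's key

    # The idea as I read it: the client verifies this assertion against the
    # server's certificate chain and then negotiates ordinary full-strength
    # TLS. Lawful access goes through the declared party, not weakened crypto.

That way the "access" lives at an endpoint that volunteered for it, rather than in the cipher.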
I don't buy it. These systems are always multiparty. In a single-party cryptosystem we can have internal integrity: we know we're not the bad guys and we didn't share the private information with the bad guys, therefore the bad guys don't have the data.
Once you're multiparty, that goes away: any other party can definitely betray you, and then it's game over; your own integrity doesn't matter.
Historically NOBUS was about having a particular technological lead; that's very fragile and didn't work out long term. If anybody has that lead today it's the Chinese, but realistically nobody has such a lead.
The argument about who the trustworthy "us" is is deeply uninteresting to me. I just care that there's precedent that if you stipulate the existence of such an "us", computer science does allow for NOBUS-y access mechanisms.
Interesting. My understanding has always been that there wasn't, at least in practice. But you're so insistent that maybe I've missed something. Do you have some references? I'll go read!
At minimum, bad actors inside the government could always use the access mechanism. What's your concept for preventing other bad actors from getting it though?
"What if 'us' is bad" is a separable question from "is NOBUS possible".
I'm not advocating for it, I'm just saying the computer science of this matters, and a lot of people have objections to the concept of NOBUS that are more ideological than empirical.
I don't think it's a computer science claim to begin with. To my knowledge nobody has ever broken 256-bit AES, but that's not the part of the system that fails. There are two things that prevent it from working in practice:
The first is that "us" would be something like "governments in the US", but that's too big an organization to keep free from compromise. There are tens of thousands of judges in the US, and well over a million police and military. All it takes is one of them being corrupt or incompetent or lazy, and the bad guys get to use the skeleton keys to everything in the world, which can unlock secrets worth billions or get people killed. And that's assuming they only compromise the authorization system; if they actually get the keys, it's practically armageddon.
And the second is that it's not just one government. If the UK makes Apple and Google build a system to unlock anybody's secrets, is Australia not going to want access? Is China? Let's suppose we're not going to give access to Russia; can the fallible humans operating this system fend off every attack once the FSB has been ordered to secure access by any means necessary?
It's a system that combines many points of compromise with an overwhelming incentive for everyone from state-level attackers to organized crime to break in and severe consequences when they do.
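To make the skeleton-key point concrete, here's a minimal sketch (Python with the pyca/cryptography package; the escrow setup is my own illustration, not taken from any real proposal) of what key escrow structurally implies:

    # Minimal sketch of why escrow concentrates risk; illustrative only.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # One master escrow keypair for "us".
    escrow_private = rsa.generate_private_key(public_exponent=65537,
                                              key_size=2048)
    escrow_public = escrow_private.public_key()

    # Every session additionally wraps its symmetric key under that ONE key.
    blobs = [escrow_public.encrypt(os.urandom(32), oaep) for _ in range(3)]

    # The failure mode: whoever holds (or steals) the single private key
    # recovers EVERY session key, past and future.
    recovered = [escrow_private.decrypt(b, oaep) for b in blobs]
    assert all(len(k) == 32 for k in recovered)

One key guarding everything is exactly what turns "one corrupt or lazy insider" into a global compromise.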
The logistics are non-trivial. If you have to operate at nation-state intelligence scale, then no, you cannot maintain NOBUS-level secrecy, because you have too many people involved. That sounds pretty damn empirical to me. The objections to NOBUS aren't ideological, they are moral, by the way. They are literally choosing to keep vulnerabilities in place for others to discover, under the arrogant assumption that they will be the only ones who know.
> The objections to NOBUS aren't ideological, they are moral
“ideological” and “moral”, as bases for objection, mean exactly the same thing, though people will often use “ideological” to mean “based in principles of right and wrong that I don’t agree with” and “moral” or “ethical” to mean “based in principles of right and wrong that I agree with”.
I think any practical implementation needs to have an "us" that's like "with a valid warrant" or secured on the govt end anyway, right? Otherwise you have to deal with "what if someone in the govt leaks the keys" or "what if someone in the govt is a spy". I consider those outcomes the same as foreign governments getting backdoor access basically.
But also, Dual EC was suspected of being backdoored from day one, was slower than existing CSPRNGs, and was therefore avoided like the plague. Whereas the premise is that if you put all the world's secrets behind one set of keys, there doesn't exist a level of defense that can withstand the level of attacks that will attract. Which doesn't apply when it isn't widely used.
On top of that, the attackers would be the likes of foreign intelligence agencies, and then them not getting it and the public not hearing about them getting it are two different things.
Exactly. They built a backdoor that "only they" could get into and then somebody else slipped into it anyway.
The backdoor is a vulnerability even if you don't have the keys, because it requires the trappings of third-party access. If you try to get something in the shape of a backdoor through code review, you should get knocked back. But if something in the shape of a backdoor is already required, then a change in who holds the keys to the lock is much smaller, more subtle, and easier to sneak in.
No, that's exactly what didn't happen here. The attackers in this case got and maintained for years the ability to slip code into Juniper/Netscreen releases. That the backdoor they chose happened to replace NSA's NOBUS backdoor is just a funny detail.
I don't think it's actually irrelevant; there's a reason they did it that way. Getting commit access and being the only one who can even read the code are two very different things. Even if you can modify the code, the less obvious it is that the change is adding a backdoor the less likely someone else is to catch you.
I think it would be so difficult to convince me that a state-level adversary who has obtained persistent access to Netscreen's builds can't hide arbitrary backdoors that it isn't really worth hashing this out. I'm just going to point out again that the Netscreen attack didn't break the "NOBUS" property of Dual EC --- so far as we know, the Dual EC private keys have never leaked.
It seems like you're implying they'd be too good to ever get caught, but... they got caught. The trouble is, making a backdoor less obvious just shifts the odds: more likely that ten attempts all go undetected, more likely it gets into production before they get caught, more likely it stays in production for a year instead of a month, etc.
I mean, didn't the NSA also get caught by Snowden? They intended it to be a secret.
But the Juniper hackers are the NOBUS failure because changing the locks on a backdoor that somebody else had installed is easier than getting one installed yourself.
I don't think you're following. "NOBUS" doesn't mean "nobody but us can ever find out about the backdoor"; it means "nobody but us can actually use the backdoor". Ironically, the Juniper PKRNG backdoor --- I assume it was Chinese --- is also a NOBUS backdoor!
> it means "nobody but us can actually use the backdoor". Ironically, the Juniper PKRNG backdoor --- I assume it was Chinese --- is also a NOBUS backdoor!
Except that it was intended to be "nobody but the us (i.e. the NSA)" and now you've got China using it.
No, we don't. Respectfully, I don't think you're working from an accurate notion of what "NOBUS" means, and I don't think you have your head fully around the Juniper hack. The Juniper hack replaced the existing backdoor; it didn't break it.
NOBUS or not, if your adversary controls your source tree, you're boned. Here, the adversary replaced "our" NOBUS backdoor with theirs. Two different backdoors, different keys, same structure.
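For anyone following along, here's a toy of that structure (a multiplicative group mod a prime stands in for the elliptic curve, outputs aren't truncated, and none of this is the real Dual EC math; it just mimics the trapdoor shape):

    # Toy Dual-EC-shaped PRG: NOT the real construction, just the trapdoor
    # structure. Q is public; P = Q^d, where d is the backdoor secret.
    import secrets

    p = 2**61 - 1          # demo prime modulus
    Q = 5                  # public base constant (fine for a demo)

    def backdoored_constant():
        d = secrets.randbelow(p - 3) + 2   # the NOBUS secret
        return pow(Q, d, p), d             # P = Q^d published; d kept

    def prg_step(state, P):
        out = pow(Q, state, p)             # visible output
        return out, pow(P, state, p)       # next internal state

    P, d = backdoored_constant()
    state = secrets.randbelow(p - 2) + 1
    out1, state = prg_step(state, P)

    # Whoever knows d turns one visible output into the next internal
    # state: out1^d = Q^(state*d) = P^state.
    spied_state = pow(out1, d, p)
    out2, state = prg_step(state, P)
    assert pow(Q, spied_state, p) == out2  # future output predicted

    # The "Juniper move": keep the structure, swap in your own constant.
    P2, d2 = backdoored_constant()         # same shape, new secret holder

Swapping P for P2 re-keys who can exploit it without changing the surrounding structure at all, which is why the replacement was so hard to spot.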
NOBUS is only NOBUS until a spy gets their hands on the escrow master key (or until Donald Trump shares it at a dinner party on a lark, for that matter). If RSA's signing keys can be compromised¹, anything can be compromised.
They're a world-class security organization. If a nation-state actor can get access to their most important keys the hard way, then a nation-state actor has a decent shot at compromising any private key on the planet, if they're willing to put enough money into it.
They're a large, trusted enterprise software company specializing in security. I'm very comfortable using them as a heuristic for the most secure that a regularly-used private key can possibly be.
I think you need to adjust your priors on the capabilities of enterprise security companies. I don't think you will find many practitioners that would rank RSA Security in "the most secure that a regularly-used private key can be".
I don't like describing it as cycles, because that is too simplistic and pretends it is inevitable, robbing people of agency.
I prefer to think of society as a system where different actors have different goals and gradually lose or gain influence through (a) slow processes, where those with influence gain more from people who are sufficiently happy to be apathetic, and (b) fast processes, when people become sufficiently unhappy to reach for the source of all real-world influence: violence.
This happens because uneducated/dumb/complacent people let it happen. It can be prevented by teaching them the importance of freedoms and to always fight back. But that goes directly against the interests of those in power, starting with parents who want children to be obedient.
I might partially agree, but the market already has a fantastic, secure option for those users: Apple.
Android's value was always in being the open(ish) alternative. When we lose that choice and the whole world adopts one philosophy, the ecosystem becomes brittle.
We saw this with the Bell monopoly, which held back telephone innovation for three-quarters of a century.
In the short term, some users are safer. In the medium term, all users suffer from the lack of competition and innovation that a duopoly of walled gardens will create.
I never really got into "phone" programming, always waiting for the shenanigans to die down. But somehow the shenanigans have gotten worse, and for a significant chunk of the world's population, the phone is the only computing device they have at all.
I never got into it because I was convinced developers would refuse to give up control over distribution when Apple started doing it. I wish I was right, but here we are.
Developers sometimes seem to be as in control as farmers are of the distribution of their produce. There's no absolute rule that gives the owners of large scale distribution networks power over both producer and consumer. It's just laws of convenience. It's easier for everyone to go through a few or just a single common broker.
There's no law against a more democratic way to implement the broker either, but it requires interesting methods of coordination and/or decision making that don't seem to exist yet?
It limits choice. I don’t have any experience building mobile apps because I didn’t want to buy into an unfair ecosystem. That means fewer mobile apps even if distribution networks change tomorrow.
> I don’t have any experience building mobile apps because I didn’t want to buy into an unfair ecosystem
Seems like it wouldn't be much of a stretch to compare that statement to not starting a business because the economy is unfair. People indeed don't start businesses when the bureaucratic or tax overhead outweighs the financial benefit, but nobody except them loses sleep over an individual's hypothetical missed opportunity to learn a new skill. It doesn't matter to the platform owners unless it also stops being profitable, so it's their job to maintain the profitability of their ecosystem despite whatever barriers they put up.
> There's no law against a more democratic way to implement the broker either but it requires interesting methods of coordination and/or decision making that doesn't seem to exist yet?
It's not enough to not have a law against it, we need to have and enforce laws requiring it.
I'm not so sure that we can even rely on legislation for this. I think we need new ways or new technology for collective decision making that don't rely on a pre-existing healthy legislative environment.
Some developers did. Others, who didn't care so much, got into the App Store instead, and got rich off it. Users didn't care about such principles, and mobile-first has been a viable strategy for a long time now. Not having some kind of app is a problem if you want to stay in many markets.
Developers want a stable, secure platform where they can reach customers that trust the platform and are willing to transact. Everything is downstream of that, including any philosophy around control.
Developers are businesses, and the economics need to work. For that, safety and security are much more important than openness.
Oh! Classic survivorship bias. You're only looking at the devs who went into business in the phone ecosystem in the first place. I'm thinking that they're there despite the barriers to entry (the "shenanigans"), and the ones you encounter are those who place a higher value on other things. As the ecosystem gets locked down more, this effect becomes stronger.
Meanwhile, you're not looking at those who left, or those who decided to never enter a broken market dominated by players convicted of monopolistic practices.
This seems much more intuitive than a hypothesis where somehow people would prefer to enter a closed market over a fair and open market with no barriers to entry.
Remember, monopolists succeed because they are distorting the market, not because they are in fact the most efficient competitor.
I'm actually quite familiar with the history of app stores and getting people to pay for software on the internet. I grew up in this timeline so I have first-hand experience too.
Before the App Store, the picture was mostly a disaster of security, reliability and quality. There was no trust and so people didn't bother parting with their credit card information to buy software...especially not on their phone.
Apple's App Store model dramatically grew the pie because it was one of the few platforms that people were willing to actually transact confidently on and trusted. This is why millions of developers flocked to the platform. This is also why Apple has traditionally maintained an iron grip on it; it was beneficial for everyone involved.
Over time, they are being proven right as more open platforms realize that openness at the expense of trust doesn't work for the masses.
You now need an online account to set up and log in on a Windows desktop. It's obvious what the trend is, and it's not allowing consumers control over their stuff.
Just look up how to skip the "OOBE" (out-of-box experience) and you can still bypass having to set up a cloud account on Windows 11 and just set up a local account like normal. :)
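For reference, the commonly cited sequence during Windows 11 setup (it has worked on many builds, though Microsoft has reportedly been removing it from newer ones, so no guarantees):

    Shift+F10        :: opens a command prompt during the OOBE screens
    oobe\bypassnro   :: reboots; afterwards an "I don't have internet"
                     :: option appears on the network screen

After the reboot, pick "I don't have internet" and then "Continue with limited setup" to get a plain local account.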
I have been a computer user, developer and a system administrator for longer than I care to recount. I don't like Windows and I don't use it at work or home. But I do encounter it from time to time, and the experience is worse each time. The last time it happened, I couldn't figure out the way to skip/bypass the cloud account set up. Would it have been possible if I tried harder, starting with a web search? Perhaps. But there is no way an average system user is going to have the patience or often the skill necessary to do it. I'm not challenging their intelligence. But people have other priorities than to jump through a dozen hoops just to preserve privacy. I would do the same if I had to set up a Windows system for urgent work.
These sorts of hurdles exist to push more and more users to their favorite workflow until the dissenting voice is too feeble to notice when they finally pull the plug on the straightforward method. The intent is certainly there, since they are quite evidently boiling the frog. Just wait for the fine day when you wake up in the morning to see an HN story just like this one about Windows login as well.
I was using Linux consistently for 10+ years before starting my current role, which is at a Windows-only business. And my god, the first few months were super annoying. Ctrl+Alt+T doesn't open a terminal?! Click, click, click. No Vim. Wtf.
Setting things up was much more complicated as well. But I stuck it out; I still hate Windows, but I've gotten a bit used to it.
> But there is no way an average system user is going to have the patience or often the skill necessary to do it.
> But I stuck it out, still hate Windows, but I've gotten a bit used to it.
So you tolerate it. Matches what I felt. But it was more the stuff I couldn't control - like the timing of the updates and the incessant ads.
> It's like two commands. Super easy.
For you, yes. But the problem for the average user is the patience required to figure it out. Also, I think the edition I used didn't have that option at all, because I vaguely remember searching for a solution and not finding one that worked for me. Whatever it was, it will soon be like that for more or less everyone.
That's why most people are not on Linux. I'm not talking about people who can search the internet, or kids who just keep at it till they figure out the registry in two days. I'm talking about people who have absolutely no interest in the machine other than to browse the internet or use the office suite. Surprisingly, there are far more such people than you'd imagine.
With how much worse the experience of using Windows has gotten, why wait? Many hills have already come and gone. This is the hill you're willing to die on?
Personally: the idea that a "slippery slope" is a logical fallacy has always seemed like bullshit to me. The vast majority of the reasoning behind why the judiciary makes the decisions it does is "precedent". Slippery slope is how the world operates. It surfaces everywhere, and when the slope we're sliding down matters, like this one, we have to fight back with fervor. Google isn't doing this in a vacuum; they're doing this because there's precedent for it, and because all they want is to assert more power over the world.
Google's behavior is utterly and entirely disgusting, unacceptable, despicable, and dishonorable. Everyone who even glances near this decision should feel overwhelming shame. If you have a shred of political power to fight this internally, you are a failure to yourself, your customers, and the world if you choose to stay silent. They'll read comments like these and think "we're right, we're being brave", because they have convinced themselves that there is bravery in wielding overwhelming power against their users.
> Personally: the idea that a "slippery slope" is a logical fallacy has always seemed like bullshit to me.
I don't know if I got this wrong, but the 'slippery slope' argument by itself never appeared to be a logical fallacy to me. There are numerous valid examples of it, and that's the context of its use in my previous reply. There certainly is a 'slippery slope' logical fallacy, but I thought it meant that you are misapplying/misusing the slippery slope argument where it isn't valid or doesn't apply.
> Google's behavior is utterly and entirely disgusting, unacceptable, despicable, and dishonorable.
I was going to apply the Nazi label to them and everyone else who uses such sleazy tactics. I hesitated because a lot of people are still emotional about the Holocaust (it has been 80 years) and object to equating anything with Nazism. But I sometimes wonder if the objection is meant only to silence critics. While their actions haven't yet reached the magnitude of the atrocities committed by the Nazis, they certainly are consistent with Nazi tactics. Besides, it's not as if they had any qualms labeling ordinary people "pirates" for sharing media. Therefore I feel it's quite appropriate to apply and promote the label "Supply Side Nazis" for them.
While what the Nazis did was extremely barbaric, I feel that people gatekeep these references too much, especially when talking about the Nazis' tactics and methods rather than the magnitude of their cruelty. For example, you don't have to be Joseph Goebbels, or someone as vicious as him, to follow his tactics. And I don't see an issue with invoking his name when someone does.
Please don't comment like this on HN, no matter how right you are or feel you are. We need everyone to observe the guidelines even if others are posting comments that are understandably upsetting. Please try to be one of the ones who makes things better not worse.
I made and released some apps in the early days. Got tired of it, and got tired of the reminders from Google to add banners and screenshots, to submit icons to support multiple resolutions, the notifications that apps I haven't touched in a decade are no longer compatible, etc.

So much extra work involved that isn't building the app.
Got tired of this with a few extensions I made too. It felt like every year or so they'd completely break some API and I'd have to switch to the new one; then they wanted a privacy policy, then justification for permissions, etc., etc. Eventually it wasn't worth the trouble and I just let them die.