Excerpts from the press call transcript [1] by Guy Rosen explaining what led to this breach being possible:
> The first bug was that, when using the View As function to look at your
profile as another person would, the video uploader shouldn’t have actually
shown up at all. But in a very specific case, on certain types of posts that are
encouraging people to post happy birthday greetings, it did show up.
> The second bug was that this video uploader incorrectly used the single sign-on
functionality, and it generated an access token that had the permissions of
the Facebook mobile app. And that’s not the way the single sign-on
functionality is intended to be used.
> The third bug was that, when the video uploader showed up as part of View
As -- which it wouldn’t do were it not for that first bug -- and it generated an
access token which is -- again, wouldn’t do, except for that second bug -- it
generated the access token, not for you as the viewer, but for the user that you
are looking up.
> It’s the combination of those three bugs that became a vulnerability. Now,
this was discovered by attackers. Those attackers then, in order to run this
attack, needed not just to find this vulnerability, but they needed to get an
access token and then to pivot on that access token to other accounts and then
look up other users in order to get further access tokens.
> This is the vulnerability that, yesterday, on Thursday, we fixed that, and we’re
resetting all of those access tokens to protect security of people’s accounts so
that those access tokens that may have been taken are not usable anymore.
This is what is also causing people to be logged out of Facebook to protect
their accounts.
> The second bug was that this video uploader incorrectly used the single sign-on functionality, and it generated an access token that had the permissions of the Facebook mobile app. And that’s not the way the single sign-on functionality is intended to be used.
Is it just me or does this sound like a terrible idea in the first place? Guess we can't know for sure, but why would anything unrelated to authentication generate access tokens?
Technical debt: multiple systems using multiple old authentication routines getting slowly upgraded to new auth methods, and no one taking the time to fully understand the ramifications. And honestly it seems like that was the right choice for the teams responsible. They all made tons of money, delivered features, and now, years later, a bug is found.
I work for a major (by Norwegian standards) bank. This level of authentication integration trickery wouldn't be attempted by us. Mainly because we try hard to avoid serious technical debt (due to timeline/delivery pressure) in our security infrastructure. We occasionally take such shortcuts in places that are not mission-critical, but they are always considered carefully as the tradeoff that they are. I believe that we are considerably better at technology development than most of the banks in the US.
That said, I've heard stories of similar bugs in the industry. The difference was that they were shallower to reproduce: deep enough to get through QA, but discovered quickly in production.
But honestly, Facebook has more resources to spend on security than any online bank. Banking security should be defense-in-depth: strong first-layer security, serious monitoring of suspicious activity and openness to reports from users, a certain level of manual approval for irrevocable transfers, a certain revocability for transfers that can be automatically processed, and transfer size limits so that no single breach can have huge consequences.
And finally, a credible economic and legal system that ensures only a tiny minority of people want to rob a bank because there are much better options for making money, and banking regulations that leave the responsibility for security vulnerabilities squarely with the bank's shareholders.
Anyone can be owned with enough effort, so it's not just about creating software that's as secure as you can make it. You need to have sound policies as well.
Meanwhile, I work for a major US IB. While I don't work on anything customer-facing, our internal SSO infrastructure basically consists of a single cookie that grants access to almost everything. And it's really not difficult to sniff one from another user (like, say, getting them to visit a link like http://mydesktop.companyname.com/..).
It's so bad that for certain systems we check the origin of your connection and will only trust you if you've come from the DMZ rather than from the internal network.
Is the cookie not associated with a specific IP? SSO systems would normally flag the mismatch if you try to connect to a website and pass an SSO cookie issued for a different IP, so sniffing cookies wouldn't help all that much.
It's unlikely to change between the SSO login page and the application's login page, and it doesn't matter if it changes later on since the app can issue its own session cookie which isn't tied to an IP.
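For what it's worth, the IP-binding check described above is simple in principle. A minimal sketch of the idea (all names here are hypothetical, not from any particular SSO product):

```python
from dataclasses import dataclass

@dataclass
class SsoToken:
    user_id: str
    issued_to_ip: str  # the client IP recorded when the token was minted

def validate_sso_cookie(token: SsoToken, request_ip: str) -> bool:
    # Reject (or at least flag) the request if it arrives from a
    # different IP than the one the token was issued to.
    return token.issued_to_ip == request_ip
```

As the reply above notes, this only protects the SSO hop itself; once the application mints its own session cookie that isn't IP-bound, a sniffed copy of that cookie is still useful.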
You're vastly overrating the size of the vulnerability and the security of banks. This would not have been caught by internal security teams at most banks and even if it was caught, it wouldn't be considered a major vulnerability on a major banking website.
With that said, this is a bigger vulnerability precisely because Facebook is a free service - at banks, you need to be a customer with real-world identity to even begin to attempt to exploit this.
There's a Firefox Add-on named "Facebook Container". Once you install that Facebook lives in a little box, Facebook cookies, Facebook whatever else, all trapped in the little box.
No effort needed, if you click your Facebook bookmark, or follow a link or whatever, the browser goes "Oh, this is Facebook" and traps it inside the box with the rest of Facebook without any extra steps from the user. There's a cute blue "Facebook" icon added to the URL bar so you can see it's working.
(I mean, or, stop using Facebook, but for many that isn't a reasonable option)
'We only use incognito mode for security. Since it's annoying to login constantly, everyone has dedicated machines that are powered 24/7 so you never have to shut down those incognito tabs.'
If you're more interested in tech discussion or maybe some subcultures, and less interested in food photos/anecdotes about babies, just join http://mastodon.social/ already.
Set your preferences to show posts of your native language only, start poking around the timelines, and follow people who post something interesting. Follow, boost, reply, it only takes a few days before you have plenty of interesting content in your feed.
There's zero chance on Mastodon that you'll get caught up in a gigantic data breach like this. Probably less chance you get caught up in any kind of breach -- it's too obscure to be a target, plus the code is open source so many eyes on it, etc.
And you'll enjoy these guaranteed benefits, as well:
- No longer subject to the most sophisticated data vacuuming adtech in the world
- If you get bored/annoyed you can just take a break from Mastodon because it doesn't own your life the way Facebook tries to
> Probably less chance you get caught up in any kind of breach -- it's too obscure to be a target, plus the code is open source so many eyes on it, etc.
Security through obscurity...
Open source != secure. I can guarantee that a hell of a lot more folks with a lot of security expertise have combed through the fb codebase than Mastodon.
...is not a solution by itself but is a perfectly valid part of a defense in depth strategy, for example running SSH on a port other than the default is a common and good practice.
> I can guarantee that a hell of a lot more folks with a lot of security expertise have combed through the fb codebase than Mastodon.
This is the same argument Microsoft always made in defense of Windows security back in the XP era. "We hire the best experts in the world so Windows must be fantastically secure." And Windows security turned out to be a train wreck. Now in Microsoft's defense it has improved considerably over the years, but Windows desktops still get owned far more often than Linux desktops do, for a reason that would probably apply to Mastodon today as well: not that many people use it, so it is not nearly as common a target for exploits.
I don't think I deserved downvotes for making these points btw, that button is way overused on HN.
>>Security through obscurity...
> ...is not a solution by itself but is a perfectly valid part of a defense in depth strategy, for example running SSH on a port other than the default is a common and good practice.
This really depends on what kind of target you are. Are you a random person on the internet? Then making yourself a smaller target by using obscure services might help. Are you someone with sufficient value for a spear phishing attack? Not so much. “Sufficient value” might just be “you slighted the wrong person on the internet.”
There are also a lot of trade-offs involved, some of them less than obvious. For example, Mastodon servers may be run by a person or team whose trustworthiness is harder to evaluate than Facebook's. The server you're on might be run by well-meaning but incompetent people. The server you're on might have one participant who is a target of sufficient value for spear phishing, and your data might be taken and leaked just to obscure the real target.
I agree, but "guarantee" is a strong word.
On the flip side, a hell of a lot more folks _without_ much security expertise have _contributed_ to the fb codebase.
Reminiscent of the old De Beers diamond courier motto: anonymity is the best security. Interesting that most NYC 47th Street diamond district couriers were Hasidim, whose attire wasn't particularly inconspicuous.
When designing such a system, the immediate failure mode is obvious: at some point, someone will read data not meant for them.
As every feature on FB needs to take "View As" into account when handling its own permissions, a lot of developers on FB's payroll get a chance to f' up. We are all human, so the probability of this happening is very high. The impact (for the users) is also high, given that it's automated and concerns every user on FB equally.
When dealing with a very probable, high impact risk in a software project, considerable additional effort is warranted to mitigate that risk: in this case maybe taint checking and additional implementations of the same feature in different programming paradigms, to ensure the system is fail-stop.
But in contrast to airlines and railways, the interests of FB and their users are not aligned. For Facebook, this risk was not (or was not deemed to be) high impact, so we didn't get any of this.
It seems to warrant checking the permissions of both the true user and the view-as user: if either lacks permission, the action should fail. Of course, lacking middleware for this forces you to check one or the other and hope you remember to check the remaining user in numerous code paths.
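A minimal sketch of that "check both, fail closed" idea (hypothetical names, not Facebook's actual middleware):

```python
# Fail closed: in a "view as" context, an item is visible only if BOTH
# the real viewer and the impersonated user are allowed to see it.
def allows(acl: dict, user: str, resource: str) -> bool:
    return resource in acl.get(user, set())

def can_view(acl: dict, resource: str, real_user: str, view_as_user: str) -> bool:
    return allows(acl, real_user, resource) and allows(acl, view_as_user, resource)

# The viewer can see the photo, but the "view as" target cannot,
# so the rendered preview must not show it.
acl = {"alice": {"photo:123"}, "bob": set()}
assert can_view(acl, "photo:123", real_user="alice", view_as_user="bob") is False
```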
That doesn't just seem like a few unlucky coincidences. That seems like a fundamentally unsound design. Why should it even be theoretically possible for a request under the authority of one user to create a token with the authority of another user?
It's more important for you to move fast and break things and make us money than to move slowly and do things the right way. The life of an engineer... Do it now! Why did you do it that way!? Now we're screwed??
Facebook’s php developers like to move fast and break things. Bad design choices, monkey patching, breaking things on production, it’s all part of Facebook’s “engineering” principles.
Not the root cause, but I'm guessing a microservice architecture made it more possible. It sounds like both the token generating service and the video upload service have bugs.
Very likely. It happens all the time. If you read through some CVEs or other bug reports or post-mortems, you'll be surprised just how complex attacks can be.
I suppose that the likelihood estimate would need to take into account the number of people who have (or had) access to the sources. Obviously the alternatives are not mutually exclusive.
Nowhere on that banner does Facebook make it clear that there was recently a severe security issue that may have resulted in the loss of personal user information (making it much less likely for the user to actually click 'Learn More'). It's misleading to title this just "An Important Security Update" and make it seem like they've merely updated their systems. No mention of the recent compromise until you click 'Learn More'.
This banner is crazily insufficient. It disappears forever after you visit any other page, without you having to acknowledge its existence.
I just checked fb, and went quickly to my first notification. I didn't really register what the banner was until maybe a second after it loaded - at which point I had already clicked on my first notification. By that point, the banner is gone forever. I can't find any way to get it back.
> “We’re taking it really seriously,” Mark Zuckerberg, the company’s chief executive, said in a conference call with reporters. “We have a major security effort at the company that hardens all of our surfaces.” He added: “I’m glad we found this. But it definitely is an issue that this happened in the first place.”
There was a conference call with reporters about the subject, so the public press release was not the first the NYT knew of it. They likely had an embargo agreement.
Since you're just getting downvoted, I may as well say that as a member of the press it isn't uncommon to see embargoes on stuff like this. They don't say a week out "hey we've got a huge security announcement" but they do say "we have something coming out this afternoon and we're doing a briefing half an hour before if you agree not to publish before we go public."
It's often in the interest of the reporter to agree to stuff like this since publishing security issues ahead of time can have serious negative consequences.
This is in response to a dead reply on this chain. Unless you are in Congress it is illegal to trade on material non public information. So if a reporter traded on info in an embargoed press release they could be prosecuted for insider trading.
I hadn't considered a 30-minute embargo; thanks for setting me straight on that (also an ex-member of the press, but from the days when things didn't move quite so fast).
In the journalism world, pre-written articles are apparently quite common. I assume they had a boilerplate already for the next Facebook controversy, and just wrote 2-3 opening paragraphs that were relevant for this one.
> In the journalism world, pre-written articles are apparently quite common.
Actually, not "common" at all.
Obituaries for famous people are often done in advance, since everyone dies. It used to be one of the things that young journalists/interns did to cut their teeth.
But not every company has a massive security breach, so this was not pre-written.
It's not uncommon for big companies to fax (yes, fax) bad news to news organizations a few hours or days before posting it on their own web sites.
In the past, there would be embargoes on the information, but in the case of bad news, those are routinely ignored.
Welp, this sounds like a pretty bad practice. If there's one thing that journalists can count on, it's that famous companies are going to have a data breach.
This is probably not at all what happened. Things get heard and articles get quickly written. In this case it can even be the company spreading the news to key media companies in order to control the spreading of the news.
It's possible they emailed the release out before it was published on the web, I suppose. It would make sense, as I imagine news outlets have follow up questions.
It's also possible a bunch of them got logged out this morning, knew something was up, and started fleshing out their prewritten template with details like the date and symptoms.
I suspected there was a breach of some sort when my tokens expired in three places simultaneously this morning. The first thing I did was search Google News; nothing had been written yet. I wasn't sure they would ever announce it; it probably depends on the scale.
Facebook wrote it. They called their friend at NYT and handed over the article - then mentioned they would be sharing it with other outlets later. [just my guess].
Until they can provide some data showing the 50 million number is a fact, I don't believe it's that low. Every breach starts out on the low end, and miraculously ends up being double or triple as they do "more research" and the initial anger dies down.
I'm pretty sure they logged out more than <5% (90m of 2B) of their users, because, of the people I talk to on a daily basis on Messenger, well over two-thirds got logged out. I could see it if they meant 90m of American users or something.
I don't think you understand how big the world outside of the US is. They could log out 50% of all Americans and it would amount to about 5% of Facebook. How many people in other countries have you spoken to before drawing your conclusion?
Also, if the tokens can be used for third-party "Sign in through Facebook" authentication, this just compromised millions of people's entire digital identities, for everything from dating sites to financial logins.
I was logged out twice, once in the morning and again in the evening - I came back to this discussion to see if there was some explanation.
(No, it's not that it was just two devices - I had to log in four times just on my phone. Once for Messenger, once for FB itself; during each occurrence.)
It's more a problem of a biased sample than a small sample - this attack spread through the friend network, and so if one of your Facebook friends is in the attacked/vulnerable group then other ones are also likely to be.
Sadly, where I live, Texas, that handful of media outlets is what is viewed by the majority. Also, relating back to FB, it has become an echo chamber. Most people lock into one source, and once they are locked in, that information becomes gospel. They favorite/like/follow these sources, and that's all they will accept. Before I left FB, I was inundated with other people's posts from this handful of media outlets. People I know and consider friends, but this is what they are into. It was tiresome and baffling.
> But it’s clear that attackers exploited a vulnerability in Facebook’s code that impacted “View As”, a feature that lets people see what their own profile looks like to someone else. This allowed them to steal Facebook access tokens which they could then use to take over people’s accounts
Well, thanks to Facebook's "View As" functionality, I recently discovered that their privacy setting "Only Me" does not mean only me if another person is tagged in the picture. Meaning that if I have a picture with my ex somewhere in my profile, set to "Only Me", it actually means "Only me... and her".
I don't think that matters. "I hate travelling by air because the plane can crash" is a true statement for many people... but statistically, that's not the method of transportation that kills people.
The fact of the matter is... ACLs are hard to get right. It's even harder when you have various roles that can be checked against the ACL (logged-in user, batch job, logged-in user impersonating someone, etc.). But in the end, complexity is what's scary, not some feature that depends on complexity.
> The fact of the matter is... ACLs are hard to get right
This sounds similar to different distros of Linux. Some are security-focused, where nothing is allowed until it is explicitly allowed. Other distros try to be more "user-friendly" and pretty much everything is open.
Starting from a wide-open default and then trying to batten down the hatches afterwards does seem to be the harder way to do it, but that's exactly where FB is. They wanted everything open, and then had to decide to start limiting that data. FB was designed as a place to share info: if you posted it, you wanted to share it. I totally get that mentality. However, as devs, I imagine we have all built something that end users use in a way we never envisioned, and we've probably all had "you're holding it wrong" lines of thinking. Once you get to that point, you can alienate users by telling them to stop doing it that way, or embrace what's happening and make it work for them. Seems like the perfect situation for bugs to get introduced.
Which is why the point doesn't make sense. The article says tokens were leaked. There are plenty of other places where such a bug could happen, so it shouldn't serve as strong validation of "User impersonation code always terrifies the bajeebus out of me".
(Not to mention it's not really user impersonation, it's just filtering your profile page based on computed access level of one of your friends.)
Stealing the access token is the worst possible attack, because it wouldn't get logged or lead to any sort of notification. If they were only able to steal the passwords, this would have gotten caught immediately.
Yes, there have been other cases of exactly the same issue. I recall a case where it was possible to pretend to be people via the chat system while using “View as”.
So is this the exploit the hacker who was going to livestream the deletion of Zuck's page intended to use? In the end, they submitted a bug to FB, so I'm assuming it is.
Does anyone have an idea on how this exploit could have worked?
If only the hacker(s) would write a blog article or make a LiveOverflow-style video about it. It would be quite entertaining.
I'd be so curious/intrigued to know more about this specific exploit, it's a shame it wasn't a responsible disclosure from a white-hat.
One of the top comments by herpderperator dives into it pretty well. The part I had questions about was how it tied into the increased traffic patterns. But if I'm understanding it correctly, that would be because you're basically walking the graph: for each token you compromise, you have to masquerade as that new person to expose more tokens belonging to the people they're connected to.
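The traffic pattern follows from the fan-out: it's roughly a breadth-first walk where every compromised account yields another batch of requests. A purely schematic sketch of that shape (no real endpoints or APIs, just the traversal structure):

```python
from collections import deque

def walk_graph(start_token, friends_of):
    """Schematic breadth-first walk: each captured token exposes the
    tokens of that user's friends, so traffic grows with every hop."""
    seen, queue, captured = set(), deque([start_token]), []
    while queue:
        token = queue.popleft()
        if token in seen:
            continue
        seen.add(token)
        captured.append(token)
        queue.extend(friends_of(token))  # one batch of requests per account visited
    return captured
```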
I find Facebook's effects on privacy and democracy as scary as the next person, but so far their secure coding standards have been extremely high. They're one of the few big names NOT on haveibeenpwned.com; they run their passwords through a KDF and then encrypt the result with a hardware security module, and a whole lot of other good things.
I guess even the best (at secure coding) sometimes mess up.
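For readers unfamiliar with that layering, here is a rough sketch of the idea: a slow, salted KDF over the password, with the result then encrypted under a key that, in the reported setup, lives in an HSM. Here a local Fernet key merely stands in for the HSM, and all parameters are illustrative:

```python
import os, hmac, hashlib
from cryptography.fernet import Fernet  # local stand-in for the HSM wrap step

WRAP_KEY = Fernet.generate_key()  # in the real setup, this key never leaves the HSM
wrapper = Fernet(WRAP_KEY)

def store_password(password: str) -> tuple:
    salt = os.urandom(16)
    # 1) Slow, salted KDF so offline guessing is expensive.
    kdf_out = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # 2) Encrypt the KDF output, so a dumped hash table is useless
    #    without the separately held wrapping key.
    return salt, wrapper.encrypt(kdf_out)

def verify_password(password: str, salt: bytes, wrapped: bytes) -> bool:
    kdf_out = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(kdf_out, wrapper.decrypt(wrapped))
```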
Not the same: that breach was way before the acquisition; you can't conclude from it that MS development or security practices were lacking.
The issue is that Facebook has access to so much information that their security has to essentially be unbreakable if they don’t want a massive leak of sensitive user information.
Pervasive biometric security may be the next step. I know it's scary and could actually be abused but it also can generally increase the level of security for everyone.
This may be an urban legend, but I've heard there was once a bank robber who dipped his fingertips in acid. After a few months, his fingers healed, and the prints were exactly the same as before.
I recently talked to two friends working for FB. According to them, the culture there is very toxic. If you join with a master's degree, once you get in you need to get promoted within 22 months (I might misremember the actual number) or you have to leave. Debugging is never counted as real work, so with quick promotion in mind, nobody wants to fix bugs unless a bug becomes too obvious. And they also complained about having no work-life balance; they got pushed to check in code at 12 a.m., for example.
I suspect that that's very team-dependent (in a company with thousands of engineers in tens of offices, most things are). Personally, I got promoted based on debugging / code cleanups / reliability work, and I don't remember the last time I worked outside my self-assigned working hours (~10am-6pm), aside from on-call shifts, where I got one false alarm on a weekend a few weeks ago. If one of my teammates messaged me asking for code review at midnight and it wasn't a "the site will be down if this doesn't land right now" issue, then I'd reject their code on the basis that we should all be in bed :P
My understanding of the "get promoted or leave" thing is "engineers hired as juniors are expected to get to mid-level in under 5 years (with a half-way milestone at 2 years)"; once you're mid-level it's up to you if you want to carry on climbing. Personally once I got there I switched to a "work more efficiently in fewer hours and keep the same overall productivity" approach instead of trying to get promoted into the senior levels, and that's worked out nicely so far :)
Trading anecdotes, I have a number of friends at Facebook (both at Menlo Park and the NYC office), and they complain about the opposite: lots of people just coasting and doing the minimum needed to get by, really hard to fire people, etc.
What you’re talking about isn’t related to having a masters. All engineers are expected to progress beyond junior levels (get to E5) in a reasonable amount of time.
It’s not a great practice in my opinion. But in practice only a small percentage of engineers fail to make the grade.
What, exactly, is wrong with the expectation that people make senior level eventually? What exactly is wrong with being able to work at any time? I worked there for years, and if I was landing code at 12am, it was because I was excited about what I was doing. It was wonderful being able to work with people from all over the world on high-impact projects, and fixing important bugs was definitely high-impact. People who fixed vexsome bugs were heroes.
>> you need to get promoted in 22 months ... or you will have to leave
> What, exactly, is wrong with the expectation that people make senior level eventually?
The problem is when you base too much on promotion systems and performance reviews, that end up as a form of bias and favoritism not closely approximating the truth. Some amount of people are doing useful work for you (like cleaning up after people you think are the high performers) that does not surface there, and when you crap on them, pass them up, bust their morale, make them afraid of their next review, etc., you risk losing their valuable contributions.
The modus operandi in these companies is to rewrite/reintroduce whole products instead of fixing the bugs left by already-discarded people. So if you lose a critical mass of worn-out, higher-paid contributors, you just make a V2 or introduce a new product with a completely fresh team that will get discarded after another 3 years. This requires a fresh supply of motivated and hungry people willing to make sacrifices, and a much smaller number of people willing to exploit that.
Societal pressure to do whatever it takes to get rich and succeed is a serious drug. I also attribute some of it, in cases like this, to the fact that some programmers are unfortunately just not well adjusted.
Either you were excited about what you were doing or you got an 11pm page from chuckr and consequently had a lingering doubt about your expected lifespan...
I am finding it very hard to comment on this without violating HN guidelines and throwing ad hominems. But I will try.
You see, the parent poster said:
> They got pushed to check-in code at 12a.m. for example.
This is ENTIRELY different from you, overly excited about some project, deciding to work late and pushing code at 12am of your own accord. There's absolutely nothing wrong with that.
Now, if you are EXPECTED to do it, outside major emergencies, then you have a problem.
It’s perfectly reasonable to work at 12am, and there’s nothing in the parent comment to suggest that they’ve been working since 9am or so. Maybe they started working at 8pm. Modern work should be asynchronous. If your company cares about butt-in-seat time, it’s the one that’s wrong.
I don't think you can so glibly dismiss enthusiasm as Stockholm syndrome. Passionate people push the world forward, and mocking passion is a recipe for mediocrity and stagnation.
I think I'd just much rather spend the short time I've got left in my one existence doing things outside of work that actually make me happy and fulfilled than being exploited for the benefit of the mostly rich and powerful and the illusion of "progress". If you truly get fulfillment from that stuff then more power to you, but I don't think the vast majority of people who are pressured to perform do.
Just because we have more "stuff" and more advanced "technology" doesn't make life more worth living. Happiness levels across society don't increase alongside productivity.
> I think i'd just much rather spend the short time I've got left in my one existence doing things outside of work
Okay. That's your choice. But having made this choice, don't complain when those of us who choose to devote more time to work receive greater rewards. There's nothing wrong with paying for performance.
Of course there is. If you are working 12 hours a day, how am I with my paltry 8 hours ever going to be considered for a promotion? I quite need it to keep feeding my family after all.
I can’t stop my bosses from judging based on time spent working (which is silly, but hey, we’re all human), but I sure can try to stop my coworkers from subscribing to such insane work hours.
Keep on living to work, brah. I'll feel less guilty clocking out early knowing you're there to keep things running. I bet you'll feel differently on your death bed.
On the flip side celebrating a culture where (allegedly) people are expected to toss out their personal lives and time (what is sometimes referred to as passion in some circles) is a race to the bottom. It means colleagues who DON'T do this are punished or replaced. Perhaps that's what you refer to as mediocrity, the unwillingness to put in long workdays that extend into night.
If you did, do you get fired? Genuinely curious: what happens?
Personally I strongly prefer no fixed working hours. If you want to work at night, so that you can do things when it’s light out (especially in winter), and you still get the expected results, what’s wrong with that?
> If you did, do you get fired? Genuinely curious: what happens?
Probably not fired. But the interior motion sensor alarms go on automatically at 7pm, which would probably alert the security guards that roam the campus.
When I first started, I came in too early once and set off the alarms. People were nice about it, but I was super embarrassed because I was a n00b.
> Personally I strongly prefer no fixed working hours. If you want to work at night, so that you can do things when it’s light out (especially in winter), and you still get the expected results, what’s wrong with that?
I worked at a place like that once. When I was hired I was told I could make my own hours. I prefer to work early mornings, so some days I came in long before anyone else. A couple of times around 3am. But I always worked at least eight hours, and often more.
In my exit interview, my supervisor was rabid about how I wasn't a good fit because I "come and go as [you] please." She was so full of crap about other allegations against me that I didn't even have a chance to bring up that making my own hours was part of my employment deal.
I think the conversation above was more about people who put in very long hours because they're passionate and so forth, or they're obliged, or whatever the reason the 'company culture' is a certain way. I think flexible hours that you describe is a far more popular idea (and probably a good one if you ask me).
Yes, and it shouldn't be up to an employer to set that limit, but to regulatory bodies. Having people spend 12-14 hours a day working is not good in the long term, and expecting people to do that or be fired is draconian.
It’s not that cut and dry. For a lot of reasons, I don’t do side projects. But I do choose jobs that are using technologies that will keep me marketable. So if I want to learn a new to me technology, I’ll often work some crazy hours to both learn the technology and get the work done.
Yes my company benefits from it, but so do I. For instance, given a choice of trying to come up with an idea to learn about a feature of AWS and pay money for the resources I use, and take advantage of my work AWS (Dev) account where I am an admin, I would rather do a work related project where I have the resources and I don’t have to come up with an idea and I don’t have to pay for it.
What I don’t do is “signal”. I don’t stay at work late, I don’t send emails out after hours, and I pushback if they give me unreasonable deadlines.
Let’s say my team had a feature to get out and the React expert said he could do it in 30 hours and he could have it done by Monday morning without working extra during the week or on the weekend.
On the other hand, say it would take me 50 hours and I knew I would have to work on the weekend because I’m not as experienced, but I thought I could still have it done by Monday.
I might be willing to volunteer, knowing it would take me longer but it would also be done on time. During that extra 20 hours I'm still working and committing code, but a good deal of it is spent figuring out the framework. I wouldn't have a problem doing that because I am learning a new skill.
But, I wouldn’t work weekends to finish a project because I was given an unrealistic deadline.
The first scenario, the extra 20 hours benefits me and the company. The second, it just benefits the company.
Facebook is a cancer. It’s not “pushing the world forward.” It’s a phenomenal waste of energy.
Take those excited geniuses and have them work on preventing climate change from ruining all life on earth, instead of inventing new ways to profit off of people’s data.
There’s already a community of excited geniuses that work on preventing climate change - they’re called climate scientists, and their solution is a steep carbon tax. It could pay for a public interest ad campaign for recycling and energy efficient practices, distributed and targeted by the excited geniuses at Facebook. That way we can brainwash the Paleolithic know-nothing American public into behaving in a way that doesn’t destroy the planet.
Well, maybe this specific case doesn't apply to you, but enthusiasm and passion weren't the vocabulary used to describe many of my friends' experiences working late nights at fb.
I think I see the disconnect. Yes, passionate people move the world forward, but that's not every person, or every coder, or even every Facebook employee. Plenty of engineers just want to make a steady paycheck and live their comfortable life outside of work.
If Facebook's a grind, then that's something the employee has to figure out.
I think she probably meant that being passionate without meaningful equity is equal to being a corporate slave - even if ultimately company/world benefits, the person gets discarded/sacrificed at some point in a hierarchical structure with limited upward movement, not profiting from it in the future.
What really freaks me out is: the day Facebook dies, what will happen to all of this data?
If you heard about the NCIX story, where they basically abandoned their servers filled with users' data (over 13 years of data) and someone scooped them up and tried to resell them on the black market, one could think that a similar fate is possible.
Obviously if Facebook was going under it would probably trigger a huge legal process on how to handle the data but it clearly doesn't happen for smaller businesses...
If they go under, they'll of course sell off their assets to the highest bidder. Their shareholders will demand they do so. Or it'll be auctioned off as part of declaring bankruptcy.
> What really freaks me out is the day Facebook die, what will happen to all of this data?
Interestingly, Facebook owns your data. I believe if they wanted to, they could close the company tomorrow and put a facebook.tar.xz of everything they collected on archive.org or somewhere else.
No. You own your data stored at Facebook. Facebook just has a license to use it "as they wish" while respecting your privacy settings (i.e. uploading facebook.tar.xz is definitely not in line with most people's privacy settings).
Logged out of and back into what? Your mobile app? Your web browser tab that is left open indefinitely? I no longer use FB, so just curious. I know people that never log out of FB, and have closed their browser window/tab thinking that was good enough even though the "remember me" type option was checked. Opening a new window/tab to FB would show their account just like nothing happened because they did not log out. I know this is to FB's advantage of tracking all the things, but wow what a security nightmare.
There's no "remember me" option anymore; it always remembers. You have to log out manually and/or set your browser to delete the cookies and/or use Ghostery if you don't want FB tracking you all over the web...
My girlfriend and I experienced a really weird bug in the past. We would see that Facebook said we were active in the middle of the night when we were definitely asleep. It didn't make much sense then, but now it's possible those instances occurred because someone else was accessing our accounts? Both of our accounts were logged out.
So here is a question: my girlfriend only uses FB on her laptop, and always logs out when she's done. I usually make fun of her for doing this.
But does this mean most of the time that there was no active access token and she is mostly safe? (Excluding the windows of time where she was actively using FB) Do I have to take back all of my teasing?
This is an interesting point. Right now, I can't reconcile the "we canceled active sessions thus logging people out" as a fix with the fact that "View As" was the attack vector.
Possibly -- if the attacker accessed session IDs, they could potentially hijack the sessions of logged-in users. If you log out, most servers will destroy the session data on their backend, so there's no session that can be hijacked.
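A bare-bones sketch of why that helps (a generic server-side session store, not Facebook's actual implementation; all names hypothetical):

```python
from typing import Dict, Optional

# A stolen session id is only useful while the server still holds a
# record for it; logging out deletes that record.
sessions: Dict[str, str] = {}  # session_id -> user_id

def login(session_id: str, user_id: str) -> None:
    sessions[session_id] = user_id

def logout(session_id: str) -> None:
    sessions.pop(session_id, None)  # destroy the server-side state

def authenticate(session_id: str) -> Optional[str]:
    return sessions.get(session_id)  # a hijacked id now resolves to nothing
```

As the thread notes, Facebook's fix was to invalidate the access tokens themselves, which achieves the same effect of making any stolen credentials unusable.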
This is something I would suspect doesn't actually happen. FB wants to track all of the user's browsing habits, so maybe they just make the actual FB UI look logged out? Security-wise, it would seem to be made more complicated by their desire to never let a user be logged out, and it looks like it's complicated enough that it is biting them in the backside. Oops?!
It’s not really that complicated, you have auth tokens and you have tracking tokens, and you wouldn’t want to mix them anyway because you also want to be able to correlate multiple accounts logged in from the same browser over time.
The interesting part is that this is (at least) the second time this has happened.
In the past, when you were using "View As" you could read private messages without doing anything malicious (you were actually logged into the victim's Messenger account).
> This attack exploited the complex interaction of multiple issues in our code. It stemmed from a change we made to our video uploading feature in July 2017, which impacted “View As.”
Obviously, Facebook is an extremely complicated system. But I find it hard to believe a video uploading feature would impact 'View As'.
It's very easy for me to believe. "View As" is an authorization- and authentication-sensitive, limited user-impersonation feature. Video uploading interacts with, and complicates, authorization in an application with fine-grained privacy and permission models.
It's intuitively straightforward that modifying code for uploading videos could (read: not should) have authorization and authentication ramifications. One of those ramifications could then result in a vulnerability chain compromising user impersonation functionality.
I have seen far, far stranger head-scratchers in penetration tests and code reviews. The interaction boundary between, or middleware between, two seemingly unrelated systems is generally a good place to start looking for a security vulnerability.
> It's intuitively straightforward that modifying code for uploading videos could (read: not should) have authorization and authentication ramifications.
I get this part. But why would it affect only videos and not other entities (photos, statuses, etc.)? I would think creating (or uploading) any of these entities has the same authorization and authentication ramifications. What could be different for videos? Unless the privacy models are so fine-grained that you can have different privacy settings for different entities (I haven't used Facebook in years, so I don't really know). Your explanation makes sense; I'm just looking for a concrete example.
As someone who works specifically on user authentication stuff...
The problem is often that there are multiple sources of truth for who the user is. And if you have an impersonation feature, you by definition have two: who the user actually is, and who the user is impersonating. A single mistake of using the wrong one is all it takes.
Considering that "View As" requires your page view to render every control as the impersonated user, but only on your own profile, while rendering all controls outside your profile as the original user, I could see any engineering team dealing with some very carefully drawn and potentially confusing boundary cases.
Edit: just to elaborate, it's not just obvious impersonation contexts where this gets interesting. For example, linking your Humble Bundle account to your Steam account, or on Netflix, which user you are vs. which email address is being billed. Many apps have a function to share some document using a one-time expiring token. If you're also logged in, do you read permissions from the shared token or from your account? If you mix them, do you make sure anything that writes through this shared view can't touch your account itself by accident? We don't think about it much, but I think you can see how these subtle distinctions matter when you are thinking about access control, and that makes it a breeding ground for subtle mistakes.
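To make the "wrong source of truth" failure mode concrete, here is a toy sketch (purely hypothetical, not Facebook's code). A request made in a view-as context carries both identities, and a token minted from the wrong field is exactly the class of bug described in the press call:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RequestContext:
    viewer_id: str              # who is actually logged in
    view_as_id: Optional[str]   # whose perspective the page is rendered as, if any

def mint_upload_token(ctx: RequestContext) -> str:
    # Buggy variant: issuing the token for the impersonated user hands the
    # real viewer that user's authority.
    #   return f"token-for:{ctx.view_as_id or ctx.viewer_id}"
    #
    # Correct variant: tokens are always issued for the real viewer,
    # regardless of any impersonation context.
    return f"token-for:{ctx.viewer_id}"
```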
If I recall correctly, this is not the first FB vuln relating to View As. I searched and can't find it, but I seem to remember a bug around 2009 where you could basically take over a friend's account by viewing your profile as them.
Is it wrong to be glad FB's reputation has been tarnished (and its stock price has gone sideways) over the past year or so? For so long they've monopolized the talent pool in the Bay Area. If more people decide 1) they don't want to work at FB and 2) FB employees are itching to leave, then I see any stain on FB's employment brand as a net positive for the greater tech and startup ecosystem.
They haven't monopolized talent; they pay for talent. Facebook paying high salaries has increased all of our pay, equity, etc., whether you work there or not. The only people this may be bad for are founders who are in a zero-sum competition with FB for talent and now need to spend more money and equity to get it.
This is a very short-sighted view. Yes, it has some immediate benefit in terms of pay, but you have to consider the long-term societal tradeoff between developing addictive mental candy for people and developing societally useful technologies (or vice versa, as it now stands). We can focus on getting paid a lot now, or on improving everyone's wealth and generating value we can all enjoy later.
Why would it be remarkable that a few popular technologies come out of a big, rich technology company? People who create such technologies work at places like that. But there’s nothing about React or GraphQL that makes them only possible at Facebook.
A big, rich technology company has the resources to put people on the project full time and a revenue stream to justify such a broad architectural project.
There's also financial support for building a community around improving the tech, by encouraging outside contributions via meetups, conferences, social events, better technical documentation, etc.
At a smaller startup, an engineer is surely welcome to work on a skunkworks project, but justifying expensive large-scale architectural undertakings on the company's dime is problematic, especially if a quicker fix is available and buys the company a chance to kick the problem down the road.
With that said, it's not impossible to build a major popular piece of technology within a small company (Joyent and Node.js being a good example), it's just harder.
You're repeating what I wrote: big tech is likely to produce new tech, but new tech comes from other places too.
This discussion is mostly irrelevant to the fact that this particular company is completely reckless and unethical. The technology they accidentally produce while building a dystopia to make people click on ads[1] does not justify anything.
- Breaking democracy in the US and the UK by being _the_ platform for disinformation.
- Disinformation assisting genocide in Myanmar.
- Usage correlating strongly with poor mental health.
- Manipulating behaviour to encourage poor attention spans for the sake of ad-clicking.
- Constantly violating basic standards of privacy.
- (I could go on..)
Oh wait, excuse my arithmetic. I forgot to add another JS framework like Relay to the LHS of the equation, that makes it a net positive from Facebook! :D
I don't think it's fair to blame FB for the decay of democracy in the information age. Surely Twitter is also to blame. I think the blame is on the users. It's not possible to be perfectly informed; it is possible to keep your mouth shut if you don't know something for sure. Perhaps it's the fact that in real life, to say something you need to say it to someone's face, and on social media you don't have that social weight to carry. This makes people more likely to share misinformation. If so, it's not the fault of social media, but rather the fault of internet culture. More personal responsibility is the solution, not an improved ML system to detect fake news.
It is a problem inherent in the structure of most social media companies. And Facebook is the most significant social media company, and thus contributor to the problem.
I think the sadder part of this argument is that nobody outside of software engineers know or care what GraphQL is, yet it’s being touted as a “societal benefit”. How about the fact that my grandma with limited mobility can still attend church virtually through the Live feature? Regardless of how often the scions of the Valley disavow their own technology (I would /never/ let my children use our products!), there are a billion or so other people who actually use it to real benefit in their quaint little lives.
> Breaking democracy in the US and the UK by being _the_ platform for disinformation.
Blaming facebook for "breaking" democracy in the US and the UK is ridiculous. I can't understand how this can continue being a claim remotely considered valid. I agree (or may agree, at least in part) on some of the other points, but not on this.
Claiming that Trump won just because of the Russians putting ads on Facebook is naive at best, and ignores the fears and actual issues a very big part of the US population experiences daily. Isn't failing public schooling a problem there too? Does that produce citizens more or less prepared to actually participate in democracy?
Politicians (of all sides) in the UK have accused the EU of being the root of all evil since they "joined", again and again and again: you lost your job? Blame the EU! We can't cut taxes? Blame the EU! You really want to blame Facebook, and NOT the politicians themselves, because people voted for Brexit?
If the Russians tried to manipulate (and for sure they did, oh gosh, I'm pretty sure the US and the EU states never do - or did - anything to manipulate elections abroad! Evil Putin, why you do this to us? :cry:) we rolled out the red carpet for them!
Democracy was broken because actual journalists did not do their job. Stop doing what they (may) want you to do, using social media as a scapegoat for their own (willing, sometimes, for sure, at least if you read what Chomsky has to say) MASSIVE failure to be the "champions of truth" they claim (and blindly believe; I worked in somewhat close contact with them for years, I've seen it) to be.
I agree with your premise that many Facebook employees would give society a better return on its investment if they were employed elsewhere, but that's hardly Facebook's fault.
It's tempting to think that without Facebook they would get involved in cancer research or interplanetary travel, but given Silicon Valley's funding cycles, they would be more likely to end up building yet another food delivery startup or revolutionizing something by putting it on a blockchain.
Also, a bunch of recruiting venues exploited by Facebook are not that accessible to smaller startups.
E.g. one of the top previous employers for Facebook employees was Google (or some other outfit within Alphabet group, like YouTube). Most likely those people would've stayed at Google.
Another hiring source was university recruiting, which involves participating at job fairs at various universities, exhaustive days of back-to-back interviews, flying candidates for on-campus interviews, and eventually covering relocation costs (and potentially visas and immigration paperwork) for someone moving from Pittsburgh, Waterloo or Romania.
Would a smaller startup have the financial oomph to run a similar recruiting pipeline?
There's also its ostensible goal to connect people. I logged in for the first time in months just to see if I had been compromised. In about 15 minutes of goofing around, I got to enjoy countless happy baby pics posted by old college friends, and had a nice chat with someone I hadn't talked to in almost a decade, after I randomly commented on a status update. Then I logged off. I know that my kind of limited use is likely not the average scenario, and I can definitely understand people suffering when they get sucked in. But it's a site that does a damn good job of making it easy for me to find and interact with friends, and I don't believe the tech and design involved is trivial.
What makes Facebook "addictive mental candy" other than you not personally liking it?
I know lots of people who feel they get and have got tremendous practical benefit from Facebook. It isn't "addictive" unless you use that term to mean anything some people make that other people enjoy.
"Our results showed that overall, the use of Facebook was negatively associated with well-being."
Naturally, even if this study is accurate it isn't definitive; the causation could go in the other direction, that the unhappy use Facebook more often than the contented. But it's still quite suggestive.
A friend in HR who has friends at many of the Bay's companies told me that Google and other big companies hire to keep people away from other companies. Because they can.
So, yes, I believe they are trying to corner the market on the best programmers.
Wasn't Facebook part of the class action lawsuit over wage suppression and anti-poaching collusion between Intel, Apple, Microsoft, and Adobe?
They may pay more, but they colluded to make sure people couldn't leave without going far outside the Bay. That's a monopolistic trait.
I agree they weren't putting a gun to people's heads but they were making the environment less available.
I don’t think Facebook was part of that group. More importantly though, it was in the aftermath of that, where large companies started more aggressively poaching employees, that large company compensation ballooned and startups started to complain about the top large companies hogging talent.
I hate that this has happened. The Bay Area used to be a place where working for the big, shiny company that makes your parents happy wasn't prestigious. It was safe. But taking a risk and starting something new was admired. The present state of affairs reminds me of Wall Street.
What happened was that VCs started sucking up all the equity and it became not worth it, from a risk-reward perspective, for most people to work at a startup. This, coupled with companies staying private longer, meant that in the last 5 years you were better off working at G/FB than at a small or mid-sized startup.
While VCs certainly played into this, I'd say founders merit the bulk of the blame. VCs are generally more amenable than founders to larger equity pools for employees. They're also much more enthusiastic about IPOs than founders, since they want liquidity events for their investments.
Thank you both. VCs and founders together have sucked up all the potential value of working for a startup, leaving only risk and below-market pay to employees. Until this changes, big name companies are not just safer but higher expected value.
Yes, so much this. There is a very real opportunity cost to forgoing high salaries (and this opportunity cost is front-loaded as well since home prices keep appreciating).
There was always a level of prestige associated with certain companies even in the 80s and 90s, no?
The tech industry, despite its shortcomings, is vastly superior to Wall Street in that regard. It's still a meritocracy above all else.
Plenty of smart people break into tech after doing something else for a few years. If you want to go into investment banking, you'd better come from consulting or already be working in finance. Your only last bastion of hope is to get an MBA and then join the rat race.
I think that prestige only exists in the minds of some people who work there or have worked there. If I had a nickel for every time someone started a sentence with "well when I was at Google" for a scenario that is nothing like Google... Facebook's move-fast-and-break-things culture is fortunately a little less envied, in my experience.
The prestige most definitely exists and is especially relevant for people who don't have a strong public portfolio to show off their talent. An average developer from Google/FB etc. has an easier time getting access to opportunities than even an outstanding developer at a no-name company. Companies and hiring managers go through an implicit thought process along the lines of "if she/he got through Google, she/he must be good", which opens doors and helps in salary negotiations.
The economic success, brand awareness, and hipness of a company with the general public is only somewhat correlated with average level of engineering talent at a company. Different successful companies take different approaches to hiring - some focus on hiring a lot of reasonably competent engineers, while others focus on only hiring the best (and generally pay them a lot).
I think about this a lot too. All companies eventually decline or go through rough patches. A Google that's fighting for survival and losing money would be much more open to working with the Chinese government or selling user data to the highest bidder.
Trusting these entities based on their noble intentions today makes no sense to me if there's no legal agreement or regulation to restrain them tomorrow, when they get desperate.
> Is it wrong to be glad FB's reputation has tarnished (and stock price sideways) over the past year or so?
No, not at all. Their positive reputation was in many ways unearned, and it's a good thing to be glad that their own actions and attitudes are finally catching up with them.
From everyone I know at top tech companies this isn't happening at all. If anything the stock dip was a good thing for new grads because they got more shares.
It's sad that every time there is a post about Facebook the comments are extremely toxic and negative, and don't really even discuss the article itself. I would argue that 80% of all tech companies are doing close to zero in making the world a 'better place'.
Taken at face value, all those companies were/are trying to make the world a better place in the ways available to them. But when the externalities of those goals affect other people, they will become hated. Everyone has a different opinion of what 'better' means.
Since giant companies are most able to execute their vision of better, they will get most of the hate from people who have a different opinion of better.
Looks to me like it's all just a battle for power: people with ideas but lacking implementation and execution, vs. companies with ideas, implementation, execution, AND momentum. Let's not forget all the different forms of government either...
All other news outlets: Make people click or view ads
Random-blog: Make people click or view ads
Free games companies: Make people click or view ads.
Internet businesses are a vicious circle of cash for ads. Ads are deeply embedded in the business model, but is advertising the goal of any of these companies?
I wouldn't call this toxic in the least. There is no name calling or childish behavior. I would just call it debating.
I have a coworker who is jealous of me and my boss because we go at it, but we both know it isn't personal. It is about making the right call on a project. We each think we are right and are just trying to make our point.
So I think this thread is just more intense, but there is no ill will.
I wouldn't limit that 80% to the technology industry. Capitalism isn't optimized for human or societal welfare, it's optimized for resource extraction and the production of disposable goods.
This comment breaks the site guidelines. If you'd please review https://news.ycombinator.com/newsguidelines.html and follow the rules when posting here, we'd appreciate it. Be sure not to miss the one about accusations of astroturfing.
2.23 billion active users, 50 million affected. ~2% of their user base. Wow, that's a lot of people affected, and somehow just a tiny sliver of their user base.
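For what it's worth, the back-of-the-envelope math checks out; a quick illustrative calculation (nothing more) using the figures quoted in the comment above:

```python
affected = 50_000_000          # accounts Facebook says were affected
active = 2_230_000_000         # roughly 2.23 billion monthly active users
print(f"{affected / active:.1%}")  # -> 2.2%
```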
As many have mentioned here, estimates about the number of accounts affected usually start off low and increase as the full scope of the vulnerability is known.
Random question - is there a way to naturalize in the EU and use GDPR to ask Facebook to remove your data? (For example an Estonian digital citizenship)
I might be way way off and I am (obviously) not a lawyer but interested in material about this.
I’m an EU citizen and I’ve always been intrigued by Estonia’s digital citizenship. I’ve only picked up a few tidbits about how it works and they may be wrong, so take this with a grain of salt.
From what I understand, a digital citizenship is different from a normal citizenship. It allows you to, for example, open a company, but I don’t know if it gives you enough to make you an EU citizen. In Portugal there’s a bit of a kerfuffle about the Golden Visa programme[1] because it’s seen as a way to buy EU citizenship for cheap[2] and cause problems for nationals[3]. I’m not aware of the same happening through Estonia, so it makes me think it doesn’t give you EU citizenship.
In addition, you have to actually go to Estonia to set up the digital citizenship; it’s not something you can do completely online and be done with.
Methods such as marriage are not that effective in Portugal, at least at the moment. I know of two people with legitimate reasons to get Portuguese citizenship and the process has been a chore, taking years.
This is precisely why people should be upset over the massive data mining tactics of Facebook. Maybe it's not a big deal to some people that Facebook knows everything about them, but it should be a huge deal to everyone when that data is then accessed by hackers with malicious intent - from identity theft to blackmail to threats against life - all made easier with this data available to criminals.
I wonder where the "50M users" estimate comes from. It seems like the feature that caused it, "View As", is probably available to more than that many people. Does this mean that they managed to trace the attacker capturing the access tokens of 50M users? Even allowing for the bug in the first place, it seems like exploiting it should have been detected before 50M uses.
They have different versions of the code base deployed to different areas of the world all the time. They can narrow down the number of affected users based on where the code was deployed and how much it was used.
Said this yesterday in the other Facebook thread, and I'll say it again.
Working for Facebook is a morally bankrupt position. If you are an engineer you have plenty of job opportunities available to you and there is no excuse for you to continue contributing your labor and time to a wholly malignant organization. At a certain point one has to ask how we as an industry will start dealing with those who continue to take a paycheck from Facebook even in the face of constant and horrific evidence of wholesale ethical violations and negligence.
So is working at Google, Amazon and probably 90% of the big corps of the world in many sectors - from oil to finance to pharmaceutical to telecommunications and so on. And we can include the government. If you're a subcontractor or sold via body rental (modern IT slavery) you're also in the same position as an employee, so you're enabling their evils. Also, if one of those companies is a client of your company you're also enabling them (or a client of a client of your company? How many layers of separation should exist between you and Walmart before you stop being an accomplice in enabling their abuse of workers?).
Your point? Should we stop working in IT and go back to the fields?
Also, I fear that HN somewhat forgets the world is not SF; in Europe going to work for Facebook/Google/Amazon is an enormous bump (we're speaking 2-4x) in salary for many people, which in some cases means you can buy a house after 3-4 years even with the crazy rents back in your home country - and that's HUGE. Why should those people spend their time slaving as a subcontractor for yet another telco/bank trying to squeeze their customers dry at the first occasion while getting 25% of the salary and zero benefits? Are those less evil?
What needs to happen is that people keep applying pressure so facebook is forced to adapt its business model even if it hits their bottom line - which is already happening apparently.
> So is working at Google, Amazon and probably 90% of the big corps of the world in many sectors
While Google and Amazon both have their ethical problems and serious anti-trust issues, Facebook is in a league of its own. The complete, cavalier disregard for the consequences of its own actions as long as they get theirs is utterly unconscionable.
Unlike Google or Amazon, they add nothing of real value to society to balance the abuses they bring with them. All their labors are geared towards extracting the utility of other creators (their acquisitions like Instagram or WhatsApp) or sucking the time and attention out of people through addictive mechanisms. If Facebook disappeared tomorrow, a hundred federated online platforms to function as generic address books and life-updaters would crop up overnight, and just about every one of them would be better.
You would have to look at lines of business like Big Tobacco to find another field of similar moral perfidy.
Facebook is just the first to stumble over all the sh*t that they're doing. Amazon with their slave-like workforce isn't better, and Google was pretty good at keeping everyone happy and creating services, but they're as bad as everyone else. I think we will see Google running into similar problems soon. They also have the problem of not having enough guidance anymore; Sergey and Larry don't care at all and never really wanted to build such a big evil corp. The only one left at the top is Sundar, but I don't think he can manage to steer such a beast of a corporation into shallower water.
But I'm also hoping that it breaks apart and we find a better way than invading people's privacy to monetize platforms.
I'd class Google and Amazon as being analogous to an industry like coal, oil, or railroads back in the robber baron days. They bring horrible externalities and dodgy/corrupt influences on our politics, but we also need them to keep the modern world running and build the future. The problems with them are larger problems with how we manage our economy and our industrial and labor policies.
Facebook is more like Big Tobacco. They're purely malignant, continuing solely through adversarial relationships with their users meant to foster addiction or control the revenue streams of key industries like media and news. They're pirates. What benefits they provide, they have only ever made worse than other services and tools that predated them.
>but I'm also hoping that it breaks apart and we find a better way than invading people's privacy to monetize platforms
Amen to that. Treating attention as a form of currency has been utterly corrosive to society.
I work at a big tech thing. But these companies also do a lot of good.
Facebook keeps me more connected to old friends and family than ever before. It facilitates communication and organization of events that would be far more time-consuming to do without Facebook. I hang out with loved ones more, because pinging them on Facebook is easy.
Sure, Facebook could be better. IMO ads in Messenger are a cheap move that won't play out well. But Facebook could also be a lot worse than it is.
You're not going to rebuild the world from scratch. Nothing will ever be perfect -- but don't let that get in the way of better. Why not make it better from the inside? How is that not ethical?
Maybe I've just bought into the image Mark wants to give off, but I believe Facebook's naivety in its actions largely results from Mark's optimism and lack of skepticism. He comes off as a person with hope and belief that things are good and positive. He doesn't come off as a cynical, worst-case-scenario type of person. Time and time again, something happens, he writes that "it isn't so bad", and THEN evidence comes out that it's a bit worse. Usually the press invents a new level of hyperbolic hysteria, pretending it's even worse than it is for ratings.
BUT whether it's "fake news" or "data misuse", Mark believes that, all else being equal, good is more powerful than the malignant, and I think he is wrong.
Mark has a megaphone, he can amplify whatever the fuck he wants. Any story, any fact, any narrative. And he chooses to be passive, inactive, reactive, and let the algorithm decide, because he can maintain a position of faultlessness, forever blameless, as long as he can point a finger at a math problem, a glitch, or an oversight. He speaks in response to controversy, not preemptively.
The answer to election manipulation, fake news, etc. is to find better ways to rate and surface GOOD content. What SHOULD people be reading? Not, "well we only show what your friends posted, so blame your friends." Why is every person with money buying a newspaper? Because they have realized what Mark hasn't: to speak out, without prompt, about what is right and good. That means having an agenda, that means having bias, that means having an opinion. Mark can't have an opinion, because he is trying to please 7 billion people at once, and has chosen to favor blandness and inoffensiveness. I believe he has made a mistake; both in his lack of pessimism towards collective human behavior, trusting monkeys with typewriters to come together on their own, and in not having the courage to come out and say more offensive things. He should have torn the Senate apart like Howard Hughes in The Aviator. Instead he played nice, hoping they would soon forget about him.
Quick plug for sites that DO surface good content daily: what Mark SHOULD be doing. A Facebook front page, a TIGHTLY curated list (looking at you, msn.com!) of what people SHOULD be paying attention to, things that actually matter to humanity, avoiding the stories and serialized controversy the news cycle invents for its own riches.
[I don't think Mark is being insincere, or speaking in bad faith, when he says Facebook takes security seriously. Of all companies to scale quickly, facebook has not been one that appears to allow technical security debt to accumulate, unlike just about any non tech company in the world.]
I had no trouble understanding that "Mark" meant "Mark Zuckerberg" given the thread about Facebook and the comment branch about him. I don't think "Mr. Zuckerberg" is warranted on HN.
When I interviewed at Nest a few years ago, people kept telling me what Tony said or believes. You know, Tony. It's just an awkward way to refer to a 3rd party, unless they're in the discussion.
Zuckerberg only thinks he can't, and that <not having an opinion> is not itself an opinion.
These platforms focus on exploiting emotion and seeking 'engagement' to maximize time on platform and ads viewed/interacted with.
This creates a strong bias towards false information, since it appears to be more novel/surprising/engaging. Even without active corrupt injection of lies, conspiracy theories, dezinformatsiya, provokatsiya, etc., this will bend away from the truth.
They need to focus strongly on filtering for facts and truth, and yes, that means suppressing all of the above.
Zuck thinks he's not in the business of publishing, despite corralling all the biggest publishers onto his platform, and providing advertising to the likes of Russian trolls and Cambridge Analytica.
He needs to recognize what he is actually doing and take editorial responsibility for his platform.
Sure, let's all sling FUD around based on anecdotes and fake data. Let's shame people earning a salary working at a company and trying to solve real problems. Sure FB does some shitty things but honestly so does every large company out there. It's impossible to get to that massive size without breaking a couple eggs here and there.
People expect saints when saints are imaginary. The baseline to measure cannot be "perfection".
> While Google and Amazon both have their ethical problems and serious anti-trust issues, Facebook is in a league of its own.
Maybe, but at least Facebook doesn't actively try to manipulate you into believing that they're the "good guys" and "not evil". That's why, after all, my level of respect for FB is still a lot higher than for Google.
>> Maybe, but at least Facebook doesn't actively try to manipulate you into believing that they're the "good guys"
> They don't? News to me.
News indeed. Being good IS their mission statement. "Facebook's mission is to give people the power to build community and bring the world closer together."
Nation states (China and Russia) are backing a massive PR campaign against tools that they can't control... Hence these major attacks lately and all the hate for Facebook and Google. In reality, those tech companies are enormously more ethical than Baidu or Alibaba or almost any mainstream alternatives... And compared to oil, gun, drug, tobacco, alcohol and most other companies, Facebook and Google are saints.
Can we get proof of this supposed massive PR campaign? You can't just shoo away several real, well-documented, massive breaches of trust from Facebook with "well it's the Russians". The hate Facebook is getting comes from them being negligent about user data, and they were well aware of the problem.
There are hundreds more similar articles documenting hacking by China and Russia. Even the data breach in the article we're discussing is supposed to have been done by coordinated nation states. How can a company ever compete against a nation's resources? Expecting Facebook to magically secure themselves against attacks by a nation is wishful thinking. If you want to stop breaches like this, the nations that are doing it must be held accountable. Facebook is an easy target, but that's just attacking the victim. China and Russia are much more intimidating, but they are the true perpetrators.
> Mandiant says the hackers would log in to Facebook, Twitter, and Gmail from infected computers. Once logged in, they would send the spearfishing attacks which were the basis of their espionage.
This is a spearphishing attack... That's a totally different beast. Let me look at the gross mishandling of data that Facebook had recently:
Remember the Cambridge Analytica scandal? That wasn't a hack. That was Facebook deliberately letting apps access user data like it was candy, because that's what Facebook does. It was done on purpose. Of course, they didn't expect someone to scrape the whole network at that scale, so it wasn't fully intentional. Still, it was absolutely gross negligence, and is absolutely a breach of trust that happened outside of external government interference: https://www.vox.com/2018/3/20/17138756/facebook-data-breach-...
What about the recent scandal revolving around 2FA SMS numbers being used for ad targeting? Again, this isn't Russia coming in and hacking Facebook. This is Facebook shooting themselves in the foot with a bazooka. They should have been aware that, when a user enters their phone number for 2FA, they expect it to be used for this feature and this feature only. Not for ad targeting, not for notifications. Again, Facebook breached that trust. No outside interference, just Facebook being lazy, irresponsible, and incompetent around user data: https://9to5mac.com/2018/09/28/facebook-ad-targeting-2fa/
There are many other cases like the above. Facebook doesn't need China to attract hate. They can fuck things up by themselves well enough.
Please, don't excuse their behaviors by giving examples of companies doing worse. While FB or G, or Amazon, might not be killing people directly, you could very easily compare their methods to those of big oil or tobacco, because they ignore the influence/effect they have in certain countries with questionable governments.
In my mind, FB's inaction in countries like Burma has caused more direct conflict/hate than most tobacco companies. However, if you look at the oil industry, it's harder to draw a line... For example: is the oil industry responsible for the bombing of Iraq? The government gave different reasons, but looking at current evidence, for example, it just looks like pipeline protection.
So what I wanted to get at is that when a company gets to a certain size, it gets harder to separate government/company involvement. At that point we need to look at how the company acts, and in this case, especially FB's, they are doing a horrible job (IMO) at actually standing up for their users, instead choosing to protect their business interests.
(While this is their supposed purpose as a company, there exist people like myself who believe the current model of stock-sponsored companies is the wrong way to do business.)
Just an FYI: this is more a rant than anything. I'm tired of the BS that these huge companies are producing and selling as the solution to humanity's issues. Personally I hope that this current system fails in favor of something better; however, I have not thought up anything much better at the moment.
Your comment history shows no criticisms of truly awful nation state dictatorships or any of the 95% of all companies that behave far worse than Facebook. Facebook is just an easy target.
It's not an issue of paying well or not - it's that all alternatives are also evil and pay a lot worse (so you're worse off, you may work more and have a shittier commute and so on).
I'm pretty sure we'll end up discussing the various shades of moral bankruptcy soon enough though.
HN tends to think there's nothing else besides Google, Facebook, Amazon, Microsoft, and Silicon Valley startups. Truth is, there are tens of thousands of small software shops all around the world, working on - in HN's opinion - "mundane" software like machine controllers, production management software, ERP systems, etc. without collecting and indirectly selling all the personal data they can get their hands on.
> Your point? Should we stop working in IT and go back to the fields?
Not the OP... But no... Just stop working for morally bankrupt companies.
There are enough consulting jobs, to say nothing of current and future startups around (including your own if you create one), to not need to work for them.
> What needs to happen is that people keep applying pressure so facebook is forced to adapt its business model even if it hits their bottom line - which is already happening apparently.
Then again, would-be employers saying loud and clear that they won't hire people who worked for morally bankrupt companies is a potential answer too.
If software engineers pushed hard to consider that working for them was a dead-end job rather than something very desirable, then maybe they might end up attracting less talent and go bust eventually.
On your deathbed you'll only take a single thing with you. Not your house or family; not your wealth; only whether what you did with your life was worth it.
A very very few, like Alfred Nobel, are lucky enough to see what their contemporaries thought about their lives before they passed away and got to adjust. You probably won't.
OP’s stance is (I believe) that we shouldn’t align our incentives along money only, but also on the core values of a company. Call it naive or idealistic but I, and obviously many others, believe that we have a responsibility as people and employees to work for companies with the right set of values. Truly lived, not just printed out and put up on the wall. I think this is something that motivates people these days. Do something with a good impact on this planet with the single lifetime we were given and f*ck the money (ideally have both).
Arguably, no: rather, we should work to organize our workplaces into democratically controlled firms whose bottom line incorporates the ethical thresholds of their employees, who are then free to adjust those thresholds back to what they can sleep with at night.
It's only 2 large companies, Google and Facebook, who are now out of control in building invasive surveillance systems, so it's completely disingenuous to try to pass this off as some sort of generic corporate issue.
People who work for known unethical organizations are hardly celebrated so these are poor attempts at muddying the waters and distraction.
The bottom line is that if you can't behave ethically you can't expect ethics from others in society. If six-figure-earning engineers can't exercise ethical choice, then who can? Given the level of discourse, it's incredible how anyone can expect any ethical behavior from the poor and starving, and yet we do. The double standards and greed from the educated classes are stunning.
"They're all like this so it's impossible to do good" is an even more morally bankrupt position. It rejects personal responsibility and agency, places you as the victim, and allows you to continue to be a bad actor in the world.
It's not impossible to do good, that's not what I'm claiming at all, and that's why the last line is there.
I'm pretty sure you can do good even within Facebook, doing your utmost to keep the company accountable (from my experience in another big corp, we don't see 1% of what's happening inside it, or how many people are facepalming - and we'll never know whether many things were just humans being stupid or actual calculated decisions).
You can also keep your guard up from outside and force Facebook to fix itself (obviously, as much as its business allows), for example by pushing it to hire more moderators and get better at prevention so that things like Myanmar don't happen again.
What I'm saying is that it's impossible (and in my opinion, pointless) to claim moral superiority and to accuse people of being morally bankrupt because they work for corp X.
There are plenty of companies which can't easily be categorized as having significant negative effects to the world. You can work for a "bad" company but consciously constrain your work to a business unit which improves customers' lives.
What you're using is false equivalence. You know what the worst thing for the environment is? Being born. Why do people insist on living when everything is bad? Reject the notion that you're powerless to change things.
Why are you so invested in the continuity of facebook?
Its like, "Doctor, why dont we just apply pressure on the tumor until it starts to grow at more reasonable rates!".
No. When you find cancer, you try to eliminate it.
Facebook is exactly this -- cancer. They have been aggressively monopolizing software for socializing so that they can arrive to the dominant position they are in now.
Until Facebook becomes more transparent w.r.t how they use the user data and until Facebook gives users autonomy -- they need to be regulated. We need to define constraints regarding how they present and manipulate user data and interactions.
It died with network effects. And no, I'm not going to make all my family (including non tech-savvy, 60-year-old people) download a second app to talk with me apart from the ubiquitous Whatsapp they use to talk with everyone else.
It is completely ridiculous to try portraying someone working for FB as morally disabled. I'd suggest people express their moral superiority by deleting their own FB account, without much ado.
This. It's like with politicians. If you fall for the argument that "they are all the same", you are benefitting the most corrupt. And they won't have an incentive to become even slightly better.
Yes, it's called voting. Unfortunately it seems that in the US the Democrats are more in the pocket of Big Data than the Republicans so it's a choice of 'do I like minorities or privacy more'.
Is it really that bad being a software/hardware engineer in Europe? I was thinking of entering via Holland (easiest work rights) and then after a few years try to work in France or Italy where it is almost impossible to fire someone, so I could retire on the job.
Depends on your expectations. You can live comfortably as a skilled software engineer in Europe, but you won't be able to build a nest to retire comfortably in 20 years.
I feel like this overreach is a direct result of the absurd expectation VC-backed companies have of constantly growing forever. People who are uncomfortable with this fact can choose to only work at bootstrapped companies that are doing well.
This is why in my dating profile I say that I'm skeptical of people who work at FAANG, and if they do that they'll have to prove they're human. (not in so many words)
going to work for Facebook/Google/Amazon is an enormous bump (we're speaking 2-4x) in salary for many people, which in some cases means you can buy a house after 3-4 years even with the crazy rents back in your home country
That feels pretty dismissive. Wealth independence is a huge deal that gains an individual very important security and well being.
We can chastise the workers as much as we can chastise the consumers who support this system. We’re all complicit in this, for the most part. We can work together to build alternatives.
And so are people that have a phone built in China, by companies abusing workers badly enough that they have to build nets to prevent them from committing suicide. And so are people that use SIMs from carriers known to have an army of H1B slaves on their payroll through the usual suspects (Infosys). And so are the companies using hardware and routers sold by companies using the same methods (hello Cisco?). I can go on literally forever, just choose a random industry sector.
Yes, it's a horrible world. I know, I'm saying that it is, I agree completely. What's the alternative? We blame people that work in other companies and claim we are honest and pure? Do we say that since there are X layers between us and them then we're fine? :)
> Yes, it's a horrible world. I know, I'm saying that it is, I agree completely.
Whew! It's not just me? That's reassuring (As in, yes, there IS a monster on the wing of the plane, but that's so much less scary if you see it too. Little Twilight Zone reference there.)
> What's the alternative?
I don't actually know. However...
Bucky Fuller calculated that we would have all the technology we need to supply our needs globally for everyone, if only we applied it efficiently, by sometime in the 1970's. We have arguably already passed that point, meaning that our problems today are not physical, that they are just psychological (or moral or religious or spiritual if you prefer.)
Starting in the mid-1970's a kind of effective cybernetic psychology has been developed (under a kind of trade name Neuro-Linguistic Programming) that provides simple algorithms for correcting a great deal of malfunctioning psychology. (Be aware that the Wikipedia article for NLP is crap, it's haunted by skeptics. I can vouch for NLP. Not only is it grounded in hard science, I myself was cured of serious debilitating depression. I owe my life in a sense to NLP.)
To sum up, we have the technology to supply our needs, and the technology to overcome our psychological problems, so I think it's a matter of A) dispelling ignorance of the possibility, and B) logistics.
Alright then, where do we draw the line? Can I go buy a person? If not, can I invade Sudan?[1]
> Ethical consumption is a dead end.
Should we give up even thinking about it and just do whatever is most convenient?
Will that solve all our problems?
You know Amazon treats its employees like disposable shit. If you buy from them anyway you are putting your self-love above the love you should have for the folks working there. You know Uber is extracting value from their drivers[2] and will discard them without compunction when the robots come online. And they killed Elaine Herzberg. If you ride Uber you're rewarding them for all this and putting your self-love above the love you should have for those folks driving.
I'm gonna keep fighting for what I think is right. That means telling people that they are moral cretins when they are blithe about it. (I don't think violence solves anything, but a good rant can shake up a body's thoughts. I have friends that shop through Amazon and ride Uber and I don't chide them too harshly or often.)
Working at FB or one of the other emerging technocracies isn't an instance of "Never let ideological purity prevent you from effective action." It's a case of putting money above core values, or not having those values in the first place, or simply not paying attention.
If you're going to be a Morlock at least be a self-aware Morlock, eh?
> You can't even buy slavery-free clothing or food reliably.
You can try.
If you don't even try you're a moral cretin.[3] It's a common malady in this empty and abortive age.
[1] One of the few places where outright slavery still occurs in modern times. If that's not grounds for attack what is? Oil?
[2] "They don’t pay the cost of their capital. The wages they pay to their drivers are less than the depreciation of the cars and the expense of keeping the drivers fed, housed, and healthy. They pay less than minimum wage in most markets, and, in most markets, that is not enough to pay the costs of a car plus a human." https://www.ianwelsh.net/the-market-fairy-will-not-solve-the...?
[3] "Origin Late 18th century: from French crétin, from Swiss French crestin ‘Christian’ (from Latin Christianus), here used to mean ‘human being’, apparently as a reminder that, though deformed, cretins were human and not beasts." https://en.oxforddictionaries.com/definition/cretin
Between accusing others of being apologists for slavery and moral cretins, you've dived into full-out flamewar of the kind that we ban people for on HN. It's unacceptable, regardless of how morally correct you are or feel you are. Flamewars burn what they burn regardless of how righteous the flames, so please don't post like this again.
Dang, you're right and I'm sorry. I won't do it again and I'm gonna give myself a 24-hour cool-down period before I post on HN again (on any subject). I let my passions get the better of me and I should have known better.
Between you and me, I pulled a muscle in my neck/shoulder this morning and I've been in a devil of a mood all day. I'm not trying to make excuses, I shouldn't have taken out my bad mood here. Today's B.S. is not indicative of my best efforts.
HN is an incredible forum (I interacted with Alan Kay the other day!!!) and I'm ashamed to have added such counter-productive negativity. It won't happen again.
I'm sorry for being part of the problem today. Have a great weekend.
I'm not defending slavery. I'm saying that if you buy food or clothing you almost certainly are buying goods that used slave labor in the supply chain, whether or not you're aware of it. Ethical consumption is impossible and trying to whip people to do it is a distraction from actual solutions, which would involve state intervention.
>>in Europe going to work for Facebook/Google/Amazon is an enormous bump (we're speaking 2-4x) in salary for many people, which in some cases means you can buy a house after 3-4 years even with the crazy rents back in your home country - and that's HUGE.
So this is how we justify it now? “But it will allow me to buy a house in 3-4 years”?
I was just noting that in other parts of the world (where the average salary nationally for developers is NOT a comfy 60k$) finding alternatives to those companies can be a little bit complex, and for sure no less evil.
The "justification" (if you want to call it that way) you are looking for is in the part you forgot to quote and respond to ;)
I find it hard to believe that only FB has an office in that country that offers the $60k you are referring to. I work in London and there are a lot of companies that will pay close to an FB salary if you are a good developer. And yes, there are companies that are not as corrupt as FB, Google, Amazon.
So I'm wondering (out of curiosity as I can't personally think of any) where in Europe is that happening?
Some of my favorite and most ethical friends work everyday at FB to protect against these kinds of attacks and others. You're making an incredibly global statement about an organization where a more tailored statement would carry your water a lot better.
Over the years I've met a lot of people who went into HR in the hopes of making things better for the rank and file. I've yet to meet a single one who didn't report a few years later that they had been deluding themselves.
I sincerely hope your friends will be more successful. But I doubt it.
I think treating Facebook as a singular project is the bigger fallacy.
What about those who argued for React to move to a more reasonable license? What about those who pushed for open sourcing code and hardware in the first place?
Companies at the 25k+ employee size are complex, often internally-disagreeing enterprises. Code may be pure, but resource allocation (programmer time) is political.
Of the recent Facebook news (e.g. shadow profile and 2FA phone numbers being used for ad targeting)... "we had bugs in our code" is by far the least ethically problematic.
By my reasoning it would be a fallacy to assume that all the members of a large project are unethical, which is exactly what some in the comments are suggesting.
And is ridiculous, for anyone who's worked in the real world.
We all make ethical compromises, and have worked for companies that made ethical decisions we didn't agree with.
That was the crux of the Nuremberg Trials: what portion of an endeavor's ethical decisions can be assigned to an individual.
The answer was "more than none, but less than all."
“it would be a fallacy to assume that all the members of a large project are unethical, which is exactly what some in the comments are suggesting“
Can you quote anything at all from this particular thread that supports this statement?
It’s clearly the position you are trying to refute, but I think it’s a straw man.
That said, at least you are in the domain of agreeing that working for Facebook is an ethical compromise:
“We all make ethical compromises, and have worked for companies that made ethical decisions we didn't agree with.”
Once again, your position seems to be to try to erase the distinction between Facebook and any other company. The Nuremberg trials demonstrate that this position is not tenable - we don’t erase the distinction between the Nazis and any other government.
The argument here is that by now there is enough evidence that this compromise is too much, and that ethical people who work at Facebook should consider that.
Being an ethical person doesn’t imply some kind of mythical ethical purity. It implies that you care about ethics.
> Can you quote anything at all from this particular thread that supports this statement?
The original parent comment of this thread...
> Said this yesterday in the other Facebook thread, and I'll say it again.
> Working for Facebook is a morally bankrupt position. If you are an engineer you have plenty of job opportunities available to you and there is no excuse for you to continue contributing your labor and time to a wholly malignant organization.
As to your comment...
> The argument here is that by now there is enough evidence that this compromise is too much, and that ethical people who work at Facebook should consider that.
The argument in this thread is not that people who work at Facebook should "consider" that, but rather that anyone who continues to work at Facebook is no longer ethical.
It's not a straw man if the very first comment proposed exactly that.
Which is a sort of absolutism that I'm taking issue with. I'm sure there are parts of Facebook that are wretched hives of scum and villainy. I'm also sure there are parts that would make my employer and yours look terrible, ethically, by comparison.
So maybe we should use a bit finer brush when tarring people. That seems like a fairly modest proposal to me.
I think it’s you who is falling into the trap of absolutism.
The poster you quote believes that Facebook is clearly immoral, and to continue to work there is indefensible.
Have you considered that this might be true?
Now perhaps you don’t think it is true. That would be my guess based on your positions in this thread.
A non-absolutist position would be to say ‘Facebook as a whole isn’t that bad - here are my reasons...’
Whereas your actual position is ‘nobody can make valid ethical statements about organization above a certain unstated size’.
The first is holding a different opinion. The second is an absolutist claim.
Another counterargument could be like the Nuremberg defenses that you have already mentioned - I.e. Facebook is that bad but there are good people there who don’t realize that, or who think they can change it, or don’t understand the consequences of the orders they are following etc.
But that’s not what you are saying - you are saying that nobody should claim that Facebook is that bad.
You seem to think that Facebook is no different from any other employer, but have offered no explanation other than to suggest that not all steps taken by all employees are calculated to be evil.
It’s perfectly reasonable for others to think that Facebook is so obviously corrupt that to work there is morally bankrupt.
Look friend. If I lose my job, you aren't going to do anything to augment my lack of income. You're not going to do anything to provide health insurance. And you're not going to do anything to help me find new employment.
There is no "We".
Here's an idea. If this topic is something that you feel is important, then perhaps you can set aside half your income for a general fund to help provide benefits for employees who leave Facebook for morality reasons. Maybe if it gets enough momentum others will also provide funds. Given enough time perhaps this will help the "industry" to become more ethical.
You know what's not going to convince anyone to leave Facebook now? Trying to set up some sort of ad hoc lynch mob to "deal" with people who are trying to pay a mortgage.
Exactly this. Decline a 3x-paying job and there are another 1000 developers in line behind you. You take a loss on personal income, Facebook carries on being Facebook, and nothing will change, except that you have less money in your bank account.
I've got several infosec friends at Facebook right now, all new hires, trying to preserve our democracy from attacks. I don't consider them morally bankrupt at all.
It seems like a vast networked trove of everyone's information is doing the opposite of "preserving our democracy from attacks" - it's actively enabling them.
Minimizing the damage of collecting that information is maybe not directly evil, but helping kick the can down the road to the point where Facebook is so deeply networked into our society that it's unremovable doesn't seem like a high horse to be riding on.
Your line of reasoning makes it sound like it would be simple for everyone in the world to stop using Facebook or some similar social media platform. I see nothing wrong with trying to improve something that is already well ingrained into human society.
Hopefully I didn't make it out to be simple as much as stark.
I feel like the option where you get a bunch of money and perks is probably the "simpler" option.
Did they just find their moral compass after 2016, or were they selectively applying morality when their platform was being abused to spread misinformation in the 2016 elections? Or when Facebook could not curb hate speech in Myanmar, which had a direct impact on thousands of lives? Or when Facebook ran mood manipulation experiments... or when Facebook decided to use 2FA-type security features to show ads to users?
How does that fit with Myanmar? It's hardly a democracy, but is 'morally bankrupt' too strong a term for what they have helped create there?
And that’s ignoring Cambridge Analytica, the creative accounting for that and the other related scandals.
Are they spying on private communications to identify terrorists? Please don't tell me you are calling this rudimentary Russian propaganda an attack. The ads were aimed at the lowest common denominator of America, and those that were already going to vote for trump I am sure. The reality is Trump won because enough of our citizens felt left behind and wanted to try something other than the status quo.
And unfortunately Hillary Clinton did it to Putin first, probably using similar methods.
> Working for Facebook is a morally bankrupt position.
Facebook offers a service that people want. A service that is not morally bad; if anything, connecting people is a positive thing.
The monetization of that business is what has proven problematic. As it is offered for free, it is people's privacy that is being sold.
Who should solve it?
The problem with "just don't work for Facebook" is that it shifts the responsibility for policing companies from governments, which have the power and resources, to individuals, who do not. Of course individuals have a moral responsibility, and that is why whistle-blowers are so important in all industries.
But it is the government that has the responsibility to ensure that the industry remains a positive force in the country. Tech giants are a new phenomenon. Regulations still have not caught up with their problems. But governments around the globe need to shape up and rise to the challenge of letting companies offer services that people want and need while minimizing the harmful impact that some business models have.
If you do not like the ethics of the company that you work for, change jobs. You are going to be happier. But that is not going to make the company more ethical; if anything it is going to become less ethical as people who worry about such things move out.
I mostly agree with the rest of your post, though. I believe it's possible, though very rare, to work at Facebook and have a positive impact.
And there's definitely a total mismatch between regulation and the realities of big tech companies. I'm skeptical of government regulation solving privacy issues specifically (I think that stems more from a widespread cultural misunderstanding or unawareness of privacy concerns) but maybe it's needed for other ways these companies are negatively impacting the world.
I didn't say "don't judge ever". I'm opposing the idea of judging the entirety of someone's moral compass based on so little – in this case, a single point of data.
Starving yourself is undoubtedly morally bankrupt, especially so if you have elderly parents or young kids or random loved ones or dependants in general. It is not by accident that killing oneself is considered a primary sin in many mainstream religions.
I've always liked this line, even after becoming quite wealthy:
And again I say to you: It is easier for a camel to pass through the eye of a needle, than for a rich man to enter into the kingdom of heaven. --Matthew 19:24
When I first heard this as a kid, I immediately thought about a real-life camel attempting to fit through the hole for thread in a sewing needle. That seemed like a very odd comparison: who would try to do such an odd thing?
As an adult, I heard someone say that one of the gates in the wall of Jerusalem was named the "eye of the needle" because of its shape. If a camel was loaded up beyond a certain height, it could not fit through the gate. It was this situation that the biblical passage was supposedly referring to. So as with most things, context really helped.
</random_tangent>
My understanding is that it's more hyperbolic than that. There are gates around Jerusalem called "Eye of the Needle" (at St Alexander Nevsky's Church, for instance; it's supposedly the location of a 1st-century city wall) and a man can barely fit through these himself.
I could be totally wrong, of course. Maybe they started naming holes in dilapidated walls that way to bring in tourists/pilgrims.
Working for any company writing code with the purpose of violating the privacy of your users, is ethically questionable indeed.
So there are certainly engineers at Facebook doing very questionable things - but not all of them.
In general: the more influence you have (via code you write or otherwise via power you are aware you have over people), the more you become ethically accountable for your actions.
For better or worse, a lot of people can distance themselves from what the company does. I used to work as a software engineer for one of the largest banks in the world... The department I worked in had exactly zero to do with money, I never got close to a customer, I never had to deal with an account of any kind. It was a really cool gig, too.
With as big as Facebook is, a lot of people (the majority?) are not directly connected to all the crap you see in the news. Sure, you can say they contribute to it, and you'd be totally correct... but I'm sure what they see is a bunch of smart people working on cool technology with a good salary and free lunch.
So yeah, it's probably "morally corrupt", no denying it, but so is the majority of companies that hire more than 50 people, one way or another. You have to work somewhere.
Then you have people who are trying to do the right thing from the inside. I knew someone who worked for Google purely to try and change its culture. The paycheck probably didn't hurt. There's a lot of these people.
I am extremely imperfect and a hypocrite on many levels. I never practice what I preach, and if I'm being honest I can't say what I'd do if I had an offer from a company like Facebook or Google that promised to completely change my financial situation. I don't pretend to be better than anyone who has been in that situation and decided that the money was worth it. I'm not better than people who took the job because of the career opportunities, the social cachet of working at a FAANG, or the chance at working with one of the world's richest datasets on the world's most powerful computers with some of the world's brightest minds. It would be a fair critique to say that my pronouncement smacks of self-righteousness borne of a moral framework that's never been challenged. Like a rich man who chastises the poor for stealing to feed their families, it's easy for me to moralize.
But, I do know that working for companies that are funded by advertising makes me feel uneasy. I know because I've worked at one or two. I also know that the company I currently work for charges our customers for the services provided, and I know there's a consensual quid pro quo in every customer agreement. I also know we don't track our customers beyond their consent. I would hope if my company ever started doing that I'd speak up, and if things didn't change I would hope to have the fortitude to leave and continue to speak up outside of the company.
I also have no doubt that my day to day work is automating away someone's job, somewhere. Where someone used to make a good living, my code will run instead. People might not get overtly laid off because of my code, but there's no doubt people who use my company's services hire less people... it's kind of the point. I definitely think about the moral implications of that. Sometimes I'm not super comfortable with the hypothetical effects of my code over a long time period. Even if I contribute less than 1% to my company's service, if my company's service saves our customers on average the equivalent of one salary a year I've been responsible for the, at best, lack of creation of hundreds of jobs. In a different world someone fed a family, bought a house, and lived a life with one of those salaries, and now that opportunity is forever gone. Sometimes that's a hard thing to grapple with, and I really hope that I'm not contributing to negative economic trends that hurt a large majority of the world's populace while enriching myself. Chances are I probably am, though.
However I am certain of a couple things. The mass collection of billions of people's information is putting upon yourself an incredible responsibility that I find hard to justify. This wasn't by accident, this wasn't dumb luck, this was a purposeful attempt to amass and control power. This power isn't inherently good or evil itself, but even in a vacuum one has a right to be suspicious of such power. Fortunately we don't live in a vacuum and over time Facebook has shown itself to not be a good steward of the power it's created. I have no doubt there are plenty of ethical people that work at Facebook, and there are definitely plenty of ethical, smart people who work in Facebook infosec. I don't blame them for the data breach. I blame the creator of this Pandora's box, I blame those who willingly continue the abuse of this power, I blame those who purposefully profit off the abuse of this power, and I blame those who refuse to realize that they will not change an organization that refuses to change. Until the use of Facebook's data is no longer rewarded with massive amounts of money Facebook will continue to collect and sell this data. The incentives are very clearly aligned. Working there, no matter your intentions, cannot change these incentives. I'm not saying everyone at Facebook is evil, but if the hiring reputation is true they are too smart to not understand these things for much longer. Facebook will continue to be morally bankrupt until its power is abolished or democratized, and since a Pandora's box cannot be closed I'll settle for democratized.
It's pretty interesting to me how incredibly polarizing the topic of Facebook is on the web. People seem convinced that Facebook is an evil enterprise and has to be stopped and its board, employees, and shareholders should all suffer. At least that's how it seems based on the loudest voices on the web because defending Facebook is a highly unpopular position.
But why? Use of the platform is entirely voluntary. Beyond that, what are they doing besides targeting ads at people based on pretty basic info that those users (almost entirely) provided to Facebook of their own free will. Yeah, I know they were found recently to be using "shadow contact info" to target ads. But even that study seems fairly contrived- they had to upload an entire organization's private data without their consent just to prove the point they were making in the article. But even if we all agree that that's bad, so what? Okay, so if they stop using MFA data and shared contact data not necessarily shared by the users to target ads are they suddenly not evil? My guess is that most people would remain unconvinced.
I think there's some part of society that just hates the idea of advertising of all kinds. They think they should be able to move through life without having information foisted in front of them without their consent. That's a fine view, but the reality of our society is that it relies on businesses being able to sell and they do that largely by advertising to consumers. I also think a lot of the lament comes from the idea that using these platforms is a waste of time, which obviously is more of a personal value judgment call.
But above all this, I believe Facebook is hated because it's powerful. But it's powerful because people the world over use it and use it a lot. And that doesn't seem to be changing at all based on Facebook's last several earnings reports. People seem convinced that everyone should agree that Facebook and targeted advertising is evil and the use of the platform isn't worth the trade-offs. And yet people don't care. That stubborn fact. People just simply do not care about having their phone numbers, ages, political beliefs, genders and interests used to target ads at them. Lots of people can't wrap their minds around that fact- that not everyone is so concerned about using that info for ad targeting. Some fraction of the active FB user base probably does care- but not enough to delete the app and stop using it. "Your actions speak so loudly that I cannot hear what you say."
I hate Facebook but I still totally agree. And I just don't use it, though I still maintain an account for messaging purposes. Like, in my perfect world, Facebook would go away, because I find it kind of gross and think that it is generally negative. But I don't think it cracks the top 100 of "Most Evil American Companies"; it just attracts a ton of attention because of its size and how pervasive it is. For example, a company like Pepsi, or Walmart, or many oil companies is much worse in how "evil" it is (i.e. intentionally doing bad stuff to increase shareholder value). It's tough, and I do not like Facebook, but yeah, their villainy is overrated (but still, screw Facebook).
FWIW, I’m also not what most would consider an active user although I do have an account. I’m usually more turned off by what people post than the platform itself. I just think the attacks are a bit too sensationalist.
Yes absolutely. I stopped using it because I felt weird and voyeuristic about knowing what people were up to that I hadn't seen in years, long before any of this controversy started.
No it's not. Do you know Facebook creates shadow profiles of you by tracking your online activities?
Even if you discount that, what about when all your friends use Facebook? Then you are going to be forced into a situation where you either use Facebook or stay disconnected from your friends.
> Do you know Facebook creates shadow profiles of you by tracking your online activities?
How would they do this if you never visit the site or download the app? If you're referring to the use of a tracking pixel, compared to the full-fledged use of cookies by other ad networks (the Gizmodo article from yesterday itself noticeably had ads all the way down the page based on sites I'd recently visited), surely that alone doesn't make FB evil relative to other advertisers?
> what about when all your friends use Facebook? Then you are going to be forced into a situation where you either use Facebook or stay disconnected from your friends.
Why are you forced to use Facebook or be disconnected from your friends? You still have texts, email, Twitter and phones. I know a few people (admittedly not a lot) that refuse to use Facebook. They complain occasionally because they believe they are missing out on seeing photos or something, but nothing to the point where they are in the dark. I think the use of Facebook is a convenience and each person has to weigh their values against what they know Facebook does. But let's not elevate what Facebook does beyond the level of displaying ads for money and crossing ethical boundaries in some instances about what data they use to target those ads. Based on some comments you'd think they were proactively trying to destroy the world.
> How would they do this if you never visit the site or download the app?
Your friends install the app on their device. They provide access to their contacts. FB slurps in all of that data. For every person in the user's contacts, FB compares that info to their records. They update connections where found, and start new records where not found. So they now know your name/email/phone number/physical address, depending on how detailed your friend's contact entry was about you. I haven't read whether FB can also pull in a picture if your friend added one to your contact entry, but if so, they could know what your face looks like. They are now tracking you, and you've at this point never joined FB. One day, you decide to join FB, and you're presented an option to connect with people FB thinks/knows you know. Oh, and now that you're a user, you don't get to see the info they had been collecting on you before you signed up, either.
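To make the mechanism concrete, here's a minimal sketch of the contact-matching flow described above. It's purely illustrative; every name, field, and structure in it is my own assumption and has nothing to do with Facebook's actual systems:

```python
# Purely illustrative sketch of the contact-matching flow described above.
# All names, fields, and structures are hypothetical, not Facebook's code.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Record:
    email: str
    phone: Optional[str] = None
    is_shadow: bool = True                  # True until this person signs up themselves
    known_contacts: set = field(default_factory=set)

records = {}                                # keyed by email, for simplicity

def ingest_contacts(uploader_email, contacts):
    """For each uploaded contact: update an existing record if one
    matches, otherwise start a new shadow record."""
    for c in contacts:
        rec = records.get(c["email"])
        if rec is None:
            rec = Record(email=c["email"], phone=c.get("phone"))
            records[c["email"]] = rec
        elif rec.phone is None:
            rec.phone = c.get("phone")      # enrich what is already known
        rec.known_contacts.add(uploader_email)

# A friend uploads their address book; a record about you now exists
# even though you never created an account.
ingest_contacts("friend@example.com",
                [{"email": "you@example.com", "phone": "+1-555-0100"}])
```

The point of the sketch is just that it's the uploader's consent, not yours, that puts your details into the system.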
That’s not tracking web activity though. Also, why is Google given a pass here? They use info gathered from my emails all over the place, and that surely has more sensitive data than my Facebook account. What about Amazon, who has been reported to sell your purchasing data to advertisers without your explicit consent?
Granted, they aren't tracking your browser history yet in the manner I described. However, have you ever been to FB from a link? If so, you now have an FB cookie. Ever been to a website that has the FB like button? Same thing. It's kind of like an STD: you don't know you have it, but it will follow you everywhere. You can find out you have it and try to take the appropriate actions; it'll just keep popping back up later in life. Now they can track you anonymously. Whether or not they know it is you and link it to the shadow account they have, they still have data from a real person they can monetize. All without you having an account.
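A toy illustration of the mechanism (purely hypothetical code running on your own machine, not FB's actual widget): any page that embeds a "like button" served from the tracker's domain makes the browser call that domain, which sets a long-lived cookie and logs the Referer of the embedding page.

```python
# Toy third-party tracking endpoint (illustrative only). Every site that embeds
# the widget causes the visitor's browser to hit this handler, carrying the
# same cookie, so one identifier follows the visitor across unrelated sites.
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie
import uuid

visits = {}  # tracking_id -> list of pages that embedded the widget

class LikeButtonHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        tid = cookies["tid"].value if "tid" in cookies else str(uuid.uuid4())
        # The Referer header reveals which third-party page embedded the button.
        visits.setdefault(tid, []).append(self.headers.get("Referer", "unknown"))

        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Long-lived cookie so the same browser is recognised on every embedding site.
        self.send_header("Set-Cookie", f"tid={tid}; Path=/; Max-Age=31536000")
        self.end_headers()
        self.wfile.write(b"<button>Like</button>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), LikeButtonHandler).serve_forever()
```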
I'm definitely not giving Google a pass. I just didn't mention them ;-) Google Analytics, Fonts, whatever, are just as bad to me. I, as an unsuspecting web user, have my browser tracked by web developers using some free tools, and as a viewer I have no idea that it is occurring. If a website puts in FB's like buttons, it is visible to me, and being in the know, I understand the repercussions of that site's decision. GA, Fonts, etc. are completely hidden from view. This is why I've used NoScript/Ghostery/etc. throughout the years. It started with ads, but now I'm more concerned about these types of scripts.
I am one of those few people. They're inconveniencing the few people who refuse to use their service and generally making the world shittier. If something worse than Facebook comes to mind, it's probably the Vogons or bedbugs. But what can one expect from a company that uses the f word as their logo?
"The Universe is a morally hazardous place" -Ribbonfarm
The issue is that there is not a 'universal' set of morals for tech people, especially as tech has started to become a more accessible profession. I'll agree, personally, that the ethics of working with FB may have issues, large ones at that (Rohingya comes to mind). But, and correct me if I am wrong, are you advocating for a medical board / the bar / ASME / professional engineer type of organization? Generally, those organizational types also have ethical issues, but they tend to be a lot more nuanced and not as glaring.
> start dealing with those who continue to take a paycheck from Facebook even in the face of constant and horrific evidence of wholesale ethical violations and negligence
Do you also suggest that someone should no longer be friends with anyone who uses the Facebook platform and is 'the product'? What about that? That is support as well, right?
This is a very typical Silicon Valley mindset, where high-paying software engineering positions are abundant and there's no shortage of work.
If you look outside the Bay Area, I'm sure you will find people who are desperate enough that they would sacrifice any sense of morals they have for a $100k job.
OT but FYI the NRA reacts the same way as you just did with gun ownership and school shootings. If you agree with that it's cool (/s), but just saying...
What if I am totally fine working on that stuff if it means more money? Hell, I would be totally content working on weapons for the military if it means more money and interesting problems. I am sure many others would too.
Sheesh, why stop there. Let's shame American taxpayers because the government's done some pretty nasty stuff over the years. Just pack up and move to one of those other countries that are all sunshine and puppy dogs.
What, that's too much? Moving jobs is sooooo much easier, what with changing your healthcare, benefits, probably taking a pay cut, losing all the friends and acquaintances you've made. No problem, man.
And we all know Facebook is the only Pure Evil company in the valley. You could get a job at Google and work on censoring search results for the Chinese government. I hear MSFT is in good graces these days with hacker types, unless you don't like the idea of supporting the Military Industrial Complex, and certainly they would never become as anti-competitive as they were in the 90s if they found themselves in a monopolistic position again. Or you could always work for any number of startups that sell hype, bullshit, and vaporware to get VCs to part with their money.
Really if you don't quit your Facebook job and #delete your account you're really no better than Mark Hitlerberg at this point. And you can't hide, we'll find you, and "deal" with you (I hear Twitter mobs are good for shaming these days).
Full disclosure: The company I work for does some small contracting work for Facebook, so I guess I'm on the list.
How is this not being flagged by a mod? This is just a slanderous attack on people who work at Facebook. HN is supposed to be a place for intellectual conversation, and instead we have someone telling people that if they don't quit their job they are malignant scum of the earth. Of course it's at the top too. If you think Facebook is a cesspool, HN is no better.
> Since we’ve only just started our investigation, we have yet to determine whether these accounts were misused or any information accessed. We also don’t know who’s behind these attacks or where they’re based. We’re working hard to better understand these details — and we will update this post when we have more information, or if the facts change. In addition, if we find more affected accounts, we will immediately reset their access tokens.
From the press release[0] posted elsewhere in this thread
Over 50M accounts are compromised and we're going to split hairs on the proper way to divide up a week? The optimal number of days to wait before alerting your 50 million users that their accounts have been compromised is zero. Think about how many businesses use FB and the thousands of third-party sites that use Facebook's API to authenticate users. I don't feel Facebook should get to be the sole arbiter in deciding the severity of the incident when it affects so many and has so much potential to financially impact other businesses. They should have immediately sent out an alert when they discovered it.
1) I was logged out randomly of Messenger on my iPhone for the first time ever, I think a week ago. Messenger completely reset for no reason, and I had to put in my cellphone number again. I don't know if this has to do with the breach. Also, I received several Snapchat 2-factor codes on my cellphone without requesting them.
2) If the message history has leaked, it will have absolutely unimaginable consequences for many people, myself included. Nothing incriminating, but enough to make me paranoid and to make loads of people ashamed, lose their partners, lose their friends, lose their employment, and have their reputation tainted forever. It's absolutely insane if it shows up in an indexed fashion somewhere. Black Mirror is becoming real.
3) For the first time I am actually thinking about seriously migrating away from "all corporate services" - and only searching on DuckDuckGo over Tor through a VPN if I have to search for something personal.
4) I will probably forget all about migrating to privacy oriented services in a few days.
5) Living these days can make me slightly paranoid, a new feeling I guess, one that has not been experienced by most humans before: the feeling of being watched constantly, of being potentially revealed in some vague fashion. A strange and unhealthy feeling for sure, no matter how banal your life probably is to the rest of the world. Now everyone knows how it is to live in East Germany under the Stasi, or in Iraq or Syria before the breakdowns - just potentially worse if these leaks get real.
I think "privacy" is a basic human necessity right at the bottom of the Maslow pyramid even though a common trope in pop culture is that privacy is a new phenomenon.
I think that is utterly wrong; a tribe back in the day was like an organism, an extension of the self, that also required privacy from other tribes.
Regardless of who requires it, right from the earliest times, animals and humans have been hiding themselves both from predators and from their prey. Later the mammalian brain extended this need for privacy as a basic necessity into interactions in basic civilisation, from politics, to civic life, in family life and in war. It's all a game of showing and telling.
This vague fuzzy paranoid panopticon feeling is devastating.
6) In a few years neural nets will trawl the net and run stylometric analysis - everyone will be ranked and have complete psychological profiles created, all their whereabouts will be mapped completely by inference, you will know everything everyone has done for the last 20 years in complete detail including their desires and emotions. Lol
I know for sure that you can view all of the pictures sent in a conversation if you have never "ended" the conversation. All I can think about is all of my partners who have sent x rated photos using messenger. It could very well be the next "fappening" and tied to real identities.
Are they under-estimating that 90 million people (out of 2 billion accounts) have to log back in?
I had to log back in and so did 6 out of the 8 people I've asked so far. Purely anecdotal, but it just seems unusual that, if only 5% of accounts are affected, so many of the people I talk to would be potentially affected.
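As a back-of-the-envelope sketch (assuming the reported ~90M-of-2B figure and, unrealistically, that logouts hit accounts independently at random), the odds of 6 of 8 acquaintances being logged out would indeed be tiny:

```python
# Back-of-the-envelope check; the independence assumption is illustrative only,
# since acquaintances are not a random sample of all accounts.
from math import comb

p = 90e6 / 2e9          # ~4.5% of accounts reportedly logged out
n, threshold = 8, 6     # 6 of the 8 people asked had to log back in

prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(threshold, n + 1))
print(f"P(at least {threshold} of {n} logged out) = {prob:.1e}")  # roughly 2e-07
```

The tiny number mostly says the sample isn't random: people who know each other plausibly get swept up (or reset) together.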
>"“This is another sobering indicator that Congress needs to step up and take action to protect the privacy and security of social media users,” Senator Mark Warner, a Democrat from Virginia and one of Facebook’s most vocal critics in Congress, said in a statement."
What an ass. It's simply amazing that he makes a statement like that when Congress hasn't bothered to "step up" to Equifax, Experian and TransUnion yet. Not once. Maybe look into protecting the privacy and security of people, period; FB and social media are but one component of that.
I was logged out of my account, but reviewing logs of devices that are using/have used my account in the past I don't see anything suspicious. What is the likelihood that the account was breached and accessed without triggering any of those systems? Is it plausible to speculate that it was possible for accounts to have been accessed without users receiving emails from Facebook?
It's not 50 million users. It's all of them. The numbers will go up and up and up in small increments until they basically just admit that it's everyone. They break the news in the middle of a national political event, minimize the numbers, and that's the worst of it. The coming ripples that up the numbers will be mostly ignored.
They logged me out of all my accounts and now I can't log back in because of their ridiculous 2FA setup. I never provided them my phone number, so they need me to use their code generator via an active login...except of course I no longer have any active logins. This is the most ridiculous possible edge case for a $X00B firm. #SoftwareIsHard
What are some secure Facebook Messenger alternatives to use? The only reason I currently have Facebook is because of the ease behind Messenger but I do not want to continue using their services. I also need to be able to convince my friends to try a different platform since I want to be able to continue to communicate with them. Any advice?
There are some well-known alternatives: Signal, Telegram, any XMPP server (which you can host)... As for convincing your friends to switch, there's no one-size-fits-all method.
A good argument for switching to Signal is that you can also use it to send SMS messages, which means you don't have to switch between apps as much, or at least lets you cut down on the number of IM apps you have installed.
It sounds like users might be more susceptible if they recently had a birthday or know someone that did. Of course, birthdays aren't the only reason the uploader is active.
"But for certain types of posts on users' timelines, such as prompts to post happy birthday greetings, the video uploader function was shown as active."
My session was reset. I don't remember using the View As feature in probably at least a year or two (I haven't actually used the FB web app or mobile app over the last year or two). But my company does buy political ads and I have gone through the affidavit/ID verification.
I wonder if FB reset all political buyer accounts too just to be safe?
I was still logged in. I just went and did the "log out all my active sessions" thing just for good measure, even though I didn't see anything unusual there.
They weren’t really grilled at the hearings at all. I heard maybe two “difficult” questions from Senators, and both ended up getting the response “we’ll follow up later”.
I have some friends who were new grads and put on the security team. My first thought was, why put new grads on the security team? Do they really have the experience to protect this sort of data?
So "View As" a tool intended to help you prevent leaking data about yourself was used by others to leak your data. What a complete shit show this company has become.
Being an application security consultant, I see this stuff a lot, unfortunately. It just takes a missing authorization check on a feature, and then you've got the keys to the kingdom.
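A minimal sketch of that bug class, with hypothetical names (this is not Facebook's code): a feature endpoint that mints a token for the wrong principal because nothing checks the authenticated viewer.

```python
# Illustrative only: the vulnerable variant issues a broadly-scoped token for
# the user being *viewed*; the safer variant only ever issues a narrowly-scoped
# token for the authenticated session user.

def issue_token(user_id, scopes):
    # Stand-in for real token signing.
    return f"token:{user_id}:{'+'.join(scopes)}"

def video_uploader_token_vulnerable(session_user, viewed_user):
    # Bug: session_user is ignored entirely; no authorization check at all.
    return issue_token(viewed_user, scopes=["full_mobile_app"])

def video_uploader_token_safer(session_user, viewed_user):
    if not session_user:
        raise PermissionError("not authenticated")
    # Token is bound to the viewer and limited to what the feature needs.
    return issue_token(session_user, scopes=["upload_video"])
```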
Would it be premature to change my account password in response to this? Also, does anyone know if phone numbers associated with accounts are included in this breach?
Facebook is blocking users from posting stories about its security breach
Some users are reporting that they are unable to post today’s big story about a security breach affecting 50 million Facebook users. The issue appears to only affect particular stories from certain outlets, at this time one story from The Guardian and one from the Associated Press, both reputable press outlets.
...
The situation is another example of Facebook’s automated content flagging tools marking legitimate content as illegitimate, in this case calling it spam.
I posted on Facebook about how the NSA has a profile of everyone's race, sexual preference, religion, etc. It was in the context of the Australian government digital health record scheme; I basically said there's no point opting out of that if you already have Facebook. The post was gone within 20 minutes, right off my wall. Facebook is a closed platform now. At least in China everyone acknowledges the censorship; in the West we're still censored, but the fact that it happens is also censored.
> how the NSA has a profile of everyone's race, sexual preference, religion, etc. - it was in the context of the Australian government digital health record scheme,
Those articles were hitting their spam filter (not sure why reliable news sources aren't whitelisted), which prevented users from posting and deleted posts that were already made.
It's because The Guardian isn't wholly a 'reputable' news site. It still publishes stories from remote contributors who aren't staff employees or regular writers. I'm pretty sure the Associated Press is the same. They're better than, say, Rebel, but not much better.
I wouldn’t say so. It’s happened too many times for anti Facebook posts, even Google+ back in the day.
Remember, this is a company headed by someone who captured failed login passwords and used them to hack the email accounts of a journalist writing an anti-FB article. Yes, that was a decade ago, but that is a serious, criminal low.
C'mon. I'm not a big fan of Zuckerberg but "did some unethical and possibly illegal things on the university network" describes a significant portion of the readers of this site.
Negative comments were never censored on these sites. It's not that they are so ethical, but that it would be just dumb to assume they could get away with this. They are just not stupid.
What's going to happen when a world leader dies, or wins an improbable victory in some area? What happens if such a world leader is the public face of some aggrieved political segment? Not just "censorship!", but "a vast techno censorship conspiracy!"
Well, that's not gonna cause a bunch of conspiracy theories at all! It doesn't matter if it's automated; this has to be the worst time for something like this issue to occur, since everyone will instantly assume the worst.
I think the total number of 'affected' users is 90M.
The reason for this is that they KNOW of 50 million, but an additional 40 million were logged out "just in case".
'Facebook is clearly aware that losing its chief security officer and dissolving its dedicated security team, in the middle of all that’s going on, is not a great look. So many of the company’s statements today are clearly designed to address obvious concerns that arise.
“We expect to be judged on what we do to protect people’s security, not whether we have someone with a certain title,” a spokesperson said. In another statement, Facebook said it is “investing heavily in security to address new types of threats” and that its new security structure has “helped us do more to keep people safe.”'
For what it's worth, Facebook did not dissolve its dedicated security team. That statement is misreporting on the part of The Verge. Facebook's security team is actually expanding its presence.
There is a lot of misinformation about the Stamos debacle on all sides.
With regards to the implication: this isn't a throwaway account, it's just ironically named. Take a look at my comment history; I'm not affiliated with Facebook in any official capacity, but I do know security engineers who work there and I've been to Facebook offices.
A true statement is that Facebook's security teams have been shifted around in several reorgs. A false statement is that Facebook has dissolved its security teams. The latter is a mischaracterization of the former, because while some security staff have left Facebook for a variety of reasons, the company is not deliberately reducing its security staff nor encouraging their departure. It still employs a huge number of engineers specializing in every major domain of information security.
If you'd like evidence that Facebook is expanding its security presence, you can take a look at its careers portal. It's aggressively hiring security staff in satellite offices that previously weren't focus areas for security engineering.
In my opinion, Alex Stamos' company memo gives a clearer picture of what's happened in Facebook's security org recently.[1] You should read that in addition to media reports.
It is essential that tech companies, especially ones that provide critical infrastructure, place technical excellence above other priorities. Denigrating meritocracy is like pollution: the impact may not be immediate, and in the short term, it may look like you can have your cake and eat it too, but the universe is not caring and not kind, and if you forget about the need for excellence in the continual struggle against entropy, nature will eventually get around to teaching you a harsh and remedial lesson.
It's not just the quote, the whole appendix ought to be taught to every engineer (and manager, and investor), and deserves to be hanging on the wall in many places.
You're absolutely, completely, 100% correct. Facebook holds an immense trove of private information that in the wrong hands could be leveraged to inflict unimaginable pain and suffering.
With that said, is it perhaps possible that some people might view this as subtly distinct from power plants, hospitals, roads, and ISPs? Those are what are generally considered "critical infrastructure".
If you also add the ability to micro-target voters at scale, using everything Facebook knows about them, with secret ads and niche content that only those voters will see and that no one else knows needs debunking, and thus to change the government, then it is very much like the power plants.
I understand the point that you don't need facebook the way you need the ability to feed the people in the cities (and thus need roads and power plants). If facebook disappears, life will go on. But as long as it exists, control of it is critical like control over power plants.
In the sense that it allows for power, you're completely correct!
In the sense that it's an immediate need for the continued basic functioning of the state, it's possible that there may be some distinctions that could be drawn. Some might opine that these are the distinctions that matter for the designation of what is and isn't critical infrastructure.
The "surface" of Facebook may not be, but the parts of it that keep "personal information" certainly are, due to the scope of what can happen if it leaks.
Edit: people take my comment to mean it won't be a big deal. It will be. However, not on the same scale of taking out the power grid, or the water system, which would lead to hundreds or thousands of deaths. Facebook is not critical infrastructure.
If it leaks, there is a direct impact on users' monetary expenses.
One of the examples:
FB may know their users' lifestyles - eating habits, drinking habits, etc. (based on the content/media users upload). If this info is leaked to insurance companies, it'll have a direct impact on the premiums you pay.
Facebook is positioning itself to be a -- the -- private source of a "social score", somewhat akin to what the Chinese government is doing.
As that comes into place and use, how many companies are going to be basing their pricing -- their entire product offers, in light of the availability of this information, this "score" (and all the categorization behind it) -- upon it?
Bingo. Critical infrastructure. (Like it or not, for some of us.)
> According to some in the US government, Facebook can change the result of an election, so I guess that would qualify
Essential infrastructure describes "assets that are essential for the functioning of a society and economy" [1]. Not things that can cause a lot of damage. Bombers aren't essential infrastructure. Facebook is non-essential.
According to your linked article, it could be considered 'Critical.' Not sure how it doesn't fit under the 'telecommunications' umbrella. Subjectively I don't like facebook nor people's dependence on it to label it 'critical', but objectively I'm not sure the linked article supports those subjective inclinations. At the very least, it's certainly debatable that facebook could be considered Telecommunications infrastructure.
But it's a self-fulfilling prophecy. It's only "critical" because it exists. If we shut down every Facebook server tomorrow and set fire to their data centers, it would no longer exist, and therefore have no influence on much of anything.
I'm not so sure I follow this argument, one could say the Earth itself is only critical infrastructure because it "exists". So therefore if we destroy the Earth, it wasn't actually "critical" infrastructure, even though any associated infrastructure on the Earth went along with it. Maybe the distinction needs a little more fleshing out.
The information contained within Facebook is the payload. Facebook itself is the structure that holds and protects (or lack thereof in this case) that payload.
Nuclear missiles themselves aren't critical infrastructure, but you better bet the launch systems, and specifically the security of those systems, are utterly critical to society's continued functioning as we know it.
Bombers don't cause damage if they are neglected and unmaintained. A better analogy might be explosive material or radioactive material like involved in the Goiânia accident. There are consequences to the public when these are neglected. I don't know if those semantically qualify as critical infra, but its security is important for our security.
I read that article about how the WhatsApp founder got screwed over by Facebook. Facebook is a sad company. I'd much rather pay for a quality product than be a pawn in this data collection crap.
Well, thank goodness I left Facebook a year ago. Never regretted it! Now I can interact with people in person in peace vs. heated online debates about DJT that make me want to avoid them in person.
I'm no fan of how bad PHP code is often written but between the choice of laying blame on PHP or Facebook's "Move fast and break shit" philosophy, I'm choosing the latter.
If you watched the Senate hearing on privacy two days ago you'd have seen that they were remarkably on the same page about potential privacy legislation [1]. Facebook's continued fuck ups will only help the cause, and for that I'm grateful.
Of course they're on the same page. They can afford the best lawyers and as much infrastructure as they need to fulfill the requirements, while every new competitor gets sued from all angles. I'd be very surprised if any of that regulation actually serves the user in any positive way.
> The company is in the beginning stages of its investigation.
This is code for "this is much worse than we are telling you now, we just can't reveal it all at once".
I dislike Facebook as much as the next person.. but I have to say, Facebook Ads are a goldmine if you know what you're doing. It's not going to be that way forever.
Because it was not a network breach (i.e. getting a dump from some DB), but abuse of a feature available in the web app (so they had good logs to see who was affected).
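Speculatively, that kind of log-driven triage might look something like this sketch (log format, endpoint path, and field names are all invented for illustration; nothing here reflects Facebook's tooling):

```python
# Hypothetical sketch: find accounts affected by abuse of a web feature by
# replaying structured access logs.
import json

ABUSED_ENDPOINT = "/profile/view_as/video_uploader_token"   # illustrative path

def affected_accounts(log_lines):
    """Collect user IDs that had tokens issued via the abused endpoint."""
    affected = set()
    for line in log_lines:
        event = json.loads(line)
        if event.get("path") == ABUSED_ENDPOINT and event.get("token_issued_for"):
            affected.add(event["token_issued_for"])
    return affected

# Usage (assuming one JSON object per log line):
# with open("webapp_access.log") as f:
#     print(len(affected_accounts(f)), "accounts need a token reset")
```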
On the one hand, every time another scandal or breach is revealed about centralized networks, I want to post this and have everyone recognize the root problem and the solution:
On the other hand, I feel like I’m shamelessly promoting/shilling my own company.
How to do it in a classy way? I really believe that there is a problem people are not recognizing enough to do something about it (Diaspora and Mastodon and Solid are exceptions).
And I spent the last 7 years and $700K of our company's profits solving it. So it's now solved. If Mastodon is “a decentralized Twitter creation kit” where you own your own data, then Qbix is a “decentralized Facebook creation kit” where you can assemble social apps from a growing marketplace of reusable components, some of which don't exist anywhere else. Here, for example, is a Group Rides plugin that basically makes a social Uber, and ANYONE can have it on their OWN social network:
OK, but we are perfectionists and are spending months polishing “the other 90%” so it’s not a flop when we release it to the public to create their own facebooks. We need really clean onboarding and measure engagement metrics and fix bugs etc. It took 7 years thus far.
For example this was last year, we are way more advanced now:
So, advice would be appreciated from people who have successfully done before. Maybe contact me (qbix.com/about has my email link). How do we get the story out there that Qbix is being built to FIX the underlying root problem of decentralizing social networking, so people’s data isn’t in one place?
Please if you have some knowledge about this, take a look at the above videos and let us know what advice you have to get stories actually published.
PS: one more thing, we managed to get tons of inadvertent press back in March, including BBC and Newsweek, which you will find if you search for “calendar mining” or “qbix calendar”. BUT when I reached back out to those journalists to cover an actual story of what Qbix is actually doing, none of them replied. Many of them just want to break the sensational controversy, because that brings notoriety. How do you make them write about SOLUTIONS to problems?
Seeing as you asked... your website's terrible UX doesn't help. I just took a look and moved on within 30 seconds - a dated look and feel (that also feels targeted at primary school teachers), walls of text, broken links, no clear route back to the homepage to reorient yourself (can't click the logo), no clear waypoints to follow...
You only get one chance to make a first impression. I'm afraid the first impression I got means I'm unlikely to return. I suspect the same applies to all those who took a look in March.
3D strong blue and green spinning globe. 3D logo, hard (high contrast?) RGB values. Child's drawing header looks like a hospital. Mission statement not at all aligned with what your comment says. Empowering people page, weird lego photoshop. More strong RGB icons.
>We build apps for all kinds of communities.
I assume this is an app building consultancy from this statement. Nothing about decentralization. Way too much text on the page (for people like me who cbf to read). Are you trying to get people to download your group / calendar app or build apps on qbix? Choose a goal and optimize your copy for it.
Mastadon - Single page layout. Clear missions statement (Social networking, back in your hands). Flat (material?) icons. Single, fixed width column of text. Lower contrast color scheme.
What do you mean it isn’t responsive? It loads on mobile phones with a completely mobile optimized look. Have you tried it? Unless you mean you loaded it on a desktop and tried to resize the browser window to quickly check what would happen on a phone - no normal user does that
You've picked one point of the criticism and are trying to defend it. You need to drop your view of your own work when receiving constructive criticism. I agree with the other comments. Your website is full of 3D effects and low quality pictures that imply it's a product for children, and most importantly it doesn't seem to mention your product outside the blog. A blog is a secondary strand of thought from the main page; you should only have to read it if you're already interested and want to learn details. Also, in the blog you give a long list of features that are very difficult for non-technical people to consume. There's no value statement. It also reads like a side project and not a professional venture. If you spent $700k on this, you should really pay a professional to write some marketing material.
I spent 10 minutes on your site, still have no idea what it's about. If you say it's the "decentralized facebook", I don't see anything about a feed/updates, no way to post/broadcast anything. The only thing I see are contact grouping and shared calendar?
I wish you all the best but it boggles my mind as to why engineer types tend to think decentralized versions of exceptionally popular consumer platforms will ever take off.
I've come to think that these so-called leaks are planned. Why I think so is simple. When you, as Facebook, sell user data to other companies/third parties/countries, it is a crime, or at least subject to investigation, once it becomes known. But these so-called leaks are met with "we are sorry, we will fix it, but we are so sorry about the data". And that is it. No one is responsible.
Now you have 50 million people's data for sale. Are you in that 50 million? You don't know or you will never know.
Connected incidents -
British Airways Data Leak, Equifax, Uber Data Theft Cover Up, Air Canada, T-Mobile, Dixons Carphone......how many such.
All of them will soon be available for sale, with no one to blame.
I don't think there has been much stopping these companies from selling the data up to this point. That's been the issue with Facebook and others: they have happily sold people's data with little legal protection for the people whose data they sell. There is no crime in just selling the data within the US, so your theory doesn't hold up.
Never attribute to conspiracy that which is adequately explained by incompetence.
I don't have any evidence. But I just like to think it that way. Also it could be because I finished reading most of the books mentioned here
https://news.ycombinator.com/item?id=17749283
BTW, there are people who need data, not just people within the US.
https://www.nytimes.com/2018/09/26/world/asia/trump-china-el...
"Mr. Trump did not suggest that China’s behavior was on the scale of Russia’s sophisticated campaign of manipulating social media and the release of hacked emails during the 2016 presidential election."
https://www.rappler.com/technology/news/211276-facebook-twit...
Sen. Richard Burr, R-N.C., the chairman of the Senate Intelligence Committee, opened the hearing by citing the promise of social media before adding, "But we've also learned about how vulnerable social media is to corruption and misuse. The very worst examples of this are absolutely chilling and a threat to our democracy."
Already, Russia and Iran have sought to interfere by passing themselves off as American groups or people to shape the views of American voters, say lawmakers and technology executives. Facebook, Google and Twitter together took down hundreds of accounts tied to the two countries last month, a move that prompted Burr to open the hearing Wednesday by expressing fear that "more foreign countries are now trying to use your products to shape and manipulate American political sentiment as an instrument of statecraft."
Cambridge Analytica - Worked with some Indian Political Parties or the Government itself too.
Lawmakers aren't limited in the questions they can pose Facebook and Twitter. Sandberg's boss, Facebook CEO Mark Zuckerberg, faced questions in April hearings that extended far beyond the reason the hearing was called: Facebook's entanglement with Cambridge Analytica, a political consultancy that improperly accessed 87 million users' personal information. Sandberg could also face questions on Cambridge Analytica.
I had the same feeling when Google accidentally gained access to medical records from the NHS. Everyone involved acknowledged they need to do better, yet no one ever suggested to delete the illegally obtained data.
[1] https://fbnewsroomus.files.wordpress.com/2018/09/9-28-press-...