I think that's fair. All of us who have written and deployed software know that a spike in the onboarding/new-user rate like this would be a punch in the face that would knock any software team on its ass. And it would take anyone a few days to get back up.
The important part is the leadership's reaction to the situation. Compare it to something like Boeing. Zoom acknowledges the facts, takes responsibility, and starts fixing things. Boeing's reaction to its product killing hundreds of people was “Lol, user error. RTFM.” That is (apparently) what acceptable leadership can look like.
Any sw product has issues. The question is what the company does about it
Err, no. It would be understandable if their servers buckled under the load or something. Zoom's blatant disregard for their users' security and privacy is unacceptable regardless of whether they have 5 or 5 million users.
> Any sw product has issues. The question is what the company does about it
I let out an audible “wow” upon reading this. This is absolutely bone-headed and I have no idea how they thought automatically grouping members by their email domain name was a good idea.
You gotta figure, as soon as you start writing a blacklist of “common” domains like gmail.com, hotmail.com, etc., your immediate thought should probably be “wait, maybe we’re doing this wrong.”
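For anyone who hasn't pictured it, here's a minimal sketch of what that heuristic presumably looks like (Python; the blocklist is illustrative, not Zoom's actual list) and why it breaks down the moment a shared domain isn't on the list:

    # Minimal sketch of the domain-grouping heuristic; the blocklist is illustrative.
    CONSUMER_DOMAINS = {"gmail.com", "hotmail.com", "outlook.com", "yahoo.com"}

    def company_directory_key(email):
        """Return a 'company' grouping key for an email, or None for consumer domains."""
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in CONSUMER_DOMAINS:
            return None   # don't lump consumer addresses into a shared directory
        return domain     # everyone else at this domain ends up in one directory

    # The failure mode: any shared domain nobody thought to blacklist
    # (a small ISP, a school, a shared mail host) becomes one big "company".
    print(company_directory_key("alice@example-isp.net"))  # example-isp.net
    print(company_directory_key("bob@example-isp.net"))    # same key -> alice and bob see each other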
You’re right, but I absolutely see why they are doing this. When I saw all my colleagues in the company list, I immediately figured they only have the email domain to go on, and I found it extremely useful to see whom I can contact without having to explain how Zoom works. Privacy isn’t our most important concern right now, it’s keeping the world running, and this “feature” helped me/us (if only just a little bit) communicate more effectively.
Why wouldn’t this be an opt-in feature per organization? I’m Acme Co, I buy a Zoom subscription for acme.com, I click a box saying “let everyone with an acme.com email address see each other”. Done. Yes, I would have to prove I own acme.com, but we have solutions for that (I didn’t set out to make this joke, but: the ACME protocol, for one).
Why is it that it’s on by default for arbitrary domains (excepting the ones some poor soul has to blacklist)?
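A rough sketch of what that opt-in could look like, in the spirit of an ACME DNS-01 challenge. Purely illustrative, not Zoom's API; the _zoom-verify record name and the dnspython usage are my assumptions:

    # Opt-in sketch: prove control of acme.com before any directory grouping is enabled.
    # Illustrative only (not Zoom's API); uses the third-party dnspython package.
    import secrets
    import dns.resolver

    def issue_challenge():
        """Token the org admin must publish as a TXT record on _zoom-verify.<domain>."""
        return secrets.token_urlsafe(32)

    def domain_is_verified(domain, expected_token):
        try:
            answers = dns.resolver.resolve(f"_zoom-verify.{domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        published = {b"".join(r.strings).decode() for r in answers}
        return expected_token in published

    # Only after domain_is_verified("acme.com", token) returns True would
    # "everyone @acme.com can see each other" be switched on, for that org only.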
I don't mean to be overly snarky, but removing authentication from all computers and servers would also help everyone (if only just a little bit) be more effective. It's still a bad idea, crisis or not.
This problem also seems to stem from the fact that Zoom has been used primarily in corporate settings until now, which kinda validates their claim. Definitely not ideal, but understandable.
Companies that are concerned about this can set up SSO authentication with Zoom, I believe. So once a user is removed from the company’s directory server, they wouldn’t have access to the Zoom address list either.
I'm not dismissing the overall security point, but this seems like a pretty weak attack vector. If your company is routinely not deactivating accounts associated with your domain as part of your offboarding, being able to see e-mails and pictures of your employees is not your biggest problem.
Point taken, but none of the issues raised over the last few days had anything to do with scaling problems. The humblebrag of "we were just a little company and then we got hugged to death" doesn't sit right when a lot of the issues fall into the same category: prioritising ease of use and onboarding over security.
As for "Thousands of enterprises around the world have done exhaustive security reviews of our user, network, and data center layers and confidently selected Zoom for complete deployment."... well it can't have been that exhaustive if a couple of weeks in the sunlight have generated a shopping list full of concerns.
Kudos for half-playing by the 3F rule, though - probably their smartest move yet
> These new, mostly consumer use cases have helped us uncover unforeseen issues with our platform. Dedicated journalists and security researchers have also helped to identify pre-existing ones.
I don't think he is saying that these issues have to do with scaling problems, but rather that the increased usage + new types of usages led to increased scrutiny and uncovered new issues. Which is correct in a way.
Obviously, they were told about several issues in the past too, but back then those issues were not costing them money. Now they are, so they are trying to fix them.
A year ago they architected their system to not uninstall when you asked it to uninstall; instead it left behind a running daemon that would reinstall the app as soon as it saw a relevant URL.
This isn't a scaling, usage, or whoops issue. This was intentional.
Maybe. I've gotten enough uninstaller logic wrong in my day to believe it's possible that failing to switch off all your daemons and delete them is just sloppy software engineering on a piece of the system that isn't considered critical path by product managers.
After all, users never want to uninstall our software, right? That implies they don't love our product. And of course they love our product. ;)
It's not that uninstalling isn't an important feature. It's just that at crunch time, project managers will pull people off polishing the uninstaller to put them on that virtual green-screen feature 10 out of 10 times.
This was clearly not an accident, but a dark pattern.
There are way too many complex dark patterns which have been exposed to excuse them as oopsies. This is a company where product managers overruled developers into creating security-breaking implementations for the sake of "usability".
You just described exactly what I described, only attaching a malicious connotation to it.
It doesn't have to be malicious; the fact is that the market simply favors usability. Optimizing for the things users care about over the things they don't is the first PM guideline. This has been demonstrated over and over and over again; have users first, then worry about security and privacy.
What you originally described (or proposed) was that it may be a simple case of accidentally overlooking a bit of tidying up during uninstall.
What I described - the problem that came to light in March and then again in June last year - is that Zoom installed a web server on your Mac whose sole purpose was to silently re-install Zoom if you a) uninstalled Zoom, and b) later clicked on a Zoom link.
There is nothing about it that could be attributed to 'getting uninstaller logic wrong'.
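For anyone who wants to check their own machine, here's a quick sketch, assuming the localhost port (19421) named in the 2019 disclosure - treat the port as an assumption if your build differs:

    # Quick check for the leftover local web server; port 19421 is the one named
    # in the 2019 disclosure -- treat it as an assumption for other builds.
    import socket

    def local_port_open(port, host="127.0.0.1", timeout=0.5):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0

    if local_port_open(19421):
        print("Something is listening on 127.0.0.1:19421 -- worth a closer look (lsof -i :19421).")
    else:
        print("No listener on port 19421.")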
This is not correct. I have some sympathy for them; maybe this was what was needed to grow with a dev center in China, and they might have been pressured by government authorities.
The statement admits that they fell short of their privacy and security goals but goes on to explain how it's not their fault. It makes it look like either the issues are non-issues, or they're someone else's issues, or "we'll do these generic things that don't address in any way how those issues came to be". How those issues came to be is a big thing to leave unaddressed if you care about transparency and earning back trust.
Some of the biggest issues came about through deception, and this message does not address that point. They were intentional decisions, with effort put into obscuring them. One of the most egregious was the creative use of the "end-to-end encrypted" moniker. That was deliberately deceptive, and I don't see this cookie-cutter response addressing any of it.
More engineering resources and engineering fixes don't fix deception; that starts at the top. And this puts the whole message into question.
>Any sw product has issues. The question is what the company does about it
We are all software devs -- we know as well as he does. He chose to prioritise growth over end-user data privacy protection and then lied about it in marketing, e.g. E2E advertised on the front page.
Many of these Zoom privacy/security issues were being complained about on HN well before corona.
If Zoom users are data breached, I personally won't feel sorry for them the way I do for other breaches like Equifax, for example. They've signed up to this to secure a bit of convenience. I will be personally discouraging its use where I work.
“If Zoom users are data breached, I personally won't feel sorry for them the way I do for other breaches like Equifax, for example. They've signed up to this to secure a bit of convenience.”
I think that’s a bit unfair of a stance to take. As an example, I know someone who doesn’t want to use Zoom, but thanks to their university classes going online-only due to COVID-19, some of their professors have forced them to use Zoom for lectures, presentations, and examinations.
That. I am personally in that boat, and in my class I had no choice but to use Zoom. The problem with the "you don't want it, don't use it" mantra is that it ignores cases like mine. In law, those tend to be characterized as contracts of adhesion.
Same. $WORK moved from Whereby to Zoom (to handle more people in the weekly video meeting) which means I have to use Zoom - but I'm only using the iOS version and without signing up for an account.
Well, I can discourage its usage as much as I want, but if my university decides to use it, my chances to change that are very close to zero. And if the security issues affect me or my students, I will feel sorry.
"While we never intended to deceive any of our customers, we recognize that there is a discrepancy between the commonly accepted definition of end-to-end encryption and how we were using it."
Because "thousands of enterprises around the world have done exhaustive security reviews of our user, network, and data center layers and confidently selected Zoom for complete deployment" and they didnt "design the product" for these "new, mostly consumer use cases", it means that up until now they couldnt have forseen that lying about e2e encryption to sell enterprise subscriptions was an issue.
> enterprises around the world have done exhaustive security reviews
I'm pretty sure they are referring to security reviews for things like SOC2 and PCI. Which aren't exhaustive and generally consist of throwing a scanner on the network and running some sort of OWASP Top 10 vulnerability tester against the product. I have uncovered major flaws in products I have written that these "exhaustive reviews" missed, like user enumeration by changing something in a POST request.
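To make that concrete, here's a contrived sketch of the enumeration class I mean - hypothetical endpoint, not from any real product; a checkbox-level scan rarely flags this:

    # Contrived user-enumeration example (hypothetical endpoint, not any real product).
    KNOWN_USERS = {"alice@example.com", "bob@example.com"}

    def password_reset_leaky(email):
        # BAD: the wording reveals whether an account exists.
        if email in KNOWN_USERS:
            return "Reset link sent."
        return "No account with that address."

    def password_reset_safe(email):
        # BETTER: identical response either way; queue the actual email out of band.
        if email in KNOWN_USERS:
            pass  # enqueue the reset email here
        return "If that address has an account, a reset link has been sent."

    # POSTing a list of guesses at the leaky version tells an attacker exactly
    # which addresses are registered; the safe version tells them nothing.
    print(password_reset_leaky("mallory@example.com"))
    print(password_reset_safe("mallory@example.com"))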
It's very likely that a bunch of companies' RFP processes are just feature checklists, and to get the "encrypted" box checked they needed that lie, or their product was out of the running.
RFP by "who can tailor their marketing to check all the boxes" is a terrible process and leads to this marketing bloat. RFP would be much more useful if it stuck to "list only things you do your competitors doesnt; what processes come with your product that are much more efficient or innovative compared to your competition; like an sec disclosure what are three true non fluff risks to selecting your product; describe your revenue, user growth, and future ownership expectations." If a company cant answer those seriously, push them until they can, or tell them youll move on.
SOC2 and PCI are a lot more than running an automated scan. Sure, that's part of it, but both are full-on frameworks that stretch well beyond technical controls and deeply into organizational questions.
The important thing is that they establish enough trust to create a basis for shifting liability.
That's the biggest sticking point for me as well. I don't expect my mother to know what end-to-end means but I have a hard time believing that a technology company made this encryption claim in good faith.
Their website's headers also whitelist a lot of domains, including quite a handful that are known malware distributors. See for yourself: curl -I https://zoom.us
These are blanket permissions for third-party ad and affiliate (user-tracking) scripts; so much for "targeted at organizations with IT departments" and "we do not sell your data".
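If you don't want to eyeball the raw headers, here's a small standard-library sketch that pulls out just the script-src allowlist (it also checks the report-only variant, since the policy may not be enforcing):

    # Pull just the script-src allowlist out of the response headers (stdlib only).
    import urllib.request

    req = urllib.request.Request("https://zoom.us", method="HEAD")
    with urllib.request.urlopen(req) as resp:
        headers = resp.headers

    for name in ("Content-Security-Policy", "Content-Security-Policy-Report-Only"):
        policy = headers.get(name)
        if not policy:
            continue
        for directive in policy.split(";"):
            parts = directive.split()
            if parts and parts[0] == "script-src":
                print(name, "script-src sources:")
                for source in sorted(parts[1:]):
                    print("   ", source)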
This list makes me scratch my head... when, ever, would ".google.com" be filtered the same way as ".50million.club" or even "googleads.g.doubleclick.net"?
I understand the header only causes logged reports, with no actual policy enforcement, but still... I don't have a good read on their underlying concern here.
If I understood it correctly, it tells the browser it’s OK to run scripts from all these origins. No idea why there are so many malware-associated domains here. Maybe Zoom’s CEO could enlighten us. Probably because of the virus and unexpected growth, I’m sure.
The domains are likely in the whitelist as their report-uri was getting spammed with reports from users that have adware/malware extensions in their browser.
These extensions inject their own scripts into the page which will then fail based on the CSP and send a report to the server. In an ideal world you would just 'ignore' these reports server-side instead of whitelisting the domains.
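Something like this is what I mean by ignoring them server-side - a sketch assuming the standard report-uri JSON payload; the scheme and domain lists are illustrative:

    # Drop extension/adware noise at the report endpoint instead of widening the CSP.
    import json
    from urllib.parse import urlparse

    IGNORED_SCHEMES = {"chrome-extension", "moz-extension", "safari-extension"}
    KNOWN_NOISE = {"example-adware.net", "example-affiliate.io"}  # illustrative, not a real list

    def should_keep_report(raw_body):
        report = json.loads(raw_body).get("csp-report", {})
        blocked = urlparse(report.get("blocked-uri", ""))
        if blocked.scheme in IGNORED_SCHEMES:
            return False  # injected by a browser extension, not by our pages
        if blocked.hostname in KNOWN_NOISE:
            return False  # known adware/affiliate injection -- ignore, don't whitelist
        return True       # anything else is worth a human look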
Well, think of all the man-hours they spent creating “features” like installing a backdoor on macOS that allowed them to reinstall Zoom after the user explicitly uninstalled it.
I don't know if I'd accept this. Zoom deliberately bypassed macOS security measures and ignored other basic security principles. Additionally, they ignored privacy regulations like the GDPR by sharing data with Facebook without user consent.
That's a lot of stuff to forgive within just a few weeks. I could forgive their servers buckling under the load or the trolls bombing meetings. But everything else is less a mistake than a conscious decision in the basic software architecture.
Isn't this where fines balance things out? I mean, it's 2020... GDPR isn't a new thing. It's good they have a plan to fix things, but isn't that just the usual tech-startup "We're sorry :(" narrative?
They are well funded, and have plenty of resources when compared to SMEs...
People still can choose to not use the service anymore, but that choice alone isn't enough. They should pay for it, and then users can make that decision.
> Isn't this where fines balance things out? I mean, it's 2020 ... GDPR isn't a new thing.
Exactly! That's the point I was trying to make (sorry if it didn't come across properly). It's not like they are facing completely new challenges. GDPR has been in place for years, yet they are breaking it. Guessing URLs to access "protected" files is also not unheard of.
I understand that it is a massive challenge to scale so fast, and it's good that they have plans to fix these issues, but these are mistakes that could have easily been avoided from the beginning.
The problem is that the GDPR is a joke; it's almost like they passed the law under duress but aren't actually interested in enforcing it (maybe because whoever is in charge is actually benefiting from the current situation?).
The idea for enforcement was that each country's Data Protection Agency is the key contact for any data/security issue - it doesn't matter whether it's reported by the company itself or by a consumer denouncing a breach of data protection terms.
Then the country can issue any fines, reporting to EU agencies, etc.
The problems are:
- This process isn't clear for companies, let alone consumers;
- Not all Data Protection Agencies are the same, nor do they have the same resources. Here in Portugal, when the GDPR went live, the director of the agency said publicly that it was impossible to enforce anything because they didn't have the resources to do it. He was fired.
The reality is that it's extremely hard to control so many players, and delegating it to each country, some of which are underfunded, doesn't get us anywhere.
I'm in complete agreement here. Their response is appropriate given the challenges that they face. No team ever expects the kind of migration to their platform that Zoom has received. Good for them.
If you don't read, comprehend, and remember Emergency Airworthiness Directives, you have no business being a pilot for hundreds of people. (The instructions were a whole two steps: 1. trim to normal with the electric trim switches; 2. turn off the stab trim system.) Boeing is still at fault, but the pilots do share a portion of the responsibility.
Absolutely not. They have LIED about end-to-end encryption knowing very well that their product did not support that. That is premeditation, not making a mistake.
Also, their atrocious history regarding privacy practices makes me think that they are now reacting this way only because they got caught, not out of a genuine desire to be better.