Google reveals fistful of flaws in Apple's iMessage app (bbc.co.uk)
433 points by dmmalam on July 30, 2019 | 346 comments



I'm honestly appalled at the number of comments in this thread trying to lambast Project Zero for the good work they do in improving software security. Even if Google specifically started and ran Project Zero to target competitors' products (which they didn't, and they don't; there are over 100 bugs found by P0 in Google products), it wouldn't matter, because the effect would still be that the online world is a safer place with more secure software.

Of all places, I thought Hacker News would have a community which understands the critical importance of security research and the fact that fixing software security bugs is a net benefit to everyone, every time, all the time.


When there's a cure for cancer that prevents Alzheimer's and grants immortality I'll come to HN to find out why it's shitty.

There's a trove of clever people on HN; I never look up usernames or track who posts what. It's enlightening in a way to remember that even smart people are still people.


> There's a trove of clever people on HN

I think there is a trove of people who think they are clever on HN. That most likely includes me, but I don't comment much for that very reason.

Disc: Googler.


I’m confused. Why was the Google disclaimer necessary?


It is better to post excessive disclaimers on the chance they're not necessary than to accidentally not post one where it's relevant.


This post and the parent comment they responded to are both about Google?


Most large companies have policies stating that employees have to disclose their affiliation on social media when talking about any matter concerning the company or its products. No one really follows it, but still.


That's easy: overpopulation is why it's shitty.


This echoes exactly what I was thinking.

I suppose once you hate something, it becomes impossible to acknowledge that any good can come from that thing. Which is unfortunate.

There is a plethora of things I really don't like about Google, but I make a conscious effort (and it's hard sometimes!) to acknowledge the good they do, when they do it.

P0 is one of the things in the "good" column of Google.


Hacker News has become tribal and politicized, much like many other parts of the internet.


There are multiple tribes represented in HN. I know, having been on here for years. I've met face to face with other HNers who might today disguise themselves and punch me in the street. I also continue to have productive and (at times) civil discussions with them, even in today's crazy climate.


Very true. And both Apple- and Google-related discussions bring out both sides in force. In this case we have a story that involves both, so I am not at all surprised to see it get contentious.

Personally, I've observed (via the voting mechanism and commentary) several different cohorts of people visit HN throughout the day. You get a real sense that opinions are regional and tribal. Europeans in the early morning, then east coast and flyover states, and then brace for impact when SV (in particular Googlers and/or pro-Googlers) pile on. It's fascinating.


I think the trends you are seeing are most likely due to bias and not objective analysis. Try as we might, humans are subjective beings and truth is difficult for us to discern, especially with so many variables.


Well, yes, of course I have not done a rigorous scientific analysis. But I can tell you from repeated experiences that if I say anything a bit critical of Europe or Europeans then I will get downvoted heavily in the wee hours of the day, and almost always will make that up and then some about midday (Pacific time). On the other hand, if I say something critical of Google then in the wee hours it'll usually get a bunch of upvotes, and then lose them all about midday.

No science here, absolutely, but I have had great success so far predicting in advance what the voting pattern is going to be on a comment I make. I treat it a bit like a sport.


There's a lot of potential value in influencing public opinion.


Tribal Australian here. No, it's not politics. This research is a rare occasion on which Google did something good. All other actions are evilcorp, hence the emotions. AMP, analytics on every page of the internet, Chrome auto-logins, blocking 3rd-party email servers; the list is long.

I installed an innocent piece of Google software on my Mac, Android File Transfer [0]. Immediately it tried to install several launch agents, driving BlockBlock [1] crazy. I uninstalled it right away without ever opening the app.

People tend to subconsciously forget that Google is an Ad company by 85% of revenue.

[0] https://www.android.com/filetransfer/ [1] https://objective-see.com/products/blockblock.html


It's right there on the FileTransfer page... "Open Android File Transfer. The next time that you connect your device, it opens automatically".

That's not magic, mate: something needs to monitor for your Android device being connected, otherwise you'd complain that it's not as user-friendly as other Mac apps.


You're right. But it was a minor point in my post. The main point was data siphoning from users of all Google services, software and devices as the primary business model. I might've overreacted with this app because of my lack of trust in Google.


>Even if Google specifically started and ran Project Zero to target competitors' products

Many Googlers use iOS. I am not a Google fanboy (I prefer DuckDuckGo) but I don't think it's accurate to paint this as an attack on Apple. Not everything Google does is good, but Project Zero is a good thing IMHO.


The way the article frames it also doesn't help. It feels like they're trying to make it sound more like an attack to get more views. I'd definitely like a better and more technical source than the BBC, especially on HN.


How is it appalling that people disagree with you? You say yourself that it is a "net benefit", meaning there are things to disagree with. Your opinion isn't the one being hidden en masse. I tend to find it appalling when Hacker News argues against, for example, a free press, as when publications like Valleywag and Democracy Now were banned. Arguing against arguably the most powerful software company in the world because you disagree with their practices? Not so much. That is essentially the entire raison d'être of being a hacker. That doesn't mean you have to agree with them, but it certainly makes it weird to dismiss people for it, especially in a "we should know better" way.


>How is it appalling that people disagree with you?

They didn't say they were appalled at people disagreeing with them. They said they were appalled at the lambasting. There is valid criticism raised here, but the majority of the criticism is based solely on the fact that it's P0. Not for any technical reason. Not for any security-related reason. Just because it's Google. If you think "Google is worse than China and Russia" is a worthy criticism... so be it.

>I tend to find it appalling when Hacker News argues against, for example, a free press

Huh. Can you expand on this, in the context of this thread?

>Arguing against arguably the most powerful software company in the world because you disagree with their practices?

Very little of the legitimate criticism I have seen here has been about P0's practices, and the comments that have brought up P0's practices have done so in the context of the security industry as a whole (i.e. what responsible disclosure should be).


> They didn't say they were appalled at people disagreeing with them. They said they were appalled at the lambasting

Whether you think something is legitimate criticism or not is subjective, and I don't see an argument for why it is. Anyone is certainly free to argue, or not argue, with those comments, and they have.

> Huh. Can you expand on this, in the context of this thread?

I was calling out the line "I thought Hacker News would have a community which understands the critical importance of security research". People argue against important things here all the time.

> Very little of the legitimate criticism I have seen here has been about P0's practices [...]

Again, subjective. Here is the first comment hidden by downvotes:

'Why should end-users "have nothing but gratitude" when vulnerabilities are disclosed and they are immediately placed at risk until they get a chance to update, even when the vendor has promptly provided a correct patch? I know I certainly don't appreciate that and can't reasonably expect any normal person to appreciate it either.'

Seems like a legitimate opinion to me. (The same users had almost all their comments downvoted as well, most of which seem perfectly fine). There are a number of others saying that Google has a conflict of interest, which also seems legitimate.

What I see is people doing everything they can to not address those arguments. It is one thing to call out people when they are in the majority, another when mostly normal comments are being suppressed. If anything, that is what isn't legitimate. I can't see how the people creating that environment expect to get anything out of it, but I guess that isn't the point. Very few smart people you meet in real life spend any time on Hacker News.


So you think, in defense of free speech, Google shouldn't be allowed to talk about what it found in Apple's products.


It seems like people's hatred for Google is leaking over into how they think vulnerability disclosures should happen.

Reading through the comments is disorienting: people are angry that researchers are... gasp... researching vulnerabilities. It's not some faceless Google Incarnate monstrosity; they are paid researchers (humans, too!). If Cure53 had done this, for free, and made the exact same announcement, no one would bat an eye.

Good on whatever company does vulnerability research, follows established protocols in disclosure, and makes the world a safer place.


When I read these comments I also feel like people must believe P0 is some kind of new thing that Google came up with. In fact, vulnerability research labs have been A Thing in the security field since the mid-1990s (I worked at what I believe to be the first commercial vulnerability lab, at Secure Networks, from '96-'98, along with much smarter people like Tim Newsham and Ivan Arce). And they've always been taking this kind of (dumb) flak regardless of which company they're attached to.

If companies like Apple or Microsoft are alarmed by the optics of Project Zero, they are free to stand up their own vulnerability research labs; they have the resources and they would immediately find takers in the research community.

End-users, meanwhile, should have nothing but gratitude for P0, since that project essentially represents Google donating fairly expensive and scarce specialized resources to public interest work. Vulnerabilities that P0 finds are vulnerabilities that aren't being sold through brokers to the global intelligence community. Message board talk about the "black market" alternative to bug bounties is almost always overblown, but P0 traffics in exactly the small subset of vulnerabilities that do have substantial, liquid markets.


>that project essentially represents Google donating fairly expensive and scarce specialized resources to public interest work.

To be fair, it also deprives other companies (and governments) of talented security engineers with a rare set of skills.


Apple fucked up and created a buggy program and then they fucked up again because their automatic update doesn't actually update automatically as soon as the update is live, instead it does so at its own convenience days or weeks later. Oh and they also fucked up at fixing one of the bugs apparently. Google then makes the exploits public with the end result that there are many devices which are vulnerable.

I see you've included the old, tired but still favorite excuses of the security community: that the exploits were probably already known by those other bad actors, and if not, we should be grateful anyway that those bug hunters aren't selling exploits to the global intelligence community...

Except the security community, Google included, has no freaking clue who or what has discovered those exploits. And now they're available to everyone.

Why exactly should anyone using iOS be grateful towards Apple or Google here? We're pawns in a stupid game between these companies. In this day and age all software companies should be forbidden by law to release any software that they can't prove secure. And if that means no releases for the next 10 years, too bad.


> Except the security community, Google included, has no freaking clue who or what has discovered those exploits. And now they're available to everyone.

They were already available to everyone. You just didn't know about it, and now you do and can take protective measures.


That they were available to everyone is obviously false, since it took a team of skilled bug hunters to find them. The only way to know that other actors were aware of the bugs is if there were reports of active exploitation.

Were there?

In any case, the software industry has a nice racket going:

1. Get paid lots of money to develop broken software.

2. Get paid lots of money to find security holes.

3. Expect praise from us customers that version X+1 is still broken, but now in different ways.

No thanks Google, no thanks Apple. The game's over anyway; several unsavory companies have access to iOS zero days. See the Bezos case.


> End-users, meanwhile, should have nothing but gratitude for P0, since that project essentially represents Google donating fairly expensive and scarce specialized resources to public interest work.

Why should end-users "have nothing but gratitude" when vulnerabilities are disclosed and they are immediately placed at risk until they get a chance to update, even when the vendor has promptly provided a correct patch? I know I certainly don't appreciate that and can't reasonably expect any normal person to appreciate it either.


First: everyone is already at risk for everything that P0 finds. The difference is that no one is publicly talking about the flaws or the risk in the open.

Second: P0 never “immediately places people at risk” because they always follow responsible disclosure.

P0 has a well documented and frankly fairly conservative policy before anything is publicly disclosed. Do vendors want more time to fix things? Sure, they always will. Do P0 disclosures sometimes happen publicly before the vendor has things fixed? Yes, Occasionally. However, looking at the net, P0 has provided far more value than they detract with public disclosure of flaws


> First: everyone is already at risk for everything that P0 finds. The difference is that no one is publicly talking about the flaws or the risk in the open.

You don't see how the risk might increase when more people learn about the vulnerability?

> Second: P0 never “immediately places people at risk” because they always follow responsible disclosure.

What? They provided proof-of-concept exploits just one week after the patch was provided. That's apparently not "immediate" in the eyes of security researchers, but try asking the average user if that's enough time to expect them to update.

> P0 has a well documented and frankly fairly conservative policy before anything is publicly disclosed.

Yes, and it could be worse, but it's also not great and could also be better.

> Do vendors want more time to fix things? Sure, they always will.

That was never my argument. I never said they should get more time to fix things.

> Do P0 disclosures sometimes happen publicly before the vendor has things fixed? Yes, Occasionally.

Again, I was specifically NOT arguing about disclosing before the patch is provided.

> However, looking at the net, P0 has provided far more value than they detract with public disclosure of flaws

And I never singled out P0 or claimed otherwise. I'm disputing the entire practice by whomever is practicing it.


> You don't see how the risk might increase when more people learn about the vulnerability?

Do you see the risk of having a vuln that is completely unknown but still exploitable in your stack?

> What? They provided proof-of-concept exploits just one week after the patch was provided. That's apparently not "immediate" in the eyes of security researchers, but try asking the average user if that's enough time to expect them to update.

Why are critical issues not being patched within 48 hours? The disclosure of the issue can only mitigate so many things, and vendors' patch schedules are not one of them. If your vendor takes 3 months to patch the system, is that the requisite amount of time the researcher should be expected to wait before disclosure? That seems preposterous.

> > Do vendors want more time to fix things? Sure, they always will.

> That was never my argument. I never said they should get more time to fix things.

So then that is your argument. What is a reasonable amount of time, and why is your arbitrary value not arbitrary? A day, a week, a month, a year; when can you ever be sure you've reached the critical threshold of patched systems using a rule of thumb?


Apple controls updates, not the carrier. So Apple updates happen pretty fast.

I don't think we should wait for the carrier though. That would take months, and honestly the carriers need pressure put on them or else they'll keep playing this lazy game and keep customers at risk. Because that's the truth: while a vulnerability exists, users are still vulnerable. The difference with disclosure is that users know how they are vulnerable and they can hold carriers responsible.


> That's apparently not "immediate" in the eyes of security researchers, but try asking the average user if that's enough time to expect them to update.

"immediate" has a pretty clear meaning. You might argue that a week delay is not long enough, but it clearly isn't immediate.

> try asking the average user if that's enough time to expect them to update.

I would expect that yes, the average Apple user was updated within a week. You can also argue that if the rollout took too long, that is Apple's fault, not P0's.

Why is disclosure good?

Companies:

Disclosure in specific instances is often not popular with companies, and they try to avoid it because of the bad PR it brings. However, companies benefit from the broad convention of disclosure because disclosures spread knowledge about how to write secure software and how to test for insecurities.

Researchers:

It seems pretty obvious that disclosure is good for security researchers. They get good PR and exposure; without a bug bounty program, that is all they get. (With a bug bounty program, disclosure is often restricted.)

Users:

The risk profiles begin to change when a vulnerability is disclosed. After a patch, risks for unpatched users are always going to rise because the patch itself serves as a disclosure of sorts. After the disclosure, this risk does start to rise faster as the pool of people who can exploit the bug expands.

On the flip side, disclosure is important because users need to know when they have been exposed to risk so they can take steps to mitigate that exposure. If a vulnerability allows remote code execution, installing a patch may not be enough if your system is already compromised. If a vulnerability exposed communications you thought were secure, any credentials passed using that method need to be rotated.

Finally, not all users' security is equally important. A grandma sending pictures of puppies to her grandchildren does not have the same security considerations as a human rights activist in China. You see this explicitly in embargoed disclosures where a limited set of organizations are informed in advance of the public disclosure.

The length of time between patch and disclosure is thus a trade-off between the risk to high-security users and the risk to low-security users. The longer you wait, the more low-security users are patched, but the worse the risks become for the high-security users.

You can't pick an optimum period on a case by case basis, because there are too many unknowns, so the best bet is to use a standard disclosure delay.


iOS users tend to be relatively quick about upgrades. Plus, the publication of the announcement contributes to word-of-mouth dissemination of the existence of the vulnerability and prompts those who haven't updated yet to do so.


Obviously, because the alternative is that bugs never get patched, and get independently discovered by people who sell them to brokers instead.

If you want to get angry about this, get angry at the vendor that took your money and time and attention and gave you a product with an exploitable security vulnerability in it. I think the issues involved here are a bit subtle to warrant knee-jerk anger, but if you've got to get angry, at least get angry in the right direction.


What? I'm asking about why disclosures happen when patches are provided promptly. I'm not objecting to disclosing when they aren't. How does that "obviously" mean "that bugs never get patched"?


The disclosures motivate the patches. We're not guessing about that; we have a before/after picture of the industry prior to mandatory disclosure windows.


Disclosing when a patch is NOT provided should motivate prompt patches. Nobody is disputing that. But you're saying disclosure when a prompt patch IS provided motivates patches further, and that's not even just a guess? How...?


As soon as a patch is released, it should be presumed that bad guys know all the fixes in it. Once a patch is released, people with bad intentions study all the changes (source code for open source, and decompiling for closed-source) and find all the changes that were made, looking for opportunities to exploit.

This happens to 100% of MS Windows Patch Tuesday patches, and happens to less well known products as well. These examinations happen even on changes that aren't known to be security problems when they are fixed: the recent Exim worm problem was actually fixed by the Exim team as a minor bugfix, and they didn't categorize it as a security flaw. It was only when outsiders at Qualys [1] saw the change that they realized it was possible to do a remote command execution.

In essence, the bad guys are patient, smart, and observant. They will notice all of these code changes. So disclosing the problem after it is patched serves to encourage users to patch, because it boosts and amplifies the "get updated" signal for the good guys, and the bad guys already are paying attention.

[1]: https://www.qualys.com/2019/06/05/cve-2019-10149/return-wiza...
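
To make the point concrete, here is a minimal, hypothetical sketch of the kind of change-hunting described above: simply diffing two versions of a source file after a quiet bugfix lands. The filenames are made up; real attackers diff whole releases, or reverse-engineer binaries with far heavier tooling.

  # Minimal sketch (hypothetical filenames): surface what a quiet "bugfix"
  # actually changed by diffing the pre-patch and post-patch source.
  import difflib
  from pathlib import Path

  def changed_lines(old_path, new_path):
      """Return a unified diff between two versions of a file."""
      old = Path(old_path).read_text().splitlines(keepends=True)
      new = Path(new_path).read_text().splitlines(keepends=True)
      return difflib.unified_diff(old, new, fromfile=old_path, tofile=new_path)

  if __name__ == "__main__":
      # Substitute files taken from two consecutive releases of the project.
      for line in changed_lines("deliver_old.c", "deliver_new.c"):
          print(line, end="")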


So here's what I don't get. Even if you put an N-month delay between the patch and the disclosure, people would still have to update their systems just as regularly, because patches for older exploits would still have their disclosure deadlines expiring, and the motivation you describe would still exist for them. So why can't you just add the N-month delay? Are customers really going around updating regularly but actively filtering the list of updates you provide to include only the ones whose disclosure deadlines haven't passed yet?


You're missing the key point: the patch is disclosure to the bad guys.

The P0 disclosure is the disclosure to the good guys and users.

You're assuming that the bad guys are learning about the problem from P0's disclosures, but without P0's disclosures, the bad guys would still learn about the old bugs from the patch itself.

So which is a better situation: a world in which only the bad guys get the information about broken old versions, or a world in which everyone gets the information?


I'm not missing that point. "Which is better" is not measured by who has "information". If only the bad guys have the information, but they still fail to exploit, that'd be an awesome world. Somehow infosec folks seem to keep treating this as an information game rather than a security game.

The way I'm suggesting you measure "better" is by measuring actual hacks, and how the number of actual hacks changes based on whether you insert a delay or not.

So we go back to my question above, which you didn't address. I'm saying the fact that patches come out on a regular basis means that people would still have to update regularly, even if each individual patch comes with a delay before a PoC etc. is disclosed. So I repeat the question: would customers really update regularly but actively filter out patches whose disclosure deadlines haven't passed? If not, why wouldn't the delay still achieve the outcome everyone is asking for here?


Information asymmetry is a key part of security - users not having information clearly does not make them more secure.

As noted elsewhere, patches are effectively disclosures, with some small delay (best described as an obscurity delay) baked in.


FWIW, I am much less likely to procrastinate on an update if I know it fixes a vulnerability.


On an update in particular? Or on the set of updates being applied to your system at a given time, if one of them fixes a vulnerability? (Do you filter out the ones that don't?)

And wouldn't a vulnerability still be there even if there wasn't e.g. a PoC disclosed? I'm not really sure how that affects what I'm saying.


> Do you filter out the ones that don't?

Of course not, that's a straw man that only you are suggesting and often isn't even possible on most systems. People by-and-large will apply all pending updates at once.

Responsible disclosure pushes folks to update. Well, that and new emojis that they're feeling FOMO from. It's a carrot and stick sort of operation.


There's a little box that occasionally pops up in the top right corner of my screen. I usually just click "remind me tomorrow" ad infinitum. But if I find out there's an important vulnerability that an update fixes, I go out of my way to update immediately. That's all I'm saying.


Most users aren't updating at all. My highly educated, STEM-employed wife just hits the "not yet" button on all updates. But when disclosures get enough attention that they leak into the broader (not security-focused) information ecosystem, people are more likely to upgrade. Remember how suddenly everyone heard about Heartbleed (e.g. my mother-in-law asked me about it)? That's because, for all that we steely-eyed engineers said "cute names? Logos? This is fake, it's all about the ideas, not the fluff", that fluff caused the message to be amplified and more white hats learned that they needed to upgrade. The bad guys knew immediately, because they are paying attention. Most of the good guys aren't, because they are busy earning a living that, fortunately for them, doesn't revolve around computer security.


What you say seems plausible, but it's not clear to me if it's actually true or false. Heartbleed is a tough example, because it's already entirely open-source, so it's not really possible to imagine "how much media coverage would it have gotten if the security researchers themselves didn't publish a PoC". (I don't even recall if they ever did provide one.)

On the other hand, I'm pretty sure I've repeatedly seen patches on closed-source products (from Microsoft, Apple, etc.) make it to the broader news without a PoC, so to me it seems like it's really a function of how severe the vulnerability is (although every vulnerability becomes more severe when it comes with an exploit and an instruction manual).

There's an easy way to settle this with data though. Is there data to indicate fewer machines are actually hacked at the end of the day when a PoC is provided after a patch, compared to when it is not provided unless the vendor doesn't issue a patch? That's what ultimately matters at the end of the day, and I'd readily buy that, but I have yet to be made aware of any.


Like you, I suspect, I am not in a position to do this sort of analysis. But I would note that Qualys didn't release a PoC with their announcement of the Exim bug, the RCE takes 7 days to trigger (you need to send a keep-alive packet every 4 seconds for 7 days as a step in the exploitation), and within 8 days of it being published it was being exploited in the wild. So the bad guys move fast, even without a PoC.

Based on evidence like that, I don't think that a PoC matters that much to the most dangerous bad guys. The script kiddies, maybe it does matter, but there are enough NotScriptKiddies out there that'll own you just as hard with Shodan and their own code that the marginal effect of releasing a PoC is probably pretty minor.


Hm... are you sure you're not misreading the timeline and what happened?

Because as far as I can tell, Qualys did include precise exploit details [1], and the attacks happened 8 days after they did that, meaning in fact the inclusion of source code details would have caused the exploitations in the wild!

Here's the timeline I can find:

CISA reported this vulnerability as being exploited in the wild on June 13 [2]. According to a June 14 article [3], this came one week after Qualys disclosed the bug, which means they must've been referring to the announcement Qualys made on June 5 [1]. When you look at that announcement, it in fact included full details on how to exploit the vulnerability ("a local attacker can simply send a mail to [...] and execute arbitrary commands") on top of explaining in precise detail the vulnerable piece of code in the (open-source!) source code.

More info on the timeline is in [4]. They refer to a May 27 report, which I cannot find online. I assume it must've been a private disclosure. In any case, it doesn't seem to be what SCMagazine was referring to, given CISA only reported this on June 13 and SCMagazine referred to that on June 14.

So... if I'm reading this right, it seems in fact it almost certainly was the precise exploit details that made the bad guys move quickly. Right?

[1] https://www.qualys.com/2019/06/05/cve-2019-10149/return-wiza...

[2] https://www.us-cert.gov/ncas/current-activity/2019/06/13/Exi...

[3] https://www.scmagazine.com/home/email-security/exim-vulnerab...

[4] https://www.exim.org/static/doc/security/CVE-2019-10149.txt


Customers are going around delaying updates that aren't marked as critical, because it's too much trouble, and they might be included in a large quarter/half/full-year patch roundup with an OS update.

It's in the interest of the developer to downplay security problems they think aren't a problem. It's in the interest of the security researcher to make sure users get the information about how problematic the exploit is. The user can only make an informed decision about whether an update is important when they have the information.

Once the patch is released, all announcing the exploit does is possibly bring more exposure to it for people who might have delayed or foregone patching, possibly causing them to patch manually or request that their automatic patch process run immediately instead of at some future date.

This is a net gain for the security of individuals, in that it likely causes some number of people to patch earlier than they would have, and adversaries are already actively tracking patches so it's unlikely you've given away much info they couldn't get fairly easily (and they are incentivized to find it no matter what).


> But you're saying disclosure when a prompt patch IS provided motivates patches further, and that's not even just a guess? How…?

1. because the policy is "mandatory disclosure", it's way harder to criticise it if it's a blanket, universal policy

2. disclosure is important for users (in general though mostly corporate) because if the issue is not publicly disclosed they might not update their systems (assuming issues are minor or irrelevant)


> 1. because the policy is "mandatory disclosure", it's way harder to criticise it if it's a blanket, universal policy

"It's harder to criticize"? So the reason to put users at risk is to... solve a PR problem?

> 2. disclosure is important for users because if the issue is not publicly disclosed they might not update their systems

Hold on. You're actively injecting a threat and guaranteeing that everyone knows the exploit immediately and that it can be deployed by lots of people on a wide scale because someone might discover it someday?

Can't you at least address this by putting a reasonably long time gap between the patch and the disclosure? People will still have to update their systems in the interim due to previous patches' deadlines expiring...


Seems like the disconnect between you and others is your assumption there is some time gap of knowledge about the exploit after the patch is released.

Others have stated that almost immediately after patches are released they are reverse engineered to discover the exploits. So at that time, motivating people to upgrade to the patch is all benefit no?

If they're wrong about their assumptions, you might have a point. If you're wrong, what's the dispute?


Yes, but a time gap in knowledge is just one dimension, whereas I see a gap in every other dimension too. I expect that not every bad actor who would learn from a PoC would learn from a binary, not everyone who would learn that way would actually invest the time to write an exploit, not everyone who invests the time would actually come up with an exploit in such a short timespan, and not everyone who does all that will target the same set of customers. Furthermore, as I said in another comment, the existence of prior patches would mean people would still have to update regularly because those disclosures would be expiring anyway, so it's not at all clear to me that adding a delay would change that. And there are probably other dimensions I can't think of right off the top of my head.

There should be an easy way to settle this, which is with data, which I have not yet seen anyone point to. I would be shocked if data showed that the number of actual customer systems hacked actually decreases when a PoC is provided quickly after a patch, vs. when this is not the case.


> because someone might discover it someday?

because many people will discover it immediately by looking at the patch.


Disclosure motivates people to apply the patches.


When the vendor provides a patch before the 90 day deadline, do you want P0 to never disclose the bug or to wait a month after patch was released, or what?

I think you're complaining that P0 don't give users enough time to patch - e.g. disclosing one day after the patch was released. If so, that is fair and I think they could wait longer, but for things like Microsoft, Apple, Android, Chrome, etc. security patches - those are disassembled and diffed within hours of release, and for some vulnerabilities an exploit is ready within 24h, so P0 disclosing everything so defenders can for example prioritize installing the patches, turn off features/services, etc. is generally a good thing.


I mean, I'm not taking a hard position on what they should do, but I know (a) it shouldn't be immediate, and (b) logically, one would think the default would be "never disclose unless you can provide a compelling reason for doing so". I'm open to hearing a compelling reason, in which case I would think they should disclose after waiting a while, but I need to hear one first.


The compelling reason is that users have a right to know what they're vulnerable to, and how they can protect themselves, or at least mitigate the risk. Once a patch is released, the changesets get examined, and the binaries get reverse engineered. This happens within days, if not hours. That means if the exploit wasn't known before, it definitely is now; the only thing not disclosing achieves is leaving the people vulnerable to the exploit in the dark. Blackhats and the world intelligence community certainly don't need Google's blog post to figure it out.


> logically, one would think the default would be "never disclose unless you can provide a compelling reason for doing so".

AFAIK the compelling reason is that history shows us that vendors won’t take vulnerabilities seriously enough until they get threatened.


Seeing that Apple is still releasing security patches for phones back to the 4s released in 2011, does that argument really apply here?

https://appleinsider.com/articles/19/07/22/apple-issues-ios-...


This was "never disclose after a patch was provided". How would they not take it seriously if disclosures still happened when patches weren't provided?


It's disclosed so that the customers know what the vulnerability is. Sometimes patching is not the best thing to do for a given situation. Other mitigations might be possible, or it might not even be an issue depending on how the software is used. Criminal researchers might have found this information anyway, so it's only fair to put the customers on an equal footing.


> Criminal researchers might have found this information anyway, it's only fair to put the customers on an equal footing.

Or they might not have, in which case you just gave them a pretty powerful weapon, and it's pretty unfair to customers??

> It's disclosed so that the customers know what the vulnerability is. Sometimes patching is not the best thing to do for a given situation. Other mitigations might be possible, or it might not even be an issue depending on how the software is used.

Are you sure "because maybe you shouldn't apply the patch" is their logic here? (Which doesn't even necessitate this either, but everything I've seen indicates they want you to patch immediately.)


If you have a popular product then criminals will study the patches and reverse engineer the vulns. If you ship a patch then the vuln is known regardless of a post about it. Refusing to disclose patched issues does not improve security.


Someone already mentioned this. See my reply there: https://news.ycombinator.com/item?id=20567923


Logically, one would disclose as soon as the value of disclosing is larger than the value of non-disclosing. I hope we can agree that after 5 years, the benefit users have due to non-disclosing is near-zero, because virtually nobody will still use the vulnerable version. At the same time, if after 5 years the security gap is still unknown to the public, the value for other security researchers will be very large. So, for mobile phone software vulnerabilities, it is definitely better to disclose after 5 years than not to disclose.

Given that before the patch the value of non-disclosing is certainly higher, and after five years the value of disclosing is certainly higher, there must be some point in time from where on disclosing is the right choice. Therefore, the only question is when to disclose, not if to disclose.


They were at risk before they knew about the vulnerability. The gratitude is for the fact that it now actually gets patched.


For getting the patch, there is gratitude. But why should they feel good about the disclosure that follows?


This is backwards. The patch is something you're getting from the vendor that put you at risk in the first place.


Instead of telling me I'm being "backwards" about what people should appreciate and what they shouldn't, it'd be more helpful if you could explain your own thoughts like I asked you on the comment above (not here): https://news.ycombinator.com/item?id=20567651 I certainly don't follow your logic.


Why would I do that? None of your arguments on this thread appear to have been persuasive, judging from the pallor the text they're displayed in has taken. If it was just one or two comments, sure, who cares, but you've written a multitude of them, and none of them appear to be surviving. It feels like the burden of proof falls rather more strongly on you than on me at this point.


> None of your arguments on this thread appear to have been persuasive, judging from the pallor the text they're displayed in has taken.

You judge the persuasiveness of an argument by... the "pallor" from other users' votes? In a discussion about computer security... which is an area in which you yourself are the expert? Shouldn't you be the one whose thoughts and votes other people would look at (and frankly, quite possibly, did look at and become immediately biased by), rather than the other way around?

> Why would I do that? If it was just one or two comments, sure, who cares, but you've written a multitude of them

Why would you do that? Yes, good question! Why did you ignore my 3rd comment in that thread (I suppose 3 is "a multitude") where I pointed out you had not been actually addressing my argument at all in the previous 2 comments, yet still proceed to interject into this discussion instead, and after waiting for hours for the discussions with other people to finish, again waste time replying to me here with a comment devoid of any substance? I wouldn't know; that's a question only you can answer to yourself.

You can "place the burden of proof on me" if you want, but the fact that you misrepresented my argument twice, refused to ever address it, moved to another thread just to tell me I'm being backwards, and now replied hours later to judge the merits of my comments by other people's downvotes in the very area you're an expert in... answers my question more fully than I could have ever hoped it would. Thanks.

> and none of them appear to be surviving.

You're wrong here too. At least one of them is still quite productively moving forward, with a kind user who left genuinely good replies that have actually directly addressed my arguments, and I look forward to that user's next reply.


The disclosure allows researchers who would have never bothered looking into that category of bug to now be aware that these issues exist in x area of a system.

I'm totally baffled by your stance. Would you rather only the people on the black market and their buddies have access to this knowledge?


For all intents and purposes, the patch is the disclosure.


People are complaining about quality security research...on hacker news!!


Except it's Google leaking vulnerabilities that affect their direct competitors.

They have way too much skin in the game.


Project Zero has found, and disclosed, over 100 vulnerabilities for Google products.

You're just going out of your way, trying to find things to dislike, and it shows.


So you are concerned that they are improving their competitors products but not their own? Doesn't it make more sense just to not do pro-bono research for their competitors?


I really like both Apple and Google; both companies make amazing products and come with different strengths and weaknesses.

Project Zero is a great marketing effort that produces a genuine public good. But when you are the brains behind the security dumpster fire that is Android, it's understandable that people roll their eyes at the big PR splashes (i.e. distractions) that the project generates.

But hey, that's that market -- it is a valid strategy, and at the end of the day the world gets to learn and fix problems that they didn't know existed.


My beef is with the established protocol (Google or not) and that is that you disclose enough details to potentially repro even if the vendor patches correctly and promptly.

Why the hell suddenly put everyone's devices at immediate risk until they get a chance to update? That's "responsible"?


Too many companies don't want to patch and won't do so unless forced.


Yes, that's why you disclose if they don't do that. Not if they do.


Then it turns into blackmail. Indifference to their actions is necessary.


A patch for a security flaw exposes the flaw to anyone with deep enough pockets to reverse engineer the patch.

Disclosing the flaw even if patched helps everyone, including you and me.


I don't believe serious vulnerabilities, patched or not, would stay secret for long even if you tried. Disclosing motivates vendors to build systems to ensure users update their devices as quickly as possible.


I believe Natalie Silvanovich is giving a talk at Black Hat about some of these next week. Silvanovich is a machine.



While being a Project Zero member is cool and all, Natalie is also a well-known Tamagotchi hacker https://natashenka.ca/about/


>We are withholding CVE-2019-8641 until its deadline because the fix in the advisory did not resolve the vulnerability

Wonder how this happened? A rushed patch, or perhaps they only tested against a submitted PoC? Only a week left until the Defcon talk. Still listed as "fixed" in Apple's release here: https://support.apple.com/en-us/HT210346


Fixes can have bugs too, there's nothing odd about that. Maybe there was an edge case the developers didn't understand from the report, etc...


Sometimes it’s basically that the person in charge of fixing it didn’t. A year ago one of the fixes for a widespread remote execution flaw was to see if the user agent was curl.



My experience has been that it's extremely common for the average developer to fix the symptom instead of the root cause.


Other security updates released on the same day last week caused Macs to kernel panic every time they went to sleep [0]. Apple software quality is not what it should be, and hasn't been for quite some time.

[0]: https://eclecticlight.co/2019/07/24/dont-apply-high-sierra-s...


> Apple software quality is not what it should be, and hasn't been for quite some time.

Quality is a moving target: I know something about the quality of Safari, and the quality has been getting better over the years (that said, I admit the recent Mobile Safari Betas have been really shit, hopefully the release will be good).

Maybe it is comparative: for example, Safari's quality is nowhere near as good as the Chrome team's quality (which is unbelievably good: regular updates across thousands of different Android device types, across thousands of versions of Android, with immensely complex software).

Also social media now means that we hear about quality issues - we raise the bar on what we think is acceptable.

Do you think Apple's software quality has not improved over the years?


> Do you think Apple's software quality has not improved over the years?

I think quality has actively declined.

As you know something of the quality of Safari, I'll limit myself to that. Safari over the past several years has made myriad design changes that I heavily disagree with (killing extensions, removing user control over website data, baffling UI decisions), but even though those changes have made my browsing experience worse they may not be objectively considered "software quality." Instead, I'll focus on stability and bugs.

When macOS Sierra launched, I had to deal with weekly lockups and reboots of the OS that I mentioned here: https://news.ycombinator.com/item?id=13159008. I tracked the issue down to Safari 10, which introduced new resource leaks that eventually brought the entire system down after being left open. Even after major releases of the browser eventually stopped forcing restarts, leaving Safari open for extended lengths of time will still cause not just instability and misbehavior in itself (e.g., popover arrows eventually disappearing), but also knock-on problems in completely separate applications, including greyed-out standard menu actions that return immediately once Safari is quit. This resource exhaustion is independent of the number of tabs, but handling of large numbers of tabs has also regressed: tabs now crash or unload regularly, and there is no easy built-in way to see which; this causes data loss and erroneous cookie manipulation when the tabs are reloaded when navigating back to them. Pages often do not add correctly to History, particularly from clicked or OpenSearch search results, with mismatched titles/URLs or entirely missing entries: to this day, searching Wikipedia with Quick Website Search gives a tab title that does not match the page or the history item, and interaction with the back/forward cache is likely to exacerbate this. Worse, pages often disappear entirely from autocompletion, causing mistaken page loads and spurious searches when expected results are missing. A couple of years ago, Safari stopped preventing the Mac from sleeping while a download was in progress, forcing me to copy URLs into Terminal to download with a caffeinated curl command instead to avoid truncated files. A recent release of Safari marked random unvisited links as visited, likely due to some newly introduced hash collision, and was not fixed for many months.

This is just what I can recall off the top of my head, in one limited aspect of a single application. All of these were newly introduced errors; some major, many persistent. I sometimes have call to use older versions of Safari, and while definitely slower and less compliant, in many respects they are remarkably better in terms of feature stability and experience.


> Apple software quality is not what it should be, and hasn't been for quite some time.

It has never been. I've been using Apple devices since Tiger, if not Panther; their software has always had more teething issues than their hardware, and 10 years back you didn't buy rev1 hardware unless you bought every device. Major OS updates usually took a few point releases to get solid, and some were just terrible through and through (Lion comes to mind: lots of shiny new stuff, lots of shitty new stuff).


Perhaps it was a bad fix.


[flagged]


"Fixed if you hold it right"


Fixed: working fine on my machine. Now I'm going home zzzzzzzzzzz


This type of thread always goes a lot differently when the flaws revealed aren't in Apple products.


Choice-supportive bias is real, and it seems to scale with price.


Apparently iOS 12.4 came out last week but I have automatic updates on and the update is not installed. I just triggered it manually a moment ago.


Can anyone explain iOS “automatic” updates? They never seem to work for me.


It’s very specific parameters. Automatic Updates are done after they’re downloaded, and automatic downloads are rolled out around a week and a half later or around that time. https://imgur.com/a/u6Mxz4A


Same, never worked for me. At least I should be given a notification so I can install it myself.


You have to have the phone charging overnight, usually. But sometimes I wake up in the morning to a message that it had an issue and couldn't update.


I always wake up to this message, and have to do it manually. I don’t know why.


Same here. For the last few iOS updates I was notified through hacker news rather than through the "automatic update" feature.


Same here. This was an unpleasant surprise. Even after I checked and saw that I didn’t have 12.4 installed yet, there was no red badge on the Settings app icon to indicate an available update.


Consider subscribing to Apple's security-announce mailing list [0].

[0] https://lists.apple.com/mailman/listinfo/security-announce/


Project Zero continues to be a Good Thing


Is this why Apple also quietly released updates for older devices as iOS 9.3.6 and 10.3.4? IIRC Apple has only patched EOL'd iOS releases once before, in 6.1.6 for the SSL "goto fail" bug?


That was for the GPS 10-bit bug. https://www.theregister.co.uk/2019/02/12/current_gps_epoch_e...

Apple's runs out in November 2019 instead of April.


Apple's changelogs are often incomplete at time of release and updated with additional CVEs later. A GPS fix sounds like a convenient cover story for an emergency 0-day patch for iMessage, for example.


Interesting that there isn't a post on Project Zero's blog. That's typically how they do public notification.


I definitely would've preferred that to the BBC article.

ZDNet seems to be the better / primary source on most other articles: https://www.zdnet.com/article/google-researchers-disclose-vu...


I wonder what kind of infrastructure they had set up to find these vulnerabilities and extract names of classes and methods. Do they jailbreak iPhones and run fuzzers directly on the device? Do they analyze IPSWs directly?

Edit: the writeup also explains how to set up tooling to test these components. I'll wait for the Black Hat slides.


> extract names of classes and methods

This is very easy to do using tools such as class-dump if you can get access to the binary (either from the IPSW, or sometimes directly from the shared cache).
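
For what it's worth, a rough sketch of that kind of extraction, assuming class-dump is installed and you've already pulled the decrypted binary out of an IPSW or the shared cache (the wrapper below is just illustrative, not P0's actual tooling):

  # Rough sketch: list Objective-C class names from a Mach-O binary by
  # shelling out to class-dump (assumed installed; binary path is an argument).
  import subprocess
  import sys

  def dump_interfaces(binary_path):
      """Run class-dump and return the generated Objective-C headers as text."""
      result = subprocess.run(["class-dump", binary_path],
                              capture_output=True, text=True, check=True)
      return result.stdout

  if __name__ == "__main__":
      for line in dump_interfaces(sys.argv[1]).splitlines():
          if line.startswith("@interface"):
              print(line)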


Glad to see tech companies holding each other accountable. I hope the white hat hacking between these folks continues. The more vulnerabilities found, the safer our data will be.


TBH, while I think the iMessage service is invaluable, the app itself is often buggy for me. On OS X, it often hangs with the spinning beach ball when attempting text input, the iCloud sync can be spotty, and, the cardinal sin, on my iPhone X there are inexcusable screen-draw bugs with orientation rotation or with the keyboard popping up to type... so I am not entirely surprised. It is an app in need of a good overall bug hunt.


One bug I’ve noticed on iOS is the bar with the message text field and the send button sometimes being displayed at the bottom even if you’re on the message list view.


TL;DR Apple happily fixes what Google’s hackers uncover and responsibly disclose but the beeb desperately spices things up because clickbait ;)


Wish I had a bunch of people testing my code for free and giving me 30 days to fix things before anyone else found out.


Even better when your competitors do it!

Think if Google, instead of disclosing these responsibly, were to leak one bug to hackers every month or so. How many would stay on iMessage after getting owned for the tenth time?


Does anyone know how to file a bug report for iMessages? I have a slew of bugs I'd like to report from my day to day usage.



Meanwhile, Google's keyboard collects everything you type and Android collects everything you say. Who needs bugs...


[flagged]


Please don't post flamebait to HN.

https://news.ycombinator.com/newsguidelines.html


I asked a serious question and got serious responses. Please let people ask questions.


The only developers that would be left after such a shift in liability would be the ones with law degrees.


And the ones with a proper engineering degree, instead of calling themselves engineers after a 6-month bootcamp.


Why should an engineering degree be any sort of requirement? The best programmers I can think of (Knuth, Ritchie, and Stallman) don't have them and I've met plenty of competent developers who didn't either.


They surely have a degree in some form of engineering and are far beyond those who call themselves engineers without having set foot in a university.

https://en.wikipedia.org/wiki/Donald_Knuth#Education

https://en.wikipedia.org/wiki/Dennis_Ritchie#Personal_life_a...

https://en.wikipedia.org/wiki/Richard_Stallman#Harvard_Unive...


They all have degrees in STEM, but not in any "engineering" (i.e. ABET approved) program. It's also worth noting that having a university degree is not a requirement to be a licensed engineer in many places, including California.


STEM is part of engineering; that is what the E is about.

Being an "engineer" without a valid university degree in engineering was my whole point.


That's not how acronyms work. Engineering is the E in STEM, not the other way around. STEM also includes Science, Technology, and Math, none of which are Engineering.


They are on this side of the pond.

The large majority of STEM university degrees have engineering in the title and are overseen by the Engineering Order as proper degrees.


I don't have an engineering degree, but rather a CS degree. Did I learn proper design and protocol for making secure software? I support requiring some sort of registration/testing/regulation of professional (software) engineering, just like my mechanical engineering friends have.


Some schools have their CS program in the engineering department and have requirements which follow from that dependency. So whether you have an engineering degree depends on the school?

Otherwise I agree, there should be a standard for professional software engineering. There are efforts underway already in that direction.


I am absolutely confident that 100% of developers with an engineering degree and at least five years of experience have written at least one security vuln. Most have written hundreds.


And those lacking proper education have written even more, if we are now going to start counting.


Even if this is true, the thread here is discussing throwing engineers who write vulns in prison. We'd all be in jail in that world, degree or not.


I'm picking up a very defensive vibe about your own degree here. Are you worried about getting replaced?


Not at all; I just fully disagree with the misuse of the word "engineer", in some countries, by people who most of the time never set foot in college, or, if they did, whose degree had nothing to do with engineering.

Why should I be afraid? Over here engineering is regulated, and that surely isn't going to change during my lifetime.


Take a look at École 42, just next door to you [1].

It breaks every preconception of a certified French engineering school. You don't even need to have finished high school to apply! At most its graduates will be shut out of governmental jobs, a drop in the bucket of EU jobs.

This is a very Silicon Valley way of thinking: disrupt the incumbent who cannot adapt to new ideas. Maybe you should be afraid, because change is on the way.

[1] https://en.wikipedia.org/wiki/42_(school)


In some countries, that's a rational, deliberate decision.


Do you think Apple is hiring those people to work on iMessage?


It doesn't matter what a single company does; the whole industry needs to improve with regard to the quality of its deliverables.


What about any kind of degree that prevents people from jumping to logical chains like: coding bootcamps -> low quality of deliverables industry-wide -> iMessage bugs!


If companies were hit hard in their pockets due to iMessage-like bugs, they would certainly take care to hire well and put in place the measures needed to prevent those bugs from happening.

One such measure, as in other professions, is ensuring a proper education and the respective certification.


These iMessage bugs were introduced by properly educated and certified engineers. So what's the take-home conclusion now?


So 100% of the team is composed of foreign workers? Given that the US doesn't do certification of engineering degrees.

Can you please prove that to this audience?

Let's assume that is the case, just to make you happy.

Then it is certainly proof that the lessons weren't taken into consideration when setting up the quality certification chain of the iMessage development process.


Relax, I don't know what "certification" means to you specifically. But what makes you think Apple doesn't already hire the best-in-class engineers they can get their hands on?

Anyway, we're pretty far off the initial claim you made: "it's those poor saps from coding bootcamps that are the reason this industry is doomed".


Yeah. That's the only way you get companies to slow down and do it right. The vast majority of those flaws are completely preventable; we are just not investing in quality as an industry.


It's an interesting sentiment for sure. But unlike other engineering disciplines, software flows freely and easily between borders. What about, say, importing software from countries like China, where in some cases developer best-practices are lax? Do we then need some sort of vetting process for the import of foreign software? That seems the logical conclusion of implementing something like this.


Well, if we want to make Software Engineers real, credentialed Engineers, then perhaps this is how...


Some of us actually are.

It just isn't a widespread thing.


Is this in the US? I wasn't aware that any jurisdictions were licensing software engineers at this point. I'd certainly be interested in hearing more!


Portugal.

While you aren't strictly required to be certified to work as an engineer, you are required by law to be certified if you have legal responsibilities on the project, like having your name on contracts.

Additionally, any university that wants to offer an engineering degree must have the degree certified as compliant with the Order's requirements for such a degree.

So even if you don't do the certification, at least there is some assurance regarding the quality of the teaching.

Some other European countries have similar approaches to this.


Thanks for sharing that. It definitely sounds like a step in the right direction. I've heard some talk in the US about improving credentials for software engineering, but primarily at the professional-organization or educational level, not at the formal, government-recognized Professional Engineer level.


And look at all the amazing billion dollar tech companies, open source projects, operating systems, programming languages & compilers that come from there compared to the US.

Ah yeah, I can see how much better the certification makes everything now. Thanks for that!


Those things are orthogonal.

Having 60-hour work weeks, one week of vacation, and tons of VC money to spend is not related to having a degree.

In fact, many of those marvelous US things were created by expat teams who got an engineering degree back home before trying their luck in the US.


[flagged]


Facts are facts.


"many of those marvelous US things were created by expat teams" is that a fact ? or your projection of reality.

Edit: since facts are facts are facts: at most US companies can have 15% of visa holders in their workforce.

I'm going to wait here and watch Portugal become the next global leader in tech because of better certification of engineers. Give me a ring when that happens ok?


Could you please not post in the flamewar style to HN? We're trying for a bit better than that here.

If you wouldn't mind reviewing the site guidelines and taking their spirit to heart when posting here, we'd appreciate it. Note that they include Don't be snarky.

https://news.ycombinator.com/newsguidelines.html


Yes, it is in the US!

You can get licensed as a Professional Engineer (PE); the exams are administered by the National Council of Examiners for Engineering and Surveying (NCEES). The general requirements are a degree from an accredited program, passing the Fundamentals of Engineering (FE) exam, four years of supervised experience under a PE, and passing a PE exam.

Here's some information on the PE Electrical and Computer exam: https://ncees.org/engineering/pe/electrical-computer/

As far as I know there isn't much benefit to becoming a PE as a programmer unless you are working in an industry that requires it. I have heard that some programming jobs at power companies require a PE, but I have never come across any. The National Society of Professional Engineers has a job board, but none of the programming jobs currently on it require a PE: https://careers.nspe.org/jobs/discipline/computer-software-e...

I took the FE as a part of my undergraduate degree but I don't expect that I will ever become a PE.


On the surface it appears Google is spending millions of dollars to expose flaws in competitors' products.


Words are wonderful. Try saying it this way:

Google is spending millions of their own dollars to freely help other companies (and themselves!) enhance their security and close holes malicious actors can exploit.

Now replace "Google" with any other company or independent researcher of your choice. If you're no longer angry, you're being biased solely because it's Google and not someone you like.


What I said should have conveyed no opinion. Why would you change the words?


The other folk have given some (correct) reasons why I read an opinion in your message. But, here was my thought process. If I was wrong in how I understood your message, my apologies.

>On the surface it appears

This implies an ulterior motive.

>expose flaws

As mentioned, "expose" is an emotionally charged word with negative connotation.

>in competitors' products.

P0 does not just focus on competitors' products, by any stretch.

Your statement also fails to convey that they notified Apple following industry-accepted disclosure practices so the vulnerabilities could be fixed, rather than just "exposing flaws".


Your message shows how this situation can be framed to create a negative opinion towards Google. The wording of "on the surface it appears" implies that showing that was the purpose of the message.


"Expose" is a loaded term in many cultures. In particular it means to reveal something which the other party was trying to hide, usually in an adversarial way.


I replaced Google with Microsoft, Oracle and SCO, and the feeling is the same.


So, you're not okay with white hats or security researchers at all then


I suspect it's large companies, given the names provided.

Really, I don't get people's hatred for Project Zero. Sure, hate the companies, but can you seriously argue that companies spending money on security research is a bad thing? Even if *gasp* they might get some good publicity from that research?


I guess the issue is that very little discovered internally will ever be publicly disclosed. It feels like a tactic to make themselves look more secure than others when that is not likely the case.

That being said, I think the same behavior is to be expected from any company large enough to need a dedicated security research team.


Project Zero absolutely covers vulnerabilities in Google products. Just take a look at the blog archives (https://googleprojectzero.blogspot.com/). Chrome and Chromium seem to be frequent features, but they cover other Google properties as well.


> I guess the issue is that very little discovered internally will ever be publicly disclosed. It feels like a tactic to make themselves look more secure than others when that is not likely the case.

Agreed, and I don't think for a second Google as a whole is any different in this regard.

But who cares? Security issues are being found, security issues are being publicized, security issues are being fixed.

Project Zero, as a small part of Google, is finding bugs in everyone's software - including Google's - and holding everyone to the same standards, standards which are widely regarded as acceptable for disclosure.

The rest of Google, should they discover an issue without Project Zero's help, presumably behaves just as most other companies do - so hate them all equally, that's fine, and I agree - but Project Zero is different from Google as a whole, and just is not something to hate IMO.


Except Google has good reason to want iPhone customers to feel unsafe, to have Apple blamed for issues, etc...


> good reason to want iPhone customers to feel unsafe

How does fixing vulnerabilities in your iPhone make you feel unsafe?

> apple blamed for issues

Apple is being blamed for the issues because Apple is to blame for the issues. They made the product. Who is to blame about the security issues in an Apple product, if not Apple?


I don't think it is much more complicated than that some of us don't want to grant even more power to companies like Google.


Power? I don't see much power being granted here.

And if you rule out "all companies like Google", you've basically ruled out everyone with enough capital to donate to research, depending on your definition of "like Google".

And really, it absolutely is a donation. The ROI on Project Zero is likely 50x lower, or more, than if that money went to the marketing team.


> Power? I don't see much power being granted here.

And I am not going try to convince you in a popularity based forum, but that is generally the objection.


If you don't want to engage in discussion why are you even commenting here in the first place?

You are saying that this gives more power to google and someone asked if you could elaborate on why you think that. Not everyone has the same background and what may be obvious power to you may not be to others. This forum is supposed to be participated in with good faith.


To be fair to them, while you understood my intent - I definitely could have phrased that better.


Thankfully there are many ways to participate. But maybe I was a bit short. My point is that if you don't 'meet me halfway', I can't do the subject justice in a forum where a significant number of the comments arguing that point are hidden. That increases my effort to make an effective argument and diminishes my returns for that effort. Especially since I don't feel that strongly about it. You are better off trying to find a blog post about it that won't disappear in a couple of hours.

But on the other hand, meta isn't that interesting either. If large companies wanted to do security research that wasn't objectionable to people, they could do so through consensus, standards, and agreements. No one could really question that. Instead the idea is largely that "the ends justify the means". That is what people tend to disagree with: that large companies can unilaterally decide how things are done, not just for themselves but in a way that affects other companies and their users. It doesn't really matter if it is for good or for best practice, because it is about them, especially as large companies in the industry, having that influence.


It's how it's used. Like how, when Epic decided Fortnite should skip the Google Play Store and its 30% cut, Project Zero suddenly was interested in checking the security of games (which you rarely if ever see), so they could find a vulnerability in it and feed it out to Google's favored press outlets with a constructed story about how Epic is compromising everyone's security by not using the Play Store. (Don't mind all the stories about malware in the Play Store, of course.)

As soon as a company slighted Google, it was immediately a Project Zero target, and that should tell you everything you need to know about why people are annoyed with them.


IMO, you're making a pretty large assumption here - you're describing the intent as malicious without anything concrete to back it up. If I were a security researcher, and someone decided to do something out of the norm, I'd probe it! There's nothing here to suggest malicious intent, only a security researcher doing what a security researcher does.

> with a constructed story about how Epic is compromising everyone's security

Did Epic compromise everyone's (or, at least their users) security? My memory of that incident is, yes - they did. If that's true, if the code was buggy and had a path to a security exploit, how is it a "constructed story"?


If I recall the issue in question, the only way it would be vulnerable to anything was... if someone already had another malicious app installed on their phone. Which is to say, you could get infected by already being infected, which is... not much of a vulnerability.


> Which is to say, you could get infected by already being infected, which is... not much of a vulnerability.

It's a pretty big vulnerability when you allow the malicious intent of one app to escalate to an actual malicious capability, so I don't think you're accurately recalling the issue in question.


I mean, if the first malicious app has less permissions than Fortnite, it could potentially gain Fortnite's permissions through the vulnerability. But the first malicious app likely could've just asked for those permissions itself, it's not like Fortnite has egregious permissions as it is, and neither can run as root.

Which is to say, there's the possibility for minor problems that should be fixed, but it's far from the "Epic is terribly insecure, trust the malware-ridden Play Store instead" rhetoric we got from this particularly aggressive media campaign.


> "Epic is terribly insecure, trust the malware-ridden Play Store instead" rhetoric we got from this particularly aggressive media campaign.

Did p0 say that anywhere?

Given that if I search fortnite on the play store, I get a special warning message that it can't be downloaded on play (which was added specifically to prevent fortnite clones), I'm less than convinced that there was a unified campaign by Google to undermine epic, as you seem to be suggesting.


It means malicious apps can target Fortnite as a vector for malwarin', stealing your Epic account, etc. The vuln got media attention because Tim Sweeney got it into his head to publicly grump at P0 because they wouldn't hold off disclosure past the point of patch release. If it wasn't a big deal, why do you think he did that?


> If I recall the issue in question, the only way it would be vulnerable to anything was...

So, yes is what you're saying. It was vulnerable code discovered by Project Zero.

And when discovered, did Project Zero follow their published process for disclosure to both Epic and the general public?


> Project Zero suddenly was interested in checking the security of games (which you rarely if ever see)

I don't think it's all that sudden and they've talked about it:

https://twitter.com/taviso/status/955540415263907840


What about the NSA, or GRU? Because they're doing this research too, except they're not publishing it.


What if you replace it with, say, Stripe?


My biggest question would be how many security vulnerabilities Google uncovered and disclosed on their own platform. If they are being good and helping other companies - great! But if they're also profiting by FUD'ing them - then we should call them something other than white hats... grey hats, maybe?


https://googleprojectzero.blogspot.com/

Plenty of uncovered and disclosed vulnerabilities for Android and Chrome there.


> My biggest question would be how many security vulnerabilities Google uncovered and disclosed on their own platform.

130.


Google fixes its shit before the 90 day deadline. Apple could hire the talent it needs to do the same.


Well, Apple released a patch for all phones back to the 4s released in 2011. What are the chances that security patches for Android phones make it to phones released even two years ago?

Even Microsoft released a patch recently to a security vulnerability found in Windows XP.


Android doesn't need a system update to update the messaging app.


But it does need system updates to update parts of the system....


The point is that this is a pretty small portion of all security updates. Compare to iOS, where updating the browser or iMessage (both with very large vulnerability surfaces) requires a system update.


You act as if there is a difference. It’s not like Apple pushes the entire OS down for a minor update. Either way it’s a delta.

But that “pretty small portion” doesn’t matter if you can’t patch it.


> You act as if there is a difference.

There is a large difference. One is an automatic app update while the user continues working. The other requires the user to stop everything they're doing and reboot their device.


Or the user can just tell it to update later on when they aren’t using it.

With the benefit that all necessary components are updated together, and that Apple can push out any updates worldwide without waiting on the carriers...


> Or the user can just tell it to update later on when they aren’t using it.

This is how devices stay vulnerable.

> With the benefit that all necessary components are updated together

The whole app is already updated atomically. There is no benefit here.

> and that Apple can push out any updates world wide without waiting on the carriers

The same as a Pixel or Android One device. The only difference is that app security updates are artificially slower on iOS due to poor design, and for apps like browsers, this is a fatal flaw.


> This is how devices stay vulnerable.

As opposed to most Android phones that never get system updates? As opposed to Apple releasing an update two weeks ago for all iOS devices back to 2011?

> The whole app is already updated atomically. There is no benefit here.

The Safari app is also used as an out of process web view for other apps as is the messenger app...

> The same as a Pixel or Android One device. The only difference is that app security updates are artificially slower on iOS due to poor design, and for apps like browsers, this is a fatal flaw.

It’s estimated that Google may sell 1-2 million phones a year and Android One phones are not much more ubiquitous. Even then Google only promises updates for two years.


> As opposed to most Android phones that never get system updates?

Don't buy them. Problem solved. Do you avoid Linux entirely because there exist Linux-based routers that are never updated? No, you buy Linux-based routers that are updated.

In this case, the choice is between properly updated Android phones, poorly updated userspace iOS phones, and poorly updated base system Android phones. The obvious choice is a phone from the first group.

> The Safari app is also used as an out of process web view for other apps as is the messenger app...

As is Chrome on Android. Since Android is designed in a way that apps can gracefully recover from arbitrary processes being killed, this does not matter. Chrome gets updated, the process restarts, and the page the user was viewing in the web view reappears. If the app wasn't in the foreground, the user won't even notice.
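For anyone curious what "gracefully recover from arbitrary processes being killed" looks like in practice, here is a minimal sketch of the standard saved-instance-state mechanism; the class name and bundle key are purely illustrative, not anything Chrome actually uses:

    import android.os.Bundle
    import androidx.appcompat.app.AppCompatActivity

    // Minimal sketch: an Activity that survives its process being killed
    // (for example while an updated component is swapped in underneath it).
    // The framework hands the saved Bundle back when the process restarts.
    class ArticleActivity : AppCompatActivity() {

        private var currentUrl: String? = null  // illustrative state

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            // Restore what the user was looking at if the process was killed earlier.
            currentUrl = savedInstanceState?.getString("current_url")
        }

        override fun onSaveInstanceState(outState: Bundle) {
            super.onSaveInstanceState(outState)
            // Persist just enough state to rebuild the view after a restart.
            outState.putString("current_url", currentUrl)
        }
    }

Because every well-behaved app implements something like this, killing and restarting a process during an update is invisible to the user.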


So there are approximately 2.5 billion Android devices in the world, less than 2% are sold by Google, and they are the only ones getting updated - and you don't think that's a problem?

But yet every single Windows PC sold by any vendor can still get updates directly from Microsoft.

> In this case, the choice is between properly updated Android phones, poorly updated userspace iOS phones, and poorly updated base system Android phones. The obvious choice is a phone from the first group.

You are really claiming that Android has a better update strategy than iOS and is more secure? Which Android phones from 2011 are still getting updates? 2013? 2015? Heck 2017?


> you don’t think that’s a problem?

It's a problem, just like the routers that aren't getting updated. It's not my problem.

> You are really claiming that Android has a better update strategy than iOS and is more secure?

Yes. I've already explained why, and you haven't refuted it.

> Which Android phones from 2011 are still getting updates?

I don't use eight year old phones, so this doesn't matter to me. If you use old phones, you could argue that iOS is marginally more secure than the Android options; but that argument is irrelevant to the purchase decisions of 99% of the people here who do upgrade devices regularly for whom there are Android options that are much more secure than iOS phones.


> If you use old phones, you could argue that iOS is marginally more secure than the Android options; but that argument is irrelevant to the purchase decisions of 99% of the people here who do upgrade devices regularly for whom there are Android options that are much more secure than iOS phones.

The average replacement time for cell phones in the US is 32 months.

https://www.npd.com/wps/portal/npd/us/news/press-releases/20...

8 months longer than Google has promised updates.

https://www.digitaltrends.com/mobile/what-is-android-one/

And that’s only with Android One phones. Most Android phones never get updates or are rolled out slowly waiting on the OEM and carrier.


> The average replacement time for cell phones in the US is 32 months.

That is not my replacement cycle nor the replacement cycle for most of the readers of this forum. It has no bearing on my purchase decisions nor the purchase decisions of most of the readers of this forum. For people who upgrade regularly, which is a group that includes me and most of the people on this forum, Android One and Pixel devices are more secure than iOS devices, and you appear to agree.

> 8 months longer than Google has promised updates.

Android One phones get security updates at least three years after release.


> That is not my replacement cycle nor the replacement cycle for most of the readers of this forum.

Well as long as it caters to you and the rest of the people on HN (have you done a survey?), I guess that’s all that matters - not the other 2 billion people in the world....

> Android One and Pixel devices are more secure than iOS devices, and you appear to agree.

Android One phones still have to wait on the manufacturer to update their phones. Yes, but they pinky promise they will. From the article I posted.

I've never had to wait on a manufacturer to get updates for my Windows PCs. Heck, I still get updates for my Mac Mini running Windows 7, and Apple definitely had nothing to do with it. Why is the Android architecture so piss poor that they can't figure this out? This - an OS vendor licensing to OEMs and providing updates - has been a solved problem for PCs for well over 30 years.

From the earlier article I posted.

> While updates do still have to go through each phone's manufacturer, there's much less to check and update, so updates will generally arrive much faster. It won't be a day one patch like you'd expect on the Google Pixel range.

> Each Android One phone is guaranteed to get at least three years' worth of security updates from its release date, and up to two years of major Android releases, too.

> Android One phones get security updates at least three years after release.

The iPhone 5s (2013) received 5 years worth of OS updates.

The 4s (2011) just received a bug fix earlier this month.

The 6s (2015) is still a more performant phone than any midrange Android phone released this year and can hold its own against high end Android phones that are two years newer. It would be a pity to replace it if it were an Android phone just because Google couldn’t figure out how to update third party devices. My son is still using it.


> I guess that’s all that matters - not the other 2 billion people in the world....

I already explained the choices. For us, the obvious choice is a properly updating Android device. Any user who chose an iPhone or non-updating Android phone made a poor security choice. Any user who has a longer than three year upgrade cycle has no good options unless they use a community-maintained Android build.

> Android One phones still have to wait on the manufacturer to update their phones. Yes, but they pinky promise they will.

They are guaranteed monthly security updates. If you have an example of one that hasn't had monthly security updates, that would be a breach of contract with at least the user and possibly with Google who certified the device as Android One.

Windows updates aren't guaranteed to work with arbitrary device manufacturers' custom drivers.

> [Irrelevant stuff about how long iOS devices are updated]

The comment you replied to was a correction to your claim about how long Android One devices are updated. That is the maximum period a user can get a secure device for because we have already established that all alternatives have non-working security update systems.

>The 6s (2015) is still a more performant phone than any midrange Android phone released this year and can hold its own against high end Android phones that are two years newer.

You have conceded that iOS is worse for security, so now you want to argue about performance. Android has iOS beat there, too. Here is a midrange Android phone one generation older than the iPhone 6 beating it at the most common task for phone users — opening apps: https://youtu.be/hPhkPXVxISY

Here is a midrange Android phone of the same generation as the iPhone 6s beating it in the same test: https://youtu.be/B5ZT9z9Bt4M

Of course if you want to get off topic, a more interesting discussion than performance is usability, and Android is multiple generations ahead of iOS for what you can do with it and has been since at least the Verizon Droid, which came with driving navigation and voice control.


> You have conceded that iOS is worse for security, so now you want to argue about performance. Android has iOS beat there, too. Here is a midrange Android phone one generation older than the iPhone 6 beating it at the most common task for phone users — opening apps: https://youtu.be/hPhkPXVxISY

I'm not arguing performance for performance's sake. I'm arguing that a four-year-old phone is still performant compared to many newer Android phones, and it is getting both security updates and OS upgrades 24 months and 12 months longer than the tiny percentage of Android phones that get either. It also doesn't have to wait for a third-party OEM to decide to push updates.

I’m also criticizing Google for not knowing how to push updates to phones running its operating system without OEM intervention - something Microsoft figured out 30 years ago with PCs.

But you don’t need to speculate how fast iOS users update their phones.

There are plenty of sites showing how many iOS users have updated operating systems compared to Android users:

https://www.forbes.com/sites/ianmorris/2018/04/13/android-is...

So do you have a cite showing that a larger percentage of Android users are running an up-to-date OS?


But iMessage is too low of a priority for them?


Is there any indication that the flaw affects iOS versions less than 12?


It also absolutely discloses its shit after they patch it.


A security company doing this (disclosing Apple/MS vulnerabilities) wouldn't be going after its competitors.

> you're being biased solely because it's Google and not someone you like.

We should be biased against Google, in everything.


Vuln reporting is free work. It helps companies. This isn't "going after" people.


Why wouldn't a team of vulnerability researchers... research vulnerabilities?


Of course I'm biased. Google is a huge multinational corporation with a vested interest in keeping me on their platform. They're not researching vulnerabilities in Apple's products out of the goodness of their hearts, and although I agree that they're probably doing the right thing in this case, the fact that they're releasing the vulnerabilities at all probably has some ulterior motives.


This is you, projecting, based on how you would operate. Unless you have actual evidence, don't put that on them.


Project Zero also regularly publishes on flaws in Google's own products. Check out https://googleprojectzero.blogspot.com: they do a fair amount of reports on Chrome, ChromeOS, Android, etc.


They spend millions to find flaws in popular software projects, some of which are their own.


Project Zero regularly exposes flaws in Google products, as well as non-competitor's products (e.g. AV, Tavis Ormandy's AV rants are always enjoyable).

I'm not a fan of Google at all and don't use their products unless I absolutely have to, but everything I've seen so far about P0 has been stellar technically and ethically.


I find it all funny because one of the biggest reasons people don't switch from iOS is iMessage. It is nice to see that Google is helping Apple find the exploits, and not exploiting them itself, as far as we know.


Good. Because the competitors aren’t.


It is the entities spending millions of dollars to find exploits but then not disclosing them that we should be concerned about. This makes Apple stronger not weaker.


It really is brilliant. As others have pointed out, Project Zero finds vulnerabilities both in Google's own products and in its competitors' products.

There is no reason to assume Project Zero is biased, if one considers the second-order effects of this security research.

What kind of press releases get picked up by the media 9 out of 10 times? Not the ones about Google finding flaws in Google products. Google just makes clever use of the media's bias for conflict.


So that they are fixed? What is wrong with you people?


Sounds great for consumers.


Perhaps their competitors should spend money to expose flaws in Google's products, too. That would be better for everyone.


They are spending millions to do free security research for their competitors.


That's why vulns have been reported in all sorts of products that have nothing to do with Google's offerings and also in Google's own products, right?


I’m not sure why you received downvotes. Looks like people are a bit sensitive on this subject. You are exactly right.


Better Google than Russia.


Or China, where the iPhone is wildly popular (even if the popularity has been decreasing recently).


I'm far more afraid of Google.


If Google is responsibly disclosing the issues to vendors, why? I would find it hard to believe totalitarian governments like Russia or China would do the same.


Well of course they are. So what?

Hopefully Apple does the same thing and obliges Google to operate with a whole lot more security. (Not that anyone should use Google in any case because of the industry leading flagrance of their privacy issues. But I digress.)


Would be nice if Google finally came out with their own iMessage-like service that texted over the Internet instead of 30-year-old SMS.


The recent barrage of security bugs in iOS makes me wonder if Apple has been more lenient on their security posture in recent times.

It also shows that Google Project Zero is very successful in marketing their work. There are several other players reporting security bugs in iOS regularly, I see Tencent KeenLab, Pangu, Checkpoint, GaTech SSLab in the last two releases to name a few, but very few have achieved similar recognition as GPZ.


I think it's clear that Apple security hasn't been as good as everyone would like. Remember the disastrous macOS login bug?


A fish, a barrel, and a smoking gun.


You are getting downvoted because the kids don't get the reference.


No. You're both being downvoted because it didn't really add anything to the conversation. It was just filler.


Hard to tell what's really going on here from this article, although it seems like five vulnerabilities were fixed and one remains (and Google is being unusually patient about the sixth issue).

One thing I've always struggled with is the strategy of these white hat teams. I'm sure Project Zero spends a lot of time on Apple because Apple is an enormous company, a large partner, and a competitor in some spaces.

So now I wonder: does the release of vulnerabilities ever get affected by business agenda?

I assume it has to, although I'm not sure of the agenda here. In this case, iMessage is in direct competition with a Google SMS protocol (although Google's hasn't gained much traction). Maybe the vuln is less impressive than saying, "there's one more"?


iMessage is better software now than it was before this disclosure. I don't think that logic really works. If Google wanted to be evil and damage Apple, they would have leaked the vulnerabilities to the press and not bothered to work with them for a fix. If they wanted to be really evil, they could have leaked them to black hats first and waited for a real zero day before talking to the press.

Instead, they worked together to fix the bugs. This is exactly what we want, there is no better resolution.


These guys are the best experts in the world and Apple is getting free work done. Who cares if Apple is angry, Google made their software slightly safer and probably put the iMessage dev team on their toes (as they should be).


Why would Apple be angry?


Apple carefully controls its messaging to its customers. With Google able to disclose software security flaws, and Apple not able to keep up in fixing them (some non-trivial), it puts some control into the hands of Google.


Doesn't the article say that 5/6 were fixed? And that the 6th isn't disclosed yet, as the 90 day period isn't up yet? Seems like Apple is keeping up reasonably well, wrt disclosure windows.


That doesn't mean they aren't upset. Two years ago I spoke with someone on the CoreOS team at a conference, and he expressed dissatisfaction with Project Zero. It forces them to halt other work they are doing to design fixes for these bugs, as they often get pressure from Google close to the 90-day window. One fix, he explained, related to process management and would affect all OS builds. Getting that fix integrated into and tested on all recent macOS and iOS versions takes time, despite the 90-day window.


The big headlines about vulnerabilities in iMessage being misunderstood by laypeople?


[flagged]


Good doesn't mean perfect.


When was this?


The Snowden leaks confirmed that Gmail was compromised under PRISM: https://en.wikipedia.org/wiki/PRISM_(surveillance_program)

They now claim it's secure again because they are encrypting internal traffic. It's such a high-priority and centralized target, with so much valuable intelligence, that any such claims have to be taken with a grain of salt.


PRISM was the one with FISA warrants, where the government lawfully acquired data from Google via formal processes.

MUSCULAR was the one where the NSA tapped Google's inter-datacenter fiber lines in order to spy on their internal traffic.

https://www.washingtonpost.com/world/national-security/nsa-i...


Naturally it's hard to keep track of all their cute code names; at the end of the day, all the data ends up queryable by analysts with nothing resembling a warrant process.


PRISM wasn't a vulnerability; Google (et al.) was basically forced to build a system to automatically handle FISA warrant data requests. For example, a single warrant signed in a secret court would request all data for a person of interest plus one or more hops of every person they communicated with, which got funnelled through the PRISM system to multiple tech companies in the US simultaneously, and then got fed back into XKeyScore for agents to go through the data and do graph analysis.

Which is still really bad but not the same as these software vulnerabilities.


Not only is it encrypted -- in transit and at rest -- Google uses its own silicon and has its own fiber.


Project Zero, as evident from their bug tracker¹, is a Chrome security effort. It looks at everything in the browsing stack — Chrome, libraries, plugins, OS, processors, proxies — presumably because security can be broken anywhere in the chain.

¹ https://bugs.chromium.org/p/project-zero/issues/list?can=1


And they keep collecting exploits thanks to memory corruption bugs.


How did you make the little ‘1’ ?

^1


The wonders of Unicode: "Superscript One" has codepoint U+00B9.
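If you'd rather generate it than hunt for a keyboard shortcut, a throwaway sketch (Kotlin here, purely for illustration) shows it's just an ordinary codepoint escape:

    // Print "Superscript One" (U+00B9) by its codepoint escape.
    fun main() {
        println('\u00B9')    // as a single character: ¹
        println("x\u00B9")   // or embedded in a string: x¹
    }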


But in the hn reply box, how do you type it?


On a representative contemporary *nix desktop: <Compose> ^ 1


Chrome doesn’t run on iOS. They’re doing this because they want to, not because it would somehow make Chrome more secure.


There is a Google Chrome for iOS - though it doesn't use Chrome's own rendering engine.


Yes, so it's not relevant.


Project Zero works for the manufacturer of the largest data exfiltration vector in human history and doesn't seem to be making meaningful progress on fixing that.

All their bug reports come with a bad taste in my mouth.


I'm a citizen of a country that is likely responsible for more preventable deaths than any other.

Guilt by association is in fact not completely unjustifiable. But either it attenuates quickly, or every human you know must be shunned for ethical reasons.


I'm not judging the people who like finding exploits for a living; I'm judging Google for employing them while the rest of Google's business model is directly opposed to user security.


You’re upset that a deep-pocketed company is funding security research?

I have no love for Google but I’m extremely happy they found vulnerabilities that could be patched on my phone before someone else did.


Privacy and security are different concepts.


Would you prefer that they not find or report these bugs, and let them remain unfixed?


Project Zero isn't Facebook.


Project Zero has always been disguised marketing, and IMHO an extremely nasty form of it. I have no doubt they plan coordinated releases like this on a regular basis.

(These downvotes are confusing. Do you disagree that it is marketing? That their approach is brutal? That they plan this regularly?)


Giving a company 90 days to fix a problem that may be currently exploited, harming end users, seems nasty to you?

We should all be so lucky as to have Project Zero handing us free bug reports like that. Responsible companies PAY for bug reports on their products. Google is handing them over for free.


Sure, but at the same time, when Google announced that Google+ had a huge security breach of 52M accounts, they didn't publicly disclose it until well after the fact because they didn't think it was serious enough. I wish Google would follow their own principles.


The most recent post on Project Zero's blog is about a Chrome vulnerability: https://googleprojectzero.blogspot.com/2019/05/trashing-flow...

And it had the exact same automatic 90 day disclosure applied: https://bugs.chromium.org/p/chromium/issues/detail?id=944062

"Please note: this bug is subject to a 90 day disclosure deadline. After 90 days elapse or a patch has been made broadly available (whichever is earlier), the bug report will become visible to the public."

In fact they've reported a lot of Chrome vulnerabilities: https://bugs.chromium.org/p/project-zero/issues/list?colspec...

And Android ones: https://bugs.chromium.org/p/project-zero/issues/list?colspec...

Hey, look at that! Equal treatment for all.


Do you mind rephrasing that to make Google seem eviler? I have a confirmation bias to fulfill.


By that standard, literally no company in the industry is following these principles, because internal findings are not routinely disclosed. Internal vulnerability researchers have access to information outsiders don't, so you can imagine, the bugs you're not hearing about are pretty lurid. Every major tech company in North America spends millions annually on third-party software security tests; did you think these just weren't turning things up? What did you think was happening to the reports?


For what it's worth, Mozilla routinely discloses internal findings, subject to the same policy as external findings: the bug report is opened up once the fix has shipped to a sufficient fraction of users.

So it's not "literally no company". ;)

Disclosure: I work for Mozilla and I have reported a number of security bugs on our code, the vast majority of which are now public.


Mozilla certainly discloses more than other vendors do, but I'm talking to Mozilla security team members about this now, and maybe one of them can jump in here and correct me, but I don't think they can claim that all their internal findings are reliably (and meaningfully, in advisory form) disclosed.

Regardless: that's a good point. I should have said, public disclosure of internal findings is not an industry norm. Mozilla is a good counterexample to the argument that everyone close-holds internal findings.


That's a good point about advisories. All the findings are public eventually in the form of non-hidden bug reports, but not all may have advisories issued. Doubly so if the finding happens before the affected code had first shipped in a release (so buggy code gets checked in, then internal testing finds a security bug in it before it ships, and that bug is fixed).


I don't think that is the point, though, so much as that Google has one standard for its internal findings and another for Project Zero, which also deals with other companies, with the justification that it is better. Mozilla doesn't audit other companies, so what they do with their internal findings isn't relevant to that argument. One can of course argue whether it is good, or justified, or not. But I don't think that changes the fact that there is an argument there. If someone wanted to sue Google (ha!) over a Project Zero disclosure, that is likely something they would try to argue: that Google knows that disclosing has consequences.


I could be mistaken, but I think internally reported issues that don't make it to release aren't assigned CVE numbers, which might be what he means by "disclosed".

Of course, as you say, we do rate almost all security issues, and eventually make them public, so the information is only a bugzilla search away! https://bugzilla.mozilla.org/buglist.cgi?keywords=sec-critic...


Google's goal with Project Zero is supposedly to raise the stakes in security. I'm happy they're doing it, but if they're going to enforce a non-negotiable 90-day public disclosure policy, it leaves a bad taste in my mouth when Google itself doesn't care to follow that for their own services.

Project Zero has long maintained that any serious company should be able to meet a 90-day disclosure timeframe, and yet here comes Google+...


Project Zero was not the group that discovered the G+ vulnerability, though. Project Zero's terms do not bind other teams within the company who have not agreed to them.



The Google+ API issue wasn't a breach, but a vulnerability that was identified and patched within a week, before being announced publicly.

How is that different?


This is complete whataboutery.

Disclosure is one thing, remediation is another. The former is only instrumental to the latter. In the G+ leak, remediation was swift; so disclosure was not required.

(Btw, the affected accounts were 500K, reportedly, not millions.)


That was not discovered by P0. If P0 found it, there is no reason to believe they wouldn't have disclosed in 90 days.


Apple has a bug bounty program, yes? Are they paying Google for these?


Project Zero does not accept bounties. They generally ask for the money to be donated.


Makes sense. The bug bounty is meaningful money to an individual but it's just a pittance to Google.


I'd assume it also helps avoid the perception of a conflict of interest.


Apple's program has a few very specific classes of bugs that they pay out bounties for: these bugs probably don't qualify.


Probably not. I think that most of those bounties can only be redeemed when you sign an NDA.


Who requires an NDA? I don't believe Google does: https://www.google.com/about/appsecurity/reward-program/

(Disclosure: I work for Google)


I meant the NDA from the party where the bug is reported, Apple in this case.


Yeah, all they get in return is the chance to shit on competitors and possibly leave their customers open to harm, while getting lauded by the people who still think Google isn't evil.


The company with the bug is leaving their customers open to harm.

And Project Zero has notified that company of their problem.

If the company fixes their problem in a reasonable amount of time, then it's a Win-Win-Win. The users, the company, and Project Zero all win.

If you blame anyone but the companies that can't fix their bugs in a reasonable amount of time, then your priorities are dead wrong.


So what. I don't personally care if a company's marketing is affected - we as consumers have the right to know if iMessage or other protocols aren't secure. This is in the public interest and I'm glad Google's doing this. Apple can start their own Project Zero investigating Android if they want.


I agree that it's basically marketing, but what is the societal harm? That it makes Apple look bad?

Remember, it's not like others would stop looking for these exploits if Google did.


You're assuming they're exposing the bugs for everyone's benefit when that might just be a side effect.

Does Google harming the reputation of a competitor for its own advantage not cause some societal harm? Or are we still pretending that some businesses are working in our best interests?


> Does Google harming the reputation of a competitor for its own advantage not cause some societal harm?

If they're harming the competitor's reputation by exposing a legitimate flaw in the competitor's product, I don't think that causes societal harm, no.

Apple could open up their own Project Zero, if they wanted to. Then you'd have two competing companies making each other better, which sounds to me like the ideal of the free market.


> Apple could open up their own Project Zero, if they wanted to

Need the right people and perhaps just as importantly the right internal politics.

Lots of businesses struggle with the idea that when the Big Boss says something false it might be OK for a lowly employee to contradict them. I expect that even if Tim Cook thinks he'd be OK with hearing from an Apple engineer that their new product is garbage, Tim's immediate reports will ensure that engineer is fired before news reaches Tim so he thinks it never happens.

What you want in a good company is the CEO takes the bullet. Something bad happened? That's my fault, the buck stops here, I will make sure we do better next time. Big loss? Cut my salary and zero all executive bonuses until we turn it around.

What you see most often is throwing employees under the bus. Something bad happened? We fired the people responsible, I'm putting somebody else on this (read: I am preparing to throw this new person under a bus too). Profits a bit less than anticipated? Fire 1000 people essentially at random to show I'm focused on the problem.


>they're harming the competitor's reputation by exposing a legitimate flaw in the competitor's product, I don't think that causes societal harm, no.

Well it’s not necessarily that simple. Exposing a flaw without adequate time to develop a fix could cause net societal harm. This is especially true if it’s a bug that would have been discovered and fixed internally without any public disclosure.


Overall, sure, but Project Zero follows responsible disclosure.


Calling something "responsible" doesn't make it so. When Google first started this "responsible" disclosure in October of 2014 with Microsoft, Microsoft had a fix set up to be released on Patch Tuesday and asked Google if they could wait to disclose it until then. A mere two days. Google refused and released the details on Sunday.

How was releasing the details 2 days early responsible or beneficial? At best it got customers worked up and made them question Microsoft's patch policies.

Do you think in the intervening 2 days anyone took any actions knowing the patch would arrive Tuesday?

Google hides behind "responsible disclosure" as an excuse for using Project Zero tactically to do PR damage to competitors.


> If they're harming the competitor's reputation by exposing a legitimate flaw in the competitor's product, I don't think that causes societal harm, no.

The act of rapid public disclosure compels the target to shift resources and focus to respond to those potential dumps. This can negatively impact the company strategically and put them in damage control mode.

In the case of Apple, they're not the dominant platform and are trying to pivot to be seen as the secure and private platform. Google is damaging their credibility with that pivot by investing in finding vulnerabilities in their products and rapidly disclosing them.

Short term this could improve the product, but long term it could damage Apple's reputation, further diminishing their market share and solidifying Google's.

If Google were funding an independent research team tasked with securing the internet and platforms for the greater good, that would be fine. But that isn't Project Zero. Project Zero is a weapon wielded by a company trying to protect its monopoly.


Yes, but if Apple is trying to "be seen as the secure and private platform", then really, from a consumer's point of view, they should be diverting resources to being secure and private.

The fact that this is possibly two-faced of Google doesn't change the fact that it is a net good if Apple is sincere in their pivot, because they'd want these bugs dealt with anyway and they get them highlighted for free. If Apple just wants to be "seen" as secure and private without actually making it so, then it's good that it's being exposed as hollow words.

You 'may' have a point with smaller competitors to Google, but really, Apple is a large enough target that there are other capable threats targeting them who will use these vulnerabilities for worse than just keeping Apple in line with its marketing material.


I don't understand why this is an either or type scenario. Apple should be focusing on security as you've stated, AND Google uses Project Zero as a tactical weapon.


OK. In your mind, what's the ethically correct way to do security research into a major company's products and disclose what you might find?


An independent nonprofit organization with a clear mission statement and no ulterior motives. Not Google employees operating under the oversight of Google management.


Would the results still be ethically clear in your mind if this nonprofit with a clear mission statement received significant funding from Google?


Like Mozilla? I think if such an organization existed, I would hope that it recognized the conflict of interest in such an arrangement and be working to clarify or rectify the arrangement.


What's nasty about what they're doing here?


They pay a team to embarrass competitors. The technical aspect is a small part of what is happening here.



I wish my competitors "embarrassed" me by helping to improve my software for free.


So you don't view this as Apple getting free whitehat testing and a chance to not get hacked? I think there are two viewpoints you could take here and I think the truth is directly in the middle.


Competitors should be embarrassed when it concerns security flaws. It's one of the best ways to generate media buzz and inform customers about the flaws, and also their consequences.


There are many possible motivations for Project Zero, and the reality is that more than one is likely responsible for the inception and ongoing sponsorship of the team's membership and activity.

What made you settle on this specific one?


Why are the downvotes confusing? You are making completely unsubstantiated claims.


Brutal?

90 days is unbelievably conservative. It's frankly ridiculous. Imagine you found weaknesses in a bridge. 90 days to disclose would be insane.


> these downvotes are confusing.

You should check out

https://news.ycombinator.com/newsguidelines.html

for some answers.


90 days is enough to divert your next sprint from "Build new 3D emojis" to "fix critical bugs". Of course it's not the same thing, but in the end: same company, same budget, just a question of priorities... Project Zero is the stick. I still don't know what the carrot is.


It isn’t marketing and it isn’t brutal. It’s closer to charity.



