
Both are Google - from an outside view we shouldn't distinguish. Google should hold itself to a consistent bar.

It highlights how divisions at Google operate in silos: Project Zero generates a lot of positive security marketing for Google, but the quality bar doesn't seem to be consistently high across the company.

Also, please don't forget this is still not fixed.



Funny thing is I agree with you that Google should hold itself to that bar, but I don't agree as to Project Zero being the reason. I think we very much should distinguish Google from P0, and that P0's policy should be irrelevant here; their entire purpose is to be an independent team of security researchers finding vulnerabilities in software, indiscriminately. It seems a number of others here feel similarly (judging by the responses), and ironically their support for the position is probably being lost by dragging P0 into the conversation.

The reason I think Google should hold itself to that bar is something else: Google itself claims to use that bar. From the horse's mouth [1]:

> This is why Google adheres to a 90-day disclosure deadline. We notify vendors of vulnerabilities immediately, with details shared in public with the defensive community after 90 days, or sooner if the vendor releases a fix.

If they're going to do this to others as general company policy, they need to do this to themselves.

[1] https://www.google.com/about/appsecurity/
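
For what it's worth, the policy in that quote is mechanical enough to write down. A minimal sketch of the deadline arithmetic as I read it, in Python (purely illustrative, not any actual Google tooling, and ignoring grace-period extensions):

    from datetime import date, timedelta

    DISCLOSURE_WINDOW = timedelta(days=90)  # the 90-day window from the quoted policy

    def disclosure_date(reported: date, fixed: date | None = None) -> date:
        """Details go public 90 days after the report, or sooner if a fix ships first."""
        deadline = reported + DISCLOSURE_WINDOW
        return min(deadline, fixed) if fixed else deadline

    # A bug reported on 2021-01-01 and still unfixed goes public on 2021-04-01;
    # the same bug fixed on 2021-02-15 could be disclosed then instead.
    print(disclosure_date(date(2021, 1, 1)))                     # 2021-04-01
    print(disclosure_date(date(2021, 1, 1), date(2021, 2, 15)))  # 2021-02-15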


Are you suggesting Google make all unfixed vulnerabilities public after 90 days? Would that apply even if the finder does not want them to become public, or just as an opt-out sort of thing?


I'm only suggesting Google needs to fix everything within 90 days (and disclose afterward, since they consider that standard practice), so they don't have unfixed vulnerabilities past that point. I don't really have opinions on what policies they should have for cases where that isn't followed, though I think even having a policy for that case encourages it not to be followed in the first place.


Vulnerability deadlines are disclosure deadlines, not remediation deadlines. There are plenty of vulnerabilities that can't be fixed in that time, and I think it's fair for the public to know about them rather than keeping them secret forever.


"Fair to the public" was neither intended to be nor is the concern. Their stance has always been "better for security" and disclosing an unpatched vulnerability is generally worse for security unless you believe it'll encourage people to fix things by that deadline.


In this case, knowing about this vulnerability allows you to take corrective action. Even if Google cannot fix the root cause, that doesn't necessarily mean there aren't mitigations an end user can apply manually (yes, it sucks, but it's still better than getting hacked).


When users can mitigate it, I agree with you (I forgot about that case in the second half of my comment), but there have also been cases where users weren't able to do anything and they disclosed anyway, so that doesn't explain the policy.


Insecurity is invisible. Users have no way to know the weaknesses in the software they use until it's too late. Disclosure is meant to make it possible for users to see what weaknesses they might have so they can make informed decisions.

Users still benefit from knowing about issues that can't be fixed (think of Rowhammer, Spectre, and similar): as these attacks become more practical (e.g. https://leaky.page or Half-Double), they can adjust their choices accordingly (switching browsers, devices, etc.) if the risk they impose is too high.

Of course (using an analogy for a second), some could say that it would be better for people never to find out that they are at increased risk of some incurable disease, because they can't do anything about it.

But for software, you can't make individual decisions like that. Even if one person doesn't want to know about vulnerabilities in the software they use, others could still benefit from knowing about them, and the benefit of the many trumps the preferences of the few.

That is, unless the argument is that it's actively damaging for all of the public (or the majority) to know about vulnerabilities in the software they use. If the point is to advocate for complete, unlimited secrecy, and for researchers to sit on unfixed bugs forever, then that's quite an extreme view of software security and vulnerability disclosure (but one that some companies unfortunately still follow).

Disclosure policies like these aim to strike a balance between secrecy and public awareness. They put the onus of disclosure on the finder because it's their finding (and they decide how it's shared), and because finders are more independent than the vendor, but I could imagine a world in which disclosure happens by default, by the company, even for unfixed bugs.


I assume they haven't fixed it yet because they don't consider it to be severe enough to prioritize a fix.

So the reporter waits >90 days, then publicly discloses. Isn't this exactly how it's supposed to work?



