
My main point is that the protocol as published is completely unrelated to the prank scenario; that's simply out of scope. The protocol does not prescribe who is able to report certain Diagnostic Keys that have tested positive. In a centralised deployment, which is likely under the current German reporting chain for infectious diseases, mrPrankster has no capability to falsely report a positive test result: you have a trustworthy central stakeholder that can provide a ground truth. At the very least, reporting could be designed to be revocable (a step that would be necessary for false positive test results anyway).
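To make that concrete, here's a rough sketch of what I mean by a central stakeholder gating reports and being able to revoke them. This is purely my own illustration, not anything from the published protocol, and all of the names are made up:

    import secrets
    from dataclasses import dataclass, field

    @dataclass
    class DiagnosisKeyRegistry:
        """Toy model of a central key server run by a trusted health authority."""
        reports: dict = field(default_factory=dict)   # upload token -> set of keys
        revoked: set = field(default_factory=set)

        def issue_upload_token(self) -> str:
            # Only handed out by the health authority after a confirmed positive
            # test, so a prankster without a token cannot report anything.
            token = secrets.token_hex(16)
            self.reports[token] = set()
            return token

        def upload_keys(self, token: str, keys: list) -> bool:
            # Reject uploads that are not backed by an authority-issued token.
            if token not in self.reports:
                return False
            self.reports[token].update(keys)
            return True

        def revoke_report(self, token: str) -> None:
            # Withdraw a report, e.g. after the test turns out to be a false positive.
            self.revoked.update(self.reports.pop(token, set()))

        def published_keys(self) -> set:
            # What clients actually download and match against.
            published = set()
            for keys in self.reports.values():
                published |= keys
            return published - self.revoked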



> “simply out of scope” (of the protocol)

But it is in scope for the framework, wouldn't you say? If we want to evaluate the privacy aspects, it's important to understand the whole system.

First you said it's a complete non-issue, and now you're saying we actually need to make serious tweaks here and there. That's fine.

> “The protocol does not prescribe who is able to report certain Diagnostic Keys that have tested positive.”

It heavily implies, though, that it is a decision made by the user. It says the keys never leave the phone, and it also says that the keys get uploaded with the user's consent. Maybe what they actually meant is that the keys get uploaded alongside a cert signed by the local health authorities, or that when you get tested the health authorities extract something from your phone and report using that themselves. But it very much sounds like this is also a very important part of the protocol then.
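If it's the signed-cert variant, I'd imagine something roughly like this on the server side. Purely illustrative: the shared secret and function names are made up, and a real deployment would presumably use asymmetric signatures rather than an HMAC:

    import hashlib
    import hmac

    AUTHORITY_SECRET = b"secret known only to the local health authority"  # made up

    def attest_keys(keys: list) -> bytes:
        # Run by the health authority once a positive test is confirmed.
        digest = hashlib.sha256(b"".join(sorted(keys))).digest()
        return hmac.new(AUTHORITY_SECRET, digest, hashlib.sha256).digest()

    def accept_upload(keys: list, attestation: bytes) -> bool:
        # Run by the key server: only uploads carrying a valid attestation
        # from the health authority make it into the published key set.
        return hmac.compare_digest(attest_keys(keys), attestation)

    # A self-reported prank upload without the attestation is simply rejected:
    # accept_upload(prank_keys, b"\x00" * 32)  -> False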


I don't feel like I'm contradicting myself there. Yes, the prank scenario would be in scope for the overall system or framework, sure. Pointing it out as a leak / flaw of Apple and Google's proposal is counterproductive in my mind, though, since a) it can easily be tackled in those other parts of the framework, and b) we don't even have a specific single framework to talk about on that particular matter, so it makes little sense to spread FUD about it.

> But it very much sounds like this is also a very important part of the protocol then.

That might honestly be arguing semantics; the protocol as published suggests restrictions that are beneficial to the end user's privacy, sure. It otherwise does not dictate, for any particular government, country, or region, where the keys are supposed to go in case of a positive test result or how they should be verified and handled. That, in my mind, would again fall into the category of the overall framework that we do not have. What we have today is a manual system that is ineffective and hard to scale. What this adds is a privacy-aware method for tackling a tiny part of a digital supplement to that manual system.

That's why I'm so insistent on in scope vs. out of scope; sorry if that comes across as harsh, but I don't feel it's particularly productive to construct hypothetical overall threat models based on this very limited technical proposal. Scenarios such as malicious distribution of tests are much better examined in the context of a full framework proposal. I can come up with dozens of threat models that include unrelated things; that doesn't mean it's particularly responsible to share them, imho. We're the technical audience that can grasp this, and pointing out potential shortcomings is fine, but they should be grounded in reality.





