Note that years ago, Moxie studied a similar problem: how to let users know whether their contacts use Signal without uploading whole address books the way e.g. WhatsApp does [0]. It's similar because in both cases you want to "match" users in some fashion through a centralized service while preserving their privacy.
He ruled out downloading megabytes of data (something the Google/Apple proposal would imply) and couldn't find a good solution beyond trusting Intel's SGX technology, arguably not really a good solution, but better than not adopting it at all [1].
There's a computation/download/privacy tradeoff here. You can stretch the interval of the daily keys to weeks: that gives you less to download, but the devices have to compute more hashes to verify whether they have been in contact with other devices. Or you can increase the 10-minute identifier rotation to an hour: that means less privacy and more trackability, but also less computation.
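To make the tradeoff concrete, here's a minimal sketch of how the rolling identifiers could be derived from a daily key (the HMAC construction and the "CT-RPI" info string are assumptions loosely modeled on the published crypto spec, not its exact API):

```python
import hashlib
import hmac

ROTATION_MINUTES = 10                              # identifier rotation interval
INTERVALS_PER_DAY = 24 * 60 // ROTATION_MINUTES    # 144 identifiers per daily key

def proximity_identifiers(daily_key: bytes, intervals: int = INTERVALS_PER_DAY):
    """Derive every rolling identifier broadcast under one daily key."""
    for j in range(intervals):
        mac = hmac.new(daily_key, b"CT-RPI" + j.to_bytes(4, "little"),
                       hashlib.sha256).digest()
        yield mac[:16]  # identifiers are truncated to 16 bytes

# Checking one reported daily key costs 144 HMACs. Stretch the key lifetime
# to a week and it becomes 1008 HMACs per key (less to download, more to
# compute); widen the rotation to an hour and it drops to 24 per day
# (less computation, more trackability).
```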
My guess as to why Google/Apple didn't introduce rough location (like US state or county) into the system is that they wanted to prevent journalists from jumping on that detail and sensationalizing it into something it isn't (Google/Apple grabbing your data). Both companies operate the most popular maps apps on the planet, as well as OS-level location services that phone home constantly, so they are already in possession of that data.
Increasing the lifetime of what are currently "daily keys" reduces the precision of the contact reporting. E.g. your example of a week means that a positive user would need to report around 3 weeks' worth of keys, so someone can now do correlation over 3 weeks instead of X days.
There's no inclusion of location data because it has no value. The only thing this protocol cares about is whether you were in the vicinity of someone who has tested positive for COVID-19, so that it can suggest you get tested. Knowing where you are or were has no value for that purpose.
I think he was trying to say you could reduce the computation by narrowing the space-time radius, then searching for matches. Even a state-level restriction would be enough to substantially narrow down the possible matches without sacrificing anonymity.
You don't need full SGX if you trust the provider.
People already trust providers with their medical data. Why not trust some computation service to do the matching? This is a moment for trustworthy institutions to create data centers and get customers by their reputation.
Combine a big market of trustworthy providers and SGX, and abuse becomes much more difficult.
To answer your question: the handling of medical data is governed by HIPAA. Everything else in the US (outside banking data, and outside of California) is pretty much fair game.
> My guess as to why Google/Apple didn't introduce rough location (like US state or county) into the system is that they wanted to prevent journalists from jumping on that detail and sensationalizing it into something it isn't (Google/Apple grabbing your data). Both companies operate the most popular maps apps on the planet, as well as OS-level location services that phone home constantly, so they are already in possession of that data.
Apple is not in possession of the location of your phone. Their mapping system is designed to keep all queries to the servers anonymous using random rotated identifiers, even going so far as to keep the server from being able to see the full route from start to end (IIRC it's broken up into at least two chunks that are requested separately, though I don't know the details).
> To protect user privacy, this data is associated with an identifier that rotates at the conclusion of a trip, not with the user’s Apple ID or any other account information. Rotating the ID at the conclusion of the trip makes it harder for Apple to piece together a history of any user’s activity over time.
I think it's a nice gesture; however, I wouldn't say that Apple isn't in possession of that data. The phone already uses other Apple services that are linked to your Apple ID, and those services reveal your IP address to Apple. Even if Apple can't track you via the rotating ID (not sure how it's generated; maybe they actually can't), your IP address will reveal you, at least as long as you are using IPv6, which Apple has been pushing heavily in recent years.
They might not have the data refined, but even the whitepaper says it only makes piecing together the location history harder, not impossible.
What you quoted is specifically about traffic collection. I don't know where to find a definitive source on this now, but Apple used to have a marketing page that said
> When you use Apple Maps, your route from A to B is fragmented into scrambled sections on Apple servers because nobody else should know your entire route. Not even us. In fact, we don’t even know who requests a route.
My recollection was that the device itself sends multiple requests in chunks to get the route, but I don't know if this is accurate or if it's just fragmented on the server prior to any data retention.
In any case, the point is that Apple very intentionally discards data that can be used to track you, and anonymizes what they do retain. While yes, it's very likely that Apple could figure out where you are if your device is set to use Apple services and they wanted this info, they've set up their services to make it as difficult as possible for them to figure this out.
> Published keys are 16 bytes, one for each day. If moderate numbers of smartphone users are infected in any given week, that's 100s of MBs for all phones to DL.
"Moderate" rate of infections is not millions of new cases per week worldwide. That would be such a catastrophe that contact tracing would be useless.
No, my understanding is you would only download two weeks' worth of keys when a new infection is reported. There is an assumption in any method of contact tracing that once people test positive, they isolate themselves. If they don't, there is no reason to do the tracing, since the virus will simply spread exponentially.
Regardless of the technical issues with this, I think the "prank" issue Moxie brings up is much more serious. We've already seen the phenomenon of "Zoom bombing"; I can imagine "tracer bombing" would be a much more serious issue. The only way I could see this working is if, when you enter a positive result, you have to enter some sort of secret key from the testing authority, but that's not tenable given that a lot of (most?) testing these days is done by private providers.
Why wouldn't the patient provide their framework info (if they so chose) at the time of sample collection? Then the medical authority could report it to the local government on the patient's behalf in the event of a positive test. Other end users then decide which (if any) "reporting authorities" to pull data from and check against.
This also seems to address Moxie's concern about public location data being necessary (unless I've missed something). If I only pull all the positive tests from my local county or state, that should hopefully be a small enough dataset to be manageable even on fairly resource constrained low end devices.
My understanding too was that there was a middleman involved in collecting and distributing the keys, to avoid people spamming the system. You want to be 100% sure it's a positive, and not put the trust in the user. Otherwise random people could just say they have it. The local government would have to submit the keys as you mention and act as moderators for that region.
> The local government would have to submit the keys as you mention and act as moderators for that region.
There's a big difference between a centralized and decentralized model here.
* Centralized: there's a single worldwide API (or only a few) that you need approval to work with. This also hinders interoperability of different end-user app implementations.
* Decentralized: anyone can set up a distribution server and require whatever authentication they'd like for it. A local government, a hospital, the Red Cross, etc. The framework becomes nothing more than a decentralized protocol that can potentially even be repurposed for other novel uses.
For the decentralized approach, bear in mind that there's nothing preventing a third party from hosting and managing a distribution server on behalf of someone else. So (for example) the CDC could host a server (and handle authentication) for a state or county government that didn't feel up to the task.
Another example, say the local hospital has their own database (possibly hosted by the state or Google or whoever). They can feed their (authenticated, locally collected) data to a local authority (the city or county), which only needs to accept data from trusted institutions (ie all the hospitals in the area). They can in turn feed this inherently trustworthy data to a state system, and so on. If each entity in this hierarchy makes their dataset publicly available, then users can independently decide which datasets are relevant to them (perhaps they traveled recently?) and check them on a daily basis.
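As a rough illustration of that pull model, a client could simply merge the daily key batches from whichever distribution servers the user has subscribed to. All URLs and the JSON shape below are invented for the sake of the sketch:

```python
import json
import urllib.request

# Hypothetical servers the user chose to trust.
SUBSCRIBED_SERVERS = [
    "https://keys.example-county.gov/v1/diagnosis-keys",    # local authority
    "https://keys.example-hospital.org/v1/diagnosis-keys",  # local hospital
]

def fetch_diagnosis_keys(day: str) -> set:
    """Pull the day's published diagnosis keys from every subscribed server."""
    keys = set()
    for base in SUBSCRIBED_SERVERS:
        with urllib.request.urlopen(f"{base}?day={day}") as resp:
            for hex_key in json.load(resp)["keys"]:
                keys.add(bytes.fromhex(hex_key))
    return keys  # deduplicated union; matching still happens on the device
```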
It doesn't really matter who hosts the database. I specifically was talking about middleman, as in someone who confirms the person is infected and then takes care of passing 14 days of keys to the server. Where the server is isn't really relevant here, just that the end-user doesn't have direct access to it.
The media reports about the german version of this include getting a one-time code from the health authorities that you have to enter into the app to mark yourself as infected.
As far as I understand, the proposal from Google and Apple is about the underlying framework, but you can set up additional controls a level above in the app and the server infrastructure. So it's likely by design that it doesn't address the issue as the solutions to ensuring only verified cases can trigger alerts must be specific to the local circumstances.
Looking at the Google doc, it seems they're going to restrict it to "medical authorities":
"In order to be whitelisted to use this API, apps will be required to timestamp and cryptographically sign the set of keys before delivery to the server with the signature of an authorized medical authority."
Don't these providers need to be registered somewhere? It should be easy to reach them and provide them with either code-generator software or even printed one-time codes for adding cases to the database.
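For illustration, a diagnosis server could enforce that signed-upload requirement quoted above along these lines. The spec doesn't name a signature scheme, so Ed25519 and the message layout here are assumptions:

```python
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Public keys of whitelisted medical authorities (placeholder key material).
AUTHORITY_KEYS = {
    "example-health-dept": ed25519.Ed25519PublicKey.from_public_bytes(
        bytes.fromhex("aa" * 32)
    ),
}

def accept_upload(authority_id: str, timestamp: int,
                  key_batch: bytes, signature: bytes) -> bool:
    """Accept a batch of diagnosis keys only if a whitelisted authority
    signed (timestamp || keys) and the timestamp is recent."""
    authority = AUTHORITY_KEYS.get(authority_id)
    if authority is None or abs(time.time() - timestamp) > 3600:
        return False
    try:
        authority.verify(signature, timestamp.to_bytes(8, "big") + key_batch)
        return True
    except InvalidSignature:
        return False
```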
Why use a centralized model? Allow users to subscribe to a data source so that any entity can push their own dataset.
This is also important because it keeps the framework usable under a variety of adverse and unusual circumstances. An aid organization operating in a disaster zone or impoverished area could make use of such a framework without needing permission from a higher authority or even reliable internet access to the outside world.
Because it can be abused. The easier you make it to upload, the easier it is for bad actors to upload invalid data and send people into quarantine unnecessarily. It only works well if you can trust the data, so I think it should err on the side of validating the data rather than openness.
In an open model, the extent to which abuse is possible would be determined entirely by the authentication requirements (or lack thereof) imposed by the entity operating the server the user selected. That being said, another commenter linked to a set of specifications (https://news.ycombinator.com/item?id=22836871) which seem to indicate that (at least on iOS) the data source is determined entirely by an app that the user chooses to run on top of the framework.
Many of the issues moxie brings up either don't apply universally or are unrelated to the part this specification touches upon.
Maybe it helps to bring in a non-US perspective here: in Germany, like many other European countries, this becomes a non-issue. We have central authorities that can greenlight a positive test result or invalidate wrong results, immediately making the prank argument completely hypothetical. The question of why this should be centralised is easy: because it already is. I'd honestly expect the reporting chain in the US not to be too dissimilar, at least at the state level.
It's also important to note that all of this only supplements the existing, largely manual, workflow of contact tracing: a very laborious and error-prone task, especially in regions with a large number of infections. These techniques take a massive load off a part of the health system that is notoriously underdeveloped, because it isn't needed at this scale in normal times.
> in Germany, like many other European countries, this becomes a non-issue.
What do you mean by that? The protocol, as published, doesn’t have a role for the central authority. Even if the German state knows that mrSick tested positive and mrPrankster did not, how would the diagnosis server reject the keys published by mrPrankster? They are by design resistant to de-anonymization. In fact the German state can’t even know if a specific key reported as positive for covid belongs to a german resident or not.
My main point is that the protocol as published is completely unrelated to the prank scenario; that's simply out of scope. The protocol does not prescribe who is able to report certain Diagnosis Keys as having tested positive. In a centralised deployment, which is likely under the current German reporting chain for infectious diseases, mrPrankster has no capability to falsely report a positive test result. You have a trustworthy central stakeholder that can provide a ground truth. At the very least it could be designed to be revocable (a step that would be necessary for false positive test results anyway).
But it is in scope for the framework, wouldn't you say? If we want to evaluate the privacy aspects, it's important to understand the whole system.
First you said it’s a complete non-issue, and now you say actually we need to tweak things here and there in a serious fashion. That’s fine.
> “The protocol does not prescribe who is able to report certain Diagnostic Keys that have tested positive.”
It heavily implies, though, that it is a decision by the user. It says the keys never leave the phone; it also says that the keys get uploaded with the user's consent. Maybe what they actually meant is that the keys get uploaded alongside a certificate signed by the local health authorities. Or that when you get tested, the health authorities extract something from your phone and report it themselves. But it very much sounds like this is also a very important part of the protocol.
I don't feel like I'm contradicting myself there. Yes, the scenario of pranks would be in scope for the overall system or framework, sure. Pointing it out as a leakage / flaw of the proposal by Apple and Google is counterproductive though in my mind since a) it can be easily tackled in those other parts of the framework and b) we don't even have a specific single framework to talk about on that particular matter so it makes little sense to spread FUD about it.
> But it very much sounds like this is also a very important part of the protocol then.
That might be arguing semantics, honestly. The protocol as published suggests restrictions that are beneficial to the end user's privacy, sure. It otherwise does not dictate any particular government, country, or region where the keys are supposed to go in case of a positive test result, or how they should be verified and handled. That, in my mind, again falls into the category of the overall framework that we do not have. What we have is a manual system that is ineffective and hard to scale. What this adds is a privacy-aware method to tackle a tiny part of a digital supplement to this manual system.
That's why I'm so insistent on in scope / out of scope; sorry if that comes across as harsh, but I don't feel it's particularly productive to construct hypothetical overall threat models based on this very limited technical proposal. Scenarios such as malicious distribution of tests are much better looked at in the context of a full framework proposal. I can come up with dozens of threat models that include unrelated things; that doesn't mean it's particularly responsible to share them, imho. We're the technical audience that can grasp this. Pointing out potential shortcomings is fine, but they should be grounded in reality.
Not everyone needs to opt in to tracing, and not everyone notified needs to comply with the self-isolation recommendations, for this to reduce the R0. It works even with only partial penetration of the populace.
> So first obvious caveat is that this is "private" (or at least not worse than BTLE), until the moment you test positive.
> At that point all of your BTLE mac addrs over the previous period become linkable.
Linkable over a period of 14 days. Or even linkable only within one day: each day means a new key, so linking across days could only be attempted on the basis of behavioral correlations.
And what would you do with such data? Micro-analysis of customer behaviors? It won't be possible to use it for future customer profiling, as the history can't be matched with identifiers after the infection. This data is practically worthless.
* Use stationary beacons to track someone’s travel path
Doesn't work because there's no externally visible correlation between reported identifiers until after the user chooses to report their test result.
* Increased hit rate of stationary / marketing beacons
Doesn't work because those depend on coherence in the beacons, and the identifiers roll every 10 or so minutes. Presumably you'd ensure that any rolling of the Bluetooth MAC also rolls the reported identifier.
* Leakage of information when someone isn’t sick
The requests for data simply tell you someone is using an app, which you can already tell if they're using the app.
The system can encourage someone to get tested; if your app wants to tell people to get tested, then fair play to that app (though good luck in the US).
* Fraud resistance
Not a privacy/tracking concern, though I'm sure devs will have to do something to limit spam/DoS.
> Doesn't work because there's no externally visible correlation between reported identifiers until after the user chooses to report their test result.
So you're saying it works after the user reports their test result.
* The only things published by someone when they report a positive test result are the day keys for whatever length of time is reasonable (I assume ~14 days?)
* Given those day keys it is possible for your device to generate all the identifiers that the reporter's device would have broadcast.
* From that, your device can go through its database of seen identifiers and check for a match.
That means your device can determine when you were in proximity to the reporter, so in theory it could know approximately where the contact happened, but it can't determine anything beyond that.
The server that collects and serves reported day keys doesn't have the list of identifiers any devices have encountered, so it can't learn anything about the reporters from the day keys they upload.
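Putting those steps together, the on-device matching might look roughly like this. Function names are illustrative, not the framework's API, and the derivation mirrors the hypothetical one sketched earlier in the thread:

```python
import hashlib
import hmac

def derive_identifiers(day_key: bytes, intervals: int = 144):
    # Same assumed HMAC-based derivation as the earlier sketch.
    for j in range(intervals):
        yield hmac.new(day_key, b"CT-RPI" + j.to_bytes(4, "little"),
                       hashlib.sha256).digest()[:16]

def find_exposures(reported_day_keys, seen):
    """seen: dict mapping each identifier heard over BLE to the list of
    timestamps it was observed. Returns the times the device was near a
    reporter. Purely local; nothing leaves the phone."""
    contact_times = []
    for day_key in reported_day_keys:
        for rpi in derive_identifiers(day_key):
            if rpi in seen:
                contact_times.extend(seen[rpi])
    return contact_times
```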
Let's say there's a passive fixed beacon in a public space: it can't connect the identifiers to any specific device either, but you could see it being a useful public health tool ("we saw carriers at [some park] at [some times]"). It still would not know which specific devices were reporting those keys. Even if a device passed by after the day keys were published, there's no way to know it's a device that's been seen before.
Only the server is able to link published day keys together, because it receives them and so presumably knows who published them. The spec explicitly disallows an implementation from doing this, but it assumes a malicious server, so it works to ensure that the only information the server can get is day keys with no other information.
It is pretty clear that a single piece of data gathered by this system is fairly useless. But the more data an entity has gathered, the better it can be used to paint a whole picture.
If the server is malicious (think a government doing surveillance), it is possible that the data from passive fixed beacons gets linked with the identity of the person uploading keys, via IP address (when keys are uploaded) or facial recognition gathered by cameras next to Bluetooth receivers. This data can also be linked with data from fixed beacons in other places, which would allow for tracking someone throughout a variety of places.
Again, this solution _cannot_ work, and it is a _threat_ of permanent loss of privacy.
This is like the government and the adtech companies sleeping in the same bed, without any other power opposition in the balance.
1) The "solution" is created by a monopoly of 2 american private corporations.
2) It can only work reliably if everyone wears an (Apple or Android) phone at all times and consents to share data.
3) You are not necessarily infected if you cross paths with an infected person in the street at 5 meters. This will produce too many false positives and give people fuzzy information.
4) It doesn't help people who are infected and _dying_
It just _doesn't make sense_. To me, it looks like electronic voting, but worse. No one can understand how it works, besides experts.
Today it is reviewed, but then the app will be forgotten and updated in the background with "new features" for adtech.
We are forgetting what we are fighting: a biological virus. All effort should go toward understanding the biological machinery of the virus and its hosts, in order to _cure_ the disease. We should be 3D printing ventilators, analysing DNA sequences, building nanorobots, and synthesizing new molecules.
From looking at the specification, I don't see any serious loss of privacy there, if this is implemented as stated.
2) You don't need 100%, you only need enough to drop the R0 below 1. You'll likely need a majority of people using this, which is hard enough, but you don't need everyone using it.
3) The apps are not supposed to flag every single registered contact, only contacts sustained over a somewhat longer timeframe. A typical value I've heard is 15 minutes of close contact; that is what's considered a high-risk contact when contact tracing. A rough sketch of such a threshold follows below.
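A minimal sketch of such a threshold, assuming the 15-minute figure above and simple bookkeeping of sighting intervals (nothing here is from the actual spec):

```python
EXPOSURE_THRESHOLD_MINUTES = 15  # assumed high-risk cutoff, per the comment above

def is_high_risk(sightings):
    """sightings: list of (start, end) times in minutes during which a
    reporter's identifiers were observed nearby."""
    total_minutes = sum(end - start for start, end in sightings)
    return total_minutes >= EXPOSURE_THRESHOLD_MINUTES
```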
> 1. You'll likely need a majority of people using this, which is hard enough, but you don't need everyone using it.
It's going to be built into iOS and Android at the operating system level, and I assume have a very clear prompt to opt-in. It would not surprise me if it quickly reaches >50% of active users, at least for iOS.
Getting a timely Android update on the other hand...
1) and 2) - the fact that Google and Apple have what is essentially a monopoly on smartphone software is exactly what makes this a good approach. it's the easiest way to reach a high percentage of the population.
3) False positives are a hell of a lot better than having no way to trace back contacts from the period when someone was asymptomatic but contagious.
4) it helps stop others from becoming infected and possibly dying. how is that not a good thing?
> We should be 3D printing ventilators, analysing DNA sequences, building nanorobots, and synthesizing new molecules.
3D printing ventilators is a horrible idea, and everything else towards a vaccine takes _time_. This is something that can be rolled out today and that will help the situation. You can uninstall the app when this is over.
As both are untrustworthy American corporations, no matter what they do or say it will always be a huge privacy issue. I would go back to my old SE810i phone the instant this was forced on iOS and Android users. People are already doing this (especially young people), so this would be Apple and Google shooting themselves in the foot.
> 4) it helps stop others from becoming infected and possibly dying. how is that not a good thing?
The virus will always be here, we cannot hide forever, we must find a way to cure it or reduce its biological effect. Once covid19 goes away (if ever), and a new virus appears, NO ONE will have that app turned on, and by then, the new virus will have spread just like covid19.
I have a very simple solution to buy time: total confinement of people over 60 years old when a new virus is detected, and hand washing.
Also check hemo2life, which is an example of what we could do in terms of medicine
> Once covid19 goes away (if ever), and a new virus appears, NO ONE will have that app turned on, and by then, the new virus will have spread just like covid19.
Devil's advocate: so why not just keep the app running forever in the background?
If there's no virus to report - that's fine.
But the moment a new outbreak starts, the data is already there, you just have to report that you're sick.
Absolutely, and if it is indeed local only and encrypted until you say you are infected, it could be part of HealthKit and Android equivalent (don't know if it has a name)
Why not implement both your solutions and contact tracing? Wouldn't it help to know that the person you sat near on a bus or plane just died of an infectious disease? Maybe you are doomed, but you might just save your family. The next virus might affect children, not only the elderly; we need to have every possible tool at our disposal, including PPE and test equipment.
Maybe contact tracing comes late (or maybe not), but it's good to have a solid system ready for when the next one appears.
I am so terribly frightened by this move that I am seriously considering getting rid of Android. From what I have heard, it's going to be baked into the OS and not installed as an app I could uninstall or block, right?
What truly open Smart phone OSes are available besides Android and iOS?
Librem is the usual answer, but aren’t there other, existing, baked in parts of Android that compromise your privacy? It won’t change much because of this project.
What's it with people making long, split-up twitter threads like this? They're cumbersome and hard to read. Be an adult, write and publish an article on your blog.
It feels weird having to criticize Marlinspike about this, but stupid practices are stupid no matter how prestigious the person doing them is.
The system doesn't need to ship every key to every phone, much more compact structures like Bloom filters could be used instead. If we assume about 1000 positives per day and each positive uploading 14 days of keys at 4 keys per hour that's a bit over 1 million keys per day. A Bloom filter with a false positive rate of 1/1000 could store that in about a megabyte. Phone downloads the filter each day and checks its observed keys, and only needs to download the actual keys if there's a potential match.
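For concreteness, here's a toy Bloom filter sized with the standard formulas; parameters are illustrative only:

```python
import hashlib
import math

class BloomFilter:
    def __init__(self, n_items: int, fp_rate: float):
        # Standard sizing: m = -n*ln(p)/(ln 2)^2 bits, k = (m/n)*ln 2 hashes.
        self.m = math.ceil(-n_items * math.log(fp_rate) / math.log(2) ** 2)
        self.k = max(1, round(self.m / n_items * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))

# ~1M keys at a 1/1000 false-positive rate needs m ≈ 14.4M bits ≈ 1.8 MB,
# in the ballpark of the megabyte figure above.
bf = BloomFilter(n_items=1_000_000, fp_rate=0.001)
```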
> only needs to download the actual keys if there's a potential match.
One of the design constraints of the service was that it should not know your (suspected) infection status unless you give consent that it should be shared.
> Matches must stay local to the device and not be revealed to the Diagnosis Server.
The better the bloom filter is, the more likely it is that you have actually been in contact with a key if the bloom filter is positive.
Furthermore, the bloom filter has to deal with a lot more keys. In fact, in your example of 1000 positives per day each uploading 14 days of keys, each positive only needs to upload 14 keys, as they rotate only once per day. At 16 bytes per key (as the link above specifies), you'd have to download 14 * 1000 * 16 bytes = 224 kB, much less than the bloom filter needs. And this scheme can tell you with 100% certainty whether there has been a match or not, so at least in your example it's much better than Bloom filters.
The scalability issues only manifest at much larger numbers than 1000 infections per day, say upper tens to lower hundreds of thousands, where it does start to become a problem.
So yes, rough location, as moxie suggests, is the best way to improve the scheme. Instead of checking the IDs of people thousands or hundreds of kilometers away from you, you could just check the IDs of people in your US state or county. But it has to be smart enough to recognize movement: you need to upload/download all areas you've been in, and people living at borders automatically stand out because they download two or three areas.
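A hypothetical sketch of that region-bucketed download; region IDs and the fetch hook are invented for illustration:

```python
from datetime import date, timedelta

def regions_visited(history, days: int = 14):
    """history: dict mapping each date to the set of coarse region IDs
    (e.g. "US-TX-Travis") the phone was in that day."""
    cutoff = date.today() - timedelta(days=days)
    return {r for d, regions in history.items() if d >= cutoff for r in regions}

def keys_to_check(history, fetch_region_keys):
    # Border-resident caveat from above: the set of buckets a phone
    # requests is itself a coarse signal about where it has been.
    keys = set()
    for region in regions_visited(history):
        keys |= fetch_region_keys(region)  # hypothetical per-region endpoint
    return keys
```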
Nothing prevents the user from pushing the checking to some trusted service as well, if they so choose. If they trust the service then they'd upload their seen keys to a checking service, rather than downloading the whole set of diagnosis keys. The important part is the decision is in their hands.
Bloom filters could work that direction as well: phone produces a filter of observed keys and uploads it to the service, service checks all positive keys to see if they're in the filter. I think the main point of doing the checking on the phone is that way you're the only one who knows if you've been exposed.
> Published keys are 16 bytes, one for each day. If moderate numbers of smartphone users are infected in any given week, that's 100s of MBs for all phones to DL.
Seems like a use case for Bloom filters or k-anonymity.
16-byte keys for a quarter million people are only 4 MB per day.
We aren't seeing anywhere close to a quarter million infections per day. The data sizes are reasonable, even if you multiply them by n days for the backward tracing.
I think his post is a little bit more fearmongering than is necessary.
> I think his post is a little bit more fearmongering than is necessary.
I think that is being unnecessarily charitable because of his high status. His math and his assumptions on this point are completely broken. So broken that if he were some rando, no one would have even read the rest of his thread, much less commented on it at on the front page of hacker news.
This calculation doesn't make sense to me. Since the start of the pandemic, there's been 1.6m confirmed cases so far worldwide. Even if every single one of those were to send 16 bytes identifier, that would still only be 27MB, no?
Where are they getting 100s of MBs per week? I know it's exponential growth and the number of cases will grow, but their calculation still seems off to me.
EDIT: I guess each person has 14 keys, so that makes it an order of magnitude bigger.
His argument is self-defeating. If you have rapid exponential growth and would have to publish hundreds of megabytes of keys per day, this approach of contact tracing is useless and you must instead get the entire population under lockdown. If everybody is sheltering at home, nobody needs notifications of possible contacts, because everybody is doing what would be the response to such a notification already.
This approach, just like the manual approach of tracking potential contacts via paper and phone, is only of use in a scenario with a very limited number of transmissions and an R (reproduction rate) of around or below 1. Its purpose is not to reach such a situation, but to aid in keeping that situation in effect without severe measures. But severe lockdowns must first suppress the infection counts to such levels before any contact tracing may work at all.
To ease off on the fearmongering front here: this proposal relies on an app implementing these protocols, and you're free to uninstall the app after the pandemic, or not install it in the first place. It is furthermore trivial to check whether your device sends out these BTLE packets.
It's not a "can we put the genie back in the bottle" scenario if the genie is wearing a bright warning vest announcing its presence everywhere. You can directly measure if it's still there. All other concerns are not technical ones. If you acknowledge digital contact tracing to be a thing, this is better for privacy than any other proposal so far. The framework is designed to prevent abuse even in case it would not go away.
I'm not sure I'd count this as fearmongering. I think I know which way the tradeoffs work in my mind but there's not an unreasonable set of paths that lead to this being more permanent.
Given the broad powers passed recently in the UK they could make having this app a legal requirement to go in any shop if they wanted, and whether apps can be uninstalled reasonably is down to whoever controls the OS.
Would it not make sense to require everyone who is able to install and use this to do so? Or require Google and Apple to force-install it?
It's not like that scenario does not worry me either, sure. From a purely "fight the disease, nothing else matters" standpoint, yes, more installs mean better coverage and would make digital contact tracing work more efficiently. I haven't heard of any western government considering such a reductionist approach though, that would not be a proportional response and honestly a bit bizarre. Even in such a case the proposal by Google/Apple would be beneficial since it limits the usefulness of this data for other purposes, being designed with privacy in mind and far less intrusive than other tracking methods we could draw up.
I would still maintain that this nightmare scenario is a problem with any particular government that would implement and misuse such measures, not with an anonymization effort for the BTLE stack. We absolutely should push back against the former and insist on what's missing for a full system to be implemented in a sensible manner without infringing on basic human rights, that's a worthy hill to die on, this particular aspect is not.
Why wouldn't it? Phones used to be trackable based on WiFi MAC address, now it is randomized. General drive is towards avoiding tracking, I don't see any reason why would it change.
Having a standardized framework is a good thing provided it meets certain minimal security and privacy needs. The idea is to enable end users to proactively collect useful data without making the potential for government abuse any worse than it already is.
So long as all data remains on the physical device at all times and any access or export is _always_ actively initiated by the user, I don't see how it makes the current situation any worse. An abusive government can already subpoena or otherwise monitor all the network providers.
> An abusive government can already subpoena or otherwise monitor all the network providers.
The advantage that this tracking proposal provides is that it unfurls contact tracing from one node. Until now, authorities have had to work from a large dataset ( all phones on a mast at a particular time ) inwards; now they can start with one node of interest and expand outwards.
Combined with some other 'temporary' pandemic measures, such as the legal requirement to carry your phone at all times, this provides a huge benefit to any authority.
> such as the legal requirement to carry your phone at all times
In such a hypothetical scenario, how is making this (currently opt-in) framework mandatory any different from requiring you to install a government provided app? Such a government app could trivially log sensor and GPS data, yielding a _far_ more detailed view. The point is that the mere existence of this framework doesn't make the situation any worse than it already is.
Of course it will. These companies could already track you far more efficiently than this allows them to. This system makes tracking LESS efficient, not more. It serves no purpose other than what is stated.
Yikes, this is prep for big brother's guilt by association. I wouldn't want to test positive for anything the state can track (radical ideas? you're now a positive in this system). Opt out.
Or, it's just what it says. It's a way to implement test and trace, something that is absolutely needed to stop a pandemic like this from killing hundreds of thousands if not millions of people.
Everything isn't a slippery slope. Everything isn't about your privacy. Everything isn't a grand conspiracy that only you can see and the sheeple are too dumb to understand.
> Everything isn't a slippery slope. Everything isn't about your privacy. Everything isn't a grand conspiracy that only you can see and the sheeple are too dumb to understand.
Crises are often used by despots to seize power. That's not a conspiracy, that's historical fact. In the United States, we've seen it recently - 9/11 was used to degrade our rights across a large number of issues, and we've never gotten them back.
Implementing systems to track everyone people come into contact with is absolutely a huge invasion of privacy, and obviously not necessary.
> Sometimes, extreme measures are needed.
Extreme times do not justify all extreme measures.
Every time you lose rights or privacy, assume it's permanent. Our government is not suited for repealing law.
I think it may differ by region. I can say factually that the testing in Texas is absolutely abysmal. Many people who have lots of symptoms are being turned away for testing.
On Tuesday our illustrious governor announced with lots of fanfare that Walgreen's would be expanding drive-thru testing using Abbott's 15-minute testing devices. It's now Friday evening and still no word on even where the locations will be for these testing sites. I'm sick of these BS press conferences and press releases, just STFU if you're not actually ready to do anything.
There is no way we'll be able to restart our economy without at least a 10X or more increase in testing. For right now the lack of testing isn't that huge of a deal when most people are quarantining at home anyway, but it will become a huge deal when people start going back to work. I'm still kind of amazed I haven't seen any convincing plan about how this eventually ends. Everything will just flare back up again once social distancing ends without tons of robust testing.
This may be a bit of an optimistic take, but there's at least some evidence the IFR is ~0.37%. Given the current number of deaths in NYC, that would imply at least ~20% of the city population has already been infected, likely more given the lag between infection and death. If that's true, the best strategy will probably be to keep vulnerable groups isolated and loosen some restrictions until herd immunity is reached. How long that would be depends on the hospitalization rate since ideally we'd have hospitals just barely at max capacity, but I don't think its implausible by the end of the summer we'd reach ~75% infected at which point all restrictions could be lifted.
Further to this, once we have antibody tests rolling out en masse, it will give us a much clearer picture of how many people have been infected (and are now hopefully immune). Until then we just need to sit back and wait.
It’s very close to “almost no testing ability” if you want to routinely test people who don’t have symptoms so that people can actually leave their homes sometime before a vaccine.
I think that's what makes it even more important to have something like this. That way people who aren't presenting symptoms, and can't get tested, can find out if they've had contacts and then quarantine. And testing will get more and more pervasive as the technology develops.
No clue who/what a moxie is (presumably some guy), and it makes this thread's title seem even more absurd.
OP feeling like we all need to know what moxie thinks about this reminds me of this [Chappelle Show skit](https://www.youtube.com/watch?v=Mo-ddYhXAZc) about getting Ja Rule's hot take on current events.
Ah, that makes sense then. Though I do think "Founder of Signal comments on new Google/Apple contact tracing proposal" would be a far less absurd title than "Moxie's take on..."; further, adding "private" in quotes is a bit cheeky and definitely imparts bias into the discussion.
None of us developers here are dumb enough to think an api whose goal is to literally track and trace human beings is 100% private. The question is really, is it private _enough_.
> "...adhering to our stringent privacy protocols and protecting people's privacy. No personally identifiable information, such as an individual's location, contacts or movement, will be made available at any point."
Finally a decent use case for blockchain, and nobody is paying attention. It seems to make a lot more sense to reconcile location and proximity from a shared, user-controlled, anonymous ledger.
There are plenty of blockchain-based proposals for the backend of this, none of which take off, because it's another of those imaginary use cases that could just leverage existing centralisation without wasting time on the problems that introducing a decentralised blockchain architecture brings with it.
A modest proposal: since almost everyone is going to get this and a much smaller percentage is vulnerable, perhaps we should just use this system to track those who choose to register as vulnerable.
[0]: https://signal.org/blog/contact-discovery/
[1]: https://signal.org/blog/private-contact-discovery/