>recorded and will be analyzed, possibly leaked in the future in a world where you can trivially create vocal deep fakes with a few samples.
If that's your threat model, you should be far more concerned about recorded zoom meetings or customer support calls than about a surveillance camera that in all likelihood isn't even networked.
>I know when I am being recorded in a meeting and it's in a professional setting, not a casual one. Not comparable.
Irrelevant if you're trying not to get deepfaked. It's like complaining about Lyft recordings possibly being used to train deepfakes or whatever, when you work as a podcaster. Sure, the recording might be bad because it was done without consent, but it's risible to object to it on the basis of "possibly leaked in the future in a world where you can trivially create vocal deep fakes with a few samples".
So your response is to accept someone's speculation when the company itself says it is piloting a recording program in the US (though it says the case in the article was not part of it). Given what is in the article, the speculation you linked to is irrelevant.
You are taking a threat model that assumes all recordings have equivalent risk of leakage. On a long enough time scale, you're probably right. I think there is a spectrum of risk that is also based on trust. I trust a company focused on audio/video to handle the materials appropriately a lot more than I do a company where this isn't their core competency.
While /any/ recording presents a risk, recordings you are unaware of are significantly higher risk because you can't do much of anything about them.
You are making a false equivalence and desperately trying to defend it. The two things are not the same.
>So your response is to accept someone's speculation when the company itself says it is piloting a recording program in the US (though it says the case in the article was not part of it). Given what is in the article, the speculation you linked to is irrelevant.
I don't see how it's irrelevant, given that the recording program was in the US and, contrary to Trump's bluster, Canada isn't the 51st state yet.
>You are taking a threat model that assumes all recordings have equivalent risk of leakage. On a long enough time scale, you're probably right. I think there is a spectrum of risk that is also based on trust. I trust a company focused on audio/video to handle the materials appropriately a lot more than I do a company where this isn't their core competency.
The problem isn't necessarily that Zoom itself will get hacked; it's that such recordings can get leaked by someone else. Zoom has a quota for recordings, so they often get uploaded to the company's network share/SharePoint, which are routinely targeted in ransomware attacks. Moreover, that's not even the only threat I mentioned. Zoom might be a well-known company with its reputation at stake, but the lowest bidder that a megacorp contracted out to provide call center software might not. Finally, if voice cloning is as easy as you claim, there are far easier ways to get a sample. It's not hard to call you under a made-up pretense (e.g. "are you [spouse]'s emergency contact?") to coax you into producing enough speech samples for voice cloning.
>While /any/ recording presents a risk, recordings you are unaware of are significantly higher risk because you can't do much of anything about them.
Is anyone seriously going to stop calling customer support because of "voice cloning"?
>You are making a false equivalence and desperately trying to defend it. The two things are not the same.
I'm not claiming they're equivalent. Quoting my initial comment:
"you should be far more concerned about recorded zoom meetings or customer support calls"
I'm not sure how you can possibly interpret that to mean "zoom recordings are the same as lyft recordings"