It shouldn't be too hard to film a deepfake movie from a screen or projection in a way that doesn't make it obvious it was filmed. That way, the cryptographic signature will even lend extra authenticity to the deepfake!
>> I hope this sort of video signing will be mainstream in all cameras in the future (i.e. cellphones etc), as it will pretty much solve the trust issues deep fakes are causing.
> It shouldn't be too hard to film a deepfake movie from a screen or projection in a way that doesn't make it obvious it was filmed. That way, the cryptographic signature will even lend extra authenticity to the deepfake!
Would you even have to go that far? Couldn't you just figure out how to embed a cryptographically valid signature in the right format, and call it good?
Say you wanted to take down a politician, so you deepfake a video of him slapping his wife on a street corner, and embed a signature indicating it's from some rando cell phone camera with serial XYZ. Claim you verified it, but the source wants to remain anonymous.
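The signing math wouldn't stop you, because a signature only proves that *some* private key signed those exact bytes. A hypothetical sketch (Python, using the `cryptography` package; the "camera serial" claim is pure storytelling attached after the fact):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    fake_video = b"...deepfake video bytes..."
    attacker_key = Ed25519PrivateKey.generate()  # freshly minted, no camera involved
    signature = attacker_key.sign(fake_video)

    # Verifies perfectly against the attacker's own public key. Unless the
    # verifier can tie that key to real, trusted camera hardware, "valid
    # signature" tells you nothing about where the video came from.
    attacker_key.public_key().verify(signature, fake_video)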
I don't think this idea addresses the problems caused by deepfakes, unless anonymous video somehow ceases to be a thing.
Similarly, it could have serious negative consequences, such as people being reluctant to share real video because they don't want to be identified and subjected to reprisals (e.g. they're in an authoritarian country and have video of human rights abuses).
The point of signing is to cryptographically hash the content of the video and sign that, not just to add a signature next to the stream (which would be pretty useless).
The signature of the politician on a street corner would only be valid if it actually verifies the content of the video; and the only entity that can produce that signature is the holder of the private key.
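A minimal sketch of what that looks like (Python, using the `cryptography` package; Ed25519 just stands in for whatever scheme a real camera would ship with):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    camera_key = Ed25519PrivateKey.generate()  # would live in the camera's secure element
    video = b"...raw video bytes..."
    signature = camera_key.sign(video)         # the signature covers a hash of the full content

    public_key = camera_key.public_key()
    public_key.verify(signature, video)        # passes: content is untouched
    try:
        public_key.verify(signature, video + b"one edited frame")
    except InvalidSignature:
        print("any change to the content invalidates the signature")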
> The point of signing is to cryptographically hash the content of the video and sign that, not just to add a signature next to the stream (which would be pretty useless).
That's what I meant: sign the video content with some new key in a way that looks like it was done by a device.
> The signature of the politician on a street corner would only be valid if it actually verifies the content of the video; and the only entity that can produce that signature is the holder of the private key.
The hypothetical politician's signing keys are irrelevant. The point is that even authentic video is going to be signed by some rando device with no connection to the people depicted, so a deepfake signed by some rando keys looks the same.
IMHO, cameras that embed a signature in the video stream solve some authenticity problems in some narrow contexts, but the approach is nowhere near a panacea that "will pretty much solve the trust issues deep fakes are causing."
It's not about the keys of the politician. It's about the keys of the manufacturer of the camera. The aim of the signature is to prove that a video did in fact come from a particular camera at a particular location and time.
Of course, if one has physical access to the camera/sensor it's possible to make a fake video with a valid signature. But it's a little more difficult than simply running a deepfake script on some machine in the cloud.
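As a toy model (hypothetical sketch in Python with the `cryptography` package; the manifest format is invented, not C2PA or any real standard, and the numbers are made up): the manufacturer certifies each device key, and the device signs the content hash together with when and where it was recorded:

    import hashlib, json
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    maker_key = Ed25519PrivateKey.generate()    # held by the manufacturer
    device_key = Ed25519PrivateKey.generate()   # burned into one camera at the factory

    # Manufacturer "certifies" the device key (a real scheme would use X.509).
    device_pub = device_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    device_cert = maker_key.sign(device_pub)

    # Camera signs the content hash plus the claimed time and location.
    video = b"...raw video bytes..."
    manifest = json.dumps({
        "sha256": hashlib.sha256(video).hexdigest(),
        "serial": "XYZ",
        "time": "2024-01-01T12:00:00Z",
        "gps": [48.8584, 2.2945],
    }, sort_keys=True).encode()
    manifest_sig = device_key.sign(manifest)

    # A verifier with the manufacturer's public key checks the cert, then the
    # manifest signature, then re-hashes the video and compares digests.
    maker_key.public_key().verify(device_cert, device_pub)
    device_key.public_key().verify(manifest_sig, manifest)

Even then, the GPS and timestamp are only as trustworthy as the sensors feeding them, which is exactly the physical-access loophole.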
> It's not about the keys of the politician. It's about the keys of the manufacturer of the camera. The aim of the signature is to prove that a video did in fact come from a particular camera at a particular location and time.
If the goal is to allow a manufacturer to confirm video came from one of their cameras, I think it's somewhat more helpful than I was originally thinking, but it doesn't change my opinion that this technology would only "solve some authenticity problems in some narrow contexts." Namely, stuff like a burglar claiming in court that security camera video was forged to frame them. I don't think it addresses cases of embarrassing/incriminating video filmed by bystanders with rando cameras and other stuff like that.
> Of course, if one has physical access to the camera/sensor it's possible to make a fake video with a valid signature. But it's a little more difficult than simply running a deepfake script on some machine in the cloud.
If you're faking a video like I described, you certainly would have "physical access to the camera/sensor" that you claim made it. You're making a fake, which means you can concoct a story for the fake's creation involving things that are possible for you to acquire.
A screen with double the resolution and twice the frame rate should be indistinguishable. Moreover, if you pop the case on the camera and replace the sensor with something fed by DisplayPort (you'd probably need an FPGA to convert DisplayPort to LVDS, SPI, I2C, or whatever those sensors use, at speed), that should work too.
You can raise the bar a bit. No one checks to see if a $5 bill is fake. There is no real upper limit on what the payout for a deepfake could be. A TPM isn't going to save facial-recognition ID schemes like the IRS's from being obsoleted by deepfakes. But for things that go to trial, a TPM dashcam with tamper-evident packaging (that can be inspected during trial) is probably good enough for small claims court. You could add GPS-spoof detection, put as much as you can on the TPM chip (like the accelerometer), and sign all sorts of available data along with the video, but that will up the unit price a lot, and for the kind of fraudulent payouts you'd be trying to stop, you wouldn't make it enough harder to keep it from being cost-effective for the fakers.
Not if the camera includes metadata like focus, shutter speed, accelerometer, GPS, etc. I don't really know, but I imagine the hardware security required wouldn't be too far from what's common now. Cameras are already unrepairable, so I suppose the arguments would have to be more from the privacy and who-controls-the-chipmakers perspectives.
GPS spoofers are available legally; you just replace the GPS antenna with the spoofer, so there's no FCC violation. You'd have to break the law if you don't want to open the case to get to the antenna. I don't have any answers to the accelerometer or focus other than replacing those sensors too, and if you put the accelerometer on the same TPM-enabled SoC, it would make moving shots, like from a dashcam, hard to fake.
It's not like TPMs are infallible. And even if they're thought secure today, older encryption becomes trivial to crack with time. But like you said, it's about raising the bar. You can do a lot to mitigate the threat of deepfakes, to the point of pushing them back into the realm of those who really know what they're doing. That's not ideal, but well-funded and talented groups have been able to falsify evidence to discredit people since the beginning of time. So the nature of the problem doesn't change; they just have another tool.