Hacker News
Generating convincing audio and video of fake events (economist.com)
111 points by sohkamyung on July 2, 2017 | 40 comments



Hasn't this already happened? Remember the Project Veritas videos of ACORN employees giving advice to a pimp on how to run his child prostitution ring that were all over the news in 2009? Multiple investigations found the videos were so heavily edited as to be fake:

https://en.wikipedia.org/wiki/ACORN_2009_undercover_videos_c...

The original videos were everywhere, ACORN was driven to bankruptcy, and the investigations revealing the videos were faked got hardly any news coverage. The art project described in this post is very quaint in comparison to this very real life event.


To expand on this point further, the phenomenon of synthesizing accounts of a person saying something has already existed for thousands of years.

If I tell you that "Trump said that low income families will receive unconditional, federally sponsored medical coverage," you will not immediately take my word for it. You will critically analyze my statement by scrutinizing who I am to make this claim, and the claim's congruency with what you already know to be reality. I am not a reputable publication and the claim is contrary to what is already known, so you are not likely to believe me.

The same will become true of video. A video of what looks like somebody saying something will become as useful as hearsay testimony. We will, through necessity, begin critically scrutinizing the source of footage, not just taking its content at face value.

In many ways this already exists in the photography world. Convincing photo manipulation is old technology at this point, and I think we've adequately adapted to it.


> In many ways this already exists in the photography world. Convincing photo manipulation is old technology at this point, and I think we've adequately adapted to it.

Have we? I'm pretty sure there's still a serious problem with young girls developing their self-image based on heavily manipulated photos of models in the magazines they read. And that's a relatively benign example, as far as the purpose of the manipulation goes, since it can at least be argued to be 'artistic'. An intentionally damaging example of photo manipulation is much of the falsified propaganda distributed last year during the presidential campaign, which I'm sure included a lot of manipulated photos. That stuff definitely influenced the election, and our whole political climate. I don't think people, in general, have adapted to the idea that what they see isn't necessarily real.


This article title is a bit off. The audio is not generated, the video is not convincing, and the original conversation is not fake.

This is an art project which highlights the implications of research like Face2Face: https://youtu.be/ohmajJTcpNk


Glad you posted this, I was about to look for this same link to share. Between this and Adobe's voice emulation software https://thenextweb.com/apps/2016/11/04/adobes-upcoming-audio... it appears possible to create very realistic 'footage' of constructions based on real people doing and saying things. The film industry already creates computer-generated characters of deceased actors (Star Wars, for example). This makes the Economist piece seem very out of touch with reality, and presumably there are far more sophisticated technologies we don't know about too...


Someone has changed the title now, which is a good thing, because the original clickbaity title wasn't very descriptive at all, and the new one is actually much more interesting.


Most prestigious publications have weighed in on this topic, with all its titles and euphemisms: fake news, alternative facts, conspiracy thinking, echo chambers... mostly with nothing to say.

Scientific American recently ran a feature that may as well have been titled: "You won't believe the SHOCKING TRUTH about FAKE news!!"(1) For the first half of the article, they cited credentialed research institutes, people, and sub-disciplines like computational sociology, and talked about recent "advances" made "studying social phenomena in a quantitative manner."

As far as I can tell, the only content was "confirmation bias exists," with a non-explicit implication that our understanding of this newly discovered phenomenon is advancing fast. There was also an anecdote about Texas.

Everyone seems to want to weigh in on this conversation, but no one seems to have anything substantial to say about it.

So sure, "conspiracy theorists" will be able to challenge video evidence. I don't know if that changes anything.

(1) actually titled "inside the echo chamber"


I remember an old sf book (sorry, forget the name) where this kind of "fake" video is used for nefarious purposes. Imagine a video of a president saying something inflammatory that they never said.

Talked about that with a friend who does video editing in the early 2000s, and she said they could do it even then.

So, while GANs may make it far easier to build up fake soundbites, the capability has been around for a while. We just need restraint in the media and ways for the common person to verify info. (Given how easily fake news spreads around FB, the latter is probably more important.)


It's one thing to use existing footage of someone (e.g. the president of the U.S.) talking and combine it with a different audio stream to create the impression that that person said something which they in fact didn't. However, an approach based on generating images with neural networks can go far beyond that. In theory, you could generate footage of arbitrary actions performed by any person - the president beating up a child, the president shooting a gun into a group of protesting homosexuals, the president in an X-rated home movie, you name it.

There are a lot of things special effect teams could do in old Hollywood movies, but newer movies have the potential to do the same kind of effects much more convincingly. Likewise, newer technology like this one will become better and better at creating fake footage.

A related research direction is replacing faces in videos with those of other actors, or morphing parts of a real video to match a second input source; both have been discussed on HN a couple of times.


I know that at least two of Philip K. Dick’s short stories dealt with this issue; he liked exploring the question of “What is real, and how can we know?”.

The Mold of Yancy, 1955 (http://defectiveyeti.com/moy.pdf)

The Unreconstructed M, 1957 (https://www.jerkersearcher.com/sffaudio_pdfs/TheUnreconstruc... )


And Ubik too, a masterpiece.


> ways for the common person to verify info

If that could be done automatically, we could skip a lot of time-consuming, expensive research. Church and Turing already proved there are no general solutions to the Entscheidungsproblem.

On the other hand, if you were thinking of some sort of service that provides "authoritative" verification, you've only moved the problem. The service can be faked (or corrupted) just as easily. Similarly, we already have many historical examples where restraints on media are (de facto) used against political enemies.

What we need is a way to educate people with the scientific method and just enough logic to implement it practically. Sagan discussed this problem in "The Demon-Haunted World: Science as a Candle in the Dark", which I regularly recommend to anybody that seems to need his "Baloney Detection Kit"[1].

[1] https://www.brainpickings.org/2014/01/03/baloney-detection-k...


The issue, unfortunately, is that if I were to generate a video of Mike Pence doing something X-rated with another man, there's no conceivable way the scientific method would help. I mean, if he were a closeted homosexual, would you really immediately discount it as faked just because the evidence was against it? That happens to conservative American politicians all the time.

Worse, the current president has set a new bar for saying outlandish things. Every time he opens his mouth I am literally shocked into silence. The number of things I genuinely cannot imagine him saying is shrinking rapidly. If you were to see a video of Trump walking up to a podium, like Obama did when he announced Bin Laden's death, and saying very solemnly that bombs were about to be dropped on Russia, would your first thought really be "Preposterous! There's no way the man would do that; it must be a clever hoax"?

Not to mention that if this technology becomes exceptionally user-friendly it could be used to destroy lives in myriad ways. Where is your science now?


I would say that such a video is none of my business; just like Clinton's bj, that's a private topic that should be relevant only to the people involved.

I get your point, that scandals involving outdated systems of morality are still a problem in some areas. Technology marches on, and we must learn - as a species - to update our social and moral systems fast or we may find out that intelligence is not[1] an effective survival trait in the long-term.

> The number of things I genuinely cannot imagine him saying is shrinking rapidly.

Trump's speech is very predictable in its general theme. The details are random, but he always says things that are self-serving, in a childish "obviously I was always right" attitude. If there is a way to add a not-very-subtle insult, he will add it.

Most important, it's foolish to try to parse his statements for information, because that's not how he uses language. Instead, he uses phatic[2] language to express emotion, confidence, and social standing.

> [Trump] saying very solemnly that bombs were about to be dropped on Russia

Why would he bomb the people he wants to do business with? The few comments Trump has made about increasing talks with Russia are among the very few things I like about him (not that I expect he will actually do anything whatsoever on that topic). Anything that reduces the risk of nuclear war is great.

> I am literally shocked into silence.

Fear is the mind killer. Recently a lot of people - across the political spectrum - are doing everything in their power to push the politics of fear. Almost all of it is unsubstantiated hot air... which is what the Baloney Detection Kit is for. Science isn't perfect, but it's still the best method we have for filtering out charlatans, propagandists, and simple mistakes.

[1] https://news.ycombinator.com/item?id=8916033

[2] https://news.ycombinator.com/item?id=14577909


The point wasn't about outdated morality. It could just as easily be a doctored video of Theresa May stabbing a man to death in cold blood.

I think you've focused too much on the specific examples I gave. Consider a faked video of your SO committing adultery, or a faked video of you committing a crime that's used to convict you, or a video of Merkel saying Putin has erectile dysfunction or something.

You can't just analyze these videos for inconsistencies; at some point, the technology becomes more powerful than human ability to discern whether it's real. Then what? What do we do once video evidence becomes unreliable?

(By "shocked into silence" I mean, "stunned by how asinine/insane/pointless/cruel that just was".)


> Church and Turing already proved there are no general solutions to the Entscheidungsproblem.

I'm skeptical that the two are as strongly connected as you seem to be implying.

The inability to give a general method to determine whether a program halts / whether the output of a program has some property, does not seem to me to say all that much about the ability to verify images or other sources of information.

You would not say that, because of the lack of general solutions to the Entscheidungsproblem, that you can't verify that some message was signed with the private key corresponding to a particular public key, would you?

If not, then why does the Entscheidungsproblem imply a problem for verifying that a photograph isn't doctored, but not for checking a cryptographic signature?
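The distinction can be made concrete: checking a signature is a fixed, terminating computation, with no undecidability anywhere in sight. Here is a toy "textbook RSA" sketch of sign/verify - tiny primes, no padding, utterly insecure, purely to illustrate that the verification step is mechanical:

```python
import hashlib

# Toy textbook RSA with tiny primes -- illustration only, NOT secure.
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (modular inverse)

def sign(message: bytes) -> int:
    """Only the holder of d can compute this."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone with (n, e) can run this check; it always terminates."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"footage of event X"
sig = sign(msg)
assert verify(msg, sig)
```

The contrast with verifying that a photograph depicts reality is exactly the point: the signature check is a bounded arithmetic test against a key, while "is this image true?" has no such mathematical structure to test against.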


>Imagine a video of a president saying something inflammatory that they never said.

Thankfully there is no need for this presently.


On the other hand, this would be the only reasonable explanation for that.


Indeed.

It strikes me that outrageous people are more vulnerable to fake recordings. Because anything seems possible.


> ways for the common person to verify info

For some sorts of recordings, this seems like the sort of thing a blockchain should excel at. My camera could submit a proof of work with the cryptographic signature of a recording embedded in it. Presumably, it's expensive to run a GAN or whatever other algorithm to generate a good fake video. So, if the signature was adopted into the blockchain sooner after the recording's timestamp than it would have taken to fake the video, future viewers could confirm that the timeline fits.

Of course, this depends on a large-scale discrepancy in those timings. And it's only really useful in cases where the timestamp of the video can be separately established in some manner, such as a publicly-verifiable schedule.
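The scheme above can be sketched as a toy. Everything here is hypothetical illustration - the `prove`/`check` names, the 16-bit difficulty, and the framing are mine, not an existing protocol - but it shows the asymmetry being relied on: producing the proof is expensive, checking it is cheap:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def prove(recording_hash: bytes, bits: int = 16) -> int:
    """Find a nonce so sha256(recording_hash || nonce) has `bits` leading
    zero bits -- crude proof that work was spent on this exact hash."""
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        digest = sha256(recording_hash + nonce.to_bytes(8, "big"))
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check(recording_hash: bytes, nonce: int, bits: int = 16) -> bool:
    """Cheap for any future viewer to re-run (a single hash)."""
    digest = sha256(recording_hash + nonce.to_bytes(8, "big"))
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

h = sha256(b"raw sensor frames from the camera")
nonce = prove(h)        # ~2**16 hashes of work, on average
assert check(h, nonce)  # one hash to verify
```

In practice, of course, 16 bits of work is trivial; the argument only holds if the proof of work committed to the chain represents more compute than faking the footage would, which is the large-scale timing discrepancy the comment depends on.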


Why not just broadcast the signature and metadata right away?


This will become extremely common in the future. We won't be able to trust any measurement (including photo, video, audio).

This is why we need a trust management system. Something that can be used to track people's statements, commitments and predictions in order to attribute them scores. Then, we'll be able to trust people's word without having to rely on expensive and unreliable verification processes.

- Imagine being able to hire people without needing to interview them.

- Imagine being able to let strangers borrow your car or get into your house with little or no risk.

- Imagine being able to predict the future with known accuracy, thanks to the trust score of every contributing agent.

- Imagine a world where nobody lies, because the currency is trust.


Imagine a world where crime is literally impossible to get away with. In such a world, new laws would have no need to follow common standards of morality or decency, and could be made arbitrarily specific. Politicians and policymakers could simply decide what they wanted the law to be, and would have no need for common people or police to report crimes or enforce these laws.

Today, there are many laws that could not be made, because doing so would be ridiculous; almost nobody would report anyone breaking them, and police would almost never enforce them. There are some such laws today, created in the distant past, and the cultural norms have simply changed around them. Imagine if all such current laws would be impossible to break without automatically being detected with irrefutable evidence. There have been outdated laws in the past that have eventually been overturned, since they were obsolete by community standards. If they were instead impossible to disobey, where would the incentive be to abolish them? What would the laws of such a society eventually look like?


I advocate total transparency. I believe that privacy contributes to more harm than good. I believe that all laws should be systematically enforced, even though I probably break many on a daily basis (knowingly or not).

Systematically enforcing bad laws is probably the best way to emphasize their damage, and the fastest way to eliminate them.

Imagine putting half the population in jail because they once consumed drugs, did not pay taxes on all income, etc. I'm sure this wouldn't happen, and laws would be changed.

This would help get rid of obviously bad laws, but it probably wouldn't help minorities (those who break rarely-broken but still bad laws). Therefore, we should still be critical of all laws, and focus on reducing their number to a minimum.

Personally, I believe that 99% of laws and regulations do more harm than good, and would get rid of them all. I don't really like the idea of a central government, and would prefer not have them enforce any law.


> Systematically enforcing bad laws is probably the best way to emphasize their damage, and the fastest way to eliminate them.

You don’t give any evidence for this, and practical experience seems to indicate the opposite – that the absolute enforcement of laws is what keep them in place.


Case in point: the security theater at the TSA every time one flies.


> Imagine putting half the population in jail because they once consumed drugs, did not pay taxes on all income

You don't need to imagine this, change some numbers around and you get the US.

What is legal and what is moral are different things.


Systematically enforcing laws would put all kinds of people in jail (including lawmakers), not just racial minorities.

Shouldn't we aim to reduce the gap between legality and morality?


The most shocking thing about that is that nobody has used it before.


It would need to be a product, like:

http://www.newscaststudio.com/2015/09/09/viz-unveils-dynamic...

I guess the easiest current market is military PsyOps: "found footage" for swaying public opinion - the kind of stuff that can be handed off to embedded reporters as "proof". Having some known foreign talking head (a head of state or of a military organization) saying something damning might possibly help - but I think run-of-the-mill special-effects footage mixed with some traditional filmmaking would work just as well.

I also assume that most sales would be direct, and not advertised on the web...


The article's recommendation to demand metadata seems pointless. It's easier to fake than content.


Easier to _generate_, but also easier to disprove. So maybe not easier to fake. The article briefly touches on this.


I wish projects like the Hardy video had a convention to tell the audience that they're fake.

Sometimes the late night shows release satire videos which are difficult to determine if they're real. We should have a logo for "btw, this news footage is a joke."


I disagree, sharpening the skill of figuring out what is fake and what is not is important. If you don't learn that, it puts you even more at the mercy of malevolent people putting out fake stuff (which they would then of course not mark).


The issue is that there comes a point where the human skill becomes insufficient - the technical capacity to fake exceeds the cognitive capacity to detect it. Sufficiently advanced technology etc.

The solution is not clear.


Right. So authenticity becomes a matter of technical analysis. Dueling experts. And given alternative versions of events, people will believe what they like.


Or how about hard laws that forbid fabricated news (unless tagged as such)? I'm not saying it is a solution, but it should be explored.


You're in good company; Plato believed any use of writing would dull the mind. Fortunately we heeded his warning and snuffed that one out early...


Nothing new here. GANs are still not nearly as good at creating fakes as humans.


I think they mixed up which network is the adversary?



