Opinion | The challenge of detecting fake content
The information revolution has dramatically disrupted the news industry. Technologies such as generative adversarial networks pit two neural networks against each other: one generates a fake image, while the other evaluates whether the generated image is convincing enough to pass for real. The synthetic media these techniques produce are known as deep fakes, and while they are at present used primarily by the porn industry to generate fake celebrity videos, it isn't hard to imagine how, in the hands of the unscrupulous, they could be used for extortion, defamation and false propaganda.

Theoretically speaking, it should be possible to use the same neural network technologies that created deep fakes in the first place to develop forensic techniques that can detect fake content. To make all this more efficient, the immutable life record could be made accessible through APIs, so that social media services and other platforms that disseminate content can dynamically verify the content they carry against the true record of a given person's life.
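To make the API idea concrete, here is a minimal, hypothetical sketch of such a verification service. All names (`LifeRecord`, `register`, `verify`) are illustrative assumptions, not an existing system; a real deployment would need tamper-evident, signed storage (for instance a Merkle log) rather than an in-memory set, and far more robust content fingerprinting than a plain hash.

```python
import hashlib


class LifeRecord:
    """Hypothetical append-only record of a person's authentic media.

    A minimal sketch only: real systems would need signed,
    tamper-evident storage, not an in-memory set of hashes.
    """

    def __init__(self):
        self._hashes = set()  # fingerprints of authentic content

    def register(self, content: bytes) -> str:
        """Record the fingerprint of a genuine piece of content."""
        digest = hashlib.sha256(content).hexdigest()
        self._hashes.add(digest)
        return digest

    def verify(self, content: bytes) -> bool:
        """The API call a platform could make before disseminating content."""
        return hashlib.sha256(content).hexdigest() in self._hashes


record = LifeRecord()
record.register(b"authentic interview footage")

print(record.verify(b"authentic interview footage"))  # True
print(record.verify(b"doctored forgery"))             # False
```

The design choice here is deliberate: the platform never needs the raw record, only a yes/no answer about a fingerprint, which keeps the person's life record private while still letting distributors check content dynamically.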