Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

DARPA Targets Deep Fake Forgeries

August 13, 2018

OK, I confess that I have an inexplicable fondness for Nicolas Cage and all his quirky movies. But why do deep fake forgers insert his image into so many videos? That is a mystery to me.

Thanks to Dave Ries for passing along a really interesting post from Naked Security.

Most of us are familiar with the women who have been blackmailed with non-consensual and completely fabricated revenge porn videos, their faces stitched onto porn stars' bodies via artificial intelligence (AI).

But the damage done by these deep fake videos can be much worse, as the US Department of Defense points out. A prime example is the doctored image of Parkland shooting survivor Emma Gonzalez: a photo of her ripping up a shooting range target was faked to make it look as though she was ripping up the US Constitution. It makes people angry – because they believe the fakery.

Researchers in the Media Forensics (MediFor) program run by the US Defense Advanced Research Projects Agency (DARPA) think that fake images could be used by our adversaries in propaganda or misinformation campaigns. And of course, that's already happening. Fake news works well enough, but faked video footage is more convincing – and therefore more dangerous. Well-done fake videos are very hard to spot.

MediFor has been working on the problem for two years, and it has now come up with AI tools that can automatically spot AI-created fakes – the first forensics tools that can do so, MIT Technology Review reported.

Matthew Turek, who runs MediFor, told MIT Technology Review that the work has brought researchers to the point of being able to spot subtle clues in images and videos manipulated by generative adversarial networks (GANs) – clues that allow them to detect the presence of alterations. GANs are a class of AI algorithms used in unsupervised machine learning, implemented as two neural networks contesting with each other in a zero-sum game. GANs can generate photographs that often look, at least superficially, authentic to human observers.
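For readers who want a concrete picture of that zero-sum contest, here is a minimal, hypothetical sketch in Python using PyTorch (none of this code comes from MediFor or the article): a generator learns to produce samples that a discriminator cannot tell apart from real data, while the discriminator learns to catch the fakes.

```python
# Minimal GAN sketch (illustrative only): generator vs. discriminator in a zero-sum game.
import torch
import torch.nn as nn

latent_dim = 16

# Generator: maps random noise to a fake "sample" (a 64-dim vector standing in for an image patch).
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 64))
# Discriminator: outputs the probability that a sample is real rather than generated.
D = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_data = torch.randn(256, 64) * 0.5 + 1.0   # stand-in "real" distribution

for step in range(1000):
    # Train the discriminator to tell real samples from generated ones.
    z = torch.randn(64, latent_dim)
    fake = G(z).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator to fool the discriminator.
    z = torch.randn(64, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

As the two networks push against each other, the generator's output drifts toward samples the discriminator can no longer reliably reject – which is exactly why GAN-made faces can fool human observers.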

Though it seems logical in hindsight, I never thought of eyelids as such a useful detector of deepfake videos. Faces made with deepfake techniques rarely, if ever, blink, and when they do blink, it looks fake. That's because deepfakes are trained on still images rather than on video, and stills typically show people with their eyes open.
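To illustrate how a blink-rate check might work in practice, here is a hedged Python sketch. It assumes a facial-landmark detector has already produced six (x, y) points per eye for each video frame (obtaining those landmarks is outside the sketch), computes the standard eye aspect ratio, and flags clips whose blink rate is implausibly low. The thresholds and function names are illustrative, not MediFor's actual method.

```python
# Hypothetical blink-rate cue: flag videos whose subjects almost never blink.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of 6 (x, y) landmarks around one eye, in the common 6-point layout."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_thresh=0.21, min_closed_frames=2):
    """Count blinks as runs of consecutive frames where the eye aspect ratio drops below threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    return blinks + (1 if run >= min_closed_frames else 0)

def looks_suspicious(ear_per_frame, fps, min_blinks_per_minute=5):
    """People normally blink many times a minute; a near-zero rate is a red flag."""
    minutes = len(ear_per_frame) / (fps * 60.0)
    return count_blinks(ear_per_frame) / max(minutes, 1e-9) < min_blinks_per_minute
```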

Other cues include strange head movements or odd eye color: physiological signals that, at this point, are tough for deepfakes to mimic, according to Hany Farid, a leading digital forensics expert at Dartmouth College.

Skilled forgers can overcome the eye-blink issue – all they have to do is use images that show a person blinking. But researchers have developed a technique that's even more effective. They're keeping it under wraps for now as the DoD works to stay ahead in this fake-image arms race.

The advantage is presumably not what NBC News reported on in April – that MediFor's tools are also picking up on deepfake differences that aren't detectable by the human eye. For example, MediFor's technology can run a heat map to identify where an image's statistics – known as a JPEG dimple – differ from the rest of the photo.

One example is a heat map that highlights the part of an image – of race cars – where pixelation and image statistics differ from the rest of the photo, revealing that one of the cars was digitally added. In another example, MediFor's tools picked up on anomalous light levels and an inconsistent direction of lighting, showing that the original shots were taken at different times before being digitally stitched together.
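As a very rough illustration of the heat-map idea – and emphatically not MediFor's actual tool – the following Python sketch recompresses an image as JPEG and maps where the recompression error stands out from the rest of the picture. A spliced-in region often carries a different compression history, so its blocks tend to light up. The file name, quality setting, and threshold are assumptions for illustration.

```python
# Hypothetical recompression heat map: highlight blocks whose compression statistics
# differ sharply from the rest of the image (a crude stand-in for the reported technique).
import io
import numpy as np
from PIL import Image

def recompression_heatmap(path, quality=90, block=8):
    original = np.asarray(Image.open(path).convert("L"), dtype=np.float32)

    # Re-save at a fixed JPEG quality, reload, and measure the per-pixel error.
    buf = io.BytesIO()
    Image.fromarray(original.astype(np.uint8)).save(buf, format="JPEG", quality=quality)
    recompressed = np.asarray(Image.open(buf), dtype=np.float32)
    error = np.abs(original - recompressed)

    # Average the error over 8x8 blocks (the JPEG grid) to get a coarse heat map.
    h, w = (error.shape[0] // block) * block, (error.shape[1] // block) * block
    blocks = error[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))

    # Blocks far above the image-wide median error are candidates for spliced content.
    return blocks, blocks > (np.median(blocks) + 3 * blocks.std())

# Example (hypothetical file name):
# heatmap, suspicious_blocks = recompression_heatmap("race_cars.jpg")
```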

We've been following such research for years, so I agree with the post that MediFor probably has another breakthrough that it prefers not to make known just yet. I reckon we will hear more and more about MediFor's work. A tip of my hat to MediFor for figuring out the digital forensics equivalent of a lie detector test for images – one that can help reveal the lies and those who perpetrate them.

E-mail: Phone: 703-359-0700
Digital Forensics/Information Security/Information Technology
https://www.senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson