Deepfakes: Don't believe your eyes!
Forgery has been around since time immemorial. Comrades who had fallen from grace with Stalin were removed from photographs, models are given wasp waists, and Aunt Tilda suddenly loses weight via Photoshop. It's therefore fair to say digital images should be viewed with a healthy dose of skepticism. So far, videos have proven more resistant to manipulation, and when they were tampered with, the changes were easy to spot. Researchers from Carnegie Mellon University have now developed a method that may usher in a new era of forgery: artificial intelligence that autonomously creates fakes which leave me speechless.
The technology allows content (like movement and facial expressions) from one video to be superimposed onto another - with stunningly realistic results! In one example, an Obama interview was layered on top of a Trump interview, with Trump now delivering Obama's lines in perfect sync. The developers also took the facial expressions of US television host John Oliver and applied them to his late-night colleague Stephen Colbert - including minute details like nods, smiles and blinks. What distinguishes these videos from common fakes is that they were created almost fully autonomously by AI, with little need for human intervention. While in the past entire teams spent countless hours manipulating historic recordings, e.g. for Forrest Gump, AI is now taking over. Animating the corner of a mouth once required highly skilled specialists; today, computers animate entire faces (and very soon whole people and complex scenes) on their own. Yes, if you look closely you can spot minor flaws, but the technology is still in its infancy, after all. And don't forget: AI never rests, it constantly learns and develops. I bet my boss would love for me to be the same way!
Let me present an analogy to shed some light on how AI learns. Imagine criminals seeking to counterfeit money but lacking much of the required technical skill. Their first amateurish bills get circulated - and are quickly spotted by the police, who then issue a press statement. In it, they outline how to spot the counterfeit money. The counterfeiters study the facts, recognize their mistakes and create the next iteration of forged money. The police again detect the bills, and after more extensive research issue another press statement, and the next cycle begins. There are two adversaries here competing with each other while generating new, insightful data, which is why this approach is called a "Generative Adversarial Network" (GAN). The generator (the counterfeiters) forges money, and the discriminator (the police) detects errors and issues a statement, which is then analyzed, after which the next round commences. The training is considered complete only once the discriminator has no more objections.
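For readers who like to see things in code, here is a tiny sketch of that adversarial loop. To be clear, this is not a real neural network - no learning weights, no gradients - just the counterfeiter/police feedback cycle from the analogy, with all names and the "banknote" string invented for illustration:

```python
# Toy illustration of the adversarial loop from the analogy above.
# NOT a real GAN - just the feedback cycle: forge, critique, improve, repeat.

TARGET = "GENUINE-BANKNOTE-2018"  # what a "real" bill looks like (made up)

def discriminator(fake: str) -> list:
    """The 'police': list the positions where the forgery differs from a real bill."""
    return [i for i, (a, b) in enumerate(zip(fake, TARGET)) if a != b]

def generator(fake: str, flaws: list) -> str:
    """The 'counterfeiters': fix the flaws named in the press statement."""
    chars = list(fake)
    for i in flaws[:3]:  # they only manage to correct a few mistakes per round
        chars[i] = TARGET[i]
    return "".join(chars)

fake = "X" * len(TARGET)  # the first, amateurish attempt
rounds = 0
while discriminator(fake):  # loop until the police have no more objections
    fake = generator(fake, discriminator(fake))
    rounds += 1

print(rounds, fake)  # prints: 7 GENUINE-BANKNOTE-2018
```

In a real GAN, both sides are neural networks and the "press statement" is a gradient signal flowing from the discriminator back to the generator - but the back-and-forth structure is exactly this loop.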
These networks can even be creative on their own! On October 25, 2018, the famous auction house Christie's in New York auctioned off a portrait of Edmond de Belamy for $432,500. True, there are more expensive pictures and more famous artists; still, the auction caused quite a sensation. That's because Edmond de Belamy is not human! A generative adversarial network had dabbled in painting - with great success. Fittingly, the painting is signed not with a name but with the mathematical formula of the algorithm used during its creation. And we're already one step further. Songs entirely composed by AI are already on the market, and AI-driven movies, computer games and programs for self-driving cars are in the works. Naturally, the use of AI can be limited to certain aspects of a project, if needed. Picture a computer game that has players roam through gigantic virtual worlds. Until now, designing these landscapes was incredibly labor-intensive. For example, the game "Just Cause 4" features a freely explorable world of 1,024 square kilometers with intricate details, from bumpy roads to single shrubs and various kinds of wildlife. Its creation could now be left entirely to AI, with QA being the only human element in the development process. This could save millions in development costs.
Still, despite the positives there's always potential for abuse, namely deepfakes in the form of images or videos that look deceptively real. As the aforementioned Obama and Trump example shows, it's already difficult to identify fakes today. And, bearing in mind the observed pernicious effects of fake news in social networks, this technology could have fatal consequences in the wrong hands. How quickly could a fake video depicting a violent crime bring down a politician? How long would it take intelligence agencies to manufacture videos of crimes allegedly committed by an opposing state? And how long would it take us to get the nasty images out of our heads should they turn out to be fake later on? It seems we'll have to face these questions very soon. AI developers are aware of this danger and are trying to develop new analytical methods, parallel to their AI research, to detect deepfakes. I fear that, in spite of their efforts, not everything can be stuffed back into the Pandora's box this technology has already opened. So be wary when you encounter videos of Trump singing the Russian national anthem at the top of his lungs or the Pope boldly tapdancing in front of a huge crowd!
What I would like to know: Will you approach "revealing" videos online with more distrust from now on? Or do you only believe half of what you see anyway?