
Reenactment and Deepfakes by John R. Patrick

Written: April 2023

Facial recognition issues go much broader and deeper than surveillance. A team of researchers funded by the German Research Foundation published a paper called “Face2Face: Real-Time Face Capture and Reenactment of RGB Videos.” What the researchers have done is mind-boggling. They call it real-time facial reenactment. A reenactment involves two separate videos: one is called the “source” and the other the “target.” To demonstrate a reenactment, one of the researchers served as the source. The target could be a famous movie star or a world political leader. The source video is used to “reenact” the video of the target. Suppose one video frame showed the target with his mouth closed. Using the research technology, the frame was modified based on the video of the source and, voilà, the target has his mouth open.


The German research is profound. The researchers said, “Our approach is a game changer.” The technology enables non-experts to edit videos in real time on a home PC. In 1994, the cover of Scientific American showed a picture of Marilyn Monroe standing beside Abraham Lincoln. It demonstrated what was possible with computer graphics, and it was stunning at the time. The technology demonstrated by the German researchers could be called face manipulation. It can bring “Fake News” to a whole new level. The target video of a world leader might say, “I recommend we look for a way to have peace.” The reenactment could show the leader saying, “I recommend we look for a way to go to war.” A close inspection of the video would show the leader’s mouth forming those words. The researchers said, “We hope that the numerous demonstrations of our reenactment systems will teach people to think more critically about the video content they consume every day, especially if there is no proof of origin.” The danger is that people watching a reenacted video on social media may find it so realistic they will not make the effort to verify its original source.


The German researchers showed how AI technology could create a reenactment. A street-level term for this is deepfake. Advances in technology have added many new words to the dictionary, and deepfake is one of them.


A deepfake is synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. Deepfakes are created using an AI technique called deep learning. A powerful computer in the cloud is fed large amounts of audio and video of a real person. In other words, the AI is trained to look or sound like that person. The computer then uses what it has learned to create a new image or video which can be nearly indistinguishable from the real thing.
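The training idea behind many face-swap deepfakes is a shared encoder paired with one decoder per person. The toy sketch below illustrates that arrangement only; every name, dimension, and piece of data in it is an illustrative assumption. A real system would use deep convolutional networks trained on huge sets of face crops, not tiny linear maps on random vectors.

```python
import numpy as np

# Toy sketch of the shared-encoder / two-decoder idea behind face-swap
# deepfakes. Real systems use deep convolutional networks on face crops;
# here tiny linear maps on random vectors stand in, purely as illustration.
rng = np.random.default_rng(0)
D, H = 64, 8                        # flattened "image" size, latent size

enc = rng.normal(0, 0.1, (H, D))    # shared encoder: common face structure
dec_a = rng.normal(0, 0.1, (D, H))  # decoder specialized for person A
dec_b = rng.normal(0, 0.1, (D, H))  # decoder specialized for person B

faces_a = rng.normal(0, 1, (200, D))  # stand-ins for face crops of person A
faces_b = rng.normal(0, 1, (200, D))  # stand-ins for face crops of person B

def mse(x, y):
    return float(np.mean((x - y) ** 2))

loss_before = mse(faces_a @ enc.T @ dec_a.T, faces_a)

lr = 0.01
for step in range(500):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc.T            # encode with the shared encoder
        recon = z @ dec.T            # decode with that person's decoder
        err = recon - faces
        # plain gradient descent on the reconstruction error
        g_dec = (err.T @ z) / len(faces)
        g_enc = (dec.T @ err.T @ faces) / len(faces)
        dec -= lr * g_dec
        enc -= lr * g_enc

loss_after = mse(faces_a @ enc.T @ dec_a.T, faces_a)

# The "swap": encode a frame of person A, then decode with B's decoder,
# rendering B's appearance with A's pose and expression.
fake_b = (faces_a[:1] @ enc.T) @ dec_b.T
print(loss_before, loss_after, fake_b.shape)
```

Because the encoder is shared, it learns pose and expression common to both people, while each decoder learns one person’s appearance; swapping decoders is what produces the fake.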


Deepfakes can be used for a variety of harmless purposes including entertainment, satire, and benign propaganda. Unfortunately, deepfakes can also be used to create fake news and spread misinformation. As deepfake technology becomes more sophisticated, it will become increasingly difficult to tell real videos from fake ones. In the harmless category, a video was created showing Queen Elizabeth making holiday wishes. The clothing, the hat, the face, and the body all looked like the Queen’s, but it was not her. You could not tell until the end of her speech, when she danced an Irish jig with a fast tempo and lively steps. It was hilarious.


In Friday morning’s WSJ it was reported that the Federal Reserve chief received a phone call from a person who sounded exactly like Volodymyr Zelenskyy. They actually had a conversation. The Fed stated no confidential information was exchanged, but it certainly could have been. There is significant concern about the potential for deepfakes to be used to manipulate public opinion and damage people’s reputations. Some politicians routinely make up names for their opponents. With deepfake technology, they will be able to put words in the mouths of their opponents. In 2021, social media carried a video of President Biden altered to make him appear to stutter uncontrollably. I predict we will see examples of deepfakes during the 2024 presidential race.


We should all be aware of deepfakes and be on the lookout for them. One way is to look for inconsistencies in a video, such as unnatural movements or changes in lighting. We should also consider the source of what we hear or see. If the audio, video, or image comes from a source you trust, it is more likely to be real. However, even trusted sources can be hacked. The bottom line is we should be skeptical of anything we see on social media posted by a political action committee (PAC) or any person affiliated with a political party or candidate.
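As a toy illustration of the lighting-inconsistency idea above, one could flag frames whose average brightness jumps abruptly between consecutive frames. This is not a real deepfake detector; the function name, the threshold, and the synthetic “video” below are all illustrative assumptions.

```python
import numpy as np

# Minimal illustration (not a real detector) of one inconsistency check:
# abrupt lighting changes between consecutive video frames. Frames are
# stand-in grayscale arrays; the threshold is an arbitrary assumption.
def lighting_jumps(frames, threshold=0.2):
    """Return indices of frames whose mean brightness jumps more than threshold."""
    means = np.array([f.mean() for f in frames])
    deltas = np.abs(np.diff(means))
    return [int(i) + 1 for i in np.flatnonzero(deltas > threshold)]

# Synthetic "video": steady brightness, then a sudden jump at frame 3.
rng = np.random.default_rng(1)
frames = [rng.uniform(0.4, 0.5, (8, 8)) for _ in range(3)]
frames += [rng.uniform(0.8, 0.9, (8, 8)) for _ in range(2)]
print(lighting_jumps(frames))  # → [3]
```

Real forensic tools combine many such signals, such as blink rates, face-boundary artifacts, and audio-lip mismatch, rather than relying on any single heuristic.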


Epilogue: I used Google’s Bard generative AI to fact-check some parts of this article. Bard is a helpful assistant. More and more content is going to come from generative AI. We need to be careful and skeptical about everything we see, hear, or read.