Making Deepfakes Mainstream

Irene Kim
4 min read · Jul 15, 2021

Recently, we wrote about deepfakes following the popularity of the TikToks of “Tom Cruise” performing random tasks. These videos rely on deepfakes, synthetic media created with a branch of artificial intelligence known as deep learning, and they have been entertaining for all of us, especially during the pandemic. But they have also raised larger questions about the prevalence and impact of these technologies in the future. What if, instead of Tom Cruise randomly eating a lollipop, it was the President of the United States commenting on sensitive international relations, with the potential to spark a years-long conflict? As we discuss the world of deepfakes and explore the pace at which this technology is developing, it’s important to consider how it will impact politics and civic society.

We’ve already seen shallowfakes, or manual manipulations of existing content (e.g. video clips), playing out in disturbing ways that highlight how such fakes could pervade our thought processes, information sources, and belief systems. Some have even raised concerns that the focus on deepfakes may be overshadowing the more immediate threat posed by shallowfakes. The video of U.S. Speaker of the House Nancy Pelosi, slowed down so that she appeared drunk and slurred her words, is an example of this manually altered content. Deepfakes are produced differently: machine learning algorithms generate the content itself. Rather than an individual manually editing clips together, artificial neural networks model the shape of a person’s mouth to create the appearance that they’re saying things they never said. This is even more concerning for public figures, who regularly appear on television, at conferences, and in other public engagements, and thus supply an expansive pool of input data for these networks to learn from.
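To make that concrete, here is a minimal sketch of the classic face-swap architecture often associated with deepfakes: a single shared encoder learns a common facial representation, and a separate decoder is trained for each identity, so a swap amounts to routing one person’s encoding through the other person’s decoder. This is an illustration under stated assumptions, not the pipeline of any particular app: it assumes PyTorch, and the network sizes and random stand-in tensors are purely for demonstration.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder face-swap idea.
# Sizes and data are toy placeholders, not a production pipeline.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the shared latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training step (sketched): each decoder learns to reconstruct its own
# person's faces from the shared encoding.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
recon_a = decoder_a(encoder(faces_a))
loss = nn.MSELoss()(recon_a, faces_a)
loss.backward()

# The "swap": encode person A's face, decode it as person B.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

Real systems use far deeper networks trained on thousands of aligned face crops, which is precisely why public figures with hours of available footage make the easiest targets.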

The technology for creating deepfakes is also becoming more prevalent with the advent and popularity of easily downloadable deepfake apps. One app lets you swap faces with a celebrity or recreate famous movie scenes using a pre-trained generative model. Another app, Jiggy, lets users create GIFs of themselves performing different dances, all based on a single photo of their body. Though on their face these apps are benign and all in good fun, it isn’t difficult to imagine how they could expand and give a much larger audience a way to generate such content. The CEO and co-founder of one of the most popular deepfake apps, Reface, has emphasized the need for responsibility in bringing this technology to market. That is why, in particular, the app limits users to existing clips and GIFs rather than allowing them to generate or upload their own videos. And though user uploads will soon be available on the app, Reface has committed to countering the risk with a mechanism to review and detect fake content.

As apps like Reface emerge and quickly launch in the market, the expansion of deepfakes into our everyday lives will have profound consequences that need to be contemplated by multiple stakeholders, including the tech world and government agencies. We have already seen shallowfakes fuel fake news and misinformation. Countering problematic content created with deepfakes will require a coordinated approach: government regulation, active public-private partnerships, and broader awareness among the general population.

Fortunately, this issue has been at the forefront of the political conversation internationally. In April, the European Commission proposed the first legal framework on artificial intelligence: the Artificial Intelligence Act. Using a risk-stratification approach to place different technologies on a spectrum from “let’s ban this” to “let it be”, the proposed Act categorizes certain deepfakes at the limited-risk level, which carries transparency obligations. Under the Act, people in the EU would have the right to know when they are watching a deepfake video. Given the future of mainstream deepfakes, there will need to be more collaboration, research, and policy development moving forward. And, while contemplating the issues posed by the proliferation of deepfakes, we must not lose sight of the existing problems posed by shallowfakes. In fact, understanding the similarities and differences between the two will be critical to ensuring that such technology is used responsibly and ethically.

Here at Persolv AI, we hope to provide everyone (including non-coders interested in policy, history, the humanities, and more!) with the strong technical underpinnings needed to think about the mainstream proliferation of deepfakes and the challenges it will pose. Persolv AI Bootcamp, a class taught by a handful of Stanford Lecturers and Teaching Assistants, will give you the most applicable fundamentals of Artificial Intelligence and Machine Learning so you can engage meaningfully with the impact of AI.
