Self-Sovereign Identity for more Freedom and Privacy - SelfKey

Deepfake is a relatively new technology that sounds like something straight out of an episode of Black Mirror (in fact, deepfake technology appears in several episodes of the popular Netflix show). Much like any new technology, deepfakes carry potentially devastating consequences for both identity and privacy.

Using artificial intelligence, deepfakes can replicate your voice, your face, and your mannerisms to create a video or image that looks and sounds like you, but isn’t you. You can see it in action in this YouTube video of Bill Hader impersonating Tom Cruise, which went viral earlier this month. The implications of this technology are staggering. It’s undeniable that deepfake is going to have a massive impact on the future of digital identity. Let’s look at how.

What is deepfake?

Deepfake sounds a little bit like the newest Bond villain - it’s a portmanteau of “deep learning” and “fake”. In very simple terms, deepfake combines and superimposes existing images and videos onto source images and videos. For example, using deepfake technology, I could take a video of you and turn it into a convincing video of Ryan Gosling. However, while the term itself is relatively new, the underlying technology has been developing for years.

In 2014, a graduate student named Ian Goodfellow invented the Generative Adversarial Network (GAN). A GAN pits two neural networks against each other: a generator that produces synthetic samples, and a discriminator that tries to tell those samples apart from real data. As the two compete, the generator’s output becomes increasingly convincing. For example, using a GAN trained on all of your selfies, I could generate a new selfie of you that isn’t a copy of any of the others. GAN goes beyond replication: it creates something brand new that is still an accurate representation of the source material. GANs were mostly used for artificial intelligence research until 2017.
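To make the adversarial idea concrete, here is a deliberately tiny, illustrative sketch - not a real deepfake model, and nothing like the deep convolutional networks used in practice. A one-dimensional “generator” with just two parameters learns to imitate samples from a target distribution, purely because a logistic-regression “discriminator” keeps penalizing it for looking fake. All names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to avoid overflow in np.exp for extreme logits
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

# "Real" data: samples from N(4, 1). The generator g(z) = w*z + b
# should learn to produce samples that look like they came from it.
def real_samples(n):
    return rng.normal(4.0, 1.0, n)

w, b = 1.0, 0.0   # generator parameters (starts far from the target)
a, c = 0.1, 0.0   # discriminator: d(x) = sigmoid(a*x + c)

lr = 0.01
for step in range(4000):
    z = rng.normal(0.0, 1.0, 64)   # noise fed to the generator
    fake = w * z + b
    real = real_samples(64)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(a * x + c)
        grad_logit = p - label               # cross-entropy gradient w.r.t. the logit
        a -= lr * np.mean(grad_logit * x)
        c -= lr * np.mean(grad_logit)

    # Generator step: push d(fake) toward 1 by adjusting w and b.
    p = sigmoid(a * fake + c)
    grad_fake = (p - 1.0) * a                # chain rule through the discriminator
    w -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

fake_mean = float(np.mean(w * rng.normal(0.0, 1.0, 10_000) + b))
print(f"mean of generated samples: {fake_mean:.2f} (target: 4.0)")
```

The same push-and-pull, scaled up to deep networks and trained on images of a person’s face, is what lets a deepfake generator produce frames the discriminator - and eventually the human eye - can no longer flag as fake.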

The deepfake name actually comes from a Reddit user who went by the username u/deepfakes. In 2017, u/deepfakes uploaded several digitally altered pornographic videos “starring” popular female celebrities. While they didn’t necessarily go viral, the videos were extremely popular, and large communities formed around deepfakes. To create these videos, u/deepfakes used open source technology and subsequently created an app called FakeApp to allow others to do the same. FakeApp has had its website taken down, but you can still find it online if you look hard enough. The original videos were later removed from Reddit for breaking rules regarding consent, and the associated communities were banned from the site as well.

The technology has continued to develop, becoming easier to use and, in many ways, more commonplace. Memoji on the iPhone relies on related face-tracking technology. And of course, the film industry has been using similar techniques for a long time. You’re probably familiar with the death of actor Paul Walker while filming Furious 7 in 2013. Normally, the death of a major actor would make finishing a movie impossible, especially since some of Walker’s scenes hadn’t been shot yet. However, with the help of Walker’s two brothers as stand-ins and a talented VFX team, the missing scenes were created and the movie was a hit.

There are countless other examples, one being the appearance of a young Carrie Fisher at the end of Rogue One, despite the fact that the actress was sixty years old at the time. Similar technology was also used for the infamous “walk of shame” scene in Game of Thrones, in which Cersei Lannister walks naked through the streets of King’s Landing. Lena Headey, who played Cersei, used a body double, and her face was superimposed onto the other actress’ body.

Modern technology, modern problems

The possible implications of deepfake are, quite frankly, terrifying, and the ethics of such a technology are already being examined. Understandably, deepfake pornography, and deepfake revenge porn in particular, is a big fear for many. Imagine someone creating a pornographic deepfake video starring you and threatening to send it to your family, friends, or employer, or even to upload it to porn sites. You would know that it is not you in the video, but as the technology develops, the truth becomes harder to see. So far, deepfake porn has overwhelmingly targeted women; deepfakes of men are more often political or comedic in nature.

Even scarier are the political implications of deepfakes, which we have already seen. In May, a doctored video of House Speaker Nancy Pelosi was released, making it look like she was impaired. The video, while not a strict deepfake, had been edited by slowing down the original audio while keeping the rest of the video the same, making it look like Speaker Pelosi was drunk. The video was retweeted by President Trump and his personal lawyer, Rudolph Giuliani (interestingly, Facebook refused to take the fake video down). Buzzfeed and Jordan Peele made this video of Barack Obama calling Donald Trump a “dipsh*t” to raise awareness about deepfakes.

With the rise of deepfakes, fake news is becoming an even more serious threat. Deepfakes have the potential to divide people further, with each side creating media to support its own argument. Especially in the aftermath of the 2016 US presidential election and alleged Russian interference, there is a real worry that deepfakes could play a role in the upcoming 2020 election.

There is also the very real possibility that genuine photos and videos are dismissed as deepfakes. What if a video of a world leader announcing a nuclear attack were released? It could be doctored, or it could be real, and there would be no reliable way to tell. Deepfake technology could have devastating consequences for democracy. In a world where deepfakes exist, how do we know what is real anymore?

Combating technology with technology

Deepfakes were easy to spot in their early iterations, but in the last two years the technology has vastly improved and become much easier to use. The process can be as simple as downloading an app on your phone, and it is becoming harder to discern what is a deepfake and what isn’t. Given how fast the technology is developing, it is nearly impossible for researchers combating deepfakes to keep up.

To fight deepfakes, some countries are moving to outlaw them, or at least certain uses of them. The Pentagon is working with several research institutions to develop technology that can spot deepfakes, but the process is slow. Additionally, it is unclear how detection technology will be used once it is developed. Who will use it, how it will be applied, and how it can be scaled are important questions that still need answers.

Part of the answer here is blockchain. Currently, there is an app called Truepic that verifies photos and videos through twenty different checks to ensure they are real and unaltered. After an image or video is verified, a record of it is written to the blockchain. This allows anyone to check a suspicious copy or recreation against the original record. The technology is particularly useful for journalists and NGO workers who need to ensure that what they are sharing is legitimate. Truepic is a relatively young company, but it is a step in the right direction and has a lot of potential to help counter deepfakes.
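The core primitive behind this kind of verification can be sketched in a few lines. Truepic’s actual twenty checks are its own; the hypothetical example below only illustrates the general idea of anchoring a cryptographic fingerprint of a file and comparing later copies against it.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest identifying this exact sequence of bytes."""
    return hashlib.sha256(data).hexdigest()

# At capture time: hash the original file. In a blockchain-backed scheme,
# this digest (not the file itself) is what gets recorded on-chain.
original = b"raw bytes of the photo as captured"
anchored_digest = fingerprint(original)

# Later: anyone can hash a circulating copy and compare it to the record.
faithful_copy = b"raw bytes of the photo as captured"
tampered_copy = original + b"\x00"   # even a one-byte change breaks the match

print(fingerprint(faithful_copy) == anchored_digest)   # True  -> unaltered
print(fingerprint(tampered_copy) == anchored_digest)   # False -> altered or fake
```

Because the digest lives on a public ledger that no single party controls, the record itself cannot be quietly edited after the fact - which is precisely what makes this approach useful against doctored media.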

While blockchain does not completely solve the problem of deepfakes, it makes it easier to verify information, which is an important part of ensuring that what we are seeing is real. In some ways, blockchain is well suited to this kind of work because it is a public record that does not belong to any person or government - once written, the information cannot be quietly altered.

Conclusion - Deepfake is reshaping our digital identity

To some extent, deepfake technology has generated a modern-day arms race. The technology is developing so fast that those trying to detect and counter it have to work even faster. Blockchain is part of the answer, but it is not the complete solution. That being said, it is a ray of hope at a time when it feels like we may have no control over how we appear to the rest of the world.

However, concerns still remain, and rightfully so. Deepfakes are redefining our digital identity and increasing the risk of our data being stolen. And deepfakes are only the beginning; there will inevitably be something else to worry about in the next few years. Now, more than ever, it’s vital that we have control over our digital identity and who has access to it.

While deepfakes can be humorous in nature, they also serve as an important lesson. The internet has made it possible for us to connect to people all over the world in a number of ways. We can make friends with someone on the other side of the globe now, just through talking to them online. Deepfake encourages us to be more cautious with what we share, and who we share it with. The repercussions are frightening.

SelfKey is a fast-growing DAO developing digital identity solutions. The DAO seeks to empower individuals and corporations to take back ownership of their identity data.

© 2017- 2023 by SelfKey