Should You Believe What Your Own Two Eyes Are Seeing? The Malicious Intent of Deepfakes

Reading Time: 6 minutes
Posted: 1st April 2021 by Jaya Harrar

How do you know, when you are watching a YouTube video, that the people you are seeing are real?

In the past, it would have been obvious to the naked eye, but with the rapid advancement of AI in the last few years, it has become increasingly difficult to tell whether the person you are seeing on video is real.

It sounds almost unbelievable – perhaps something from a futuristic sci-fi movie. But we have seen shocking and slightly terrifying examples of ‘deepfakes’ over the past few months. Where such manipulation may once have required an expert with a camera and highly technical software, a novice can now turn a photo of a deceased relative into a video of them speaking, simply via a phone app.

Video manipulation of this sort can cause an array of issues – most of them obvious. With fake videos of Obama calling Donald Trump a ‘complete dipshit’, Mark Zuckerberg bragging about having ‘total control of billions of people’s stolen data’ and a fake Tom Cruise practising golf on TikTok, the FBI has issued a blunt warning: "malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months". ‘Synthetic content’ refers to any manipulated content – visual (videos and photos) or verbal (text and audio) – including deepfakes.

The apprehension towards deepfakes has been around for a while. In 2018, only a year after deepfakes first emerged on the internet, we were already questioning whether videos could be told apart as real or fake. Back then, the quality of the content made it obvious it had been tampered with. “In January 2019, deepfakes were buggy and flickery”, Hany Farid, a UC Berkeley professor and deepfake expert, told Forbes.

Deepfake content online is also growing at a rapid rate. According to a report by Deeptrace, at the start of 2019 there were 7,964 deepfake videos online; nine months later, that figure had risen to 14,678. “I’ve never seen anything like how fast they’re going. This is the tip of the iceberg”, explained Farid.

From threats to public safety to fake news and revenge porn, deepfakes have caused an uproar. Reactions range from the sentimental – people bringing old pictures of deceased relatives to life – to the distressing, with victims bearing the brunt of doctored videos that show them doing things they have never done. This month, a mother was charged with multiple counts of harassment after reportedly using explicit deepfake photos and videos to try to get her teenage daughter's cheerleading rivals kicked off the team – a case that shows how easily anyone can manipulate content for a deleterious motive.

Not all is doom and gloom, as this new phenomenon is also seen as the ‘future of content creation’. A few months ago, TV viewers in South Korea tuned in to the MBN channel to catch the latest news, but the usual news anchor, Kim Joo-Ha, wasn't actually on screen – what they saw was a deepfake of her. Viewers had been informed beforehand that this Joo-Ha wasn’t real, and the channel has since considered continuing to use deepfakes for some breaking news reports. South Korean media reported a mixed response: some people were amazed by the realism, while others worried that the real Kim Joo-Ha might lose her job. Despite the possible negative outcomes, this, like any technological advancement, showcases the good that could come from innovative AI, especially for businesses.


"This is the future of content creation”, Synthesia’s - a London-based firm that creates AI-powered corporate training videos - chief executive and co-founder Victor Riparbelli said to the BBC.

Research conducted by Kietzmann et al has shown that there are clear business opportunities in this AI development. Some are still to materialise, while others are already being realised – such as Synthesia’s corporate videos, which allow in-house training content to be translated into a plethora of different languages. Their research splits deepfakes into five categories: voice swapping, which could make audiobook recording far easier; text-to-speech, which could give a voice back to those (such as stroke victims) who have lost the ability to speak intelligibly; video face-swapping, which powers the aforementioned in-house training videos as well as the University of Southern California Shoah Foundation’s Dimensions in Testimony project, where visitors ask questions that prompt real-time responses from Holocaust survivors in pre-recorded video interviews; full-body puppetry, which could bring enhancements in gaming and perhaps stunt doubles in movies; and lip-synching, which could make dubbed films far easier to watch.

Evidently, the rise of deepfakes offers a host of exciting possibilities, but we cannot ignore the issues they may bring.

Where does the law stand on deepfakes?

Deepfakes have the potential to be a national security risk. A prime example is their power to destabilise politics – consider the unrest caused in Gabon by mere suspicion that a televised address from the country’s President was a deepfake. Last month, a political group in Belgium released a deepfake video of the Belgian prime minister giving a speech that linked the COVID-19 outbreak to environmental damage and called for drastic action on climate change. Some viewers would have believed the video was real. Not only is there the potential to spread fake news; there is also the potential for civil unrest, a rise in cybercrime, revenge porn, fake scandals, and an increase in online harassment and abuse. Even video footage could become useless as evidence in court.


New laws to regulate the use of deepfakes will be important for people who have damaging videos made of them. Take ‘Photoshop laws’, for example. In the U.S., the FTC has enforced truth-in-advertising laws for many years but has been slow to respond to image retouching. In other countries, however, legislators and regulators are beginning to take action, with laws such as Israel’s Photoshop Law requiring advertisers to label retouched images. Even though these laws address body image and its mental health impacts rather than cybercrime or political warfare, they exist to reduce the risk of consumers being deceived. If expanded to deepfakes, they could require content creators to state when a video is not real. Producers and distributors could also easily fall foul of existing law, as deepfake content can infringe copyright, breach data protection law, and be defamatory if it exposes the victim to ridicule. The laws on revenge porn could also apply to deepfakes such as those created on Telegram – predominantly a messaging app, but one that hosts autonomous programmes (referred to as “bots”), including one that can digitally synthesise deepfake naked images[1]. A report found that 70% of Telegram users use its deepfake bot to target women and that, as of the end of July 2020, at least 104,852 fake nude images had been shared.

The regulation of such technology demands the law to be responsive and in tune with complex, evolving trends. But, as we all know, the law is often slow to respond. The UK is considering legislation whereby social media platforms could face fines for facilitating such content. This proposal could make companies such as Telegram take more responsibility for the safety of their users and tackle harm caused by content or activity on their services. Progress has faltered, however, and the legislation may not be passed until 2023, perfectly showcasing how slowly legislation changes. A similar approach could be taken by the U.S. if it were to amend Section 230 of the Communications Decency Act – a law which gives internet companies almost complete civil immunity for content posted on their platforms by third parties – but the idea has been discussed time and time again and remains a can of worms for speech censorship. The state of California has already experimented with banning deepfakes, enacting a law last year that made it illegal to create or distribute deepfakes of politicians within 60 days of an election[2]. However, the First Amendment, which protects citizens’ right to freedom of expression, may make it difficult to expand such bans any further.


In their aforementioned research, Kietzmann et al propose a possible solution, referred to as the R.E.A.L. framework: Record original content to ensure deniability, Expose deepfakes early, Advocate for legal protection, and Leverage trust to counter credulity. “Following these principles, we hope that our society can be more prepared to counter deepfake tricks as we appreciate deepfake treats”, the paper states.

People need to be cautious when it comes to viewing content, now more than ever. More importantly, however, the law needs to be clearer on what is legal and illegal when it comes to deepfakes. From commercial questions, such as who would own the rights to a deepfake video of a deceased celebrity, to cybercrime concerns amplified by the anonymity and borderless nature of the internet, this will be a tricky area of law for governments to untangle.

[1] https://theconversation.com/can-the-law-stop-internet-bots-from-undressing-you-149056

[2] https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/?sh=5eb6e6177494
