What Is a Deepfake And How To Spot One | HP® Tech Takes (2024)

Introduction

This article seeks to answer two key questions: what are Deepfakes, which is straightforward to answer, and how to spot one, which is by far the trickier proposition. We’ll also look at the history of Deepfakes, the technology behind their creation, and some of their uses.

What is a Deepfake?

Simply speaking, a Deepfake is fake media content, typically a still image, video, or audio clip of one or more people, created by a computer system using artificial intelligence (AI) techniques.

The most common use is to generate videos. For example, a computer can superimpose one person’s facial images and voice patterns onto footage of another person, performing a face swap. The result is a Deepfake video of a person doing and saying things they have never done or said.

So what is a Deepfake? The term is simply a conjunction of “deep learning” and “fake”, hence “Deepfake”. The distinguishing feature of Deepfake media is that it appears convincing at first sight even though it portrays fictitious events.

What is Deepfake Technology?

Deepfake technology is specifically the use of machine learning techniques to create visual and audio content with the intention of deceiving the audience. The process uses deep learning, a machine learning technique in which an artificial neural network learns how to recreate an individual person’s movements and sounds by analysing real videos of that person.

The most typical method of creating Deepfake content uses two algorithms working in combination for a process of continuous refinement.

  • The first algorithm, known as the generator, produces fake content;
  • The second algorithm, known as the discriminator, then assesses the content and identifies all the data points that indicate the content is fake;
  • This information feeds back from the discriminator algorithm to the generator algorithm;
  • The generator algorithm’s machine learning-based processing then refines the content to resolve the tell-tale signs spotted by the discriminator algorithm.

This process continues until the discriminator algorithm can no longer determine that the content is fake. The two algorithms working in this combative partnership form what is known as a generative adversarial network.
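To make the adversarial loop above concrete, here is a minimal sketch of a generative adversarial network in PyTorch. The network sizes, learning rates and image dimensions are illustrative assumptions, not the architecture of any real Deepfake tool, but the structure mirrors the process described: the generator produces fakes, the discriminator scores them, and each result feeds back into the other’s training.

import torch
import torch.nn as nn

LATENT_DIM = 100      # size of the random noise vector fed to the generator (assumed)
IMG_PIXELS = 64 * 64  # flattened 64x64 greyscale face image (assumed)

# Generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores an image as real (close to 1) or fake (close to 0).
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One round of the continuous refinement loop described in the article."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # The generator produces fake content from random noise.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise)

    # The discriminator assesses real and fake content and learns the tell-tale signs.
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images.detach()), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # The discriminator's verdict feeds back to the generator, which refines its
    # output so that its fakes become harder to tell apart from real images.
    g_loss = loss_fn(discriminator(fake_images), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

Real Deepfake systems use far larger convolutional networks trained on many hours of footage of the target person, but the feedback loop between the two algorithms is the same.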

The History of Deepfakes

Manipulating photographs has been going on as long as photography has been around. Soon after the development of moving images, techniques to alter the pictures followed.

The movie industry has been at the forefront of manipulating images for artistic purposes, whether it’s superimposing computer-generated images using green screen technology or using a computer to automatically mask out an actor’s tattoos rather than relying on makeup to cover them over.

The first recorded use of Deepfake technology was in 1997, when a computer-generated image of a person’s face was animated in response to music so that the person appeared to be singing.

The late twenty-tens saw a series of advances in which researchers created convincing videos, including one system that allowed a computer-generated face to recreate a person’s facial expressions in real time. This period also saw Deepfake techniques enter general use: anyone with a computer and access to the software could create a Deepfake. Deepfake software is now widely available, including apps for mobile devices.

Channel 4 highlighted the capabilities of the technology on Christmas Day 2020 when it broadcast its traditional alternative Christmas speech using a Deepfake video of the Queen. The programme demonstrated that a broadcast-quality, four-minute video could be produced, and it ended dramatically by revealing the actor delivering the speech.

Why are Deepfakes Created?

Movie Making

The movie industry is always looking for methods of improving filmmaking. Deepfake technology offers the ability to include the likeness of a deceased actor in a film for continuity purposes, such as the digital recreations of Peter Cushing and a young Carrie Fisher in later Star Wars films.

The technology also has the potential to correct acting mistakes without having to reshoot an entire scene, potentially offering huge production cost savings.

In theory, it can also replace one actor with an entirely different actor if circumstances warrant a post-production change.

There is also the potential to improve dubbed films by subtly modifying the actor’s facial expressions to match the dubbed soundtrack and removing the sometimes-distracting mismatch between their mouth movements and the sounds.

Deception

Another widely seen purpose is to create videos of political figures making statements that could discredit them, whether as parody or with malicious intent. Typically, the fabricated figure makes a controversial or offensive statement that undermines their character or enables opponents to support a claim against their fitness for office. Such attempts at political sabotage have produced fake videos of Barack Obama and Donald Trump. However, none have stood up to scrutiny.

The technology can also create fictitious characters using AI-generated faces that appear to be real people. The purpose is to make political statements, deliver propaganda, or spread disinformation. These non-existent people are sock puppets that anonymous individuals or organisations use to convey controversial views or make personal attacks.

Criminality

From a security perspective, Deepfakes have the potential for use in phishing campaigns where hackers attempt to persuade a potential victim to click on a dangerous link or perform an action such as transferring money to the attacker’s account.

The ability to send a video message that appears to be from someone the victim knows has the potential to increase the success rate of such an attack. For example, an employee working in an accounts department receiving a video call that appears to be from a senior director instructing them to transfer funds would be significantly more compelling than an email or text message.

Adult-Orientated Content

The most common purpose is sadly content of an adult nature, creating videos of a specific person engaged in explicit acts. A survey of Deepfake videos undertaken at the end of 2019 found that over 96% were for this purpose. Almost all involved using a female celebrity’s image to generate the Deepfake.

Creating Doubt

A final observed purpose is for blackmail, or rather to counter blackmail. Deepfake technology is not yet at the stage where creating videos to blackmail a victim is a credible proposition; forensic analysis will quickly identify such videos as fake. However, where an individual is subject to blackmail through genuine video footage, creating multiple Deepfakes on the target’s behalf can cast doubt on the believability of the real video.

Do Deepfakes have Benefits?

Away from the movie industry, Deepfake technology does have useful and practical applications.

For example, a patient who is permanently unable to speak following a medical event may require a voice generation device to communicate. While these devices were initially robotic sounding, as demonstrated by the late Stephen Hawking, modern versions sound very lifelike.

Now, Deepfake audio technology can allow these devices to replicate the user’s voice using available recordings. The ability to retain their voice, with its unique accent and inflexions, offers significant long-term wellbeing benefits.

How Can You Spot a Deepfake?

When Deepfakes first appeared, their inferior quality made visual detection simple. Looking for lip-syncing issues, odd patches of skin tone, blurring around moving features, or unnatural movements will expose low-quality Deepfakes. However, the technology has now reached the point where it can generate convincing videos that look genuine to the viewer.

Looking at the believability of the content and tracing back the source of the video can help. If the video shows someone acting out of character or espousing views that run counter to their usual public persona, then you should be cautious. If the video is not from a credible and trustworthy source, then a question mark should hang over its legitimacy.

The problem is that people tend to believe anything that reinforces their personal views. Thus, even when experts expose the deception behind a Deepfake video, some people will still think it is authentic and distrust the evidence that it is fake. This is a societal problem that reaches far beyond synthetic media into the broader issue of fake news.

Technological solutions for detecting Deepfakes rely largely on the same deep learning algorithms that created them in the first place.

  • Looking for subtle inconsistencies and artefacts in video and audio data that the generation process creates will provide a means to detect it as a Deepfake.
  • Other techniques look for inconsistencies in the fine detail of the fake images, such as reflections or blink patterns, to spot evidence of computer generation processes behind the imagery.

The problem is that soon after a means of consistently and reliably spotting Deepfakes is found, updates to the generation software resolve the tell-tale issues that gave them away.
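As a rough illustration of the first detection technique above, the sketch below fine-tunes an off-the-shelf image classifier to label individual video frames as real or fake. The folder layout, model choice (a pretrained ResNet from torchvision) and training settings are illustrative assumptions, not the method behind any particular detection product.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# Frames extracted from genuine and Deepfake videos, stored in
# frames/real/ and frames/fake/ (hypothetical folder layout).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
frames = ImageFolder("frames/", transform=preprocess)
loader = DataLoader(frames, batch_size=32, shuffle=True)

# Re-purpose a pretrained ResNet as a two-class (real vs fake) classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Fine-tune the classifier to pick up the subtle artefacts that the
# generation process leaves behind in individual frames.
model.train()
for images, labels in loader:
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()

A classifier like this only learns the artefacts present in its training data, which is exactly why updated generation software can quickly render it out of date.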

Summary

Producing fake videos that can deceive the average viewer is now relatively straightforward with the technology we all have at our disposal. But, without controls, we may soon find ourselves bombarded with fake news through social media channels that appear genuine and credible. Unfortunately, it’s only a matter of time before such phoney information sways public opinion, influences elections, or manipulates stock markets.

Producing a Deepfake is not a crime unless there is intent to use it for malicious purposes or it depicts an individual in a way that constitutes harassment.

The cyber security sector is already seeing the use of Deepfake videos to coerce individuals to perform fraudulent acts by deceiving them into believing they are dealing with a known contact. Deepfake technology can play a significant role in social engineering techniques.

Conclusion

Spotting Deepfakes is not a simple task that we can do without the help of the technology that creates them in the first place. Unfortunately, there’s a race between the means of creation and detection, each trying to keep one step ahead.

The critical advice is don’t believe everything you see and hear, especially if it’s not from a trustworthy source that you can independently check.

About the Author: Stephen Mash is a contributing writer for HP Tech Takes. Stephen is a UK-based freelance technology writer with a background in cybersecurity and risk management.

FAQs

What is an example of a deepfake?

The first well-known deepfake examples date back to the middle of the 2010s. Lucasfilm experimented with the technology in 2016, showcasing the likenesses of Carrie Fisher and Peter Cushing superimposed on other actors… to moderately convincing effect.

What technology do deepfakes use?

The artificial intelligence and deep-learning technology currently used for deepfakes typically involves generative adversarial networks, or GANs, and autoencoders.
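As a rough sketch of the autoencoder approach mentioned here, the example below uses a single shared encoder with one decoder per person; swapping decoders at inference time produces the face swap. The layer sizes and the helper function are illustrative assumptions, not the architecture of any specific app.

import torch.nn as nn

IMG_PIXELS = 64 * 64   # flattened 64x64 face crop (assumed)
CODE_DIM = 128         # size of the shared face representation (assumed)

# One encoder learns a compact representation of faces in general.
shared_encoder = nn.Sequential(
    nn.Linear(IMG_PIXELS, 512), nn.ReLU(),
    nn.Linear(512, CODE_DIM), nn.ReLU(),
)

# One decoder per person learns to reconstruct that person's face
# from the shared representation.
decoder_a = nn.Sequential(
    nn.Linear(CODE_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_PIXELS), nn.Sigmoid(),
)
decoder_b = nn.Sequential(
    nn.Linear(CODE_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_PIXELS), nn.Sigmoid(),
)

def swap_face_a_to_b(face_a):
    """Encode a frame of person A, then decode it with person B's decoder,
    producing person B's face with person A's expression and pose."""
    return decoder_b(shared_encoder(face_a))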

How to spot AI fakes?

How to identify AI-generated videos
  1. Look out for strange shadows, blurs, or light flickers. In some AI-generated videos, shadows or light may appear to flicker only on the face of the person speaking or possibly only in the background. ...
  2. Unnatural body language. This is another AI giveaway. ...
  3. Take a closer listen.

Is there any software to detect deepfakes?

Deepware is advanced software that uses artificial intelligence and machine learning technologies to detect and mitigate deepfakes. It identifies videos, images, and audio files and determines if they are fake or not.

Is it illegal to watch deepfakes?

Watching deepfakes is not illegal in itself, except in cases where the content involves unlawful material, such as child pornography. Existing legislation primarily targets the creation and distribution of deepfakes, especially when these actions involve non-consensual pornography.

What are the five types of deepfakes?

5 Types Of Deep Fakes You Should Be Aware Of
  • Textual Deep Fakes
  • Deep Fake Video
  • Deep Fake Audio
  • Deep Fakes on Social Media
  • Real-time or Live Deepfakes

Who are the famous victims of deepfakes?

The increasing incidents of deepfake manipulation have made it exceedingly challenging to authenticate users. Victims of deepfake videos include:
  • Ratan Tata
  • Narayana Murthy
  • Priyanka Chopra
  • Alia Bhatt
  • Rashmika Mandanna
  • Sachin Tendulkar
  • Virat Kohli
  • Norah Fatehi

Are deepfakes a crime?

Deepfake technology itself is not considered illegal – and deepfakes are by no means all malicious – but depending on the kind of content generated, some violate laws such as data protection and specific offences of non-consensual content.

What app is used for deepfakes?

Deepfakesweb is an online deepfake maker that allows users to easily create face-swapped videos using artificial intelligence. The app works completely in the cloud, so no software download is required.

How do people make deepfakes?

Deepfakes are usually generated by specialized applications or algorithms that blend existing footage with newly manufactured video. These applications, rooted in machine learning, deconstruct the subtle features of someone's face and learn how to manipulate them based on the conditions of the individual video.

What are the dangers of deepfake technology?

Not only has this technology created confusion, skepticism, and the spread of misinformation, but deepfakes also pose a threat to privacy and security. With the ability to convincingly impersonate anyone, cybercriminals can orchestrate phishing scams or identity theft operations with alarming precision.

How can you tell if someone is using AI?

These common signs of AI-generated content include:
  1. Incorrect and outdated information.
  2. Lack of depth and personality.
  3. Repetitive language.

How can I tell if an image was made by AI?

Because artificial intelligence is piecing together its creations from the original work of others, it can show some inconsistencies close up. When you examine an image for signs of AI, zoom in as much as possible on every part of it. Stray pixels, odd outlines, and misplaced shapes will be easier to see this way.

How do you check if a text is from an AI?

QuillBot's AI content detector tool is carefully trained to understand the difference between human-written and AI-generated content. Training an AI to identify AI-generated content works similarly to training an AI to identify plagiarism or detect grammar errors.

Can deepfake audio be detected?

NPR identified three deepfake audio detection providers — Pindrop Security, AI or Not and AI Voice Detector. Most claim their tools are over 90% accurate at differentiating between real audio and AI-generated audio. Pindrop only works with businesses, while the others are available for individuals to use.

Can deepfakes be tracked?

As these generative artificial intelligence (AI) technologies become more common, researchers are now tracking their proliferation through a database of political deepfakes.

Which model is best for deepfake detection?

HyperVerge is a refined deepfake detection solution. With an AI model trained over 13 years and machine learning to provide comprehensive security, HyperVerge provides advanced deepfake detection, in addition to identity verification, facial recognition, and robust liveness checks.

How effective are deepfake detectors?

DeepSecure.ai showcases a remarkable accuracy rate of 96% in detecting deepfakes, capable of handling a wide array of deepfake techniques. Blackbird.AI excels with a detection accuracy of 98%, utilizing advanced machine learning algorithms alongside human intelligence for media verification.
