What are deepfakes and how to spot them


Deepfakes are designed to deceive viewers with manipulated, fake video and voice. Learn why they’re threatening and how you can spot them.


Deepfakes are a form of artificial intelligence: doctored images and sounds put together with machine-learning algorithms. Deepfake technology can make it challenging to determine whether the news you see and hear on the internet is real.

This evolving form of artificial intelligence is adept at making media appear real when it is in fact forged video and audio designed to fool you. A surge in what’s known as fake news has shown how deepfakes can trick audiences into believing made-up stories.

In our web-centric society, these AI forgeries have become a cybersecurity concern on individual, corporate, national, and international levels.

In this article, you’ll learn what deepfakes are, how they work, the inherent threats, and how you can try to spot a deepfake before any harm is done.

What is a deepfake?

The term “deepfake” melds two words: deep and fake. It combines the concept of deep learning, a form of machine learning, with something that is not real.

Specifically, deepfakes are AI images and sounds put together with machine-learning algorithms. The technology can manipulate media and replace a real person’s image, voice, or both with convincing artificial likenesses and voices. It can create people who don’t exist, and it can make it appear that real people are saying and doing things they did not in fact say or do.

You could think of deepfake technology as an advanced form of photo-editing software that makes it easy to alter images. However, deepfake technology goes a lot further in how it manipulates visual and audio content, making fabrications look and sound real.

Audio deepfakes are another form of deception. Here’s how they work: deepfake machine-learning and voice-synthesis technology creates what are known as “voice skins” or “clones” that enable someone to pose as a prominent figure. An audio deepfake scam is designed to make you believe the voice on the other end of the line is someone you know, like your boss or a client, so you’ll be more willing to take an action, such as sending money.

As a result, deepfake technology is being used as a nefarious tool for spreading misinformation, stealing, and more.

What is the purpose of a deepfake?

A deepfake seeks to deceive viewers with manipulated, fake content. Its creator wants you to believe that something was said or happened that never actually occurred. Deepfake creators use this fake media for malicious purposes like spreading misinformation and stealing identities.

While the movie industry has used this type of technology for special effects and animation, deepfake technology is now being used for far more nefarious purposes, including these:

  • Phishing and other scams
  • Hoaxes
  • Celebrity pornography
  • Reputation smearing
  • Election manipulation
  • Social engineering
  • Automated disinformation attacks
  • Identity theft
  • Financial fraud
  • Blackmail

In the last case, blackmailers claim they’ll release a fake but damaging video of you if you don’t give them money or something else of value.

Among the possible risks, deepfakes can threaten cybersecurity, political elections, individual and corporate finances, reputations, and more. This misuse can play out in scams against individuals and companies, including on social media.

Corporate scams

Companies are especially concerned about scams that rely on deepfake technology, including these:

  • Supercharging scams where deepfake audio is used to pretend that the person on the other end of the line is a higher-up, such as a CEO asking an employee to send money.
  • Extortion-based scams where attackers threaten to release fake but incriminating videos if they aren’t paid.
  • Identity theft where deepfake technology is used to commit crimes like financial fraud.

Many of these scams rely on audio deepfakes, the “voice skins” or “clones” described above that let a scammer pose as someone they’re not. If you believe the voice on the other end of the line is a partner or client asking for money, do your due diligence before acting, because it could be a scam.

Social media manipulation

Social media posts supported by convincing manipulations have the potential to mislead and inflame the internet-connected public. Deepfakes provide the media that help fake news appear real.

Deepfakes are often used on social media platforms to produce strong reactions. Consider a Twitter profile that’s volatile, taking aim at all things political and making outrageous comments to create controversy. Is the profile connected to a real person?

Maybe not. The profile picture you see on that Twitter account could have been created from scratch. It may not belong to a real person. Another use of deepfakes has been to create fake profile pictures for so-called sockpuppet accounts.

The takeaway? Social media platforms like Twitter and Facebook have banned the use of these nefarious types of deepfakes. However, it’s still a good idea to be wary about what you see and hear via social media.

How was deepfake technology created?

The term “deepfake” originated in 2017, when an anonymous Reddit user who called himself “Deepfakes” used Google’s open-source, deep-learning technology to create and post manipulated pornographic videos.

The videos were doctored with a technique known as face-swapping. The user “Deepfakes” replaced real faces with celebrity faces.

How are deepfakes made?

Deepfakes can be created in more than one way.

One system is known as a Generative Adversarial Network, or GAN, which is used for face generation. It produces faces that otherwise don’t exist. A GAN uses two separate neural networks (sets of algorithms designed to recognize patterns) that work together, training themselves to learn the characteristics of real images so they can produce convincing fake ones.

The two networks engage in a complex interplay, interpreting data by labeling, clustering, and classifying. One network generates the images, while the other learns to distinguish fake images from real ones. The resulting algorithm can then train itself on photos of a real person to generate fake photos of that person, and turn those photos into a convincing video.
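To make that generator-versus-detector interplay concrete, here is a minimal training-loop sketch in Python. It assumes the PyTorch library, tiny fully connected networks, and random stand-in data instead of real face photos; a production deepfake pipeline would use far larger convolutional networks, but the adversarial structure is the same.

```python
import torch
import torch.nn as nn

# Generator: turns random noise into a flattened 64x64 "face" image.
G = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Discriminator: scores an image as real (near 1) or fake (near 0).
D = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, 64 * 64)   # stand-in for a batch of real face photos
    fake = G(torch.randn(32, 100))   # the generator's current forgeries

    # Train the discriminator to separate real images from the forgeries.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Notice that the generator never sees real photos directly; it improves only through the discriminator’s feedback, which is what gradually pushes the fakes toward realism.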

Another system is an artificial intelligence (AI) algorithm known as an encoder, which is used in face-swapping or face-replacement technology. First, you run thousands of face shots of two people through the encoder to find the similarities between the two faces. Then a second AI algorithm, or decoder, reconstructs the face images and swaps them, so that one person’s real face can be superimposed on another person’s body.
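Here is a rough sketch of that shared-encoder, two-decoder idea, again in Python with PyTorch. The network sizes and function names are hypothetical, and real tools such as Faceswap train convolutional networks on thousands of aligned face crops rather than flat vectors:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One shared encoder learns traits common to both faces (pose, expression,
# lighting); each person gets a dedicated decoder that reconstructs only
# that person's face.
encoder = nn.Sequential(nn.Linear(64 * 64, 512), nn.ReLU(), nn.Linear(512, 128))
decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 64 * 64), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 64 * 64), nn.Sigmoid())

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

def train_step(faces_a: torch.Tensor, faces_b: torch.Tensor) -> None:
    # Each decoder learns to rebuild its own person from the shared encoding.
    loss = (F.mse_loss(decoder_a(encoder(faces_a)), faces_a)
            + F.mse_loss(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

def swap(face_a: torch.Tensor) -> torch.Tensor:
    # The swap itself: encode person A's face, then decode it with person B's
    # decoder, yielding B's likeness wearing A's expression and pose.
    return decoder_b(encoder(face_a))
```

The design choice that makes the swap work is the shared encoder: because both decoders read from the same encoding of pose and expression, feeding person A’s encoding into person B’s decoder transfers A’s performance onto B’s face.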

How many pictures do you need for a deepfake?

Creating a convincing deepfake face-swap video may require thousands of face shots to perform the encoding and decoding noted in the section above.

To produce a person that looks real, you also need images that display a wide range of characteristics like facial expressions and angles, along with the right lighting. That’s why celebrities and public figures are good subjects for creating deepfakes: there are often numerous images of them on the internet that can be used.

Software for creating deepfakes has required large data sets, but new technology may make creating deepfake videos easier. For example, through an AI lab in Russia, Samsung has developed an AI system that can create a deepfake video with only a handful of images, or even one photo.

What software technology is used to create high-quality deepfakes?

You can generate deepfakes in various ways. Computing power is important. For instance, most deepfakes are created on high-end desktops, not standard computers.

Newer automated computer graphics and machine-learning systems enable deepfakes to be made more quickly and cheaply. The Samsung technology mentioned above is one example of how new methods are fostering speed.

The types of software used to generate deepfakes include open-source Python tools such as Faceswap and DeepFaceLab. Faceswap is free, open-source, multi-platform software that runs on Windows, macOS, and Linux. DeepFaceLab is another open-source tool that enables face-swapping.

How to spot a deepfake

Is it possible to spot a deepfake video? How will you know whether the media you’re watching or listening to is real? Poorly made deepfake videos may be easy to identify, but spotting higher-quality deepfakes can be challenging, and continuous advances in the technology make detection ever more difficult.

However, there are notable, telltale characteristics that can help you spot deepfakes on your own and with some AI help. Here are 15 things to look for when determining whether a video is real or fake.

  1. Unnatural eye movement. Eye movements that do not look natural, or a lack of eye movement such as an absence of blinking, are huge red flags. It’s challenging to replicate the act of blinking in a way that looks natural. It’s also challenging to replicate a real person’s eye movements, because someone’s eyes usually follow the person they’re talking to.
  2. Unnatural facial expressions. When something doesn’t look right about a face, it could signal facial morphing. This occurs when one image has been stitched over another.
  3. Awkward facial-feature positioning. If someone’s face is pointing one way and their nose is pointing another, you should be skeptical about the video’s authenticity.
  4. A lack of emotion. You can also spot facial morphing, or image stitching, if someone’s face doesn’t exhibit the emotion that should go along with what they’re supposedly saying.
  5. Awkward-looking body or posture. Another sign is a body shape that doesn’t look natural, or awkward or inconsistent positioning of the head and body. This may be one of the easier inconsistencies to spot, because deepfake technology usually focuses on facial features rather than the whole body.
  6. Unnatural body movement or body shape. If someone looks distorted or off when they turn to the side or move their head, or their movements are jerky and disjointed from one frame to the next, you should suspect the video is fake.
  7. Unnatural coloring. Abnormal skin tone, discoloration, weird lighting, and misplaced shadows are all signs that what you’re seeing is likely fake.
  8. Hair that doesn’t look real. You won’t see frizzy or flyaway hair, because fake images can’t generate those individual characteristics.
  9. Teeth that don’t look real. Algorithms may not be able to generate individual teeth, so an absence of outlines of individual teeth could be a clue.
  10. Blurring or misalignment. If the edges of images are blurry or visuals are misaligned, for example where someone’s face and neck meet their body, you’ll know that something is amiss.
  11. Inconsistent noise or audio. Deepfake creators usually spend more time on the video images than on the audio. The result can be poor lip-syncing, robotic-sounding voices, strange word pronunciation, digital background noise, or even the absence of audio.
  12. Images that look unnatural when slowed down. If you watch a video on a screen that’s larger than your smartphone or have video-editing software that can slow down a video’s playback, you can zoom in and examine images more closely. Zooming in on lips, for example, will help you see if they’re really talking or if it’s bad lip-syncing.
  13. Hashtag discrepancies. There’s a cryptographic algorithm that helps video creators show that their videos are authentic. The algorithm is used to insert hashtags at certain places throughout a video. If the hashtags change, you should suspect video manipulation.
  14. Digital fingerprints. Blockchain technology can also create a digital fingerprint for videos: when a video is created, the content is registered to a ledger that can’t be changed. While not foolproof, this blockchain-based verification can help establish a video’s authenticity.
  15. Reverse image searches. A search for an original image, or a reverse image search with the help of a computer, can unearth similar videos online and help determine whether an image, audio clip, or video has been altered. While reverse video search technology is not publicly available yet, a tool like this could be helpful; a simple image-comparison sketch follows this list.
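As promised in item 15, here is a small image-comparison sketch in Python using the Pillow and imagehash libraries (the file names are hypothetical). A perceptual hash changes only slightly for visually similar pictures, so a small distance between the hash of a suspect video still and the hash of a candidate original suggests one was derived from the other:

```python
from PIL import Image
import imagehash

# Hash a frame grabbed from the suspect video and a candidate original found online.
original = imagehash.phash(Image.open("original_still.png"))
suspect = imagehash.phash(Image.open("suspect_still.png"))

# Subtracting two image hashes gives the Hamming distance between them.
distance = original - suspect
if distance <= 8:
    print(f"Likely derived from the same source image (distance {distance})")
else:
    print(f"Probably unrelated images (distance {distance})")
```

The threshold of 8 is an illustrative assumption; in practice you would tune it, and a match only tells you the images are related, not which one was altered.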

Deepfakes in politics

Deepfakes can present divisive problems in the political arena and have great potential to undermine political systems. Do you remember the 2018 video of former U.S. President Barack Obama talking about deepfakes? That was a deepfake video. It wasn’t really Obama, but it looked and sounded enough like him to fool people.

Other deepfakes have been used around the world to influence elections or stir up political controversy, including these:

  • In Gabon, a deepfake video led to an attempted military coup in the Central African nation.
  • In India, a candidate used deepfake videos in different languages to criticize the incumbent and reach different constituencies.
  • In March 2022, a deepfake video posted to social media appeared to show Ukrainian President Volodymyr Zelensky telling Ukrainian soldiers to surrender during the Russian invasion.

Deepfakes also could lead to what’s known as the “Liar’s Dividend” effect, whereby someone like a political figure denies the authenticity of a video that may not be a deepfake. The problem? It can be hard to prove either way, making the mere existence of deepfakes a potentially powerful political tool.

Deepfake political videos have become so prevalent and damaging that the state of California, for example, has outlawed them during election seasons. The goal is to keep deepfakes from deceptively swaying voters.

The online harms resulting from deepfakes are also on the U.S. Federal Trade Commission's radar. In fact, the FTC is considering the issuance of a report to Congress that explores using AI to combat online harms like deepfakes.

What are shallowfakes?

Shallowfakes are another way to spread misinformation. They’re made by freezing frames and slowing down video with simple video-editing software; they don’t rely on algorithms or deep-learning systems, as the sketch below illustrates.
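As a simple illustration of how little technology a shallowfake requires, this Python sketch using the OpenCV library (the file names are hypothetical) rewrites a clip’s frames at 75 percent of the original frame rate. The slowed playback alone can make a speaker seem impaired:

```python
import cv2

cap = cv2.VideoCapture("speech.mp4")  # the original, unaltered clip
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Write the very same frames at 75% of the original frame rate: nothing is
# synthesized or learned, yet playback is slower and speech sounds slurred.
out = cv2.VideoWriter("slowed.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps * 0.75, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)

cap.release()
out.release()
```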

Shallowfakes are particularly known for spreading misinformation in politics. Think back to 2020, when a video circulated that appeared to show U.S. House Speaker Nancy Pelosi impaired. If you believed she did what appeared in that video, you were fooled by a shallowfake.

What is being done to combat deepfakes?

On a global scale, deepfakes can create problems through mass disinformation. On a personal level, deepfakes can lead to bullying, reputational and financial damage, and identity theft.

The United States and other countries have deepfakes on their radar. But are they illegal? Here are some ways that the U.S. government and organizations are trying to detect, combat, and protect against deepfakes with new technology, rules, and legislation:

  1. Social media rules. Social media platforms like Twitter and Facebook have policies that ban deepfake technology. Twitter’s policies involve tagging any deepfake videos that aren’t removed. YouTube has banned deepfake videos related to the 2020 U.S. Census, as well as election and voting procedures.
  2. Verification programs. Google is working on text-to-speech conversion tools to verify speakers. Adobe has a system that enables you to attach a signature to your content that specifies the details of its creation, and it is also developing a tool to determine whether a facial image has been manipulated.
  3. Research lab technologies. Research labs are using watermarks and blockchain technologies to detect deepfakes, but technology designed to outsmart deepfake detectors is constantly evolving. Researchers at the University of Southern California and the University of California, Berkeley are leading a push to discover new detection technologies. Using machine-learning technology that examines soft biometrics like facial quirks and how a person speaks, they’ve been able to detect deepfakes with 92 to 96 percent accuracy.
  4. Deepfake Detection Challenge. Organizations behind the Deepfake Detection Challenge (DFDC) are incentivizing solutions for deepfake detection by fostering innovation through collaboration. The DFDC has shared a dataset of 124,000 videos that feature eight algorithms for facial modification.
  5. Emerging detection programs. University of California, Riverside computer scientists have developed an “encoder-decoder” method called Expression Manipulation Detection. This framework detects and then localizes specific image points that have been changed, with a 99 percent detection rate.
  6. Filtering programs. Programs like Deeptrace are helping to provide protection. This Amsterdam-based startup is developing automated deepfake detection tools that perform background scans of audiovisual media, working like a combination of antivirus software and spam filters that monitor incoming media and quarantine suspicious content.
  7. Corporate best practices. Companies are preparing themselves with consistent communication structures and distribution plans. The planning includes implementing centralized monitoring and reporting, along with effective detection practices.
  8. U.S. Defense Advanced Research Projects Agency. DARPA is funding research to develop automated screening of deepfake technology through a program called MediFor, or Media Forensics.
  9. U.S. legislation. While the U.S. government is working to combat nefarious uses of deepfake technology with pending legislation, three states have taken their own steps. Virginia was the first state to impose criminal penalties on nonconsensual deepfake pornography. Texas was the first state to prohibit deepfake videos designed to sway political elections. California passed two laws that allow victims of deepfake videos, either pornographic or related to elections, to sue for damages.

Former President Donald Trump signed into law the National Defense Authorization Act for Fiscal Year 2020, the first U.S. legislation related to deepfakes, which set forth three goals:

  • Reporting on foreign weaponization of deepfakes.
  • Congressional notification of any deepfake disinformation that targets U.S. elections.
  • A competition to encourage the creation of deepfake detection technologies.

What can you do about deepfakes?

Here’s the problem: creators of detection algorithms respond to the latest deepfakes with their own new technology, and in turn, the people behind those deepfakes respond to the new detection technology. The technology keeps evolving on both sides of the algorithm.

While the battle goes back and forth, it’s a good idea to know how to spot deepfakes, at least some of the time, and take steps not to be fooled. On a personal level, there are still some specific, practical things that you can do to help protect yourself from becoming a victim of deepfakes.

If you’re watching a video online, be sure it’s from a reputable source before believing its claims and sharing it with others.

When you get a call from someone — your boss, for instance — make sure the person on the other end is really your boss before acting on their request.

Most importantly, do not believe everything you see and hear on the web. In addition, don’t forward any suspect media to your friends and family, which only expands its reach. That’s exactly what deepfake creators want you to do. If media strikes you as unbelievable, it’s quite possible that it is.

FAQs about deepfakes

Here are answers to some frequently asked questions about deepfakes and deepfake technology.

What are deepfakes?

Deepfakes are artificial intelligence images and sounds put together with machine-learning algorithms that can manipulate media and replace a real person’s image, voice, or both with similar artificial likenesses or voices.

What are deepfakes used for?

Deepfakes are used to deceive viewers with manipulated, fake content for malicious purposes that include spreading misinformation; engaging in phishing, hoaxes, and other scams; smearing reputations; manipulating elections; social engineering; stealing identities; and committing financial fraud and blackmail.

How does deepfake technology work?

Deepfake technology works in more than one way. A system known as a Generative Adversarial Network, or GAN, generates faces that otherwise don’t exist. Another is an AI algorithm known as an encoder that is used in face-swapping or face-replacement technology.

How do you spot a deepfake?

Poorly made deepfakes may be easy to identify, but spotting high-quality deepfakes can be challenging. However, red flags such as unnatural movements or a lack of emotion may help you spot some on your own.

What is being done to combat deepfakes?

The U.S. and other countries have deepfakes on their radar. Several groups and organizations are taking steps to help detect and guard against deepfakes with emerging protections that include new filtering, verification, and detection programs. In addition, social media platforms like Twitter and Facebook have policies that ban deepfake technology.

What can you do on a personal level to combat deepfakes?

You can be skeptical of videos you see and audio you hear on the web. If something seems fantastical, don’t forward it to your friends and family. You could be playing right into the malicious intentions of its creators.

Cyber threats have evolved, and so have we.

Norton 360™ with LifeLock™ provides all-in-one, comprehensive protection against viruses, malware, identity theft, online tracking, and much, much more.

Try Norton 360 with LifeLock.

Alison Grace Johansen
Freelance writer
Alison Grace Johansen is a freelance writer who covers cybersecurity and consumer topics. Her background includes law, corporate governance, and publishing.

Editorial note: Our articles provide educational information for you. Our offerings may not cover or protect against every type of crime, fraud, or threat we write about. Our goal is to increase awareness about Cyber Safety. Please review complete Terms during enrollment or setup. Remember that no one can prevent all identity theft or cybercrime, and that LifeLock does not monitor all transactions at all businesses. The Norton and LifeLock brands are part of Gen Digital Inc. 
