What are deepfakes? How they work and how to spot them
If you’ve heard about doctored photos and videos that look real, you’ve probably asked yourself: “What are deepfakes?” Deepfakes are videos and pictures created with computer software that can look almost indistinguishable from the real thing. Learning how to recognize deepfakes, and when they might be used as part of a scam, is the first step in protecting your digital life. Then, get a powerful online security app to help you protect your identity, block hackers, and stay safer online.
How do deepfakes work?
Deepfakes are made by feeding existing images, video, or audio of a person into AI-powered deep learning software that manipulates this material into new, fake pictures, videos, and audio recordings. The software processes the images, video, and voice clips it’s fed to “learn” what makes a person unique (similar to training facial recognition software). Deepfake technology then applies that information to other clips, substituting one person for another, or uses it as the basis for entirely new clips.
What is needed to create a deepfake?
Deepfake technology is a combination of two algorithms: generators and discriminators. The generator uses an initial data set to create (or generate) new images. The discriminator then evaluates that content for realism and feeds its findings back into the generator for further refinement.
Together, these algorithms form a Generative Adversarial Network (GAN). The two networks train against each other, refining inputs and outputs until the discriminator can’t tell the difference between a real image, sound clip, or video and one created by the generator.
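To make the generator/discriminator loop concrete, here is a minimal sketch in PyTorch. It is an illustration of the GAN training idea only, not a real deepfake system: the network sizes are arbitrary, and random noise stands in for real training images.

```python
# Minimal GAN sketch: a generator learns to produce "images" that a
# discriminator can't distinguish from "real" ones.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28   # arbitrary illustrative sizes

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),        # produces a fake image
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, IMG_DIM)  # stand-in for real training images

for step in range(100):
    # 1. Train the discriminator to separate real from generated images.
    fake_batch = generator(torch.randn(32, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator into saying "real".
    g_out = discriminator(generator(torch.randn(32, LATENT_DIM)))
    g_loss = loss_fn(g_out, torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In a real deepfake pipeline, this same adversarial loop runs over large collections of photos and clips of the target person.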
To create a deepfake, the specialized software needs lots of video, audio, and photographs. This sample size matters because it directly correlates to the quality of the deepfake. If the software only has a few images or clips to work from, it won’t be able to create a convincing fake. A larger cache of visuals and sounds helps the software create a more realistic deepfake.
Because deepfake AI technology is relatively new (and still somewhat expensive), the software needed to make convincing deepfake photos and videos is not yet widespread. But that’s changing daily as cloud computing and machine learning tools become more powerful and accessible.
Who makes deepfakes?
Movie companies use the technology to:
- De-age actors (Harrison Ford in Indiana Jones and the Dial of Destiny)
- Give documentaries narration from hosts who have passed away (Anthony Bourdain’s voiceover in Roadrunner)
- Resurrect deceased actors for roles in new movies by combining a living actor’s performance with digital recreations of the original actor (Peter Cushing in Rogue One: A Star Wars Story or Paul Walker in Furious 7)
There are also amateur visual effects artists doing the same kind of work in an unofficial capacity.
Other companies make deepfake apps (face-swapping apps, for example) that let individuals place their likenesses onto famous actors in scenes from movies and television. The technology behind these apps is less advanced than what Hollywood uses, but it’s still pretty astonishing.
Educators are working to create new teaching materials that allow historical figures to speak directly to students to deepen connections between the past and present. Other groups are using this technology to bring people’s loved ones back to life through video missives and images.
But not all uses of deepfake technology are benign. Criminals and other unscrupulous people are using deepfakes for various purposes that range from distasteful to immoral and even illegal. Armed with the ability to make politicians and other public figures do or say anything, political bad actors can spread falsehoods quickly, potentially interfering with elections and public health initiatives.
Are deepfakes illegal?
Currently, there is no federal law governing deepfakes, though creating deepfakes is illegal in some U.S. states and distributing them is illegal in others. Minnesota has outlawed creating and distributing deepfakes that could be used to influence an election or as a form of nonconsensual pornography. Other states are considering similar laws, and the UK has included language about how deepfakes can and can’t be created in its forthcoming Online Safety Bill.
States with laws regulating deepfakes:
- Virginia
- Georgia
- Washington
- Wyoming
- Hawaii
- Minnesota
States with proposed laws (as of September 2023) regulating deepfakes:
- Louisiana
- Illinois
- Massachusetts
- New Jersey
States that allow people to sue deepfakers in civil court:
- California
- New York
Some estimates suggest that as much as 96% of the deepfake videos found online are pornographic, which raises serious concerns about people’s privacy and their inability to consent to such images.
What are deepfakes used for?
The entertainment industry makes deepfakes for movies and television shows, app companies offer face-swapping of images and video clips, individuals and organizations with political motives create them to spread fake news, and even criminals use the technology for fraud and blackmail.
An emerging cybercrime trend uses fake videos of celebrities to promote fake products. Exploitative people can also make fake pornography, revenge pornography, and sextortion videos that violate victims’ privacy, earning money by selling the material or threatening to release the videos unless a “ransom” is paid.
How to spot a deepfake
There are some fairly simple things you can look for when trying to spot a deepfake:
- Unnatural eye movement: Eye movements that don’t look natural, or a lack of eye movement such as an absence of blinking, are red flags. Blinking is challenging to replicate in a way that looks natural, and so are a real person’s eye movements, because someone’s eyes usually follow the person they’re talking to.
- Unnatural facial expressions: When something doesn’t look right about a face, it could signal facial morphing, which happens when one image is simply stitched over another.
- Awkward facial-feature positioning: If someone’s face is pointing one way and their nose is pointing another, you should be skeptical about the video’s authenticity.
- A lack of emotion: You can also spot facial morphing or image stitches if someone’s face doesn’t seem to exhibit the emotion that should go along with what they’re supposedly saying.
- Awkward-looking body or posture: Another sign is if a person’s body shape doesn’t look natural or there is awkward or inconsistent positioning of the head and body. This may be one of the easier inconsistencies to spot because deepfake technology usually focuses on facial features rather than the whole body.
- Unnatural body movement: If someone looks distorted or off when they turn to the side or move their head, or their movements are jerky and disjointed from one frame to the next, you should suspect the video is fake.
- Unnatural coloring: Abnormal skin tone, discoloration, weird lighting, and misplaced shadows are all signs that what you’re seeing is likely fake.
- Hair that doesn’t look real: You won’t see frizzy or flyaway hair, because deepfake software often can’t generate these individual details convincingly.
- Teeth that don’t look real: Algorithms may not be able to generate individual teeth, so an absence of outlines of individual teeth could be a clue.
- Blurring or misalignment: If the edges of images are blurry or visuals are misaligned — for example, where someone’s face and neck meet their body — you’ll know something is amiss.
- Inconsistent audio and noise: Deepfake creators usually spend more time on the video images than on the audio. The result can be poor lip-syncing, robotic-sounding voices, strange word pronunciations, digital background noise, or even the absence of audio.
- Images that look unnatural when slowed down: If you watch a video on a screen larger than your smartphone, or use video-editing software that can slow down playback, you can zoom in and examine images more closely. Zooming in on the lips, for example, helps you see whether the person is really talking or the lip-syncing is off (see the frame-extraction sketch after this list).
- Hash discrepancies: Cryptographic hashing lets video creators show that their videos are authentic by embedding hash values at certain points throughout a video. If the hashes no longer match the content, you should suspect the video has been manipulated (the fingerprint sketch after this list shows the basic idea).
- Digital fingerprints: Blockchain technology can also create a digital fingerprint for videos. When a video is created, a record of its content is registered to a ledger that can’t be changed, which can later help establish the video’s authenticity. While not foolproof, this blockchain-based verification is another useful signal.
- Reverse image searches: Searching for the original image, or running a reverse image search, can unearth similar images and videos online to help determine whether an image, audio clip, or video has been altered. Reverse video search technology isn’t publicly available yet, but comparing individual frames against a known original can serve a similar purpose (see the perceptual-hash sketch after this list).
- Video is not being reported on by trustworthy news sources: If what the person in a video is saying or doing is shocking or important, the news media will be reporting on it. If you search for information on the video and no trustworthy sources are talking about it, it could mean the video is a deepfake.
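Frame-by-frame inspection doesn’t require special tools. Below is a minimal sketch using Python and the OpenCV library; the file name clip.mp4, the every-10th-frame interval, and the 2x zoom factor are illustrative assumptions, not part of any standard workflow.

```python
# Minimal sketch: export enlarged stills from a video so artifacts around
# the lips, jawline, and hairline are easier to inspect.
# Assumes OpenCV is installed (pip install opencv-python) and that
# "clip.mp4" is a hypothetical local file.
import cv2

cap = cv2.VideoCapture("clip.mp4")
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:                  # end of video (or unreadable file)
        break
    if frame_index % 10 == 0:   # keep every 10th frame
        # Enlarge the frame 2x so small blending artifacts stand out.
        zoomed = cv2.resize(frame, None, fx=2.0, fy=2.0,
                            interpolation=cv2.INTER_CUBIC)
        cv2.imwrite(f"frame_{frame_index:05d}.png", zoomed)
    frame_index += 1
cap.release()
```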
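The hash and digital-fingerprint checks above rest on the same idea: a cryptographic hash of a file changes completely if even one byte changes. Here is a small sketch of computing such a fingerprint with Python’s standard hashlib module; registering the value to a blockchain ledger is a separate step not shown here, and original.mp4 is a hypothetical file name.

```python
# Minimal sketch: compute a SHA-256 fingerprint of a video file.
# A fingerprint recorded when the video was published can later be
# compared against a copy; any mismatch means the copy was altered.
import hashlib

def fingerprint(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MB chunks so large videos don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(fingerprint("original.mp4"))  # hypothetical file
```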
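A cryptographic hash only confirms exact matches, so a re-encoded copy won’t match. For the reverse-image comparison described above, a perceptual hash is more useful because it changes little when an image is merely resized or recompressed. This sketch assumes the third-party Pillow and ImageHash libraries and hypothetical frame file names; the distance threshold is illustrative, not a standard.

```python
# Minimal sketch: compare a suspect frame against a known original using
# a perceptual hash (pip install pillow imagehash).
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original_frame.png"))
suspect = imagehash.phash(Image.open("suspect_frame.png"))

# Subtracting two hashes gives a Hamming distance: 0 means the images are
# nearly identical; larger values mean they diverge.
distance = original - suspect
print(f"Perceptual hash distance: {distance}")
if distance > 10:  # illustrative threshold, not a standard
    print("The frames differ substantially; the suspect copy may be altered.")
```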
As deepfake technology advances, deepfakes will become harder to identify. That hasn’t stopped groups of scientists and programmers from creating new means of detecting them, though. The Deepfake Detection Challenge (DFDC) incentivizes solutions for deepfake detection by fostering innovation through collaboration, sharing a dataset of 124,000 videos that feature eight facial-modification algorithms.
Several AI-based detection tools have also been built to spot deepfakes.
Social media platforms X and Facebook have banned the use of malicious deepfakes, and Google is working on text-to-speech conversion tools to verify speakers. The U.S. Defense Advanced Research Projects Agency (DARPA) is funding research to develop automated screening of deepfake technology through a program called MediFor, or Media Forensics.
Alongside the resources devoted to fighting deepfakes, combating other scams is getting easier and faster with AI-powered solutions like Norton Genie, which can tell you in seconds whether a link, text, or email is a scam.
How do you protect yourself from deepfakes?
Protecting yourself from deepfakes is similar to protecting yourself from other possible threats online. To help prevent others from using your likeness in a deepfake:
- Limit how many pictures and videos of yourself and your loved ones you share.
- Make your social networking profiles private.
- Use a VPN when you go online.
- Use antivirus software.
- Have family members use a code word to identify themselves during emergencies.
- Conduct your business in person, especially if money is changing hands.
While there’s no way to completely eliminate the chances of falling victim to a deepfake scam, paying close attention to what you put out there can make a difference.
Help keep your identity safe from deepfakes
Being careful when you use the internet for browsing, shopping, or socializing is one of the most important steps you can take to limit your exposure to deepfakes and the scams that can come with them. If you want to protect yourself and your family online, sign up for Norton 360 with LifeLock Select today. This security software features a VPN and a password manager for safer browsing, along with tools that help protect against identity theft by monitoring the dark web and the credit bureaus for misuse of your personal information.
FAQs about deepfakes
Deepfakes are complex. If you have more questions about them, keep reading.
Can deepfakes be detected?
Yes, deepfakes are detectable. There are several AI-powered systems for detecting deepfakes, and you can detect them by looking for unusual lighting, strange colors, and odd placement and movement of facial features on the subject of the video.
Is a deepfake identity theft?
In a technical sense, no. However, deepfakes can be used to create phishing scams that can lead to identity theft.
Are deepfakes really a security threat?
They could be. As the technology becomes more sophisticated and widespread, people will likely use it to stir up security threats or create election disinformation.
Who is affected by deepfakes?
Almost anyone can be affected by deepfakes. Whether you’re fooled into believing a fake video is real, or someone creates an embarrassing video of you and posts it on social media, this synthetic media can affect you.
Editorial note: Our articles provide educational information for you. Our offerings may not cover or protect against every type of crime, fraud, or threat we write about. Our goal is to increase awareness about Cyber Safety. Please review complete Terms during enrollment or setup. Remember that no one can prevent all identity theft or cybercrime, and that LifeLock does not monitor all transactions at all businesses. The Norton and LifeLock brands are part of Gen Digital Inc.