Deepfakes: What they are and why they’re threatening
July 24, 2020
Deepfake technology is making it harder to tell whether some news you see and hear on the internet is real or not.
What are deepfakes? Deepfake technology is an evolving form of artificial intelligence that’s adept at making you believe certain media is real, when in fact it’s a compilation of doctored images and audio designed to fool you. A surge in what’s known as “fake news” shows how deepfake videos can trick audiences into believing made-up stories.
In this article, you’ll learn what deepfakes are, how they work, the inherent threats, and how to help spot this technology.
- What is a deepfake?
- What are deepfakes used for?
- How was deepfake technology created?
- How are deepfakes made?
- What software technology is used to create high-quality deepfakes?
- How to spot deepfakes
- Deepfakes in politics
- What are shallowfakes?
- What is being done to combat deepfakes?
The term deepfake melds two words: deep and fake. It combines the concept of machine or deep learning with something that isn’t real.
Deepfakes are artificial images and sounds put together with machine-learning algorithms. A deepfake creator uses deepfake technology to manipulate media and replace a real person’s image, voice, or both with similar artificial likenesses or voices.
You can think of deepfake technology as an advanced form of photo-editing software that makes it easy to alter images.
But deepfake technology goes a lot further in how it manipulates visual and audio content. For instance, it can create people who don’t exist. Or it can make real people appear to say and do things they never said or did.
As a result, deepfake technology can be used as a tool to spread misinformation.
A deepfake seeks to deceive viewers with manipulated, fake content. Its creator wants you to believe something was said or happened that never occurred, often to spread misinformation and for other malicious purposes.
What’s the point? The movie industry has used this type of technology for special effects and animation. But deepfake technology is now being used for nefarious purposes, including these:
- Scams and hoaxes.
- Celebrity pornography.
- Election manipulation.
- Social engineering.
- Automated disinformation attacks.
- Identity theft and financial fraud.
Among the possible risks, deepfakes can threaten cybersecurity, political elections, individual and corporate finances, reputations, and more. This malicious intent and misuse can play out in scams against individuals and companies, including on social media.
Companies are concerned about several scams that rely on deepfake technology, including these:
- Supercharged scams in which deepfake audio makes the person on the other end of the line sound like a higher-up, such as a CEO asking an employee to send money.
- Extortion scams.
- Identity theft where deepfake technology is used to commit crimes like financial fraud.
Many of these scams rely on an audio deepfake. Audio deepfakes create what are known as “voice skins” or “clones” that let scammers pose as a prominent figure. If you believe the voice on the other end of the line is a partner or client asking for money, do your due diligence before acting. It could be a scam.
Social media manipulation
Social media posts supported by convincing manipulations have the potential to misguide and inflame the internet-connected public. Deepfakes provide the media that help fake news appear real.
Deepfakes are used on social media platforms, often to produce strong reactions. Consider a Twitter profile that’s volatile, taking aim at all things political and making outrageous comments to create controversy. Is the profile connected to a real person?
Maybe not. The profile picture you see on that Twitter account could have been created from scratch. It may not belong to a real person. If so, those convincing videos they’re sharing on Twitter likely aren’t real either.
Social media platforms like Twitter and Facebook have banned the use of these nefarious types of deepfakes.
The term deepfake originated in 2017, when an anonymous Reddit user who called himself “Deepfakes” used Google’s open-source, deep-learning technology to create and post manipulated pornographic videos.
The videos were doctored with a technique known as face-swapping. The user “Deepfakes” replaced real faces with celebrity faces.
Deepfakes can be created in more than one way.
One system is known as a generative adversarial network, or GAN, which is used for face generation. It produces faces that otherwise don’t exist. A GAN uses two separate neural networks (sets of algorithms designed to recognize patterns) that work together, training themselves to learn the characteristics of real images so they can produce convincing fake ones.
The two networks engage in a complex interplay that interprets data by labeling, clustering, and classifying. One network generates the images, while the other network learns how to distinguish fake from real images. The algorithm developed can then train itself on photos of a real person to generate fake photos of that real person — and turn those photos into a convincing video.
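The adversarial back-and-forth described above can be sketched in miniature. The toy example below is purely illustrative: 1-D numbers stand in for images, and each "network" is a single linear unit. Real deepfake GANs use deep convolutional networks and enormous image sets, but the training loop follows the same pattern — the discriminator learns to separate real from fake while the generator learns to fool it.

```python
# Toy GAN sketch: a tiny generator and discriminator trained in
# opposition. Illustrative only; not a working deepfake system.
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data: a simple 1-D distribution standing in for real images.
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

class Generator:
    # Turns random noise into fake samples (here, just a linear map).
    def __init__(self):
        self.w, self.b = rng.normal(), 0.0
    def forward(self, z):
        return self.w * z + self.b

class Discriminator:
    # Scores how "real" a sample looks (logistic regression).
    def __init__(self):
        self.w, self.b = rng.normal(), 0.0
    def forward(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w * x + self.b)))

G, D = Generator(), Discriminator()
lr = 0.05
for _ in range(500):
    z = rng.normal(size=(32, 1))
    fake, real = G.forward(z), real_samples(32)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = D.forward(real), D.forward(fake)
    D.w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    D.b -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: nudge G so the discriminator scores its fakes as real.
    grad_logit = (D.forward(fake) - 1) * D.w   # gradient w.r.t. each fake
    G.w -= lr * np.mean(grad_logit * z)
    G.b -= lr * np.mean(grad_logit)
```

After training, the generator's outputs drift toward the real distribution, which is exactly the dynamic that lets full-scale GANs produce photorealistic faces.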
Another system is an artificial intelligence (AI) algorithm known as an encoder. Encoders are used in face-swapping or face-replacement technology. First, you run thousands of face shots of two people through the encoder to find similarities between the two images. Then, a second AI algorithm, or decoder, retrieves the face images and swaps them. A person’s real face can be superimposed on another person’s body.
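The shared-encoder, two-decoder swap described above can be sketched with plain linear algebra. Everything below is a hypothetical stand-in: random vectors play the role of face photos, a fixed random projection plays the role of the shared encoder, and least-squares fits play the role of the two trained decoders. Real systems use deep convolutional autoencoders trained on thousands of photos.

```python
# Sketch of the face-swap idea: one shared encoder, one decoder per
# person, then decode person A's code with person B's decoder.
import numpy as np

rng = np.random.default_rng(1)
FACE_DIM, CODE_DIM = 64, 8          # stand-ins for image and latent sizes

# "Face shots" of two people (random vectors standing in for images).
faces_a = rng.normal(size=(200, FACE_DIM))
faces_b = rng.normal(size=(200, FACE_DIM))

# Shared encoder: one projection both people's faces pass through.
encoder = rng.normal(size=(FACE_DIM, CODE_DIM))

def encode(faces):
    return faces @ encoder

# Each decoder is fit (least squares) to rebuild its OWN person's
# faces from the shared codes.
codes_a, codes_b = encode(faces_a), encode(faces_b)
decoder_a, *_ = np.linalg.lstsq(codes_a, faces_a, rcond=None)
decoder_b, *_ = np.linalg.lstsq(codes_b, faces_b, rcond=None)

# The swap: encode person A's face, but decode with B's decoder --
# A's pose and expression rendered with B's appearance.
swapped = encode(faces_a) @ decoder_b   # one "B-styled" face per A input
```

The design point is that the encoder is shared: it captures what is common to any face (pose, expression, lighting), while each decoder supplies one person's specific appearance, which is what makes the swap possible.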
Creating a convincing deepfake face-swap video may require thousands of face shots to perform the encoding and decoding noted in the section above.
To produce a person that looks real, you also need images that display a wide range of characteristics like facial expressions and angles, along with the right lighting. That’s why celebrities or public figures are good subjects for creating deepfakes. Often, there are numerous celebrity images on the internet that can be used.
Software for creating deepfakes has required large data sets, but new technology may make creating deepfake videos easier. For example, through an AI lab in Russia, Samsung has developed an AI system that can create a deepfake video with only a handful of images — or even one photo.
You can generate deepfakes in various ways. Computing power is important. For instance, most deepfakes are created on high-end desktops, not standard computers.
Newer automated computer-graphics and machine-learning systems enable deepfakes to be made more quickly and cheaply. The Samsung technology mentioned above is one example of how new methods are fostering faster, easier deepfake creation.
The types of software used to generate deepfakes include open-source Python software such as Faceswap and DeepFaceLab. Faceswap is free, open-source software that runs on Windows, macOS, and Linux. DeepFaceLab is another open-source tool that enables face-swapping.
Is it possible to spot a deepfake video? Poorly made deepfake videos may be easy to identify, but higher quality deepfakes can be tough. Continuous advances in technology make detection more difficult.
Certain telltale characteristics can help give away deepfake videos, including these:
- Unnatural eye movement.
- A lack of blinking.
- Unnatural facial expressions.
- Facial morphing — a simple stitch of one image over another.
- Unnatural body shape.
- Unnatural hair.
- Abnormal skin colors.
- Awkward head and body positioning.
- Inconsistent head positions.
- Odd lighting or discoloration.
- Bad lip-syncing.
- Robotic-sounding voices.
- Digital background noise.
- Blurry or misaligned visuals.
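To see how one cue from this list could feed an automated detector, here is a toy sketch of a blink-rate check. It assumes some upstream face tracker has already labeled each video frame with an eyes-open flag; the threshold is illustrative, not a published standard.

```python
# Toy detector for one telltale sign above: a lack of blinking.
# Assumes per-frame "eyes open?" booleans from a hypothetical tracker.
def blink_count(eyes_open):
    # A blink = a transition from open (True) to closed (False).
    return sum(1 for prev, cur in zip(eyes_open, eyes_open[1:])
               if prev and not cur)

def looks_suspicious(eyes_open, fps=30, min_blinks_per_min=5):
    # Flag clips whose subject blinks far less than a real person would.
    minutes = len(eyes_open) / fps / 60
    return blink_count(eyes_open) < min_blinks_per_min * minutes

# A 60-second clip in which the subject never blinks is flagged.
no_blinks = [True] * (30 * 60)
print(looks_suspicious(no_blinks))  # True
```

Real detection systems combine many such cues, which is why researchers favor machine-learning models over any single hand-written rule.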
Researchers are developing technology that can help identify deepfakes. For example, researchers at the University of Southern California and University of California, Berkeley are using machine learning that looks at soft biometrics such as how a person speaks along with facial quirks. Detection has been successful 92 to 96 percent of the time.
Organizations also are incentivizing solutions for deepfake detection. One example is the DFDC, or Deepfake Detection Challenge, kicked off by major companies to spur innovation in deepfake detection technologies. To speed solutions by sharing work, the DFDC has released a data set of 124,000 videos produced with eight facial-modification algorithms.
In the political arena, one example of a deepfake is a 2018 video of former U.S. President Barack Obama talking about deepfakes. This wasn’t really Obama, but it looked and sounded like him.
Other deepfakes have been used around the world in elections or for fomenting political controversy, including these:
- In Gabon, a deepfake video led to an attempted military coup in the Central African nation.
- In India, a candidate used deepfake videos in different languages to criticize the incumbent and reach different constituencies.
Deepfake political videos have become so prevalent and damaging that California has outlawed them during election season. The goal is to keep deepfakes from deceptively swaying voters.
Deepfakes can bolster fake news. They are a major security concern, especially in the 2020 presidential election year. Deepfakes have the potential to undermine political systems.
Shallowfakes are made by freezing frames and slowing down video with simple video-editing software. They don’t rely on algorithms or deep-learning systems.
Shallowfakes can spread misinformation, especially in politics. One example in 2020? A doctored video showing U.S. House Speaker Nancy Pelosi appearing to be impaired.
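Because a shallowfake needs no machine learning, the core trick is trivial. Slowing a clip to half speed, for instance, amounts to showing each frame twice. A minimal sketch, with text labels standing in for video frames:

```python
# Shallowfake sketch: no AI, just simple frame manipulation.
# Halving apparent speed = duplicating every frame.
frames = ["f1", "f2", "f3"]
slowed = [f for f in frames for _ in range(2)]  # each frame shown twice
print(slowed)  # ['f1', 'f1', 'f2', 'f2', 'f3', 'f3']
```

Any consumer video editor can do the same with a speed slider, which is why shallowfakes are far easier to mass-produce than true deepfakes.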
On a global scale, deepfakes can create problems through mass disinformation. On a personal level, deepfakes can lead to bullying, reputational and financial damage, and identity theft.
The United States and other countries have deepfakes on their radar, but one big question is whether deepfakes are illegal.
Here are some of the ways the U.S. government and companies are trying to detect, combat, and protect against deepfakes with new technology, rules, and legislation:
- Social media rules. Social media platforms like Twitter have policies against deceptive deepfakes. Twitter labels any deepfake videos that aren’t removed. YouTube has banned deepfake videos related to the 2020 U.S. Census, as well as election and voting procedures.
- Research lab technologies. Research labs are using watermarks and blockchain technologies in an effort to detect deepfake technology, but technology designed to outsmart deepfake detectors is constantly evolving.
- Filtering programs. Programs like Deeptrace are helping to provide protection. Deeptrace works like a combination of antivirus and spam filters, monitoring incoming media and quarantining suspicious content.
- Corporate best practices. Companies are preparing themselves with consistent communication structures and distribution plans. The planning includes implementing centralized monitoring and reporting, along with effective detection practices.
- U.S. legislation. While the U.S. government is making efforts to combat nefarious uses of deepfake technology with bills that are pending, three states have taken their own steps. Virginia was the first state to impose criminal penalties on nonconsensual deepfake pornography. Texas was the first state to prohibit deepfake videos designed to sway political elections. And California passed two laws that allow victims of deepfake videos — either pornographic or related to elections — to sue for damages.
President Donald Trump also signed into law the National Defense Authorization Act for Fiscal Year 2020, which is the first U.S. legislation related to deepfakes and sets forth three goals:
- Reporting on foreign weaponization of deepfakes.
- Congressional notification of any deepfake disinformation that targets U.S. elections.
- A competition to encourage the creation of deepfake detection technologies.
What can you do about deepfakes?
Here are some ideas to help you protect against deepfakes on a personal level.
- If you’re watching a video online, be sure it’s from a reputable source before believing its claims or sharing it with others.
- When you get a call from someone — your boss, for instance — make sure the person on the other end really is your boss before taking action.
- Don’t believe everything you see and hear on the web. If media strikes you as unbelievable, it’s possible it is.
Editorial note: Our articles provide educational information for you. NortonLifeLock offerings may not cover or protect against every type of crime, fraud, or threat we write about. Our goal is to increase awareness about cyber safety. Please review complete Terms during enrollment or setup. Remember that no one can prevent all identity theft or cybercrime, and that LifeLock does not monitor all transactions at all businesses.