Top 5 Ways Scammers Have Used AI and Deepfakes in 2025

AI is changing everything, including how scammers trick us. From fake voices to deepfake videos, here are the top ways criminals are weaponizing technology in 2025.


AI tools are advancing at lightning speed, making everyday life more efficient and creative. But scammers are always close behind, weaponizing the same technology to trick, manipulate, and steal.

The FBI’s Internet Crime Complaint Center (IC3) has already reported sharp increases in AI-powered scams, with billions lost each year. Data from the recent Gen Threat Report echoes this, showing attackers have churned out hundreds of thousands of AI-generated scam websites this year.

Here are the top 5 ways scammers have exploited AI and deepfakes in 2025, plus what you can do to protect yourself.

1. Deepfake Voice Cloning Powering Urgent Vishing Calls

Description:
Voice cloning has become so accessible that anyone can replicate a person’s voice with just a few seconds of audio. Scammers use this to impersonate loved ones or trusted figures, pushing victims into hasty decisions.

Real-World Example:
In multiple reported cases, parents in the United States received calls from what sounded like their child’s voice begging for urgent help, such as bail money or emergency travel funds. The voices were generated using stolen audio clips from social media or video posts.

Why It Works:

  • Urgency and fear overwhelm rational thinking.
  • Emotional connections make victims less skeptical.
  • Background noise and tone mimic reality, reinforcing authenticity.

Protection Tips:

  • Always verify requests through a known number, family member, or secondary contact.
  • Create a family “safe word” for emergencies that only real family would know.
  • Slow down and take time to verify. Scammers rely on panic to get you to act before you think.

2. VibeScams: AI Website Builders Fueling Phishing

Description:
Scammers are misusing AI website builders to create professional-looking phishing sites in minutes. With just a prompt, these platforms can clone an existing site’s design, branding, and even customer service features.

Real-World Example:
Norton researchers documented fake Coinbase logins, Microsoft Office 365 portals, DHL delivery pages, and even localized tech support scams created with AI website builders. According to our telemetry, more than 580 new malicious AI-generated websites appear every day worldwide. Norton has published the full VibeScams research.

Why It Works:

  • Sites look visually identical to trusted brands, making detection difficult.
  • Typosquatting tricks, such as “coiinbase” instead of “coinbase,” exploit quick glances.
  • Low cost and instant setup mean scammers can relaunch endlessly.

Protection Tips:

  • Double-check URLs carefully, especially for small spelling changes.
  • Do not click links in unsolicited texts or emails. Use official apps or bookmarks.
  • Use multi-factor authentication (MFA) to protect accounts even if a password is stolen.
  • Install a reputable security solution that blocks known phishing domains.
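The typosquatting trick above ("coiinbase" instead of "coinbase") can be caught programmatically by comparing a domain against a list of trusted domains and flagging near-misses. The sketch below is a minimal illustration using Python's standard-library `difflib`; the trusted-domain list is a hypothetical example, and a real security product would use a curated allowlist and far more sophisticated detection.

```python
import difflib

# Hypothetical allowlist for illustration; a real deployment would use a
# curated feed of trusted domains.
TRUSTED_DOMAINS = ["coinbase.com", "microsoft.com", "dhl.com"]

def looks_like_typosquat(domain: str, threshold: float = 0.85) -> bool:
    """Flag a domain that is suspiciously similar to, but not exactly,
    a trusted domain (e.g. 'coiinbase.com' vs 'coinbase.com')."""
    domain = domain.lower().strip()
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return False  # exact match: this is the real site
        similarity = difflib.SequenceMatcher(None, domain, trusted).ratio()
        if similarity >= threshold:
            return True  # close-but-not-equal: likely typosquat
    return False

print(looks_like_typosquat("coiinbase.com"))  # near-duplicate of coinbase.com -> True
print(looks_like_typosquat("coinbase.com"))   # exact trusted match -> False
```

This is the same "quick glance" weakness in reverse: a human eye skips over one doubled letter, but a character-level similarity score does not.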

3. AI-Powered Romance and Friendship Scams

Description:
Romance scams are not new, but AI makes them more convincing. Chatbots trained on large language models can hold consistent, natural conversations around the clock. In 2025, scammers are layering in deepfake videos to “prove” their identities.

Real-World Example:
The Federal Trade Commission reports that romance scam losses topped 1.3 billion dollars in 2024, and data from Norton’s 2025 Online Dating Report shows that two in five (40%) of current online daters have been targeted by a dating scam. Victims now report “video chats” where the person on screen looks real but is actually a deepfake generated from stolen photos. In one high-profile case, a well-known soap opera actor was deepfaked to scam an LA-based victim out of her life savings. These fakes can smile, nod, and react in ways that trick people into believing they are genuine.

Why It Works:

  • AI provides consistency. It never gets tired, forgetful, or distracted.
  • Deepfake videos eliminate one of the biggest red flags: a match who refuses live video calls.
  • Victims invest emotionally, which makes financial requests harder to resist.

Protection Tips:

  • Be cautious if an online partner avoids in-person meetings or constantly makes excuses.
  • Watch for escalating asks, such as small requests turning into larger financial demands.
  • Reverse image search photos to see if they appear on multiple unrelated profiles.
  • Talk to a trusted friend before sending money to someone you only know online.

4. Business Email Compromise 2.0 (BEC with Deepfakes and Voice Clones)

Description:
Scammers are evolving BEC beyond simple phishing. Using AI, they are cloning the voices of executives and in some cases generating convincing videos to lend credibility to fraudulent instructions.

Real-World Example (WPP, UK and Global):
According to The Guardian, the CEO of WPP was targeted by scammers who cloned his voice and used it on a fake Teams-style call. The voice sounded authentic and instructed staff to share sensitive access credentials and transfer funds under a plausible pretext. While this case stopped short of a major financial loss, it highlights how attackers are blending AI audio and video with traditional BEC tactics.

Why It Works:

  • Hearing a familiar voice or seeing a familiar face overrides skepticism.
  • Authority bias makes employees feel compelled to act quickly.
  • Combining emails with voice or video lowers the chance of anyone demanding further verification.

Protection Tips:

  • Require out-of-band verification, such as calling back on a known number, before transfers.
  • Enforce dual approval for high-value or unusual payments.
  • Train employees to pause and verify even when instructions seem urgent.
  • Explore voice authentication and anomaly detection tools that flag suspicious audio or video.

5. Deepfake Celebrity Endorsements Pushing Fake Investments

Description:
AI deepfakes are increasingly used to create videos of celebrities or financial leaders promoting fake investments, miracle products, or crypto schemes. These scams spread quickly on social media and can be difficult to distinguish from real endorsements.

Real-World Example:
In 2025, multiple deepfake videos of Elon Musk circulated across YouTube and X (formerly Twitter), promoting fraudulent crypto giveaways. Victims believed they were sending funds directly to Musk’s team, only to lose thousands of dollars. Similar scams now feature actors, athletes, and even local influencers.

Why It Works:

  • Authority bias leads people to trust familiar faces.
  • Viral sharing amplifies the scam before platforms can remove content.
  • The combination of urgency and celebrity creates fear of missing out.

Protection Tips:

  • Always verify endorsements through official websites or verified social accounts.
  • Be skeptical of investment pitches that promise guaranteed returns.
  • Report fraudulent videos immediately to the platform to speed up removal.

AI has supercharged old scams with new tricks, making them faster, more convincing, and more scalable than ever before.

The best defense is a layered one: stay skeptical, double-check requests and URLs, and protect your devices with strong security software, identity monitoring, and MFA.

In 2025, the biggest threat is not AI itself. It is how humans are manipulated into believing what AI makes possible.

Michal Salát, Threat Intelligence Director
Michal joined the company as a malware analyst and is now our threat intelligence director.

Editorial note: Our articles provide educational information for you. Our offerings may not cover or protect against every type of crime, fraud, or threat we write about. Our goal is to increase awareness about Cyber Safety. Please review complete Terms during enrollment or setup. Remember that no one can prevent all identity theft or cybercrime, and that LifeLock does not monitor all transactions at all businesses. The Norton and LifeLock brands are part of Gen Digital Inc. 
