Did scammers train AI to sound like this woman’s daughter?
It’s time we started a conversation about vishing: sophisticated, AI-powered phone scams that are on the rise among hackers and fraudsters.
Imagine arriving at work and receiving a phone call from an unknown number. When you answer, you hear the voice of your child weeping and sobbing, saying that bad people have taken them. A new voice comes on the phone and demands money in exchange for your child’s release.
It’s a nightmare scenario and, sadly, one that happened to a mother in Arizona. Thankfully, the mother was able to confirm that her daughter was safe on a skiing trip. It was a scam, but what’s more disturbing is that the voice she heard on the phone was a convincing match for her daughter’s.
Vishing – the new threat of AI phone scams
Many of us are seeing artificial intelligence become part of our daily routine. Scammers are no different, and they’re using access to AI systems to create more sophisticated schemes to take from their victims.
Vishing, a mashup of “voice” and “phishing,” is an attack in which an AI model is trained to mimic someone’s voice — often the voice of a person the target knows. Once the model is trained, the attacker uses the fake voice to request or extort money from the victim.
Now, you may wonder: Can AI convincingly recreate a human voice? After all, many of us converse with Siri or Alexa every day, and we would never fall for it if they called us claiming to be kidnapped. However, more convincing AI mimicry has been in development for years, and we’re beginning to see it put to use by scammers.
Previously, training an AI to mimic a voice required hours and hours of sample audio to learn from. But that requirement keeps shrinking: scammers can potentially build a model of someone’s voice from nothing more than videos posted to social media.
Recognizing and preventing AI voice scams
As the technology becomes more widespread, we will likely see more advanced versions of current phone scams. Instead of the robotic voices we're used to with robocalls, we could be dealing with AI-powered voices that are indistinguishable from humans. They could engage us in real conversations, respond to our questions, express empathy, and provide compelling reasons to make us part with our money or personal information.
While these threats are evolving and becoming more sophisticated, there are still red flags you can watch out for that give away a scammer:
- Unexpected urgency: Con artists aim to rush you. Real banks and police, on the other hand, give you time to think. A sense of urgency should always raise a red flag.
- Request for information: Beware of any calls requesting personal information. Financial institutions never ask for sensitive data over the phone.
- Doubt and double-check: If a call from a business or bank seems suspicious, hang up. You can always dial the organization’s official number to confirm the call’s authenticity.
The impending rise of advanced AI voice scams may seem like a grim prospect. But as the threats evolve, so too will the countermeasures and awareness.
Remember, knowledge is power and it’s your best defense against scammers using AI to fool you. By staying informed about the latest scam tactics and maintaining a healthy level of skepticism towards unexpected calls, we can help keep our digital lives safe in this era of evolving technology.
Editorial note: Our articles provide educational information for you. Our offerings may not cover or protect against every type of crime, fraud, or threat we write about. Our goal is to increase awareness about Cyber Safety. Please review complete Terms during enrollment or setup. Remember that no one can prevent all identity theft or cybercrime, and that LifeLock does not monitor all transactions at all businesses. The Norton and LifeLock brands are part of Gen Digital Inc.