Scam Calls Using AI Voice Cloning Are On The Rise; Here's How To Stay Protected
While artificial intelligence is being used to build wonders like chatbots, it is also being used to scam people across the globe. Deepfakes were already prevalent before the advent of generative AI, but newer generative AI tools have given scammers fresh ways to fool people. Impostors have become so convincing with cloned voices that even experts find them hard to tell apart from the real thing, and fake AI voice calls are claiming more and more victims worldwide. However, there are ways the public can protect themselves and possibly even prevent such scams.
What is an AI voice clone scam?
The advanced generative AI tools of today can imitate anyone's voice or appearance to near perfection. In some cases, impostors pretend to be a family member of the victim who is distressed and needs money. In other cases, they impersonate celebrities or politicians to circulate fake information, as in the recent scam that impersonated US President Joe Biden.
According to an NBC report, the recent scam involved a robocall impersonating President Biden that urged voters in New Hampshire not to participate in the presidential primary. In a voice that sounds like President Biden's, the fraudulent call instructs recipients to save their vote for the November 2024 election instead.
"NBC reports that NH voters are getting robocalls with a deepfake of Biden's voice telling them to not vote tomorrow. 'it's important that you save your vote for the November election.'"
— Alex Thompson (@AlexThomp) January 22, 2024
How does AI voice cloning work?
The concept behind AI voice cloning tools is simple: the tool trains on samples of someone's voice and then generates new audio that sounds like that person. Scammers get hold of this training data from the videos and audio people share on social media, which act as treasure troves of information for them.
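To put that in perspective, the sketch below shows roughly what such a pipeline can look like, using the open-source Coqui TTS library as one example; the model name and file paths are illustrative assumptions, and a short reference clip is all the input this kind of tool needs.

```python
# A minimal sketch of how little code a voice-cloning pipeline can take.
# Assumption: the open-source Coqui TTS package is installed (`pip install TTS`);
# the file paths below are hypothetical placeholders, not real samples.
from TTS.api import TTS

# Load a publicly available multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of speech -- for example, audio taken from a public social
# media video -- is enough for the model to mimic the speaker's voice.
tts.tts_to_file(
    text="Hello, this is a demonstration of a cloned voice.",
    speaker_wav="reference_clip.wav",  # hypothetical sample of the target's voice
    language="en",
    file_path="cloned_output.wav",     # synthesized speech in the cloned voice
)
```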
"How AI has been helping criminals who use deepfakes and voice cloning for financial scams, forcing banks and fintechs to invest in AI to counter fraud (@svr13 / Financial Times)"
— Techmeme (@Techmeme) January 20, 2024
How to detect and potentially prevent AI voice clone scams
1. Verify authenticity
For AI voice scams involving celebrities, it is always best to fact-check whether the person actually said what the audio claims. For instance, in the case of the recent robocall impersonating President Biden, a simple Google search turns up multiple publications debunking it.
2. Scrutinize calls from unknown numbers claiming to be family or friends
For AI voice scams imitating distressed family members or friends, the first thing to check is whether the caller really is who they claim to be. When scammers use sophisticated technology this can be hard to judge, but clues such as the caller's way of talking, mannerisms, choice of words, pitch and accent may give the fake away. If the caller asks for money urgently, it is best to take a moment and verify the story thoroughly.
3. Set up a password
Modern problems require modern solutions. While it may sound nerdy, it is wise to agree on a family password that can be used to quickly verify a caller's identity. A simple word or phrase works, and it can also come in handy in a genuine emergency.
4. Seek help from the authorities
If you receive a suspicious call from an unknown number, report it to the authorities as soon as possible. Even if a family member or friend really is in trouble, the authorities can help.
5. Be mindful of what you share
For non-celebrities, a person's social media feed largely determines whether they can become the target of an AI voice cloning scam. Social media accounts carry information about friends and family as well as media like videos, photos, and audio, all of which scammers can exploit. It is therefore wise to avoid posting lengthy videos containing speech or conversations and to set social media accounts to private so that strangers cannot access that media.