A terrifying new AI scam imitates your loved ones' voices to steal money over the phone (2023)

Eddie Cumberbatch was sitting in his Chicago apartment in April when he received a desperate call from his father. When Eddie, a 19-year-old TikToker, heard his father's voice, he knew something was wrong. His father asked if Eddie was home and if everything was okay. "It was a very strange way for him to start the conversation," Eddie told me.

After Eddie said he was safe at home, his father asked whether he had been in a car accident. Eddie was baffled: not only had he not been in an accident, he hadn't driven a car in six months. His father was relieved, but Eddie was confused: why would his father think he'd been in a car accident?

His father explained that someone had called Eddie's grandfather from a foreign number. When his grandfather answered, the voice on the line sounded just like Eddie. This "Eddie" said he had been in a car accident and needed money right away. Fortunately for Eddie's family, his father was immediately suspicious. When he heard about the incident from Eddie's grandfather, he called Eddie to check the story. He knew it would be out of character for Eddie to ask for money; besides, Eddie didn't even have a car in Chicago. The call confirmed that it wasn't Eddie on the phone. In reality, the family was the target of a terrifying new scam: fraudsters had used an AI-generated imitation of Eddie's voice to try to swindle money out of his loved ones.

Impersonating someone to steal money is nothing new. Known as "impostor scams," these schemes are the most common type of fraud in the United States, according to the Federal Trade Commission. People reported losing $2.6 billion to impostor scams in 2022, up from $2.4 billion the year before.

But new technology is making these scams even more pernicious. In March, the FTC warned that scammers had begun using artificial intelligence to stage "family emergencies," in which they convince people that a relative is in distress in order to get cash or private information. In an April survey of adults in seven countries conducted by the global security software company McAfee, a quarter of respondents reported some experience with an AI voice scam: one in 10 said they had been personally targeted, while 15% said it had happened to someone they knew.

With a small fee, a few minutes, and an internet connection, scammers can weaponize AI for personal gain. McAfee's report found that in some cases, all a fraudster needed was three seconds of audio to clone a person's voice. And on social media, it's easy to find a snippet of someone's voice that can then be turned against them.

While Eddie and his family managed to avoid the scam, many victims of these AI scammers are less fortunate. And as AI technology becomes more widespread, these scams are getting more sophisticated.

Supercharged scams

Impostor scams come in many forms, but they generally work the same way: A scammer pretends to be someone you trust in order to convince you to send them money. According to the FTC's website, there are cases of scammers posing as romantic interests, IRS officials, health-care providers, computer technicians, and family members. Most scams happen over the phone, but they can also take place on social media, by text, or over email. In one traumatic case, Richard Mendelstein, a software engineer at Google, received a call from what sounded like his daughter Stella screaming for help. He was instructed to withdraw $4,000 in cash as ransom money. It was only after he sent the money to a wire-transfer center in Mexico City that he realized he had been scammed; his daughter had been safe at school the entire time.

Previous iterations of virtual kidnapping scams, like the one that victimized Mendelstein's family, used generic voice recordings that roughly matched the child's age and sex. The scammers banked on parents panicking at the sound of a frightened teenager, even if the voice didn't really match their child's. But with AI, the voice on the other end of the phone can now sound eerily like the real thing. The Washington Post reported in March that a Canadian couple was scammed out of $21,000 after hearing an AI-generated voice that sounded like their son. In another case this year, scammers cloned the voice of a 15-year-old girl and posed as her kidnappers to demand a $1 million ransom.

Taking my photos and uploading my posts to Instagram is one thing. But trying to clone my voice is really weird and it scared me.

As an online creator with over 100,000 followers on TikTok, Eddie knew fake accounts impersonating him would inevitably pop up. The day before the scam call, a fake account impersonating Eddie had appeared on Instagram and started messaging his family and friends. AI took the scheme to a whole new level.

"Taking my photos and uploading my posts to Instagram is one thing," Eddie told me. "But trying to clone my voice is really weird and it scared me."

Eddie called the rest of his family to warn them about the scam and made a TikTok video about his experience to raise awareness.

Most of us probably think we would recognize our loved ones' voices in an instant. But McAfee found that about 70% of the adults it surveyed weren't confident they could tell the difference between a cloned voice and the real thing. A 2019 study found that the brain didn't register a significant difference between real and computer-generated voices: subjects misidentified morphed (software-altered) voices as real 58% of the time, leaving scammers plenty of room to exploit. And more and more people are handing scammers their real voices: McAfee said 53% of adults share their voice data online every week.

Whether it's a kidnapping, a robbery, a car accident, or simply being stranded somewhere with no money to get home, 45% of McAfee's respondents said they would reply to a voicemail or voice note that sounded like a friend or loved one in need, particularly if it appeared to come from their partner, parent, or child. McAfee also found that more than a third of victims of AI voice scams lost over $1,000, and 7% lost more than $5,000. The FTC reported that victims of impostor scams lost an average of $748 in the first quarter of 2023.

Faking voices

Although the artificial intelligence technology that powers these scams has been around for a while, it has gotten better, cheaper, and more accessible.

"One of the biggest things to recognize about the advances in AI this year is that it's about bringing these technologies to a lot more people, including real scaling within the cyber stakeholder community," said Steve Grobman, director of McAfee technology. "Cybercriminals can use generative AI to spoof voices and deep spoofs in ways that previously required much more sophistication."

He added that cybercriminals are like businesspeople: they look for the most efficient ways to make money. "In the past, these scams were lucrative because when they did pay out, they would often pay out fairly large sums of money," Grobman said. "But if, instead of running a romance scam for three months to get $10,000, someone can run a fake-audio scam in 10 minutes and get the same result, that's going to be far more lucrative."

Earlier phone scams relied on the scammer's acting skills or the victim's gullibility, but now AI does most of the work. Popular AI audio platforms such as Murf, Resemble, and ElevenLabs let users create realistic voices with text-to-speech technology. The barrier to entry is low: most of these programs offer free trials, and no computer-science degree is needed to operate them, which makes them attractive to scammers. A scammer uploads an audio file of someone's voice to one of these sites, and the site builds an AI model of the voice. With just a small audio sample, scammers can achieve a 95% voice match. The scammer can then type whatever they want, and the AI voice speaks the typed words in real time.
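That three-step workflow (upload a short sample, let the platform build a voice model, then type text to be spoken) can be sketched in a few lines. The sketch below is purely illustrative: `VoiceModel`, `clone_voice`, and `synthesize` are hypothetical stand-ins, not the API of Murf, Resemble, ElevenLabs, or any real platform, whose actual endpoints and parameters differ.

```python
from dataclasses import dataclass

@dataclass
class VoiceModel:
    """Stand-in for the voice model a platform builds from an uploaded sample."""
    speaker_id: str
    sample_seconds: float

def clone_voice(sample_seconds: float, speaker_id: str) -> VoiceModel:
    # Step 1: upload a short audio sample. McAfee's report found that as
    # little as three seconds of audio can be enough for a convincing clone.
    if sample_seconds < 3.0:
        raise ValueError("need at least a few seconds of audio")
    return VoiceModel(speaker_id=speaker_id, sample_seconds=sample_seconds)

def synthesize(model: VoiceModel, text: str) -> str:
    # Steps 2-3: type any text and the platform renders it in the cloned
    # voice in real time. Here we return a labeled string instead of audio.
    return f"[{model.speaker_id} voice] {text}"

model = clone_voice(3.0, "eddie")
print(synthesize(model, "I was in a car accident. I need money right away."))
```

The point of the sketch is how little the attacker supplies: a few seconds of audio and a line of text, with the platform doing everything else.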

Once voice scammers have committed their crime, they're hard to catch. Victims often have little information for law enforcement to act on, and because voice scammers can operate from anywhere in the world, police face daunting logistical and legal hurdles. With scant information and limited police resources, most cases go unsolved. In the UK, only one in 1,000 fraud cases leads to a prosecution.

Grobman, however, says that simply knowing these scams exist offers some protection. When you get one of these calls, step back and ask a few questions that only the loved one on the other end of the line would know the answer to. The FTC also recommends putting the call on hold and trying to reach the relative separately to verify the story, as Eddie's father did; even if a suspicious call comes from a family member's number, the number itself could be spoofed. Another telltale sign is the caller asking for money through hard-to-trace channels such as wire transfers, cryptocurrency, or gift cards. Security experts even recommend agreeing on a safe word with loved ones that can be used to distinguish a real emergency from a scam.

AI risks

As AI becomes ubiquitous, these scams threaten our ability to trust even the voices of those closest to us. Fortunately, the US government is starting to take note of the fraudulent use of AI. Supreme Court Justice Neil Gorsuch noted in February that the legal protections that shield social networks from lawsuits over what third parties post may not extend to AI-generated content. And Vice President Kamala Harris told the CEOs of major tech companies in May that they had a "moral" responsibility to protect society from the dangers of AI. The FTC delivered a similar message to companies in February: "You should understand the reasonably foreseeable risks and implications of your AI product before bringing it to market."

Ally Armeson, program executive director of the Cybercrime Support Network, a nonprofit organization dedicated to helping businesses and individuals combat cybercrime, agreed that some regulation was needed. "Generative AI is evolving very quickly," she told me. "Like any technology, generative AI can be misused or exploited for malicious purposes, so regulation will undoubtedly be necessary as these generative AI tools continue to evolve."

But while AI cybersecurity solutions are still rolling out, Armeson believes the burden for now falls on individuals: "Until all consumers have access to these security solutions, the onus is on us as individuals to understand the new cyberthreat landscape and protect ourselves."

Eve Upton-Clark is a features writer covering culture and society.

