Are AI Voice Generators a Danger for the Future?

Sophia Romo walking through Huntington Beach High School’s campus. (Photography by: Natalie Meschuk)

As time passes, technology advances. But has it been advancing too fast? AI has been making a grand appearance on the internet, and its abilities are frightening. 

Jesse Meschuk, a resident of Huntington Beach, said, “I think AI holds great promise for humanity, and when used correctly, could do a lot of great things, like [modeling] out solutions to problems like cancer and incurable diseases that we couldn’t think about before. However, again, I think we have to be careful, because if it doesn’t have the right restrictions, what do we do when AI starts taking over more parts of our daily lives, and then starts using it against us in some way? What if AI got to a point where it had general intelligence and it’s able to think for itself?”

Voice generators have been around since 2017, when Lyrebird, an AI research division of Descript, created a product that can copy any person’s voice after hearing just one minute of speech. But voice generators have recently become a problem with new AI technology.

Previous generators tried replicating people’s voices, usually those of famous celebrities or icons of the time. They produced rocky, robotic impressions that did not sound like the celebrity they were trying to copy. But now, this automatic voice replication has quickly shifted and become dangerously spot-on.

AI has become more prevalent on social media. For example, Snapchat launched its new personal AI, and it has gained attention for acting strange. Abby Kiet, a freshman at Huntington Beach High School, said, “I asked the [Snapchat] AI where I was, and my location was off on Snap, and then it gave me my address. And then I asked it again the day after, and it said, ‘Sorry, I cannot reveal that information because I am an AI.’ I asked for the best place to get chicken tenders in class, and then it gave me little locations such as plazas including Seacliff [Shopping Center], and I was wondering, ‘How do you know that? How do you know where I am?’”

While AI having access to our location is already scary enough, its ability to perfectly replicate a person’s voice is on another level. This technology has become so good at copying celebrity voices that listeners can’t tell the difference between a fake, generated voice and someone’s authentic voice. But it hasn’t stopped at just celebrities. AI has given ill-intentioned people access to brilliant technology.

Natalie Tease, a mother, was at home relaxing until she received a disturbing phone call. “I picked up the phone to listen to it and it was [my daughter] Olivia’s voice screaming hysterically, ‘Mom, help me, help me. Will’s dead. Help me,’” Tease explained.

What Tease experienced was a scam using voices generated by AI. In Tease’s situation, it sounded like the scammer was going to ask for a ransom. What Tease said next puts this scamming technique into a real perspective. “I probably would have done anything they asked for at that moment,” she said.

A similar incident happened in Arizona. Jennifer DeStefano’s daughter was away on a ski trip, and DeStefano didn’t think anything of it until she received a phone call from an unknown number. “It’s my daughter’s voice crying and sobbing, um, saying, ‘Mom!’” DeStefano recalled. Confused, she responded, “Okay, what happened?”

On the other end of the line, DeStefano’s daughter said something that would make any parent’s heart drop: “Mom, these bad men have me. Help me, help me.”

The man who supposedly had DeStefano’s daughter asked for a ransom but never sent any payment information to DeStefano; he said he would pick the money up. After the phone call, DeStefano called the friends who were with her daughter on the ski trip, and to her relief, they all said her daughter was safe.

AI voice technology can be seen as fun, but in real situations, it can quickly become a threat.

Meschuk said, “I also think that it could be used in a negative way to spread misinformation. We’ve seen a lot of social media being used to spread false information. And so, I worry that voice generators could be used to make an even more sophisticated manipulation. I think that it’s fun, it’s exciting, but it’s a new technology. There need to be some laws put in place to help regulate what you can do and can’t do.”

It is alarming how fast these generators have become popular. A simple Google search for “AI voice generators” returns roughly 4,140,000 results. With so many people having access to this sophisticated technology, even more misinformation could spread across the internet.