We’re living in the dawn of a new era, with technology woven into nearly every aspect of human life – including the humble WhatsApp voice note.
It’s a nifty little feature that adds convenience to our lives, enabling quick and personalised communication on the go or while multitasking.
However, a new form of cyber threat has now emerged, harnessing generative artificial intelligence to clone voices.
All it takes is a one-minute recording, and deepfake technology will do the rest – creating incredibly convincing imitations.
This advancement has placed everyone from ordinary people to high-profile celebrities, politicians and companies at unprecedented risk.
Generative AI is a wonderful thing. Its potential is vast and it is reshaping how businesses operate, how content is created and how data is analysed.
Yet its immense possibilities are also easily twisted by cybercriminals, who employ generative AI for malicious purposes such as disturbingly realistic deepfakes and voice scams.
And the financial losses pile up.
In 2019, this technology was used to impersonate an unnamed UK energy company’s CEO, resulting in a loss of $243 000 (approximately R4.6 million).
A similar scam in Hong Kong in 2021 resulted in a $35-million loss.
Stephen Osler, co-founder at Nclose, says scammers can use online tools to mimic a specific individual’s voice with just seconds of recorded audio.
The alarming frequency of voice note usage, especially by busy executives, opens up opportunities for cybercriminals.
For example, an unsuspecting IT administrator might execute a fake instruction via voice note, granting unauthorised access to vital business infrastructure.
And the voice notes used to create these audio deepfakes can be sourced from WhatsApp, Facebook Messenger, phone calls and social media posts, to mention just a few.
Once captured, these recordings are manipulated with AI technology, Osler says, to make it seem as if the person is speaking live.
For now, businesses can stay ahead of the game by implementing strong processes and procedures requiring multiple authentication levels.
Osler says a well-defined process must be established for all transactions, and training must be provided to ensure employees are aware of these evolving risks.
While businesses have resources to fall back on, and researchers are still working out how to detect deepfakes, individuals are largely left to protect themselves.
Luckily, there are easy-to-follow steps to take, such as being aware of the latest voice phishing (or vishing) scams.
It’s wise to be mindful of what you share and with whom – details such as your ID number, home address, birth date and phone numbers.
Heck, even your middle name and the names of your children.
Always be wary of unexpected phone calls, but the harsh reality is that even a friend’s or parent’s voice can be faked, as can their caller ID.
The best defence, according to Matthew Wright, a professor of computing security at the Rochester Institute of Technology, is to know yourself.
Wright says: “Specifically, know your intellectual and emotional biases and vulnerabilities. This is good life advice in general, but it is key to protect yourself from being manipulated.
“Scammers typically seek to suss out and then prey on your financial anxieties, your political attachments or other inclinations, whatever those may be.”
“This alertness is also a decent defense against disinformation using voice deepfakes.
“Deepfakes can be used to take advantage of your confirmation bias, or what you are inclined to believe about someone.”