The Era of Digital Deception
Sophisticated scam technology harnessing artificial intelligence is capable of deceiving even the most vigilant.
COMPUTER-GENERATED children’s voices that fool their own parents. Masks created with photos from social media that deceive a system protected by Face ID.
They sound like the stuff of science fiction, but these techniques are already available to criminals preying on everyday consumers.
The proliferation of scam tech has alarmed regulators, police, and people at the highest levels of the financial industry. Artificial intelligence (AI) in particular is being used to “turbocharge” fraud, US Federal Trade Commission chair Lina Khan warned in June, calling for increased vigilance from law enforcement.
Even before AI broke loose and became available to anyone with an Internet connection, the world was struggling to contain an explosion in financial fraud.
In the United States alone, consumers lost almost US$8.8bil (RM40.9bil) last year, up 44% from 2021, despite record investment in detection and prevention. Financial crime experts at major banks, including Wells Fargo & Co and Deutsche Bank AG, say the fraud boom on the horizon is one of the biggest threats facing their industry.
On top of paying the cost of fighting scams, the financial industry risks losing the faith of burned customers.
“It’s an arms race,” says James Roberts, who heads up fraud management at the Commonwealth Bank of Australia, the country’s biggest bank.
“It would be a stretch to say that we’re winning.”
The history of scams is surely as old as the history of trade and business.
One of the earliest known cases, more than 2,000 years ago, involved a Greek sea merchant who tried to sink his ship to get a fraudulent payout on an insurance policy.
Look back through any newspaper archive, and you’ll find countless attempts to part the gullible from their money.
But the dark economy of fraud, just like the broader economy, has periodic bursts of destabilising innovation.
New technology lowers the cost of running a scam and lets criminals reach a larger pool of unprepared victims.
Email introduced every computer user in the world to a cast of hard-up princes who needed help rescuing their lost fortunes.
Crypto brought with it a blossoming of Ponzi schemes that spread virally over social media.
The future of fake
The AI explosion offers not only new tools but also the potential for life-changing financial losses.
And the increased sophistication and novelty of the technology mean that everyone, not just the credulous, is a potential victim.
The Covid-19 lockdowns accelerated the adoption of online banking around the world, with phones and laptops replacing face-to-face interactions at bank branches.
The shift has brought lower costs and increased speed for financial firms and their customers, as well as openings for scammers.
Some of the new techniques go beyond what current off-the-shelf technology can do, and it’s not always easy to tell whether you’re dealing with a garden-variety fraudster or a nation-state actor.
“We are starting to see much more sophistication with respect to cybercrime,” says Amy Hogan-Burney, general manager of cybersecurity policy and protection at Microsoft Corp.
Globally, cybercrime costs, including scams, are set to hit US$8 trillion (RM37.18 trillion) this year, outstripping the economic output of Japan, the world’s third-largest economy.
By 2025, it will reach US$10.5 trillion (RM48.8 trillion), after more than tripling in a decade, according to researcher Cybersecurity Ventures.
In the Sydney suburb of Redfern, some of Roberts’ team of more than 500 spend their days eavesdropping on cons to hear firsthand how AI is reshaping their battle.
A fake request for money from a loved one isn’t new. But now parents get calls that clone their child’s voice with AI to sound indistinguishable from the real thing.
These tricks, known as social engineering scams, tend to have the highest hit rates and generate some of the quickest returns for fraudsters.
Today, cloning a person’s voice is becoming increasingly easy.
Once a scammer downloads a short sample from an audio clip on someone’s social media or voicemail – it can be as short as 30 seconds – they can use AI voice-synthesising tools readily available online to create the content they need.
Public social media accounts make it easy to figure out who a person’s relatives and friends are, not to mention where they live and work and other vital information.
Bank bosses stress that scammers, who run their operations like businesses, are prepared to be patient, sometimes planning attacks for months.
What fraud teams are seeing so far is only a taste of what AI will make possible, according to Rob Pope, director of New Zealand’s government cybersecurity agency, CERT NZ.
He points out that AI simultaneously helps criminals increase both the volume and the customisation of their attacks.
“It’s a fair bet that over the next two or three years we’re going to see more AI-generated criminal attacks,” says Pope,
a former deputy commissioner in the New Zealand Police who oversaw some of the nation’s highest-profile criminal cases. “What AI does is accelerate the levels of sophistication and the ability of these bad people to pivot very quickly. AI makes it easier for them.”
To give a sense of the challenge facing banks, Roberts says right now the Commonwealth Bank of Australia is tracking about 85 million events a day through a network of surveillance tools.
That’s in a country with a population of just 26 million.
The industry hopes to fight back by educating consumers about the risks and increasing investment in defensive technology.
New software lets CBA spot when customers use their computer mouse in an unusual way during a transaction – a red flag for a possible scam.
Anything suspicious, including the destination of an order and how the purchase is processed, can alert staff in as few as 30 milliseconds, allowing them to block the transaction.
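In broad strokes, behavioural checks like this reduce pointer telemetry to a few features and compare them against a customer’s usual pattern. The sketch below is purely illustrative – the feature set, the threshold, and every name in it are assumptions, not a description of CBA’s actual system:

```python
from math import hypot

def speed_features(samples):
    """Mean and peak cursor speed from (time, x, y) pointer samples."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(hypot(x1 - x0, y1 - y0) / dt)
    if not speeds:
        return 0.0, 0.0
    return sum(speeds) / len(speeds), max(speeds)

def looks_suspicious(session, baseline_mean, tolerance=3.0):
    """Flag a session whose mean speed is far outside the user's baseline.

    `tolerance` is a crude multiplicative band standing in for a learned
    threshold in a real system.
    """
    mean_speed, _ = speed_features(session)
    if baseline_mean == 0:
        return mean_speed > 0
    ratio = mean_speed / baseline_mean
    return ratio > tolerance or ratio < 1 / tolerance

# A human-like session: small, irregular movements over time.
human = [(0.00, 100, 100), (0.05, 104, 102), (0.11, 109, 107), (0.18, 111, 110)]
# A script-like session: huge instant jumps, as if the cursor teleports.
bot = [(0.00, 0, 0), (0.01, 500, 500), (0.02, 0, 900)]

baseline, _ = speed_features(human)
print(looks_suspicious(human, baseline))  # False: matches the baseline
print(looks_suspicious(bot, baseline))    # True: flagged as anomalous
```

A production system would track many more signals (acceleration, click cadence, path curvature) and learn per-customer thresholds rather than hard-coding one.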
At Deutsche Bank, computer engineers have recently rebuilt their suspicious transaction detection system, called Black Forest, using the latest natural language processing models, according to Thomas Graf, a senior machine learning engineer there.
The tool looks at transaction criteria such as volume, currency, and destination and automatically learns from reams of data what patterns suggest fraud.
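The idea of learning fraud patterns from labelled history can be sketched in miniature, though a system like the one described would use far richer models. Everything below (the field names, the Laplace smoothing, the scoring rule) is an illustrative assumption, not Deutsche Bank’s method:

```python
from collections import defaultdict

def learn_fraud_rates(history):
    """Estimate a fraud rate for each (field, value) pair in labelled history.

    `history` is a list of (transaction_dict, is_fraud) pairs.
    Laplace smoothing (+1 / +2) keeps rarely seen values from scoring 0 or 1.
    """
    counts = defaultdict(lambda: [0, 0])  # (field, value) -> [fraud, total]
    for txn, is_fraud in history:
        for field, value in txn.items():
            counts[(field, value)][0] += int(is_fraud)
            counts[(field, value)][1] += 1
    return {k: (f + 1) / (n + 2) for k, (f, n) in counts.items()}

def fraud_score(txn, rates, prior=0.5):
    """Average the learned rates over a transaction's fields; unseen values fall back to the prior."""
    vals = [rates.get((f, v), prior) for f, v in txn.items()]
    return sum(vals) / len(vals)

history = [
    ({"currency": "EUR", "destination": "DE", "band": "low"},  False),
    ({"currency": "EUR", "destination": "DE", "band": "low"},  False),
    ({"currency": "USD", "destination": "XX", "band": "high"}, True),
    ({"currency": "USD", "destination": "XX", "band": "high"}, True),
]
rates = learn_fraud_rates(history)
safe  = fraud_score({"currency": "EUR", "destination": "DE", "band": "low"}, rates)
risky = fraud_score({"currency": "USD", "destination": "XX", "band": "high"}, rates)
print(round(safe, 2), round(risky, 2))  # prints: 0.25 0.75
```

The toy version scores each field independently; the appeal of modern language-model-based detectors is precisely that they can pick up combinations of attributes no one thought to encode by hand.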
The model can be used on both retail and corporate transactions and has already unearthed several cases, including one involving organised crime, money laundering, and tax evasion.
Wells Fargo has overhauled its tech systems to counter the risk of AI-generated videos and voices. “We train our software and our employees to be able to spot these fakes,” says Chintan Mehta, Wells Fargo’s head of digital technology. But the system needs to keep evolving to keep up with the criminals. Detecting scams, of course, costs money.
The digital dance
One problem for companies: Every time they tighten things, criminals try to find a workaround.
For example, some US banks require customers to upload a photo of an ID document when signing up for an account.
Scammers are now buying stolen data on the dark web, finding photos of their victims on social media, and 3D-printing masks to create fake IDs with the stolen information.
“And these can look like everything from what you get at a Halloween shop to an extremely lifelike silicone mask of Hollywood standards,” says Alain Meier, head of identity at Plaid, which helps banks, financial technology companies, and other businesses battle fraud with its ID verification software. Plaid analyses skin texture and translucency to make sure the person in the photo looks real.
Meier, who’s dedicated his career to detecting fraud, says the best fraudsters, those running their schemes as businesses, build scamming software and package it up to sell on the dark web.
Prices can range from US$20 (RM95) to thousands of dollars.
“For example, it could be a Chrome extension to help you bypass fingerprinting or tools that can help you generate synthetic images,” he says.
As fraud gets more sophisticated, the question of who’s responsible for losses is getting more contentious.
In the United Kingdom, for example, victims of unknown transactions – say, someone copies and uses your credit card – are legally protected against losses.
If someone tricks you into making a payment, responsibility becomes less clear.
In July, the UK’s top court ruled that a couple who were fooled into sending money abroad couldn’t hold their bank liable simply for following their instructions.
But legislators and regulators have leeway to set other rules: The government is preparing to require banks to reimburse fraud victims when the cash is transferred via Faster Payments, a system for sending money between UK banks.
Politicians and consumer advocates in other countries are pushing for similar changes, arguing that it’s unreasonable to expect people to recognise these increasingly sophisticated scams.
Banks worry that changing the rules would simply make things easier for fraudsters.
Financial industry leaders around the world are also trying to push a share of the responsibility onto tech firms.
The fastest-growing scam category is investment fraud, often introduced to victims through search engines where scammers can easily buy sponsored advertising spots.
When would-be investors click through, they often find realistic prospectuses and other financial data. Once they transfer their money, it can take months, if not years, to realise they’ve been swindled when they try to cash in on their “investment”.
In June, a group of 30 lenders in the UK sent a letter to Prime Minister Rishi Sunak asking that tech companies contribute to refunds for victims of fraud stemming from their platforms.
The government says it’s planning new legislation and other measures to crack down on online financial scams.
The banking industry is lobbying to spread responsibility more widely, in part because costs appear to be going up. Once again, a familiar problem from economics applies in the scam economy, too.
Like pollution from a factory, new technology is creating an externality, or a cost imposed on others. In this case, there’s a heightened reach and risk for scams.
Neither banks nor consumers want to be the only ones forced to pay the price.
Chris Sheehan spent almost three decades with the country’s police force before joining National Australia Bank Ltd, where he heads investigations and fraud.
He’s added about 40 people to his team in the past year, backed by constant investment from the bank.
When he adds up all the staff and tech costs, “it scares me how big the number is”, he says.
“I am hopeful because there are technological solutions, but you never completely solve the problem,” he says. It reminds him of his time fighting drug gangs as a cop.
Framing it as a war on drugs was “a big mistake”, he says.
“I will never phrase it in that framework – of a war on scams – because the implication is that a war is winnable,” he says. “This is not winnable.” – Bloomberg