Martin Lewis warns of ‘frightening’ fraudsters using his face for money
Martin Lewis’s face and voice have been faked by fraudsters looking to use the trusted money-saving expert to rope people into a scam.
An AI-generated video of Mr Lewis showed him endorsing an investment scheme from Tesla chief executive and Twitter owner Elon Musk.
After being made aware of the deepfake clip, Mr Lewis called for government regulation of “big tech publishing such dangerous fakes”.
He described the video as “frightening”, saying if these scams are allowed to continue unchecked people will lose money and “lives will be ruined”.
Highlighting the scam on Twitter, he wrote: “This is a scam by criminals trying to steal money. This is frightening, it’s the first deep fake video scam I’ve seen with me in it.
“Govt & regulators must step up to stop big tech publishing such dangerous fakes. People’ll lose money and it’ll ruin lives.”
In the video, the deepfake version of the consumer champion says: “Elon Musk presented his project in which he has already invested more than $3 billion.
“Musk’s new project opens up great investment opportunities for British citizens. No project has ever given such opportunities to residents of the country.”
He speaks with a convincing recreation of Mr Lewis's voice that would be difficult to detect for anyone not wary of the technology behind it.
While photo editing tools like Photoshop have existed for some time, the increasing availability of AI-led deepfake tools enables anyone with a computer to artificially generate not just convincing images, but also audio and video recordings.
The video also mimics the branding of ITV’s This Morning, a show which Mr Lewis has frequently appeared on. It is not clear where it was first posted.
Mr Lewis's website, Moneysavingexpert.com, confirmed it had contacted Twitter to ask about the scam and whether Mr Musk was aware of the "investment opportunity" being falsely advertised in it.
They also stated that Mr Lewis “NEVER does adverts or promotes investments.”
He previously sued Facebook for defamation in 2019 after the social media site published scam ads featuring his image.
As part of an out-of-court settlement, Facebook pledged to create a reporting tool and donate £3 million to Citizens Advice for an anti-scam project.
Speaking to MPs on the Draft Online Safety Bill Joint Committee in 2021, Mr Lewis became visibly emotional as he described how lives were being “destroyed” by fraudsters who had used his image in scam adverts online.
He gave several examples, including a woman with cancer who lost thousands of pounds earmarked for her granddaughter’s wedding, after she saw an advert falsely claiming to be endorsed by him.
“She said ‘It’s Martin sponsoring it, it must be all right’,” Mr Lewis said. “It was a scam, and she lost tens of thousands. She lost £15,000 trying to get back the money initially lost.”
Last month experts warned Express.co.uk that deepfake technology ran the risk of "undermining" the justice system, not just through scams but also through the denial of genuine evidence.
Burkhard Schafer, Professor of Computational Legal Theory at Edinburgh Law School, told Express.co.uk that deepfake technology works to incite conspiracy theorists and armchair detectives who refuse to accept the results of criminal proceedings.
He said: “Self-appointed, often anonymous Internet sleuths going over clips and images from the trial, but taken out of their context, and then ‘refuted’ with technically sounding but mostly idiotic analysis, often ignoring that the image they look at will be a copy of a copy of a copy, often having changed format, and not the one the jury saw.”
But there may be ways to control AI content spreading online. Matt Moynahan, CEO of cybersecurity firm OneSpan, told Express.co.uk: "Our company has been working with banks over the years, since they moved to digital banking, and what we do is have an ability for a bank and an end user to be linked securely at the end user level, and then to audit everything that happens between that bank and an end user."
He said that you can then take a “cryptographic hash” of these transactions to confirm their provenance – tech which could then act as a verification sticker for content being spread online.
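To illustrate the general idea behind such a "verification sticker", the minimal sketch below uses Python's standard hashlib and hmac modules: a publisher fingerprints a piece of content with a cryptographic hash, binds it to a key it controls, and anyone can later check that the content has not been altered. This is only an assumed illustration of hash-based provenance, not OneSpan's actual product or API, and the key and function names are hypothetical.

```python
# Minimal sketch of hash-based content provenance (illustrative only).
import hashlib
import hmac

PUBLISHER_KEY = b"example-secret-key"  # hypothetical key held by the publisher


def fingerprint(content: bytes) -> str:
    """Return a cryptographic hash that uniquely identifies the content."""
    return hashlib.sha256(content).hexdigest()


def sign(content: bytes) -> str:
    """Bind the content's fingerprint to the publisher with a keyed hash (HMAC)."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()


def verify(content: bytes, signature: str) -> bool:
    """Check that the content still matches the signature the publisher issued."""
    return hmac.compare_digest(sign(content), signature)


if __name__ == "__main__":
    video = b"original broadcast footage"
    sticker = sign(video)                          # the "verification sticker"
    print(verify(video, sticker))                  # True: content unchanged
    print(verify(b"deepfaked footage", sticker))   # False: content was altered
```

Any tampering with the content changes its hash, so the signature no longer verifies, which is what would flag a doctored clip as inauthentic.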
But Mr Schafer said the technology and funding still needed to catch up before such a feature could be rolled out on a scale large enough to tackle scammers.