
Deepfakes make banks keep it real



Deepfake technology has been able to create realistic — but false — renderings of real people and their voices for some time. But now, with rapid advances in generative artificial intelligence, it is set to become far more sophisticated. How soon, then, before digital impersonators start filling our screens with fake narratives and emptying our bank accounts?

Ever since deepfakes began to be used in film editing, experts have feared that the technology might be abused to spread online misinformation, or exploited for identity theft and fraud.

As a result, a market for deepfake detection tools quickly developed. These can use AI to monitor the tell-tale signs that content has been faked — based on an understanding of the way deepfakes are created. For example, in the case of a person’s photo, inconsistencies around lighting, shadows and angles, or instances of distortion and blurring, are obvious giveaways.
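As a rough illustration of that idea, the Python sketch below flags images with suspiciously little high-frequency detail, one crude proxy for the over-smoothing that can betray a generated face. The helper names and the 0.002 threshold are invented for illustration; real detectors combine many such cues inside trained models.

```python
# Illustrative sketch only: one classic giveaway named above is unnatural
# smoothing in generated faces. A crude proxy is the variance of the image's
# Laplacian (its high-frequency energy). The 0.002 threshold is a made-up
# placeholder; real detectors combine many cues inside trained models.
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a simple 4-neighbour Laplacian over a grayscale image."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def looks_oversmoothed(gray: np.ndarray, threshold: float = 0.002) -> bool:
    # Little high-frequency energy can indicate generator smoothing, or just
    # a soft-focus photo, which is why single cues are unreliable on their own.
    return laplacian_variance(gray) < threshold

print(looks_oversmoothed(np.full((64, 64), 0.5)))   # True: perfectly flat
print(looks_oversmoothed(np.random.rand(64, 64)))   # False: plenty of texture
```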

However, with the recent explosion of generative AI models and consumer chatbots such as ChatGPT, deepfake technology has become more convincing — and more readily available, at scale. Hackers no longer need advanced technical capabilities. 

Michael Matias, chief executive of deepfake detection start-up Clarity, says: “More advanced AI models are [being] released within the open-source domain, making deepfakes more prevalent and pushing technology even further.” And he warns that “the rise of easily accessible ‘killer apps’ empowers bad actors to generate super high-quality deepfakes quickly, easily and with no costs”. This is already rendering some detection tools less effective. 


According to technology provider ExpressVPN, there are now millions of deepfakes online, up from fewer than 15,000 in 2019. In a survey by Regula, some 80 per cent of companies said these deepfakes — voice or video — represented real threats to their operations.

“Businesses need to view this as the next generation of cyber security concerns,” says Matthew Moynahan, chief executive of authentication provider OneSpan. “We’ve pretty much solved the issues of confidentiality and availability, now it’s about authenticity.”

It is a priority, too. A June report by Transmit Security found that AI-generated deepfakes can be used to bypass biometric security systems, such as the facial recognition systems protecting customers’ accounts, and to create counterfeit ID documents. Chatbots can now be programmed to emulate a trusted individual or a customer services representative, tricking people into handing over valuable personally identifiable information for use in other attacks.

Only last year, the FBI reported a rise in complaints citing the use of deepfakes alongside stolen personally identifiable information to apply for jobs and work-at-home positions online.

One way to combat this type of ID theft is to use what is known as behavioural biometrics, says Haywood Talcove, chief executive of LexisNexis Risk Solutions Government Group. This involves assessing and learning how a user handles a device, such as a smartphone, or behaves when using a computer. If any suspicious changes are detected, they are flagged.

“These behavioural biometric systems look for thousands of cues that somebody might not be who they say they are,” Talcove explains. For example, if a user is on a part of a website they have never visited before, yet appears familiar with it and navigates it at speed, that might be a fraud indicator.
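A toy version of that profiling logic might look like the following Python sketch. The features, baseline numbers and three-standard-deviation threshold are all invented for illustration; commercial systems track thousands of cues and use trained models rather than simple thresholds.

```python
# Illustrative sketch of the behavioural-biometrics idea described above:
# compare a session's behavioural features against the user's stored profile
# and flag large deviations. All names and numbers here are hypothetical.
import numpy as np

# Hypothetical per-user baseline: (mean, std) of past sessions.
PROFILE = {
    "typing_interval_ms": (182.0, 25.0),   # average gap between keystrokes
    "pages_per_minute":   (3.1, 0.9),      # navigation speed
    "mouse_speed_px_s":   (540.0, 130.0),  # pointer velocity
}

def session_is_suspicious(session: dict, threshold: float = 3.0) -> bool:
    """Flag the session if any feature deviates more than `threshold` std devs."""
    for feature, value in session.items():
        mean, std = PROFILE[feature]
        if abs(value - mean) / std > threshold:
            return True   # suspicious: escalate to step-up authentication
    return False

# A fraudster racing through an unfamiliar site at machine speed:
print(session_is_suspicious({"typing_interval_ms": 60.0,
                             "pages_per_minute": 14.0,
                             "mouse_speed_px_s": 2100.0}))   # True
```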


Start-ups, such as BioCatch, as well as bigger groups, such as LexisNexis, are among those developing this technology to continuously verify a user in real time.

There is a risk of counter-attacks, though. “Traditional fraud detection systems often rely on rule-based algorithms or pattern-recognition techniques,” notes Transmit Security. “However, AI-powered fraudsters can employ deepfakes to evade these systems. By generating counterfeit data or manipulating patterns that AI models have learned from — a fraud technique known as adversarial attacks — fraudsters can trick algorithms into classifying fraudulent activities as legitimate.”
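The weakness Transmit Security describes is easy to see in a toy example. The sketch below, with entirely invented rules and numbers, shows how fixed thresholds invite an attacker to probe them and stay just underneath.

```python
# Illustrative sketch of the evasion problem above: a rule-based fraud check
# with fixed thresholds, and a transaction shaped to sit just under them.
# The rules and figures are invented for illustration only.

def rule_based_flag(tx: dict) -> bool:
    """Toy detector: fixed thresholds an attacker can probe and undercut."""
    return tx["amount"] > 10_000 or tx["transfers_last_hour"] > 5

legit_looking_fraud = {"amount": 9_900, "transfers_last_hour": 5}
print(rule_based_flag(legit_looking_fraud))   # False: slips past every rule
```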

Other approaches for fighting identity theft online include multi-factor authentication and device assessment tools. Henry Legard, chief executive of verification start-up Verisoul, believes the quality he calls “liveness” will become important in preventing identity theft.

This can involve companies requiring users to film a short video of themselves, to confirm who they are. Technology is used to “make sure there’s movement or humanlike change — blinks, mouth twitches — to make sure you’re real, not a 3D mask or image”. It will also check that the company is receiving a real-time video feed.
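In spirit, such a check can be as simple as measuring frame-to-frame change, as in this illustrative Python sketch. The motion threshold is a made-up placeholder, and real systems add challenge-response prompts and depth cues to defeat replayed video.

```python
# Illustrative sketch of a "liveness" check as described above: a static
# photo or mask held to the camera produces almost no frame-to-frame change,
# while a live face blinks and twitches. The threshold is a placeholder.
import numpy as np

def is_live(frames: list[np.ndarray], min_motion: float = 0.01) -> bool:
    """Require average inter-frame change above a minimum across the clip."""
    diffs = [np.abs(b - a).mean() for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs)) > min_motion

# A replayed still image: identical frames, zero motion, rejected.
still = [np.full((64, 64), 0.5)] * 10
print(is_live(still))   # False
```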

But, while many corporations, particularly banks, are embracing behavioural biometrics and other robust verification techniques, Talcove notes that most US state labour departments are still relying on facial recognition alone.

“At this moment in time, almost the entire US government is exposed,” he says.



