Meta, the parent company of Facebook, Instagram and WhatsApp, is to test new uses of facial recognition technology to detect and stop scammers from misusing the images of celebrities and public figures for fake ads.
The technology will also be used to help people verify their identity to regain access to compromised accounts or to appeal enforcement actions.
The measures will not launch in the EU at this time due to what Meta described as the complexity of the European regulatory system.
It is understood the company has engaged with EU regulators and European policymakers as part of the testing process.
The Irish Data Protection Commission (DPC), which is the lead privacy regulator for Meta in the EU, said it was contacted by the company last month and told that it planned to roll out the service across the EU from early 2025.
“We have sought documentation and further information from Meta as we examine it from a data protection perspective,” said Graham Doyle, Deputy Commissioner at the DPC.
Scammers often use images of celebrities or public figures for fake ads in a process commonly known as “celeb-bait”.
Meta said that it uses machine learning classifiers to review every ad that runs on its platforms for violations of its ad policies, including scams.
The automated process includes analysis of the different components of an ad, such as the text, image or video.
Now, as an added layer of detection, it is testing a new way of identifying “celeb-bait” scams.
Meta said that if its systems suspect that an ad may be a scam that contains the image of a public figure, it will try to use facial recognition technology to compare faces in the ad against the public figure’s Facebook and Instagram profile pictures.
“If we confirm a match and that the ad is a scam, we’ll block it,” said Monika Bickert, VP of Content Policy at Meta.
“We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don’t use it for any other purpose,” Ms Bickert said in a Meta newsroom post.
“Early testing with a small group of celebrities and public figures shows promising results in increasing the speed and efficacy with which we can detect and enforce against this type of scam,” she added.
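Meta has not published the internals of this system, but the one-time comparison it describes can be illustrated with a minimal sketch. The embedding values, the `cosine_similarity` approach, and the `MATCH_THRESHOLD` cutoff below are all assumptions for illustration, not Meta's actual implementation; the sketch simply shows a face-embedding match followed by immediate deletion of the facial data, match or not.

```python
import math

MATCH_THRESHOLD = 0.9  # hypothetical similarity cutoff, not Meta's actual value


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def check_celeb_bait(ad_embedding, profile_embeddings):
    """One-time comparison: True if the face in the ad matches any of the
    public figure's profile-picture embeddings.

    The facial data is deleted after the comparison regardless of the
    result, mirroring the policy Meta describes."""
    try:
        return any(
            cosine_similarity(ad_embedding, ref) >= MATCH_THRESHOLD
            for ref in profile_embeddings
        )
    finally:
        # Delete the facial data whether or not a match was found.
        ad_embedding.clear()
        profile_embeddings.clear()
```

For example, an ad embedding identical to a stored profile embedding would return `True`, and both input lists would be emptied afterwards; a clearly different face would return `False`, with the data deleted all the same.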
Facial recognition technology is also being tested to see if it can assist people who have lost access to their Facebook or Instagram accounts, whether because they forgot their password, lost their device, or were tricked by a scammer into handing over a password.
“We’re now testing video selfies as a means for people to verify their identity and regain access to compromised accounts,” Meta said.
“As soon as someone uploads a video selfie, it will be encrypted and stored securely. It will never be visible on their profile, to friends or to other people on Facebook or Instagram.
“We immediately delete any facial data generated after this comparison regardless of whether there’s a match or not,” the company said.
Meanwhile, the Tánaiste said fast-tracking such technology is extremely important to prevent fake ads and other scams.
Micheál Martin said technology must develop counter measures to fraud, fake news and defamation of people because “what has been happening to date isn’t good enough”.
“Companies seem to be, in some instances, in respect of the advertisements, fake ads that go up in respect of people that are well known, endeavouring to lull others into false investments or whatever.
“That is becoming a bit of a revenue model for some of the companies,” he said.
Mr Martin said the companies have to demonstrate a determination “to root those kind of practices out.
“The most effective way to defeat technological advances that are focused on fraud is to develop counter technological measures or mechanisms that can stymie or undermine such efforts of fraud.”
Additional reporting: Karen Creed