As concerns about AI-driven fraud, impersonation, and digital deception continue to grow, new research from VerifyLabs.AI has revealed that over a third (35%) of Brits said deepfake nudes (non-consensual intimate imagery) or videos of themselves or their child were what they feared most when it came to deepfakes. This fear was even more pronounced among younger age groups, with 50% of 16-34-year-olds identifying it as their biggest worry.
More than one in three people surveyed (36%) also said they worry about the potential harm deepfakes could cause to their family and friends. These findings highlight the very real emotional and psychological harm deepfakes can cause when used maliciously against individuals or their families.
The survey of 1,000 adults also found that more than half (55%) are most concerned about deepfakes being used for financial scams and fraud. Nearly half (47%) of respondents stated their greatest fear was sophisticated business fraud, including blackmail, criminal activity, and the risk of losing their life savings. A further 44% fear AI-generated deepfake technology could be used to access personal or sensitive information without consent.
Other findings:
- Growing public demand for deepfake detection tools to be made widely available – 57% of respondents said they would be likely to use a deepfake detection feature if it were offered by a trusted organisation, such as their bank or employer, highlighting a clear public appetite for solutions to safeguard everyday communications.
- One in ten Brits still aren’t sure what constitutes a deepfake call – a worrying finding that underscores the need for greater public awareness to help people protect themselves from audio-based deepfake scams.
The findings coincide with the launch of the VerifyLabs.AI Deepfake Detector, a suite of tools designed specifically for individuals and professionals to quickly and easily identify AI-generated threats hidden in images, video, and audio. For the first time, advanced deepfake detection technology, once reserved for governments and large corporations, is accessible for everyday use, empowering people to protect themselves both at home and in the workplace.
“Not all deepfakes are bad – a meme or a bit of satire can be harmless fun – but when they’re used to mislead, scam, abuse, or incite hate, they can be devastating for the people targeted,” comments Nick Knupffer, CEO of VerifyLabs.AI. “VerifyLabs.AI gives people the opportunity to take back control of their own online safety and easily identify when things are not quite what they seem. Whether it’s used to support compliance, confirm someone’s identity, or check for signs of fraud, it puts the power of deepfake detection directly into people’s hands, taking control away from the criminals.”
Operating at up to 98% accuracy, the VerifyLabs.AI Deepfake Detector uses powerful pattern analysis to detect the subtle clues left behind by AI-generated material. Whether it’s images, audio, or video, the technology examines content frame by frame, word by word, and pixel by pixel to determine whether it’s been created by a human or a machine. The system looks for tell-tale signs that humans typically miss, from unnatural lighting and facial anomalies in video to overly polished writing and robotic voice patterns.

The VerifyLabs.AI Deepfake Detector is currently available as a browser extension or as an app for iOS (Apple iPhone), with an Android version coming soon.
The post New research from VerifyLabs.AI highlights the nation’s fears when it comes to deepfakes appeared first on IT Security Guru.