
Recent iProov research found that just 0.1% of U.S. and U.K. consumers could accurately distinguish real from fake content across all stimuli, which included both images and videos.
The study found that 30% of 55-64 year olds and 39% of those aged 65+ had never even heard of deepfakes, highlighting a significant knowledge gap and leaving these age groups especially susceptible to this emerging threat.
Deepfake videos proved more challenging to identify than deepfake images, with participants 36% less likely to correctly identify a synthetic video than a synthetic image. This vulnerability raises serious concerns about the potential for video-based fraud, such as impersonation on video calls or in scenarios where video is used for identity verification.
While concern about deepfakes is rising, many remain unaware of the technology. One in five consumers (22%) had never even heard of deepfakes before the study.
Despite their poor performance, more than 60% of people remained confident in their deepfake detection skills, regardless of whether their answers were correct. This overconfidence was particularly pronounced among young adults (18-34), and the false sense of security it creates is a significant concern.
Social media platforms are seen as breeding grounds for deepfakes, with Meta (49%) and TikTok (47%) cited as the most prevalent places to encounter them online. This, in turn, has eroded trust in online information and media: 49% say they trust social media less after learning about deepfakes. Yet just one in five would report a suspected deepfake to a social media platform.
Three in four people (74%) worry about the societal impact of deepfakes, with “fake news” and misinformation being the top concern (68%). This fear is particularly pronounced among older generations, with up to 82% of those aged 55+ expressing anxiety about the spread of false information.
Less than a third of people (29%) take no action when they encounter a suspected deepfake. This inaction is most likely driven by a lack of knowledge: 48% say they don’t know how to report deepfakes, while a quarter don’t care if they see one.
Despite the rising threat of misinformation, just one in four people search for alternative information sources when they suspect a deepfake, and only 11% critically analyze the source and context of the information to determine whether it is genuine. This means the vast majority remain highly susceptible to deception and the spread of false narratives.