He is no longer the Pope wearing Balenciaga. Last March, a video circulated of Ukrainian President Volodymyr Zelensky announcing his unconditional surrender. It is his face; it is his voice. Almost simultaneously, another video was published in which Russian President Vladimir Putin appears to announce that peace had been achieved. Neither video is real. They are deepfakes: forgeries created with artificial intelligence. But they still went viral as if they were genuine news.
A new study attempts to measure the danger of these alterations in times of war, using these two videos of Putin and Zelensky as its main examples. “The Russo-Ukrainian war presents the first real-life example of the use of deepfakes in war,” with content “used for misinformation and entertainment,” the researchers explain in their report, published this week in PLOS ONE.
Researchers at University College Cork (UCC) chose Twitter for the analysis, the social network that, according to the team, spreads the most notable deepfake cases. It was on this platform that the images of Donald Trump’s false arrest and the Pope’s photos went viral. They analyzed a set of 4,869 tweets published between January 1 and August 1, 2022.
Deepfake-related content grew steadily during the week before the start of the war and the first weeks of the conflict between Ukraine and Russia. After reviewing the messages, the analysts identified three major trends. Two of them point to the use of these fabrications to misinform (fake news) or to create humorous content (a montage of Putin in a scene from The Lord of the Rings, for example).
Deepfakes during the war generate skepticism
But the study draws special attention to the third of these trends: how, paradoxically, warnings about deepfakes generate skepticism among users. In other words, people distrusted legitimate news because they suspected it was fake.
Many tweets in the data set expressed “healthy skepticism” toward deepfakes; that is, users warned about the dangers of the practice. “Unfortunately, most of this kind of talk about deepfakes during the war consisted of an unhealthy skepticism,” the study notes.
The researchers explain that fear of deepfakes undermined users’ trust in the images they received of the conflict, to the point that they dismissed any related information. “We can’t always trust our eyes anymore” and “It’s not that this video is fake, but we have to consider all possibilities” are two examples cited in the analysis.
They also reported that the argument about the proliferation of these fakes was used against journalism and verified media outlets, and, in the same way, to fuel conspiracy theories. “This highlights how deepfake discourse can be used in arguments that undermine the veracity and reliability of the media,” the researchers point out.
In search of credible information
The reviewed tweets labeled real media footage as fake more often than they identified actual war-related deepfakes. “What we discovered was that people used the term deepfake as a buzzword to attack people online,” John Twomey, a researcher at the UCC School of Applied Psychology and co-author of the study, told Gizmodo.
The world’s largest technology companies, developers of the most powerful artificial intelligence tools, have realized the danger. “We need to take steps to protect against the alteration of legitimate content with the intent to deceive or defraud people through the use of AI,” Brad Smith, president of Microsoft, said last May.
Microsoft and other companies such as Google have implemented labeling tools that make it possible to verify whether an image or video was created by artificial intelligence. The new study warns, however, that some well-intentioned media coverage of the dangers of deepfakes can unintentionally contribute to a deterioration in trust.
“More people are aware of deepfakes than of their real prevalence,” Twomey says. The media and journalists, he explains, will have to be very careful about how they label suspected deepfakes so as not to cast suspicion on real media.