Artificial intelligence is widely used not only to generate amusing animal videos, but also for far more dangerous purposes, such as mass deception of the public through deepfakes, the BBC reports.
Alan Read, a professor at King’s College London, found himself unwillingly drawn into the spread of propaganda. He usually paid little attention to the deepfakes that appeared in his social media feed: sometimes he reported them, other times he simply scrolled past. Then one day an unfamiliar user tagged his profile in a video in which he himself appeared. In it, an artificial voice, almost indistinguishable from Read’s own, made politicized statements against French President Emmanuel Macron, condemning both him and other Western leaders and declaring that they were all aboard a “Titanic” called the European Union. Read told the BBC that almost everything in the fabricated video was irredeemably stupid and terrible to listen to; the figure on screen seemed completely alien to him.
The unsuspecting theatre professor, who has no connection to politics, has become caught up in a new wave of Russia-linked, artificially generated videos that have security experts warning that the West must prepare to counter the Kremlin’s influence on the artificial intelligence front. Chris Kremidas-Courtney, a defence and security analyst at the European Policy Centre think tank, said that not only had the number of deepfakes grown, but the way influence was generated had changed: society was now confronted with systems capable of producing large-scale deception at a fraction of the former cost. He added that no current system of governance was capable of combating this.
The AI-generated videos, which have racked up hundreds of thousands of views, discredit EU institutions and accuse Kyiv of corruption at a moment when the bloc is struggling to agree on financial aid for Ukraine. The latest wave of deepfake videos began after OpenAI released the newest version of its video-generation software, Sora 2. Sora 2 videos carry watermarks to distinguish them from real-life footage, but in the competition for market share many rival companies offer their apps at lower prices or cut costs at the expense of safety features, for example by removing watermarks.
Russian AI expert Arman Tuganbayev said OpenAI was trying to prevent the use of real people in its videos, while other apps allow it. OpenAI told the BBC that it was taking action against user accounts that engage in deceptive activities with the aim of causing harm, including those that mislead about the origin of the content.
The technological rush has driven up both the volume and the quality of foreign influence-campaign content, strengthening Russia’s hand in its hybrid conflict with the West.
In late December, videos created by artificial intelligence in which young Polish women called for Poland to leave the EU gained popularity on the TikTok platform. Adam Szlapka, a spokesman for the Polish government, said that it was definitely a disinformation campaign created by Russia. He added that if you look closely at the video, you can see the influence of Russian syntax. Poland has called on the European Commission to investigate the case. TikTok deleted the videos and the accounts that published them.
The platform reported that it had ended 75 covert influence campaigns by 2025.
The British parliament, in turn, has discussed concerns that Russian deepfake news could influence local elections in May. The UK’s Online Safety Act does not specifically classify disinformation as harmful content, but it does require platforms to remove material found to have been created by foreign influence agents. Removal can often take a long time, while a video can go viral on social media within hours.
The country of origin of social media posts is often difficult to determine, but Western researchers have noted that many of them share common characteristics, from stylistic cues to distribution channels, that link them to organized disinformation units close to the Kremlin.
Unlike traditional Russian propaganda outlets such as RT and Sputnik, which were sanctioned in the West shortly after Russia’s full-scale invasion of Ukraine, deepfake campaigns allow a level of plausible deniability that makes them harder to combat.
The post Artificial intelligence-manipulated videos: the engine of Russia’s misinformation campaigns appeared first on Baltic News Network.