The Influence of Russian Propaganda on American Voters
Since the 2016 presidential election, there has been an ongoing debate about how effective Russian propaganda is at shaping the opinions of American voters. It is well documented that Russia has used state-linked organizations such as the Internet Research Agency to create and disseminate divisive, pro-Russia content targeted at Americans. Quantifying the impact of this propaganda has been difficult, but it has likely had some influence, particularly in reinforcing existing views, since many people do not take the time to fact-check the information they consume. The flawed community notes system on X can further exacerbate the spread of misinformation.
The Shift to Targeting AI Models
According to a recent report by NewsGuard, Russia has shifted its strategy from targeting humans directly with propaganda to targeting the AI models that people increasingly use in place of traditional media websites. The report found that a propaganda network called Pravda produced more than 3.6 million articles in 2024, and that this content now surfaces in the outputs of the 10 largest AI models, including ChatGPT, xAI's Grok, and Microsoft Copilot.
The Spread of Disinformation
The NewsGuard audit found that chatbots operated by the 10 largest AI companies repeated false Russian disinformation narratives 33.55% of the time, provided a non-response 18.22% of the time, and offered a debunk 48.22% of the time. All 10 chatbots repeated disinformation from the Pravda network, and seven directly cited specific Pravda articles as their sources. This new tactic, dubbed "AI grooming," exploits AI models' reliance on retrieval augmented generation (RAG), in which a model pulls real-time information from the web to compose its answers. By creating seemingly legitimate websites, Russian propagandists can feed misinformation into AI models, which then repeat it without understanding where it came from.
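To make the mechanism concrete, here is a minimal sketch of a RAG pipeline in Python. Everything in it is hypothetical: the stubbed retriever and placeholder domains stand in for the live web index and LLM API a real system would use. The point is structural: whatever the retriever returns is pasted into the model's context as if it were ground truth, with no vetting of the source.

```python
# A minimal, hypothetical sketch of a retrieval-augmented generation (RAG)
# pipeline. The retriever and domains below are stubs standing in for a
# live web index and an LLM API; only the structure is the point.

def retrieve_web_snippets(query: str) -> list[dict]:
    """Stub retriever: returns snippets the way a real system would fetch
    them from a web index. Note the absence of any source vetting -- a
    seemingly legitimate propaganda site ranks like any other page."""
    return [
        {"url": "https://example-news.invalid/story", "text": "Snippet A ..."},
        {"url": "https://example-network.invalid/story", "text": "Snippet B ..."},
    ]

def build_prompt(query: str, snippets: list[dict]) -> str:
    """Paste retrieved text into the model's context. The model treats
    these snippets as ground truth; it has no independent way to judge
    whether a source is a disinformation outlet."""
    context = "\n\n".join(f"[{s['url']}]\n{s['text']}" for s in snippets)
    return f"Answer the question using the sources below.\n\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    query = "Did X happen?"
    print(build_prompt(query, retrieve_web_snippets(query)))
```

Because ranking, not credibility, decides what gets retrieved, flooding the web with millions of articles is an effective way to tilt what ends up in that context window.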
Examples of Disinformation
One notable example of disinformation spread by these chatbots is the claim that Ukrainian President Volodymyr Zelensky banned Truth Social, the social network affiliated with President Trump. Although the claim is provably false, six of the 10 chatbots repeated it as fact, citing articles from the Pravda network. Another example is a viral video claiming that Kamala Harris left a woman paralyzed in a hit-and-run accident, which was also traced back to Russian disinformation.
The Role of TigerWeb
The latest propaganda operation has been linked to TigerWeb, an IT firm based in Russian-held Crimea that has previously been tied to foreign interference. Experts believe Russia relies on third-party organizations for this type of work so that the state can plausibly deny involvement. TigerWeb shares an IP address with propaganda websites that use Ukraine's .ua top-level domain (TLD).
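For readers curious what "shares an IP address" means in practice, below is a hypothetical sketch of the simplest form of this infrastructure-overlap check, using only Python's standard library. The domains are placeholders, not the actual sites from the report.

```python
# Hypothetical sketch of an infrastructure-overlap check used in
# attribution work: do two domains resolve to the same IP address?
# The domains below are placeholders, not the sites from the report.
import socket

def resolve_ipv4(domain: str) -> set[str]:
    """Return the set of IPv4 addresses a domain currently resolves to."""
    try:
        return {info[4][0] for info in socket.getaddrinfo(domain, 443, socket.AF_INET)}
    except socket.gaierror:
        return set()  # domain does not resolve

shared = resolve_ipv4("site-a.example") & resolve_ipv4("site-b.example")
print("Shared IPs:", shared or "none")
```

On its own, a shared IP is weak evidence, since unrelated sites often sit behind the same shared host or CDN; investigators treat it as one signal among several.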
The Concerns About AI-Powered Disinformation
The increasing reliance on AI models for information has raised concerns that disinformation can spread quickly and uncontrollably. Social networks, including X, have been flooded with the claim that President Zelensky stole military aid to enrich himself, a narrative that also originated on Russian propaganda websites. That AI models can cite these websites as legitimate sources is particularly troubling.
The Future of AI-Driven Information
As AI models become more prevalent, there is growing concern that whoever controls them will wield significant influence over individual opinions and ways of life. Companies like Meta, Google, and xAI set the biases and behavior of the models that will power the web. The fact that Elon Musk has tinkered with the outputs of xAI's Grok model to suppress certain ideologies raises questions about the potential for censorship and manipulation.
The Importance of Media Literacy
As people increasingly rely on AI summaries and overviews, media literacy matters more than ever. More than half of Google searches are now "zero click," ending without a visit to any website, which underscores the need to be critical of the information one consumes. AI models, while useful, are not infallible and can perpetuate disinformation. It is essential to understand the pitfalls of relying solely on AI-generated information and to develop the skills to evaluate it critically.
Conclusion
The NewsGuard report highlights the significant threat posed by Russian propaganda and disinformation to the integrity of AI models and the information ecosystem as a whole. As AI models become more pervasive, it is essential to address these concerns and develop strategies to mitigate the spread of disinformation. The full NewsGuard report can be found here.