
Russian propaganda floods Europe's social networks

06.24.2024
As the European elections draw near, Paul Bouchaud, a specialist in algorithms, shows that Meta (the company that owns Facebook, Instagram, and WhatsApp) is not preventing pro-Russian propaganda from flooding its platforms with political messages.

As part of your research, you study social media algorithms and their impact on society, in particular on elections. How do you go about this?
Paul Bouchaud1: At the ISC-PIF complex systems institute, I regularly collaborate with AI Forensics, a European non-profit organisation that investigates the influential and opaque algorithms used by YouTube, Google, TikTok, and Meta, which owns Facebook, Instagram, and WhatsApp. We conduct independent technical studies – with a team of ‘digital detectives’, of which I am a part, bringing together experts in IT, law, ethics, sociology, psychology, and communication – in order to reveal the damage caused by these algorithms. We concentrate on election monitoring because of its immediate impact on democracy and fundamental rights, and then draw media attention to these investigations, in an effort to promote transparency and accountability around these influential algorithms.

It was the 2016 US elections that raised awareness of the risk of manipulation on social media…
P. B.: Indeed, lengthy parliamentary and judicial investigations were required to lift the veil on the Kremlin’s propaganda offensive during the elections that saw Donald Trump’s rise to power. In 2018, the indictment brought by US Special Counsel Robert Mueller showed that 126 million individuals on Facebook and 1.4 million on Twitter had been exposed, via fake profiles and targeted advertising, to messages seeking to divide American society.

Hearing with Robert Mueller, Special Counsel investigating alleged Russian interference in the 2016 US electoral campaign, before a committee of the House of Representatives, Washington, 24 July 2019.

Some practices and control procedures have been adjusted since, but they remain largely insufficient, and there is a high risk that the 2016 scenario will repeat itself. US legislation is much less restrictive than that of the European Union (EU), which, if genuinely enforced, could combat disinformation, as my research2 demonstrates.

Could these new European regulations verify whether Meta adheres to the rules governing the posting of political propaganda online?
P. B.: Yes, the Digital Services Act (DSA), which took effect on 25 August 2023, seeks to limit the online spread of hateful, child sexual abuse, and terrorist content, as well as of illicit (counterfeit or dangerous) products. It does not challenge the platforms’ limited liability for the products and content they host (the notion of a ‘passive’ host). However, platforms must offer users the means to report such content, and the twenty or so platforms concerned must provide public access to their advertising libraries3.

Technically speaking, beginning in August 2023, I trained an algorithm I developed on 14 European languages, using all of the example ads made available by Meta for Facebook and Instagram. This algorithm detects which ads are political, and whether Meta has identified them as such. If an ad is declared as political, additional information is made public: who paid for it, the target audience (age, gender, location), and the audience actually reached (same information).
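
The interview does not describe how this classifier works, so the following is only an illustrative sketch of the general approach: train a text classifier on ad texts labelled with Meta’s own political flag, then look for ads the model scores as political that Meta never flagged. The toy data and the model choice below are assumptions, not the author’s actual method.

```python
# Illustrative sketch only: a language-agnostic political-ad classifier.
# The toy data and model choice are hypothetical, not the study's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training pairs: ad text plus Meta's own "political" flag.
ads = [
    "Vote for real change on June 9th",            # political
    "Our party will lower your energy bills",      # political
    "Stop sending your taxes to Brussels",         # political
    "The EU is abandoning its own farmers",        # political
    "Socks, 3 pairs for 10 euros, free shipping",  # commercial
    "New spring collection now in stores",         # commercial
    "Book your summer holiday at -30%",            # commercial
    "Try our fibre broadband for 19.99 a month",   # commercial
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Character n-grams need no per-language tokeniser: a cheap way to cover
# many European languages with a single model.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(ads, labels)

# Ads scored as political that Meta did NOT flag would be the candidates
# for undeclared political advertising.
print(model.predict_proba(["Defend our borders, vote on Sunday"])[:, 1])
```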

Example of a political ad published on Facebook that Meta approved without moderating, and that Paul Bouchaud identified thanks to his algorithm. At right, Bouchaud uses a flag to indicate the country being targeted, along with the number of accounts reached and on what dates, in addition to a proposed translation of the text.

What control does Meta have over the content of ads posted online?
P. B.: Ads are defined as messages whose distribution on Meta (or other platforms) is paid for, whether or not their content is commercial. A company or individual can thus pay Meta to publish – for a larger or smaller, more or less targeted audience – a message intended, for example, to sell socks or to call for a vote for a particular party. The same ad can also be posted simultaneously on multiple networks (for instance Instagram and Facebook). The only safeguard is that if the content is of a political nature, the advertiser must tick the box ‘my ad is of a political nature’, which requires Meta to verify the ad and then approve or reject it.

I sifted through 30 million ads published in January and February 2024 in 16 European countries: approximately 98% of them were commercial, but of the remaining 2% (approximately 600,000 political ads), 95% were not identified by Meta as conveying a political message. This may not seem like much, but for France alone it represents 200 undetected political ads per day, and 1,300 per day across the 16 countries – and some of them clearly come from coordinated propaganda campaigns and are seen by millions of people.
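
As a quick sanity check of the orders of magnitude quoted above (the percentages are the study’s; only this recomputation is mine):

```python
# Recomputing the shares quoted in the interview; a sanity check only,
# not data from the study itself.
total_ads = 30_000_000
political = total_ads * 0.02    # ~600,000 ads with political content
undeclared = political * 0.95   # ~570,000 of them never flagged by Meta
print(f"{political:,.0f} political, {undeclared:,.0f} undeclared")
```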

You also revealed a large pro-Russian propaganda network?
P. B.: Over the course of seven months, from August 2023 to March 2024, pro-Russian propaganda, whose objective is to undermine European governments’ and the EU’s support for Ukraine, reached 38 million accounts in France and Germany. Of the 3,826 Facebook pages involved, fewer than 20% were moderated by Meta, even though their messages had already been displayed to users at least 2.6 million times. In the run-up to the elections, these activities intensified: between 1 and 27 May 2024, Meta allowed at least 275 pro-Russian posts to reach 3 million French, German, Italian, and Polish accounts. What is more, these messages are now massively targeting Italy and Poland, as our latest reports demonstrate (see chart below).

In the run-up to the European elections, the number of accounts reached by pro-Russian propaganda ads has continued to increase, and they are still being approved by Meta.

Concretely, what form do these ads take?
P. B.: They take the form of texts, often illustrated with caricatures, that seize on a recent news item in order to ridicule a government, denounce its negligence, or attack a policy measure. For instance, messages targeting France present President Macron as unable to secure a small territory such as New Caledonia – let alone defend Ukraine. Another example claims that if America is getting richer while Europe grows poorer, it is because the EU is spending its resources to support Ukraine. That post also included per capita productivity growth rates for the two continents, which lends it the credibility of a ‘serious’ source. Numerous messages are designed in this way, making it easy to fool the individuals they target.

Do the authors of these ads identify themselves?
P. B.: The names appearing at the top of these messages (‘tempnafor’, ‘Awudud online shop’, etc.) are generated automatically by the advertisers and correspond to nothing in particular; in practice, users almost never look at them. In recent days, an ad appeared that usurps the graphic identity of the weekly Le Point and is difficult to distinguish from a real article. As for the advertisers themselves, they are rarely identified.

An example of media plagiarism: a text that appears to have been published in Le Point, but whose source address – “lepoint.wf” instead of “lepoint.fr” – makes it possible to detect the falsification.
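
The “lepoint.wf” example suggests how such spoofed domains can be screened for automatically. Below is a minimal sketch with an illustrative allowlist of legitimate outlet domains; the two-label heuristic is an assumption and would need a public-suffix list to handle domains such as .co.uk.

```python
# Minimal sketch: flag "doppelgänger" links that reuse a known outlet's
# name under a different top-level domain. The allowlist is illustrative.
from urllib.parse import urlparse

LEGITIMATE = {"lepoint.fr", "lemonde.fr", "spiegel.de"}

def is_spoofed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Keep the registrable part, e.g. "www.lepoint.wf" -> "lepoint.wf"
    # (simplified: production code would use a public-suffix list).
    registered = ".".join(host.split(".")[-2:])
    name = registered.split(".")[0]
    # Same outlet name under a different TLD is the doppelgänger signature.
    return any(
        registered != legit and name == legit.split(".")[0]
        for legit in LEGITIMATE
    )

print(is_spoofed("https://www.lepoint.wf/article"))  # True: .wf, not .fr
print(is_spoofed("https://www.lepoint.fr/article"))  # False: genuine
```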

A particularly influential network has nevertheless been identified: the pro-Russian Doppelgänger network. Since the beginning of the Russo-Ukrainian War, it has disseminated fake news about the conflict through realistic copies of the websites of European ministries, and by circulating articles from the ‘media outlet’ Reliable Recent News, which publishes disinformation daily in multiple languages, including French. In July 2023, the EU issued sanctions4 against two companies (Social Design Agency and Structura National Technologies) suspected of playing a central role in circulating these articles. The network nevertheless continues to operate, and is now trying to exploit the conflict between Israel and Hamas in the same way.

Was your algorithm complicated to develop?
P. B.: It took me a month, and I developed it on my own, with no help from a team. So if a company the size of Meta cannot quickly detect concealed political ad campaigns, it is simply because it does not have the will to do so. Of course, conducting controls is more difficult where there are few moderators: according to Meta’s own figures (see table), it could be complicated for three Estonian moderators to check all of the messages published on the country’s networks. France and Germany, for example, with over two hundred moderators, are better equipped.

Some countries have very few moderators, making it more difficult to control what content is circulating on social media in the local language. This table presented on Meta’s website indicates the number of moderators per country in the European Union.

Besides, as our research has shown, content moderation tools are effective. Yet Meta does not disclose how it moderates these ads, simply stating – because it is under the obligation to do so – that it uses both humans and automated tools.

As an individual, can I post any content?
P. B.: Any message conveying hate or false information can be deleted. Automatic detection of hateful content is fairly effective. With political disinformation, on the other hand, unless your message goes viral, there is little chance that Meta will spot it: someone has to report it.

What pressure can be exerted on Meta to ensure it carries out controls?
P. B.: This study directly challenges the EU over its efforts to ensure fair and transparent elections. The continued abuse of social media for political ends underlines the need for strict monitoring and proactive measures by both regulators and the platforms themselves. I informed the European Commission, which on 30 April announced the opening of formal proceedings against Meta for alleged breaches of the DSA. Meta must therefore take measures to prevent its advertising system from being used for propaganda purposes. This procedure, however, sets no deadline. If the dispute continues, the Commission can impose sanctions, namely a fine of up to 6% of the platform’s global turnover.

Propaganda ads rely on information in the news (in this case the deteriorating state of public schools) to denigrate a government policy or action.

In the past, we have observed how online algorithms can influence what happens offline, including attacks against an entire population…
P. B.: A report by Amnesty International5, in which I did not take part, showed how Facebook’s algorithmic system promoted hatred against the Rohingya people, thereby contributing to the atrocities committed by the Myanmar army in 2017. Actors seeking to dehumanise the Rohingya posted violent messages that disinhibited the soldiers tasked with ethnic cleansing and mass rape. By choosing which messages to display in each user’s feed, Meta highlights content that elicits strong reactions, whether positive or negative – a hallmark of hateful content. Facebook thus offered these messages an unprecedented audience in Burma. Several Rohingya groups filed legal cases against Meta, but the damage was done. Only state regulation can prevent such situations. ♦

Source
 "On Meta's Political Ad Policy Enforcement: An analysis of Coordinated Campaigns & Pro-Russian Propaganda", Paul Bouchaud, 2024. hal-04541571v1

Further reading on our site
The Internet, a disinformation highway?
How social networks manipulate public opinion
The 2017 presidential election via the prism of Twitter (in French)

 

Footnotes
  • 1. A specialist in algorithmic auditing, and a doctoral student at the Paris ISC-PIF complex systems institute (CNRS).
  • 2. On Meta's Political Ad Policy Enforcement: An analysis of Coordinated Campaigns & Pro-Russian Propaganda – HAL Open Archive.
  • 3. In connection with the DSA, the European Commission identified about twenty major social media platforms concerned, including Alibaba, Amazon Store, Apple AppStore, Booking.com, Facebook, Google Play, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, X (formerly Twitter), Wikipedia, YouTube, Zalando, etc. along with two major search engines, Bing and Google Search.
  • 4. Members of these enterprises have had their assets frozen, and are prohibited from entering EU territory.
  • 5. “The Social Atrocity”, Amnesty International report, September 2022.
