Executive Summary
Recommender systems play a central role in shaping exposure to political content on social media, which is crucial in democracies, especially during elections. Studies conducted around the 2024 U.S. presidential and 2025 German federal elections, including by Democracy Reporting International (DRI)1, reveal a consistent pattern: even when users provide little or no political input, such as following no accounts or engaging equally with diverse political content, recommender systems still tend to surface more right-leaning or far-right content. These trends are pronounced on X, visible on TikTok, and weak on Instagram.
This brief examines how political exposure bias can be addressed through the framework of the Digital Services Act (DSA). The DSA enhances transparency and user control over recommender systems, but it does not seem to impose a direct obligation to ensure algorithmic non-partisanship or political balance. However, the European Commission’s guidelines on electoral integrity recommend that platforms take into account “media diversity and pluralism” when designing recommender systems (Measure d[i]).
Persistent political exposure bias—and more critically, its downstream effects on users (particularly “balanced” or “neutral” ones)—could nonetheless constitute a systemic risk to civic discourse and democratic processes. This risk emerges not merely from the presence of bias, but from its capacity to foster polarisation, radicalisation, or societal fragmentation. The risk is compounded when such bias is the result of coordinated manipulation, inauthentic behaviour, or exploitation of the platforms. Furthermore, if recommender systems systematically downrank or suppress certain political actors without notification, this could violate Article 17 of the DSA, which requires transparency in content moderation decisions.
Recommendations
- In their forthcoming risk assessment reports, platforms should provide a detailed analysis of political exposure bias as a potential systemic risk driven by their recommender systems. This must include a clear identification of the technical, behavioural, or design features that contribute to or amplify such bias.
- The European Commission should prioritise the robust enforcement of Article 40 of the DSA to guarantee researchers timely and meaningful access to both public and non-public platform data. Such access is essential to investigate how recommender systems shape political exposure and to assess their wider impact on civic discourse and democratic processes.
The Role of Recommender Systems in Political Exposure Bias on Social Media: Why It Is Time to Re-examine a Long-Standing Issue
In the lead-up to the 2024 U.S. presidential election and the 2025 German federal elections, DRI and many other researchers looked into the same question: are social media recommender systems politically biased? Do these algorithms give more visibility to certain parties or candidates, while sidelining others?
This is not a new question. However, over the past year, it has taken on renewed urgency as the Trump administration has stepped up pressure on tech companies to align with its ideological agenda. At the same time, Elon Musk—the owner of a major social media platform—has openly voiced support for far-right political parties.
As more studies point to algorithms contributing to biased patterns of political exposure (again, nothing new), another question naturally follows: So what? What does it mean for elections and democratic debate if recommender systems, whether intentionally or unintentionally, show unbalanced political content to their users? In this brief, we take a closer look at several studies from both these election cycles and consider whether the DSA provides a useful lens through which to examine these dynamics.
Insights from Research on Political Exposure Bias in Recommender Systems during the 2024 U.S. and 2025 German Elections
There is an ever-growing amount of research exploring political exposure bias in recommender systems. In this brief, however, we focus only on studies investigating two major elections: the 2024 U.S. presidential election and the 2025 German federal elections.
X and its “For You” feed were a natural focus of the analyses. In the context of the U.S. election, researchers from the University of Southern California audited this feed using 120 artificial “sock-puppet” accounts with controlled attributes.2 These accounts were divided evenly across four political orientations: left-leaning, right-leaning, centrist, and neutral. Over a six-week period, from October to November 2024, these accounts collected more than 9 million tweets. The study found three patterns: (1) left- and right-leaning accounts were mostly exposed to content aligned with their own views, with limited exposure to opposing perspectives; (2) neutral accounts that did not follow any users were disproportionately shown right-leaning content; and (3) X’s algorithm prioritised content from a small group of high-popularity accounts, with right-leaning accounts receiving the most uniform feed of right-leaning content.
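To make the mechanics of such audits concrete, here is a minimal Python sketch of how the collected data might be summarised. It assumes a hypothetical log pairing each sock-puppet account's assigned orientation with the classified leaning of a recommended post; the data, labels, and classification step are invented for illustration and are not taken from the USC study.

```python
from collections import Counter, defaultdict

# Hypothetical audit log: (account orientation, leaning of a recommended post).
# In a real audit, post leanings come from a separate classification step.
feed_log = [
    ("neutral", "right"), ("neutral", "right"), ("neutral", "left"),
    ("left", "left"), ("left", "left"), ("left", "right"),
    ("right", "right"), ("right", "right"), ("centrist", "left"),
]

# Tally what each orientation's feed contained.
exposure = defaultdict(Counter)
for orientation, post_leaning in feed_log:
    exposure[orientation][post_leaning] += 1

# Report each leaning's share of each orientation's feed, the core
# quantity such audits compare across account groups.
for orientation, counts in sorted(exposure.items()):
    total = sum(counts.values())
    shares = {leaning: round(n / total, 2) for leaning, n in counts.items()}
    print(f"{orientation}: {shares}")
```

Comparing these per-orientation exposure shares (for example, the share of right-leaning posts shown to neutral accounts) is the basic comparison behind findings such as the three above.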
A Wall Street Journal investigation showed similar results.3 Also using a “sock puppet” methodology, the study found that even users with non-political interests (e.g., users looking for content on cooking, crafts, sports, etc.) were quickly shown content promoting Donald Trump, as well as content that questioned the integrity of the election.
Another preprint study, by researchers from the Queensland University of Technology and Monash University, examined potential algorithmic bias by analysing engagement metrics on X.4 Preliminary results showed that, after Elon Musk’s endorsement of Trump on 13 July 2024, Musk’s posts saw a significant boost in views, retweets, and likes compared to other prominent users; one possible explanation is that Musk’s content might be prioritised in terms of platform visibility and user engagement. The study also found that Republican-leaning accounts saw a notable rise in views at the same moment compared to Democratic-leaning ones, although this effect was less consistent across likes and retweets. This suggests a potential change to the recommender algorithm to favour Republican content.
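Below is a minimal sketch of the before/after comparison that underlies this kind of engagement analysis, using invented daily view counts around the 13 July 2024 endorsement date; the actual study works with large samples and statistical tests rather than raw means, and compares against other prominent accounts to rule out site-wide trends.

```python
from datetime import date
from statistics import mean

# Invented daily view counts for a single account (not real data).
daily_views = {
    date(2024, 7, 10): 1_100_000, date(2024, 7, 11): 1_000_000,
    date(2024, 7, 12): 1_200_000, date(2024, 7, 14): 2_900_000,
    date(2024, 7, 15): 3_100_000, date(2024, 7, 16): 3_400_000,
}

cutoff = date(2024, 7, 13)  # the endorsement date examined in the study
before = [v for d, v in daily_views.items() if d < cutoff]
after = [v for d, v in daily_views.items() if d >= cutoff]

# A naive before/after comparison of mean daily views.
print(f"mean views before: {mean(before):,.0f}")
print(f"mean views after:  {mean(after):,.0f}")
print(f"shift: {mean(after) / mean(before):.2f}x")
```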
TikTok also drew research interest. Researchers from New York University Abu Dhabi conducted 323 independent algorithmic audit experiments to assess partisan content distribution leading up to the U.S. election.5 They found that accounts aligned with Republican content received, on average, 11.8 per cent more party-consistent recommendations than their Democratic counterparts, while Democratic-aligned accounts were exposed to approximately 7.5 per cent more cross-party content.
The 2025 German federal elections presented another case of interest. Once again, X came under fire, this time due to Musk’s repeated public support for the far-right AfD party, including a high-profile exchange with party leader Alice Weidel.
Audits by AlgorithmWatch and the DFRLab examined engagement metrics to see whether Musk’s support had an amplifying effect on AfD visibility.6 Their findings show that Weidel received a significant engagement boost, especially from English-language accounts, following her interactions with Musk. However, the study did not identify a broader pattern of increased social media engagement for the AfD beyond this “Musk bump,” nor any evidence of systematic suppression of opposition content. Researchers from the Technische Universität Dresden found similar results: Musk’s influence appeared to significantly boost Weidel’s reach, but did not extend to other AfD politicians.7
The Institute for Strategic Dialogue conducted a study on the asymmetrical amplification of party-political content on TikTok, using thirteen accounts positioned across different points of the political spectrum. Among its findings, the study noted that, during account training, nine of the thirteen accounts were first shown AfD-related content. Notably, this content almost exclusively came from fan pages, whereas for other parties, the first political posts typically came from official accounts.8
DRI also studied this phenomenon by using five sock-puppet accounts with different political leanings on TikTok and Instagram.9 Between 17 and 21 February 2025, the accounts collected a dataset of 1,000 videos. Results indicated that TikTok was more likely than Instagram to recommend political content, and that its algorithm more effectively targeted users based on inferred political orientation. Importantly, when users were exposed to content not aligned with their presumed views, this was often extreme right-wing content.
A separate study by Global Witness looked at political content on TikTok, Instagram, and X.10 Among other things, the study found that the “For You” feeds of TikTok and X disproportionately promoted AfD-related content, even when simulated users engaged equally with posts from Germany’s four major parties (CDU, AfD, SPD, and the Greens).
What We’ve Learned So Far – and Why Political Exposure Bias in Recommender Systems Is Still Hard to Study
Across the studies reviewed, one pattern was consistent: recommender systems tend to amplify content that reflects users’ existing political views. This isn’t surprising – most algorithms are designed to maximise engagement. More notably, several studies observed that this reinforcement effect is particularly strong among right-leaning and far-right users.
A second important finding is that, even when users provide little or no political input, such as following no accounts or engaging equally with diverse political content, recommender systems still tend to surface more right-leaning or far-right content. In other words, these platforms may exhibit a kind of “default bias.” This pattern appears to have been strong on X during the 2024 U.S. elections, notable on TikTok, and less marked on Instagram.
This “default bias” could be the most concerning trend identified. But how can this bias be harmful? Some argue that, as private companies, platforms are entitled to promote whatever content they choose, even if it carries a clear political leaning. It could also be argued that as long as the platform’s bias is visible—such as with Trump’s Truth Social or Elon Musk’s X—transparency is achieved: users know the political orientation of the space they are engaging with (though it could be argued that X benefits from a residual network effect).
Others rightly point out, however, that when recommender systems shape political exposure on a massive scale, especially during election periods, the implications extend far beyond questions of corporate autonomy or platform branding. At stake are fundamental democratic principles: a level electoral playing field, diverse and balanced access to political information, and the responsibility of platforms that now serve as key gateways to public discourse, especially for users who do not display any political interest.
It is worth noting that engagement-based algorithms are not the only option. Alternative models have been explored that could support a more pluralistic information environment. For instance, some universities have developed recommender systems grounded in democratic values – designed to downrank content that undermines democratic norms. Another approach is “bridging-based” ranking, which prioritises content that resonates across different demographic or ideological groups. DRI and the European Partnership for Democracy co-authored an op-ed last year highlighting such alternatives.11
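As a rough illustration of the bridging idea, the sketch below ranks hypothetical posts by their lowest approval rate across ideological groups, so content that resonates with only one side scores poorly. The scoring rule, group labels, and data are invented for illustration; deployed bridging systems and the university prototypes mentioned above use richer signals, but the design choice is the same: reward cross-group resonance rather than raw engagement.

```python
# Toy bridging-based ranking: score each post by the lowest approval
# rate it receives across ideological groups, so one-sided content
# cannot rank highly. All names and numbers are illustrative.
posts = {
    "local infrastructure explainer": {"left": 0.60, "right": 0.55},
    "partisan attack clip": {"left": 0.90, "right": 0.05},
    "election logistics guide": {"left": 0.70, "right": 0.65},
}

def bridging_score(approval_by_group: dict[str, float]) -> float:
    # The weakest group's approval caps the score.
    return min(approval_by_group.values())

for post in sorted(posts, key=lambda p: bridging_score(posts[p]), reverse=True):
    print(f"{bridging_score(posts[post]):.2f}  {post}")
```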
That said, studying existing recommender systems and their impacts comes with major challenges. Most existing research relies on sock-puppet accounts – artificial profiles designed to mimic specific user behaviours. While this method is valuable for auditing algorithms, it has limitations. These include the lack of realistic user interaction, small sample sizes, short study durations, and the inability to capture broader network effects, such as how content spreads or how users influence one another in real time.
Beyond methodology, there is a deeper challenge: recommender systems are not just technical tools (i.e., they are not just the algorithm); they are socio-technical systems shaped by a constant feedback loop between user behaviour and platform design. Understanding a platform’s design culture and user communities is as important as understanding the algorithms themselves.12
Some platform owners—such as Musk—are openly aligning with specific political ideologies. But the picture is more complex for others, for which there is less direct evidence of intentional political bias. If, as some argue, the effects of recommender systems are not always foreseeable, then bias may not automatically reflect intent. In any case, given the consistent findings around political exposure bias—especially favouring far-right content—platforms should at least be compelled to take these effects seriously, understand how they come about, and consider how to balance exposure in the public interest of supporting political pluralism and reducing anti-democratic extremism.
The DSA and Political Exposure Bias in Recommender Systems
The DSA’s treatment of recommender systems has been analysed extensively.13 14
Broadly, the DSA’s obligations related to recommender systems fall into three key areas – transparency, user agency, and systemic risk management. These rules aim to give users greater control over the content they see, while holding platforms accountable for how their algorithms shape public discourse.
Under Article 27, platforms must clearly explain in their terms and conditions the main parameters used in their recommender systems. They must also inform users of any available options to adjust or influence these settings. In addition, users must be allowed to modify their recommendation preferences at any time, which is particularly relevant where multiple feeds exist, such as “For You” versus “Following” timelines. Meanwhile, Article 38 requires very large online platforms (VLOPs) to offer users at least one recommender system that is not based on profiling, such as a chronological feed. It appears that some platforms are not fully respecting these obligations yet.15
Under Article 17, when platforms demote or downrank content as part of moderation actions, they are required to explain this to the affected user in their Statement of Reasons.
Recommender systems are also covered under Articles 34 and 35, which concern the identification and mitigation of systemic risks. These provisions explicitly recognise that recommender systems can be vectors of risks to civic discourse and democratic processes and, therefore, must be assessed and potentially adjusted as part of platforms’ risk mitigation strategies.
The DSA strengthens transparency and user agency, but it does not explicitly impose an obligation to ensure algorithmic non-partisanship or political balance. However, the European Commission’s guidelines on electoral integrity recommend that platforms take into account “media diversity and pluralism” when designing recommender systems (Measure d[i]).
Moreover, persistent political exposure bias—and more critically, its downstream effects on users (especially “neutral” non-partisan users) —could constitute a systemic risk to civic discourse and democratic processes. This risk emerges not merely from the presence of bias, but from its capacity to foster polarisation, radicalisation, or societal fragmentation. The risk is compounded when such bias is the result of coordinated manipulation, inauthentic behaviour, or exploitation of the platforms. Furthermore, if recommender systems systematically downrank or suppress certain political actors without notification, this could violate Article 17 of the DSA, which requires transparency in content moderation decisions.
Unfortunately, online platforms have yet to meaningfully assess how their recommender systems influence political content. The first risk assessments published under the DSA contained minimal information on risks related to the functioning of their recommender systems and even less information on related mitigation measures, as highlighted in a report by the EPD secretariat and the Civil Liberties Union for Europe.16
Establishing causal links between political bias in recommender systems and the associated broader impacts on democracy and elections requires independent, robust research; for that, access to platform data under Article 40 is essential. Yet researchers continue to face substantial barriers in securing the data necessary to carry out this work.
Recommendations
- In their forthcoming risk assessment reports, platforms should provide an analysis of political exposure bias as a potential systemic risk driven by their recommender systems. This must include a clear identification of the technical, behavioural, or design features that contribute to or amplify such bias.
- The European Commission should prioritise the robust enforcement of Article 40 of the DSA to guarantee researchers timely and meaningful access to both public and non-public platform data. Such access is essential to investigate how recommender systems shape political exposure and to assess their wider impact on civic discourse and democratic processes.
References
1. Camila Weinmann, Ognjan Denkovski & Francesca Giannaccini, “Filtered for You: Algorithmic Bias on TikTok and Instagram in Germany”, DRI, 10 April 2025.
2. Jinyi Ye, Luca Luceri & Emilio Ferrara, “Auditing Political Exposure Bias: Algorithmic Amplification on Twitter/X During the 2024 U.S. Presidential Election”, preprint, 20 March 2025.
3. Jack Gillum, Alexa Corse & Adrienne Tong, “X Algorithm Feeds Users Political Content—Whether They Want It or Not”, Wall Street Journal, 29 October 2024. Also, listen to the podcast on the same study at Tech News Briefing.
4. Timothy Graham & Mark Andrejevic, “A computational analysis of potential algorithmic bias on platform X during the 2024 US election”, preprint, November 2024; Prithvi Iyer, “New Research Points to Possible Algorithmic Bias on X”, Tech Policy Press, 15 November 2024.
5. Hazem Ibrahim, HyunSeok Daniel Jang, Nouar Aldahoul, Aaron R. Kaufman, Talal Rahwan & Yasir Zaki, “TikTok’s recommendations skewed towards Republican content during the 2024 U.S. presidential race”, preprint, 29 January 2025.
6. Mark Scott & Oliver Marsh, “The Musk Effect: Assessing X’s impact on Germany’s election discourse”, Digital Forensic Research Lab (DFRLab) and AlgorithmWatch, 20 February 2025.
7. Sami Nenno & Philipp Lorenz-Spreen, “Do Alice Weidel and the AfD benefit from Musk’s attention on X?”, Technische Universität Dresden, 9 February 2025.
8. Anna Katzy-Reinshagen, Martin Degeling, Solveig Barth & Mauritius Dorn, “Wahlkampf im Feed? Wie TikTok mit parteipolitischen Inhalten im Vorfeld der Bundestagswahl 2025 umgeht” [“Election Campaign in the Feed? How TikTok Handles Party-Political Content Ahead of the 2025 Bundestag Election”], ISD Germany, 22 February 2025.
9. Camila Weinmann, Ognjan Denkovski & Francesca Giannaccini, “Filtered for You: Algorithmic Bias on TikTok and Instagram in Germany”, DRI, 10 April 2025.
10. Global Witness, “TikTok and X recommend pro-AfD content to non-partisan users ahead of the German elections”, 27 March 2025.
11. Jan Nicola Beyer & Sofia Calabrese, “We should not lose momentum in reforming social media recommender system”, Euractiv, 27 February 2024.
12. Paddy Leerssen, “Algorithm Centrism in the DSA’s Regulation of Recommender Systems”, Verfassungsblog, 22 March 2022; Arvind Narayanan, “Understanding Social Media Recommendation Algorithms”, Knight First Amendment Institute at Columbia University, 9 March 2023.
13. Maximilian Gahntz & Claire Pershan, “Action Recommended: How the Digital Services Act Addresses Platform Recommender Systems”, Verfassungsblog, 27 February 2023.
14. Urbano Reviglio & Matteo Fabbri, “The Regulation of Recommender Systems Under the DSA: A Transition from Default to Multiple and Dynamic Controls?”, DSA Observatory, 22 November 2024.
15. EDRi, “Civil society files DSA complaint against Meta for toxic, profiling-fueled feeds”, 15 April 2025.
16. Orsolya Reich & Sofia Calabrese, “Civic Discourse and Electoral Processes in the Risk Assessment and Mitigation Measures Reports under the DSA”, March 2025.
Acknowledgements
This brief was written by Daniela Alvarado Rincón, Digital Democracy Policy Officer (DRI) with contributions from Michael Meyer-Resende, Executive Director (DRI). This brief is part of the Tech and Democracy project, funded by Civitates and conducted in partnership with the European Partnership for Democracy. The sole responsibility for the content lies with the authors and the content may not necessarily reflect the position of Civitates, the Network of European Foundations, or the Partner Foundations.