Big tech is backing out of commitments countering disinformation — What’s Next for the EU’s Code of Practice?

Executive Summary 

As the EU Code of Practice on Disinformation (CoP) transitions to become a legally binding Code of Conduct (CoC) under the Digital Services Act (DSA), signatory online platforms have considerably scaled back their commitments. These developments add to pre-existing concerns over only partial compliance by platforms with CoP commitments. While the CoC will still have the potential to strengthen the implementation of EU digital regulations, the reduced ambitions on the part of Google, Microsoft, Meta, and TikTok raise significant concerns about the Code’s effectiveness in combating disinformation and ensuring accountability. 

Key findings 

  • In early 2025, online platforms reduced their commitments under the CoP by 31 per cent, with major cuts in areas like fact-checking, the transparency of political advertising, and empowering the research community. Overall, their justifications for this have been vague or insufficient, raising doubts about why these measures are no longer considered relevant. Instead of building on the established framework they helped design, platforms will now need to develop new ways to meet their DSA obligations.
  • Microsoft fully withdrew from fact-checking measures, arguing that the measures were not proportionate to the platform’s risk profile, according to their Systemic Risk Assessment (SRA). This is inconsistent, however, with LinkedIn’s own SRA, which classifies election mis/disinformation and other election-related risks as “high risk”.
  • Google (including Google Search and YouTube) also withdrew fact-checking measures entirely. The platform argues these measures are neither “relevant, pertinent, nor practicable”. Google says it offers other tools that allow users to assess the factual accuracy of online information. Those alternatives are not mentioned in the subscription document.
  • Meta has kept most of its fact-checking commitments but warned it may review them after Chairman Mark Zuckerberg’s announcement on 7 January. TikTok has agreed to maintain its commitments, on the condition other platforms do the same.
  • Microsoft has shown the most significant reductions overall, with its three products – Bing Search, Microsoft Advertising, and LinkedIn – each cutting by half the number of measures to which they had committed. 
  • Google, Microsoft, and TikTok have withdrawn from all commitments related to political advertising, citing their current ban on political ads. Given recent cases of policy circumvention, platforms should at least publish reports on how they enforce these bans. Moreover, the CoP established measures to encourage a common definition of political advertising and a consistent approach to enforcing such a definition across services (Commitments 4 and 5). These measures remain relevant, even with a “no-political-ads policy”, to prevent inconsistencies and selective removal across platforms.
  • All platforms have unsubscribed from Commitment 27, which required them to develop, fund, and collaborate with an independent third-party body to enable data access for researchers. 

The European Commission will play a key role in ensuring platforms uphold their commitments under the CoC, by reviewing platform reports and audits, tracking progress on key performance indicators (KPIs), and assessing whether the alternative measures platforms introduce – such as Community Notes – are as effective as the commitments they replace. The CoC should also serve as a reference point for evaluating how Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) identify and address risks, as outlined in Recital 104 of the DSA. 

At the same time, non-platform signatories, including DRI, can support this effort by researching platform compliance with the CoC and KPIs, as well as by gathering evidence on the effectiveness of different strategies to combat disinformation. 

Most importantly, it is crucial to push back against the idea that platforms are scaling back their commitments to protect free speech. On the contrary, strong enforcement of the EU’s digital regulations will help create a more transparent online space in which the free speech and right to information of all users are better protected. 

From Code of Practice to Code of Conduct 

The European Union launched its first Code of Practice on Disinformation in 2018. An updated version came into effect in 2022, in response to a call by the European Commission1 to strengthen the commitments. This self-regulatory framework outlines a range of measures to combat disinformation in the EU, spanning broad commitments and detailed quantitative and qualitative key performance indicators (KPIs). The Code addresses areas such as the demonetisation of disinformation, the transparency of political advertising, and service integrity, and includes measures to empower users, researchers, and the fact-checking community. As of January 2025, the Code had 40 signatories, including major online platforms, the online ad industry, ad-tech companies, fact checkers, and civil society organisations. DRI became a signatory in 2023. 

From the outset, the strengthened CoP and the DSA were designed to complement and reinforce one another.2 The Code’s preamble explicitly states its aim “to become a Code of Conduct under Article 35 of the DSA,” specifically for VLOPs that are signatories. Meanwhile, in Articles 45–47 and several other articles and recitals, the DSA references Codes of Conduct as instruments to support its implementation.3

In line with this framework, last year the Commission began the process of converting the current CoP into a Code of Conduct (CoC). As a result, platforms decided to reevaluate the measures they had subscribed to in 2022. Why? Converting the CoP into a CoC integrates it into the DSA framework, granting it stronger legal enforcement – an upgrade that brings important implications for the platforms involved. 

On the one hand, the commitments that platforms make under Codes of Conduct will be audited annually by independent auditors (Article 37(1)(b)), much like the audits conducted on VLOPs’ and VLOSEs’ risk assessments and mitigation measures. If an audit report identifies issues (“negative” findings), platforms are required to address any operational recommendations from the auditors within one month, and to report back to the Commission on how they have been implemented. 

On the other hand, compliance with Codes of Conduct is explicitly recognised as an appropriate risk-mitigation measure (Article 35(1)(h)). Hence, refusing to participate in these Codes without a valid explanation can be considered a potential violation of a platform’s obligations under the DSA (Recital 104). Platforms are also expected to view Codes of Conduct as industry best practices, and to use them as benchmarks when selecting risk mitigation measures. They must regularly report to the Commission and Digital Services Coordinators on actions taken under the Codes, with progress evaluated against their KPIs. 

In other words, Codes of Conduct serve as baselines for the Commission to use to assess whether platforms are fulfilling their due diligence obligations.4 Platforms are not obliged to participate in a Code of Conduct or subscribe to all of its commitments, but if they choose not to they must demonstrate that the measures they are adopting instead are equally or more effective at addressing systemic risks. 

Platform Rollbacks on CoP Commitments: A Comparative Analysis of 2022 vs. 2025 

On 22 January, the Commission published the updated subscription documents from VLOPs and VLOSEs that are signatories to the Code of Practice on Disinformation (i.e., Google, Meta, Microsoft, and TikTok), which are expected to be the basis for the new Code of Conduct.  

As reported by DRI,5 between 2022 and 2025, platforms reduced the number of measures committed to in the CoP by 31 per cent.6 In 2022, platform signatories subscribed to an average of 78 (59 per cent) of the 132 available measures. By 2025, this figure has dropped sharply, with platforms subscribing to an average of just 53 measures (40 per cent) in the Code; a significant number of these subscriptions relate to the governance and reporting of the Code itself, rather than to measures directly addressing disinformation. This decline signals a clear reduction in platforms’ levels of ambition within the CoP. 
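
The arithmetic behind these headline figures is straightforward. The short Python sketch below is our own illustration, not part of the Code or the subscription documents; it simply reproduces the shares and the roughly 31 per cent drop from the averages quoted in this paragraph and in the totals row of Table 1 below.

```python
def pct_change(avg_2022: float, avg_2025: float) -> float:
    """Relative change in average CoP subscriptions between 2022 and 2025, in per cent."""
    return (avg_2025 - avg_2022) / avg_2022 * 100

# Totals row of Table 1: average subscriptions fell from 76.8 measures in 2022 to 52.8 in 2025.
print(round(pct_change(76.8, 52.8), 2))  # -31.25, close to the -31.24% reported in Table 1

# The rounded averages quoted above (78 and 53 measures) correspond to shares of the
# 132 measures open for subscription: 78/132 is roughly 59 per cent, 53/132 roughly 40 per cent.
print(round(78 / 132 * 100), round(53 / 132 * 100))  # 59 40
```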

While all areas of the Code have been affected, the most significant reductions have been in measures supporting the fact-checking community (a 64 per cent decrease), followed by measures related to political advertising and empowering the research community. Table 1 provides a detailed breakdown, showing the total number of measures available for subscription, the average subscriptions per section of the Code in both 2022 and 2025, and the percentage change in average subscriptions over this period. 

When it comes to platforms, Microsoft showed the most significant reductions, with its three products – Bing Search, Microsoft Advertising, and LinkedIn – each cutting their commitments by half. Google followed closely, with Google Search, Google Advertising, and YouTube completely dropping all measures related to political advertising and fact-checking. 

Meta and TikTok upheld their commitments to fact-checking measures — an unexpected move following Chairman Mark Zuckerberg’s statement on 7 January, in which he announced that all fact-checking programmes would be suspended in the United States. These commitments, however, are not without caveats, which will be explored in detail below. Table 2 shows these changes. 

Table 1: Changes in average subscriptions to CoP measures between 2022 and 2025 

| Topic | Total # of measures | Average subscriptions in 2022 | Average subscriptions in 2025 | % Change in average subscriptions |
| --- | --- | --- | --- | --- |
| Empowering the fact-checking community | 12 | 6.6 | 2.4 | -64.38% |
| Political advertising | 28 | 9.5 | 4.3 | -55.24% |
| Empowering the research community | 14 | 7.5 | 3.5 | -52.44% |
| Scrutiny of ad placements | 13 | 6.6 | 4.1 | -38.36% |
| Empowering users | 25 | 10.0 | 6.7 | -32.73% |
| Monitoring of the Code | 13 | 12.3 | 10.0 | -18.52% |
| Integrity of services | | 4.3 | 3.8 | -10.64% |
| Transparency centre | 14 | 14.0 | 12.3 | -12.34% |
| Permanent taskforce | | 6.0 | 5.7 | -4.55% |
| Total | 132* | 76.8 | 52.8 | -31.24% |

Source: Subscription documents (2025) available at the CoP Transparency Centre. 
*The CoP has 44 commitments and 128 measures. We counted Commitments 38, 39, 42, and 43 as standalone measures, bringing the total to 132 measures to which platforms can subscribe. 

Table 2: Comparison of CoP measures adopted by platforms in 2022 and 2025 

| Platforms | Total subscriptions in 2022 | Total subscriptions in 2025 | % Change in total subscriptions |
| --- | --- | --- | --- |
| Microsoft Advertising | 49 | 21 | -57.14% |
| Bing Search | 75 | 33 | -56.00% |
| LinkedIn | 99 | 47 | -52.53% |
| Google Advertising | 69 | 42 | -39.13% |
| TikTok | 114 | 72 | -36.84% |
| YouTube | 77 | 58 | -24.68% |
| Google Search | 70 | 55 | -21.43% |
| Instagram | 113 | 96 | -15.04% |
| Facebook | 112 | 96 | -14.29% |
| WhatsApp | 33 | 30 | -9.09% |
| Messenger | 34 | 31 | -8.82% |

Source: Subscription documents (2025) available at the CoP Transparency Centre. 

In the following sections, we analyse each platform’s changes in commitments and key withdrawals: the specific measures that were reduced or dropped, along with the rationale provided by the platforms (where available). 

Microsoft 

LinkedIn 

From 99 commitments in 2022 to 47 in 2025 

LinkedIn withdrew from all measures related to fact-checking, claiming that such measures were not proportionate to the platform’s risk profile according to its risk assessments, and that they are “not a necessary mitigation in addition to those currently in effect”. This justification appears inconsistent with LinkedIn’s 2024 Systemic Risk Assessment (SRA),7 which explicitly identifies misinformation and disinformation as risk scenarios in at least two areas: civic discourse and electoral processes, and public security. Notably, LinkedIn classified the probability of election misinformation and other electoral-related risks as “likely” and their severity as “critical,” resulting in an overall risk level that was deemed “high.” Moreover, the platform itself acknowledges in the SRA that it mitigates these risks by, among other measures, collaborating with “external fact-checkers” and using a “curated repository of reputable global fact-checking resources across multiple priority languages and localities to support accurate review of election-related content.”8 

LinkedIn also withdrew from several commitments aimed at empowering users. These included measures to promote media literacy (C17), to adopt safe design practices to reduce the spread of misinformation (C18), and to provide tools for users to assess the provenance, edit history, authenticity, and/or accuracy of digital content (C20). The platform cited “auditing challenges”, and claimed that terms like “media literacy,” “critical thinking,” and “safe design practices” were too vague and open to interpretation. 

One could take the opposite view: Unlike the broader provisions in the DSA, each commitment in the CoP is accompanied by Qualitative Reporting Elements (QREs) and Service Level Indicators (SLIs), which arguably provide clearer benchmarks and are better suited for audits than more general mitigation measures.9 

Regarding measures to provide provenance and edit history for digital content, LinkedIn argued that Commitment 20 would require it to “develop” technology solutions instead of integrating tools developed by others, such as C2PA content credentials. The text of Commitment 20 does not support this interpretation. On the contrary, the Code explicitly mentions C2PA as an example of an appropriate provenance solution. 

Nearly all commitments aimed at empowering the research community were dropped, including funding research on disinformation (28.4) and cooperating with an independent third party to vet researchers (27). 

Microsoft Advertising 

From 49 commitments in 2022 to 21 in 2025 

Key withdrawals for Microsoft Advertising included measures allowing third-party audits to verify platform reports and ad policy enforcement (M1.5), as well as measures to share best practices and collaborate with relevant players in the digital advertising value chain (C3). 

The platform also withdrew from measures outlining key activities of the Task-Force (M37.2 and M37.3), such as establishing and participating in a rapid response system; from the commitment to allocate adequate financial and human resources for implementing the Code (C38); and from measures related to reporting yearly on progress with SLIs and QREs (C40). These decisions raise concerns about whether Microsoft Advertising will continue to engage with the logistical and reporting aspects of the Code. 

Bing Search 

From 75 commitments in 2022 to 33 in 2025 

In addition to withdrawing from all fact-checking measures, offering the same justifications as LinkedIn, Bing Search also opted out of nearly all measures related to service integrity (C14, C15, 16.2). These include critical commitments to collaborate on a cross-service understanding of manipulative behaviours, such as malicious deepfakes (C14), and to report on policies for countering manipulative practices involving AI-generated content (C15).10 

Google 

YouTube 

From 77 commitments in 2022 to 58 in 2025 

Google Search 

From 70 commitments in 2022 to 55 in 2025 

YouTube and Google Search have withdrawn from all fact-checking measures, arguing that complying with these commitments is neither “relevant, pertinent, nor practicable” for mitigating the systemic risk of disinformation on their platforms. They claim their current measures are “reasonable, proportionate, and effective”, and say they offer other tools that allow users to assess the factual accuracy of online information. Those alternatives are not mentioned in the subscription document. 

Both platforms have also pulled back from important measures aimed at empowering the research community. These include commitments to develop, fund, and cooperate with an independent third-party body to vet researchers and research proposals (C27). Another significant withdrawal concerns safe design practices (M18.1). This measure required platforms to publish the main parameters of their recommender systems in their transparency reports. 

Google Advertising 

From 69 commitments in 2022 to 42 in 2025 

Google Advertising withdrew from all political advertising measures, claiming they no longer apply, since the platform does not allow political ads. However, a “no-political-ads policy” is also a policy, not an absence of policy, and transparency about its enforcement remains essential. 

With the Regulation on the Transparency and Targeting of Political Advertising (TTPA) taking effect in October 2025, Google announced it would stop allowing political ads in the EU.11 The company argued that the regulation’s broad definitions would make it difficult to reliably identify political ads at scale. Yet, as some organisations have warned, in practice, Google will still need to distinguish political ads from non-political ones to enforce its own ban.12 

The CoP established measures to encourage a common definition of political advertising and a consistent approach to enforcing such a definition across services (Commitments 4 and 5). These measures would still be relevant even with a “no-political-ads policy” – they would ensure that the application of VLOPs’ and VLOSEs’ political ads policies is consistent and transparent. 

Furthermore, it would be very valuable if Google, along with other VLOPs like LinkedIn and TikTok that have adopted similar policies, provided regular and comprehensive enforcement reports (including, for example, the number of political ads removed, average time of removal, etc.). 

Meta 

Facebook & Instagram 

Facebook: from 112 commitments in 2022 to 96 in 2025; Instagram: from 113 to 96 

Facebook and Instagram have kept nearly all measures related to fact-checking, only dropping a few commitments related to maintaining a repository of fact-checking content (M31.3 and M31.4). The platforms included a notable caveat, however, stating that, in line with Meta’s public announcements of 7 January 2025, they will continue to assess the applicability of the Fact-Checking chapter to Facebook and Instagram. It is expected that Meta platforms will review their commitments once the Community Notes feature is deployed in the EU. 

Facebook and Instagram also withdrew from measures providing users with tools to assess the factual accuracy of sources flagged by fact-checking organisations (C21). Like many other platforms, Meta also pulled back from commitments supporting the research community, including funding and cooperating with an independent third-party body to vet researchers and research proposals (C27). 

WhatsApp 

From 33 commitments in 2022 to 30 in 2025 

Messenger 

From 34 commitments in 2022 to 31 in 2025 

Messaging apps made minimal changes to their Code subscriptions. 

TikTok 

From 114 commitments in 2022 to 72 in 2025 

TikTok withdrew from all measures related to political advertising, arguing that these commitments are irrelevant, since the platform does not allow political ads. The platform did, however, state that “TikTok will endeavour to continue providing political advertising enforcement metrics, as outlined in Chapter 2 of its reports, to remain transparent.” This is welcome, given DRI’s recent findings during the Romanian elections, where political actors successfully circumvented TikTok’s political ad policies. 

TikTok also scaled back its commitments on fact-checking and empowering users. The platform opted out of measures requiring the integration of independent fact-checkers’ work (M31.1) and the creation of a fact-checking repository (M31.3). Additionally, TikTok withdrew from commitments to provide users with tools for assessing the factual accuracy of sources through flagging of content by fact-checking organisations (C21) and from offering indicators of trustworthiness to users (C22). 

Despite these withdrawals, TikTok remains committed to certain fact-checking measures outlined in Chapter VII of the Code. These include providing fact-checkers with timely and, whenever possible, prompt access to relevant information (C32), and establishing a framework for financially sustainable cooperation with fact-checking organisations (C30). TikTok has made these commitments conditional, however, stating they will only uphold them if other signatories offering similar services do the same. It is likely that TikTok will follow suit in the event that Meta changes its policy and withdraws from these commitments. 

Finally, like all other platforms, TikTok withdrew from commitments supporting the research community, including funding and cooperating with an independent third-party body to vet researchers and research proposals (C27). 

What Lies Ahead for the Code of Practice (Conduct) on Disinformation? 

Platforms’ drastically reduced ambition within the CoP signals a troubling trend, with the major players (initially X and, more recently, Meta, and now the others as well) scaling back their efforts to address harmful content online, including disinformation. This retreat comes at a critical time, as the new U.S. presidential administration is expected to push for weaker digital regulatory frameworks worldwide – and particularly in the EU. The DSA and the DMA now face a critical test of their ability to deliver effective digital governance. 

In this context, Codes of Conduct can still play an important role in ensuring robust implementation of the EU’s digital regulations. One of their key strengths is their ability to translate abstract legal provisions into concrete, actionable standards.13 The CoP on Disinformation often offers more detailed and technical guidance than the DSA, including specific quantitative and qualitative KPIs. It also requires platforms to demonstrate their compliance with these benchmarks, through transparency reports. 

Take Data Access commitments, for instance: The CoP Transparency Reports14 have been the only source of quantitative information about how many applications VLOPs/VLOSEs received from researchers under Article 40(12), and how many were approved or rejected. This is because SLI 26.2.1 asks signatories to provide metrics on the uptake, swiftness, and acceptance level of research tools and processes. Such granular insights are not available in other reporting mechanisms, including audits and risk assessments. 

CoCs can also establish industry best practices that extend beyond what regulations mandate. For instance, Commitment 25 introduces measures to address misinformation on messaging apps. These include implementing design features to limit the forwarding of messages across multiple conversations, and adding features, where possible, that display appropriate fact-checking labels when content from social media is re-shared in the messaging apps. WhatsApp and Messenger signed on to these commitments even though both platforms fall, in principle, outside the DSA’s scope. Such measures could also serve as benchmarks to encourage non-signatory messaging platforms, such as Telegram, to adopt similar improvements in their practices. 

The CoP encourages active stakeholder participation in its implementation. While not without flaws, this engagement enables direct collaboration between tech companies and key actors, such as fact-checkers and NGOs. In our experience, such collaboration has led platforms to make timely policy decisions against the spread of disinformation. 

That said, the significant withdrawal of platforms from CoP commitments undermines the credibility and effectiveness of this instrument in addressing disinformation. Many platforms have provided vague or insufficient justifications for their decisions, leaving unanswered questions about why certain measures are no longer deemed relevant or practicable. The argument that converting the CoP into a CoC would introduce legal risks related to audits under Article 37(1)(b) is weak. CoP measures are often more specific and detailed than the DSA text or its guidelines, making them more suitable for audits. Instead of working within an established framework that they themselves co-designed, platforms will now have to find new ways to satisfy their DSA obligations. 

Google and Microsoft’s withdrawal from all fact-checking measures, combined with Meta and TikTok conditioning their commitments, poses a serious challenge for fact-checking organisations, many of which rely heavily on platform funding and partnerships to sustain their operations.15 Furthermore, the alternatives proposed by platforms to combat misinformation are either unclear (as with Google) or ineffective without fact-checkers (as with Meta or X). Experts agree that tools like Community Notes, which rely on user-generated input and ratings, require fact-checkers to provide accurate context. Without this oversight, these tools risk being manipulated by users to spread misinformation instead of mitigating it.16 

Another significant loss is Commitment 27, which required platforms to develop, fund, and collaborate with an independent third-party body to vet researchers and research proposals. All platforms withdrew from this commitment, despite the European Digital Media Observatory (EDMO) already making progress on this initiative.17 

These recent developments add to pre-existing challenges. Studies analysing platforms’ CoP Transparency Reports revealed partial compliance with the Code, with significant gaps in both qualitative and quantitative information.18 In December 2024, the European Fact-Checking Standards Network (EFCSN) published a compliance report focused on the fact-checking chapter. The report concluded that most platforms failed to demonstrate effective implementation of their commitments, and only partially adhered to the European Commission’s guidelines on mitigation measures during elections.19 

What steps can we take in the coming months to ensure the Code of Conduct remains effective? 

Overall, it is important to challenge the narrative that the platforms’ drastically reduced ambition in fighting disinformation is about protecting free speech. For one thing, fact-checkers have never had the power to delete content or control recommendation algorithms – these decisions have always been made by the platforms themselves. Enforcing EU digital rules is, furthermore, not at odds with protecting freedom of expression. The EU has a long history of defining the limits of free speech, in order to balance individual rights with broader democratic responsibilities. Germany, for example, criminalises Holocaust denial as a safeguard against hate speech and to protect democratic values.20 

Monitor comprehensive and transparent reporting. With the CoC now integrated into the DSA framework and subject to audits, analysis of platforms’ compliance with their commitments is more crucial than ever. The Commission should require platforms to provide clear and comprehensive reports on both qualitative and quantitative KPIs. Non-platform signatories should track compliance and the audit reports. 

Evaluate alternative approaches to combating disinformation. Given the significant number of commitments from which platforms have unsubscribed, the Commission should closely monitor the alternative measures platforms implement to combat disinformation. Meanwhile, as research organisations, we should evaluate to what extent these alternatives, such as Community Notes, are as effective as the commitments they have replaced. 

Position the CoC as a guiding framework for risk assessments and mitigation. Beyond its legal enforceability through audits, the CoC should serve as a practical benchmark for evaluating VLOPs and VLOSEs’ risk assessments and mitigation measures under the DSA, as outlined in Recital 104. Ensuring alignment with the CoC can help maintain consistency in addressing systemic risks across platforms. 

References

  1. European Commission, “Commission presents guidance to strengthen the Code of Practice on Disinformation”, 26 May 2021.
  2. This is not only the case for the Code of Practice on Disinformation, but also for the 2016 Code of Conduct on countering illegal hate speech.
  3. Rachel Griffin, Codes of Conduct in the Digital Services Act, Technology and Regulation, 2024, pp. 167-187. On Page 171, this article provides an in-depth analysis of the references to Codes of Conduct within the DSA and their intended role in supporting the DSA framework.
  4. According to the Commission: “Adhering to codes of conduct under article 45 of the DSA is a voluntary act. However, signatories undertake to respect the commitments outlined in the code or codes they adhered to. For signatories who are designated as VLOPs and VLOSEs, this may help to ensure appropriate risk mitigation measures are in place.” European Commission, “Codes of conduct under the Digital Services Act”, 2025.
  5. Democracy Reporting International, “DRI Statement on Platforms Reducing Commitments Ahead of Strengthened Code of Conduct on Disinformation”, 22 January 2025.
  6. It’s worth noting that, even in 2022, platforms did not subscribe to all measures in the Code. Some measures were not relevant to certain platforms (e.g., messaging apps or search engines), while others targeted non-platform signatories, such as fact-checking or research organisations.
  7. LinkedIn, “Systemic Risk Assessment”, August 2024, pp. 34, 40.
  8. Ironically, LinkedIn found itself at the centre of a high-profile disinformation case last year. As reported by the BBC, a LinkedIn post played a significant role in spreading false information about the fatal stabbings at a children’s dance class on 29 July in the United Kingdom. The post falsely alleged that the suspect was an illegal immigrant, and, after it went viral on other platforms, it triggered riots in England and Northern Ireland.
  9. For instance, one of the Service Level Indicators (SLIs) for commitment 17 (media literacy) is the total number of impressions of the media literacy tool, along with data on user interactions and engagement – information that should be relatively straightforward to provide.
  10. Last year, the European Commission initiated an enforcement process against Bing, under the suspicion that the platform may have breached the DSA for risks linked to generative AI, the viral dissemination of deepfakes, and the automated manipulation of services that can mislead voters.
  11. Annette Kroeber-Riel, Google’s Vice President, Government Affairs and Public Policy for Europe, “An update on political advertising in the European Union”, 14 November 2024.
  12. Joint Civil Society Statement, “Google’s decision to stop serving Political Ads in Europe will restrict political discourse online”, 11 December 2024.
  13. Rachel Griffin, Codes of Conduct in the Digital Services Act, Technology and Regulation, 2024, pp. 167-187.
  14. Transparency Centre, Code of Practice on Disinformation, Reports Archive.
  15. Lucas Graves, “Will the EU fight for the truth on Facebook and Instagram?”, The Guardian, 13 January 2025.
  16. Maldita.Es, “Por qué las Notas de la Comunidad y los verificadores pueden (y deben) trabajar juntos, y cómo hacerlo para que funcione”, 17 January 2025.
  17. European Digital Media Observatory, “Launch of the EDMO Working Group for the Creation of an Independent Intermediary Body to Support Research on Digital Platforms”, 15 May 2023.
  18. Stephan Mündges & Kirsty Park, “But did they really? Platforms’ compliance with the Code of Practice on Disinformation in review”, Internet Policy Review, 13(3), 25 July 2024.
  19. European Fact-Checking Standards Network, “Commitments unfulfilled: big tech and the EU Code of Practice on Disinformation”, 18 December 2024.
  20. Michael Meyer-Resende, “Wir sehen einen Trump-Effekt”, Frankfurter Allgemeine Zeitung, 25 January 2025.

Acknowledgements 

This brief was written by Daniela Alvarado Rincón, Digital Democracy Policy Officer (DRI), with contributions from Michael Meyer-Resende, Executive Director (DRI). This brief is part of the Tech and Democracy project, funded by Civitates and conducted in partnership with the European Partnership for Democracy. The sole responsibility for the content lies with the authors, and the content may not necessarily reflect the position of Civitates, the Network of European Foundations, or the Partner Foundations. 
