Today, the eagerly anticipated Artificial Intelligence (AI) Act comes into force across the EU. After nearly three years of negotiations ‘2500 including five rounds of trilogue discussions and countless revisions – this regulation is set to establish a new benchmark for AI governance.
At DRI, we have been closely monitoring the evolving impact of AI-powered technologies on civic discourse and electoral integrity. Our work has highlighted the risks associated with chatbots spreading harmful narratives and misinformation during elections, raised early alarms about the dangers of fully synthetic content and voice cloning, and developed a guide to identifying AI-generated disinformation. Recently, we have also identified incidents of generative AI content in the run-up to the EP Elections.
Looking ahead, we are committed to continuing our work by applying the AI Act’s provisions to protect democracy from the risks posed by AI technologies. In this overview, key features of the AI Act, its potential shortcomings and the areas DRI will be closely monitoring during the implementation phase will be discussed.
The AI Act’s Game-Changing Features
The AI Act is a complex and lengthy piece of legislation. One of its main innovations is the introduction of a risk-based approach in which AI systems are categorized based on their level of risk ‘2500 unacceptable, high, limited, or minimal risk ‘2500with the most stringent obligations applying to the high-risk systems. The Act also defines and establishes obligations for general-purpose AI (GPAI) models, including those that could pose “systemic risks.” Additionally, it outlines measures for addressing AI systems that present risks at the national level.
Understanding the Risk-Based System
Prohibited AI practices (Art. 5). The AI Act prohibits eight types of AI, based either on their actual or likely effects, or their function. While none of these prohibitions explicitly address risks to democracy, some banned applications include AI systems that could potentially impact electoral integrity and civic discourse such as (i) using AI-enabled subliminal techniques, (ii) exploiting vulnerabilities to manipulate individuals’ behaviour, or (iii) using biometric categorisation systems to infer political opinions.
These prohibitions will start applying on 1 February 2025, six months after the AI Act takes effect. Violations could result in hefty fines—up to €35 million or 7% of a company’s total global annual turnover, whichever is the higher value.
High-Risk AI systems (Art. 6). An AI system is classified as high-risk if it is used in any of the areas specified in Annex III and poses a significant risk to the health, safety, or fundamental rights of individuals. Annex III identifies areas like the administration of justice and democratic processes, explicitly mentioning AI systems used to influence election outcomes, referendums, or voting behaviour. The European Commission can also update Annex III to include new high-risk use cases as they emerge.
High-risk AI systems are not prohibited, but companies must carry out a self-assessment and register them in the public EU AI database before they can be marketed or deployed. They are also subject to the most stringent obligations. These include doing thorough risk assessments, keeping activity logs for traceability, feeding the system with high-quality data to avoid risks and bias, providing accurate and robust cybersecurity, and making sure there is always a natural person overseeing the use of the AI system.
The AI Act also defines specific roles within the AI supply chain—providers, deployers, distributors, and importers—each with its own set of responsibilities. For those interested in a detailed breakdown, the CDT, a non-profit organization, has compiled a comprehensive list of these obligations for each entity. Failure to meet these obligations can result in fines of up to 15 million euros or 3% of the provider’s total annual turnover, whichever is the greater of the two.
Obligations for High-Risk AI systems will start applying on 1 August 2026. However, by 2 February 2026, the European Commission must release guidelines that include a comprehensive list of practical examples of what qualifies as High-Risk AI systems.
Limited-Risk AI systems (Art. 50). This category covers AI systems that interact directly with people, like chatbots, and those that generate synthetic content, such as audio, images, videos, or text. In contrast to high-risk AI systems, the Limited-Risk category carries less stringent transparency obligations. Notably, Chatbot providers must clearly inform users that they are interacting with an AI, not a human. Providers of AI systems that create synthetic content must label these outputs in a machine-readable format to indicate that they are artificially generated or manipulated.
These obligations will not apply until 1 August 2026, two years after the AI Act enters into force. Until then, the AI Act encourages the AI Office to draft EU-level Codes of Practice to start detecting and labelling artificially generated or manipulated content.
Minimal-Risk AI systems. AI systems that don’t fit into the other categories (e.g. AI-enabled video games or spam filters) carry few obligations. Nonetheless, providers and deployers of these systems can sign Codes of Conduct to voluntarily implement some of the obligations included for high-risk systems (Art. 95).
ChatGPT and Friends: Rules for General-Purpose AI
ChatGPT was brought to market during the AI Act’s negotiation phase, dramatically transforming the perceptions around AI among the wider public. The rapid evolution of general-purpose AI systems like ChatGPT, raised significant concerns across multiple areas, including democratic processes. In response, the European Parliament advocated for the regulation of such systems, leading to their inclusion in the AI Act.
The final regulation defines general-purpose AI models (GPAI) as those trained on extensive datasets and capable of performing a wide array of tasks. While GPAI providers face fewer obligations compared to those dealing with high-risk AI systems, they are still required to prepare technical documentation, share relevant information with downstream users (like deployers), and ensure compliance with copyright laws. Obligations for GPAI will begin on 1 August 2025 and will apply once the GPAI systems are placed on the market.
The Act also recognizes that some GPAI models might pose systemic risks in the EU, including potential negative impacts on democratic processes (Recital 110). To address this, it introduces a category for GPAI models with “systemic risk.” These are models with either significant “high-impact capabilities” (Art. 52) or a major influence in the EU market due to their extensive reach (Annex XIII). For these models, the Act mandates regular risk assessments, proactive measures to mitigate identified risks, continuous monitoring of serious incidents, and strong cybersecurity practices.
Remarkably, open-source GPAI models—those publicly available under a free or open license, with their architecture accessible to the public—only need to meet obligations if they pose a systemic risk or fall into the High-Risk AI category.
A New European Governance Structure for AI
The AI Act sets up enforcement authorities at both national and regional levels. It also establishes an AI Office within the European Commission which is responsible for drawing up Codes of Practice, overseeing rules for GPAI models and systems, promoting the development of trustworthy AI, and coordinating with other enforcement bodies like those implementing the Digital Services Act (DSA) and Digital Markets Act (DMA).
Additionally, the Act introduces other advisory bodies, such as the European AI Board, the Advisory Forum, and the Scientific Panel of Independent Experts, which will support and provide guidance to the Commission in its efforts.
Some AI Act Shortcomings to Watch Out For
Throughout the lengthy negotiation process, the European Commission faced considerable criticism, particularly regarding the transparency and the secrecy of the discussions with lobbyists. An especially controversial moment occurred when Mistral, a French AI company, lobbied EU politicians against regulating generative AI, claiming it would harm Europe’s AI competitiveness. It later emerged that Mistral was negotiating an agreement with Microsoft at the same time of its discussions with the EU. Moreover, despite pressure from civil society and experts, the final text of the AI Act still overlooks many crucial risks associated with AI systems.
One of the most contentious aspects of the AI Act is its list of prohibited AI systems. On one hand, the prohibitions outlined in Articles 5(a) and 5(b) only apply if AI systems are likely to cause “significant harm” to an individual or group, but the Act does not define this term or provide any examples. Experts also raised concerns about the scope of the prohibition of “subliminal techniques”, which traditionally refers to “sensory stimuli that are weak enough to escape conscious perception but strong enough to influence behaviour.” Experts have suggested a broader interpretation that would cover most of the AI manipulation techniques of concern, such as harmful nudges that are noticeable to users. Otherwise, they argue, the prohibition would not be applicable.
The act also prohibits the use of real-time biometric identification (RBI) in publicly accessible spaces for the purposes of law enforcement, and the use of AI systems for predicting propensity towards criminal behaviour. However, many organisations have raised concerns that these bans include numerous exceptions. For example, retrospective biometric identification would still be allowed and the definition of “publicly accessible spaces” in Recital 19 does not explicitly include borders—a location where human rights abuses frequently occur.
Another major concern is the self-assessment model in the AI Act. AI providers can exempt themselves from being classified as high-risk – the category with the most stringent obligations – by claiming, for example, that their systems only perform preparatory tasks. While EU authorities can verify these claims, experts fear that EU authorities may lack the resources for effective and comprehensive oversight. Furthermore, limited access to data on the architecture and functioning of AI models makes it difficult for civil society organisations to verify these self-assessments.
In March, DRI also raised an alarm about how exemptions for open AI models under the EU’s AI Act will create a foreseeable – and avoidable – back door for malicious actors to spread harmful content. Last year we tested the ability of three popular open-source models – all of which are publicly accessible on the Hugging Face platform – to see whether they would generate harmful content, including hate speech and misinformation. They did. While there are valid arguments against over-regulating open-source GPAI, we believe that close monitoring of these models by the AI Office is desirable.
Last but not least, the AI Act falls short in ensuring meaningful participation of CSOs and individuals affected by AI systems in the implementation of the regulation. This gap was recently highlighted in a controversy over the involvement of CSOs and other stakeholders in drafting the GPAI Code of Practice. The Commission ultimately launched a multi-stakeholder consultation on 30 July on the “issues to be covered by the codes of conduct.” However, it remains uncertain whether CSOs will be signatories.
Advocacy Pathways and DRI’s Priority Areas
Here are some critical areas of the AI Act that DRI will be keeping a close eye on and the advocacy avenues we plan to use in response:
Use (and Abuse) of GPAI Systems, including those with Systemic Risk, During Elections and Key Democratic Events
We will keep a close watch on how politicians, the media, and citizens use chatbots and other GPAI models –including open-sourced ones- during elections, as these tools could pose significant risks to civic discourse and the electoral process. We will also be tracking the spread of this content on online platforms, ensuring that AI providers, deployers, and Very Large Online Platforms/Search Engines (VLOPs/VLOSEs) are sticking to the rules set by both the Digital Services Act and the AI Act. To help with this, we are creating a guide on how to audit LLMs, which will be published later this year.
Identifying Prohibited, and High-Risk AI Systems Relevant to Democracy
In collaboration with other CSOs, we have begun identifying use cases of AI systems that could pose risks to democracy and electoral processes, particularly those that might fall into the Prohibited and High-Risk categories. Once operational, we can also monitor the EU’s public AI database to analyse and report on systems that impact democratic integrity, and to assess whether AI providers and deployers are complying with their obligations. In our monitoring efforts, we will advocate for risk assessments to consider not only individual harms, but also broader societal and collective risks, ensuring a thorough approach to protecting democratic values.
Push for a Participatory Approach in the AI Act Implementation
While it is still uncertain if NGOs will be able to sign onto the AI Act Codes of Practice, we are committed to seizing every opportunity to spotlight the risks and threats that AI systems – including GPAI- pose to democratic processes. We will actively participate in consultations hosted by the AI Office and engage in multi-stakeholder discussions, like those promoted through DRI’s hybrid conference, DisinfoCon. Our goal is to ensure that the voices of CSOs and impacted individuals are heard and considered in the implementation of the AI Act.