Biased by Design? Chatbots and misinformation in Sri Lanka’s 2025 local elections

After nearly seven years, local elections returned to Sri Lanka. But this time, a new generation of voters is turning to AI-powered chatbots and search engines for answers – making the risk of automated misinformation more urgent than ever. As Large Language Models (LLMs) become embedded in everyday tools, countries around the world are racing to understand their potential and risks.

But as these models are built into mainstream platforms – often without adequate oversight or regulation – they present real risks to democratic participation and electoral integrity. To explore these risks in the context of Sri Lanka’s 2025 local elections, this study tested four chatbots in English, Tamil, and Sinhala using 18 questions related to voting procedures and the major political issues shaping the campaign. The results reveal how LLM errors can misinform, confuse, or even exclude voters – especially in a multilingual and politically complex environment like Sri Lanka. The risks posed by LLMs are not just theoretical – they manifest in concrete ways that can shape voter understanding.

On questions related to the electoral process, we found that:

  • All four chatbots returned incomplete or inaccurate answers on electoral process questions, with Gemini performing best (10.4% false or misleading responses), followed by Copilot (16.7%), ChatGPT 4.0 (18.8%), and DeepSeek performing the worst (35.4%).
  • LLMs gave the most accurate answers in Sinhala, with 71.8% of responses correct, compared to 68.1% in English and 64.1% in Tamil. Yet each language saw a significant share of inaccurate outputs, with false or misleading information in 21.9% of English and Tamil responses, and 17.2% of Sinhala responses.
  • Chatbots handled general questions about the electoral system fairly well, often citing sources like Wikipedia. However, they struggled with timely, locally specific queries: all four models gave outdated or incorrect answers when asked about current candidates, frequently referencing past elections because official sites had not been updated during data collection.

On politically sensitive questions, we found that:

  • LLMs generally tended to provide neutral answers, either summarising the positions of the main political parties – the National People’s Power (NPP)/Janatha Vimukthi Peramuna (JVP), the Samagi Jana Balawegaya (SJB), the Sri Lanka People’s Front (SLPP), and the United National Party (UNP) – or avoiding party references altogether.
  • Gemini stood out as the best performer, providing entirely unbiased answers across English, Tamil, and Sinhala. In comparison, only 66.7% of responses from ChatGPT 4.0 and DeepSeek were considered neutral. Copilot was the most biased, with two-thirds of its responses showing partisan leaning – mainly due to the omission of perspectives from smaller but politically significant parties.
  • Sinhala responses were the most neutral overall, with 87.5% rated as unbiased. In contrast, Tamil answers showed the highest rate of bias, with 37.5% containing partisan content. For example, ChatGPT 4.0 and Copilot often leaned toward the NPP/JVP when advising users on issues like workers’ rights and LGBTQ+ protections. Half of the English responses also showed some bias, with DeepSeek and Copilot favouring parties such as the NPP, SJB, and other progressive groups.
  • Questions about workers’ rights triggered the highest levels of bias across models’ responses, more so than those related to LGBTQ+ rights in Sri Lanka.

These patterns of bias add to growing concerns about how LLMs handle sensitive election-related content. They echo findings from previous DRI research, which showed that LLMs can spread misinformation – both by providing inaccurate information about the electoral process and by displaying bias across languages and political contexts. However, this study marks a shift: whereas models like Gemini had previously refused to answer election-related questions in other settings, such as during the 2025 German federal elections, they now offer responses, even when those answers are incorrect. This change may reflect a recent shift in Google’s policy, or it may reveal inconsistencies in how restrictions are applied across contexts – suggesting that LLM providers are handling these risks case by case rather than through a universal policy.

Based on these findings, we recommend that:

  • Voters consult official websites and resources, rather than AI-powered chatbots and search engines.
  • Chatbot providers train their models to avoid answering questions about the electoral process or political issues, instead directing users to official sources from the election authorities, or ensure their responses fully comply with Sri Lanka’s constitutional media guidelines (Article 104B(5)(A)).
  • The Electoral Commission of Sri Lanka collaborate with technology providers to monitor the spread of election-related misinformation and promote its official digital platforms.
