Generative artificial intelligence (AI) can produce persuasive text, audio, images or video at low cost. These tools already assist political actors by crafting fundraising emails, tailoring campaign ads and testing messages on specific audiences. At the same time, many fear AI‑generated misinformation and deepfakes will undermine elections, distort voting behaviour and corrode trust in public institutions. This report synthesises research and developments through July 2025 on the interplay between AI and democracy. It covers five themes: political campaigning, voting behaviour, misinformation, deepfakes and public trust. Evidence comes from peer‑reviewed studies, policy reports and news articles.
How generative AI is reshaping political campaigns
Content generation and micro‑targeting
- Low‑cost content creation. Generative AI allows campaigns to quickly produce videos, images, speeches, social‑media posts or fundraising emails. The Brennan Center notes that AI tools reduce the need for large digital teams and enable low‑resource campaigns to produce sophisticated content (brennancenter.org). AI‑generated advertisements can be tailored to different voter segments, enabling personalised appeals at scale (brennancenter.org). Commercial platforms already offer “AI‑powered” ad tools for political campaigns (techpolicy.press).
- Potential democratisation of campaigning. Interviews with U.S. campaign consultants in 2024 indicate that local and under‑resourced campaigns are experimenting with generative AI because it lowers entry barriers (mediaengagement.org). Consultants said AI accelerates data analysis, message testing and fundraising, potentially engaging new voters or allowing campaigns to mimic cultural and linguistic identities (mediaengagement.org). However, they note that well‑funded campaigns can afford custom models, so AI may widen resource inequalities (mediaengagement.org).
- Micro‑targeting and personalisation. Generative AI can automate the creation of many message variants, offering a theoretical ability to micro‑target voters (see the sketch after this list). A 2024 study summarised by TechPolicy.Press found that large language models (LLMs) could generate personalised political messages that slightly increased persuasiveness; the authors warned that such techniques could be used for voter suppression or misinformation (techpolicy.press). Observers worry that micro‑targeting fragments the electorate because voters receive different versions of a party’s message, making it harder to hold parties accountable (blogs.lse.ac.uk). A subsequent 2025 study, “The Levers of Political Persuasion with Conversational AI,” directly tested AI‑driven persuasion across 76,770 participants and 707 issues. It found that model size and post‑training increased persuasiveness but that personalisation (providing demographic or attitudinal data to the model) increased persuasion by less than one percentage point (arxiv.org). The most persuasive prompts encouraged models to provide more factual information; adding information increased persuasion by 27%, whereas prompts like moral reframing performed worse (arxiv.org). Thus, while generative AI can slightly tailor messages, the main drivers of persuasion are model training and information density, not micro‑targeting.
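To make the mechanism concrete, here is a minimal sketch of how a campaign tool might generate segment‑specific variants of a single message with an LLM. It is illustrative only: the model name, voter segments and base message are hypothetical placeholders, and it assumes the OpenAI Python client as a stand‑in for whatever commercial ad tool a campaign actually uses.

```python
# Illustrative sketch of micro-targeted message generation.
# Assumes the OpenAI Python client (pip install openai) with an API key in
# the OPENAI_API_KEY environment variable; the model name, segments and
# base message below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

BASE_MESSAGE = "Our candidate will expand access to affordable childcare."

# Hypothetical voter segments a campaign might target.
SEGMENTS = {
    "young_renters": "urban renters aged 18-29, focused on cost of living",
    "rural_parents": "rural parents of school-age children",
    "retirees": "retirees concerned about fixed incomes",
}

def tailor(segment_desc: str) -> str:
    """Ask the model to rewrite the base message for one audience segment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rewrite the campaign message for the audience "
                        "described, keeping all factual claims unchanged."},
            {"role": "user",
             "content": f"Audience: {segment_desc}\nMessage: {BASE_MESSAGE}"},
        ],
    )
    return response.choices[0].message.content

for name, desc in SEGMENTS.items():
    print(f"--- {name} ---\n{tailor(desc)}\n")
```

Given the research above, the expected persuasion gain from this kind of personalisation is small; the same loop pointed at rapid message testing is where consultants report the larger productivity benefit.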

Risks and misuse in campaigns
- AI‑generated misinformation. AI models can produce false or misleading content at scale, enabling bad actors to generate misleading ads or conspiracy narratives. The Brennan Center warns that unsupervised AI may create bland, biased or inaccurate messages and could be exploited to suppress votes or incite violence (brennancenter.org).
- Liar’s dividend and label distrust. Awareness of deepfake technology allows politicians to dismiss authentic recordings as fakes. A Brennan Center essay notes that public figures might claim real evidence is fabricated, fostering the “liar’s dividend” (brennancenter.org). States like Florida require AI‑generated political ads to carry a disclaimer (ncsl.org), but no federal mandate exists.
- Legal responses. After incidents such as a deepfake robocall telling New Hampshire voters to skip the 2024 primary, legislators have moved to regulate AI in elections. By May 2025 half of U.S. states (25) had enacted laws to regulate political deepfakes; twenty of these states passed their laws after January 2024 (citizen.org). Public Citizen’s tracker shows that states across the political spectrum are adopting bipartisan measures requiring disclosure or prohibiting deceptive AI in campaign communications (citizen.org). However, Congress has not enacted similar federal protections (citizen.org), and some bills could pre‑empt state efforts (citizen.org).
- Campaign finance implications. A Congressional Research Service brief notes that no current federal statutes explicitly govern the use of AI in political campaigns. Regulatory gaps include disclosure of AI‑generated content and rules on targeting voters, leaving the Federal Election Commission to decide whether to treat AI‑generated deepfakes as “fraudulent misrepresentation.”
Effects on voting behaviour and persuasion
Limited impact of AI‑enabled persuasion
Research suggests that generative AI has a modest effect on political attitudes. The 2025 “Levers of Political Persuasion” experiment found that even the most persuasive models produced an average attitude change of roughly 9–12 percentage points compared with control participants (arxiv.org). Personalisation provided minimal additional influence (arxiv.org). Moreover, reward‑model post‑training (designed to select the most persuasive responses) increased persuasiveness but reduced factual accuracy, showing a trade‑off between persuasion and truthfulness (arxiv.org).
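For intuition about what such an estimate involves, the toy calculation below computes a treatment‑control difference in post‑conversation agreement on a 0–100 attitude scale. The data are synthetic and the numbers arbitrary; it simply shows the arithmetic behind a “percentage‑point attitude change” claim.

```python
# Toy illustration of estimating a persuasion effect: the difference in mean
# post-treatment agreement (0-100 scale) between participants who chatted
# with a model and a control group. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

control = rng.normal(50, 20, size=1000).clip(0, 100)  # no AI conversation
treated = rng.normal(58, 20, size=1000).clip(0, 100)  # after AI conversation

effect = treated.mean() - control.mean()              # percentage points
se = np.sqrt(treated.var(ddof=1) / len(treated)
             + control.var(ddof=1) / len(control))    # standard error

print(f"Estimated attitude change: {effect:.1f} pp (SE {se:.1f})")
```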
Evidence outside the lab also indicates that AI‑enabled misinformation is less potent than feared. Analyses of global elections in 2024 found only a handful of AI‑generated disinformation or deepfake incidents, with no clear effect on outcomes (knightcolumbia.org). The Reuters Institute’s 2025 review argues that mainstream news remains the most trusted source for many voters and that political identity and pre‑existing beliefs shape the consumption of misinformation (reutersinstitute.politics.ox.ac.uk). A UK Government media‑literacy roundtable concluded that research has not found significant impacts of deepfakes or general disinformation on voting choice, partly because it is difficult to prove causal effects (gov.uk).
Polarisation and accountability
Even when persuasion effects are small, AI‑enabled micro‑targeting can fragment the public sphere. The LSE observed that UK parties used identical ads with slight variations to appeal to different demographics (blogs.lse.ac.uk). This practice may prevent voters from seeing the full policy platforms of parties and undermine democratic accountability (blogs.lse.ac.uk). Scholars caution that micro‑targeting may reinforce echo chambers, because individuals receive content that aligns with their pre‑existing preferences (mediaengagement.org). Transparent ad libraries and disclosure requirements can help citizens see how different groups are targeted; the sketch below shows how such a library can be queried.
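As an illustration of the transparency point, the sketch below queries a public ad library for one sponsor’s ads and prints how each creative was distributed across demographic groups. It is modelled on Meta’s Ad Library API (the ads_archive endpoint); parameter and field names vary across Graph API versions and the search term is hypothetical, so treat this as an outline rather than a drop‑in script.

```python
# Sketch of querying a public ad library to compare how one sponsor's ads
# vary across audiences. Modelled on Meta's Ad Library API (ads_archive);
# parameter and field names differ across Graph API versions.
import requests

ACCESS_TOKEN = "YOUR_TOKEN"  # requires Meta developer access

resp = requests.get(
    "https://graph.facebook.com/v19.0/ads_archive",
    params={
        "search_terms": "example party",  # hypothetical sponsor
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": '["GB"]',
        "fields": "page_name,ad_creative_bodies,demographic_distribution",
        "access_token": ACCESS_TOKEN,
    },
    timeout=30,
)
resp.raise_for_status()

for ad in resp.json().get("data", []):
    bodies = ad.get("ad_creative_bodies", [])
    demo = ad.get("demographic_distribution", [])
    print(ad.get("page_name"), "|", bodies[:1], "|", demo[:3])
```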
Misinformation and AI‑enabled influence operations
Example incidents
- Deepfake robocalls and voter suppression. In January 2024 a deepfake robocall cloned President Joe Biden’s voice and urged New Hampshire voters to skip the Democratic primary. Roughly 5,000 voters received the call (aljazeera.com). The Louisiana consultant responsible was indicted on 13 counts of felony voter suppression (voanews.com), and the Federal Communications Commission proposed fines for him and the carrier that transmitted the calls (voanews.com).
- AI‑generated images and videos. Florida Governor Ron DeSantis’s presidential campaign circulated an AI‑generated video showing Donald Trump hugging Anthony Fauci, and other robocalls imitated Senator Lindsey Graham (aljazeera.com). In Slovakia, a deepfake audio clip surfaced two days before the 2023 election, allegedly recording opposition leader Michal Šimečka discussing electoral fraud. The clip went viral during the pre‑election moratorium, when candidates and media outlets could not publicly respond, contributing to speculation about AI swinging the election (misinforeview.hks.harvard.edu).
- Foreign interference and voice cloning. A U.S. Senate hearing on AI and deepfakes noted that voice‑cloning technologies are now free and accessible; a street magician rather than a technical expert created the Biden robocall (congress.gov). Senators warned that deepfakes can be deployed by foreign adversaries and domestic actors alike and emphasised the need for consent requirements and watermarking (e.g., the Protect Elections from Deceptive AI Act) (congress.gov).
Constraints on AI‑driven misinformation
- Limited distribution channels. Misinformation researchers emphasise that deepfakes and AI‑generated disinformation are not yet widespread. Many incidents remain isolated; the Knight First Amendment Institute’s review found only a handful of viral AI‑enabled disinformation cases in the 2024 elections (knightcolumbia.org).
- Demand‑side factors. Political identity, cognitive biases and selective exposure determine how people process misinformation. The UK Government roundtable concluded that there is no clear evidence that disinformation significantly influences voting behaviour (gov.uk). Instead, public awareness of deepfakes can create uncertainty and suspicion toward institutions (gov.uk). In Slovakia, low trust in government and media (only 18% of Slovaks trusted their government before the election) created fertile ground for pro‑Russian narratives (misinforeview.hks.harvard.edu).
- Trade‑off between persuasiveness and accuracy. The 2025 persuasion study found that conditions increasing persuasion (model scale, information prompts, reward modelling) also decreased factual accuracy (arxiv.org). When models were optimised for maximal persuasion, nearly 30% of their factual claims were inaccurate (arxiv.org). This suggests that AI‑driven misinformation and disinformation may succeed not because of micro‑targeting but because of high information density, at the cost of truthfulness.
Deepfakes and their impact on trust
Erosion of media credibility
Deepfakes blur the boundary between reality and fiction. A 2025 scoping review found that deepfake technology erodes media credibility and allows malicious actors to manipulate public perception, underscoring the need for policy interventions and media‑literacy programmes (journal.uii.ac.id). Pindrop Security’s 2025 survey of 2,000 U.S. consumers reported that 90% of respondents are concerned about deepfakes and voice cloning and only 7% trust mass media a great deal (pindrop.com).
The liar’s dividend and denial of evidence
Because people know deepfakes exist, powerful figures can dismiss authentic videos or audio recordings as forgeries. This phenomenon, known as the liar’s dividend, enables politicians to evade accountability (brennancenter.org). As deepfake technology becomes more realistic, citizens may treat all evidence with scepticism, making it harder to establish shared facts.
Experimental evidence on trust
A 2025 experiment exposed U.S. and Singaporean participants to a deepfake of an infrastructure failure. U.S. participants who viewed the deepfake expressed increased distrust in government, whereas Singaporean participants did not; higher education moderated the effect (pmc.ncbi.nlm.nih.gov). This suggests that deepfakes can erode institutional trust, but effects vary across contexts and are influenced by education.
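The “moderated by education” finding refers to a standard interaction test. The sketch below shows how such a moderation analysis is typically specified, using synthetic data and statsmodels; the variable names are illustrative, not those of the original study.

```python
# Illustrative moderation analysis: does education level change the effect
# of deepfake exposure on institutional distrust? Synthetic data; the
# specification mirrors a standard treatment x moderator interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "deepfake": rng.integers(0, 2, n),   # 1 = saw the deepfake
    "higher_ed": rng.integers(0, 2, n),  # 1 = university educated
})
# Simulated outcome: distrust rises with exposure, less so for the
# university-educated (the moderation we want the model to recover).
df["distrust"] = (
    3 + 0.8 * df["deepfake"] - 0.5 * df["deepfake"] * df["higher_ed"]
    + rng.normal(0, 1, n)
)

model = smf.ols("distrust ~ deepfake * higher_ed", data=df).fit()
# The deepfake:higher_ed coefficient is the moderation effect.
print(model.summary().tables[1])
```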
Case study: Slovakia’s “deepfake election”
The 2023 Slovak parliamentary election was widely cited as the first election “swayed” by a deepfake. However, a detailed analysis by the Harvard Kennedy School’s Misinformation Review argues that the deepfake’s effect cannot be separated from underlying conditions. Long‑standing pro‑Russian disinformation campaigns, low trust in government (only 18% of Slovaks trusted their government before the election) and pre‑existing conspiracy beliefs predisposed voters to narratives of fraud (misinforeview.hks.harvard.edu). The authors warn against “technopanics” and note that overly restrictive anti‑misinformation laws can undermine independent journalism (misinforeview.hks.harvard.edu).
Policy and legislative responses
State legislation in the United States
As of July 2025, Public Citizen’s tracker lists bills in every U.S. state regulating election deepfakes. By May 13, 2025, 25 states had enacted laws on the topic, up from five before 2024 (citizen.org). States such as Florida, Louisiana and Tennessee criminalise the distribution of deceptive AI‑generated media or require political ads to include AI‑generated content warnings (ncsl.org). Legislation is bipartisan and often includes civil and criminal penalties for creating or disseminating materially deceptive media (ncsl.org). Some examples include:
| Jurisdiction | Deepfake law status (2024‑25) | Key features |
|---|---|---|
| Florida | SB 850/HB 919 (2024) – enacted | Requires political ads and electioneering communications using AI or synthetic media to include a disclaimer; imposes criminal and civil penalties for non‑compliance (ncsl.org). |
| Louisiana | HB 316 HS1 (2024) – enacted | Creates the crime of unlawful dissemination or sale of images created by AI; prohibits non‑consensual use of someone’s likeness (ncsl.org). |
| Tennessee | HB 2091 (2024) – enacted | Replaces the Personal Rights Protection Act with the “Ensuring Likeness, Voice and Image Security (ELVIS) Act,” establishing property rights in an individual’s name, voice and likeness (ncsl.org). |
| Alaska | Multiple bills (SB 64, SB 2, SB 33, HB 358) considered or passed (2024‑25) | Legislation covers voter registration and defamation claims based on synthetic media, and prohibits deceptive media intended to influence elections (citizen.org). |
| California | AB 730 (2019), AB 2355/2839/2655 (2024), AB 502 (2025) | Early leader in deepfake regulation; laws require clear disclosure when political ads include deepfakes and permit court orders removing deceptive content. |
Federal proposals and international initiatives
- Federal Election Commission (FEC) rulemaking. In June 2023 the FEC declined a petition to ban or regulate political deepfakes, but pressure remains to clarify whether existing “fraudulent misrepresentation” rules apply (citizen.org). Separately, FCC Chairwoman Jessica Rosenworcel has proposed requiring disclosure of AI‑generated content in broadcast political ads (voanews.com).
- Protect Elections from Deceptive AI Act and NO FAKES Act. U.S. Senators Amy Klobuchar and Josh Hawley introduced bipartisan bills to prohibit deceptive political deepfakes, require consent and watermarking, and clarify that Section 230 does not shield platforms from liability (congress.gov).
- EU and UK actions. The EU’s Digital Services Act and AI Act (in force since 2024) contain provisions requiring large platforms to ensure transparent labelling of synthetic media and risk‑management measures. The UK Government’s 2025 media‑literacy review recommends increasing public awareness and warns that overreacting to deepfakes could erode free expression (gov.uk).
Calls for transparency and detection technology
Experts advocate for watermarking AI‑generated media, mandatory disclosure of synthetic content and publicly accessible ad libraries. Witnesses at a 2024 U.S. Senate hearing emphasised that voice‑cloning and deepfake tools are free and widely accessible and urged Congress to adopt bipartisan safeguards (congress.gov). Companies such as Reality Defender and Resemble AI provide free detection tools for consumers (congress.gov). The sketch below shows, in miniature, how statistical watermark detection works.
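One widely discussed approach is statistical watermarking of LLM text, in which generation biases each token toward a pseudorandom “green” subset of the vocabulary and detection counts how often that bias appears. The toy detector below is a simplified word‑level version of the scheme proposed in the research literature (Kirchenbauer et al., 2023); it is a self‑contained illustration, not any vendor’s actual method.

```python
# Toy detector for a "green-list" text watermark: during generation, each
# token is nudged toward a pseudorandom green half of the vocabulary seeded
# by the previous token; detection counts green tokens and computes a
# z-score against the no-watermark baseline of 50%.
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of tokens are green

def watermark_z_score(text: str) -> float:
    tokens = text.split()
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    # Without a watermark, hits ~ Binomial(n, 0.5).
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

z = watermark_z_score("example text that may or may not carry a watermark")
print(f"z = {z:.2f}  (large positive values suggest watermarked text)")
```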
Building resilience and public trust
Media literacy and public awareness
Evidence from the UK Government roundtable indicates that while deepfakes have not substantially affected voting behaviour, public awareness can breed uncertainty and reduce trust in institutions (gov.uk). Programmes that teach people how AI‑generated media works, how to verify sources and how to avoid sharing misinformation can reduce susceptibility. Encouragingly, studies show that people generally prefer to share accurate content and will avoid spreading misinformation when reminded of accuracy (gov.uk).
Strengthening institutions and transparency
The Slovak case demonstrates that deepfakes gain traction in societies with low trust in government and media (misinforeview.hks.harvard.edu). Strengthening independent journalism, improving public services and fostering political stability may be more effective than focusing solely on technology.
Encouraging authentic political engagement
AI will likely remain part of campaigning, but democratic health depends on authentic engagement—door‑to‑door canvassing, town‑hall meetings and debates. Regulators should ensure that AI tools complement rather than replace human interaction and should guard against misuse that undermines fairness or suppresses votes.
Conclusion
Generative AI is rapidly changing political communication by lowering barriers to content creation and enabling campaigns to test and scale messages. Yet evidence through mid‑2025 suggests its impact on voting behaviour is limited: personalisation produces marginal gains, and persuasion relies more on information density and model training than on micro‑targeting (arxiv.org). AI‑enabled misinformation and deepfakes are real threats, as illustrated by deepfake robocalls and viral videos, but their reach and influence have so far been constrained. Nevertheless, deepfakes erode media credibility, increase scepticism and enable a liar’s dividend, posing risks to public trust (brennancenter.org). State legislatures are moving quickly to regulate AI in election communications, but national and international frameworks are still emerging.
Ultimately, preserving democratic integrity in the age of generative influence requires a balanced approach: invest in detection and transparency tools, promote media literacy, strengthen institutions, and adopt targeted legislation that addresses clear harms without stifling legitimate political speech. As AI models become more powerful and accessible, continuous monitoring, research and adaptive governance will be essential to protect elections and maintain public trust.