What AI Could Mean for Democracy, Trust, and Reality
AI is not just changing technology. It is changing the information environment democracy depends on: what people see, what they believe, who they trust, how campaigns persuade, how institutions communicate, and whether shared reality can survive synthetic media at scale.
AI can support democracy through access, translation, civic information, and transparency, but it can also intensify misinformation, manipulation, polarization, and trust collapse if society cannot verify what is real.
Key Takeaways
- AI could reshape democracy by changing how people receive information, evaluate evidence, encounter political messages, trust institutions, and participate in civic life.
- Deepfakes, synthetic audio, AI-generated images, fake screenshots, bot networks, and automated propaganda can make misinformation cheaper, faster, more convincing, and harder to trace.
- The biggest risk is not only fake content. It is trust collapse: when people become unsure what is real, bad actors can deny true evidence and spread doubt more easily.
- AI can also help democracy by improving civic access, translating government information, summarizing policy, detecting manipulation, supporting accessibility, and helping people understand complex issues.
- Elections may face new risks from AI-generated impersonation, targeted persuasion, voter suppression messages, fake scandals, automated harassment, and synthetic public opinion.
- Democratic AI governance requires transparency, accountability, auditability, privacy protections, strong institutions, media literacy, independent journalism, and rules for high-risk systems.
- The future of democracy with AI depends on whether society builds verification systems, civic education, public trust, and accountability faster than synthetic manipulation scales.
Democracy runs on reality.
Not perfect reality. Humans have always argued, spun, exaggerated, lied, misremembered, and turned group chats into tiny constitutional crises.
But democracy needs enough shared reality to function.
People need some way to agree that an event happened, a statement was made, a vote was counted, a source is credible, a leader is accountable, a video is real, a law says what it says, and a public problem is not just a hallucination produced by whichever feed had the best emotional hook.
AI puts pressure on that.
Generative AI can create realistic text, images, audio, video, fake documents, fake screenshots, fake experts, fake comments, fake outrage, fake consensus, fake scandals, and fake evidence.
AI systems can also summarize the news, answer civic questions, moderate content, recommend political posts, personalize persuasion, detect fraud, monitor public sentiment, and help governments deliver services.
That means AI is not only a technology issue.
It is a democracy issue.
It affects trust.
It affects speech.
It affects elections.
It affects journalism.
It affects government power.
It affects whether citizens can tell the difference between a real event and a generated performance wearing reality’s jacket.
This does not mean AI automatically destroys democracy.
That is too simple, too theatrical, and frankly gives AI too much main-character energy.
AI could also make democracy more accessible. It could help people understand policies, translate government information, summarize public documents, detect manipulation, support disabled voters, and make civic participation easier.
The question is not whether AI is good or bad for democracy.
The question is whether democratic societies can manage AI well enough to preserve trust, accountability, and shared reality while still using the parts that genuinely help.
This article breaks down what AI could mean for democracy, trust, and reality: the risks, the benefits, the weirdness, and the civic guardrails we will need if truth is not going to become a premium subscription.
Why AI and Democracy Matter
AI and democracy matter because democracy depends on informed citizens, trusted institutions, open debate, fair elections, independent media, and the ability to hold power accountable.
AI touches all of that.
It can influence:
- What information people see
- Which stories spread
- How political messages are targeted
- How quickly misinformation scales
- Whether citizens trust evidence
- How campaigns communicate
- How governments deliver services
- How platforms moderate speech
- How journalists verify content
- How public opinion is measured
- How institutions explain decisions
Democracy is not only voting.
Voting is the visible ritual. The deeper system is public trust, civic knowledge, legitimacy, accountability, and shared information.
If AI improves access to reliable information, democracy could become more participatory and more understandable.
If AI floods the public sphere with manipulation, fake evidence, and synthetic outrage, democracy could become more fragile.
The stakes are high because AI operates at scale.
A single person can now generate content that looks like a media operation. A campaign can personalize messages more cheaply. A scammer can impersonate a public official. A hostile actor can seed doubt across platforms. A citizen can be overwhelmed by conflicting claims and decide that truth is too exhausting to pursue.
Democracy does not require everyone to agree.
It does require people to believe reality is still reachable.
AI Changes the Information Environment
AI changes democracy first by changing the information environment.
The public sphere used to be shaped by newspapers, television, radio, campaigns, institutions, community networks, and later social media platforms.
Now AI becomes another layer.
It can generate, summarize, recommend, translate, and distort information, and it can automate how that information spreads.
AI can create:
- Articles
- Comments
- Images
- Videos
- Audio clips
- Memes
- Political ads
- Fake news sites
- Bot posts
- Personalized messages
- Fake screenshots
- Translated propaganda
- Targeted scam messages
This makes public information faster, cheaper, and easier to manipulate.
It also makes good information easier to access when used responsibly.
A citizen can ask AI to explain a ballot measure. A journalist can use AI to analyze documents. A watchdog group can use AI to detect coordinated campaigns. A voter can translate official information into a language they understand.
The problem is that the same tools that help civic understanding can also help civic manipulation.
AI lowers the cost of both explanation and deception.
That is the democratic tension.
Deepfakes and Synthetic Media
Deepfakes are AI-generated or AI-manipulated media that can make people appear to say or do things they never said or did.
They can include fake videos, cloned voices, generated images, altered recordings, and synthetic scenes.
Deepfakes matter because democracy depends on evidence.
Video used to feel persuasive because people believed they were seeing something real.
Audio felt persuasive because people recognized a voice.
Images felt persuasive because they seemed captured from the world.
AI weakens that assumption.
Deepfakes can be used to:
- Impersonate candidates
- Spread fake scandals
- Suppress votes through false messages
- Harass journalists or activists
- Target women and public figures
- Create fake evidence
- Manipulate public opinion
- Damage trust in institutions
- Confuse people during fast-moving crises
The most dangerous deepfakes may not be the polished ones that fool everyone.
They may be the fast, cheap, good-enough ones released at the right moment, before journalists, platforms, campaigns, or election officials can respond.
Democracy has always had rumors.
AI gives rumors a production studio.
The Liar’s Dividend
The liar’s dividend is one of the sneakiest AI threats to reality.
It means that once people know media can be faked, liars can dismiss real evidence by claiming it is fake.
A real video emerges.
“Deepfake.”
A real recording leaks.
“AI-generated.”
A real document appears.
“Fabricated.”
A real scandal breaks.
“Synthetic smear campaign.”
This is dangerous because AI does not only create fake evidence.
It can weaken real evidence.
If everything could be fake, then anything inconvenient can be called fake.
That benefits people with power, bad actors, corrupt officials, extremists, scammers, and anyone who would prefer accountability to dissolve into a fog machine.
The liar’s dividend turns uncertainty into a weapon.
The public does not have to believe the lie completely.
They only have to become unsure enough to disengage.
That is how reality fatigue becomes political strategy.
AI and Elections
Elections are one of the most obvious places AI could affect democracy.
Campaigns, voters, platforms, journalists, election officials, and bad actors all operate in a high-pressure information environment where timing matters.
AI can affect elections through:
- Deepfake candidate videos
- Voice cloning
- Fake robocalls
- Personalized political ads
- Automated campaign content
- Foreign influence operations
- Fake local news sites
- Bot networks
- Voter suppression messages
- Fake polling information
- Harassment of candidates or officials
- AI-generated fundraising scams
Election misinformation does not need to persuade everyone.
It may only need to confuse a small group, suppress turnout, inflame distrust, or create enough doubt after an election to weaken legitimacy.
The most dangerous moments may be close to election day, when there is little time to verify and correct false claims.
An AI-generated voice message claiming a polling place changed.
A fake video of a candidate saying something inflammatory.
A synthetic image of election fraud.
A bot swarm pushing the same false claim across platforms.
Democracy needs response systems fast enough for the speed of synthetic media.
Otherwise, fact-checks arrive after the damage has already voted.
Personalized Persuasion and Political Targeting
AI can make political persuasion more personalized.
Campaigns have long used data to target voters. AI can make that targeting cheaper, faster, and more customized.
Instead of one campaign message for a broad group, AI could help generate many variations for different audiences, concerns, locations, identities, emotional triggers, or policy interests.
AI-driven persuasion could involve:
- Personalized campaign emails
- Targeted social ads
- Customized fundraising messages
- Generated talking points for canvassers
- Automated voter outreach
- Issue-specific message testing
- Microtargeted persuasion
- Language and tone adaptation
Some of this is normal campaigning with better tools.
The risk is manipulation.
If AI systems can generate emotionally tailored messages at scale, voters may be targeted with messages designed to exploit fear, anger, identity, loneliness, resentment, or confusion.
Democracy needs persuasion.
It does not need invisible psychological puppetry with a campaign logo.
The line between persuasion and manipulation will become harder to police when AI can personalize political communication at scale.
Bots, Swarms, and Artificial Public Opinion
AI can make fake public opinion easier to manufacture.
Bots are not new, but AI makes them more convincing. Instead of repetitive spam accounts using clumsy scripts, AI-generated accounts can produce more natural language, adapt to context, argue, joke, mimic local slang, respond to replies, and coordinate across platforms.
AI bot activity can create:
- Fake consensus
- Artificial outrage
- Coordinated harassment
- Manipulated trending topics
- False grassroots movements
- Amplified conspiracy theories
- Attacks on journalists or officials
- Comment section flooding
- Manufactured doubt
This matters because people use social signals to decide what matters.
If thousands of accounts appear to care about an issue, people may assume the issue is bigger than it is. If a journalist is attacked by a swarm, others may self-censor. If a false claim trends, media may cover it simply because it is trending.
Artificial public opinion can distort democratic debate.
It creates the illusion of people where there may be machines, campaigns, or coordinated actors wearing human-shaped usernames.
Democracy needs public voice.
AI makes it easier to counterfeit the chorus.
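Detection is not hopeless, though. A common first pass for spotting coordinated activity is looking for many accounts pushing near-identical text within a short window. The Python sketch below is a simplified illustration of that idea, with made-up sample posts and arbitrary thresholds; real influence-operation analysis layers on many more signals, such as account age, posting cadence, and network structure.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial edits do not hide copies.
    return " ".join(text.lower().split())

def find_copy_paste_clusters(posts, window_minutes=10, min_accounts=5):
    """Flag texts posted by many distinct accounts within a short burst.
    `posts` is a list of dicts with 'account', 'text', and 'time' keys."""
    by_text = defaultdict(list)
    for post in posts:
        by_text[normalize(post["text"])].append(post)

    window = timedelta(minutes=window_minutes)
    suspicious = []
    for text, group in by_text.items():
        accounts = {p["account"] for p in group}
        times = sorted(p["time"] for p in group)
        bursty = len(times) > 1 and (times[-1] - times[0]) <= window
        if len(accounts) >= min_accounts and bursty:
            suspicious.append({"text": text, "accounts": sorted(accounts)})
    return suspicious

# Example: five accounts posting the same line within minutes get flagged.
sample = [
    {"account": f"user{i}",
     "text": "The election was STOLEN, share before they delete this!",
     "time": datetime(2024, 11, 1, 12, i)}
    for i in range(5)
]
print(find_copy_paste_clusters(sample))
```

The catch: exact-match clustering catches lazy copy-paste campaigns, while AI-written bots can paraphrase every post, which is exactly why synthetic swarms are harder to spot than the old spam networks.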
Trust Collapse and Reality Fatigue
The deepest risk is trust collapse.
Not one fake video.
Not one misleading article.
Not one bot campaign.
The deeper risk is people getting exhausted by uncertainty and deciding that truth itself is unreachable.
Reality fatigue can sound like:
- “Everything is fake.”
- “You cannot trust anyone.”
- “All media lies.”
- “All evidence is manipulated.”
- “It is impossible to know what happened.”
- “Everyone has their own truth.”
That may sound like healthy skepticism.
It is actually a vulnerability.
When people believe nothing can be trusted, they often retreat into tribe, emotion, identity, or whichever source confirms what they already feel.
That is terrible for democracy.
Democracy needs disagreement, but it also needs reality anchors.
If every fact becomes negotiable and every source becomes suspect, public debate becomes less about evidence and more about loyalty.
AI did not invent distrust.
But it can accelerate the conditions that make distrust profitable.
Institutions Under Pressure
AI will put pressure on institutions that already struggle with trust.
Governments, courts, schools, news organizations, election offices, public health agencies, and technology platforms may all face new demands for verification, transparency, and speed.
Institutions will need to respond to:
- Fake official announcements
- Impersonation of public leaders
- AI-generated legal or policy misinformation
- False emergency alerts
- Synthetic evidence
- Election misinformation
- Automated harassment campaigns
- Public confusion around AI use
- Demands for transparency in automated decisions
The institutions that survive the AI trust crisis best will probably be the ones that communicate clearly, verify quickly, explain decisions, publish evidence, correct mistakes, and avoid treating the public like a compliance audience.
Trust is not restored by saying “trust us.”
That line has the energy of a locked door.
Trust is restored by showing receipts, building reliable processes, admitting uncertainty, correcting errors, and making accountability visible.
AI raises the bar for institutional communication.
Because silence creates space for synthetic nonsense to move in and redecorate.
Journalism, Fact-Checking, and the Open Web
Journalism becomes more important in an AI-shaped information world.
Also more strained.
AI can help journalists analyze documents, translate materials, summarize data, find patterns, and investigate coordinated manipulation. But AI also creates more content to verify, more fake evidence to debunk, more synthetic media to inspect, and more pressure on already fragile media business models.
Journalists and fact-checkers may need to verify:
- Images
- Videos
- Audio clips
- Screenshots
- Documents
- Social media posts
- Bot networks
- Source identities
- AI-generated claims
- Fake local news sites
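For image claims, one widely used verification aid is perceptual hashing: comparing a suspect image against a known original to tell whether it is the same picture, a lightly edited copy, or something else entirely. Here is a minimal sketch using the open-source Pillow and imagehash libraries; the file paths are placeholders, and a small hash distance suggests, but does not prove, a shared source.

```python
from PIL import Image   # pip install pillow imagehash
import imagehash

def hash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between perceptual hashes of two images.
    Roughly 0-8 for phash usually means near-duplicates; larger
    distances usually mean different pictures."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # imagehash overloads subtraction as Hamming distance

# Hypothetical usage: compare a viral "scandal" photo to an archived original.
# distance = hash_distance("viral_post.jpg", "archive_original.jpg")
# print("near-duplicate" if distance <= 8 else "likely a different image")
```

Perceptual hashing helps with recycled, cropped, or recaptioned images. It does nothing for a fully generated one, which is why provenance signals and original reporting still matter.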
AI search and answer engines also create a traffic problem for publishers.
If AI systems summarize news without sending readers to original sources, journalism may lose audience and revenue.
That is dangerous because democracy needs original reporting.
You cannot fact-check society with recycled summaries of journalism nobody can afford to produce.
The future of trust depends partly on whether independent journalism remains sustainable.
The open web cannot be treated as a buffet for AI systems while the cooks go unpaid.
How AI Could Improve Civic Access
AI is not only a threat to democracy.
It could also make democracy more accessible.
Government information is often complicated, buried, confusing, legalistic, or written in a tone that suggests clarity was removed during procurement.
AI could help citizens understand civic information more easily.
AI could support democracy by helping people:
- Understand ballot measures
- Summarize legislation
- Compare candidate positions
- Translate government information
- Navigate public services
- Understand rights and eligibility
- Access disability-friendly formats
- Find public meeting information
- Track local issues
- Submit public comments
- Understand policy tradeoffs
This is one of the best-case democratic uses of AI.
AI can make complicated systems easier to navigate.
It can help people participate who might otherwise be excluded by language, disability, time, bureaucracy, or information overload.
But civic AI tools need neutrality, transparency, source links, and clear limits.
A tool that explains democracy should not quietly become a tool that steers democracy.
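One way to make that guardrail concrete is in the tool's design: never return a civic answer that cannot point at an official source. The sketch below is a hypothetical Python response structure, not any real product; the URL and wording are placeholders, and the point is the design choice that answers either carry source links or decline.

```python
from dataclasses import dataclass, field

@dataclass
class CivicAnswer:
    """Hypothetical response format for a civic Q&A tool."""
    summary: str
    sources: list[str] = field(default_factory=list)
    note: str = "AI-generated summary. Verify against the linked official sources."

def answer_or_decline(summary: str, sources: list[str]) -> CivicAnswer:
    # Design choice: never present an unsourced summary as civic fact.
    if not sources:
        return CivicAnswer(
            summary="No official source was found for this question.",
            note="The tool declined to answer rather than guess.",
        )
    return CivicAnswer(summary=summary, sources=sources)

# Example: a ballot-measure summary must point back to the official text.
print(answer_or_decline(
    "Measure 12 raises the local sales tax by 0.5% to fund transit.",
    ["https://example.gov/elections/measure-12-full-text"],
))
```

The refusal path matters as much as the answer path: a civic tool that guesses confidently is just a friendlier misinformation vector.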
AI Governance and Democratic Oversight
AI governance is how societies decide what AI systems can do, who is accountable, what rights people have, and how risks are managed.
Democratic oversight matters because AI systems increasingly affect public life.
AI governance may include:
- Transparency requirements
- Disclosure of AI-generated content
- Rules for deepfakes
- Election-specific AI restrictions
- Audits for high-risk systems
- Privacy protections
- Bias testing
- Public procurement standards
- Appeals processes for automated decisions
- Safety testing for powerful models
- Platform accountability
- Rules around political ads
The challenge is balance.
Too little governance allows manipulation, discrimination, privacy violations, and unchecked power.
Heavy-handed or badly designed governance can create censorship, entrench the largest companies, suppress useful tools, or give governments too much control over speech.
Democratic AI governance needs public debate, independent oversight, technical expertise, civil liberties protections, and accountability that applies to both companies and governments.
Leaving AI governance entirely to private platforms is not enough.
Leaving it entirely to governments is also risky.
Welcome to the part where democracy has to do democracy.
Free Speech, Moderation, and Censorship Risks
AI creates difficult free speech questions.
Platforms and governments may need to respond to AI-generated misinformation, deepfakes, harassment, impersonation, and coordinated manipulation. But efforts to control harmful content can also threaten legitimate speech if they are vague, biased, politicized, or overbroad.
The core tension:
- Too little moderation can allow manipulation and abuse to spread.
- Too much moderation can suppress legitimate expression.
- Bad moderation can be uneven, political, or discriminatory.
- Automated moderation can misunderstand context.
- Governments can use misinformation rules to silence critics.
This is not an easy problem.
Anyone pretending it is easy is probably trying to sell a platform policy or win a cable segment.
Democracies need rules that target harmful deception and manipulation without crushing dissent, satire, journalism, activism, or unpopular speech.
That means precision matters.
A fake emergency alert is not the same as parody.
A deepfake meant to suppress votes is not the same as political criticism.
A coordinated bot campaign is not the same as citizens organizing.
AI content policy needs nuance.
Which is unfortunate, because nuance has terrible engagement metrics.
AI in Government and Public Services
Governments will increasingly use AI in public services.
This could improve efficiency, but it also raises serious democratic accountability questions.
AI may be used in:
- Benefits administration
- Tax services
- Immigration systems
- Public health
- Emergency response
- Fraud detection
- Public safety analysis
- Transportation planning
- Education administration
- Citizen service chatbots
- Document processing
Public-sector AI must meet a higher standard because government decisions can affect rights, benefits, safety, freedom, and access to essential services.
Citizens should know when AI is used.
They should have ways to appeal decisions.
Systems should be tested for bias and errors.
Agencies should explain how AI tools are governed.
Private vendors should not become unaccountable decision-makers inside public systems.
Efficiency is not enough.
Government AI must be legitimate.
A faster bureaucracy that citizens cannot challenge is not modernization.
It is bureaucracy with a black box upgrade.
Privacy, Surveillance, and Power
AI can expand surveillance.
That is one of the biggest democracy risks.
AI systems can analyze faces, voices, movement, behavior, social media posts, messages, purchases, location patterns, public records, and massive datasets. Governments and companies can use that power for security, personalization, fraud detection, advertising, political targeting, or control.
AI surveillance concerns include:
- Facial recognition
- Predictive policing
- Workplace monitoring
- Political profiling
- Location tracking
- Social media analysis
- Biometric data collection
- Automated risk scoring
- Mass data aggregation
- Targeted manipulation
Privacy matters to democracy because people need room to think, organize, dissent, associate, investigate, protest, and change their minds without constant monitoring.
A society where everything is tracked is not simply more efficient.
It is less free.
AI makes privacy more urgent because it can extract patterns from data that once felt harmless in isolation.
The issue is not only what data is collected.
It is what can be inferred from it.
The Benefits for Democracy
AI could strengthen democracy if used responsibly.
That is easy to forget because the risks are loud, and because deepfakes have the dramatic flair of a villain with editing software.
But AI can help democratic participation too.
Potential benefits include:
- Better access to civic information
- Translation of government resources
- Accessibility support for disabled citizens
- Summaries of complex policies
- Tools for public comments and participation
- Detection of coordinated manipulation
- Support for investigative journalism
- Faster analysis of public records
- Better public service navigation
- More responsive government communication
- Improved election administration support
AI can help people understand systems that are currently too complex, too slow, or too inaccessible.
It can make public information easier to navigate.
It can help watchdogs detect patterns.
It can help journalists analyze large document sets.
It can help citizens ask better questions.
The democratic upside is real.
But it depends on transparency, independence, accountability, and trust.
AI that helps people understand power is useful.
AI that helps power manipulate people is dangerous.
The Major Risks
The major risks of AI for democracy are not limited to fake media.
The broader risk is an information environment where manipulation scales faster than trust can recover.
Major risks include:
- Deepfakes and synthetic media
- Election misinformation
- Voter suppression messages
- Political impersonation
- Automated propaganda
- Bot swarms
- Personalized manipulation
- Reality fatigue
- Harassment of public figures
- Distrust in real evidence
- Opaque government AI systems
- Surveillance and privacy erosion
- Biased automated decisions
- Weak accountability
- Power concentration among platforms and AI companies
The most dangerous future is not one where everyone believes every fake.
It is one where nobody knows what to believe, so people retreat into tribes, influencers, parties, conspiracy networks, or whatever source makes uncertainty feel emotionally manageable.
Democracy does not die only from lies.
It can also weaken from exhaustion.
When reality becomes too much work, manipulation gets easier.
How to Protect Shared Reality
Protecting shared reality will require more than telling people to “do their own research,” which often means “watch three videos and become extremely confident.”
Society needs better systems.
Individuals need better habits.
Platforms need better accountability.
Governments need better rules.
Institutions need better communication.
Ways to protect shared reality include:
- Media literacy education
- AI literacy education
- Source verification habits
- Content provenance tools
- Watermarking and authenticity signals (see the sketch after this list)
- Deepfake detection tools
- Clear election misinformation response plans
- Transparent platform policies
- Independent journalism support
- Public-interest technology funding
- Privacy protections
- Rules for political deepfakes
- Audit requirements for high-risk AI systems
- Clear appeals for government AI decisions
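The provenance and watermarking items above rest on one simple idea: the publisher cryptographically signs the media it releases, and anyone can later check that signature against the file. The sketch below illustrates the idea with an Ed25519 signature over a file hash, using Python's cryptography library; it is a simplified illustration of the concept, not an implementation of any specific standard.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# A newsroom or camera vendor generates a keypair once and publishes the public key.
signing_key = ed25519.Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

def sign_media(file_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of a media file at publication time."""
    return signing_key.sign(hashlib.sha256(file_bytes).digest())

def verify_media(file_bytes: bytes, signature: bytes) -> bool:
    """Check that a file still matches the signature the publisher released."""
    try:
        public_key.verify(signature, hashlib.sha256(file_bytes).digest())
        return True
    except InvalidSignature:
        return False

original = b"...raw video bytes..."            # placeholder content
sig = sign_media(original)
print(verify_media(original, sig))             # True: the file matches
print(verify_media(original + b"edit", sig))   # False: any alteration breaks it
```

Standards such as C2PA go further, embedding signed manifests and edit history inside the file itself, but the democratic promise is the same: a claim about where a piece of media came from that anyone can check.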
Individuals can also build better habits:
- Pause before sharing emotional content.
- Check the source before believing a claim.
- Look for corroboration from multiple credible outlets.
- Be skeptical of screenshots without context.
- Check dates and original links.
- Be careful with sensational videos near elections.
- Ask who benefits if you believe or share the content.
- Do not assume “AI-generated” just because evidence is inconvenient.
The goal is not paranoia.
The goal is civic skepticism with a functioning spine.
Believe evidence, but inspect it.
Question claims, but do not turn doubt into a religion.
What Comes Next
The future of AI, democracy, trust, and reality will likely be defined by a race between synthetic manipulation and verification.
1. More realistic synthetic media
AI-generated video, audio, images, and documents will become easier to create and harder to detect casually.
2. More election-specific AI rules
Governments will create more rules around political deepfakes, AI-generated campaign content, robocalls, and election misinformation.
3. More verification tools
Watermarking, provenance systems, detection tools, media authentication, and source tracing will become more important.
4. More bot and influence operations
AI may make coordinated manipulation more realistic, more adaptive, and harder to distinguish from organic public opinion.
5. More public-sector AI use
Governments will use AI for public services, administration, communication, and analysis, raising accountability questions.
6. More tension around speech and moderation
Democracies will struggle to balance protection from manipulation with civil liberties, free expression, satire, journalism, and dissent.
7. More citizen AI tools
People may use AI to understand laws, compare candidates, summarize public meetings, track issues, and navigate civic systems.
8. More pressure on trust
Institutions that cannot communicate clearly and verify quickly will struggle in a synthetic information environment.
The next phase of democracy will require a new civic skill:
Reality verification.
Not because everyone should become a forensic analyst.
Because everyone will live in a world where evidence has become editable.
Common Misunderstandings
AI and democracy are already buried under dramatic takes, shallow takes, and the kind of confident nonsense that deserves its own fact-checking habitat.
“AI will destroy democracy.”
Not automatically. AI creates serious risks, but democracy’s future depends on policy, institutions, civic literacy, platform accountability, journalism, and how AI tools are designed and used.
“Deepfakes are the only real threat.”
No. Deepfakes matter, but so do bot networks, personalized persuasion, fake screenshots, synthetic news, trust collapse, surveillance, biased systems, and the liar’s dividend.
“People will simply learn to spot fakes.”
Not enough. Synthetic media will keep improving. People need better tools, stronger institutions, source verification systems, and civic education.
“AI can solve misinformation by detecting it.”
AI can help detect manipulation, but detection is imperfect. Bad actors adapt, and content moderation raises free speech and governance questions.
“All AI-generated political content should be banned.”
That may be too broad. Some AI use may be harmless or useful, such as translation and accessibility. The bigger concern is deceptive, undisclosed, manipulative, or harmful use.
“Government AI is automatically more trustworthy.”
No. Public-sector AI needs transparency, audits, appeals, bias testing, privacy protections, and democratic oversight.
“The solution is to trust nothing.”
No. Total distrust is also dangerous. The goal is calibrated trust: verify sources, inspect evidence, recognize uncertainty, and avoid turning skepticism into helpless cynicism.
Final Takeaway
AI could mean many things for democracy, trust, and reality.
It could help citizens understand government, access public services, translate civic information, support journalism, detect manipulation, and make participation easier.
It could also flood the public sphere with deepfakes, fake evidence, automated propaganda, bot swarms, personalized manipulation, and enough uncertainty to make people give up on truth entirely.
The core issue is not only misinformation.
It is trust.
Democracy needs citizens who can access reliable information, institutions that can be held accountable, media that can verify claims, platforms that do not amplify manipulation for engagement, and public systems that are transparent enough to deserve legitimacy.
AI raises the difficulty level.
It makes deception cheaper.
It makes evidence easier to fake.
It makes real evidence easier to deny.
It makes civic information easier to access.
It makes manipulation easier to personalize.
For beginners, the key lesson is simple:
The future of democracy with AI depends on whether we can protect shared reality without sacrificing open debate.
That means better verification tools, better civic education, better AI literacy, better journalism, better platform accountability, better privacy protections, and better democratic oversight of powerful systems.
AI will not decide the future of democracy by itself.
People will.
But people will need to move faster, think sharper, verify harder, and stop treating trust like a renewable resource that magically refills after every information crisis.
Reality is becoming easier to manufacture.
That makes protecting the real thing everyone’s problem.
FAQ
How could AI affect democracy?
AI could affect democracy by changing how people receive information, how campaigns persuade voters, how misinformation spreads, how institutions communicate, how governments use automated systems, and how citizens verify what is real.
What are the biggest AI risks for democracy?
The biggest risks include deepfakes, election misinformation, voter suppression messages, bot swarms, personalized manipulation, synthetic media, surveillance, biased public-sector AI, weak accountability, and trust collapse.
What is the liar’s dividend?
The liar’s dividend is when people use the existence of deepfakes or AI-generated media to dismiss real evidence as fake. AI makes it easier for dishonest actors to deny true recordings, images, or documents.
Can AI help democracy?
Yes. AI can help people understand laws, summarize policy, translate civic information, support accessibility, detect manipulation, analyze public records, and improve access to government services when used responsibly.
How does AI affect elections?
AI can affect elections through deepfake videos, cloned voices, fake robocalls, bot campaigns, personalized ads, misinformation, voter suppression messages, fake scandals, and automated harassment of candidates or election officials.
How can people protect themselves from AI misinformation?
People can pause before sharing, check original sources, compare multiple credible outlets, be skeptical of emotional or sensational claims, verify dates and context, inspect screenshots carefully, and avoid treating uncertainty as proof that nothing is real.
What should governments do about AI and democracy?
Governments should create clear rules for high-risk AI, election deepfakes, political impersonation, public-sector AI, privacy, automated decisions, and platform accountability while protecting free speech, journalism, satire, and dissent.

