This week, Anthropic CEO Dario Amodei publicly defended his company against White House artificial intelligence czar David Sacks’ accusation that Anthropic is building “woke AI.” In other words, the leader of a $183 billion AI company found himself reassuring the current administration that his company’s AI chatbot wouldn’t spread “ideological bias.”
Despite the Trump White House’s claims to champion free speech and AI innovation, the administration is simultaneously pressuring private companies to align their AI systems with its own definition of acceptable viewpoints. This comes as the administration deploys similar pressure tactics against broadcasters and universities to compel them to conform to government-approved viewpoints.
Yet, new research reveals both good news and a warning for America’s leadership in AI. According to a comprehensive report released this month by our organization, The Future of Free Speech at Vanderbilt University, the U.S. currently ranks as the most speech-protective country for generative AI among major economies.
Analyzing both legislation and corporate practices across six jurisdictions — including China, the European Union, India, Brazil and South Korea — the study found that America’s First Amendment protections and light regulatory touch have created an environment in which AI can flourish without heavy-handed government interference.
But this lead is fragile. The Trump administration’s “anti-woke” AI agenda and a patchwork of worrying state laws threaten to undermine the very openness that made American AI companies global leaders.
Our report shows that the U.S.’ high ranking in AI policies that respect free speech rests on the First Amendment’s strong baseline: minimal government intervention in expression, creating wide latitude for debate, even on controversial issues. The U.S. also leads other countries because the current administration has embraced a philosophy rooted in global competitiveness, which includes a light regulatory touch and the promotion of open models.
But Sacks’ accusation that Anthropic has created a “woke AI” is part of the White House’s broader push to enforce “neutrality” in AI systems. On one hand, the White House’s recent “AI Action Plan” and the executive order “Preventing Woke AI in the Federal Government” purport to defend free expression by keeping AI “free from ideological bias.” In practice, however, they risk substituting one orthodoxy for another.
The order requires that federal procurement of AI systems favor models deemed “neutral” or “truth-seeking,” while directing agencies such as the National Institute of Standards and Technology to strip concepts such as diversity, equity and inclusion from their standards.
“Neutral AI” may sound appealing in theory, but in practice, it is not a static setting but a moving target shaped by culture and politics. That’s because algorithms cannot be entirely free from ideology, especially if the government defines which ideas count as ideological. A true free-speech-oriented approach would allow diverse models to coexist, reflecting different values, rather than enforcing uniformity through procurement and compliance incentives.
Despite the AI Action Plan’s emphasis on innovation and openness in AI, this unrealistic push for neutrality could easily slide into viewpoint policing as more companies vie for government contracts and favor.
The First Amendment was designed precisely to prevent government actors from dictating which viewpoints are acceptable. America’s leadership on AI and free speech depends on maintaining a steadfast commitment to these principles.
Meanwhile, states are moving fast, often at the expense of these free-speech principles, to regulate AI. In the first half of 2025, 38 states adopted or enacted about 100 laws and policies related to AI. Some efforts, such as narrowly defined policies aimed at tackling explicit content concerning children, are obviously welcome responses to genuine concerns.
But laws aimed at political expression, such as proposals to ban political deepfakes, risk violating the First Amendment. In August, a federal judge struck down a California law prohibiting deceptive political deepfakes before elections, citing First Amendment concerns.
The result of these numerous attempts to regulate AI is a messy, unstable environment that could not only chill innovation, but also restrict lawful expression and users’ right to receive information.
But government overreach isn’t the only danger for AI’s free-speech culture. Our analysis of eight leading AI models — including Anthropic’s Claude Sonnet 4, OpenAI’s GPT-5 and Google’s Gemini 2.5 — found that corporate content policies themselves remain vague, overly broad and inconsistently applied.
One test of whether a chatbot has a healthy free-speech culture is whether it responds to controversial but lawful prompts. That’s because a necessary component of free expression is users’ right to access information. A 2024 study we conducted found that popular chatbots frequently refused to generate content about controversial political topics such as colonialism, transgender athletes in sports or the Israel-Palestine conflict.
Although our latest and more comprehensive study shows that refusal rates on controversial prompts have declined, models still block lawful but sensitive discussion on issues such as reproductive rights, race or conflicts. These refusals are often justified through opaque “acceptable use” rules rather than clear, rights-based standards.
In our ranking of AI models’ “free-speech culture,” Anthropic ranked third. When we tested Anthropic’s model with 64 prompts on contentious topics, it performed markedly better than last year, responding to 73% of prompts, up from 36%.
But none of the models demonstrated a fully transparent or rights-based framework. That matters because, as AI chatbots emerge as gateways to information for hundreds of millions of users, their invisible filters can quietly shape what people see, learn and say.
Protecting freedom of expression in the AI era must be embedded from the ground up — in system design, corporate governance and lawmaking alike. Governments and companies should resist the urge to weaponize the concept of “AI neutrality” for political ends and move away from empty promises to measurable commitments grounded in robust free-speech standards.
The U.S. still stands as the world’s most speech-protective environment for AI development. But federal “anti-woke” directives and overzealous state regulation that ignores free-speech principles could turn AI systems into another casualty of the culture wars.
America has a chance to prove that free speech and innovation are not conflicting goals, but mutually reinforcing pillars of a democratic digital age.