When Elon Musk acquired Twitter and transformed it into X, he reignited the platform’s dedication to safeguarding the freedom of expression in the digital age. Positioned as a stronghold for free speech, X has sparked a broader conversation across the tech industry, with other platforms now exploring features to emphasize open dialogue and robust expression.
At the start of the new year, Meta CEO Mark Zuckerberg and Chief Global Affairs Officer Joel Kaplan announced a significant shift in the company’s content moderation strategy. Meta, admitting that its previous attempts at moderation had drifted into censorship, will end its third-party fact-checking program and adopt a Community Notes model powered by contributing users with diverse perspectives.
Similar to X, Meta’s Community Notes will provide users the ability to rate and critique potentially misleading information on the platform. Users will have the option to view additional context and decide for themselves whether a post is truthful or not.
Meta plans to withdraw from its fact-checking operation once Community Notes are implemented. The company also pledged to no longer “limit legitimate political debate,” demote fact-checked posts, or obscure controversial posts with large warning labels.
It’s not often that companies of this scale admit to users – much less to themselves – that they made a mistake. But Meta is breaking that mold. These are welcome changes to Meta’s products at a time when many countries, even Western democracies, are pursuing policies that threaten freedom of expression online while punishing citizens and private companies for posting content not considered “mainstream” by government agencies or legacy media institutions.
We saw some of this come into play last year in Brazil when officials blocked X and seized assets from the Musk-owned Starlink after he reinstated blocked Brazilian accounts. European Union officials have similarly threatened Musk with fines and regulatory retaliation, including an extensive investigation into X for alleged violations of the EU’s onerous Digital Services Act (DSA) and charges of propagating “misinformation” across the continent.
Across the English Channel, Members of Parliament crafted a new Online Safety Act to crack down on speech bureaucrats deem “hatred, disorder, provoking violence, or disinformation.” United Kingdom regulators have been clear about their intent to “put the tech companies on notice” and extract billions of dollars in penalties and fines for failing to adequately moderate online posts.
And even here in the states, the Biden Administration pressured platforms to remove content relating to the COVID-19 pandemic and demote posts about the Hunter Biden laptop. Both accounts were later confirmed by Zuckerberg in an August 2024 letter to the House Judiciary Committee.
Members of the American Legislative Exchange Council object to the clear weaponization of government to suppress lawful speech. ALEC’s four key principles on news censorship help state lawmakers craft policies that respect freedom of speech, push back against blatant censorship, and base content decisions on objective, quantitative metrics.
The definitions of “misinformation” and “disinformation” are often in the eye of the beholder. The very act of fact-checking can become a subjective and political process that chills speech and leads to the overenforcement of content moderation rules. Fortunately, many lawmakers are acutely aware of the growing censorship threat and are taking corrective action to preserve the First Amendment.
Congress recently used the annual National Defense Authorization Act to defund the Global Engagement Center (GEC), a State Department agency that was accused of inappropriately stifling free speech. And after a widespread uproar, the Department of Homeland Security shuttered the infamous Disinformation Governance Board just months after it was formally announced.
Instead of having government actors decide which perspectives are permissible to share online, platforms should follow Meta and X by inviting more speech onto their platforms and allowing users to be the judge. Social media sites can then focus content moderation resources on posts that truly violate their terms of service — especially illegal content such as child exploitation, terrorism, spam attacks, and fraud.
Zuckerberg and Kaplan aptly noted in their announcement that free expression is often messy. That’s okay. It’s time to embrace the messiness of the virtual public square and carry on America’s storied legacy of free expression in line with the Jeffersonian principles codified by our Founding Fathers nearly 250 years ago – a legacy that brings us a step closer to being the land of the free, at least when it comes to speech.
Jake Morabito is the director of ALEC’s Communications and Technology Task Force.