Media depicting child abuse aren’t “free speech.” Political violence and threats aren’t “legitimate political discourse,” either.

Hello from sweltering Washington, where we’re getting one more dose of humid heat before summer releases her grasp on the District. Alex here, with another civic text.

A hearty welcome to the new subscribers who’ve signed up after my most recent dispatch on the arrest of Telegram founder and CEO Pavel Durov.

If you find my original analyses useful or insightful, I hope you’ll consider upgrading to a paid membership and amplifying these newsletters to your networks.

I also hope you’ll subscribe to Casey Newton at Platformer, whose explainer of Durov’s arrest was clear and well-reported. As he acknowledged, it’s possible that French prosecutors overreached here, or will.

It’s also possible that Telegram’s refusal to answer law enforcement requests, and its toleration of “truly vile behavior” and illegal content on its platform, led to this moment.

Newton highlighted a post over on LinkedIn by Daphne Keller, the director of the Stanford Cyber Policy Center’s program on platform regulation, which goes right to the heart of the matter:

The Telegram CEO arrest in France seems unsurprising, and like something that also could have happened under U.S. law.
It has long been rumored (and maybe reliably reported?) that Telegram fails to remove things like unencrypted CSAM or accounts of legally designated terrorist organizations even when notified. That could make a platform liable in most legal systems, including ours.
CSAM, terrorist content, and drug sales are all regulated by federal criminal law. Platforms have no immunity from that law. There are even some special provisions for platforms in federal criminal drug law (though IIRC they didn't add much).
The prosecution of Silk Road operator Ross Ulbricht seems pretty analogous here. That was about federal criminal liability of a platform operator. In addition to drug charges, Wikipedia says he was convicted re money laundering, hacking, and forged identity documents.
We even arrest platform operators for copyright infringement. Remember Megaupload and Kim Dotcom? Apparently New Zealand finally agreed to extradite him this month.
I am usually one of the people making noise about free expression consequences when lawmakers go overboard regulating platforms. Possibly this will turn out to be one of those cases. But so far, I don't think so.

That aligns with my conclusions so far, too.

So did Jason Koebler’s dispatch at 404 Media, which is worthy of your support:

We have spent many hours at 404 Media discussing amongst ourselves why there is so much crime on Telegram itself and why Telegram has continued to let blatant criminal organizations operate on its platform in open, unencrypted channels. The only theory that makes any sense thus far is that the company sees itself as operating entirely outside the law.

Over at Bloomberg, Kurt Wagner acknowledged that while we are all still waiting for more facts about what Durov did or didn’t do to address criminal behavior on Telegram, “if you build or run a service that propagates criminal activity without stepping in to try and halt it, simply throwing up your hands and proclaiming free speech has never been an adequate defense.”

That’s what Princeton professor Zeynep Tufekci concluded as well today, in her op-ed arguing that “free speech” shouldn’t shroud criminal activity:

Free speech is an important value, but protecting it does not mean absolving anyone of responsibility for all criminal activity. Ironically, Telegram’s shortage of end-to-end encryption means the company is likely to be more liable simply because it can see the criminal activity happening on its platform. If, for example, Telegram did not cooperate with authorities at all after receiving legal warrants for information about criminal activities, that would mean trouble even in the United States, with its sweeping free speech protections.

Just so.

Over the decades, I have tended towards near-absolutism on freedom of expression from government regulation or censorship, because such curbs tend to be misused or abused by authoritarian governments across the ideological spectrum.

As the Electronic Frontier Foundation explains, the Internet of 2024 could not exist without Section 230 of the Communications Decency Act, which protects our freedom of expression by exempting online intermediaries from liability for content posted on them:

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (47 U.S.C. § 230(c)(1)).

In 2021, I supported tech companies choosing to deplatform former President Trump under their own policies (which they had declined to enforce until January 6th, 2021) after he spread lies about widespread voter fraud and incited an insurrection in my neighborhood to stop the transition of power.

Political violence is never “legitimate political discourse.”

Peaceful protests are legitimate discourse, as is taking a knee during the playing of the national anthem, or civil disobedience at segregated diners or in front of the White House.

Freedom of expression has not only made America greater, but better: peaceful marches, nonviolent protests, and civil disobedience led to near-universal suffrage and civil rights laws.

Petitioning for redress of grievances has made our union stronger, inclusive, & far more vibrant: we are now a full democracy of, by, and for (almost) all of our people.

Seditious mob violence to overturn free and fair election results is not legitimate discourse, nor are threats to school boards or election officials.

Neither “revenge porn” nor child sexual abuse material is a legitimate form of expression that merits protection under human rights law or the First Amendment.

The question of how to hold tech companies and their leaders accountable for tolerating criminal activity on their platforms — much less engaging in it — should be separate from how to regulate the ways they do or do not moderate expression on them.

Unfortunately, the two have now not only been conflated in public discourse but also become the object of deception by politicians and partisans who have either become convinced that there is a “fantasy industrial complex” that discriminates against conservatives because of who they are, not what they say, or are laundering that false claim for profit or political advantage.

When tech companies added civic integrity and medical misinformation policies for online platforms after 2016, they created a structural default towards accuracy about voter fraud and vaccine safety, among other things, which they then tried to enforce.

People who shared falsehoods about the prevalence of election fraud or health risks of vaccines then were moderated more often — but it was because of the content they shared, not because of who they were, as far too many politicians and partisans continue to claim in bad faith.

The core issue is not viewpoint-based discrimination by tech companies against conservatives or nationalists, but rather widespread false beliefs in these demographics that were in structural conflict with platforms that, after amplifying lies in a historic election, attempted to be arbiters of truth and tried to be better stewards of public information during a historic pandemic.

The bad faith on the right about this is the result of cowardice of Congressional leaders unable or unwilling to tell their supporters that they’ve been disinformed — or who believe the lies themselves.

The “victory for free speech” we saw hailed yesterday after Facebook founder Mark Zuckerberg sent a letter to Congress exemplifies this perspective.

Zuckerberg, perhaps seeking insulation against risk for his businesses in a new administration, expressed regret for deciding to moderate content about COVID-19, including humor and satire, after the Biden White House encouraged the company to do so, and for limiting the sharing and distribution of a NY Post story about Hunter Biden’s laptop.

Private companies have the right under the First Amendment not to publish “lawful but awful” speech if they do not wish to do so, and Zuckerberg provided no evidence — much less under oath — that the U.S. government had compelled him to take these actions.

But the searing underlying irony is that the same “weaponization” committee Zuckerberg responded to has now itself abused official government power to chill the speech of researchers who study lies and misinformation online and pressure the world’s largest social network to back off enforcing civic integrity policies in an election year.

That’s an outcome that will benefit no one but the merchants of doubt, deception, and denial who continue to profit from muddying public discourse and poisoning the minds of Americans with toxic lies.

It’s true that U.S. tech companies taking an explicitly pro-democracy stance in 2024 by prebunking election lies would be perceived by nationalists around the world as taking a partisan side, especially domestically, given Trump’s lies. That stance would risk profits, fines, or antitrust action under an administration willing to abuse state power, but an anti-democratic movement merits weathering that hurricane of outrage.

Tech CEOs could and should use their immense power on behalf of democracy everywhere by holding the line against censorship by authoritarian nations while cooperating with democratic governments seeking removal of CSAM, especially given how feckless this White House has been in drafting & implementing a national strategy for information resilience.

That’s all for now. As always, feel free to write to me at alex@governing.digital to tell me why I’m wrong, explain what I’m missing, or tip me off to something unreported or underexplored.
