Tech companies should require disclaimers for media made with AI, & governments should codify that mandate. Musk's behavior shows self-regulation will fail.

Good morning from sunny Maine, where I've been enjoying clean, cool air and digging into a review copy of a new novel that revolves around an online encyclopedia. You might say it's targeted at my interests!

Alex Howard here, with another civic text.

I was surprised to find myself in the New York Times today, after I'd asked Elon Musk about a video altered by AI that he'd reposted without disclosure or disclaimer. I'd learned of the platform violation from Nina Jankowicz, author and executive director of the American Sunlight Project, who'd posted about it over on Threads.

On the one hand, I was pleased to see that the Times reported that "the billionaire owner of the social media platform X reposted a video that mimics Vice President Kamala Harris’s voice, without disclosing that it had been altered."

I was less pleased to see that the Times headline read "Musk Shares Manipulated Harris Video, in Seeming Violation of X’s Policies."

That caveat is unwarranted, as is the one that follows in the copy.

On Friday night, Mr. Musk, the billionaire owner of the social media platform X, reposted an edited campaign video for Vice President Kamala Harris that appears to have been digitally manipulated to change the spot’s voice-over in a deceptive manner.

The Times literally reported out that the images and audio in the Harris campaign video had been edited with AI, that Musk had posted it without disclosure or disclaimer, and yet still added caveats.

In practice, that meant softening a headline that should clearly state that Musk broke X's rules on synthetic media. This instance of synthetic media qualifies on all three counts.

The creator of the video confirmed that it was made with AI. There's nothing unclear about it. (The editor-in-chief of the New York Times knows that their staff don't put enough time into headlines, which many people skim and share without clicking through, and this is unfortunately a good example.)

While the creator may vigorously proclaim that the tweet sharing the video states that this AI-generated media is a PARODY, and is thus exempt under X's rules for satire, the reality is that there is no watermark on the synthetic media making clear that the Vice President's voice was AI-generated, nor has the platform added a disclosure or disclaimer to it.

Once the synthetic media is shared outside the original tweet, as Musk did, that creator-supplied annotation is gone, leaving full context collapse. Musk did not label it "satire," as X's current rules require.

As everyone can see, Community Notes – the crowdsourced fact-checking feature formerly known as Birdwatch at Twitter – has thus far failed to add a note to Musk’s tweet. (The sysadmin of X has always been THE stress case for the fact-checking feature, and it's not working today.)

Musk hasn’t responded to me yet, but then I didn’t expect him to do so, much less acknowledge that he broke the rules of the service formerly known as Twitter.

TBD if either changes after more media attention today, but I wanted to share some thoughts with you about what should happen next.

Back on July 9, which feels like roughly an Internet century ago, the chairwoman of the Federal Communications Commission testified to Congress that the agency would introduce a rulemaking that would require radio and TV stations to disclose whether AI was used to create political ads.

In theory, that would in turn require campaigns or other entities buying electioneering ads to tell those stations if they'd used AI to create the ads.

As with public knowledge about who paid for political ads and disclaimers about who made them, this kind of transparency should be table stakes in a healthy democracy.

This past week, the FCC moved forward, issuing a notice of proposed rulemaking that may or may not result in new rules in place for radio and TV before Election Day in November. (I'll share my comments in response in a future piece.)

But as the FCC does not have statutory oversight over X, what happens next here is unclear.

The obvious answer may be… nothing, which is a suboptimal outcome for our democracy. That would also be a failure for the rest of the democracies around the world that hold out faint hope that our institutions can sort this mess out. (The autocracies and theocracies are no doubt pleased that Americans mostly continue amusing ourselves to death with synthetic social media, soma, gaming, gambling, sports, and consumerism.)

The European Union enacted new laws on AI and digital services that include many valuable transparency provisions for platforms, but the specific regulation focuses on "whether an artificial intelligence system has been used to target or deliver the political advertisement" – not a requirement for a disclosure that AI was used to create it.

The current X rules for synthetic media make it clear that this video should be labeled to include missing context, but that's not a proportional response to an obvious violation of platform integrity by the platform's owner during an election year.

X and other tech companies should begin requiring all creators to indicate whether the media they share was created with AI when they try to upload it.

Then, tech companies should automatically apply a label to synthetic media at the platform level.

If folks don’t identify their AI-made creations, tech companies should suspend their account on first offense, then ban them after multiple attempts.

Tech companies need to require the human who uploads each video to indicate whether or not it was created with AI.
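The declare-label-sanction flow proposed above can be sketched as a few lines of code. To be clear, this is a hypothetical illustration, not any platform's actual system: the `Account` type, the `handle_upload` function, and the one-strike suspension threshold are all my own assumptions drawn from the proposal.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    offenses: int = 0     # count of undeclared synthetic uploads
    suspended: bool = False
    banned: bool = False

def handle_upload(account: Account, declared_ai: bool, detected_ai: bool) -> str:
    """Hypothetical sketch: label declared AI media, sanction undeclared uploads."""
    if declared_ai:
        # Platform automatically applies a synthetic-media label.
        return "published-with-ai-label"
    if detected_ai:
        # Uploader failed to disclose: suspend on first offense, ban after that.
        account.offenses += 1
        if account.offenses == 1:
            account.suspended = True
            return "suspended-first-offense"
        account.banned = True
        return "banned-repeat-offense"
    return "published"
```

The point of the sketch is that the disclosure question is asked of every human uploader at upload time, and the label is applied by the platform rather than left to the creator's tweet text, where it can be lost on reshare.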

It’s likely that many instances of synthetic media will continue to go up without labels, but there should be a global standard for the platforms that their owners and executives are willing to abide by themselves. While it's hard to imagine Musk being sanctioned by X in any way, it's the right standard for this platform – and others.

Eventually, a regulatory mandate will be necessary to ensure such a feature goes up, beyond platform "rules" that can be bent or broken if staff, executives, founders, or their friends wish to do so.

As the Times reported, there is a law on the books that bans fraudulently misrepresenting a federal candidate, but it does not clarify whether using AI to synthesize a voice or image – as was done here – is illegal.

In any case, the (intentionally) gridlocked, dysfunctional election regulator hasn't moved forward with voting to clarify the rules in time for this election cycle, as the Times reported:

The Federal Election Campaign Act prohibits fraudulent misrepresentation of federal candidates or political parties, but the law, written in 1971, is ambiguous when it comes to modern technologies such as artificial intelligence.
Last August, the Federal Election Commission approved a rule-making petition from the watchdog group Public Citizen calling for the law to be amended to clarify that it “applies to deliberately deceptive Artificial Intelligence (AI) campaign advertisements.” That amendment was supported by the Democratic National Committee, as well as 52 Democratic members of Congress, but it was opposed by the Republican National Committee, which said that it was “not a proper vehicle for addressing this complex issue” and argued that it could violate the First Amendment.

In an ideal world, this (incredibly!) high-profile test case of synthetic media would now provoke movement at the FEC to act, but it won't be enough.

"Neither proposal is adequate here," wrote Jankowicz, when I followed up with her for comment about what's happened. "All I know is that it’s bananas that we don’t have any AI rules (besides the robocall one) and that they didn’t try to get something on the books in time for the election is nuts. Platforms need to require—and enforce— clear labeling *throughout* these AI enabled campaign materials or politics related comedy - the original on X was labeled in the text with video, but it didn’t carry over to Musk’s tweet."

Unfortunately, I suspect it will require something much uglier than this to galvanize action.

So-called deepfakes were a much-hyped edge case for years; in 2024, they have emerged out of tech policy debates and the fever swamps of online platforms into mainstream media and public discourse.

What we’ve seen so far in the political sphere are “cheapfakes” of Speaker Emerita Pelosi, former President Obama, or President Joe Biden.

As with revenge porn, the negative impact of deep fakes and "nudify" apps that use AI to strip people of clothing has primarily been on women, not elections. Unlike the racism, homophobia, and misogyny directed towards minorities, women, and GLBTQ folks running for office, the threat to democratic integrity has been emergent, not self-evident to all.

As Nina wrote in 2021, "I have a sinking suspicion the first widely successful foreign influence campaign using a deep fake will be sexual in nature and will target a woman."

Unfortunately, it's increasingly clear what now lies ahead: VP Harris will be targeted by nonconsensual pornographic videos synthesized by AI in 2024 that are intended to humiliate, degrade, and delegitimize her candidacy.

Nina tells me there are already hundreds of instances of synthetic pornography depicting Harris. None have gone mainstream yet, but that's likely just a matter of time.

Whether X, Meta, or YouTube will allow that synthetic media to be uploaded, and grant it freedom of reach, remains to be seen, though Musk's action and X's lack of reaction suggest far worse is yet to come than a manipulated ad.

That's it for today. As always, your comments, suggestions, questions, or other feedback are welcome at Alex@governing.digital. I hope you will share these newsletters wisely and consider upgrading to a paid subscription to support my work.


P.S. The Times also snuck in a dubious editorial observation that "with 191 million followers, Elon Musk is the most influential voice on X and, arguably, on all of social media." (They caveated that, too.)

Follower count has always been one of the poorest proxies for influence, but it's the one print and broadcast reporters reach for because of the parallel to subscribers and ratings.

As I wrote thirteen (!) years ago for CBS News, online influence is not just about followers: It's about the engagement an account holds on a given network and whether they can drive conversations to do something, online or offline. (Like, say, register to vote.)

I'd bet a sawbuck that Taylor Swift is more influential on X than Musk, much less more broadly online, as are former President Trump and former President Obama.

I'd read and share the heck out of reporting that accurately measured the cultural, economic, regulatory, and national security influence of online accounts.
