Governing artificial intelligence depends on governments understanding AI first

Transformers in Georgetown could either welcome our robotic overlords, or become them. (TBD if we're lucky enough to get help fighting algorithmic discrimination from Optimus Prime.)


Good afternoon from sunny Washington, where President Biden has signed a national security supplemental into law that will require ByteDance to divest from TikTok or see the social media app banned in the United States. Alex Howard here, with a new edition of Civic Texts. As I just texted, I share the view of First Amendment scholars who say this law is unconstitutional and open access advocates who argue that it will hasten the world's retreat from an open Internet. I'd like to be wrong about this, honestly. If you're interested, I talked with a journalist at length about why a TikTok ban misses the mark.

Today, however, I want to share a reported column on regulating artificial intelligence that an editor ended up not publishing in 2023. I think it holds up pretty well, months on, in the wake of the executive order on AI that President Biden issued, but I'd love your thoughts on it. As always, if you have suggestions, questions, comments, tips, or concerns about this or any other matter, you can find me online as @digiphile across social media, or directly by email at alex@governing.digital. (Write to me if you want to connect over Signal.)

Why governing artificial intelligence depends on governments understanding it

Asking “how should we regulate artificial intelligence?” feels akin to asking how to regulate the Internet. First, what does artificial intelligence (AI) even mean? Is it machine learning? Is it generative AI engines and large language models like OpenAI's ChatGPT or Google's Bard? Is it a spreadsheet or algorithm that calculates the risk of recidivism for potential parolees?

Both AI and the Internet include a huge range of technologies, formats, platforms, companies, standards, and private and public actors, and many AI implementations will likely be largely invisible to the user. But in each case, you have to start somewhere, much like eating an elephant: one bite at a time.

In between state-controlled industries and unregulated marketplaces for goods and services sits a sweet spot for innovation, competition, and free enterprise.

While finding the right balance between regulation that creates healthy guardrails for consumer protection and rules that stifle competition or benefit incumbents is challenging, that’s precisely what we depend on independent regulators to achieve. The suite of technologies that drives artificial intelligence is no different, though the scale and speed of their impact are pressing many policymakers to move more quickly than in the past.

Both states and companies are building out supercomputers that provide the power behind today’s AI. Cities, states, and national governments may well offer AI as a kind of on-demand utility for the general public, as when state universities once allotted medical researchers like my grandfather time on a mainframe to run analyses.

People who have more means, however, will pay for faster, smarter AI personalized to their needs. Those who don't will get a default personal AI, like the basic avatar in a public terminal in Neal Stephenson's metaverse.

Regulating industries like utilities is necessary when insufficient competition exists, consumers have no other option – from electricity to water to gas – and significant harms persist without standards, from electrocution to floods to fires.

If a few tech companies developing the next generation of AI tools succeed in creating de facto monopolies or duopolies with high barriers to entry, then regulators will need to be more active in preventing anti-competitive practices.

Should open source AI models lead to far more ubiquitous use and abuse, legislatures will need to be far more nimble in nurturing checks and balances across nations and time zones that center human rights. Computational propaganda and automated genocide are not the right legacies to bring into the 21st century.

If we want to nurture responsible uses of AI across industries and sectors, there are practical ways to achieve that goal, focusing on outcomes.

According to Alondra Nelson, the former acting director of the White House Office of Science and Technology Policy, we need a broad toolkit for governing AI as it becomes more integrated across societies. This toolkit includes formal regulations, incentives for industry, and other measures to minimize disruptions and ensure people's well-being.

“As we're moving to the possibility of living in a world in which advanced AI is woven through wider swaths of our lives, I think we need a pretty broad toolkit for AI governance,” said Nelson, who now works at the Institute for Advanced Study, in an interview.

“I use governance and not 'regulation' because I think that we need lots of different levers,” she suggested. “Some of them will look like formal regulation that sits in a regulatory body in the US, the UK, and the EU. But there will be other levers, including incentives that governments might offer for industry to cause less disruption, with regards to changes in employment and how people are working.”

To achieve a future resembling Star Trek rather than a cyberpunk novel, nations and corporations should invest in several strategies:

  1. Governments must focus on research and understanding emerging technologies. Legislative bodies need nonpartisan experts who can provide insight and foresight without being influenced by commercial motives or political advantage, like the Office of Technology Assessment once did for Congress – and could do again.
  2. Legislatures should enact data protection laws to safeguard privacy as a human right. The United States, in particular, needs comprehensive data privacy legislation that addresses the risks posed by data brokers and surveillance capitalism.
  3. Avoiding the worst mistakes of the 20th century means checking state power to prevent human rights abuses. The federal government has begun addressing these issues, but it takes time to make significant changes. In the meantime, civil liberties and human rights should take top priority, particularly for marginalized populations.
  4. Public engagement is crucial for “monitory democracy,” in which publics help improve the governance of AI by reporting harms or benefits from their interactions with these systems. Experts should engage with the public and share outcomes from their own interactions with AI systems. Testing and transparency are necessary to ensure safety and accountability.
  5. Best practices from other democracies should be adopted and adapted to establish codes of conduct and democratic norms. In turn, the USA should be leading by the power of our example. For instance, presidential campaigns could agree not to use generative AI for propaganda and disclose any uses of synthetic media.
  6. Elected officials should draft legislation in a transparent, iterative, and inclusive manner to protect consumers from emerging technologies, shoring up trust instead of shrouding that work in secrecy. Antitrust actions, empowering government agencies, and holding developers accountable are possible avenues.

Algorithmic transparency and accountability are both essential. Algorithms used to govern or regulate industries should be disclosed and independently audited, and the individuals and companies that develop code should be liable for harms that flow from its misuse or poor design, much as civil engineers and car manufacturers are held accountable offline for flaws in bridges or vehicles.
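
To make “algorithmic audit” a little more concrete, here is a minimal sketch in Python of one common audit check: the “four-fifths” disparate impact test borrowed from US employment law. The audit log, group labels, and threshold here are invented for illustration; a real audit would pair checks like this with the standards, investigative powers, and subpoenas Engler describes below.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """Rate of favorable outcomes per group, relative to the best-rated
    group (the 'four-fifths rule' used in US employment law)."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome  # outcome: 1 = favorable, 0 = adverse
    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit log of (group label, decision) pairs; in practice,
# auditors would obtain a model's real decisions via disclosure rules.
audit_log = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

for group, ratio in sorted(disparate_impact_ratio(audit_log).items()):
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```

A single ratio proves nothing on its own, which is exactly why auditors need access to the underlying decisions, not just a vendor's summary statistics.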

“Transparency is useful, but I want to see standards, investigative powers, algorithmic audits via subpoenas,” emphasized Alex Engler, a fellow in Governance Studies at The Brookings Institution, in an interview last year. (Engler has since joined the White House.)

He warned that the failures to limit the spread of spyware and ransomware show how difficult it would be to constrain the spread of artificial intelligence tools without much more energy turned to risk mitigation across all levels of government and society.

Engler also worries that tech giants are deprioritizing trust and safety at exactly the wrong time, should artificial intelligence tools and technologies become widely available globally.

“There’s no mechanism in the United States or restriction on an open source model or a code release by accident,” he said.

Engler suggested that an empowered Department of Justice or Federal Trade Commission might use a combination of independent researchers and consent decrees to hold states and private companies accountable for demonstrated civil rights and human rights violations. 

The United States can learn from the European Union's initiatives in this area, such as the Digital Services Act, General Data Protection Regulation (GDPR), and AI Act. These efforts provide a comprehensive approach to AI governance, including consumer protection and data privacy. While the GDPR is not perfect, the law has empowered regulators to curb privacy violations and unethical data practices.

States and local governments can take action without waiting for federal intervention. California, Connecticut, and Washington State are already working on legislation based on the Blueprint for an AI Bill of Rights, and more experiments around the world are good for humanity.

Ultimately, it will be up to us, the people, to collectively take action and address the challenges created by AI. No one else will come to solve these problems, so we must move forward carefully and fix things, together.
