Whether or not calls for pausing AI development succeed (spoiler: they won’t), artificial intelligence is going to need regulation. Every technology in history with comparably transformational capabilities has been subject to rules of some sort. What that regulation should look like is going to be an important and complicated problem, one that I and others will be writing a lot about in the months and years to come.
Before we even get to the content of the regulation needed, however, there’s a crucial threshold question that needs to be addressed: Who should regulate AI? If it’s government, which part of government, and how? If it’s industry, what are the right kinds of mechanisms to balance innovation with safety?
I’d like to suggest some basic principles that should guide our approach, beginning with government regulation. I’ll save the question of private-sector self-regulation for a future column. (Disclosure: I advise a number of companies that are involved in AI, including Meta.)
Let’s begin with the specter that haunts the AI debate: the possibility that AI might pose an existential threat to human society. In a well-publicized 2022 survey of AI researchers, nearly half of respondents said there was a 10% or greater chance that AI would eventually produce an “extremely bad” outcome, along the lines of human extinction.
There are some caveats. Only 17% of the researchers contacted returned the survey, and the most worried researchers may have been the most likely to respond. Even among those who answered, a quarter put the risk of an extremely bad outcome at 0%. Nevertheless, the results are striking.
If AI poses an existential threat to human survival, then in the real world, that would call for government regulation of the most serious kind. There’s a reason you can’t just raise venture capital and start a company to make and sell nuclear missiles to all comers. Nuclear weapons pose an obvious existential threat to humanity. The only actors fit to control such power are governments. And not just any governments: nuclear nonproliferation is the name we give to the effort to limit which governments can get access to nuclear weapons. And of course, in the minds of many people, even governments shouldn’t be trusted with such dangerous engines of mass destruction.
So the basic regulatory rule with respect to nuclear weapons is: You can’t have them, unless you’re a government that somehow manages to get hold of them. (Then it’s hard to take them away. Consider North Korea.) To the extent that private capital plays a role in funding peaceful nuclear power projects, it does so in a way that is wholly subservient to government regulation, which decides when, where and how nuclear power can be deployed.
There is a crucial lesson here. The basic raison d’être of governments, whether democratic or authoritarian, is to protect their citizens. (They also, of course, protect themselves.) If governments take seriously the idea that there is a credible, proximate existential threat posed by AI, then governments will assume de facto control over AI companies and regulate them as national security assets. Existing AI companies will be like arms and weapons producers: heavily regulated, staffed by security-cleared scientists, and closely linked to the national security state that will essentially supervise them through a combination of regulation and government contracts.
Some governments might nationalize AI companies or outlaw AI research and development altogether. Those actions might sound radical. But no government on earth is going to allow private parties to control technology that it deems capable of destroying its citizens, itself and the world.
If you think this outcome sounds very unlikely, then the odds are that, on some level, you don’t really believe AI poses an existential risk with any meaningful probability. Or perhaps you think AI companies would become so powerful that governments wouldn’t be able to take them over or shut them down. That fantasy, a cousin of the fantasy that cryptocurrencies can’t be regulated, ignores the most basic truth of regulation: Companies are made up of people. And people, no matter where they are, can be regulated and ruled by a government that is prepared to imprison them.
But a government takeover of the AI industry is the most extreme end of the spectrum. If we decide that AI could do real harm but does not pose an existential threat, more moderate regulation becomes a possibility.
When society considers an outcome sufficiently wrong, we outlaw it through the criminal law. If you cause that outcome, you can go to prison. It’s easy to imagine criminal liability for anyone who deploys AI to commit fraud or to stalk and harm other people. It’s even possible to imagine laws that impose criminal liability on whoever made the harmful AI in the first place.
Then there’s statutory civil regulation, with violations punishable by fines. You can picture statutes that would deter a range of AI harms by threatening civil liability. In some cases, existing statutes might already apply to the makers and users of AI. Race and sex discrimination, for example, are punishable by civil liability. A party whose AI perpetrates these social wrongs may already be liable under existing law; more specific statutes could easily be added.
A third moderate option would be administrative rules. These are common in complex industries — think of the Securities and Exchange Commission, the Food and Drug Administration, and the Environmental Protection Agency. Congress could create a new agency to regulate AI, giving it the power to enact the necessary rules and the administrative expertise to enforce them.
Such agencies are sometimes thought to be captured by industry, a risk that would be especially great where qualified regulators might have to be drawn from industry itself. From the opposite direction, agencies can be lobbied by counterparties to the industry, like associations of workers who might lose their jobs to AI-driven efficiencies. Agencies also create bureaucracy, and with it, waste. Nevertheless, a complex, specialized field like AI might fare better under administrative supervision than under direct congressional control.
Finally, there’s the lightest-touch mode of regulation: lawsuits. Under the US system of tort liability, we require the maker or seller of the technology to exercise what we call “reasonable care.” If they don’t, someone can sue to hold them financially liable for harm they’ve caused.
The beauty of the system — also its most infuriating aspect — is that we don’t tell the maker or seller exactly what to do. We expect them to make a credible cost-benefit analysis and spend as much on preventing foreseeable harm as is reasonably necessary. Then we second-guess the hell out of them. If we think they’ve got it wrong, we’re not above putting the company out of business – literally. Sometimes the government even takes over the company, the way a number of state governments are poised to take over Purdue Pharma.
Put another way, the tort system offloads the cost of risk onto private actors. We’re used to it, so we take for granted that capital must price the risk of massive after-the-fact tort liability into every investment it makes. Investors don’t love this. But at the same time, bankruptcy law and the limited liability company offer some degree of protection for capital. So as a means of social insurance, the tort system also has a major upside for investors. Which is, no doubt, why the US still uses it, even as other countries have chosen to pair more up-front regulation with less after-the-fact liability.
The takeaway, I think, is that all of these forms of government regulation may be necessary for AI, and all have noteworthy flaws. But we need to start sorting through them — right now.
Noah Feldman is a Bloomberg Opinion columnist and host of the podcast “Deep Background.” He is a professor of law at Harvard University and was a clerk to U.S. Supreme Court Justice David Souter. His books include “The Three Lives of James Madison: Genius, Partisan, President.”