Treading Carefully: The Precautionary Principle in AI Development

The Biden administration announced last week that it had secured voluntary commitments from leading artificial intelligence (AI) companies to manage the risks posed by AI. Anthropic, Amazon, Google, Inflection, Meta, Microsoft, and OpenAI pledged to ensure their products are safe before introducing them to the public, build systems that prioritize security, and earn the public’s trust by being transparent about their AI systems’ capabilities and limitations.

On the positive side, the consensus reached by these competitors deserves recognition and is a welcome step for the AI sector. However, the commitments’ phrasing is vague and, in many instances, simply reaffirms activities these companies were already undertaking. The pledges are also voluntary, with no clear sense of who is responsible for verifying compliance or what the consequences of falling short would be.

Beyond these shortcomings, the effort raises two broader concerns. First, it embodies the precautionary principle, which counsels caution and review before embracing new innovations. As John McGinnis writes in Law & Liberty:

The precautionary principle requires the government to take preventive action in the face of uncertainty, shifting the burden of proof to those who want to undertake an innovation to show that it does not cause harm. It holds that regulation is required whenever an activity creates a substantial possible risk to health, safety, or the environment, even if the supporting evidence is speculative.

While the precautionary principle can be a valuable tool in policymaking, it carries its own drawbacks and risks. Critics argue that it can stifle innovation and progress. Its application is also ambiguous: what constitutes an acceptable level of risk can vary greatly, and the administration’s approach to AI provides no clear guidance. This could lead to inconsistencies in how AI risks are managed and create uncertainty both for the companies developing these models and for the businesses using them. These risks must also be weighed against the risks of withholding the technology, including putting the US at a disadvantage in a global AI race in which countries like China are rapidly advancing their capabilities.

There is also reason to be cautious about the cumulative effects of the precautionary principle over time. For example, the AI companies’ commitments require them to carry out extensive internal and external security testing of their AI systems, share information on managing AI risks, and develop robust technical mechanisms to ensure transparency. These measures are important and should absolutely be part of responsible AI development, but there is a risk that such government expectations will grow over time and that companies will be forced to address an endless series of hypothetical scenarios.

We see this dynamic at play in how accumulated environmental reviews and excessive bureaucracy can significantly delay the deployment of beneficial technologies. National Environmental Policy Act (NEPA) reviews, meant to protect the environment, have become so complex and time-consuming that they now inhibit the adoption of needed solutions. Portions of these reviews take, on average, four years to complete, hindering the implementation of climate technologies and renewable energy projects. It’s no surprise, then, that a broad array of policymakers, from the White House to top Republicans and Democrats in the Senate, is calling for permitting reforms to accelerate the deployment of new climate technologies.

The second concern is that the commitment framework is oriented entirely toward the harms the government wants to minimize rather than the societal benefits the nation wants to maximize. Only in the final commitment are the companies asked to use AI to “help address society’s greatest challenges.”

To truly leverage AI’s potential, we need a more proactive and ambitious approach, one that goes beyond risk management and actively encourages AI companies to direct their efforts toward the major challenges society faces today. There is no shortage of areas where AI can deliver enormous benefits, from addressing climate change to fighting cancer to reversing the learning loss students suffered during COVID-19. We need affirmative visions of the benefits we want to achieve with AI, not just the harms we want to avoid.

In conclusion, while the Biden administration’s initiative to secure voluntary commitments from leading AI companies is a commendable step toward AI safety, those commitments must balance precaution against the advancement of AI technology, ensuring that neither undermines the other. And to truly realize AI’s transformative potential, harnessing its capabilities to address societal challenges deserves as much emphasis as the important work of mitigating its risks.