Published in AEI Ideas.
Building Trust in AI: A Call for Clear Guidelines to Harness Its Benefits
ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users just two months after launching, making it the fastest-growing consumer application in history. On Friday, Google announced both a partnership with Anthropic, the startup behind the chatbot Claude, and Bard, its own experimental conversational artificial intelligence (AI) service.
Policymakers have yet to fully grasp the profound effect the AI revolution will have on our economy, national security, and social well-being. For all its benefits, AI will also present new risks, challenges, and tensions with our values. Clear regulations are needed to harness those benefits while managing the risks and preventing the worst consequences.
There has always been an uncomfortable tension between entrepreneurs and policymakers, given that unnecessary or overly burdensome regulations can impede innovation or slow its deployment. What is unusual about this wave of AI breakthroughs is that the innovators themselves are calling on policymakers to establish clear rules of the road.
Alphabet CEO Sundar Pichai, OpenAI Chief Technology Officer Mira Murati, and Microsoft President Brad Smith have all called for regulations on AI. Pichai wrote in the Financial Times that “there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to.” Murati has called for regulations to guide the safe and ethical use of AI. And Smith argued that new technologies bring out both the best and worst in people and that AI could be used as a tool or weapon. He stressed the need for policymakers to establish guardrails to prevent AI from being used to spread false information, undermine democracy, and advance evil.
In an effort to install such guardrails, the EU has proposed the AI Act, which prohibits certain uses of AI, such as the government-operated "social credit scoring" used in China. It also designates certain applications as high risk, subjecting them to greater scrutiny. And finally, it introduces new transparency and auditing requirements to help surface and manage additional risks.
In the United States, the White House released a draft Blueprint for an AI Bill of Rights outlining five principles of AI governance: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration, and fallback. However, only a handful of agencies have staff dedicated to putting these principles into action in their respective sectors: the Departments of Defense, Energy, Veterans Affairs, and Health and Human Services.
More recently, the National Institute of Standards and Technology released an AI Risk Management Framework that organizations can use to improve their ability to incorporate trustworthiness principles into the design, development, use, and evaluation of AI products, services, and systems. One of its most important contributions is helping organizations think through the difficult trade-offs presented when weighing the benefits of new AI technologies against their potential risks.
These are all important initial steps, but they do not address crucial concerns, such as who within the federal government will have the authority and responsibility to regulate AI and what criteria they should use to evaluate its risks and benefits. The frameworks are also voluntary and will likely be embraced only by companies already committed to ethical AI practices. Additionally, these drafts are vulnerable to being discarded or altered by future administrations.
Resolving these questions will require thoughtful deliberation from Congress to create a more permanent regulatory framework, one that at least establishes a process through which to settle such issues. The challenge is that Congress has struggled to modernize data privacy and content moderation rules for technologies that are a decade old, much less to grapple with the emerging issues AI presents.
However, it is urgent that Congress do so. The rapid growth of AI applications such as ChatGPT highlights the pressing need for regulations that manage AI's risks, provide consumer protections, build public trust in AI systems, and give companies the regulatory certainty they need for their product road maps. Given AI's potential to affect our economy, national security, and broader society, there is no time to waste.