An AI Healthcare Coalition Suggests a Better Way of Approaching Responsible AI

Published at AEI Ideas

In the rapidly evolving landscape of artificial intelligence (AI), the dialogue often veers between two extremes: stringent regulation, like the European Union’s AI Act, and laissez-faire approaches that risk unbridled technological advances without sufficient safeguards. Amid this polarized debate, the Coalition for Health AI (CHAI) has emerged as a promising alternative, one that addresses the ethical, social, and economic complexities introduced by AI while still supporting continued innovation.

The effort began three years ago, when a group of academics from Duke, Mayo Clinic, and Stanford, along with technology companies Google and Microsoft, started wrestling with difficult questions: What should responsible, trustworthy AI look like in healthcare? What does accuracy mean for a large language model’s output? And what does reliability mean when the same prompt can yield two different responses? The lack of consensus around these questions led to the launch of CHAI, which now includes a diverse group of 1,500 beneficiaries, healthcare providers, researchers, and technology companies working collaboratively to develop consensus standards and evaluation protocols for AI in healthcare.

But what is perhaps most distinctive about this coalition is that it includes regulators from the Food and Drug Administration (FDA), the Centers for Medicare & Medicaid Services (CMS), the Department of Health and Human Services (HHS), the Office of the National Coordinator for Health Information Technology (ONC), the National Artificial Intelligence Institute (NAII), the Advanced Research Projects Agency for Health (ARPA-H), and the White House Office of Science and Technology Policy (OSTP). This blend of government and industry expertise makes it easier to explore key questions about the quality of AI tools and to build a shared understanding of these technologies and their uses across different areas of healthcare.

Healthcare is a prime candidate for AI-driven disruption, as the technology holds immense promise in enhancing disease diagnosis, optimizing treatment plans, and revolutionizing drug development. Moreover, AI can alleviate doctors’ administrative burden by assisting with notetaking and empowering patients to navigate their care more effectively. It comes as no surprise, then, that the healthcare sector has been among the first to explore ways AI can be used.

For example, the FDA has processed over 300 submissions for drugs and biological products incorporating AI, alongside more than 700 for AI-driven devices. Yet “we don’t have the tools today to understand whether machine learning algorithms and these new technologies being deployed are good or bad for patients,” says John Halamka, president of Mayo Clinic Platform. This gap in understanding creates a fundamental problem of trust.

The significance of CHAI lies not only in its mission to establish a unified framework for thinking about responsible AI in the healthcare sector but also in the opportunities it creates for cross-sectoral learning and knowledge sharing. By fostering open dialogue and collaboration among these different communities, CHAI facilitates a deeper understanding of the capabilities, limitations, and potential impacts of emerging AI technologies within the complex healthcare ecosystem. This is particularly important for policymakers, given that the high demand for AI experts in the private sector, coupled with the rapidly evolving nature of the technology, has made it challenging for government agencies to build the internal expertise required to oversee and regulate AI systems effectively.

The effort also aims to close an oversight gap that has led to inconsistent vetting of AI products used to automate tasks and make consequential decisions about patient care. Even for products that undergo government review, it’s difficult for hospitals and other users to tell whether a given AI model will work on their patients or in different clinical settings.

In response, CHAI will establish a nationwide network of laboratories to independently assess AI tools’ accuracy, quality, safety, and potential biases. By evaluating algorithms across diverse datasets from different regions, CHAI aims to ensure that AI applications are reliable and equitable regardless of where they are used. This approach emphasizes the full lifecycle of AI models, from development to deployment and maintenance, underlining the importance of tailored considerations at each stage.

The most important question this partnership will test is whether industry and government can collaborate effectively in managing fast-moving technologies like AI. If they can, CHAI’s model of collaboration points to a path forward for navigating the complexities of AI integration across other sectors, including energy, education, and finance.

As the debate surrounding AI regulation continues, it is crucial for policymakers, industry leaders, and civil society to engage in constructive dialogue and collaboration to develop a nuanced, adaptive regulatory approach that can keep pace with the breakneck speed of AI advancement. More importantly, such collaboration can help accelerate the exploration of how these powerful technologies can drive better outcomes for patients, medical professionals, and the systems that serve them.