AI Tutors: Hype or Hope for Education?

In an analysis of Sal Khan's "Brave New Words" and the evolving landscape of AI in education, I present a case for AI-powered tutoring as a transformative force. Recent advancements in speech, image analysis, and emotional intelligence, combined with promising research showing significant learning gains, suggest AI tutoring could help address urgent educational challenges such as pandemic learning loss and chronic absenteeism.

Serving on Virginia's AI Taskforce

I’m deeply honored to be appointed by Governor Youngkin to serve on Virginia’s inaugural AI Task Force. This distinguished group of leaders from academia, nonprofits, and industry will advise policymakers on harnessing AI to transform government services, streamline regulations, and position Virginia as a leader in responsible AI innovation. As we unlock AI’s potential to improve efficiency and reduce burdens on state agencies, we must also ensure thoughtful safeguards to protect privacy, fairness, and public trust.

Why the Veto of California Senate Bill 1047 Could Lead to Safer AI Policies

Gov. Gavin Newsom’s recent veto of California’s Senate Bill (SB) 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, reignited the debate about how best to regulate the rapidly evolving field of artificial intelligence. Newsom’s veto illustrates a cautious approach to regulating a new technology and could pave the way for more pragmatic AI safety policies.

The Deception Dilemma: When AI Misleads

An emerging body of research suggests that large language models (LLMs) can “deceive” humans by offering fabricated explanations for their behavior or concealing the truth of their actions from human users. The implications are worrisome, particularly because researchers do not fully understand why or when LLMs engage in this behavior.  

The Promise and Limitations of AI in Education: A Nuanced Look at Emerging Research

New research reveals both the transformative potential and the complex realities of AI in education. From AI-powered grading systems that match human accuracy while saving countless hours, to AI tutors that can both enhance and hinder learning, to AI's ability to reason like humans and predict the outcomes of social science experiments, these studies paint a nuanced picture of AI's role in the future of education. As the field rapidly evolves, enthusiasts and skeptics alike must grapple with AI's profound implications for teaching and learning, and ensure their views are shaped by research.

Preserving Trust and Freedom in the Age of AI

I was excited to contribute, alongside more than 20 prominent AI researchers, legal experts, and tech industry leaders from institutions including OpenAI, the Partnership on AI, Microsoft, the University of Oxford, a16z crypto, and MIT, to a paper proposing "personhood credentials" (PHCs) as a potential solution to the growing challenge of AI-powered online deception. PHCs would provide a privacy-preserving way for individuals to verify their humanity online without disclosing personal information. While implementation details remain to be determined, the core concept warrants serious consideration from policymakers and tech leaders as AI capabilities rapidly advance, threatening to erode the trust and accountability essential for societies to function.

Open AI Models: A Step Toward Innovation or a Threat to Security?

The increasing capabilities of AI have sparked an important debate: Should the "weights" that define a model's intelligence be openly accessible or tightly controlled? This article dives deep into the arguments on both sides. Proponents say open-weight models accelerate innovation, enable greater scrutiny for safety, and democratize AI capabilities. Critics warn of risks like misuse for disinformation or military purposes by adversaries. The piece examines recent developments like frontier open-weight models from Meta and Mistral, Mark Zuckerberg's case for openness, concerns about China's AI ambitions, and the US government's cautious approach.

Charting a Bipartisan Course: The Senate’s Roadmap for AI Policy

The recent AI policy roadmap from the Senate’s bipartisan AI working group strikes a thoughtful balance between promoting AI innovation and addressing potential risks. It lays out a nuanced approach that includes increased AI safety efforts, crucial investments in domestic STEM talent, protections for children in the age of AI, and "grand challenge" programs to spur breakthroughs, all while avoiding hasty overregulation that could stifle progress.

An AI Healthcare Coalition Suggests a Better Way of Approaching Responsible AI

In the rapidly evolving landscape of artificial intelligence (AI), the dialogue often veers between the extremes of stringent regulation, like the European Union’s AI Act, and laissez-faire approaches that risk unbridled technological advances without sufficient safeguards. Amidst this polarized debate, the Coalition for Health AI (CHAI) has emerged as a promising alternative approach that addresses the ethical, social, and economic complexities introduced by AI, while also supporting continued innovation.  

Understanding the Capabilities of GenAI

It's difficult to understand some technologies because they're better experienced than described. I've found GenAI to be one example where it's difficult to grasp the full range of capabilities unless you see some of it in action. Over the last year, I've given a number of presentations that tried to contextualize GenAI for the audience by demonstrating relevant use cases. I compiled them in this long master deck, which I periodically update and am sharing in the hope that it may spark some ideas for you.

Why AI Struggles with Basic Math (and How That’s Changing)

Large Language Models (LLMs) have ushered in a new era of artificial intelligence (AI), demonstrating remarkable capabilities in language generation, translation, and reasoning. Yet LLMs often stumble over basic math problems, posing a problem for their use in settings—including education—where math is essential. However, these limitations are being addressed through improvements in the models themselves along with better prompting strategies.
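To make "better prompting strategies" concrete, here is a minimal Python sketch, not drawn from the article itself, of two common approaches: asking the model to show its work step by step, and asking it to translate the word problem into an arithmetic expression that a real interpreter evaluates. The call_llm helper is a hypothetical stand-in for whatever chat-completion client you use; it returns a canned reply here so the sketch runs end to end.

```python
# Minimal sketch of two prompting strategies that tend to improve LLM arithmetic.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion client.
    Returns a canned reply so this sketch runs without any API wiring."""
    return "37 * 24"

question = "A school buys 37 packs of 24 pencils. How many pencils is that in total?"

# Strategy 1: step-by-step ("chain of thought") prompting -- ask for intermediate
# work before the final answer instead of a bare number.
cot_prompt = (
    f"{question}\n"
    "Work through the problem step by step, then give the final answer "
    "on its own line, prefixed with 'Answer:'."
)

# Strategy 2: offload the arithmetic -- ask the model for a single Python
# expression, then let a real interpreter do the computation.
tool_prompt = (
    f"{question}\n"
    "Respond with a single Python arithmetic expression that computes the "
    "answer, and nothing else."
)

if __name__ == "__main__":
    expression = call_llm(tool_prompt)               # e.g. "37 * 24"
    result = eval(expression, {"__builtins__": {}})  # the interpreter, not the LLM, computes 888
    print(result)
```

The second strategy sidesteps the model's weakness at digit-level arithmetic while still using it for what it does well: turning a word problem into a formal expression.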

Autonomous Vehicles: A Safer Road Ahead

Recent studies reveal autonomous vehicles (AVs) boast impressive safety records, with far fewer and less severe crashes compared to conventional cars. Advanced AV systems that constantly monitor surroundings, precisely follow traffic laws, and avoid dangerous situations contribute to their superior performance. With rapid technological improvements enabling AVs to learn from analyzing massive amounts of human driving data, they are poised to drastically reduce traffic accidents and fatalities. Further real-world experience for AVs promises to refine their systems and enhance safety even more. Regulators should enable this progress while enacting reasonable safeguards.

ChatGOV: Harnessing the Power of AI for Better Government Service Delivery

The capabilities of AI promise not only efficiency but potentially a more accessible interface for citizens. As governments begin to integrate these technologies into their service-delivery mechanisms, it is imperative to approach the adoption with due diligence, guided by robust frameworks like NIST’s AI RMF. With a combination of strategic foresight, stakeholder engagement, and capacity building, governments can harness the power of AI to truly transform public service, making it more responsive and citizen-centric than ever before.

The Opportunities and Challenges of AI in Education

There is an incredible amount of excitement and confusion around what this wave of generative AI means for education. These technologies are rapidly improving, and developers are introducing capabilities that would have been considered science fiction just a few years ago. In my latest Education Next piece, I provide an overview of generative AI and explore how this technology will influence how students learn, how teachers work, and ultimately how we structure our education system.

Treading Carefully: The Precautionary Principle in AI Development

The Biden administration recently secured voluntary commitments from major AI companies to manage risks associated with AI, including ensuring their products are safe and being transparent about capabilities and limitations. While a step in the right direction, the pledges are vague, largely reaffirm existing activities, and leave oversight mechanisms unclear. This approach embodies the precautionary principle, which can hamper innovation if taken too far through accumulating requirements. There should be a balance between mitigating risk and advancing AI to realize its benefits. The commitments are also oriented toward avoiding harms rather than proactively leveraging AI's potential to address pressing societal challenges like climate change and learning loss. A more ambitious, benefit-focused approach is needed alongside reasonable precautions.

Leveraging AI’s Immense Capabilities While Safeguarding the Mental Health of Our Youth

The rise of artificial intelligence (AI) coincides with a concerning public health crisis of loneliness and isolation, particularly among young people. According to a CDC survey, a shocking 44% of adolescents are dealing with constant feelings of sadness and hopelessness. As AI technology becomes more prevalent, concerns are growing about its potential to worsen these emotional struggles. This new technological context necessitates a deeper investigation into the role AI might play in intensifying these existing societal issues.