OpEd

Why the Veto of California Senate Bill 1047 Could Lead to Safer AI Policies

Gov. Gavin Newsom’s recent veto of California’s Senate Bill (SB) 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, reignited the debate about how best to regulate the rapidly evolving field of artificial intelligence. Newsom’s veto illustrates a cautious approach to regulating a new technology and could pave the way for more pragmatic AI safety policies.

The Promise and Limitations of AI in Education: A Nuanced Look at Emerging Research

New research reveals the transformative potential and complex realities of AI in education. From AI-powered grading systems that match human accuracy while saving countless hours, to AI tutors that both enhance and hinder learning, to AI's ability to reason like humans and predict social science experiments, these studies paint a nuanced picture of AI's role in the future of education. As the field rapidly evolves, enthusiasts and skeptics alike must grapple with the profound implications of AI for teaching and learning and ensure their views are shaped by research.

Preserving Trust and Freedom in the Age of AI

I was excited to contribute to a group of over 20 prominent AI researchers, legal experts, and tech industry leaders from institutions including OpenAI, the Partnership on AI, Microsoft, the University of Oxford, a16z crypto, and MIT on a paper proposing "personhood credentials" (PHCs) as a potential solution to the growing challenge of AI-powered online deception. PHCs would provide a privacy-preserving way for individuals to verify their humanity online without disclosing personal information. While implementation details remain to be determined, the core concept warrants serious consideration from policymakers and tech leaders as AI capabilities rapidly advance, threatening to erode the trust and accountability essential for societies to function.

Open AI Models: A Step Toward Innovation or a Threat to Security?

The increasing capabilities of AI have sparked an important debate: Should the "weights" that define a model's intelligence be openly accessible or tightly controlled? This article dives deep into the arguments on both sides. Proponents say open-weight models accelerate innovation, enable greater scrutiny for safety, and democratize AI capabilities. Critics warn of risks like misuse for disinformation or military purposes by adversaries. The piece examines recent developments like frontier open-weight models from Meta and Mistral, Mark Zuckerberg's case for openness, concerns about China's AI ambitions, and the US government's cautious approach.

Charting a Bipartisan Course: The Senate’s Roadmap for AI Policy

The recent AI policy roadmap from the bipartisan AI working group in Congress strikes a thoughtful balance between promoting AI innovation and addressing potential risks. It lays out a nuanced approach that includes increased AI safety efforts, crucial investments in domestic STEM talent, protections for children in the age of AI, and "grand challenge" programs to spur breakthroughs, all while avoiding hasty overregulation that could stifle progress.

An AI Healthcare Coalition Suggests a Better Way of Approaching Responsible AI

In the rapidly evolving landscape of artificial intelligence (AI), the dialogue often veers between the extremes of stringent regulation, like the European Union’s AI Act, and laissez-faire approaches that risk unbridled technological advances without sufficient safeguards. Amidst this polarized debate, the Coalition for Health AI (CHAI) has emerged as a promising alternative approach that addresses the ethical, social, and economic complexities introduced by AI, while also supporting continued innovation.  

Autonomous Vehicles: A Safer Road Ahead

Recent studies reveal autonomous vehicles (AVs) boast impressive safety records, with far fewer and less severe crashes compared to conventional cars. Advanced AV systems that constantly monitor surroundings, precisely follow traffic laws, and avoid dangerous situations contribute to their superior performance. With rapid technological improvements enabling AVs to learn from analyzing massive amounts of human driving data, they are poised to drastically reduce traffic accidents and fatalities. Further real-world experience for AVs promises to refine their systems and enhance safety even more. Regulators should enable this progress while enacting reasonable safeguards.

ChatGOV: Harnessing the Power of AI for Better Government Service Delivery

The capabilities of AI promise not only efficiency but potentially a more accessible interface for citizens. As governments begin to integrate these technologies into their service-delivery mechanisms, it is imperative to approach the adoption with due diligence, guided by robust frameworks like NIST’s AI RMF. With a combination of strategic foresight, stakeholder engagement, and capacity building, governments can harness the power of AI to truly transform public service, making it more responsive and citizen-centric than ever before.

Leveraging AI’s Immense Capabilities While Safeguarding the Mental Health of Our Youth

The rise of artificial intelligence (AI) coincides with a concerning public health crisis of loneliness and isolation, particularly among young people. According to a CDC survey, a shocking 44% of adolescents report persistent feelings of sadness and hopelessness. As AI technology becomes more prevalent, concerns are growing about its potential to worsen these emotional struggles. This new technological context necessitates a deeper investigation into the role AI might play in intensifying these existing societal issues.

The Promise of Personalized Learning Never Delivered. Today’s AI Is Different

Over the past decade, many have been disappointed by the unfulfilled promises of technology transforming education. However, recent advancements in AI, such as OpenAI's GPT-4, may signal a genuine breakthrough. These large language models are far more capable than their predecessors, function as reasoning engines, use language as an interface, and are scaling rapidly thanks to investment from tech giants. As a result, AI-powered tutoring and teaching assistants are emerging to provide individualized learning, automate administrative tasks, and offer constructive critiques of student writing. While there are limitations, future iterations are expected to address these issues. Harnessing AI's potential could lead to a future where education is more effective, equitable, and personalized, with teachers focusing on fostering meaningful connections with their students.

From Automation to Reinvention: How AI Is Shifting the Nature of Work

The rise of AI technologies like ChatGPT promises to boost global GDP by 7% in a decade, according to Goldman Sachs. However, this may disrupt 63% of US jobs, including higher-skilled professions like auditors and interpreters. These AI tools will change the nature of work, shifting focus from mundane tasks to more advanced, human-centric activities. Surprisingly, AI could benefit the least skilled workers, narrowing the performance gap among employees. While job displacement is inevitable, the larger disruption may be the creation of new, hybrid jobs combining domain expertise with AI skills. Policymakers must proactively invest in education and workforce training to ease these transitions and capitalize on AI's productivity potential.

Building Trust in AI: A Call for Clear Guidelines to Harness Its Benefits

Policymakers have yet to fully understand the profound effect the AI revolution will have on our economy, national security, and social well-being. AI, for all its benefits, will also present new risks, challenges, and tensions with our values. To prevent the worst consequences of AI, clear regulations are needed to harness its benefits while managing its risks.

Assessing the Threat of AI Misuse in Disinformation Campaigns

OpenAI, Georgetown University’s Center for Security and Emerging Technology, and the Stanford Internet Observatory wrote a provocative report surfacing the disinformation risks posed by AI. The report provides a thoughtful framework for thinking about the threat of AI-enabled influence operations and some of the steps that can minimize those risks, along with their associated trade-offs.

Quarantines, Not School Closures, Led to Devastating Losses in Math and Reading

The recent dismal results from the National Assessment of Educational Progress introduced a new learning-loss puzzle. It was assumed that states with more remote instruction would have lower academic scores than those with more in-person classes during the pandemic. But states that had more days of in-class learning also saw declines. The likely reason is the hidden disruption to student learning caused by COVID quarantines.

Meet ChatGPT: The AI Chatbot That Can Write Code, Pass Exams, and Generate Business Ideas

Just over a week ago, OpenAI introduced ChatGPT, a cutting-edge artificial intelligence (AI) chatbot that uses a massive dataset to generate human-like responses to text-based inputs. In just five days, the chatbot reached over one million users, a milestone that took Facebook almost a year to achieve when it first launched. As we continue to develop and advance this technology, it will undoubtedly have a significant effect on the future of work and the way we approach tasks and responsibilities. It is important that we carefully consider how we can use AI to its full potential while also mitigating any possible negative effects. By doing so, we can ensure that AI will be a powerful and positive force for change.

A New Apprenticeship Requirement Could Slow Federally Funded Energy Projects

The IRA’s new tax incentives create a massive demand for apprenticeships, which should be celebrated. However, federal and state policymakers must act quickly to set up those apprenticeship programs so there are no delays in building out the new climate projects ushered in by the IRA.

Federal Regulators Should Approve Election Prediction Markets

The US Commodity Futures Trading Commission (CFTC) continues to wrestle with how best to regulate prediction markets. The commission is expected to decide as soon as this week whether the startup Kalshi can offer a market on the outcome of the upcoming midterms. Election prediction markets have proven to be a powerful tool for forecasting elections and are typically more accurate, timely, and complete than conventional methods. Kalshi’s proposal does not pose a risk to the integrity of the US election system. Approving Kalshi’s submission would be a step in the right direction for the commission and would promote the public interest.

Why Arati Prabhakar Is Uniquely Suited to Lead OSTP

On September 22, the Senate confirmed Arati Prabhakar as White House Office of Science and Technology Policy (OSTP) director, the first woman of color and immigrant to hold the position. Prabhakar is uniquely suited to navigate the challenges ahead. She previously headed the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology. She held several roles in Silicon Valley, including with the early-stage venture firm US Venture Partners, and she recently founded the nonprofit Actuate, which seeks to bring new actors to the table in developing solutions for areas such as climate, health, and trustworthy data. She will be able to draw on that public and private sector experience to shape how agencies stand up new programs and design the guidelines and rules for new funding streams.

Implementing Federal Innovation Programs: A Road Map for States

An unexpectedly productive Congress has passed the Infrastructure Investment and Jobs Act (IIJA), CHIPS Act, and Inflation Reduction Act (IRA), which aim to improve America’s infrastructure, boost competitiveness with China, and accelerate new climate technologies. Attention now shifts to implementation. State and community leaders should begin work now to prepare for funding competitions on the horizon.

Congress Must Pass Innovation Legislation, Despite Hurdles

Senate leaders are expected to release updated text of a slimmed-down set of bills to bolster the US semiconductor chip industry. The measures will likely include $52 billion in subsidies and an investment tax credit to boost US manufacturing, but the rest of the Bipartisan Innovation Act (BIA) remains in limbo at a time when more urgent action is needed. Strengthening America’s leadership in science and innovation tomorrow will depend on three crucial areas of investment today: bolstering semiconductor manufacturing, boosting federal R&D, and addressing the talent gap.