Featured Writing
Gov. Gavin Newsom’s recent veto of California’s Senate Bill (SB) 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, reignited the debate about how best to regulate the rapidly evolving field of artificial intelligence. Newsom’s veto illustrates a cautious approach to regulating a new technology and could pave the way for more pragmatic AI safety policies.
An emerging body of research suggests that large language models (LLMs) can “deceive” humans by offering fabricated explanations for their behavior or concealing the truth of their actions from human users. The implications are worrisome, particularly because researchers do not fully understand why or when LLMs engage in this behavior.
I was excited to join a group of over 20 prominent AI researchers, legal experts, and tech industry leaders from institutions including OpenAI, the Partnership on AI, Microsoft, the University of Oxford, a16z crypto, and MIT in coauthoring a paper that proposes "personhood credentials" (PHCs) as a potential solution to the growing challenge of AI-powered online deception. PHCs would provide a privacy-preserving way for individuals to verify their humanity online without disclosing personal information. While implementation details remain to be determined, the core concept warrants serious consideration from policymakers and tech leaders as AI capabilities rapidly advance, threatening to erode the trust and accountability essential for societies to function.
The increasing capabilities of AI have sparked an important debate: Should the "weights" that define a model's intelligence be openly accessible or tightly controlled? This article dives deep into the arguments on both sides. Proponents say open-weight models accelerate innovation, enable greater scrutiny for safety, and democratize AI capabilities. Critics warn of risks like misuse for disinformation or military purposes by adversaries. The piece examines recent developments like frontier open-weight models from Meta and Mistral, Mark Zuckerberg's case for openness, concerns about China's AI ambitions, and the US government's cautious approach.
The recent AI policy roadmap from the bipartisan AI working group in Congress strikes a thoughtful balance between promoting AI innovation and addressing potential risks. It lays out a nuanced approach including increased AI safety efforts, crucial investments in domestic STEM talent, protections for children in the age of AI, and "grand challenge" programs to spur breakthroughs, all while avoiding hasty overregulation that could stifle progress.
In the rapidly evolving landscape of artificial intelligence (AI), the dialogue often veers between the extremes of stringent regulation, like the European Union’s AI Act, and laissez-faire approaches that risk unbridled technological advances without sufficient safeguards. Amidst this polarized debate, the Coalition for Health AI (CHAI) has emerged as a promising alternative approach that addresses the ethical, social, and economic complexities introduced by AI, while also supporting continued innovation.
It's difficult to understand some technologies because they're better experienced than described. I've found GenAI to be one example where it's difficult to grasp the full range of capabilities unless you see some of it in action. Over the last year, I've given a number of presentations that tried to contextualize GenAI for the audience by demonstrating relevant use cases. I compiled them in this long master deck, which I periodically update and am sharing in the hope that it may spark some ideas for you.
I had a great time joining James Pethokoukis on his podcast, Faster, Please! We touched on a number of areas including how AI can help improve teaching and learning.
As we close out 2023, I find myself reflecting on a year filled with both personal achievements and significant developments in the world of AI. It was an honor to learn that several of my written works were recognized and included in various "best of" lists.
I am thrilled to share that I have been appointed to the board of the Federation of American Scientists (FAS). It is an incredible honor to have the opportunity to contribute to the vital work of this important organization, particularly during a time when science and technology are progressing at an unprecedented pace.
The capabilities of AI promise not only efficiency but potentially a more accessible interface for citizens. As governments begin to integrate these technologies into their service-delivery mechanisms, it is imperative to approach the adoption with due diligence, guided by robust frameworks like NIST’s AI RMF. With a combination of strategic foresight, stakeholder engagement, and capacity building, governments can harness the power of AI to truly transform public service, making it more responsive and citizen-centric than ever before.
There is an incredible amount of excitement and confusion around what this wave of generative AI means for education. These technologies are rapidly improving, and developers are introducing capabilities that would have been considered science fiction just a few years ago. In my latest Education Next piece, I provide an overview of generative AI and explore how this technology will influence how students learn, how teachers work, and ultimately how we structure our education system.
Over the past decade, many have been disappointed by the unfulfilled promises of technology transforming education. However, recent advancements in AI, such as OpenAI's GPT-4, may signal a genuine breakthrough. These large language models are far more capable than earlier systems: they function as reasoning engines, use language as an interface, and are scaling rapidly thanks to investment from tech giants. As a result, AI-powered tutoring and teaching assistants are emerging to provide individualized learning, automate administrative tasks, and offer constructive critiques on student writing. While there are limitations, it is expected that future iterations will address these issues. Harnessing AI's potential could lead to a future where education is more effective, equitable, and personalized, with teachers focusing on fostering meaningful connections with their students.
The COVID-19 pandemic has exposed the urgency for state governments to improve digital service delivery and address long-standing technology challenges. The Tech Talent Project, in collaboration with AEI, the Beeck Center for Social Impact + Innovation at Georgetown University, and New America, has released a report offering guidance for states to build technical capacity and avoid past pitfalls. I had the privilege of co-chairing this effort with Cecilia Muñoz, former Assistant to the President and Director of the White House Domestic Policy Council.
The rise of AI technologies like ChatGPT promises to boost global GDP by 7% in a decade, according to Goldman Sachs. However, this may disrupt 63% of US jobs, including higher-skilled professions like auditors and interpreters. These AI tools will change the nature of work, shifting focus from mundane tasks to more advanced, human-centric activities. Surprisingly, AI could benefit the least skilled workers, narrowing the performance gap among employees. While job displacement is inevitable, the larger disruption may be the creation of new, hybrid jobs combining domain expertise with AI skills. Policymakers must proactively invest in education and workforce training to ease these transitions and capitalize on AI's productivity potential.
The US Commodity Futures Trading Commission (CFTC) continues to wrestle with how best to regulate prediction markets. The commission is expected to decide as soon as this week whether the startup Kalshi can offer a market on the outcome of the upcoming midterms. Election prediction markets have proven to be a powerful tool for forecasting elections and are typically more accurate, timely, and complete than conventional methods. Kalshi's proposal does not pose a risk to the integrity of the US election system. Approving Kalshi's submission would be a step in the right direction for the commission and would serve the public interest.
On September 22, the Senate confirmed Arati Prabhakar as White House Office of Science and Technology Policy (OSTP) director, the first woman of color and immigrant to hold the position. Prabhakar is uniquely suited for navigating these challenges. She previously headed the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology. She held several roles in Silicon Valley, including with the early-stage venture firm US Venture Partners, and she recently founded the nonprofit Actuate, which seeks to bring new actors to the table in developing solutions for areas such as climate, health, and trustworthy data. She will be able to draw on that public and private sector experience to shape how agencies stand up these new programs and design the guidelines and rules for new funding streams.
Senate leaders are expected to release updated text on a slimmed-down set of bills to bolster the US semiconductor chip industry. The measures will likely include $52 billion in subsidies and an investment tax credit to boost US manufacturing, but the rest of the Bipartisan Innovation Act (BIA) remains in limbo at a time when more urgent action is needed. Strengthening America's leadership in science and innovation tomorrow will depend on three crucial areas of investment today: bolstering semiconductor manufacturing, boosting federal R&D, and addressing the talent gap.
The pandemic and economic disruptions have accelerated the adoption of automation technologies that will introduce important benefits to businesses and consumers but may also create disruptions for many workers and communities. Policymakers and leaders can take steps now to help navigate these disruptive changes.
For the first time in over a decade, Congress is considering legislation to strengthen US leadership in science and innovation by bolstering domestic research and development (R&D) and manufacturing capabilities to build supply-chain resilience and reduce dependence on China. Congress should also use this opportunity to pass complementary immigration reforms that will allow the US to meet the challenges of the 21st century.
For the first time, the presidential transition process will receive technology-focused agency review briefs — written by expert technologists and policymakers, who themselves served in the agencies — to assess the current technological capacity, critical items for the first 200 days, and key technical leadership positions for 2021.
I’ve had the chance to do a few aerial flights with FLYNYON and loved experiencing the city from a different perspective. But nothing prepared me for the beauty of the city at night, nor how powerful it would be to see the Tribute in Light on the 18th anniversary of the horrific attacks of 9/11. An incredible tribute to those lost, but also a symbol of resolve and resiliency.
For decades Americans have had the chance to invest in emerging markets all around the world. Now they have the chance to invest in America’s own emerging markets and finance the comeback story so many communities have been waiting to write.
In late 2015, upon the birth of their first child, Facebook co-founder Mark Zuckerberg and his spouse, pediatrician Priscilla Chan, announced that they would dedicate 99 percent of their Facebook holdings — at the time, an estimated $45 billion — to “improving this world.” Who are the key staff members working alongside Mr. Zuckerberg and Dr. Chan to spend tens of billions of dollars?
John Bailey: The CZI education fellow’s former gigs include director of educational technology at the Department of Education, special adviser to President George W. Bush, and start-up consultant. He also spent a year at the Bill & Melinda Gates Foundation, managing $20 million in advocacy grants.