
John Bailey
John serves as a strategic advisor and investor, leveraging cross-sector experience in government, philanthropy, and the private sector to drive solutions in AI, technology, workforce development, climate technology, and behavioral health.
He currently serves as a fellow at the Chan Zuckerberg Initiative and a Non-Resident Senior Fellow at the American Enterprise Institute. He has supported a range of investors and philanthropies with designing their strategies, launching initiatives, and developing policy agendas.
John served as a domestic policy advisor in the White House where he coordinated the Bush Administration's efforts during the credit crisis to stabilize $200 billion in student loans and served as a negotiator for the reauthorization of the Trade Adjustment Assistance program. As the Deputy Policy Director to the U.S. Secretary of Commerce, he contributed to the first national pandemic preparedness strategy and worked on policies related to American competitiveness and comprehensive immigration reform.
He co-founded the strategic advisory firm Whiteboard Advisors and served as a senior program officer at the Bill & Melinda Gates Foundation. Earlier in his career, John served as the nation's second Director of Educational Technology at the U.S. Department of Education, where he co-chaired the interagency Advanced Education Technology Initiative. While working for Pennsylvania Governor Tom Ridge, John spearheaded several technology and education initiatives, including a statewide broadband mapping project.
John currently serves on the board of directors for Zearn Math, the Federation of American Scientists, U.S. Digital Response, the Just Trust, and American Policy Ventures. He also serves on advisory boards for Trustible.ai, the XPRIZE, the FPF Center for Artificial Intelligence, and the Tech Talent Project. In addition, John is a Pahara-Aspen Institute Fellow and a moderator and member of the Aspen Global Leadership Network. He is also an alumnus of the American Council on Germany Young Leaders Program. In 2022, 2023, and 2024 the Washingtonian named him as one of Washington's Most Influential People in policy.
He previously served on boards for the Aspen Institute’s Future of Work, Indego Africa, and the Data Quality Campaign.
Featured Writing
AI has become an invaluable partner for me, giving me instant access to a wide array of expert “assistants.” I now have a data analyst, driver with Waymo, brainstorming partner, legislative analyst, medical assistant, start-up advisor, graphics designer, and researcher at my fingertips, ready to help whenever I need specialized skills and knowledge.
A recent social media clash that erupted between Elon Musk, Vivek Ramaswamy, and Trump loyalists over high-skilled immigration reform exposed deep ideological rifts within the Republican coalition. But the importance of the debate over immigration policy and the American education system extends far beyond social media — solving these problems is critical to America’s competitiveness. By combining pragmatic immigration reforms, bold educational investments, and innovative AI-driven learning, we can forge the “Talent Dominance” agenda we desperately need.
In an analysis of Sal Khan's "Brave New Words" and the evolving landscape of AI in education, I present a case for AI-powered tutoring as a transformative force. Recent advancements in speech, image analysis, and emotional intelligence, combined with promising research studies showing significant learning gains, suggest AI tutoring could help address our urgent educational challenges like pandemic learning loss and chronic absenteeism.
I’m deeply honored to be appointed by Governor Youngkin to serve on Virginia’s inaugural AI Task Force. This distinguished group of leaders from academia, nonprofits, and industry will advise policymakers on harnessing AI to transform government services, streamline regulations, and position Virginia as a leader in responsible AI innovation. As we unlock AI’s potential to improve efficiency and reduce burdens on state agencies, we must also ensure thoughtful safeguards to protect privacy, fairness, and public trust.
Gov. Gavin Newsom’s recent veto of California’s Senate Bill (SB) 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, reignited the debate about how best to regulate the rapidly evolving field of artificial intelligence. Newsom’s veto illustrates a cautious approach to regulating a new technology and could pave the way for more pragmatic AI safety policies.
An emerging body of research suggests that large language models (LLMs) can “deceive” humans by offering fabricated explanations for their behavior or concealing the truth of their actions from human users. The implications are worrisome, particularly because researchers do not fully understand why or when LLMs engage in this behavior.
I was excited to contribute to a group of over 20 prominent AI researchers, legal experts, and tech industry leaders from institutions including OpenAI, the Partnership on AI, Microsoft, the University of Oxford, a16z crypto, and MIT on a paper proposing "personhood credentials" (PHCs) as a potential solution to the growing challenge of AI-powered online deception. PHCs would provide a privacy-preserving way for individuals to verify their humanity online without disclosing personal information. While implementation details remain to be determined, the core concept warrants serious consideration from policymakers and tech leaders as AI capabilities rapidly advance, threatening to erode the trust and accountability essential for societies to function.
The increasing capabilities of AI have sparked an important debate: Should the "weights" that define a model's intelligence be openly accessible or tightly controlled? This article dives deep into the arguments on both sides. Proponents say open-weight models accelerate innovation, enable greater scrutiny for safety, and democratize AI capabilities. Critics warn of risks like misuse for disinformation or military purposes by adversaries. The piece examines recent developments like frontier open-weight models from Meta and Mistral, Mark Zuckerberg's case for openness, concerns about China's AI ambitions, and the US government's cautious approach.
The recent AI policy roadmap from the bipartisan AI working group in Congress strikes a thoughtful balance between promoting AI innovation and addressing potential risks. It lays out a nuanced approach including increased AI safety efforts, crucial investments in domestic STEM talent, protections for children in the age of AI, and "grand challenge" programs to spur breakthroughs — all while avoiding hasty overregulation that could stifle progress.
In the rapidly evolving landscape of artificial intelligence (AI), the dialogue often veers between the extremes of stringent regulation, like the European Union’s AI Act, and laissez-faire approaches that risk unbridled technological advances without sufficient safeguards. Amidst this polarized debate, the Coalition for Health AI (CHAI) has emerged as a promising alternative approach that addresses the ethical, social, and economic complexities introduced by AI, while also supporting continued innovation.
It's difficult to understand some technologies because they're better experienced than described. I've found GenAI to be one example where it's difficult to grasp the full range of capabilities unless you see some of it in action. Over the last year, I've given a number of presentations that tried to contextualize GenAI for the audience by demonstrating relevant use cases. I compiled them in this long master deck, which I periodically update and am sharing in the hope that it may spark some ideas for you.
I had a great time joining James Pethokoukis on his podcast, Faster, Please! We touched on a number of areas including how AI can help improve teaching and learning.
The capabilities of AI promise not only efficiency but potentially a more accessible interface for citizens. As governments begin to integrate these technologies into their service-delivery mechanisms, it is imperative to approach the adoption with due diligence, guided by robust frameworks like NIST’s AI RMF. With a combination of strategic foresight, stakeholder engagement, and capacity building, governments can harness the power of AI to truly transform public service, making it more responsive and citizen-centric than ever before.
There is an incredible amount of excitement and confusion around what this wave of generative AI means for education. These technologies are rapidly improving, and developers are introducing capabilities that would have been considered science fiction just a few years ago. In my latest Education Next piece, I provide an overview of generative AI and explore how this technology will influence how students learn, how teachers work, and ultimately how we structure our education system.
Over the past decade, many have been disappointed by the unfulfilled promises of technology transforming education. However, recent advancements in AI, such as OpenAI's GPT-4, may signal a genuine breakthrough. These large language models offer far more sophisticated capabilities: they function as reasoning engines, use natural language as an interface, and are scaling rapidly thanks to investment from tech giants. As a result, AI-powered tutoring and teaching assistants are emerging to provide individualized learning, automate administrative tasks, and offer constructive critiques on student writing. While there are limitations, future iterations are expected to address these issues. Harnessing AI's potential could lead to a future where education is more effective, equitable, and personalized, with teachers focusing on fostering meaningful connections with their students.
The COVID-19 pandemic has exposed the urgency for state governments to improve digital service delivery and address long-standing technology challenges. The Tech Talent Project, in collaboration with AEI, the Beeck Center for Social Impact + Innovation at Georgetown University, and New America, has released a report offering guidance for states to build technical capacity and avoid past pitfalls. I had the privilege of co-chairing this effort with Cecilia Muñoz, former Assistant to the President and Director of the White House Domestic Policy Council.
The rise of AI technologies like ChatGPT promises to boost global GDP by 7% in a decade, according to Goldman Sachs. However, this may disrupt 63% of US jobs, including higher-skilled professions like auditors and interpreters. These AI tools will change the nature of work, shifting focus from mundane tasks to more advanced, human-centric activities. Surprisingly, AI could benefit the least skilled workers, narrowing the performance gap among employees. While job displacement is inevitable, the larger disruption may be the creation of new, hybrid jobs combining domain expertise with AI skills. Policymakers must proactively invest in education and workforce training to ease these transitions and capitalize on AI's productivity potential.
The US Commodity Futures Trading Commission (CFTC) continues to wrestle with how best to regulate prediction markets. The commission is expected to decide as soon as this week whether the startup Kalshi can offer a market on the outcome of the upcoming midterms. Election prediction markets have proven to be a powerful tool for forecasting elections and are typically more accurate, timely, and complete than conventional methods. Kalshi’s proposal does not pose a risk to the integrity of the US election system. Approving Kalshi’s submission would be a step in the right direction for the commission and would promote the public interest.
Senate leaders are expected to release updated text on a slimmed-down set of bills to bolster the US semiconductor chip industry. The measures will likely include $52 billion in subsidies and an investment tax credit to boost US manufacturing, but the rest of the Bipartisan Innovation Act (BIA) remains in limbo at a time when more urgent action is needed. Strengthening America’s leadership in science and innovation tomorrow will depend on three crucial investments today: bolstering semiconductor manufacturing, boosting federal R&D, and addressing the talent gap.
The pandemic and economic disruptions have accelerated the adoption of automation technologies that will introduce important benefits to businesses and consumers but may also create disruptions for many workers and communities. Policymakers and leaders can take steps now to help navigate these disruptive changes.
For the first time in over a decade, Congress is considering legislation to strengthen US leadership in science and innovation by bolstering domestic research and development (R&D) and manufacturing capabilities to build supply-chain resilience and reduce dependence on China. Congress should also use this opportunity to pass complementary immigration reforms that will allow the US to meet the challenges of the 21st century.
For the first time, the presidential transition process will receive technology-focused agency review briefs — written by expert technologists and policymakers, who themselves served in the agencies — to assess the current technological capacity, critical items for the first 200 days, and key technical leadership positions for 2021.
I’ve had the chance to take a few aerial flights with FLYNYON and loved experiencing the city from a different perspective. But nothing prepared me for the beauty of the city at night, nor for how powerful it would be to see the Tribute in Light on the 18th anniversary of the horrific attacks of 9/11. It is an incredible tribute to those lost, but also a symbol of resolve and resiliency.
For decades Americans have had the chance to invest in emerging markets all around the world. Now they have the chance to invest in America’s own emerging markets and finance the comeback story so many communities have been waiting to write.