The open-source gambit: How America plans to outpace AI rivals by democratising tech

Jovan Kurbalija


On 23 July, the United States announced an AI Action Plan with 103 policy recommendations. It does not bring many surprises. The Plan’s keyword is the AI race, mainly with China, summarised in the words of David Sacks, Trump’s Special Advisor for AI: ‘We believe we are in an AI race, and we want the United States to win that race.’


Key developments to follow:

  • Open source and safety: Can open-weight models coexist with a national security focus?
  • Bias ambiguity: Who defines truth in AI outputs?
  • Global fragmentation: Will US AI domination alienate allies?
  • Labour realities: Will tax incentives offset AI-driven job losses?
  • Deregulation and monopolies: Will fewer rules lead to more centralised AI?

Winner: Silicon Valley and tech companies

Silicon Valley and the tech industry are positioned to gain an upper hand and significant freedom in AI developments. The plan explicitly states that to maintain US global leadership, America’s private sector must be ‘unencumbered by bureaucratic red tape’. This pro-business approach has been central to Trump’s tech agenda since his inauguration in January.

In addition to less regulation and more investment, a critical development for tech companies will be the push to review and potentially limit legal liabilities that could hinder their actions. The plan recommends reviewing all Federal Trade Commission (FTC) investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation. The plan also calls for reviewing FTC orders and injunctions with a potentially burdensome effect on AI development.

With such provisions, the US government reinforces nascent industry protection analogous to Section 230 from 1996, which shielded online platforms from legal liability for the content they host. While in 1996 this type of regulation protected small internet companies, today it can entrench the monopolies of tech giants such as Google, Microsoft, and OpenAI.

Geopolitics: Preparing for the AI race against China

Although China is mentioned by name only a few times, the entire document is framed as a strategic competition between two AI superpowers. President Trump declares a national security imperative for the United States to ‘achieve and maintain unquestioned and unchallenged global technological dominance’. The plan asserts that the winner of the AI race will set global standards and gain significant economic and military advantages.

The geopolitical race permeates all layers of the AI pyramid, from hardware infrastructure (strengthening semiconductor export controls) to data access (building world-class scientific datasets), algorithms (promoting open-source AI), and the global application of AI technology (platforms and tools).

AI Governance Pyramid (Kurbalija, 2024)

A specific action item includes conducting evaluations of frontier models from the People’s Republic of China to check for alignment with Chinese Communist Party talking points and censorship.

Open source AI: From tech labs to geopolitics

The main news of the Plan is putting open-source AI at the centre of the geostrategic race. It represents a significant turn, as open-source principles are not typically associated with the national security agenda that dominates the document. The plan views open-source models as having geostrategic value because they can become global standards in some areas of business and in academic research worldwide. This focus aims to ensure America has leading open models founded on American values.

The strategy goes beyond open-source code by explicitly encouraging open-weight AI, where the model’s parameters are freely available. According to the Plan, this is intended to empower startups, which can innovate without dependency on closed-model providers, and to benefit academic research, which relies on access to model weights for rigorous experiments.

The openness shift can also be interpreted in the context of the race with China. As Chinese tech firms such as DeepSeek increasingly release open-source models with millions of users, US policymakers see an opportunity to undercut rivals by seeding the world with American AI tech and standards.

The AI openness approach will spark a heated debate around the dual nature of open-source AI. The benefits are evident in spreading innovation and fostering economic and societal inclusion. However, there are two main types of critical arguments. One is that adversaries could spot vulnerabilities in open-source code more easily. The counter-argument is that open-source code can also be inspected and stress-tested by a wide community, which may surface vulnerabilities faster (a cybersecurity benefit).

The second criticism comes from the ‘existential threat’ school. For example, Nobel Prize laureate Geoffrey Hinton argued that releasing foundation model weights is like making nuclear materials freely available.

Apart from this sometimes dramatic criticism, the Plan’s open-source focus is good news for democratising AI and opening it to citizens, countries, and communities worldwide.

Call for AI interpretability

The US Action Plan also addresses one of the central technical challenges of the AI era: fully understanding how Large Language Models (LLMs) operate. The document notes that the inner workings of frontier AI systems are poorly understood and that technologists often cannot explain why a model produced a specific output.

This lack of predictability is framed as a barrier to using advanced AI in high-stakes defence and national security applications. To address this, the plan recommends launching a technology development program led by DARPA to advance AI interpretability, control, and robustness.

Diplomacy: More competition – less cooperation

The Plan’s International AI Diplomacy and Security section focuses more on securing allies’ support for the AI race than on traditional international cooperation. The approach is a mix of carrots and sticks. Allies can benefit from exporting America’s full AI stack, including hardware, models, and software.

Regarding export controls on sensitive technologies, the plan suggests using tools like the Foreign Direct Product Rule and secondary tariffs to achieve international alignment. One recommendation proposes establishing end-use monitoring in countries with a high risk of diversion of advanced, US-origin AI compute.

This requirement will leave little space for manoeuvre and put additional pressure on the EU, India, Japan, and many other countries trying to find their place and space in emerging AI geopolitics beyond the bipolar division between China and the USA. In relations with the EU, a fundamental difference exists between the EU’s heavy regulation (the EU AI Act) and the USA’s no-regulation approach to AI. 

This new framing of US AI diplomacy is sceptical of multilateral initiatives. The document states that too many of these efforts have advocated for burdensome regulations, vague ‘codes of conduct’ that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies. It explicitly names the UN, OECD, G7, G20, and ITU in this context. Surprisingly, the Internet Corporation for Assigned Names and Numbers (ICANN) is also listed, though it does not deal with AI.

Risks: Centrality of national security

The AI Action Plan addresses risks primarily through the lens of national security. The main concern is that the most powerful AI systems could pose novel threats, such as aiding in the development of chemical, biological, radiological, nuclear, or explosive (CBRNE) weapons. The plan also highlights the risk of cyberattacks and adversarial threats against US critical infrastructure.

Other risks identified include:

  • Labour: AI-related job displacement is considered a significant risk. The plan calls for guidance on using state Rapid Response funds to proactively upskill workers at risk and retrain those impacted by displacement.
  • Content: Malicious deepfakes are outlined as a significant challenge to the legal system. The document warns that fake audio, video, or photo evidence could be used “to deny justice to plaintiffs and defendants”. It calls for giving courts and law enforcement new tools and standards to combat this.

Complex AI triangle: White House – California – Silicon Valley

The plan signals potential friction between federal policy and state-level regulations. It recommends that the federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds. It further directs federal agencies with discretionary funding to consider a state’s AI regulatory climate when making funding decisions and to limit funding where such regulations might hinder the effectiveness of that funding.

Most likely, this provision will affect states like California that have moved forward with AI regulatory frameworks. It may create new tensions in the triangle between the White House, Silicon Valley, and the authorities of California, the state hosting most of the leading AI companies.

Bias: No (AI biases) + Yes (American values)

The Plan argues that AI systems must be free from ideological bias and be designed to pursue objective truth rather than social engineering agendas. At the same time, the Plan calls for following American values, which are, like all values and ideologies, forms of bias.

Here, the Plan resurfaces deeper philosophical issues around claims of ‘objective truth’. Who will decide what ‘truth’ is? What counts as factual information?

To enforce a no-bias approach, the strategy provides specific guidelines on content policy. A primary recommendation is for the National Institute of Standards and Technology (NIST) to revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.

Furthermore, federal procurement guidelines must be updated to ensure the government only contracts with frontier model developers whose systems are objective and free from top-down ideological bias.

Skills: Apprenticeship and training for AI

A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capacity through education, training, and apprenticeships. The plan operationalises this through several actions, including issuing Treasury Department guidance to clarify that many AI literacy and skill development programs can qualify for tax-free reimbursement from employers under Section 132 of the Internal Revenue Code.

The plan also calls for a national initiative to identify high-priority occupations essential for building AI infrastructure, such as electricians. 

One of the most explicit recommendations on the overall education front is to grow our Senior Military Colleges into hubs of AI research, development, and talent building, teaching core AI skills and literacy to future generations.

Regulation: Removing rules and new conditions

As expected, the AI Action Plan calls for removing regulations perceived as a constraint on growth, in a reversal of the previous administration’s approach. The document quotes Vice President Vance, who argued that AI regulation would unfairly benefit incumbents.

However, those incumbents, the AI giants, could also be empowered by a lack of regulation, such as the absence of antitrust rules. In such a scenario, their unchecked power could foster new AI monopolies with far-reaching consequences beyond the economy, on the social and political fabric of the United States and the world.

The Plan uses an open-source approach as a potential counterweight to the risk of the concentration of AI power. Whether it will work is the main challenge for the Plan and overall global AI governance. 

Data: Central geostrategic asset 

The Action Plan reiterates the critical importance of data, labelling high-quality data as a “national strategic asset”. It calls for the US to “lead the creation of the world’s largest and highest quality AI-ready scientific datasets”. The environmental impact of the massive data centres required for this is addressed through a policy of streamlining permitting under laws like the National Environmental Policy Act (NEPA) and the Clean Water Act. The plan also acknowledges security risks to data, with data poisoning listed as a potential malicious activity that secure AI systems must be designed to detect.
