The Biden administration is ramping up efforts to safeguard American artificial intelligence (AI) from China and Russia and is now planning to impose restrictions on the most advanced AI models, such as the core software powering systems like ChatGPT.
According to a Reuters report, the US Commerce Department is exploring new regulatory avenues to restrict the export of proprietary, or closed-source, AI models, whose underlying software and training data are kept under wraps.
These steps will work in tandem with earlier actions and hinder the export of sophisticated AI tech, both hardware and software, to China and Russia. This is part of a larger strategy intended to slow down Beijing’s progress in AI, particularly when it comes to military use.
However, keeping pace with the rapidly evolving AI landscape presents significant challenges for regulators.
While the Russian Embassy in Washington has yet to respond, the Chinese Embassy has criticized the move as economic coercion, pledging to take necessary measures to protect China's interests.
Currently, major US AI players like Microsoft-backed OpenAI, Google’s DeepMind, and rival Anthropic can freely sell their most powerful closed-source AI models worldwide without government oversight.
However, government and private-sector researchers have serious concerns about threat actors exploiting these models for aggressive cyberattacks or even for developing biological weapons.
According to sources, any potential export controls would likely target countries such as Russia, China, North Korea, and Iran.
These controls could be based on the computing power required to train a model, a threshold proposed in the AI executive order issued last October. No model is believed to have reached that threshold yet, although Google's Gemini Ultra is reportedly close.
The department is still in the early stages of considering these measures, which reflect the US government's push to close gaps in its efforts to counter China's AI ambitions.
However, how feasible and effective such export controls would be remains unclear, largely because of how rapidly the AI landscape is evolving.
Given the potential risks of advanced AI capabilities falling into the wrong hands, experts and officials are emphasizing the importance of addressing these concerns. While some advocate a threshold based on computing power, others argue for a focus on national security risks and the intended uses of AI models.
Regardless of the approach, controlling AI model exports poses significant challenges, especially considering the open-source nature of many models and the difficulty in defining criteria for regulation.
Nevertheless, potential export control measures could impact access to the underlying software of consumer applications like ChatGPT, while leaving downstream applications unaffected.
(With inputs from agencies)