The US and UK have entered a ground-breaking agreement on AI safety. UK Science Minister Michelle Donelan and US Commerce Secretary Gina Raimondo signed the agreement on Monday 1 April 2024 in Washington. It sets out how the two governments will pool technical knowledge, talent, and information on AI safety.
The deal is the world's first bilateral arrangement on AI safety. It reflects growing government awareness of the risks posed by the technology, such as AI being used to mount more damaging cyberattacks or to help design bioweapons.
As Donelan told the Financial Times, “The next year is when we have really got to act quickly because the next generation of AI models is coming out, which could be complete game-changers, and we don’t know the full capabilities that they will offer yet”.
Under the agreement, the UK’s new AI Safety Institute (AISI), set up in November, and its US equivalent, which has yet to begin its work, will exchange expertise through the secondment of researchers from both countries. The institutes will also work together on how to independently assess private AI models built by companies such as OpenAI and Google.
The agreement is modelled on the one between the UK’s Government Communications Headquarters (GCHQ) and the US National Security Agency, which work closely together on intelligence and security matters.
Donelan said, “The fact that the United States, a great AI powerhouse, is signing this agreement with us, the United Kingdom, speaks volumes for how we are leading the way on AI safety.”
She added that, since most of the leading AI companies are currently based in the US, the American government’s expertise was key to understanding the risks of AI and holding companies to their commitments. However, Donelan insisted that while the UK would conduct research on AI safety and ensure guardrails were in place, it did not plan to regulate the technology more broadly in the near term, as it was evolving too rapidly.
Other governments have taken a tougher line. US President Joe Biden has issued an executive order targeting AI models that could threaten national security, and China has issued guidelines to ensure the technology does not challenge its long-standing censorship regime.
Raimondo said AI was “the defining technology of our generation”. The new partnership, she said, would accelerate both institutes’ work together across the full spectrum of risks, whether to national security or to broader society.
She said, “Our partnership makes clear that we aren’t running away from these concerns – we’re running at them. Because of our collaboration, our institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidelines.”
The AISI, backed by the UK government and chaired by the tech investor and entrepreneur Ian Hogarth, has hired researchers such as Google DeepMind’s Geoffrey Irving and Chris Summerfield of the University of Oxford to begin testing existing and unreleased AI models.
DeepMind, Microsoft, Meta, and OpenAI are among the leading tech groups that have signed commitments to open up their latest generative AI models for review by Britain’s AISI. The institute is central to Prime Minister Rishi Sunak’s ambition for the UK to play a leading role in tackling the development of AI.
The testing focuses mainly on the risks associated with misuse of the technology, including cybersecurity, drawing on expertise from the National Cyber Security Centre, part of GCHQ.
Donelan said she and Raimondo planned to discuss shared challenges such as AI’s impact on the elections taking place this year. They would also discuss the need for computing infrastructure for AI, “sharing our skillsets and how we can deepen our collaboration in general to get benefit for the public.”