The U.K. AI Safety Summit concluded its first day with a common declaration, the U.S. announcing an AI safety institute, China signaling willingness to communicate on AI safety, and comments from Elon Musk.
The United Kingdom’s global summit on artificial intelligence safety, the AI Safety Summit, began on Nov. 1 and will carry on through Nov. 2, with government officials and leading AI companies from around the world in attendance, including from the United States and China.
U.K. Prime Minister Rishi Sunak is hosting the event, which is taking place at Bletchley Park, nearly 55 miles north of London. The summit comes at the end of a year of rapid advancements in the widespread use and accessibility of AI models following the emergence of OpenAI’s popular chatbot, ChatGPT.
Who is in attendance?
Around 100 guests are expected to attend the AI Safety Summit, including the leaders of many of the world’s prominent AI companies, such as Microsoft President Brad Smith, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Meta chief AI scientist Yann LeCun, Meta president of global affairs Nick Clegg and billionaire Elon Musk.
On a governmental level, leaders from around 27 countries are expected to attend, including United States Vice President Kamala Harris, European Commission President Ursula von der Leyen and United Nations Secretary-General António Guterres.
The U.K. also extended an invitation to China, which has been a major competitor to Western governments and companies in AI development. Chinese Vice Minister of Science and Technology Wu Zhaohui will attend, along with representatives from Alibaba and Tencent.
Initial summit proceedings
The two-day summit’s primary aim is to create dialogue and cooperation between its dynamic group of international attendees to shape the future of AI, with a focus on “frontier AI models.” These are defined as highly capable, multipurpose AI models that equal or surpass the capabilities of the most advanced models currently available.
The first day featured several roundtable discussions on risks to global safety and integrating frontier AI into society. There was also an “AI for good” discussion on the opportunities presented by AI to transform education.
The “Bletchley Declaration” and the U.S.’ AI Safety Institute
During the summit, Britain published the “Bletchley Declaration,” an agreement to boost global cooperation on AI safety. Twenty-eight countries, including the U.S. and China, signed onto the declaration, along with the European Union.
In a separate statement on the declaration, the U.K. government said:
“The Declaration fulfills key summit objectives in establishing shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration.”
Other countries endorsing the statement include Brazil, France, India, Ireland, Japan, Kenya, Saudi Arabia, Nigeria and the United Arab Emirates.
Related: Biden administration issues executive order for new AI safety standards
In addition, U.S. Secretary of Commerce Gina Raimondo said that the United States plans to create its own AI Safety Institute, focusing on the risks of frontier models.
Raimondo said she would “certainly” call on many in the audience “in academia and the industry” to participate in the initiative. She also suggested a formal partnership with the U.K.’s AI Safety Institute.
Musk calls for an AI “referee”
Musk, the owner of social media platform X (formerly Twitter) and CEO of both SpaceX and Tesla, has been a prominent voice in the AI space. He has already participated in talks with global regulators on the subject.
At the summit on Nov. 1, he said the aim was to create a “third-party referee” that could oversee AI development and sound the alarm over any concerns.
According to Reuters, Musk said:
“What we’re really aiming for here is to establish a framework for insight so that there’s at least a third-party referee, an independent referee, that can observe what leading AI companies are doing and at least sound the alarm if they have concerns.”
He also said that before there is “oversight,” there must be “insight,” in reference to global leaders making any mandates. “I think there’s a lot of concern among people in the AI field that the government will sort of jump the gun on rules, before knowing what to do,” Musk said.
Related: UN launches international effort to tackle AI governance challenges
China says it’s ready to bolster communications
Chinese Vice Minister of Science and Technology Wu Zhaohui emphasized that every country has the right to develop and deploy AI.
“We uphold the principles of mutual respect, equality and mutual benefits. Countries, regardless of their size and scale, have equal rights to develop and use AI,” he said.
“We call for global cooperation to share AI knowledge and make AI technologies available to the public on open-source terms.”
He said China is “willing to enhance our dialogue and communication in AI safety” with “all sides.” These remarks come as China and many Western countries, particularly the U.S., have been racing to develop the most advanced AI technology on the market.
The summit will continue into its final day on Nov. 2 with remarks from the U.K.’s prime minister and its technology secretary, Michelle Donelan.