By Joyce Lee
SEOUL (Reuters) - Sixteen companies at the forefront of developing artificial intelligence pledged on Tuesday at a global meeting to develop the technology safely at a time when regulators are scrambling to keep up with rapid innovation and emerging risks.
The companies included U.S. leaders Google (NASDAQ:GOOGL), Meta (NASDAQ:META), Microsoft (NASDAQ:MSFT) and OpenAI, as well as firms from China, South Korea and the United Arab Emirates.
They were backed by a broader declaration from the Group of Seven (G7) major economies, the EU, Singapore, Australia and South Korea at a virtual meeting hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol.
South Korea's presidential office said nations had agreed to prioritise AI safety, innovation and inclusivity.
"We must ensure the safety of AI to ... protect the wellbeing and democracy of our society," Yoon said, noting concerns over risks such as deepfake.
Participants noted the importance of interoperability between governance frameworks, plans for a network of safety institutes, and engagement with international bodies to build on the agreement reached at the first meeting and better address risks.
Companies also committing to safety included Zhipu.ai - backed by China's Alibaba (NYSE:BABA), Tencent, Meituan and Xiaomi (OTC:XIACF) - the UAE's Technology Innovation Institute, Amazon (NASDAQ:AMZN), IBM (NYSE:IBM) and Samsung Electronics (KS:005930).
They committed to publishing safety frameworks for measuring risks, to avoiding models whose risks could not be sufficiently mitigated, and to ensuring governance and transparency.
"It's vital to get international agreement on the 'red lines' where AI development would become unacceptably dangerous to public safety," said Beth Barnes, founder of METR, a group promoting AI model safety, in response to the declaration.
Computer scientist Yoshua Bengio, known as a "godfather of AI", welcomed the commitments but noted that voluntary commitments would have to be accompanied by regulation.
Since November, discussion on AI regulation has shifted from longer-term doomsday scenarios to more practical concerns, such as how to use AI in areas like medicine or finance, said Aidan Gomez, co-founder of large language model firm Cohere, on the sidelines of the summit.
China, which co-signed the Bletchley Declaration on collectively managing AI risks at the first meeting in November, did not attend Tuesday's session but will attend an in-person ministerial session on Wednesday, a South Korean presidential official said.
Tesla (NASDAQ:TSLA) CEO Elon Musk, former Google CEO Eric Schmidt, Samsung Electronics Chairman Jay Y. Lee and other AI industry leaders participated in the meeting.
The next meeting will be held in France, officials said.