From Nobel laureate Geoffrey Hinton to former Google CEO Eric Schmidt, experts warn about the risks that AI poses
Wency Chen and Ann Cao in Shanghai. Published: 26 Jul 2025
Renowned scientists and business leaders from the US and China are calling for greater collaboration in the field of artificial intelligence amid growing concerns that humanity might lose control of the rapidly evolving technology.
At the World Artificial Intelligence Conference (WAIC), which commenced in Shanghai on Saturday, Nobel laureate and AI pioneer Geoffrey Hinton proposed the establishment of “an international community of AI safety institutes and associations that works on techniques for training AI to be benevolent”.
In his talk, Hinton acknowledged the challenges of international cooperation owing to divergent national interests on issues such as cyberattacks, lethal autonomous weapons, and the creation of fake videos that manipulate public opinion. However, he emphasised a critical common ground: “No country wants AI to take over”.
Hinton warned that AI was akin to a “cute tiger cub” kept as a pet by humans, but which could become dangerous as it matured. He stressed the importance of preventing this scenario through international cooperation, drawing parallels to US-Soviet collaboration on nuclear non-proliferation during the Cold War.
Yan Junjie, founder and CEO of Shanghai-based AI unicorn MiniMax, said “AGI [artificial general intelligence] will undoubtedly become a reality, serving and benefiting everyone”.
Representing China’s young AI entrepreneurs, Yan highlighted AI’s role as a productivity driver and emphasised that achieving AGI would require collective action from AI companies and users alike, rather than from any single organisation.
Eric Schmidt, former CEO of Google and CEO of rocket start-up Relativity Space, praised Chinese open-source AI models like DeepSeek and Alibaba Group Holding’s Qwen as “extremely powerful” and “world-class”. Alibaba owns the Post.
He noted the growing necessity for US-China collaboration on AI governance as the technology rapidly evolves and cautioned against potential misuse.
“I’m a genuine optimist that China and the United States can build trust from the bottom … It’s been done before and it can be done again,” Schmidt added.
That sentiment was echoed by Harry Shum Heung-yeung, council chairman at Hong Kong University of Science and Technology and Microsoft’s former head of AI and research, during a fireside chat with Schmidt.
“We must have dialogue to understand each other, even regarding the definition of values,” Shum said.
Zhou Bowen, head of the Shanghai AI Lab – one of China’s top AI research institutions – also stressed that technological advancement and safety should be treated with equal priority.
He advocated a shift in industry philosophy from “make AI safe” to “make safe AI”, meaning developers should build long-term security into AI systems rather than merely applying patches to improve safety after the fact.
Stuart Russell, a computer science professor at the University of California, Berkeley, urged tech companies and certain countries to abandon an “arms race” mentality.
“This is a very pointless race because when AGI is created, it will essentially be a source of unlimited wealth in terms of services and knowledge,” he said.
The three-day event, themed “Global Solidarity in the AI Era”, features over 100 sub-forums and events exploring AI applications across various fields, including science, industrial applications, safety, agents, and humanoid robotics, with more than 800 exhibitors showcasing the latest developments.
“Many people believe that in the next three years or so, [AI] systems will begin to learn on their own,” Schmidt said. “We have to be very careful to understand what they’re learning – they could learn bad things as well as good [things].”
“The key thing would be to get researchers in China and the West talking about how we maintain human dignity and human control,” he suggested, while calling for the exchange of “testing data, insights, and concerns” to avoid unforeseen results.