On April 25, academics, diplomats, policymakers, and technologists from around the world gathered at the Shanghai Forum sub-forum “Global AI Governance: Barriers and Pathways Forward” for a high-level dialogue on the obstacles facing global AI governance and the possible paths toward a shared future.
As AI reshapes economies, politics, and human experience at an unprecedented pace, global governance efforts are struggling to keep up. The sub-forum examined both challenges and opportunities, identifying key areas of shared interest among participating nations.
Rethinking governance in the age of AI acceleration
Throughout the forum, speakers highlighted a common theme: the widening gaps AI governance must confront—between technology and regulation, North and South, East and West, and among competing political systems.
Kim Won-soo, former Under-Secretary-General of the United Nations, warned at the forum: “AI is now advancing like a rocket, while governments around the world are still crawling like a snail.” He said the world has four gaps to fill urgently: geopolitical, value-system (between East and West), infrastructural (digital inequity), and institutional (fragmented regulation). Kim acknowledged that building global consensus will be a long journey, but urged the world to start small and move forward quickly.
Kim stressed the importance of China and the U.S., as the two powers leading the charge in technology, taking leadership roles in the global AI revolution. He called for trust-building, starting from small but agile frameworks developed globally.
“Trust” was the keyword Kim returned to throughout. Echoed by other speakers at the sub-forum, trust-building emerged as the essential element for countries to work on in bridging these gaps and finding universal solutions.
Diverse situations, shared principles
Speakers presented a range of approaches to AI regulation and research from different regions of the world.
Thomas Greminger, Executive Director of the Geneva Centre for Security Policy, detailed Europe’s “ethics-first” approach to AI regulation, centered on the EU AI Act. He raised a dilemma likely to trouble policymakers in Europe and elsewhere: how to protect industries’ economic interests while imposing strict requirements on companies to safeguard the broader technological ecosystem. The solution, he argued, again comes back to global agreements and unified regulations that prevent destructive competition in AI markets.
Major powers increasingly regard artificial intelligence as an extension of national sovereignty, Greminger explained, and countries tend to prioritize their own technological capabilities and autonomy. At the same time, there is widening agreement that unified regulations are needed to address emerging risks and oversee market competition without stifling innovation and economic growth.
While Europe may hold fewer assets in the field than China and the United States, Greminger believes that through strong collaboration, China and Europe are uniquely positioned to help shape the future global framework for AI governance.
Maxime Stauffer of the Simon Institute used data on the alarming pace of recent AI development to show how advanced AI systems are shifting power dynamics, not just between nations but also between corporations and governments, and even between humans and machines. “We need anticipatory governance,” he said, “before the next leap renders today’s norms obsolete.”
Meanwhile, South Korea’s Kyoungjin Choi, Professor at the College of Law, Gachon University, introduced a “third path” approach: balancing growth and risk through licensing and oversight for high-impact AI systems. The Korean AI model emphasizes national security, human dignity, and international alignment, while preserving domestic flexibility.
South Korea is willing to act as a bridge between developed and developing countries for a shared future, Choi said, signaling the country’s commitment to a future in which AI thrives alongside traditional industries under appropriate human oversight.
China has made remarkable achievements in AI research, and presenters at the forum voiced the country’s willingness to play a distinctive role in leading the world toward an AI era shared by all countries.
Dr. WEI Kai, Director of the Big Data Research Institute at the China Academy of Information and Communications Technology, showcased the country’s industry-driven model, which couples government oversight with robust AI deployment in manufacturing and consumer platforms. China’s systematic framework was highlighted as a potential benchmark for assessing AI capabilities and safety worldwide.
Using real-world data and accessibility tests, Dr. FANG Shishi of the Shanghai Academy of Social Sciences showed that Global South cities consistently faced higher delays and lower stability when accessing major AI services such as GPT or Claude, pointing to an infrastructure gap that deserves attention alongside the regulatory one.
Significantly, Chinese models such as Wenxin Yiyan provided more stable access in these regions, potentially paving the way for a “digital rebalancing.” She concluded that for AI to become true global infrastructure, “we must address not just ethical and legal frameworks, but infrastructural access itself.”
Another thread running through the discussions was the future direction of AI in the Global South. Ilmas Futehally, Executive Director of the Strategic Foresight Group, emphasized the need for unified global AI governance rather than fragmented national efforts and called for greater participation from Global South countries.
The group’s work focuses on three goals: listening to Southern perspectives, identifying commonalities and differences to draft shared guidelines, and building consensus between North and South.
Sundeep Waslekar, President of the Strategic Foresight Group, contrasted two emerging paradigms: the current data-driven AI model and a possible “zero-data” future. Waslekar noted that some countries, developing and developed alike, may be unwilling to accept that AI can pose serious threats, and may even shut AI technologies out for fear of harming their own interests. Tough as it may be, he advised all regions to take a broader, long-term view and to start small with basic research.
Global Security in the Crosshairs: The Rise of AI
Beyond economic and technological concerns, security risks also took center stage at the intersection of AI and military policy.
Lynn Rusten, Vice President of the Nuclear Threat Initiative (NTI) of the United States, delivered a keynote speech on managing AI-related nuclear risks. Drawing on her decades of experience in U.S. government and non-governmental work on nuclear policy, she warned that heightened tensions among major powers, particularly the United States, China, and Russia, combined with weakened communication and the erosion of arms control agreements, have significantly increased the risk of nuclear conflict.
Turning to AI, Rusten highlighted its growing influence on nuclear systems, cautioning that AI could complicate decision-making for policymakers. “It is important for humans to stay in the loop,” she said. Confronting these challenges, Rusten called for sustained human involvement in nuclear decision-making and for clear norms and standards governing AI applications in this field.
On the gap between regulation and technology, Rusten stressed the importance of bringing policymakers and technology experts to the same table, enabling deeper communication and more rational decisions.
Similarly, Eric Richardson, President of AGO International, drew on his experience with U.S.-China Track II dialogues to propose a framework for identifying, mapping, and mitigating AI-related military risks. “The AI arms race isn’t inevitable—but we’re running out of time to prevent it.”
Citing past examples of technologies misused in war, Richardson also warned against the misuse of AI in both the biological and military domains. A double-edged sword for global peacebuilding, AI’s development is pushing all nations, especially major powers like the U.S. and China, to move from seeking technological dominance toward shared responsibility.
Looking Ahead: Endless possibilities in cooperation
In closing comments, leading figures in AI governance pointed to possibilities for cooperation, sketching an optimistic picture of the future ahead.
FENG Shuai, Deputy Director of the Artificial Intelligence Research Center at the China Academy of Information and Communications Technology (CAICT), summarized China’s active promotion of responsible AI development. He pointed out that global AI governance must not only focus on security and ethics but also ensure equitable technological access and benefits for all countries.
LU Chuanying, Director of the Cyberspace International Governance Research Center at the Shanghai Institutes for International Studies, stressed the growing strategic competition surrounding AI technologies globally. Lu underlined that China is willing to work with all countries to promote a balanced and fair AI governance system, calling for enhanced multilateral dialogue particularly within frameworks like the United Nations.
The forum closed on a note of cautious optimism. Fudan’s Dr. Jiang Tianjiao, who co-chaired the event, remarked, “AI is more than technology—it is a test of how we govern as a species.”
Whether through treaties, standards, or economic partnerships, the global conversation on AI governance must now move from aspiration to real architecture. As Fudan’s Dr. YAO Xu, another co-chair of the forum, said in closing the conference, “Although today’s forum has come to an end, our dialogue, communication, and collaboration will never end, as we work together to build a better AI governance framework all around the world.”
(END)
Writer: YANG Xinrui
Proofreader: WANG Jingyang
Editor: WANG Mengqi, LI Yijie