News & Events

15 May 2026

University Affairs

Bridging the Intelligence Divide: Global AI Governance in an Era of Rising Risk, Infrastructure Inequality, and Human Restraint

In an era when artificial intelligence is no longer merely a tool but is emerging as the foundational infrastructure for modern civilization, humanity stands at a profound crossroads.


At the sub-forum Global AI Governance: Bridging the Intelligence Divide and Strengthening North-South Dialogue, held as part of the 2026 Shanghai Forum, leading scholars, technologists, and policymakers gathered to examine the unprecedented transformations brought about by artificial intelligence (AI). Moving beyond the zero-sum narratives of geopolitical rivalry, this dialogue ventured into systemic risk management, constitutional frameworks, and the philosophical imperatives of human restraint.


Infrastructure and Technical Regulation


Jörg Friedrichs, Associate Professor of Politics at the University of Oxford, pointed out, “We should approach AI governance from an infrastructure perspective.” As previous industrial revolutions have demonstrated, regulation becomes effective only when a technology has matured into infrastructure.


“Infrastructure and regulation go hand in hand: without infrastructure, regulations cannot be enforced, and without regulation, infrastructure may fail to serve public purposes effectively.” AI infrastructure encompasses data centers, cloud computing, and energy grids. Where the state controls or facilitates these public utilities, it holds the leverage to make access contingent on regulatory compliance.


Karson Elmgren, Senior Researcher at the Institute for AI Policy and Strategy (IAPS), noted that the scale of frontier AI models, with trillions of parameters, is making traditional human monitoring increasingly infeasible. Elmgren proposed “using barbarians to govern barbarians” (以夷制夷), referring to technical solutions such as data filtering, Confucian-inspired moral cultivation for models, and whole-chain monitoring. He warned that AI companies are engaged in such intense competition that they may be unable to do the right thing without expert governance institutions.


LI Wenlong, Research Professor at Guanghua Law School, Zhejiang University, emphasized that content authenticity is a core challenge for AI regulation. He evaluated various labeling and watermarking methods. While China has taken the lead in developing targeted rules in specific areas, he noted that global institutional coordination is needed to ensure that AI-generated content remains identifiable across borders and to prevent public confusion.


Global Equity and the North-South Divide


Despite the many challenges the world must confront together, AI is not inherently negative. Yoo Chandong, Professor of Electrical Engineering at the Korea Advanced Institute of Science and Technology, proposed that AI could serve as a global equalizer in bridging the intelligence divide. While the Global North may control financial capital and GPU clusters, the Global South possesses demographic youth and critical minerals. AI, he argued, could help narrow the 45:1 education spending gap by delivering PhD-level instruction to millions of children through offline tutors. He called for a new synthesis in which intelligence is treated as a universal utility.


“The global discourse on AI has shifted in a warlike direction; the language of cooperation and shared responsibility is increasingly being replaced by that of competition and winning.” XIAO Qian, Deputy Director of the Center for International Security and Strategy at Tsinghua University, framed today’s AI development as a race, warning that this approach can be counterproductive because it incentivizes speed over safety. “AI capabilities are embedded in global supply chains, open research ecosystems, and national infrastructures; it is not a zero-sum domain,” Xiao emphasized. Risks do not respect political boundaries, and addressing them requires targeted cooperation.


Denis Simon, holder of the Bank of America Chair in International Finance at Schwarzman College, Tsinghua University, and Senior Fellow at the Quincy Institute, argued that talent, not capital, is the critical resource of the global innovation economy. AI is fundamentally redesigning the nature of work, with research indicating that up to 12% of jobs could be replaced globally. He warned against “talent wars” and emphasized that education must be redesigned for agility rather than linear specialization.


Governance Capacity and Sovereign Ethics


Maximilian Mayer, Assistant Professor of International Relations and Global Technology at the University of Bonn, proposed that AI represents a “constitutional moment,” in which private companies are beginning to embed their technologies into social and political orders. Mayer analyzed Anthropic’s “Claude Constitution” and warned that global and national governance discussions are not keeping pace with what companies are already codifying as constitutional principles. AI, he argued, should not be left to commercial actors alone but treated as a matter of public concern.


LU Chuanying, Vice Dean of the School of Political Science and International Relations at Tongji University, pointed out that government involvement with artificial general intelligence (AGI) follows divergent logics. The United States adopts a model-centric approach focused on strategic competition, while China emphasizes infrastructure-centric development aimed at lowering the threshold for public and industrial use. Based on this distinction, he cautioned the technology community against excessive optimism in defining AGI and called for a clear roadmap.


Baek Seoin, Assistant Professor at the College of Global Culture, Hanyang University, noted that mid-tier countries are facing a “sovereign AI dilemma”. Countries such as South Korea must develop their own AI platforms to reduce asymmetric dependency on either U.S. or Chinese models. He discussed how national AI strategies are shifting from safety-oriented concerns to security-oriented priorities, as well as the challenge of balancing domestic industrial innovation with international governance standards.


CAI Cuihong, Deputy Director of the Center for Global AI Innovative Governance and Professor at the Institute of International Studies, Fudan University, delivered the concluding remarks, pointing out that the core challenge of global AI governance has shifted from institutional insufficiency to a mismatch between institutional expansion and governance capacity. As governance mechanisms multiply, AI incidents continue to rise, showing that more rules do not necessarily translate into effective governance. Cai further defined this capacity through the “three Cs”: Capability, Competence, and Credibility, referring respectively to the ability to innovate, the ability to regulate, and the ability to be trusted in global dialogue.


Taken together, these diverse perspectives point to a shared concern: AI governance remains fragmented, uneven, and inadequate. Moving forward, the world needs a systemic, inclusive, and technically robust global framework—one capable of regulating AI physical infrastructure, identifying early warning indicators of loss of control, and bridging divergent governance values across countries and stakeholders.


As the frontiers of intelligence advance, wisdom must be its compass; as power grows, moral stewardship must remain its anchor.



(END)

Writer: CHENG Yuting

Proofreader: YANG Xinrui

Editor: WANG Mengqi, LI Yijie
