Before the start of the 2025 Shanghai Forum’s sub-forum “Global AI Governance: Barriers and Pathways Forward,” we sat down with FANG Shishi, Director of the Internet Governance Research Center at the Institute of Journalism and Communication, Shanghai Academy of Social Sciences, and an alumna of Fudan’s School of Journalism, to explore what we should be aware of in international AI governance, as well as a quieter yet no less urgent frontier: the impact of AI on the practice of journalism.
This conversation took place just before Fang’s panel talk at the forum on her research into the globalization of large language models (LLMs). We talked about what journalists are facing today, what happens when AI hallucination begins to blind its users, and whether humans can still be the “conductor” in a world largely orchestrated by the machines we produce.
Q: Many people today are talking about AI replacing journalists. Do you think that’s already happening?
Fang: From a historical perspective, a sense of crisis has long pervaded journalism, but this time it is driven by intelligent technologies.
Fifteen years ago, when I was a doctoral student, many senior journalists departed the newsrooms of traditional media to engage with the internet industry. This shift was motivated not only by a curious spirit of adventure but also by anxiety about the state of journalism at that time.
Frankly speaking, the challenges we face today are undoubtedly more severe. Journalists at some traditional media outlets now must choose: either assist AI news generators with meager pay or step out of their comfort zones to forge new paths. We must confront harsh realities.
Q: What were the reasons behind this transition at that time?
Fang: This all began with the intervention of social media platforms and algorithmic technologies between media and audiences. Previously, media held primacy, enabling them to set the social agenda: while they could not determine how you thought, they could influence what you thought about. However, when platforms and algorithms began recommending content based on user preferences, this primacy collapsed. Media began to adapt, even optimizing for search engines and creating content tailored to algorithmic preferences. Consequently, media now find themselves working for platforms and algorithms while drifting increasingly apart from their audiences.
Q: Has artificial intelligence exacerbated this phenomenon?
Fang: Exactly. The first disruption was a revolution in how news is disseminated; now the wave has moved upstream to the production of news content. The pace is rapid: generative AI systems can now fluently draft articles, generate headlines, provide voiceovers, and even produce short videos.
We’ve talked to editors at mainstream news agencies. Some of them told us bluntly: photography departments are no longer needed. Now they generate images with AI. It’s cheaper and faster. But the quality? It’s stiff, lifeless. I can’t say those images have souls.
Q: Are journalists still needed in these processes? And what are they doing now?
Fang: Many have been reassigned to content moderation. What is termed machine-assisted content production appears more like human-assisted machine production. This is a pity, as it amounts to a waste of the valuable experience and judgment accumulated in the media industry.
Q: What relationship would seem ideal between journalists and AI?
Fang: This is a profound inquiry. It calls to mind the technological philosophy of French thinker Gilbert Simondon, whose insights provide a clarifying framework for understanding human-technological interaction. Simondon rejects the notion of technology as a static artifact, instead casting it as an ongoing process—his concept of technical existence is fundamentally one of perpetual becoming. AI, for instance, is not a fixed entity but a dynamic ecosystem, continuously evolving, adapting, and generating new configurations.
Simondon proposes a transformative model of human-technology relations: analogous to a conductor directing an orchestra, the relationship between human and technology should be one of intentional orchestration. In this paradigm, technologies serve as instrumental performers, while humans act as conductors—charged with harmonizing technical execution and serving as the associative milieu that enables technological operation. Here, humans surpass the role of mere tool-users; they become the vital context through which the entire technological system coheres and thrives.
I believe the positioning and logic of this framework can also be applied to the relationship between journalists and AI.
Q: What do you mean by “associative milieu that enables technological operation”?
Fang: Take your phone as an example. It’s not just a standalone object. It relies on a charger, a data network, background infrastructure. None of these pieces work alone—they must be connected. Someone has to know when the battery’s low, when the network is unstable. And that someone is you.
In Simondon’s view, you are part of the associative milieu: the dynamic environment that allows the system to operate. The human role is to organize, tune, and intervene. That’s not a passive position. It’s an irreplaceable one.
Q: What about AI hallucination that people talk about these days? I mean the mistakes that generative AI makes—can we manage them?
Fang: In the early days, we conducted research on the hallucination problem in generative AI from a typological perspective. We identified more than 20 different types of errors—some involving factual and cognitive mistakes, but many reflecting more complex and profound socio-technical relationships.
Some researchers argue that the very notion of hallucinations in large models is fundamentally flawed. They contend that artificial intelligence does not possess cognitive states, let alone the ability to hallucinate. What we label as AI hallucinations merely represents outputs that diverge from human expectations and interpretations. From the model’s perspective, it is simply generating statistically plausible sequences based on its training data. The concept of hallucination itself is an anthropomorphic projection.
Q: So you’re saying error isn’t just technical—it’s political, even cultural.
Fang: Exactly. When people say a model “got it wrong,” what they often mean is: It doesn’t align with our context, our policies, our reality. But the model isn’t trying to be “wrong”—it’s following patterns in its training set. That's why contextual and situational training is so crucial.
Q: You mentioned some deeper concerns about the development of AI. What do you think is the most critical issue in today’s AI development?
Fang: Large models are moving toward closure. The training data accumulated over humanity’s long history is on the verge of being exhausted, yet the quality of synthetic data remains undefined. Large models trained on such data will become multi-layered black boxes, and their generated outputs will leave us confused about where reality starts and ends.
There’s a pattern in the history of technology: every major innovation brings a degree of disconnect. When the telegraph emerged, it created an asynchrony between the sender and their handwriting—we could no longer discern the sender’s emotional expression from their penmanship. As Walter Benjamin noted, mechanical reproduction strips objects of their aura. Now, generative AI confronts us with a new kind of disconnection—not just from the author, but from the real world itself.
If AI begins fabricating entire fictional countries, histories, and even identities, then self-trains based on this synthetic reality, we may find ourselves permanently unable to verify anything. This is how we become detached from reality, and why it poses such a danger.
Q: That’s truly alarming. Do you think there’s something China can do in AI management? Do you think we have something special to share with the rest of the world?
Fang: At tomorrow's forum, I will share some research on the globalization of large language models (LLMs). LLMs have the potential to become global infrastructure. Infrastructure is not merely composed of wires and chips; it also serves as a gateway through which people can maintain consistent and reliable access. And things can only become infrastructure when they remain stable.
If Chinese LLMs can ensure stable access—especially in regions where Western models underperform—this will present a genuine opportunity. It is not just about competition, but about forging connections.
We recently conducted a comparative study of six large language models: three from China and three from abroad. We tested connection speeds, latency, and stability across various regions, comparing their performance in both the Global North and the Global South. Encouragingly, Chinese large models outperformed some Northern-based models in the Global South. In areas with weak infrastructure, Western models often struggle to connect or experience timeouts, while Chinese large models are more accessible.
This suggests that in the Global South, Chinese AI may not only catch up but even achieve leapfrog development.
Q: Is this an opportunity for China to help fill in the gap between developed regions and developing regions?
Fang: Yes, I believe so. We used to talk about the digital divide, but now there’s another divide—the intelligence divide. The globalization of Chinese LLMs could help rebalance this gap.
Q: I was impressed by your analogy that human-AI collaboration is like the relationship between a conductor and an orchestra. This leads to my final question: How can we achieve this collaboration in practice? Specifically, as journalism students facing an AI-challenged media environment, what fundamental principles should guide our work?
Fang: That's a good question. First, build your core competitiveness: interviewing, writing, and understanding people. Stay closely connected with people and the world. No machine can replace what you see and feel. Journalism is not about mastering tools, but about maintaining connections with others, truth, and what matters.
Next is skill development. While you don’t need to be a programmer, you must understand the basics of data and how AI models work well enough to communicate effectively with engineers.
But most importantly, protect your critical thinking. Machines are tools: they neither seek meaning nor question why. Doing those things remains your irreplaceable role as journalists.
(END)
Writer: YANG Xinrui
Proofreader: WANG Jingyang
Editor: WANG Mengqi, LI Yijie