China’s Zhipu AI: Why Full Superintelligence by 2030 Is Unlikely

The future of artificial intelligence captivates technologists, investors, and policymakers alike. Among the most provocative ideas is the possibility of artificial superintelligence (ASI) — machines that outperform humans across all domains. Recently, the CEO of one of China’s leading AI firms, Zhipu AI, offered a tempered view: achieving a full-blown ASI by 2030 is unlikely.
The CEO’s Perspective: Cautious Optimism
Zhang Peng, CEO of Zhipu AI, stated that while some AI systems might exceed human performance in specific areas by 2030, the broader vision of a superintelligence surpassing humans in *every* capacity remains distant and vague. He emphasized that “people reach different conclusions when discussing this issue,” and that the definition of “human-level intelligence” is itself ambiguous.
“Achieving or exceeding human intelligence levels by 2030 might mean surpassing humans in one or several aspects, but likely still falling far short in many areas.”
In other words: superhuman performance may emerge in narrow slices of cognition, but across-the-board dominance, the defining trait of ASI, remains elusive within that timeframe.
Zhipu AI, GLM-4.6, and the China AI Landscape
Zhipu AI, founded in 2019 out of Tsinghua University, is among the prominent names in China’s AI race. In September 2025, the company unveiled its latest large language model: GLM-4.6. This iteration builds on its predecessor (GLM-4.5), with enhanced strength in reasoning, writing, coding, and “agent” applications (automated task execution).
While Zhipu is growing its overseas revenue and expanding into enterprise markets, Zhang concedes that its consumer subscription business still lags behind those of U.S. rivals. Nonetheless, the firm now also offers a coding subscription plan aimed at developers, a stepping stone into direct-to-consumer markets.
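For developers, the practical face of a model like GLM-4.6 is an API call for a concrete task, not anything resembling general intelligence. As a rough illustration of the coding-assistant workflow such a subscription targets, here is a minimal sketch that assumes the model is exposed through an OpenAI-compatible chat-completions endpoint; the base URL, model identifier, and environment variable below are placeholders, not documented values.

```python
# Hypothetical sketch: querying a GLM-style model for a coding task through an
# OpenAI-compatible client. The base_url, model name, and env var are assumptions,
# not documented values; substitute the provider's actual endpoint and model id.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["GLM_API_KEY"],           # placeholder variable name
    base_url="https://example-glm-provider/v1",  # placeholder endpoint
)

response = client.chat.completions.create(
    model="glm-4.6",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Write a Python function that deduplicates a list while preserving order."},
    ],
    temperature=0.2,  # keep completions fairly deterministic for code generation
)

print(response.choices[0].message.content)
```

The point of the example is narrow: today's commercial value comes from domain-specific calls like this, which is consistent with Zhang's view that near-term gains will be confined to particular capabilities rather than general superintelligence.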
Contrasting Views: Bold Claims vs. Grounded Skepticism
Some tech leaders present more aggressive outlooks. For instance, OpenAI CEO Sam Altman has predicted that superintelligence could arrive by the end of the decade, and in some interviews has suggested breakthroughs might come as early as 2026. Others — such as SoftBank’s Masayoshi Son — have speculated ASI might emerge by 2035.
Zhang’s stance diverges: he sees value in more measured forecasts. He argues that widespread claims of human-parity AI can mislead public understanding and oversell near-term expectations.
Key Arguments Against ASI by 2030
- Vagueness of definition: No consensus exists on what it means to surpass human intelligence in *all* dimensions.
- Complexity of general intelligence: Excelling in reasoning, creativity, empathy, lifelong learning, and abstraction all at once is a monumental leap.
- Compute and hardware constraints: Advanced AI systems require enormous compute power and specialized chips, which remain limited by supply and export controls.
- Data, safety, and alignment: Sourcing high-quality training data and building models that generalize well while staying aligned with human values remain deep, unsolved challenges.
Why This Matters: Real-World Impacts
Understanding realistic timelines for AI evolution has practical consequences:
- Policy and regulation: Governments must set frameworks that balance innovation and safety.
- Investment strategy: Overhyping ASI timelines can lead to bubbles or misallocated capital.
- Public trust: Premature promises of “machines that outsmart humans” could erode credibility if they don’t materialize.
Broader Context: China’s AI Ambitions by 2030
China has publicly aimed to become a global leader in AI by 2030. Beijing’s industrial policies support AI across hardware, software, and applications. Yet, despite strong state backing, external constraints — especially chip export restrictions — limit the ability to scale compute-intensive systems.
Most of China’s progress emphasizes domain-specific AI — such as speech recognition, robotics, and diagnostics — rather than universal general intelligence.
Looking Ahead: What Might the Next Decade Hold?
- AI systems excelling in narrow but critical fields like coding, legal analysis, and scientific research.
- Deeper convergence of AI with robotics and automation.
- Global regulation and AI safety frameworks gaining traction.
- Ongoing debate about governance and human-centric AI design.
Conclusion
Zhipu AI’s Zhang Peng doesn’t rule out *some* human-surpassing AI by 2030 — but he firmly doubts that full artificial superintelligence will arrive so soon. His perspective reminds us that AI progress is neither linear nor guaranteed. Achieving broad human-level cognition across all domains remains one of the greatest scientific challenges of our time.
As competition intensifies globally, balancing ambition with realism will be key. Breakthroughs will surely come, but likely not overnight.