Quick Brief
- ByteDance released Seedance 2.0 video model February 10, 2026, generating 4-15 second clips with 2K resolution
- Zhipu AI’s GLM-5 features 745 billion parameters trained entirely on Huawei Ascend chips
- Both models follow DeepSeek’s January 2025 breakthrough demonstrating Chinese AI competitiveness
- US announces $20 million APEC fund to promote American AI adoption across Asia-Pacific
China released two significant AI models within weeks of each other. ByteDance’s Seedance 2.0 video generation model launched February 10, 2026, while Zhipu AI made its GLM-5 language model accessible via the Z.ai platform in mid-February 2026. These releases build on DeepSeek’s January 2025 models that demonstrated Chinese engineers could develop frontier AI despite US semiconductor restrictions. The developments prompted the Trump administration to announce a $20 million fund at February 2026 APEC meetings in Guangzhou to support American AI technology adoption.
ByteDance Seedance 2.0 Video Generation Capabilities
ByteDance released Seedance 2.0 on February 10, 2026, through its Jimeng AI platform in China. The model generates videos ranging from 4 to 15 seconds using text prompts, images, videos, and audio as input. According to testing by Chinese media and early partner documentation, Seedance 2.0 produces 2K cinema-grade video output with character consistency.
The model’s quad-modal reference system allows users to upload up to 12 files (9 images, 3 videos, and 3 audio clips) and assign them specific roles using an @ reference system. Users can designate uploads as character references, motion references, or rhythm references, providing director-level control over generated content. This approach differs from primarily text-based prompting systems used by competing models.
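The workflow described above can be sketched as a request builder. ByteDance has not published a public API schema for Seedance 2.0, so every field name, role label, and tag format below is hypothetical; only the per-type limits and the @-reference idea come from the description.

```python
# Hypothetical sketch of a Seedance 2.0-style multi-reference request.
# The per-type limits (9 images, 3 videos, 3 audio) follow the article;
# field names and tag syntax are invented for illustration.

MAX_IMAGES, MAX_VIDEOS, MAX_AUDIO = 9, 3, 3

def build_request(prompt, images=(), videos=(), audio=()):
    """Assemble a request with @-style role-tagged references."""
    if len(images) > MAX_IMAGES:
        raise ValueError(f"at most {MAX_IMAGES} image references")
    if len(videos) > MAX_VIDEOS:
        raise ValueError(f"at most {MAX_VIDEOS} video references")
    if len(audio) > MAX_AUDIO:
        raise ValueError(f"at most {MAX_AUDIO} audio references")
    refs = (
        [{"file": f, "role": r, "tag": f"@img{i+1}"} for i, (f, r) in enumerate(images)]
        + [{"file": f, "role": r, "tag": f"@vid{i+1}"} for i, (f, r) in enumerate(videos)]
        + [{"file": f, "role": r, "tag": f"@aud{i+1}"} for i, (f, r) in enumerate(audio)]
    )
    return {"prompt": prompt, "references": refs, "duration_s": 15, "resolution": "2K"}

req = build_request(
    "A chase scene where @img1 runs through rain, timed to @aud1",
    images=[("hero.png", "character")],
    audio=[("drums.mp3", "rhythm")],
)
print(len(req["references"]))  # 2
```

The point of the sketch is the shape of the interaction: references carry explicit roles (character, motion, rhythm) rather than being inferred from a text prompt alone.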
Seedance 2.0 employs a dual-branch diffusion transformer architecture with one transformer dedicated to video and another to audio. The two branches communicate during generation to ensure visual events synchronize precisely with corresponding sounds, such as footsteps or object impacts. Frame rates range from 24 to 60 fps depending on platform settings.
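ByteDance has not released architectural details beyond the description above, but the general idea of two branches communicating during generation can be illustrated with a toy NumPy sketch: each branch cross-attends to the other's latent features, which is one plausible way a footstep frame and its sound could stay aligned during joint denoising. Dimensions and the single-head attention are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                              # toy feature dimension (illustrative)
video = rng.normal(size=(16, d))    # 16 video latent frames
audio = rng.normal(size=(16, d))    # 16 time-aligned audio latent frames

def cross_attend(q, kv):
    """Single-head attention: queries from one branch, keys/values from the other."""
    scores = q @ kv.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ kv

# One "communication" step between the branches: each conditions on the
# other, so a visual event and its sound see each other's features.
video = video + cross_attend(video, audio)
audio = audio + cross_attend(audio, video)
print(video.shape, audio.shape)  # (16, 64) (16, 64)
```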
Zhipu AI GLM-5 Technical Specifications
Zhipu AI made GLM-5 accessible via its Z.ai platform and WaveSpeed API in mid-February 2026. The model features approximately 745 billion total parameters in a Mixture of Experts (MoE) architecture with 256 experts, of which 8 activate per token, resulting in 44 billion active parameters per inference. This represents roughly twice the scale of predecessor GLM-4.5, which contained 355 billion total parameters.
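The sparsity figures above can be sanity-checked with back-of-the-envelope arithmetic. Note that the active-parameter fraction (~5.9%) exceeds the active-expert fraction (~3.1%) because dense components such as attention layers and embeddings run for every token regardless of expert routing:

```python
# Back-of-the-envelope check of the reported GLM-5 MoE figures.
total_params = 745e9
active_params = 44e9
experts_total, experts_active = 256, 8

expert_fraction = experts_active / experts_total   # fraction of experts used per token
param_fraction = active_params / total_params      # fraction of weights used per token

print(f"{expert_fraction:.1%} of experts active per token")     # 3.1%
print(f"{param_fraction:.1%} of parameters active per token")   # 5.9%
# param_fraction > expert_fraction because shared (non-expert) layers
# are always active for every token.
```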
GLM-5 incorporates DeepSeek’s sparse attention mechanism for efficient long-context processing up to 200,000 tokens. Zhipu AI trained the model entirely on Huawei Ascend chips using the MindSpore framework, achieving independence from US-manufactured semiconductor hardware. The company completed a Hong Kong IPO on January 8, 2026, raising approximately HKD 4.35 billion (about US$558 million) to fund model development.
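Why sparse attention matters at a 200,000-token context can be seen from a rough operation count. The per-query attention budget `k` below is a hypothetical figure for illustration (Zhipu AI has not disclosed one); the comparison only shows the scale of savings from attending to a fixed subset of tokens instead of all of them:

```python
n = 200_000   # context length in tokens
k = 2_048     # hypothetical per-query attention budget under sparsity

dense_ops = n * n    # full attention: every token scores every token
sparse_ops = n * k   # sparse attention: each token scores k selected tokens

print(f"dense:  {dense_ops:.2e} pairwise scores")
print(f"sparse: {sparse_ops:.2e} pairwise scores")
print(f"reduction: {dense_ops / sparse_ops:.0f}x")  # ~98x
```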
According to Zhipu AI, GLM-5 demonstrates competitive performance with OpenAI’s GPT-5 and Anthropic’s Claude Opus series across reasoning, coding, and agentic tasks. The company positions the model for advanced multi-step reasoning, coding, creative writing, and autonomous planning applications. Zhipu AI has indicated plans to release GLM-5 under an MIT open-source license following the initial API launch.
| Specification | GLM-5 (Zhipu AI) | GPT-5 (OpenAI) |
|---|---|---|
| Total Parameters | ~745B | Undisclosed (estimated trillions-scale) |
| Active Parameters | ~44B (MoE) | Undisclosed |
| Architecture | MoE + Sparse Attention | Unified router, multimodal |
| Context Window | 200K tokens | 400K input / 128K output |
| Training Hardware | Huawei Ascend | NVIDIA / Azure |
| API Pricing | ~$0.11/M tokens (GLM-4.x) | $1.25/M input, $10/M output |
| Open Source | Expected MIT license | Closed-source |
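The pricing gap in the table translates into large absolute differences at scale. A rough comparison for an illustrative monthly workload, using the GLM-4.x blended rate as a proxy since GLM-5 pricing had not been published at launch:

```python
# Illustrative monthly cost for 100M input + 20M output tokens.
# GLM figure uses the GLM-4.x blended rate (~$0.11/M tokens) from the
# table as a proxy; GPT-5 uses $1.25/M input and $10/M output.
input_mtok, output_mtok = 100, 20   # millions of tokens (hypothetical workload)

glm_cost = (input_mtok + output_mtok) * 0.11
gpt5_cost = input_mtok * 1.25 + output_mtok * 10.0

print(f"GLM-4.x proxy: ${glm_cost:.2f}")   # $13.20
print(f"GPT-5:         ${gpt5_cost:.2f}")  # $325.00
print(f"ratio: {gpt5_cost / glm_cost:.0f}x")
```

Even if GLM-5 launches at several times the GLM-4.x rate, the workload above would still cost an order of magnitude less than the closed-source alternative, which is the dynamic the adoption sections below describe.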
DeepSeek’s January 2025 Technical Breakthrough
DeepSeek released its R1 and V3 models in January 2025, demonstrating that software optimization could achieve frontier-model performance despite limitations in advanced semiconductor access. The Chinese startup’s approach challenged assumptions about the relationship between hardware capabilities and AI model performance.
DeepSeek founder Liang Wenfeng prioritized curiosity-driven research over short-term commercialization, stating his goal was developing artificial general intelligence rather than incremental applications. This philosophy influenced subsequent Chinese AI development approaches. Four of the five most-downloaded models on Hugging Face in recent weeks originated from Chinese laboratories, driven primarily by cost advantages.
US APEC Technology Initiative
The Trump administration announced a $20 million fund at Asia-Pacific Economic Cooperation meetings in Guangzhou, China, during February 2026. Casey Mace, US senior official to APEC, stated the fund would support adoption of American AI technologies by partner economies as part of efforts to strengthen US leadership in emerging technologies.
A State Department spokesperson stated that “China’s AI technology promotes CCP propaganda and censorship, while its vision for AI governance seeks to enable authoritarian repression.” China rejected these characterizations, saying it supports global cooperation on AI governance. President Trump signed an executive order in July 2025 aiming to “ensure that American AI technologies, standards, and governance models are adopted worldwide.”
The APEC initiative comes ahead of President Trump’s expected April 2026 visit to China and China’s hosting of the APEC leaders summit in Shenzhen in November 2026. The State Department noted these events will likely spotlight US-China competition over technology and economic influence across the Asia-Pacific.
Chinese AI Development Momentum
Chinese AI models are gaining adoption among developers through open-source availability and pricing 50-80% below American alternatives. Stanford University’s January 2026 report concluded that Chinese AI models “seem to have caught up or even pulled ahead” of global competitors across multiple performance dimensions. Jeff Boudier of Hugging Face noted that emerging startups overwhelmingly select Chinese models due to cost factors.
Alibaba’s Qwen-Image-2512 emerged as a top-performing free model with high-fidelity rendering capabilities, while ByteDance’s Seedream 4.0 trained on domestic chips. The pattern of open-weight releases with aggressive pricing pressures American companies to justify higher costs even as they weigh public offerings.
China’s progress training GLM-5 entirely on Huawei Ascend chips demonstrates advancing independence from American semiconductor supply chains. Zhipu AI’s successful Hong Kong IPO provided $558 million in funding for continued model development. The combination of technical progress, cost advantages, and open-source strategies is reshaping competitive dynamics in global AI markets.
Current Limitations
Seedance 2.0 remains in limited beta with restricted public access through China’s Jimeng platform. International users typically access it through third-party API services, with full global rollout expected around February 24, 2026. DataCamp’s analysis notes the model struggles with complex layered scenes involving glass, and music performance scenarios sometimes exhibit unnatural motion.
GLM-5 benchmark performance comes primarily from vendor-provided data, requiring validation through extended real-world deployment. The model’s competitive positioning relative to GPT-5 and Claude Opus will require independent verification as more users gain access beyond Zhipu AI’s controlled launch environment.
Frequently Asked Questions (FAQs)
When did ByteDance release Seedance 2.0?
ByteDance released Seedance 2.0 on February 10, 2026, through its Jimeng AI platform for users in China with paid subscriptions starting around 69 RMB. International access is primarily through third-party services, with full global rollout expected around February 24, 2026.
What are GLM-5’s key specifications?
GLM-5 features approximately 745 billion total parameters with 44 billion active parameters per inference using a Mixture of Experts architecture. The model supports 200,000-token context windows and was trained entirely on Huawei Ascend chips.
What was DeepSeek’s contribution to Chinese AI?
DeepSeek released R1 and V3 models in January 2025 that achieved frontier performance despite US chip restrictions. The breakthrough demonstrated that software optimization could compensate for hardware limitations, influencing subsequent Chinese AI development approaches.
How much funding did the US announce at APEC?
The Trump administration announced a $20 million fund at February 2026 APEC meetings in Guangzhou to support partner economies in adopting American AI technologies. The initiative aims to strengthen US leadership in emerging technologies across the Asia-Pacific region.
Can Seedance 2.0 compete with other video models?
Early demonstrations show Seedance 2.0 produces quality comparable to competing models with advantages in direct control through its quad-modal reference system. DataCamp’s analysis notes the model’s simultaneous audio-video generation provides workflow benefits for commercial applications.
Why are Chinese AI models gaining adoption?
Chinese models combine competitive performance with pricing significantly below American alternatives and open-source availability. Cost advantages drive adoption among startups and developers, particularly in emerging markets with budget constraints.
What chips did Zhipu AI use for GLM-5?
Zhipu AI trained GLM-5 entirely on Huawei Ascend chips using the MindSpore framework. This represents a milestone in China’s development of self-reliant AI infrastructure independent of US semiconductor supply chains.
What video formats does Seedance 2.0 support?
Seedance 2.0 supports multiple aspect ratios optimized for social media, advertising, and professional film production. The model produces high-resolution output up to 2K with frame rates between 24 and 60 fps.
How did DeepSeek achieve competitive performance with limited chips?
DeepSeek’s engineers optimized software architectures to extract maximum performance from available hardware. Their work demonstrated that strategic algorithmic innovation could partially compensate for limitations in cutting-edge semiconductor access, achieving frontier performance at substantially lower training costs than American competitors.
What is the US position on Chinese AI exports?
The State Department argues that Chinese AI systems promote censorship and authoritarian governance models in countries that adopt the technology. Washington frames AI competition as involving both economic competitiveness and questions about governance standards embedded in technology exports.