
Chinese researchers are proposing a bold workaround to US semiconductor restrictions: stack older, domestically producible chips together to match the performance of the advanced chips they can no longer access (Cryptopolitan). The chip stacking strategy centres on combining 14-nanometer logic chips with 18-nanometer DRAM using three-dimensional hybrid bonding (Cryptopolitan), positioning system architecture innovation over transistor miniaturization as Beijing adapts to export controls that block access to cutting-edge lithography equipment.
Wei Shaojun, vice president of the China Semiconductor Industry Association, stated that 14nm logic chips paired with 18nm DRAM using 3D hybrid bonding and a near-memory computing architecture could reach performance comparable to NVIDIA's 4nm-class silicon used in current AI GPUs (Artificial Intelligence News). The chip reportedly delivers 120 teraflops of processing power with a power efficiency of 2 TFLOPS per watt (Thriveholdings).
However, NVIDIA's A100 GPU, which Wei positions as the comparison point, actually delivers up to 312 TFLOPS, more than 2.5 times the claimed performance (Cryptopolitan). The strategy prioritizes mature-node chips such as 14nm logic and 18nm DRAM, which remain accessible despite technological constraints, and focuses on integrated systems that maximize performance through sophisticated packaging solutions (OpenAI).
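To put those figures in perspective, the minimal sketch below reruns the arithmetic using only the numbers quoted above; the reporting does not specify the precision or benchmark conditions behind the 120 TFLOPS claim, so this is illustrative rather than a like-for-like comparison.

```python
# Back-of-the-envelope check of the figures quoted in the article.
# All inputs come from the reporting itself; treat the results as
# illustrative arithmetic, not a benchmark comparison.

STACKED_TFLOPS = 120.0         # claimed throughput of the stacked 14nm+18nm part
STACKED_TFLOPS_PER_WATT = 2.0  # claimed power efficiency
A100_TFLOPS = 312.0            # A100 peak figure cited in the article

implied_power_w = STACKED_TFLOPS / STACKED_TFLOPS_PER_WATT  # ~60 W
throughput_gap = A100_TFLOPS / STACKED_TFLOPS               # ~2.6x

print(f"Implied power budget of the stacked chip: {implied_power_w:.0f} W")
print(f"A100 raw-throughput advantage: {throughput_gap:.1f}x")
```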
Rather than fighting an unwinnable battle for process node leadership as TSMC and Samsung push toward 3nm and 2nm processes that remain completely out of reach for Chinese manufacturers, the chip stacking strategy proposes competing on system architecture and software optimization instead (Cryptopolitan). The 3D hybrid bonding technique creates direct copper-to-copper connections at sub-10 micrometre pitches, essentially eliminating the physical distance that slows down conventional chip architectures (Cryptopolitan).
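A rough sense of what that pitch buys: the sketch below compares vertical connection density at a 10 µm hybrid-bond pitch against a conventional micro-bump pitch of roughly 40 µm, which is an assumed typical industry figure rather than a number from the article.

```python
# Illustrative connection-density comparison for 3D stacking.
# The 10 um figure is the upper bound mentioned above; the 40 um
# micro-bump pitch is an assumed typical value for conventional packaging.

def pads_per_mm2(pitch_um: float) -> float:
    """Vertical connections per square millimetre on a regular grid."""
    pads_per_mm = 1000.0 / pitch_um
    return pads_per_mm ** 2

hybrid_bond = pads_per_mm2(10.0)  # ~10,000 connections/mm^2
micro_bump = pads_per_mm2(40.0)   # ~625 connections/mm^2

print(f"Hybrid bonding (10 um pitch): {hybrid_bond:,.0f} connections/mm^2")
print(f"Micro-bump     (40 um pitch): {micro_bump:,.0f} connections/mm^2")
print(f"Density gain: ~{hybrid_bond / micro_bump:.0f}x")
```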
Despite US restrictions, China has managed to continue developing highly advanced AI models with many being run on homegrown chips, leveraging a plethora of cheap energy and giant chip clusters from Huawei underpinning AI advances H2S Media. Huawei's CloudMatrix 384 connects 384 of its Ascend 910C chips to deliver performance rivaling NVIDIA's GB200 NVL72, though solutions are less power efficient than NVIDIA systems H2S Media. The approach mirrors broader industry recognition that AI competitiveness requires full-stack autonomy from silicon to software.
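The trade-off behind that "more chips, lower efficiency" pattern is simple scale-out arithmetic, sketched below. The chip counts (384 versus 72) come from the article; the per-chip throughput and power values are placeholders chosen only to make the ratios concrete, not published specifications.

```python
# Scale-out arithmetic: aggregate throughput vs. power efficiency.
# Chip counts are from the article; per-chip figures are PLACEHOLDERS.

def system_totals(n_chips: int, tflops_per_chip: float, watts_per_chip: float):
    total_tflops = n_chips * tflops_per_chip
    total_watts = n_chips * watts_per_chip
    return total_tflops, total_watts, total_tflops / total_watts

# Hypothetical per-chip figures for illustration only.
cloudmatrix = system_totals(384, tflops_per_chip=800.0, watts_per_chip=550.0)
nvl72 = system_totals(72, tflops_per_chip=2500.0, watts_per_chip=1200.0)

for name, (tflops, watts, eff) in [("CloudMatrix 384", cloudmatrix),
                                   ("GB200 NVL72", nvl72)]:
    print(f"{name}: {tflops:,.0f} TFLOPS, {watts / 1000:.0f} kW, "
          f"{eff:.2f} TFLOPS/W")
```

With these placeholder inputs the larger cluster reaches higher aggregate throughput while consuming far more power per TFLOPS, which is the pattern the reporting describes.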
Wei described NVIDIA's CUDA platform as a "triple dependence" spanning models, architectures, and ecosystems, noting that Chinese chip designers pursuing traditional GPU architectures would need to either replicate CUDA's functionality or convince developers to abandon a mature, widely adopted platform (Cryptopolitan). The chip stacking strategy, by proposing an entirely different computing paradigm, offers a path to sidestep this dependency.
Computer scientist Jawad Haj-Yahya, who has tested both American and Chinese chips, observed that China's semiconductors perform similarly to US parts in predictive AI but fall short in complex analytics; the gap, he said, is clear and shrinking, but not one China will close in the short term (Yahoo Finance).
NVIDIA CEO Jensen Huang warned that China was "nanoseconds behind" the US in chip development, highlighting China's hardworking talent pool, intense domestic competition, and progress in chipmaking (IT Pro).
Beijing and local governments from Shanghai to Shenzhen have offered subsidies or vouchers to reduce costs for companies renting computing power, and China benefits from abundant cheap energy thanks to massive investments in solar, wind, and rapidly expanding nuclear infrastructure (H2S Media).
The United States maintains an estimated fivefold advantage in AI supercomputing capacity over China, with US and allied export controls successfully constraining China's ability to train and deploy frontier AI models at scale (Tekedia). For enterprises evaluating AI infrastructure strategies, the developments signal increasing diversification in chip architectures beyond traditional GPU-centric approaches. China's chip announcements also function as bargaining chips in trade negotiations, pressuring Washington to relax export controls or risk losing relevance in the fastest-growing AI markets (Ainvest).
Whether chip stacking succeeds in closing the performance gap with NVIDIA remains uncertain, but what is clear is that China's semiconductor industry is adapting to restrictions by pursuing innovation in areas where export controls have less impact: system design, packaging technology, and software-hardware co-optimization (Cryptopolitan). The key constraint is China's capacity to produce enough chips domestically to keep pace as NVIDIA and others continue improving performance (H2S Media). Decision-makers should monitor whether architectural disruption through advanced packaging proves competitive against continued process node miniaturization, as the outcome will fundamentally reshape global AI infrastructure investment strategies.
Source & Date
Source: Artificial Intelligence News, Tom's Hardware, TechPowerUp, CNBC, Indian Defence Review, Institute for Progress
Date: December 2, 2025

