Intel 18a details & cost, future of dram 4f2 vs 3d, backside power adoption (or not), China's…

This piece cuts through the usual hype cycle to reveal a stark truth: the semiconductor industry is hitting a wall where physics and economics collide, forcing a radical rethinking of how chips are built from the atom up. Dylan Patel's analysis of the VLSI conference doesn't just list new technologies; it exposes the fragile bridge between theoretical breakthroughs and the brutal reality of manufacturing at scale. For anyone tracking the future of computing, the takeaway is clear: the era of easy scaling is over, and the next decade belongs to those who can master the chaos of complexity.

The Virtual Factory

Patel opens by dismantling the assumption that we can simply build our way out of current bottlenecks. The core argument is that as chip design becomes exponentially more complex, the cost of physical trial-and-error has become prohibitive. "Digital twins allow for design exploration and optimization to be done in an accelerated virtual environment," Patel writes, noting that engineers can now ensure designs work before any silicon is run through the fab. This framing is crucial because it shifts the focus from hardware capability to data infrastructure. The industry is no longer just about moving electrons; it's about simulating them.

The coverage details how companies like Synopsys are using machine learning to replace centuries-old physics models. Patel highlights a stunning efficiency gain: "Machine-Learned Force Field simulation using Moment Tensor Potentials demonstrated near-DFT accuracy with 17 min compute cost vs 12 days with traditional DFT." This isn't just a speedup; it's a fundamental change in the R&D timeline. The ability to simulate quantum interactions in minutes rather than weeks means the industry can iterate on materials engineering at a pace that was previously impossible.

However, the path to a fully automated "lights-out" factory is fraught with data silos. Patel notes that Lam Research envisions a future where tool fleets are orchestrated in virtual twins, but admits that "the main barrier facing lights-out fabs is with data and connectivity across tools from different vendors." Critics might argue that this reliance on perfect data integration is a fantasy in a fragmented supply chain, but the direction is undeniable. The industry is moving toward self-aware tools that request their own maintenance, a shift that will redefine the role of the human operator from machine tender to system architect.

Digital twins span the entire scale of semiconductor design, from atomic-level quantum interactions to fab-level fleet management, turning the impossible complexity of modern chips into a solvable equation.

The Memory Wall and the Architecture Shift

The most provocative section of Patel's analysis concerns the future of memory. For over a decade, the industry has relied on a standard 6F2 layout for DRAM, but Patel argues we have reached the physical limits of this approach. "At 1d the process and tooling reach their limits for a workable, high-yield process," he writes, pointing to shrinking contact areas and rising resistance as the primary culprits. This is a critical inflection point where the old rules of scaling no longer apply.

Patel champions the 4F2 architecture as the necessary evolution, explaining that it solves congestion by moving the bitline to a different layer. "The current path is also much shorter, directly down from capacitor, through the vertical channel, directly to the bitline," he notes, contrasting this with the longer, resistance-heavy paths of current designs. This architectural pivot is not merely an optimization; it is a survival strategy. Without it, memory density would stall, choking the performance of the very processors that drive the AI boom.
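The density argument is simple geometry: DRAM cell area is conventionally expressed as a multiple of F², where F is the minimum feature size (half-pitch). A quick sketch of the arithmetic, with an illustrative F value that is an assumption, not a figure from the article:

```python
# Back-of-envelope DRAM bitcell area: a k*F^2 layout at half-pitch F.
def cell_area_nm2(k: int, f_nm: float) -> float:
    """Area of one DRAM bitcell for a k*F^2 layout, in nm^2."""
    return k * f_nm ** 2

f = 14.0  # illustrative ~1x-nm-class half-pitch; assumed, not sourced
a6 = cell_area_nm2(6, f)  # today's standard 6F^2 cell
a4 = cell_area_nm2(4, f)  # proposed 4F^2 cell
print(f"6F2 cell: {a6:.0f} nm^2, 4F2 cell: {a4:.0f} nm^2")
print(f"Density gain at the same F: {a6 / a4:.2f}x")  # 1.50x
```

The 1.5x density gain at a fixed F is what makes 4F2 attractive: it buys roughly a node's worth of scaling without requiring any lithographic shrink.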

Yet, the transition is not without significant hurdles. Patel acknowledges that implementing 4F2 requires high-aspect-ratio etching and deposition capabilities that are still maturing. "Until just a few years ago deposition tools were not capable of filling a deep trench with the required metals for the bitline," he admits. Furthermore, the industry faces a strategic fork: should they move peripheral circuitry under the cell array (requiring complex wafer bonding) or keep it on top? Patel suggests that while 4F2 is the likely winner for the next few nodes, the ultimate solution may be 3D DRAM. Here, he introduces a geopolitical wildcard: "Chinese chipmakers are a potential disruptor here, as they have strong incentive to develop 3D because it is not dependent on advanced litho." This observation reframes the technological race as a strategic maneuver to bypass equipment restrictions, adding a layer of geopolitical tension to the engineering challenge.

The Illusion of Non-Volatile DRAM

Patel also tackles the resurgence of Micron's NVDRAM, a technology promising non-volatile memory with the speed of DRAM. The analysis is refreshingly skeptical. While acknowledging the impressive technical scaling—"The bitcells were scaled by an impressive 27% since the previous paper, to 41nm on a side"—Patel quickly pivots to the economic reality. "Unfortunately, the electricity savings amount to roughly $1 per year," he writes, dismissing the value proposition for most commercial applications. This is a vital reality check in an industry often seduced by technical novelty.
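The "$1 per year" dismissal follows from straightforward electricity arithmetic. A hedged sketch of how such a figure could arise; the wattage and price below are illustrative assumptions, not numbers from the article:

```python
# Why ~$1/yr of electricity savings is a weak value proposition.
# Both the assumed power saving and the $/kWh rate are illustrative.
def annual_cost_usd(watts: float, price_per_kwh: float = 0.10) -> float:
    """Cost of running a constant load for one year at a given $/kWh."""
    hours_per_year = 24 * 365
    return watts / 1000 * hours_per_year * price_per_kwh

# Suppose non-volatility eliminates ~1.1 W of refresh power per module:
saving = annual_cost_usd(1.14)
print(f"~${saving:.2f}/year per module")
```

Even a generous assumption about eliminated refresh power yields savings that are negligible next to the price premium of a new memory technology, which is the crux of Patel's skepticism.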

The commentary on 2D materials and next-gen logic architectures like Forksheet and CFET follows a similar pattern of cautious optimism. Patel notes that while Intel and Samsung are making impressive strides in contact formation and transistor design, the fundamental blocker remains manufacturing. "2D materials are not yet practical to work with on an industrial scale," he states, emphasizing that "on-wafer growth is the key blocker." Without a scalable way to grow these materials, even the most elegant transistor designs remain stuck in the lab. The argument here is that the industry's greatest challenge is no longer just design, but the physics of material synthesis itself.

If chipmakers or labs are solving the problem of on-wafer growth for 2D materials, they're keeping it quiet, because the gap between a lab breakthrough and an economic manufacturing process remains the industry's widest chasm.

Bottom Line

Patel's strongest contribution is his refusal to treat semiconductor scaling as an inevitable law of nature; instead, he frames it as a series of hard-won engineering compromises where physics and economics are in constant tension. The piece's greatest vulnerability is its heavy reliance on the assumption that the necessary tooling for 4F2 and 3D DRAM will mature in time to prevent a bottleneck, a risk that could derail the entire roadmap. Readers should watch closely for the next 18 months: if the industry fails to transition to these new architectures, the AI revolution may hit a hard wall of memory constraints that no amount of software optimization can fix.

Sources

Intel 18a details & cost, future of dram 4f2 vs 3d, backside power adoption (or not), China's…

by Dylan Patel · SemiAnalysis

Long-time readers will recall that SemiAnalysis covers more than just datacenters and AMD. Today we’re back to semiconductors with a tech-focused roundup of the best from this year’s VLSI conference, the premier design and integration conference. That includes the latest in chip manufacturing: fab digital twins, the future of advanced logic transistors and interconnects, DRAM architectures beyond the 1x nm nodes, and more. We’ll discuss Intel’s 18A process and compare it with TSMC, where backside power will be adopted (and where it won’t), and the likely winners in 4F2 versus 3D DRAM.

Digital Twins: From Atoms to Fabs

Semiconductor design and fabrication is getting exponentially more complex, increasing development costs and lengthening design cycles. Digital twins allow for design exploration and optimization to be done in an accelerated virtual environment. With this, engineers can ensure that designs work before any silicon is run through the fab.

Digital twins span the entire scale of semiconductor design:

Atomic-level: Simulate the quantum and Newtonian interactions between atoms in materials engineering of transistor contacts and gates

Wafer-level: Optimize tool chambers and process recipes in virtual silicon for yield and performance

Fab-level: Maximize fab productivity with orchestrated maintenance and management across the fleet

On atomistic simulations, Synopsys provided an overview of their QuantumATK suite, used in materials engineering in transistor contacts and gate oxide stack design, which are critical to device performance. Traditional Density Functional Theory (DFT) modelling of quantum effects between atoms is the most accurate but computationally expensive, while conventional force field simulation of Newtonian atomic interactions is quick but with limited accuracy. GPU accelerated DFT-NEGF (Non-Equilibrium Green’s Function) demonstrated a 9.3x speedup using only 4x A100 vs CPU, while Machine-Learned Force Field simulation using Moment Tensor Potentials demonstrated near-DFT accuracy with 17 min compute cost vs 12 days with traditional DFT.
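The scale of the reported compute-cost gap is worth making concrete. Taking the quoted figures at face value (~17 minutes for MLFF vs ~12 days for traditional DFT), the speedup works out to roughly three orders of magnitude:

```python
# Ratio of the reported compute costs: MLFF (~17 min) vs traditional DFT (~12 days).
from datetime import timedelta

dft = timedelta(days=12)
mlff = timedelta(minutes=17)
speedup = dft / mlff  # dividing two timedeltas yields a float ratio
print(f"MLFF is ~{speedup:.0f}x faster than traditional DFT")  # ~1016x
```

A ~1000x reduction in per-simulation cost is what turns materials screening from a once-per-quarter exercise into something that can sit inside a daily design loop.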

These atomic models are critical in understanding the electrical interactions occurring at the interface between different material layers. In contact engineering, MLFF is used to generate the contact interface between crystalline silicon and amorphous silicides, simulating the depth of interdiffusion where the boundary undergoes silicidation. DFT-NEGF is then used to calculate contact resistance and current-voltage curves across the interface. For gate oxide design, the complex multi-layer work function metal stack is built using MLFF and simulated to check its structure and chemical composition. Dipole dopants can then be introduced and optimized with DFT, which also does electrostatic analysis to calculate key ...