← Back to Library

How Meta Built a New AI-Powered Ads Model for 5% Better Conversions

Most industry analysis treats a 5% lift in ad conversions as a marginal gain, but Alex Xu reframes it as evidence of a fundamental architectural shift in how artificial intelligence scales recommendation systems. By dissecting Meta's Generative Ads Model (GEM), Xu reveals a paradox: the most powerful model ever built for this purpose is too heavy to run in real time, forcing engineers to adopt a "teacher-student" architecture that distills its intelligence into smaller, faster models. This is not just a case study in efficiency; it is a blueprint for how the industry can serve the next generation of massive foundation models without unsustainable compute costs.

The Scale Paradox

Xu's central thesis hinges on a counterintuitive engineering challenge. When Meta announced that GEM drove a 5% increase in ad conversions on Instagram and a 3% lift on Facebook Feed, the raw numbers seemed modest. However, Xu counters that "at Meta's scale, these percentages translate to billions of dollars in additional revenue and represent a fundamental shift in how AI-powered advertising works." The significance lies not in the percentages but in the underlying move from siloed, reactive models to a unified, predictive brain.
The core problem GEM solves is the fragmentation of user data. Traditional systems treated platforms like Instagram and Facebook as separate entities, missing cross-platform behavioral patterns. Xu argues that "meaningful signals like clicks and conversions are extremely sparse compared to total impression volume," making it difficult for older models to learn effectively. By creating a system that processes "trillions of potential ad impression opportunities" with a holistic view, Meta has moved beyond the limitations of models that could only remember a user's last 10 to 20 actions. This echoes the historical pivot in deep learning from simple feed-forward networks to the Transformer architecture, which first allowed systems to weigh the importance of different parts of a sequence simultaneously rather than sequentially.
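The attention idea referenced here can be illustrated with a minimal, dependency-free sketch. This is a toy example with made-up 2-d embeddings, not Meta's actual feature space: given a query representing the current prediction context, every past action in a user's behavior sequence receives a relevance weight, computed over the whole sequence at once rather than step by step.

```python
import math

def attention_weights(query, keys):
    """Scaled dot-product attention over a user's behavior sequence:
    each past action (a key vector) is scored against the current
    query, and the scores are normalized into weights that sum to 1.
    Toy 2-d embeddings; real models use hundreds of dimensions."""
    d = len(query)
    # Dot-product similarity, scaled by sqrt(d) to keep scores stable.
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Softmax turns scores into a probability-like weighting.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A past action aligned with the query gets the most weight, an
# orthogonal one less, and an opposed one the least.
weights = attention_weights([1.0, 0.0],
                            [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
```

Because every pairwise score is computed independently, the whole sequence can be weighed in parallel, which is exactly what lets such models look far beyond the last 10 to 20 actions.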

"GEM is the largest foundation model ever built for recommendation systems. It has been trained at the scale typically reserved for large language models like GPT-4 or Claude."

This comparison to large language models (LLMs) is crucial. It signals that the era of specialized, narrow AI for ads is ending, replaced by generalist foundation models that understand context, nuance, and long-term intent. Critics might argue that applying LLM-scale compute to advertising is an over-engineered solution for a simple matching problem, but Xu's data suggests the complexity is necessary to capture the "progression from casual interest to serious purchase intent that might develop over months."

The Teacher-Student Architecture

The most compelling part of Xu's analysis is the solution to the speed problem. A model as massive as GEM cannot make decisions in the "tens of milliseconds" required for a scrolling user. Xu explains that "GEM is so powerful and computationally intensive that Meta can't actually use it directly to serve ads to users." Instead, the engineering team deployed a teacher-student architecture.

In this setup, GEM acts as the "master teacher" that trains hundreds of smaller, faster Vertical Models (VMs). These VMs are specialized for specific contexts—like predicting clicks on Instagram Stories or conversions on the Facebook Feed. Xu details the transfer mechanism: "Student models learn to replicate GEM's reasoning process, not just final predictions." This is a sophisticated application of knowledge distillation, a technique that has evolved significantly since its early days in the 2010s, where the goal was simply to compress a model. Here, the goal is to preserve the reasoning of the large model while shedding its computational weight.

The efficiency gains are staggering. Xu notes that the system employs "knowledge distillation with Student Adapter," "representation learning," and "parameter sharing" to achieve "twice the effectiveness of standard knowledge distillation alone." This creates a continuous improvement cycle where user interactions feed back into the data pipelines, GEM re-trains, and the intelligence flows back down to the VMs.
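The core mechanism behind "replicating GEM's reasoning process, not just final predictions" is the classic distillation loss, which a minimal sketch can make concrete. This is a generic illustration, not Meta's implementation: the student is trained against the teacher's full softened probability distribution (which encodes relative confidences) blended with the ground-truth label.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens
    the distribution, exposing the teacher's relative confidences."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_label,
                      temperature=2.0, alpha=0.5):
    """Blend of a soft-target loss (match the teacher's distribution)
    and a hard-target loss (match the observed click/conversion)."""
    t_probs = softmax(teacher_logits, temperature)
    s_probs = softmax(student_logits, temperature)
    # KL(teacher || student): penalizes the student for diverging
    # from the teacher's relative confidences, not just its argmax.
    soft_loss = sum(t * math.log(t / s) for t, s in zip(t_probs, s_probs))
    # Standard cross-entropy against the observed outcome.
    hard_loss = -math.log(softmax(student_logits)[hard_label])
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

The "Student Adapter" and representation-learning pieces Xu mentions extend this idea by also matching intermediate internal representations, not only output distributions, which is plausibly where the reported doubling over plain distillation comes from.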

"The continuous improvement cycle works like this: Users interact with fast VMs in real time... GEM periodically re-trains on this fresh data, updated knowledge transfers to VMs through the post-training techniques, and Improved VMs get deployed to production."

This feedback loop effectively turns the entire user base into a real-time training ground, allowing the system to adapt to shifting trends faster than any human team could. It represents a shift from static model deployment to dynamic, self-correcting intelligence.
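The four-step cycle quoted above can be sketched as a loop. Every name and data flow below is illustrative, not Meta's real pipeline; the point is only the structure: serve, collect, re-train, distill, redeploy.

```python
def improvement_cycle(interactions_per_round, rounds=None):
    """Toy sketch of the continuous improvement cycle: each round,
    users interact with the current VM, GEM re-trains on the fresh
    data, and a newly distilled VM is deployed. Version strings
    stand in for actual model artifacts."""
    accumulated_events = 0
    deployed_vms = []
    for round_id, fresh_interactions in enumerate(interactions_per_round, 1):
        # Step 1: users interact with the fast VM in real time,
        # generating fresh click/conversion signals.
        accumulated_events += fresh_interactions
        # Step 2: GEM periodically re-trains on the accumulated data.
        gem_version = f"gem-r{round_id}-{accumulated_events}-events"
        # Step 3: updated knowledge transfers to the VM via the
        # post-training (distillation) techniques.
        vm_version = f"vm-from-{gem_version}"
        # Step 4: the improved VM is deployed to production.
        deployed_vms.append(vm_version)
    return deployed_vms
```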

Infrastructure as the Real Innovation

While the algorithmic architecture is impressive, Xu dedicates significant space to the infrastructure required to build it. Training GEM required Meta to "rebuild its training infrastructure from the ground up," achieving a "23x increase in effective training throughput while using 16x more GPUs." This is not merely a hardware upgrade; it is a systemic re-imagining of how massive parallel computing works.

Xu highlights specific innovations like "multi-dimensional parallelism" and "custom GPU kernels designed for variable-length user sequences." These technical details matter because they explain why this model is possible now and wasn't five years ago. The use of PyTorch 2.0 graph-level compilation and FP8 quantization allowed Meta to reduce memory footprints and communication bottlenecks.
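FP8 quantization can be made tangible with a toy rounding routine. This simulates the E4M3 format (4 exponent bits, 3 mantissa bits, max normal value 448) in plain Python; real FP8 training relies on hardware support plus per-tensor scaling, and this sketch ignores subnormals entirely.

```python
import math

def quantize_fp8_e4m3(x):
    """Toy simulation of rounding a float to FP8 E4M3: clamp to the
    format's max normal value (448), then keep only 3 mantissa bits.
    Illustrative only; ignores subnormals and special values."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 448.0)          # clamp to E4M3's max normal value
    exp = math.floor(math.log2(mag))
    frac = mag / (2 ** exp)           # significand in [1, 2)
    frac = round(frac * 8) / 8        # keep 3 mantissa bits
    return sign * frac * (2 ** exp)

# 3.3 is not representable in 3 mantissa bits; it rounds to 3.25,
# and anything above 448 saturates.
q = quantize_fp8_e4m3(3.3)
```

Halving every activation and gradient from 16 to 8 bits roughly halves memory traffic and inter-GPU communication, which is why such a crude-looking precision cut matters so much at this scale.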

"These might seem like minor details, but when you're training models that cost millions of dollars in compute resources, every percentage point of efficiency improvement matters enormously."

This observation grounds the high-level AI concepts in the gritty reality of engineering economics. Without these infrastructure breakthroughs, the theoretical benefits of GEM would remain locked in a research lab, too expensive to deploy at scale. A counterargument worth considering is whether this level of optimization creates a barrier to entry so high that only a few tech giants can compete, potentially stifling innovation from smaller players who lack the capital to rebuild their training stacks from scratch.

The Future of Seamless Integration

Xu concludes by looking ahead to a future where the distinction between organic content and advertising dissolves. The roadmap includes "true multimodal learning" where the model processes text, images, and video simultaneously, and "inference-time scaling" to dynamically allocate resources. Perhaps most ambitiously, Meta envisions a "unified engagement model that ranks both organic content and ads using the same underlying intelligence."

"This would fundamentally change how advertising integrates into social feeds, potentially creating more seamless experiences where ads feel like natural content recommendations rather than interruptions."

This vision suggests a future where the friction of advertising disappears entirely, replaced by a hyper-personalized stream of content that happens to be monetized. While this sounds ideal for user experience, it raises profound questions about the nature of attention and the power of algorithms to shape behavior. If ads are indistinguishable from organic content, the user's ability to opt-out or maintain a critical distance diminishes.

Bottom Line

Alex Xu's analysis succeeds by moving beyond the hype of "AI for ads" to reveal the specific architectural and infrastructural breakthroughs that make it viable. The strongest part of the argument is the detailed explanation of the teacher-student architecture, which solves the critical tension between model intelligence and inference speed. The biggest vulnerability lies in the potential for this technology to create an insurmountable moat for the largest tech companies, leaving competitors unable to match the scale of data and compute required. Readers should watch for how this unified engagement model evolves, as it will likely redefine the boundary between content and commerce in the coming decade.

Deep Dives

Explore these related deep dives:

  • Knowledge distillation

    The article describes GEM's 'teacher-student architecture', in which a large model trains smaller models. This is the formal ML technique known as knowledge distillation, and understanding its origins and mechanics gives readers deeper insight into why Meta chose this approach.

  • Transformer (deep learning)

    GEM's InterFormer component is built on the transformer architecture, with its interleaving attention layers. Understanding the foundational transformer concept helps readers grasp why this architecture can process long behavioral sequences efficiently.

  • Recommender system

    The article states GEM is 'the largest foundation model ever built for recommendation systems'. Understanding the evolution of recommender systems, from collaborative filtering to modern deep learning approaches, provides essential context for appreciating GEM's significance.

Sources

How Meta Built a New AI-Powered Ads Model for 5% Better Conversions
