
Why one AI executive quit his job to protest creators' rights

In an industry obsessed with speed and scale, Casey Newton captures a rare moment of conscience: a senior executive quitting his job to protest the very business model his company champions. This isn't just a personnel change at Stability AI; it is a fault line cracking open the legal and moral pretense that generative artificial intelligence can thrive without the consent of the humans whose work powers it.

The Executive Rebellion

Newton frames Ed Newton-Rex's resignation not as a personal grievance, but as a systemic rejection of the industry's dominant strategy. While courts have been slow to side with creators—a federal judge recently dismissed most of comedian Sarah Silverman's lawsuit against Meta, calling the idea that an AI model is itself a "derivative work" nonsensical—Newton-Rex argues that the legal definition of fair use is being stretched to a breaking point. As he wrote in his resignation op-ed, "Companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works."


The core of Newton-Rex's argument is that the current "fair use" defense ignores the economic reality of market displacement. He contends that when a model is good enough to replace entry-level jobs, the legal shield of fair use evaporates. Newton-Rex explains, "The problem is the same models can already be used to generate full, essentially human level creative output... And if you do that, then that's clearly affecting the market for the original work."

This is a crucial distinction often lost in technical debates. The industry claims these tools are merely "assistive," yet the executive admits they are rapidly becoming competitive. As Newton-Rex puts it, "It's really about what goes into it... ultimately only work with data that is provided with the consent of the people who own that data."

"I'm not a Luddite, and I do think generative AI will have huge benefits. I just think it needs to be built in the right way."

Critics might argue that Newton-Rex's insistence on consent is a luxury that slows innovation, making it impossible to train models on the vast datasets required for general intelligence. However, Newton-Rex counters this by pointing to existing successes where licensing works. He notes that his own team at Stability AI released Stable Audio using licensed data from a stock library, a model that was well-received and even named one of Time's best inventions. He argues, "It'll slow us down a bit. Sure. It'll cost a bit more, but ultimately, I think it's the right thing to do."

The Scalability Myth

A major pillar of the industry's defense is that licensing is not scalable. Newton-Rex dismantles this narrative by highlighting competitors who are already proving otherwise. He points to BRIA, an Israeli company that pays rights holders based on how much their images influence a generated output. Newton-Rex observes, "There is definitely this narrative in the industry that these approaches don't work and aren't scalable. I think that's probably wrong."

The author's framing here is particularly effective because it shifts the debate from abstract legal theory to concrete business cases. By citing specific examples of companies that are paying creators, Newton-Rex suggests that the "all or nothing" approach of the major tech giants is a choice, not a necessity. He notes that while the music industry has a long history of AI collaboration, other modalities like image and video generation have lagged behind in ethical standards. "Most people I know in the space... are musicians," Newton-Rex says, "And that might be why, I think in general, you have more music AI companies who are working with creators as opposed to citing fair use."

The stakes are high for the labor market. Newton-Rex warns that while AI might democratize creativity in education, it threatens the economic viability of human creators at the entry level. "These are industries that are going to be dramatically changed by generative AI," he states, "and I don't think it's necessarily the right thing to do to try to stop that... if you're working towards that, you have to be doing it in the right way."

The Path Forward

The piece concludes on a surprising note of optimism, with Newton-Rex arguing that it is not too late to repair the relationship between technology and creators. He believes the current models are merely the first generation and that the industry still has time to pivot. "We are still very early," he asserts. "The models that are live today will not be live in four months time, let alone a year's time."

This perspective challenges the fatalism that often surrounds AI ethics. Rather than accepting a future where human creativity is a relic, Newton-Rex envisions a paradigm where technology augments rather than erases human labor. He acknowledges the potential for AI to act as a personalized educator, potentially making music learning less elitist, but insists this must not come at the cost of exploiting existing work. "I think creators and rights holders are open to the conversation," he says, leaving the door open for a negotiated settlement rather than a perpetual legal war.

Bottom Line

Newton's coverage succeeds by centering the voice of an insider who chose principle over profit, effectively humanizing a debate often dominated by legal jargon and corporate press releases. The argument's greatest strength is its reliance on tangible examples of licensed AI models, proving that the "fair use" default is a choice, not a technical constraint. However, the piece leaves the reader with a lingering question: can the industry's massive momentum be slowed enough to allow for these ethical adjustments before the market is permanently reshaped?


The most critical takeaway is that while the legal battle in the courts may be lost, the moral and economic argument is still being fought in the boardrooms. As the executive branch and the US Copyright Office weigh new rules, the testimony of those who built these systems from the inside may carry more weight than the lawsuits of those they displaced.

Sources

Why one AI executive quit his job to protest creators' rights

by Casey Newton · Platformer


One of the fiercest debates in artificial intelligence — perhaps second only to who should be CEO of OpenAI — is whether and how creators should be compensated when their work is used to train models for generative AI. For the most part, today’s most popular large language models have been created without the consent of the people whose work now powers them. And arguments about the issue have now spilled over into lawsuits.

Earlier this month, AI companies including Stability AI, Anthropic, Meta, Google, and Microsoft submitted comments to the US Copyright Office on potential rules that would govern how corporations use copyrighted work. While the companies offered varying opinions, the core of their argument was that AI companies should be able to train LLMs using copyrighted works without compensating their creators, according to Wes Davis at The Verge.

Perhaps unsurprisingly, many creators disagree. But to date, they have struggled to gain much ground in court. Today, a federal judge dismissed most of the claims made by comedian Sarah Silverman in one such lawsuit against Meta.

Here’s Winston Cho at the Hollywood Reporter:

U.S. District Judge Vince Chhabria on Monday offered a full-throated denial of one of the authors’ core theories that Meta’s AI system is itself an infringing derivative work made possible only by information extracted from copyrighted material. “This is nonsensical,” he wrote in the order. “There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs’ books.”

But while they have struggled to convince judges, creators won support from a surprising place last week. Ed Newton-Rex, who was the vice president of audio at Stability AI, abruptly resigned from his job. In an op-ed, Newton-Rex wrote that he disagreed with the company's stance on fair use. "Companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works," he wrote.

On Friday, Newton-Rex spoke with Platformer about why he left, what the ideal arrangement between AI companies and creators might look like, and how much he expects to make from his own musical efforts.

Zoë Schiffer: You recently resigned from your role as the VP of Audio at Stability AI, and published an op-ed saying that you disagree with the ...