What will happen with AI music?

jaime brooks cuts through the hype of artificial intelligence in music by identifying a specific, sonic fingerprint that separates current outputs from human craft: a persistent, nostalgic "fuzz." While the industry fixates on legal battles and viral stunts, brooks argues that the technology has already quietly crossed the threshold of acceptability for the average listener, rendering the debate about "quality" largely irrelevant to mass consumption.

The Sonic Signature of AI

The piece begins not with a policy paper, but with a memory of the NSynth Super, an open-source hardware instrument that Google's Magenta team released about seven years ago. brooks recalls the device's output as resembling familiar instruments heard "through a kind of audible fog or haze." This historical anchor is crucial: it draws a direct line from those early experiments to today's platforms like Suno, noting that while the "fuzz" remains, it has become far less obvious. The author suggests that as long as the content is engaging, the audience will overlook the technical artifacts.

"The picture behind the fuzz is a lot clearer, to the point where you kind of forget it's there if the content behind it is engaging enough."

This observation reframes the entire conversation. Instead of asking if AI music is "good," the author asks if it is "good enough." brooks draws a parallel to the early days of sampling, when producers like Marley Marl were forced to load drum hits from old records into samplers with limited memory, resulting in poor audio quality that nonetheless produced classics. The argument here is that fidelity has never been the sole determinant of success in popular music. The shift from hardware samplers to digital workstations made music "palpably cold, rigid, and minimal," yet it remained successful. The implication is clear: the "tinny, stilted" nature of AI generation is just another aesthetic shift, not a barrier to entry.

Critics might argue that this comparison ignores the fundamental difference between a producer curating a sample and an algorithm reassembling phonemes without intent. However, the author's point stands that the listener's ear is surprisingly adaptable to new textures.

The Illusion of Organic Success

The commentary then pivots to the business side, dissecting recent headlines about AI-generated hits. brooks is skeptical of the narrative surrounding the country song "Walk My Walk," which reportedly hit number one on a Billboard chart. The author points out that this was a digital sales chart, a metric that has "fallen off a cliff" and is easily manipulated. The real story, the author suggests, is not about a sudden surge in AI country music, but about the mechanics of chart manipulation.

"Digital sales charts are some of the easiest to manipulate, which is why it's so common for fans of relatively unpopular artists to cite iTunes sales charts in an effort to make it seem like their faves are doing better than they actually are."

A more serious case is presented: Telisha "Nikki" Jones, who releases R&B under the name Xania Monet. With a three-million-dollar deal and radio play, Jones appears to be a legitimate success story. Yet, brooks views her as a "perfect vehicle through which to market AI to the masses," noting that the cost of promotion is negligible when the artist "doesn't really exist in corporeal reality." The author argues that this project is less about organic talent and more about a music company capitalizing on the "AI narrative" to secure investment.

"Xania Monet seems much more like an example of a music company trying to find a way to capitalize on such narratives in their industry than it seems like an example of organic success."

This framing is sharp, stripping away the human-interest veneer to reveal the corporate strategy underneath. It forces the reader to question whether the success of these projects is a reflection of consumer demand or a manufactured illusion designed to attract capital.

The Legal Trap and the Real Canary

The most critical section of the piece focuses on the song "I Run" by the artist "Haven." Unlike the heavily promoted Xania Monet, this track went viral organically on TikTok before being pulled from Spotify following a complaint from singer Jorja Smith. brooks uses this incident to illustrate the core legal vulnerability of AI music: the models are trained on copyrighted works, meaning the output is often an unintentional collage of existing artists.

"If 'I Run' can be considered an impersonation of Jorja Smith, then every Suno output with vocals can probably be considered an impersonation of some other artist."

The author notes that the major labels are not trying to stop AI music from existing; they are trying to control it. The lawsuits against platforms like Suno and the settlements with Udio are strategic moves to turn these tools into proprietary assets. The goal is to ensure that any AI-generated track that goes viral becomes a "derivative work" owned by the major labels, forcing creators to sign over their rights.

"The goal of the lawsuits that locked down Udio and seek to do the same to Suno is to turn these platforms into proprietary tools that can only be used to create commercial music with the labels' blessing and explicit financial participation."

This is the piece's most chilling insight. The majors are not fighting the technology; they are fighting the independence of the technology. They want to create a system where the only way to distribute AI music commercially is through a major label contract. As brooks writes, "If said creators are entirely dependent on the use of AI platforms to generate commercially viable music, they'll have zero leverage and little choice but to comply."

The Future of the Medium

Despite the legal crackdown, the author remains convinced that the genie cannot be put back in the bottle. The technology is advancing, and alternatives exist, including locally run models and tools from Chinese tech companies that operate outside American copyright norms. The medium of recording has fundamentally changed, and the relationship between creator and listener is shifting regardless of corporate resistance.

"New technology that has Earth-shattering implications for the future of music production is already out there, and it will still be around in some form even if all the corporations that currently hope to profit from that technology spectacularly fail."

The piece concludes by addressing the anxiety of the enthusiast who wants to protect their listening habits. brooks suggests that while platforms might try to label or ban AI content, the sheer volume of generation makes total exclusion impossible. The technology is too efficient, too cheap, and too pervasive to be stopped by policy alone.

"The goal of the majors is not to stop AI-generated music from proliferating... The economics are too appealing."

Bottom Line

jaime brooks delivers a sobering reality check: the battle over AI music is not about quality or artistic integrity, but about who controls the means of production. The strongest part of the argument is the identification of the "fuzz" as a fading artifact, proving that the technology has already won the battle for the listener's ear. The biggest vulnerability in the current landscape is the legal framework, which threatens to turn every independent AI creator into a contract-bound employee of the major labels. The reader should watch for the next wave of settlements, which will likely cement a system where AI music is only legal if it is owned by the giants who already own the music industry.

Deep Dives

Explore these related deep dives:

  • Vocaloid

    The article directly references Vocaloid as a precursor technology to AI-generated vocals, noting that Suno relies on 'tech similar to what has been powering Vocaloid since the turn of the millennium.' Understanding Vocaloid's history and how it synthesizes singing voices provides essential context for comprehending the evolution of AI music generation.

  • Marley Marl

    The article uses Marley Marl as a historical example of how audio quality limitations didn't prevent musical innovation, specifically referencing his pioneering work loading drum samples into early samplers with limited memory. His story illustrates the broader point about technology constraints and creativity that the article is making about AI music.

  • Google DeepMind

    The article mentions that Google's Magenta research team merged with DeepMind, which is central to understanding the corporate and research trajectory of AI music development. DeepMind's broader AI research provides important context for how music generation fits into larger artificial intelligence capabilities.

Sources

What will happen with AI music?

by jaime brooks

A long time ago, I became interested in the work of a group within Google called Magenta, who were trying to apply machine learning and neural networks to music production. To this end, they created plug-ins for Ableton Live and an open-source hardware instrument called the NSynth Super that you could build yourself according to specifications that Magenta made available online. That was about seven years ago, now. Two years ago, the research team that Magenta was part of merged with DeepMind and presumably became part of Google’s current AI efforts.

The NSynth Super is kind of a fascinating artifact in retrospect. I was very interested in it back then, but I don’t know if I really comprehended the implications of what it was doing. I remember the sounds it made being very fuzzy. They sometimes resembled familiar instruments, but it sounded as if you were hearing them through a kind of audible fog or haze. Turning the dials to try to lock in on a sound I liked reminded me very much of sitting in front of a CRT TV as a child, moving an antenna around trying to pick up a signal that would turn the fuzzy distortion on the screen into entertainment.

Today, I hear that same fuzz in the outputs of AI music platforms like Suno. Along with the robotic quirks of the vocal performances these platforms generate, the fuzziness is one of the most obvious signs that a song has been AI-generated, at least if you know what to listen for. There’s just a lot less of it now. The picture behind the fuzz is a lot clearer, to the point where you kind of forget it’s there if the content behind it is engaging enough.

Suno outputs frequently sound tinny, stilted, bitcrushed, and compressed within an inch of their lives, but so does a lot of the music that people make with DAWs and home recording equipment. When Marley Marl first started loading drum hits from old records he liked into a sampler to program his own beats with them, the audio quality of the samples needed to be very poor in order to fit them into the sampler’s limited memory. He still made classics. When producers abandoned hardware samplers and specialized studio equipment for computers at the turn of the millennium, popular music as a whole started to become palpably cold, rigid, and ...