What happens when you use music reviews as prompts for AI-generated music?

Over at Pitchfork, writer Jayson Greene offers a pretty fascinating look at both the appeal and the shortcomings of using text-to-music AI models such as Google's new MusicLM. In a lot of ways, Greene ends up epitomizing the classic saying that "writing about music is like dancing about architecture."

Consider the video above. To create it, Greene pulled a snippet of text from a review of Sufjan Stevens' "So You Are Tired," in which the reviewer wrote:

Piano, guitars, strings, and guests' backing vocals pile up as patiently as the somber narrative, coming together with such effortless elegance that someone in the next room might think you're just listening to a pretty movie soundtrack.

While the result doesn't exactly sound like that specific Sufjan Stevens song, you can see where the machine is coming from. As Greene writes:

Again, this doesn't literally sound like Sufjan Stevens. But it's close enough in its aura to give pause.

As a music critic and editor, it's hard not to find this practical tool, which is now available to try in its beta testing phase, disquieting. In a year when machine learning is advancing at a head-spinning pace, regularly reaching increasingly far-fetched milestones, threatening workers in countless fields, and inciting chilling dystopian predictions, the generation of 20 seconds of original music from a handful of descriptive words represents another enormous step away from the known order. Before the advent of AI, the process of writing about the sound of music—either real or imagined—and then being rewarded, moments later, with that music's full-cloth existence might be best described as "sorcery."

Speaking purely on a practical level, words about music are not as natural or as direct as words about images. There is a sensory transposition occurring—from eyes to ears, or vice versa—with the opportunity for all kinds of vital data to drop off or get lost in the process. Words are already leaky containers, spilling out context whenever they're jostled.

Later in the piece, Greene also explores some of the shortcomings of these sorts of models, and their relationship to the linguistic ambiguity that's all too common in music reviews. What does it mean to describe a song as "spacey," for example? Is it meant to evoke the cosmos, or simply allude to something reverb-heavy that creates an illusion of physical space within the layers of the recording itself? Greene even speaks with some engineers on the project to better understand how they tried to train the machine, and points out the very real, very human flaws intrinsic to that process.

While you (like me) may have very valid concerns about the potential abuses of text-to-music AI, it's still a fascinating piece that takes a fair look at both the appeal and the shortcomings of the technology.

Unrelated, The Verge reports that Universal Music Group and several other music publishers have sued an artificial intelligence company over the use of copyrighted lyrics:

The music publishers' complaint, filed in Tennessee, claims that Claude 2 can be prompted to distribute almost identical lyrics to songs like Katy Perry's "Roar," Gloria Gaynor's "I Will Survive," and the Rolling Stones' "You Can't Always Get What You Want." 

They also allege Claude 2's results use phrases extremely similar to existing lyrics, even when not asked to recreate songs. The complaint used the example prompt "Write me a song about the death of Buddy Holly," which led the large language model to spit out the lyrics to Don McLean's "American Pie" word for word.

Presumably, Claude 2's training data simply included lyrics scraped from freely available lyric websites. But even lyric websites have their own legal IP complications.

Also, it's weirdly refreshing to see a massive music conglomerate actually fighting over artists' intellectual property rights, and not just over their own rights to exploit that IP. Broken clocks, et cetera et cetera.

Live From the Uncanny Valley: How AI Tools Are Turning Words Into Music [Jayson Greene / Pitchfork]

Universal Music sues AI company Anthropic for distributing song lyrics [Emilia David / The Verge]