MBW Views is a series of exclusive op/eds from eminent music industry people… with something to say. The following op/ed comes from Ran Geffen Levy (pictured inset), Founder of OG.studio, which provides insights to music tech start-ups, companies and VCs. He is also the CEO of Amusica Song Management in Israel.
A few days ago, I asked ChatGPT (GPT-4o) to create a pop-punk version of Neil Young’s “Harvest Moon”.
It provided me with notation and, at my request, adjusted the lyrics to fit the genre, suggested instrumentation, gave me a full explanation of the changes it made to the original evergreen, and gave me marketing tips.
Here is the transcript of the conversation.
Legal issues aside (I will get to those shortly) this is a good thing. Music composition is a binary code, a DNA – and songwriters own it. Training AI does not fit with any of the existing rules that underpin the music industry: it’s not a mechanical right, not a performance right, and there is no compulsory licence. It’s pure and simple: a commercial use of intellectual property.
Songwriters and music publishers are finally going to get their rightful share. When it comes to data training, AI does not care who sings “Jolene”, it cares how it was written and why it became a hit.
All we need to do is make sure they care who wrote it and that they are compensated accordingly. The maths is simple: One recording equals one recording, but one composition can equal an infinite number of recordings. When it comes to AI, the power is with the songwriters.
But at the moment, the songwriters are fucked
At the beginning of the century, I was involved in one of the first deals to license ringtones. We closed a deal that gave songwriters and publishers 40% off the top. The ringtones were MIDI files that sold like hot potatoes, with all proceeds going straight to the writers’ pockets.
Then came the polyphonic ringtone and we were asked to share the income with the labels. I picked up the phone and asked a major publisher I represented at the time: “What the fuck? No recording is being used. Why should we split the revenue with the labels?” His answer was: “It’s complicated,” and he explained. My first thought was: we’re fucked.
A quarter of a century later, songwriters have been robbed (again) of their minimal, lawful share of the streaming bonanza while the other players in the industry get to keep their rates and enjoy Spotify’s plans for a post-streaming era revolving around the monetisation of superfans.
The only thing that has changed since then is that songwriters and publishers now have partners in the form of equity funds that have spent billions on the acquisition of their shares and therefore stand in the shoes of the songwriter. They will want to maximize their ROI. Watch how they react when somebody tries to devalue their assets.
Who poked the bear?
What does it take to create music using AI? Platforms that can understand and generate lyrics, composition and sound.
Lyrics Module (LLM): Trained with a focus on song lyrics, together with metadata and theoretical information on how songs are written.
I asked Claude, Reka, Gemini and ChatGPT-4 to transform Troilus’s Song from Chaucer’s “Troilus and Criseyde” into modern English pop and rock. Each AI returned a new version of the song, including detailed explanations of the changes. When I asked Reka, “How do you know how to do that?”, its answer was: “I’ve spent years studying and analyzing different forms of literature and music. This hands-on experience has given me a nuanced understanding of how to adapt and transform works across different mediums and genres.” Three of the models (Reka, Gemini, ChatGPT) were able to create versions of Dámaso Pérez Prado’s “Mambo Number 5” in different genres. Claude refused due to copyright issues.
Composition Module (LLM): Trained on melody and harmony (notes, chords) as well as music theory (counterpoint, Western harmony, etc.). No recordings are required.
OpenAI started its journey of generative music creation with the GPT-2-based MuseNet, which was trained solely on MIDI files and created a map showing how one composer was influenced by others. ChatGPT-4 was able to generate the notation and lyrics of a pop song based on Vivaldi’s “Spring”.
At my request, it converted “Bohemian Rhapsody” to “Bohemian Jazzody” (Arr. ChatGPT). Gemini was able to convert “Harvest Moon” to K-Pop (including notation and Korean lyrics), and Reka converted “Shallow” to a rock song. Claude is able to create new music genres and generate ABC notation that can be converted into MIDI files.
The bottom line: we can now use plain text to create new music or re-arrange existing songs using LLM AI modules. It’s clunky at the moment, but technology can bridge the gap in a heartbeat.
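To make the plain-text-to-music step concrete, here is a minimal sketch of how a fragment of ABC notation (the text format an LLM can emit, as with Claude above) maps onto MIDI note numbers. This is a toy parser covering only natural notes in two octaves; real tools such as abc2midi or music21 handle the full ABC grammar, and all code here is illustrative rather than any AI company’s actual pipeline.

```python
# ABC letters to MIDI pitch: uppercase C..B is the octave starting at
# middle C (MIDI note 60); lowercase c..b is one octave higher.
ABC_BASE = {"C": 60, "D": 62, "E": 64, "F": 65, "G": 67, "A": 69, "B": 71}

def abc_to_midi_notes(abc_melody: str) -> list[int]:
    """Convert a line of very simple ABC notation into MIDI note numbers."""
    notes = []
    for ch in abc_melody:
        if ch.upper() in ABC_BASE:
            pitch = ABC_BASE[ch.upper()]
            if ch.islower():          # lowercase = one octave up in ABC
                pitch += 12
            notes.append(pitch)
        # bar lines ("|"), spaces, etc. carry no pitch and are skipped
    return notes

# "CEG c" is a C major arpeggio ending an octave above middle C
print(abc_to_midi_notes("CEG c|"))   # -> [60, 64, 67, 72]
```

Once the pitches exist as MIDI data, the sound module described below can render them in any production style, which is exactly why no original recording is needed at this stage.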
Based on the above, it seems that the LLMs have already been trained on songwriters’ data and can create derivative works that are protected by existing law. Blackstone spent a few billion dollars to buy the rights for the likes of “Harvest Moon” and “Shallow”, while Neil Young has firm convictions about the use of his music, and Queen are in the process of selling their catalogue for a sizable sum of money. What does this mean for them? Have AI companies just poked the bear?
Sound Module: Trained on recordings to learn the style of the recording artist and the production. Sound can also be created without any existing sound recordings, by using the composition module to generate MIDI files that are then converted into full recordings.
Fairly Trained founder Ed Newton-Rex has written two excellent analysis pieces on the output of Suno and Udio. Most of his insights revolve around melody, chords and lyrics. He refers to recordings in one capacity: style, and the use of real (recording) artists and bands. Sam Altman told Lex Fridman that those real artists, at least, should be compensated. The omission of songwriters and publishers is, again, glaring: those real artists are performing the work of real songwriters.
Twenty-odd years ago, I was asked to license a composition for an educational CD-ROM. I didn’t have a clue. I picked up the phone and called Jane Dyball (WCM at the time) and asked her for advice. “Let me tell you what I do when I get a request for a new form of licensing,” she said. “Stick your finger in your mouth, put it in the air, see where the wind blows and name your price.”
It seems that the wind is blowing in the wrong direction when it comes to songwriters. While the focus of the entire music industry revolves around the “fairly trained” narrative, and rightly so, song rights owners should focus on “fair compensation” from AI companies and the owners of recordings alike.
Use the C word
Compensation is based on the widely agreed principle that AI companies should disclose the origin of their training data and that the owners of such data can opt out. It is included in the Copyright Disclosure Act, in the EU AI Act, in IMPF’s Ethical Guidelines, and in IMPEL’s Sarah Williams’ take on the two other Cs: Content and Culture.
When it comes to data input, it is very simple: AI companies have struck deals with content owners for data training; OpenAI with Axel Springer, Apple with Shutterstock, UMG with Endel and BandLab. I don’t know if these deals are based on a one-time or recurring training fee and/or if there’s an equity upside. Whatever the deal, we are talking about blanket agreements with no real parameters for the distribution of royalties.
To ensure fair distribution and create additional monetisation opportunities for rights owners, the music industry must lobby for an attribution clause that would force AI companies to keep records on which cluster of data was used to generate a new piece of music. AI21Labs is doing it with text, Bria is doing it with images. Numerous solutions could be implemented on new and existing music training datasets.
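As a thought experiment, the attribution records described above could look something like the sketch below: each generated track logs which clusters of training data influenced it, with weights, so that a royalty pool can later be split pro rata. Every name, weight and number here is a hypothetical assumption for illustration, not a real AI company’s log format or any proposed industry standard.

```python
from collections import defaultdict

def split_royalties(attribution_log, royalty_pool):
    """Distribute a royalty pool across rights owners, pro rata to the
    attribution weights recorded for each generated output."""
    totals = defaultdict(float)
    for record in attribution_log:
        weight_sum = sum(record["weights"].values())
        for owner, weight in record["weights"].items():
            # each output contributes one equal share, split by weight
            totals[owner] += weight / weight_sum
    n_outputs = len(attribution_log)
    return {owner: royalty_pool * share / n_outputs
            for owner, share in totals.items()}

# Two generated tracks; each logs the (hypothetical) rights owners whose
# data clusters were used to generate it.
log = [
    {"output_id": "gen-001", "weights": {"Publisher A": 3.0, "Publisher B": 1.0}},
    {"output_id": "gen-002", "weights": {"Publisher A": 1.0, "Publisher C": 1.0}},
]
print(split_royalties(log, royalty_pool=100.0))
# Publisher A: 62.5, Publisher B: 12.5, Publisher C: 25.0
```

The point of the sketch is that once per-output attribution exists, distribution stops being a blanket payment and becomes ordinary royalty accounting.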
If an attribution clause and additional compensation for AI music output are not achieved, we are in real danger of flat-lining future income streams. Granting an input license without an output license is like giving sync rights without ensuring public performance income.
6 questions
Next week, music publishers and songwriters from around the globe will gather around tables at the Grosvenor House Hotel to celebrate the craft and achievements of outstanding music creators: The Ivors and the Polar Music Prize. These events would be a good place to start answering the following questions:
Are songwriters the main contributors to AI data training?
Will we allow recording owners to license recordings for data training without songwriters’ approval?
Will all music publishers fight on behalf of songwriters regardless of their affiliation?
Is royalty distribution parity the minimum when output includes recording data?
Are we willing to pull publishing rights to ensure fair compensation, as UMPG did with TikTok?
Can we put ego aside to create a unified system for the management of new revenue streams?
The answers to these questions will determine the value of songwriters’ assets in the future, the ROI on billions of dollars invested by equity funds, the value of JKBX assets, and the future role of collection societies, CISAC, IMPF, the Ivors Academy and more.
I believe we have a one-time opportunity to create change and I am working with good people to develop a platform to support it.
The Ivors Academy’s CEO, Roberto Neri, articulated the urgency of it all: “I believe now, more than ever, is the pivotal moment to ensure music creators’ interests are protected, championed, valued, and recognized for their central and indispensable role in the success of the entire music business.”
Enjoy the Ivors and the Polar Music Prize next week; start the fight today.