If you can’t beat ‘em, join ‘em.
That may be the emerging philosophy that music recording companies are adopting in the age of AI-generated deepfakes – especially in the wake of the unauthorized, AI-generated music that took the entertainment world by storm earlier this year.
This past spring, a deepfake track featuring mimicked vocals of Canadian artists Drake and The Weeknd rapidly went viral before it was taken down by streaming services.
The reaction from Universal Music Group (UMG) – to whose labels both Drake and The Weeknd are signed – was swift and unequivocal.
“The training of generative AI using our artists’ music (which represents both a breach of our agreements and a violation of copyright law) as well as the availability of infringing content created with generative AI on DSPs, begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation,” UMG said in a statement to MBW.
Yet since then, numerous examples have showcased the creative potential of AI-generated music. For instance, the YouTube channel “There I Ruined It” has garnered millions of views with its song snippets, one of which is a “Hank Williams version” of NWA’s iconic 1980s rap track Straight Outta Compton. Another track on the channel features Johnny Cash singing Barbie Girl.
However one feels about AI-generated music, it’s hard to deny the creative spirit behind works such as these – and the potential for monetization that lies within.
So it may not be much of a surprise that the very same UMG is now reportedly in talks with Google to license artists’ melodies and voices for AI-generated songs.
According to a report in the Financial Times, these talks are at an early stage, so no one should hold their breath for imminent announcements. But the core idea is to create an environment in which this kind of music is made legitimately, with creators paying rights holders for the use of artists’ works and voices.
Importantly, the FT report indicated that Warner Music Group is also in talks with Google about such a product.
If the world’s largest (Universal) and third-largest (Warner) recorded music companies are both in talks on the issue, it’s a clear sign that this is where things are headed in the industry.
However, there are potentially large issues that will need to be ironed out if these types of deals are to prove successful in the long run.
Essentially, many of the issues that AI-generated musical deepfakes bring up revolve around some aspect of the supply-and-demand dynamic at play in the music business.
Below are three ways that this supply-demand balance could be upended by legal deepfakes:
1. A flood of new AI tracks risks reducing the value of an artist’s actual produced content
One of the paradoxes that could present itself is a reduction in the value of music created by very popular artists.
Let’s say Universal agrees to license Drake’s vocals to AI-generating apps. Now imagine that thousands of Drake fans around the world create “new Drake tracks” using that AI app.
Let’s further imagine that some small fraction of those fan-created tracks – in the single digits, percentage-wise – become popular. That could mean dozens of new “Drake hits” in the music market.
Under the licensing deal, Drake and UMG would receive payment for that music – presumably some portion of overall sales and streams. But with dozens of new Drake tracks hitting the market, it could conceivably dilute the demand for Drake’s own new content. Sales of new albums could shrink, as could stream counts for new tracks.
Drake himself could face difficult questions in this scenario. If the most popular “Drake” track of the past year is an AI-generated deepfake from a fan, does he perform that track when he goes on tour? Fans might be disappointed if he doesn’t. And if he does, does he owe money to the song’s creator?
Multiply this phenomenon across dozens of major musical acts, and this could be a serious problem for the entire music industry. In essence, even licensed AI-generated deepfake tracks could capture some of the income (and attention) that once went to popular artists and their labels.
2. Artists could find themselves under pressure to allow deepfakes
A potential deal between Google and UMG reportedly includes a provision ensuring that only artists who consent would have their musical style and voice licensed for deepfakes.
From what we’ve seen so far, different artists will take different approaches. There is, for instance, Sting’s take on AI: the veteran singer and bassist recently predicted a “battle” ahead between AI and human musicians.
“The building blocks of music belong to us, to human beings,” he told the BBC.
“That’s going to be a battle we all have to fight in the next couple of years: Defending our human capital against AI.”
Meanwhile, Grimes is taking the opposite approach. The Vancouver-born musician didn’t wait for the music industry to reach an agreement with AI developers: she has launched her own project, currently in beta, that allows fans to create music using her voice – provided she gets a 50% cut of any royalties earned.
But here, again, issues of supply and demand could become a problem. Cutting off the supply of a musician’s voice or style won’t necessarily kill the demand among fan-creators. People with access to AI music makers that are sophisticated enough could still create illicit tracks imitating those artists.
The recording companies behind those artists would then be faced with a choice: Play whack-a-mole with the illicit creators, issuing takedown notices for the unauthorized music wherever it appears, or sign a deal with the creator(s) and take a cut of the revenue.
From a commercial standpoint, signing a deal is the better option. Smarter record company execs understand the importance of maintaining strong relationships with their artists, but if push comes to shove, will they be willing to leave money on the table out of deference to an artist’s wishes?
In this way, musicians could find themselves under pressure from within the industry to agree to deepfakes – even if they feel that this compromises their identity and their legacy as musicians.
3. Deepfake content could become a barrier to the rise of new musicians
Today’s music scene is marked by the constant arrival of new acts, spurred on by the rising popularity of genres such as K-pop, Regional Mexican and Afrobeats, and helped along by the newer phenomenon of “glocalized” music.
However, licensed deepfakes could make that more difficult. If Johnny Cash and Frank Sinatra are still “making music,” that could reduce the time and attention fans have to devote to new artists. That could prove to be a problem for the many labels for whom A&R is their bread and butter – not to mention for the live music business.
To be sure, deepfakes are unlikely to put an end to new musicians making names for themselves; after all, novelty is a major part of the music scene. But they could certainly reduce the windows of opportunity available.
And for the live music business, there are ways around the problem. Musical acts such as the Damon Albarn-led Gorillaz have proven that virtual musical acts can be huge successes on the live music circuit. But a Johnny Cash world tour, with no Johnny Cash? Fans and the Cash estate alike might object.
It’s likely that issues precisely of this sort are why the UMG-Google talks are – as the Financial Times reported – still at an early stage. Delivering on a deal like this will require some serious innovation in thinking about the music business – innovation perhaps on a par with AI technology itself.
For the recording companies, it will be a matter of striking a fine balance between the desire to monetize AI-generated musical content and protecting the works and reputations of the artists that are at the heart of the music industry.
The first deals of this sort may not be perfect; they will likely be stepping stones to better arrangements, just as the early deals between social media platforms and music companies were eventually replaced by improved terms once the parties gained a greater understanding of how music works on those platforms.
So, too, will it be with AI and the music industry. What the developments of recent months have shown is that walking away from advances in AI simply isn’t an option for the music business.
The future is here, and the show must go on.