In January, the Trump Administration issued an executive order directing the development of an AI Action Plan “to sustain and enhance America’s global AI dominance”.
Last month, the US government invited the public to share their ideas for the AI Action Plan via the Federal Register’s website through March 15.
Last week, the Recording Industry Association of America (RIAA) and nine other industry bodies published the contents of their joint submission to the Office of Science and Technology Policy with a number of suggestions for the AI Action Plan.
The filing was submitted on behalf of music organizations including the NMPA; NSAI; A2IM; the American Federation of Musicians of the United States and Canada; the Artist Rights Alliance; the Department for Professional Employees, AFL-CIO; the International Alliance of Theatrical Stage Employees; the Recording Academy; and SAG-AFTRA (Screen Actors Guild-American Federation of Television and Radio Artists).
The document, submitted on March 14, 2025, goes far beyond a simple list of demands. The organizations frame their argument around a core principle they believe is fundamental to the future of the creative industries: “progress in AI innovation and strong copyright protection are not mutually exclusive. It is not a zero-sum game.”
The music orgs’ comments stand in stark contrast to some of the views on AI shared by certain giants of the tech world in recent weeks.
As reported earlier this week, ChatGPT maker OpenAI, in its own submission, called for fundamental changes to US copyright law that would allow AI companies to use copyrighted works without permission or compensation to rightsholders.
Both OpenAI and Google submitted detailed policy frameworks that could significantly impact music rightsholders and other content creators.
Music stars like Sir Paul McCartney, Paul Simon, and Bette Midler have joined hundreds of Hollywood celebrities in signing a letter pushing back against the proposals.
“Ripping off AI companies through distillation to create competing models is wrong, and ripping off creators through unlicensed copying to create competing works is also wrong.”
Mitch Glazier, RIAA
Commenting on the RIAA, NMPA and other industry bodies’ joint submission, RIAA Chairman & CEO Mitch Glazier said: “One real, practical barrier to AI adoption is the level of skepticism that end users have regarding AI…
“To overcome such skepticism and build trust in AI, including trust that training materials are legitimate and high quality, the Plan should require adequate record keeping and transparency concerning AI training materials and the outputs from AI algorithms.”
Glazier added: “America must be first in encouraging the development of both AI and IP. Ripping off AI companies through distillation to create competing models is wrong, and ripping off creators through unlicensed copying to create competing works is also wrong. We hope the Administration will stand behind American creators and AI companies who both need to thrive to win the global race on AI.
“American law protects creators, and we should demand that other countries do the same. American culture is the most popular in the world. We should stand firm against other countries passing exceptions to their laws that would allow AI companies to copy American content for free, driving AI development away from the USA. That would result in a double loss for our country.”
Here are the key suggestions from the RIAA et al.’s submission:
1. Free Market Licensing for AI Training
The industry bodies demand that AI developers obtain proper licenses before using copyrighted works to train their models. They point to precedents like OpenAI’s licensing agreements with media companies such as Shutterstock and the Financial Times as a positive model.
The recommendation emphasizes that licensing of AI training creates a “symbiotic relationship” between rights owners and AI developers.
“[B]efore AI developers and deployers deploy AI systems, they must first obtain appropriate licenses for any copyrighted works they use to train their AI models, and appropriate authorization if they use a person’s name, image, likeness, or voice in connection with such training.
“Consistent with U.S. law and free market principles, these licenses should be negotiated without regulation and in the free market to ensure fair value and economic competitiveness.”
2. Opposing International Text and Data Mining Exceptions
The RIAA, NMPA and other groups advocate against international copyright exceptions that could potentially enable unauthorized use of American intellectual property.
They argue that such exceptions, often enacted before the rise of generative AI, could damage copyright industries and undermine fair compensation for creators.
“[W]hile certain TDM exceptions might be justified for very specific types of unprotected data, bad actors are using them as a trojan horse to take American copyrighted works for free, denying American copyright holders’ compensation for use of their creations, and undermining the robust US AI training data licensing market,” the orgs write in the submission.
“To level the playing field, deter the offshoring of AI sector investment, and prevent foreign control of American works and data, the US should lead in opposing, and the Plan should oppose, TDM exceptions that include copyrighted works.”
3. Upholding Existing Copyright Laws
The submission argues that current US copyright law is sufficiently robust to address AI-related copyright concerns.
“In particular,” argues the filing, the country’s “fair use doctrine (which is unique to U.S. law and cannot and should not be exported to other countries, which do not share the same values and decades of judicial precedents) provides a manageable and nuanced approach to addressing liability for unauthorized uses of copyrighted works to train an AI system”.
The submission also points to the recent ruling in Thomson Reuters v. Ross Intelligence as reinforcement that unauthorized use of copyrighted works for AI training constitutes direct infringement and not fair use.
“The Court reaffirmed that the impact of the use on existing and potential markets is the single most important element of a fair use analysis and that there was clearly a potential market to use the materials at issue in the case to train AI,” the submission continues.
The filing adds: “[O]ur Nation’s longstanding legal precedent clearly establishes that copyright should protect only human expression. The Copyright Office’s recent report on copyrightability makes clear that this settled doctrine applies equally in the context of AI.
“This approach is consistent with our Constitutional values and promotes human flourishing. The Constitution only protects the rights of human beings. Machines cannot and should not have the same rights.”
4. Protecting Voice and Likeness Rights
The orgs also highlight growing concerns about deepfake technologies, including potential misuse for scams and personal exploitation.
As such, they strongly support the NO FAKES Act, a bipartisan bill designed to protect individuals’ voice and likeness rights against unauthorized AI replication.
“While unauthorized AI voice and likeness cloning has had a great impact on the creative community, the use of AI technologies to these ends has broader personal safety and national security implications,” the orgs note in the submission.
The filing continues: “Not only are there hundreds of unauthorized AI voice and likeness models of various celebrities, one can also easily find publicly available AI voice or image models of political or corporate figures, such as President Trump, Vice President Vance, and Elon Musk.
“In addition, several services and apps have recently come online where one can create an unauthorized voice or likeness clone of anyone’s voice or likeness without the need for any particular technical knowledge.”
The orgs also note that, “To combat these harms while adhering to these principles, the Trump Administration and the Plan should support the NO FAKES Act”.
They add: “This legislation would provide certainty by creating a national floor that protects a person’s voice and likeness rights as they relate to digital replicas and gives victims meaningful redress for unauthorized digital uses while reserving appropriate First Amendment protections.”
5. Promoting AI Transparency
In their filing, the RIAA et al. recommend that AI companies maintain detailed records of training materials and provide reasonable summaries of works used in AI model development.
They support the proposed TRAIN Act, which would create a court-administered process for copyright holders to investigate potential unauthorized use of their works.
According to the filing, “To overcome such skepticism and build trust in AI, including trust that training materials are legitimate and high quality, the Plan should require adequate record keeping and transparency concerning AI training materials and the outputs from AI algorithms”.
The filing adds: “Consistent with the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, AI companies should keep adequate records of the training materials used to train their AI systems and the provenance of such training materials.
“As noted by NIST, “[m]aintaining the provenance of training data and supporting attribution of the AI system’s decisions to subsets of training data can assist with both transparency and accountability.”
“AI companies should provide the public with reasonable summaries of the works used to train their AI systems. Those publicly available summaries should include information sufficient for copyright owners to determine if their works were used for AI training without a license for such use.”