Misuse of copyrighted music by AI companies ‘could exploit artists’

Misuse of copyrighted music by artificial intelligence companies could exploit musicians, a former executive at a leading tech startup has warned.

The technology is trained on a large number of existing songs, which it uses to generate music based on a text prompt.

Copyrighted work is already being used to train AI models without permission, according to Ed Newton-Rex, who resigned as head of Stability AI’s audio team because he disagreed with the company’s view that training generative AI models on copyrighted works is “fair use” of the material.


Mr Newton-Rex told Sky News that his issue is not so much with Stability as a company as with the generative AI industry as a whole.

“Everyone is really taking the same position, and that position is basically that we can train these generative models on whatever we want, and we can do it without consent from the rights holders, from the people who actually created that content and who own that content,” he said.

Newton-Rex added that one reason big AI companies do not agree deals with artists and labels is that doing so involves “leg work” that costs them time and money.

Emad Mostaque, co-founder and CEO of Stability AI, said fair use supports creative development.


Fair use is a legal doctrine that allows copyrighted work to be used without the owner’s permission for certain specified purposes, such as research or teaching.

Stability’s sound generator, Stable Audio, allowed musicians to opt out of its pool of training data.

Millions of AI-generated songs are created online every day, and big-name artists are even signing deals with tech giants to create AI music tools.


Can generative AI become a hit in the music industry?

Musicians throughout the ages have embraced technology, whether it’s manipulating their voices with autotune or using digital production tools to sample and recycle music.

Sampling, which is the reuse of a sound recording in another piece of recorded music, was considered a threat to musicians’ work when the technology was first developed.

Regulation has since been introduced, meaning that an artist must obtain permission from the copyright holder to legally use a sample.

Now sampling is the cornerstone of a variety of modern music genres from hip-hop to jungle.

In some ways, generative AI is no different. But whether it proves a boon or a threat to art now depends on regulators.

Tech giants like Google, YouTube, and Sony are launching AI tools that empower everyone to generate music based on a text prompt.

Some artists have agreed to have their work used in these models, but there has been an influx of AI generators believed to have scraped music without creators’ consent.

Bad Bunny, the Grammy Award-winning singer from Puerto Rico, was the latest in a string of established artists to criticize the use of his voice without his consent in an AI-generated song that went viral in November.

He asked his 20 million WhatsApp followers to leave if they liked “this naughty song that’s viral on TikTok … I don’t want you on tour either.”


Boomy, an AI music generator that claims it does not use copyrighted work, said more than 18 million songs were produced using the platform in November.

The Human Artistry Campaign, which represents music associations from around the world, has called for rules to protect copyright and ensure artists can license their work to AI companies for a fee.

Moiya McTier, senior adviser to the campaign, said: “When artists’ work is used in these models, those artists must be credited and compensated, and must have given their consent for their work to be used.”
