AI-generated music has flooded the major streaming platforms without anyone asking for it. Deezer says it detects 75,000 AI tracks uploaded every day, and the number keeps growing. Spotify says it has removed 75 million spam tracks of that type in the last twelve months. And Apple Music acknowledges that more than a third of everything uploaded to it is “100% AI”.
Why it matters. This is not just a problem of catalog quality or platform reputation; it is an economic one.
- Spotify, Apple Music and most platforms operate on a proportional (pro-rata) distribution model: each artist receives a share of the total royalty pool equal to their share of total streams.
- The more AI songs accumulate listens (even fraudulent ones generated by bots), the more they dilute what a real musician earns.
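The dilution described above is simple arithmetic. A minimal sketch, with entirely hypothetical numbers (the pool size, stream counts and bot volume are illustrative assumptions, not figures from any platform):

```python
# Hypothetical illustration of pro-rata royalty dilution:
# a fixed royalty pool is split by each artist's share of total streams,
# so fraudulent bot streams shrink every real artist's slice.

def pro_rata_payout(pool: float, artist_streams: int, total_streams: int) -> float:
    """Artist's royalty = pool * (their streams / all streams on the platform)."""
    return pool * artist_streams / total_streams

POOL = 1_000_000.0          # total royalties to distribute (hypothetical)
REAL_STREAMS = 200_000      # one real artist's streams (hypothetical)
HONEST_TOTAL = 10_000_000   # platform-wide streams without fraud (hypothetical)

before = pro_rata_payout(POOL, REAL_STREAMS, HONEST_TOTAL)

# Bots inject 2 million fraudulent streams; the pool stays the same size.
BOT_STREAMS = 2_000_000
after = pro_rata_payout(POOL, REAL_STREAMS, HONEST_TOTAL + BOT_STREAMS)

print(f"payout without fraud: ${before:,.2f}")   # $20,000.00
print(f"payout with bot fraud: ${after:,.2f}")   # $16,666.67
```

The artist's streams have not changed at all, yet their payout drops by a sixth: the fraudulent streams capture that money instead.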
Between the lines. Although more and more of this music is uploaded, almost nobody listens to it, at least not on purpose (AI songs sometimes sneak into algorithmic discovery playlists). The problem is not demand, which barely exists, but the sheer and growing volume, which distorts the algorithms and erodes the income of real artists even though their songs are still the ones people actually want to hear.
Someone is uploading music nobody asked for to collect money they do not deserve, because the listeners arrive via bots. And that is money the real artists stop earning.
The background. The most extreme case documented so far is that of Michael Smith, an American businessman who, between 2017 and 2024, generated more than 10 million dollars in royalties using Suno and other tools to create hundreds of thousands of songs, plus armies of bots to play them automatically.
It was the first case of AI streaming fraud criminally prosecuted in the United States. According to the indictment, he racked up 660,000 streams a day. One billion streams and zero fans.
Yes, but. The platforms are already confronting this wave. Deezer has been the most aggressive: it has implemented automatic AI detection, excludes those songs from algorithmic recommendations and has demonetized 85% of their streams. Bandcamp has banned AI-generated music outright. Apple Music has begun rolling out its ‘Transparency Tags’ (optional for now), and Spotify has introduced a ‘Verified by Spotify’ badge to guarantee there is a human behind each artist profile.
The problem is that both Spotify and Apple have opted for voluntary systems: it is the labels and distributors who must declare whether they have used AI, and nobody making a living from fraud is going to do that. There is an important distinction:
- It is one thing for a musician to use AI as a tool within their creative process (refining lyrics, generating a backing track, experimenting with sounds…) and quite another for an entire song to come out of Suno or an equivalent with a couple of prompts and no real human intervention.
- For now, the platforms do not distinguish between the two.
And Spotify has also left a door open by noting that “the concept of artistic authenticity is complex and rapidly evolving,” which in practice means that AI artists could end up being verified one day.
