Popular music has been synthetic and soulless for decades now. People will listen to whatever sounds good to them, we already know the bar is very low, and the hard truth is that it's all subjective anyway.
More of a behavioural science take: is music the sound being played, or the people making it?
We’ve had software accompaniment for a long time. Elevator music. The same 4 chords arranged in similar ways for decades. Hasn’t destroyed music. Neither will AI.
At some point people are going to want to know who’s on the other side making the music.
Unless your argument is that nobody values artists… which I guess is one of the primary conceits of GenAI enthusiasts today.