This gets into a really weird edge case: how much do you have to change before the new AI isn't the same person and doesn't count as "stealing" their identity? E.g. if you train an AI on real conversations, real voice recordings, etc. from a real human and release it, that's one thing. But what if you modify it enough that it's like a different song...
Personally, I'd guess I wouldn't consider it "stealing" an identity if it were trained evenly across a few individuals. But it's a weird case ethically. Can I train a model on my favourite bloggers' works and tweets, then release a tweetbot that replies to things I think would be topical? I've basically done that, just without hooking it up to Twitter, and I think that's fine as long as the model stays private. But this is certainly thorny ground.