Respectfully, you're raising a whole lot of arguments here that have nothing to do with any point I was raising, and they don't seem to be moving this discussion forward in any significant way. The point of this subthread was a user saying the following:
>But if I train my own neural network inside my skull using some artist's style, that's ok?
This post and others use a lot of flowery language to point out that we train artificial neural networks and real neural networks in different ways. OK, great. I don't think anyone is saying that's not true. What I am saying is that it's irrelevant.
If I am an exceptional imitator of the style of Jackson Pollock and I make a bunch of paintings that are very much in that style but clearly not his work, I'm not going to be sued. My work will be labeled, rightfully so, as derivative, but I have the right to sell it because it's not the same thing. Is that somehow more acceptable because I can only do it slowly and at a low volume? What if I start an institute whose sole purpose is training others to make Jackson Pollock-like paintings? What if I skip the people and make a machine that makes a similar quality of paintings with a similarly derivative style? Is that somehow immoral / illegal? Why?
There's a whole lot of hand-wavey logic going on in this thread about context and opera and special human magic that only humans can possibly do and that somehow makes it immoral for an AI to do it. I have yet to see a simple, succinct argument for why that is the case.
> This post and others use a lot of flowery language to point out that we train artificial neural networks and real neural networks in different ways. OK, great. I don't think anyone is saying that's not true. What I am saying is that it's irrelevant.
Maybe my language was too lofty.
The point is: you don't train "your artificial intelligence," because you're not an artificial intelligence. You train your whole self, which is a system, a very complex system.
So you can think in terms of "I don't like death, I don't want to display death"
You can learn how to paint using your feet, if you have no hands.
You can be blind and still paint and enjoy it!
An AI cannot decide "not to display death" in someone's face, not even if you command it to, because out of context that doesn't mean anything.
> Jackson Pollock
Jackson Pollock is the classic example to explain the concept: of course you can make the same paintings Jackson Pollock made.
But you'll never be Jackson Pollock, because that trick works only the first time, if you are a pioneer.
If you create something that looks like Pollock, everybody will tell you "oh... it reminds me of Jackson Pollock..." and no one will say "HOW ORIGINAL!"
Like no one can ever be Armstrong again, land on the Moon and say "A small step for man (etc etc)"
Pollock happened; you can of course copy Pollock, but nobody copies Pollock, not because it's hard, but because it's cheap AF.
So it's the premise that is wrong: you are not training, you are learning.
They are very different concepts.
AIs (if we wanna call them "intelligent") are currently just very complex copy machines trained on copyrighted material.
Remove the copyrighted material and their output would be much less than unimpressive (probably a mix of very boring and very ugly).
Remove the ability to watch copyrighted material from people and some of them will come up with an original piece of art.
You're typing a lot in these posts, but literally every point you're making here is orthogonal to the actual discussion, which is why utilizing the end product of exposing an AI to copyrighted material and exposing a human to copyrighted material are morally distinct.
> which is why utilizing the end product of exposing an AI to copyrighted material and exposing a human to copyrighted material are morally distinct.
Sorry for writing in capital letters; maybe that way they will stand out enough for you to focus on what's important.
WE ARE NOT AIS
An AI is the equivalent of a photocopier, or of sampling a song to make a new song: there are limits on how much you can copy/use copyrighted material, limits that do not apply TO YOUR EARS, because you hearing a song does not AUTOMATICALLY AND MECHANICALLY translate into a new song. You still need to LEARN HOW TO MAKE MUSIC, which is not about the features of the song, it's about BEING ABLE TO COMPOSE MUSIC.
Which is not what these AIs do: they cannot compose music, they can only mix and match features taken from copyrighted material into new (usually not that new, nor that good) material.
If we remove the copyrighted material from you, you can still make music.
You could be deaf and still compose music.
If we remove copyrighted material from AIs they cannot compose shit.
Because the equivalent of a deaf person for an AI that creates music CANNOT EXIST, for obvious reasons.
So AIs DEPEND ON copyrighted material, they don't just learn from it, they WOULD BE USELESS WITHOUT IT.
and morally the difference is that THEY DO NOT PAY for the privilege of accessing the source material.
They take, without giving anything back to the artists.
I'll try to address your underlying thought, and hope I'm getting it right.
I think you are right to be skeptical and cautious in the face of claims of AI progress. From as far back as the days of the Mechanical Turk, many such claims have turned out to be puffery at best, or outright fraud at worst.
From time to time, however, inevitably, some claims have actually proven to be true, and represent an actual breakthrough. More and more, I'm beginning to think that the current situation is one of those instances of a true breakthrough occurring.
To the surface point: I do not think the current proliferation of generative AI/ML models is unoriginal per se. If you ask them for something unoriginal, you will naturally(?) get something unoriginal. However, if you ask them for something original, you may indeed get something original.
> If we remove copyrighted material from AIs they cannot compose shit.
I wonder in what way you mean that? In any case, the latest Stable Diffusion model file itself is 3.5 GB, which is several orders of magnitude smaller than the training dataset.
It probably doesn't contain much literal copyrighted data.
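The "orders of magnitude" claim is easy to sanity-check with back-of-envelope arithmetic. A rough sketch: the 3.5 GB figure comes from the comment above, while the training-set size here is an assumption on my part (Stable Diffusion was trained on LAION subsets on the order of billions of image-text pairs; the exact count is not stated in this thread).

```python
# Back-of-envelope check: how many bytes of model weights exist
# per training image? (Dataset size is an assumed order of magnitude,
# not a figure from this thread.)
model_size_bytes = 3.5 * 1024**3        # ~3.5 GB checkpoint, per the comment
training_images = 2_000_000_000         # assumed: billions of image-text pairs

bytes_per_image = model_size_bytes / training_images
print(f"{bytes_per_image:.2f} bytes of weights per training image")
# roughly 1.9 bytes per image under these assumptions
```

Since even a small JPEG is tens of kilobytes, the model would be storing on the order of 100,000x less data than it was shown, which is why it can't be holding verbatim copies of most of its training set, whatever one concludes about the ethics.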
You're making much more concise arguments now, I think that makes the discussion more useful and interesting.
I would take the position that it's self-evident that if you take the 'training data' away from humans, they also can't compose music. If you take a baby, put it in a concrete box for 30 years (or until whatever you consider substantial biological maturity), and then put it in front of a piano, it's not going to create Chopin. It might figure out how to make some dings and boops and will quickly lose interest.
Humans also need a huge amount of training data and we, at best, make minor modifications to these ideas to place them into new context to create new things. The difference between average and world class is vanishingly small in terms of the actual basic insight in some domain. Take the greatest composers that have ever lived and rewind them and perform our concrete box experiment and you'll have a wild animal, barely capable of recognizing cause and effect between hitting the piano and the noise it makes.
That world class composer, when exposed to modern society, consumed an awful lot of media for 'free' just by existing. Should they be charged for it? Did they commit a copyright infraction? Why or why not?