
unless you are in fact a living and breathing cyborg (in which case, congratulations), the wetware inside your head is not analogous to the neural networks that are producing these images in any but the most loosely poetic sense.



No? The mechanisms are different but the underlying idea is the same: identify important features and replicate those features in a new context. If an AI identifies those features quickly, or if I identify them over a lifetime, what's the difference? If I do that, you might say my work is derivative, but you won't sue me. Why is it different if an AI does it?


This comment answers your questions:

https://news.ycombinator.com/item?id=33425414


Not particularly. The parent post is not concerned with, and makes no claims to special knowledge of, the internal details of the modelling in the mind or in the machine; it is only concerned with the output.


> The mechanisms are different but the underlying idea is the same

no.

it's like asking a person to say a number between 1 and 6, then asking the same question of a die and concluding that humans and dice work the same way.

> identify important features and replicate those features in new context

untrue

If you think that's what people do, you can obviously conclude that AI and humans are similar.

But people don't identify features. People first of all learn how to replicate, mechanically, the strokes, using the same tools as the original artists, until they are able to do it. Most of the time people fail and repeat the process until they find something they are actually very good at, and only then do the good ones develop their own style, based either on some artistic style or some artistic meaning.

But the first difference we see here is that humans can fail to replicate something and still become renowned artists.

An AI cannot do that.

Not on its own.

For example, many probably already know, but Michelangelo was a sculptor.

He was proficient as a painter too, but painting wasn't his strongest skill.

So artists are, first of all, creators, not mere replicators, in many different forms. They are not equally good at everything, but their knowledge percolates into fields related to theirs: if you need to make preparatory drawings for a sculpture, you need to be good at drawing and probably painting (light, shadow, mood, and expression are all fundamental to a good sculpture).

Secondly, the features artists derive from other art pieces are not the technical ones, those needed to make an exact replica of the original, but those that make it special.

For example, in the case of Michelangelo, the Pietà has some features that an AI would surely miss.

First of all, the way he shaped the marble was unheard of; it doesn't mean much if you don't contextualize the work and immerse it in the historical period in which it was created.

An AI could think that Michelangelo and Canova were contemporaries, while they were actually separated by three centuries, which makes a lot of difference in practice and in spirit.

But more importantly, Michelangelo's Pietà is out of proportion: he could not render the two figures at the correct scale, proving that even a genius like him could not easily create a faithful reproduction of two adults, one in the lap of the other, with the tools of the 16th century.

The Virgin Mary is very, very young, which was at odds with her role as a grieving mother, and, most important of all, the Christ figure is not suffering, because Michelangelo did not want to depict death.

An AI would assume that those are all features of Michelangelo's way of sculpting, but in reality they are the result of a mix of the complexity of the work, the time when it was created, the quality and technology of the tools used, and the artist's intentions, which makes the work unique and, ultimately, irreproducible.

If you used an AI to reproduce Michelangelo, everybody would notice, because it's literally something a complete noob or someone with very bad taste would do.

So to hide the difference, you would have to copy the works of lesser-known artists, making it even more unethical.


Respectfully, you're raising a whole lot of arguments here that have nothing to do with any point I was making, and they don't seem to be moving this discussion forward in any significant way. The point of this subthread was a user saying the following:

>But if I train my own neural network inside my skull using some artist's style, that's ok?

This post and others use a lot of flowery language to point out that we train artificial neural networks and real neural networks in different ways. OK, great. I don't think anyone is saying that's not true. What I am saying is that it's irrelevant.

If I am an exceptional imitator of the style of Jackson Pollock and I make a bunch of paintings that are very much in that style but clearly not his work, I'm not going to be sued. My work will be labeled, rightfully so, as derivative, but I have the right to sell it because it's not the same thing. Is that somehow more acceptable because I can only do it slowly and at a low volume? What if I start an institute whose sole purpose is training others to make Jackson Pollock-like paintings? What if I skip the people and make a machine that makes a similar quality of paintings with a similarly derivative style? Is that somehow immoral / illegal? Why?

There's a whole lot of hand-wavey logic going on in this thread about context and special human magic that only humans can possibly do, which somehow makes it immoral for an AI to do it. I have yet to see a simple, succinct argument for why that is the case.


> This post and others use a lot of flowery language to point out that we train artificial neural networks and real neural networks in different ways. OK, great. I don't think anyone is saying that's not true. What I am saying is that it's irrelevant

Maybe I was being too highbrow.

The point is: you don't train "your artificial intelligence", because you're not an artificial intelligence. You train your whole self, which is a system, a very complex system.

So you can think in terms of "I don't like death, I don't want to display death"

You can learn how to paint using your feet, if you have no hands.

You can be blind and still paint and enjoy it!

An AI cannot think of "not displaying death" in someone's face, not even if you command it to, because out of context it doesn't mean anything.

> Jackson Pollock

Jackson Pollock is the classic example to explain the concept: of course you can make the same paintings Jackson Pollock made.

But you'll never be Jackson Pollock, because that trick works only the first time, if you are a pioneer.

If you create something that looks like Pollock, everybody will tell you "oh... it reminds me of Jackson Pollock..." and no one will say "HOW ORIGINAL!"

Like no one can ever be Armstrong again, land on the Moon and say "A small step for man (etc etc)"

Pollock happened. You can of course copy Pollock, but nobody does, not because it's hard, but because it's cheap AF.

So it's the premise that is wrong: you are not training, you are learning.

They are very different concepts.

AIs (if we want to call them "intelligent") are currently just very complex copy machines trained on copyrighted material.

Remove the copyrighted material and their output would be much less than unimpressive (probably a mix of very boring and very ugly).

Remove the ability to watch copyrighted material from people and some of them will come up with an original piece of art.

It happened many times throughout history.


You're typing a lot in these posts but literally every point you're making here is orthogonal to the actual discussion, which is why utilizing the end product of exposing an AI to copyrighted material and exposing a human to copyrighted material are morally distinct.


> which is why utilizing the end product of exposing an AI to copyrighted material and exposing a human to copyrighted material are morally distinct.

sorry for writing in capital letters, maybe that way they will stand out enough for you to focus on what's important.

WE ARE NOT AIS

an AI is the equivalent of a photocopier, or of sampling a song to make a new song. There are limits on how much you can copy/use copyrighted material, and those limits do not apply TO YOUR EARS, because you hearing a song does not AUTOMATICALLY AND MECHANICALLY translate into a new song. You still need to LEARN HOW TO MAKE MUSIC, which is not about the features of the song, it's about BEING ABLE TO COMPOSE MUSIC.

which is not what these AIs do. They cannot compose music; they can mix and match features taken from copyrighted material into new (usually not that new, nor that good) material.

If we remove the copyrighted material from you, you can still make music.

You could be deaf and still compose music.

If we remove copyrighted material from AIs they cannot compose shit.

Because the equivalent of a deaf person for an AI that creates music CANNOT EXIST - for obvious reasons.

So AIs DEPEND ON copyrighted material, they don't just learn from it, they WOULD BE USELESS WITHOUT IT.

and morally the difference is that THEY DO NOT PAY for the privilege of accessing the source material.

They take, without giving anything back to the artists.

They do not even ask for the permission.

is it clearer now?


I'll try to address your underlying thought, and hope I'm getting it right.

I think you are right to be skeptical and cautious in the face of claims of AI progress. From as far back as the days of the Mechanical Turk, many such claims have turned out to be puffery at best, or outright fraud at worst.

From time to time, however, inevitably, some claims have actually proven to be true, and represent an actual breakthrough. More and more, I'm beginning to think that the current situation is one of those instances of a true breakthrough occurring.

To the surface point: I do not think the current generation of generative AI/ML models is unoriginal per se. If you ask them for something unoriginal, you will naturally(?) get something unoriginal. However, if you ask them for something original, you may indeed get something original.


> If we remove copyrighted material from AIs they cannot compose shit.

I wonder in what way you mean that? In any case, the latest Stable Diffusion model file itself is 3.5 GB, which is several orders of magnitude less than the training dataset.

It probably doesn't contain much literal copyrighted data.
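The arithmetic behind that claim is easy to sketch. Using rough public figures (these are order-of-magnitude assumptions, not exact numbers: Stable Diffusion's weights are around 3.5 GB, and it was trained on a LAION subset on the order of a couple of billion images):

```python
# Back-of-the-envelope estimate of how many bytes of model capacity
# exist per training image. Both figures below are rough assumptions.
model_bytes = 3.5e9       # ~3.5 GB of model weights
training_images = 2e9     # ~2 billion training images (order of magnitude)

bytes_per_image = model_bytes / training_images
print(f"~{bytes_per_image:.2f} bytes per training image")  # ~1.75 bytes
```

Even if every weight were devoted purely to memorization, that leaves under two bytes per image, which is far too little to store literal copies of the training data.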


You're making much more concise arguments now, I think that makes the discussion more useful and interesting.

I would take the position that it's self evident that if you take the 'training data' away from humans they also can't compose music. If you take a baby, put it in a concrete box for 30 years (or until whatever you consider substantial biological maturity), and then put it in front of a piano it's not going to create Chopin. It might figure out how to make some dings and boops and will quickly lose interest.

Humans also need a huge amount of training data and we, at best, make minor modifications to these ideas to place them into new context to create new things. The difference between average and world class is vanishingly small in terms of the actual basic insight in some domain. Take the greatest composers that have ever lived and rewind them and perform our concrete box experiment and you'll have a wild animal, barely capable of recognizing cause and effect between hitting the piano and the noise it makes.

That world class composer, when exposed to modern society, consumed an awful lot of media for 'free' just by existing. Should they be charged for it? Did they commit a copyright infraction? Why or why not?


You are romanticizing brains. Please stick to logical arguments that can be empirically tested.



