
Do we allow artists to withhold their works from the minds of eager, learning children? [1]

Tell me how ML is different than the mind of a toddler ravenous for new information.

For every billion-dollar start-up using data at scale, there are tens of thousands more researchers and hobbyists doing exactly the same, producing wonderful results and advances.

If we stop this growth dead in its tracks, other countries more willing to look past IP laws will jump ahead. And if Stability locks away their secret sauce, some new party will come along and give away the keys to the kingdom yet again.

You can't block the signal. Except, of course, by legislating against it in some Luddite hope we can prevent the future from happening.

Instead of worrying careers will end, we should look at this as being the end of specialization. No longer do we need to pay 20,000 hours to learn one thing to the exclusion of all others we would like to try. Now we'll be able to clearly articulate ourselves with art, music, poetry. We'll become powerful beings of thought and expression.

Humans aren't the end or the peak of evolution. We should be excited to watch this unfold.

[1] Maybe Disney would like you to pay more for a premium learning plan for your child, but thankfully that's not (yet) possible.



Most machine learning is assigning weights in a chain of matrix multiplications and normalization functions.

There is no known experimentally verifiable model of toddlers' brains, let alone one based on matrix multiplication and normalization. Developing such a model would be a noteworthy achievement.

Therefore these are different.
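To make "a chain of matrix multiplications and normalization functions" concrete, here is a minimal sketch in plain NumPy. The layer sizes and names are made up, but this is the kind of computation being described; "assigning weights" means training adjusts W1, b1, W2, b2 to reduce a loss on examples.

    import numpy as np

    def softmax(z):
        # normalization: rescale so the outputs are positive and sum to 1
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 8))                      # a toy input vector
    W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)  # learned weights (here just random)
    W2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

    hidden = np.maximum(0.0, x @ W1 + b1)            # matrix multiply + ReLU
    probs = softmax(hidden @ W2 + b2)                # matrix multiply + normalization
    print(probs)                                     # four numbers summing to 1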


Some artificial neural networks have been shown to model brain function with significant concordance (as high as 50% in some studies).

Not to mention the laborious work of neuroscientists to build out the connectome of the human brain.
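For context, "concordance" figures in that literature usually mean how well a linear readout of a network's activations predicts neural recordings for the same stimuli. A minimal sketch of that kind of comparison, with random arrays standing in for real data (real analyses also compare the score against a noise ceiling):

    import numpy as np

    rng = np.random.default_rng(0)
    # toy stand-ins: responses of a model layer and a brain region to the same 100 stimuli
    model_features = rng.normal(size=(100, 20))
    brain_responses = rng.normal(size=(100, 10))

    # fit a linear map on most stimuli, score on held-out stimuli
    train, test = slice(0, 80), slice(80, 100)
    weights, *_ = np.linalg.lstsq(model_features[train], brain_responses[train], rcond=None)
    predicted = model_features[test] @ weights

    # score: mean correlation between predicted and measured responses across sites
    scores = [np.corrcoef(predicted[:, i], brain_responses[test][:, i])[0, 1]
              for i in range(brain_responses.shape[1])]
    print(f"mean predictivity: {np.mean(scores):.2f}")  # near zero here, since the data is random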


The fact that two systems produce the same output for some set of inputs doesn't show the systems are the same. My phone can produce the same results as my brain for short arithmetic problems. My phone is not a brain.

The neuroscientists I know in the field would be among the first to tell you that our ability to model the brain is nearly nonexistent. In fact, we don't even have a great model of a single neuron [1]. This statement doesn't invalidate the work folks are doing to try to reach that goal. Biology is hard.

[1] https://en.m.wikipedia.org/wiki/Biological_neuron_model


As a working neuroscientist, I’ll co-sign this!

Understanding 50% of the brain, whatever that would even mean, is an utter fantasy.


I should have clarified that I was talking about the specific brain function of semantic comprehension.

I am not suggesting that we are anywhere near having a complete analytical model of 50% of the brain.

I am suggesting that we do have tools to continue answering questions about functional aspects of the brain.

Or am I missing something that indicates the non-utility of “function analysis” of biology-based artificial neural networks?



These articles use far more cautious language than you suggest, and if they don't, everyone working in the field is hopefully aware that such claims are the academic equivalent of clickbait at best.


>No longer do we need to pay 20,000 hours to learn one thing to the exclusion of all others we would like to try. Now we'll be able to clearly articulate ourselves with art, music, poetry. We'll become powerful beings of thought and expression.

I'm a 20,000-hours person. Knowing what I know about what I do, it's real sad to see someone misunderstand what goes into creativity this egregiously. Prompt engineering is such an unbelievably watered-down "version" of making a painting. It's like writing a page (or even a folder!) of bullet points and handing it to a ghostwriter, then telling them to "put the end result between Shakespeare and Poe".

That's not unleashing your creative voice. Unleashing your voice and acquiring technical skills in a chosen field are the same. If you endlessly mixed all the prior classical works, it doesn't matter how you weight them, it won't spit out Mozart. You're stuck in the gamut of the model, between the maxima points of each artist.

It's an incredible tool to generate stuff quickly, and to some extent it will help artists whose work depends on quantity over quality.
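The "gamut" point can be made concrete: any weighted mix of existing works is a convex combination, so every coordinate of the result stays between the minimum and maximum of the originals, no matter how you choose the weights. A toy illustration, with vectors standing in for works (this is only a loose analogy for the mixing claim above, not a description of how diffusion models actually generate images):

    import numpy as np

    rng = np.random.default_rng(0)
    works = rng.normal(size=(5, 3))      # five existing "works" as points in a feature space

    weights = rng.random(5)
    weights /= weights.sum()             # non-negative weights that sum to 1
    mix = weights @ works                # a weighted average of the works

    # each coordinate of the mix lies within the range spanned by the originals
    print(np.all(works.min(axis=0) <= mix) and np.all(mix <= works.max(axis=0)))  # True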


You can prompt with images, which lets you control colour and composition, and with masking you can iteratively work on sections to guide the image toward what you are picturing. That can shift the creative part more towards the user.
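For anyone curious what that workflow looks like in practice, here is a rough sketch using the Hugging Face diffusers pipelines; the model IDs, file names, and exact argument names are illustrative and vary by library version:

    import torch
    from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # 1) prompt with an image: the init image constrains colour and composition
    img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5").to(device)
    init = Image.open("composition_sketch.png").convert("RGB")   # hypothetical input file
    result = img2img(prompt="oil painting, warm palette",
                     image=init, strength=0.6).images[0]

    # 2) mask a region and regenerate just that section, iterating toward what you're picturing
    inpaint = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting").to(device)
    mask = Image.open("region_to_redo.png").convert("RGB")       # white = area to regenerate
    result = inpaint(prompt="stormier sky", image=result, mask_image=mask).images[0]
    result.save("iteration_2.png")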


Yes, I've seen the Photoshop plugin. You're comparing playing with Duplo blocks to marble sculpture.


> Tell me how ML is different than the mind of a toddler ravenous for new information.

If a person published a work that clearly plagiarized or violated a patent, that person would be open to legal action.

I’m all for systemic change, but uses like this may end up having a chilling effect on human-created work.


> I’m all for systemic change, but uses like this may end up having a chilling effect on human-created work.

Every time this comes up, whichever party fears for its livelihood always says something like this and ignores the other side: that rigorous enforcement activity is going to do the same thing to human-created work. Richard Stallman wrote a short story about this very issue.[1]

There are already people hurling abuse at artists on Twitter because they think that something those artists made was produced with Stable Diffusion or something else.

[1] https://www.gnu.org/philosophy/right-to-read.en.html


> Every time this comes up, whichever party fears for its livelihood always says something like this and ignores the other side: that rigorous enforcement activity is going to do the same thing to human-created work.

I may be providing a counter-example to your argument.

At this time, I’m not advocating for anything other than self-censorship by generative AI systems (see https://news.ycombinator.com/item?id=33194623 for some initial thoughts) and, as aggregated from some of my other comments in this thread, the following:

I think that it will be important to ensure that we have symmetric information, going forward, otherwise trying to put the genie back in the bottle may just end up further disadvantaging those that try to follow the rules.

-

Society needs to change the laws regarding the preservation of value of intellectual labor, as has long been suggested.

Acting like the law doesn’t matter is a bad thing, if we are making value judgements.

-

If society doesn’t value commodity intellectual labor, then society may need to address the commoditization of intellectual labor, directly, through things like UBI / vocational rehabilitation, etc.

Similar arguments can be made about robots and the commoditization of manual labor.


Funny that you cite Stallman, when Copilot using GPLed code in closed-source projects is a real concern.


The criticism is that AI works are not transformative, but are recognizable “regurgitation” of the training set.

It’s not that AIs are too good. Their outputs look like crude knockoff products to trained eyes, and crude knockoffs are usually considered a bad thing.


"Good artists borrow, great artists steal."

A lot of artists get started with tracing before taking off the training wheels. You also see new art styles quickly proliferate across the entire community, so clearly there's some unspoken copying happening.

These models are producing new works in nearly identical styles. That's something a trained human could conceivably do.


> A lot of artists get started with tracing before taking off the training wheels.

Sure, but only privately. Publishing something you traced is a massive no-no, and selling it even more so.


Picasso was a hack, and it's reflected in that quote of his.


Yeah, but when a couple of lines match up with existing art, you go up in flames and have to change careers. NAI is doing the first half of that.


>Tell me how ML is different than the mind of a toddler ravenous for new information.

The toddler is human. AIs are not humans.

It's a human right to learn. Non-humans don't (and shouldn't) have human rights.

>Humans aren't the end or the peak of evolution. We should be excited to watch this unfold.

Spoken like a true evolutionary loser.


Well, for one, a toddler isn’t making money off the information they’re absorbing. If these were models open to the public, that would be one thing. But no, these are proprietary models whose sole purpose is to make money for large corporations.


Artists and engineers do exactly this. It just takes a decade.


They are taking your code verbatim and injecting it into numerous codebases around the world, violating the license, while getting paid for it?


> Tell me how ML is different than the mind of a toddler ravenous for new information.

Well, I can't keep a toddler in a data center, pumping out work on demand. Or copyright it and limit who it chooses to work for when it grows up.

For instance.



