
Adobe is training off of images stored in their cloud systems, per their Terms of Service.

OpenAI has provided no such documentation or legal guarantees, and it is still quite possible they scraped all sorts of copyrighted material.



Google scrapes copyrighted material every day and then presents that material to users in the form of excerpts, images, and entire book pages. This has been ruled OK by the courts. Scraping copyrighted information is not illegal or we couldn't have search engines.


Google is not presently selling "we trained an AI on people's art without permission, and you can type their name in along with a prompt to generate a knockoff of their art, and we charge you money for this". So it's not really a 1:1 comparison, since there are companies selling the thing I described right now.


That pretty clearly would fall under transformative work. It is not illegal for a human to paint a painting in the style of, say, Banksy, and then sell the resulting painting.


Humans and AI are not the same thing, legally or physically. The law does not currently grant AI rights of any kind.


If a human isn't violating the law when doing that thing, then how is the machine violating the law when it cannot even hold copyright itself?


In some locales sitting on the street writing down a list of people coming and going is legal, but leaving a camera pointed at the street isn't. Legislation like that makes a distinction between an action by a person (which has bounds on scalability) and mechanized actions (that do not).


I'm not sure how to explain this any clearer: Humans and machines are legally distinct. Machines don't have the rights that humans have.


Fair Use is the relevant protection, and it is not specific to manual creation. Traditional algorithms (e.g., the snippets, caching, and thumbnailing done by search engines) are already covered by it.


What's not prohibited is allowed, at least in the US.


Scraping is only legal if it's temporary and transformational. If Google started selling the scraped images, it would be a different story.


What is not transformational for generative AI?


No, they are not. They train their models on Adobe Stock content. They do not train on user content.

https://helpx.adobe.com/manage-account/using/machine-learnin...

"The insights obtained through content analysis will not be used to re-create your content or lead to identifying any personal information."

"For Adobe Firefly, the first model is trained on Adobe Stock images, openly licensed content, and public domain content where the copyright has expired."

(I work for Adobe)


> OpenAI has provided no such documentation

OpenAI and Shutterstock publicly announced their collaboration; Shutterstock sells AI-generated images created with OpenAI models.


There is, in fact, a substantial amount of circumstantial evidence that they intentionally and knowingly violated copyright en masse. It has been a popular subject in tech news over the past couple of weeks.



