
In my experience, I like using AI (GitHub Copilot) to answer questions about a language that I could easily verify in the documentation. Basically 'yes/no' questions. To be honest, if I were writing such documentation for a product/feature, I wouldn't mind the AI hoovering it up.

I've found it to be pretty crap at things like actual algorithms or explaining 'science' - the kind of interesting work that I find on websites or blogs. It just throws out sensible-looking code and nice-sounding words that don't quite work, or it misses out huge chunks of understanding/reasoning.

Despite not having done it in ages, I enjoy writing and publishing online info that I would have found useful when I was trying to build / learn something. If people want to pay a company to mash that up and serve them garbage instead, then more fool them.




I argued years ago, based on how LLMs are built, that they would only ever amount to lossy and very memory-inefficient compression algorithms. The whole 'hallucination' framing misses the mark: LLMs aren't just 'occasionally' wrong or hallucinating. They can only ever return lower-resolution versions of what was in their training data. I was mocked then, but I feel vindicated now.
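One way to make the compression framing concrete: via arithmetic coding, any model that assigns next-symbol probabilities is literally a codec, and the number of bits it needs for a text is just its cross-entropy on that text. Here's a toy character-bigram sketch in Python (the corpus, the test strings, and the bigram model itself are made-up stand-ins for illustration, not how a real LLM works) - text close to the training data compresses well, anything off-distribution costs more bits:

    # Sketch of the "LM as compressor" view: a model that assigns
    # probability p to the next symbol can encode it in about
    # -log2(p) bits (arithmetic coding), so total bits = the model's
    # cross-entropy on the text. Toy corpus and strings, assumed here.
    import math
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat. the cat sat on the hat."

    # "Train" a character-level bigram model on the corpus.
    alphabet = sorted(set(corpus))
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def prob(prev, nxt):
        # Add-one smoothing so unseen transitions keep nonzero probability.
        return (counts[prev][nxt] + 1) / (sum(counts[prev].values()) + len(alphabet))

    def code_length_bits(text):
        # Ideal arithmetic-coding cost: -log2 P(text) under the bigram model.
        return sum(-math.log2(prob(p, n)) for p, n in zip(text, text[1:]))

    for s in ["the cat sat on the mat.", "the tac tas no eht tam."]:
        bits = code_length_bits(s)
        print(f"{s!r}: {bits:.1f} bits vs {8 * len(s)} bits raw")

The lossless-codec view is the flattering one; the lossy part shows up when you sample from the model instead of decoding against it - you get something statistically shaped like the training data, not the data itself.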


They can combine two things in a way that never appeared together in the source material.


YouTube's compression algorithm also produces lots of artifacts that were never filmed by the video producers.


And datamoshing lets you produce effects that weren't in the source clips.



