
That definition ought to be reserved for ASI (S meaning super), not AGI (G meaning general).

That said, I agree "human-like" is unlikely, although LLMs and diffusion models are much closer than I was expecting.



That's because their training source is human output. Human In Human Out (HIHO).


Even so, I was expecting more dissimilarities, or at least types of inappropriateness that are very human. Humans are a broad bunch; there's no reason LLMs wouldn't just default to snarky and lazy, like the example from OpenAI Dev Day where someone fine-tuned a model on their Slack messages, asked it to write something, and it replied "Sure, I'll do it in the morning."

Despite people calling them stochastic parrots and autocomplete on steroids, ChatGPT behaves as if it is trying to answer rather than merely to continue the text the user enters. I find this surprising.



