
There's no such thing as AI hoarding though. New self-hostable models are popping up all the time. Everyone is going to have access to the technology.

Wikimedia is perfectly capable of deploying LLMs without external help.




I've tested a few of the self-hosted models, and also compared GPT-3, GPT-4, PaLM, and PaLM 2.

Currently, GPT-4 blows everything else out of the water; there is simply no comparison.

More importantly, the limited context size of the open-source models precludes them from being used for this type of task. It takes well over 10K tokens just to load some of the longer wiki pages, let alone the page plus a prompt, a reference, and the output. The GPT-4 32K model could handle most such scenarios.
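A rough way to sanity-check that budget is to count tokens with the tiktoken library. This is just a sketch: the file name and the prompt/reference/output sizes are assumed numbers for illustration, not anything from the thread.

  # Rough token-budget check for fitting a long wiki page into an LLM context.
  # Assumes the `tiktoken` tokenizer; the file path and sizes below are hypothetical.
  import tiktoken

  enc = tiktoken.encoding_for_model("gpt-4")  # uses the cl100k_base encoding

  with open("long_wiki_page.txt", encoding="utf-8") as f:
      page = f.read()

  page_tokens = len(enc.encode(page))

  # Illustrative budget: page + prompt + reference material + room for the output.
  prompt_tokens = 500        # assumed instruction size
  reference_tokens = 3000    # assumed cited-source excerpt
  output_tokens = 2000       # space reserved for the model's answer

  total = page_tokens + prompt_tokens + reference_tokens + output_tokens
  print(f"page: {page_tokens} tokens, total budget: {total}")
  print("fits in 8K context:", total <= 8_192)
  print("fits in 32K context:", total <= 32_768)

With a 10K-token page and numbers like these, an 8K window overflows immediately, while a 32K window still leaves headroom.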


GPT-4 is exactly two months old today. The world will catch up.



