Yeah, it is crazy how confidently LLMs can state things that have never existed. Having said that, I'm still a HUGE fan of LLMs, because I know it is very unlikely that multiple LLMs will brain-fart in the same way at the same time. If you know how to navigate things, you will get to a solution much faster than you probably would have in the past.
As a user, it feels like you get cosy with the stuff they know and save a lot of time, until you hit something they don't know and lose more time than everything you gained up to that point, because in the end you have to learn everything and more just to understand how the LLM put you on the wrong track.
That is why I always go in with a mindset of mistrust, and why I am building my chat app this way. If accuracy matters and I am unfamiliar with something, I mainly use LLMs as a compass and rely on one model to tell me when another LLM (including itself) is wrong. I'm sure I will still pick up some wrong things over time, but in my mind those wrong things are not critical.
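For what it's worth, the "compass" part is pretty small in code. Here's a rough sketch of the cross-checking idea, using the OpenAI Python SDK purely as an example client; the model names and prompt wording are placeholders, not what my app actually ships, and any OpenAI-compatible endpoint would work the same way.

```python
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    # Send a single-turn prompt to one model and return its text reply.
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def cross_check(question: str, answerer: str = "gpt-4o", reviewer: str = "gpt-4o-mini") -> dict:
    # One model answers; a second (or the same) model audits that answer for mistakes.
    answer = ask(answerer, question)
    review = ask(
        reviewer,
        "Another assistant answered the question below. "
        "Flag anything that looks wrong, unverifiable, or made up, and explain why.\n\n"
        f"Question: {question}\n\nAnswer: {answer}",
    )
    return {"answer": answer, "review": review}
```

The review doesn't prove anything, of course, but when the second model pushes back it's a cheap signal that I should go verify the answer myself instead of trusting it.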