> A use-case that can be carefully considered requires more knowledge about the use-case than the LLM
I would tend to agree with that assertion…
> it requires you to understand the specific model's training and happy paths
But I strongly disagree with that assertion: I know nothing of the commercial models’ training corpora, methodologies, or even their system prompts; I only know how to use them as tools for various use-cases.
> it requires more time to make it output the thing you want than just doing it yourself.
And I strongly disagree with that one too. As long as the thing you want it to output is rooted in relatively mainstream or well-known concepts, it’s objectively much faster than we are. Maybe it’s more expensive, but it’s also crazy fast, which is the point of all tools, and with most speedy tools, checking precision/accuracy can often be deferred until a later step in the process.
> If you don't know enough about the subject or the model, you will get confident garbage
Once you step outside their comfort zone (their training), well, yeah… they do all tend to be unduly confident in their responses. I’d argue, however, that this is a trait they learned from us: we really like to be confident even when we’re wrong, and that trait is borne out dramatically across the internet sources on which many of these models were trained.