I think of LLMs like smart but unreliable humans. You don't want to use them for anything that you need to have right. I would never have one write anything that I don't subsequently go over with a fine-toothed comb.
With that said, I find that they are very helpful for a lot of tasks, and improve my productivity in many ways. The types of things that I do are coding and a small amount of writing that is often opinion-based. I will admit that I am somewhat of a hacker, and more broad than deep. I find that LLMs tend to be good at extending my depth a little bit.
From what I can tell, Sabine Hossenfelder is an expert in physics, and I would guess that she already is pretty deep in the areas that she works in. LLMs are probably somewhat less useful at this type of deep, fact-based work, particularly because of the issue where LLMs don't have access to paywalled journal articles. They are also less likely to find something that she doesn't know (unlike with my use cases, where they are very likely to find things that I don't know).
What I have been hearing recently is that it will take a long time before LLMs are better than humans at everything. However, they are already better than many, many humans at a lot of things.
1. Any low-hanging fruit that could easily be solved by an LLM probably would have been solved already by someone using standard methods.
2. Humans and LLMs both have to spend some amount of energy to solve problems. There are efficiencies that can raise or lower that amount, but at the end of the day TANSTAAFL. Humans spend it in a lifetime of learning and eating; LLMs spend it in GPU time and power. Even when AI reaches human level, it will never abstract this cost away: energy still needs to be spent to learn.