I suspect it’s about marketing. I’m not sure these tools would be so easy to sell to enterprise organisations if you spelled out that they are basically just very good at being lucky. With the abstraction of “hallucinations” you sort of put into nicer language why your tool is sometimes very wrong.
To me the real danger comes from when the models get things wrong and right at the same time. Not so much in software engineering; I doubt your average programmer would write “better” code without LLM tools, they’d still run into bad answers either way. What concerns me is more how non-technical departments implement LLMs into their decision-making or analysis systems.
Done right, it’ll enhance your capabilities. We had a major AI project in cancer detection, and while it actually works, it doesn’t really work on its own. It was always meant to enhance the regular human detection, and everyone involved with the project screamed this loudly at any chance they got. Naturally, upper management saw it as an automation process, and all the human parts of the process were basically replaced… until a few years later, when we had a huge scandal about how the AI worked exactly as it was meant to, which was never on its own. Today it works alongside the human detection systems, and the quality of detection is up. It took people literally dying to get that point through.
Maybe it would’ve happened this way anyway if the mistakes weren’t sort of wrapped up in this technical term we call hallucinations. Maybe it wouldn’t. From personal experience getting projects approved, I think abstractions are always a great way to hide the things you don’t want your decision makers to know.