
This argument could be made for every level of abstraction we've added to software so far... yet here we are commenting about it from our buggy apps!


Yeah, but the abstractions have been useful so far. The main advantage of our current buggy apps is that if an app is buggy today, it will be exactly as buggy tomorrow. Conversely, if it is not buggy today, it will behave the same way tomorrow.

I don't want an app that either works or doesn't work depending on the RNG seed, the prompt, and even the data that's fed to it.

And that's before even considering the absurd computing power that would be required.
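To make the seed point concrete, here's a toy Python sketch (llm_like_parse is a made-up stand-in, not a real model API): the classical path maps the same input to the same output on every run, while the sampled path only behaves reproducibly if you pin the seed.

    import random

    def classical_parse(text):
        # Deterministic: same input, same output, every run.
        return text.strip().lower()

    def llm_like_parse(text, seed=None):
        # Stand-in for an LLM call: the output depends on the sampling
        # seed as well as the input, so behavior can differ run to run.
        rng = random.Random(seed)
        candidates = [text.strip().lower(), text.strip().upper(), text]
        return rng.choice(candidates)

    print(classical_parse(" Hello "))           # always "hello"
    print(llm_like_parse(" Hello "))            # varies from run to run
    print(llm_like_parse(" Hello ", seed=42))   # reproducible only with a pinned seed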


Still sounds a bit like we've seen it all already – dynamic linking introduced a lot of ways for software that isn't buggy today to become buggy tomorrow. And Chrome uses an absurd amount of computing power (its bare minimum is many multiples of what was once a top-of-the-line, expensive PC).

I think these arguments would've been valid a decade ago for a lot of things we use today. And I'm not saying the classical software way of things needs to go away or even diminish, but I do think there are unique human-computer interactions to be had when the "VM" is in fact a deep neural network with very strong intelligence capabilities, and the input/output is essentially keyboard & mouse / video+audio.


You're just describing calling a customer service phone line in India.


Please, I don’t need my software experience to get any _worse_. It’s already a shitshow.


> This argument could be made for every level of abstraction we've added to software so far... yet here we are commenting about it from our buggy apps!

No. Not at all. Those levels of abstraction, whether good, bad, or anything in between, were fully understood through and through by humans. Having an LLM somewhere in the stack of abstractions is radically different, and radically stupid.


Every component of a deep neural network is understood by many people; it's the interactions between the trained weights that we don't always understand. Likewise, I would say that we understand the components on a CPU and the instructions it supports. And we understand how sets of instructions are scheduled across cores, with hyperthreading and the operating system making a lot of these decisions. All the while, the GPU and motherboard are also full of logic circuits, probably understood by other people. And some (again, often different) people understand the firmware and dynamically linked libraries that the users' software interfaces with. But ultimately, a modern computer running an application is not understood through and through by a single human, even if the individual components can be.

Anyway, I just think it's a fun thought experiment: if we were here 40 years ago, discussing today's advanced hardware and software architecture and how it all interacts, very similar arguments could be used to say we should stick to single instructions on a CPU, because you can actually step through those in a human-understandable way.



