IMO, with LLMs we won't really need information density except for certain classes of people.
Even now - clicking through some insurance company's website hierarchy to find something out is insanely painful.
But even for researching things we should probably care about enough to do ourselves, correlating different sources of information, or working through abstract/ambiguous problems... the vast majority of ordinary people will 100% take the easy way out and let LLMs do most of the thinking for them. Even with free GPT-3, people are unflinchingly having LLMs solve problems they don't want to think about too deeply. What they give up in occasional inaccuracy is more than offset by convenience.
> IMO, with LLMs we won't really need information density except for certain classes of people.
Maybe, but I don’t know if that day is here yet. I think “most people” do actually consume dense information. Reading an insurance company’s website is pretty rare compared to something like using the Amazon app. And it’d be hard to consume a list of 5+ push notifications via voice if you had to listen to them one by one instead of skimming them in a list next to their icons.
Even something simple like scrolling through a list of songs becomes painful. I have like 10k songs in my (streaming) library. Sometimes I randomly scroll through it to find old music. That sounds impossible on voice; I’d be stuck with “shuffle” mode.
Being able to summarize and search text conversations via voice queries, as in their demo, would be nice, but today that’s a task you need a screen for.
The demo video shows a man buying a book online via voice after holding it up to the camera. How often is that the online shopping experience? 95% of the time, I can’t imagine shopping without a screen.
We may not need it, but we certainly prefer it. People went completely voluntarily from voice calling to texting, and within texting to ever terser forms, to the point where an entire website was built around a short character limit.
Except for people with disabilities, I have not really seen a single case where that tendency towards compactness in communication is reversed.