Taking a step back: the UX/UI for LLMs in general is still very immature. We're in the very early days of figuring out how best to interact with these tools. We need more experimentation like this to help figure out what works and what doesn't.
Totally agree. It feels like we've barely scratched the surface.
I'm working on a project where I'm experimenting with pretty obvious things, like removing the annoying markdown syntax all the LLMs flash at you for a split second, and smoothing out how characters render to match your frame rate.
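For the frame-rate smoothing part, the core idea is roughly this (a minimal sketch, assuming a browser chat UI; `onChunk`, `CHARS_PER_SECOND`, and the `#chat` selector are all made up for illustration, not from the actual project):

```typescript
// Hypothetical sketch: turn a token-bursty LLM stream into a steady
// per-frame character reveal instead of dumping whole tokens at once.
const CHARS_PER_SECOND = 120; // tune to taste
let buffer = "";              // characters received but not yet displayed
let budget = 0;               // fractional character allowance
let last = performance.now();

// Feed this from your streaming API (e.g. an SSE or fetch reader loop).
function onChunk(chunk: string): void {
  buffer += chunk;
}

function frame(now: number): void {
  // Frame-rate independent: a 120 Hz display reveals the same number of
  // characters per second as a 60 Hz one, just in smaller steps.
  budget += ((now - last) / 1000) * CHARS_PER_SECOND;
  last = now;
  const take = Math.min(buffer.length, Math.floor(budget));
  if (take > 0) {
    budget -= take;
    // A markdown-aware version would parse `buffer` here and emit only
    // rendered text, so raw `**` / `#` never flashes on screen.
    document.querySelector("#chat")!.append(buffer.slice(0, take));
    buffer = buffer.slice(take);
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```

The fractional budget is what keeps the reveal speed constant whether the display ticks at 60 Hz or 120 Hz.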
> I'm working on a project where I'm experimenting with pretty obvious things, like removing the annoying markdown syntax all the LLMs flash at you for a split second, and smoothing out how characters render to match your frame rate.
I have a suggestion for the page you linked. The demo at the top that shows it doing its thing is useful for seeing it in action, but because the example output loops, it was a bit difficult to read. I'd suggest extending how long the loop lasts to give slow readers a little more time: an extra 3-5 seconds of hold after it finishes the output would be helpful from a mobile UX viewpoint. Something like the sketch below would do it.
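If the demo is script-driven, even a simple pause before the restart would cover it (a hedged sketch; `playDemo` is a stand-in for whatever runs one pass of the animation, not the page's actual code):

```typescript
// Sketch: hold the finished output on screen before the loop restarts.
const HOLD_MS = 4000; // the extra 3-5 s of reading time suggested above

async function playDemo(): Promise<void> {
  // existing animation pass goes here
}

async function loopDemo(): Promise<void> {
  for (;;) {
    await playDemo();
    await new Promise((resolve) => setTimeout(resolve, HOLD_MS)); // reading pause
  }
}

loopDemo();
```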
I will say the bare-bones chat interfaces are so, so much better than the awful Copilot side panes and Google's quasi-Material, designed-to-death attempts at interfaces so far. I'm sure there will be improvements for multi-modal use and special cases like deep research, but as far as straight text chat is concerned, I think the simplest interfaces are hard to improve upon.
Kudos!