
Am I alone in thinking assistants were useful when they had a constrained set of operations that fit in our heads? Now Google Maps tells Spotify to play a song called "What's the name of this road?"



It's one of my great fascinations that, in my opinion, assistants worked better in 2010 than they do now. I remember setting timers, adding reminders, and asking for the weather back then, and it was amazing: it could interpret very flexibly worded versions of my requests almost perfectly every time ("Add a reminder a week from now", etc.). But steadily over the years this functionality seems to have degraded. It's become more and more hit and miss, and finally I've just stopped using it. I don't really understand why. I assume that as they've tried to make the systems more flexible and accommodate wider inputs (different accents, etc.), they've become worse at the more constrained use cases.


You're not alone. Maybe not 2010 though, hah.

2015-2017 was the golden age, at least for Google Assistant (or was it still Google Now?). Fast, responsive, and accurate.

Now in 2024 it's grindingly slow and unreliable.


I believe the hope is that the nuanced language understanding of an LLM could correctly parse "What's the name of this road?" and know it's not a request to pass to Spotify.
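
To make that concrete, here's a rough sketch (entirely hypothetical, not how Google's stack actually works) of the kind of LLM-based intent routing that comment describes. `call_llm` is a stand-in for whatever model API an assistant would actually use; for the demo it just returns the answer we'd hope the model gives:

    INTENT_PROMPT = """Classify the user's utterance into exactly one intent:
    - play_music: the user wants a song, artist, album, or playlist played
    - navigation_question: the user is asking about the current road, route, or place
    - other: anything else

    Utterance: "{utterance}"
    Intent:"""

    def call_llm(prompt: str) -> str:
        # Placeholder: a real assistant would call its model here.
        # Hardcoded to the answer we'd hope the model returns, so the sketch runs.
        return "navigation_question"

    def route(utterance: str) -> str:
        intent = call_llm(INTENT_PROMPT.format(utterance=utterance)).strip()
        if intent == "play_music":
            return f"Spotify: play {utterance!r}"
        if intent == "navigation_question":
            return f"Maps: answer {utterance!r}"
        return "Assistant: handle it directly"

    print(route("What's the name of this road?"))
    # -> Maps: answer "What's the name of this road?"

The idea is that "is this a song title or a question about where I am?" becomes a language-understanding decision made before anything gets handed to Spotify, instead of a keyword match.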



