Am I alone in thinking assistants were useful when they had a constrained set of operations that fit in our heads? Now Google Maps tells Spotify to play a song called "What's the name of this road?"
It's one of my great fascinations that, in my opinion, assistants worked better in 2010 than they do now. I remember setting timers, adding reminders, and asking for the weather back then, and it was amazing: it could interpret very flexibly worded versions of my requests almost perfectly every time ("Add a reminder a week from now", etc.). But steadily over the years this functionality seems to have degraded. It's become more and more hit-and-miss, and finally I've just stopped using it. I don't really understand why ... my assumption is that as they've tried to make the systems more flexible and accommodate wider inputs (different accents, etc.), they've become worse at the more constrained use cases.
I believe the hope is that the nuanced language understanding of an LLM could correctly parse "What's the name of this road?" and recognize it's not a request to pass to Spotify.
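A minimal sketch of what that LLM-based routing might look like, with heavy caveats: `llm_complete` is a hypothetical stand-in for whatever model call the assistant would actually make, and the intent names are made up for illustration.

```python
import json

# Hypothetical stand-in for a real model call (cloud API, local model, etc.).
# The canned reply just keeps this sketch runnable without any API key.
def llm_complete(prompt: str) -> str:
    return '{"intent": "navigation_question", "argument": "current road name"}'

def build_prompt(utterance: str) -> str:
    return (
        "Classify the utterance into exactly one intent: play_music, "
        "navigation_question, set_timer, weather, or unknown.\n"
        'Respond with JSON like {"intent": "...", "argument": "..."}.\n'
        f"Utterance: {utterance}"
    )

def route(utterance: str) -> str:
    raw = llm_complete(build_prompt(utterance))
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return "Sorry, I didn't catch that."  # refuse rather than guess
    intent = parsed.get("intent", "unknown")
    if intent == "play_music":
        return f"Spotify: playing {parsed.get('argument', '')}"
    if intent == "navigation_question":
        return f"Maps: answering '{parsed.get('argument', '')}'"
    return "Sorry, I didn't catch that."

print(route("What's the name of this road?"))
# -> Maps: answering 'current road name' (not a Spotify search)
```

The point of the structure is that the utterance gets classified before anything is dispatched, and anything the classifier can't confidently place falls through to a refusal rather than being shoved at the nearest music app.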