
> The problem is that Alexa is just a consumer voice command line. UI discoverability is impossible, and everything that you can do is just a utility that does something else

This is a really important point, and both a strength and a weakness.

The iPhone began as essentially a front end to existing services (with visual discovery, which a voice interface inherently lacks). I used to call mine "FEP" (as in Front End Processor): a front end to a subset of "real" computing, or as I think of it these days, one of multiple windows into a shared computing space.

A watch (like the Apple one) is really a crappy general-purpose UI device; discoverability is pretty bad because of the limited screen area and interaction speed. But it's great in the role the phone had: a subsetted interface to a limited number of "real" computing tasks (yes, it has a few tricks of its own too, but mainly as data collection for apps on your phone).

Thompson captured this issue by talking about devices and software in terms of "the job it was hired to do". The problem is that the voice assistants haven't figured that out yet.

The thing about this approach is it creates pressure to make more functionality available at the edge.

Alexa and Google Home tried to jump right to the edge in one go, which skips too much phylogeny.

Apple seems to understand this, but gets it wrong in the opposite direction: the iPad hasn't moved far beyond being "most of an iPhone but with a larger screen". And if you have an Apple speaker, a phone, and an iPad and call out "hey Siri, set an alarm for 10 minutes", you may get three devices chiming in ten minutes. They don't act like a single device.



Just on the final point: they're pretty good about letting only one device respond.



