Google Glass was ahead of its time, and it was creepy with the recording ability and other features it launched with. It was also mostly just a display for an attached phone; it couldn't really do much on its own.
Why would you want a computer strapped to your head? Computers generate heat and have weight that you'd rather put in your pocket than have to support with your neck. To make it work you're stuck with a trade-off between performance, weight, and battery life, and none of those is fun to sacrifice in this context.
Moore's law has been dead for a while now, and nobody really knows how much more we can get in terms of performance per watt. But even if it continues to improve, you're implying that what they've got right now is this:
Or maybe worse than that, because the trade-off never really goes away. Desktops and servers are still faster than laptops, but portability is a huge advantage, so sacrificing some performance is worth it to have something with you everywhere you go. But how much performance is it worth sacrificing to avoid having a phone in your pocket? Nowhere near as much as it is to avoid lugging a desktop computer around with you.
Or to measure it from the other side: lots of people still have a MacBook even though they also have an iPhone.
That's why I said Moore's law is limping along: it has slowed significantly, and the original doubling of transistor counts every 18–24 months is definitely dead, but transistor processes are still improving, slowly.
The trade-off is always there, but more people use their phones than their desktop computers these days.
That said, I do have serious concerns about the broader impact on society when transistor technology hits a hard roadblock. That's probably outside the scope of this discussion, though.
> That's why I said Moore's law is limping along: it has slowed significantly, and the original doubling of transistor counts every 18–24 months is definitely dead, but transistor processes are still improving, slowly.
What they're really doing is getting more power efficient. The "feature size" (the number of nanometers) is fiction at this point, and the tricks they're using to eke out improvements are, uh, interesting. For example, they can't shrink things further without causing errors, so they do it anyway and cover it up with more error correction. It's really not obvious how much more of this there is to be found.
> The trade-off is always there, but more people use their phones than their desktop computers these days.
That isn't exactly true. The growth in desktop ownership has leveled off, but the number of people with one hasn't really gone down, and the things people use them for haven't changed. If you're going to write a long document, you're going to want a full-sized keyboard. The GPU needed for modern AAA games isn't going to fit in a phone. Professional work that requires a lot of computation, like compiling code or editing video (or, going forward, a lot of this AI stuff), is a lot faster on a high-powered machine with more CPU or GPU cores.
What actually happened is that at least the same number of people still use a desktop computer, but now everybody has a phone, including the people who traditionally had a desktop (and still do). So more people have a phone than have a desktop, but not because fewer people have a desktop.
And it would be hard for some new technology to do the same thing to phones, because nearly everybody already has a phone. A new device can't have 20% more users than phones do when 97% of people have a phone; that would be more than the entire population.
This is a whole computer strapped to your head.