But here [1] it says "One of AMP's biggest user benefits has been the unique ability to instantly load AMP web pages that users click on in Google Search. Near-instant loading works by requesting content ahead of time". So AMP content is implicitly prefetched. How is this any different than regular prefetching?
And as far as mobile is concerned, the trivial optimizations available on desktop, such as blocking requests by content type with an extension like uMatrix, are not advertised to end users at all; AFAIK Chrome on mobile does not allow such extensions. Used effectively, those extensions cut page load times significantly. Why skip straight to the "extreme" that is AMP?
This, and in particular the fact that the network has much higher _latency_ on mobile. AMP is aggressive about reducing the number of round trips between browser and server.
Extreme optimization that can be achieved without AMP.
Performance of web pages depends heavily on the number and size of their assets. Hacker News loads extremely quickly without AMP, and the same can be achieved for other sites. HTML/CSS (with a small amount of JS if really needed) can match AMP on page size and rendering time.
CDNs are well established and can be used to serve content from a nearby server, and HTTP/3 will reduce the number of round-trips needed.
If you have a round-trip time of 1 second, it will take you at least 1 second to load even a text file with 1 byte in it. An AMP page, however, will have loaded before you clicked, so displaying it costs only the handful of milliseconds of CPU time needed to swap frames.
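A rough back-of-envelope sketch of that difference, assuming a simplified round-trip breakdown (the per-step counts and the prefetched cost are illustrative assumptions, not measurements):

```typescript
// Back-of-envelope comparison of a cold fetch vs. a prefetched page.
// Round-trip counts are rough assumptions: DNS (1), TCP handshake (1),
// TLS 1.3 handshake (1), HTTP request/response (1).
const rttSeconds = 1.0;

const coldLoadRtts = 1 /* DNS */ + 1 /* TCP */ + 1 /* TLS 1.3 */ + 1 /* HTTP */;
const coldLoadSeconds = coldLoadRtts * rttSeconds; // ~4 s at a 1 s RTT

// A prefetched page paid those round trips before the click, so the click
// itself only costs local CPU/render time (value assumed for illustration).
const prefetchedSeconds = 0.005;

console.log(`cold load:  ~${coldLoadSeconds} s`);
console.log(`prefetched: ~${prefetchedSeconds} s`);
```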
The problem with prefetching the publisher URL from a search results page is that it leaks the user's query intent to an origin they have not visited, which violates the user's privacy.
By prefetching a signed exchange from the same origin as the search results page, privacy is preserved. Once the user clicks on a result, sharing that intent with the third-party origin is no problem, and the document is already sitting in the AMP cache.
> By prefetching a signed exchange from the same origin as the search results page, privacy is preserved
How so? Unless I misunderstand (which is entirely possible), all this does is move the point where privacy is violated from the publisher to the search engine.
If a Google search results page instructs your browser to preload some bytes from a Google server, that request does not reveal anything new to anyone. Google already knows it instructed your browser to preload, so seeing your browser mechanically follow that instruction tells it nothing about you or your behavior that it didn't already know.
Let's consider the alternative. Imagine you searched for [headache] and your browser made a preload request to mayoclinic for their headache document. In making that fetch, your browser would send mayoclinic your IP address, any stored mayoclinic cookies, and the URL of the document being prefetched (not the precise query, but the approximate query is easy to guess). This is sent to mayoclinic _even if_ you never click on that document at all, which is not what you would expect privacy-wise.
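To make the contrast concrete, here is a minimal sketch of the two prefetch strategies as a results page might express them; the URLs (including the cache hostname and path) are hypothetical placeholders, and this is an illustration of the idea rather than Google's actual implementation:

```typescript
// Illustrative only: two ways a results page could ask the browser to prefetch.
// All URLs below are hypothetical placeholders.

// (a) Prefetch straight from the publisher: the request carries your IP address
//     and any stored mayoclinic cookies to mayoclinic.org even if you never
//     click the result.
const directHint = document.createElement("link");
directHint.rel = "prefetch";
directHint.href = "https://www.mayoclinic.org/diseases-conditions/headache";
document.head.appendChild(directHint);

// (b) Prefetch a signed exchange from a cache run by the origin you are already
//     talking to: the publisher sees nothing until you actually click, yet the
//     signed content can still be shown under the publisher's URL.
const sxgHint = document.createElement("link");
sxgHint.rel = "prefetch";
sxgHint.as = "document";
sxgHint.href =
  "https://example.cdn.ampproject.org/wp/s/www.mayoclinic.org/diseases-conditions/headache";
document.head.appendChild(sxgHint);
```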
Once you do click, mayoclinic can very easily log the visit even if the document bytes were preloaded from elsewhere. They can use JavaScript analytics or simply load an image from their server (https://amp.dev/documentation/components/amp-pixel). And you as a user are not surprised that clicking on a mayoclinic result shares your interest in the document with mayoclinic.
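For instance, the image-beacon variant needs nothing more than this; the endpoint and query parameters are made up for illustration:

```typescript
// Illustrative tracking pixel: a 1x1 image request that logs the visit on the
// publisher's own server, regardless of where the page bytes were served from.
// The endpoint and parameters are hypothetical.
const pixel = new Image(1, 1);
pixel.src =
  "https://www.mayoclinic.org/pixel?page=headache&ref=" +
  encodeURIComponent(document.referrer);
```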