Why did the web take over desktop and not mobile? (subconscious.substack.com)
266 points by pfraze on Sept 11, 2021 | 300 comments



When iPhone and Android hit the scene, the 'rich' web experience was primarily Flash, Silverlight, and some Java applets (taking their last breath).

Input APIs for touch, multitouch and gesture were a mess for many years in the browser.

Media queries didn't become a W3C recommendation until 2012, so building a UX that looked pleasant really wasn't possible.

The memory, CPU, GPU, storage and battery constraints were unbelievably tight.

In short, mobile hardware and the iOS and Android SDK environments were consumer- and developer-ready practically 10 years before the web was ready for mobile.

The real question is how did the web remain relevant while being so far behind this tectonic platform shift?

I think the major points that kept the web alive: content addressability (via URLs), ease of connectivity between content (via links), the vast amounts of information already on the platform, a smoother learning curve for developer technology, and the vast amounts of money/influence Google was willing to put in to keep its search ads property relevant.


> The real question is how did the web remain relevant while being so far behind this tectonic platform shift?

We didn't all shift to smartphones the instant they became available. It took a while for people to transition to smartphones. Likewise, getting an app up and running (in addition to their websites) took time for most businesses, and many probably waited before jumping on the bandwagon.

I am sure everyone on HN moved quickly, but for the rest of the non-tech world I am sure they were doing just fine using their home computer for several more years.

Even now, I still think "I'll wait and do this on a proper computer". It's not like apps have solved all computing-ills - many are total garbage with bugs and/or poorly thought through designs.


"I am sure everyone on HN moved [from computers to smartphones] quickly"

I may be wrong, but I suspect you have that backwards. My guess is that at least older HNers were slower to shift from powerful but complex creation tools (computers) to easy consumption toys (smartphones) designed for consumers who found computers with files and folders just too darned complicated to use. Until good, portable mapping and good-enough cameras came along, I had no use for the "smart" features of a phone that were vastly inferior to the smart features of a real computer. For a long time, all I needed from a phone was the phone, which a small flip phone handled just fine. For anything "smart", I used the laptop & real camera in my backpack.


I suspect everyone is wrong, but we're all subject to an availability bias.

We assume that things we actively notice other people doing in public are representative of what everyone (or everyone in some category to which we've assigned that person) is doing. We don't notice all the people not engaging in that behavior because they're not noteworthy.


==We assume that things we actively notice other people doing in public are representative of what everyone (or everyone in some category to which we've assigned that person) is doing. We don't notice all the people not engaging in that behavior because they're not noteworthy.==

Doesn't this comment include its own set of biases?


Such as?


I think that what they were getting at is that people noticing and commenting on things is more noticeable than people not doing so.

Which would imply that there's a tendency to notice the noticing more than the not noticing, and, as a consequence, over-estimate how often people do it.

It's a good point.


I really dislike the 'consumption device' label; to me it seems lazily dismissive and missing the point. Smartphones, like phones, are primarily communication devices, and communication is an active process. Texting/chat, email, audio calls, Twitter, Slack, Teams, Reddit, Facebook, Discord, Snapchat, etc, etc are all interactive communications media people use to create content for as well as 'consume'. Yes, of course they're also used to watch videos and memes, read web sites, listen to podcasts and audiobooks. Sure, but active engagement is the killer feature for these things.


Go to any commuter train on a Monday morning and look at what every last one of those people seated in the car is doing.

They're consuming, not interacting. It is not lazy nor is it dismissive to suggest smartphones are largely devices that inspire consumption. It's just a proper observation.


> They're consuming, not interacting. It is not lazy nor is it dismissive to suggest smartphones are largely devices that inspire consumption.

This is also probably one of the reasons TikTok took off; it allows high quality content creation via smartphone that anyone can do.


Crashing before consuming is also why personal tech in the 80s and 90s had a very different kind of following.


If there are billions of consumers and just 1 in 1000 creates something, then we have millions of creators.


That's neither relevant to the point under discussion (the primary function of the device), nor a valid defense of the smartphone (which should be judged on its intrinsic merits, not whether a tiny fraction of people manage to use it productively solely as a statistical consequence of Earth's vast population).


Of course they get used for that, but I think it's a mistake saying that because they are consumption devices therefore HN crowd would have limited use for them. That's not how that works. In two different people's hands the same device brings completely different value. I got an iPhone in 2008 and since then have used them heavily as communications devices and that's where the main value for me is.

The same goes for the comment about iPhones being 'designed for consumers'. They were designed to be easy to use, but that in no way means they were under powered or less capable because of those design considerations. Again, that's not how that works. Powerful or easy to use is a false dichotomy.

They're incredibly powerful tools and mistaking a major, even the major use case for being defining of the tool (therefore everyone uses it this way, or therefore that's where the main value lies, or therefore these people here wouldn't use them) is fallacious. There are 'influencers' who have built fortunes almost entirely on their phones.


> Smartphones, like phones, are primarily communication devices and communication is an active process.

Yeah… no. Maybe for some people, but I primarily use it as a consumption device. But then, I don’t use social media, I don’t answer my phone (I listen to voicemails once a day and choose whether I want to respond or not). I don’t use any work apps on my phone, like slack.


The real irony of the smartphone situation is that if you are an older HNer, you will return to the flip phone, real camera and laptop and never go any further, until a proven FOSS solution is available on the market. Smartphones were designed from the get-go as surveillance-economy devices, catering to the minimal computing needs of the mass audience. :)


Down-voting cannot remove the truth of the statement.

Using corporate-censored devices is not making you free or enlightened in any way. We are approaching a real danger in the tech community, where suppressing criticism and non-conformance to the gospel trends is becoming the norm.


You are very right.

Once you are used to the power/freedom available on a Laptop/Desktop Computer you could never get comfortable with the restrictions on a Smartphone. The latter is for less tech-savvy people and almost always used as a media consumption device.


> Even now, I still think "I'll wait and do this on a proper computer".

I'm routinely in the same boat, though it's less about the computer and more about not fighting with a tiny monitor. Most things built for mobile are "light" versions, as easily exposing all the bells and whistles on a 6" monitor would be a UI nightmare.


Get a larger screen and a keyboard (bluetooth typically), and you'll still find you're fighting the platform.

- Memory management means that whatever application you're using at any given moment can vanish, with all its user state.

- Applications are crippled versions of full-desktop variants, where they exist at all. E.g., FennecFox vs. the Firefox browser. Support for user-empowering extensions is limited.

- You. Will. Not. Have. Root. Some devices are rootable. Many are not. This means that whole sections of the OS are not available to you.

- Information is siloed into application databases (inaccessible on the device, since You. Will. Not. Have. Root. --- or worse, maintained on some distant cloud server).

- The OS itself is not upgradable by you. Android device vendors often provide absolutely no post-purchase OS updates.

- Even power kits are very third-class citizens. Termux, The Only Android App That Does Not Precisely Suck[TM], is useful and powerful but its 1,400 packages are only a minuscule fraction of those available on a full Linux system (Debian contains over 60,000 packages), and are limited in functionality as You. Will. Not. Have. Root.

Numerous affordances and capabilities are simply missing or buggy as all hell.

Using an ebook reader, as an example, with an external keyboard, I cannot enter a space character into a search field as that scrolls the screen rather than editing the search dialogue. Just one of many, many, many cuts in a death by thousands.

Source: Have used Android for over a decade, tablets for over five years.


You think it's the tiny monitor, but it's really the primitive UI designed for caveman-style point-and-grunt. I've been using a GPD Micro PC extensively over the past few months, which is equipped with a 6" screen, but also - critically - a touchpad and physical thumb keyboard. Guess what, I can do programming, CAD, image editing, the works - everything you can do on a larger machine.

The Micro PC proves that we were sold a myth: that dumbed-down UIs were a necessary evil to enjoy computers in our pockets. They aren't. Pocket-sized laptops work amazingly.


The apotheosis of the netbook? How does it perform with external keyboard, mouse and monitor?

Edit: I've been a fan of the netbook concept since well before the term was coined, when the Toshiba Portegé first came out. But my experiences with them, like everyone else, were disappointing. I'd love it if these new units had good performance.


I mean, it's a quad core mobile Intel with 8 gigabytes. It feels snappy. It runs KDE Plasma. Thermals aren't spectacular, and you won't be running any AAA games on it, but it's vastly more capable than the netbooks of yore. I tend to keep it throttled to 6 watts, which means I can keep the fan off all the time.


That's interesting to me because I've been seeing some people impatient for the PinePhone external keyboard, but I've felt strong doubts I'd use one much.


You might be right. Having also spent time with the Cosmo Communicator, I can vouch that a phone with a keyboard isn't the same thing as a phone-sized laptop. Apart from the hostile platform (which hopefully wouldn't be an issue on the PinePhone), two aspects that make the phone less practical for desktop software are the extremely high aspect ratio, and the lack of any sort of mouse input beyond poking at the screen. These things significantly compromise the desktop experience.

However, I will also say that a touch-type keyboard on a phone is still awesome, and I will certainly be buying a PinePhone+keyboard when it comes out. I live in forlorn hope for the day I can evict Android from my life entirely (the Micro PC does not, alas, make phone calls).


Thanks for spelling out some issues.

Yeah, the inherent fatness-of-fingers problem. And the other problem, the lack of buttons on those fingers.

Looking at the resolutions, 1280x720 vs 1440x720, I would have just expected the phone experience not to feel terribly different. Of course it remains to be seen how much the PinePhone software adapts to landscape in wake of the keyboards coming.


Tiny monitor, obnoxious keyboard. I haven't enjoyed typing on a phone since I reluctantly (that word took me 5 tries) set down my Lumia 920 Windows phone.

Also my mouse is (insert big number) times more precise than my index finger.


I use a NexDock from time to time for this; some of my Galaxy S20's stock software scales pretty well to a full-size display as they expect you to use one.

I don't think I'd make my phone my primary machine despite this until Linux phones are daily-driver ready for me though. The mobile OS is still too limiting.

(That and while the NexDock is serviceable, its touchpad is awful. Good enough to use occasionally, but I wouldn't want to be stuck with it as my main machine.)


The existence of the tablet shows that it's not the device size but rather the app experience that's the limiting factor.

I've tried using an iPad but it's no match for the power, control and flexibility of a full desktop/laptop device, and the larger iPad has an even bigger screen than some smaller notebooks.


> Most things built for mobile are "light" versions.

The good sites are :)

The not so good ones "force" you to download an app with no alternative and require obtrusive permissions even though all you want to know is the answer to a quick question.


> I am sure everyone on HN moved quickly, but for the rest of the non-tech world I am sure they were doing just fine using their home computer for several more years.

A whole generation of people in India never used computers the way the Western population did, but a large majority of those same people moved directly to smartphones.


>Even now, I still think "I'll wait and do this on a proper computer".

Of all the things that make me prefer a proper computer, the shenanigans devs do with various privacy-invading SDKs have put me off of trusting any apps. Because all of the information that makes a mobile device so convenient to use is precisely the data that makes privacy-invading data collection so desirable, it will be impossible for it to ever become a serious compute platform for me.

For me, the mobile platform is primarily used as the Bar Bet Settler 5000(TM).


>We didn't all shift to smartphones the instant they became available.

Oh, yes we did. Within 3 years (say 2007-2010) it was game over for feature-phones for the mass population, whereas the state of web APIs was still bad.


I feel you have this completely backwards. I think for the vast majority of non-tech users, the smartphone was their first everyday computer.


Anecdotally, that's not what I witnessed for family members 30+, but it has been the case for the younger generation (< 20).


A vast majority of young adults had their own everyday computers before smartphones. Today young adults get smartphones instead, yes, but that didn't use to be true.


Kids use computers regularly at school from a very young age.


What transition? Mobile is the lite version.


This. I can even give a concrete example of it. I noticed a charge to my bank account from Amazon and knew I didn't buy anything. I went to the Amazon app and saw there were no orders. I thought maybe my Prime membership renewed today, so I tried to find that on mobile. I could not find it easily, so I immediately went to my laptop and found it in 2 clicks via the Amazon website. To me the web is really great when there is enough screen real estate to keep information out of nested menus.


> I am sure everyone on HN moved quickly

This is me. For years my employer offered to get me a phone, but I declined. Why would I want a mobile phone? I don't want people to get ahold of me whenever they want. But then that first iPhone came out that was actually a full-fledged computer with phone capabilities on the side. I immediately jumped on that.


I would agree with you if not for the fact that a lot of these popular "native apps" are just webpages within a native wrapper, which kind of defeats your argument.

Brands and vendors don't like the web because often they cannot 100% control what browsers do, and of course, a browser website cannot get access to the whole contact list, phone numbers and files on a phone device unlike a native app...

Installing an app via an appstore on mobile is low friction. Good luck getting someone to install your Windows or MacOS app on your desktop...

Even PWAs were all the rage a few years ago, called a replacement for native apps by the dev community. How many here have a single PWA installed on their desktop computer?


>I would agree with you if not for the fact that a lot of these popular "native apps" are just webpages within a native wrapper, which kind of defeats your argument.

This doesn't seem true at all. What popular "native" apps are you talking about? The most popular native apps couldn't exist on the web in the early days. YouTube couldn't work on the mobile web due to its dependence on Flash. Instagram couldn't exist without camera access (and arguably fast image processing, which JS couldn't do at the time). The technology to build some of the most downloaded apps like Words With Friends or Angry Birds could not be done on the web.


Amazon's app is an example.


PWAs were crippled by the mobile operating system vendor with 60% of the mobile OS marketshare in the US. PWAs didn't happen because Apple doesn't want them to compete fairly with native apps on iOS.


I would say that in the beginning most apps weren't that way, precisely because of the issues with building rich web apps. Now that it's possible to replicate a lot of the "native" feel on the web, more apps are choosing the "wrap in a webview" approach as it vastly simplifies the development process across web and multiple mobile/desktop platforms.

Agreed on the control. Another thing to consider, at least in this crowd, is that Safari content filters (aka ad blockers) aren't applied in apps.


> Brands and vendors don't like the web because often they cannot 100% control (...)

App permissions exist to do the same for native apps.


> The real question is how did the web remain relevant while being so far behind this tectonic platform shift?

Not sure it is relevant anymore for a non-trivial chunk of people - at least on mobile. Lately I have quite often needed to explain that a good web app is accessible in the browser but not in the app store. And I am a bit bewildered. The number of people pretty much unable to distinguish between a browser, an app store and a URL is... not too small. (And don't get me started on iPhone users' understanding of why their location services "do not work" in Safari. I assume there is a difference in default permissions between iPhone and Android there.)


I don't have an iPhone, can you expand on that location services part? Does Safari not show the permission popup when a website asks for it or something?

On Android, websites can request location permissions, you can't access any of that stuff by default.


I do not have an iPhone either, but based on my observations many (most?) iPhone owners have disabled Safari location services on the OS side, so you can't just allow them from the permission popup; you need to go into the OS settings and enable them for Safari before the permission popup works. Again, I do not own an iPhone, so I'm not sure it is exactly like this, but something along those lines needs to be done on many iPhones before Safari can access location data.


I wish I could do that on macOS so I stop getting hit with pop ups in Safari asking me for my location. Homedepot.com is particularly annoying about this.


Use a browser that doesn't purposely nag you for permissions like Safari does. Firefox fits the bill for me, and doesn't throw nagging pop ups at me all of the time.


You can go to Safari Preferences > Websites > Location and set "when visiting other sites" to "deny". I don't get any popups that way.


Oh thanks!


It does not stop the pop-ups. Safari asks you for permission for location services, you tap OK, and you get an error message that says getting location services failed.


Not sure what op meant, but most computers don't come with built in GPS. Location services rely on other indicators of location, which are far less precise.


Firefox lets you specify your location, though not easily. (Set `geo.provider.network.url` to `data:application/json,{"location":{"lat":0.00000,"lng":0.00000},"accuracy":1000}`.) Chrome doesn't, so even sites with location permission place me at my ISP's HQ 500 miles away.


> The real question is how did the web remain relevant while being so far behind this tectonic platform shift?

Because CSS + HTML + JS is 20x easier to learn than any mobile programming language. Anyone can create a website in a few hours; that has never been the case for mobile apps.

You can even create a basic website by copy-pasting some HTML template and modifying the content with Notepad.

As a result - as long as Google doesn't decide to deprecate HTML and CSS because they don't suit its complex requirements - the web is assured that far more people will be able to create websites than apps. When you look at Kotlin or Swift, HTML looks like a no-code language.
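
To make that concrete, here is roughly what such a copy-paste starting point looks like (the text and styling are just placeholders):

```html
<!-- A complete web page: save as index.html, open it in any browser, edit it in Notepad. -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>My first page</title>
    <style>
      body { font-family: sans-serif; max-width: 40em; margin: 2em auto; }
    </style>
  </head>
  <body>
    <h1>Hello, web</h1>
    <p>Edit this text, save, refresh. No compiler, no SDK, no store review.</p>
  </body>
</html>
```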


Expo.dev makes mobile app development pretty easy.


Even today, in the year 2021, browsers do not have native support for the gestures that are ubiquitous in the touch-screen world.

To pan/pinch/swipe you either need to implement it yourself or use a JS library like Hammer, which adds unnecessary bloat.
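
For anyone curious what "implement it yourself" means in practice, here is a rough sketch of pinch detection with Pointer Events; the element, IDs and the scale handling are purely illustrative:

```html
<!-- Sketch: detect a two-finger pinch with Pointer Events and apply the scale factor. -->
<div id="target" style="touch-action: none; width: 200px; height: 200px; background: #ddd;"></div>
<script>
  const target = document.getElementById('target');
  const pointers = new Map();   // active pointerId -> last known position
  let startDistance = null;     // finger distance when the second finger lands

  const distance = () => {
    const [a, b] = [...pointers.values()];
    return Math.hypot(a.x - b.x, a.y - b.y);
  };

  target.addEventListener('pointerdown', (e) => {
    pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
    if (pointers.size === 2) startDistance = distance();
  });

  target.addEventListener('pointermove', (e) => {
    if (!pointers.has(e.pointerId)) return;
    pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
    if (pointers.size === 2 && startDistance) {
      const scale = distance() / startDistance;  // >1 pinch out, <1 pinch in
      target.style.transform = `scale(${scale})`;
    }
  });

  const end = (e) => { pointers.delete(e.pointerId); startDistance = null; };
  target.addEventListener('pointerup', end);
  target.addEventListener('pointercancel', end);
</script>
```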


Don't do this without a good reason. We finally got a lot of mobile sites rendering well, and you want to take that away.


A good reason is simply that gestures are a staple of mobile UIs.


1. It's a good thing that you have to implement them yourself. A lot of this stuff is super arbitrary.

2. Opera had gestures 15+ years ago and it was glorious for those that liked them


> content addressability

I'd say the app ecosystem promotes closed content whereas WWW promotes open content.


The app ecosystem promotes closed content only because delivering the UX people expect on mobile required running native code, at least until recently, with responsive UIs now dominating the web framework space.


What open content? Wikipedia? That's <1% of the Web.

If we're judging whether the Web promotes open content by the actual results after three decades, the exact opposite is the case: the Web promotes closed content.

Netflix, ESPN, Google/Gmail/Maps/News/etc, Microsoft/Bing/Office/mail/etc, NYTimes, Washington Post, USA Today, MSNBC, CNBC, Reuters, AP, CNN, Bloomberg, FoxNews, ABC, CBS, NBC, Facebook, Instagram, Twitter, TikTok, LinkedIn, Pinterest, YouTube, Imgur, eBay, Amazon/Twitch/Prime/etc, Target, Walmart, ETSY, Disney, Viacom, Verizon/Yahoo, Match.com, Quora, PayPal. Along with pretty much all corporate sites and porn sites. Most blogs are privately owned, copyrighted; same with substacks; same with most mailing lists. Most photography on the Web is owned by someone, and captive on a platform. Travel sites/services (Booking, Expedia, etc), review sites/services (Yelp, Trip Advisor, most cooking sites, etc), weather sites, real-estate sites (Realtor.com, Zillow, Apartments.com, etc); ticket sites (Vivid, Stubhub, Ticketmaster, Live Nation) - they're largely in the same box, they're all privately owned, copyrighted content, closed corporate platforms.

Now keep going with that list for most of the next 5,000+ largest sites (with few exceptions like Wikipedia or Stack Exchange).

Nearly everything on the Web is centralized, closed, privately owned, copyrighted content and/or platforms. They're all little walled gardens.


It's <1% of the viewed web maybe (I don't know that for sure, but let's stipulate).

But open content is not 1% of the existing web; there are more than a billion[1] websites, and anybody can make one.

You might not easily be able to find those websites (currently, there's still a good chance you might, via the major search engines, but that is subject to corporate hegemony, state actor malfeasance, shit-flooding the zone, etc).

But I think the ability to "put it out there" counts for most of the definition of "open".

Being able to consume content on your terms is also cool, but not as fundamental as being able to produce it.

[1]: ¯\_(ツ)_/¯ — https://duckduckgo.com/?q=how+many+websites+are+there+in+the...


The protocol. Websites are accessible over a known, open protocol to every visitor (geofencing etc. is there, but...), but apps use proprietary protocols which only the particular app can decode. You can see the ESPN website in any old browser, but the ESPN app's content needs the ESPN app, and nothing else will work.


Open protocols don't make open sites or apps.

Cases in point: Facebook, Instagram, Twitter (increasingly closed), Reddit (increasingly closed). All of them have sites built on open protocols.


Open as in no spyware required.


Uhhhhhhhh do you think websites have no ““spyware””?

Have you heard of biscuits?


In their defense they did say required.

You've been able to turn off cookies since the beginning in any sort of competent browser.


And you’ve been able to allow or deny apps individual permissions on any sort of competent OS.


I think, since we're talking mobile here, the implication is that the OS is part of the spyware here.

Or I completely screwed up my understanding of this thread. That's also very possible.


Firefox ships with Enhanced Tracking Protection enabled by default, disabling those tracking cookies and other trackers from the first time you launch the browser[1].

[1] https://support.mozilla.org/en-US/kb/enhanced-tracking-prote...


uhm... everything? "open" as in anyone can make a website, and if it has one good article, it will show up in some Google query's results.


> "open" as in anyone can make a website

Anyone can make an app too.

> if it has one good article, it will show up in some Google query's results.

Maybe buried deep in the 100th page, but not the front.


> Anyone can make an app too.

On iOS, this is only true if they're in good standing with Apple and have their approval, while also paying the yearly $100 Apple tax.


All hosting providers or app platforms also have a ""tAX""

Wait till you find out about paying for bandwidth.


This is hacker news, none of us has heard of nor care about the highly technical details you speak of


> Anyone can make an app too.

After they install some bloated IDE, learn some "tech", and pay the gatekeepers

> Maybe buried deep in the 100th page

But they can be found and shared, as opposed to apps, where there is no category in which average people publish what they are working on, their thoughts, or whatever. App Store and Google Play search would crumble at the scale of the number of websites in existence right now (~2 billion). Remember that search engines index what's inside those websites too, not just their name and description.


> After they install some bloated IDE, learn some "tech", and pay the gatekeepers

Tell me how making and hosting a website is absolutely free.


You could not make this argument even 10 years ago, when you could absolutely make a site on any of WordPress, Tumblr, Yahoo or Blogspot completely free. These days there are even more choices, ranging from WYSIWYG builders (Squarespace and the like) to absolutely free full CI/CD platforms (Heroku, Zeit, GitHub Pages...). It's not like there is a duopoly on web hosting.


But I guess the more important aspect of the word "free" is that no one can gatekeep you from making a website, unlike app stores where there is a review process.


Open in this context means open to access without platform preference. If you can access it with Firefox on Linux then in this context it's open.


> When iPhone and Android hit the scene, the 'rich' web experience was primarily Flash, Silverlight, and some Java applets (taking their last breath).

A lot of people have forgotten that YouTube used FLV (video streaming via Flash) for a long time because there wasn't any better viable multiplatform alternative.

You are totally right. The web wasn't ready for new universal types of information exchange and we needed to use third parties to solve that, in the form of a Flash plug-in or an installed app.


I'd argue that it has to do with standards bodies too: the time it takes for many parties to agree on standardized APIs for mobile. Apple and Google can simply say "this is our native API" and people will use it quickly and effectively. Also, there's an incentive for Apple to keep people on their platform, and standards are tangential to that.

Also, desktop browsers have been around for over 30 years, and hence the click, double click, scroll, and basic key shortcut metaphors have as well.


webOS was from the same ~2009 era, powering the Palm/HP Pre models, and its apps were developed in HTML, CSS, and JavaScript. It had a huge selection of 3rd party and open source apps developed using web technologies.

This was the case for the first iteration of the iPhone, as well.

The truth is that the web didn't flourish on mobile because the two mobile operating system giants knew that they could make a killing by keeping a stranglehold on the mobile app distribution market with their app store models. And they did so successfully for a decade, making tens of billions of dollars each year from said stranglehold.

Giving the web equal footing on their mobile operating systems like webOS did would put those billions of dollars of yearly revenue at risk.


> The real question is how did the web remain relevant while being so far behind this tectonic platform shift?

this is an "apples vs oranges" question


> the vast amounts of money/influence Google was willing to put in to keep its search ads property relevant

Mobile search ad clicks have been surpassing desktop search clicks for a while now. If it were done solely through the Google app, and if iOS weren't a thing, Google would have happily killed the web by this time.

I think Instant Apps is (was?) an attempt at it.


Also, I think the "look up random info and articles" use case was never lost to the mobile web. I don't think there are many apps for that even now. Yes, there are the Wikipedia and other such apps, but they are a sliver of the web.


Native phone apps are more invasive and can therefore be used for more extensive tracking and spying, which is good for ad revenue.

Remember how all the largest spying products like Instagram, Facebook, LinkedIn, Reddit, Twitter, etc aggressively push you to install their phone apps even though they are (or were) perfectly usable through their mobile websites.


Reddit is a really weird one to me. Their mobile site is very nice - so nice that I don't feel any desire to install the app - aside from the fact that they desperately want me to stop using it and push the app at every opportunity. (Recently they even decided to block NSFW-flagged content from the mobile site unless you're logged in - you get a box telling you to install the app to see it.)

What I don't get is why they put all the effort into maintaining a high-quality mobile site if they hate people using it this much. Just get rid of it and load the desktop version. Pop-ups saying "Reddit works better in the app" would be less off-putting if the alternative was an unoptimized desktop site, rather than a perfectly good mobile site whose only downside was said pop-ups.


You must be using a different Reddit website; the one I know is horrible. It's slow, it's buggy, videos don't always work without a refresh after waiting 10 seconds to be able to click play. The UI is all wrong:

for example, you open the comments of a post and it opens a faux-modal with an X button. You tap it, it closes the post but opens the subreddit. It doesn't go back to where you were.

Another example: to collapse a thread in the comments you need to press the empty space beside the username?!?!

It's so slow and wrong that I've built my own front-end, but it's virtually impossible to load reddit videos correctly...


Maybe they mean old.reddit.com. I use it on mobile and it's fine. Fast, not too many pushes to change to the app, r/friends/comments works.

Twitter on mobile has become almost unusable (unable to see most content without making an account), though I've found a workaround: it's not totally broken in incognito windows, so I guess that's how I read it now.


Same with Medium; I can only see posts in Incognito mode...


I actually use the old desktop version of Reddit on my phone, because I find all other versions of the site worse. All of the other ones have a bloated UI with giant elements while hiding useful elements.

My only gripe is the upvote, downvote, and " comments" buttons are a little too small compared to the other elements. Oh, and that new slide or multi picture UI sucks.

It loads quickly too. Quicker than the mobile versions, which is ironic.


The more users who download the app, the higher their rankings are. The higher the rankings, the more users see and download your app. It's very obvious why they're doing this


Also, there is no app store on the web and no 30% fee, so there is no incentive for mobile platform owners to make it a first-class citizen.


Going to point out the irony of Steve Jobs not wanting an App Store at all and preferring a rich web experience. And devs getting mad at that.


I think the anger was justified at the time. The rich web experience back in those days was extremely limited (as it still sometimes is, on iOS). There was no way anyone was going to write a music player that could compete with the built-in one using just the "rich web experience".

Connection speeds in the original iPhone era were low, as was processing power. Spotify wouldn't have been able to provide you with the service it can provide over the web now.

The focus on the web was also a complete break from existing smartphones, where installing applications had been the norm for years. iPhones weren't exactly the first to feature apps, or even app stores, and delivering an iPhone experience that was as smooth as a Windows Mobile or Blackberry experience was going to be difficult without native code. At the same time, everyone already knew that upper and middle management (and all the other departments to convince when introducing new software) were going to buy iPhones because new, shiny Apple stuff often scores well as a status symbol.

At the time, it made complete sense to be mad. It's not that people didn't want to create websites, it's that websites were simply not equivalent to the real deal. A lot has changed since then.


As I recall, the "rich web experience" that Jobs was pushing for didn't mean it would be mobile Web for everything. Wasn't his idea that apps could be installed locally, but would be written in JavaScript and use a Web stack for the UI?

Given the popularity of tools like Cordova and React Native, and the absolute dominance of Electron on desktop, I'd say he wasn't exactly wrong.


> Wasn't his idea that apps could be installed locally, but would be written in JavaScript and use a Web stack for the UI?

That would essentially have been Firefox OS, minus HTML5 that wasn't yet ready back then.


Devs didn't want an app store, they just wanted the ability to install native apps and the SDK to build them.


The web isn't a first-class citizen on desktop either, as you can't run new AAA games in one. The real difference is that the web is a more limited UI, and it breaks down when you're also forced to use the more limited interface of mobile.

For example, on desktop the top 10% or so of the screen is taken up by the browser's UI and tabs, which is normally fine. On a tablet in keyboard mode you're left with ~40% of the screen being usable. Native applications sidestep this by not needing the browser's UI.


AAA games are a weird example. It's one of the few app categories that really needs full native performance, although a lot of them can also scale nicely to run on older hardware.

WebGPU and WebAssembly are getting close to matching the speed and capabilities of older hardware.

Tab UI is possible to hide, and has been for some time. You can "app-ify" a web site into a dedicated window with no browser UI.

ISTR apps that were doing this >10 years ago.

I think the real differences break down to economics. Can't charge an app store fee for web sites or force sites to use your advertising network.


For me, the big thing I can't do with a web app is work around the keyboard. If I try to write a chat app, the keyboard will often cover the text box, and there's often no way to prevent this. There are no APIs for "tell me where the on-screen keyboard is so I can move the app UI out of the way" or even "automatically resize the window to exclude the keyboard".

The options for showing a numpad are also very limited. <input type="number"> exists, but it doesn't work for things that aren't numbers, like codes starting with 0.
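
For what it's worth, the usual partial workaround for the leading-zero case is inputmode rather than type="number" (keypad behavior still varies by browser and OS keyboard); a sketch:

```html
<!-- The value stays a plain string, so a code like "0042" keeps its leading zero,
     while inputmode/pattern hint most mobile keyboards into showing a numeric keypad. -->
<label>
  One-time code
  <input type="text" inputmode="numeric" pattern="[0-9]*" autocomplete="one-time-code">
</label>
```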


> Tab UI is possible to hide

That's not a solution to the issue, though. An independent application is managed by the OS, but websites are generally managed by the browser, which has multiple tabs. Users don't want every tab to be treated like its own app, which means websites need to play nice with the browser's UI even if the OS did give them full control.

Now, mobile OSes have mechanisms to treat websites as native apps, but you're still stuck with web baggage. From a user's perspective, installing a website as a native app is simply worse than installing a native app. Basically, when you start improving the website-as-app enough, such as gracefully handling network outages, you just end up with a native application.

From the other direction, anything that would be fine as a website is already fine. You don't get websites killing huge numbers of native apps, because in that case there weren't any native apps built to be killed.


This is a very important point. Mobile web apps primarily don't feel right because of the browser UI, and also because the interaction with scrolling/tapping is never quite as perfect.


People say this a lot, but really, what can a native app track that a web app can't? To track location, you need to give the app permission, just like the web. To receive push notifications, you need to give the app permission, just like the web.

I think this claim is unfounded.


Native apps have much more access to lower-level APIs and a fixed identity that can be shared with all other apps.

The web was always limited by JS access, processing power, network connectivity, and a very fragile identity graph that was routinely reset. That's why Safari's battle on privacy is routinely seen as a misguided effort by most of the adtech industry that knows that SDKs in mobile apps reveal a magnitude more data.


Computers are kept in a fixed or limited number of locations. Phones are tracking devices that follow you throughout the day. The data they generate is far more valuable for building a consumer profile.


You're talking about desktops vs. laptops / mobile phones. That has nothing to do with mobile vs. web. A phone can run a web browser.


A mobile app can get location updates every 15 minutes even when you're not using it. A web page can only request location when the browser is processing its requests.
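
To illustrate the contrast, about the most a page can do is watchPosition, which only delivers updates while the page is actually open and the user has granted the prompt; a rough sketch (the options shown are illustrative):

```html
<script>
  // Foreground-only location stream: updates stop as soon as the tab is closed
  // or (on most mobile browsers) the browser itself is backgrounded.
  const watchId = navigator.geolocation.watchPosition(
    (pos) => console.log('lat', pos.coords.latitude, 'lng', pos.coords.longitude),
    (err) => console.warn('location unavailable:', err.message),
    { enableHighAccuracy: false, maximumAge: 60000 }
  );

  // Be a good citizen and stop listening when the page is hidden.
  document.addEventListener('visibilitychange', () => {
    if (document.hidden) navigator.geolocation.clearWatch(watchId);
  });
</script>
```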


To give you an idea, web apps can't track location while running in the background no matter what permissions you give them. Native apps can track you while running in the background


And they only do it with specific permissions.


On Android an app can do all kinds of crazy stuff without permission like draw over other apps; I assume some of that could be useful for tracking

On iOS though you can't do that stuff, and I would tend to think apps are more sandboxed than websites (no cookies, for one). I don't know for sure though; I'd love to hear more from someone who has more firsthand experience.


On the web you can browse most sites without even logging in. Not so in mobile apps. This makes a huge difference in terms of data gathering. And no, you can't just force everyone to log in to your web app, since then your search ranking will tank. Some sites try to work around this, but so far most sites people use let you browse without logging in.


It's not spying, it's the first party revenue. It's easier to collect a fee for an app install and for Apple to take a huge cut. App dominance was very deliberately engineered by Apple. It didn't happen organically.


> Native phone apps are more invasive and can therefore be used for more extensive tracking and spying, which is good for ad revenue.

It's probably because the specification for native apps is always cooked up by a single company, whereas the web specification is always under the scrutiny of multiple organizations.


While noble, I think this misses the profit motive and me-too FOMO effect of industry strategy.

Never attribute to purely good engineering and design that which is incentivized by profit.


Yes, the conflicting incentive angle was implied in my comment, and is imho an important reason why these specifications should not be written and imposed by a single actor.


Also, what we call "mobile" is actually just 2 nice walled gardens with doors called "App Store" and "Play Store".

That's at least another reason why the web is not pervasive on mobile: Apple and Google are quite happy that it's not that way because they can monetize their platforms with far more control than through the browser.

The article references app stores pointing out that they reduced user friction in payments and subscriptions around apps. Of course, under the guise of user experience you can focus your development efforts on native tool chains and not invest in the mobile browser experience.


That's right. Web standards are spearheaded by Google, while Apple's job is to make sure they don't work on iOS.


Tell me, why does Apple need to do exactly what Google does? I thought we liked different ideas and competition ‘round these parts?


Apple just doesn't do it in any way. There's no way to permanently store data in Safari. At least Google Chrome has the File writer API (even though it would be good to have permanent IndexedDB storage as well).
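
Presumably the "File writer API" here refers to the File System Access API in Chromium-based browsers (as far as I know Safari has no equivalent, which is the point); a rough sketch:

```html
<button id="save">Save notes</button>
<script>
  // Chromium-only sketch: prompts the user for a save location and writes to it.
  // Must be triggered by a user gesture, hence the button.
  document.getElementById('save').addEventListener('click', async () => {
    const handle = await window.showSaveFilePicker({
      suggestedName: 'notes.txt',
      types: [{ description: 'Text', accept: { 'text/plain': ['.txt'] } }],
    });
    const writable = await handle.createWritable();
    await writable.write('hello from the web');
    await writable.close();
  });
</script>
```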


IIRC you can permanently store information under iOS Safari if the site is installed as a PWA, which makes some amount of sense. By associating long term storage with the intentional action of adding to Home Screen, sites are prevented from storing things without user permission and the feature's capability as a method of fingerprinting is greatly reduced.


Apple has gimped PWA support for iOS. It’s support and lack of features is atrocious.


Yeah, but Microsoft is equally unhappy about Windows native software going extinct and being strong-armed into using Google's Chrome engine for their Edge browser!


A good open successor to flash would have made the app stores much less appealing. But that never happened.


I worked with a company in the middle of last decade that had to port its Flash content to Canvas and friends; it's not even close to the power Flash provided, not to mention the content development ecosystem surrounding Animate. Like, yeah, go ahead and export Canvas content from Animate, but be prepared for highly unoptimized Canvas runtime code.


I found the article really hard to read, the author took a long and convoluted path to expose his ideas.

As for the question that the article raises, there are probably many co-occurring reasons, and a lot of comments make good points. However, one that I couldn't find listed is IMHO performance. I still find web apps (and web sites) to feel really sluggish and slow on everything that is not a top-tier phone. Again, many reasons for this (JavaScript bloat, tracking, intermittent connectivity, ...) but the user experience is just not the same.


It's interesting how differently one can perceive the same article. I quite enjoyed how he first explained some general ideas in a narrative style.

I agree that there are likely different reasons that had an impact. Performance is a good addition, and I think the bad mobile connectivity in the earlier years of the smartphone era is another reason.


It's quite typical of this Malcolm Gladwell-style guru-writing. Start with some obscure non-business-related factoid story that demonstrates how intelligent and widely read you are, and then give your opinion on the topic of business and hustling dressed up in the aforementioned story. People write entire books where each chapter rehashes this formula.


I enjoyed his prelude with "The Great Oxidation" event... I've always known about it - but not under that term. Yes, a bit convoluted and contrived - but I enjoyed it anyway.


I enjoyed the rhetorical journey the author took us on, personally.

I wonder if you read the whole article, since the author specifically calls out performance as one of the factors.


>I found the article really hard to read, the author took a long and convoluted path to expose his ideas.

aka "the substack style"


Lots of potential theories here. Allow me to offer one of my own: it's our physical posture and ergonomics. When on the desktop, you're usually sitting down in front of a desk after deciding to set aside some time for the express purpose of doing something on a computer. A small wait for the UI to render is tolerable. But when you're on a mobile device, it's usually the thing you're using to kill time while you're waiting for something else. Delays are a lot less tolerable.

Another thing: on desktop, the burden of task switching is lower -- you click another tab while the page loads. It's literally just another instantly responsive click back when the thing loads. On mobile, it's a slightly slower, more involved process as you tap and hunt for icons.


> A small wait for the UI to render is tolerable.

Absolutely not. In fact, I expect more responsiveness from my desktop.


Agreed, when using my phone I am very frequently waiting for it to do something. Start Google Maps, wait 5 seconds, click the search bar, wait 3 seconds before the keyboard comes up, type a few letters, wait between 0 and 10 seconds for any letters to show up...

It's hilarious when you think about how ludicrously powerful these devices are compared to what we were using 15 years ago that wasn't that sluggish. And of course by hilarious I mean dreadful.


This is also me, but as an engineer. It's not how the average user's mind works.


A difference I haven't seen highlighted in this thread yet is that there are phone apps and then there are tablet apps, and the latter have already made big inroads with content creators, e.g. amateur/pro musicians, illustrators/artists, etc.

Tablets can provide the ergonomic benefits and the high-level interface for their tasks that a mouse and keyboard can, depending on context, add more complexity to. A lot of the time, you don't need all the power-user features at once in the course of creating some art form.

Tablets are lighter than laptops generally and are faster to boot up for when the spark of creativity hits.

Posture and ergonomics are definitely big wins for mobile as a whole, and a lot of the comments here are desktop power users speaking strictly of their workflows translating to a phone.


Two words: Application permissions.

Any application I install on the desktop can potentially steal my data and ruin my life.

And no, big names like Spotify or Valve don’t offer complete protection either due to supply chain attacks.


Yes, but normies don't really understand or care.

The big difference is the "inviting" experience of installing apps on smartphones: you just search the name and tap "install".


I think normal users do have a fear of installing software on desktop. There's this fear of "viruses." They know they exist, they don't know how they work, but they know you shouldn't install anything or you'll let them in. They view third-party software like vampires - intruders trying to trick them into opening the door.

Which, given the history of adware toolbars and registry bloat on Windows, is maybe warranted.

https://www.pcworld.com/article/149951/registry-cleaner.html


That's the correct default view of software.

Where we've gone astray is that people don't treat "apps" the same way.


We do the same in the Windows App Store too. … except it's unlikely to be the easiest way to get the apps you love, it's not effective as a means of discovering new apps, and because of that they haven't succeeded at attracting developers to their concept. Why would a developer spend however long making their app available in the Windows store unless they were already a big name? And how easy is it if you're on any one of the now disjointed and incompatible litany of display technologies (Forms, WPF, Silverlight, UWP, DOS, PowerShell), and you pay big platform costs for the privilege? I am in reality hoping to do it ASAP.


I believe this is probably the most important difference. Desktop platforms, and particularly Windows, have not historically offered a convenient experience for installing, updating and removing software. Web apps just work. Mobile apps and app stores offer a simpler and much slicker experience than desktops, so there is less need for the easy access that web apps offer.


Your desktop does not have much private data (other than your files, which reputable programs don't scan without reason). Your phone has your location, your activity status, your phonebook, which 99.9% of people cannot spoof


> Your desktop does not have much private data (other than your files, which reputable programs don't scan without reason)

Disagreed. It's not just about files; once you run an untrusted binary on a desktop it will typically be able to capture your screen, keyboard input, etc. It doesn't need to be in a file for malicious software to be able to pick it up.

In contrast, mobile applications typically run within a sandbox where short of an OS/kernel exploit there is no way for the app to access anything else but its own data (besides a very limited subset like photos, etc that you still have to authorize explicitly).


'Files' includes %appdata%, which is often a bunch of secrets like browser cookies or authentication keys.


After a long discussion of sea algae the author makes several good points. Here is a short summary to save others time. On Mobile:

> The basis of performance shifted from small binaries to smooth interaction.

> Log-in disappeared completely. The web’s de facto identity system is designed for a world of keyboards. Thumb-typing usernames and passwords is excruciating. But native app log-in is one-and-done.

> Discovery shifted from search to app store.

> Engagement shifted from links to icons.

> Business models expanded to IAP, subscriptions, and app purchase. Security shifted from sandbox to app review.


UX: harder to optimize for a small screen. Nowadays phones are huge compared to 10 years ago, and native apps got a very early advantage.

Connectivity: 2.5G and 3G were slow, bandwidth was capped, and in some places there was no internet at all.

And yes, what can you do on iOS if Apple forces their browser on all their customers?

But (Android) I still spend most of my time in a browser. Notable exceptions: WhatsApp, Telegram, K9, Maps, OSMAnd, Camera, Gallery, YouTube / NewPipe. Maybe 1 hour vs all the rest.


Notable exceptions: WhatsApp, Telegram, K9, Maps, OSMAnd, Camera, Gallery, YouTube / NewPipe

Why not use the web versions of WhatsApp, Maps, YouTube, Photos etc.?


Because of anticompetitive actions taken by the companies that dominate the mobile operating systems market in order to dominate the mobile app distribution market, as well as the mobile app payments market.

Those companies made billions of dollars each year through their anticompetitive behavior.


If iOS mobile web supported push notifications and permanent app links on the phone, I don't know if we would be asking this question.


Maybe I missed it, but I don't think the article even provides figures for the percentage of time spent in a browser on mobile versus desktop.

If it is indeed skewed toward non-browser activities on the phone, that's got to mostly be due to hardware integration for things that don't exist or aren't used the same way on most desktops (camera, voice assistant, GPS) and better notification interfaces (messaging apps, which surely occupy a larger percentage of mobile users' time than desktop). Most of the stuff in the article seems way less relevant than those (e.g. "typing passwords is awkward on a phone"—OK, but I don't type them, they auto-fill, and Safari even generates them for me so many of my passwords I've never typed on any platform, so....)


> camera, voice assistant, GPS

Phones could add APIs for webapps to use those.


Camera and GPS are usable from webapps.


Most of the stuff I use on my computer is native apps, and plenty of corporations are still based on air-gapped networks with native applications.

Also, despite WebGL, or due to the way it evolved during the last 10 years, game development for the Web is a failure, hence why everyone is having yet another go at streaming, with graphics cards that neither WebGL nor WebGPU will ever support.

Also nice touch forgetting about PalmOS, Symbian and Windows/PocketPC.


I think the main reason is distribution. Desktop apps were/are difficult to distribute. To get a desktop app into an organisation, you needed to tell people to install something, and they need disk space, RAM, different versions of Windows; it might not work on Mac, and almost never on Linux. Then they need to update the app regularly somehow, still not solved for all apps. Bigger orgs have managed distribution, but then smaller groups can't install things without going through IT. Then you've got servers, which require someone to set up and maintain, and have their own lead time. Webapps, especially free ones, have none of these issues; an individual can just start using them. Webapps mean everyone is using the latest version straight away.

Phone apps seem to be modelled on webapps, typically the server is run by the app maker, installs are easy and updates are automatic. Additionally, phone apps automatically go on the home screen and have notifications, which means you’ll open them more often.


Additionally, in the 2000s we had IE-only webapps, meaning that when companies only cared about Windows clients, they still thought making webapp was a better way to reach their customers than a Win32 app.


Installation threshold is my guess.

To install an app on mobile you go to a specific place, find it, peruse screenshots and reviews if you want to, hit one standardized button, and it's installed in seconds.

On desktop, you Google it. You hope the first link is the right one and not malware. You poke through a few pages until you find some sort of download button. Your OS asks you where you want to save it. You save it and then go find the file and open it. Then some kind of installer usually runs that may ask you further questions. It's probably going to dump some clutter in various places and will hopefully give you an icon that you can use to run it without searching for the install location.

For companies trying to ratchet up conversions, this friction matters a whole lot.

These days there's also the developer experience, but I think developer experience for desktop apps tanked because market share tanked, not the other way around.


Cell networks were also really terrible when smartphones hit the scene. I remember, outside of big metro areas, 2G with 20+ second latency at times (yes, seconds).

It's pretty hard to use traditional responsive web apps over such an abysmal network.

Native apps can offer a layer of insulation with pre-pulled assets and more advanced network handling (you can change pages and even close a native app with outstanding requests and it's usually fine: it'll handle the changes asynchronously when the network is available).


Mobile code cannot afford to burn battery on unnecessary CPU use, and certainly not on unnecessary use of power transmitting over a radio link repetitively loading things: desktop gets power plugged into a wall and is therefore less constrained.


> unnecessary use of power transmitting over a radio link repetitively loading things

Web apps can be installed on the phone. This has been working on Android for a long time, and it won't need to reload anything. I don't see them being much less efficient than "native" Android apps, especially considering their average quality.

> desktop gets power plugged into a wall and is therefore less constrained

Unless it's a laptop.


May I ask for some examples of those web apps? Do these really exist outside of some note-taking sample apps?


To your point, iOS has had a web view object forever too.


Android and iOS specifically are not keen on creating a more powerful browser experience because it hurts their app revenues.

Mobile APIs for apps are fairly modern.

The limited nature of the mobile experience, plus the more modern APIs and the integrated build/deployment/storefronts, makes native apps a much easier choice on mobile than they would be otherwise.

Desktop app frameworks are old and complicated, and there aren't very good options for apps in between 'web' and 'major installs'. Desktop UIs generally require a lot more component types and layouts, and frankly no framework has conquered that domain very well. Every single desktop UI framework falls quite short.


The answer is simple. Mobile devices lag behind desktop devices by about 10 years in terms of memory and processing power. When did web applications really start to dominate on the desktop? Think Electron apps and things like that - less than 10 years ago. My prediction? Within 5 years web apps will have taken over in the mobile space.


I agree with this. In fact I believe that THE WEB IS TAKING OVER MOBILE. Every day a new mobile experience needs to be created, and one more developer thinks: "well, I'll just make a responsive website/webapp rather than making 2 native apps". At waiterio.com I personally started with a native Android app in Java, then we added a native Objective-C app, then... I just rewrote everything in React and wrapped it in a WebView. Today a simple mobile app can totally be written in JavaScript to look identical to a native app.

I was on the edge of the mobile wave 12 years ago, and yet today I barely ever install a new native app on my phone. There are very few use cases that are better done on mobile (e.g. Google Maps, Uber). Most stuff is just better done on a larger screen. That said, there is and will be a huge part of the population doing stuff on mobile, but frankly they are the CONSUMERS who never learned how to properly use a computer. Sure, you can buy an airplane ticket from a smartphone, but if you own a laptop and are good with it, it will take you less time to walk to your laptop and buy it there than to buy it from the couch on your smartphone, making a typo every 5 characters you type.


There's more going on here. The original iPhone had no SDK. So the only way to run stuff on it that wasn't built by Apple was via Safari. It ran Ajax applications just fine. I actually worked at Nokia Research at the time, and a few of my colleagues were responsible for porting WebKit to Symbian; that was around 2005, around the time Ajax applications were becoming a thing, before the iPhone was more than a rumor.

For a while, there were many iPhone-optimized mobile websites. E.g. Google Reader had an iPhone version. Typically these worked well on the S60 WebKit browser as well, and looked and functioned much nicer than the typical mobile websites optimized for the much crappier browsers that were common before that. Opera Mini became a thing around the same time, which was an actual browser implemented in J2ME that you could run on feature phones.

Performance was not really an issue with this. That was the problem: it worked a bit too well, and you didn't need an iPhone to use these websites.

Apple fixed it with a native SDK, an app store, and a by-now well-established practice of systematically crippling the browser experience on iOS in subtle ways. For a long time you had weird memory restrictions, they never bothered with progressive web apps, and they have very strict policies in the app store that further ensure users and developers focus on building "native" experiences. It would be fairly easy to fix. But of course there's a rule against third parties doing that too. Safari is the one and only browser allowed on the platform. Officially that's because it benefits the user, but the obvious actual reason is that Apple does not want people removing obstacles that would allow for a better application experience inside browsers. Flash was banned for the same reasons before the iPhone even launched. So were applets and the whole J2ME ecosystem that existed at the time.

Google on the other hand shipped Chrome on a far more open platform. But they make money from ads rather than hardware sales, which is why the Play Store exists and why anyone earning money with apps and ads prefers native apps as well. The status quo is these two fairly locked-down platforms and a few niche platforms that don't seem to move the needle much when it comes to what people build.

That might change in the next few years as the Apple/Google duopoly slowly heads for inevitable court cases which might introduce more application stores and platforms. When that happens, web first becomes a cheap strategy to target mobile because testing and building apps for each platform is already getting quite expensive with just two of them. Add wasm to the mix and what's native and browser based becomes kind of blurry in any case. Five years is about right in terms of timeline. But you can bet that Apple will drag their heels with all of this.


Early smartphones had poor connectivity and the web at the time had very poor offline support.

While Apple initially said that apps would be web apps, they never really walked the walk. No HTML/CSS frameworks for optimizing sites for the iPhone. No frameworks for offline support.

And this was at a time when Apple and Google were good friends and Google was developing Google Gears for offline support (you could use Gmail offline with Google Gears back then).

Apps offered 4 things: a development environment, offline support, a monetization strategy and a cool factor.

Make it 5: they also supported things that are more than a glorified webpage.


Did it?

I almost exclusively use the mobile browser and don’t like installing lots of apps. They’re intrusive, they always want to collect things like location data, drain the battery, take up space, and spam me with notifications. The app is often built to steal more of my attention by being faster and shinier…

For example, no matter how hobbled the Reddit web experience is, I don’t want their annoying app. I’d rather sandbox it to iPhone safari and protect my sanity.


And apps are worse.

When looking at news sites, HN, etc., my way of using them is that I open interesting articles in background tabs and then go through them. With an app I constantly have to go back and forth between the front page and the article. Especially annoying with longer reads.


If you ask me, the number one reason is that Android and iOS have UI libraries that are specifically meant for mobile devices. Yes, you can design a web app for a mobile phone, but there is nothing that guides you or gives you any help in that direction.

For example, navigation between screens. On iOS, you have a nav bar at the top of the screen, and when you navigate to a new screen it slides over the current one, animating in a back button back to the previous screen. All apps behave this way, and when you have an iPhone you don't even notice this - it is just the way you interact with all apps.

You would have to hand-build that on the web, or use some random library that probably uses CPU animations and looks and feels bad.

Compare that with the corresponding iOS app code:

    self.navigationController?.pushViewController(vc, animated: true)
Then add on top of that other platform-specific things like location detection, push notifications, or camera interaction. The web got those, but years later. The native libraries are simply built for the native use case, and it's a huge competitive advantage.
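
For contrast, here's a rough sketch of what hand-rolling that slide-over navigation might look like on the web (hypothetical class names, and it assumes each screen is styled with a CSS transform transition; a real app would still need back-gesture handling, focus management, and so on):

    // Assumed CSS: .screen { position: fixed; inset: 0; transform: translateX(100%); transition: transform 0.3s; }
    //              .screen.visible { transform: translateX(0); }
    const stack = [];

    function pushScreen(el) {
      document.body.appendChild(el);
      el.getBoundingClientRect();                      // force layout so the transition actually fires
      el.classList.add('visible');                     // slide the new screen in from the right
      stack.push(el);
      history.pushState({ depth: stack.length }, '');  // keep the back button working
    }

    window.addEventListener('popstate', () => {
      const top = stack.pop();
      if (!top) return;
      top.classList.remove('visible');                 // slide it back out
      top.addEventListener('transitionend', () => top.remove(), { once: true });
    });
And even then you still don't get the nav bar, the animated back button, or the platform-consistent feel that UIKit gives you for free.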


That's definitely not the reason. When the iPhone first came out the idea was that all third party apps would be web apps. It was the official way.

Of course everyone hated it because performance sucked and interactions weren't smooth and reliable.


So mobile web was not enjoyable for users, and native apps (which use the APIs that I attribute their success to) won out.

In what way does that go against what I said? (it doesn't)


There's something I don't understand. I've been hearing about "responsive web design" for years, that you should make sure your websites work well on mobile phones, because they are the majority of the traffic. I know that doesn't mean that the majority of phone usage is spent on the web, but still, mobile users use the web, and they use it a lot.


In my first programming job (2000-2004), we mostly used "native" applications (the company was mostly mainframe, but the "PC programming" was done in VB6). Then the .com crash happened and we hired all of these "web developers" who were out of work, and all they knew was the web. So the next programs we wrote were all of a sudden web based. I remember thinking they weren't really a good fit, since we were doing mostly file processing work. They were slower and needed permissions and required servers.

I would imagine something similar happened everywhere. All of a sudden there weren't many web developer jobs, and web developers had to take "regular" programming jobs. And when all you have is a hammer...

In the long run, of course the Web grew, but in that short period of time there was definitely a displacement of skills and the rise of "Web Applications". I'm not talking of public applications, but rather internal business applications.


The web took over desktop when browser functionality approached that of desktop apps. Almost every desktop app is now web or Electron based, with some interesting exceptions that I know of:

1. graphics / GPU intensive apps
2. memory / CPU intensive apps
3. background process / push notification apps

Both points 1 and 2 can easily be games, which is why most of them are still native apps. I think (cmiiw) WebAssembly is trying to tackle this.

Now the 3rd point is, IMO, the strongest reason browser-based apps cannot displace native on mobile, besides UX, performance, and the web's missing functionality on mobile. People simply cannot keep a browser running in the background on their phone all the time for processing and push notifications. It's far easier with native apps.


I guess this is because on mobile devices, manufacturers like Apple find it much easier to limit the development of the web. Put another way: on the desktop, the owner of the most popular browser is not the OS maker. Imagine if all we had now were Windows and IE...


> Why did the web take over desktop and not mobile?

Because the desktop is not portable, your applications still need to move with you. Phones are with you wherever you are, so interaction always happens on the same physical hardware.


If IE was the only browser allowed on Windows, we could write articles about how native apps on desktop are just so much better than all web-based applications, and I guess it’s just the way things are.


I remember native mobile apps disrupting the web, rather than the other way around.

The reason native apps disrupted web apps was because they were (all else equal) more performant, and had access to more of the device, and because companies drove users to using their apps because they could get more value out of them that way.

My take on it at the time was that native apps gave companies a way to reset their relationships with web users, and bring them into an environment where users had less control, and companies had more.


I am not sure in what world. 90% of the applications I use are native desktop apps. Look ma, no web.


You don’t use Gmail, Google Docs and Maps, Outlook, Youtube? My wife uses a transcribing web app called Otter. All the fantasy map authoring tools I’ve used recently (for TTRPG use) are web apps, like Inkarnate. Arguably even Facebook, Reddit and LinkedIn are really more web apps than web sites. On mobile most people would use an app for all of these services, on desktop they’re mostly tabs in a browser.


Not GP but I don't find their claim implausible.

> Gmail, Google Docs

Outlook and the Office suite are common desktop apps, especially in certain industries.

> [Google] Maps

Most people probably use Maps on mobile. Most of my use of Maps outside mobile is definitely just in Google search results.

> Outlook

I use OWA but long-time Outlook users often still prefer the (extremely legacy) desktop app.

> YouTube, Facebook, Reddit

Some people primarily consume media on mobile devices. Especially if they have a dedicated work or gaming desktop.


I do not use Gmail. I have my own domain like since forever and use Thunderbird to read my email. I do not use Google Docs, I use Office Suite from SoftOffice instead.

I do use YouTube, Google Maps, Google Translate, Netflix and the Amazon store. I do not think they make up anywhere close to even 10% of what I use. I do not use Facebook, Reddit and LinkedIn. Well, I do have accounts and I used those a couple of times to find a particular person, but that's the extent of it.

On mobile, the only apps I use are an offline GPS - OsmAnd - and some that control my gizmos like a drone. Other than that my phone works strictly as a phone and does not even have a data plan.


Kinda similar boat here; I kinda hate Electron for similar reasons, it's just bloated as hell.


the biological analogy is not really fitting. there is no natural ecosystem that is as concentrated a monoculture as the technology sector.

in nature, if your carefully nurtured monoculture (don't call me Microsoft) failed, your field would normally grow a dozen types of weeds, not another monoculture. the different tech epochs feel more like the passing of the "torch" from one dominant entity to the next rather than any broad based competition for survival of the fittest. this structure has as much to do with the political / regulatory systems of those eras as any intrinsic aspects of digital technology.

but on the substance of the future of the "web" (=the future of non-owned computing) its rough outlines are already there and they are beautiful: it will break the confines of the "browser" (the OS becomes the browser), and break the confines of the http protocol. mobile will be just another form factor in "convergent computing", offering another UI into both self-sovereign data spaces and federated interaction platforms

mark my words, the era now on its final legs will be remembered with disdain as a stagnant cash-cow period that had no moral scruples and exploited any and all human behavioral failings to turn users into exploitable idiots.


Managing apps on the desktop is a pain. They get their dirty fingers everywhere and they're a pain to uninstall. For lightweight tasks, the web solved this, though local apps still ran smoother than webapps. App stores solved package management (so did Linux distros), and getting a true webapp to perform as well as a local app is more trouble than it's worth, and that's when you have a good network connection.


Let's agree to disagree. The web didn't take over anything; it found its own niche. Native apps, web apps and mobile apps all live in this together. Before mobile, billions of people could not afford a computer, could not have a desktop at their disposal. Nowadays an inexpensive Android (like $20) with enough power to run WhatsApp can be found in the hands of dirt-poor rural Indians. Hence why mobile devices are more numerous. But they didn't eat anything from desktop. Same with the web: it found a place to exist.

Do you want a truly "great oxidation event"? That would be a headband activated by Alpha/Beta/whatever brain waves so you can wear your computer on your head and command it using your thoughts. That's gonna be a true device, that will disrupt both desktop and mobile altogether. Until then the next "wave" is gonna be IoT where you'd play Doom on your coffee maker. Nothing disruptive about IoT either, just another gizmo for us to play with.


For many people wireless connectivity is spotty at best; particularly rural USA, most of Canada, and much of the non-western world.

Whereas home internet connections tend to be fairly reliable.

Anecdotally, I don't rely on web apps because the terrible network reliability that I have means that they are frustratingly slow and frequently outright break. Native mobile apps work just fine.


> terrible network

And it's worse than just reliability. It's also performance.

TCP just does not work well over wireless networks. Its congestion control is designed to allow multiple TCP streams to coexist on a network that has steady bandwidth. It works fine if the network layer is basically stable and the problem is sharing it.

On mobile networks, the available bandwidth fluctuates wildly over a short time period. When the bandwidth drops, TCP backs off exponentially, and when the bandwidth returns, TCP does not ramp back up quickly enough. So it ends up sitting around doing almost nothing when there's bandwidth available.

So even when the network is working, native apps typically perform better because they lend themselves to separating network activity from user interaction. With the web, unless you do a single page web app, navigating to different screens and views of things tends to be a page load, which is just too slow too often.


Properly-written PWAs can keep code and data locally cached to avoid this problem. Of course, most companies just don't bother implementing that stuff.
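
As a rough illustration, "properly written" here usually means a cache-first service worker along these lines (file names are hypothetical; a real app would also need cache versioning and an update strategy):

    // sw.js -- sketch of precaching an app shell and serving it cache-first
    const CACHE = 'app-shell-v1';
    const ASSETS = ['/', '/app.js', '/app.css'];   // hypothetical asset list

    self.addEventListener('install', (event) => {
      // Download the shell once at install time
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
    });

    self.addEventListener('fetch', (event) => {
      // Serve cached assets first, fall back to the network for everything else
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });
After that, only the actual data needs to cross the flaky network.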


If most of the business logic is on a server then caching doesn't help; there's too many misses.

Besides, major draws for building web apps include control of user data and keeping business logic behind an API.


There's nothing technical that stops you from putting business logic in the web client any more than on native (except I guess working with local files, but not many mobile apps do that these days anyway). I'm just saying, if the key benefit is not having to download the app every time you open it, and instead only downloading the actual data as needed (to save on bandwidth), PWAs can serve that usecase (if they're designed to).


I think this is technically viable, but I wonder if a lot of projects fail (from a performance perspective) because teams don't realize the limitations of mobile networks early enough.

So they develop everything on wifi, and it seems OK, and they make certain design choices. Then after they launch, performance is terrible when people try to use it on cell networks. By then it's too late to fix performance because they'd have to re-architect the whole thing.

The lesson they could take away from it -- the one that would be most accurate -- is, "We should have designed our mobile web client differently." But the subtlety is lost and the message everyone learns is more like, "Last time, when we did a mobile web client approach, it was a disaster. Let's never do that again. From now on, it's native apps."


Maybe it is addressed in the article (I find it hard to read), but I always thought it was because internet on computers is/was quite reliable, while on mobile it is either not so reliable or data plans were quite expensive. Like you want to download the app and some data for the app to be able to use it when you are offline.


So the Google suite of web apps has a feature called "Create shortcut" that turns a web app into a desktop app sitting in your launcher. Even with that feature, it's a subpar experience compared to macOS's native apps. So I wouldn't be so quick to claim that web took over desktop.


I thought it was because, originally, native apps on mobile were more performant than web apps, which did not need to be optimized for the device they would run on, nor efficient in terms of battery usage.

Desktop hardware was more powerful and forgiving to unoptimized web apps or poorly performing web engines.


> native apps on mobile were more performant than web apps, which did not need to be optimized for the device they would run on, nor efficient in terms of battery usage.

It’s still the case now, desktop or mobile.

Not to mention lack of accessibility features.


Because it's the best fit. They're both unique complex systems which leads to different use cases and pros/cons.

Mobile is best taken advantage of via native. It has a rock-solid distribution system, which sort of doubles as a marketing platform? It's more portable. There are lots of system features exposed to native apps. Your app can be always on, collecting telemetry and pushing notifications. The net connection is not always great.

Desktop is best taken advantage of via the web. Portability is a pain for native desktop, and there's no unified distribution system. You don't need background apps to be on all the time, people spend more time with their phones anyway. There isn't as much unique hw/sw functionality. But you have the fastest net access.


Because mobile phones' hardware and software are much more tightly integrated and closed systems than desktops or laptops. Bringing the web (or web tech) to phones is both technically more of a challenge and, economically, something Apple in particular opposes. Ceding control to the open web couldn't really be resisted on desktops.

However I'm relatively confident that the web is going to eat mobile regardless, starting with web technology and cross platform JS based frameworks gaining popularity, regulatory action against platforms, and when we're at the point where the web on mobile isn't disadvantaged any more it should be the clear preference.


Because Internet Explorer? Mobile web was very bad until a few short years ago. LTE and Chrome have been changing the narrative. Even when web apps are fine and readily available, I prefer installed apps for some things, such as MS Office at work, and Slack and Discord on desktop. Push notifications require local installation on mobile or desktop, unless you like to leave a tab open. It's often the mobile messages that bring the human back to the desktop nowadays. Sensors such as iBeacon and NFC require local installs, and there's no guarantee of a camera or scanner on a desktop.


> I’ll offer my sense of the landscape in part 2, next week.

God. Fucking. Damn. It. Just wait and publish the whole thing. There's no Netflix for the web. I don't want to subscribe, I just want the next episode when it comes available. Not the next 100 episodes. One of the great things about the current video systems (HBO, Apple TV, etc) is that these shows accumulate a few seasons before they catch fire. And then you can watch an episode. And then maybe another episode. Or not. You don't have to wait for next Thursday's episode of Friends.


Short version: browsers evolved as desktops evolved. In 2008 when mobile was new, an iPhone (600MHz 32-bit CPU, 256MB RAM) or T-Mobile G1 (528MHz, 192MB RAM) was pretty puny compared to the desktop machines of the time (1GHz+, 4-16GB RAM). As a result, if you wanted to do something meaningful, you had to compile.

Originally, Apple said that iPhone apps would be JavaScript and web view based. Palm bet the farm on JavaScript + webviews for webOS. Apple changed course quickly, and webOS just didn't get enough traction.


Palm’s webos was likely 10 years ahead of its time.

It was one of the few devices where web based technologies were first class citizens.

Applications and the entire UI were built in JavaScript and HTML.

iOS and Android were, and are, not as able to express interactions via the web. It was difficult to even compile web apps into mobile apps for the iOS App Store.

The desktop didn’t belong to anyone and was reasonably open from an app perspective. So was the web.

Maybe webOS or something like it will emerge in the future and let us see just how tightly managed the mobile environments we live with really are.


Constraints.

More memory requires more power and weight, but portability and features restrict battery size.

Non-native languages use garbage collection, which implies 6x memory or limited performance. It is possible to emulate manual memory management, but this does not seem a common practice.

Hence, if developers write apps the standard way using web technologies, they will perform poorly on mobile devices compared with native apps.

This is primarily about iOS though. Android is a different world, since apps there primarily use the Java ecosystem.


Inertia and path dependence (or in plain language, history). Web started on the desktop and the browser co-evolved with desktop focused web apps.

Desktop as a platform is also more hostile to native apps: a lot of users don't have permission to install native apps, and another slice of users is sufficiently clued in about security that they don't want to. These probably make up >50% of your potential users. The same effects exist on mobile too, but in smaller portions.


Mobile platforms deliberately worked to prevent it by making PWAs hard. A secondary reason is that desktops are way faster and have way more RAM than most mobile devices.


- Because desktops weren't limiting web capabilities. Apple does.

- Because PWAs started working on iOS in 2018.

- Because native solutions are faster than JavaScript and HTML.

- A head start for development tools.


I can give one answer: because web pages were artificially throttled on devices (at least the iPhone) and mobile applications were promoted with an endless stream of money from central banks - if you took any kind of IT subsidy (at least in the EU), making a mobile app was obligatory. A mobile app is much better for malicious and hidden functionality - that's why the transition was needed. Cheers!


I'd like to add that web performance on Android was terrible, at least back when I had a Nexus 2. Collapsing threads on Hacker News was terribly slow. Any Discourse forum site was almost unusable. Then I switched to an iPhone and it was much better.

Anyway, I just mean that "modern" mobile sites couldn't have taken over native apps on Android. Its web engine is just not performing well enough.


I think there’s a lot of good insight in the article, but it may be overthought. When I saw the question in the title, the answer that immediately came to mind was:

1. Switching apps on mobile is significantly less friction than switching browser tabs. (And it’s somewhat less friction for many people than switching apps on desktop.)

2. There’s significantly more investment in quality, mostly native, apps on mobile platforms.


Why do we as a community feel that one technology has to “take over” another all the time? Native apps continue to exist on desktop alongside web apps and that’s fine, as the capabilities of PWAs increase we might see more coexisting with native apps on mobile. With web assembly and projects like wasmer the lines are likely to become blurry anyway.


For me, opening and dealing with the browser and web apps on mobile is more of a hassle than doing the same thing on desktop. On the latter, I have my browser with pinned tabs with the apps I use (email apps, project management apps, etc.) and it just stays there.

Trying to do the same thing on mobile is a pain, so it's more convenient to use the mobile apps there.


Safety is the only differentiator. Sun legally controlled sandboxed execution on PCs, while abandoning it. HTML/JS was a poor substitute for iOS/Dalvik, but the only option without legal issues.

The linking issue is important, but not present in the mobile alternative. And networking support via XmlHttpRequest is trash compared to native sockets.


Is it true that the web on mobile is dying or dead? I'm using HN here on mobile. At least half the visitors on most sites I build are on mobile. My most oft-used app is by far the browser.

And like others have said, often mobile apps are just thin containers around an embedded web app.

Facebook etc. apps are hugely popular, but the web is still very alive.


Regarding the second question, I think the real reason the web didn't disrupt mobile is that mobile has been too slow. "Native" performance gets over that last hump, and I think that made the difference. The other reasons and observations definitely contributed, but I feel we forget just how sluggish things were.


This is one of the least clear and most overwrought articles I’ve seen on HN in some time. The extended comparison to algae is not helpful. Neither system - modern computing or ancient evolution - is simple enough to benefit from a direct comparison.


…did it?

I still use tons of native apps on my laptop, and almost exclusively native apps only on my desktop.


You're thinking about software, not apps.


They’re the same thing. “App” is short for “application”, aka installable software.


I think one main missed point here is that the web-as-app shift on desktop happened before computers were super-mainstream and used daily. It happened when most of its users were fine with new techniques to get stuff done.


Actually, the web itself happened after computers had gone super-mainstream. That led to the dot-com boom and bust of the late 90s/early 00s. From there, web 2.0 developed. Google started pushing its plugin (Google Gears) to deliver webapps, but uptake was too low and therefore their webapps suffered. Google solved that by building a browser that delivered the JavaScript speed they needed. That led to a proliferation of web apps as the term is understood nowadays.


Because of the duopoly that pushed native apps way harder than the web, mostly.


> The main reason I care about the Web is because it's the world's biggest software platform that isn't owned.

And now we live in a world where the web is owned by Chromium and thus Google.


I think web is the consumer preference. I’ll be 100x more likely to use your service if there’s a workable web version.

Unfortunately the gatekeepers don’t expose most phone functions to websites.


Ads and privacy are horrendous in native apps. The user has zero control over what they can do to enhance their experience. Of course the web will never take over on mobile.


Apple contributed hugely to killing Flash and Java on the desktop (let alone things like Silverlight). Also, non-native code is still 2x slower, all things considered.


Because it's slow, requires a connection, because it behaves weird (rubberband, scrolling, selecting text, etc, etc), no discovery (app store), etc.


Apple. It is as simple as that. If another browser could run on iOS (not having to use safari under the hood), then things could be different.


Constant authentication dialogs kills it for me.


great article! the fact that the same businesses and applications exist on both web and mobile doesn't mean the "web has to do apps". The flexibility you get on a desktop or a laptop cannot compare to the limited UI of the mobile apps. My guess is they'll both co-exist for a while.


- Computers have more CPU than mobile phones.

- Standard and easy installation process of Apps.

- Apps born with modern UI.


Strange.

I use Web apps on mobile whenever I can.


I hope the web takes over mobile. So sick of being nagged at to install apps.


“Discovery shifted from search to app store.”

No.

The App Store is like going to the dentist.


The article appears to see "the web" as "browsers". This is apparent in the "OS within OS" language. The web isn't the browser, although that is its facade. If you consider search on the desktop as "where do I find X?" and on mobile as "how do I get X done?", then sans the real web the mobile device would be far less useful. From that view, it almost looks like the web already ate mobile... just that we don't see it... as made clear in this classic https://xkcd.com/1367/


And for true-blue native apps, the web hasn't quite eaten desktop just yet. Try to get a professional video editor to use a website for production work like movies, or get an audio engineer to use a web-based DAW (the latter is getting a little more feasible). You'll find it similarly hard to pry Maya or 3D Studio Max from the hands of 3D professionals.


Mobile was built as a spyware platform down to the roots. You can make a lot of money with spyware. It's that simple.


Is it hard to see? App stores and the motivations of the companies governing them.

Nothing else than that is needed to explain it.


Personally I hate using phone apps. Whenever there is a choice I just use the browser. I think apps are a fad. It is completely possible to just use a browser, on or off your smartphone, for almost everything you can do with an app.


One word: Apple.

Preventing PWAs is detrimental to users and an abuse of their monopoly.


It's not sufficient as an explanation.

For desktop browsers there are also no application launch icons.

It probably has most to do with the limits of the UX of a phone. On desktop, pulling up a browser, typing in a URL, and hitting autocomplete is done in 2 seconds. On mobile everything is so much more painful, hence a lot more desire for one-click wonder buttons.


Wait, there is a sort of speed-dial/favorites tab in every browser, desktop and mobile. E.g. a food ordering webapp is my first “wonder button” in Safari’s empty tab. I could even put it on the phone home screen (no need for that, though).


Chrome has the ability to save sites as a desktop shortcut. It even prompts you to do so on some web apps, like Stadia, so you can use it like a more "normal" desktop app.

Safari on iOS can do this, too. You can pin a website to your home screen, which is something that I have done for PWAs.

It could have easily gone another way where this was the "normal" behavior of the mobile ecosystem.


You should read the article. To sum it up: the web beat desktop apps because traditional OSes were not designed for a networked world.

The iPhone, however, was designed for a networked world, so it didn't have all the limitations of desktop OSes.

The web wasn't designed for a mobile world, so it had a lot of limitations: hard to do a good UX, passwords to type on a tiny keyboard, no offline mode (or one so complex to use that no dev does), URLs vs app icons...


Only that's not how the world works.

The world works with power: Apple used their power to strangle the entire idea of the web on mobile by putting a brake on change. Why? Mobile web tech helps their competitors more than it helps them.

If Apple's own platform / APIs had had the same rate of change as they effectively forced on the mobile web, then they would be a decade behind Android.

This is exactly the same thing as Microsoft did in the 90s with productivity software. They had secret undocumented APIs which made Office a fantastic experience and non-office "meh".


If you look carefully you'll see this tactic all over: throw mud in your opponents' eyes to slow their rate of change.

See: US banking (in the EU I can transfer cash, instantly, for free to a friend's bank account and have been able to do so for a decade), Fossil fuel vs Climate change, most commercial standard bodies. It's everywhere.


> US banking (in the EU I can transfer cash, instantly, for free to a friend's bank account and have been able to do so for a decade)

This one actually seems like a fault of the US government dragging its feet on making advancements on a nationwide protocol for transferring money and staying stuck on ACH. So much so that the biggest banks had to get together and create their own system 10 years ago:

https://en.wikipedia.org/wiki/Zelle_(payment_service)

> Launched in April 2011, clearXchange was originally owned by Bank of America, JPMorgan Chase, and Wells Fargo.

Supposedly, the US government is finally rolling out a proper system in 2023:

https://www.federalreserve.gov/paymentsystems/fednow_about.h...


  > Apple used their power to strangle the entire idea of the web on mobile by putting a brake on change.
but this seems to neglect the fact that

  1. apple/jobs wanted web tech initially for apps
  2. native app devs were beating down the doors for access to native, not web apis


That was very very early on. Once they were big and Android came along the strategy was turned around.


Mobile Web is enough for like 90% of CRUD stuff, regardless of PWAs.

Yet, most businesses still go native due to the development experience, like not having to magically turn <div>s into beautiful dropdown combo boxes with multiple selection via an HTML/CSS/JS soup that doesn't feel quite right next to the native ones.


I don’t think so - PWAs on Android are also usable at best.

It’s hard to twist the web into something it isn’t. So much effort goes into making the web do what native apps already do instead of trying to complement all that towards a better experience.


Apple is the reason. App Store greed. We, with our "design and virtue signalling" addiction, helped a lot in the creation of this monster. As far as I can remember, the real UI/UX innovation was killed fast. Palm/webOS.


Is android native development still a thing?


It's always confusing when people say "native" on Android... you never know if they mean "not a web app" (SDK) or "machine code" (NDK). Do you mean the latter?


I'm pretty sure it's "not a web app" in most contexts, but that's an interesting point I hadn't thought of before.

What do you mean by "machine code" though? To me, the most native you can get is a Java/C++ app that uses Android APIs directly. Anything lower is systems development, something not generally possible for normal developers.


> What do you mean by "machine code" though? To me, the most native you can get is a Java/C++ app that uses Android APIs directly.

I mean like C/C++ (which compiles to machine code) and not Java (which compiles to bytecode).

Same as the distinction the terminology made on desktop: https://stackoverflow.com/a/855774


Android UI is implemented with Java libraries. If you want the native Android L&F, you need to use those libraries. You can write your app in C++ and invoke those libraries via FFI, but that's an extremely cumbersome way to develop and does not bring any advantages. Java is the native way to develop GUI apps for Android. And recently Java was replaced with Kotlin, so nowadays Kotlin is the native way to develop GUI apps for Android.

Just like C# is one of the native ways to make Windows applications.


> Just like C# is one of the native ways to make Windows applications.

Please do everyone a favor and, at least for the sake of desktop development, don't misuse the terminology like that if you want people to understand what you're saying. The entire reason ".NET Native" was developed was that C# did not produce "native" applications. Saying C# produces native Windows applications is going to confuse the heck out of everybody.


Someone is wrong on the Internet.

.NET has always supported AOT via NGEN, although it only supports dynamic linking and was optimized for fast startup of applications.

Windows 8 introduced the Bartok compiler used by Singularity, where applications would be pre-AOT compiled in the Windows store minus linking, with on-device linking happening on installation.

Windows 10 improved the latter scenario with the introduction of .NET Native, slightly based on the Midori experience.

The new Windows 11 store is still fully based on .NET Native, as it makes use of WinUI 2.6.


> .NET has always supported AOT via NGEN

I didn't claim otherwise. But AOT != "native".

What makes something "native" is not merely the fact that you compile to machine code. It's one of the main features of native code but far from the only one. Again: there's a reason they came up with ".NET Native" and called it that despite the fact that NGen always did AOT. And there's a reason the Android NDK has an N, unlike its SDK. It actually means something beyond AOT.

You can go against the grain if you want and call them all native apps, telling people they're Wrong On The Internet, but you're just confusing people.


What makes an NGEN compiled WinForms .NET application not native on Windows?

Curious to find out, how those people distinguish it from an MFC/ATL or an Win32 one.


It's not just one thing. Just like what distinguishes a human from a chimpanzee isn't just 1 thing.

But see for example https://stackoverflow.com/a/855774

If you still don't like the terminology though, I'm not going to keep arguing. I didn't coin the term. You should go ask Microsoft why they didn't call C# native when NGen was already there. I'm just saying that terminology is already established and you're confusing people by using it differently.


That is one possible interpretation of the term; yes, Microsoft does use native/managed to distinguish between environments with a GC runtime and those without.

Which isn't what users talk about when arguing about native apps, they don't even know what a GC is.


> Which isn't what users talk about when arguing about native apps, they don't even know what a GC is.

Because I'm sure if you went and asked the vast majority of "users" what a "native app" is, you'd get a coherent answer instead of a blank stare.

Let's lay this matter to rest. You don't like the definition, I get it. It's fine.


Would you consider something that's not a web app, but built with a higher level framework like Flutter to be native in either sense?


Me? Personally I hate any definition that conflicts with the old one, but that ship sailed long ago.

Nowadays, my understanding is, if it's not loading a webpage off the internet, people call it "native". Doesn't matter what framework it uses to actually display things (even if it uses web technology).


Java compiles to machine code on Android, via JIT and AOT compilers.


I'm aware. Windows has NGen too. But that's not what makes people call Java or C# a native language, or apps based on those native apps.


Native is overloaded, however Java is the "native" language of the Android SDK, and on Windows unless one has a morbid pleasure to still use MFC, ATL, bare bones Win32, or use C++/WinRT like ATL is fashionable again, .NET UI toolkits will be the way to go.

Or are you going to argue that Visual Studio, SQL Server Management Studio, Microsoft Store, Microsoft Blend, Office AddIns, Power Automate Desktop aren't native?


> unless one has a morbid pleasure to still use MFC, ATL, bare bones Win32, or use C++/WinRT like ATL is fashionable again, .NET UI toolkits will be the way to go.

I'm not sure how the discussion turned from "native" to "the way to go".

I get the feeling like you're jumping all over HN trying to reply to me at every comment you find because... you took "not native" as some kind of insult to tools you like/consider superior?


Most of the time, I don't bother to read the author, so it is a matter of chance that those posts are yours.

I am jumping because the distinctions you are making aren't the ones that users care about.

So each one can go on their merry lives with their own dictionary version.


Native on Android, is what the native platform SDK offers as development experience when you install it on the computer.

So Java, Kotlin, C, C++ and Web.


I meant “not a web app”, so both, I guess.


I've only ever seen the NDK used for games


and all the packers, protectors and obfuscators that are sold as "must have"


Usually the people that buy those have never heard of IDA Pro and Hex-Rays.


Yes, very much so.


Apps


I like the overall framework of trying to identify asymmetries that web was able to exploit in each epoch. Unfortunately, the details are a real mixed bag:

* Why did the web disrupt desktop?

It's not even really worth addressing these, because they're all so weak. None of the actual reasons that the web was successful are here. And the ones that are there are nonsense. For example, to suggest that security in the browser was a significant factor in the success of the web on the desktop is utterly detached from reality. Even if average people had cared (which they didn't), they'd have been better off trusting software bought at the store than going into the wild west of the internet with the shambles that we called browser security back then.

* Why didn’t the web disrupt mobile?

- the web’s network advantage had evaporated

If anything this argues against the thesis that "make anaerobes do photosynthesis" is insufficient. How would apps become successful by merely matching the existing networked nature of the web? At best this item is filler that adds nothing.

- The iPhone was a completely new thing

Yes, this is actually a good point. New paradigms and new APIs for a new type of device. Makes sense.

- The basis of performance shifted from small binaries to smooth interaction

What? No. This is silly. There's no way that the web prioritizing download size had a significant impact here. And "smooth interaction" has absolutely nothing to do with binary size, so I have no idea why the author is associating them.

- Navigation shifted from keyboard to springboard

Kind of a repeat of the smartphone being a new device, but okay, good additional flavor.

- Log-in disappeared completely

I mean I guess. But if there's any difference, it's more a matter of shared devices not being so much of a thing with phones. Web is perfectly capable of "one-and-done" logins, and many sites do that. And you could easily have a longer session cookie on your mobile site based on the single-user-per-device assumption.

- Discovery shifted from search to app store

Decent point, but only by accident. "Lowering the friction of software install" is nothing, since web has zero friction in that regard. I don't think that typing-free discovery was really that huge of a deal either. But having a large ecosystem of app developers, with ratings and featured content, was enormously influential. This might have been possible with web technologies but obviously nobody did it well enough to gain critical mass.

- Engagement shifted from links to icons

You could have single-page applications on web, with shortcuts on your home screen, send notifications through the browser, etc. You can use icons instead of links on web. Behind the pile of buzzwords, this item is nothing. "Deep, branded experiences" really?

- Business models expanded to IAP, subscriptions, and app purchase.

Yeah I think this is a good point. Having a single payment processor in the play store / app store that knows your payment details did reduce friction.

- Security shifted from sandbox to app review

No, no, no. Not this again. People don't really care about security, and anyway again, app security was still dogshit by the time it had won on mobile, just like browser security was by the time it had won on the desktop. Apps requesting tons of permissions to spy on users were rampant for years and years, even with apps being reviewed by apple/google. And how does this item even constitute a competitive advantage? You can have a browser remember permission to access your location or camera or whatever. There's nothing here.

I feel like this article really dazzled with its introduction tying in business and evolution and setting up the framework for the problem, but then fell to absolute pieces when it came time for the author to actually know anything about technology. Even if there are some good points here, the missteps really undermine their credibility. The analysis of the success of web on desktop is especially lacking, but I don't feel like it's a great analysis on the web vs app section either...


Unfortunately the author didn't do basic diligence on that intro - anaerobic photosynthesis is indeed a thing and was likely around for about a billion years before the water splitting component evolved (such anaerobes get their electrons for photosynthesis from sources other than water, such as reduced iron in an anaerobic ocean).

All the author had to do to check this was type 'anaerobic photosynthesis' into any search engine. (Add 'banded iron formations' for some really interesting stuff).

So if you get something like that wrong, and it's the analogy at the heart of the argument, well... it does make one question the accuracy of the rest of it.


Lot of words to say "Apple's App Store".


It will take over mobile too. It’s just a matter of time.


I've been writing a big React app for about a year. It runs really well on mobile too, but it is a bit hamstrung by iOS Safari due to some errant touch interactions I've not been able to turn off despite much trying: double tapping or pinching in certain random areas can lead to zooming, and there is no good way for the user to undo it.

I'm going to agree that the reason is Apple.


Is this the reason your app doesn't work on iOS: https://reactjs.org/blog/2018/05/23/react-v-16-4.html#pointe...

Safari does not support pointer events.


And there are polyfills for it: 'If your application depends on pointer events, we recommend using a third-party pointer events polyfill.'


Yes, I agree, how selfish


"The sandbox made software safe. Installing software on a PC is a risk. When you install an app, it can do anything to your computer, no restrictions. This design choice was enormously generative, and enabled the open-ended evolution of whole new product categories. It also enabled the evolution of viruses. The web was globally networked, and could not assume every link was safe. Code on the web was sandboxed—trapped in a bubble—and it could only interact with your computer through carefully controlled APIs."

I don't agree with this paragraph. There are many ways to constrain what a native application can do on a PC and conversely there are many ways a malicious website could harm you using Javascript.


Only by technical people and with massive tradeoffs. For example, the fact that applications could, until very recently, read almost any file anywhere on the computer that the user could, even if, clearly, that application had no reason to be able to read that file in particular.


Aside from running a native application in a VM, most other sandboxing solutions are escapable, mostly because no mainstream desktop OS was architected with sandboxing in mind. Jails are as good as we got, and even then the kernel is far too insecure IMO. It's very hard to support a POSIX API and then retrofit sandboxing. And with VMs, there are numerous escapes; it's just another frontier of security issues, not unlike browsers. But browsers were built with sandboxing as a core part of the design, and if you avoid the newer, far buggier APIs, like Bluetooth and WebGPU, I'd argue that the browser is relatively safe for a layperson. And much more practical than a VM.


This is an example of an imaginary web browser to surf the imaginary internet, using OpenAI's Codex. It's built into Pen.el and is completely free as in freedom.

https://news.ycombinator.com/item?id=28489942

I made a post on HN but didn't get enough hits, so I'm crossposting to relevant HN threads such as this one.

https://semiosis.github.io/posts/the-imaginary-web-with-code...

- Visit any website you can imagine, even the ones that are not real!

- Edit and re-imagine as you go see alternative website realities – change the sentiment of the author!

- Peer into the future – read about GPT-5!

- Generate relevant URLs (often real, sometimes imaginary) from any text selection

- Read an article on anything from your favourite blogger.

- … This is the future of the web.


Native platforms are just a transitory phase -- because the web people couldn't make up their minds fast enough about suitable APIs for mobile. Native mobile apps are basically imitating the web; they even have URLs and links. It's inconvenient, since mostly they take you in and out of web pages. When Google's economic interests align with the web again, we'll probably see the end of the native-binary packaging of the web, just like we experienced the end of CD-ROMs.


Because of brainwashing and manipulation by big corporations, which made people believe that native mobile apps are superior to web/browser-based ones. The reason for this is that big companies didn't want to be sandboxed in a browser; they wanted full access to users' devices, sensors and data.

It's the same reason why software developers use paid enterprise subscription services instead of open source software. They were successfully brainwashed to believe that the open source software is difficult to operate or doesn't work as well as the enterprise software. Meanwhile enterprise service companies are collecting and monetizing data from all their apps... In addition to recurring revenue.

People are suggestible and easy to manipulate, that's why. If you flash some big stacks of cash in front of their faces, they'll believe anything you tell them. You don't even need to give them any cash, just show them you have it. People follow the money like sheep to a slaughterhouse.


Appreciate your comment.


I think this article misses out on a couple of things. First, writing a cross-platform app for desktops is a lot more work than a web app, especially the GUI aspect.

A desktop app is just as capable of using the internet as a phone app. So that point does not make any sense to me.

The main reason apps are more popular on phones is, first, that battery usage is going to be better, and that was especially important early on when phone processors were less capable.

Second, there has been a big push, particularly from Apple, so they could capture those purchasing fees. Combine this with the fact that they prevent device users from side-loading unless they find an exploit to jailbreak their device, and Apple's incentives are clear.

Combine this with the fact that Apple drags its feet at times when it comes to implementing standards for browser features. Developers will often just deal with implementing a native app; also, because users can't side-load, you're stuck with only one browser engine on the iPhone.

This bleeds over to Android, as it has pushed the de facto method of getting apps on a phone to be using a store. Google is not really going to try to change that either, because they are also collecting fees. A developer's options are certainly freer on that platform, but again, if you stray too far from what has become the de facto method, getting users will be difficult.



