Modern browsers these days are powerful things - almost an operating system in their own right. So I'm asking the community, should everything now be developed as 'web first', or is there still a place for native desktop applications?
As a long-time Win32 developer, my only answer to that question is "of course there is!"
The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.
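As a rough illustration of how small a useful native program can be, here's a minimal Win32 sketch (the file name and build command are just examples; compiled with size optimizations it comes out in the KB range):

    // hello.cpp - a minimal single-binary Win32 program (illustrative sketch).
    // Example build with MSVC: cl /O1 hello.cpp user32.lib
    #include <windows.h>

    int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
    {
        // One API call, no bundled runtime, runs on Windows versions going back decades.
        MessageBoxA(NULL, "Hello from a tiny native binary.", "Win32", MB_OK);
        return 0;
    }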
Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.
For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II. Even the web stuff at the time was nowhere near as disgusting as it is today --- AJAX was around, websites did use JS, but simple things like webchats still didn't require as much bloat. I lived through that era, so I knew it was possible, but the younger generation hasn't, so perhaps it skews their idea of efficiency.
In terms of improvement, some things are understandable and rational, such as newer video codecs requiring more processing power because they are intrinsically more complex and that complexity is essential to their increase in quality. But other things, like sending a text message or email, most certainly do not. In many ways, software has regressed significantly.
I recently had to upgrade my RAM because I have Spotify and Slack open all the time. RAM is cheap these days, but it is crazy that those programs take up so many resources.
Another program I use a lot is Blender (3D software). Compared to Spotify and Slack it is a crazy complicated program with loads of complicated functionalities. But it starts in a blink and only uses resources when it needs to (calculations and your 3D model).
So I absolutely agree with you.
I also think it has to do with the fact that older programmers know more about the cost of resources than younger programmers do. We used computers without hard disks and with only KBs of RAM. I always keep this in mind while programming.
The younger programmers may be right that resources don't matter much because they are cheap and available. But now I had to upgrade my RAM.
It really surprised me when I downloaded Godot only to get a 32MB binary. Snappy as hell.
Web apps masquerading as desktop apps are terribly slow and it's a surprise we've got so used to it. My slack client takes a few seconds to launch, then it has a loading screen, and quite often it will refresh itself (blanking out the screen and doing a full-blown re-render) without warning. This is before it starts spinning the fans when trying to scroll up a thread, and all of the other awkward and inconsistent pieces of UI thrown in there. Never mind having to download a 500MB package just to use a glorified IRC client.
I'm really enjoying writing code outside of the browser realm where I can care a lot more about resource usage, using languages and tools that help achieve that.
It's interesting to compare Ripcord[0] to Slack. Ripcord is a third-party desktop client for Slack and Discord. It has something like 80% of the features of the official Slack client and a simpler UI (arguably better, more information-dense), but it's also a good two orders of magnitude lighter and snappier. And it handles Discord at the same time.
I wish so much that 3rd party clients weren't directly against the TOS of Discord. I sorta miss the old days where it seemed like anyone could hook up to MSN/Yahoo/AIM.
I wish that too. More than that, I keep wondering whether there could be a way to force companies to interop, because right now you generally can't, without getting into some sort of business relationship with these companies. That's the problem with services - they take control of interop, and the extent to which interop is allowed is controlled by contracts between service providers.
While it does not explicitly state that, it does say:
"(ii) copy, adapt, modify, prepare derivative works based upon, distribute, license, sell, transfer, publicly display, publicly perform, transmit, stream, broadcast, attempt to discover any source code, reverse engineer, decompile, disassemble, or otherwise exploit the Service or any portion of the Service, except as expressly permitted in these Terms;" [1]
Given that the API is not public if you are not using a bot key, I would think that using it with a third party client would take some form of reverse engineering.
The devs also stated that other client modifications like betterDiscord are against the TOS.
> I also think it has to do with the fact that older programmers know more about the cost of resources than younger programmers do.
I'm not convinced it's the programmers driving these decisions. Assuming that it takes less developer effort - even just a little - to implement an inefficient desktop application, it comes down to a business decision (assuming these are programs created by businesses, which Spotify and Slack are). The decision hinges on whether the extra cost results in extra income or reduced cost elsewhere. In practice people still use these programs, so it seems the reduced income is minimal. What's more, the "extra cost" of a more efficient program is not just extra expense spent on developers - it's hard to hire developers, so you probably wouldn't just hire an extra developer or two and get the same feature set with greater efficiency. Instead, that "extra cost" is an opportunity cost: a reduced rate of implementing functionality.
In other words, so long as consumers prioritise functionality over the efficiency of the program, it makes good business sense for you to prioritise that too. I'm not saying that I agree with it, but it's how the market works.
> In other words, so long as consumers prioritise functionality over the efficiency of the program, it makes good business sense for you to prioritise that too.
And the kicker is, consumers don't have a say in this process anyway. I don't know of anyone who chose Slack. It's universally handed down onto you from somewhere above you in the corporate hierarchy, and you're forced to use it. Sure, a factor in this is that it works on multiple platforms (including mobile) and you don't have to worry about setting it up for yourself, but that has nothing to do with the in-app features and overall UX. Or Spotify, whose biggest value is that it's a cheap and legal alternative to pirating music. And that value has, again, nothing to do with software, and everything to do with the deals they've managed to secure with artists and labels.
I exercise my preferences wrt. Slack by using Ripcord instead of the official client. Most people I know exercise their preferences wrt. Spotify by using YouTube instead (which is arguably lighter resource-wise). And speaking of alternative clients, maybe that could be the way to go - focus on monetizing the service, but embrace different ways of accessing it. Alas, time and again, companies show they prefer total control over the ecosystem surrounding their service.
> And the kicker is, consumers don't have a say in this process anyway. I don't know of anyone who chose Slack. It's universally handed down onto you from somewhere above you in the corporate hierarchy, and you're forced to use it.
The consumer here is the business itself, not their employees.
Technically yes (well, the customers, not consumers), but that's the problem itself: the feedback pipeline between end-users and producers is broken because the end-users aren't the customers.
As a younger developer I'd say I agree. But it's not just developers being used to resources being plentiful.
I do webdev mostly, and there it's also a matter of management. I want to optimize applications to be less hungry; those are interesting challenges to me. But I've been told by management to just upgrade the server. Either I'd spend a day optimizing and maybe fix the issue, or we just spend 50 euros a month more on a server.
Sometimes the optimization is not worth the effort. For applications like Blender? Optimization means a lot.
Yes, that was my thought process as well. But management didn't agree. To them, the short term cost of me optimizing the problem was higher than the long term costs would be.
Something I've noticed over my career: programmers tend to get super beefy machines. My machine has 64GB of memory and 12 cores. But the typical users of our software don't have anywhere near those specs, yet programmers often just say "it worked on my machine" without a thought about the difference.
More seriously though, Spotify and Slack are optimised to intentionally be huge time wasters, so it makes sense the organisations that produce them don’t care about performance / efficiency.
Most Spotify user-hours are probably office workers or students pumping music into headphones while working. If anything it's a productivity application because it trades flagrantly unnecessary resource usage (streaming the same songs over and over) for users' time (no more dicking around crafting the perfect iPod).
On the topic of flagrantly unnecessary resource usage...
My first child was born six months ago. Newborns (we discovered) sleep better with white noise. So of course we found a three hour white noise track on Spotify and played it from our phones for every nap, never bothering to download it.
I find it hard to believe that at least some of that data wasn't cached on your device. Setting a track to be downloaded just means the cached data is evicted differently. If you run their desktop client with logging enabled you'll see this happening, and I'd say it's likely to be the same across platforms. That is of course the actual reason they have a non-native app - to reuse the codebase and save money.
But I can't. My RAM is soldered on. How many tons of carbon dioxide should I emit so that you can use React? There are ways to do declarative UI/state management without the DOM...
My computer still computes with 2 GB of ram. It’s just that developers are gluing more and more stuff together to do things we did on Pentium processors with 64 MB of ram.
I guess the question becomes: what is the native ecosystem missing that means devs are choosing to deliver memory/CPU hungry apps, rather than small efficient ones?
HTML, CSS and JavaScript. Most of these Electron apps are basically wrappers around actual websites, to give them a place in the dock, show notifications, and access the filesystem.
But that isn't what's missing. It's a restatement of the problem. DOM-based apps are much more resource-intensive than native ones. What is missing from native that makes businesses choose the DOM?
If there were some modern tool like wxWidgets that supported modern APIs like the DOM, Android and UWP, would we see more use of native? Electron would then become pointless.
The hypothetical business has two choices. Choose Electron, or choose some other toolkit that has native, cross-platform support (like Qt). It's far easier for the business, and the developers there, to take their existing website HTML, CSS, and Javascript; and simply wrap it in Electron (which costs $0), and call it a day. Every other choice is (perceived as being) more expensive.
Qt is a modern toolkit with native cross-platform support, but it costs money for commercial use, and businesses and software developers don't want to spend the money on it.
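For comparison, a minimal cross-platform Qt program is also only a few lines of C++ (a sketch, assuming Qt Widgets is installed and the project is built with qmake or CMake):

    // main.cpp - minimal Qt Widgets window; the same source builds on Windows, macOS and Linux.
    #include <QApplication>
    #include <QLabel>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);
        QLabel label("One C++ codebase, native toolkit on each desktop platform.");
        label.show();
        return app.exec();   // enter the Qt event loop
    }

The friction the thread describes is less about the code itself and more about licensing and the fact that teams already know HTML/CSS/JS.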
As someone who has done both desktop apps and Electron apps: it is much faster to write some HTML/CSS and wrap it in Electron than to do the same in Qt/GTK/etc...
Not to mention, the HTML/CSS combo is possibly the best we've come up with for designing user interfaces.
If you don't mind me asking, how much RAM did you have before, and what did you upgrade to?
I recently got a new PC myself and decided to go for 16GB, my previous one (about a decade old) had 8GB and I didn't feel I really hit the limit, but wanted to be future proof. Because as you said, a lot of 'modern' applications are taking up a lot of memory.
I also went from 8GB to 16GB recently (virtual machines are hungry); but I had gotten rid of Slack even before that. I mean, yes, it has round edges and goes ping and has all those cutesy animations - but 2GB of RAM for a glorified IRC client, excuse me, what exactly is it doing with billions of bytes worth of memory? ("Don't know, don't care" seems to be its developers' mantra)
I upgraded from 8 to 16GB. But I'm in the process of ordering a new desktop that will have 32GB.
Spotify and Slack are not problematic as individual programs but since I have a lot of other programs open they are the ones that take up more memory than they should. I mean: Spotify is just a music player. Why does it need 250MB RAM?
Because it is not just a music player. It plays music, aside from giving you an entire experience that consists of connecting with your friends, managing your music library, consuming ads, having a constant connection with the server, and ...
This was meant to be sarcastic, but I'm not even sure how to continue. Maybe someone else can bulk up that list to get to something that requires 250MB. :)
I work on a desktop CAD / CAM application, and I need every one of the 12 cores and 32 GB RAM on my windows workstation. I know this because I also have a mac workstation with lower specs (16 GB RAM, don't know offhand how many cores) and developing on it is intolerable (let's play "wait an hour to see if clang will compile my changes" - I know, I know, I should read the C++ standard more carefully so I'm not disappointed to discover that MSVC was overly permissive).
Parenthetically, we do use Slack and I am double-dipping on a lot of heavy functionality by having both Spacemacs (which I use for code editing and navigating / search within files) and Visual Studio (which I use for building, debugging, and jump-to-definition) open at the same time.
You are looking back at the past with rosy goggles.
What I remember from the time was how you couldn’t run that many things simultaneously. Back when the Pentium II was first released, I even had to close applications, not because the computer ran out of RAM, but because the TCP/IP stack that came with Windows 95 didn’t allow very many simultaneous connections. My web browser and my chat were causing each other to error out.
AJAX was not around until late in the Pentium II lifecycle. Web pages were slow, with their need for full refreshes every time (fast static pages an anomaly then as now), and browsers’ network interaction was annoyingly limited. Google Maps was the application that showed us what AJAX really could do, years after the Pentium II was discontinued.
Also, video really sucked back in the day. A Pentium II could barely process DVD-resolution MPEG-2 in realtime. Internet connections generally were not the several Mbit/s necessary to get DVD quality with an MPEG-2 codec. Increasing resolution increases the processing power geometrically. Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.
I am also annoyed at the resource consumption, but not surprised. Even something "native" like Qt doesn't seem to be using the actual OS-provided widgets, only imitating them. I figure it's just the burden we have to pay for other conveniences. Like how efficient supply lines mean consumer toilet paper shortages while the suppliers of office toilet paper sit on unsold inventory.
FWIW I do not remember having issues like that; I had mIRC practically always open, plus a web browser, an email application, etc., and I do not remember ever having networking issues.
The Internet was slow, but that was largely because for most of the 90s I was stuck with a very slow 2400 baud modem - I got to appreciate the option browsers had to not download images by default :-P.
But in general I do not remember being unable to run multiple programs at the same time, even when I was using Windows 3.1 (though with Win3.1 things were a bit more unstable, mainly due to the cooperative multitasking).
Me neither. I'm not going to lie and say that I had 40 applications open, but I DID have 5-10 apps using the web with 0 issues (a browser + IRC app + email client + mICQ + MSN Messenger + Kazaa/Napster + Winamp in stream mode).
Very very few of the web and desktop applications of today are as snappy and user-friendly as classic Winamp.
Sure, but if you could have Spotify as is, or a lightweight player like WinAMP, both with equal access to the Spotify service, which would you pick?
People aren't using Spotify because the player is fantastic; they use it because Spotify has a huge library, is reasonably priced, and the player is sort of okay.
Totally agree. But the DRM monster rears its head. Everyone is afraid you'll steal their choons if you're allowed to play them on whatever player you like sigh
Still, all iTunes content can be de-DRM-ed in 500 lines of C code, so it's not like "the industry" actually requires it to be secure.
Like everything these days, it's barely good enough. And why bother implementing your DRM as a 1KB C++ library when you can use a 5MB Objective C framework instead?
Spotify is a case in point: it used to have a fantastic, small and fast native desktop app. It replaced it with the bloated web-based one we see today.
That's an artifact of IP laws. The reason you don't have to curate your own mp3s again is because some service managed to find a way to give you a searchable, streamable collection of music that's also legal. But that in no way implies Spotify needs to be so bloated.
In a better world, you really wouldn't need to. Winamp was great - The weak point was always the playlist editor, but winamp's interface for simply playing music and seeing what was up next was wonderful. Spotify could provide you with a playlist plugin that simply gave a list of URLs, or let you download one that lasted X hours.
Same here. About the only time my browser + mIRC + WinAMP + IM + Visual C++ 6.0 combo slowed down was when VC++ was compiling the game I was working on. I would then close the IM, because doing so would speed up the compile times by 1.5x.
IRC, SMTP, IMAP are protocols from back when desktop operating systems didn’t even come with TCP/IP. They would use a single connection for unlimited messages. I was using a “modern” chat program, AOL Instant Messenger.
Alright, the missing part of my story was that I was also using a proxy program to share a single connection with my brothers. NAT wasn’t widely available yet, and Winmodems were much easier to find than hardware modems. (And I hadn’t discovered Linux and the free Unixes yet.)
So, every TCP connection that AIM made was 2 connections in the proxy program. We quickly discovered that AIM on more than one computer at a time made the entire Internet unusable.
Every generation of developers decries the next generation for bloat, but Windows 95 had preemptive multitasking that made the computer so much snappier (plus other features), at the cost of multiple times more RAM needed than Windows 3.1. (16 MB was the unofficial minimum, and often painfully small. Microsoft’s official minimum was impractical back then.) Windows XP had protected memory that made it more feasible to run multiple applications, because they were much less likely to crash each other (plus other features, including a TCP/IP stack featuring NAT and a useful connection limit), at another several multiples more RAM needed.
There have always been tradeoffs. Back in the day, programs were small and developers focused more on making sure they did not crash, because they didn’t have lots of RAM and crashing would often require the computer to reboot. That developer focus meant less focus on delivering features to users. (Also, security has often meant bloat.) Now, you barely need to know anything about computer science, and you can deliver applications to users, at the cost of ginormous runtime requirements.
It may be true that people are partially looking back with rose-tinted glasses, but there's more than just an inkling of truth to their side. Casey Muratori (game developer for The Witness) has a really good rant [1] about bloat in Visual Studio specifically, where he demonstrates load times & the debugger UI updating today vs. on a Pentium 4 running XP. Whether or not you attribute the performance difference to new features in Win10/VS, it's worth considering the fact that workflows are still being impacted so significantly on modern hardware. We were able to extract 100s of times more out of hardware and gave it up for ???
The Visual Studio 6 on Pentium 4 demonstration starts around the 36th minute.
I used Visual Studio 6 for years, and yes, I can confirm, it was really that fast.
It's also not true that there were problems with more applications running etc., as "Decade" claims. Or to be more precise, there were no problems if one used Windows NT, and I've used NT 3.51, NT 4 and 2000 for development, starting with Windows development even before they were available. And before that, Windows 3.x was indeed less stable, but that is the time before 1995. Note that the first useful web browser was made in 1993; the internet as we know it today practically didn't exist. There were networks, but not the web.
Maybe it’s possible for opposite things to be true if they happen to different people. I wasn’t a developer back then.
Windows NT required a multiple more RAM to run than the consumer versions of Windows (oh no, bloat!), and was much more picky about what hardware it ran on. Starting with XP, the professional and consumer versions of Windows have merged. We are so lucky.
> Windows NT required a multiple more RAM to run than the consumer versions of Windows (oh no, bloat!), and was much more picky about what hardware it ran on.
Allow me to claim that that is also not true, in the form you state it. Again, I've lived through all this, and I can tell you what that was about. The "pickiness" of NT, even at the time, was not about the motherboards and the chipsets. It was about consumer hardware devices. Many things that probably don't even exist as products today, like a black-and-white hand-scanner that scanned as you moved your hand over the paper and had only Windows 3.x drivers on the floppy that came with it. There was never a problem of having a developer machine running NT in any reasonable price range, with a reasonable graphics card, monitor, keyboard and mouse. And, at the start, a phone line modem transmitting some kilobytes per second!
The RAM needs did exist, but again not as large as later distortions would have you believe. If I remember correctly (it changed relatively fast), at the time NT was published, Microsoft had to deliver it claiming that it would run on 4 MB; the OS, the programs and the graphics all had to fit. Let me repeat: 4 MB. It ran, but not comfortably for bigger programs. But the point is, as soon as you had 8 MB at that time, you didn't have a problem. A little later, for comfortable work, 16 MB were more than a good choice. It was a hundred, two or three hundred dollars more than the cheapest possible offer (yes, those were the prices then), but that was it. RAM was the only thing you had to care about to have NT running.
The point is, at that time there were plenty of people who didn't want to use Windows NT at all, clinging to 3.x and then 95, and those are the ones who promoted the horror stories about OS problems. But it was just their ignorance. 95 was also reasonably stable, unless you used, like many, some "utility" programs that were more malware than of real use (the "cleaning", "protection" or even "RAM expander" snake oil was used by some even then - not to mention that a lot of people believed they had to try any program that happened to come their way).
The good development tools were solid and stable, especially the command-line ones (in the GUI area, there were some snake oils among them too). But Word did crash even under NT, and even during the first half of the 2000s, and that's a completely different story; that was intentional at the time for those products.
The word “reasonable” is doing a lot of work, here.
Most Pentium II systems were not running Windows NT. They were running Windows 95 or 98, which had arbitrarily severe limitations and lacked memory protections.
So, while it was technically possible to run lots of applications simultaneously on 256 MB of RAM, for most people it was a fun adventure in whether some buggy program had destabilized the system into needing a reboot to run properly again, or whether it was still usable with degraded functionality. In my case, that's without using the cleaning, protection, or RAM expander programs.
And even on professional operating systems, web browsers crashed a lot, and every other program that had to deal with untrusted input - which is basically anything that can open files or connect to the network - has gradually bloated as it learned security or added features.
> Most Pentium II systems were not running Windows NT. They were running Windows 95 or 98
Once again: only somebody using a computer not selected for serious development used Windows 95 and 98. No developer who knew what he was doing was using Windows 95 or 98 as his primary development machine. So if you complain about that, you used the wrong tool for your work. Like I've said, it was easy to install Windows NT, and I don't know of any computer which wasn't able to run it, as long as it had a reasonable amount of RAM.
> on 256 MB of RAM
To illustrate "reasonably" once again, that changed at these times: I remember buying an AMD-based notebook in 2002 with 256 MB and running absolutely without problems Windows 2000 on it for a few years, before upgrading to 512 MB, which was the maximum for that notebook. And that was the time of Pentium III and IV, not Pentium II, and like I've said, I've run Windows NT on 8 MB computers, all with compilers, resource editors, debuggers and even IDE. And even before, I've run Windows 3.11 on 2 MB computer and used that for development too (the development tools being in text mode, of course).
> some buggy program has destabilized the system into needing to reboot to run properly again
Only on non-NT systems, and surely not due to developer tools. I used Windows 3.x and Windows 9x, and never had to reboot due to the developer tools "making the system unstable." Not even on a 4 MB or a 16 MB machine.
> web browsers crashed a lot
I've used both Mosaic and Netscape, and before 2000 my main problem was surely not them crashing. Surfing mostly worked (only the pages loaded slowly, there were no CDNs then). Again, on a NT system.
I think we’re losing the plot. The ggp post was about doing all sorts of Internet programs at the same time on Pentium II era computers, and now you’re talking about developer tools on a Pentium 4.
Maybe it’s simultaneously true, that you could run many developer tools at the same time on Windows NT with hundreds of dollars of RAM, and attempting to run a bunch of consumer network programs at the same time (especially on consumer Windows) was asking for trouble.
I remember one of the attractions of IE 5 back in the day was how each newly launched window was its own process (not windows opened by the open link in new window menu option), so unlike Mosaic and Netscape, a crash in one copy of IE did not necessarily bring down all the other windows. Multiple windows being useful because surfing with a modem was slow regardless of CDN. Remember when Yahoo was scandalous, because banner ads took so much bandwidth?
> and now you’re talking about developer tools on a Pentium 4.
It's to illustrate that the arguments are wrong: it's Decade who uses "256 MB" as an argument, which is not "small memory" for a Pentium II, and I'm pointing out that 256 MB was common in 2002 even for notebooks, by which time the Pentium 4 was common for developer machines.
> The ggp post was about doing all sorts of Internet programs at the same time on Pentium II era computers
Let me check again:
"For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II."
OK. That is also obviously a bit off. 256 MB with a Pentium II is quite a lot; as I showed, 256 MB was normal even in 2002 for notebooks, when the Pentium III was already common on notebooks and the 4 on desktops. Working with email: e-mail clients at that time, if they used HTML at all, were limited to the HTML formats of that time, so "using email" completely worked, with no system crashes on NT (Outlook did have a limit of a single PST having to be less than N GB, I remember that). IM just worked too, also without crashes on NT.
That leaves "video/audio calls". Video calls were surely not common at that time, and I personally also haven't used audio calls.
But the "stability" problems you claim to have been common definitely didn't exist the way you claimed, as soon as one used NT, that is, since around 1994, or later on Windows 2000 or even later on XP or Server 2003, all NT-based. And as I've said, it was not that "too much" RAM was needed, as I've run NT on 8 MB with no problem.
So I still don't understand why you continue to stick to a narrative that was simply not true. No, it was not that bad like you claim. Computers were quite stable even then, for those who knew what they were doing. On NT, almost nothing crashed the system, except for failed hardware. Like I've said, some apps were indeed less stable, like Word crashing or saving an invalid DOC file. But Excel, for example, despite being in the same "suite", I don't remember ever crashing. I also don't remember browsers actually crashing, just the pages downloading very, very slowly.
The 256 MB number came from the ggp post. At the beginning of the Pentium II era, that was very expensive, but it was not the only issue with running multiple programs at the same time.
But clearly you want to have the last word, so I guess I should let you have it.
We gave it up for slightly higher profit margins enabled by hiring slightly less qualified programmers at a slightly lower rate.
In a similar vein, Industrial Light and Magic used to have a few highly talented people crafting incredibly intelligent solutions to make their movies possible: https://youtu.be/AtPA6nIBs5g
By now, most of those effects would instead be done using CGI and outsourced to Asia.
There's probably a long rant waiting to be written on this topic. Myself, I've observed how over the last four decades, CGI effects went from worthless, through novelty, through increasingly awesome, all the way to "cheapest garbage that can be made that looks convincing enough when the camera is moving very fast".
> A Pentium II could barely process DVD-resolution MPEG-2 in realtime.
According to http://www.vogons.org/viewtopic.php?p=423016#p423016 a 350MHz PII would've been enough for DVD, and that's 720x480@30fps; videoconferencing would more commonly use 320x240 or 352x288 which has 1/4 the pixels, and H261 or H263 instead as the codec.
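For a rough sense of scale, the per-frame pixel counts (just back-of-the-envelope arithmetic, nothing more):

    // Back-of-the-envelope pixel counts per frame.
    #include <cstdio>

    int main()
    {
        const int dvd  = 720 * 480;   // 345,600 pixels
        const int cif  = 352 * 288;   // 101,376 pixels
        const int qvga = 320 * 240;   //  76,800 pixels
        std::printf("CIF  = %.0f%% of DVD\n", 100.0 * cif  / dvd);  // ~29%
        std::printf("QVGA = %.0f%% of DVD\n", 100.0 * qvga / dvd);  // ~22%
    }

So a typical videoconferencing frame of that era carried roughly a quarter of the pixels of a DVD frame, before even considering the simpler codec.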
> Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.
I'm not familiar with Zoom as I don't use it, but it's very likely you're not actually receiving and decoding 16 separate video streams; instead an MCU or "mux box" is used to combine the streams from all the other participants into one stream, and they do it with dedicated hardware.
That said, video is one of the cases where increased computing power has actually yielded proportional returns.
> videoconferencing would more commonly use 320x240 or 352x288 which has 1/4 the pixels, and H261 or H263 instead as the codec.
Modern videoconferencing solutions (WebRTC) usually use 1280x720 and either H264 or VP8. Some apparently use HEVC. Also most modern processors and SoCs come with hardware-accelerated codecs built in, so most of the work related to compression isn't even done by the CPU itself.
> I'm not familiar with Zoom as I don't use it, but it's very likely you're not actually receiving and decoding 16 separate video streams
Don’t think it can be an MCU box. You can select an individual stream from the grid to make it larger almost instantly. The individual feeds can display both as grid and a horizontal row. I’m assuming they send individual feeds and the client can ask for feeds at different predefined resolutions.
Without having used Zoom much I can't definitively say how it works, but I've used BlueJeans quite a bit and noticed compression artifacts in various parts of the UI (e.g. text underneath each video source). That means BlueJeans is definitely muxing video sources and it really does not have a noticeable delay when changing the view. Since each video is already so compressed I think they can get away with sending you really low bitrate streams during the transition and you'll barely notice.
With Skype you're definitely able to receive separate streams from each participant, as I can access them individually via NDI and pipe them into OBS to do live multi-party interviews. You can see the resolution of individual feeds change when their bandwidth drops, and you can choose high/low bandwidth and latency modes for each feed. I would guess Zoom does the same but doesn't provide an NDI feed (yet).
I have an iMac G4 from 2003 (the sunflower ones) on which I installed Debian PPC, and it is able to stream 720p content from my local network and play it back smoothly in VLC.
I could see Street View-like vistas on a Pentium 3/AMD Athlon.
On power: I did the same things you can do today, but with an Athlon XP and Kopete. On video: ever since BeOS and MPlayer I could multitask perfectly while playing XviD movies that were good enough for their era.
To be fair, 10-20 years ago was the age of Windows XP and Windows 7, not Windows 95. There was barely anything good about Windows 95, and there are likely not many people missing it, but it was also a completely different era from the later "modern" desktops, hardware- as well as software-wise. If anything I would call that era the alpha version, problems included.
Most of those have nothing to do with OP's point, which is that some software uses way more processing power than it should.
While on the topic, let's remember the speech recognition software available for Windows (and some for Android 2.x) that was completely offline and could be voice activated with, gasp, any command!
Google with its massive data centers can only do "OK/Hey Google". Riiight. I can't believe there are actually apologists for this bs.
Anyway, old speech recognition software was quite horrible. Most of it did not even work without prior training. And Google now has offline speech recognition too. But true, the ability to trigger with any desired phrase is something still missing.
The ability to trigger with any desired phrase is easy, but not done for privacy reasons, to reduce the chance of it accidentally listening to irrelevant conversations.
The inability to change it from "Hey Google" is done for marketing/usability reasons.
Microsoft has had speech recognition since WinXP. And there was also Dragon NaturallySpeaking. Both needed a couple of hours of training, but worked really well, completely offline; it was amazing for me at the time. It did have very high processor usage, but that was on a freaking single-core Athlon or Pentium. I'm not even a native English speaker, though dare I say my English is on par with any American's.
Voice recognition used by things like Google Assistant, Siri, Cortana, and Alexa usually relies on a "wake word", where it's always listening to you, but only starts processing when it is confident you're talking to it.
Older speech recognition systems were either always listening and processing speech, or only started listening after you pressed a button.
The obvious downside of the older systems is that you can't have them switched on all the time.
I think it would be really easy to create an app that would also listen to a very specific phrase (like "Hey Merlin", simple pattern match, with a few minutes of training for your own voice) and then start Google Assistant.
It's so embarrassing saying Hey Google all the time, and for me, it just feels like I'm a corporate bitch, tbh. It's true, which just makes me feel worse :D
There were always idiots writing buggy code. The issues you mention are about "old software" on "old hardware". GP is only talking about the "old style of software development". Granted, Qt, X, and the Win API are unnecessarily complicated.
> Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.
With Moore's law being dead, efficiency is going to get a lot more popular than it has been historically. I think we're going to start seeing an uptick in the popularity of more efficient GUI programs like the ones you describe.
We see new languages like Nim and Crystal with their only value proposition over Python being that they're more efficient.
Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason. We may even start seeing wrapper libraries that make these excellent but complicated frameworks more palatable to the Electron crowd, similar to how compiled languages that look like Python or Ruby are getting bigger.
I said that 20 years ago, but so far I've been proven completely wrong. Skype etc. just keep getting bigger and slower despite, from what I can tell, adding absolutely no additional functionality. In fact, if you consider that it can't seem to do peer-to-peer anymore, it's lost features.
Very few companies are rewriting their Electron apps in Win32 (although they should be). Instead it continues moving in that direction, or worse. CrashPlan rewrote their Java GUI a while back in Electron. Java UIs are mostly garbage, but compared with the Electron UI it was lightweight and functional. The Electron UI (besides shipping busted libraries) has literally stripped everything out, and uses a completely nonsensical paradigm/icon set for the tree expand/file selection. Things like Slack are a huge joke, as they struggle to keep it from dying under a load my 486 running mIRC could handle. So blame it on the graphics and animated GIFs people are posting in the chat windows, but the results speak for themselves.
Without a way for end-users to actually make judgements about application efficiency, there will never be any real pressure to make efficient, native apps.
Though the only measurement I think people would actually care about is battery impact, and even that is pretty much hidden away on phones except to the few people who actually look.
But the other problem is: who cares if Discord or a browser's HN tab aren't optimally efficient? You're just going to suck it up and use it. With this in mind, a lot of the native app discussion is technical superiority circlejerk.
> Without a way for end-users to actually make judgements about application efficiency, there will never be any real pressure to make efficient, native apps.
I'd say it's more of a "without a way for end-users to compare" --- the average user has no idea how much computing resources are necessary, so if they see their email client taking 15 seconds to load an email and using several GB of RAM, they won't know any better; unless they have also used a different client that would do it instantly and use only a few MB of RAM.
Users complain all the time when apps are slow, and I think that's the best point of comparison.
There is an economic theory that's escaping me right now, but the gist is that with certain goods, the market will hover at the very edge of efficiency; they have to become just scarce enough to break a certain threshold, then the market will realize that they are in fact a scarce resource, then correct to achieve a high efficiency equilibrium.
THIS. I do remember outright revelations in user experience as I showed people how much better Firefox 2.0 was compared to IE6 (and looking back, version 2.0 wasn't all that wonderful from the present point of view - which tells you more about IE than about FF).
The Instacart website has dreadfully slow search: the instant-search results take forever to update with each character. The whole site is so slow that it makes Safari on my Mac complain that the page uses significant resources.
This weekend I noticed that Amazon Fresh now delivers the same day --- for the past few months they had no slots. I switched from Instacart to Amazon at once. The Amazon website lacks some bells and whistles compared to Instacart, but it is completely speedy. If the Instacart website were satisfactory I would never have switched.
Slow, bloated websites can absolutely cost companies money.
I think the other major, major thing people discount is the emergence of viable sandboxed installs/uninstalls, and the accompanying software distribution via app stores.
Windows 95 never had a proper, operating-system-supported package manager, and I think that's a big part of why web applications took off in the late 90s/early 2000s. There simply wasn't any guarantee that once you installed a native app, you could ever fully remove it. Not to mention all the baggage with DLL hell, and the propensity of software to write random junk all over the filesystem.
Mobile has forced a big reset of this, largely driven by the need to run on a battery. You can't get away with as much inefficiency when the device isn't plugged into the wall.
> [the absence of a package manager was] a big part of why web applications took off in the late 90s/early 2000s.
Of course apt-get is very convenient but I can't see a Microsoft version of it letting companies deliver multiple daily updates.
Based on my experience of the time, the reasons were, in random order:
- HTML GUIs were less functional but easier to code and good enough for most problems
- we could deploy many times per day for all our customers
- we could use Java on the backend and people didn't have to install the JVM on their PCs
- it worked on Windows and Macs, palmtops (does anybody remember them?) and anything else
- it was very easy to make it access our internal database
- a single component inside the firewall generated the GUI and accessed the db, instead of a separate frontend and backend, which by the way is the modern approach (but it costs more, and we didn't have the extra functionality back then; JS was little more than cosmetic)
> There simply wasn't any guarantee that once you installed a native app, you could ever fully remove it. Not to mention all the baggage with DLL hell, and the propensity of software to write random junk all over the filesystem.
Bloated, inefficient software is certainly present on the native side too, but it's also possible to write single-binary "portable" ones that don't require any installation --- just download and run.
OS API sets have evolved toward more sandboxing. Things are more abstract. Fewer files on disk, more blob-store-like things. Fewer INI files in C:\Windows, more preference stores. No registry keys strewn about. .NET strong naming rather than shoving random DLLs into memory via LoadLibraryA()
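For anyone who hasn't seen it, this is the classic pattern being referred to: load whatever DLL is on the search path and resolve functions purely by string name, with no versioning or identity checks (the DLL and function names below are made up for illustration):

    // Hypothetical example of old-style dynamic loading via LoadLibraryA().
    #include <windows.h>

    typedef int (*GreetFn)(const char *);

    int main()
    {
        HMODULE mod = LoadLibraryA("some_plugin.dll");          // whichever DLL is found first
        if (!mod) return 1;
        GreetFn greet = (GreetFn)GetProcAddress(mod, "greet");  // resolved by name only
        int rc = greet ? greet("hello") : -1;
        FreeLibrary(mod);
        return rc;
    }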
IMHO web applications took off because developers learned pretty fast how useful "I can update any time without user consent" is, especially when your software is a buggy mess (or a "MVP" if you like buzzwords) and you need to update every five minutes.
> Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason.
I would predict that if only Qt didn't cost a mind-boggling price for non-GPL apps. They should really switch to pay-as-you-earn, e.g. like the Unreal Engine, so people would only start paying once they start earning serious money selling the actual app. If they don't, Qt's popularity is hardly going to grow.
Qt, through the LGPL license, is free for non-GPL apps. Tesla is using it under the LGPL in their cars without paying a dime to the Qt Company (which is, imho, super shitty given the amount of money they make).
I wonder how much memory management affects this. My journey has been a bit different: traditional engineering degree, lots of large Ruby/JS/Python web applications, then a large C# WPF app, until finally at my last job, I bit the bullet and started doing C++14 (robotics).
Coming from more "designed" languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the 90s. No more passing raw pointers around and forgetting to deallocate them, you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they're finally even making their way into a lot of libraries.
I sense there's a bit of a movement away from JVM/CLR-style stop-the-world, mark-and-sweep generational GC, toward more sophisticated compile-time techniques like Rust's borrow checker, Swift's reference counting, or C++ smart pointers.
I mention memory management in particular both because it seems to be perceived as one of the major reasons why languages like C/C++ are "hard" in a way that C#/Java/JS aren't, and I also think it has a big effect on performance, or at least, latency. I completely agree we've backslid, and far, but the reality is, today, it's expensive and complicated to develop high-performance software in a lower-level, higher-performance language (as is common with native), so we're stuck with the Electron / web shitshow, in large part because it's just faster, and easier for non-specialists to develop. It's all driven by economic factors.
Java is also making good progress on low latency GC.
Reference counting can be slower than GC if you are using thread safe refcounts which have to be updated atomically.
I don't want to have to think about breaking cycles in my data structures (required when using ref counting) any more than I want to think about allocating registers.
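To make the cycle point concrete, here is the standard C++ illustration (a sketch): two nodes that point at each other through shared_ptr never get freed, and the usual fix is to make the back-reference weak - exactly the kind of manual decision a tracing GC spares you from:

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;   // strong (owning) reference
        std::weak_ptr<Node>   prev;   // weak back-reference, chosen to break the cycle
    };

    int main()
    {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;
        b->prev = a;   // if prev were a shared_ptr, a and b would keep each other alive forever
        return 0;      // with weak_ptr, both nodes are destroyed here
    }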
Yet we still read articles and threads about how bad the Go GC is and the tradeoffs that it forces upon you.
I get the feeling that the industry is finally starting to realize that GC has been a massive mistake.
Memory management is a very important part of an application; if you outsource it to a GC, you stop thinking about it.
And if you don't think about memory management you are guaranteed to end up with a slow and bloated app. And that is even before considering the performance impact of the GC!
The big hindrance has been that ditching the GC often meant you had to use an old and unsafe language.
Now we have Rust, which is great! But we need more.
The Go GC isn't that great, it's true. It sacrifices huge amounts of throughput to get low latency: basically a marketing optimised collector.
The new JVM GCs (ZGC and Shenandoah) are more sensibly designed. They sacrifice a bit of throughput, but not much, and you get pauseless GC. It still makes sense to select a throughput oriented collector if your job is a batch job as it'll go faster but something like ZGC isn't a bad default.
GC is sufficiently powerful these days that it doesn't make sense to force developers to think about memory management for the vast bulk of apps. And definitely not Rust! That's one reason web apps beat desktop apps to begin with - web apps were from the start mostly written in [pseudo] GCd languages like Perl, Python, Java, etc.
I don’t think it’s fair to call garbage collection a mistake. Sure, it has properties that make it ill-suited for certain applications, but it is convenient and well suited for many others.
The same applies to manual memory management: you instead get slower allocators, unless you replace the standard library with something else, plus the joy of tracking down double frees and memory leaks.
I'm using Rust, so no double frees and no accidental forgetting to call free(). Of course you can still have memory leaks, but that's true in GC languages too.
That is not manual memory management though, and it also comes with its own set of issues, as everyone that has tried to write GUIs or games in Rust is painfully aware.
That's true. The comment by mlwiese up-thread, that I responded to, praised Go's low GC latency without mentioning the heavy memory and throughput overheads that come with it. I felt it worth pointing out the lack of a free lunch there; I think a lot of casual Go observers and users aren't aware of it.
Agreed, although if Go had proper support for explicit value types (instead of relying on escape analysis) and generics, like e.g. D or Nim, that could be improved.
I don't think that's as hard as you make it out to be. Notably, Zig does not have a default allocator and its standard library is written accordingly, making it trivial to ensure the use of the appropriate allocation strategy for any given task, including using a debug allocator that tracks double-free and memory leaks.
No, and as far as I am aware it makes no attempt to do so other than some allocators overwriting freed memory with a known signature in debug modes so the problem is more obvious.
> Coming from more "designed" languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the 90s. No more passing raw pointers around and forgetting to deallocate them, you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they're finally even making their way into a lot of libraries.
I worked for a robotics company for a bit, writing C++14. I don't remember ever having to use raw pointers. That combined with the functionality in Eigen made doing work very easy --- until you hit a template error. In that case, you got 8 screens full of garbage.
This sentiment is why I moved to writing Elixir code professionally three years ago, and why I write Nim for all my personal projects now. I want to minimize bloat and squeeze out performance from these amazing machines we are spoiled with these days.
A few years ago I read about a developer who worked on a piece-of-shit 11-year-old laptop and made his software run fast there. By doing that, his software was screaming fast on modern hardware.
It's our responsibility to minimize our carbon footprint.
My normal work computer is a Sandy Bridge Celeron laptop. I might need to upgrade it soon, but I'd still prefer something underpowered for exactly the same reason; perhaps I'll purchase an Athlon 3000 desktop.
https://tsone.kapsi.fi/em-fceux/ - This is an NES emulator. The Memory tab in Developer Tools says it takes up 2.8MB. It runs at 60fps on my modern laptop.
It seems possible to build really efficient applications in JS/WebAssembly.
Multiple layers of JavaScript frameworks are the cause of the bloat, and that's the real problem, I think.
> Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.
If we save developer cycles, it's not wasted, just spent somewhere else. And in the first place we should not go purely by the numbers, because there will always be someone who can ask for a faster solution.
> For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II.
Yes and no. The level of ability and comfort at that time was significantly lower. Sure, the base functionality was the same, but the experience was quite different. Today there are a gazillion more little details which make life more comfortable, which you just don't realize are there. Some of them work in the background; some feel so natural that you can't imagine them not having been there since the beginning of everything.
No, an externality is when a cost is passed to an external party (not involved in the transaction), like air pollution or antibiotic resistance. Passing a cost to the user is just a regular business transaction, like IKEA sending you a manual so you can build the furniture yourself.
> The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.
Except for the 25 years of support, you could get the same benefits if a shared Electron runtime were introduced and you avoided using too many libraries from npm. In most Electron apps, most of the bloat comes from the bundled runtime rather than the app itself. See my breakdown from a year ago of an Electron-based color picker: https://news.ycombinator.com/item?id=19652749
While true, it also has plenty of limitations. You have to keep carrying around a huge legacy; you're locked into the APIs, SDKs and operating systems of a single vendor, often themselves locked to a single type of hardware.
The Win32 code doesn't run anywhere except on Windows, but most compute devices are mobile (non-laptop) systems, and those don't come with Windows.
Running your native apps now takes both less work and more work: you can write (somewhat) universal code, but the frameworks and layers required to get it to build and run on Windows, macOS, Linux, iOS, Android, and any other system your target market relies on now come in as dependencies.
It used to be that the context you worked in was all you needed to know, and delivery and access were highly top-down oriented, meaning you'd have to get the system (OS, hardware) in order to run the product (desktop app). That is no longer the case: people already have a system and will select the product (app) based on availability. If you're not there, that market segment will simply ignore you.
That is not to say that desktop apps have no place, or that CEF is the solution to all the cross-platform native woes (it's not; it's the reason things have gotten worse), but the very optimised and optimistic way of writing software from the '90s is not really broadly applicable anymore.
Is it practical to target Wine as an application platform? That would require building without VS, or building on Windows and testing with Wine. What are the APIs one would need to avoid in order to ensure Wine compatibility?
What are some solid resources for learning more about optimization? I graduated from a bootcamp, and at both jobs I've had, I've asked my leads about optimization and making things run even faster, and am often told that we don't need to worry about it because of how fast computers are now. But I'm sitting there thinking about how I want my stuff to run like lightning on every system.
256MB RAM? How extravagant! My first computer had 3kB.
This is just the nature of “induced demand”. We might expand the power of our computers by several orders of magnitude, but our imaginations don’t keep up, so we find other ways of using all that capacity.
You might have used these words as a way to say "way faster", but factually you are incorrect. Several orders of magnitude = thousands of times faster. No way.
If the browser is a computationally expensive abstraction, so are the various .NET SDKs, the OS, a custom compiler, and the higher-level language of your choice. Yes, there were days when a game like Prince of Persia could fit into the memory of an Apple IIe and all of it, including the sound, graphics, mechanics, and assets, was less than 1.1 MB!
However, the effort required to write such efficient code and hand-optimise compiler output is considerable, not to mention that very few developers would be able to do it.
Unless your domain requires high performance (and with Wasm and WebGL even that gap is shrinking) or something niche a browser cannot currently provide, it no longer makes sense to develop desktop applications. A native application is too much hassle and security risk for the end user compared to a browser app, and the trade-off in performance is worth it for the vast majority of use cases.
While the browser security sandboxes have its issues, I don't want go back to the days of an native applications constantly screwing my registry, launch processes , add unrelated malware and billion toolbars to your browser ( java installers anyone ?) .
Till late 2000's Every few months I would expect to do reinstall the entire OS (esp Windows and occasionally OS X) because of this kind of shareware / malware nonsense native apps used to pull. While tech savy users avoid most of this pitfalls maintaining the extended family's systems was constant pain. Today setting a chromebook or surface( Default S mode enabled) and installing an ad blocker is all i need to do , those systems are clean for years.
I do not think giving an application effectively root access and hoping it will not abuse it is a better model than a browser app. It is not just small players who pull this kind of abuse either; the Adobe CC suite runs something like 5 launch processes and messes up the registry even today. The browser performance hit is more than worth not having to deal with that.
Also, on performance from a different point of view: desktop apps actually made my system slower. You would notice this on a fresh install of the OS; your system would be super fast, then over a few weeks it would slow down. From the antivirus to every application you added, they all hogged more of my system's resources than browser apps do today.
I use Windows (although not as a heavy user; I mainly use Linux these days), and the only third-party apps I have installed are lightweight open-source ones and some "official" versions of software. You don't need an antivirus apart from the built-in Windows Defender, and I don't notice any slowdown. I have a non-admin account which I regularly use; the admin account is separate.
Arguably many users don't know how to use a Windows desktop. But that's not a failure of the desktop; that's a failure of Windows. They could have provided an easy way to install applications into a sandbox. On Android you can install from apk files and they are installed into a sandbox. If Windows had such a feature easily available, I think most genuine desktop app makers would have migrated to it. This would have the advantages of the browser with no battery drain, no fan noise, and no sluggishness.
You can already use UWP, which has a sandbox. Win32 apps can be converted to it.
So no one cares about more security. Most vendors are stuck on "just works" Win32.
"Can convert" does not mean that's the only way to install; as long as you offer an insecure option, your security is still weak.
It is not that OS developers are not improving; for example, S mode on the Surface is a good feature. However, as long as the Adobes of the world still abuse my system, the problem is not solved.
It is not just older-gen software either; Slack desktop definitely takes more resources than the web version while delivering broadly the same features.
Sure, that is the Electron abstraction; but if a multi-billion-dollar company with VC funding cannot see the value in investing in 3 different stacks for Windows, macOS, and Linux, how can most other developers?
The reason that people don't write them is because users aren't on "the desktop". "The desktop" is split between OS X and Windows, and your Windows-app-compiled-for-Mac is going to annoy Mac users and your Mac-app-compiled-for-Windows is going to annoy Windows users. Then you realize that most users of computing devices actually just use their phone for everything, and your desktop app can't run on those. Then you realize that phones are split between Android and iOS, and there is the same problem there -- Android users won't like your iOS UI, and iOS users won't like your Android UI. Then there are tablets.
Meanwhile, your web app may not be as good as native apps, but at least you don't have to write it 6 times.
> Meanwhile, your web app may not be as good as native apps, but at least you don't have to write it 6 times.
I must be living in a parallel world because I use a ton of desktop apps that aren't "written 6 times" - and write a few, including a music & other things sequencer (https://ossia.io).
Just amongst the ones running on my desktop right now, Strawberry (Qt), Firefox (their own toolkit), QtCreator (Qt), Telegram Desktop (Qt), Bitwig Studio (Java), Kate (Qt), Ripcord (Qt), all work on all desktop platforms with a single codebase. I also often use Zim (GTK), which is also available on all platforms, occasionally Krita (Qt) and GIMP (GTK), and somewhat rarely Blender. Not an HTML DOM in sight (except FF :-)).
In my experience Java GUIs are consistently even more laggy and unresponsive than Electron apps. They may be lighter in terms of memory, but they never feel lighter. Even IntelliJ and family - supposedly the state of the art in Java apps - feel like mud on a brand-new 16" Macbook Pro.
Lighter in terms of memory? No way. IntelliJ is always at a few GB per instance. They are indeed laggy as hell. With the latest macOS, IntelliJ products specifically bring down the entire OS for ten to twenty minutes at a time, requiring a hard reboot, without which the cycle starts again. Except it's not Java or IntelliJ, it's the OS. I only wish they were Electron apps. That way I wouldn't have to return a $4400 brand new 16" MacBook Pro because of its constant crashing due to horrible native apps. All apps can be shitty. At least Electron ones are cross-platform, work, and generally do not bring the whole system to a standstill followed by a hard crash. All while using about the same resources as Electron apps.
Interestingly they seem to run exactly the same on horribly low spec machines. I blame the jvm's love for boxing and unboxing everything in IL land. Of course by now I'd hope it's less wasteful - last I spent serious time in Java was 2015.
I've definitely noticed the same on IntelliJ but weirdly enough Eclipse feels just fine. IIRC both are written in Java, so maybe it comes down to the design of IntelliJ moreso than the limitations of the JVM?
I used Eclipse for a while before switching to IntelliJ around ~2015 and it actually seemed like a vast improvement, not just in terms of features but in terms of performance. It still wasn't "snappy", but I figured I was doing heavy work so that was just how it was.
Fast-forward 5 years and I've been doing JS in VSCode for a while. My current company offered to pay for Webstorm so I gave it a try. Lo and behold it was still sludgy, but now unbearable to me because I've gotten used to VSCode.
The one other major Java app I've used is DBeaver, which has the same problem to an even greater degree. Luckily I don't have to use it super often.
I work daily in a codebase with 20M lines and RubyMine can still search near-instantly compared to say VS Code. One thing that's still true is that there are sometimes long pauses, presumably garbage collection, or I suspect more likely bugs as changing window/input focus can sometimes snap out of it.
If that's the case with IntelliJ, then you probably haven't changed the JVM heap size, which IntelliJ defaults to something very small (2 GB, maybe).
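For anyone who wants to try: in recent JetBrains IDEs you can bump it via Help > Change Memory Settings, or open Help > Edit Custom VM Options and raise the -Xmx line yourself (the value below is just an example; pick whatever fits your machine):

    -Xmx4096m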
Qt is excellent, but C++ is quite a tough pill to swallow for many, especially as Qt layers a macro system on top. I predict that native desktop apps will make a comeback when there's a Qt-quality cross-platform framework in a more approachable language (Rust, Nim, or similar).
The difference between "Qt native" and "native native" (e.g. Win32 or Cocoa) is still noticeable if you pay attention, although it's not quite as obvious as between Electron and the former.
(Likewise, applications using the JVM may also look very convincingly like native ones, but you will feel it as soon as you start interacting with them.)
Is it really even worth highlighting though? I use Telegram Desktop (Qt) daily and it is always, 100% of the time completely responsive. It launches basically instantly the second I click the icon and the UI never hangs or lags behind input to a noticeable degree. If we transitioned to a world where everyone was writing Qt instead of Electron apps we would already have a huge win.
You're fundamentally mistaken about where Qt sits in the stack - it effectively sits in the same place as USER32/WinForms in Windows or NS/Cocoa GUI widgets of OSX. It is reasonable to think of it as an alternative native GUI library in that sense. If it is slower, it's because an implementation of something is slower, not because of where it lives or an abstraction cost.
Qt pretty much draws using low-level drawing APIs on the respective platform. And although Qt itself is not written in the most performance-sensitive C++, it is still orders of magnitude faster than most (and it's not like Chrome doesn't pay overhead). People rag on vtable dispatch speed, but jeez, it's still orders of magnitude faster than something like ObjC, which served Apple quite well for years.
The performance of a Qt app is more likely a function of the app itself and how the app developers wrote it.
But no, you're not noticing any microsecond differences in C++ overhead for Qt over "native native" - and you're basically comparing the GUI code of the platform, since Qt does its own rendering. Win32 is mostly pretty good, NS is a mixed bag, and Gtk+ is basically a slug. In all cases there is some kind of dynamic dispatch going on, because that is a fundamental pattern of most GUI libraries. But dynamic dispatch is almost never a factor in GUI render performance. Things like recalculating sizes for 1 million items in a table on every repaint are the things that get people into trouble, and that is regardless of GUI library.
This gets said a lot, and granted VSCode is certainly one of the best performing Electron apps, but it definitely is not indistinguishable from native apps. Sublime, Notepad++, or TextAdept all fly compared to VSCode in terms of performance and RAM efficiency.
On Mac, VSCode does a better job than many apps at emulating the Cocoa text input systems but, like every electron app, it misses some of the obscure corners of cocoa text input system that I use frequently.
If we’re going to use JavaScript to write native apps, I’d really like to see things like React Native take off: with a good set of components implemented, it would be a first class environment.
I use VS Code daily (because it seems to be the only full-featured editor that Just Works(TM) with WSL), but it can get pretty sluggish, especially with the Vim plugin.
I don't think so; Microsoft wanted to fork Electron in the past to replace Chromium with EdgeHTML, but it didn't happen. VSCode is powered by the Monaco Editor (github.com/microsoft/monaco-editor), and VSCode feels snappier than, say, Atom, probably because of TypeScript.
This is something with your configuration. Out of the box, VSCode will immediately show you the file but disable tokenization and certain other features. I regularly open JSON files up to 10 MB in size without any problem. You probably have plugins which impede this process.
When you say web platform, do you mean a browser? Is using a browser really more optimised and performant than installing an application on your desktop?
Curious what desktop do you run your browser under?
I'll give you an example: a simple video-splitting application. A web platform requires uploading, downloading, and slow remote processing. A local app would be hours quicker, since the data is already local.
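To make that concrete, a local split is a one-liner with something like ffmpeg (file names and timestamps here are made up), with nothing ever leaving the machine:

    ffmpeg -i input.mp4 -ss 00:00:00 -t 00:10:00 -c copy part1.mp4
    ffmpeg -i input.mp4 -ss 00:10:00 -c copy part2.mp4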
- Qt is actually the native toolkit of multiple operating systems (Jolla for instance and KDE Plasma) - you just need to have a Linux kernel running and it handles the rest.
It also makes the effort of looking up the user's widget theme so it can blend in with the rest of the platform, while web apps completely disregard that.
- Windows has at least 4 different UI toolkits now which all render somewhat differently (Win32, WinForms, WPF, the upcoming WinUI, whatever Office is using) - only Win32 is the native one in the original sense of the term (that is, rendering of some stuff was originally done in-kernel for more performance). So it does not really matter on that platform, I believe. Mac sure is more consistent, but even then... most of the apps I use on a Mac aren't Cocoa apps.
- The useful distinction for me (more than native vs. non-native) is: if you handle a mouse event, how many layers of deciphering and translation does it have to go through, and are those layers native code (e.g. compiled to asm)? That reliably means user interaction will have much less latency than if it has to go through interpreted code, GC, ...
Knowing what I know about Qt and what I've done with it in my day job, it's basically the best-kept secret on HN. What they're doing with the Qt 6+ licensing... I'm not sure how I feel, but as a pure multi-platform framework it really is the bee's knees.
I've taken C++ Qt desktop apps that never had any intention of running on a phone, built them, ran them, and everything "just worked". I was impressed.
This is not really accurate. Qt relies on a lower level windowing system (X Window, Wayland, Cocoa, win32 etc. etc.).
Also worth noting that many creation-centric applications for the desktop (graphics, audio, video etc. etc.) don't look "native" even when they actually are. In one case (Logic Pro, from Apple), the "platform leading app from the platform creator" doesn't even look native!
> This is not really accurate. Qt relies on a lower level windowing system (X Window, Wayland, Cocoa, win32 etc. etc.).
Qt also supports rendering directly on the GPU (or with software rendering on the framebuffer) without any windowing system such as X11 or Wayland - that's likely how it is most commonly used in the wild, as that's one of the main ways to use it on embedded devices.
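For the curious, that goes through Qt's platform plugin selection; on an embedded board you can typically launch an app straight onto the GPU or framebuffer with something along these lines (the app name is a placeholder):

    QT_QPA_PLATFORM=eglfs ./myapp
    ./myapp -platform linuxfb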
Well, yes. I can't tell too much because of NDAs, but if you go buy a recent car there is a good chance that all the screens are rendered with Qt on Linux or an RTOS - there are likely more of those than desktop Linux users, as much as that saddens me.
On macOS Qt doesn't really use Cocoa; it uses Quartz/CoreGraphics (the drawing layer rather than the application layer). Note that Apple's pro apps are native controls with a UI theme: they usually behave like their unthemed counterparts.
Kinda - QML is a programming language, and Qt Quick is a UI scene graph (with the main way to use it being through QML) which also "renders everything" and by default makes less effort than widgets to look like the OS.
It's not platform-provided in my experience, but browser provided. The result of <button/> when viewed in a browser on macOS has no relation to the Cocoa API in any meaningful sense.
I'm pretty sure that when you render just a <button> in at least Safari, the browser will render a native Cocoa button control. If you set a CSS property like background colour or change the border, then it will "fall back" to a custom rendered control that isn't from the OS UI.
I did a small bit of research into this, and found plenty of "anecdotal" evidence, but nothing confirming it for sure. I'm looking at and interacting with the controls and they seem pretty native - if they're a recreation then that's pretty impressive :)
A GUI is a collection of elements with specific look and behaviour. A Desktop Environment is a collection of GUI(s), tools and services. Native means you have something which follows this look and behaviour 100% and can utilize all the tools and services.
Implementing the look is simple, adding the behaviour is considerably harder, and utilizing the services is the endgame. A web UI usually does none of this, or only parts; it all depends on the constellation. But there is usually an obvious difference at some point where you realize whether something is native or just an attempt.
I'd also love for Mac and Windows to make it really easy to get a vendor-blessed version of Qt installed.
Imagine if, when trying to run a Qt app on Windows, a dialog box could pop up saying: "Program X is missing Y, install it from the Windows Store (for free): Yes / No"
I don't think it makes sense to use it on even small laptop screens, to be honest, so I don't really see the point. You'd have to redo the UI and the whole paradigm entirely anyway for it to be meaningful on small devices. But there is certainly no obstacle to porting - from my own experience with ossia & Qt, it is fairly easy to make iOS and Android builds; the difficulty is in finding a proper iOS and Android UX.
In particular C++ code works on every machine that can drive a screen without too much trouble - if the app is built in C++ you can at least make the code run on the device... just have to make something pretty out of it afterwards.
The point is that the parent poster mentioned tablets and phones which you don't address in your point. Of course your examples aren't written 6 times, but they support fewer platforms too (only desktop).
Off-topic, but regarding Bitwig: of course it makes perfect sense to use it on smaller devices. Not phones, but tablets. It's even officially supported with a specific display profile in your user interface settings (obvious target amongst others: windows surface). This is particularly useful for musicians on stage.
I think he did not mean "written 6 times", but more like Compiled 6 times, with 6 different sets of parameters, and having to be tested on 6 different devices.
Isn't it? The UI is rendered using web technologies inside a specialized browser, and it's written in a web-specific language. I might consider an Electron app a hybrid app (one that leans heavily towards the web side), but certainly not a native app.
Concerning the desktop, I honestly don't see Windows users caring much about non-native UIs. Windows apps to this day are a hodgepodge of custom UIs. From driver utilities to everyday programs, there's little an average Windows user would identify as a "Windows UI". And even if, deviations are commonplace and accepted.
Linux of course doesn't have any standard toolkit, just two dominant ones. There's no real expectation of "looking native" here, either.
Which leaves macOS. And even there, the amount of users really caring about native UIs are a (loud and very present online) minority.
So really, on the Desktop, the only ones holding up true cross-platform UIs are a subset of Mac users.
During my days of Windows-exclusive computing, I wondered what people meant by native UIs, and why do they care about them. My wondering stopped when I discovered Mac OS and, to a lesser extent, Ubuntu (especially in the Unity days). Windows, with its lack of visual consistency, looked like a hot mess compared to the aforementioned platforms.
And now that I think about it, would this make it easier, even by an infinitesimal amount, for malware to fool users, as small deviations in UI would fail to stand out?
I don't know exactly what time period you're referring to, but back when Java was attempting to take over the desktop UI world with Swing, it was painfully obvious when an app wasn't native on Windows. Eclipse was the first Java app I used that actually felt native, thanks to its use of native widgets (through a library called SWT) instead of Swing.
As far as I know, you can even write your own applications based on SWT which would make jvm apps pretty consistent and performant across platforms, but not many people seem to have chosen that route for some reason.
> And now that I think about it, would this make it easier, even by an infinitesimal amount, for malware to fool users, as small deviations in UI would fail to stand out?
I don't think that's how fraud works in actuality; malicious actors will pay more attention to UI consistency than non-malicious actors (who are just trying to write a useful program and not trying to sucker anyone), inverting that signal.
I don't know, I've read that e.g. spam will not focus on grammatical accuracy because they want to exclude anyone who pays attention to details. Also most fake Windows UIs from malicious websites I used to see weren't exact matches of the native UI.
I think this has changed. People used to be very particular about how their apps looked on different native platforms, like you say. But I don't think it's like that anymore. People are more agnostic now, when it comes to how user interfaces look, because they've seen it all. Especially on the web, where there's really no rules, and where each new site and web app looks different. I believe this also carries over to native apps, and I think there's much more leeway now, for a user interface to look different from the native style, as long as it adheres to any of the general well established abstract principles for how user interface elements ought to behave.
The other thing is that I trust the web browser sandbox. If I have to install something I’m a lot more paranoid about who wrote and whether it’s riddled with viruses.
Qt is LGPL licensed is it not? LGPL license means you can distribute your app closed source, so long as the user can swap out the Qt implementation. This usually just means dynamic linking against Qt so the user can swap the DLL. The rest of your app can be kept closed source.
On iOS and Android the situation might be a bit more complicated, but this discussion[0] seems to say that dynamically linking would also work there.
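For what it's worth, dynamic linking is the default with Qt's build tools anyway; a minimal CMake setup (project and file names are made up) looks roughly like this, and as long as you ship the Qt libraries as separate .dll/.so files next to your binary rather than linking them statically, the LGPL swap-out requirement is straightforward to satisfy (not legal advice, obviously):

    find_package(Qt5 COMPONENTS Widgets REQUIRED)
    add_executable(myapp main.cpp)
    target_link_libraries(myapp PRIVATE Qt5::Widgets)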
Qt doesn't require that, but even if it did, writing it 6 times is vastly more expensive. People would rather spend 500k writing it 6 times than 5k on a license, because they are somehow offended at the notion of paying for dev software or tooling.
It's a major reason UI coding sucks. There is no incentive for anyone to make it not suck, and the work required to build a modern UI library and tooling is far beyond what hobbyist or spare time coders could ever attempt.
Targeting Windows alone gets you 90% of the desktop market. 95% if you make it run reasonably in Wine. This argument is often used, but it's an excuse.
Anything that you need to run on a desktop can't be used effectively on a touch screen anyway, so phones and tablets don't really count for serious software. (Writing this comment is stretching the bounds of what I can reasonably do on an iPhone.)
95% of a market that has shrunk nearly 50% over the last decade.
In many ways, the consumer and non specialty business are post desktop. Turns out documents, email, and other communication apps cover 90% of use cases. Anything that requires major performance gets rendered in a cloud and delivered by these other apps.
They're not refuting that. They agreed that it's "95% of the market." Their point is that the overall desktop has shrunk, regardless of Windows's share of that.
Shipments of desktops/laptops don't tell the whole story. I'm still using a 2009 desktop (with some upgraded components) and it wouldn't show up in any of those stats. Similar story for a lot of my friends. They still use desktops/laptops daily, but they don't replace them as often as in the 2000s.
What do you consider as a specialty business? There are hundreds of millions of professionals - scientists, engineers, accountants, animators, content creators, visual artists, chip design folks, folks writing drivers for equipment, photographers, musicians, manufacturing folks, etc who simply cannot earn a living without native apps. Sure, maybe when those people go home, they don't always need native apps, but IMHO its a mistake to only think about them in such a narrow scope.
You name several that are speciality businesses and are part of that 10%.
But there are definitely examples within Accountants, Animators, and Musicians where Phones, Tablets, and Chromebooks (not specialty desktop apps) have taken over the essential day to days.
For animators, the iPad vs. Surface face-off is a great example - also a case where they offload concepts to "the cloud" to render instead of a Mac Pro.
Well, I am not talking about examples, I'm talking about entire industries. For example, there is absolutely no way for my industry (vaccine R&D) to do any work without native apps. Even for animators, no native apps = no Pixar. Maybe you were thinking of some other kinds of animation. I don't disagree that you can find small examples here and there of people not needing native apps in any industry.
Lots of people use an android or an iPhone as their main computer nowadays. If you're targeting keyboard/mouse style input, then Windows is probably close to as popular as ever. But if you're targeting people using some kind of device to access your service in exchange for money, Windows is wasting your time.
Any PC from the past decade is still mostly serviceable.
I'm finally upgrading from an Intel Sandy Bridge processor after nine years, and I still don't need to - it's cranking along pretty well as a dev and gaming machine still.
I'm surprised nobody mentioned Emscripten. Unfortunately I have no experience with it, but I gather you could write a native app and also get it to work in the browser with it. I also gather there could be a performance penalty, but hey... there's also a native app! It feels like we could reverse course and get first-class native apps again.
And many of those apps end up with terrible performance. I'm sure it's possible to write a performant electron app, but I don't see it happen often and it's disappointing.
Or you could use languages that allow you to share code, so you have 6 thin native UI layers on top of a shared cross-platform core with all the business logic and most interactions with the external world.
Windows-app-compiled-for-Mac is going to annoy Mac users
And they'll let you know it, too. Unfortunately this has been an issue since the first Macs left the assembly line in 1984. If you point out that based on their share of the software market they're lucky they get anything at all, the conversation usually goes south from there.
Every single app that I use, I try to make sure it is native. I shun Electron apps at all costs, because, based on my anecdotal evidence, people who put in the effort to use the native APIs put a lot more effort into the app in general. It is also more performant and smaller in size, things that I cherish. It also pays homage to limits and to striving to come up with new ways of overcoming them, which hackers used to have to do in the past. I don't think not worrying about memory, CPU, etc. is healthy in the long run. The Slack desktop app is almost a gig in size. That is crazy to me, no matter the "memory is cheap" mantra.
Agreed. If you have a CPU from 2012 onwards, 16GB of RAM and an SSD, that’s a respectable hardware setup. It might not be the fastest piece of kit on the planet, but I don’t see any reason why it couldn’t last another 3 to 5 years without feeling slow.
Electron apps invariably make such kit feel slower than it actually is. You can get good performance out of even older hardware if you treat it well and load it with good software that respects the hardware.
I type this from a 2014 MacBook Air with 8GB of ram, still going strong, no upgrades except a battery replacement. Everything is still as snappy as the first day I got it.
Yep, the new Steam library being a bloated web app is what forced me to install more RAM. I normally don't close out of applications when I'm not using them, so I was always hovering around ~6-7 of my 8 gigs of RAM in use. Then they update the library to the new bloated version, and my computer starts freezing because the library memory footprint is so much larger that my computer was having to use over a gig of swap space.
I have a rig powerful enough to run a lot of last-gen games at 4K with high quality settings, and a lot of current-gen ones at mid to high settings and 1080p. The fucking redesigned Steam library lags, none of the animations (why does it need animations?!) are even close to smooth, and there's massive delay on every input. I've never once encountered a move toward webtech, or toward heavier and "appier" JS use in something that's already webtech, and gone "oh good, this works much better now, I'm so glad they did that."
And somehow it's still dramatically better than the Epic Games launcher.
I don't know what these companies are doing. Clearly they're not paying attention though. This is not merely low-hanging fruit that's going ignored, it's watermelons.
The "<resource> is cheap" mantra also really only makes sense if you are writing server code, where you yourself pay for all the memory your code will ever use. If you deploy code to a large number of users, it makes little sense. If a million users start your app daily, and your app has a 5-second load time and uses 300 MB of RAM, you are wasting over 50 days of user time every day and hogging close to 300 terabytes of RAM.
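A rough back-of-the-envelope version of that math:

    1,000,000 users x 5 s     = 5,000,000 s   ~ 58 days of collective waiting, every day
    1,000,000 users x 300 MB  = 300,000,000 MB ~ 300 TB of RAM occupied across all machines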
So you're telling me I can exchange development time, which I would have to pay for, for end-user resources, which I would not have to pay for? Sounds like a great deal.
Then I hope we would also tax the bad UX of the competing 20-year old Frankenstein applications, which lead to slower business processes (= more resources used as well).
Your last sentence hits the nail - many users don't have a choice in selecting the application, and due to industry fads can't expect to have a better option.
I can make a great chat system with a fast native client, but it won't change the fact that Corporation A paid for a Slack license and won't switch to mine.
People also take it a bit too far. Sure, RAM is cheap enough, but if your application requires 64 GB of memory you may start having other issues.
We have customers who require servers with 64 GB+ of memory for single applications. This runs on VMs in VMware. If an ESXi host crashes, you'd want VMware to migrate your VM to another ESXi host, but that becomes somewhat tricky if you need to locate one with 64 GB of available memory. Unless of course you're way over-provisioned, which is actually pretty expensive. More realistically, VMware will start moving a ton of VMs around to put all those with little memory usage on other hosts, in an attempt to free up 64 GB for your VM. This takes time.
It can be difficult to explain to people that they really should look at their memory consumption, if nothing else to plan for fail-over.
Waste the company does not have to pay for is not waste as far as they are concerned. Customers rarely notice that kind of waste either, or at least not enough to do anything about it.
But I am trading my time for my users' CPU and RAM, and evidence suggests many users are willing to pay the CPU and RAM to get more apps with more features.
It is true that there is a correlation between lower-level programming and better programming in general. You probably won't see someone writing asm but creating crazy O(n^2) algorithms that run on every frame, with memory allocations in the inner loop.
At the same time, a native Win32 program can pack significant functionality into a 20 KB exe. Put these together and you have a program where everything is instant, all the time, on any computer. The original uTorrent was just over 100 KB and installed and then ran in an instant.
These two refinements together are such a massive difference from any Electron program that it melts my brain when people say that it isn't a problem to have a chat program feel like using Windows 95 on a 386.
People talk about needing cross-platform programs, but something like FLTK has everything most people will need and also runs instantly, while adding only a few hundred kilobytes to an executable.
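To give a sense of how little code that takes, here is roughly what a minimal FLTK window looks like (a sketch; the file name and build line are assumptions):

    #include <FL/Fl.H>
    #include <FL/Fl_Window.H>
    #include <FL/Fl_Button.H>

    int main(int argc, char **argv) {
        // One window with a single button; the whole toolkit adds only a few hundred KB.
        Fl_Window win(320, 160, "Hello FLTK");
        Fl_Button btn(110, 60, 100, 40, "Click me");
        win.end();
        win.show(argc, argv);
        return Fl::run();
    }

    // build: g++ hello.cxx -o hello $(fltk-config --cxxflags --ldflags)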
> You probably won't see someone writing asm but creating crazy O(n^2) algorithms
I watched a lecture by Bjarne Stroustrup that he gave to undergraduate CS majors at Texas A&M where he coded a solution to a problem using linear scans and then a "better" solution using better algorithms with better big O performance.
Then he did something interesting. He did a test on a tiny data set to demonstrate that the solution with linear scans was faster, and he asked the audience to guess at what data size the more efficient algorithms would start to beat the linear scan. After the audience members threw out a wide range of guesses he confessed that he didn't know. He had tried to test it that afternoon, but the linear scans outperformed the "better" algorithms on any data set that he could allocate memory for on his laptop.
IIRC he finished by telling them that professionals often do performance optimization the opposite of how the books present it. Using an algorithm with optimal big-O scaling isn't the optimized solution. It's the safe answer that you start with if you aren't bothering to optimize. When you need better performance, you evaluate your algorithms using real data and real machines and qualify your evaluations based on the characteristics (size, etc.) of the data.
You are focusing on an example and conflating it with the actual point that I'm making, which is that Electron is not only slow, but compounded by slow programming on top.
That being said... I know exactly what you are talking about, and it was always strange to me, because it was actually iterating through the data every time to find a value first, so the iteration through the linked list would always kill the performance. Even so, basic linked lists are practically obsolete. This is not a good example of algorithmic complexity, because the complexities were actually the same.
I strongly disagree with the idea that lower level is better.
Yes, lower-level languages allow for programs with good performance, small executables, and so forth. There are many domains where they are clearly the way to go.
But higher-level languages allow for better safety, tremendous productivity, portability, exploration, and flexibility.
If you keep your data in an SQL database, you can easily query and update it in any number of ways that you didn't initially realize you wanted. If you instead keep it in hand-crafted C structs, you can probably provide awesome performance for whatever you originally thought you needed. Once your needs go outside of that box, you'll have to spend significant development effort.
The correct choice depends almost entirely on the domain.
You are arguing against a point I didn't make. I'm not trying to rehash nonsense language arguments. I'm saying that many times the easy Electron route is also correlated with programming that gives poor performance even outside of the Electron parts.
How about "being aware of what your abstraction layers cost, and being palpably aware of every needless contortion you create"
know what everything does.
Call it from up on high? Fine, but only if you can literally trace that high-level call down to the machine code it emits :D C compiler suites can do that no problem (a small example follows below):
"gcc -S mycode.c"
For an appropriate dose of humility, so that you know that I'm not elevating myself here, but pointing out reality, check out GCC or LLVM source code.
Something like Lua or Berkeley DB can be defined inside your program in a matter of a few hundred lines of included library code, but what does it DO?
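To make the gcc -S point above concrete, here's the kind of round trip being described (file and function names are made up; the same idea works with plain gcc for C):

    // sum.cpp
    int sum(int a, int b) { return a + b; }

    // g++ -O2 -S sum.cpp -o sum.s
    // then open sum.s to see exactly which instructions that "high level" call becomes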
Bringing SQL and a database on board is rather odd for a desktop app, wouldn't you say? Configuration should ideally be flat files, or managed via the app's GUI, in which case an embedded database like Berkeley DB is usually more relevant. Your mention of SQL smacks of "all things are nails, always use hammers", to me at least.
Have you worked with the actual computer itself in any capacity? I mean ASM, C, C++, etc, but essentially being aware of what an ABI is, what types actually are (memory shape patterns so we can define physical memory in terms of our data structures)
JavaScript is not computer programming, but rather programming the browser, or its disembodied, transplanted JavaScript engine. The animal is completely different from physical memory and actual instructions.
Computers essentially manipulate memory structures.
The further away from this you get, the more likely that your abstractions will be leaky, not fit what computers are actually DOING with your data, and this results in beautiful script driving janky machine code.
Seriously, while we all like to pretend that everyone is equally special, let's recall that someone out there is a VBScript-for-Word expert, and that this is basically a virtual machine that is itself just defined inside someone else's program.
Technological stacks are defined in terms of semi-arbitrary made-up things other people made-up and that you just need to know how to use.
The "all things are nails, always use hammers" mentality is almost explicitly what I am arguing against.
My mention of SQL was particularly deliberate. It's an especially successful high level declarative language with clear semantics. Implementations provide sophisticated execution engines for optimizing and efficiently running queries. It is quite a lovely separation of concerns that gives you great flexibility and good performance.
Obviously SQL would be a disastrous choice for, say, storing the pixel data in your video codec. Meanwhile, hand-coded C data structures and algorithms would be a disastrous choice for an inventory management system. Tradeoffs everywhere.
a well DESIGNED technological stack WILL allow for high-level control of low-level structures.
Electron and Browser-based apps make a deliberate tradeoff that may be suitable for some kinds of apps (Balena Etcher, as I mentioned. You click a button and some process starts and alerts you when it's done.)
I would simply say that the OP should reverse the question:
"in which cases can an electron app suffice for a desktop application" and not presume the death of desktop apps.
Not sure that I entirely agree there: I could easily imagine someone writing too close to the metal to stick to list iteration where a hash would be better and so on, unless it's prohibitively slower, whereas someone slightly higher would freely choose whatever they feel appropriate for the situation. They might often guess wrong and take a structure that is overkill for the typical dataset but the penalty for that will be negligible compared to the one paid for using a badly scaling structure on way too much data.
I agree, in part because of performance, but also in part because I value being able to bootstrap from source as much as possible. There are only a few projects seriously working on this, one being GNU Guix: https://guix.gnu.org/
The trouble with Electron apps though (and most Node apps) is the sheer number of dependencies. It's just infeasible to package them for a distro if you care about their dependencies being packaged as well - at least, not without the entire process being automated.
There's nothing really blocking much on the AUR. There are some guidelines that are enforced (naming, no duplicates, no maliciousness, etc.), but that's it. Anyone can upload a build script (PKGBUILD) for anything.
If you want to see how it’s done search the package name and AUR and you can see the build script right on the website.
I use it daily. It's OK. It's better than having a full electron app/browser running, but it's fairly incomplete. (For example, in Slack I can't find a way to set my status.) I also have to reauthenticate every Monday morning which means launching Slack in a web browser to grab my credentials. It's a pain.
not saying you have any choice in the matter of using Slack or not, but I can assure you that Slack software is aware of their bloat and inefficiency and they don't care.
The core task Slack is charged with WAS done 2 decades ago with significantly lower resource usage?
Why does this matter? Because Slack is not your main productivity app (I HOPE); it's effectively just a background process most of the time, a communication channel you keep open and check in with from time to time.
Being able to keep an app open while you run your main productivity suite is a clear win. Slack loses in this respect.
But go on, downvote and flag me some more for speaking truth to power...
it's rather obvious, i would say...
> OTOH, if you are using Slack, you probably deserve it :P
then you say:
> not saying you have any choice in the matter of using Slack or not
Now, I don't actually run Slack nowadays. I run Teams, which sounds like it's efficient since it only uses 600 MiB RAM rather than a full gigabyte.
But the choice between Slack, Teams, and any other product we might use is determined not by the user, but by the company we work for. Companies have a tendency to decide what works without respect to empirical data. Business people are unconcerned with mere technical matters; they optimise for purchase processes.
I think you got downvoted for saying we have any choice in the matter. When you made a valid criticism, your comment fared much better.
I am on the other side, building Electron apps. I appreciate the flexibility and ease, because I would rather iterate on ideas than learn three different sets of OS hooks.
I do agree that memory usage is too high on these types of apps and we as developers can be lax about performance.
If you use TypeScript, the tooling is great, and the parent also mentions it, so I assume they are using it.
Code reusability between browser/desktop/mobile is one. Easier to find developers, faster development speed due to ecosystem, previous experience/familiarity etc. I guess.
JavaScript is actually not too bad. It lacks static types, but Typescript gives you one of the better type systems. But I'd take JavaScript over PHP or Bash any day of the week and twice on Fridays.
If I had no experience with GUI development, I'd prefer to learn web tech rather than desktop tech like Qt, because it looks like a more popular and versatile skill.
I disagree. I mean they may literally not know that your app uses Electron, but they'll certainly have the feeling of, "Oh, that janky app that doesn't quite work correctly and makes my system slow."
I think that might be true on Windows or Linux, but not on the Mac. It seems like Mac developers care more about making their application feel good to use.
Bonus points if it includes offline documentation. As nice as it is to have "updated" online docs, the reality to me seems to be defined more by broken links than by actual up-to-date documentation.
I'm going to be slammed for using these two words, but for any real work you need to have as few layers of indirection between the user and the machine as possible, and this includes the UX, in the sense that it is tailored to the fastest and most comfortable data entry and process monitoring.
I don't see any `web first` or Electron solution replacing Reaper or Blender in the foreseeable future. One exception I'm intrigued by is VS Code, which seems to be widely popular. Maybe I need to try it to form my own opinion.
My personal evolution has gone from Sublime Text 3 to Atom to VS Code to Sublime Text 3. I've never been a heavy plugin user, mainly sticking to code highlighting. The thing I really like is speed. Sublime Text rarely chokes on me. I love being able to type `cat some_one_gigabyte_file | subl` and getting it to open up with little difficulty. VS Code chokes on files of non-trivial size, and that was the thing I liked about it the least.
For anyone wondering why I'd open up a 1 GB file in a text editor, I guess the answer is largely because it's convenient. Big log file? No problem. Huge CSV? No problem. Complete list of AWS pricing for every product in every region stored as JSON? No problem.
VS Code isn't really designed as a general purpose text editor. It's meant as a development environment.
If MS chooses to optimise the experience for 99% of the use cases (i.e. editing source code, which should never even approach 1 GB), then that's the correct call IMO.
>> For anyone wondering why I'd open up a 1 GB file in a text editor, I guess the answer is largely because it's convenient.
I can completely appreciate the use of a text editor to open a massive log file, etc, I just don't think that's something VS Code is designed for. You can always use Sublime or Atom to open those files; while getting the nicer (IMO) dev experience with VS Code.
Over time I've tried quite a few popular text editors, Notepad, Emacs, Vim, UltraEdit, Sublime Text, and of course VSCode.
VSCode is surprisingly good for a Microsoft product, and they had to do some crazy smart engineering work to make it not suck while being built on top of Electron.
That said, it is still quite slow and memory-hungry. I went back to Sublime Text 3 a few weeks ago and I am not going back to VSCode.
I'm an atom user, and it chokes for the very same reason.
However I also use vim for tweaking server side stuff, and use less by default whenever I want to read something (logs is an obvious one)...
This is both for speed and for UX; I believe vim-style navigation (which less basically gives you) is great for reading and searching. What I cannot stand, though, is doing more than small edits in vim. For development (and I mean the code-is-flying-around-like-crazy stage of development, not read-for-an-hour-and-make-tweaks), I am fastest with the kind of flexibility Atom provides.
I know it can be tempting to have one tool for everything when it seems like the tools are supposed to be doing the same thing, but in my mind lean text editors tend not to compete with the big fat slow electron style editors - so just use them both, for their respective strengths.
> Complete list of AWS pricing for every product in every region stored as JSON
Is this a hypothetical file that you mention or something you actually have? Asking since I have a use-case for this data and am interested in knowing how to get it. I have read AWS has APIs for pricing info - is that where you got the data from?
I don’t open big files at all. There’s no point. Files exist as data made to be transformed from one form to another. It is only worth looking at a file in its final form unless you are making some kind of edit.
And even then, I make edits on large files through a series of commands, never opening the file.
By thinking of files in this way, it becomes easy to create programmable tool chains for manipulation.
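A sketch of what that looks like in practice (file names are made up); none of these ever load the whole file into an editor:

    grep ERROR app.log | cut -d' ' -f1 | sort | uniq -c                        # tally the first field of matching lines
    sed 's/staging\.example\.com/prod\.example\.com/g' in.conf > out.conf      # bulk edit without opening the file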
Jetbrains IDEs (and others for that matter) provide so much more than the glorified text-editors, including extensive debugging support (both in my own code, library code, platform code), code-autocompletion, code navigation, code-formatting, refactoring, linting, static-analysis (works well for Python as well), great syntax highlighting, spell-check, a good plugin ecosystem. I'd never go back to editing code without that support.
The Jetbrains ecosystem is real cheap too, I pay US$160/year for the personal-license all-product suite I can install and use anywhere ... and I use PyCharm (Python), IntelliJ IDEA (Java et al), and Datagrip (DB) extensively, dipping into CLion (C++/Rust) as well ... but they have IDEs for many other languages and ecosystems as well. It's definitely a good deal.
Jetbrains suite, along with Docker, Atlassian SourceTree, and Homebrew (and connection to AWS/Kubernetes) are my main tools these days.
I keep periodically re-trying VSCode, but holy cow. It's a massive step down from a Jetbrains IDE, in every single language I've dev'd in.
Jetbrains stuff works, VSCode mostly handles the basics if it's possible to configure it correctly. Which is quite the achievement, and it's a very reasonable option and far better than much that came before it. But it's not where I want to spend my time if I can avoid it.
You're working with statically typed compiled languages tho. Once you try using a dynamic language you realize another editor is enough IMO. I use emacs for anything dynamically typed (including compiled languages like elixir) and intellij for scala/java.
Phpstorm compared to vim is like an oceanside resort compared to a walk in the desert without a water bottle. If there's not thousands of bugs I have prevented or discovered thanks to phpstorm, I am not surprised.
Jetbrains products are absolutely critical if you're using a dynamic, uncompiled language and to turn down the offer is professional misconduct even if it requires buying a new computer with more ram. I don't know who thought it's a good idea to pretend that a typo in one use of a variable is a legitimate expression of developer intent, but Jetbrains saves your users from that hell.
I feel like "tool use" follows roughly the same curve as a Gartner Hype Cycle, but without an upper bound on the right (as it implies a below-peak asymptote).
In the very beginning, less is often better, since the tool is prompting you with too many things you don't understand. Then you get past that point and you're massively more productive, since it's catching all your simple mistakes. Then you become disillusioned since it doesn't catch all mistakes, and you start learning in detail how it has failed you, and you just (╯°□°)╯︵ ┻━┻ the whole thing (the "real man" trough). And in the end you go back to sophisticated tools, as you realize a 70% solution can still give you magnitudes more productivity.
But the tooling on emacs or whatever is up to par. That was my original point, not that a plain text editor is a real man's way, just that it's good enough. PyCharm or whatnot isn't necessarily much better than what you can do in emacs, but it isn't portable to other languages or as flexible. A text editor like vi or emacs isn't much different than an IDE's functionality when looking at a dynamic language. It's a WHOLE DIFFERENT WORLD with scala though (IMO - that's arguable but refactorings etc aren't on parity with intellij), and I don't foresee myself ditching intellij for scala dev any time soon.
I remember showing a scala developer who was using sublime the "extract method" feature and some refactorings in intellij and he was like "HOW DOES IT KNOW ABOUT THE CODE THOUGH???" - the IDEs have great features, but they're less differentiating for dynamic languages as a lot of the OSS tools are just as good. Eg VS Code MS Python extension for example. It's just great.
It's miles better on Python and most javascript that I've touched (VSCode's ecosystem does tend to have more breadth, and if you're working on something that VSCode has plugins for but Intellij does not, yea - VSCode can be noticeably better for most purposes). Most commonly around stuff that requires better understanding of the structure of the language / project, like refactoring and accurately finding usages.
But yes, for many dynamic languages a fat IDE is less beneficial, especially for small-ish projects (anything where you can really "know" the whole system).
Native UIs seem to matter more for small to medium-size apps. Huge all-encompassing things like Blender, IDEs, etc. seem to benefit from a unified, attractive, easy-to-use UI, but it doesn't seem to matter quite as much that it's native. These things are intrinsically huge too, so bloat matters less.
I don't think I agree with you; but I will accept that a cross platform toolkit is necessary because you cannot write three tools of the quality of one Jetbrains. But I think you could do the same in Qt with ~manual memory management and it would be significantly more efficient. Qt, I think, is less like Java and rather more akin to an alternative native toolkit (in much the same way that you can choose on X, "oh, I will use Gtk; I will use motif; I will use Qt", you can say the same on Windows "oh, I will use winapi; I will use WPF; I will use UWP; I will use Qt" or MacOS "oh, I will use Cocoa; I will use Qt"). It just happens to be provided by a third party so there's no OS lockin.
I have no problem getting more RAM for a JetBrains product, because they are cheap compared to the cost of a new laptop. But it would be nice if a 16 GiB laptop were able to cope with my codebase, my web browser, and my VM.
For comparison, I just opened up an org-mode file in Aquamacs (Emacs with macOS native GUI) and it weighs in at 105MB, which is actually lower than I would have guessed.
I am running Emacs on macOS, and it only takes 43 MB barebones, and my regular setup takes about 81 MB. Both opening the same org-mode file. Point being that it really depends on what you choose to run on Emacs, and it does not have to be about 100 MB.
Just to add to that, I don't think anyone should be concerned about their text editor taking 200 MB anymore. I doubt it is worth worrying about.
If the electron apps that I ran were the main programs I'm using and weigh in at 200 MiB, I wouldn't worry. But they're mostly background apps - chat, music - that should be using limited resources since I want them on all the time without inhibiting me. Genuine background tasks. And they're using a lot more than 200 MiB.
(I use PhpStorm and Visual Studio Pro and Rider as my main foreground apps. If Jetbrains products used a mere 200 MiB, I would be worried that something had broken and reset them or reinstall them. But they're not text editors.)
I love vs code, and its plugin system, but if I’m on my laptop without a charger, I use something else. When I’m running vs code, my battery life is cut nearly in half.
What do you use instead? In my experience, normal IDEs (Android Studio, Xcode, Visual Studio) all perform worse than VS Code in terms of memory use and battery. :/
Qt Creator or if my battery is real low, a text editor like Gedit (yes, I’m probably saving more there just by not having features like code completion and syntax checking).
As a Visual Studio user, I can't get into VS Code. For one, the interface moves constantly; things resize at all times. It feels sluggish. However, Visual Studio is also getting worse, so in the end VS Code ends up feeling quicker... VS Code feels _very far_ from Sublime Text's usability to me.
Same here. I am an all-native guy, including back-end C++ servers, but VS Code is very decent. But then again, I think the level of the developers who did the main work is way above average, and MS itself wrote that they worked super hard to optimize it for memory and performance, something the regular developer usually ignores or doesn't really know how to do, due to a lack of understanding of how the lower-level tech works.
Both VS Code and Atom use significant amounts of WebAssembly and low level libraries to achieve that performance. In addition to that they've written their own view layer for an IDE in modern JS which makes it more performant and stable.
I despise electron and html-wrapper apps. But I gotta give credit where it's due. VS Code is pretty good.
With the advent of the new WinUI, React Native on Windows, and Blazor, I'm betting the future of Windows is more web-based technologies mingled with low-level native libraries.
> Both VS Code and Atom use significant amounts of WebAssembly and low level libraries to achieve that performance.
Atom has some internal data structures written in C++. VSCode uses a native executable to do the file search, but no further low level magic is used to make it go fast.
I don't think any of them are using WebAssembly yet.
Figma is pretty much replacing all web design applications precisely because it’s leveraging web tech for collaboration on a single document at the same time.
I think Figma’s success is less about being web first and more to do with filling in gaps in what Sketch offered, especially in collaboration. Today you need to buy at least 2 apps, Sketch and Abstract, to match the feature set of Figma.
Design is one of the areas where one could arguably create a native app, largely because the user base is much more homogenous in OS than most other user bases.
I think we can safely exclude 'collaborative web design' applications from the set of hardcore tools not gaining much from being implemented as web apps for understandable reasons.
Huh? Why? Designers had previously been using native apps for ages — Photoshop, Illustrator, Sketch... Figma has been successful not just because it's collaborative but also because it's performant, powerful, and reliable. Not sure why you think that can't be achieved with other kinds of software
Ok, I see my point being not so understandable, after all. I didn't mean Figma isn't a hardcore tool, I meant that we exclude it because it specifically concerns with web technology (being a tool for web design) and leverages web to implement collaborative usage. So it's probably logical for it to be a web app.
You're still way off. It's not a "web" design tool; it's a visual design and prototyping tool. Folks are designing a lot more than websites and web apps with Figma.
Figma is a great tool and all but it will still take it years, if not decades, to replace immersive content creation technologies like Ventuz, one of the best in the game! https://www.youtube.com/watch?v=nu2FnEVk9_U
No, it’s not an ad...I posted in defense of the power that native desktop apps bring since the use case of Figma was being stretched into territories other than web/mobile UI design...immersive content/interaction creation is that category!
VSCode, because of electron, doesn't allow you to have multiple windows that share state while working on a project. This makes it terrible with multiple screens.
It's not because of Electron. They could have multiple windows, but it would be a massive overhaul of the architecture, so they say to just use another instance.
I would argue that it is, because Electron doesn't allow you to share a JS context across windows. So while it is not impossible, it is much more involved than it would be in most other frameworks. In fact, this is my only gripe with Electron, where I think the usual HN objections about performance, bloat, and lack of native UI elements are overstated and not something that bothers me.
That's basically the whole reason I'm sticking with PyCharm these days. Apart from that VSCode seems to tick all my boxes, but it's a deal-breaker for me. There's some kludgey workaround possible involving workspaces but it's rubbish.
VS Code is slow at basic things like having characters show up on screen after hitting the key. It's good at everything else though so that lag doesn't matter as much.
It probably depends on your hardware. I have an "older" (several years at least) Windows work laptop with iGPU that is quite sluggish with VS Code when hooked up to an external 4K display. However, it's snappy compared to Microsoft Teams in the same situation.
Meanwhile, my similar era MacBook with dGPU hooked up to the same screen is very snappy and I honestly would probably not be able to tell the difference in a blind test between VS Code and typing in a native text box (like the one here in Safari).
I'd consider myself pretty anal about latency -- I was never able to deal with Atom's, for example (disclaimer: I haven't tried it in years). I even dumped Wayland for X11 when I had a Linux desktop because of latency (triple buffering or something?) I couldn't get rid of.
On a real system or just in some synthetic benchmark? Because for me it looks quite fast at barfing out characters.
At least in base mode. It can become slower when the IDE features kick in and autocomplete needs some time to meditate on the state of its world. But this also scales with the size of your active source code, the codebase, and the language used.
Also, this is a problem with all IDEs, not exclusive to VS Code.
I'm using a 5K iMac with Radeon Pro 580 8 GB. I'm making the comparison vs Sublime Text which has no lag and is my standard. This is based on my experience, not benchmarking, just try it side by side and you'll see.
My Sublime Text is decked out in plugins and so on too, there are instances where it slows down but only when it's obviously doing some processing. The basic rendering is fast, totally unlike vscode. But like I said vscode is still "fine" just uncomfortable.
That lag matters to me, but in my experience it's no worse than, say, VIM with the number of plugins that I normally run. Fully bare-bones I imagine vscode is performant as well.
Usually professionals using text editors for their work are not concerned with the absolute keystroke-to-screen latency. It’s totally fine if it’s fast enough, and it is.
“Professional level” covers a lot of different uses... Juicero and Airbus both use CAD, I seriously doubt the latter are going to replace CATIA with OnShape
I'm in agreement with @namdnay...I don't think it will replace enterprise packages like Siemens NX or CATIA anytime soon. I do all of my CAD design in Solidworks and I do like the out-of-the-box thinking/features of OnShape...but...their pricing model ends up being more expensive in the long run ($2100/year for Pro Version). I paid $4k for Solidworks in 2016 and it's paid itself off more than 10x since then...all without the need for a forced upgrade! When a newer version substantiates its value for my workflow, I will upgrade. Not to mention, most of my work can easily be done in Solidworks 2008-2010, because that is the innate nature of CAD packages: regardless of the version, they will get the job done.
So is Autodesk Fusion. But you won't see Autodesk stop selling their desktop software. People don't buy extreme rigs to use for production, and then trade even 5% of the perf for convenience.
I work for an indirect competitor backed by the same commercial geometry kernel (Parasolid) and it did not do well with our models (which, granted, are pretty different from typical mechanical CAD models).
It's quite simple to use the web view process for nothing but the actual UI, and to move any intensive logic to a separate process (or even native code). It's also very possible to make that UI code quite performant (this takes more work, but VSCode has shown that it's possible).
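As a rough illustration of that split (the file names and the toy workload are mine, not from any particular app), a thin UI process can fork a plain Node worker process and simply await its answers:

    // ui-side.ts: the window process only renders; CPU-heavy work goes to a
    // forked Node process so the UI thread never blocks.
    import { fork } from 'child_process';

    const worker = fork('heavy-work.js');  // placeholder file name

    export function runHeavyJob(n: number): Promise<number> {
      return new Promise((resolve) => {
        worker.once('message', (result) => resolve(result as number));
        worker.send(n);
      });
    }

    // heavy-work.ts, the separate process (stand-in for real intensive logic):
    // process.on('message', (n: number) => {
    //   let acc = 0;
    //   for (let i = 0; i < n; i++) acc += i * i;
    //   process.send!(acc);
    // });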
If you don't see a web app replacing Blender, give OnShape a try. I was so surprised by it. It is slower than a comparable desktop app, but it is usable for real-world projects.
I don't find it that crazy, if properly compiled with WebAssembly. The thing is that Blender's UI is all synchronous Python, so, yeah, that and the add-on system would need to be rewritten. Python in the browser is a no-go performance-wise, of course.
> Python in the browser is a no-go performance-wise, of course.
"Running the Python interpreter inside a JavaScript virtual machine adds a performance penalty, but that penalty turns out to be surprisingly small — in our benchmarks, around 1x-12x slower than native on Firefox and 1x-16x slower on Chrome. Experience shows that this is very usable for interactive exploration."[1][2]
The main point, though, is that running Python in the browser is an unnecessary abstraction, because you get a crappier version of something that runs pretty well natively. If you're starting from scratch, I think the browser might be close to native performance in some tasks. Porting existing applications is a pain when you start looking into the details.
The problem is not so much the run-time performance of the code, it's the overhead of loading the Python run-time environment over the network the first time you open the page.
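For reference, the quoted numbers read like they come from a Pyodide-style runtime; that is an assumption on my part. A minimal sketch of what using such a runtime looks like from the page, assuming the pyodide.js loader script has already been included via a script tag; the first loadPyodide() call is exactly where that network-loading cost shows up:

    // Assumes a <script> tag for pyodide.js has already exposed a global
    // loadPyodide(). The expensive part is the first call: it pulls the
    // WebAssembly interpreter and standard library over the network.
    declare function loadPyodide(): Promise<any>;

    async function main(): Promise<void> {
      const pyodide = await loadPyodide();  // slow on first load, cached afterwards
      const result = pyodide.runPython('sum(i * i for i in range(10))');
      console.log(result);  // 285
    }

    main();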
I will come at this from a different, philosophical perspective:
Web apps come from a tradition of engaging the user. This means (first order) to keep people using the app, often with user-hostile strategies: distraction, introducing friction, etc.
Native desktop apps come from a tradition of empowering the user. This means enabling the user to accomplish something faster, or with much higher quality. If your app distracts you or slows you down, it sucks. "Bicycle for the mind:" the bicycle is a pure tool of the rider.
The big idea of desktop apps - heck, of user operating systems at all - is that users can bring their knowledge from one app to another. But web apps don't participate in this ecosystem: they erode it. I try a basic task (say, Undo), and it doesn't work, because web apps are bad at Undo, and so I am less likely to try Undo again in any app.
A missing piece is a force establishing and evolving UI conventions. It is absurd that my desktop feels mostly like it did in 1984. Apple is trying new stuff, but focusing on iPad (e.g. cursors); we'll have to see if they're right about it.
You may not be aware of this, but the person you replied to has worked for years on a native UI toolkit. And they provide justification, too: skills don't transfer between websites as readily as they do between apps. And while I wouldn't say web applications are somehow morally inferior, the fact is that many of today's issues with increasing friction to drive engagement originated on the web and are easy to perpetuate on the web.
Worst thing that ever happened to HN was self-awareness, now your comment is worse than if it had just focused on what he was saying rather than where he said it and my comment is also worse because I included this paragraph. We should probably ban mentioning that this is HN on HN.
He said something like "OS apps have the quality of being windows, you can open different windows into the same data" (okay he literally said "bringing knowledge from one app to another" but it can be re-framed as bringing different apps to the same knowledge - which I argue is more characteristic of the "local experience").
Your "refutation" was to list a few web apps that are considered useful.
A saner way to refute his argument is to talk about open APIs and how you can bring your data into different contexts using them, as well as GUI web tools: things for converting file types, making small adjustments to PDFs, GSuite, and other tools.
However, that refutation falls on its face when you want the window quality, i.e. looking at the same data from different perspectives. The reason is that the computers running these web systems are foreign and disjoint, so you are dealing with a distributed system. Sometimes you are lucky enough that it was designed to function the way you are using it (Google's suite is this to some extent), but most of the time you have to bring your data to them to use these utilities, and then things float out of sync as you move between tools and your Downloads folder fills up with intermediate artefacts.
We are moving back to the local system, and Electron (and those browser APIs for local storage and persistence) are steps in the conversion process. Eventually we will abandon browsers (read: Chrome) altogether in favor of "package management"; something like nix-shell (except secure) has a much more user-friendly social contract while being pretty much the same UI as a browser (but still much much much worse UX). That's where we will end up (some evidence: NLNet is funding the nix-packaging of all the projects they support).
I prefer well-designed desktop applications to web applications for most things that don't naturally involve the web:
* Email clients (I use Thunderbird)
* Office suites
* Music and media players
* Maps
* Information managers (e.g., password managers)
* Development tools
* Personal productivity tools (e.g., to-do lists)
* Games
As Windows starts on-boarding their unified Electron model (I can't recall what they have named this), I suspect we'll see more lightweight Electron desktop apps. But for the record, I like purpose built, old-fashioned desktop applications. I prefer traditional desktop applications because:
* Traditional applications economize on display real-estate in ways that modern web apps rarely do. The traditional desktop application uses compact controls, very modest spacing, and high information density. While I have multiple monitors, I don't like the idea of wasting an entire monitor for one application at a time.
* Standard user interface elements. Although sadly falling out of favor, many desktop applications retain traditional proven high-productivity user interface elements such as drop-down menus, context menus, hotkeys, and other shortcuts.
* Depth of configuration. Traditional desktop apps tended to avoid the whittling of functionality and customization found in mobile and web apps. Many can be customized extensively to adapt to the tastes and needs of the user.
Bottom-line: Yes, for some users and use-cases, it still makes sense to make desktop apps. It may be a "long-tail" target at this point, but there's still a market.
This is a big part of why I still use MacOS. The mail, notes and reminder apps are simple, easy, fast and can be used with third party providers like Fastmail. The Windows apps are fairly sluggish by comparison. I prefer most native MacOS apps in general, Finder/Explorer is a big exception though.
Only if you're using a ridiculously outdated copy:
- Add-on support (changed): Add-ons are only supported if add-on authors have adapted them.
- Dictionary support (changed): Only WebExtension dictionaries are supported now. Both addons.mozilla.org and addons.thunderbird.net now provide WebExtension dictionaries.
- Theme support (changed): Only WebExtension themes are supported now. Both addons.mozilla.org and addons.thunderbird.net now provide WebExtension themes.
Literally here's a doc explaining how XUL has changed as of Thunderbird 68, the most recent version, released about a month and a half ago. Yes, some elements have been removed, but others have been modified and still exist.
And that's in the add-on documentation, not even just internal development docs.
Also, describing information changed in the most recent stable release, a month and a half ago, hardly qualifies anything older as "ridiculously outdated".
I'll grant you that an Electron app is generally 90% C++ (ships a web browser), but I'm not sure if that makes Thunderbird (ships a web browser) any better.
I believe they’re referring to the web as port 80/443 http(s) traffic. It’s the old World Wide Web vs internet distinction, if you will.
Email really is just a protocol for message sending, and it lives on its own port with its own server. If you have an email client and access to an email server (POP/SMTP/however), you can use email over the internet but without the “web”.
Basically, the web email client ought not be the only email client.
`Web`[0] is shorthand for `World Wide Web` which is specifically about HTTP/HTTPS and/or the applications built on that protocol. It is an entirely unambiguous word in this context.
`Internet`[1] is distinct, and that's the general purpose network of networks that you refer to which the Web is built on top of.
Totally fair! Frankly, I only know the distinction from a high school computers teacher who was adamant about the distinction.
I guess the easiest way to get the name is to see the “Web” as a “web” of hyper text documents, where hyperlinks act as the strands in the web (graph edges, if you will).
Honestly, like you say, it’s all built on top of a computer network (yet another web/graph). As a consequence, the distinction never really made a ton of sense to me, either.
Alas, this is the common parlance, so it is what it is.
Nope, different protocols. You don't need web browsers for email, and the email clients that run in web browsers are using mail servers to send and receive.
If the web didn't exist, which it didn't prior to 1991, email would still work fine. There just wouldn't be any web-based email clients.
I make a living developing software only available on Windows and macOS. That said, if I didn't need to interact so much with the operating system, I'd be making a web app. It all depends on what you want to make though. Video editing software? Native app. CRUD app? Web app.
You may also want to consider pricing implications of both. Desktop software can usually be sold for a higher up front cost, but it's tough sell to make it subscription based. SaaS would make your life a lot easier if you have a webapp. People are starting to get used to paying monthly for a service anyway.
Pro tip: If you decide to make a native app, don't use Electron. Instead, use the built-in WebBrowser/WKWebView components included in .NET and macOS. Create the UI once using whatever web framework you want, and code the main app logic in C#/Swift. Although the WebBrowser control kind of sucks right now, Microsoft is planning on releasing WebBrowser2 which will use Blink. I think they might also have it where the libraries are shared between all apps using it, to further reduce bloat. The old WebBrowser component can be configured to use the latest Edge rendering though by adding in some registry keys or adding this meta tag:
<meta http-equiv="x-ua-compatible" content="ie=edge">
> Pro tip: If you decide to make a native app, don't use Electron. Instead, use the built-in WebBrowser/WKWebView components included in .NET and macOS. Create the UI once using whatever web framework you want, and code the main app logic in C#/Swift. Although the WebBrowser control kind of sucks right now, Microsoft is planning on releasing WebBrowser2 which will use Blink. I think they might also have it where the libraries are shared between all apps using it, to further reduce bloat. The old WebBrowser component can be configured to use the latest Edge rendering though by adding in some registry keys or adding this meta tag:
> <meta http-equiv="x-ua-compatible" content="ie=edge">
I understand the concept of making a native app to include using the native UI platforms. What you described is hardly more native than electron, which is basically a web app at heart.
Or maybe there needs to be a consensus on terms. Do people consider electron apps to be native? I would put them in some weird middle ground, but definitely closer to web technologies than native development.
The main complaint about Electron apps is their bundling of a complete web browser runtime, which is over a hundred MB, not to mention the big memory requirements. By using the platform's built-in webview component, your app will hardly be any bigger than the total size of the zipped HTML+JS+CSS of your app. If you're going to use the web stack to develop your desktop app anyway, you might as well try using the native webview first if you don't need any Electron-specific feature.
With Electron, it feels more like starting from web and making it behave like a native app, from the standpoint of operating system app handling (task bar, notification). It can also fill a gap left by web for exposing a filesystem and other things that are native to some abstract computer.
Using WKWebView for UI is different in that you are starting from a native app and using web technologies to leverage code sharing and programming model of the user interface (js, css, html).
I think for this view to make sense you have to see web apps and native apps as fundamentally different things, which I believe they are.
From a capabilities point of view, they're native. You can access the OS api just like any other native app.
From a developer side, it looks like developing a webapp without the usual limitations of API access, albeit at an extra cost of marshaling or build-complexity.
There really is no reason to think of HTML/CSS/JS as Web only though.
Great to read about someone in a similar situation to me. I work as the developer and maintainer of a niche-market financial / real-estate application. This application has been developed and supported since the late 80s, first being done in Turbo Pascal, then Delphi, and then under my stewardship we moved to C#. I refactored the calculation and report production code into a library, and since that time we've built a Mac version and Web version, all utilising the same 'core' library. This means that for critical calculations and data output we - my business partner, who is the 'domain brains', and I - can do all the hard work on the Windows version (with which we are most familiar and comfortable, and IMO VS on Windows is still miles ahead of VS on Mac), and then 'just' do the GUI work for the other versions.
We did look at doing exactly as you said, i.e. using a web view within Windows and Mac, however I couldn't really get things working well enough at the time (as TBH I am bit of a noob WRT web development, and just pick things up as necessary as we go along).
For our market, there is strong demand for the desktop versions, and this is even with a subscription model; people get access to the most recent major and minor versions of the software as well as phone and email support while under subscription. When their sub runs out they are entitled to minor version updates, but nothing else. My biz partner is very good with people and very knowledgeable in the domain we operate, so this kind of arrangement suits everybody. Oh, and I get to work remote, and have done with him for ~15 years. The current situation really makes one appreciate fortunate arrangements such as this.
For a personal project I am currently using this approach and can confirm it works great.
I wrote just enough PyObjC to get myself a trayicon in the Mac menu bar, which shows a popover containing a wkwebview to localhost. Then I have all the app logic in Python, exposed to the webview through a bottle server, and Svelte for the UI. Highly recommended.
It sounds like a macOS app, and python is distributed with the OS already. Just watch out for Apple trying to take away scripting language support in the future. There's also an upcoming Python version change in macOS 10.16 to look out for.
I actually use py2app to bundle the whole virtualenv into an .app file. Worked pretty much out of the box after fiddling a bit with pyenv (you have to build your python version with framework support).
I mean, we built the Windows Terminal as a native application because we didn't want users to be saddled with 40MB of a webview/Electron just to boot up a terminal. It might take us longer to make the terminal as feature-rich as it would have been with JS, but no amount of engineering resources could have optimized away that web footprint. When we think about what the terminal looks like in 5 years, that's what we were thinking about.
Thank you. As you say, in the long haul, Windows Terminal is considerably better than it would have been thanks to that decision. It feels responsive and lightweight; unlike any Electron app, and that is greatly appreciated by many users, myself included. I look forward to each new version.
They still do; for years now I've read the patch notes, and every time there's a segment dedicated to the terminal with things like font support or sane selection: things that IMO are basic features and should already be in native terminal emulators.
Also, I've never used the VS Code terminal; iTerm works better for me, partly because it's its own dedicated app, so I can actually alt-tab to it instead of having to learn whatever the shortcut would be in the editor.
I am sorry but Windows Terminal is the suckiest Windows app ever and gives me a headache on a daily basis. Try opening multiple instances of this app in overlapping windows, like most developers do. Then fill them with text. Now see if you can tell where one window ends and the next window starts. Because of nearly invisible borders all the windows sort of blend into each other. Worst usability issue ever. This terminal appears to be designed for people who open only one command prompt at a time. Please bring back the old terminal.
I don't like the new Windows Terminal and I still use conhost.exe since it actually doesn't crash, but I don't think that's a fair criticism. The borderless design is part of Windows' theme. The terminal merely renders into the box it's given. I think Microsoft (and GNOME) have made a mistake with this borderless fetish, but you can't blame the app for the designers' fault.
And the fact that it crashes each time I resume my system seems like a much bigger issue than a person who refuses to keep a few pixels separation manually, or to color the windows differently.
I specialize in data recovery / digital forensics tools, which require very low-level disk access to be able to read physical media at the block level. I doubt there will ever be an HTML5 standard for low-level disk access.
But aside from my particular specialty, I also prefer any other software I use to be fully native. I'm surprised that's such a controversial thing to ask for these days. All I ask is so precious little:
* I want the software I use to be designed to run on the CPU that I own!
* I want software that doesn't require me to upgrade my laptop every two years because of how inefficient it gets with every iteration.
* I want software that isn't laughably bloated. I think we have a real problem when we don't bat an eye upon seeing the most rudimentary apps requiring 200+ MB of space.
I remember hanging out in /r/Unity3d and some newbie posted a download for their completed game. It was super basic - just a game where you move a cube around a grid, but the size of the game was insane, like half a gig.
The dev who posted it seemed perplexed when people told him the game was 100x bigger than it should be.
I hope that you did the kind thing and showed them how to adjust what gets compiled into their package, depending on whether they are targeting a development or production build.
Nothing worse than people who mock beginners showing their first work. We were all there, once.
> I doubt there will ever be an HTML5 standard for low-level disk access.
It's clear you've never worked with Electron; nothing about its system-level access has anything to do with HTML5 or related standards.
All of that lives in NodeJS, which offers reasonably low-level APIs for accessing system resources. For cases where that's not enough Node can easily call out to logic written in other languages, either directly through FFI (foreign function interfaces) or by spinning up an independent binary via the shell.
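As a rough sketch of what that looks like in practice (the paths and the helper binary name are purely illustrative, and raw device reads still need OS-level permissions), the Node side of an Electron app can do things no HTML5 API offers:

    import { open } from 'fs/promises';
    import { execFile } from 'child_process';
    import { promisify } from 'util';

    // Read the first 512 bytes of a file or block device directly.
    async function readFirstBlock(path: string): Promise<Buffer> {
      const handle = await open(path, 'r');  // e.g. '/dev/sda', with sufficient privileges
      try {
        const buffer = Buffer.alloc(512);
        await handle.read(buffer, 0, 512, 0);  // 512 bytes at offset 0
        return buffer;
      } finally {
        await handle.close();
      }
    }

    // When Node itself isn't low-level enough, shell out to a native helper.
    const run = promisify(execFile);
    async function runNativeHelper(args: string[]): Promise<string> {
      const { stdout } = await run('./native-helper', args);  // hypothetical binary
      return stdout;
    }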
This is the problem with this discourse: the vast majority of the Electron haters are people who have no idea what they're talking about when it comes to the actual thing they're criticizing. It's particularly hypocritical when they go so far as to frame "JavaScript hipsters" as some combination of ignorant, inexperienced, and/or lazy.
I don't mean to hate on any particular groups of people, I'm simply a proponent of using the right tools for the job.
I can only speak from my own experience, but I have never dealt with an Electron app that wasn't noticeably slower than other apps, to the point of standing out like a sore thumb.
Or, put another way, when I notice an app performing especially slowly, I think to myself, "that's probably an Electron app", and I'm usually right.
Have you looked into WebAssembly and how browsers give access to filesystems[1]? There could be hope for a future of high-performance filesystem access.
This interface will not grant you access to the user's filesystem. Instead you will have a "virtual drive" within the browser sandbox. If you want to gain access to the user's filesystem, you need to involve the user by e.g. installing a Chrome extension. The relevant Chrome API can be found here.
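To make the distinction concrete, here is a minimal sketch of that sandboxed "virtual drive", assuming the origin-private file system that newer Chromium-based browsers expose via navigator.storage.getDirectory(); nothing in it can see the user's real files:

    // Write and read a small file inside the browser's origin-private sandbox.
    async function saveToSandbox(name: string, text: string): Promise<void> {
      const root = await navigator.storage.getDirectory();  // sandbox root, not the real disk
      const fileHandle = await root.getFileHandle(name, { create: true });
      const writable = await fileHandle.createWritable();
      await writable.write(text);
      await writable.close();
    }

    async function loadFromSandbox(name: string): Promise<string> {
      const root = await navigator.storage.getDirectory();
      const fileHandle = await root.getFileHandle(name);
      const file = await fileHandle.getFile();
      return file.text();
    }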
If you are accessing the file system, you are accessing an abstraction, not the actual disk data (which is another abstraction on top of the actual hardware).
There are still a lot of fields where performance matters. This is especially true with apps that need low latency, like most games. Something like Stadia may be fine for a casual gamer but it still feels laggy to many, especially those used to gaming at 144Hz+ with almost zero input lag and gsync.
VR is another area where native desktop is still superior.
Then there is anything that is dealing with a lot of local data and device drivers. Video editing for example.
Development tools that work in a browser are getting better but native (or even just Java-based like IntelliJ stuff) still seems superior for now.
Stuff that doesn't use TCP, like network analysis tools, either needs to be done as a desktop app or needs to run a local server for the controlling web app to point at.
I guess what I'm getting at is that if you need low-level access to the local device or if you care a lot about things like rendering performance then native is still the way to go.
Yes, but I think that's the point isn't it? Of course the game runs on a Linux server because it couldn't possibly run in a web page due to performance reasons. Hence the complaint about lag.
You either get lag from the network roundtrip or lag from the crap performance in a browser (or you choose to degrade the experience, e.g., by reducing graphics fidelity), but one way or another you're experiencing lag (or a compromised experience).
Ergo the same game implemented as a native app running locally is going to be better[1].
[1] At least technically. It could of course still be a rubbish game.
Can you elaborate on the security considerations of this? It seems like you could just as easily say it's more secure because it reduces the attack surface to a well-tested program with mature sandboxing.
It goes both ways, the sandboxing means that the app has less access to the OS, but the flip side is that the open web has more access to the app (CSRF, XSS, evil extensions, etc.)
Exactly. The browser becomes (basically) an OS running within the host OS, so s/he's invented a kind of half-cocked virtualisation. There's a joke in there somewhere about turtles all the way down.
IMO desktop apps aren’t quite equivalent to native apps. Native apps look and behave in a consistent way. They have
• Familiar UI primitives:
- Controls and chrome are in the same place
- Font sizes are the same across apps
- Consistent icons and button shapes
• Support standard keyboard shortcuts (including obscure ones that developers re-implementing these UIs might not know about)
- All the Emacs-style keybindings that work in native macOS text fields but are hit-or-miss in custom web text fields
- Full keyboard access (letting me tab and use the space bar and arrow keys to interact with controls)
• And consistent, predictable responsiveness cadences
- Somewhat contrived example: In Slack (browser or Electron app), switching between channels/DMs (via ⌘K) has a lag of about 0.5–1 second. If I start typing my message for that recipient during this lag, it actually gets saved as a draft in the channel that I just left and my content in the new channel gets truncated. I don’t think that kind of behavior would happen in a native macOS app, which renders UIs completely synchronously by default/in-order (so it might block the UI, but at least interactions will be in a consistent state)
I don't agree with the first two points. Native applications aren't consistent in this way. There are dozens of cross-platform GUI kits and they all behave slightly different, just like Electron apps. If you want consistency, you need to build multiple apps, one for each OS with their respective toolkits. Ain't nobody got time for that when you can easily build on Electron and target browsers, macOS, Windows, and Linux in one single app. No wonder Electron is winning the battle so far, regardless of your last point.
Native implies that you are building for each OS and their native toolkit. On macOS, you write Cocoa. On Linux you write GNOME or KDE or CDE. On Windows you write...I dunno. Win32 probably.
Current tech is C++/WinRT, for this year, but last year it was WPF and five years ago it was XAML, then previously it was MFC/ATL, and original Win32 somewhere back in the old days.
And Linux isn't better. I think OSX is the only desktop OS that has an idea of what an app should look like.
MacOS used to have a choice between Carbon and Cocoa, didn't it? Maybe still does.
C++/WinRT uses XAML. But XAML isn't a control library/toolkit. You can tell because it isn't called the Extensible Application Control Library. I'd say it shouldn't be included in your list, but MFC/ATL is just a way of accessing Win32 via C++ -- it makes the same function calls -- so it's not clear that your purpose was to make a fair statement about native development, but perhaps to complain about Windows turnover of apis.
On Linux, it's quite simple to live entirely in Gtk-compatible land. I think Firefox and JetBrains are the only foreigners I use on my box, but I'd be using them on any operating system so it's not exactly a fair cop.
Let's go with a file picker dialog, a simple OS-provided component. Windows provides three versions of this dialog (the "app" view, the tree-view, and the explorer-in-your-app view) depending on which API you use. You see this pattern repeated. It being the same calls "underneath" is true, but it's also irrelevant, as the user experience noticeably changes depending on which API is invoked.
I purposefully make my FF unlike the other apps on my system. I use a couple of workarounds to prevent OS-level keybinds from working in some apps. Sometimes, a completely purpose-made UI is just better.
In general, consistency [in desktop UI] is good, but there are good reasons to break it.
The difference is you breaking consistency for your use cases, vs some product designer somewhere breaking consistency for all of their users for whatever reason without the end user having any say in it.
Pithily, the cost of a product designer’s novel portfolio piece is externalized onto their captive users.
Depends heavily on the user community, use case, time-in-environment (short- and long-term), inter-user communication about application, and rate of change. There's almost certainly a set of interrelationships between these.
For a highly technical userbase, a highly bespoke UI may be defensible, even preferable, especially:
- User community is expert, highly skilled, and highly computer-literate.
- If daily-weekly time-in-environment is high. User "lives in" application.
- If lifetime time-in-environment is high. Users base career in application.
- If there's relatively little communication between users about application state, interactions, or activities. Put another way: users interact with the app, but not about the app with others (users, clients, management, techs).
- The UI avoids drastic change over time.
By contrast, reverse virtually any of these conditions and you'll want a UI that conforms closely to current standards:
- Users are inexpert, poorly computer-literate (the vast majority, see OECD computer skills study), or simply nontechnical with app.
- Time-in-environment is low. At the extreme, all users are one-time novices.
- If users must communicate with others regarding app state or tasks -- close management, team interaction, client / management interactions. All parties need a ready and clear mental model of the UI and state.
- If the UI changes drastically over time, it should do so consistently with all other major elements. (Both should change as little as possible.)
Somewhat more concretely, the absolute worst feature a UI can have is change. Users get confused, lose trust, and are burdened by obsolete knowledge. This afflicts both expert and nonexpert users, profoundly, though in somewhat different ways.
For very technical tools used principally creatively --- virtually all editors, development environments, and many reader/browser/search/analysis tools fit this description --- a highly distinctive (and customisable) interface may be appropriate for advanced users. This is a small, but generatively critical, user community. There's a requisite complexity to such tools, simplified UI comes at the cost of vastly less efficient performance.
Virtually all "weird" tools in heavy use today evolved from what were at the time of their creation, common motifs, at least within the environment of origin. Think of Unix, vi, emacs, Photoshop, Excel, and Eclipse, say.
For standard workflow, control, and transactional tools, highly standard UIs are preferred. Here, users are interacting closely with others regarding state or interactions with the interface, and clarity, consistency, and common knowledge of the UI and state changes matter. Point-of-sale systems, equipment controls, general monitors, enterprise applications, end-user/customer support tools, and the like.
Occasional-use, public use, and similar tools must be generally usable without training. Adherence to standard motifs is critically important. UIs and capabilities are generally simple.
The trade-offs:
- More expert users and more heavily-used tools can support more novel UIs.
- Less literate users, more unfamiliar tools, and greater communications about UI state and interactions, demand more standard UIs.
- Change is generally bad, but evolutionary change (within the tool) or conformant change (with the overall environment) are generally less disruptive than either sudden drastic or idiosyncratic changes.
If you are going to expect me to run your software in an always-on manner, I would greatly appreciate a native application.
I frequently do light computing on a Surface Go. It's a delightful little device and I love it, but it is not powerful enough that I can leave gmail, Slack, and Discord open all the time.
I don't have enough RAM to run another web application but I could very easily afford a native app or two.
I have an old ThinkPad X200s that I turn on from time to time. I keep Fedora installed and updated just in case, and since I'm there, I also sync my Nextcloud stuff.
E-mails using Claws Mail are not a problem. It generally runs fast enough until I open Firefox and web stuff in general.
Claws Mail on an old Atom is a lot faster than webmail on pretty much anything else. This probably illustrates best the difference between the two approaches.
Indeed, I asked on another HN thread, why do you developers need so much RAM, and I got lots of good answers. But it occurs to me that a few lightweight, quick-starting apps will help me stave off the day when I have to get a new PC because my old one ran out of juice.
I'm not sure that's enough of a reason to develop a commercial app, because tightwads like me also like our stuff to be free.
Yes, I hope so. I have just released a new data transformation/ETL tool for the desktop (Qt/C++ app for Mac and Windows). The advantages of desktop are:
-performance/low latency
-richer UI (e.g. shortcuts)
-privacy
But there are trade-offs:
-the user has to install/upgrade it
-less insight into what the user is doing
-harder to sell as a subscription
Lots of software would be better (from the user's POV) as a desktop app. However as a developer and as a software business owner/investor, it's (much) better to write web apps. So it depends on what you're asking here. Should you invest in desktop dev skills to further your career? No. Should you write your software idea as a desktop app if you want to make a business of it? Not if you can avoid it. If you're asking something else, well I guess the answer is 'maybe'.
OTOH, creating desktop apps is a skill in itself that one might want to master. At least in the Mac ecosystem this is a regarded skill and amongst the users there is a willingness to pay for apps. SaaS might still be more lucrative though.
Scaling out cross-platform, lower cost of investment to support multiple OS/Devices.
User does not need to worry about dependencies and libraries - they just need to make sure they're running a somewhat up-to-date version of a modern browser.
As a business, it's not to your advantage to have to ask permission from app store owners. The web does not require permission from a third-party that may decide to compete with you.
How do you have more "control over your software" with a web app? With a native app, I can do virtually anything. With a web app, I can really only do what (Google ∩ Apple ∩ Microsoft)'s web browser teams decided to prioritize, and allow, and optimize.
As for 'pirating', is that a serious concern these days? I've only ever heard about it being an issue at big companies selling software to other big companies, where it's solved with "license servers".
I think they mean rapid, continuous deployment. If it’s a web app you can push out changes every day to near universal adoption. With a native app you have to cajole users to update
I think pirating is a serious issue in other countries (i.e mine), I practically never saw anyone actually pay for any office suite, adobe software and some offline games. All of them get pirated and very quickly too.
Huh? Given how most web applications these days are using client-side rendering there's nothing stopping someone from just downloading all of the frontend assets. You can also connect to a server from your desktop application so I don't understand how desktop makes it easier to pirate anything.
Web applications have lots of essential parts on the server-side even if they do client-side rendering. And each paying user is logged in when using it, so the company can accept/deny requests depending on the user.
Proprietary desktop applications are usually downloaded after a payment, and then the full software is available locally. By hacking the security parts of it one can then have a totally free version and distribute it. That's why it is easier to pirate.
How does this address my point about being able to connect to a server from your desktop application? Just because historically companies have not deployed their product in such a fashion doesn't mean it isn't technically possible. How do you think social mobile applications work? Have you ever worked on an application that wasn't running in a browser?
> That's why it is easier to pirate.
There are no technical reasons why a desktop application is inherently easier to pirate. Only implementation details.
If you develop an application that runs in the web browser, I won't use it. That's not some dogmatic principle of mine, it's just an empirical fact.
I use only one browser-based application: Gmail.
I've never used another browser-based application and I can't imagine that I ever will unless there's truly no alternative and it's forced on me by an employer.
I've happily paid for dozens of desktop applications, and I'm even semi-happily paying for around ten of them that have switched to a subscription model, but I never have and likely never will use browser-based applications even if they're free.
I don't get the downvotes: the original question asked about opinions on web-based vs. native apps, and this guy is giving exactly that. And what else could you do? You can either cite some usage statistics or give personal assessments.
I'd be surprised if that is the case—or perhaps we have different definitions of what constitutes a "browser-based application." For example, I do all my online banking in my browser; it's the most fully-featured way to do so. It might not obviously be a PWA or an SPA, but I certainly think it deserves to be called a "browser-based application." What else could it be? It's certainly not a "web page." In fact I'd argue that any site that has content that's scoped to a user and can be manipulated by the user is a "web application."
In some (most) cases, desktop apps could be better: performance, latency, off-grid capabilities, and even privacy. In most cases, I prefer offline desktop apps to their online counterparts.
One area which is really tough to nail is cross-platform support though. Getting a good app on one system is hard enough, getting it in all three - rarely done. This is one of the things where web shines.
From business standpoint, I think web-first with an eye on native works for majority of cases. That is, as long as the majority of users don’t care about the above. In some future, if we start valuing efficiency and especially privacy more, this could turn around. But it feels like, even then, web will probably find a way to make more sense for most people.
But building a good app on all three major operating systems is not solved by an abstraction layer, neither by a cross platform gui toolkit nor by a web layer. Different operating systems have different conventions, metaphors and standards. A portable application will usually feel foreign on all, but the developer's primary platform. Unless, the developers invest in adapting to the differences of the platforms.
I'd say that that when you're writing an application which is fundamentally just a pretty wrapper (e.g. it exists to take user input and pipe it over HTTP to some web service or use it to generate a command for some other binary) and your users don't care about performance, resource usage or reliability, it makes sense to use a browser. Your application is very UI-focused and if you're already familiar with HTML, CSS and JS, use what you know.
However if you're working on an application that has strict resource usage, reliability and/or performance requirements like say a control system for industrial equipment, a 3D game, a video encoder, photo editing software, or software that's going to be run on an embedded system, you're going to find it difficult to do what needs to be done with a browser/wrapper. It can be done for sure but it'll be something you work around rather than with.
I like to take my laptop out to a park and work with all the radios off to get the best use out of my battery. I also like to do complicated things with a lot of files that need to be organized in a real filesystem, the directory structure of a graphic novel can easily match the complexity of a program’s source tree.
Your web app, which requires several extra levels of indirection between the code and the bare metal, an online connection, quite possibly is built on a framework that tends to suck down a significant percentage of my CPU even when it’s idle in the background with all windows closed, and its own weird file management that’s probably a giant hassle when I need to get my work into another program, has no place in my world.
We're building POS applications for major retailers, and for this kind of software, native is king and will stay for the foreseeable future (with a few exceptions confirming the rule, of course). These applications need tight integration with exotic hardware, must satisfy weird fiscal requirements often written with native applications in mind, must run rock-solid 24/7 with daily usage and predictable response times for weeks without a restart, must be able to provide core functionality without interruption in offline situations while syncing up transparently with everyone else when back online and usually run in an appliance-like way on dedicated hardware (start with the system, always in foreground, only major application on the system, user should not be able to close or restart it, updates and restarts triggered by remote support/management).
All of this is much easier to do with native applications, running them in a browser just adds the need for crazy kludges and workarounds to simulate or enforce something you get for free if running native. Also you end up with managing hardware boxes with periphery attached and software running on them anyway, so whether managing a native application that is a browser which then runs the POS application or whether directly managing the POS application does not save you any work; if anything it even gives you an additional thing to manage, which increases maintenance effort and potential for failure (which quickly is catastrophic in this business, POS down today effectively means the store can close its doors).
Back-office applications in the same space are actually pretty well-suited for a web application, and frequently implemented as such today.
A lot of ATMs and POS systems are glorified web apps. Not sure why a web app can’t be rock solid. Certainly easier to go native since you only have one platform, but I don’t see it being required.
Notice that I didn't say anything about that being impossible. I just said it's harder to meet most of these requirements with a web app, because you have to solve problems that you simply wouldn't have otherwise while gaining practically no advantage whatsoever, which is why most people decide to keep building native applications. Some people happily shooting themselves into the foot does not invalidate this assertion.
Also especially ATMs are notorious for hanging inexplicably for seconds, not reacting swiftly to user input, and generally providing a rather poor UI experience. The performance and quality standards for POS systems, at least our standards, are quite a bit higher.
I'm all for web apps, unless you need to do things they don't do well. If you are doing, say, video editing -- yeah I want a native desktop app for that. At least currently.
But those things are getting fewer and fewer. And it annoys me to no end that I can't, say, run my favorite screencast/video editor (screenflow) on my Windows or Chromebook machine, since it seems pretty deeply tied to the OS. I don't want to have to learn another one, and I don't want to replace my Mac which is on borrowed time.
That said, I use a lot of apps like Gimp and Inkscape on my Mac, and while they may be technically native, they can be really awful about "feeling native." I don't mind inconsistent user interfaces so much, as long as it is mostly cosmetic. But I've spent SO much time in both of those searching for lost windows, etc. (OMG Inkscape devs, has anyone even tried it on multiple monitors???) Things you never run into with "true" native apps (those two use the GTK toolkit).
So, I certainly recommend web apps if your app can run sufficiently fast or can otherwise get away with being a web app.
Take any app that uses all cores at nearly 100%, maybe maxes out the GPU, eats 3-5GB of RAM, and is a 2-100GB install.
Those will always be native.
These are your Cad programs, your video editors, your AAA games.
You can make a CAD program in a browser too. But you trade a chunk of perf for convenience, and that's only rarely acceptable.
Anything that could ever be done in a mobile app (chat, media consumption, ..) those might be possible to do in a browser. But you didn’t even really need a computer for them to begin with.
Tell that to Autodesk. Fusion is like 70% web, slow, and bloated. They don’t even have a good excuse since the whole UI is just a shell around a canvas and could easily be made native, it’s not as if they’re actually benefiting from the DOM or CSS.
Well, the browser is still not good at things that, say, Blender or Pro Tools can do. Media pros still need desktop software; for example, audio latency in Chrome is far too high to use it for any serious pro audio applications.
It's true that many apps could be replaced today with a website, especially those that are basically capturing and showing data.
But there are still many areas where native is king.
- Games
- Audio / Video work
- Adobe type of work (photo editing, motion graphics, vectors, etc)
- 3d
- Office. For me Google docs is enough, but not for heavy users.
- Desktop utilities (e.g. Alfred, iStatMenus, etc). You could certainly use a web UI for those, and it would probably be fine, but you'd still need some native integration.
A 3D modeling package? Although Clara.io exists, most of the time I'm dealing with 100s of megs of data, so native wins. Creating a game? Mostly the same; though I can imagine some limited online game creation tool, even the small sample 3D platformer for Unity is 3 gigs of assets, so for a game editor native seems to win. Photo editing: when I get home from vacation there's 100 gigs of photos, so native wins for me. Video editing, same thing.
On the other hand, there are apps I have zero interest in being native: WhatsApp, Line, Slack, Facebook, Office, Email, Discord, etc. I'm 100% happy with them in a browser tab. Native apps can spy on way more than browser apps (maybe less so on Mac). They can install keyloggers, rootkits, scan my network, read all or most of my files, use my camera, mic, etc.
I also use 7 machines regularly. Being able to go to any one and access my stuff is way more convenient than whatever minor features a native app would provide.
Installable web applications are an incredible concept that have saved my customers and myself countless hours (and money).
The experiences are fantastic. The applications look native to the platform, with coloured title bars and OS specific window decorations.
The performance is not noticeably different from the equivalent native experience. I am taking advantage of multi-threading through web workers (sketched after this comment) and web push notifications (sorry iOS), and the (single) code base is maintainable and easy to work with.
I don't see how a GUI framework like Qt or several separate native applications would make a more effective alternative, either aesthetically or financially.
I'd consider it uncontested once installable web applications have deeper system access (filesystem, etc).
The addition of web assembly bindings for direct DOM manipulation and directly importing wasm binaries via a script tag would complete the browser as the most sensible customer facing front-end environment.
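A minimal sketch of the web-worker bit mentioned above, using the standard Worker API; the file name and the hashing workload are placeholders rather than anything from a real project:

    // main thread: hand heavy work to a worker so the UI never janks.
    const worker = new Worker('hash-worker.js');  // placeholder file name

    export function hashInBackground(bytes: ArrayBuffer): Promise<string> {
      return new Promise((resolve) => {
        worker.onmessage = (event: MessageEvent<string>) => resolve(event.data);
        worker.postMessage(bytes, [bytes]);  // transfer the buffer instead of copying it
      });
    }

    // hash-worker.js, running on its own thread:
    // self.onmessage = async (event) => {
    //   const digest = await crypto.subtle.digest('SHA-256', event.data);
    //   const hex = [...new Uint8Array(digest)]
    //     .map((b) => b.toString(16).padStart(2, '0'))
    //     .join('');
    //   self.postMessage(hex);
    // };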
Browsers have one huge problem and that is they override important hotkey space (like cmd+w, cmd+f for searching etc). So native apps will always provide a better experience.
And nowadays even websites that don't hijack the shortcut itself are non-"find in page"-able, due to the trick of disposing of unused/out-of-view components (which I suspect is actually what Docs is doing too, but they made their own find so people wouldn't get stuck).
I've seen lots of tools that spin up a local webserver and then use that to serve the webapp even offline. But then the question becomes is this really a webapp if I have to install a native server?
I was excited to see how this worked, so I visited the page, clicked around to see what it did, disconnected from my network, and tried opening the page again in a new window. I got the usual browser error page: "You Are Not Connected to the Internet". So I connected to the network, loaded the page, then disconnected from my network, clicked the Demo button, and got: "Couldn't fetch demo SVG".
So by trial and error, I found that I need to load the page before disconnecting, and only use the 1st of the 5 buttons on the side (even "About" requires network connectivity). Then it works offline. Which is pretty cool!
But every native application I have works perfectly offline, and I don't need to do anything special ahead of time, or worry about which parts might not be available. There's a big difference between "some parts may work offline sometimes" and "entire app will definitely work offline always".
From decades of experience, when a "trivial" app struggles with demoing a feature, there's little chance it'll be widely supported among real apps. PWAs have been "any day now!" for 10 years now.
Sounds like the service worker didn’t register for you.
The logic that decides whether the browser accepts service workers is a little bit iffy. My vague recollection is that browsers default to something like “when you access the site for the second time, keep the service worker”. (Don’t quote me on that, and if anyone knows better, please correct me.) “Add this to your home screen” functionality will definitely make the service worker go. (That’s mostly for mobile browsers only at this time, though desktop browsers will eventually get it consistently available, as we’ve been promised for… hmm, about 8±2 years, I think.)
Privacy and blocking extensions may block service workers from being registered, too.
If the service worker is loaded, then it loads just fine with no internet connection, and the demo works too. The contribute and about links, being to GitHub, still don’t work.
I first actually discovered this completely by accident: I was offline, and thought “I need to shrink this SVG file, so I’ll open a tab to the SVGOMG URL and that error page will remind me to do it when I reconnect; huh, it loaded, must have a service worker. This is really great.” On later reflection, I realised that it being by Jake Archibald (a big mover in the service worker space) pretty much guaranteed it was going to work offline.
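For anyone who hasn't seen what makes that offline load work, here is a minimal service worker sketch in the spirit of what such apps do (the asset list is illustrative): cache the app shell at install time, then answer fetches cache-first.

    // sw.js: pre-cache the app shell, then serve it even with no network.
    const CACHE = 'app-shell-v1';

    self.addEventListener('install', (event: any) => {
      event.waitUntil(
        caches.open(CACHE).then((cache) => cache.addAll(['/', '/app.js', '/app.css']))
      );
    });

    self.addEventListener('fetch', (event: any) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit ?? fetch(event.request))
      );
    });

    // In the page itself, one line opts in: navigator.serviceWorker.register('/sw.js');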
I had revolut or wealthsimple in mind that both make trading/exchanging money more like a game and a wholesome experience.
I think that because the app "forces" the user to concentrate and use the app's whole screen (especially on the phone), it enables a more immersive experience. Whereas most browser apps feel like a transactional thing.
I‘ve always been on the fence about this. I can see both sides, and don’t have a strong opinion either way. But answering is there still a place for native? I think yes for sure! I guess it comes down to if it is really important to your philosophy as a developer, or your type of app could really benefit from native capabilities.
A good example of a recent app that chose to leverage native platforms is Table Plus. They are developing native apps on Mac, Windows, and Linux. I respect the effort/skill and dedication required to pull this off! https://tableplus.com/
I imagine DAWs like many others could be a very small native audio processing binary with a gui on top that can be whatever it wants. Some are already Java UIs over C++ backend (e.g Bitwig). Should be possible to do an electron or web frontend over the C++ backend too. Replacing the processing bits will be hard though. Driver interfaces aren't exactly the strong point of web dev.
Native apps can take advantage of OS APIs that are much richer than those via the browser (or Electron) pass-through APIs.
For example, I've made a Mac app that lets you customize your Spaces (virtual desktops), assign names to them, jump to specific ones, track your time spent across them, and trigger custom events when you move to specific ones. None of this would be possible via a web app or electron app. Project homepage https://www.currentkey.com, app link: https://apps.apple.com/app/apple-store/id1456226992?pt=11998...
And as long as there is differentiated hardware between platforms, there will be opportunities for innovative native apps. For example, though I personally don't love the Touch Bar, there are interesting native app projects around that like Pock: https://github.com/pigigaldi/Pock
I would simply say that the OP should reverse the question:
"in which cases can an electron app suffice for a desktop application" and not presume the death of desktop apps.
It's a very web-dev-centric view to imagine that this model is right for everything and will eat all software.
There are clear performance and efficiency tradeoffs.
If you KNOW the constituent bits of the software stack of an electron style app, you will be horrified at what you are doing in the name of being 'normal cool and popular'.
It would be considered extremely bad engineering if you whiteboarded the actual layers and proposed it sans the singular justification of "there are lots of JS devs and this is considered easier than QT which requires a bunch of picky expensive C++ devs'
More's the pity that the eventual use case is not first priority in such a decision making process.
"Background communication channel that should be kept open while the users main productivity software is afforded the computers resources for actual work"
that would suggest that Slack would be better written as a native or near-native app. Hello, McFly...
A native desktop app is the deluxe option. It'll always be more efficient than even the best web-technology based app, because it can skip tens to hundreds of layers of abstraction (e.g. JavaScript, HTML, CSS, DOM) if done right.
So the questions are:
- do your customers care about performance? (gaming, 3D animation, music editing)
- are people concerned about battery life? (pagers, medical equipment)
If none of these reasons for native apply, you can probably make your users suffer through a web app, which will be much cheaper for you to produce and maintain.
That said, people definitely notice the sluggishness that all web apps have. I mean those 100ms from click to screen update. So your customers will most likely be able to intuitively feel if your app is native or not, with native feeling better.
For some groups of customers, this premium feel might be a decision factor. For example, Apple TV (super responsive) versus Netflix (objectively slow website).
IMHO native applications represent a valuable class of niche software tools that deliver highly specialised functionality in concert with desktop software. Add-ons for MS Project and Excel abound, and there really isn't an equivalent for online tools, or indeed a viable or stable market.
The amount of ignorance in here masquerading as experience and knowledge is staggering.
My father was right about so many things. The one I have in mind right now is when he said "age is another form of strength against the nitwit, because those with experience see straight through those with misplaced confidence."
He was right. I'm not calling anyone here a nitwit or anything, please be clear on that.
It's just amazing how wrong some of you are, while sounding so absolutely sure of yourselves. A few don't even get easily researched technical details correct, while trying to sound authoritative.
My point is that maybe this is related to why software development is in the sorry state it is in today: the ignorant are confident they know it all, and the knowledgeable are confident that they know very little.
Whenever this sort of question comes up, I always look over at my macOS dock and see what's running or permanently docked there:
- Marked (Markdown previewer/converter): native.
- Transmit (file transfer): native.
- PDF Expert: native.
- ImageOptim (image optimizer): native.
- Fantastical: native.
- MailMate: native.
- Terminal: native.
- iA Writer: native.
- Safari: native.
- CodeKit: native.
- Dash: native.
- GitUp (GUI Git client): native.
- 1Password: native.
- Telegram: Electron.
And a few that aren't running now but I run very often:
- Slack: Electron.
- Visual Studio Code: Electron.
- BBEdit: native.
- Nova[1]: native.
- MacVim: reply hazy, ask again later[2].
So, I mean, I can't speak for everyone, but it doesn't seem to me like native apps are going away in the near future, at least.
[1]: Nova is a still-in-beta code editor I'm trying out as a possible replacement for VS Code. Code still "wins" on features for me, but Nova is pretty cool, and still in beta, so...?
[2] I mean, MacVim is a native "GUI," but it's, you know, Vim.
I think there are some applications or problems that are likely to be favourable to native desktop apps for a long time. For inspiration, simply look at what hasn't already become web based. Some things I thought of:
1. Heavy lifting - As others have mentioned, running some code in a browser is quite a few times slower than running it locally. As Moore's Law comes to a screaming halt we're going to need to get better at creating efficient software rather than relying on the underlying hardware to get faster.
2. Capability - Some things are inherently difficult to do in the browser, such as custom networking, calling kernel functions, accessing various hardware, etc, etc. You can always have your native app launch a web-based front-end, but going back the other way is not possible by design.
3. Hardware Access - Sometimes you need to access a camera, USB device, GPIO, I2C, SPI, run architecture-specific instructions on the CPU, access the GPU, etc, etc. Again, the browser typically won't let you access these by default (see the short WebUSB sketch after this list).
4. Security - This comes in a few parts: (a) You're able to bypass "most" security and do what you want within reason. As long as the user ran your application you usually have the same privileges. (b) Now that you're dug-in you can enforce a level of security that may not easily be available otherwise. (c) Features such as app signing mean that the user can more easily guarantee the app came from you, rather than relying on their ability to read the exact URL in some email at 2am. If I run `apt-get install <X>` or equivalent in other OSes there is a chain of accountability.
5. Memory - Put simply, the browser adds massive overhead to any application and typically has inefficient data structures. Compare something like Atom [1] to any equivalent native editor for example. (There are some existing efforts to compare editors [2].)
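To make point 3 concrete, here's a rough sketch of what hardware access looks like from the web side. This is illustrative only: WebUSB is Chromium-only, needs a user gesture, the vendor ID is a placeholder, and there is no web API at all for GPIO/I2C/SPI or raw GPU control.

    // Illustrative only; must be called from a user gesture (e.g. a click handler).
    async function openDevice() {
      const usb = (navigator as any).usb;   // WebUSB isn't in the default TS DOM types
      const device = await usb.requestDevice({
        filters: [{ vendorId: 0x1234 }],    // placeholder vendor id
      });
      await device.open();
      await device.selectConfiguration(1);
      await device.claimInterface(0);
      // transferIn/transferOut from here; anything lower-level needs a native app.
    }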
Finda’s architecture[0] is great for this discussion.
On the one hand, you can say “look, an Electron app that’s actually fast!”
On the other hand, you can say “wow web apps are slow; it takes ~50x longer to render a basic list than to regex search across tens of thousands of strings”.
From a performance perspective, the JS part of the stack certainly isn’t helping.
I have a slightly different take - I wrote my own CMS based on Elixir. It's a static site solution, which means it generates static HTML files that are then uploaded to a CDN (e.g. Netlify). My UI is done in VueJS and my database is actually inside of my application. I wrote a simple Electron wrapper combined with Docker in the background to deliver my CMS solution to my clients and it has worked really well for me. For one thing, I don't need to collect my clients' data and store it on a central server; at the same time, my clients don't need to bother finding a hosting provider to maintain the site. They can just run this thing off their desktop, publish, and be done with it. What's nice is, if they need updates and new features, they have to pay, which supports me and my work as well.
In fact, the whole project started out as writing a replacement for Wordpress from scratch. At least 6 of my clients' websites got hacked and one of them had a million visits a month. Simply because of stale plugins (it's easier to accumulate them than you think). So, long story short, I absolutely believe there is a place for desktop apps even in 2020.
Oh yes there is. For one, thinking "desktop" is very very different than thinking "web".
Dealing with "state" is much better/easier/clear in desktop than in web.
An app on desktop, if well made, will be insanely more responsive than a web app. That's one thing - the other thing is there are cases where speed/resources will dictate that the app should be desktop. A simple example is a video editor (such as, the one I'm developing, but that's besides the point). Sure, you can have a video editor as a JS app, but that will be incredibly trivial compared to a desktop app.
I'm not saying that you can't match any desktop feature on to web. I'm saying that some will take 10x+ time and resources (and thus, an insanely higher complexity) than desktop. And some features, they are simply not feasible to do on the web. Let me give you an example: time-remapping for a video editor (one thing that I'm gonna implement soon). This is such a complex issue, requiring advanced caching + lots of RAM + fast rendering, that implementing it in a browser is simply unfeasible TODAY.
As things become feasible on the web, lots of them start out being 10x+ more complex than on the desktop (the complexity comes down over time). And meanwhile, things that were previously unfeasible on the desktop keep becoming feasible there (while still not feasible on the web). And this cycle continues.
In conclusion - there will always be a place for native desktop apps IMHO.
In the short term 3D MMO's are desktop only. In the long term everything goes back to being desktop because the abstractions waste energy. Everything beyond vanilla HTML + CSS + .js for GUI is going away!
I'm also going to burn some karma and re-iterate that there are only two languages worth using: JavaSE on the server (build everything yourself) and C(++) on the client. We need this to be understood so that fragmentation can be reduced!
I share the frustration with everything moving to slow and bloaty electron apps.
But with regard to Electron apps and using web technology instead of native frameworks, I think it also depends a lot on how well the web code is designed. I've been prototyping a Matrix chat app [1] in a very minimalist way:
- no framework or library used for UI, "state", ... to have complete control on how and when things are updated and rendered.
- use IndexedDB optimally to keep as little as possible in memory (a rough sketch of the pattern follows below).
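For what it's worth, the IndexedDB pattern is roughly the following. A minimal sketch, assuming invented database and store names, not the actual client's code:

    // Open (or create) the store once; read records on demand rather than
    // keeping whole rooms in memory. 'chat' and 'events' are made-up names.
    function openDb(): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open('chat', 1);
        req.onupgradeneeded = () =>
          req.result.createObjectStore('events', { keyPath: 'eventId' });
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    function loadEvent(db: IDBDatabase, eventId: string): Promise<unknown> {
      return new Promise((resolve, reject) => {
        const req = db.transaction('events').objectStore('events').get(eventId);
        req.onsuccess = () => resolve(req.result);   // only this record is materialised
        req.onerror = () => reject(req.error);
      });
    }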
My 2 main conclusions from this are:
- Web technology can perform very well. The chat client uses only 3.5MB of JavaScript VM memory on a 200-room account. It visually outperforms some native apps as an installed PWA on Android on a low-end device. I attribute this to the fact that web browsers are very optimized.
- It takes more time to engineer an app properly like this, even in a language like javascript. I can imagine it's hard to justify the expense, when most people don't know who to blame when their computer is slow.
There are definitely niches for native apps, but they're just niches. It's like the massive C++-versus-everything-else debates of the 90s. Eventually, people only used C++ where it suited the niche, instead of using it for most things like they did in the 90s.
Economically, it's basically 'web first', or at least web by default. Not only is it cheaper, it's also faster to iterate, since the logic for the major clients can be unified.
Development wise, most non-desktop developers hate native desktop development. There are too many quirks in different platforms.
PS: There is so much Electron hate in this thread. Sure, there are a lot of crappy Electron apps, but they were more likely assembled in one afternoon, either because it's a personal hobby project or because the company just wanted working software and demanded the developer release it within a week. Given the same amount of investment, I believe Electron wouldn't fall too far short of native apps.
VSCode like everybody mentioned is pretty slick. And personally I feel Discord's Electron app and the React Native app are pretty nice as well.
Just to add a counter-point to the readily expressed opinion here on HN that the web is terrible and we should all go back to programming Coltran; the only place for desktop apps is testing new APIs for new technologies which the Web can adopt once they've stabilized. So if your phone gets some new breathalyzer sensor or something, and you can't interact with it via WebUSB or WebBT then yeah, you're gonna have to drop down to the OS runtime to play with that until the standards bodies finish arguing about what WebBreathalyzer API should look like. Or more likely, that never happens because it turns out that few people need a phone with a breathalyzer. The web is the common denominator, the platform that runs on the most devices supporting their most common features. That makes it clearly superior for applications which need to reach the broadest audience. That only grows as more and different platforms emerge. But there's always going to be something new to play with, and that's fine too.
it's crazy that we've reached the time where this is a question. it just feels like web browsers are just worse operating systems but that's somehow where everyone thinks things should be.
i generally hate using web-apps. the usability and performance just isn't there, and the web was not built for this use case. even now, it's a terrible experience.
for industrial engineering, scientific, creative jobs and more, almost everything you use aside from confluence is a desktop application. visual studio, visual studio code (line is blurred, but it's still a desktop app), solidworks, opera, other modeling software, xilinx vivado, matlab, visio, simulink, houdini, touchdesigner, unreal engine, logic, pro tools, studio one, VSTs, office suite, control applications, perforce visual tools, git gui clients, custom internal tools, etc.
all the real work gets done in desktop apps and yet people keep saying desktop apps are a thing of the past.
i truly don't understand what people's end game with web browsers and applications are.
I'm going to take the contrarian position here and say No, there is no place for native desktop apps. If a user has enough resources to run a graphical operating system, then they have enough resources for an extra copy of Chromium.
The exception to this is something either so simple that it only has one button (e.g. some file format converter) or so large (e.g. Photoshop, Bloomberg, AutoCAD, Mathematica, Visual Studio) that it surpasses the capabilities of the web platform.
Most things like chat, music, or word processing absolutely can and should be done with Electron or (imo) a WebView.
The reason Slack etc. is slow is not because of Electron but because the JavaScript is probably very poor, bloated, and not optimized. I used to hate Electron for being slow but after using it have changed my mind. The bottleneck is never on the Electron side; it's slow because of your code. VS Code is an Electron app and is snappier than many "native" editors.
There are many people who have older hardware. For example, I write this on a 2009 MacBook. My parents use older hardware still. And we're relatively well-off in a rich western country. These machines are perfectly fine for almost all tasks, but they're not powerful enough to run ten copies of Chrome. We might easily be able to afford that newer 500€ laptop, but for many people that's multiple months of income. Not to mention the unnecessary waste the churn produces.
And sometimes there is absolutely no need to buy new hardware! I find it appalling that we need to upgrade our machines just because software gets more and more resource hungry. My parents likewise use old hardware to browse, edit documents and write emails. Back at home I still run a 10-year-old PC with Linux and it just keeps running very well... No need to upgrade! I just think about the golden age of videogames, the software tricks developers came up with to make them run, and the hardware they used to run on. Today you basically need to constantly upgrade your GPU if you want to play the latest cool game. Ridiculous.
I would rather reverse the question:
In which situations is it acceptable to use Electron, for example? Something like Balena Etcher makes sense, but Logic Audio not so much.
Don't let hype and popularity on HN (not to mention market share of JS as pertains to the amount of web front-end work vs desktop work) serve as a surrogate for noting actual performance given a specific application.
Wrap-a-browser approaches and bloaty non-native frameworks are good for configuration ware, and things that don't need to engage in high amounts of real time processing saturating the CPU and RAM.
Many applications continue to squeeze required performance out of each platform.
Cross platform approaches can serve just fine if there are zero-cost abstractions.
Audio/Video/3D/Image production software, for example.
CAD, not to mention developers tools and compilers, just for the tip of the iceberg.
See, everyone likes to complain about Electron, but when I did a market study asking "would you pay for a full-featured native Slack app that was lightweight and designed with functionality in mind" the answer I got was mainly "yeah but no." As best as I can figure, it was mainly a feeling that they were entitled to it for free, as it's what Slack should have developed, so they were not about to pay to correct a mistake they're "not responsible for", never mind that they're the ones paying the price for it.
(I wrote a fully native iMessage client for Windows 10 [0] and enjoyed it enough to consider building a product around the code, minus the iMessage component.)
I am constantly annoyed by the web apps, not only because they consume so much resources, but because of the noticeable UI lag that drives me crazy. For example, I have been entertaining the idea recently of building a native Todo app for macOS because of how slow Todoist has become in the past few years.
Native applications will always have an edge, because they can do several things that are difficult or impossible in a browser:
1) Rich key combo support. When running your app inside a browser, many key combos are reserved by the OS (and they differ by OS), and many are reserved by the browser (and they differ by browser). As a result, your app has to avoid a huge number of key combos, because some OS or browser uses them (see the small sketch after this list).
2) Latency. It's not impossible to make a fast web app, but you're already at a disadvantage, due to the inherent overhead of the browser and JS runtime. Put it this way: making a user experience that feels slow and sluggish in a native app requires a lot more mistakes than doing the same thing in a webapp.
3) Filesystem support. It's just better with native. Especially on Windows where you can fully customize the file-open dialog box with app-specific business rules and warnings.
4) Hardware. You'll always be at the mercy of the underlying browser's support for hardware. Need to allow the user to switch between sound devices? This is easy with a native app, but it may require going to the browser control panel if you're a webapp.
5) Leaky abstractions. As a user, I want to open an app and do everything inside of that app. When using a webapp, I may have to fiddle with browser settings, key combos may break me out of the immersion as I accidentally open some browser toolbar or help feature, and the browser toolbars and window is always there to distract me.
6) Updates. With a desktop app, it can show me an alert when it's time to update, and I can choose to update now, or do so later. With a webapp, the updates are normally locked to a browser refresh (I need to refresh the page to get the update, and the update will happen whether I want it or not once I reload). Sometimes, the app decides it's time to update and just force-reloads itself (in the case of an app window I've left open for too long - days or so, while working on something important).
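To illustrate point 1: even when a web app listens for a shortcut, the browser can keep it for itself. A rough sketch (behaviour varies by browser and OS; openCommandPalette is a made-up app function):

    function openCommandPalette() { /* hypothetical app action */ }

    window.addEventListener('keydown', (e) => {
      const mod = e.ctrlKey || e.metaKey;
      if (mod && e.key === 'k') {
        e.preventDefault();   // usually works: most browsers let you claim Ctrl/Cmd+K
        openCommandPalette();
      }
      if (mod && e.key === 'w') {
        e.preventDefault();   // usually ignored: the browser may close the tab anyway
      }
    });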
It depends on what you’re developing, who the target users are and how much you need to charge to sustain yourself. It also depends on your skills and how much time you’re willing to invest in creating a desktop application (doing one that’s cross platform, that performs well and works like a native app on each platform would take a significant amount of effort).
Native desktop apps targeted at the average user are better done on macOS, since that platform has a higher percentage of users who will pay (compared to the percentage of users who’d just pirate it).
Applications targeted at professional users, corporate users and developers can get an audience that’s willing to pay on any platform.
If your application is better done as a service, and you’d like better control on managing versions, a web based SaaS might make sense.
>Native desktop apps targeted at the average user are better done on macOS, since that platform has a higher percentage of users who will pay (compared to the percentage of users who’d just pirate it).
That was always the standard wisdom. However I have released the same software on Windows and Mac and seen significantly worse piracy for the Mac version.
I always go for a native app over a web app/electronjs app. Native will most likely have the familiar idioms of the OS, lower overhead, and take advantage of platform specific features. I happily pay money for native apps. I generally don’t pay for blobs of JS in app form.
O God, please no! I can't see anything requiring true low-latency real time performance (like audio production DSP) ever being fast enough on the web compared to native. Also, when everything went 64 bit in recent years, developers got really lazy about memory management.
As things stand now there is no place for native desktop apps.
You have to give users a reason to prefer native desktop apps. But today, thanks to Flat UI, most end users can't even tell a native Windows app from an Electron app. If users don't demand native apps then developers have no reason to limit their customer base to Windows users (or Mac users).
For users to demand native apps, native apps must have a distinctive look and feel that is loved by end users. Before Apple switched to Flat UI, Mac OS X apps did have such a look & feel, and developers were making OS X native apps even when users could use a Web version.
If Apple or Microsoft dropped the Flat UI stupidity — which is both ugly and has usability issues — then one day demand for native apps may return.
Definitely, for apps where you can invest the money. Desktop development is really expensive, even more so if you want to support more than one operating system.
This means either apps with general mass appeal, or niche apps in specialized markets, but not small markets either.
Taking a quick look at my computer right now, I have a bunch of specialized apps that are considered "expensive" by many, and there is NO WAY I would want to use them as web apps: Ulysses, Camtasia, 2Do, Bear, Fantastical.
The other side of that coin, however, is that people do not appreciate how expensive desktop development is. For my SaaS, I would love to offer desktop apps (as in, not Electron apps), but the cost is prohibitive and this will likely never happen.
On Office 365 (desktop version), sometimes when you click a dropdown like the "different kinds of bullet lists" the whole application freezes for a couple of seconds while it builds the menu content or something. This didn't happen with earlier versions, if memory serves me right. I'm not sure if this is web tech or some other kind of new framework under the hood, but it makes me sad that with the speeds and cores we have on modern processors old-fashioned things like word processing have got slower than they used to be.
Native Applications in the platform's toolkit (not some cross-platform abstraction on top) will always squeeze a little bit more performance and usability out of the corners.
I've been going back and forth on this for quite some time. I find that I often end up compromising when choosing an implementation language based on availability of the kind of libraries/UI I want.
What I would really like is to completely separate application logic and UI so I'm free to choose whatever language fits best and potentially add several UIs on top of the same core without being forced to write it in C.
I recently started playing around with the idea of implementing application logic as a separate server that reads/writes a simple line based protocol from/to stdin/out. Simple enough to be tested from a shell, and easy to map to JSON for a web server or drive from a GUI.
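A rough sketch of that idea in Node/TypeScript; the ADD/SUM commands are invented purely to show the shape of the protocol:

    // core.ts: application logic behind a line-based stdin/stdout protocol.
    // Any GUI (web, Qt, a shell script) can drive it by writing lines to stdin.
    import * as readline from 'readline';

    const items: number[] = [];
    const rl = readline.createInterface({ input: process.stdin });

    rl.on('line', (line) => {
      const [cmd, ...args] = line.trim().split(/\s+/);
      if (cmd === 'ADD') {                 // "ADD 42"
        items.push(Number(args[0]));
        process.stdout.write('OK\n');
      } else if (cmd === 'SUM') {          // "SUM"
        process.stdout.write(items.reduce((a, b) => a + b, 0) + '\n');
      } else {
        process.stdout.write('ERR unknown command\n');
      }
    });

    // $ printf 'ADD 1\nADD 2\nSUM\n' | node core.js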
Outside of niche use cases you get too much with Electron.
Computers are so damn fast most users just don't care, and those of us who do care use Linux anyway. If your app also runs nearly identically in a browser, not just Electron, onboarding users is so much easier. I won't install an app just to try it, but I will use a website and reluctantly upgrade to Electron.
A sandboxed, stripped-down Electron replacement as a system library, à la Deno for UI, will replace Electron in the next few years. Web-native frameworks targeting WASM and WebGPU for responsive components and the DOM for everything else will replace JavaScript, but the web will win.
For me the simplest example is e-mail. I need an e-mail client on my PC where I can browse all my e-mails even when offline. Nothing that runs inside the browser could match the speed and efficiency of a native app.
I crave native apps, especially ones that get out of the way. Web apps take up screen real estate, and are cumbersome if they involve login details or if you want to access the API (if one even exists). There is definitely a place for more native apps, and more creative designs as well. Creatively, I am always seeking inspiration, and web apps are just the same design patterns repeated everywhere. A fresh app like TweetBot or OmniFocus is valuable for its creativity and sleekness, it's just something you could never find in a web application.
The big advantage of web applications is that they don't have an installer, and they don't need to deal with pushing updates.
Many comments here explain the advantages of full desktop applications. (And there are many.)
The thing to always keep in mind is, does your application justify an installer? There's so many tiny utilities that run in a browser that I can find with Google. These are utilities that I probably use once a year and don't care to install.
But if I use the application heavily, I want a desktop application even if it's just an application specific browser.
I prefer native apps ALWAYS when possible as a user. I get that it isn't the direction the world is moving, and that's unfortunate.
With all the computer resources in the world (tons of RAM, NVME drives, etc) it's still easy to tell the difference between native and anything else. It's amazing with these resources that apps can just be ridiculously slow-feeling. It's just painfully obvious.
Just another hell we have to accept living in. Overly dramatic? Sure, but I wanted to express how much disgust I feel for non-native apps.
Yes! At some point I got tired of web stuff; I try to use small native apps for the day-to-day stuff. Cramming everything into the browser completely kills the benefits of the desktop environment.
I created a small macOS native app, a menu bar app for monitoring CI pipelines: https://tempomat.dev. Something like this is not possible on the web because everything runs inside the same browser, and the native features are so much better.
I am running a native desktop app Jumpshare (https://jumpshare.com), a remote working communication tool that helps you share your work and ideas through instantly shareable links. Basically, it combines video recording, screenshot capture, and file sharing - all in one app. This is not something you can build using a modern browser or Electron because it does not have tight integration with the operating systems.
Of course there's still a place for native applications: movie players (VLC), audio players (Winamp), animation software (like Blender) come to mind right away, and if you look around you'll find more and more. The web shifted away from its initial purpose (displaying and sharing documents) a long time ago, towards being a common platform for running full-fledged "desktop" applications in the browser. The question is what you want to develop and who the intended audience of your application is.
I run a native desktop app called Moneydance. Syncs files, encrypted, to dropbox, and everything runs locally on whatever machine runs Moneydance.
I use this because I don't like the idea of financial account info being stored by a third party that has no liability for a breach. And also because I have over 10 years of transactions in the ledger, and running this remotely can take a long time. I'm not even sure if there are any services that store 10 years of data and let you do text searches.
I've been working on a sort of hybrid of the two by configuring a web app to use CouchDB installed on the user's desktop pc to store the user's data.
The app code is delivered by a web server and runs entirely in the user's browser and with service workers configured it runs offline too.
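In case it helps anyone picture it, the browser side just talks to CouchDB's plain HTTP API on localhost. This is a hedged sketch: the database and field names are invented, and CouchDB needs CORS enabled for the app's origin.

    // CouchDB listens on http://localhost:5984 by default; 'notes' is a made-up db name.
    const BASE = 'http://localhost:5984/notes';

    async function saveNote(id: string, text: string): Promise<void> {
      // CouchDB requires the current _rev when updating an existing document.
      const existing = await fetch(`${BASE}/${id}`);
      const rev = existing.ok ? (await existing.json())._rev : undefined;
      await fetch(`${BASE}/${id}`, {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(rev ? { text, _rev: rev } : { text }),
      });
    }

    async function loadNote(id: string) {
      const res = await fetch(`${BASE}/${id}`);
      return res.ok ? res.json() : null;
    }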
Personally, I think it would be cool to have a sort of secure "runtime engine" on the client side that could manage permissions to system functions for client side web apps.
Barely ever. If I have to install an app, even (especially) an Electron app, you've already lost me unless you provide something I can't get anywhere else.
Yes, I detest Electron and how its adoption has made apps that would otherwise be great, clunky (looking at you Microsoft (vs code) :/ ). I love Sublime Text by comparison.
I'd pay good money for a native Evernote alternative.
I think no cross-platform solution will ever compare to native, and in the long run, native will prevail.
Electron might be good for quick development and deployment, but that's for the developer's benefit, not the consumer's.
OS extensions like command line tools, Alfred or Little Snitch will remain native by necessity.
As will applications like Final Cut Pro and Logic Pro. As I see it, the web platform is neither adequately performant, nor are its timing mechanisms accurate enough.
So no, native is sliding. I argue that >95% of users are happy with browsers and/or electron apps.
Either that, or those peeps move over to iPads, which by design force native or pseudo-native apps.
Sure, there's a place, but it's been my experience that native apps have very complicated frameworks using complicated (and unsafe) languages. Qt and GTK both build on C/C++, with other languages as an afterthought.
Web apps have an incredibly gentle learning curve and an equally low barrier to entry. If the same were true for native apps, they'd be more popular.
Web apps aren't displacing things, they're filling a void.
As a user, I really miss native apps. The difference in UI responsiveness is huge. Not to mention battery and memory usage.
As a developer I understand why the native app road can become a more expensive solution, especially for startups.
But I have to say that I am really tired of glitchy, super-heavy JavaScript apps, online, mobile and desktop. I would choose old, clunky but lightweight UIs every day.
Good luck using peripherals that require physical access to them. The day web browsers get full physical access (well, they technically do, but you guys understand what I am saying, right?) is the day hackers will transform the internet into that dystopian world from the old movie with Keanu Reeves - Johnny Mnemonic (the short novel is even better btw).
I think there is, and as the "desktop dies" I think we will be left with a higher percentage of users expecting high performance native apps, when "normal people" migrate to tablets and smartphones for all of their tasks.
I see the possibility of a resurgence of native apps based on that. or maybe it is just wishful thinking.
One trivial benefit of web apps worth considering is that they do not require installation. When targeting business users, this can be a big plus, as it turns out that installing/running unknown software in a corporate environment is usually not permitted. Most users will give up testing if it requires getting IT to install the software.
Native apps are better than web apps in terms of UX, but few companies have the budget to maintain both a native app and a web app (unless they can share code, as with Electron). Since most companies need a web app anyways, they have little incentive to spend the time/resources needed to develop and maintain a separate native desktop app.
I go out of my way to avoid web apps. They're uniformly awful. They also have a tendency to require an internet connection, even when they don't need to. Some can use local storage, but the other problems with web apps don't make up for it. I don't want an OS running on my OS. I want to just use my apps.
Look at the difference between Things and Todoist on macOS. Things is much snappier and better looking while simultaneously using fewer resources. Todoist is an Electron app whereas Things is native. It really does make a difference and I wish there were more truly native apps out there.
After using Photopea (web) as a virtually full replacement for Photoshop (desktop), I'm starting to think that these days, all interfaces and business logic should be web-first. JavaScript is fast, other languages can compile to it or WebAssembly, and HTML+CSS is a lingua franca. Obviously PWA's are required, designed to run offline.
Where brute-force performance is a #1 concern I think it diverges. For things like Photoshop filters or rendering, WebAssembly may still be fine (it depends). But for real-time processing that needs timing guarantees or more direct access to hardware (games, DSP, etc.) the code has to be native running in dedicated threads. Still, it would be nice if those operated essentially as native browser plug-ins that still interfaced with a browser environment, rather than a totally separate executable.
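For the filter case, the shape of it is roughly the following; a sketch only, where filter.wasm and its exported brighten function are hypothetical, and a real module would export an allocator instead of letting the caller write at offset 0.

    // Illustrative only: module path and export names are made up.
    async function runFilter(pixels: Uint8ClampedArray): Promise<Uint8ClampedArray> {
      const { instance } = await WebAssembly.instantiateStreaming(fetch('/filter.wasm'), {});
      const { memory, brighten } = instance.exports as {
        memory: WebAssembly.Memory;
        brighten: (ptr: number, len: number) => void;
      };
      // Copy the image into the module's linear memory and run the hot loop there.
      const buf = new Uint8ClampedArray(memory.buffer, 0, pixels.length);
      buf.set(pixels);
      brighten(0, pixels.length);
      return buf.slice();
    }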
I'm seriously ready to let my web browser just be the GUI to my operating system. I want my file manager and terminal and system preferences just to be tabs next to my Gmail and Docs.
Speed in relative terms means that JavaScript in the browser is not fast. The language runs on top of just about as many abstraction layers as possible and I do not envy this world you describe. I wish we’d move away from it.
But it's fast enough. JIT compilation means algorithms run decently fast, and when it deals with the DOM it's fast enough for everyday interface tasks.
Nobody's saying it's elegant. But it's cross-platform and it's a standard that lots of people know. Wishing we move away from it seems like a lost battle at this point -- at least unless we somehow invent a brand-new computing paradigm that replaces browsers, the way browsers have replaced apps in many cases. But I can't even imagine what that might ever be.
It's not though. Not even close. And I know this is partially a matter of taste, but I have one of the fastest computers you can buy and I can't even participate in text chats without input lag and delays in interaction. It's not a good user experience for me at least, but that's ok, we'll keep stacking more CPU power until it's all in the cloud anyway.
I’m talking about the native slack app. I used to use the irc gateway and now go back and forth between desktop and unofficial gateways. I use keyboard shortcuts to navigate between chat windows and nothing about it is instantaneous...even when there’s nothing but text. Same thing applies to discord, and less so with WhatsApp and signal. Messages on the other hand is silky smooth, and terminal irc couldn’t get any faster.
Wasn't it Steve Jobs' idea to have web apps for the iPhone, but the iPhone team talked him into native apps for various reasons such as performance and security, and he finally gave in?
I would assume the same for desktop versus web apps but as time goes on I think web apps will prevail.
I've been moaning about it for years now. I really hate the experience that web-apps provide. Electron is not much better in that regard. I want my apps to be native, I'm willing to pay for it, please stop moving everything to the browser
I prefer desktop apps and I pay license fees for those, which I use as standard for the task or daily. Nothing beats them. I especially like it if they are made scriptable, preferably by using a scripting host provided by the OS (if available).
For the company I work for, our software is preferred offline to secure sensitive data. We, and they, cannot afford data leaks like online “cloud” services.
We're a very unique niche software product that could be designed online-only; some have done that, but they slowly fail.
I would say it depends on complexity. Either algorithmic or UI complexity.
Web apps just aren't as efficient as desktop apps, and at this moment that is still relevant in various cases, though probably in fewer in the future.
I believe desktop apps are relevant for apps that are frequently used and need high performance that the browser cannot necessarily provide. Also, I am more focused on the task when using desktop apps.
Whatever contemporary technical arguments for and against are, it's worth bearing in mind that people were asking the exact same question, with similar arguments at least 10 years ago (and probably more), so I would say:
Yes! There is a place! My machine! Please keep writing them! Offline experiences that launch in ten milliseconds and use kilobytes (not gigabytes) of RAM, are better than any website or electron "app"!
Absolutely! Clean classic desktop apps (still common on KDE and Mac) provide the best UX so far - they are easy (intuitive) to use, they look tidy and they work fast. I choose such whenever it's available.
The lines between the two have been slowly blurring for some time now. My crystal ball says that will continue as the need for cross platform support, especially desktop versus mobile, gets bigger. I think we'll see more and more emphasis on 3rd party VM type frameworks, like JVM, unity, and electron. However with Moore's law reaching an asymptote I believe that writing for performance will also become more and more important. I think these forces will result in a high demand for the skills needed to improve performance on all types of apps.
What I'm hoping for is a pendulum reaction to the recent trends of UX design, where we see a renaissance in really solid interfaces that focus on discovery and speed. But I'm not seeing signs of that yet.
I want native apps not webpages that run in their own 'app'. If I want a web app then I'll run it in a web browser. Things like Electron apps are something I simply refuse to run.
Life sciences laboratory monitoring, trading dashboards, factory automation panels, car infotainment systems, graphics, audio and video processing tooling are all largely desktop applications.
The key advantage for webapps is distribution. I can create a new version and almost instantly distribute to all my users. Therefore webapps can change much faster than native ones
Web last, badly, and only when needed. Having tons of resources at our disposal shouldn't be an excuse to add layers of unnecessary stuff where native code always performs better.
There's still a place. But not for reinventing wheels and half bake a new take on a "solved" problem (mail reader, music player, personal task manager, etc.).
Productivity and true isolation matter, even something as minor as window size/placement... but that wouldn't replace the VS Code, Slack or Spotify clients; they simply do more.
I sell a native Windows program for monitoring battery life and health. It wouldn't be possible to build as an electron app, at least the battery monitoring part.
Native desktop apps are simply better. Cross-platform solutions promote mediocrity. Someone has to make good software otherwise our civilisation will fall.
Native desktop apps already are one of those fields that very few people can master. Such niche will probably become super lucrative in the upcoming years.
They're not that hard. They're just unfamiliar to Unix command line users. A lot of people managed to write desktop apps in Delphi, Cocoa, and Win32 for many years.
Edit: I presumed that the context of the discussion was apps in a general sense. Obviously not every application is going to be on the web, and there is a lot of niche software that will never make the switch. I was thinking more qucad or sketch 3d for woodworkers to plan their desks, not SolidWorks and all its plugins handling complex fluid simulation in mission-critical areas.
We've had some success with 2D AutoCAD on WASM, but it's not as easy as just recompiling when you are dealing with a nearly 40 year old codebase and care about pixel level accuracy. Also, it will never be as responsive as native, mouse events lag on the browser even if you are doing nothing else. AutoCAD users care a lot about such lag.
The BIM stuff (i.e. Revit) is very Windows-specific, so a straight port with WASM is a non-starter.
The actual 3D rendering with WebGL is the least problem, we have that nearly solved (in terms of performance relative to desktop -- we never really needed 60fps anyway). The problem there comes in when you have a huge design that does not fit in the allocated/allowed heap space.
I worked in the EDA space for a while and even built an online circuit simulation tool [0] while there. I'm aware of a couple of startups such as [1] that tried to move these tools to the browser. There are a number of challenges that make web apps difficult in this space, such as:
1. Pricing model: For the simulation side of EDA, you need a lot of computing power. With desktop apps, the user already has a powerful machine available that you don't have to pay to use. With the cloud, you need to pay for all the machines so finding a pricing model that works is a challenge.
2. User expectations: The users who are willing to pay > $100k/seat/year expect very advanced features that would probably take 10s or 100s of engineering years to get working. These users also don't care as much about the UI/UX problems that a modern app would solve.
God I hope not.
Web apps have come a long way. Some SPAs I've used feel native and provide just as good an experience as any desktop app.
Having said that, I feel that (IMO) a well designed desktop app is just more pleasant to use than any web site. I've written software for both scenarios and I enjoy developing native desktop apps over websites so I am probably biased in that respect.
HNers don't seem to get it, but the average, non-tech-savvy user really doesn't care whether an app is native or not. It just doesn't really matter, unless of course you're doing something performance-intensive.
But most people are probably not saying "ugh, Slack is made with Electron, I should completely avoid it."
As a Linux desktop user, I believe having web applications is preferable to desktop ones. However, the problem is that the industry lacks the talent to make performant web apps. Not every company can produce high-quality products such as Figma. Things on the web are not tied to walled gardens or any other stupid shit; they're easily accessible.
I suspect that many people shipping Electron-based apps would gladly switch to a native toolkit if there were a dead easy way to develop cross-platform GUIs.
In the 90s and early 00s, there were many easy-language products like Delphi, VB, and others that were much easier to use than C++ and friends. Cross-platform could be done with REBOL, REALBasic (aka Xojo), Runtime Revolution (aka LiveCode) and many other languages. Users didn't expect fancy flat animated user interfaces made of 100% custom controls; people were fine with normal native standard stuff as long as the app worked.
Now, there is no way to build cross-platform apps that is easier than Electron. Languages like modern C++ and Rust are extremely powerful, but they require more experienced developers to be used effectively. GUI toolkits like Qt and wx are good, but a ton of UX professionals now are trained to ship UIs in tools like Figma, eschewing all the native controls and creating beautiful masterpieces that are easier to do with HTML/CSS than to implement with a native toolkit.
Native apps got harder to do not because we lost access to the tech we had, but because of added friction. It is like death by a thousand paper cuts. You have the professionals designing the UX/UI create interfaces that are harder to reproduce with native technology, then you select a GUI toolkit such as Qt/wx and try to reimplement that using whatever language is best. It takes longer than developing an Electron app and requires more expensive developers. Don't get me wrong, it will be a better product, but it will take longer, be more expensive, and will need to be done with more care. And after the app is done, there is the need to deal with all the sandboxes and crazy stuff that Apple requires these days if you're shipping a native Mac app. Many Electron apps sidestep a lot of this by simply being thin clients for some SaaS company and doing the hard stuff on a server (which also opens an opportunity for mining the data for our capitalist overlords).
The barrier to entry for developing an Electron app is so low. JS is an easy language to start with. There are so many ready-made UI kits. You can have a small app running before a new developer can understand how the new pointer stuff works in modern C++.
All that text is to say that I think we need easier languages and toolkits that target native experiences. Something like Lazarus, LiveCode, are all steps in the right direction for me.
The whole idea of web first has actually made things worse. Let's compare web apps to native desktop apps, shall we?
Memory Usage
A few years ago while trying to figure out why I was running out of RAM, I stumbled upon chrome’s task manager. Gmail was using up 700MB of RAM and all I had open was the Inbox. No fancy search box or Compose, the Inbox was taking up 700MB of RAM to display lines of text in a Tabular format. I’ve been a heavy user of Email clients and if you pick any of the native clients (Thunderbird, Outlook, Apple Mail) and open a bunch of searches and compose windows the memory usage will still stay below 100MB.
I used mIRC a few decades ago to connect to multiple chat servers, run my own bots, serve files over DCC and run scripts all while staying under 20MB of Ram. These days to send a text message over Slack I need over 1GB of RAM. Even basic apps with all of their trackers, external javascript scripts and doms take up 100s of MBs in memory. And the most common solution to running out of memory is to close chrome windows.
Battery/CPU Usage
The DOM is slow. Updating layout and repainting is even slower. JavaScript, being a dynamic language, is slow. Web apps are anything but simple these days, and since they have to work under the limitations of all these slow components, the CPU ends up making up for it. Chrome is almost always present in my MacBook's 'Using significant energy' tab. On the other hand, I rarely if ever see native apps in the battery bar unless I'm doing something that should actually consume a lot of CPU cycles, like running multiple synths in Ableton or compiling.
When apps are built for mobile, battery usage is a major area to optimize on. But nobody even gives it a thought for web apps
60 FPS
Mobile apps run on 60fps, Native desktop apps run at 60fps, Games run ( or at least are designed to) run at 60fps. Instagram on chrome starts at 2-3 fps and usually hovers around 30fps. My laptop which is supposed to be more powerful than my mobile phone is only able to churn out half the frame rate even though they are both running at the same resolution! Web apps are not buttery smooth and can be jerky at times. Sure somebody can work really hard and optimize a website to run at 60fps at all times, but it needs a lot of effort. Mobile apps run at 60fps out of the box without much effort.
Mobile and native apps use the GPU a lot more efficiently. Since they have a lot more information about which controls are being used, they can recycle items a lot better and do a much better job at hardware acceleration via the GPU. Not to mention that you have access to raw graphics APIs in case you want to push UI performance even further.
Development Environment
You can build desktop apps in a plethora of languages, and even for mobile apps you have many cross-platform frameworks. But building for the web (and Electron) forces you into HTML/JavaScript (and derivatives). JavaScript is a dynamic, prototypal language that was not designed for the scale of apps that are being built in it today. It has no type safety, memory leaks are common, and it's slow due to its dynamic nature. While there have been many efforts to improve performance, like V8's JIT, it still lags behind a lot of major languages. WASM seems promising, but it still has a long way to go.
And it's laughable how weak the browser platform itself is. SQLite is available on the Raspberry Pi, embedded systems, mobile platforms and just about every other platform on the planet. You can store data, run complex relational queries and do more with so little. But it is not available on the most widely used platform on the planet, the browser.
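For contrast, here's roughly what a desktop (Node) process gets from SQLite in a handful of lines. A sketch only, assuming the third-party better-sqlite3 package and an invented schema:

    import Database from 'better-sqlite3';   // third-party npm package

    const db = new Database('mail.db');
    db.exec('CREATE TABLE IF NOT EXISTS messages (id TEXT PRIMARY KEY, body TEXT)');
    db.prepare('INSERT OR REPLACE INTO messages VALUES (?, ?)').run('m1', 'hello');
    const row = db.prepare('SELECT body FROM messages WHERE id = ?').get('m1');
    // Real relational queries, indexes and transactions, with no browser storage quotas.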
With the rise of electron apps, big and bulky web apps have started making their way onto the Desktop. There are some who are even trying to get the same apps on mobile via PWAs. While the web has the advantage of easy accessibility, IMO we have sacrificed too much in terms of user experience for that one convenience. If you built an app with the same UI and functionality in a native language and as a web app, the native app will run circles around it.
>Modern browsers these days are powerful things... just a curiosity, are there non-native browsers?
Sort of. There are browsers written in things like Java (HotJava was an old one that was basically a demo app) and so I guess anything written in a VM-language could technically be considered non-native.
this is just like asking:
do you prefer to use native/high performance applications? yes
do you like developing native application? no
if the performance and memory usage of web app already equal to native desktop app, which do you prefer to use? web
Adobe seems to have achieved the combination of SaaS and Native.
They got data lock in with Photoshop and Lightroom. Admittedly, the free alternative just doesn’t work as well as Photoshop or Lightroom.
The SaaS is on a monthly or annual subscription. So spending $20/month, if you only use it for 1 month, is far cheaper than spending $800 outright for a Photoshop license. And now, Adobe has the potential to reach a global audience of 7 billion people.
Of course, there is some risk that your code will be pirated, decompiled, and reassembled to remove the ET-phone-home feature, but this is only a minor annoyance. It's better to get your product pirated than a competitor's product. That way, the people using the pirated copies, who are likely poorer, or kids, will learn your program, and eventually pay for a full commercial license when they get a job professionally using the product.
Sure there is, but the place isn't with small - medium business. Only large corporations that can afford to develop native cross-platform applications will do so. Hence why there's an influx of Electron applications, etc.
I think you'll also find that the typical non-technical users simply don't care whether or not Electron is powering a piece of software. They just want something that looks modern and works "well enough".
You need a desktop app when 1) you’re spending a ton of time in the app or 2) you need every ounce of performance and feature set that you can obtain and 3) you’re willing to pay for it.
If you’re not building something like this, don’t build for desktop.
Not having a real native desktop player is why I won't subscribe to Spotify. When I'm going to use something more than lightly, I hate using applications that feel like web tech. But I realize I'm in the minority.
Reads comment. Cocks head, curiously. Closes eyes, listens to the music coming from his computer. Opens eyes, alt-tabs to the Spotify app. Closes all web browsers. Music continues, uninterrupted except by ads. Shrugs, goes on with his day.
I will try my best to be nice about this. But what the HECK are you talking about? Yes, of course there is. There will always be.
Electron (or whatever web-based nonsense somebody comes up with six months from now): we've optimized your insane garbage collectors. We've optimized your insane GPU modules. We've optimized data structures to get a 10X performance increase. We've cut down memory usage by half, which still rounds down to AT LEAST 100MB in most cases. Look at me now...
Desktop/Native:Good for you! But did you know my code runs on bare metal? The only thing stopping it is cache misses and dirty TLB entries. Sometimes I can use registers, but there's just a few of them. And, well, stupid RAM sometimes takes over 100 nanoseconds.
Web:What in the world is cache? TLB? Are you on something right now? What is a nanosecond?????
Web cannot perceive what a nanosecond is
Native:There, there. You'll figure it out one day.
I don't want to come off as snarky, but this feels like a dumb question. And yes, I know, from a "business" perspective writing the code "once" and having it run everywhere makes more sense. But let's be real, one of the main reasons a lot of these tech companies like Discord/Slack are jumping on the Electron wagon is money. It's cheaper to have one team of developers than three, one per platform. But even that argument feels a little weird to me when stuff like Qt, which is 100X more mature than Electron will ever be, exists. And before anyone asks, as long as you open source your app, Qt is completely free.
I do think that in SOME cases an Electron app MIGHT make sense. But cluttering native platforms with all this bloat is like riding a horse to work on your 40 mile commute instead of your Honda. Even that's a horrible analogy because your Honda is terrible for the Earth. But hopefully you get my point.
This lack of concern for computational resources by us programmers is utterly discouraging. As has been mentioned before in this thread, people that do ACTUAL work on computers will not take the bloated nonsense. Go ask the struggling artist if they prefer to use Krita or the new "modern" app that takes 600MB of RAM, where the entire system gets bogged down to the point that they can't do research in their browser while drawing, because Windows 10/macOS plus the "modern" app takes 90% of the RAM on their 4GB/6GB computer.
Well shit, I guess I need a 1500 dollar computer to just draw a few sketches. I already have crippling debt, what's some more debt under my belt gonna do?
There is a need for native apps, however that need is shrinking and, in my opinion, will continue to do so. I also happen to think that's a good thing.
The issue is that developing for a particular OS is, generally, more difficult than developing for the web. The app delivery mechanism is also more convoluted (for native apps) than simply entering a web address in the browser. The web also seems to have more ubiquitous standards that abstract away the differences between OSs - you can, with a high degree of certainty, ensure that your app is usable by 99% of computer users, given the current software they have on their device - with native apps, there is no such guarantee, especially if you're relying on shared libraries.
Browsers are also becoming more feature rich. This has had a negative impact on their memory consumption, but given it's 2020, some may retort "memory is cheap". And while some browsers (Chrome, looking especially at you) do a very poor job of memory management, I believe the competitive nature of the browser market will force a reawakening soon, where a lot of the inefficiencies in memory management will have to be eradicated (or vendors risk losing market share). Think back to 2014/15 when Node.js was really becoming established - the PHP team suddenly felt a need to re-optimise their engine for PHP7 (and with dramatic results)... Point is, only after ~20 years at the top did something spur PHP's team on enough to do something about their inefficiencies.
Finally, I can see a future where the line between web and native apps is even more blurred. WebAssembly is the first step, however things could get even more elaborate. Perhaps we could end up with the ability to start docker-like containers from within web apps (given the right permissions, of course) to spin up servers for audio/video/image processing on the client itself, and interact with them via a web page. If we get to this point, native apps would feel even more obsolete.
I know your question was only about the current state, but I felt the need to talk about the future to highlight the marked difference in the rate of innovation in the web ecosystem (fast) vs the native ecosystem (slower). It would not surprise me if the web ecosystem ends up winning in the end. If I was building an app today, and it only requires features I can deliver via the browser, I would almost certainly go the web app route.
Now, if I was building a video processing app today, I'd almost certainly go native, but in a couple of years, my answer may be very different.
Don't care, i'll take a good web app over a shitty native app any day.
Since native apps take much longer to develop and the UI toolkits are stuck in the 90s, it's pretty much a given nowadays that new ones are shitty compared to the equivalent web one.
Please take a look at actual screenshots from the 90's to quickly disprove your assessment.
The only other argument you mention is that native apps take much longer to develop, and that from that somehow follow that they are "shitty". I'm not even convinced the premise is correct, but I certainly don't see why longer development times would imply worse outcomes...
Intuitively, I would assume the opposite: something that is put together all too quickly should have a higher potential for being half-baked.
I'm not criticizing your preference, I just don't buy the arguments.
But like, do they? Personally, I've found using a high-level native UI API (like Racket's) to be much easier than a web app. More flexible and performant too.
Personally, I hope that web browsers die off and we go back to gopher for displaying information. I hate the web.
Not anymore. The security model for native applications is broken; the OS does not sandbox applications as effectively as browsers do. Pre-Web 2.0 you constantly had all sorts of junk installed on your system because installers just did as they pleased. Applications like Adobe CC abuse root rights, have critical security flaws because of this, run many system services, and mess up the registry; Oracle put adware in the JRE installers. Large and small companies constantly abused this. Every 5-6 months you would reinstall your OS just to get a clean base; the performance boost on a fresh install was always a strange experience, as you felt happy about the boost but also sad that it wouldn't last even a few weeks.
While there are high-performance, graphics-heavy tools and niche products that still require native apps, with tech like WebGL and WASM (WebAssembly) this limitation is also going away. Unless your application falls into the category where the gap between browsers and native is still very big, do your users a favour and build a non-native application.