
> Let's ask the question differently: what problems were solved?

A few more:

* Seamless internationalization. If you're a native English speaker, you probably never experienced the "fun" of dealing with French and Russian in the same text document. Pre-Unicode encodings supported English plus one other language, if that other language wasn't too weird (see the first sketch after this list).

* Lots of tiny quality-of-life improvements. E.g., never seeing windows repaint costs a LOT of memory: every window's contents are kept in RAM even when it's not visible, so that when you switch to it you never see it paint (see the second sketch after this list).

* Stability. Windows 9x tried to be frugal by keeping a single shared copy of each library in system32. The result was called "DLL hell". So the current standard is that each app packages every framework it uses, which is how you easily end up with half a dozen copies of Qt.
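
To make the pre-Unicode point concrete, here's a minimal sketch using the standard TextDecoder API (the byte value is just one example): single-byte code pages reuse the same byte values, so a document can only pick one interpretation.

    // The same byte decodes differently under different code pages:
    const byteE9 = new Uint8Array([0xE9]);
    console.log(new TextDecoder("windows-1252").decode(byteE9)); // "é" (French/Western)
    console.log(new TextDecoder("windows-1251").decode(byteE9)); // "й" (Russian/Cyrillic)
    // A pre-Unicode file stored raw bytes plus ONE code page, so
    // French and Russian text couldn't coexist in the same document.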
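
And to put a rough number on the repaint point (back-of-the-envelope, assuming a full-HD window and 32-bit color):

    // Each composited window keeps its full pixel contents in RAM so
    // switching to it never shows a repaint.
    const width = 1920, height = 1080;  // assumed full-HD window
    const bytesPerPixel = 4;            // 32-bit RGBA
    const perWindowMiB = (width * height * bytesPerPixel) / 2 ** 20;
    console.log(perWindowMiB.toFixed(1) + " MiB"); // ≈ 7.9 MiB per window
    // A dozen such windows is ~95 MiB of pixel buffers alone, before
    // any double or triple buffering the compositor adds on top.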

> Do we need GBs instead of MBs for that? Why?

Well, let me look at my AppImage:

* 3.8 GB total.

* 2.3 GB of dependencies: 2.1 GB is libnode, 128 MB is Qt WebEngine.

* 1.4 GB of application: 126 MB of JavaScript and UI images; the rest is mostly code.



> * Seamless internationalization. If you're a native English speaker, you probably never experienced the "fun" of dealing with French and Russian in the same text document. Pre-Unicode encodings supported English plus one other language, if that other language wasn't too weird.

For some programs, that hasn't changed. I use OneNote heavily as a sort of personal info database that I look up whenever I forget something or need to reproduce a command verbatim quickly. The act of writing and organizing the data also heavily reinforces my ability to memorize things in the first place. So I'm quite fond of that little program.

When I tried to use it while learning Chinese, I ended up having to turn off the spelling/grammar correction. It just can't handle two languages in the same notebook. All the Chinese text got the red squiggly lines warning of a mistake, and I found no way to enable support for more than one language. You must select /one/ language for the spell checker in that program.

Or disable the spellchecker, which is what I did in the end.


I guess that's the worst case: we add gigabytes of data and require orders of magnitude more CPU and memory, but reintroduce problems that were fixed long ago.

I see that a lot with JavaScript apps. When they replace native components, they often fail in the details: e.g. my native text areas can handle multiple languages when spell checking, but that DIY or spellcheck.js npm version cannot.
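
For reference, a minimal sketch of how the native path handles this in a browser: the built-in spell checker follows each element's lang attribute (exact behavior varies by browser and installed dictionaries).

    // Two editable regions, each marked with its own language so the
    // native spell checker can pick the right dictionary per region.
    const en = document.createElement("textarea");
    en.lang = "en";
    en.spellcheck = true;
    en.value = "This sentence is checked against English.";

    const fr = document.createElement("textarea");
    fr.lang = "fr";
    fr.spellcheck = true;
    fr.value = "Cette phrase est vérifiée en français.";

    document.body.append(en, fr);
    // A JS reimplementation has to ship and wire up a dictionary for
    // every language itself, which is exactly where it tends to fail.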


My point is that we are improving the experience.

But the "cost" of doing so isn't what's technically required to improve it. You could achieve all these improvements, and solve all these problems, with little or even negligible added resource usage.

Therefore my conclusion is that the reason e.g. Slack uses 1000x what my old IRC or Jabber client used isn't technical. It's a deliberate choice made for reasons of budget, time to market, or some other trade-off.

I'm certain that Slack could build a client that does everything Slack does while being hundreds of times snappier, smaller, and lighter on CPU and memory. But probably not with their current pace, budget, team, or wages.


> Therefore my conclusion is that the reason e.g. Slack uses 1000x what my old IRC or Jabber client used isn't technical. It's a deliberate choice made for reasons of budget, time to market, or some other trade-off.

That was never not the case.

Jabber makes heavy use of XML, which back in the day was very much seen as overkill. It requires a fairly complex parser and increases the amount of data considerably.

They could have gone with a much more compact binary protocol of ID/length/value triples, with no field names at all, just, say, a 16-bit integer ID allocated from a central registry.
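
A minimal sketch of what that could look like (the tag numbers here are made up for illustration; a real protocol would assign them from the registry):

    // Encode one field as tag (16-bit) + length (16-bit) + value bytes.
    const TAG_TO = 0x0002, TAG_BODY = 0x0003;

    function field(tag: number, value: Uint8Array): Uint8Array {
      const out = new Uint8Array(4 + value.length);
      const view = new DataView(out.buffer);
      view.setUint16(0, tag);          // which field this is
      view.setUint16(2, value.length); // how many value bytes follow
      out.set(value, 4);
      return out;
    }

    const enc = new TextEncoder();
    const to = field(TAG_TO, enc.encode("romeo@example.net"));
    const body = field(TAG_BODY, enc.encode("Art thou not Romeo?"));
    // 44 bytes total, vs. ~74 bytes for the equivalent XMPP stanza:
    // <message to='romeo@example.net'><body>Art thou not Romeo?</body></message>
    // and the decoder is a trivial loop instead of an XML parser.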

Even going back to DOS, you could shrink a program with measures like printing "Error #5" instead of "File not found" and requiring the user to look the code up in a manual.
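
The same trade-off in miniature (a toy sketch, not real DOS code; the code numbers are illustrative):

    // Frugal: ship only a number and let the user consult the manual.
    function failFrugal(code: number): void {
      console.error(`Error #${code}`); // e.g. "Error #5"
    }

    // Friendly: every message string is baked into the binary.
    const messages: Record<number, string> = {
      5: "File not found",
      // ...one string per error, each costing bytes on disk and in RAM
    };
    function failFriendly(code: number): void {
      console.error(messages[code] ?? `Error #${code}`);
    }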


I don't know about NeXTSTEP, but macOS had all this stuff when I first used it 20+ years ago. It featured composited rendering, had the apps, shipped ppc and x86 in a single app bundle, and had a microkernel. I even remember it got an emulator (Rosetta) for running ppc code on x86.

The newest macOS still needs more memory and suffers from bloat, but 8GB is still perfectly usable if you avoid Google Chrome. 8GB is perfectly usable for Linux too.


My SO's laptop is an Intel MBA with 8GB of RAM. Everything's fine until Google Chrome starts (some work tools require it). Even though the CPU is not as efficient as the M-series, it runs quite well, even in tropical weather. But launch Chrome and you have a toaster.


Yes, but OS X was widely considered unusably slow for the first few versions. Also, OS X was only partly a microkernel: things like the filesystem and network stack ran, and still run, in kernel space.



