“We can learn from the competition,” said Dotzler. “The way they implemented multi-process is RAM-intensive, it can get out of hand. We are learning from them and building an architecture that doesn’t eat all your RAM.”
That's the money quote here. I've been waiting for this for a long time actually. Every browser I've tried except Firefox just basically eats all my RAM and other app performance (e.g. compiling stuff) goes down the toilet.
Then on the other hand FF has not been so snappy and responsive traditionally. So responsive + soft on RAM is the combination I've really been waiting for. Let's hope they can deliver.
TP5 loads 100 popular webpages, served from a local webserver.
I don't know if that means the test pages are up-to-date copies of real-world pages or not. On one hand, real-world pages have certainly been getting heavier, and it would bear mentioning if that fact was not reflected in this test. On the other hand the graph becomes really hard to interpret if the test suite isn't a constant.
The actual pageset appears to live on build.mozilla.org[1], but is down atm (for me at least). The actual list of pages is on the areweslimyet github repo[2].
It seems like it would be a really good idea if they also displayed a line for a baseline version of Firefox, holding the version constant so you can see how the webpages have changed.
Not only is this false, but it completely ignores the troves of older hardware in people's homes that have to run browsers just as well.
The average consumer hardware improves at nowhere near exponential rate (although I guess it depends on the timescale and your willingness to fudge the graph to sparse data points).
>completely ignores troves of older hardware existing in people's homes that has to run browsers just as well.
The performance has gotten worse for users of a computer from 2006, but it has improved for users of ten-year old computers (since a ten-year old computer in 2012 would have been from 2002). If you assume that a computer is less likely to be used the older it gets (and that this rate is roughly stable), then I am not ignoring older hardware at all.
>although I guess it depends on the timescale and your willingness to fudge the graph to sparse data points
Thus is the nature of all real-life long-term trends. If you pick a sufficiently small timeframe, they never hold up.
I'm saying that the performance of today's firefox on 2006 hardware is better than the performance of 2012's firefox on 2002 hardware. I consider that to be a better measure than to compare both releases on 2006 hardware.
Hardware does not get exponentially better. Maybe it did in the 90s and 00s. Right now hardware gets slimmer, less power-hungry and cheaper, but not faster. My 12-year-old machine (then a mid-end gaming PC) is pretty much exactly as fast as my current, 2(?)-year-old office machine (x264 and other multi-threaded CPU benchmarks). But it costs a fraction and uses next to no energy.
/edit: Yes this is wrong. I made a mistake. My "gaming" desktop is "only" 8 years old. It has a 4 core AMD processor. It's on par with current atoms.
An extremely low-energy Atom that runs at half the frequency might have about the same performance as a top-of-the-line 12-year-old Pentium 4, but that's not a meaningful comparison in my eyes, considering that we still have high-end gaming PCs today.
An 11-year-old Pentium 4 @ 3.4GHz has a PassMark score of 401. A modern 2-core $40 Celeron G1840 @ 2.80GHz has a PassMark score of 2984. And that's among the cheapest processors you can get as an end user.
>that's not a meaningful comparison in my eyes, considering that we still have high-end gaming PCs today.
It is meaningful when you consider what the average user is purchasing. Back then you had to purchase good stuff to run the latest office apps, etc. It was common that the only thing needed to get a friend into the gaming world was the purchase of a graphics card. Now most casual users do not have anything close to what is offered in high-end gaming PCs.
Right now, buying anything that doesn't have an Atom gets you a literal monster. I had a "decent" laptop (entry-level i3 and entry-level discrete AMD *spits with disgust* GPU) from 2010, and anything for the same price in 2016 will get you an order of magnitude more powerful machine.
Your 12-year-old Pentium 4 (or whatever) likely has 1 core and 1 thread, which is immensely noticeable even compared to lower-end modern 2-core/4-thread machines (let alone the 4/8 mid-level modern desktops).
RAM available in mobile phones seems to grow exponentially [1]. Power efficiency of processors also seems to grow exponentially [2] (does anybody have a source for ARM processors?).
Power usage has little to do with memory usage. In fact, if the browser is using more RAM as a cache, it won't hit persistent storage as much, which actually increases battery life.
If that exaggerated RAM use causes other more intensive apps and their data to be evicted, that can cause a much more significant load when the app has to spin up again and re-acquire or re-calculate its state.
The extra chips are not free. What we are seeing is the latest low-power tech being brought into phones, for now. Except RAM has already hit maximum densities per chip, or is very close.
Remember that RAM has to be powered on for every refresh cycle...
It is "nothing" compared to screen or radios, but it is there. So you won't be seeing more than one Dram chip in there, footprint notwithstanding.
Browsers cache web content, not much persistent-storage content... and the OS does persistent-storage caching fine; there's typically no need to add your own (poor) layer on top of that.
My hardware doesn't improve exponentially. In fact, my current work laptop has less memory (8G) than my previous one (16G) or previous desktop machine (also 16G). And it mostly works with Chrome, but I stopped using Firefox because my habit of opening dozens of tabs led to stratospheric memory usages that killed even 16G machines. That was somewhere in 2013, looks like it's even worse now?
> I stopped using Firefox because my habit of opening dozens of tabs led to stratospheric memory usages that killed even 16G machines
I don't think that's representative of how Firefox works. I used Firefox similarly in 2013 and didn't see those problems; the users supported by my company, mostly on 4GB machines, also didn't see them.
I have 112 tabs open right now and Firefox has been running for a couple of weeks, which means thousands more have been opened and closed. It's using 4 GB of memory (EDIT: which is more than usual, but it's fine with me).
Agree with you here - this is essentially the reason that I don't use Chrome on my desktop (aside from brief testing, or to load up something quickly that requires Flash).
Firefox is still a lot better at RAM and CPU usage when hundreds of tabs are present in the browser. This persists when using e10s.
I see your 112 and raise you 875. FF worked just fine with that too, on less than 1 GB of RAM. I had to block Flash, hand-tune some of the cache sizes, use a tab unloader, etc. No other browser seemed capable of handling my rather <cough> unique <cough> use case.
If you don't mind me asking, what are you doing that you have 875 tabs open? How are you able to keep track of what you opened, and where, etc.? I tend to feel anxious and unfocused when I've got over a dozen or so open - at which point I'll dump some "I'd like to read this" articles into a bookmarks folder which I work through in my down-time, and clean out periodically.
There are extensions that can search the body of the open but out of view tabs.
I have commented at length why I use so many open tabs. Basically that's my cache-to-do-list. Once I revisit certain tabs enough times they get upgraded to a bookmark. What bookmarks don't do well but tabs do marvelously is preserve the trajectory and context of how I arrived at the page. Very useful to recreate the state when I visit it later.
Firefox has gotten much more stable over recent years in regards to having excessive amounts of tabs open. I can have over 300 tabs open and it's stable (I was using Waterfox when doing this in Windows though).
I have over 300 saved tabs in Firefox on my desktop, and it is stable, but I notice it runs quite a bit slower than when I only have a few open. I really need to start closing some tabs here and there...
It only loads the actual tab when you activate it though, so I don't think all the tabs are being stored in memory, just the ones you activate during a session. I probably actually load about 30 or so during a session, with many more being opened, looked at then closed.
Thank you for your evaluation of my buying decisions! With absolutely no information except one data point about memory size, you were able to conclude they were bad; this is remarkable. It has nothing to do with the point in question (that memory capacity isn't growing anywhere near exponentially, not lately), but still a very remarkable insight.
I assumed your workplace made the decision and not you. Sorry for appearing to criticize your decisions. I'm still curious why you downgraded to a machine with less memory.
But this renders your original point moot. Memory is still growing, you just chose a lower end laptop. Obviously you would see it decrease. For me, my previous laptop had 8GB and the current one has 16GB. Exponential increase.
There are more and more features every month. Add that to the various performance improvements (which can take a hit on RAM, depending on how they're implemented), and it becomes a bit more understandable why you would see such an increase.
You are misreading, given that there were big improvements in memory measurement, and some of the memory-use increase you notice is just Firefox measuring its own memory better.
Well... AWSY started a few years ago and many things have happened to Firefox since then. I cannot tell you exactly which increase corresponds to a memory measurement improvement, but I can point you to the blog of Nicholas Nethercote: https://blog.mozilla.org/nnethercote/
It makes sense, though: as more features are in use on the set of tested pages, these will inevitably use more RAM.
It's good to monitor that and ensure it stays within acceptable parameters, though, and potentially improve on it again. At least the growth is pretty slow since they "fixed all the stuff".
IIRC, the set of tested pages are a static snapshot; I don't think the machines have general internet access (because that would lead to spurious failures when the other end has problems, which is annoying as that means nobody can check in code).
A few years ago I had a potato Android tablet. The thing couldn't play music without stuttering; Chrome would take more than a minute to render a page, then another minute if you scrolled.
Firefox was first released for Android around the time I bought it, though, and turned it into a capable web browsing device.
Honestly, it's been years now that FF is one of the browsers - if not the browser - that uses the least amount of RAM.
It's also one of the most energy-efficient on most platforms. (Edge is more efficient on Windows, though that doesn't take into account that its engine is always running even if another browser is in use.)
2. Site-specific Google searches, with autocomplete for the site URL
    command -nargs=+ -complete=url site open google site:<args>
    map s :site<space>
All this is really neat functionality, and very simple to achieve. I am waiting for good per-tab sandboxing, but in the meanwhile I have aliased firefox to /usr/bin/firejail so the whole application is sandboxed.
Apologies for the off-topic comment, but it's really exciting how a program that seemed to have lost development momentum is becoming really great again. Quite close to being the Emacs of browsers if you ask me, with so many features and so much scriptability. Paradoxically, you get the best access to these features through a plugin that tries to emulate vi, but you get the idea!
On the other hand, all the internal refactoring happening to Firefox may threaten the addon ecosystem by constraining it Chrome-style. There's no telling whether there'll be a Vimperator for Firefox 50, 55, 60. Judging by how vi-emulator addons work for Chrome, it could end up a real travesty.
I'm currently at a little over 1 GB with 8 tabs open.
I really wish I had the option to trade off responsiveness for memory usage by being able to mark tabs as "no background activity allowed" so FF could serialize that content to disk when the tab isn't in focus.
The main reason tabs aren't saved to disk isn't a concern about responsiveness, it is a concern about data loss. You have to be very sure that you restore the state exactly. What if you spent a few hours filling out some government form online, then you went and took a break by watching a few YouTube videos? Memory usage might spike up, and the browser decides to unload the tab with the form. You'd be mad if form state got lost when you switched back.
I've heard that some people set Firefox to not autorestore tabs on load, and then periodically close and reopen Firefox, so their background tabs aren't loaded. Pretty clumsy, but I guess it gets the job done. I don't know if there's any addons that do something similar.
>> by being able to mark tabs as "no background activity allowed"
> What if you spent a few hours filling out some government form online, then you went and took a break by watching a few YouTube videos? Memory usage might spike up, and the browser decides to unload the tab with the form. You'd be mad if form state got lost when you switched back.
GP is talking about actively marking tabs as "no background activity allowed".
I have been searching in vain for something like that: stop scripts in background tabs except for specifically whitelisted pages.
There is absolutely no reason that I can see why every website and their ads should be allowed to roam around freely in the background.
There are, however, a number of reasons why they should not be allowed. Battery life and phishing attempts readily come to mind.
I guess whoever comes up with a well-working extension for that can easily ask me for USD 30 or even 40 and expect me to be a walking billboard for it ;-)
Ok. Will try. I think I see an issue here where I want to go back to a tab I have open on the train home.
If it is unloaded, does that mean I have to connect to the Internet again to get hold of my page?
(I'd think what I want is just to pause any JavaScript or other processing from any webpage that is not focused in the browser and not specifically whitelisted.)
Background tabs already have their timers run less frequently. In Firefox, that was added in 2011 [1]. Certainly further improvements to reduce background tab activity would be nice. One way you can sort of get that effect is to use Reader Mode, on pages where that works.
We've reached a point where a lot of consumer hardware simply can't be upgraded. The ideal of just adding more RAM isn't possible (or perhaps practical in many cases). MBPs can't be upgraded at all these days. My Lenovo laptop can, but going 16 GB -> 32 GB requires far more than $30 since I'd need to buy 16 GB SODIMMs.
That's also a bit beside the point. I have a workstation with 48 GB RAM. I left a browser running in the background and had one tab consume 20 GB on its own before I killed it. Many of these leaks seem to just be unbounded memory growth, so that hypothetical 2 GB you add is going to be eaten up in short order anyway.
The scenario you describe with one tab using 20 GB is "beside the point" and certainly an exceptional situation. I've never seen that happen... I think I've had one Firefox crash this year, and that time it wasn't using all the RAM - it was hogging 100% CPU instead.
Of course the extra 2 GB might help for someone with less than 48 GB in their machine (most of us!).
Well, my point more was apps/sites that go sideways will eat as much RAM as they're allowed to. The 20 GB case was only special in that the leak rate was much higher than most. On a smaller scale, whenever my laptop fan kicked on, I knew I left a Travis tab open in the background somewhere. That was the CPU going into an aggressive GC loop due to the relatively huge amount of memory being used (2 - 4 GB).
But my original point is really that dismissing gross memory usage by saying you can buy more RAM for cheap is no longer accurate. In many cases, your machine's configuration is unchangeable. Getting an extra 2 GB RAM isn't $30, it's the cost of a brand new device. In other cases, where you might have configurable hardware, you often have to buy the largest capacity chip available and the economics get skewed. For SODIMMS, going 8 GB -> 16 GB is reasonably cheap, 16 GB -> 32 GB is fairly expensive.
I'd argue having to do a hardware upgrade for a web app in the first place is a bit silly. There's often very little reason for these apps to be so large in the first place.
It is about things like harassing my CPU multiple times a minute after my search results have loaded (looking angrily at you, Googlers; Google was the worst offender here, it seems! (Windows 10, FF))
FF mobile does this and it drives me nuts. I'll be on 4chan, watch a YT clip, then tab back and FF reloads the original HTML. Apart from being slow (takes a few seconds to rerender it all), it also loses AJAX-loaded data.
My phone reports it has 600MB of RAM free, so I dunno what's causing FF's behaviour here nor how to disable it. I can't always reproduce it, but it happens enough that every browsing session has some sort of frustration.
All the mobile browsers I've tried do this - Firefox, Dolphin, and stock Android. I distinctly remember browsing the web quite happily with Firefox desktop on a machine with 512 MB of RAM as recently as 2008, so I can't help but feel we've gone backwards.
Am I the only one who loves this behavior? I'd be perfectly happy forcing Chrome/Firefox/etc. into a 1GB sandbox and then telling it to "unload tabs on memory pressure." I only use a few tabs at a time; a simple LRU OOM-eviction algorithm should work wonders.
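Something along these lines is what I have in mind - a back-of-the-envelope sketch with made-up types and numbers, not how any actual browser implements it:

    // Hypothetical tab descriptor; "bytes" is the memory held by the tab's content.
    interface Tab { id: number; lastActive: number; bytes: number; pinned: boolean; }

    // Unload least-recently-active tabs until total usage fits the budget.
    // Only the content is dropped; the tab's URL/session entry stays, so
    // switching back just reloads it.
    function evictUntilUnderBudget(tabs: Tab[], budgetBytes: number): number[] {
      const evicted: number[] = [];
      let used = tabs.reduce((sum, t) => sum + t.bytes, 0);
      const candidates = tabs
        .filter(t => !t.pinned)                        // never touch pinned/focused tabs
        .sort((a, b) => a.lastActive - b.lastActive);  // oldest first (LRU order)
      for (const tab of candidates) {
        if (used <= budgetBytes) break;
        used -= tab.bytes;
        evicted.push(tab.id);
      }
      return evicted;
    }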
"From the network" isn't actually a part of the semantics of this behavior, though. If the page's browser-side cache is still valid according to its original response headers, the browser will just reload the page from cache rather than hitting the network.
This is the default. If you set it to restore from your previous session on startup (or if it restores your session after a crash, or you choose History->Restore Previous Session), then it will only load tabs when you switch to them.
With applications written in JavaScript? Stateful on the client side alone, or stateful on both sides? It won't be safe to assume that state is persisted transparently on the server side. Maybe they just cache it for a while.
To make this work reliably would be quite difficult.
You could be really conservative: only assume statelessness if the page has no <form> elements and has no calls to the XMLHTTPRequest API (or window.eval).
Or you could be really conservative: only assume statelessness for pages without forms and without any Javascript. (These do exist!)
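A toy version of that stricter check might look like this (illustrative only; a real implementation would need to be far more careful):

    // "Probably safe to unload": no form controls and no scripts at all.
    function looksStateless(doc: Document): boolean {
      const hasFormState = doc.querySelector("form, input, textarea, select") !== null;
      const hasScripts = doc.querySelector("script") !== null;
      return !hasFormState && !hasScripts;
    }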
We have seen things people wouldn't believe. Webpages without any Javascript. Webpages using correct http responses. Watched C-beams glitter in the dark near the Tannhauser Gate. All those moments will be lost in time, here come the transpilers
For a reference point on scaling, on a Windows 7 machine that hasn't been shutdown in a week or two, I have a long-running Firefox session with 5 windows and 80-something tabs, and I'm at 1.7 GB. Half a GB of that is being consumed by the Google Music player.
I've noticed Google Play Music is heavy. It can easily consume half a GB or more on my machine. I think at this point the browsers are doing the best job they can but it's pretty telling that it's the web apps/sites we use that cost us memory.
I'm at 2.9G virtual 1.277g resident with about 130 tabs open (yeah, I'm a goofball but I like my tabs). It's pretty responsive on a thinkpad x220 i5-2540M @ 2.6ghz, 8G of ram and an SSD.
A few people use Tree-Style Tabs, which organize them as a tree structure, similar to directories. It encourages using tabs instead of bookmarks, which have a terrible UX (in all browsers, not picking on Firefox here).
Tree Style Tabs is the primary reason why I stick with Firefox. Without it, I wouldn't even know where to begin managing the relevant sites/info I encounter when reading up on diverging lines-of-thought.
For me, it is /the/ absolute killer feature for browsers.
Not sure how he does it, but I don't manage those. I just don't close tabs (I usually forget to do it :), so after week or two I end up with 200-300 tabs (my record is 900).
I tried to use Chrome once and it ate all my RAM, and back then they were saying it uses less RAM than Firefox.
That's interesting, because for me the Windows task manager reports 377MB with 10 tabs open, some of them news sites that tend to be "heavy".
edit: oh, I'm using an ad-blocking add-on and maybe you aren't; that could be one reason. Besides different websites, another reason could be that you've had the browser open longer than me.
Even in an iOS fashion, in the way backgrounded apps work. I don't need tabs consuming memory in the background, and if I did I could white list it as able to run in the background.
I'm curious - what's the benefit of doing this sort of thing at the application level, rather than letting the OS's virtual memory system handle it as usual?
Can someone describe what the "architecture that doesn't eat all your RAM" here is? Is it possible that it will inadvertently provide weaker security protections between tabs than the more naive and RAM-intensive architecture?
One thing that Firefox does well is suspend pages you are not using, likely saving the state of the application to disk. Another thing can be that when multiple pages use the same or similar JavaScript/CSS/external resource files, they only need to be loaded into memory once. Another can be that certain JavaScript globals are only loaded when a getter asks for them. Things like AudioContext and RTCPeerConnection are not widely used; changing the name in the global space to a getter that loads the libraries, instead of providing them on start, may reduce load.
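The lazy-getter idea would look roughly like this - a sketch of the pattern, not Firefox's actual code, and loadAudioBackend is a made-up stand-in for the heavy implementation:

    // Expose a rarely-used global via a getter that loads the real
    // implementation only on first access.
    let cachedAudioContext: unknown;

    function loadAudioBackend(): unknown {
      // Stand-in for "pull in the real library only now".
      return class AudioContext { };
    }

    Object.defineProperty(globalThis, "AudioContext", {
      configurable: true,
      get() {
        if (cachedAudioContext === undefined) {
          cachedAudioContext = loadAudioBackend();
        }
        return cachedAudioContext;
      },
    });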
If suspending pages to disk is really what's happening, I'd much rather my browser eat up more RAM than needlessly write to my SSD.
A lot of the improvement in the reliability of consumer grade SSDs is due to OS improvements in how they manage their write operations to the hard drive.
Windows is considerably more conservative in how it uses its page file and what it writes off to the SSD; Linux has also made a lot of improvements in how it uses the swap partition, if it uses one at all. Other things, like being able to suspend the PC into "hibernation" without actually writing a hibernation file to disk - by keeping the CPU at pretty much an off state while refreshing the RAM - also save a lot of day-to-day write operations, especially on mobile devices.
So overall I highly doubt that Firefox went with the suspend to disk route, because if it did it would be a pretty big step backwards.
Also, as far as power conservation goes, writing to an SSD/HDD is more expensive than writing to RAM and keeping it refreshed, so using the SSD as a storage device instead of RAM can also have negative effects on the battery life of mobile devices.
While I agree that I would rather my browser use more RAM than constantly suspend pages, I disagree with your notion that
>A lot of the improvement in the reliability of consumer grade SSDs is due to OS improvements in how they manage their write operations to the hard drive.
We're talking on the scale of writing 500GB to your SSD every single day for 10 years in order to reach the endurance limit (on average). Writing a few more GB at worst from your browser is hardly going to affect that. It's even doubtful that OS improvements change it much, as Windows 7, which a lot of people are still on, doesn't come SSD-optimized.
Meh. SSD write amounts matter if you're running a datacenter. They don't matter to a normal user on a laptop/desktop using a browser. SSD lifetime simply isn't an issue, as the other commenter pointed out.
Also, from a more philosophical standpoint - this isn't Mozilla's issue. They shouldn't have to worry about conserving SSD writes. That's not their problem. That's the OS / Disk firmware / manufacturer's problem. If I want to write files to the disk at a very reasonable rate, I should feel free to do that.
> A lot of the improvement in the reliability of consumer grade SSDs is due to OS improvements in how they manage their write operations
Do you have any references for this? I don't think I've ever heard a story of a consumer user running out their writes. Not saying it hasn't happened, but it's not enough of a common occurrence to be a major factor in reliability.
In my experience, the overwhelming improvement in reliability is in the firmware of these drives coming out of the perpetual beta phase (and the death of OCZ).
Yes, that is definitely possible. Chrome isolates data as much as possible between processes. A shared cache would be a big improvement in memory usage, but security is the biggest concern.
Duplicated memory and just overhead from running separate processes (Chrome), as opposed to shared memory and less overhead from a single process (IE? Old FF).
As for security, no, unless there's some unknown vulnerability now (or that they create), that will be ported over and somehow more effective between (potential) processes. So, I doubt it.
Chrome uses separate processes for sandboxing purposes, not just for the fun of it. So yes, protecting against "some unknown vulnerability" is the entire point behind it.
As a fun fact, separate virtual machines running under a VMWare ESXi hypervisor—which is about as "sandboxed" as you can get—still share memory: the hypervisor hashes the 'cold' pages of its VMs and, when it finds duplicate page content either within or between VMs, it merges the duplicated pages together into single copy-on-write pages. ESXi has never had a security vulnerability due to this optimization, AFAIK.
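The mechanism is essentially content-based deduplication. A toy sketch of the idea (not VMware's actual code), using a hash index over page contents:

    import { createHash } from "crypto";

    const canonical = new Map<string, Buffer>();  // content hash -> shared page

    // Returns a page that callers map read-only; a later write triggers
    // copy-on-write and gives the writer its own private copy again.
    function sharePage(page: Buffer): Buffer {
      const key = createHash("sha256").update(page).digest("hex");
      const existing = canonical.get(key);
      if (existing && existing.equals(page)) {  // full compare guards against hash collisions
        return existing;
      }
      canonical.set(key, page);
      return page;
    }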
True! Rowhammer-based attacks are kind of unique, though; I expect they'll be treated as a hardware bug and solved by releasing better hardware, rather than through inventing even more layers of securinoia to keep in mind from now on.
It's sort of like when WebGL was first getting going, and the GPUs of the time didn't expect to be fed shaders directly from potentially-malicious web sources. Rather than severely restricting the WebGL API, we got a new generation of GPUs that fail safe.
> I expect they'll be treated as a hardware bug and solved by releasing better hardware
When Rowhammer was first announced, people said "it's okay, we have ECC". Then ECC was shown vulnerable. "It's okay, vendors have promised to fix it in DDR4", they said. Now DDR4 is out and vendors have not deployed the fixes systematically[1] and you have to benchmark every single RAM stick to make sure you're not vulnerable.
I'd really appreciate software workarounds as long as hardware vendors keep fucking up.
While not mandatory, I always sort of assumed BSD and Darwin do the same...
Just wanted to point out that memory usage isn't necessarily inherent to multiprocess, but with a sizable portion of users on Windows the point is practically moot anyway.
a) your security should not rely on process isolation. the javascript engine etc. should be secure in the first place. if you can break out of the JS sandbox you can already attack addon scripts (think password managers)
b) hosting some tabs in the same, sandboxed process is still stronger than hosting all of them in the parent process.
So you mean the interpreter should be mathematically proven like seL4 and generated from the proof? Otherwise I have no idea what "secure in the first place" means.
The point of defense in depth is that, in practice, you should not rely solely on a single defense element, but when you design each one you should still strive to make it sufficient on its own if it were bug-free (at least on the points where that is possible given the security model and what the technical item can address).
So splitting everything in sandboxed processes can play a big part in the security in a defense in depth approach. Of course you are not going to call it a day with just that, but still, it's extremely significant.
It is to be put at approximately the same level as ASLR, DEP, Mandatory Integrity Controls, etc.
From a modeling POV it might actually be better than ASLR, DEP, etc., which are "only" mitigations: multiple approaches are known to exploit other holes up to arbitrary execution and complete compromise in some cases, even when they are perfectly implemented (in limited conditions), whereas multiple sandboxed processes can be, at the model level, perfect. In practice (when you add bugs into the picture at all layers, not just one, and when you don't actually isolate everything like crazy), it is obviously just another tool, but a very significant one (let's drop the "extremely" - it's not about being an order of magnitude more "efficient", which would be a very blurry notion anyway; I mean, I guess at one point even DEP alone could maybe have been considered orders of magnitude more secure, depending on your precise definition of everything).
What I want to convey about defense in depth is that it is about layering various mechanisms, independent if possible, to protect against various risks, while making the hypothesis that some will fail. You don't casually remove a layer (or pretend that a layer is equivalent to almost none because it does not protect you against one risk in some cases). Defense in depth is actual engineering, like the various safety components in any dangerous system. And the value of sandboxed processes is pretty clear. That it is not a silver bullet does not render it useless.
I thought extensions also mostly ran in their own processes, with messaging access into the tabs? So a password manager would have its critical code in its own process and only access the tab to look for fields to inject into, and hence be protected.
I'm not a Firefox user, but considering that 95% of the time about half my RAM is going unused (well, Windows uses it for pre-caching random stuff it thinks I might need), I don't mind high memory usage as long as I get a benefit for it.
If it's just inefficient, that's a different story.
The thing is, HD caching is extremely elastic. Memory consumption by a browser, less so (maybe hardly elastic at all).
So what you might think of as a benefit one day might be a drawback the next, if you want to open twice the number of tabs, or run a big build in the background.
Safari is my favorite on the Mac since it's quite lean (memory wise, CPU wise and energy consumption wise) and responsive. But I miss extensions from Firefox and don't use Safari if I have to work with several tabs and windows. There are several little things that make using Firefox much better for my experience.
Firefox is my favorite browser overall (especially on other OSes), considering its smaller memory footprint and lack of sluggishness with tons of tabs (compared to Chrome) and the fact that it's from Mozilla, a company I hugely respect and admire (even with some of the missteps that have happened in some aspects in the past few years).
I was in the same situation as you and found that in the end Safari fit that bill for me (not sure if it is an option on your platform). Responsiveness is probably somewhere between Chrome and FF, but memory usage has been much better.
Nevertheless, I prefer Chrome's approach. I can just open the process explorer and kill a few of the most memory hungry tabs/tab groups; with Firefox, there's no other solution than killing the whole browser.
I believe in current Firefox you could kill the memory-hungry tabs, then go to about:memory and force a GC. Or restart and let your tabs stay open but not loaded until you use them. Not that either of these is super-convenient.
The thing is, workstation and laptop RAM is so cheap. If your browser is eating an appreciable chunk of your 16 or 32GB of RAM, you need to close some tabs.
I'm tired of this argument. My browser is not the only piece of software I'm running. I usually have VirtualBox, a compiler and an IDE running, and sometimes some scripts/programs I wrote that do intensive computing on some data set.
Forgive me; your statement does not make sense. First you're implying that all you have to do is get more RAM; then you're admitting that you only have a certain amount, so you should stop using so much.
In actuality, very few non-server PCs can be populated with any more than 16-32 GB. It's not a matter of bother or expense at all. It's a design limit.
Every single time I've driven a non-server PC into hopeless swap thrashing, it's been the damn browser that was the culprit. Don't bother trying to advise me to bookmark 100 open tabs in 10 windows to save space. That is (forgive me) an insult to one's intelligence.
It's even more of a problem on laptops, since the market has overwhelmingly moved away from technical capabilities toward a race for less weight and more battery life. As a result, max memory on high-end or configurable models has at best stagnated for a few years. I have a 4-year-old laptop with 16GB. It's not yet possible to get that on all flagship laptops today (even though a 16GB SODIMM is not even expensive anymore).
No, it's not cheap at all, because buying the physical chips is not the end of it. For some laptops - like macbook air - there is a limit of how much they support, and it's not that high. Some models are not even upgradable - they just have soldered-in 8G and that's it.
> you need to close some tabs
This comment is really not contributing anything to the matter. Obviously everybody could think of closing tabs without you advising to do so, and if they don't, it may be because they want these tabs around. I regularly keep a lot of tabs around because this is my work pattern.
If the available laptops don't support enough ram, then the consumer has spoken, and what they said was that they don't want more ram or 1000 tabs open on their laptop.
Personally I wouldn't buy a laptop that is incapable of supporting the amount of ram that I need for my "work pattern".
I want a laptop with infinite memory, infinite disk space, a battery so big I have to charge it once a decade, and that costs me $10. But clearly not all consumer wishes are possible. There are some tradeoffs, and one has to operate within the constraints of reality, not wishes.
> Personally I wouldn't buy a laptop that is incapable of supporting the amount of ram that I need for my "work pattern".
Clearly you've got it all figured out. Too bad not everybody can be you, otherwise we wouldn't have any tradeoffs to deal with.
I just mean doing due diligence before you buy something that it will actually be able to do what you intend to use it for. I wouldn't buy a laptop with non-upgradeable and insufficient RAM. I don't think most consumers do that. They just buy a brand name because they can't be bothered learning how it works or what they actually need.
Of course I have tradeoffs to deal with... But yes, I do have it figured out. Thanks.
I switched to Firefox from Chrome about 12 months ago. It isn't as good as Chrome, but I was trying to reduce my Google dependencies.
It has been mostly fine (except for an annoying OS-X multi-screen bug where it screws up the sizing).
I was really looking forward to this feature to help close the gap on Chrome performance.
Until August (I think Firefox 48.x), when it became unusable on any site with... something. I'm not entirely sure what triggers it - I don't think it is just video alone. Something makes the entire browser lock up entirely for minutes, and sometimes it even runs out of memory and I have to kill it via the OS.
No add-ons (except for Firebug).
Frustratingly, I can't replicate it well enough to be a useful bug report.
I'm this close to switching back. Muscle memory and Firefox shortcuts are the only things stopping me.
So.. this will be great, but please make it a workable browser.
I've found Firefox performance to be absolutely awful at work on OS X. I have no issues at all with it at home on Linux, and didn't have any issues in my previous job on OS X either. I'm not sure if it's OS X specific, a change in recent versions, this laptop being underpowered, or the nature of the work I do (a lot of video stuff), but anything JS heavy severely slows down or locks up the browser. And if anything, it's gotten worse over the last few months.
I have to use Chrome for anything video related or JS heavy (monitoring/graphing stuff for the most part), but I'd hate to switch to it for everything. There are too many addons that I rely on in Firefox, on top of my general dislike for Chrome's minimalist UI and standard Google privacy concerns.
I've tried turning on the config option for electrolysis, but it breaks some addons, so that's still a no go for me for now.
Any chance you could file bugs, particularly on the JS-related performance issues? I'm looking into some similar things right now, but seemingly JS-centric things seem to have a tendency to actually be from something else. It'd be nice to have something I could solidly blame on JS. If you file a bug with https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&comp... and make it block bug 1299643, I'll see it.
For a while now I've suspected that memory management on OS X is actually terrible. I have no real proof, because I just run everything memory-intensive on Linux. At work I have a few Python scripts that eat a few gigabytes of memory; they read in data and basically just loop over it, select the stuff they need and generate a few XML files (10 to 20MB worth of XML).
All this takes maybe five minutes on Linux; on Mac OS X it will freeze the OS for long periods of time and take 40 minutes to run. In terms of hardware, the Linux box has less RAM, a slower CPU and slower disks.
Regarding the JavaScript stuff: part of it may be that Chrome is the new Internet Explorer. People develop on and test for Chrome (because everyone uses Chrome, right?). Chrome does have the better JavaScript engine; still, I believe that much of the slow JavaScript on Firefox (and other browsers) could be avoided if people bothered testing a little more.
> Chrome does have the better JavaScript engine; still, I believe that much of the slow JavaScript on Firefox (and other browsers) could be avoided if people bothered testing a little more.
All major engines are pretty much comparable speed-wise. The biggest difference is probably in memory management (of the HTML & JS engines), which may favor some forms of site/lib structure and architecture over others – that's IMHO where the majority of slowness comes from (for any browser).
I've actually had the opposite experience. I work on Firefox support for a very heavy Angular2 app, and I usually see 2-3x faster reflows on OS X (and Windows) than on Linux, and faster paints too (although I haven't taken exact numbers, since reflows are the bottleneck). It seems that at least the graphics pipeline is much better optimized for OS X/Windows than Linux.
I don't know whether it has accelerated compositing, but in my experience compositing is relatively cheap compared to painting the layers or especially layout. That may be an artifact of the app I work on being poorly composited, however.
It's really hit-or-miss for me too. On my work laptop for example, it would freeze up for multiple seconds with e10s enabled but was fine with it disabled. Then one day everything worked fine. On my home computer FF is just getting slower and slower. I'm not looking for troubleshooting tips here, just mentioning that the experience is very uneven.
Firebug is quite often the culprit in these kinds of things in my experience. I can't live without it so I put up with it; if you're switching from Chrome however, it seems that you'd be better off just using the built-in inspection.
Really just that I've used Firebug for so long that using anything else is painful. I've tried switching but always ended up going back. Much of it is interface differences, but some of it is actual differences in functionality: if Firebug has something which another inspector does not, then it's super-frustrating. If another inspector has something that Firebug lacks, then I don't even notice.
Edit: I should emphasize that I'm not saying Firebug is superior, just that I'm used to it.
PSA: The Firebug you know (version 2) is incompatible with e10s and there will be no e10s-compatible Firebug 3. Instead, Firebug.next is built into Firefox(!):
Though you're used to Firebug and find it convenient, it might be worth looking at something else once in a while to see what else is out there (especially if you're not using addons on top of Firebug to enhance it). Perhaps you could have a look at Firefox Developer Edition and try it. [1]
I would really miss the DOM tab of Firebug. You can explore the DOM state, which is very useful. None of the other DevTools anywhere have that feature to my knowledge.
You might want to try Firefox's built-in Web Console. I switched a year or two ago when it was less powerful than Firebug, but notably more stable, and I haven't gone back since.
> except for an annoying OS-X multi-screen bug where it screws up the sizing
Can't argue with you there. At last exiting full-screen video no longer takes the entire browser out of full-screen.
I've been a long time Firefox user (due to multi-row tab extension -- can't live without it). I can easily have 120 tabs open when doing research. Firefox has always been relatively performant even with that huge number of tabs visible and I almost never have to close it.
However, something has changed recently. Firefox will work fine and then something will cause it to use a large amount of CPU that it doesn't let go of even after closing all the tabs. I have to close the whole browser and re-open it. This can happen a few times a day.
I'm hoping it's a regression that will be fixed soon.
Use one or more of these with the Session Manager [4] extension and life becomes a lot easier with several hundreds of tabs. I highly recommend looking at the configuration options provided by each of these extensions and adapting them to how you'd like them to behave. Some of these are also developed with compatibility with the other extensions in mind.
I have this exact same problem, in fact I could have written your comment word-for-word. I also cannot reproduce it reliably, but it started in August.
I also switched from Chrome, but for slightly different reasons... I got to thinking about Google, and Chrome, and ad-blocking, and it occurred to me that Chrome will probably never allow extensions on Android, not because it's technically not possible, but because ad-blocking is bad for business (if your business is showing ads).
How does the Chrome team reconcile this? We know that ads are annoying, and ads can track you, and some ads can even be malicious. It seems that blocking ads make a browser better. But the Chrome team is not allowed to block ads on Android, so they are purposefully making a worse mobile browser.
Anyway, I do a lot of surfing at night in bed on my Android tablet, and I'm going to stick with Firefox.
Yeah, works fine. It did clear the password storage at initial sync, don't know if this was because of Chrome to Chromium switch, but I wasn't bothered since I use a password manager anyway and was planning to clear the passwords. Sync works as the normal Chrome one.
This seems atypical. Given that you don't use any add-ons, consider resetting your profile. Alternatively, you may wish to disable multi-process for the time being if it's enabled for you (I'm sure there's a flag you can set for that if you Google for it), or upgrade to the beta or alpha builds. In my experience, the beta/alpha builds are usually plenty stable to be a daily driver.
I second that. My FF on Linux acted weirdly, then I removed the whole .mozilla directory (remembered to save the greasemonkey scripts I wrote, but forgot to save the favorite links :( ) and everything worked again.
Firefox is frustratingly slow for me. I can't use it, have to stick to Chrome. Chrome which, by the way, crashes after 5 minutes of using video Hangouts and brings my 16 gigabytes ram PC to a halt when I open too many youtube tabs.
But firefox scrolling is especially slow. irccloud.com is the main culprit (and I use irccloud a lot). It's pretty bad in gmail as well.
The react app my team is working on is also far slower in Firefox. Try scrolling this in Firefox and Chrome and see the difference (the scrubber is at the bottom of the blue area, like a video). Butter smooth in Chrome, horribly slow in Firefox: https://hsreplay.net/replay/7VAKLeMNaXvshAawnUUoni
This is super surprising to me, because irccloud is exactly what we use at Mozilla. Everyone seems happy with its performance, and obviously most people are using FF.
If it helps, I'm running irccloud's "Ash" theme, with the sidebar on the left, monospace font, "nicknames on a separate line", "user icons" and colored nicknames all enabled.
It's pretty bad on both Windows 10 and Arch Linux (different computers). Worse on Windows, actually, but only after some time -- I'm in ~30 channels, fwiw.
As always, it's probably a plugin. You could integrate the functionality of the plugins which everybody uses, such as an ad blocker (instead of Pocket? You're already doing it with the developer tools) -- but you get all your money from Google...
Tracking Protection is pretty much an ad-blocker. It's default-enabled in Private Browsing and you can enable in normal browsing, too, by setting "privacy.trackingprotection.enabled" in about:config to true.
Dragging the scrubber around updates at the same speed, really? I'm not on Nightly, maybe that's the difference, but if that's the case that's great news.
I can't find any other reports of it, but I was on FF 47.0.1 on Windows 7 and upgraded to 48.0.1 (For some reason I never upgrade when prompted on the XX.0 releases). And I would have intermittent whole-browser freezes that lasted about a minute. Not like one every 4 hours, but more like once every 5~10 minutes. Back down to 47.0.1.
I tried again on 48.0.2, ... same thing.
I can't nail it down specifically enough to file a proper issue report, so I guess I'll keep trying when the next releases come out.
I also notice that webapps seem to like Chrome better, but FF usually performs "well enough" that I've put up with it because I find it (for me) to be markedly better at helping me deal with the ginormous amounts of text content I apparently consume. But webapps have been degraded enough for the last month-or-so that I'm running chrome in parallel just for those.
I love FF and I'm hoping that this is one of those isolated things that the fabulous team over there fixes before too long. (I suspect it's a JS related something or other, but I haven't been able to track it down specifically.)
Chrome became really really really slow on Ubuntu in the past year, no extensions installed. Every time I use it on a high-end Core M ultrabook, the CPU/RAM is eaten, the whole system becomes laggy, Google Hangout/Slack voice chat/Skype gets interrupted all the time etc. Not sure if you gain anything by moving back, sadly :(
Chrome on Linux doesn't seem to have gotten much attention for quite a while now: I got affected by this [1] more than 2 years ago and have been quite happy with Firefox for surfing & dev ever since.
I found ads to be a major hog (and some bad scrolling scripts on popular or design-y sites). I'm using uBlock Origin or Adblock Plus, which gives a great responsiveness boost for far too many websites.
Did you try updating your graphics driver or disabling hardware acceleration (about:preferences#advanced)?
I had an issue but on Windows 10 where sometimes on video playback FF would just lock up and windows would restart the graphics driver. I also couldn't reliably reproduce it, it sounds like a similar problem.
Firefox has been painfully slower than Chrome by great magnitudes for me on a cheap Dell Windows laptop. I have almost identical extensions except for NoScript which might be the culprit, although it is also the main reason why I use FF over Chrome (and for enabled extensions on private mode, which defeats the purpose of private mode but it has been used more as a "don't save history" mode than private mode if you know what I mean).
I occasionally experience similar slowness. In my case, it's caused by bloated sqlite database. After I delete all history, the browser is snappy again.
It is frustrating that Firefox makes it so easy to delete recent history, but difficult to remove old stuff. I really only want the last month or so; why do I have to clear it manually?
I would, but just recently purged all my history, so I don't have a reference about database size and number of entries. That means I can't include any constructive data in the bug report.
It's tedious but I'd do some basic troubleshooting like embargo all addons, not merely disable, and trash the user prefs. If you use sync with a Firefox account, it'll have your history and bookmarks. Then add back add ons one at a time. I'm on 48.0.1 on Linux (on a Mac!) and I'm not having the problems you're describing.
Funny, I have the same lockups with Chrome. Middle-clicking a new link (to open in a new tab) freezes the entire browser (all tabs) until the new tab is done loading. I believe it's a extension problem (uBlock Origin, EFF Privacy Badger), but I don't really want to get rid of any of them.
I know this is not about ad blockers, but does anybody have any experience of using something like https://github.com/StevenBlack/hosts/blob/master/hosts instead of uBlock Origin? Or even use it in the hosts file of our dd-wrt/openwrt routers?
I use something similar, and love it. I still get blocked by Forbes and similar sites for using an ad blocker (it's a network issue, you dolts!), but the performance gains in all browsers are well worth the few exceptions.
I use a similar solution and I love it. It works on every app and requires no browser add-ons. I use a list from this ancient looking website: http://winhelp2002.mvps.org/hosts.htm
I have the same problem in Windows 10, since a month or two (I think since v48) something seems to be broken... Sometimes the browser will freeze or some sites take too long to load properly...
This should come as no surprise. It has always been an epic architectural mistake to use the same single non-reentrant Javascript engine to both render the UI and run JS for webpages in Firefox. This change will finally undo that huge mistake made so very long ago.
JavaScript was little more than a toy scripting language used to animate stuff and change layouts when html/Css wouldn't fit for the greatest part of firefox's early existence. At the time it was probably the appropriate decision.
Now, it's a huge burden on Firefox and it's good to finally see progress on this front.
> JavaScript was little more than a toy scripting language
No, back then Mozilla was trying to do the entire UI logic in JS. That was the whole point behind reusing the same JS engine for both: https://en.wikipedia.org/wiki/XUL
Since Javascript is just a language that happens to execute client-side within the browser, why can we not use an existing language syntax (or an existing language entirely) for this purpose? For example, Go, Python, Rust, or perhaps Perl? It feels like a broken shell scripting language that really should have been replaced.
You can, but you should not run the UI script code on the same non-reentrant JS engine that executes scripts delivered from the internet at large.
By doing so (running both the UI code and scripts from the internet on the same non-reentrant JS engine) means that only one gets to run at a time (either UI code or JS from the internet). That's why FF's responsiveness sucks at times, any long running JS script from the internet also prevents the UI code from running.
The JS engine should either have been reentrant (so UI code could run even though some random internet script was still running), or there should have been two independent JS interpreters loaded, one used for the UI code only and a second for all the JS from the internet.
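A toy illustration of the consequence, just showing why UI work queued behind a long-running page script never gets a turn on a single shared engine/thread (hypothetical, not Firefox code):

    function pageScript(): void {
      const end = Date.now() + 5000;
      while (Date.now() < end) { /* a long-running script from some website */ }
    }

    function uiUpdate(): void {
      console.log("UI work finally ran at", Date.now());
    }

    setTimeout(uiUpdate, 0);  // browser UI work queued on the same engine
    pageScript();             // blocks for 5 seconds; the UI is frozen the whole time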
Since the introduction of ES2015, JavaScript has actually been a really nice language to work with. It's been really well implemented and optimized by browsers lately. So I don't really see a good reason to switch languages suddenly. That being said you'll probably be able to compile and run all sorts of languages for the web with WebAssembly in the near future.
Multi-processing is an orthogonal issue to single-threaded JS.
The same asynchronous logic used to have multiple browser engines with multiple JS runtimes talk to each other could also be used for multi-threading while making shared, thread-safe datastructures of non-JS parts much easier.
Yeah, I never understood why they did it, it was obvious a single unresponsive tab would block the whole browser, but I guess fresh blood at Firefox needed to gain experience...
Just so that people don't have to do digging in that article. In about:config, set browser.tabs.remote.autostart to true and check about:support for Multiprocess Windows (might need to restart).
For now though you'll have to disable any plugins for it to work which can be a bit of a handicap. But... almost there :).
Please be aware that if you force-enable e10s while using incompatible addons, you'll end up with a very slow browsing experience. http://arewee10syet.com/ has a compatibility list for some of the most popular addons.
Seems like e10s gets disabled if there is an incompatible addon activated.
Not sure I would trust that list; the only two addons I run are Tree Style Tab and Greasemonkey, both listed as compatible on the website, but going to about:support shows:
I think it has to do with some kind of global locking/racing conditions.
The current version of LastPass is a big culprit, I've had it lock up nightly for >30s some times. They have a beta version that is e10s compatible which doesn't have this problem.
Every odd browser performance topic is dominated by people claiming Firefox has been faster and more scalable than Chrome for years. Every even one is dominated by people claiming the opposite.
This thread seems to be the latter, my experience is the former. If you see bad performance/memory usage with a small number of tabs, are you sure that it's not due to bad plugins?
I think about:memory and about:performance could be helpful. I seem to recall they used to have some reporting on known problem plugins (perhaps as part of https://www.mozilla.org/en-GB/plugincheck/).
Does anybody know more about how Mozilla manages experiments? I'd always assumed that all users downloaded an identical binary and got the same behavior. How do they assign 10% of their users to an experimental group?
Is it possible that two users will see different functionality when they are both using the same OS and release, say Firefox from the latest version of Ubuntu?
We have several different mechanisms for running experiments. The e10s one specifically runs through what we rather poorly named a "system add-on".
e10s code is present in all builds. The system add-on contains logic that decides whether to turn it on or not. The system add-on can be updated daily (or more often), in all channels.
* I apologize for the naming, this is entirely my fault. I spent a lot of the last twelve months scheming/designing and developing experimental update mechanisms for Firefox.
Firefox generates a random number on first run. Users who are selected for the experiment are sent a signed extension from Mozilla that makes browser.tabs.remote.autostart preference return true if they are on the correct release of Firefox.
As with most native A/B tests, the binary probably ships with code for both functionalities. When the browser launches for the first time they generate a random number, and if it's below a certain threshold you get variant A, above you get variant B. The information gets persisted on disk so it remains unchanged across sessions.
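Presumably something on the order of this sketch (hedged; not Mozilla's actual code - the names and the 10% figure are just for illustration):

    type Bucket = "e10s-on" | "e10s-off";

    const EXPERIMENT_FRACTION = 0.10;  // assumed share of users in the experiment

    // Run once on first launch; the result is persisted to the profile so the
    // user stays in the same group across sessions.
    function assignBucket(): Bucket {
      return Math.random() < EXPERIMENT_FRACTION ? "e10s-on" : "e10s-off";
    }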
It seems that there is some communication back to Mozilla, since they claim that they can increase or decrease the number of users in each group over a week; that seems unlikely to come in the form of a new binary.
Yeah, Test Pilot is more for features where Mozilla wants to experiment and see if people even like the feature in general, not so much for bug-testing.
With e10s, it was clear that it would be included in Firefox, so they just tested it by pushing it through the normal chain of Nightly->Developer Edition->Beta->Stable.
It's incredible multi-process took this long. Just goes to show you that your architecture decisions last a long time and are often difficult to change. Chrome had this from day one and never had a big and old codebase to worry about. Yet it took Firefox many years to get multi-process going, and my understanding is that it's much more limited and simpler than what Chrome or Edge do.
I'm also a little surprised there hasn't been an attempt to launch a completely new Firefox from the ground up. Regardless of what they're doing right now, it's still a legacy code monster and much more laggy than the competition. Maybe this is Servo's ultimate purpose, but every Firefox advance, while welcome, always feels like another layer of lipstick on this pig.
Disclaimer: I use Firefox as my main 'non-work' browser several hours a day. It's good, but it's very obvious when I'm not in Chrome from a performance/stability perspective.
> I'm also a little surprised there hasn't been an attempt to launch a completely new Firefox from the ground up.
I (hopefully) think this is what Servo is and will end up being.
It has no legacy code, and is tiny in comparison to Gecko. Its OS support is fairly modern, and it has no interest in supporting Windows XP.
It's also written in such a way that allows them to be more multi-threaded and more concurrent than traditional rendering engines.
At some point you have to draw a line in the sand and start again, and I hope Servo is that.
> I (hopefully) think this is what Servo is and will end up being.
Not really the plan. Servo may eventually become a product, but that would be in the very far future. But Gecko can use lessons learned from Servo, and try to share code with it, so you can incrementally replace parts of Gecko.
> and has no interest in supporting Windows XP.
I don't think that's the case? We don't have users on XP so we may not support it right now, but that's just an issue of priorities and not having the resources to fix all the things at once.
There are more XP users of Firefox than there are on Linux (probably for other browsers too). If Servo was a product it would probably care about XP.
Yup, hence my "hopefully". I'm aware small parts of Servo may go into Gecko incrementally; however, I'm still rooting for something like browserhtml to reach a usable state, and for Servo to run on its own.
Well, that was 2014. And yes, there's not much interest in putting the effort to support XP now.
That is because Servo is currently not a product. You said "I (hopefully) think this is what Servo is and will end up being.", which talks of a future where Servo is a product (which can happen, though it would be in the far future). In such a case Servo would reevaluate supporting XP (unless it is so far in the future that XP support is no longer important, in which case most probably other browsers will drop XP support too). I say it in that comment too, "we're currently a research project" and "If/when we stop being a research project, maybe, but I doubt it". I'm less doubtful now, but yes, it's possible that Servo would continue not supporting XP as a product if there was a good enough reason for it.
The reason Servo doesn't support XP is not because Servo is Servo, it's because Servo is not a product.
Addons are a big part of the firefox ecosystem and they reach deep into the internals of the browser. So just replacing the browser wholesale would break a lot of things over the course of a single release.
> I'm also a little surprised there hasn't been an attempt to launch a completely new Firefox from the ground up.
It's interesting how the history goes.
We had Netscape 4, and it was crap (remember when resizing a window reloaded the whole page?)
So there was a ground-up rewrite of the rendering engine, resulting in Gecko, which was put into Mozilla.
Gecko was great, but Mozilla was a bloaty amalgamation of features, so a pared-down version built on Gecko was created, called...
~Phoenix~ ~Firebird~ Firefox, which had the great rendering engine and a lean, native-like UI.
Then there was KHTML/KJS which was built for KDE, which had a lean architecture but didn't have the investment to get the compatibility 100% there...
Which Apple then poured into the project in their fork, WebKit, which paid off in spades on the resource-limited iPhone a couple of years later.
But Safari was only on the Mac (aside from their "Cocoa on Windows" version and a few open source ports), which Google took as a reason to create...
Google Chrome, where the biggest innovation was rendering each page in its own process (since Safari could freeze up due to one page going into an infinite loop in JavaScript).
So will someone out-Firefox Firefox and do what Chrome was to Safari for multi-processing? Take the rendering engine and put a new chrome on top of it?
KHTML was originally forked from QHtml (part of Qt) I believe (or they shared the same origin?).
WebKit later became a Qt component; the circle of life.
I would argue Chrome's biggest innovation was the UI: putting the address bar as part of the tab, where it belongs (cue holy war), and better tabbing in general.
Mozilla was also the first to integrate the search and address fields, which they unfortunately stripped out of Firefox (and many people think Chrome introduced it).
> Mozilla was also the first to integrate the search and address fields, which they unfortunately stripped out of Firefox (and many people think Chrome introduced it).
Do you know when this was introduced/removed? I'm curious as to what this implementation was as I don't recall when this happened.
> So will someone out-Firefox Firefox and do what Chrome was to Safari for multi-processing? Take the rendering engine and put a new chrome on top of it?
I highly doubt it, as there's very little incentive to create a Gecko-based browser versus a WebKit- or Chromium-based one.
As a fully rewritten browser engine, how will they handle malformed HTML? Will it, like other browsers, try to 'understand' and fix some errors or will it stick to the specification?
The HTML5 specification now fully specifies what to do in the face of malformed input. That's one of the biggest differences between HTML5 and previous specifications.
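As a quick illustration of what "fully specified recovery" means (generic DOM code you can run in any modern browser; the recovery shown is the standard active-formatting-element reconstruction):

    // The HTML5 spec defines exactly how to recover from malformed markup,
    // so every conforming parser produces the same tree for the same input.
    const doc = new DOMParser().parseFromString(
      "<p><b>bold text<p>next paragraph", // unclosed <b> and <p>
      "text/html"
    );
    // The second <p> implicitly closes the first, and the still-open <b>
    // is reconstructed inside the new paragraph:
    console.log(doc.body.innerHTML);
    // "<p><b>bold text</b></p><p><b>next paragraph</b></p>"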
Well, it is for now. This isn't yet a concrete plan that Mozilla has, but from what I've read, the Servo developers would definitely also like to write a JavaScript engine in Rust, simply because of the gain in security.
Also, browser.html already exists in an early form (and is bundled with Servo). It provides a new UI written in HTML, CSS and JavaScript, so it can be rendered by Servo as well.
> Servo developers would definitely also like to write a JavaScript engine in Rust, simply because of the gain in security.
(Servo developer here)
This isn't really the case. There is interest in doing this, but it's not something we definitely want to do. The problem with writing a new JS engine is that you need to duplicate years of performance tuning, so while it might be safe, it would take an immense amount of work to make it efficient. Rust's safety benefits also get reduced once JITs and the like are involved.
It would be nice to have, sure, but the amount of work in making a usable one is huge.
> Its incredible multi-process took this long. Just goes to show you that your architecture decisions last a long time and are often difficult to change
At last, you'll be able to view other pages while Facebook's Javascript is in an infinite loop.
With thousands of MIPS and a gigabyte of memory on the job, it had better be responsive. It's pathetic how much compute power it takes to run a browser. Even when the pages aren't doing anything interesting. It's not like they're running a 3D game or something.
I've got 732 tabs open at the moment and performance is still fine. The upper limit for my machine seems to be about 745 ish - much beyond that and FF runs out of video memory or something (new tabs render sites half black).
Haha yeah :) TBH I have no idea what half of them are any more. I've just never gotten around to pruning them since FF works well enough and load up time isn't too bad (< 12 secs).
Lol, I've seen that issue myself, but only at maybe half that (ridiculous!) number of tabs.
If you make the window smaller you can still do some stuff, but you need to restart FF (and wait for your 732 tabs to reopen!) to get everything back to normal.
Computers are meant to solve problems, not create mindless tab-closing jobs for humans! We should never have to close a tab! Tab hoarders unite!!
Why? Apart from the impact of serializing memory allocation, there is no reason for a multithreaded process to be slower than a multiprocess one. And if that is the cause, it could be solved using multiple heaps.
My guess is that the multithreaded-only design wasn't as good at exploiting the inherent per-tab parallelism. The original architecture probably had some synchronization points (perhaps related to the common GUI code) that did not allow processing each tab truly independently. Regarding memory allocation, yes, in theory one should be able to try to avoid sharing data across threads, which should eliminate serialization on memory allocation and deallocation. But I can easily imagine that in practice, in a codebase that was not explicitly designed to process each tab in parallel, you'll still end up sometimes sharing data across threads.
I imagine they could instead have changed the architecture to be multithreaded in such a way as to avoid my imagined sequential bottlenecks, but performance is not the only goal of a multiprocess architecture. Sandboxing is another design goal, and that cannot be fully achieved with a multithreaded design.
This is all speculation on my part, so if anyone who actually works on the Firefox codebase could correct me, that would be very welcomed.
The current setup is not multithreaded in the relevant ways.
So the real decision point was whether to take an existing large C++ codebase and try to safely make it multithreaded or to take that same existing codebase, run it in separate processes, and make the communication work.
The latter is actually a simpler engineering task, in my opinion, and paves the way for doing things like sandboxing and whatnot once the initial rollout is complete.
One near-future goal is to move the compositor out from the UI process into its own separate one. Mozilla currently maintains a sizeable blacklist of {feature, GPU, OS, driver} for hardware acceleration because glitchy drivers cause whole-process crashes. When this occurs a separate compositor process can be restarted instantly without the user noticing.
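A rough sketch of that supervision pattern, purely to show the idea; nothing here is Firefox code, and the binary name is made up. A Node-style child process stands in for the compositor: when it dies, the parent (UI) process just respawns it.

    // Hypothetical sketch: keep the compositor in its own process and
    // respawn it if a glitchy GPU driver takes it down.
    import { spawn, ChildProcess } from "child_process";

    function startCompositor(): ChildProcess {
      const proc = spawn("./compositor", []); // made-up binary name
      proc.on("exit", (code) => {
        if (code !== 0) {
          // The UI process survives; just bring the compositor back up.
          console.warn(`compositor crashed (code ${code}), restarting`);
          startCompositor();
        }
      });
      return proc;
    }

    startCompositor();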
I'd guess that it wasn't the effect of switching from threads to processes that made the difference. It might come from code clean up and fixing some of the IPC that was going on. This would have probably seen many eyes in the firefox dev community so they might have made some efforts to clean up the code under the hood before closing the lid on it.
But I might be wrong as I have no ties to the development process of this.
I'm surprised by the responsiveness claims because my ancient Windows PC runs FF superbly. I think Win10+FF is already more responsive now than it has ever been.
I am also using Ghostery, however, with virtually everything blocked. It wasn't my intent to block ads, but that's mostly how it works out.
It depends on operating system. For instance on OSX Yosemite Firefox is sluggish, yet on Windows 10 it's fast.
People should really mention what operating system they're running when they say "x is slow"
The whole browser freezes for me on any Amazon page. Just when it starts rendering, it freezes everything inside the Firefox window for a few seconds. It's so annoying that I'm considering dropping Firefox after over a decade for Opera or the Brave browser.
Very similar issue! My gf ran into it recently on Arch Linux; she had the same problem, and for her e10s was enabled by default. When she visited pons.com the whole browser froze for at least 10 seconds, and if it didn't unfreeze itself after that time, it just kept consuming memory and freezing the whole OS until Linux killed the Firefox process. My issue was solved by disabling e10s; for her it didn't help, so she has already switched to Opera.
This will probably have significant implications for web worker behavior.
If memory serves, workers are unable to directly communicate with each other in Firefox when the UI thread is blocked, because message handling runs on the same process as the UI thread. Chrome doesn't have that limitation since it's already a multi-process architecture.
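For context, the pattern that exposes this coupling on the page side is relaying messages between two dedicated workers via the main thread. This is a generic web-platform sketch, not Firefox internals, and the worker script names are invented:

    // main.ts -- relay messages between two dedicated workers.
    // If this (main/UI) thread is blocked, the relay stalls with it.
    const producer = new Worker("producer.js"); // hypothetical worker scripts
    const consumer = new Worker("consumer.js");

    producer.onmessage = (event) => {
      // Every hop goes through the main thread's event loop.
      consumer.postMessage(event.data);
    };

    // A long synchronous task here delays all pending relays:
    // while (Date.now() < deadline) {} // don't do this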
Ah, so because I use addons I might not be seeing these benefits for some time? Good to know.
Chrome is right now way faster for me, but I prefer using FF so I am in the Mozilla camp right now. Eager to see this rolled out. Early versions of e10s were very fragile and not so usable for me, so I switched it off.
I use quite a few add-ons and my e10s was disabled because of them. I force-enabled e10s using Mozilla's guide https://wiki.mozilla.org/Electrolysis#Force_Enable and didn't notice anything breaking (yet).
It's usually less so things breaking and rather just being slow. For example, if you use NoScript, it often takes really long to load up a page, because that's when NoScript becomes active, and because NoScript isn't yet properly e10s-compatible, it just forces Firefox to do things in a single-process way. But that actually makes things slower than when Firefox is running without e10s.
about:performance is a rather good indicator of whether an add-on is causing slow-downs.
I use both FF and Chrome. Chrome is more heavy-duty, so I use it for Dedicated Browsing. I've set it such that when something calls me away and I close Chrome, the tabs will reopen so I can continue where I left off. But sometimes I'll be doing something in another program and need to look up something quickly. In such cases, I don't wanna wait while Chrome reopens 11 tabs so I'll use FF instead. My FF-tabs aren't saved between sessions, so I only see a single fresh tab when I open it.
I don't know. "Eating up all my ram" seems perfectly acceptable to me at first glance. I purchase ram and use it in my computer for expressly this purpose: to improve the responsiveness and snappiness of programs.
Where do you draw the line at what's acceptable use of RAM and what's not? I have 32 GB on my desktop; is it unacceptable that Chrome is using 8 GB?
The question is how sensibly Chrome uses those 8 GB of RAM. We are very much still at the point where (most likely) your entire hard drive, plus whatever data is generated on the fly or loaded from a webpage, does not fit into your RAM. As such, the often-repeated statement that "unused RAM is wasted RAM" is still nonsense. There is always something which could be cached in RAM and would provide a speed-up of some form. If Chrome reserves 8 GB of that for itself, then it should utilize it in some form, so that it's used more sensibly than it could be used by other programs.
For example, Chrome could keep a hundred pages from your browsing history cached in RAM, so that when you go back, it loads that instantly. That is a speed-up, but you probably will never go back one hundred pages in a row. So, another program could almost certainly use the RAM more sensibly than Chrome, if that was the case.
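A toy version of that kind of history cache, just to make the trade-off concrete (the names and the capacity are invented, not how any browser actually does it): a bounded LRU map of rendered pages, where every entry is a bet that the user will navigate back to it soon.

    // Toy LRU cache of "pages we might go back to"; the capacity is the knob
    // that trades RAM against the chance of an instant back-navigation.
    class PageCache {
      private pages = new Map<string, string>(); // url -> serialized page

      constructor(private capacity: number) {}

      put(url: string, rendered: string): void {
        this.pages.delete(url);          // refresh recency
        this.pages.set(url, rendered);
        if (this.pages.size > this.capacity) {
          // Evict the least recently used entry (first key in insertion order).
          const oldest = this.pages.keys().next().value as string;
          this.pages.delete(oldest);
        }
      }

      get(url: string): string | undefined {
        const page = this.pages.get(url);
        if (page !== undefined) this.put(url, page); // bump recency
        return page;
      }
    }

    // Caching 100 pages "just in case" is pure RAM cost unless the user
    // actually goes back that far.
    const cache = new PageCache(100);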
And what Dotzler is referring to is the worst kind of offense. If his promise holds true, Firefox's implementation will be just as good as Chrome's in terms of speed, but will use less RAM. Which would mean that Chrome is using some RAM without providing a speed-up, i.e. actually wasting that RAM. And that is always unacceptable.
I tried turning on the about:config option and the URL bar broke, so obviously something I have is incompatible - I suspect it's Classic Theme Restorer, although that site does list it as compatible.
I've been testing Firefox e10s, as in multiprocess Firefox, for months now (in Nightly, and even before that).
At some point it became very usable and it still is today.
HOWEVER, while the interface is, yes, much more responsive, it is also NOTICEABLY slower.
I ended up reverting recently and I'm using single-process Firefox right now. It's fast, even though from time to time the UI may block if there's heavy stuff going on.
Whatever makes e10s slow they gotta fix it... I suspect there's a lot of synchronization code.
I hope this works out well, I often see my firefox freeze due to heavy extensions, and would much prefer if it just froze one tab. For me, higher ram usage wouldn't bother me if it meant no freezing, but I imagine it does matter to most of the audience, and it's too much effort to develop two feature sets.
So, finally, some emerging consensus among hackers that pthreads is a brain-dead concept, and thousands of its advocates are just narcissistic idiots who love to hear their own voices?
Why is there still a browser capable of having its performance improve by 400-700%? This browser is a decade old. It is clearly the CPU and RAM hog everyone has said it's been.
Responsiveness is not the same as performance. Responsiveness is how quickly it responds to user-input. So, for example, if you click on the Back-Button, then it highlights the Back-button to let you know that you have indeed successfully clicked the Back-button and that it is working on going back a page. How quickly it highlights that button after you've clicked it, that's how responsive it is in that aspect. Actually going back a page, i.e. loading the page from cache and rendering it, that's then again a performance-metric.
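A tiny, generic DOM sketch of that distinction (not Firefox code; the element id and class name are made up): acknowledge the click immediately, then do the actual work, so the two things being measured are visibly separate.

    // Responsiveness: how fast the button visibly reacts to the click.
    // Performance: how fast the actual navigation/render completes.
    const backButton = document.querySelector<HTMLButtonElement>("#back")!;

    backButton.addEventListener("click", () => {
      backButton.classList.add("pressed");   // instant feedback (responsiveness)
      requestAnimationFrame(() => {
        history.back();                      // the actual work (performance)
      });
    });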
But you are also quite clearly underestimating how terrible modern browsers are. Mozilla is also currently working on a research browser called "Servo", which has been written completely from scratch and employs a more modern architecture, and in certain aspects it isn't too far away from a 400-700% performance increase over other browsers.