It's not too bad an analogy. Think of it this way:
- Switching from Firefox to Chrome might be similar to switching between two car models, one consuming less energy than the other.
- Fixing this bug is more like going to a car workshop to fix an injector issue in your car that was causing higher fuel consumption and more pollutants.
The first one is really a matter of tradeoffs and personal choices. The second one is less of a choice and more of an actual issue that was left due to negligence. Hardly similar.
An analogy can only get you so far, but in this case the bug is caused by Microsoft Defender, yet Firefox, the car manufacturer, is a different entity. So I wouldn't call it a recall.
> but in this case the bug is caused by Microsoft Defender, yet Firefox, the car manufacturer, is a different entity.
Quoting the Mozilla engineer responsible for most of the recent activity on the bug:
"This problem has two sides: Microsoft was doing a lot of useless computations upon each event; and we are generating a lot of events. The combination is explosive. Now that Microsoft has done their part of the job, we need to reduce our dependency to VirtualProtect."
It was also noted elsewhere in the thread that similar, though less severe, CPU impact is seen with other antivirus products.
Microsoft was doing something wrong that made this operation more expensive than it needed to be, but Mozilla is also doing this far more than any other browser.
With regard to browsers, I think we need more manufacturers, not just the GM (Google/Chrome) of browsers. I want Ford in there as well. (Sticking with US automotive companies).
Based on the increased laptop battery life I notice, so is using Edge on Windows.
It makes sense that both Apple and Microsoft can extract the best out of their OS + browser. There's no way Firefox can compete on such OS specific optimizations.
To continue to overextend the analogy though, asking people to use Edge over Chrome or Firefox is like trying to market the Smart Car. Few people want to be seen driving it, fuel efficiency be damned.
Totally guesswork here, but I'd say Chrome has a lot more telemetry, profiling and tracking built-in and its users tend to use a lot more plugins, including things like ad-blockers that scan over each webpage and can be beneficial (battery-wise) or not depending on content. Safari users are more of a barefoot type. A power user is more likely to not be running Safari. And a power user may, well, prefer to sacrifice battery power to get the power they seek.
Besides, there's some precedent set in 1998 by a certain OS that "favored" their embedded browser over the competition, so I doubt Apple would want to tickle that fancy.
If I had to guess, it's that Apple refuses to implement very CPU-heavy JS features they don't think add much value in Safari.
I recall some feature I was using that ran in an inferior way to save power on Safari, so much so that I had to kill that feature and use a different API.
I think it's more that Apple implements power saving behavior into the JS engine on desktop much like they do on mobile. For example, pausing background scripts on inactive tabs and disallowing autoplay on videos even on desktop.
I would be at least slightly surprised if Apple would deliberately hold back power-saving techniques from third party developers. It’s not like this is something that makes Safari much more appealing to users. Most users wouldn’t even notice that their choice to use Safari is responsible for their great battery life.
I absolutely think there is a conspiracy (inasmuch as "conspiracy" can be defined as "not going out of their way to change it") to make the competitors to Apple applications look worse than their own offerings. It does not reflect poorly on Apple when Firefox performs worse and uses more memory and battery than Safari.
How is Apple supposed to change Firefox and Chrome’s source code?
If either browser’s manufacturer wants to see what Apple does to make Safari more battery efficient, they are free to look at the source code to WebKit.
And as the other person told you already, Safari.app is not what you get if you build the WebKit source. In much the same way that you can't judge Chrome by Chromium, one is the base of the other, not the whole.
It's not impossible, but I doubt it, if only because very few third party applications use as much as Chrome does. The only exceptions are things that actively use a lot of CPU, like compilers or compressors.
It's been a really long time, but Safari on Windows was a thing and it did run a lot leaner in the background than anything else available at the time (except Opera if memory serves).
It’s entirely possible that Safari is intentionally avoiding features that make it wake up.
I doubt that it does anything unavailable to other browsers; that's MS territory, because they wanted features. I feel like Safari, by contrast, doesn't want to add features.
Think of it more like this: this bug cost roughly the output of an average coal power plant, all other things being equal. I doubt it's that much, but it certainly did waste a lot of energy.
> imagine the energy savings if Firefox users switched to Chrome.
Imagine the privacy savings if Chrome users switched to Firefox.
If one user switches to Chrome, the energy savings are only for that one user. If one Microsoft engineer fixes a bug, the energy savings are for the many thousands who use Firefox on up-to-date Windows.
Serious savings indeed when the Javascript cryptominer some ad network blithely serves up is ad-blocker'd, but we prefer synthetic benchmarks.
In seriousness, though, this is an issue. Elsewhere, I observe arguments about eg userbenchmark rankings, and the comparative relevance of single-core vs multicore performance. Are you playing a game, or rendering video 24/7 -- or running some entirely synthetic workload that allows for a peak performance the real world would never achieve? Same kinda problem.
Not sure if this is related, but Chrome on my Macbook regularly gets to a state where one process (a tab?) is consuming 4 GB RAM and 20%+ CPU. This happens on both M1 and Intel CPUs, on multiple releases of the OS.
Situation seems to happen some time after visiting ad-heavy sites (yeah I know I should install an ad-blocker). Have others noticed this? Seems to be Chrome. I also use Firefox and occasionally Safari but have not noticed that with those browsers.
I mean, sure, I could also just turn off my computer. Presumably people use Firefox for a reason, and making that option use less energy is pure upside, and it's very interesting to see how big of an upside it might be.
The cause and effect exists whether or not some commenter on HN writes about it.
The reason it is not “invoked” is because energy prices are sufficiently low (due to not pricing in externalities) that there exists little incentive for end users to optimize for power usage.
>The reason it is not “invoked” is because energy prices are sufficiently low (due to not pricing in externalities) that there exists little incentive for end users to optimize for power usage.
You're right in principle, but in practice even factoring in externalities electricity prices won't be high enough for people to care. Using current US carbon intensity for electricity generation[1] and the higher end estimates for the social cost of carbon[2] gets us carbon costs of $0.142 per kWh. The average price in the US is $0.168/kWh. Adding in carbon costs would almost double the price, but there are countries with even higher electricity prices[4] and they're not exactly switching to more efficient software in droves to save energy.
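For anyone who wants to reproduce the back-of-the-envelope figure above: it's just grid carbon intensity times the social cost of carbon. The two input values below are illustrative placeholders (the commenter's sources [1][2] aren't reproduced here), chosen to land near the quoted $0.142/kWh.

```python
# Sketch of the carbon-cost-per-kWh estimate above; input values are assumed.
carbon_intensity_kg_per_kwh = 0.38   # assumed US grid average, kg CO2e per kWh
social_cost_usd_per_tonne = 375.0    # assumed high-end social cost of carbon, $/tCO2e

carbon_cost_per_kwh = carbon_intensity_kg_per_kwh * social_cost_usd_per_tonne / 1000
retail_price_per_kwh = 0.168         # average US retail price quoted above

print(f"carbon cost:   ${carbon_cost_per_kwh:.3f}/kWh")                      # ~0.142
print(f"with carbon:   ${retail_price_per_kwh + carbon_cost_per_kwh:.3f}/kWh")  # ~0.310
```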
According to Tom's Guide[1] Microsoft Edge beats out both when it comes to RAM utilization but Chrome just edges out Firefox when loading >10 tabs. That was in 2021. I'd be interested to see any other comparisons or benchmarks.
It works for a single case, but not sure about long term. We still have at least a little bit of competition. All browsers improve because the other ones improve. If we all switched to Chrome or Safari because of the power saving, we'd just get another version of IE monopoly/stagnation which could result in more power wasted longterm.
The optimum is likely somewhere where most people use the currently most efficient solution and others keep alternatives alive and competitive.
Performance is time, energy, heat. It’s one of the easiest features to get and there are lots of tools, research, and philosophies to help get it. Memory and storage are similar.
For anyone working on large scale apps that are on millions of devices, hundreds of thousands of servers, or even just some back office guy who has minutes less stress in his day, performance benefits the world. For programmers, it’s one of the easiest ways to Save the Planet™.
Don't forget the waste caused by people throwing away devices that are "too slow", and the resources required to build new computers/phones.
Somewhere I saw a rough figure about phones. Something like: if everyone was able to keep their phone one year longer, it would be the equivalent of 600,000 cars off the road or something. (Just looked it up - source is possibly the founder of iFixit).
Yep. Most things most users do on our computers could be done just fine on a computer from 10 years ago. I think we're on a performance treadmill like this:
- Developers get new, fast devices (yay). We make our software just fast enough to run smoothly (enough) on our own devices. Then we ship it.
- Users' computers are slower than the developers' computers. Users' experience is bad because the software is slow.
- Users buy new computers too. Money is poured into R&D for even faster computers.
- Hardware companies release faster hardware
- Developers buy new, fast computers
And the cycle continues.
For general purpose computing, there's not really any meaningful difference between my computer today and my computer 15 years ago. But I can't use that old computer, because modern programs don't work well on it any more. I can't imagine how slow modern Discord or Microsoft Teams would run on my old 2012 macbook air - even though that machine is orders of magnitude faster than the computer I was using to chat on IRC in the late 90s.
I'm low key convinced that if computers and phones stopped getting faster tomorrow, the software industry would grumble but get on with things and adapt. Users wouldn't really notice anything change, except we'd all be richer because we would no longer need to buy new computers every few years.
There's a few notable exceptions of course: modern AI, video production & animation, and massive cloud services (like Google and Netflix) who actually optimize their code. I'm on the fence about compilers - do LLVM & rustc really need to be that slow? Netflix might never make it to 8k video (boo hoo). And ray traced video games might not happen. But the average consumer would probably be delighted. And the savings to the environment would be insane.
This is interesting, but maybe our software is doing a lot more today than it was back then. I guess the question is whether those things are necessary. There is of course some software that is just slower because of negligence too.
I remember using a computer 20 years ago; it would take multiple minutes to boot into something that was usable, and executing programs would take quite a few seconds because the HDD had to go searching for stuff the executable required.
My current desktop has a Samsung 970 NVME SSD, it boots up in like 20 seconds or something, and everything is extremely responsive as far as reading from the disk goes.
It's interesting that the one thing that hasn't been completely neutralized by the treadmill is SSDs!
Also the LLVM and Rustc thing is interesting. I imagine they are doing much more advanced optimizations than what compilers did 20 years ago so they probably have a good excuse, but I think these optimizations are also neutralized by the treadmill eventually.
Chromium 111 source is 1.7 GB compressed with xz (LZMA). Google has completely lost whatever minds they may have had, and included more code than one would imagine exists in the whole damn world.
It is an absolute abomination, and these words do not even remotely do justice to just how horrible it is.
Actually, in the PC/laptop space, I believe this phenomenon has been waning somewhat over the past... oh, the better part of a decade.
This is a result of:
* Single-core performance no longer dramatically improving - almost plateauing
* The rate or extent of "bells and whistles" and other OS overhead being added - decreasing.
* Budget consumer CPUs having reached smooth desktop performance (with sufficient memory and an SSD) already, even with multiple applications open.
... and all of these had not been the case during the 1980s, 1990s and 2000s. Now, if your machine's hardware doesn't break down - and you're just a plain desktop user - your motivation for throwing away your machine is quite limited.
---
Of course, this is not the case for smartphones, we're still on the roller-coaster there.
I would say we're coming to that point with phones too. Phones have also largely reached the end of their breakneck big-annual-improvement phase; There's little annual improvement in performance, power consumption, form factor or feature set. Screen resolutions are plenty high for most users, similar for the cameras. Most people would also probably be quite happy with the version of Android or iOS they are using if it kept getting bugfixes and security updates.
> Most people would also probably be quite happy with the version of Android or iOS they are using if it kept getting bugfixes and security updates.
Absolutely! Most non-tech folks would be delighted by this. Regular users generally don't care one bit about whatever fad style or trendy feature got added with the latest update, they are just annoyed that their buttons or icons got moved around.
I like to look at websites that offer "last season" outdoor gear at discounted prices. The major frustration is that the sizes are never my size of shoe/shirt/etc because the popular sizes are gone sooner than the odd sizes.
There is no "domain-specific" improvement (The "domain" in this case being "last season discount retailer.") to the basic shopping cart ecommerce experience that would give me a way to flag my interested product or product category so that I could be notified when the item is in-stock.
This basic feature would vastly improve my interaction with the websites and would probably result in higher sales.
In today's world, that's a huge ask to have that kind of feature implemented.
Heck, even asking for a good "syntaxable" search seems to be an impossible ask. Imagine being able to search "Category: Shoes; Brand: "North Face"; Price: >50&&<120"
The fact that this is never implemented yet billions of dollars are poured into ecommerce development kinda reflects how the industry is focused on improving the tooling/back end and not the design.
I see vastly more posts in this vein: "Nobody really knows how this complicated spaghetti-like system works"
than in this vein: "X tool allowed us to build a Maserati of our domain."
It can be a bit dangerous (especially to your employer) to continue that line of thinking, though. How many pieces of software do we collectively work on which would make the world a better place if they didn't exist at all?
Meh. I feel like there needs to be an active movement to assess programs that have huge scale (>10m users) to identify unnecessary power usage - whether it be because of a bug, because of unused functionality that nonetheless takes resources, or intermediate steps that take unnecessary power.
Perhaps I’m getting into a bit of a niche here, but the rise of stringy formats for data transfer concerns me. There are many-stage pipelines on machines that agree on what a 64-bit integer is, yet each stage performs encoding and decoding of JSON twice (decoding upon receipt, encoding to pass it on to the right place, decoding the response, encoding it in another manner to reply to the original sender). Sounds like a minor concern, but the scale of this instinctively feels like it'd dwarf 250 MW globally.
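To make the concern concrete, here's a toy sketch of that kind of pipeline: every hop re-decodes and re-encodes JSON even though the payload is just a pair of 64-bit integers. The field names and hop count are made up; this doesn't model any particular system, it just shows the per-hop overhead relative to a fixed-width binary encoding.

```python
# Toy illustration of the "stringy pipeline" concern described above.
import json
import struct
import timeit

payload = {"user_id": 2**53, "counter": 1234567890123}

def hop_json(msg: bytes) -> bytes:
    data = json.loads(msg)              # decode on receipt
    data["counter"] += 1                # trivial "work"
    return json.dumps(data).encode()    # re-encode to pass it on

def hop_binary(msg: bytes) -> bytes:
    user_id, counter = struct.unpack("<qq", msg)   # two fixed-width 64-bit ints
    return struct.pack("<qq", user_id, counter + 1)

json_msg = json.dumps(payload).encode()
bin_msg = struct.pack("<qq", payload["user_id"], payload["counter"])

def run(hop, msg, hops=4):
    for _ in range(hops):
        msg = hop(msg)
    return msg

print("json  :", timeit.timeit(lambda: run(hop_json, json_msg), number=50_000))
print("binary:", timeit.timeit(lambda: run(hop_binary, bin_msg), number=50_000))
```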
In some cases you convince your organization to shift focus onto more useful products, and that can be a really great feeling. In other cases (company is too large, management too committed) it helps you confront exactly who you're working for. Because if you're going to sell your soul, you should at least make sure you're getting a good price.
How did the idea of avoiding premature optimization get misapplied to client-side apps where the entity writing the software is not the one paying for electricity, cooling, and people's time when the software takes much longer to run than it could? When did a lot of software devs stop caring?
Pardon me, I think there are some electron devs at my door asking for a word. They might have baseball bats.
Premature optimization should be avoided client side as well, I imagine? It just seems like lots of development shops skip optimization altogether, even when it stops being premature (when it matures?).
And it's not like those shops suffer for it, so it isn't very surprising they continue.
I'm not sure the devs stopped caring as much as the powers that be. Software development has become more commoditized than we want to believe. Devs following an agile workflow with every intent of performing multiple rounds of optimization find that the product gets shipped as soon as it approximates the thing that had been conceived originally.
It doesn't look like an immediate failure, so the lesson that leadership takes from it is frequently that the level of maturity they shipped is safe. The cycle continues and eventually folks lower down succumb to this shipping pattern. The only thing that gets them to optimize is competition that successfully drives home that its win was due to performance. This doesn't always lead to optimizations when you are an incumbent who can still close more feature gaps, because those often result in higher sales and revenue.
I use a 7 year old low-power laptop. Cooling, electricity usage, and performance of Electron apps are never an issue. Crashes, bugs, lost data, and bad usability still are. I’d rather have devs spend time on that stuff.
If Electron frees up organizational resources to do what’s actually important, I applaud devs for using it.
Not an issue for you, that you know of, because you have no equivalent software that's written in a more performant language (or at least critical portions of the codebase written in a more performant language). That's part of the problem; in most cases, software users don't know what life would be like with better software. They assume the performance they see is close to optimal, or at least that the devs made an effort to optimize. Users are willing to get used to whatever the software's performance is, in order to have access to the software's features. You can get used to almost anything, as suboptimal performance turns into background noise, but that doesn't mean you should. You get used to waiting 5 seconds for a piece of software to do something that you do on average once every hour or two, not realizing that those 5 seconds, 10 times a day, 365 days a year, cost you 5 hours every year.
Optimizing performance and fixing crashes/bugs/dataloss aren't mutually exclusive, either. Developers who care less about code quality than checking boxes for features requested by management or customers, will write code that's both suboptimal and buggy.
There's a similar calculation (in a slightly different context) in a good scene in the movie Margin Call, about all the miles and hours saved by one bridge: https://www.youtube.com/watch?v=m8Mc-38C88g
Takeaway: If you're working for a tech company with billions of users, your personal CO2 footprint is completely irrelevant. Rather than agonizing over flights vs. trains, beef vs. pork vs. vegan alternatives, heating etc. - find some small thing that you have access to and that you can optimize.
That's about 2.2 TWh per year. A decent estimate for average worldwide CO2 intensity of electricity is 400 gCO2e/kWh (that's 400 tons per GWh). A typical personal footprint including some international flights will be in the tens of tons.
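Working those numbers through (using only the figures already stated in the comment above, plus a "tens of tons" personal footprint taken as 20 t for illustration):

```python
# Arithmetic for the estimate in the comment above.
energy_twh = 2.2                    # estimated waste, TWh per year
intensity_t_per_gwh = 400           # 400 gCO2e/kWh == 400 t per GWh

emissions_t = energy_twh * 1000 * intensity_t_per_gwh   # TWh -> GWh
print(emissions_t)                  # 880,000 tonnes CO2e per year

personal_footprint_t = 20           # "tens of tons" incl. some flights (assumed)
print(emissions_t / personal_footprint_t)  # ~44,000 personal footprints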
It's worth considering that not all Wh are equal in environmental impact. In the case of many (most?) big tech firms, a large proportion of their energy consumption is sourced from renewable sources.
I work at MS, tried to use Firefox but couldn't because FF doesn't integrate with the Windows cert store. Crucially, this keeps Windows Hello (TPM auth) from working, which makes it useless for any internal websites. For a while I used a hand-compiled PKCS#11 plugin that bridged to the cert store, but that was extremely fragile and eventually I gave up.
I think this is probably a major blocker for many enterprise users, and wish Mozilla would have fixed it.
edit: it looks like they may have fixed this in the past couple years, though you might have to go poking around in about:config.
Current MS employee here. For a time this was true, but FF recently added this integration. No about:config needed, there’s simply a checkbox under the FF security settings. Since this was added, I have gone back to using FF as my daily driver, and I haven’t really encountered any other friction.
Firefox not integrating with the Windows cert store is actually a good thing in many use cases. The ability to have an alternate browser that's not integrated has saved my butt more than once.
That same site also suggests that Firefox has around 200e6 monthly active users, the average user uses Firefox 3.5 days a week, and for 5.5 hours per day.
My math could be wrong, but taking the above into account, and arnaudsm's 5 W estimate, I come up with an upper bound of around 80 MW. Discount that further by whatever proportion of Windows users you assume were actually affected. Not a whole coal power plant, but nothing to sneeze at.
Wow, that's fascinating. It really speaks to the utter dominance of Windows over Linux more than anything else. Like even among Firefox users, as of last year, there were an order-of-magnitude more Windows 7 and 8 users than Linux 5.x users.
Don't have any data to back this up, but I would think that the average Linux user will instantly turn off Firefox telemetry and won't show up on these graphs. It's one of the first things I do when I install Firefox: disable telemetry, set privacy mode to strict, and then install uBlock. Nevertheless, Windows has a huge market share even if no one turned off data collection, and the year of Linux on the desktop didn't happen.
Gaming on a mid-tier modern GPU probably uses around 50-100 W, and the Steam stats probably have a number of users to multiply with. I'm sure it's a massive amount of power.
I don't like video games and they are not necessary, so I propose that we ban them globally, or only allow gaming if using renewable energy. If you don't live in a place where this is an option, too bad!
Maybe instead of this we require all games to be limited in graphical effect (imagine early source games or something). We could save a lot of power globally if we enforced this.
This is why I strongly dislike this line of thinking. I don't think power plants work that way anyway; they probably make a constant-ish amount of power rather than taking exactly 50 W worth of fuel every time someone opens up Call of Duty.
There is also much lower-hanging fruit to get upset about if you care about the planet, like cars with large motors or people with heated driveways (yes, that's a thing).
The bug was an accident though, not something intentional!
My intention was to compare the listed things against Microsoft engineers making mistakes while programming Windows Defender (the quote below), or programmers writing bugs in general.
I have been hoping for many years that global warming would force devs to go back to a time of creating fast and efficient software in order to help the environment.
Unfortunately devs don't realize how needlessly power-hungry software is when it doesn't need to be. But hey, it's easier to use Electron and JavaScript than it is to save the planet.
Yes, the trace comparison in the link (which the author says is for browsing YouTube in Firefox) shows average CPU usage is down from roughly 5% to roughly 4%. That’s a great improvement, but I’d like to know why we should assume that translates to 5 Watts.
Doesn't this bug only manifest itself if one is using Microsoft Defender as their only security solution, and not a 3rd party AV/IS? If so, then the number of Firefox users in this calculation is much lower.
I don't know if that's the case. I'm a Firefox user but consider all the 3rd party apps nearly as much malware as the things they are trying to solve. I run strictly defender and try to make good choices when downloading and browsing.
I actually replace Defender with a 3rd party choice (Eset) for this very same reason - to wrestle some control over my OS from Microsoft. I find Defender to be overbearing in so many ways.
well, if we're taking strictly subjective personal experiences as some sort of a relevant benchmark, then I'm a Windows Firefox user that has never used MS defender for any length of time, and always strictly a reliable low-impact 3rd party AV like ESET or Emsisoft. so I guess the two of us cancel each other out.
More complicated still, defender does not completely stop working when 3rd party AV is installed. Also maybe Firefox is not the only app triggering this bug?
I'm pretty sure you're mistaking power for energy. Watts are units of power, which is the rate of change in energy (joules per second). Asking for how much power something produces in an hour is like asking how many miles per hour your car goes in an hour.
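A minimal units check to go with the point above (illustrative numbers only): power is a rate, so asking how much of it something "produces in an hour" only makes sense once you convert it to energy.

```python
# Power vs. energy: watts are joules per second; energy = power * time.
power_w = 5                          # power draw, e.g. the 5 W estimate elsewhere
hours = 1
energy_wh = power_w * hours          # 5 Wh of energy over one hour
energy_j = power_w * hours * 3600    # same thing in joules: 18,000 J
print(f"{energy_wh} Wh = {energy_j} J")
```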
While this feels significant, I don't think humanity is doing itself any favors by guilting itself over problems it didn't (or couldn't) anticipate. Hindsight is always 20/20. This is especially true when "don't over-optimize" is a core tenet in software development. Software would never ship in a reasonable timeframe if we scrutinized every release to this degree during development.
This is just one bug in the world affecting power usage with Firefox. There are loads more like https://bugzilla.mozilla.org/show_bug.cgi?id=1404042 which caused me to abandon it on macOS as my primary browser.
> mpengine.dll version 1.1.20200.4 was released on April 4, so the fix should be available for everybody now. See the end of comment 91 to know what version you are using. Also, the latest discoveries in bug 1822650 comment 6 suggest that we can go even further down in CPU usage, with all antivirus software this time, not just Windows Defender.
Really nice to see open collaboration between Mozilla and Microsoft development teams resulting in a net improvement for everybody.
People care about open Firefox bugs much older than that. Basically any long-lived program will have ancient bugs that never made it onto someone’s todo list.
Isn't Ctrl-Q quitting the program normal and expected behavior? For example, right now on Firefox 111, I can see in the File menu that the "Quit" option has the keyboard shortcut Ctrl-Q. I'm fairly sure it works.
On Linux, I'd want it to be dictated by my DE or WM. I have Super+Shift+Q set to close windows on my machine, so I wouldn't be crazy about software adding its own keybind to do the same.
I'll note that mpengine.dll was updated for me on Windows 11 Pro 21H2 in spite of my blocking autoupdates via Group Policy.
To be clear: I'm aware Windows Defender updates itself independently of Windows Update, and I actually don't mind and even appreciate this behaviour since Defender updates almost always aren't intrusive unlike the rest of Windows Update.
This update is causing intermittent browsing issues in Firefox and Chrome for me and several others (not Edge, though). Getting 'PR_END_OF_FILE_ERROR' in Firefox, with Chrome experiencing issues at the same time (though Chrome does eventually load the page).
If you're on a Mac and using FF (probably not FF specific), turning off "ambient mode" in YouTube can save 30% CPU. I just found this out while searching why FF was taking 90% of my CPU while watching YouTube videos in normal mode, but went down to 40% use if viewing in full screen. Turns out that this YouTube "ambient mode" was the culprit. My lap is now cooler and the fan doesn't turn on anymore. I wonder how much power I've wasted due to this new "feature" they added 6 months ago that I didn't know about.
"Ambient mode uses a lighting effect to make watching videos in Dark theme more immersive, by casting gentle colors from the video, into your screen's background."
While playing a video, select the Settings button.
Locate the Ambient Mode setting in the list of preferences.
Toggle it to off to disable Ambient Mode for all videos on YouTube (in that browser).
It's in the same popup used for video quality and playback speed.
for those unfamiliar with visualizing a doughnut, imagine a bagel-shaped treat of sweet cake-like dough, deep-fried and frosted, with optional sprinkles
This seems unnecessarily passive aggressive. Everyone makes mistakes or bugs, intern or not. It makes no sense to get this salty about basic human error. Also there's nothing wrong with implementing minor UX enhancements.
If anything redirect the frustration to the leadership that doesn't prioritize fixing these kinds of errors.
I don't think there's any error to fix. It's a feature - casting light from the video onto the UI, using JS, surely takes that amount of CPU.
The question of why it is on by default stands: it's a little bit of eye candy, versus people's laptop batteries, CPU that could have been used to get other stuff done faster (so also their time), device thermals, etc. I don't think it's just unnecessarily salty to point out that the choice to turn this on by default should have been more nuanced and thought through.
IMO the implementation sucks and the feature is questionable. Recently I set the browser to dark mode, which tells YT to also use dark mode, and if I hadn't read about it here I wouldn't have known that this is a toggleable feature. It's sad when we can't tell a feature and a bug apart.
How much can websites determine about the power of the device they're running on? Obviously it'd be a security issue for them to know too much, but it would be nice to be able to progressively enhance the experience for more powerful devices that can handle it, beyond just mobile vs PC. Even just knowing whether a device was running off battery power could be useful.
I don't think you really want websites to make this determination anyway. There are a million reasons why a user might want a website to use fewer resources than their machine could support.
It should just be a setting the user can select. No probing of the machine necessary.
What should be a setting? This specific youtube ambience thing? That is, but it shows the issue - you need to pick a default, and most people won't know they can change it. Having some idea of the capabilities of the device you're running on could allow you to choose sensible defaults.
But if you mean there should be a setting in the browser that websites could check, I agree that could be better.
Sensible defaults go a long way, yes! Although in the case of this YouTube thing, I think the sensible default is to have it disabled regardless of the capabilities of the machine.
It's not unreasonable to hold YouTube devs and QA engineers to a higher standard than everyone else who doesn't work for a ~trillion dollar corporation or deploy code that runs on billions of devices.
Just to be clear I was being a bit snarky, but what I meant is that this is sort of a small, fun, less important project that could be easily given to an intern.
I don’t think there is a bug? It seems like a sort of image processing thing that might take a bit of compute to run. To the extent that there’s blame, I’d lay the blame at the feet of whoever decided it should be turned on by default.
This is definitely worth getting salty about when you consider the cumulative electricity wasted for something so trivial. Google should be strictly monitoring performance and CPU consumption of their changes on youtube since a screwup there is the climate change equivalent of paying for 747s to fly in circles.
Because the target audience for the feature is not tech-savvy people but common users who won't know it exists until it is shown to them, or might be intimidated to delve into FF settings.
If you are tech savvy, you are then expected to be able to "bear the burden" of turning the feature off if it bothers you
Hell, I'm tech savvy - not a tech worker, but you'd better believe that you want me to be your end-user contact, I know a hell of a lot more than the people I work with - and I didn't even know this was an option. I'm not afraid of fixing FF settings, done it plenty of times. It's on by default. If someone who can install OpenBSD and make it a router for DSL over PPPoE in 2001 (side job) doesn't even know it exists and eats cycles (i.e., a "prosumer", not an expert, but not too far below a new hire and well beyond the masses), it's a bad idea. I don't have time to stay up on every way that people want to eat my electricity. I do know that YouTube spins up the fan on my iMac with disturbing regularity in a way that videos from alternative sources do not. So it's not the decoding.
For the replier that wanted a more substantive response: my comment can be expanded on by pointing out it is ultimately a commentary on the motivations involved in the employee-manager relationship within the bureaucracy of the mega-corporation, within a society where financial success drives, among other things, comfort, mate selection, breeding, health, longevity and day-to-day basic survival itself - factors whose importance has been determined by millennia of evolutionary biology.
Ultimately the result can be that an employee chooses health, survival and the social status that a job title brings and ships a product or feature that they can sell to their manager as innovative / disruptive / greatness reimagined.
> Neat idea, I bet the intern had fun implementing it, why was it on by default?
Total speculation, but Firefox seems to be pushing out a lot of UI gimmicks. Maybe they're trying to drum up interest in the browser that way, since they seem intent on killing many of their other differentiators.
They could do it on a few clients then ship the data back to the server. If they’re resourceful those clients don’t even need to be watching the video! (they could send it and compute the output in the background of another stream)
Yeah, but if Google solves distributed processing just imagine the cost savings elsewhere (imagine involuntarily crowdsourcing video encoding, it would save them millions).
The effect is created by scaling and blurring the storyboard images that are also used for the seek preview. The image is refreshed every ten seconds, with a CSS opacity animation fading from the old background image to the next over a couple of seconds.
This sounds like it should be relatively cheap if composition is properly accelerated.
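For a rough feel of the image processing involved, here is a Pillow sketch of the "scale and blur the storyboard image" step described above. File names are placeholders, and this is only an approximation of the visual effect; the real feature does this in the page (with a CSS cross-fade between backdrops), not in Python.

```python
# Approximate the "ambient" backdrop: upscale a small seek-preview frame and blur it.
from PIL import Image, ImageFilter

storyboard = Image.open("storyboard_frame.jpg")          # small preview image (placeholder name)
ambient = (
    storyboard
    .resize((1920, 1080))                                 # scale up to backdrop size
    .filter(ImageFilter.GaussianBlur(radius=60))          # heavy blur for soft color wash
)
ambient.save("ambient_backdrop.png")
# The page then cross-fades to a new backdrop roughly every ten seconds, as described above.
```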
> make watching videos in Dark theme more immersive
the best way to make youtube videos more immersive is to block obnoxious advertisements, remove useless algorithm-driven recommendations, and delete the comment section
Not for long! [1] It's an unconfirmed rumor for now; but perhaps we're heading there.
> The iPhone 15 Pro and Pro Max will use a new ultra-low energy microprocessor allowing certain features like the new capacitive solid-state buttons to remain functional even when the handset is powered off
Absolutely. Besides, graphical UIs bombard the brain with everyone's unique take on visual aesthetics, consuming limited mental resources like attention.
I honestly thought my monitor or GPU was having issues with weird colour banding around YouTube videos. Turns out it was an intentional choice they made to do that. I don’t know why it’s on by default.
It's not in the general settings - instead it's in the setting menu in the video player itself, where you'd select the quality and playback speed, etc.
I stopped doing per-site dark mode and just use the darkreader extension, which is why this option was never there. I call it a win to not deal with per site settings.
I think it is time to have a way to fine-tune consumption based on settings. I assume the least complex way to do this is, really, to use the telemetry information gathered.
As I don't care about the comments section or the recommender algo, I search (youtube-fzf) and launch (yt-dlp + mpv) YouTube videos directly from the terminal. I have a bash pipeline for this and, naturally, it is very resource efficient.
I just bought a Macbook because my dedicated Linux laptop, made by a popular Linux-only manufacturer, had so many issues that I got tired of diagnosing. I love Linux, but it's not a panacea for every computer issue under the sun, just a few of them. I, personally, am stoked I no longer have to deal with issues with this new machine, and can just take it into a Genius bar appointment to let someone else deal with it, for pennies a day. You can't get that on Linux!
Feel free to tell me I'm a sell-out, I am happy to be one today.
I switched to linux. I like it and haven't really had any issues to speak of. Not with sound, video, wifi or any of the other things people complain about. My fan went, but likely it was a pet fur issue, and easy to fix... I'm not an admin. I know how to use the command line, and how to use it as a work machine. Really, my experience over the past 3 years: it's been as trouble-free as my Mac used to be. It really is a great development platform.
> I switched to linux. I like it and haven't really had any issues to speak of. Not with sound, video, wifi or any of the other things people complain about.
I'm running a last-gen Pangolin (AMD 5700U), Pop OS. I don't do anything too crazy. I develop on JetBrains. I did keep it awake for 3 days doing some genetics work I usually do on a cluster. It's fast, quiet and decent on battery. AMD on the notebook is really impressive.
> I, personally, am stoked I no longer have to deal with issues with this new machine, and can just take it into a Genius bar appointment to let someone else deal with it, for pennies a day. You can't get that on Linux!
Honest question. If you could get that on Linux, would you? and what kind of pricing would you consider reasonable? Is it something that would have to come with the computer (i.e. would you pay for it separately or would you only use it if it was "free" aka included with your laptop purchase)? Did you stick with the vendor-provided install or did you wipe and install your own preferred distro?
I would pay the same amount for a Linux laptop that worked as easily as a MBP and had similar build quality, performance and battery life.
However, whatever crazy-stable and easy to use and well supported hypothetical Linux this is wouldn’t be compatible with my “real” Linux use cases, so I would then also install Arch or whatever and live with constantly borked everything and just swap between my Arch “Dev” OS and my “Linux Mac” business/work/consumer OS.
Current Linux cannot be made “MacOS”-stable. But maybe in 5 years.
Just today I needed to run some software only available in .deb and only on Ubuntu 22.04. So I whipped out an old laptop that had Ubuntu installed and ran sudo do-release-upgrade to upgrade to 22.04. Boom! GUI gone and the terminal was flooded with weird filesystem errors. I had to spend the whole afternoon reinstalling from scratch.
I would love a Linux daily driver but I've had similar experiences to the above every time I've tried it for the last 15 years.
Yeah I’m talking about Ubuntu being hyper aggressive about upgrading my GPU drivers even after I turn off all auto updates and borking my entire OS install and this happening enough that it’s faster just to make scripts which auto reinstall Ubuntu and all my versioned packages from scratch.
Or, non-Ubuntu, software just randomly falling apart after 2 years. Like I need a newer kernel for a new Wi-Fi card, but then I need a different GPU driver and that’s incompatible with some software's UI library, so they update that but it has an incompatibility with something else.
Or how Linux still has Wi-Fi 6 totally broken.
Sure, Linux is rock stable if you have a production environment where everything is nailed down. But from an actual daily consumer user point of view it doesn’t feel stable. At all.
I love Linux. I love fixing it when it borks. But my god does it break every week.
The kernel yes, the distros and userspace, not so much. Linux is my go to for hosting, but macOS and Windows are designed to prioritize for the desktop user experience.
You are not a sellout but just the average Joe. No problem with that I guess. Have fun with your Mac that uses a soldered SSD that, when it fails, makes your whole Mac useless as well.
Nice part is that I have a warranty for hardware failures! The big issue I have initially is getting used to not having quite as much control over the OS level stuff like Linux, but I was avoiding even using my Linux laptop, so any improvement on usability will get me more productivity than having full control of every aspect of my computing. Maybe someday I’ll have time like I did when I was young, but until then I just prefer to have everything work together seamlessly, as things should in 2023. Apple offers that kind of support and service for a price I’m willing to pay.
Meh hasn’t happened yet but I’d just buy a new one. That being said, I always also have a windows and Linux machine, they’re just not my daily drivers.
You just didn’t get my point: in 2023 it’s about the environment at least for me. That’s why I wrote my comment. Just buying a new one is somehow dated imho. But ymmv
With all the attention being paid to macOS these days, there's enough mods and add-ons that I don't miss Linux so much on my laptop. Hammerspoon lets me drag and resize windows how I want, and there's Rectangle.app for tiling-ish window management. There's no /proc, and all the rest of the cli utilities are just wrong (netstat, route, top, etc) but I can live with my M1.
(brew addresses a lot of the issues though, even if I do have to remember to run gdu instead of du (for gnu du))
> With all the attention being paid to macOS these days, there's enough mods and add-ons that I don't miss Linux so much on my laptop.
One problem that I have is that all those mods and add-ons are out there, but there's a real mindset that "everything must be an app" and an attendant mindset that "might as well charge for it". I don't mind paying for a complicated app, but there are certain basic features that used to be served by the freeware model on Macs that just aren't any more, and my impression is that things are heading ever more in that direction. (As well as in the direction of subscriptions over app purchases, which are right out for me for basic utility needs.)
You mentioned Rectangle.app and Hammerspoon, which are both open source. Do you have any good recommendations for where to look for other high-quality open-source mods and add-ons for macOS?
What are you looking for? I don't have anywhere specific in mind, but with Hammerspoon I find myself writing Lua to build the functionality I want. Which I used to do with Linux. Ideally I'd write it up and post it on a blog or GitHub somewhere though.
Personally I'm okay with paying for software (though I do whine about it from time to time). Yeah it should just be built in functionality, but someone devoted their time to building a thing, so I don't begrudge them a couple bucks for it - so long as it's a one-time purchase and not a recurring subscription model for unchanging functionality.
This happens on Linux too. I was wondering if the weird CPU-hogging flickering was a bug in my compositor (picom) or window manager (i3) or browser (Firefox). Turns out to be a "feature".
not sure what your point is... ambient mode is a visual effects thing YouTube does and reading the descriptions, not surprised it causes increased CPU usage regardless of OS.
Five years is nothing for MS. You should see how long the bug in File Explorer has been there, where after navigating to a folder and pressing the down arrow, the second item is selected instead of the first. And it's one of those things that, even though I'm aware of it, it still always catches me causing extra keystrokes. It's like they're trying to force me to use the mouse for some reason.
That one I can almost agree with the reasoning for. The first item is selected by default, but also by default you have to intentionally trigger keyboard navigation for it to go into that mode, since most people don't intend to do that when hitting Enter on a freshly loaded directory. As evidence of this behavior: instead of hitting a directional key to change the selection, whacking Space should activate the highlight on the first item, and then another navigation action is needed to actually do anything.
I think it'd be more convenient (for me as a keyboard-centric user at least) if it were done differently, but I don't think it's actually a bug as much as an intentional decision at the cost of keyboard users. This is unlike the Defender issue, where it's of no purpose to be significantly slower than it needed to be.
Wanted to chime in with this. Agreed, it's not a bug. Just a UX decision that was made to prevent people from hitting Enter over and over and constantly launching/navigating stuff.
I often accidentally rename files in macOS Finder (hitting Enter activates rename).
Windows Update and Windows Defender are notorious piles of shit that eat up huge amounts of CPU for seemingly no reason.
The problem is that there is zero incentive to get them right. Nobody is going to get promoted because they use 10% less CPU. Nobody is losing their bonus because 10% of all computers melt down. etc.
That's one way to look at it, but a very biased take. An equally valid take is that Firefox was calling an expensive platform feature too often, and even though it has been killing performance for years (possibly, for the entire history of the project) nobody noticed or bothered to fix it on the application side.
The platform feature in question was normally cheap and just made artificially expensive by Defender intercepting calls to it and blocking until analysis was performed. I don't think it's the Firefox team's responsibility to be aware of and take into account arbitrary software intercepting system calls.
> I don't think it's the Firefox team's responsibility to be aware of and take into account arbitrary software intercepting system calls.
One of the first, hard lessons I had to learn about web development (like, stare-at-a-wall-and-consider-my-career-hard) is that web development is way more about network effects than application architecture.
Real people run systems with real configurations, and when you're targeting "the public" as your userbase you must account for that. And Mozilla knows this: if you go into the source code (circa 2009, YMMV) and look through the initialization and boot-up logic, you would find places where the system used heuristics to figure out whether some extensions had been installed in odd places instead of the "Extensions" directory (because the tool had been installed before Firefox) and hot-patch paths to pull in that component. Because if a user installs Flash and then installs Firefox and Flash doesn't work in Firefox, it's not Flash that's broken... It's Firefox.
It doesn't matter if the bug is in "Microsoft's code" or "Mozilla's code." That's unimportant. If you're a Mozilla engineer, all that matters is whether this bug would cause a user to get pissed off and uninstall Firefox.
I completely agree with you and have been on the other side of this too, having worked on a native enterprise app running on various MacOS, Windows, iOS and Android versions. Customers don't care if you have a great explanation why stuff with your app doesn't work. That being said, it's completely unreasonable to expect, proactively, that something working well today (writing many files) will break tomorrow (due to Defender heuristics changing) and to try to prevent this by optimizing ahead of time. Mozilla reacting to this by both reporting the bug to Microsoft and optimizing to work around the problem is really the best you can do.
"They shouldn't have written so many files in the first place" is not a valid preventative strategy, but a one way road to premature optimization hell.
It's the application owner's responsibility to make the app run as well as it can on a given platform. Platforms are messy, but you have to deal with it. You should escalate to the platform owner, sure, but you can't rely on them fixing it in any reasonable time-frame.
I worked on a desktop<->cloud file sync app. On Windows, only one badge can show up on a file's icon in Explorer. If there's multiple apps trying to set the badge, who wins? Well, it depends on the lexicographical order of the registrants' names. So what did we do? We added some spaces to our registration name to make them show up first. Good for the user, as best as we can know - since the user or their admin had to install the app to get these badges in the first place. And they were useful ones too - whether a file was synced or not. We tried our best, and escalated.
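The leading-spaces trick works because, as the comment describes, the winner is chosen by lexicographic order, and an ASCII space (0x20) sorts before any letter or digit. A tiny illustration (the names below are made up):

```python
# Leading spaces push a registrant to the front of a lexicographic sort.
registrants = ["DropboxExt01", "  OurSyncApp", " SomeOtherTool", "Tortoise1"]
print(sorted(registrants))
# ['  OurSyncApp', ' SomeOtherTool', 'DropboxExt01', 'Tortoise1']
```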
They also don't say anything about "sane" usage, and while I don't have an MBA, I'm pretty sure they don't teach anything about `VirtualProtect` ratios when doing competitor analysis.
One possibility is that the Chrome team's implementation was more efficient due to luck, or they invested the resources to identify the performance characteristics of this function call, whereas the Firefox team missed it. I don't think "Chrome has more development resources than Firefox" is news to anybody.
Did you read the bug report? This is literally about writing to files in a temp folder. Surely you can optimize that but you should also be able to assume that this does not use excessive amounts of CPU on a modern operating system.
Why is Search Indexer constantly rescanning the same files? Can they not cache the results from the previous scan? That and OneDrive are constantly making my work laptop scream.
Come on, anyone that has even unzipped Linux-centric stuff on Windows knows how slow individual file operations are compared to Mac or Linux.
It's very common knowledge that on Windows you will get terrible performance if you have many many small files.
I don't know why Microsoft doesn't fix that. Maybe they can't for compatibility reasons or something. But that's the way it is, and any software that wants to run well on Windows needs to deal with it by using fewer bigger files.
Yes, I have read the bug report. It mentions that Firefox writes wayyyyy too much in the temp folder. It also mentions that the team should fix this behaviour independently of the fact that some of those calls are more costly than they should be because of the bug in Defender:
> With a standard Firefox configuration, the amount of calls to VirtualProtect is currently very high, and that is what explains the high CPU usage with Firefox. The information that the most impactful event originates from calls to VirtualProtect was forwarded to us by Microsoft, and I confirm it. In Firefox, disabling JIT makes MsMpEng.exe behave much more reasonably, as JIT engines are the source of the vast majority of calls to VirtualProtect.
> On Firefox's side, independently from the issue mentioned above, we should not consider that calls to VirtualProtect are cheap. We should look for opportunities to group multiple calls to VirtualProtect together, if possible. Even after the performance issue will be mitigated, each call to VirtualProtect will still trigger some amount of computation in MsMpEng.exe (or third-party AV software); the computation will just be more reasonably expensive.
> It mentions that Firefox writes wayyyyy too much in the temp folder.
> > the amount of calls to VirtualProtect is currently very high
Calling VirtualProtect is not writing to the temp folder. The VirtualProtect call is to change the permissions of the in-memory pages. It should be an inexpensive system call (other than the cost of TLB flushes and/or shootdowns).
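Purely as an illustration of the "group multiple calls to VirtualProtect together" suggestion quoted from the bug above, here is a minimal ctypes sketch (Windows-only). The constants and the VirtualAlloc/VirtualProtect signatures are standard Win32, but the allocation and the per-page loop are invented for the example; this is not Firefox's actual JIT code. The point is simply that each call is a kernel transition that Defender/ETW can hook, so one call over a contiguous range generates one event instead of one per page.

```python
# Illustrative only: why batching page-protection changes reduces AV/ETW callouts.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

MEM_COMMIT, MEM_RESERVE = 0x1000, 0x2000
PAGE_READWRITE, PAGE_EXECUTE_READ = 0x04, 0x20
PAGE_SIZE = 0x1000

kernel32.VirtualAlloc.restype = wintypes.LPVOID
kernel32.VirtualAlloc.argtypes = [wintypes.LPVOID, ctypes.c_size_t,
                                  wintypes.DWORD, wintypes.DWORD]
kernel32.VirtualProtect.restype = wintypes.BOOL
kernel32.VirtualProtect.argtypes = [wintypes.LPVOID, ctypes.c_size_t,
                                    wintypes.DWORD, ctypes.POINTER(wintypes.DWORD)]

n_pages = 64
base = kernel32.VirtualAlloc(None, n_pages * PAGE_SIZE,
                             MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE)
assert base, "VirtualAlloc failed"
old = wintypes.DWORD(0)

# Naive: one protection change per page -> 64 syscalls, 64 hookable events.
for i in range(n_pages):
    kernel32.VirtualProtect(base + i * PAGE_SIZE, PAGE_SIZE,
                            PAGE_EXECUTE_READ, ctypes.byref(old))

# Batched: one protection change over the whole contiguous range -> 1 event.
kernel32.VirtualProtect(base, n_pages * PAGE_SIZE,
                        PAGE_EXECUTE_READ, ctypes.byref(old))
```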
- we implement a feature, test it thoroughly for functional and non-functional requirements
- when we are happy, we release it
I don't see myself being responsible for a third party software company coming along years later and introducing a bug in code that injects itself between my software and the operating system that users of the software I wrote happens to install at some point.
Maybe you're not responsible, but if someone says "something changed in the OS and your previous method is now adding substantial overhead", you could either a) report the change to the OS and mitigate or b) report the change to the OS and ignore the problem for years. It sounds like Mozilla chose b, for whatever reason.
As a software developer, I've had to work around many many bugs in OSs, especially when dealing with updates to Android. It's just part of the job.
The OS isn't some random third party software, it's one of your dependencies. Your software doesn't work without the OS and if it also doesn't work with the OS, it just plain doesn't work.
That's really not a tenable mindset to be taking these days. With how much Windows has become a constantly-moving target rather than a stable platform, you need to regard it first and foremost as your adversary, whether you are developing against it or are simply an end user. And the days of being able to thoroughly test against every relevant version of the OS are long gone; Microsoft has ensured your QA will be Sisyphean.
If your users are on Windows, you have to be where they are. Moving target, wonky API, warts, and all.
Yes, it's Sisyphean. That's why my shop had a whole room stuffed with parallel Windows installs. We couldn't afford to have our users be the first ones to notice Microsoft pulled the rug out from under us again.
Windows Defender isn't "arbitrary software" - it's built into the OS and enabled by default. To anyone building an application for Windows, it should be considered part of the platform.
I'm not sure how you can possibly qualify VirtualProtect as "an expensive platform feature". Looking at the operation that VirtualProtect actually has to perform, from first principles, it should be one of the cheapest syscalls in the entire kernel.
The bug was that ETW (in the antivirus process) was doing something braindead; zeroing a megabyte of memory unnecessarily every time someone called it just to get the size of a buffer.
I'd love if you'd elaborate on this. I know very little about what VirtualProtect actually does under the hood but, in theory, it should just have to flip a couple bits in the address space mapping which says what the protection level is.
You are assuming things you are unsure about :) Even if your assumption was correct things could change from one Windows update to another.
When I worked on a time-sensitive Java project, our test suite had benchmarks for JDK functions as simple as Arrays.copyOf() to make sure we were the first to notice if something changed under the hood.
There's nothing about the bug that had anything to do with malware protection, or branch prediction, so I'm not sure how that statement applies to the conversation.
The bug was in ETW, which just happened to surface in a Windows utility that ostensibly protects you from malware.
Also worth noting that the "expensive platform feature" you refer to in this specific case means "writing to a file". Something as basic as this should be assumed to be fast on modern operating systems.
No it had nothing to do with Firefox writing files. Firefox was making a bunch of calls to VirtualProtect. Windows Defender (MsMpEng.exe) was then writing to file (an sqlite database) every time one of these calls was made, which was slowing down the system.
I can assure you that I perfectly understand the bug and corresponding patch/fix. The patch fixes Event Tracing for Windows (ETW) and how it allocates temp buffers.
The speculation about SQLite at the top of that Mozilla bug report is mostly irrelevant.
It is not a bug that there are overlooked optimizations in some platform features. Windows has a ton of slow features. Starting a process, for example, takes forever. It is the responsibility of application authors to write their performance-sensitive critical path in such a way as to avoid bogus platform behaviors. This goes for Linux, which has more than its fair share of brain damage, as well as Windows.
I generally agree with you. Having worked on lots of cross-platform software, a big part of that job is working around quirks of the underlying platforms, which can be significant. However, in this case it's not that Firefox introduced the usage of these APIs and then started to have performance problems. They had been using the APIs without problems for years when Defender came along and slowed them down by orders of magnitude.
Yeah, your program definitely should not do that many useless writes on the system it runs on; it's just bad behaviour. If every program did the same, the disk would grind to a halt, SSD or not.
Recent discussion of this here also cited a problem (not sure if it was the same problem) with Defender causing 100x performance drop with some PowerShell operations.
If you use Windows Pro and Enterprise, you can use GPO to disable Defender. Just run gpedit.msc and edit a few of the policies to disable real-time protection etc.
Under Computer Configuration > Administrative Templates > Windows Components > Microsoft Defender Antivirus
- Turn off Microsoft Defender Antivirus -> set to Enabled
Under Computer Configuration > Administrative Templates > Windows Components > Microsoft Defender Antivirus > Real-Time Protection
- Turn on behavior monitoring -> set to Disabled
- Monitor file and program activity on your computer -> set to Disabled
- Turn on process scanning whenever real-time protection is enabled -> set to Disabled
Restart the computer and Real-time protection should be disabled permanently (until you reverse the same settings through gpedit.msc at least).
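If you'd rather script this than click through gpedit, here's a rough sketch of what I believe are the equivalent policy registry values (run it elevated; on newer builds Tamper Protection may ignore or revert these, see the next comment).

    #include <windows.h>
    // Link with Advapi32.lib.

    // Sketch: write the Defender policy values that the GPO entries above are
    // believed to map to. Requires an elevated process.
    static void setPolicyDword(const wchar_t* subkey, const wchar_t* name, DWORD value) {
        HKEY key = nullptr;
        if (RegCreateKeyExW(HKEY_LOCAL_MACHINE, subkey, 0, nullptr, 0,
                            KEY_SET_VALUE, nullptr, &key, nullptr) == ERROR_SUCCESS) {
            RegSetValueExW(key, name, 0, REG_DWORD,
                           reinterpret_cast<const BYTE*>(&value), sizeof(value));
            RegCloseKey(key);
        }
    }

    int main() {
        const wchar_t* defender = L"SOFTWARE\\Policies\\Microsoft\\Windows Defender";
        const wchar_t* rtp = L"SOFTWARE\\Policies\\Microsoft\\Windows Defender\\Real-Time Protection";
        setPolicyDword(defender, L"DisableAntiSpyware", 1);    // Turn off Microsoft Defender Antivirus
        setPolicyDword(rtp, L"DisableRealtimeMonitoring", 1);  // real-time protection
        setPolicyDword(rtp, L"DisableBehaviorMonitoring", 1);  // behavior monitoring
        return 0;
    }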
With 11 (or possibly newer versions of 10; haven't tried lately) this doesn't actually seem to prevent MsMpEng.exe from loading anymore. Using something like https://github.com/jbara2002/windows-defender-remover seems to work, though.
My car also nags me every time I unbuckle my seatbelt to park, yet that doesn't mean everyone should have it unbuckled all the time. There's a reason it's designed to be naggy.
Having everyone easily disable Windows Defender will not lead to a great outcome.
There's a reason malware on Windows has been on a steep decline since the Windows XP days, and I'd prefer to keep it that way.
Not all use cases for a car are the same. Some vehicles are used entirely on private property as work vehicles, where the seat belt chime would be unnecessary and distracting. Which is why most manufacturers provide a sneaky mechanism to disable it. I own the vehicle, so why wouldn't they let me disable the nag?
Their solution? Make it intentionally complicated, but still possible:
Step 1: Turn your headlight switch off
Step 2: Unbuckle your seatbelt and turn the key to the off position
Step 3: Turn your key to the on position till the seatbelt warning light turns off
Step 4: Buckle and unbuckle the seatbelt three times and end on the unbuckled position
Step 5: Turn your headlight switch on for three seconds and then turn it off
Step 6: Repeat step number 3
Step 7: Wait for the seat belt warning light to turn on and off again, then buckle and unbuckle the seat belt
I remember doing this sort of song and dance with my RAM and Jeep. Sometimes I am just moving around a parking lot for a brief moment, or especially when off roading (read: stuck) and don't want the constant beeping.
Seat belts are 100% an immediate habit for me. Driving at any rate of speed without one makes me feel super sketchy and uncomfortable, so the nag is not needed at all.
On my Fords I would use FORScan to defeat it via the OBD2 port.
I do have a security gateway bypass module for my truck though so hopefully I will be able to start playing around with AlfaOBD soon.
TBH the main reason I commented this was to get some kind of validation from the community (positive or negative). Sounds like I need to turn it back on :)
I really only use this machine for MWII, Halo and Titanfall. It's a glorified Xbox. I even contemplated putting it on a standalone VLAN to 100% physically isolate it from my core net.
Haha, you should enable it with exclusions. It's the best AV out there that isn't an EDR. I disable it in labs, but I can't imagine running Windows in prod without Defender enabled. Don't use Windows like it's Linux.
Well, they call a LOT of things Defender now, anything from email and Azure-specific alerting to EDR and DLP. It's all "Windows Defender ______". But I meant the consumer-license AV.
But even with the most basic Win10, cloud submission (if privacy is no biggie) gets you EDR detections to a point, just without the EDR console and logs.
When I simulate attacks with Defender on, I spend a lot of time bypassing it. But as soon as I break opsec (e.g. run whoami.exe) with cloud submission on, I basically burn that technique, because the EDR in their cloud blacklists it. With cloud submission off, I can last as long as I want, so long as I don't execute things flagged as malware by the Defender on the host (and even then, it's usually that thing that gets blocked, not my original technique, which I can still reuse).
I would bet it is more likely that MS devs noticed but just didn't care. The farthest it would have gotten in conversation with QA triage would have been "does this issue affect any of our services? Ok then that is Mozilla's problem."
I had an issue in early builds of W11 with WSL 2, Node, GitHub and VS Code. Something in the git change-detection process caused Defender to decide it wanted 100% of a single thread on the 5600X system I was using. While coding, it would just have a core screaming at well over 4 GHz. All of mankind's greatest innovations that led to 7 nm lithography and incredible processor design, just to be a space heater. I never did get it figured out at the time. It also re-enables itself. So that's cool.
Defender (or other AV) can slow down a lot of things, but in terms of the exact way that Firefox ran into it, the other apps would be anything with a JIT. Well, a JIT that uses memory protection as a security measure, though that's very common. (After generating executable code, the JIT marks the pages as executable but non-writable, so an attacker can't change the code after it starts running.)
Although the V8 JIT stopped using this, at least in some configurations (?), for the stated reason that it's not perfect: another thread could sneak in and modify the executable code between when it was generated and when it was protected in preparation for execution. They're instead planning to rely on memory protection keys, which should be faster and more robust but are only available on some hardware.
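For anyone who hasn't written one, here's a minimal sketch (not Firefox's actual JIT, just the general W^X shape) of why a busy JIT ends up making so many of these calls: every freshly emitted chunk of code gets flipped from writable to executable before it runs.

    #include <windows.h>
    #include <cstring>

    // Minimal W^X-style JIT sketch: allocate writable memory, copy machine code
    // in, then flip it to read+execute before running it. Every "emit a new
    // chunk of code" cycle ends with a VirtualProtect call.
    using JitFn = int (*)();

    JitFn emit(const unsigned char* code, size_t size) {
        void* mem = VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!mem) return nullptr;

        std::memcpy(mem, code, size);                               // write the generated code

        DWORD old = 0;
        if (!VirtualProtect(mem, size, PAGE_EXECUTE_READ, &old)) {  // the W^X flip
            VirtualFree(mem, 0, MEM_RELEASE);
            return nullptr;
        }
        FlushInstructionCache(GetCurrentProcess(), mem, size);
        return reinterpret_cast<JitFn>(mem);
    }

    int main() {
        // x86-64 for "mov eax, 42; ret" (assumes an x64 Windows machine).
        const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };
        JitFn fn = emit(code, sizeof(code));
        return fn ? fn() : 1;  // exits with 42 if the emitted code ran
    }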
JITs can show up in unexpected places. Regular expression engines will sometimes have a JIT.
I would think anything with a JIT that is toggling the page protection for machine code many times a second, based on a very quick reading of the bug report, which talks about VirtualProtect calls and Defender's processing of the ETW events generated for them.
I don't think anything is toggling them back and forth, it's just that a lot of chunks of executable code are being produced. But I could be wrong; maybe if you have space left for more code on a page, you'll toggle it off and append some new code, then toggle it on again.
My guess is that this would mostly come from inline caches (ICs), since they're typically small and a lot of them are generated.
Thunderbird is atrociously slow even without an AV with any mailbox that isn't tiny. Could it be that yours has just grown over the years and Defender amplifies it?
All of them? From IDEs through games to email clients. Remove that malware as soon as you can. Either replace it with some more competent antivirus (not sure there are any) or don't use any antivirus at all; as a visitor of this site you should generally know what you're doing and what is and isn't safe. I use https://github.com/jbara2002/windows-defender-remover and have been running my Windows machines without any antivirus and without any issue for years. (If you're asking how I know Defender sucks if I don't run it: I do run it at work, where I can't remove it, only disable it temporarily, and it turns itself back on after a while.)
That bug is more subtle. Apparently the various ways to use VirtualAlloc are not self-evident, and some variations have wildly different performance characteristics due to undocumented interactions with the Event Tracing for Windows (ETW) events that get sent to antivirus products.
So it's not only the original problem of the events being handled inefficiently, it's also that the way they're generated is a bit of a black box and hard to predict without detailed performance tracing work.
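To illustrate the "not self-evident" part, here's a tiny sketch of two superficially equivalent ways to get executable memory. Going by the event counts quoted elsewhere in the thread, the first shows up as an ALLOCVM-type event, while the second adds a PROTECTVM event for every protection flip, which is where the AV-side cost concentrated.

    #include <windows.h>

    int main() {
        const SIZE_T size = 4096;
        DWORD old = 0;

        // Variant A: allocate directly as executable. One allocation event,
        // but a writable+executable page, which JITs try to avoid.
        void* a = VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_EXECUTE_READWRITE);

        // Variant B: allocate writable, then flip to executable later.
        // An extra protection-change event for every flip.
        void* b = VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (b) VirtualProtect(b, size, PAGE_EXECUTE_READ, &old);

        if (a) VirtualFree(a, 0, MEM_RELEASE);
        if (b) VirtualFree(b, 0, MEM_RELEASE);
        return 0;
    }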
I have screamed about this like a crazy person and filed bugs and was always told, "Meh there's nothing there..."
But if you use Firefox to call yourself on Chrome... you'll see that Firefox takes up a TON more energy on an Intel MBP than Chrome does.
You can tell because Firefox literally heats your laptop up when streaming videos. You hear the fans kick on, and the laptop gets hotter to hold.
Anyway, I'm sure there are more bugs like this! Glad Firefox is getting some people to fix their code... but look, Microsoft isn't the only culprit. Until Firefox takes as little power as Chrome on macOS and Windows... I think we should all stay outraged! (=
Good thing it's a bug though, not a monopolistic attempt to sabotage the competition running on your platform by doing strange things with an API rodeo. This surely ruined the performance of other software too...
Guess that's why I never find Firefox laggy even though others say it is. The first thing I do after installing Windows is always to install some other antivirus to disable Defender, because Defender starts routine scans at weird times and randomly lags games, which is really annoying.
I really have no clue why engineers at MS think such behavior is OK. Shouldn't scans like these be scheduled for time slots when people are not actively using their computers?
I haven't run into this issue either. The first thing I did when I bought my Windows laptop, before ever booting it, was plug in a Debian installer disk, boot to that, erase the entire drive, and install Debian. More Windows users should try this little trick!
I would like anyone that considers Microsoft to be a recent champion of Open Source to reflect on corporate doublespeak. It's plausible that this bug was engineered as an attack on Firefox.
By the looks of it, it took Firefox a few years to figure out what the repro was; they reported it to MS, it was (very) promptly fixed, and they were warned that the syscall they were using isn't being used as intended and that they should consider changes to FF for future use cases.
It's the AV that was calling TdhFormatProperty(), not FF. The problem was mostly on the AV side, not FF's. FF itself was generating many events due to too many VirtualProtect() calls, which in itself was only a smaller part of the problem.
I don't have proof. I'm presenting a theory based on circumstantial evidence. I think it says just as much to reject a theory without proof as it does to present a theory without proof. Let me break down the context in which I put forward my theory.
* Corporate doublespeak is a well documented tactic in which a business will project a message when the truth is the opposite of the message. Sometimes they use euphemisms, ambiguity, or omissions. I am stating that we cannot take Microsoft's press releases about being Open Source friendly at face value.
* Five years ago Edge was rebuilt with a chromium backend and Microsoft had a large campaign to increase adoption of Edge.
* Reduced Firefox performance would make Edge compare more favorably. This error was clearly in Microsoft's favor.
* It is common for companies that own a platform to create advantages for their applications running on the platform.
* Microsoft has a long history in the browser wars, highlighted by an antitrust lawsuit in the late 90s. Their anticompetitive behavior regarding browsers was a key part of the lawsuit.
> I think it says just as much to reject a theory without proof as it does to present a theory without proof.
Except you don't have any proof, and the proof that opposes your wild speculation is:
- 5 years ago, a bug was opened on the FF issue tracker that over the years had a bunch of derailments.
- A month ago, someone _actually_ investigated, found an issue and reported it to Microsoft.
- Issue was promptly fixed
You're spreading FUD, and attacking _me_ for asking for proof when you made the accusation in the first place. If you want to discuss this reasonably, I'm happy to, but in order to do that, the slightest modicum of evidence that points to any sort of effort is required. Meanwhile, to someone who has worked on software for a reasonably long time, seeing issues spin like this for years is par for the course, and it's great that someone fixed it.
Yes, it does. Stringing together a narrative and ignoring the proof that contradicts your claims is conspiracy-theory territory.
> Someone can make a statement like this solely based upon past behavior.
Yet all the evidence (the bug linked) implies nothing of the sort.
> They're merely stating that it is plausible.
I can state that it's _plausible_ that the US government conspired with Disney to fake the Moon landing as a show of power to the USSR in the 60's, but I'd expect to be asked for proof, and I'd expect to be accused of spreading absolute nonsense.
There's a difference between something not surprising you and a wild, totally baseless accusation. I'll happily eat my words if there is a shred of proof, but right now it's "company fixes old bug when it was reported to them".
What a weird take. If this bug was engineered as an attack on Firefox, then it seems like the project has been infiltrated by bad actors, because the bug comes from Firefox's codebase. Indeed, the developers themselves contradict your comment in the linked bug conversation:
> This problem has two sides: Microsoft was doing a lot of useless computations upon each event; and we are generating a lot of events. The combination is explosive. Now that Microsoft has done their part of the job (comment 82), we need to reduce our dependency to VirtualProtect.
Compare how many calls other browsers make (this is also quoted in the link): Firefox was generating up to 46 times more (costly) events than Chrome. It is a bit ludicrous to shame Microsoft for the whole situation.
> Firefox with normal configuration: ~14000 events, 98% of which are PROTECTVM_LOCAL;
> Firefox with the preferences from comment 83: ~6500 events, 95% of which are PROTECTVM_LOCAL;
> Edge: ~2000 events, 91% of which are ALLOCVM_LOCAL;
Inaction is a pretty low "bandwidth" form of action, and can sometimes produce the results you're looking for just as well, if not more effectively.
Microsoft has a storied history of anti-competitive views leaking to public eyes and ears; something like this is quite literally a matter of not organizing anyone.
Why would Microsoft attack Firefox specifically and not Chrome? Chrome is the bigger threat to their business. Firefox has become almost too small to care about - little revenue, small browser market share.
There's an argument that Microsoft's use of Chromium in Edge, and then the Surface Duo, would cause "don't bite the hand that feeds you" problems. Not agreeing with OP, but it would make sense.
Well, no, but I would also question the inverse. Holding companies that gain from possibly bad actions accountable and asking the questions is helpful.
See: Microsoft's antitrust case over their preference for IE and forced monopoly. While Microsoft 'won' the case, the outcomes were exactly what the case feared, but a "convenient" political climate helped them avoid traveling back to court, of course. Microsoft took extreme steps to avoid being broken up in the 1990s, however, and it's arguable that one of their political mitigation methods, investing in Apple, actually had worse effects on them. (Prior to the iPhone in 2007, it was assumed that RIM and Microsoft would be the big two players in the smartphone space; instead, Apple and Google have basically become the big two players in computing mindshare.)
We should at least be aware of it as an option. Many call this "healthy skepticism". It becomes unhealthy when you veer into blind optimism/pessimism/cynicism.
Very interesting point. They might have had intentions of pushing everyone to use Edge, and it would not be surprising after their many consistent nags and misleading messages claiming it's the "better" browser compared to anything else.
If Microsoft were so good at software engineering that they could pull off such an attack on Firefox, then maybe they do deserve to have a monopoly. /s
Slow walk... or, by comparison, have you contacted your local city government to fix obvious holes in the road recently? Around here, a two-year wait to fix them is common.
Nowadays a lot of malicious acts are intentionally disguised as stupidity and incompetence. Not necessarily in this case, but that quote really is showing its age.
yet another reason why I don’t touch Windows for any professional/sensitive workflows.
Only keep a license around for the occasional gaming session. Disable all of the Windows features (i.e., firewall, auto-updates, antivirus) and telemetry. Strip the OS to the bare minimum and manage the GPU and motherboard drivers manually. Limit it to only games.
Seems less dumb than targeting Firefox though. Presumably, in the universe of this conspiracy hypothesis, they would do it in a way that wouldn't affect Edge.
Then they would lose any semblance of plausible deniability, which would expose them to being positively identified as bad actors. What it looks like now is mere incompetence in the face of enormous complexity, which means they lose a lot less face compared to doing what you suggest. Put bluntly, they're hiding within the space covered by Hanlon's razor.
That's 250 megawatts saved, the equivalent of an average coal power plant. Because some Microsoft engineer missed a bug.