Firefox 127 (mozilla.org)
160 points by skilled 7 months ago | 97 comments



> You can now set Firefox to automatically launch whenever you start or restart your Windows computer. Setting Firefox to auto-launch optimizes efficiency in our browser-centric digital routines, eliminating manual startup delays and facilitating immediate web access.

What’s going on at Mozilla that caused normal release notes to get transformed into awful marketing speak like this?



Mozilla is explaining why you'd enable a feature they defaulted off.

That kind of feature benefits from a sales pitch.

If anything, I'd say it's the opposite problem -- that, instead of a marketer, they left the marketing speak to an alpha nerd and no one else proofread it.


Haha, reading again, this is pretty apt.

Hopefully it fits the target audience; to me it reads as relatively sound. How did Chrome advertise this at the time?

"Security"? The full maximum Google Experience?


Sounds like someone using a particularly pathetic RLHFed LLM such as OpenAI's products.


Firefox and Mozilla are basically dead, living only off of Google's charity. It's a welfare case.

If I didn't personally know some Mozilla employees then I would honestly be surprised that anyone actually worked there.


To Firefox's credit, they are the last major browser to inject themselves into our computer's startup (and at least they are providing notice and a way to disable it).

I find it not only annoying but arrogant how most other software companies assume our lives revolve around their particular tool. So much so that, obviously, we must want their software to leech our time, bandwidth and power running its setup caching (and checking for / installing updates) every damn time we boot our computers. No need to ask permission to inject your silent bloat into our daily lives or even offer a way to disable adding it!

Ultimately, I blame the OS for allowing apps to inject startup cruft silently and without explicit user consent. Aside from increasing the security/privacy attack surface, that ever-growing pile of aging cruft-ware makes their OS slower and less stable. It's now a regular manual maintenance task to de-cruft each of my PCs. And since Windows has so thoughtfully provided over a dozen legitimate ways for apps to silently insert startup bloat, a special tool is needed: Autoruns (https://learn.microsoft.com/en-us/sysinternals/downloads/aut...). Amazingly, many apps now go and check that all their startup cruft is intact and, if not, "fix" it every time they run (looking at you Chrome).


Moving PCs to a more mobile-like environment, where apps are always sandboxed and all permissions are explicit, sounds so tempting. But then you realize that manufacturers will always cross the line: MS and Apple apps get special privileges, the bootloader will be permanently locked, and you'll eat your DRM because you have no other choice.

I prefer to curate my own apps and if they misbehave, they're uninstalled immediately. The downside there is this philosophy feels a lot like, "I don't need a virus scanner because I'm too good to get stung by a virus". Still, it seems better than the alternative hell.


> Moving PCs to a more mobile-like environment, where apps are always sandboxed and all permissions are explicit, sounds so tempting. But then you realize that manufacturers will always cross the line: MS and Apple apps get special privileges, the bootloader will be permanently locked, and you'll eat your DRM because you have no other choice.

It's always worth emphasizing imo that technological prophylactics like mobile-style sandboxing are redundant on operating system distributions whose social processes of development take responsibility for things like this (auto-updating, startup behavior, default app associations, file type associations, etc.) away from apps and centralize it under the control of the user via the OS' built-in config tools.

To be more explicit: free software distros generally lack software that behaves this way, and often actively patch such behavior out of ill-behaved cross-platform applications (which sometimes lack this behavior themselves in builds targeting free operating systems). The problem is, as you note in your second paragraph, just as much that we're getting our software from untrustworthy distributors as it is that our desktop operating systems do too little to protect us from untrustworthy software. In some cases, the problem is rather that our software distributors have too little capacity to intervene, both for technical reasons (e.g., they don't have the damn source code) and social/economic ones (e.g., the model is to accept as much software as possible, rather than to include a piece of software only after it is clear that there is capacity to scrutinize it, sanitize it, and maintain a downstream package of it).

You can avoid 99.999% of this kind of crap just by running a free software base system and avoiding proprietary software. Better app sandboxing is great! We should demand it and pursue it in whatever ways we can. But installing software directly from unscrupulous sources and counting on sandboxing to prevent abuse is also an intervention at the least effective stage. Relying on a safer social process of distributing software goes way further, as does keeping shitty software off our systems in the first place! These should be the approaches of first resort.


The problem is, reliably catching programs misbehaving is non-trivial even for the technically capable, let alone everybody else, which easily results in a lot of damage having been done by the time they're caught, if they're caught at all.

I don't think there's a practical way forward for desktop OSes that don't embrace sandboxing and tight permission controls, at least for the masses. Even for myself, I'd be more comfortable using an OS like that. If trust in the corporations making the OSes becomes an issue, the solution that's apparent to me is to run a Linux distribution with deeply integrated permissions and sandboxing, not to run without either.


I think that the way forward is to design an entirely new operating system, and I have some ideas for how to do it (but not a name, yet). It will be FOSS, and the user can easily reprogram everything in the entire system. It uses capability-based security with proxy capabilities; no I/O is possible (including reading the current date/time) without using a capability. Better interaction of data between the command line and GUI will also be possible.

Linux and other systems tend to be overly complicated; they need to add extra functions to the kernel because of problems with the initial design. I think it can be done better and more simply.

Webapps are also too complicated, don't really solve the permissions issue properly (the set of permissions doesn't, and cannot, include everything; overriding by the user is difficult and inefficient; etc.), and aren't very well designed either.

There are sandboxing systems available on Linux, but they have their own problems: e.g. many have no way to specify the use of popen with user-specified programs, or what needs to be accessed according to user configuration files, command-line arguments, or user-defined plugins, or they cannot properly support character encoding in file names (due to issues with D-Bus), etc. (Using D-Bus for this is a mistake, I think; the other mechanisms they use, besides D-Bus, don't handle it very well either.) There is also the issue that it is unknown what permissions will be needed before the program is installed, especially when using libraries that can be set up to use multiple kinds of I/O.


Are you aware of genode ( https://genode.org/ )? It's a full-blown capabilities OS that is FOSS and already exists and AFAIK basically works today.


Yes. My ideas have some similarities with Genode but also many significant differences. For example:

- The design is separate from the implementation. The kernel of the system will be specified and simple enough that multiple implementations are possible; the other parts of the system can be specified and implemented in the same way, and implementations from different sources can be used together. (Components can also be replaced.)

- The ABI will be defined for each instruction set, and will be the same for any implementation that uses that instruction set; the system calls will be the same, etc.

- The design is not intended for use with C++ (although C++ can be used). It is intended for use with C, and with its own programming language called "Command, Automation, and Query Language". Assembly language, Ada, etc. are also possible; the core portable design supports C, but the abstract system interfaces are defined in a way that is not specific to any programming language.

- All I/O (including reading the current date/time) must be done using capabilities. A process that has no capabilities is automatically terminated, since it cannot perform any I/O (including in the future, since it can only receive new capabilities via a message from an existing capability). (An uninterruptible wait on an empty set of capabilities also terminates a process, and is the usual way to do so.)

- It does not use or resemble POSIX. (However, a POSIX compatibility library can be made, in order to compile and run POSIX-based programs.)

- It does not use XML, JSON, etc. It has its own "common data format" (which is a binary format), used for most files, and for command-line interface, and some GUI components, etc.

- The character set is Extended TRON Code. The common data format, keyboard manager, etc all support the Extended TRON character set; specialized 8-bit sets are also possible for specialized uses. (This is not a feature at the kernel level though; the kernel level doesn't care about character encoding at all.)

- Objects don't have "methods" at the kernel level, and messages do not have types. A message consists of a sequence of bytes and/or capabilities, and has a target capability to send the message to.

- Similar to the "actor model", programs can create new objects, send their addresses in messages through other capabilities, and can wait for capabilities and receive messages from them. (A "capability" is effectively an opaque address of an object, similar to a file descriptor, but with fewer operations available than POSIX file descriptors allow.) It works somewhat like the socketpair function in POSIX for creating new objects, with SCM_RIGHTS for sending access to other objects.

- A new process will receive an "initial message", which contains bytes and/or capabilities; it should include at least one capability, since otherwise the process cannot perform any I/O.

- There is no "component tree".

- Emulation is possible (this can be done by programs separate from the kernel). In this way, programs designed for this operating system but for x86 computers can also run on RISC-V computers and vice versa, and other combinations (including supporting instructions that are only available in some versions of instruction sets; e.g. programs using BMI2 instructions work even on a computer that doesn't support those instructions). Of course, this will make the program less efficient, so native code is preferable, although emulation does make it possible to run programs that have no native build.

- Due to emulation, network transparency, etc, a common set of conventions for message formats will be made so that they can use the same endianness, integer size, etc on all computer types. This will allow programs on different types of computers (or on the same computer but emulated) to communicate with each other.

- A process can wait for one or more capabilities, as well as send/receive messages through them. You can wait for any objects that you have access to.

- The file system does not use directory structures, file names, etc. It uses a hypertext file system. A file can have multiple streams (identified by 32-bit numbers), and each stream can contain bytes as well as links to other files.

- Transactions/locks that involve multiple objects at once should be possible. In this way, a process reading one or more files can avoid interference from writers.

- Better interaction of objects between the command line and the GUI than in most existing systems.

- Proxy capabilities can be defined (in C or other programming languages, including the interpreted "Command, Automation, and Query Language"). This is useful for many purposes, including network transparency, fault simulation, etc. If a program requires permission to access something, you can program the proxy to modify the data being given, to log accesses, to make the capability revocable, etc. (For example, if a program expects audio input, the user can provide a capability for the microphone or for an existing audio file, etc.)

- There are "window indicators", which can be used for e.g. audio volume, network, permissions, data transfer between programs, etc.

- The default user interface is not designed to use fancy graphics (a visual style like Microsoft Windows 1.0, or like X Athena widgets, is good enough).

- USB is no good. (This does not mean that you cannot add drivers to support USB (and other hardware), but the system is not designed to depend on USB, so avoiding USB is possible if the computer hardware supports it, without any loss of software functionality.)

- System resources are not the same as in Sculpt; they are set up differently, because I think many things would be better done differently.

- As much as possible, everything in the system is operable by keyboard. A mouse is also useful for many things, but every function is usable from the keyboard; a mouse is optional (but recommended).

- There are many other significant differences, too.


> - Due to emulation, network transparency, etc, a common set of conventions for message formats will be made so that they can use the same endianness, integer size, etc on all computer types. This will allow programs on different types of computers (or on the same computer but emulated) to communicate with each other.

If you're going there, you could consider just going to wasm as the binary format on all architectures.

> - There are "window indicators", which can be used for e.g. audio volume, network, permissions, data transfer between programs, etc.

Kind of like Qubes? More so, obviously, but it reminds me of that.

> - USB is no good.

What? USB is extremely high utility; just this makes me think you'll never get traction. By all means lock down what can talk to devices, do something like https://usbguard.github.io/ or whatever, but not supporting USB is going to outweigh almost any benefits you might offer to most users.

(Also on the note of things that will impede uptake, throwing out POSIX and a conventional filesystem are understandable but that's going to make it a lot harder to get software and users.)


> If you're going there, you could consider just going to wasm as the binary format on all architectures.

There are several reasons why I do not want to use wasm as the binary format on all architectures, although the possibility of emulation means that it is nevertheless possible to add such a thing if you wanted it.

> Kind of like qubes?

Similar in some ways.

> What? USB is extremely high utility; just this makes me think you'll never get traction. By all means lock down what can talk to devices

I have clarified my message, since "USB is no good" does not mean that it cannot be used by adding suitable device drivers. It means that the rest of the system does not use or care about USB; it cares about "keyboard", "mouse", etc., whether they are provided by PS/2, USB, IMIDI, or something else. However, USB has problems with the security of such devices, especially if the hardware cannot identify which physical port they are connected to, which makes things more complicated. Identifying devices by the physical ports they are connected to is much superior to USB for security, for user device selection, and for other purposes; so, if that is not available, it must be emulated.

For distributions that do have a USB driver, something like USBGuard could perhaps be used to configure it. However, USBGuard seems to only allow or disallow a device, not to specify how it is to be made accessible to the rest of the system (although that will be system-dependent anyway). (For example, if a device is connected to physical port 1, and a program has permission to access physical port 1, then it accesses whatever device is connected there, translated to the protocol expected by the virtual port type associated with that physical port.)

Even so, the system will have to support non-USB devices just as easily (and to prefer non-USB devices).

> Also on the note of things that will impede uptake, throwing out POSIX and a conventional filesystem are understandable but that's going to make it a lot harder to get software and users.

As I mentioned, a POSIX compatibility library in C would be possible, and this could also be used to emulate POSIX-like file systems (e.g. by storing a key/value list in a file, with file names as the keys and links to files as the values). Emulation of DOS, Uxn/Varvara, NES/Famicom, etc. is also possible, of course.

However, making it new does make it possible to design better software specifically for this system. And since the C programming language is still usable, porting existing software (if FOSS) should be possible, too.


I guess I wasn't thinking of my primary line of defense: webapps! Naturally sandboxed and least privilege'd. And many of them are locally selfhosted in containers too.

Native apps for me tend to fall into a few narrow categories:

* Professional software engineering tools

* Videogames (on a Steam Deck these days)

* CAD for 3D printing (FreeCAD)

* A/V editing with Audacity, Gimp, etc


You may be interested in "immutable" distros like OpenSUSE's Aeon, Fedora's Silverblue or the kind-of-Debian Vanilla OS. If you go and try Vanilla, by all means try the beta.


> a way to disable it

It's disabled by default.


Agree that starting apps up at boot is normally pretty annoying. But it seems like a nice feature for a web browser; a web browser might host a user's whole universe of programs. I mean, it seems like a nice-to-have optional feature.

After Ubuntu switched from a native package to Snap (I think it was), Firefox's first-startup times got bad. It is objectively not a big deal, but it feels awful, like using some crappy old pre-SSD computer.

Anyway, starting the program before the user asks for it is a pretty hacky and ridiculous way of solving this artificial Snap-induced problem (and I don't think it was the motivation). But it would technically work, I guess.

What a mess.


Ironically, Windows has been doing this anyway since, I think, Windows ME. It became smarter in Vista; I believe it would preload binaries at the time of day when they were most frequently opened. Also, how often are people logging in afresh? It seems a bit of a waste of engineering time, but then again I hope Mozilla committed these resources because data showed the feature was worth implementing.


> Firefox will now automatically try to upgrade <img>, <audio>, and <video> elements from HTTP to HTTPS if they are embedded within an HTTPS page. If these so-called mixed content elements do not support HTTPS, they will no longer load.

Really interesting, and I think it makes sense: why show a padlock and mislead the user if not all content is properly going through HTTPS?


They don't show a full padlock. They show one with a warning icon.


Does it? So if the content can't be auto-upgraded to HTTPS, it will be blocked.

I'd rather have a padlock with a warning than... a broken website because of missing content.


There is an article linked in the release notes explaining how users can opt in to a less secure option.

https://blog.mozilla.org/security/2024/06/05/firefox-will-up...


I'd say the website was already broken. I'd rather block content than have insecure resources fetched with a potentially sensitive referrer. That is, I don't want `https://example.com/why-does-my-butt-hurt` to fetch `http://media.example.com/spacer.gif` over the coffee shop's open wi-fi.


Very solid point, and exactly why it matters. I wonder if other browsers do or will do the same.


"Close Duplicate Tabs" is actually a really useful feature for some. My Firefox at work has dozens of tabs open at any given moment, mostly JIRA tickets, and it's very easy to have the same issue open in three different tabs when I'm constantly opening links from email, Teams, etc. I had looked into installing an extension to get similar functionality, but held off. I'm glad it's now baked into the software.


The problem is that duplicate tabs don't appear duplicate when they have different tracking params. Unless they somehow figured that out.


"Close all other tabs" is also pretty useful, yet infuriatingly hidden behind another context menu... their UX sucks imho.


Duplicate tab is the most useful for me. I like being able to maintain the history while starting a new tab.


Tip: you can duplicate a tab by middle-clicking the refresh button


It is a manual function, not automatic. So best of both worlds. :)


you can "sort of" duplicate a tab by ctrl + clicking on the back button. I use that more than duplicating.


I wish there were an option to disable the spawning of the "you have been updated" tab. I never read it, and, worse, the tab that was active when I closed Firefox will no longer be active, forcing me to go through a couple of tabs to find it again, reloading all the tabs I click through while searching.

It's really annoying.


Try

Type about:config in URL bar -> click proceed -> Search "browser.startup.homepage_override.mstone" in the shown search bar -> change value to "ignore"

https://kb.mozillazine.org/Startup.homepage_override_url
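
If you'd rather set it once outside the UI, the same pref can (as far as I know) also go in a user.js file in your Firefox profile directory, which Firefox applies on every startup. A minimal sketch; the profile path varies by OS:

    // user.js in your Firefox profile directory
    // suppress the post-update "What's New" homepage override
    user_pref("browser.startup.homepage_override.mstone", "ignore");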


Thanks, this looks promising.

I hope it also works with the "last session" tabs and doesn't pop up a homepage (If browser.startup.homepage_override.mstone is set to “ignore”, the browser’s homepage will not be overridden after updates.).

I love Firefox's about:config page, it has so many options.


There's got to be an about:config entry that tells Firefox whether it's already displayed that tab, which, at worst, you could manually edit before starting. But, even if that were an acceptable solution, I couldn't find it with a quick search for "display", "update", or "tab", nor even find an up-to-date list of entries (I think that http://kb.mozillazine.org/About:config_entries is quite old).

EDIT: Actually, I know for sure that there's some preference that controls this, because I just updated and didn't see the new page; but I forget what it is. One suggestion I saw in https://superuser.com/a/1392487 was to change various addresses to non-existent pages; based on the names of their default values, `app.update.url.manual` might be a relevant one.


I think that's already there. I must have done it long back since I don't remember the last time I got it. Could be a flag if it's not an option in preferences.


> The Screenshots feature in Firefox has gotten a big update! It now supports taking screenshots of file types like SVG, XML, and more as well as various about: pages within Firefox. We've also made the screenshot tool more accessible to everyone by implementing new keyboard shortcuts and adding theme compatibility and High Contrast Mode (HCM) support. And finally, performance for capturing large screenshots has been improved.

I have no problem with this feature, but am curious if it brings any new functionality beyond existing screenshot tools.


I use the Firefox screenshot tool quite a lot, mostly because it automatically snaps the region to DOM elements. This makes it less fiddly to take cleanly cropped screenshots of sections, especially if they don't entirely fit on the screen.

I'm not that knowledgeable about the screenshot tooling space, but I feel all my needs are covered with the default system one plus the Firefox screenshot tool for convenience in some cases.


Maybe it's not the exact use case, and maybe you know about this already, but just in case: in the dev tools, if you right-click on a node in the inspector, you can directly take a screenshot of that specific node.


Yeah I also like how I can get a whole page pretty easily, even if the page is several screens long.


I actually used the screenshots feature for the first time (to illustrate my devconf.cz talk this week, do come, it's free!). The nice thing about it is that you can scroll the page while framing the screenshot, allowing you to take a shot of the whole page including everything below the fold. I think such a feature needs to be integrated into the browser.


Just use `screenshot --fullpage` from the console. I usually do something like `screenshot --dpr 3 --fullpage` for a higher resolution image


Or right-click on the page, Take Screenshot, then the "Save full page" button.

the --dpr option sounds useful, I'll try that out.


"Console" being the Linux shell or the Firefox JS console? I don't understand how that could work from the shell.


Yes, Console being the Firefox DevTools console


Is this different to the Web Developer console?


> I think such a feature needs to be integrated into the browser.

It certainly helps/simplifies the problem, but I imagine a built-in image stitcher inside the screenshot tool could also work in the great majority of cases. In fact, I would expect someone to have done that already.


Yeah, definitely possible. The Firefox screenshot feature is fairly smoothly integrated and intuitive, a rare moment of clarity from their UI team.


One really important one for even non-dev users is that it isn't affected by DRM protections.

Windows' screenshot tool nowadays blacks out any DRM content, which makes screenshotting frames from a show, etc., a royal pain in the ass. This is presumably because the tool can now also do video recording, but either way it's very frustrating.

The Firefox tool doesn't have that issue.


I hope it fixes the "too long filename" issue that often prevents screenshots from being saved (e.g. tweets with long messages), because it seems to use the window title to generate the filename.


> Firefox will now automatically try to upgrade <img>, <audio>, and <video> elements from HTTP to HTTPS if they are embedded within an HTTPS page. If these so-called mixed content elements do not support HTTPS, they will no longer load.

This is not so great for radio streaming, as a lot of stations still use HTTP Shoutcast servers (often with a direct IP instead of a domain name, so no HTTPS there anyway).


> (often with direct ip instead of domain name, so no https there anyway).

Real-world counter example of an IP address available over https:

https://1.1.1.1


> (often with direct ip instead of domain name, so no https there anyway).

That's not true, one can have IP addresses in certificates, it's just very rare because virtual hosting makes "single IP" services rare nowadays: https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.6 ("Defined options include an Internet electronic mail address, a DNS name, an IP address, and a Uniform Resource Identifier (URI).") OpenSSL uses "IP:" syntax in their subjectAltName field (e.g. https://mta.openssl.org/pipermail/openssl-users/2016-Septemb... ) and the CAB seems to be on-board, subject to some "well, watch out for ..." language https://cabforum.org/working-groups/server/guidance-ip-addre...
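For the curious, here's roughly what requesting such a certificate looks like. A minimal sketch assuming OpenSSL 1.1.1+ (for -addext); the IP and file names are placeholders:

    # generate a CSR whose subjectAltName is an iPAddress entry
    openssl req -new -key server.key -out server.csr \
        -subj "/CN=203.0.113.7" \
        -addext "subjectAltName = IP:203.0.113.7"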


So do CAs in general offer those? Are they as affordable as "normal" certs? Certainly you'd need to be paying for those kinds of certificates, possibly at some premium?

It's probably not worth it anyway, because you're now stuck with that single IP, for little benefit. In general IPs just don't get HTTPS certificates acquired for them.


Seems to be free from ZeroSSL https://help.zerossl.com/hc/en-us/articles/360060119973-Is-I... but Let's Encrypt decided against it, since their whole thing is about proving ownership over a domain or set of domains, and doing that for an IP address would be some nonsense. I would be absolutely shocked if the "name brand" cert issuers don't support CSRs containing iPAddress SAN entries, since it's very clearly part of the RFC, but I haven't ever had a need to try such a stunt, either.


you can get a free cert from ZeroSSL for an IP address

https://help.zerossl.com/hc/en-us/articles/360060119973-Is-I...


I guess I stand corrected on the ip / https thing :)


> navigator.clipboard.read()/write() has been enabled (see documentation). A paste context menu will appear for the user to confirm when attempting to read clipboard content that is not originated from a same-origin page.

FINALLY, I've been following the ticket to implement this for so long
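
For anyone who hasn't tried it, a minimal sketch of the API (error handling elided; writes generally need a user gesture, and cross-origin reads trigger the paste prompt mentioned in the notes):

    // write plain text to the clipboard (run from a user-gesture handler)
    await navigator.clipboard.writeText("hello from Firefox 127");

    // read it back; each ClipboardItem exposes its MIME types as blobs
    for (const item of await navigator.clipboard.read()) {
      if (item.types.includes("text/plain")) {
        const blob = await item.getType("text/plain");
        console.log(await blob.text());
      }
    }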


Usually FF is the first to implement experimental specs, but this took so damn long.


They still haven't finished WebCodecs support unfortunately


I'm just happy the update is not "We've integrated new AI features".


Why? The fully offline translations they added in one of the recent versions are nice; I used them today to read a couple of articles in French and did not notice any difference from Google Translate (compared them side by side). Another AI-related feature they promised recently is automatically generated alt descriptions for images that don't have any. That seems like a massively useful feature for the visually impaired.


To give a serious answer to your serious reply to my snarky post: I think we're in an AI hype cycle, I don't deny that AI is sometimes useful but it seems like everyone and their dog is trying to add some kind of "AI" to their product now, without thinking through either the value to the end user, or any privacy implications. They've just heard that AI = $$$ and are running with it. In some cases it even becomes one more way to slurp up everyone's data for ads and training yet more AI.

Even Microsoft Recall could be useful for many people, but it should have been opt-in from the start. Personally, I'd also wish that AI modules were only installed when you opt in, so you don't take up valuable space (especially on mobile) with a massive matrix of model weights unless you actually ask for it.


I am responsible for the Firefox Translation feature (with my dev partner Andre Natal), and I'm proud to have introduced a great "AI" feature. Firefox Translation is powered by small language models (and a transformer architecture) that give you 100% local translation with a minimum of hassle. It's the opposite of AI hype, and when I was at Mozilla building this, everyone was on the same page as me about using AI only where appropriate and not as a hype mechanism. I left Mozilla about a year ago, so perhaps things have changed, but IMO you don't have to worry about Mozilla treating AI as $$$ (mostly because it's actually a huge cost center; I couldn't even get budget to train beyond about 10 languages when Firefox is available in 100 languages).


I wish that offline translation system was part of the desktop rather than the browser, so we could use it in every app.


Thank you for this. That is very encouraging.


Unfortunately it only supports a very small number of languages, none that I really care about. And the times I did try it on a supported language, it only translated the first ~1/3 of the page for some reason.


Why? I'd love to see a fully-local LLM for text generation, proofreading etc.

Nobody forces you to use it, and I have much higher trust in Mozilla to not do a sneaky switcheroo to a cloud-based model or something similar than I have in the other browser vendors.


Does it really have to be integrated in the browser, though? Because Mozilla already distributes offline LLMs: https://github.com/Mozilla-Ocho/llamafile.


Don't hold your breath, it's Mozilla after all. They're king of shooting themselves in the foot to pursue trends.


I mean, technically they have? Maybe not in this update, but they include local AI models for webpage translation. It's pretty useful (when it's a supported language pair, at least)


> Firefox will now automatically try to upgrade <img>, <audio>, and <video> elements from HTTP to HTTPS if they are embedded within an HTTPS page. If these so-called mixed content elements do not support HTTPS, they will no longer load.

Web developers make no sense to me. In one moment, it's all about "not breaking the web" (see: Smooshgate, and the stubborn resistance to reasonable API naming because it would break a 15-year-old third party library). And then they casually drop a huge breaking change like this. Ok.


It's almost as if people consider all possible upsides and downsides, and try to make a good trade-off for that specific situation, instead of adhering to a simplistic black/white thinking...


It's all trade-offs. No one likes breaking software. In this case, it's a security issue, so that can justify a breaking change.


Safari already disallows mixed content, so this isn't really new. And it's been an error for a long time; no one should really be surprised that their broken code is broken, causing their website to break. It's more surprising that you are demanding we keep allowing affirmatively wrong code.


Changes made in the name of security have a higher chance of being allowed to break backwards compatibility. If changing that API name fixed some security issue, or even reduced the security exposure of some part of the browser, it might be allowed even if it breaks some third-part library.


I await integrated vertical tabs with bated breath.


Try Waterfox.


Developer release notes here:

https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Rel...

Notably, Async Clipboard API is now fully enabled.


Tangential question for fellow tech observers: have any of you Mac users found an optimal plugin/extension similar to uBlock Origin for Safari? Safari is buttery smooth and runs cool. I hate to use Firefox/Chrome just for uBlock Origin's sake.


If you use Kagi's Orion (which is WebKit-based, like Safari), you can install either Firefox or Chrome extensions from their respective stores.


Less than perfect, but you could start with a Pi-hole or other blocking DNS service.


AdGuard has been working well enough for me.


> the CPU architecture for 32-bit x86 Linux will now be reported as x86_64 in Firefox's User-Agent

And in a few months, everyone will state that no 32-bit users remain and support can be dropped.

#facepalm


I'm curious, what kind of features on the Web depend on the PC being 32-bit or 64-bit? What even would there be to shut down?


Downloads.

If the user agent reports 32 bit, when you go to download an installer, the page can present you with the 32 bit version. Likewise with 64 bit (from the early days when 32 bit was more common). Same goes with reporting that it's x86 vs ARM.

So now if you are on an old 32 bit machine and try to install new software from the web, it'll give you a 64 bit installer that you almost certainly won't even be able to run on your machine.

Now with Linux that's less of an issue, given that installs on Linux already depend on a ton of stuff (and Linux users tend to understand the difference between 32-bit and 64-bit).
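
To make that concrete, a download page's architecture sniffing might look something like the sketch below (purely hypothetical, not any site's real code). After this change, 32-bit x86 Linux matches the x86_64 branch and gets an installer it can't run:

    // hypothetical installer-picker based on User-Agent sniffing
    function pickInstallerArch(ua) {
      if (/aarch64|arm64/i.test(ua)) return "arm64";
      if (/x86_64|amd64|win64|wow64/i.test(ua)) return "x64";
      if (/i[3-6]86/.test(ua)) return "x86"; // token Firefox no longer sends on Linux
      return "unknown";
    }

    console.log(pickInstallerArch(navigator.userAgent));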


Linux users don't usually download installers from websites.


No, but then the Firefox team will stop caring about 32-bit, because some online metric will show that there are no 32-bit Firefox users anymore.


Let me know when they add tab groups.


Why does it load for ages sometimes? It feels sluggish and unstable on the latest versions...


Try a Firefox Refresh: open about:support and click the button in the top right.


Still haven't fixed the white flash


Works on my machine /shrug




True browser-centric digital gurus don't restart their computer unless there's a power cut. They don't shut down their browser either.


Though on the occasions you do need to restart, do you want to wait the extra two seconds? Certainly not!

But actually, I will enable this feature...



