Comparing PIE executables is a bit of an anachronism. They did not really exist when x86-64 was introduced. Position-dependent executables are more compact on i386 because the lack of PC-relative addressing (the %rip stuff on x86-64) only hurts position-independent code.
That being said, the larger register set, the register-based calling convention, SSE2 for floating-point math, and PC-relative addressing (important for shared objects) together usually compensate for the cost of wider pointers. Unless, that is, your workload happens to be pointer-heavy and fits into slightly less than 4 GiB of address space.
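To make "pointer-heavy" concrete, here's a minimal C sketch of my own (not from the article): the same node struct is half the size under an ILP32 ABI such as x32, assuming a toolchain that accepts -mx32.

    #include <stdio.h>

    /* A pointer-heavy node: two child pointers plus a small payload. */
    struct node {
        struct node *left;   /* 8 bytes with -m64, 4 with -mx32 */
        struct node *right;  /* 8 bytes with -m64, 4 with -mx32 */
        int key;             /* 4 bytes either way */
    };

    int main(void) {
        /* Typically 24 bytes with -m64 (8+8+4, padded for 8-byte
         * alignment) versus 12 bytes with -mx32 (4+4+4, no padding). */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }

Halve the per-node footprint and roughly twice as many nodes fit in each cache line's worth of fetched memory, which is where much of the x32 speedup in pointer-chasing workloads is said to come from.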
A nice complement to this post is Mike Culbert’s article about how Apple chose to use 32-bit ARM6 instead of a 16-bit processor for reasons entirely unrelated to maximum memory size. http://waltersmith.us/newton/COMPCON-HW.pdf
> Sadly, and as far as I can tell, x32 is pretty much abandonware today. Gentoo claims to support it but there are no official builds of any distribution I can find that are built in x32 mode.
It is supported, but the list of broken packages is rather long... [1]
You don't need an official build, as Gentoo builds itself and also cross-builds itself for any target and any ABI. In this case, you don't even need a cross-build.
Just install Gentoo x86_64 multilib, enable x32 support in the kernel, enable the abi_x86_x32 USE flag for the packages you want to test, and run emerge to build them. They won't even interfere with the rest of the running system. See [2].
147 packages don't strike me as that many, that's less than 1% of all Gentoo packages. I expected much more.
EDIT: And several of those packages are binaries. Apart from the *-bin packages, I've identified TeamSpeak, dev-libs/amdgpu-pro-opencl, Skype, and Slack. That leaves 138 packages.
(Sane) web UIs in general should be fine, since they can just use the browser's JS engine. I don't think transmission is affected, at least it's not listed as masked.
You're right about firefox though, which is weird. Apparently node is a build dependency, but I can't figure out if that's something specific to the ebuild (something Arch's PKGBUILD does as well) or just generally something the build system requires; it's not listed as a dependency at https://firefox-source-docs.mozilla.org/setup/linux_build.ht....
x32 was such a waste of time and resources for Intel. While the world was moving to 64 bit, someone at Intel had the "bright" idea to go back to the past, and create a new 32 bit architecture (???) that was incompatible with traditional Windows applications (????)
Now, in 2024, 32 bit cell phone CPUs seem like a distant memory and nobody remembers what the hell x32 was even for.
x32 is a different (possibly badly designed) Linux kernel ABI for programs running in 64-bit mode on regular x86-64 processors, but with 32-bit pointers. It has nothing to do with Intel or Windows.
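To make that concrete, a small sketch (mine, not from the thread): build the snippet below with gcc -mx32 and pointers and long are 4 bytes, yet the CPU is still executing in 64-bit long mode with all 16 general-purpose registers available.

    #include <stdio.h>

    int main(void) {
        /* gcc -mx32: prints 4 and 4 (ILP32 on top of 64-bit mode).
         * gcc -m64:  prints 8 and 8 (the regular x86-64 ABI).
         * Running the -mx32 binary needs CONFIG_X86_X32 in the kernel. */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        return 0;
    }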
It's an architecture from the Linux kernel's point of view, although it runs on existing hardware. However, it's not accurate to say it has nothing to do with Intel -- they designed and funded it. And it was a waste of resources since the world was going 64 bit, and that was obvious even at the time.
Realistically, Windows is the platform keeping 32 bit code alive, in the form of all those legacy binaries. MacOS and Linux have moved on. So if you design something new for 32 bit that can't be used by Windows, it's DOA.
> We can’t run the resulting binary. x32 is an ABI that impacts the kernel interface too, so these binaries cannot be executed on a regular x86-64 kernel. Sadly, and as far as I can tell, x32 is pretty much abandonware today. Gentoo claims to support it but there are no official builds of any distribution I can find that are built in x32 mode.
Which distro is this? On Debian I'm pretty sure CONFIG_X86_X32 was enabled in the kernel not too long ago, which lets this work without issues. When I benchmarked it, it was on average 20% faster than x86_64 and used quite a bit less memory for GUI apps, IIRC. I think most utilities (coreutils, image viewers, file explorers, media players, etc.) would benefit from it; >4 GB is only really meaningful for web browsers, compilers & interpreters, games, and creative apps such as anything doing 3D, audio or large data processing.
As far as I'm aware, kernel-side support hasn't budged an inch since all those articles came out. The bigger annoyance is that you'll either have to compile a special libc or define your own raw-syscall wrappers.
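For the raw-syscall route, here's a hedged sketch of what such a wrapper can look like (my own illustration; compile with gcc -mx32). The one genuinely x32-specific detail is the syscall number: it's the x86-64 number with __X32_SYSCALL_BIT (0x40000000) set, while the registers and the syscall instruction itself work exactly as on plain x86-64.

    #include <stdint.h>

    #define __X32_SYSCALL_BIT 0x40000000LL
    #define X32_NR_write (__X32_SYSCALL_BIT + 1)  /* write() is 1 on x86-64 */

    static long long x32_syscall3(long long nr, long long a1,
                                  long long a2, long long a3)
    {
        long long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(nr), "D"(a1), "S"(a2), "d"(a3)
                          : "rcx", "r11", "memory");  /* kernel clobbers these */
        return ret;
    }

    int main(void)
    {
        static const char msg[] = "hello from a raw x32 syscall\n";
        /* x32 pointers are 32-bit and zero-extend cleanly into the
         * 64-bit argument registers, since the address space is < 4 GiB. */
        x32_syscall3(X32_NR_write, 1, (long long)(uintptr_t)msg,
                     sizeof msg - 1);
        return 0;
    }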
It has been several years since I played with x32, but Debian had pretty good support. Not every package built for amd64 was also built for x32, but most server stuff was there.
It has been a long time, but I was using x32 containers on an amd64 host (both Debian), and things worked fine without having to build anything from source. After moving to a different VPS provider who was less stingy with RAM in their base offering, I just simplified and used straight amd64.
While I didn't try adding x32 as an additional arch with Debian's multi-arch, I'm pretty confident it would "just work", just as adding i386 as an additional arch does.
Perhaps. I was just trying it on Ubuntu, and found that while I can add x32 as an architecture on dpkg, there aren't any x32 versions of libc6 and other basic libraries. The odd part is, I distinctly recall playing with x32 assembly quite recently on Ubuntu with no special configuration, but now it seems like it's no longer enabled in the kernel by default. Maybe it was disabled with 24.04.
> >4GB is only really meaningful for web browsers, compilers & interpreters, games and creative apps such as anything doing 3D, audio or large data processing.
Yes, but...
> web browsers
...includes Electron, Cordova/PhoneGap, WebView-based UIs, et cetera.
So yes, so long as software houses uncritically jump on Electron as their platform of choice, all those apps will ultimately be "web browsers" in disguise
...even a simple utility to flash an SD card[1] is a 150MB behemoth... I'm dying inside at this point.
IDK, I mostly use KDE apps and none of those are Electron. The only web-browser thing I have open right now is firefox; everything else is pretty lean Qt apps: strawberry (RES 77 megabytes), dolphin (RES 61 megabytes), konsole (RES between 30 and 60 megabytes depending on the instance), and the app I'm developing, https://ossia.io (lean enough to run on a Raspberry Pi Zero 2).
Meanwhile, I have a dozen firefox processes each above 500M RES and a few above 1G...
It’s a simplified version of x86 that is 64-bit only. It removes all the backwards compatibility for 32/16-bit apps, as well as a number of low-level features that aren’t really used/needed (legacy IO, old versions of SSE, memory segmentation, etc). Currently in the “investigation” phase at Intel.
I hope not! Because systems like PS/2, COM/Serial ports, etc are really simple and well-understood (compared to USB) which does make them more reliable or dependable in situations where it matters. Additionally, PS/2 is very much still used today: every laptop I've ever owned, including my brand-new 2024 ThinkPad P1, has a touchpad ostensibly using a PS/2 connection internally - and we get all this functionality essentially for free (yay lithography): it's all baked into a single "Super I/O" unit[1] which today is part of some larger IC on your mobo.
...that said, Microsoft and some hardware partners tried this when Windows XP came out; they were called "Legacy-free PCs", which in practice just meant that (excepting video and audio) all peripheral ports were USB 1.1, with zero (exposed) ports for PS/2, parallel, serial, ISA, and gameport/MIDI; no floppy drive either; it was also a convenient excuse to omit the 56K modem. (Anyone remember the iPAQ Desktop[2]?)
...but it was more style than substance: those computers were still x86 machines using the same mobo chipsets as a traditional box, so the hardware support for PS/2 and the like was still there in the chips, just without the traces connected to anything and the functionality disabled via BIOS settings or maybe something like an e-fuse.
That's very interesting, but what's the purpose of installing PS/2 ports on modern motherboards when no one uses them? It's a waste of money and materials and space, and just contributes to e-waste. I can see leaving them in equipment made for certain specialty/embedded applications, but not mainstream consumer equipment.
I also wonder how much power those "super I/O" chips are wasting with this unused circuitry.
And why is PS/2 internally used for touchpads anyway? Inertia? There's plenty of perfectly good serial protocols out there for these kinds of low-speed serial I/O applications, such as SPI.
Some USB keyboards and mice, and some ways of plugging them into the system, can result in noticeable issues like key ghosting and input lag. Thus some people still kept around their PS/2 devices and avoided USB adapters.
This is mostly not a problem nowadays. The issues arose from cheaper/poorly implemented devices and the use of hubs (both internal and external) to provide more ports.
(I wrote a long reply to this post with technical details; then I accidentally reloaded the tab in Chrome and it lost my textarea text and it's 3am here so argh).
But to briefly summarize what I did write:
> installing PS/2 ports on modern motherboards
I haven't seen a PS/2 port on any brand-new (non-industrial) computer, even rackmount servers, since 2009. My own last mobo with PS/2 ports was bought in 2008; I haven't seen PS/2 on a laptop since 2003.
> and just contributes to e-waste
e-waste is a problem, yes; but PS/2 ports are really, really, not a meaningful cause of any problems in this area.
> I also wonder how much power those "super I/O" chips are wasting with this unused circuitry.
Zero. ICs (mostly) consume power only when their transistors undergo a state-change. If the PS/2 microcircuitry in an IC is never used, then there won't be any concordant transistor state-changes, ergo, there won't be any (measurable) power draw. This is why microprocessor lithography is great: you really do get stuff for free.
> why is PS/2 internally used for touchpads anyway?
I wrote "ostensibly"; in reality, it usually isn't actually "PS/2" as-we-know-it, but some other protocol over I2C or even SPI as you suggested (e.g. Synaptics calls theirs "InterTouch"[1]). But the hardware interface that the OS sees is compatible with Intel's i8042, the original PS/2 controller ( https://wiki.osdev.org/%228042%22_PS/2_Controller ), even though a discrete i8042 no longer exists today.
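And that interface really is delightfully simple, which is a big part of why it has survived. A freestanding C sketch (mine; assumes ring-0 x86 code, with the port numbers from the osdev page above):

    #include <stdint.h>

    #define I8042_DATA   0x60  /* scancodes / mouse bytes are read here */
    #define I8042_STATUS 0x64  /* bit 0 set = output buffer has a byte  */

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    /* Busy-poll until the controller has a byte, then fetch it.
     * Real drivers use IRQ 1/12 instead of spinning, of course. */
    static uint8_t i8042_read_byte(void)
    {
        while (!(inb(I8042_STATUS) & 0x01))
            ;
        return inb(I8042_DATA);
    }

Two fixed I/O ports and a status bit: that's the whole read path, which is exactly the kind of thing firmware and pre-boot environments can keep supporting forever at near-zero cost.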
As for why practically every laptop seems to use "PS/2" for their touchpads today, I speculate that laptop OEMs prefer this "PS/2-a-like" for 2 reasons:
1. It consumes less power (and has far less overhead overall) than an internal USB connection, which is important in a laptop.
2. Laptops likely have only a single USB root controller, usually for the latest USB revision at the time the laptop was engineered. A new laptop today likely has only a USB4 controller, which is backwards-compatible with older USB devices, but an OS or EFI application (e.g. the BIOS setup utility, the BitLocker manual unlock screen, a custom boot manager, etc.) may have spotty support for newer USB controllers. So it makes sense to ensure that the laptop's fundamentally essential input devices (i.e. mouse and keyboard) are connected via a mature, widely supported interface like the i8042's, to avoid the laptop becoming completely unusable if something goes wrong with USB support (because it's not like you'll be able to plug in an external USB mouse/keyboard and expect that to work...). Whereas on desktops I've noticed that whenever there's support for some cutting-edge USB revision (e.g. USB 3.x in 2012ish, or USB4 today), the board will still have a couple of USB 2.0 ports on a different controller, so you won't be SOL if anything goes wrong with the new stuff.
Granted, these are a generation old and not the latest AM5 socket stuff, but still AM4 chips are fairly popular; these motherboards probably all came out around 2019.
>e-waste is a problem, yes; but PS/2 ports are really, really, not a meaningful cause of any problems in this area.
It all adds up. Millions upon millions of never-used PS/2 ports across all these systems, over the many years these motherboards have been manufactured, add up to a big pile of PS/2 connectors, all gone to waste.
I'd argue it's more similar to 64-bit x86: the only difference between the "standard" x86_64 ABI and the x32 ABI is that x32's pointers (and long) are 32-bit.
Everything else matches the 64-bit version - not the 32-bit one - the instruction encoding, the extra registers, the minimum ISA extensions (x86_64 mandates SSE2 as a baseline), the calling convention advantages, etc.
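A quick way to convince yourself of this (my own sketch, not from the thread): compile something like the function below twice, with gcc -O2 -S -m64 and with gcc -O2 -S -mx32, and diff the assembly.

    /* Args arrive in the first two SysV argument registers (%rdi/%rsi,
     * or their 32-bit halves %edi/%esi under -mx32), and the math is
     * SSE2 (mulsd/addsd) in both cases; essentially only the
     * pointer-sized operand widths change. */
    double dot2(const double *a, const double *b)
    {
        return a[0] * b[0] + a[1] * b[1];
    }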
In part, yes: the difference is the ability to access the 64-bit registers, the additional set of registers, and the assumption that SSE is present. And because of all of these, x32 defines a new calling convention to optimize function calls.