There's still some 32 bit software (usually proprietary stuff, and/or libraries used by Wine). Removing 32 bit libraries especially just breaks this software and you don't really have any option except to get the vendor to fix it.
But more importantly I sometimes find 64 bit assumptions in code. One recent example was in this new package that we're trying to add to Fedora: https://bugzilla.redhat.com/show_bug.cgi?id=2263333 It doesn't compile on i686 with some obvious compile issues. Upstream isn't particularly interested in fixing them.
Typically the issues are incorrect assumptions about the size of 'long' and 'size_t'. And incorrect casts from pointers to longs (instead of using uintptr_t).
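To illustrate the second kind of bug with a made-up sketch (not code from that bug report): storing a pointer in a long happens to work on ILP32 and LP64, but truncates the address wherever long is narrower than a pointer, while uintptr_t round-trips a pointer on any platform that provides it. The helper names here are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helpers for tagging the low bits of a pointer. */
static void *tag_broken(void *p, unsigned tag)
{
    /* Assumes sizeof(long) == sizeof(void *); true on ILP32/LP64 Linux,
       false on LLP64 (64-bit Windows), where this cast truncates. */
    long v = (long)p;
    return (void *)(v | (long)tag);
}

static void *tag_portable(void *p, unsigned tag)
{
    /* uintptr_t (where available) is guaranteed to round-trip a void *. */
    uintptr_t v = (uintptr_t)p;
    return (void *)(v | (uintptr_t)tag);
}

int main(void)
{
    int x = 0;
    printf("long=%zu size_t=%zu void*=%zu\n",
           sizeof(long), sizeof(size_t), sizeof(void *));
    printf("%p %p\n", tag_broken(&x, 1), tag_portable(&x, 1));
    return 0;
}
```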
Does any of this matter still or should we embrace this 64 bit only world? Will CPU makers remove i686 support? I guess we're going to find out.
> Does any of this matter still or should we embrace this 64 bit only world? Will CPU makers remove i686 support? I guess we're going to find out.
We're already on this path: Intel has proposed an "X86-S" subset which drops support for booting into 16/32-bit modes (though 32-bit applications would still be supported under a 64-bit OS), and ARM Cortex-A cores dropped support for 32-bit boot a few years ago, with the latest ones also dropping support for 32-bit applications, so the whole stack has to be 64-bit. Apple preemptively killed 32-bit apps even on hardware that could technically still support them, and their current hardware doesn't support 32-bit at all, not even in Rosetta.
Why not just start the CPU in "long mode", which is what everyone is using it for, in the first place?
These newer ARM processors support 32-bit code at EL0 only (userspace). That seems like a reasonable approach for x86 as well, and the FreeBSD announcement has this to say:
> There is currently no plan to remove support for 32-bit binaries on 64-bit kernels.
So for the moment, you can run 32-bit applications just fine.
> These newer ARM processors support 32-bit code at EL0 only (userspace).
Even that is being phased out. Starting from 2021, ARM's reference cores mostly dropped 32-bit support, with the Cortex-X2 big cores and Cortex-A510 small cores being 64-bit only, and only the medium Cortex-A710 cores retaining the ability to run 32-bit code in userspace. With the next generation the medium cores lost 32-bit support, but the small cores gained it back, so the chips can barely still run 32-bit code, and only on the least performant cores. I'm sure they'll drop it altogether as soon as they think the market will accept it.
I think we need to be more specific, since on x86 the 64-bit instruction set is a superset of the 32-bit one. On ARM they are different, with AArch32 and AArch64 being two separate sets of instructions. It is AArch32 that is being dropped and no longer required.
> Why not just start the CPU in "long mode", which is what everyone is using it for, in the first place?
Are you sure it's "everyone"? No doubt it's "overwhelming majority", but that's not the same, and "everyone" is a lot of people.
As I understand it, the compatibility is perhaps slightly inconvenient for a small group of developers, but not that inconvenient, and generally relatively cheap. So why not keep it?
ARM has a lot less history than x86. Yes, it goes back to 1985 in the BBC Micro, but didn't really see serious usage as a "generic platform" until the late 90s/early 2000s.
Do you ever boot your machine only into 32-bit mode? Most UEFI firmware I've seen launches /EFI/BOOT/BOOTX64.EFI, which means your bootloader is in long mode even if you then switch back to 32-bit mode (and the UEFI firmware is likely already there). I don't know whether emulating a traditional BIOS avoids this or not; I've never thought about it. But I'd guess the vast majority of people are passing through long mode and then back again if they installed a 32-bit OS.
I'm not saying remove the possibility of a 32-bit userspace, and nor, it seems, is FreeBSD. I am saying don't bother with a 32-bit kernel. Otherwise you're limited to a 4GB address space and hacks like PAE. No harm in running 32-bit binaries if you want; I think Solaris actually left userspace mostly 32-bit even on their 64-bit OS, unless the application really benefited from a 64-bit setup. Not sure what they do now, it's been a while since I used it.
It's also arguable that ASLR only works well in 64-bit mode.
> Do you ever boot your machine only into 32-bit mode?
No, but I'm not everyone.
Or let me put it this way: I don't know the motivation for why the things work like they do. It's entirely possible there is no good reason at all. Or maybe there is? Either way, I wouldn't assume anything so quickly.
Intel is currently trying to find out if it's everyone. The whole point of announcing the idea of dropping the old modes years before you plan to actually do so is to give a chance for people to speak up if it's a problem.
Dropping 32-bit macOS apps let them drop the second copy of every system library and, possibly even more importantly, support for the old Objective-C runtime that was weird and different from the runtime used on every other platform. Supporting 32-bit Wine is a much smaller burden.
> Apple preemptively killed 32-bit apps even on hardware that could technically still support them, and their current hardware doesn't support 32-bit at all, not even in Rosetta.
> There's still some 32 bit software (usually proprietary stuff, and/or libraries used by Wine).
Wine is going to stop being an issue somewhat soon. The Wine developers are adapting it to work more like Windows, by making even 32-bit applications call the 64-bit libraries.
I expect that, not long after Wine switches to always using 64-bit libraries, distributions (probably starting with faster-moving ones like Fedora) will completely (or almost completely, leaving only glibc and its dependencies) drop the 32-bit libraries. And we're already close to the Y2038 deadline, which gives distributions another incentive to drop 32-bit libraries (instead of having to go through a migration to the 64-bit time API, similar to the migration to 64-bit file offsets we had some time ago).
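For anyone curious what that migration looks like in practice, here's a minimal sketch for a 32-bit glibc target (assuming glibc 2.34 or later, where _TIME_BITS=64 is available and requires _FILE_OFFSET_BITS=64 to be set as well):

```c
/* Build on a 32-bit glibc system, e.g.:
 *   cc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 y2038.c
 * Without the two defines, a 32-bit build reports 4-byte time_t and
 * off_t; with them, both become 8 bytes and survive 2038. */
#include <stdio.h>
#include <time.h>
#include <sys/types.h>

int main(void)
{
    printf("sizeof(time_t) = %zu\n", sizeof(time_t));
    printf("sizeof(off_t)  = %zu\n", sizeof(off_t));
    return 0;
}
```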
Debian is moving their 32-bit releases to 64-bit time; it's a big transition, but it's expected to be ready for the next release. They've also announced, though, that they may be forced to drop their i686 release quite soon.
It's increasingly hard to test 32-bit -- I run a modern Ubuntu, and they don't even provide 32-bit versions of most libraries any more, so I have to wonder why it's worth putting significant hours into supporting 32-bit when it's unclear whether there are even any users.
I agree with you: we have appropriate types now, so we don't need to use long anymore. long also has a different size in Windows x64 builds than in equivalent Linux x64 builds (see also LLP64 vs LP64 vs ILP64).
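A quick sketch of the data models in question (the sizes shown are the standard ones for the common implementations of each model):

```c
/* 64-bit data models (illustrative; sizes in bytes):
 *
 *           int  long  long long  void*
 *   LP64     4    8        8        8   -- Linux, the BSDs, macOS
 *   LLP64    4    4        8        8   -- 64-bit Windows
 *   ILP64    8    8        8        8   -- rare, mostly historical
 *
 * So a pointer stored in a long survives LP64 but is truncated under
 * LLP64, and code that assumes sizeof(int)==4 would break under ILP64. */
#include <stdio.h>

int main(void)
{
    printf("int=%zu long=%zu long long=%zu void*=%zu\n",
           sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
    return 0;
}
```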
No matter how technically correct that may be, people just don't like doing work for some theoretical future benefit that may never materialize. Supporting 32-bit imposes a cost I have to pay right now, today, for the benefit of very few people (and a shrinking pool at that). Why not spend the effort on something more productive?
It is also likely that general purpose CPUs will use 64-bit integer registers and address spaces for the rest of our lifetimes. Computing has already (for the most part) converged on a standard: Von Neumann architecture, 8-bit bytes, two's complement integers, IEEE floating point, little-endian. The number of oddballs has dramatically decreased over the past few decades and the non-conforming CPUs manufactured every year are already a rounding error.
FWIW, if a transition is ever made to 128-bit, it will likely be the last: 2^128 bytes is about 3.4×10^38, far more memory than could plausibly ever be built.
> No code should cast a pointer to a `long`, ever. Trying to excuse it it with "we don't support 32-bit" is deeply wrong.
It's deeply wrong as an excuse but for a different reason: casting a pointer to long also works on 32-bit. The only mainstream architecture in which it doesn't work is 64-bit Windows.
Does it matter to remove portability to platforms we don't care about any more? It's a question with no objectively/universally right or wrong answer. The obvious counterpoint I'd mention is big endian: is it worth fixing your endianness bugs? Or is it ok to assume little endian since the vast (VAST) majority of software does not care about the very few remaining big endian platforms? 32-bit could become a similar situation. It happened with 16-bit, right? Nobody today cares whether their modern code is portable to 16-bit machines.
More generally, I disagree with absolutist takes on programming. There's no right or wrong in programming. There's just requirements and choices.
The C++ filesystem section contains the following, which basically means you can’t do safe IO when other programs are running on the same computer:
“ The behavior is undefined if the calls to functions in this library introduce a file system race, that is, when multiple threads, processes, or computers interleave access and modification to the same object in a file system.”
One of the reasons OpenBSD supports multiple architectures, besides simply having the OS run on them, is to catch bugs based on non-portable assumptions: endian stuff, 32 vs 64 bit, memory alignment stuff, etc.
> And incorrect casts from pointers to longs (instead of using uintptr_t).
That's not an issue on mainstream Linux, since uintptr_t and long always have the same size, both on 32-bit and on 64-bit. It's an issue only when using some new experimental stuff which uses two machine words per pointer, or when trying to port to Windows (where long is always 32-bit even when compiling for 64-bit).
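If a codebase really wants to keep relying on that Linux assumption rather than switching to uintptr_t, one hedge (just a sketch, not what any particular project does) is to state the assumption as a compile-time check, so a port to LLP64 or to a target with wider-than-word pointers fails loudly instead of truncating addresses at run time:

```c
#include <stdint.h>

/* Make the "a pointer fits in a long" assumption explicit (C11).
 * Passes on ILP32 and LP64 (mainstream 32- and 64-bit Linux);
 * fails at compile time on LLP64 (64-bit Windows) or on targets
 * whose pointers are wider than a machine word. */
_Static_assert(sizeof(long) >= sizeof(void *),
               "this code assumes a pointer fits in a long");
```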
At one point 16 bits were universal, and then 32 bits came along and broke everything. We learned some lessons that day. Then 32 bits was universal for a while and 64 bits came along and broke everything. I guess we forgot those lessons, but OK well at least we learned them again for real this time.
What a relief it is that we have reached the final architecture and will never have to worry about anything like that again!
At some point in history a "byte" could also be considered to mean any number of things. There was great value in standardizing on 8 bits.
There is of course no clean separation between the two, but by many definitions the "64-bit era" has already lasted longer than the "32-bit era". The real lesson from history is that we moved for specific practical reasons, and those reasons don't exist with 64-bit architectures today. 64-bit is, and most likely will remain, the standard size for the foreseeable future. I predict that even embedded will (slowly) move to 64-bit because, again, there is great value in standardizing these kinds of things.
I agree that 64-bit is likely to be a very long-running standard. But, given that on 64-bit, there's no agreement on sizeof(long) - and it's impossible to change at this point because it would be a massive breaking ABI change for any of the relevant platforms - the only sensible standardization approach for C now is to deprecate short/int/long altogether and always use int16_t/int32_t/int64_t.
It helps to look at the history of C integer type names and its context. Its origin lies not with C, but with ALGOL-68 - as the name implies, this is a language that was standardized in 1968, although the process began shortly after ALGOL-60 Report was published. That is, it hails from that very point in history when even 8-bit bytes weren't really standard yet nor even the most widespread - indeed, even the notion of storing numbers in binary wasn't standard (lots of things still used BCD as their native encoding!). ALGOL-60 only had a single integer type, but ALGOL-68 designers wanted to come up with a facility that could be adapted in a straightforward way to all those varied architectures. So they came up with a scheme they called "sizety", whereby you could append any number of SHORT or LONG modifiers to INT and REAL in conformant code. Implementations could then use as many distinct sequences as they needed to express all of their native types, and beyond that adding more SHORT/LONG would simply be a no-op on that platform. K&R C (1978) adopted a simplified version of this, limiting it to a single "short" or "long" modifier.
Obviously, this arrangement made sense in a world where platforms varied so widely on one hand, and the very notion of "portable code" beyond basic numeric algorithms was still in its infancy on the other. Much less so 40 years later, though, so the only reason we still use this naming scheme is backwards compatibility. Why use it for new code, then, when we have had explicitly sized integer types since C99?
Specifically on the topic of "avoid using long" I don't disagree; I was mostly replying to the general sentiment of "you should never assume 64 bit because this changed in the past and will change in the future". That said, if you're writing software specifically for Unix systems (as much software is) then it de-facto works, so it's fine. That's why people keep doing it: because it works in all cases where the software runs.
Starting C in 2024 is like starting Game of Thrones in the middle of Season 5. Dubbed in Esperanto.
> At one point 16 bits were universal, and then 32 bits came along and broke everything.
That's not an issue in mainstream Linux, since 16-bit Linux (ELKS) never caught on. Other than ELKS and perhaps some new experimental stuff, since its first release Linux always had long and pointer with the same size.
It's trivial to roll your own basic <stdint.h> with those types for existing C89 implementations.
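A minimal sketch of what that might look like, assuming a conventional ILP32 target (the whole point being that these typedefs encode per-platform knowledge that <stdint.h> gives you for free on C99 and later):

```c
/* my_stdint.h -- tiny stand-in for <stdint.h> on a C89 compiler.
 * The typedefs below assume a conventional ILP32 target (8-bit char,
 * 16-bit short, 32-bit int and long); adjust per platform. */
#ifndef MY_STDINT_H
#define MY_STDINT_H

typedef signed char    int8_t;
typedef unsigned char  uint8_t;
typedef short          int16_t;
typedef unsigned short uint16_t;
typedef int            int32_t;
typedef unsigned int   uint32_t;

/* 64-bit types need a compiler extension on C89 (long long, __int64,
 * ...), or may simply not exist -- see the caveat below. */

typedef unsigned long  uintptr_t;  /* fine for ILP32, not universal */

#endif /* MY_STDINT_H */
```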
Unless they don't provide the corresponding type natively at all (which can be the case for 64-bit types, and on some 16-bit platforms). But then you have a bigger problem anyway.
This is probably a good sign for anyone out there interested in getting into a long-tail niche industry, similar to how COBOL lives on in banking. There are probably going to be places we'd never expect still running 32-bit applications for decades to come (I'm thinking industrial automation shops and the like), and getting in on the ground floor of developing for NetBSD would be a high-powered signal that you're interested in helping support these ancient machines.
(Do I recommend this myself? Well, I think techies find all kinds of weird things fun. :))
> Due to the increased usage of OpenBSD/amd64, as well as the age and practicality of most i386 hardware, only easy and critical security fixes are backported to i386. The project has more important things to focus on.
This has been on the i386 page for quite a while. I haven't specifically heard of the developers dropping i386, but I also don't follow the mailing lists as much as I used to.
I really hope OpenBSD doesn't drop i386 as it's my go-to operating system for a lot of otherwise not very useful hardware - retro PCs, 32-bit laptops, etc. One can take any bog standard Pentium or higher box and turn it into a usable machine with OpenBSD.
My understanding is that they will support the architecture as long as one of the OpenBSD developers doesn't mind maintaining it. It isn't like Linux, where the developers will make a top-down decision. With OpenBSD, if a developer has the physical hardware and the will to do it, it will remain supported until that is no longer true. Same with NetBSD, though I think NetBSD is a little more loose about 'having physical hardware.' The spirit of OpenBSD is: no, you need to build on physical hardware. NetBSD doesn't mind the primary mechanism of testing and development being a QEMU image.
Which hopefully gives some comfort. There isn't a shortage of cheap 32-bit x86 machines. The bigger problem is probably something like SPARC machines, where getting your hands on one in good condition can still be pricey. But you can find a functional 32-bit x86 desktop machine on eBay for 30 bucks.
For open-source operating systems there will always remain the older stable 32-bit releases like FreeBSD 16 in this case.
Even if there are still plenty of obsolete 32-bit CPUs running Linux or *BSD, it has been many years since I last saw a case where using them was the right technical choice.
When a UNIX-like OS is desired, it is a mistake to use anything less than a very cheap 64-bit CPU, like one with ARM Cortex-A55 cores.
For the many embedded applications where a 32-bit CPU is appropriate, either one of the many open-source RTOSes or a bare-metal program should be used; the latter is the best choice in many cases. For such embedded applications, a UNIX-like OS is much too bloated and does not allow deterministic control of the hardware.
There is an in-between space, like the RPi Zero, where you can drop in an RT Linux kernel (and RT modules). 32-bit embedded Linux is still really quite widespread, and I expect RISC-V to breathe some new life into it as well. Sometimes a lack of hard (or any) real-time requirements combines with price, and occasionally power (where I expect RISC-V to come in), to allow for embedded Linux. It usually requires less development time too. Think how many home routers and security systems are still built off of 32-bit Linux. I don't know of any built off of FreeBSD though, so maybe a good call?
Actually I've been surprised at how much 64-bit dominates RISC-V even in the very lowest end of Linux-capable embedded. Perhaps I'm just in a bubble, though; not like I work in actual embedded stuff professionally.
The price difference between something with Cortex-A55 64-bit cores and something with obsolete 32-bit cores is negligible.
For modern home routers or security appliances you want to be able to easily execute firewalls with complex rule sets at multi-gigabit-per-second throughput, so that the router's throughput is limited by the Ethernet or WiFi interfaces rather than by the CPU.
For this, you really need something like Cortex-A55. The routers with obsolete CPUs limit the performance without any worthwhile price reduction.
On the other hand, for many embedded projects where a 32-bit CPU is the right choice, the developers choose Linux just because they are familiar with it and they are too lazy to read the manuals of some free real-time operating system, even if in fact the latter might require less development work and maintenance work for their software project.
If even the A55 is too chonky, there is also the A34, which AFAIK is the smallest 64-bit ARM core. Unfortunately I haven't seen anyone actually releasing chips based on it :(
There are some SoCs with A35 cores though; that is probably the lowest end that is actually available.
Lots of "fatter" embedded systems run on FreeBSD due to the more favorable license situation. Not so much SOHO routers, but big industrial routers will use it. Juniper is a notable example. But also 1U firewalls, SANs, etc... are often FreeBSD based.
Well that's the tragedy of not contributing back upstream then.
If there were companies actively maintaining the 32 bit port, there would be no plans of dropping it.
32-bit x86 CPUs haven't been made in years, companies building products based on FreeBSD switched to 64-bit x86 or to other architectures long ago. It's not that work on i386 is being done but kept in private repos and not upstreamed -- work on i386 just isn't being done at all.
> Even if there are still plenty of obsolete 32-bit CPUs running Linux or *BSD, it has been many years since I last saw a case where using them was the right technical choice.
I have a couple of WD NASes which chug along on 32-bit PowerPC, a patched kernel, and the last version of Debian before they dropped support for 32-bit PowerPC. I don't really plan to replace them before they break... I do need to work out a backup "story" (as the kids would say), but they just hold old TV shows, so if they let out the magic smoke nothing important is lost.
Would be nice if I could keep them up to date on the latest and greatest but nobody cares about the 32-bits anymore...
I think it's reasonable to eventually treat 32-bit embedded in the same manner we treat 8-bit and 16-bit embedded today: as "weird" architectures that have their own bespoke OSes (even if it's a fork of Linux or BSD), and on which "normal" portable code is not really expected to compile or run without modifications.
The GP did say embedded. The RPi Zero is still selling, and it's popular, and 32-bit is still the big end of microcontrollers. But almost none of those have memory management units, so normal Linux is off the table (not sure about FreeBSD), though it is possible to build a Linux for those platforms.
Broadcom released new quad-core 32-bit router SoCs as recently as last year. The RPi may have moved on, but I wouldn't be surprised if there is a niche for an "RPi -1", what with inflation. The RPi Zero 2 is 64-bit and the same price as the old one. But cheaper is always cheaper. With microcontrollers growing to 32-bit, you still see 8-bit used all the time just for the price difference. If the 32-bit part is even a little cheaper on a high-volume item, it will be used, and routers lean heavily on free operating systems.
I was referring to the BCM47722, but I was basing that on the assumption that all the other 64-bit SoCs had "64-bit" on their title pages, and that one didn't. But it is also 64-bit. After poking around, it does indeed seem like they don't make anything 32-bit with an MMU anymore.
I had to switch away from macOS when they retired the 32-bit system libraries, because I simulate my armv7/avr embedded firmware natively. On Linux I still can, but soon not on other systems anymore.
Sounds a lot like what RIOT is doing, but there support for 64 bit `native` was just merged recently, so you can also build for the virtual `native64` board and run your code as a 64 bit Linux executable.
(Support for macOS was removed a few years ago due to a lack of maintenance and no active developers with access to a macOS machine)
But that's pretty much retro-computing these days, I don't see why anyone would use those on new projects when Cortex-M0+ parts are much cheaper and more power efficient.
I was happily running Arch on my netbook until they dropped support. I ran the Arch32 variant for a while, but could see the writing on the wall and got an irritatingly larger laptop only for its 64 bitness. The netbook sits in a drawer now.
I routinely install old 32bit applications in wine because they never got a 64bit version. It's literally the only reason I haven't tried running 64-only.
We’re not talking about 32-bit applications though. We’re talking about a 32-bit kernel.
The 64-bit FreeBSD kernel is perfectly capable of running 32-bit applications, and will be so for the foreseeable future. The only reason I can see to support a 32-bit kernel is to allow for installation on hardware that is approaching decades of obsolescence, so again I ask, what is the point?
Those old machines use so much more energy (and thus CO2) to do the same work that we come out ahead environmentally within a year of trashing them. Of course newer machines often use more energy because of bloated software, but assuming you run the same software, newer machines are much more energy efficient.
In many places, 90% or more of a computer's environmental footprint is its fabrication. In France, Canada, etc., using a computer as long as possible is the right choice from an environmental standpoint. Of course, even 10-, 12- and 15-year-old computers are generally 64-bit now (my home server is of 2007 vintage).
Depends on the machine. Big desktop systems and servers yes, but early Atom-based netbooks/"nettops" are 32-bit based and could still be useful as thin clients or for very light office work. You won't want to run a modern web browser on them though.
If you use a light environment, a CPU throttler such as cpufreq, and software like Dillo, Sylpheed, Pidgin... the power usage will come right down.
For general news there is http://68k.news, gopher://magical.fish... and several others such as https://text.npr.org. For gaming, people might like Minetest or some light games, at least once a week, to disconnect from AAA games. The setup wouldn't be as fancy as a modern machine with FFox/Chrom* and Steam, but for sure it will be useful, and gamers would discover crazy game mechanics not seen anywhere else, such as Cataclysm DDA: Bright Nights.
As they state in the thread, creating a new computer is far more wasteful.