Future of 32-bit platform support in FreeBSD (freebsd.org)
123 points by PaulHoule 11 months ago | 110 comments



Fedora has been slowly removing i686: https://fedoraproject.org/wiki/Changes/Stop_Building_i686_Ke... https://fedoraproject.org/wiki/Changes/EncourageI686LeafRemo...

There's still some 32 bit software (usually proprietary stuff, and/or libraries used by Wine). Removing 32 bit libraries especially just breaks this software and you don't really have any option except to get the vendor to fix it.

But more importantly I sometimes find 64 bit assumptions in code. One recent example was in this new package that we're trying to add to Fedora: https://bugzilla.redhat.com/show_bug.cgi?id=2263333 It doesn't compile on i686 with some obvious compile issues. Upstream isn't particularly interested in fixing them.

Typically the issues are incorrect assumptions about the size of 'long' and 'size_t'. And incorrect casts from pointers to longs (instead of using uintptr_t).
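
For illustration (a generic sketch, not code from the bug report above), the problematic pattern and its portable replacement look roughly like this:

    #include <stdint.h>
    #include <stdio.h>

    void print_addr(void *p)
    {
        /* Truncates wherever sizeof(long) < sizeof(void *), e.g. 64-bit Windows. */
        unsigned long bad = (unsigned long)p;

        /* uintptr_t (C99) is defined to round-trip a pointer on any
           platform that provides it. */
        uintptr_t good = (uintptr_t)p;

        printf("%lx vs %jx\n", bad, (uintmax_t)good);
    }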

Does any of this matter still or should we embrace this 64 bit only world? Will CPU makers remove i686 support? I guess we're going to find out.


> Does any of this matter still or should we embrace this 64 bit only world? Will CPU makers remove i686 support? I guess we're going to find out.

We're already on this path. Intel has proposed an "X86-S" subset which drops support for booting into 16/32-bit modes (though 32-bit applications would still be supported under a 64-bit OS). ARM Cortex-A cores already dropped support for 32-bit boot a few years ago, and the latest ones are also dropping support for 32-bit applications, so the whole stack has to be 64-bit. Apple preemptively killed 32-bit apps even on hardware that could technically still support them, and their current hardware doesn't support 32-bit at all, not even in Rosetta.


For the modern server/desktop and even laptop, that's also no bad thing. It is somewhat ridiculous that UEFI BIOSes, internally, still boot in 16-bit real mode and have to do all the steps your BIOS bootloader used to do to set up a 64-bit environment ready to go: https://github.com/tianocore/edk2/blob/edc6681206c1a8791981a..., https://github.com/tianocore/edk2/blob/edc6681206c1a8791981a..., https://github.com/tianocore/edk2/blob/edc6681206c1a8791981a...

Why not just start the CPU in "long mode", which is what everyone is using it for, in the first place?

These newer ARM processors support 32-bit code at EL0 only (userspace). That seems like a reasonable approach for x86 as well, and the FreeBSD announcement has this to say:

> There is currently no plan to remove support for 32-bit binaries on 64-bit kernels.

So for the moment, you can run 32-bit applications just fine.


> These newer ARM processors support 32-bit code at EL0 only (userspace).

Even that is being phased out. Starting from 2021, ARM's reference cores mostly dropped 32-bit support, with the Cortex-X2 big cores and Cortex-A510 small cores being 64-bit only, and only the medium Cortex-A710 cores retaining the ability to run 32-bit code in userspace. With the next generation the medium cores lost 32-bit support, but the small cores gained 32-bit support again, so the chips can barely still run 32-bit code, but it's relegated to only running on the least performant cores. I'm sure they'll drop it altogether as soon as they think the market will accept it.


I think we need to be more specific, since on x86 the 64-bit instructions are a superset of the 32-bit ones. On ARM they are different, with AArch32 and AArch64 being two separate instruction sets. It is AArch32 that is being dropped and no longer required.


They’re mostly a superset, but some instructions were dropped, like the BCD opcodes.

https://en.m.wikipedia.org/wiki/Intel_BCD_opcodes


> Why not just start the CPU in "long mode", which is what everyone is using it for, in the first place?

Why not indeed?

https://www.intel.com/content/www/us/en/developer/articles/t...


Not even a new idea!

https://en.wikipedia.org/wiki/Intel_80376

> It differed from the 80386 in not supporting real mode (the processor booted directly into 32-bit protected mode)

... although it didn't support paging.


> Why not just start the CPU in "long mode", which is what everyone is using it for, in the first place?

Are you sure it's "everyone"? No doubt it's "overwhelming majority", but that's not the same, and "everyone" is a lot of people.

As I understand how this works, the compatibility is perhaps slightly inconvenient to a small group of developers, but not that inconvenient and generally relatively cheap. So why not have and keep it?

ARM has a lot less history than x86. Yes, it goes back to 1985 in the BBC Micro, but didn't really see serious usage as a "generic platform" until the late 90s/early 2000s.


Do you ever boot your machine only into 32-bit mode? Most UEFI firmware I've seen is launching /EFI/BOOT/BOOTX64.EFI, which means your bootloader is in long mode even if you then switch back to 32-bit mode (and UEFI is likely already there). I don't know if emulating a traditional bios avoids this or not, never thought about it. But I'd take a guess that the vast majority of people are passing through long mode and then back again if they installed a 32-bit OS.

I'm not saying remove the possibility of 32-bit userspace, and nor, it seems, is FreeBSD. I am saying don't bother with a 32-bit kernel. Otherwise you're limited to a 4GB address space and hacks like PAE. There's no harm in running 32-bit binaries if you want; I think Solaris actually left userspace as 32-bit by default even on their 64-bit OS, unless the application really benefited from a 64-bit setup. Not sure what they do now, it's been a while since I used it.

It's arguable ASLR only works well in a 64-bit mode, as well.


> Do you ever boot your machine only into 32-bit mode?

No, but I'm not everyone.

Or let me put it this way: I don't know the motivation for why the things work like they do. It's entirely possible there is no good reason at all. Or maybe there is? Either way, I wouldn't assume anything so quickly.


Intel is currently trying to find out if it's everyone. The whole point of announcing the idea of dropping the old modes years before you plan to actually do so is to give a chance for people to speak up if it's a problem.


Rosetta supports 32-bit code execution via a very basic (segment-level) facility, designed for use by Wine.


TIL. I'm surprised they went out of their way to accommodate emulating 32-bit Windows apps, when they cut off 32-bit Mac apps completely.


Dropping 32-bit macOS apps let them drop the second copy of every system library, and possibly even more importantly, support for the old obj-c runtime that was weird and different from the runtime used on every other platform. Supporting 32-bit wine is a much smaller burden.


Which is especially useful on phones that are much more memory constrained than desktops.


> Apple preemptively killed 32-bit apps even on hardware that could technically still support them, and their current hardware doesn't support 32-bit at all, not even in Rosetta.

Slightly related: visionOS references "iOS" for old 32-bit apps when you view them on the App Store. https://rr.judge.sh/Acaciarat%2FIMG_0002.PNG


> There's still some 32 bit software (usually proprietary stuff, and/or libraries used by Wine).

Wine is going to stop being an issue somewhat soon. The Wine developers are adapting it to work more like Windows, by making even 32-bit applications call the 64-bit libraries.

I expect that, not long after Wine switches to always using 64-bit libraries, distributions (probably starting with faster-moving ones like Fedora) will completely (or almost completely, leaving only glibc and its dependencies) drop the 32-bit libraries. And we're already close to the Y2038 deadline, which gives distributions another incentive to drop 32-bit libraries (instead of having to go through a migration to the 64-bit time API, similar to the migration to 64-bit file offsets we had some time ago).
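
For reference, on 32-bit glibc (2.34 and later) the time_t migration is opt-in per build, much like the old large-file switch was; a small sketch (the macro names are glibc's, so this is about the Linux side rather than anything FreeBSD-specific):

    /* Build with e.g.: gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check.c
       (glibc requires _FILE_OFFSET_BITS=64 before it honours _TIME_BITS=64.) */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* With the macros above, time_t is 8 bytes even on i686; without
           them it stays 4 bytes and rolls over in January 2038. */
        printf("sizeof(time_t) = %zu\n", sizeof(time_t));
        return 0;
    }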


Debian is moving their 32-bit releases to 64-bit time; it's a big transition, but it's expected to be ready for the next release. They've also announced that they may be forced to drop their i686 release quite soon.


It's increasingly hard to test 32 bits -- I run a modern Ubuntu, and they don't even provide 32-bit versions of most libraries any more, so I have to wonder why it's worth putting significant hours into supporting 32 bits when it's unclear if there are even any users.


How about effort to remove undefined behavior based on non-portable assumptions?

No code should cast a pointer to a `long`, ever. Trying to excuse it with "we don't support 32-bit" is deeply wrong.

EDIT: Apparently this position is unpopular. I am curious why people feel differently.


I agree with you; we have appropriate types now, so we don't need to use long anymore. long also has a different size in Windows x64 builds vs equivalent regular Linux x64 builds (see LLP64 vs LP64 vs ILP64).
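
A quick way to see the data-model split is to print the sizes on each platform; a minimal sketch, to be compiled once under MSVC/MinGW on Win64 and once under gcc on x86-64 Linux:

    #include <stdio.h>

    int main(void)
    {
        /* LP64  (64-bit Linux/BSD/macOS): int=4, long=8, void*=8
           LLP64 (64-bit Windows):         int=4, long=4, void*=8 */
        printf("int=%zu long=%zu long long=%zu void*=%zu\n",
               sizeof(int), sizeof(long), sizeof(long long), sizeof(void *));
        return 0;
    }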


This is simply correct and I see no excuse for objecting.

Hell I still try to make sure stuff works on both big and little endian.
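
One habit that makes the big/little-endian question moot in I/O code is assembling values byte by byte instead of type-punning buffers; a small sketch:

    #include <stdint.h>

    /* Reads a little-endian 32-bit value from a byte buffer. The shifts give
       the same result on big- and little-endian hosts, unlike casting the
       buffer to uint32_t* and dereferencing it. */
    static uint32_t read_le32(const unsigned char *p)
    {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }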


No matter how technically correct that may be, people just don't like doing work for some theoretical future benefit that may never materialize. Supporting 32-bit imposes a cost I have to pay right now, today, for the benefit of very few people (and a shrinking pool at that). Why not spend the effort on something more productive?

It is also likely that general purpose CPUs will use 64-bit integer registers and address spaces for the rest of our lifetimes. Computing has already (for the most part) converged on a standard: Von Neumann architecture, 8-bit bytes, two's complement integers, IEEE floating point, little-endian. The number of oddballs has dramatically decreased over the past few decades and the non-conforming CPUs manufactured every year are already a rounding error.

FWIW If a transition is ever made to 128-bit that will be the last. There aren't 128-bits worth of atoms in the visible universe.


Google says "There are between 10^78 to 10^82 atoms in the observable universe." Even the lower number is (just slightly) greater than 2^256.

(Imagine installing 2^256 sound cards in a system ...)


That seems non-responsive, as none of those things mean that a pointer and a `long` have the same size. Did you intend to respond to someone else?


> No code should cast a pointer to a `long`, ever. Trying to excuse it it with "we don't support 32-bit" is deeply wrong.

It's deeply wrong as an excuse but for a different reason: casting a pointer to long also works on 32-bit. The only mainstream architecture in which it doesn't work is 64-bit Windows.


Sure, but just because something works doesn't mean you should do it. Better types have been around for what 25 years now?


Does it matter to remove portability to platforms we don't care about any more? It's a question with no objectively/universally right or wrong answer. The obvious counterpoint I'd mention is big endian: is it worth fixing your endianness bugs? Or is it ok to assume little endian since the vast (VAST) majority of software does not care about the very few remaining big endian platforms? 32-bit could become a similar situation. It happened with 16-bit, right? Nobody today cares whether their modern code is portable to 16-bit machines.

More generally, I disagree with absolutist takes on programming. There's no right or wrong in programming. There's just requirements and choices.


Casting a pointer to long is incorrect on any LLP64 platform, which includes all modern versions of Windows.


Well, yes, I'd like to do that, but programmers make mistakes and it isn't easy to test.

Also, you can't reasonably remove all undefined behaviour. In C++ you basically can't do file IO without undefined behaviour.


As a fellow UB nitpicker, I'd like to hear more about your favorite UB triggers. E.g. in the space of file IO.


The C++ filesystem section contains the following, which basically means you can't do safe IO when other programs are running on the same computer:

“The behavior is undefined if the calls to functions in this library introduce a file system race, that is, when multiple threads, processes, or computers interleave access and modification to the same object in a file system.”


I'm not sure that means much, since that's true of literally every language that does file I/O.


One of the reasons (besides having the OS run on the hardware) that OpenBSD has multiple architectures is to catch bugs based on non-portable assumptions - endian stuff, 32 vs 64 bit, memory alignment, etc.


On GCC, as far as I know, pointers will always cast to long and back OK. I'd expect the same from Clang.

No, that's not a portable behaviour but I can see that it's valid if you're targeting Linux.

uintptr_t is more polite, though.


Ubuntu dropped 32-bit support several years ago. AFAIK they only support the 32-bit libraries required by Steam and Wine.


I keep a very old Fedora VM around for this. Not ideal.


> And incorrect casts from pointers to longs (instead of using uintptr_t).

That's not an issue on mainstream Linux, since uintptr_t and long always have the same size, both on 32-bit and on 64-bit. It's an issue only when using some new experimental stuff which uses two machine words per pointer, or when trying to port to Windows (where long is always 32-bit even when compiling for 64-bit).


At one point 16 bits were universal, and then 32 bits came along and broke everything. We learned some lessons that day. Then 32 bits was universal for a while and 64 bits came along and broke everything. I guess we forgot those lessons, but OK well at least we learned them again for real this time.

What a relief it is that we have reached the final architecture and will never have to worry about anything like that again!


At some point in history a "byte" could also be considered to mean any number of things. There was great value in standardizing on 8 bits.

There is of course no clean separation between the two, but by many definitions the "64-bit era" has already lasted longer than the "32-bit era". The real lesson from history is that we moved for specific practical reasons, and those reasons don't exist with 64-bit architectures today. 64-bit is, and most likely will remain, the standard size for the foreseeable future. I predict that even embedded will (slowly) move to 64-bit because, again, there is great value in standardizing these types of things.


I agree that 64-bit is likely to be a very long-running standard. But, given that on 64-bit, there's no agreement on sizeof(long) - and it's impossible to change at this point because it would be a massive breaking ABI change for any of the relevant platforms - the only sensible standardization approach for C now is to deprecate short/int/long altogether and always use int16_t/int32_t/int64_t.

It helps to look at the history of C integer type names and its context. Its origin lies not with C, but with ALGOL-68 - as the name implies, this is a language that was standardized in 1968, although the process began shortly after ALGOL-60 Report was published. That is, it hails from that very point in history when even 8-bit bytes weren't really standard yet nor even the most widespread - indeed, even the notion of storing numbers in binary wasn't standard (lots of things still used BCD as their native encoding!). ALGOL-60 only had a single integer type, but ALGOL-68 designers wanted to come up with a facility that could be adapted in a straightforward way to all those varied architectures. So they came up with a scheme they called "sizety", whereby you could append any number of SHORT or LONG modifiers to INT and REAL in conformant code. Implementations could then use as many distinct sequences as they needed to express all of their native types, and beyond that adding more SHORT/LONG would simply be a no-op on that platform. K&R C (1978) adopted a simplified version of this, limiting it to a single "short" or "long" modifier.

Obviously, this arrangement makes sense in a world where platforms vary so widely on one hand, and the very notion of "portable code" beyond basic numeric algorithms is still in its infancy. Much less so 40 years later, though, so the only reason why we still use this naming scheme is backwards-compatibility. Why use it for new code, then, when we had explicitly sized integer types since C99?
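
For anyone who hasn't made the switch, the fixed-width types come with matching printf macros in <inttypes.h>, so you never have to know whether int64_t is a long or a long long on the current platform; a short sketch:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t a = 1 << 30;         /* exactly 32 bits on every platform */
        int64_t b = (int64_t)a * 4;  /* exactly 64 bits on every platform */

        /* PRId32/PRId64 expand to the right conversion specifiers. */
        printf("a=%" PRId32 " b=%" PRId64 "\n", a, b);
        return 0;
    }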


Specifically on the topic of "avoid using long" I don't disagree; I was mostly replying to the general sentiment of "you should never assume 64 bit because this changed in the past and will change in the future". That said, if you're writing software specifically for Unix systems (as much software is) then it de-facto works, so it's fine. That's why people keep doing it: because it works in all cases where the software runs.

Starting C in 2024 is like starting Game of Thrones in the middle of Season 5. Dubbed in Esperanto.


More like dubbed in Volapuk. Dubbed in Esperanto would be Smalltalk.


> At one point 16 bits were universal, and then 32 bits came along and broke everything.

That's not an issue in mainstream Linux, since 16-bit Linux (ELKS) never caught on. Other than ELKS and perhaps some new experimental stuff, since its first release Linux always had long and pointer with the same size.


I thought the Linux kernel enforces/requires a pointer to fit into a long, so any platform with Linux will define long type as appropriately sized?


I'd much prefer C users not use long at all. What's wrong with size_t and uint32_t and uint64_t?


> What's wrong with size_t and uint32_t and uint64_t?

An extra #include and more typing?

Plus, all the C89 users are out of luck...


It's trivial to roll your own basic <stdint.h> with those types for existing C89 implementations.

Unless they don't provide the corresponding type natively at all (which can be the case for some 16-bit platforms and 64-bit). But then you have a bigger problem anyway.
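
Something along these lines; a sketch only, since the typedef widths are assumptions that have to be verified per compiler and target, which is exactly why it belongs in one header:

    /* my_stdint.h -- minimal stand-in for <stdint.h> on a C89 compiler.
       The widths below assume an ILP32 target whose compiler offers the
       common "long long" 64-bit extension; adjust per platform. */
    #ifndef MY_STDINT_H
    #define MY_STDINT_H

    typedef signed char        int8_t;
    typedef unsigned char      uint8_t;
    typedef short              int16_t;
    typedef unsigned short     uint16_t;
    typedef int                int32_t;
    typedef unsigned int       uint32_t;
    typedef long long          int64_t;   /* extension, not strict C89 */
    typedef unsigned long long uint64_t;
    typedef unsigned long      uintptr_t; /* OK on ILP32/LP64, not LLP64 */

    #endif /* MY_STDINT_H */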


Only with effort was Linux able to be ported to Clang. All compilers that compile Linux should also be able to deal with size_t and its consorts.


OpenBSD may be looking at the same:

https://www.openbsd.org/i386.html

Luckily NetBSD still exists, and I very much doubt they will deprecate 32-bit:

https://www.netbsd.org/ports/i386/hardware.html

NetBSD to me is a cool system, doing some unique things. But, sadly they need more support than they get these days.


This is probably a good sign for anyone out there interested in getting into a long-tail niche industry, similar to how COBOL lives on in banking. There are probably going to be places we'd never expect running 32-bit applications for decades to come (I'm thinking industrial automation shops and the like), and getting in on the ground floor of developing for NetBSD would be a high-powered signal that you are interested in helping support these ancient machines.

(Do I recommend this myself? Well, I think techies find all kinds of weird things fun. :))


> OpenBSD may be looking at the same

Is that in reference to this line?

> Due to the increased usage of OpenBSD/amd64, as well as the age and practicality of most i386 hardware, only easy and critical security fixes are backported to i386. The project has more important things to focus on.

This has been on the i386 page for quite a while. I haven't specifically heard of the developers dropping i386, but I also don't follow the mailing lists as much as I used to.

I really hope OpenBSD doesn't drop i386 as it's my go-to operating system for a lot of otherwise not very useful hardware - retro PCs, 32-bit laptops, etc. One can take any bog standard Pentium or higher box and turn it into a usable machine with OpenBSD.


My understanding is that they will support the architecture as long as one of the OpenBSD developers doesn't mind maintaining it. It isn't like Linux, where the developers will make a top-down decision. With OpenBSD, if a developer has the physical hardware and the will to do it, it will remain supported until that is no longer true. Same with NetBSD, though I think NetBSD is a little more loose about 'having physical hardware.' The spirit of OpenBSD is that you need to build on physical hardware; NetBSD doesn't mind the primary mechanism of testing and development being a QEMU image.

Which hopefully gives some comfort. There isn't a shortage of cheap 32-bit x86 machines. The biggest problem is probably something like SPARC machines, where getting your hands on one in good condition can still be pricey. But you can find a functional 32-bit x86 desktop machine on eBay for 30 bucks.


Yeah, I'm still running openbsd on a dual pentium pro 200 with 96 mb ram as my irssi machine :)

It's almost a miracle that the quantum fireball in it still lives, it's over 20 years old now.


Very impressive setup! If the Fireball ever dies, an SSD or CF card will probably keep the rest of the box going for another 20 years. :)


Why would you bother with the second CPU just for irssi? You would probably notice the energy savings from removing it


I usually ssh into it, so sshd runs on one and irssi on the other. Also, the whole computer is just good fun.

The motherboard also has a UW SCSI interface, but I have no disks left sadly. The 10k rpm Seagate died a long time ago.


> Luckily NetBSD still exists, I very much doubt they will depreciate 32 bit:

I love NetBSD, and their focus appears to be letting you run a modern, supported/bugfixed OS on legacy hardware.

So FreeBSD and NetBSD are perfect complements to each other.


Didn't OpenBSD keep platforms like VAX going for some time? Googling, they discontinued it in 2016, which is pretty late for that.


Yes, NetBSD has a nice chill vibe to it that neither Free nor Open has. But packages are less fresh, and lots of things are not supported.


I guess it could eventually move to Tier II and join the 68k Amigas, which are still on NetBSD 9.3 [0]

[0] https://wiki.netbsd.org/ports/


Somewhat related:

> time_t is 8 bytes on all supported architectures except i386.

* https://man.freebsd.org/cgi/man.cgi?arch

* https://en.wikipedia.org/wiki/Year_2038_problem#Implemented_...


FreeBSD is sponsored by those who are only concerned about its server performance, and 32 bits are not there anymore.


In a way this makes sense: UNIX was born as a server OS, that is still its strong point, and that is what usually gets discussed at USENIX.


I am slightly concerned with running embedded Unix software if 32-bit is going away so fast.


For open-source operating systems there will always remain the older stable 32-bit releases like FreeBSD 16 in this case.

Even if there are still plenty of obsolete 32-bit CPUs running Linux or *BSD, in many years I have not seen a case where using them is the right technical choice.

When a UNIX-like OS is desired, it is a mistake to use anything less than a very cheap 64-bit CPU, like one with ARM Cortex-A55 cores.

For the many embedded applications where a 32-bit CPU is appropriate, either one of the many open-source RTOSes should be used or a bare-metal program, which is the best choice in many cases. For such embedded applications, a UNIX-like OS is much too bloated and it does not allow deterministic control of the hardware.


There is an in-between space, like the RPi Zero, where you can drop in an RT Linux kernel (and RT modules). 32-bit embedded Linux is still really quite widespread, and I expect RISC-V to breathe some new life into it as well. Sometimes a lack of hard (or any) real-time requirements combines with price, and occasionally power (where I expect RISC-V to come in), to allow for embedded Linux. It usually requires less development time too. Think how many home routers and security systems are still built off of 32-bit Linux. I don't know of any built off of FreeBSD though, so maybe a good call?


Actually I've been surprised at how much 64-bit dominates RISC-V even in the very lowest end of Linux-capable embedded. Perhaps I'm just in a bubble, though; not like I work in actual embedded stuff professionally.


The price difference between something with Cortex-A55 64-bit cores and something with obsolete 32-bit cores is negligible.

For modern home routers or security appliances, you want them to be able to easily execute firewalls with complex sets of rules at multi-gigabit-per-second throughput, so that the router's throughput is limited by the Ethernet or WiFi interfaces rather than by the CPU.

For this, you really need something like Cortex-A55. The routers with obsolete CPUs limit the performance without any worthwhile price reduction.

On the other hand, for many embedded projects where a 32-bit CPU is the right choice, the developers choose Linux just because they are familiar with it and they are too lazy to read the manuals of some free real-time operating system, even if in fact the latter might require less development work and maintenance work for their software project.


The more bits you have to push around / the wider your registers and busses are, the more energy you are going to consume though.


Is that actually measurable difference on modern process nodes and cores as big as Cortex-A?


If even the A55 is too chonky, there is also the A34, which AFAIK is the smallest 64-bit ARM core. Unfortunately I haven't seen anyone actually releasing chips based on it :(

There are some SoCs with A35 cores though; that is probably the lowest end that is actually available.


Just a poor choice of words, I guess, but FreeBSD 16 will not support 32-bit platforms.

> For open-source operating systems there will always remain the older stable 32-bit releases like FreeBSD 16 in this case.


FreeBSD was never big on embedded in the first place. 32 bit Linux isn't going to go away anytime soon.


Lots of "fatter" embedded systems run on FreeBSD due to the more favorable license situation. Not so much SOHO routers, but big industrial routers will use it. Juniper is a notable example. But also 1U firewalls, SANs, etc... are often FreeBSD based.


Well that's the tragedy of not contributing back upstream then. If there were companies actively maintaining the 32 bit port, there would be no plans of dropping it.


32-bit x86 CPUs haven't been made in years, companies building products based on FreeBSD switched to 64-bit x86 or to other architectures long ago. It's not that work on i386 is being done but kept in private repos and not upstreamed -- work on i386 just isn't being done at all.


The companies use what their supply chain has, which ends up being 64-bit by default. They're not building their own 32-bit CPUs.


Those aren't using x86 tho?


> Even if there still are plenty of obsolete 32-bit CPUs running Linux or *BSD, since many years I have never seen a case where using them is the right technical choice.

I have a couple of WD NASes which chug along on 32-bit PowerPC, a patched kernel, and the last version of Debian before they dropped support for 32-bit PowerPC. I don't really plan to replace them before they break... I do need to work out a backup "story" (as the kids would say), but they just hold old TV shows, so if they let out the magic smoke then nothing important is lost.

Would be nice if I could keep them up to date on the latest and greatest but nobody cares about the 32-bits anymore...


There will always be NetBSD which runs on everything from a potato to supercomputers, no matter how many bits of address-space it has ;-)


While the portability of NetBSD used to be a meme, these days there are far more embedded platforms supported by Linux in some fashion.

Sadly not all on the mainline, but having worked in the embedded world, Linux is a massive part of that space.


I think it's reasonable to eventually treat 32-bit embedded in the same manner we treat 8-bit and 16-bit embedded today: as "weird" architectures that have their own bespoke OSes (even if it's a fork of Linux or BSD), and on which "normal" portable code is not really expected to compile or run without modifications.


How many embedded 32-bit Unix systems are being constantly upgraded to new versions of Linux / FreeBSD?


Why? We build everything from source anyway, and armhf compilers are not going away any time soon.


Building from source doesn’t matter if platform support code is no longer in the OS source tree.


[flagged]


The GP did say embedded. The RPi Zero is still selling, and it's popular, and 32-bit is still the big end of microcontrollers. But almost none of those have memory management units, so normal Linux is off the table (not sure about FreeBSD), but it is possible to build a Linux for those platforms.


The 32 bit RPis have this notice though

> Raspberry Pi Zero will remain in production until at least January 2026

Considering the roughly 2ish year release cycle of FreeBSD, it seems likely that 32bit RPIs will be eol'd before FreeBSD 16 is released.


Broadcom released new quad-core 32-bit router SoCs as recently as last year. The RPi may have moved on, but I wouldn't be surprised if there is a niche for an RPi -1, what with inflation. The RPi Zero 2 is 64-bit and the same price as the old one. But cheaper is always cheaper. With microcontrollers growing to 32-bit, you still see 8-bit used all the time just for the price difference. If the 32-bit part is even a little cheaper on a high-volume item, it will be used, and routers lean on free operating systems heavily.


Interestingly, it seems 32-bit Raspbian is recommended over 64-bit even on the Zero 2, which is 64-bit. Note it still has the same tiny amount of RAM.


> Broadcom released new quadcore 32bit router SOCs as recently as last year

Which SoC are you referring to here?


I was referring to the BCM47722, but I was basing that on the assumption that all the other 64-bit SoCs had 64-bit on the title page, and that one didn't. But it is also 64-bit. After poking around, it does indeed seem like they don't make anything 32-bit with an MMU anymore.


I don't believe FreeBSD has any support for micros without memory management units.


You wouldn't run FreeBSD on an MCU but rather something like Zephyr.


I had to move off macOS when they retired the 32-bit system libraries, because I simulate my armv7/AVR embedded firmware natively. On Linux I still can, but on other systems soon not anymore.

My simulator is not a QEMU/Renode-like CPU emulator, which would be too slow; it just simulates some mesh routing algorithms, with around 200 nodes on a single machine. https://blog.nubix.de/2022/04/testing-baremetal-firmware-at-...


Sounds a lot like what RIOT is doing, but there support for 64 bit `native` was just merged recently, so you can also build for the virtual `native64` board and run your code as a 64 bit Linux executable.

(Support for macOS was removed a few years ago due to a lack of maintenance and no active developers with access to a macOS machine)

It also comes with a mesh simulator: https://github.com/RIOT-OS/RIOT/tree/master/dist/tools/zep_d...


No, my mesh runs on 8-bit Atmels. 32-bit would be way too big. Just the simulator needs 32-bit and cannot run on 64-bit.


RIOT also runs on 8 bit AVRs, e.g. ATmega256RFR2

But that's pretty much retro-computing these days, I don't see why anyone would use those on new projects when Cortex-M0+ parts are much cheaper and more power efficient.


Ha, small sensors still rule. Everywhere. Cortex are way too large.

And gcc-avr still rules over gcc-arm. QEMU on AVR is easy; on ARM it's horrible.


I was happily running Arch on my netbook until they dropped support. I ran the Arch32 variant for a while, but could see the writing on the wall and got an irritatingly larger laptop only for its 64 bitness. The netbook sits in a drawer now.


I routinely install old 32bit applications in wine because they never got a 64bit version. It's literally the only reason I haven't tried running 64-only.


We’re not talking about 32-bit applications though. We’re talking about a 32-bit kernel.

The 64-bit FreeBSD kernel is perfectly capable of running 32-bit applications, and will be so for the foreseeable future. The only reason I can see to support a 32-bit kernel is to allow for installation on hardware that is approaching decades of obsolescence, so again I ask, what is the point?


The only place I see it is people who are using 32-bit python on windows for compatibility with something and can't use the 64 bit builds.


This makes me feel old.


[flagged]


Those old machines use so much more energy (CO2) to do the same work that we are environmentally ahead within a year of trashing them. Of course newer machines often use more energy because of bloated software, but assuming you run the same software, newer machines are much more energy efficient.


In many places, 90% and up of a computer's environmental footprint is its fabrication. In France, Canada, etc., using a computer as long as possible is the right choice from an environmental standpoint. Of course, even 10-, 12-, and 15-year-old computers are generally 64-bit now (my home server is of 2007 vintage).


Depends on the machine. Big desktop systems and servers yes, but early Atom-based netbooks/"nettops" are 32-bit based and could still be useful as thin clients or for very light office work. You won't want to run a modern web browser on them though.


If you use a light environment, a CPU throttler such as cpufreq, and software like Dillo, Sylpheed, Pidgin... the power usage will come down nicely.

For general news there is http://68k.news, gopher://magical.fish... and several others such as https://text.npr.org. For gaming, people might like Minetest or some light games at least once a week to disconnect from the AAA games. The setup wouldn't be as fancy as a modern machine with FFox/Chrom* and Steam, but for sure it will be useful, and gamers would discover crazy gaming mechanics not seen anywhere else, such as Cataclysm DDA: Bright Nights.

As they state in the thread, creating a new computer is far more wasteful.


I bet 60% of computer e-waste is 64-bit machines by now.



