When I worked at Xbox I was amazed to see the 360 game boot code had multiple title ID checks in it. For some titles it would report things differently, for others it would sleep a bit. When new features like cloud saves were introduced, those code paths for checking storage devices had per title ID branches too. It seems like whether you design your APIs perfectly or poorly, someone will use them in a way just outside of your intention.
I’m curious, do you mean consoles had code paths specific to current gen games for that console, or was this when they were implementing forms of emulation support for previous gen games?
Both impress me but the former is deeply fascinating to me. Though I think graphics drivers do this all the time?
> On beta versions of Windows 95, SimCity wasn’t working in testing. Microsoft tracked down the bug and added specific code to Windows 95 that looks for SimCity. If it finds SimCity running, it runs the memory allocator in a special mode that doesn’t free memory right away. That’s the kind of obsession with backward compatibility that made people willing to upgrade to Windows 95.
This would be pretty hacky by today's standards, but looking at it now it just seems like an engineer doing what was easiest to get the job done. Not condoning it, of course, but it's fascinating nonetheless.
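To make that trick concrete, here is a minimal sketch (in plain C, with made-up names like compat_free; it is emphatically not the real Windows 95 allocator) of what "run the allocator in a mode that doesn't free memory right away" can look like: freed blocks are quarantined for a while, so an application's use-after-free bug keeps hitting still-valid memory.

    #include <stdlib.h>

    /* Sketch only: when "compat mode" is on, real frees are delayed by
       parking blocks in a small FIFO quarantine, so a buggy app that
       reads freed memory right after free() still sees valid data. */
    #define QUARANTINE_SLOTS 64

    static void *quarantine[QUARANTINE_SLOTS];
    static size_t q_next = 0;
    static int compat_mode_enabled = 1;  /* would be set only for flagged apps */

    void compat_free(void *p)
    {
        if (!compat_mode_enabled) {
            free(p);
            return;
        }
        /* Evict (really free) the oldest quarantined block instead. */
        if (quarantine[q_next] != NULL)
            free(quarantine[q_next]);
        quarantine[q_next] = p;
        q_next = (q_next + 1) % QUARANTINE_SLOTS;
    }

    int main(void)
    {
        char *p = malloc(4);
        compat_free(p);
        /* With compat mode on, the block behind p has not actually been freed yet. */
        return 0;
    }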
There's nothing easy about appcompat shims - diagnosing them, implementing them, or testing them. It involves preserving and testing all sorts of parallel code paths.
It was about doing the only thing possible: the software developers were often out of business, not interested in fixing the bugs themselves, or simply unable to distribute fixed versions to customers in the days before you could assume constant internet access (and even now that assumption doesn't entirely hold).
Raymond Chen's blog addresses this a lot. The reason why they did this was because even if it wasn't their fault, people would blame Microsoft for breaking things. Since they can't force third-party developers to fix their broken code or to stop using weird hacks, they chose the best option available to them. Fix it themselves.
I don't see how they did anything wrong, or why it wouldn't still be one of the best solutions today. Honestly, it often is still the best or even the only option: there's only so much we have control over, no matter who we are.
I still think it's a wonderful solution. If your code CAN make something work, then you do, because nobody else will, and people will be happy their old stuff still works.
Shims still exist because even now with auto-updating you can’t be sure everyone is running updated software, and if you know how to handle what is causing the issue, you may want to do it.
The largest shim, of course, is an entire virtual machine.
Very cool. I did MS Windows tech support during the Win 3.1 to Win95 transition. Looking back, it went pretty smoothly. Most of our calls were HW driver related or usability related, if I remember correctly.
I'll share my own story of the Windows 3.1 -> 95 migration and MS tech support. At the time, I was using my grandfather's old PC, which had originally had a 486SX 33MHz processor with 4MB of RAM and a 200MB HDD. We upgraded the machine using an Intel "Overdrive" processor to a 486DX2 66MHz with 8MB of RAM, and added a SoundBlaster 16 sound card and a triple-speed CD-ROM drive. I received a copy of Windows 95 for Christmas 1995 and proceeded to install it on the system. It worked pretty well and, a few months later, I decided I wanted to add the "MS Plus!" pack.
I was 14 years old and knew very little about PCs at the time, though I was learning. What I definitely didn't know was that the HDD in the machine, which I was told was nearly 500MB, was actually a 200MB drive that had been compressed with an older version of DriveSpace. The addition of Plus! upgraded the compression to DriveSpace 3, which corrupted something on the drive and caused the system to hard-lock as soon as the Windows 95 UI appeared, no matter what I did.
After spending 4-5 hours on the phone with a very patient tech support specialist at MS, he eventually concluded that I would need to format the drive as nothing we did in those hours worked at all. Definitely a major learning experience for me doing my first full system format and OS reinstall.
By the end of 1996, I'd be doing my first Linux installation on a slightly newer PC that I saved money from a summer job to buy. If it hadn't been for DriveSpace 3 and an MS tech support specialist who educated the hell out of me for a few hours, who knows when (or even if) I would have gone down the rabbit hole that led to my career.
It's difficult to explain to people today just how good Microsoft tech support was in the early-mid 1990s. We had a similarly complex issue with DOS 6.something that I don't remember the full details of, and I think I learned more about operating systems in the couple hours we were on the phone with MS than I did in the semester-long operating systems class I took in college. Some days after the call we got a stack of floppies in the mail from Microsoft with a small bug fix that helped with whatever the situation was we had encountered. Just night and day compared to most modern interactions with tech companies.
> Some days after the call we got a stack of floppies in the mail from Microsoft with a small bug fix
That is just so utterly inconceivable today that I didn't believe it at first. Not just being physical media, but receiving that level of attention and care.
The best you can hope for these days is a vague forum reply with some shockingly bad information from an "official Microsoft rep" who is at least 5 degrees of separation from anyone who has seen code before. Disgraceful.
I was on the official Microsoft support forums for Xbox cloud this fall. Many of us had similar breaking issues that did not appear to be PEBKAC. I provided a reasonably detailed description: what I tried, what I ruled out, my full hardware, software, and network setup, testing results on different machines, etc.
But a couple of other users provided hours of their own network traces, Wireshark captures, and extremely detailed investigation, basically handing Microsoft a full reproduction and analysis for something that affected a significant number of people. The continued copy-and-paste response from "support" was basically to send us to Reddit, Stack Exchange, or some other even more "social" support channel.
I worked for a summer at a managed service provider that was a certified partner of Microsoft, Cisco, HP, etc. The level of service you get as a (technical) business partner is unbelievable compared to what you get as a consumer. Cisco TAC made a custom firmware patch for me once.
This is something you only see from indie FOSS developers nowadays. I used to do this, and my users were always delighted. They usually became pretty loyal to the project, and the quality of the bug reports they submitted got exponentially better as a result.
It's a great strategy all around if you take an hour to care about another person.
The most rewarding calls were those helping either very young or very old customers who just needed help getting started. I recall a grandfather calling in with his grandson, trying to figure out the new Windows machine he had just bought him. Being patient and understanding was all it took to make a difference.
This led me to wonder: are retail Windows licenses expensive because they used to include phone support, and then, when people learned how to Google for problems, Microsoft dropped the phone support but kept the price because "customers are used to this price tag"?
I recently bought a machine that came with Windows 11 Home. I felt the need for some Pro features, went to check the price for an upgrade, and my jaw dropped. Years of "free upgrade to Windows 10/11" had led me to believe those licences were cheaper nowadays.
They have basically done the same thing with software assurance support recently. A few years ago when we had a SQL issue in pre-deployment testing we opened a ticket and spoke to an engineer who knew what they were doing. Last year when we did that we had a 1x/day email back and forth with an out-of-country third party contractor who ultimately was unable to help us in any way. We ended up figuring it out ourselves (that Visual Studio version somehow conflicted with that SQL version such that data errors would get introduced (WTF!?)) while said support contractor was still trying to get in touch with real actual SQL engineers for us.
It was something like that; I remember my friend's mom arguing for an hour to get the mouse for free (it was advertised as included with any new PC, or something like that, and the sales guy tried to claim it didn't apply), and they forgot to charge for Windows 95.
Of course, soon it was bundled with any prebuilt, but back at release those were relatively rare and expensive.
At the same time, it’s probably worth remembering the contemporary Microsoft rule of thumb that “each product-support call costs a sale”[1], that is to say, handling a single product-support call to that standard costs as much as was earned by selling the product in the first place (and the products weren’t exactly cheap—not that they’ve become cheap now).
Google and the new wave of firewalled engineering orgs are so long-run stupid that it boggles the mind.
A proper, functioning, old-school support org's job is to sift through the user-is-the-problem reports to identify the smaller list of actual bugs... which engineering then fixes. Because they're actual bugs!
Nowadays, folks look at Support and QA as cost centers, to be funded and staffed at minimum levels.
Small wonder SRE have become the new rock stars -- companies disempowered anyone else able to call a bug a bug.
But if the IT guy tells his boss that '95 don't work with shit because SimCity didn't run for him at home, it could cost a lot more in lost sales from a company that chooses not to upgrade.
Exactly. We are moving to a time where having a person individually and attentively help you with anything is a high-order luxury item.
A major change is on the horizon, though. We are close to the point where a large language model could play the role of the support side of that call. But if it's an AI on the support side, would anyone bother to learn on the customer side?
That must have been a wild time, between brand new interfaces and a changing hardware landscape. Did you have many BSOD calls, or was that what you meant with HW driver related?
Plenty of BSOD calls for sure. Also some very memorable calls from a bunch of Linux guys who would dream up a monster Windows machine loaded with all sorts of crazy hardware peripherals and call support just to mess with us.
Are you sure it was truly just to mess with you? If I were a Linux kernel maintainer back then, and there was some hardware I wanted to fix support for but didn't personally have the money to buy, I think "trick Microsoft into divulging how their own driver for it works" would be what I'd see as a "clever hack"!
Backwards-compatibility between new Windows OS releases and previously available apps has always been great. Recently, things have regressed a little bit, but even on Windows 11, you can still run anything available for Windows 8 (released over a decade ago) just fine by simply ticking a checkbox.
And for apps where the out-of-the-box mitigations don't work, the inscrutably-named Assessment and Deployment Kit (https://learn.microsoft.com/en-us/windows-hardware/get-start...) will most likely help you out. Sure, some truly ancient apps (mostly DOS and OG-WinAPI) really won't work anymore, but those are better relegated to a VM anyway, if only for security reasons.
If you want to understand why Windows still occupies so much IT mindshare even in Y2K22, understanding the traditional Microsoft approach to backwards compatibility (for all its demonstrated deficiencies...) is a necessary first step.
Yes, I'm sure such exercises work fine for any real-life OS: if you take, say, Ubuntu 1.0, then run all the intermediate upgrade steps, you'll eventually end up with an up-to-date system as well.
The real question, though, is: after a successful OS upgrade, how many of your installed apps still work without issues?
On Linux, this varies: popular source-available apps tend to do fine, whereas less-common and proprietary (not to mention expensive!) apps fail after even minor updates, with no other resolution than 'ask your app supplier to do better', which is not always an option due to said supplier being too burned-out, bankrupt, or both.
On macOS, apps generally seem to have a 2-5 year lifetime, after which they break for various (often minor) reasons. Resolution is as per above, and often unavailable due to suppliers disappearing or giving up because of the relatively small size of the macOS market. iOS has similar issues.
On Windows, you can generally continue to use even the most obsolete apps for 10-20 years, and often even longer. Of course, traumatic generational changes (obsoleting DOS or 16-bit WinAPI apps) still take their toll, but compared to other operating systems, this happens a lot less often, and migration tooling is often available.
The entire WoW64 subsystem is the epitome of Microsoft backwards compatibility fervor.
WoW64 makes 32-bit programs work under 64-bit Windows by providing a full set of 32-bit system libraries alongside the 64-bit operating system, and then transparently redirecting the calls made by 32-bit programs to them instead.
It's simple in design, brute force in execution, and practical in results. I hope Microsoft never changes their backwards compatibility philosophy; other operating system vendors should strive to be like them.
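As a concrete aside, a 32-bit program can ask the OS whether it is running through that layer. A minimal sketch using the documented IsWow64Process API (error handling kept deliberately thin):

    #include <windows.h>
    #include <stdio.h>

    /* Reports whether the current (32-bit) process is running under WoW64
       on a 64-bit Windows. Compile as a 32-bit binary to see it in action. */
    int main(void)
    {
        BOOL isWow64 = FALSE;
        if (IsWow64Process(GetCurrentProcess(), &isWow64))
            printf(isWow64 ? "Running 32-bit under WoW64\n"
                           : "Not running under WoW64\n");
        else
            printf("IsWow64Process failed: %lu\n", GetLastError());
        return 0;
    }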
Not only that, they built an entire x86 and x64 emulator into Windows on Arm, so that (for example) if you install Parallels on an M1/M2 Mac and run Windows on it, you're getting the special Windows on Arm build, but many of your apps will still run because of the emulation built into Windows.
It's just crazy compared to how much software completely breaks between macOS releases with no recourse, or has to be recompiled and reinstalled from scratch on Linux. Microsoft's bad at a lot of things, but they deserve a huge amount of credit for their backward compatibility on Windows...
While you’re right that Linux doesn’t have the binary compatibility Windows does, I think you gloss over the advantages of source availability. Having the source lets you patch bugs and avoid the (impressive and commendable) hacks described in the article.
It’s also worth noting certain Linux releases have extended support so you continue to get security updates without upgrading (a strategy Microsoft also uses).
Still, it is a downside of Linux and macOS compared to Windows.
Some software I write works on Windows XP through 11 without issue. Some of the tools I work on still work on Windows 2000, though I no longer test for it.
The one thing I couldn't make work right when upgrading to Windows 10 was the Microsoft webcam. You could almost force it, but the quality became crummy. The one thing that broke in XP was my Microsoft joystick.
I think I remember reading about some weirdness with legacy joysticks: because they didn't map cleanly to MS's standard controller types or some such, they all had to have hacks in the drivers to be functional.
Which then predictably exploded whenever MS shipped a new Windows version.
If you mean running apps made for those platforms, I wouldn't say that you have to be particularly lucky to do so. Even apps that try to write into shared registry or their own installation folder mostly work (because Windows emulates it by transparently creating writable copies).
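As a rough sketch of the pattern being papered over (the path and file name below are made up for illustration): a 32-bit app with no UAC manifest that insists on writing into its folder under Program Files. If the write would be denied, Windows typically "virtualizes" it into a per-user copy under %LOCALAPPDATA%\VirtualStore, so the old app keeps working without ever knowing.

    #include <stdio.h>

    /* Legacy-style program: writes its settings next to its own EXE in
       Program Files, as many pre-Vista apps did. Under file virtualization
       the write may silently land in the user's VirtualStore instead. */
    int main(void)
    {
        FILE *f = fopen("C:\\Program Files\\LegacyApp\\settings.ini", "w");
        if (f != NULL) {
            fputs("volume=7\n", f);
            fclose(f);  /* file may really be in %LOCALAPPDATA%\VirtualStore */
        } else {
            perror("fopen");
        }
        return 0;
    }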
Per https://en.wikipedia.org/wiki/Windows_8, "Windows 8 is a major release of the Windows NT operating system developed by Microsoft. It was released to manufacturing on August 1, 2012"
But possibly, per Bill Clinton, that all depends on "what the meaning of the word 'is' is"? Or I'm missing some implied sarcasm? Usually Occam's Razor would help out, but I just got downvoted twice for enumerating some historical facts on a separate thread, so who knows...
My mental model of Windows 8 is "that tock (terrible half of the cycle) Windows release to 7's tick, the one I had to remove from my aunt's new laptop, only to find that UEFI made downgrading to Win7 a right ballache, and I swear I did this 3 years ago, not 10". How time flies.
I now maintain the MUD I played as a child and have been playing whack a mole with bugs like this for years. I'm a very poor C coder so still run into issues pretty often! Interestingly the game is much more stable on my MBP that I use for development than it is on the production Ubuntu 18 server I run it on. Not too bad for a Diku.
I don't know why SimCity and just a few other titles are getting media attention for this. There were perhaps a dozen more games and quite a few applications that were being patched at runtime.
At Microsoft this eventually became a feature: the Application Compatibility Database.
Likely because SimCity was a memorable game that can be associated strongly with that era in our nostalgia. But also, not something "serious" such as tools designed for work or productivity.
Having MS design Win95 with specific hacks for SimCity means that MS thought that SimCity was special and important, which reminds people in that era that they also thought SimCity was special and important -- and it's mildly or moderately interesting enough for these people to find "behind the scenes" work which validates their nostalgic memories.
SimCity games had long shelf-lives and were on the sales charts for years. It could be it wasn't individually special and it was just "make sure the top 20 windows games work".
Microsoft had an internal program within its Windows 95 development teams where you could get any piece of software for free. You just had to agree to ensure it was compatible with Windows 95 and take ownership of whatever quirks were needed to get it running.
I'm familiar with SimCity and Raymond Chen. Familiar enough to know that SimCity was one of the top-selling Windows (not DOS) games. So while you and I and everyone loved SimCity, Microsoft had the economic motivation to make it work. They also certainly put more effort into obscure business software nobody remembers.
Per wiki, they released "SimCity Deluxe CD-ROM" for Windows 95, so the game was obviously still selling well even then.
Everything worked fine back on Windows 3.1. Then you upgraded to Windows 95 and now your favourite game crashes. Why would you not blame Microsoft for this?
Browsers and engines are also regularly patched to improve web compat. I remember V8 "recently" released an update to improve the performance of a very common module. The details escape me however.
GPUs and games have a vested commercial interest in ensuring they work.
On the other hand, both the website and browser are mostly provided for free. Paid websites often do put the extra effort in to make it work everywhere, eg IE/Safari compat.
Someone’s paying for you to use the browser, even if it’s not you - it’s not like Mozilla is hoping everyone will stop using Firefox. So browsers do have compatibility checks for websites sometimes.
DOS compatibility, however, wasn't that great. I recall using OS/2 (2.1 first, then Warp 3) DOS terminals to build and test Clipper programs. It was a much better experience, also for being able to emulate two networked DOS machines, which had some limitations and stability problems when using Windows 95 terminals.
During the beta of Windows 95, I was working at a company that had made a DOS extender for Clipper, and we got reports that the apps in 16-bit protected mode did not work. Looking into it, there was a bug in the 16-bit FAT driver, I think (things are fuzzy from that long ago), which wasn't used by a lot of things. I actually managed to get a bug report in, talked to the guy working on the driver, and had a build FedExed to me a couple of days later to verify the fix.
Yes! Right-click on the exe file and change the startup options to do a full reboot into DOS. Definitely needed for the games that were hard-coded to use the first MB of RAM.
Or for any games at all on a low/mid-range machine of the time. The overhead of Win95 crippled almost all DOS games on my machine. They might as well not have bothered with compatibility.
That is true, but my goal was to have two DOS terminals in which I could emulate two DOS networked machines sharing files, both running a client/server software I wrote, to emulate how they would work at a customer of mine shop. While it worked perfectly on OS/2, I encountered some stability problems on Windows 95, which also (if memory serves) required some more steps to allow DOS terminals to share their storage.
By using OS/2 I could effectively replicate two machines in a single one and compile both client and server in a single take with 100% confidence that it would work once installed at the customer shop.
How would that work? From my recollection, the computer booted up in DOS and ran through your config.sys and autoexec.bat. The final line in autoexec.bat was typically "win", which would start up Windows 95.
Also when you shutdown windows 95 it would leave you at a DOS prompt. So no restart was actually needed.
Windows 95 didn’t have “win” in the autoexec.bat, it autoloaded it. The behaviour was controlled in msdos.sys, which on 9x had various startup options including BootGUI (DOS was entirely in io.sys).
IIRC “MS DOS mode” could either just exit Windows, or reboot the entire system depending on config (presumably if you wanted custom config.sys / autoexec.bat)? But I might be wrong, it’s been years since I used 9x.
This is correct. msdos.sys was a text file with configuration values, which could be set to either auto-start Windows after running autoexec.bat, or not. In the former case, exiting Windows would stop at the "now safe to turn off your computer" screen. In the latter case, startup would go to a DOS command prompt, and Windows could be started from there, and exiting it would go back to DOS.
Those two cases got conflated by the ability to start Windows from within autoexec.bat, which wasn't the Microsoft default but was a fairly common setup from some OEMs. To the OS, that's the case of starting Windows separately, but to the user, it looks like the auto-starting case.
You could also make it "boot into DOS" by adding command.com as the last thing autoexec.bat would invoke. That way you end up in the shell, and exiting it would boot Windows.
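For reference, roughly what those two setups looked like (values from memory, so treat them as illustrative rather than exact):

    ; msdos.sys, [Options] section -- BootGUI=1 auto-starts the Windows GUI
    ; after autoexec.bat finishes, BootGUI=0 leaves you at a DOS prompt
    [Options]
    BootGUI=0
    Logo=1

    rem autoexec.bat trick described above: with BootGUI=1, ending with
    rem COMMAND.COM drops you into a DOS shell first; typing EXIT lets
    rem startup continue on into Windows
    command.com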
Don't know about gracefully (some drivers might not cleanly shut down), but Win95 did indeed exit to DOS. It left the screen in graphics mode with the message "It is now safe to turn off your computer", but you could blindly type some command to reset it to text mode.
DOS compatibility is inherently very hard because its de facto API, such as it was, involved poking into hardcoded addresses in memory and other such stuff.
Even then, things that stuck to DOS syscalls (INT 21h etc) would generally work.
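For example, a tiny program that sticks to documented INT 21h services tends to survive; here is a sketch assuming a DOS-era compiler such as Turbo C or Open Watcom (which provide dos.h and intdos; this is not standard C):

    #include <stdio.h>
    #include <dos.h>   /* Borland/Watcom style header, not standard C */

    /* A "well-behaved" DOS program: it only calls a documented DOS service
       (INT 21h, AH=30h: get DOS version) rather than poking hardware or
       fixed memory addresses, so DOS boxes and NTVDM can intercept it. */
    int main(void)
    {
        union REGS regs;
        regs.h.ah = 0x30;        /* DOS function 30h: get version */
        intdos(&regs, &regs);    /* issues INT 21h */
        printf("DOS version %d.%02d\n", regs.h.al, regs.h.ah);
        return 0;
    }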
I recall buying a magazine that focused on the Win95 launch. The highlight was the ability to play Prince of Persia in the included DOS. Backwards compatibility was effectively a major selling point.
As amazing as this was, I do think subscription software and near-universal internet access has changed the equation now. Apple breaks backwards compatibility and old software all the time, and there's essentially no consequence.
I'm not sure that's true -- almost everywhere I've worked, our developer machines have been macbooks (both in large and small corporations). Right now I'm working on a Thinkpad, but that's because the company I'm at primarily makes windows software.
I own Myst (1993) for Windows 95, and I own the point-and-click adventure Ankh (2005) on Linux, ported by RuneSoft.
Only one of the two games am I able to run on modern hardware on a modern Linux without going the extra mile of compiling stuff.
It's a shame, but older Windows software runs better under Wine than old Linux software does under Linux. Maybe we need a "Line"?!
Maybe for once someone should tell the story of how Windows 3.1 went the extra mile to ensure incompatibility to the extent of getting an antitrust settlement enforced.
^ For those unfamiliar with what this poster is talking about.
tl;dr:
In a _beta_ release of Windows 3.1, Microsoft included code to detect whether the user was running authentic MS-DOS or a third-party clone, and errored out if it wasn't the real thing. The code was disabled for the actual release of 3.1 that went out to customers.
Internal memos about the code came to light during the government's antitrust prosecution of Microsoft. When this happened the new owner of the clone DOS system sued Microsoft and they settled to make the case go away and get the anti-trust headlines out of the news.
Clone isn't quite the right term, since Digital Research made the original CP/M, which Seattle Computer Products used as the basis for their 86-DOS, which they sold to Microsoft:
Disclaimer: I'm a complete noob about the history of the region and the languages involved.
I just figured out the best way to get an answer on the web is to post a wrong one first, so I went 5 minutes into the subject on Wikipedia and what I've concluded is that maybe it's actually the other way around.
Poland is like "Land of Polans" which were the original tribe of the region.
The letter Ш seems to be a prefix that comes from Hebrew, so maybe Шпола also refers to the Polans in some way.
I must emphasize again that this can be completely wrong.
The name "polans" derives from the word "pole", which simply means "field". There were actually several different groups of Slavs who were known as "polans", all named because of where they settled. One of them did indeed found Poland. But there were also (different) polans in what is now Ukraine - indeed, they were the dominant group, as Kyiv was their city.
In any case, "pol" in the placename can simply refer to a field directly. Or even to something else; in case of Shpola, the local legend is that it was named after the guy who first settled there.
I think the workers on Ellis Island often made up last names when speaking to hurried immigrant families. My last name is pretty close to "Von Der Hook" in pronunciation, which is awfully close to "from the hook" (Hoek, a part of the Netherlands). My friend's is quite close to "From the Sluis", Sluis being another part of the Netherlands.
Add to that the various spellings "Vander X", "Van Der X", "VanderX". I'm sure the conversation went...
I don't know anything about refugees immigrating into the US, so I can't refute your story in any way, but one thing worth noting is that Dutch names often include geographical areas with "van de/van der/van" (meaning "of, from") as an insertion ("tussenvoegsel") between first and last name. "van der Sluis" would be a perfectly normal Dutch name. I also wouldn't think twice about "van der Hoek" if I heard someone introduce themselves by that name.
Many people had to pick surnames when the French occupied the Netherlands, often leading to geographical names, references to occupations, or sometimes even jokes ("Naaktgeboren" being relatively common, meaning "born naked"). The forefathers of someone named "van der Sluis" could have lived near a sluice/lock, lived near a place called Sluis, or perhaps operated sluices/locks as part of their job in the barging industry.
English-speaking countries, where last names consisting of multiple words were incredibly rare, often concatenate(d) such names into one word (or at least fewer of them). To many English speakers a multi-word name would imply nobility (if they even considered the concept at all), and I wouldn't expect nobility to arrive amidst refugees either. To this day some American websites refuse to take the space in my last name.
As an added bonus, Dutch names specifically can have "tussenvoegsels" that are part of the name but need to be treated specially to be used correctly (e.g. when sorting a list of names). Depending on whether the name belongs to a Belgian or a Dutchman, the capitalisation rules also differ (the Flemish capitalising the "Van", the Dutch using lowercase letters). Of course other languages and cultures also have their own naming schemes with grammar rules (take German/Austrian "von" or Danish "af/de/von", for example); it's hardly a unique concept, but the details differ between countries.
It's no wonder those poor American immigration workers couldn't make heads or tails out of the names these people brought into their country. As a Dutchman, the end result is often quite interesting to witness when Americans or Canadians with Dutch names appear on TV, most names containing their own special deviation from "normal" Dutch names.
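To make the "treated specially" bit concrete, here is a small sketch (the prefix list is illustrative and far from complete, and real collation would also have to cope with capitalisation) of sorting that ignores tussenvoegsels, so that "van der Sluis" files under S:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative tussenvoegsels; a real list is much longer. */
    static const char *prefixes[] = { "van der ", "van den ", "van de ",
                                      "van ", "de ", "den ", "der ", NULL };

    static const char *sort_key(const char *name)
    {
        for (int i = 0; prefixes[i] != NULL; i++) {
            size_t n = strlen(prefixes[i]);
            if (strncmp(name, prefixes[i], n) == 0)
                return name + n;            /* skip the tussenvoegsel */
        }
        return name;
    }

    static int cmp(const void *a, const void *b)
    {
        return strcmp(sort_key(*(const char *const *)a),
                      sort_key(*(const char *const *)b));
    }

    int main(void)
    {
        const char *names[] = { "van der Sluis", "de Vries", "Bakker", "Jansen" };
        size_t count = sizeof names / sizeof names[0];
        qsort(names, count, sizeof names[0], cmp);
        for (size_t i = 0; i < count; i++)
            puts(names[i]);  /* Bakker, Jansen, van der Sluis, de Vries */
        return 0;
    }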
> Depending on if the name belongs to a Belgian or a Dutchman the capitalisation rules also differ (the Flemish capitalising the "Van", the Dutch using lowercase letters).
Actually, a little-known grammar rule (in the Netherlands) is that lowercase is only used for a tussenvoegsel following a first name. In other cases, the first tussenvoegsel is capitalized.
Example: Piet van der Sluis -> meneer Van der Sluis
That's true, but in many cases where official registration is involved one tends to need to write out one's full name (the capitalisation of which then depends on the country of origin).
Not doubting your story, but wanted to add that many European Jewish communities especially wound up with surnames that reference geography. That would have happened centuries before the US existed, representing migrations that happened in Europe.
I don't have a lot of details on this history (which I heard orally), but a Google search for "Jewish surnames geography" seems to back me up.
China still struggles with that. A traditional question is "what is your village?", to get name uniqueness in a system which has too few family names.
France at one time went to the other extreme - names had to be approved by a central registry at birth. Until 1993, there was an official list of allowed first names. Today, there are still some prohibited names. "Nutella" just made the list.
A clarification: it's not so much that there's a list of prohibited names, it's more that some people had "clever" ideas, and the legal naming process has a judge involved to validate that names are not outrageous and could not cause harm to the child. Once a name has been ruled out, it's unlikely to get a different result on a second occurrence.
Examples (from the process above; keep in mind people actually tried):
Fraise: (strawberry) because of the expression "ramène ta fraise" ("bring your ass over here")
Jihad: for obvious reasons
Joyeux: (happy) from the dwarf name
Patriste: ("not sad") an attempt to circumvent the above
Babord / Tribord: (port / starboard) someone tried that for twins
MJ: trying to honour Michael Jackson after his death
Griezmann Mbappé: you can guess
Mégane: because the family name was Renaud (close to Renault, which has a car named Mégane)
Mohamed: because the family name was Merah (Mohamed Merah being a well-known serial murderer)
Fañch: a traditional Brittany name, initially rejected because of the ~ diacritic, which does not exist in the French alphabet. The rejection was later overruled and the name allowed.
The horrible part, though, is about all those terrible names that are not rejected as the ones above were but are obviously a sort of sad, tasteless joke and very very bad for the child.
My sister's ex-husband's family tacked a "-ski" onto their surname while immigrating to avoid sounding too German. They immigrated during one of the World Wars (I believe it was the second) when anti-German sentiment was high.
If they're from Silesia, it might actually be a mixed German/Polish family that used both family names historically depending on the context; it was not uncommon there.
>I think the workers on Ellis Island often made up last names
No, ships' manifests (prepared at the point of departure) have been a standard in the ocean-voyage game for a long, long time, and those were the lists used by immigration, not handwritten lists based on what they heard.
No doubt this kind of thing happened, but people also just changed their names over time because it's easier. One of my grandfather's brothers migrated to Canada in the 50s, and he now goes by "Vanderloo" or "Vanderlo" (not entirely sure how he spells it) instead of "Van der Loo".
Same with Dutch people named Martijn or Maarten, who will often use Martin abroad. Johan Cruijff is typically known as Cruyff abroad, etc.
> I think the workers on Ellis Island often made up last names when speaking to hurried immigrant families.
This is not true. The Ellis Island workers received the names in written form from the ships' passenger lists. But many immigrants changed their names after arriving in the US for convenience or to better integrate into society.
I have an incomprehensible last name and it often gets butchered by busy people.
It would not surprise me at all if some overworked government employee didn't really care what noises the foreigner was trying to make.
This was in a time when lots of people couldn't read or write.
"Someone wanting to book passage to America, Canada, Australia, South America, etc., would have had no difficulty locating an agent. Agents quoted ticket prices to the would-be traveler, accepted payment, and then recorded each traveler’s name and other identifying information (the specific information collected varied over the years). The information taken down by the agents was sent to the home office, where it was transferred by shipping company clerks onto large blank sheets provided by the US government. Those sheets became the passenger lists which later were used by American port officials."
> Records show that immigration officials often actually corrected mistakes in immigrants' names, since inspectors knew three languages on average and each worker was usually assigned to process immigrants who spoke the same languages.
> Many immigrant families Americanized their surnames afterward, either immediately following the immigration process or gradually after assimilating into American culture. Because the average family changed their surname five years after immigration, the Naturalization Act of 1906 required documentation of name changes.
Where do you place the burden of proof when someone makes up a story with no evidence that contradicts all legitimate historical sources, and another comment calls bullshit?
The grandparent anecdote is very clearly a work of imagination, while historical sources show that new names were not assigned at Ellis Island, and that Ellis Island officials were multilingual and assigned to immigrant cases based on language.
Just a generalization. It seems like
Wine and Steam/Proton embody the early Microsoft philosophy of (backward) compatibility as a selling point,
while the overall Linux mindset seems to be: "fuck you, you should have provided the source code if you wanted your software to work after the next dist-upgrade".
Just to be clear: I wrote this on my Linux Mint laptop. I'm thankful for all the hard work people put into Debian, Ubuntu, and Free Software in general.
This is just a generalization.
Linux takes backward compatibility very seriously. The FOSS ecosystem as a whole has the whole spectrum, so the bad apples break it for everyone.
The result isn't even "fuck you, recompile." The result is, "fuck you, you better be constantly fixing the shit that we broke with our breaking changes."
This is part of the value that distros bring to the table. Providing a snapshot of versions that actually interoperate well.
> Linux takes backward compatibility very seriously. The FOSS ecosystem as a whole has the whole spectrum, so the bad apples break it for everyone.
> The result isn't even "fuck you, recompile." The result is, "fuck you, you better be constantly fixing the shit that we broke with our breaking changes."
Linux's approach to drivers is the most "fuck you, you better be constantly fixing the shit that we broke with our breaking changes" of any OS. It does take userland compatibility seriously though.
What you say is not true. Once a driver is in the kernel, there's usually nothing a user must do to get the device working. It works so well that devices work on architectures their manufacturers haven't even thought about. The policy in the kernel when any internal interface changes is: if you change it, you have to modify all the drivers to keep them working. It works splendidly well.
It's true, and I have 3 devices that stopped working under Linux to prove it.
> Once a driver is in the kernel, there's usually nothing a user must do to get the device working.
If the driver is accepted into the kernel tree in the first place, and until it gets removed, yes. The fact remains that that's a much narrower window than for most OSes.
> the overall Linux mindset seems to be: "fuck you, you should have provided the source code if you wanted your software to work after the next dist-upgrade"
These days you just need a container. Or use Nix/Guix, which makes it easy to preserve required dependencies for an older app even when upgrading other parts of the system.
I wouldn't say so. On Windows you can run the same built package on anything from 7 to 10 without too many issues; on Linux, trying to install a package that hasn't been released in the current OS version's package manager repo feels like blasting your brains out, at least for Debian/Ubuntu. And the only fallback is to compile the entire thing yourself, because fuck me, right?
Like, sure, I understand that having specific packages for each version should make things work reliably, but the trade-off and the demand for devs to keep up is pretty huge.
Usually tracking down old versions of libraries works, except when it doesn't, of course. If the source is available, I think "just compile it" is kind of okay to be honest; it's usually not that hard, and if it is, then that's kind of a problem with that software's build system IMHO.
I compiled xv from 1994 the other day on my Linux box; I just had to make a tiny patch to fix an include (lots of warnings, but it compiles and runs).
That said, there's some room for improvement. For example pkg-config/pkgconf could automatically suggest which packages to install if "pkg-config --libs x11" fails, or some other distro-agnostic way for people to track down dependencies.
If you're shipping a binary program without source (i.e. a game, for example, which tend to be closed source) you should ship the libraries or compile things statically. Some of the older Linux games on gog.com can be a bit tricky to run on modern systems due to this.
See, that's the thing: everything on Linux is designed to work by sideloading as much as possible, depending on thirty thousand packages that must all be installed at the perfect version by apt or else nothing works. A good system if you need to get a fully featured OS running in 100 MB of disk space, which tbf is Linux's niche, but it's absolute horseshit to maintain.
Windows on the other hand tends to have flatpak-style monolithic executables, with the odd .NET Framework or C++ redistributable here and there, but that's the rare exception. Things tend to actually work when they bundle their dependencies. Hell, the average Java app ships the JRE along with it because nothing works if the wrong version is installed globally. Linux just takes that problem as a fact of life and tells you to fuck off.
In general, I'm a fan of statically linked binaries or shipping the libraries with the application, so I mostly agree with you. But I also can't deny there are advantages to the shared-library approach as well. In short, there are upsides and downsides to both approaches and no perfect solution.
In practice, however, I rarely encounter issues, except with closed-source programs that don't ship their libraries. That, I think, is mostly the fault of the vendor and not the Linux system. Of course, as a user it doesn't really matter whose fault it is if your application doesn't work, because you just want the damn thing to work. It does mean the problem (and solution) is mostly educational rather than technical. You certainly can ship binary programs that should work for decades to come: the two core components (the Linux kernel and GNU libc) take backwards compatibility pretty seriously, more or less on an equal level with Windows.
You don't even need Flatpak. A wrapper script with "LD_LIBRARY_PATH=. ./binary" gets you a long way (and still gives people the ability to use a system library if they want, so it's sort of the best of both worlds).
These days, Windows doesn't tend to have monolithic executables - look inside the installation folder of a random app, and chances are good that you'll see a bunch of DLLs aside from the CRT. Also, .NET apps are normally redistributed as a bunch of assemblies, even aside from the CLR.
The key difference is that each app bundles its DLLs for its own private use. From the user's perspective, it's essentially the same as static linking, if you treat the entire folder as "the app".
That is exactly what the OP meant by "flatpak-style monolithic executables".
In MS speak, it is an xcopy install, based on how MS-DOS applications used to be distributed, even when static linking was the only option (with overlays).
That's everything on Unix-likes with shared libraries, yes. You can statically-link or use a system like Nix or Guix that don't enforce a single library version (by ditching the "default library search path" concept), and you get the best of both worlds: space saved for most of your shared library dependencies, and separate versions for those binaries that need them. It's basically Unix with a garbage collector.
This was one of the things I found very painless with gentoo surprisingly enough. I could easily slap together an ebuild on my local overlay that matched the dependencies of the deb/rpm even if that meant some old as dirt dep that wouldn't be on my machine normally. Then just `emerge sync -r localoverlay && emerge category/package-name` like normal. Maybe 5-10 minutes at most to do and I'd have a fully managed install that'd continue to work even if I tried to install it 5 years from now.
I've tried similar on debian, ubuntu, and centos but fighting with apt or yum and their (seemingly) comparatively brittle packaging systems got very old very quickly. Not that it can't be done easily on those systems but so far I haven't managed it yet.
Nix I find can also be really nice for this, especially since flake based packages are pretty much self contained. Still a lot less pleasant compared to the portage/ebuild route though.
I disagree. I couldn't get Deadly Premonition working, but Windows recommended XP compatibility mode and then it worked!
All 25 years of code are still in Windows 11; they just hide it under a lick of paint.
If you want to play old games Microsoft is still your best bet.
There are a lot of baffling decisions by Microsoft’s Windows division, but their extreme dedication towards backward compatibility is nothing short of Herculean.
Sometimes I wish they’d split up Windows in an ultra-slow ‘Enterprise’ ring and a move-fast (think about the speed of macOS changes) ‘consumer’ ring, where they could drop much of the legacy cruft in trade for speedy/forced improvements. Think macOS going fully 64 bit, making the system image immutable, etc.
Hell, imagine if they’d allowed the Xbox One, X and S to run Windows 11 in S mode. Instant cheap performant computer for the layperson! And with it running S mode, they’d make their money through the Microsoft Store.
Unfortunately for Microsoft, consumers are as wedded to legacy apps as enterprise customers are. A lot of Steam's library immediately vanishes if you lose 32-bit support, for example.
> Windows had a big, discontinuous shift when they abandoned the DOS-centric Windows codebase in the move to XP.
You mean the move to the NT platform, which made the consumer windows desktop OS finally stable.
Compatibility features, while not perfect, were implemented to help try and get Win9x and DOS workloads to function normally. Some DOS apps ran fine on XP in compatibility mode, although I couldn't say what %.
>Windows had a big, discontinuous shift when they abandoned the DOS-centric Windows codebase in the move to XP.
Windows 11 is the latest member of the Windows NT family, which originates from Windows NT 3.1 which coincides with Windows 3.1.
Windows NT 4.0, which coincides with Windows 95, is where things start to look more familiar to us today.
Windows 2000 followed to coincide with Windows 98 and ME and is where the NT family really got consumer-aimed software working right with the integration of DirectX all the way up to DirectX 9.
Windows XP which followed 2000 is really just 2000 with some more cleanup and QoL improvements. All the foundations were laid by the time of Windows 2000.
So yes, "all the 25 years of code" are definitely still in Windows 11 under a lick of paint.
I was about to gloat: as it's so cheap to buy, I thought it was a good opportunity to test Steam Proton by selecting "enforce compatibility", which so far has worked for everything I've thrown at it (Civ V, Hitman Absolution, a bunch of other obscure Windows-only titles).
I was initially happy as the install and intro ran fine, but pressing Enter to skip the video exits the game immediately.
Maybe that's further than you would expect (Wine/Proton is incredibly good nowadays), but still, your point stands.
That wasn't true for Windows XP, which shipped with the NT Virtual DOS Machine (NTVDM). Unfortunately, this was dropped in 64-bit Windows and finally axed in Windows 10.
That mindset only really applies to the weird enthusiast OSes like Arch and Gentoo, where the need to recompile half your stuff after an upgrade is almost the point. Everything else, in my experience, has been pretty good at maintaining compatibility.
What about that classic meme of Linus (Tech Tips, not Torvalds) running the equivalent of `apt update` on Pop OS and breaking the entire Desktop Environment?
I'd be lying if I said I'd never done something similar when trying to switch from the open-source Nvidia drivers (glitchy at 4k60, at the time) to the officially provided ones.
To be clear, Linus got an error trying to install an app via the GUI, ran the equivalent terminal command, and directly overrode the warning telling him what would happen before his desktop environment broke.
It wasn’t quite as simple as just apt upgrade.
The point is that it gave him a text warning that basically said "press y to destroy your whole Desktop Environment" buried in a wall of text that you'd normally ignore. This is beyond terrible UX and would never happen outside of Linux/FOSS.
In a proprietary operating system, you just wouldn't have been allowed to uninstall the desktop environment at all. Unfortunately, you are also forbidden from uninstalling Facebook if they have a deal with your device's manufacturer.
There is a huge space between CLI-everything and a pre-infested walled garden. I have always felt that there should be more visual differences in the standard newbie-friendly tools for Linux. Many problems would be avoided with a basic graphical interface.
In my experience that's backwards; Arch and Gentoo are far better at telling you exactly what you need to stay backwards compatible and letting you do it than the fancy commercial distros are. (E.g. compare the difficulty of running the original Linux release of Quake 3 on those distros).
That philosophy extends to the kernel only - there are multiple other dependencies for running programs that may not have a stable API/ABI, or the same compatibility approach. Shared libraries like glibc may be updated, graphical interfaces may differ, search paths may not be uniform, etc. and these can all break a program.
I am always torn between the aggressiveness of MS in the marketplace at the time and the technical insecurity they displayed in trying to ensure compatibility with existing apps (vs. telling external developers to fix things).