You're offering hypothetical, worst-case whataboutisms. In the real world, any shop that isn't stupid will use defense-in-depth to protect the unupgradable bits.
Security is about risk profiling and making good tradeoffs between things like cost, convenience, timeliness, and confidentiality/integrity/availability. All computer security is ultimately futile in the face of a sufficiently motivated attacker, so chasing perfection is a waste of your time.
If you're doing home security, you don't use armed guards and reinforced steel doors, with the defense-in-depth of an extra-secure bulletproof safe room, because the security would cost more than the value it protects. You might use a good deadbolt, though.
The same goes for computer security. Combined with certain security approaches like air gapping, a technically insecure out-of-band management network can quickly become a dramatically less plausible means of exploitation than, say, unsexy things like email phishing attacks. So replacing all your servers with ones that have supported out-of-band management systems may simply not be a reasonable priority.
Whatever. That's an imaginary paranoia strawman, and the wrong kind of paranoia: the unactionable, ego-based kind. You completely ignored defense-in-depth approaches like airgapped systems and adding additional layers of protection to mitigate your hypothetical non sequitur. If you skip those, then you don't actually understand security and are just arguing without a leg to stand on.
But sure! If I have a server I currently connect to with OpenSSH, I certainly _could_ airgap the machine and require anyone using it to be in physical proximity to it. But don't you think that might be unrealistic in the vast majority of scenarios?
If you have a server that you can't secure properly because it only supports obsolete, known-broken cryptography, then yes, absolutely, you should airgap it or find some other way to protect it.
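(For completeness: when the legacy algorithms are merely disabled by default in the client rather than removed outright, the usual middle ground is a narrowly scoped client config instead of a global downgrade. A sketch, assuming a hypothetical host name; the `+` syntax only re-enables algorithms the client still ships, and `PubkeyAcceptedAlgorithms` is the newer spelling of `PubkeyAcceptedKeyTypes`:)

```
# ~/.ssh/config -- apply legacy settings ONLY to the one legacy host
Host legacy-appliance.internal
    # Re-enable algorithms that are compiled in but off by default
    KexAlgorithms +diffie-hellman-group1-sha1
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
```

(Every other host keeps the modern defaults, so the blast radius of the weak crypto stays confined to that single connection.)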
Or you could... not do that... and expect to be hacked, over and over.
But the airgap scenarios are very real, and they make it more difficult to just go online and grab an old ssh client that will do the job.
It seems that the argument for removing support for the old algorithms comes down to the cost of maintaining them in new releases. That only becomes a problem if and when the code and/or regression tests are refactored. So eventually the effort required to remove support becomes less than the effort needed to keep supporting the old algorithms.
The OpenSSH maintainers can of course do anything they like, but removing support for legacy algorithms is basically passing the problem down to (probably less capable) users who are stuck without the ability to connect to their legacy systems.
You do recall that the source control doesn't disappear, even after support is pulled? I've built ancient versions (in the case of SSH, specifically to get arcfour for some convoluted system); it wasn't a simple operation, but it was feasible, even for someone with only general knowledge of SSH and its build toolchain.
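(The "build an ancient version" route is roughly this shape. A sketch only: the version number and mirror URL are illustrative, and the script just prints the steps rather than running them, since the real build needs network access, a C toolchain, and a compatible old zlib/OpenSSL.)

```shell
#!/bin/sh
# Sketch: fetch and build an old portable OpenSSH release that still
# ships arcfour (removed upstream in 7.6, so pick an earlier release).
VERSION="7.5p1"   # illustrative; pick whatever your legacy box needs
TARBALL="openssh-${VERSION}.tar.gz"
URL="https://cdn.openbsd.org/pub/OpenBSD/OpenSSH/portable/${TARBALL}"

# Printed rather than executed, so the sketch is safe to run anywhere.
cat <<EOF
curl -O ${URL}
tar xzf ${TARBALL}
cd openssh-${VERSION}
./configure --prefix=\$HOME/openssh-legacy   # keep it out of the system path
make
make install
EOF
```

(Installing under a private prefix keeps the legacy binary from shadowing the system ssh.)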
Maintaining code also takes time and effort: smaller codebase, effort better spent. If it's too costly for you to keep an ancient version of ssh around, and even too costly to pay someone to do that for you, how is it suddenly NOT too costly for the maintainers? And if you're going to the lengths of having a special airgapped network of legacy systems, how do you NOT have the tools to use with those systems?
I think you missed my remark about the inability to pull an old version from within an airgapped environment. It's usually still possible, but the level of difficulty can vary depending upon security requirements. Imagine a security officer refusing to approve the introduction of an older and insecure program into a secure environment.
I think that you are making a lot of assumptions about the purpose of airgapped systems. Why would you assume that no changes or development work occurs? In my experience, there are often legacy components that are a critical part of a larger system. Also, in my experience, such environments are often segregated into smaller enclaves. Some of those may have the most up-to-date tools available.
I very much did not miss that: "how do you NOT have the tools to use with those systems?"
The hypothetical airgapped secure environment, running an old version of SSH (which only supports DSA), has no requirements for an SSH client beyond "eh, just bring whichever openssh you happen to have, and let's assume it works"? That's a failure to plan: if your network is airgapped, you can't expect client software in compatible versions to appear out of thin ether.
I appreciate that you're trying to drill down and improve your understanding of such environments, which you obviously do not have much knowledge of. I'm limited in how much specific information I can disclose, but I'm certain that I'm not the only one who has worked in these environments and faced these challenges.
Here's a hypothetical example of a situation closely matching some of my experiences:
A long-term support contract exists for some legacy system that cannot be updated because it is under configuration control. The contract involves peripheral development activities, which are best done with the most modern tools available. The whole environment is airgapped, and has security protocols that require security updates to the peripheral development systems, and these are done under a strict and bureaucratic review process. The legacy system interoperates with the development system via a single network connection, which is monitored by a separate entity. (The system is airgapped, but is part of a larger airgapped network, and is protected from unauthorized access even within the airgapped environment.)
So you've got a new environment talking to a legacy environment via SSH, and they need to share a common security algorithm. If a new development environment is spun up and its SSH client does not support the legacy algorithm, a long and complex delay ensues: multi-level approvals are required, from bureaucrats who are generally not qualified to understand the problem and are thus inclined to deny approval, just to introduce the legacy SSH client software. That client will be compared against the modern one for any change history related to security issues, which would include the deletion of these very algorithms. The ignorant bureaucrats will assume the legacy SSH client is a security risk, and a months-to-years-long process ensues to convince them otherwise.
So, this essentially boils down to "someone else, ANYONE not me, should simplify the bureaucratic process for me (because there's the actual issue), and I've picked OpenSSH maintainers for the job. Oh, and for free, too."
You're not expecting the toolchain to appear out of thin ether, that was my misunderstanding: you fully expect volunteers to provide you with it for free, for your highly specific situation; in return, you offer...nothing? That's not a very enticing trade.
I sense there may be other ways around this, but those would a) cost you (in a broad sense; after all, the infrastructure is supposedly critical) money, and/or b) cost you (perhaps in a stricter sense) time, effort, perhaps influence. I agree that's rather inconvenient, given the alternative.
Personally, I'm able to do what is needed to make things work. My whole point was that by pushing the work from the OpenSSH dev team to downstream, the sum total of work will increase.