I wish CloudFlare would stop sending me their call invitations. I have an account with them that has shared access to our company domains, because I was occasionally needed to assist with them. CloudFlare reps repeatedly e-mail me to schedule a call, even after I replied and explained that I am not the person directly responsible for our domains and asked them to stop mailing me. Whoever their rep was at the time answered that they would stop. Some time passed, and they started e-mailing again. Eventually I started putting their e-mails in the spam folder.
Either I don't understand the problem completely, or why wasn't it possible to introduce something like an 'ex' address family that would allow passing and disambiguating extended parameter format(s), including array sizes etc.? We've had these *Ex functions everywhere in the Win32 API for an eternity; why couldn't the unices do the same trick?
You can introduce new APIs and new types to resolve the "is it flexible, or is it 14 chars" question; that's more or less one of the approaches explored in the article.
They won't make existing uses any clearer or safer though, since taking advantage of them requires rewrites.
The array should not be touched, so the question is moot. The struct sockaddr type should only be used for pointers, which are cast to the correct type according to their family before they are dereferenced, with the exception that the sa_family member can be accessed through the sockaddr base.
For defining or allocating an object that can hold any address, sockaddr_storage should be used, as mentioned in the article.
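A minimal C sketch of that combination (error handling omitted; the loopback/8080 values are just placeholders): the storage object is a sockaddr_storage, and the pointer is only cast to a family-specific type after checking sa_family; sa_data itself is never touched.

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Only ever look at an address through a pointer cast chosen by sa_family;
 * sa_data itself is never touched. */
static void print_port(const struct sockaddr *sa)
{
    if (sa->sa_family == AF_INET) {
        const struct sockaddr_in *in4 = (const struct sockaddr_in *)sa;
        printf("IPv4 port %u\n", (unsigned)ntohs(in4->sin_port));
    } else if (sa->sa_family == AF_INET6) {
        const struct sockaddr_in6 *in6 = (const struct sockaddr_in6 *)sa;
        printf("IPv6 port %u\n", (unsigned)ntohs(in6->sin6_port));
    }
}

int main(void)
{
    /* sockaddr_storage is large and aligned enough to hold any family. */
    struct sockaddr_storage ss;
    memset(&ss, 0, sizeof ss);

    struct sockaddr_in *in4 = (struct sockaddr_in *)&ss;
    in4->sin_family = AF_INET;
    in4->sin_port = htons(8080);                 /* placeholder values */
    in4->sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    print_port((const struct sockaddr *)&ss);
    return 0;
}
```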
I imagine the problem they're trying to address is ensuring that everyone _does_ only use correctly cast pointers; as defined, it's legal to use that 14 char array, it's just that it's never what you're meant to do.
In my opinion, the array should be marked obsolescent and removed (not necessarily physically, but with the name gone from the member namespace). The 14 bytes are not enough for it to serve as storage for all address types, which is why sockaddr_storage exists, and nothing can be meaningfully accessed through it. Implementations could rename it to some __sa_data or whatever name, so that the size of the structure doesn't change, if that is important to them.
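Purely as an illustration of that rename idea (the __sa_pad name and the sa_family_t typedef below are assumptions for the sketch, not anything an implementation actually ships):

```c
typedef unsigned short sa_family_t;   /* assumption: a typical definition */

/* Hypothetical layout sketch: same size as today's struct sockaddr,
 * but the 14-byte array is no longer part of the public member namespace. */
struct sockaddr {
    sa_family_t sa_family;    /* address family, still accessible */
    char        __sa_pad[14]; /* reserved padding; formerly sa_data */
};
```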
Respectfully, I think you're missing the point. Windows uses idioms like that too (though usually by filling an initial field with the size of the structure in bytes), but the GP's point is that an Ex-style API would be a chance to completely eliminate kludges such as runtime type fields at the beginning of structs and move to something safer.
I don't see what that safer something would be. Type fields in structs are more or less the pinnacle of what it means to make things safer in C. :) :)
The same bind() API has to work for any kind of socket: AF_UNIX, AF_INET, AF_INET6, AF_X25 or what have you. Each kind of socket pairs with the matching address type, whose type is erased at the API level down to struct sockaddr *. But the expected address type can be inferred from the socket's type, and the appropriate network stack that gets called can check that the address's type matches.
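As a rough sketch of that type erasure (not from the article; the socket path, port and missing error handling are just placeholders): the very same bind() accepts both an AF_INET and an AF_UNIX address, each cast down to struct sockaddr * with its own length.

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
    /* An AF_INET socket bound through the generic struct sockaddr * API. */
    int tcp = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in in4 = { 0 };
    in4.sin_family = AF_INET;
    in4.sin_port = htons(0);                     /* any free port */
    in4.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(tcp, (struct sockaddr *)&in4, sizeof in4);

    /* An AF_UNIX socket bound through the very same API. */
    int uds = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un un = { 0 };
    un.sun_family = AF_UNIX;
    strncpy(un.sun_path, "/tmp/example.sock", sizeof un.sun_path - 1);
    bind(uds, (struct sockaddr *)&un, sizeof un);

    close(tcp);
    close(uds);
    unlink(un.sun_path);                         /* remove the socket file */
    return 0;
}
```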
I don't see how you'd get rid of this with a new bind_ex function, or why you would want to.
Of course we could have a dedicated API for every address family: bind_inet, bind_inet6, bind_unix, ... which is bletcherous.
Strictly speaking, I think we could drop the address family field from addresses and just assume they are the right type. The system APIs all have a socket argument from which the type can be inferred. Having the type field in the address structure is what allows generic functions that just work with addresses, e.g. an address-to-text function that works with any sockaddr.
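For what it's worth, getnameinfo() is basically that generic address-to-text function; a small sketch of calling it on an arbitrary sockaddr (the loopback/443 example and buffer sizes are just illustrative):

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>

/* One text-conversion helper that works for any sockaddr, because
 * getnameinfo() dispatches on the address family internally. */
static void addr_to_text(const struct sockaddr *sa, socklen_t len)
{
    char host[64], serv[16];                     /* illustrative sizes */
    if (getnameinfo(sa, len, host, sizeof host, serv, sizeof serv,
                    NI_NUMERICHOST | NI_NUMERICSERV) == 0)
        printf("%s port %s\n", host, serv);
}

int main(void)
{
    struct sockaddr_in6 in6 = { 0 };
    in6.sin6_family = AF_INET6;
    in6.sin6_port = htons(443);                  /* placeholder values */
    in6.sin6_addr = in6addr_loopback;

    addr_to_text((const struct sockaddr *)&in6, sizeof in6);
    return 0;
}
```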
If the goal here is to reduce attack surface or bug-prone code in the kernel, leaving the non-ex variants around unchanged wouldn't help much with that.
>At present – Thunderbird version 128.0 is only offered as direct download from thunderbird.net and not as an upgrade from Thunderbird version 115 or earlier. A future release will provide updates from earlier versions.
I am not familiar with zsh, but is it really interpreted by zsh? Because the script has #!/usr/bin/env bash in its shebang, isn't it executed by bash on your system, even if launched from zsh?
This script is explicitly a Bash script, and it is not executable by every other shell present on modern unix-like systems; the Korn shell and the Almquist shell are examples. Hence the distinction: calling it a shell script implies that it can be interpreted by any modern shell, for which the only common denominator is POSIX. This script is explicitly Bash-compatible only, not any-shell compatible.
I understand that rustup can be convenient, but I don't get why they insist on using this tool. It should be enough to distribute the toolchain and std as a zip file.
rustup makes it easy to have multiple Rust versions installed side by side, so you can test compatibility with older versions, or try beta or nightly builds (e.g. `cargo +1.77.0 test`). I think that functionality is rustup-specific and not easily replicable with tarballs or apt.
Apart from that, rustup makes it easy to add and remove optional components, libraries for cross-compilation, and to update Rust itself. These things could in theory be done with apt if Debian repackaged all of these components, but they haven't. Having one way to manage a Rust installation that works the same on all platforms makes it easier to teach Rust and to provide install instructions for software.
I think what X did is still the standard, which this article explains is pretty suboptimal. Sometimes crappy solutions stick around until someone pushes for something better.
Doesn't the CPU/GPU bottleneck, which is already assumed to be slow, actually provide the perfect opportunity for abstraction over a network protocol? Sending "what to draw" and "how" (shaders) over the wire infrequently, and issuing cheap draw commands on demand? I think GPUs make a network-first model a better fit than what was available when X was designed.
Only if everyone agrees on a central rendering model & feature set, which simply isn't the case. 2D rendering is not a solved problem; there are many different takes on it. Insisting that all GUIs on a given device use a single system is simply not realistic, which is why nobody actually uses any of the X rendering commands other than "draw pixmap" (aka just be a dumb compositor).
Sorry, but I can’t take seriously the suggestion that the PCI bus and the internet should be treated the same. We’re talking 4 or 5 orders of magnitude difference, maybe more on some specs.
It’s like saying you should use the same file access algorithms for a RAM disk and punch cards. No you shouldn’t!
> Xorgs decades of highly dubious technical decisions.
People like to say this, yet time and time again, X's design proves to be the superior one in the real world.
Some of it could use minor revisions (adding some image compression would be fine, etc), but it is hard to seriously say things are "highly dubious" compared to the competition.
As for remote backups - I use ssh with Borg and it works fine. If this is a NAS you can probably enable ssh (if it is not enabled already).
BTW, I do encrypt my remote backups, but this is a choice the author of Borg left open.
There are other issues with Borg, such as the use of local timestamps (a naive date format, no timezone) instead of a full ISO 8601 string, and the lack of a way to ask whether a backup has completed (which is a nightmare for monitoring), because the repository is locked during a backup and you cannot query it.
Please don't confuse the Internet with networks in general. Older machines are completely fine to use in private, firewall-protected networks, as long as they face only the LAN. SSH is still required to access them.
They’re the perfect target to hit when looking to pivot across/through a network though. Not to mention the risk of being hit during a ransomware event. Keeping around legacy hardware/software is the pinnacle of fuck around and find out.
Instead we'll have the company going bust from constantly having to replace critical machinery in the name of "security". Sometimes it is necessary to keep old equipment around, and it can be secured using physical methods (an air gap).
You're offering hypothetical, worst-case whataboutisms. In the real world, any shop that isn't stupid will use defense-in-depth to protect the unupgradable bits.
Security is about risk profiling and making good tradeoffs between things like cost, convenience, timeliness, and confidentiality/integrity/availability. All computer security is basically futile in the face of a sufficiently motivated attacker, so chasing perfection is wasting your time.
If you're doing home security, you don't use armed guards and reinforced steel doors, with the defense in depth of an extra-secure bulletproof safe room, because the security would cost more than the value it provides. You might use a good deadbolt though.
The same goes for computer security. Combined with approaches like air gapping, a technically insecure out-of-band management network can quickly become a dramatically less plausible avenue of exploitation than, say, unsexy things like email phishing attacks. So replacing all your servers with ones that have supported out-of-band management systems may simply not be a reasonable priority.
Whatever. That's an imaginary paranoia strawman, and the wrong kind of paranoia: the unactionable, ego-based kind. You completely ignored defense-in-depth approaches like airgapped systems and adding additional layers of protection to mitigate your hypothetical non sequitur. If you fail at those, then you don't actually understand security and are just arguing without a leg to stand on.
But sure! If I have a server I currently connect to with OpenSSH, then I certainly _could_ airgap the machine and require anyone using it to be in physical proximity to it. But don't you think that might be unrealistic in the vast majority of scenarios?
If you have a server that you can't secure properly because it only supports obsolete, known-broken cryptography, then yes, absolutely, you should airgap it or find some other way to protect it.
Or you could... not do that... and expect to be hacked, over and over.
But the airgap scenarios are very real, and they make it more difficult to just go online and grab an old ssh client that will do the job.
It seems that the argument for removing support for the old algorithms involves the need to maintain them in the new releases. This only becomes a problem if/when the code and/or regression testing is refactored. So eventually the effort required to remove support becomes less than the effort needed to continually support the old algorithms.
The OpenSSH maintainers can of course do anything they like, but removing support for legacy algorithms is basically passing the problem down to (probably less capable) users who are stuck without the ability to connect to their legacy systems.
You do recall that the source control doesn't disappear, even after support is pulled? I've built ancient versions (specifically, in the case of SSH, to get arcfour for whatever convoluted system); it wasn't a simple operation, but it was feasible, even for someone with just a general knowledge of SSH and its build toolchain.
Maintaining code also takes time and effort: smaller codebase, effort better spent. If it's too costly to just keep an ancient version of ssh around, and even too costly to pay someone to do that for you, how's it suddenly NOT too costly for the maintainers? If you're going to the lengths of having a special airgapped network of legacy systems, how do you NOT have the tools to use with those systems?
I think you missed my remark about the inability to pull an old version from within an airgapped environment. It's usually still possible, but the level of difficulty can vary depending upon security requirements. Imagine a security officer refusing to approve the introduction of an older and insecure program into a secure environment.
I think that you are making a lot of assumptions about the purpose of airgapped systems. Why would you assume that no changes or development work occurs? In my experience, there are often legacy components that are a critical part of a larger system. Also, in my experience, such environments are often segregated into smaller enclaves. Some of those may have the most up-to-date tools available.
I very much did not miss that: "how do you NOT have the tools to use with those systems?"
The hypothetical airgapped secure environment, running an old version of SSH (which only supports DSA), has no requirements for an SSH client, just "eh, bring whichever openssh you happen to have, and let's assume it works"? That's a failure to plan: if your network is airgapped, you can't expect client software in compatible versions to appear out of thin ether.
I appreciate that you're trying to drill down and improve your understanding of such environments, which you obviously do not have much knowledge of. I'm limited in how much specific information I can disclose, but I'm certain that I'm not the only one who has worked in these environments and faced these challenges.
Here's a hypothetical example of a situation closely matching some of my experiences:
A long-term support contract exists for some legacy system that cannot be updated because it is under configuration control. The contract involves peripheral development activities, which are best done with the most modern tools available. The whole environment is airgapped, and has security protocols that require security updates to the peripheral development systems, and these are done under a strict and bureaucratic review process. The legacy system interoperates with the development system via a single network connection, which is monitored by a separate entity. (The system is airgapped, but is part of a larger airgapped network, and is protected from unauthorized access even within the airgapped environment.)
So you've got a new environment talking to a legacy environment via SSH, and they need to share a common security algorithm. If a new development environment is spun up and its SSH client does not support the legacy algorithm, a long and complex delay follows: introducing the legacy SSH client software requires multi-level approvals from bureaucrats who are generally not qualified to understand the problem and are thus inclined to deny approval. The legacy client will be compared with the modern client for any change history related to security issues, which would include the deletion of these security algorithms, so the ignorant bureaucrats will assume the legacy SSH client is a security risk, and a months-to-years-long process ensues to convince them otherwise.
So, this essentially boils down to "someone else, ANYONE not me, should simplify the bureaucratic process for me (because that's where the actual issue lies), and I've picked the OpenSSH maintainers for the job. Oh, and for free, too."
You're not expecting the toolchain to appear out of thin ether, that was my misunderstanding: you fully expect volunteers to provide you with it for free, for your highly specific situation; in return, you offer...nothing? That's not a very enticing trade.
I sense there may be other ways around this, but those would a) cost you (in a broad sense; after all, the infrastructure is supposedly critical) money, and/or b) cost you (perhaps in a stricter sense) time, effort, perhaps influence. I agree that's rather inconvenient, given the alternative.
Personally, I'm able to do what is needed to make things work. My whole point was that by pushing the work from the OpenSSH dev team to downstream, the sum total of work will increase.
Exactly. There are untold tens to hundreds of millions of critical infrastructure systems that cannot be upgraded and that contain insecure, horrible SSH implementations. Defense-in-depth, through layers of other security measures and isolation, allows them to remain reasonably secure for their use until lifecycle replacement, where that is possible.
Furthermore, no one should place remote access servers directly on the internet; they should instead be placed on a private, internal network behind an infrastructure VPN jumpbox such as OpenVPN or WireGuard.
Only a few extremist developers, in control of all of their own software and with no need to interact with anything in the real world, can maintain the idealistic purity of forever running only the latest version of everything.
The openssh developers supporting outdated systems and software forever also isn't a long term solution. Why should they pay this cost, but not you (or your company)?
If you can keep unsupported hardware in operation, why can't you keep a containerized openssh image around, or maybe a VM image, or ideally a statically linked executable?
Maybe your company can hire an expert in software archival to set this up and maintain it if needed, or an extra developer to maintain an openssh fork that supports your environment.
Expecting other people (who you don't even pay) to support your outdated systems doesn't really make sense.
It seems only fair to me that if someone insists they must connect to ancient systems, they should be expected to use only-slightly-ancient software to do so. Or fork it, of course. If the team doesn’t want to be responsible for maintenance, you’re welcome to take it on.
It depends on what you think a "request flood" attack is.
With HTTP/1.1 you could send one request per RTT [0]. With HTTP/2 multiplexing you could send 100 requests per RTT. With this attack you can send an indefinite number of requests per RTT.
I'd hope the diagram in this article (disclaimer: I'm a co-author) shows the difference, but maybe you mean yet another form of attack than the above?
[0] Modulo HTTP/1.1 pipelining which can cut out one RTT component, but basically no real clients use HTTP/1.1 pipelining, so its use would be a very crisp signal that it's abusive traffic.
I think for this audience a good clarification is:
* HTTP/1.1: 1 request per RTT per connection
* HTTP/2 multiplexing: 100 requests per RTT per connection
* HTTP/2 rapid reset: an indefinite number of requests per connection, no longer bounded by RTT
In each case attackers are grinding down a performance limitation they had with previous generations of the attack. It is still a request flood; the thing people need to keep in mind is that, until now, HTTP made these floods annoying to generate.
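To put rough, purely illustrative numbers on it (not from the article): with a 50 ms RTT and a single connection, HTTP/1.1 gives about 1 / 0.05 s = 20 requests per second, HTTP/2 with a 100-stream limit about 100 / 0.05 s = 2,000 per second, and rapid reset is no longer bounded by the RTT at all, only by how fast the attacker can write HEADERS + RST_STREAM frame pairs, i.e. by bandwidth and the server's processing rate.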
I wonder why exactly this attack can't be pulled off with HTTP/1.1 and TCP RST for cancellation.
It seems that (even with SYN cookies involved) an attacker could create new connections, send HTTP request, then quickly after send a RST.
Is it just that the kernel doesn't really communicate TCP RST all that well to the application, so the HTTP server continues to count the connection against the "open connection limit" even though it isn't open anymore?
The problem for the attacker is that they then run into resource limits on the TCP connections. The resets are essential to keep the consumption from counting against those limits.
For most current HTTP/2 implementations it'll just be ignored, and that is a problem. We've seen versions of the attack doing just that, as covered in the variants section of the article.
Servers should switch to closing the connection if clients exceed the stream limit too often, not just ignoring the bogus streams.
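As a library-agnostic sketch of that idea (all names and thresholds here are made up, not taken from any particular server): count client-initiated cancellations per connection in a one-second window and give up on the whole connection once the count crosses a threshold.

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical per-connection counter: track client resets (or streams the
 * server had to refuse) in a one-second window and decide when to give up
 * on the whole connection instead of silently ignoring the excess. */
struct reset_guard {
    time_t   window_start;
    unsigned resets_in_window;
    unsigned max_resets_per_sec;   /* tuning knob; example value below */
};

static void reset_guard_init(struct reset_guard *g, unsigned max_per_sec)
{
    g->window_start = time(NULL);
    g->resets_in_window = 0;
    g->max_resets_per_sec = max_per_sec;
}

/* Call whenever the peer cancels a stream; returns true when the
 * connection should be closed (e.g. via GOAWAY in a real HTTP/2 server). */
static bool reset_guard_on_cancel(struct reset_guard *g)
{
    time_t now = time(NULL);
    if (now != g->window_start) {              /* start a new window */
        g->window_start = now;
        g->resets_in_window = 0;
    }
    return ++g->resets_in_window > g->max_resets_per_sec;
}

int main(void)
{
    struct reset_guard g;
    reset_guard_init(&g, 100);                 /* arbitrary example threshold */

    for (int i = 0; i < 150; i++) {
        if (reset_guard_on_cancel(&g)) {
            printf("cancel #%d: closing abusive connection\n", i + 1);
            break;
        }
    }
    return 0;
}
```

A real server would feed this from its RST_STREAM handling and answer with GOAWAY (e.g. ENHANCE_YOUR_CALM) rather than printf; the accounting is the interesting part.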
By request flood I mean exactly that: sending an insanely high number of requests per unit of time (per second) to the target server to exhaust its resources.
You're right: with HTTP/1.1 we have a single request in flight (or none, in the keep-alive state) at any moment. But that doesn't limit the number of simultaneous connections from a single IP address. An attacker could use the whole TCP port space to create (theoretically) 65535 connections to the server and send requests on them in parallel. That is a lot, too. In the pre-HTTP/2 era this could be mitigated by limiting the number of connections per IP address.
In HTTP/2, however, we can have multiple parallel connections with multiple parallel requests each at any moment, which is orders of magnitude more than what is possible with HTTP/1.x. But the preceding mitigation could still be implemented by applying the limit to the number of requests across all connections per IP address.
I guess this was overlooked in the implementations, or in the protocol itself? Or rather, is it more difficult to apply restrictions because the L7 protocol multiplexing happens entirely in userspace?
Added:
The diagram in the article (the "HTTP/2 Rapid Reset attack" figure) doesn't really explain why this is an attack. In my thinking, as soon as a request is reset, the server's resources are expected to be freed, so they are not exhausted. I think this should be possible in modern async servers.
> But that doesn't limit number of simultaneous connections from a single IP address.
Opening new connections is relatively expensive compared to sending data on an existing connection.
> In my thinking, as soon as the request is reset, the server resources are expected to be freed,
You can't claw back the CPU resources that have already been spent on processing the request before it was cancelled.
> By request flood I mean, request flood, as in sending insanely high number of requests per unit of time (second) to the target server to cause exhaustion of its resources.
Right. And how do you send an insanely high number of requests? What if you could send more?
Imagine the largest attack you could do by "sending an insanely high number of requests" with HTTP/1.1 with a given set of machine and network resources. With H/2 multiplexing you could do 100x that. With this attack, another 10x on top of that.
> An attacker could use the whole port space of TCP to create 65535 (theoretically) connections to the server and to send requests to them in parallel.
This is harder for the client than it is for the server. As a server, it's kind of not great that I'm wasting 64k of my connections on one client, but it's harder for you to make them than it is for me to receive them, so not a huge deal with today's servers.
With this attack, I think the problem arises if you've got a reverse-proxy h2 frontend and you don't limit backend connections because you were limiting frontend requests. It sounds like HAProxy won't start a new backend request until the number of pending backend requests is under the session limit, but Google's server must not have been limiting based on that. So: cancel the frontend request, try to cancel the backend request, and before the backend cancellation is confirmed, start another one. (Plus what the sibling comment mentioned: the backend may spend a lot of resources handling requests that will be canceled immediately.)
The new technique described avoids the maximum limit on the number of requests per second (per client) that the attacker can get the server to process. By sending both requests and stream resets within the same single connection, the attacker can send more requests per connection/client than used to be possible, so the attack is perhaps cheaper and/or more difficult to stop.
Is it a fundamental HTTP/2 protocol issue or an implementation issue? Could this be an issue at all if a server has strict limits on requests per IP address, regardless of the number of connections?