Even if you have opened it, you have no guarantee that the file descriptor has not been closed since. Yes, that would be stupid of the user of the library, but many security lapses happen because people make stupid assumptions. Code to close all file descriptors on fork, for example, is fairly common, so you cannot safely assume that the file descriptor remains valid.
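To be concrete, the pattern I mean looks something like this (a minimal sketch; the descriptor range and cutoff are illustrative), and it frequently runs in a child that goes on to execute more library code rather than exec:

```c
#include <unistd.h>

/* Sketch of the common "close everything" cleanup, e.g. run in a
 * child after fork().  Any descriptor a library cached internally,
 * such as one open on /dev/urandom, is invalid after this loop. */
static void close_all_fds(void)
{
    long maxfd = sysconf(_SC_OPEN_MAX);
    if (maxfd < 0)
        maxfd = 1024;            /* fallback guess */
    for (long fd = 3; fd < maxfd; fd++)
        close(fd);               /* keep only stdin/stdout/stderr */
}
```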
> Even if you have opened it, you have no guarantee that the file descriptor has not been closed since.
You can absolutely rely on internal file descriptors not being closed. A program that closes file descriptors it does not own is as buggy as a program that calls free on regions of memory it does not own. A library cannot possibly be robust against this form of sabotage. The correct response to EBADF on a read of an internal file descriptor is to call abort.
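In code, the position is roughly this; `urandom_fd` here is a hypothetical internal descriptor standing in for whatever the library opened for itself:

```c
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical internal state: a descriptor the library opened on
 * /dev/urandom and never handed out. */
static int urandom_fd = -1;

static void get_random(unsigned char *buf, size_t len)
{
    ssize_t n = read(urandom_fd, buf, len);
    if (n < 0 && errno == EBADF) {
        /* Someone closed a descriptor they did not own.  Process
         * state is corrupt; treat it like a double free and abort. */
        abort();
    }
    /* ... short reads and other errors handled elsewhere ... */
}
```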
The "close all file descriptors" operation is most common before exec. After exec, the process is a new program that can open /dev/urandom on its own (since, as I've mentioned previously, it's a broken environment in which /dev/urandom does not exist).
> You can absolutely rely on internal file descriptors not being closed.
I've explained several times why you can't. The program that closes all file descriptors may be broken, but the big problem is that the library has no safe way of reporting this to the caller without breaking the OpenSSL API, so the developers are faced with either breaking a ton of applications or finding an alternative. And they've explained, in the copious comments in the source, why your proposed alternative (aborting) is not acceptable.
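To see the API constraint: RAND_bytes() does return an error code, but a huge body of existing callers ignores it. A sketch of a typical legacy caller:

```c
#include <openssl/rand.h>

void make_key(unsigned char key[32])
{
    /* Typical legacy caller: the return value of RAND_bytes() is
     * ignored.  If the library reported "no entropy" here instead of
     * aborting, this code would silently use an unseeded key. */
    RAND_bytes(key, 32);
}
```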
> The correct response to EBADF on a read of an internal file descriptor is to call abort.
They have no control over whether or not this will result in an insecurely written core file that can leak secrets, and this is a common problem. If the person building the library knows that the target environment does not have that problem, a single compile-time define disables the homegrown entropy gathering.
> The "close all file descriptors" operation is most common before exec.
I've seen it in plenty of code that did not go on to exec, e.g. to drop privileges for a portion of the code.
OpenSSL is crufty in part because it's full of workarounds for ancient, crufty code. LibreSSL shouldn't repeat that mistake. LibreSSL does have ways to report allocation failure errors to callers. It shouldn't even try to work around problems arising from applications corrupting the state of components that happen to share the same process. That task is hopeless and leads to code paths that are very difficult to analyze and test. You're more likely to create an exploitable bug by trying to cope with corruption than to solve an actual problem, and closing file descriptors other components own is definitely a form of corruption.
> [LibreSSL has] no control over whether or not [abort] will result in an insecurely written core file
The security of core files simply isn't LibreSSL's business. The mere presence of LibreSSL in a process does not indicate that the process contains sensitive information. LibreSSL has no right to replace system logic for abort diagnostics. If the developers believe that abort() shouldn't write core files for all programs, or for some programs, or for some programs in certain states, they should implement that behavior on their own systems. They shouldn't try to make that decision for other systems. LibreSSL's behavior here is not only harmful but insufficient: the library can't do anything about other calls to abort, or actual crashes, in the same process.
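If suppressing cores is the right policy for a given program, that program can say so itself in one call; a minimal sketch:

```c
#include <sys/resource.h>

/* One way the application, not the library, can decide that its
 * crashes must not leave core files: zero the core-size limit for
 * this process and anything it forks. */
static void disable_core_dumps(void)
{
    struct rlimit rl = { 0, 0 };
    setrlimit(RLIMIT_CORE, &rl);
}
```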
> I've seen it in plenty of code that did not go on to exec, e.g. to drop privileges for a portion of the code.