"-> We have not reserved a CVE for this issue as Apple is a CNA and does not
> see it as a security issue.
>
And here's my main teachable moment.
If a CVE Numbering Authority (CNA) does not grant a CVE to an issue
(whether it be due to "not a bug" or non responsiveness or whatever) there
is a simple process to deal with this. You go to the CNA's parent, a list
of CNA's is currently at:
https://cve.mitre.org/cve/cna.html
in general most current CNA's have MITRE as their parent (we're working on a
federated hierarchy but we're in the early stages), so using the form:
https://cveform.mitre.org/
to request a CVE would be your next step. For the Open Source Distributed
Weakness Filing (DWF) hierarchy each CNA and sub CNA is registered at:
https://github.com/distributedweaknessfiling/DWF-CNA-Registr...
so essentially you go to the parent and keep working your way up until
either you are satisfied, or you hit MITRE and they tell you to take a
hike, or give you a CVE.
Speaking of which if you are an Open Source project and want to be a CNA,
please contact me and chances are I can set you up in a pretty quick
timeframe (faster than I assign CVEs because creating a CNA is a much
better ROI of my time than issuing a CVE)."
- via>" Kurt Seifried -- Red Hat -- Product Security -- Cloud"
The quoted statement, "not a security issue", doesn't appear on the linked page, but via the Thread Prev link[1] I found Daniel Peebles explaining:
> Yes, we did try with them first and they told us they didn’t consider it a security issue, so we posted to oss-security after making sure Apple understood we’d do that unless they asked us not to.
To be clear, this bug was found running Nix on macOS, not iOS (just in case someone else besides me interpreted the two comments here as such).
Additional fun fact - the issue in question was not found during new development, but by running the same code that had worked just fine on previous macOS versions. So besides being a vulnerability, it's a user-visible regression in behaviour described by POSIX APIs.
I remember him beginning his work on Nix after his involvement in iPhone jailbreaking, but I might not have paid enough attention to the beginning of the Nix project.
Reminds me of when GreySyntax was trying to get Nginx to run on Linux as well. The reverse engineering / hacking community usually has all sorts of fun little goals to stretch tech past initial design.
Is Apple wrong here? I mean, this is obviously a bad thing that should be fixed, but ultimately it amounts to a DoS against the machine and does not grant any privileges or expose any data to the attacker. I can see why Apple might say it's not a "security" issue.
It is a security issue. The exposure is limited, but as folks point out, Apple forces people to build iOS binaries on machines running OSX.
It'd be quite easy to DoS commercial build services with this technique; there seems to be no way to stop it, and detecting it can be pretty difficult. Depending on how the logging for such a server is structured, it may even be hard to tell which customer did it.
This is also on the heels of a bug that's been discovered where it's possible to hang many types of system monitoring by abusing a race condition in the UNIX<->mach layer to block all process inspection tools on the OS.
Given that Apple forces us to run OSX build servers, they should take the integrity of basic syscall checking seriously. And that totally ignores the chaos one person with an agenda (or poor project security) can cause if they get code that does this into Homebrew.
After looking at it more carefully, I'm not positive that it's a failure to check privileges, although the symptoms at the time made it look like that (literally logging an unprivileged process killing a privileged one). Rather, I think it's some sort of race.
Anyway, I'm glad they fixed it now, but the response to our original bug submission didn't inspire a ton of confidence.
There are a lot of ways to DoS a commercial build service. If you let people execute arbitrary code on your machine, you better have logging in place to know who is responsible if your machine crashes.
I'd assume that most build farms build on VMs to avoid exactly this kind of problem -- building on bare metal seems pretty risky.
The SLA¹ for macOS 10.13 allows you to run up to 2 additional copies of macOS in virtual machines on your existing Mac hardware. So each Mac can run up to 3 copies of the OS in total. You may not run it on non-Mac hardware.
Well, in either case you can still run the VMs on a real macOS machine. I assume crashing the VM should not crash the host in most cases, unless there's a bug in the virtualization software, right?
A denial-of-service typically falls under the "availability" part of the "CIA triad" [0], although the seriousness can vary tremendously from one DoS to the next.
It seems to me that it's in the same category as a fork bomb: a neat trick requiring local access that will crash a machine. Deeming it a non-security bug seems entirely reasonable. As you say, it does not grant any privileges or expose any data to the attacker.
So the bug cannot really be exploited to compromise security except in a case where someone has gone to extraordinary lengths to keep even a local user from crashing the machine.
> So the bug cannot really be exploited to compromise security
Do we really need to get into the discussion of how many times someone thought a bug wasn't a security problem or was unfeasible to use in an exploit in practice, only to be proven wrong later as new techniques came about and were applied to the original problem?
The unknown unknowns are the problem here, and they are as their name suggests, hard to nail down.
Race conditions are a thing. My bet is a determined and smart hacker could find a way to weaponize this within a week at most.
I had to argue for the fix of more than one bug like this in another OS back in the day, and I take a more fundamental view of it: Does the OS have a security feature that's intended to prevent you from doing this? Does the exploit allow you to do it anyway? In this case the answer to both questions is "yes" which makes it a security bug.
Misspelling some random text in an application is generally not a security issue. A website that doesn't render correctly when the screen resizes is again not a security issue.
Sure, there are a lot of things that can become security issues, but a lot of code has little to do with how a program operates.
I meant it like this: 'If the Windows access control dialog box had "Cancell" instead of "Cancel", it's a bug in a security feature, but not an attack surface.'
Now, you can make a semantic argument that it's a security issue based on what team needs to fix it, but IMO that's a meaningless distinction for users. What's distinct about a security bug is what it allows to happen.
Sorry, I misunderstood you. In that case, the way I'd break it down is:
Is that dialog a "security feature"? Sure. (Actually, that's a stretch, because it's just lipstick over the API, but...)
Is it intended to block someone from being able to do something? No.
I'm not saying these criteria are all inclusive - for example, if the buttons were mislabeled and "Cancel" actually applied ACL changes, I'd still call it a security bug. It seems like the bug this thread is talking about clearly does meet the criteria, though. If you look at the man page for kill[1] it says unless you're root, you're only allowed to kill processes running under your user, but the bug allows a low-privilege user to kill other users' processes.
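For anyone who wants to see that rule in action, here's a minimal sketch in C (my own illustration, not from the thread; TARGET_PID is a hypothetical placeholder for a process owned by a different, non-root user, and the program is assumed to run unprivileged). Per kill(2), the call should fail with EPERM, which is exactly the check this bug appears to sidestep.

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical pid of a process owned by a different, non-root user. */
    #define TARGET_PID 1234

    int main(void) {
        /* An unprivileged caller signalling another user's process should get EPERM. */
        int rc = kill(TARGET_PID, SIGTERM);
        if (rc == -1 && errno == EPERM) {
            printf("EPERM as expected: not allowed to signal another user's process\n");
        } else if (rc == -1) {
            printf("kill failed for another reason: %s\n", strerror(errno));
        } else {
            printf("unexpected: signal was delivered\n");
        }
        return 0;
    }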
Just because a lot of bugs were thought hard/impossible to exploit and were later proven wrong does not mean every single bug is possible to exploit. Most software bugs are not exploitable. You just generally read about the ones that are.
I'm not just making a general statement about all bugs, but also a specific statement about how I think this bug could play out. Not only does it kill all processes under the user, but it also crashes the entire system. Both of those are interesting bugs, and have possible implications if you can cause them in interesting situations.
Can you cause some system process that happens to share the user with another, more interesting process to run a kill syscall?
Can you cause the system to crash while some more trusted process is in the middle of updating a configuration, password, or other data structure that will cause interesting effects after the system is restarted?
Both being able to cause other user processes to die, and to cause the system crash on demand as a user, are things I would consider individually as security problems just because they have a high probability of having unforeseen consequences. Having them together doubles the attack space (even if it does possibly create some problems with following through with a second action).
Killing all processes under the user is the expected behavior of kill(-1). That's perfectly fine. Crashing the system is the bug part.
From the manpage:
If pid is -1:
If the user has super-user privileges, the signal is sent to all
processes excluding system processes and the process sending the
signal. If the user is not the super user, the signal is sent to
all processes with the same uid as the user, excluding the
process sending the signal. No error is returned if any process
could be signaled.
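To make the behaviour in that excerpt concrete, here's a minimal sketch (my own illustration, assuming you run it as an unprivileged user on a POSIX system): the expected outcome is that only processes owned by that user get the signal, whereas the bug under discussion brought the whole machine down. Don't run it on a box you care about, since even the correct behaviour kills everything you own.

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* pid == -1: signal every process this uid is allowed to signal (see kill(2)).
         * As non-root that should be limited to our own processes; the macOS bug
         * discussed here instead took the whole system down. */
        if (kill(-1, SIGTERM) == -1)
            fprintf(stderr, "kill(-1, SIGTERM) failed: %s\n", strerror(errno));
        else
            fprintf(stderr, "SIGTERM sent to all processes owned by this uid\n");
        return 0;
    }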
I've never seen any evidence of that. Most bugs start you down rarely-thought-about paths, and if you didn't map them out beforehand, all kinds of unexpected behavior could crop up.
Doesn't that hack allow you to write to arbitrary memory? Being able to write to attacker-controlled memory is a serious problem. That's not what the kill(-1) bug lets you do though.
The attack was a buffer overflow on handling objects in the "saved item" area at the top. By manipulating objects as done in the video, one can set RAM up as needed to construct the "program". Then when the overflow is executed, it jumps to the invalid place in memory, thus executing the code you seeded.
This is also done in Pokemon Yellow with a link cable. The last time I saw this being done, a TAS hacker was able to inject a TCP stack over the link cable, build an IRC client on top of it, and chat from the Game Boy.
And I realize that's not the immediate action that bug has, regarding kill(-1). But I've seen what people thought were innocuous bugs that ended up being "gimmee root" kind of bugs.
I'd say, at that point, it becomes a security issue. They'd probably agree.
Just because it isn't categorized as a security issue doesn't mean their engineers aren't looking for solutions, even if they're publicly trying to downplay the issue. And just because it's not a security issue today doesn't mean it can't be escalated into one when we know more about the exploit.
They're not dumb. In fact, I'd say they're acting with more logic and measure than the alarmists in this thread.
They may well be treating it intelligently inside the org (which is to say they may not be discounting and ignoring it entirely), which is a good thing. My response was more aimed towards the people here that took a less nuanced view, and assumed it wasn't exploitable or a security problem just because that's how an engineer initially categorized it.
Every bug that causes the system to malfunction should be considered a possible security problem. The difference is how much severity you rate it and how quickly you decide to address it.
It's the "So the bug cannot really be exploited to compromise security..." point of view I find dangerous. Historically it's led to quite a few problems.
There are ways to protect against a fork bomb from an unprivileged user though: you can set per-user limits on the number of processes or on memory use. If the OS didn't provide these mechanisms then yes, I'd consider it a security issue.
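As a rough illustration of the kind of mechanism I mean (a sketch only, assuming a system that honours RLIMIT_NPROC such as Linux or macOS; real deployments usually set this system-wide, e.g. via /etc/security/limits.conf or launchd, rather than per-program, and 256 is just an illustrative number):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        /* Illustrative cap: limit this user to 256 processes so a fork bomb
         * started from here runs out of slots instead of exhausting the box. */
        struct rlimit rl = { .rlim_cur = 256, .rlim_max = 256 };
        if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
            perror("setrlimit(RLIMIT_NPROC)");
            return 1;
        }
        /* From now on fork() fails with EAGAIN once this uid owns 256 processes. */
        printf("per-user process limit set to %llu\n",
               (unsigned long long)rl.rlim_cur);
        return 0;
    }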
Isn't a forkbomb the equivalent of a DoS attack though? So even if it doesn't grant any privileges or expose data, it is still part of the traditional security concerns.
This is not a simple DoS as in a service hangs. This is a reliable kill and potentially restart, making it possible to use fault injection attacks (not 100% reliable exploits) and exploit start-up race conditions.
"-> We have not reserved a CVE for this issue as Apple is a CNA and does not > see it as a security issue. >
And here's my main teachable moment.
If a CVE Numbering Authority (CNA) does not grant a CVE to an issue (whether it be due to "not a bug" or non responsiveness or whatever) there is a simple process to deal with this. You go to the CNA's parent, a list of CNA's is currently at:
https://cve.mitre.org/cve/cna.html
in general most current CNA's have MITRE as their parent (we're working on a federated hierarchy but we're in the early stages), so using the form:
https://cveform.mitre.org/
to request a CVE would be your next step. For the Open Source Distributed Weakness Filing (DWF) hierarchy each CNA and sub CNA is registered at:
https://github.com/distributedweaknessfiling/DWF-CNA-Registr...
so essentially you go to the parent and keep working your way up until either you are satisfied, or you hit MITRE and they tell you to take a hike, or give you a CVE.
Speaking of which if you are an Open Source project and want to be a CNA, polease contact me and chances are I can set you up in a pretty quick timeframe (faster than I assign CVEs because creating a CNA is a much better ROI of my time than issuing a CVE). ."
- via>" Kurt Seifried -- Red Hat -- Product Security -- Cloud"