CVE-2021-22555: Turning \x00\x00 into 10000$ (google.github.io)
344 points by defect on July 15, 2021 | 71 comments



Manual parsing of complex binary formats (in this case a netfilter control protocol) in hand-rolled C code in ring 0. An approach from the innocent 1990s.

Quipping aside, an interesting thing is that this used to be callable by root only, and even in the early days of putting things behind more fine-grained capabilities, CAP_NET_ADMIN probably wasn't taken very seriously as representing untrusted users. Which begs the question of whether it would be more secure to keep these things root-only and make people do the hard and dangerous part of untrusted input handling in userspace, where it's also easier (and not forbidden by kernel coding style rules...) to use safer PLT techniques to do it.


Obligatory: could this have been done in Rust and work the same without the vulnerability? Or are there some problems that would make it unreasonable to do?


There are very few things C can do that Rust can't, but there are lots of things you probably shouldn't use Rust for: this being one of them. On a purely speculative level though? Sure, it's totally possible as long as you're willing to break out a few unsafe blocks.


Why is Rust a "shouldn't" here? (And usually, parsing doesn't require any unsafe blocks, though it's not 100% clear to me exactly if this would or would not require it if things were written in Rust. It also depends on how much is... it's not a simple yes or no, which is one reason why I am curious about your opinion.)


I'd imagine that portability along with avoiding the need to add a significant dependency to the kernel's core toolchain (rustc, llvm) are two reasons. Rust's involvement in the kernel is mostly relegated to less-essential modules for a reason.


I wonder how these vulnerabilities would be treated if the researchers didn't bother to find an exploit for them?

The description of the exploit is so detailed that it gives the impression that discovering the original vulnerability was fairly quick and simple, and the real work is in building a functioning exploit to wriggle through the discovered kernel flaw.

If the researchers were to just stop once they discover the vulnerability, and immediately report it to the kernel security team, would it be treated with the same level of seriousness? Could a CVE with the same severity be issued, or would there be arguments over whether or not the bug is exploitable?


In general, to be safe you should treat all memory corruption bugs as leading to code execution. Now, it may be difficult to exploit the bug, but more often than not "probably unexploitable" bugs end up being exploitable; even bad primitives like one-byte overwrites or wild copies. Of course, without an exploit you can always just say "ok this is not exploitable, I'm going to ignore it", but such an outlook is generally frowned upon now.


Yep, this is a big problem. In general, non-experts are impressed by laboriously refined exploits, not vulnerability discoveries, and even on HN a lot of people get those two concepts and terms mixed up, which muddies the issue even more.


The vulnerability would be buried. Greg is extremely, consistently reluctant to issue CVEs without security engineers bending over backwards.


> The vulnerability would be buried. Greg is extremely, consistently reluctant to issue CVEs without security engineers bending over backwards.

From https://kernel-recipes.org/en/2019/talks/cves-are-dead-long-... I'd paraphrase his philosophy as kernel developers should fix bugs (without requiring they be security-related or worrying about CVE assignment), users should run a recent stable/long-term kernel.


LTS is a noble effort but leaves quite a lot to be desired, and doesn't really change the fact that Greg (and others) have multiple decades of history hiding vulns and pushing back on CVEs.


Have you watched the talk? Greg talks a lot about the flaws with CVEs.


Yes, I have.


Nitpick: CVEs don't have severities, and it's somewhat arbitrary whether a CVE ID gets assigned for a bug or not. They are basically just handles, so that if there is one, people can use it to identify which potential vulnerability they are talking about.

There are some related systems (CWE, CVSS, and probably more) that link to CVE IDs, and there's some scoring, but it's less widely used and more contested as a system.


It was. The first step to finding bugs in C (and C++) code always starts the same way: "...inspired me to grep for memcpy() and memset() in the Netfilter code. This led me to some buggy code."


Does C++ code even use memcpy these days? An honest question.


Surely.

It is part of ISO C++, and not everyone agrees with C++ safe coding practices.


In practice it's much easier to use assignment or copy construction, thus avoiding the dangerous memcpy. Same goes for those memsets and bzeros beloved of C programmers: just make the class do it for you; it always works, cannot be forgotten, cannot be done wrong, and is just much easier to begin with.

Or, if you have a C struct you need to interface with, just declare it as 'foo myfoo = {0};'. That also zeroes out the whole thing, and the compiler won't get the size wrong.
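For illustration, a minimal sketch of the contrast (the struct and function names here are made up):

    #include <cstring>

    // Hypothetical C-style struct we need to interoperate with.
    struct foo {
        int id;
        char name[32];
    };

    // The C way: sizes are maintained by hand and can silently go wrong.
    void c_style(foo *dst, const foo *src) {
        std::memset(dst, 0, sizeof(*dst));
        std::memcpy(dst, src, sizeof(*src));
    }

    // The C++ way: the compiler knows the size and zeroes/copies for you.
    void cpp_style(const foo &src) {
        foo zeroed = {};    // value-initialization zeroes every member
        foo copy = src;     // copy construction copies exactly the right bytes
        (void)zeroed; (void)copy;
    }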


I fully agree.


memcpy is the only defined way to type pun in C++. Anyone intentionally avoiding it probably has UB instead.


Starting with C++20, std::bit_cast is also an option.

https://en.cppreference.com/w/cpp/numeric/bit_cast
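As an illustration, a minimal sketch of both defined ways to pun a float's bits into an integer (assuming a 4-byte float):

    #include <bit>       // std::bit_cast, C++20
    #include <cstdint>
    #include <cstring>

    // Pre-C++20: memcpy between objects of the same size is the defined way.
    std::uint32_t bits_via_memcpy(float f) {
        std::uint32_t u;
        std::memcpy(&u, &f, sizeof u);
        return u;
    }

    // C++20: std::bit_cast does the same thing and is checked at compile time.
    std::uint32_t bits_via_bit_cast(float f) {
        return std::bit_cast<std::uint32_t>(f);
    }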


So you think it's too much work for $10k? I wouldn't rule out that the motivation was simply the desire to nail it beyond doubt and secure the bounty.


It should be at least as profitable (per hour) to find and fix kernel out-of-bounds writes as it is to write full exploits. The risk is you're doing legwork for bad actors. Of course the exploit is published after the fix, but not all devices are fixed by that time.


Didn't see a link to the fix. Here it is: https://github.com/torvalds/linux/commit/b29c457a65114359601...


I'll never not be impressed by the people who can find stuff like this


I'm doubly impressed by the clarity of the explanation.


Find stuff and give it away for very little money.

Such a vuln could sell for much more.


Don't forget to subtract the cost of your criminal trial defense.


What criminal trial? Security research is perfectly legal to perform and to compensate.


The access I have at work is extremely valuable in that same market. Should I sell that too?


"Should"? Morally no. But you could likely make a lot of money if you found the right buyer.


I'm sure a security eng at Google earns more than enough.


1 - access to source code

2 - grep for strings, arrays and memory functions from C

3 - even if you are not a security expert, maybe you will be able to find some possibilities


This is entirely reductive.

I have written a substantial enough C codebase used in the real world that uses C string handling functions. I did that because the codebase needed zero dependencies other than libc. Yet as far as anyone who's looked at the code can tell, there are no vulnerabilities because I learned how to use realloc() and grow the allocation if needed before calling the string handling functions. It's not hard; it's just busywork.
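Not the parent's actual code, but a minimal sketch of that grow-the-allocation-first pattern (all names are invented for illustration):

    #include <cstdlib>
    #include <cstring>

    // Hypothetical growable string buffer, illustrative only.
    struct buf {
        char  *data;
        size_t len;   // bytes used, not counting the NUL
        size_t cap;   // bytes allocated
    };

    // Grow the allocation first, then let the copy happen into known-good space.
    int buf_append(struct buf *b, const char *s) {
        size_t n = std::strlen(s);
        if (b->len + n + 1 > b->cap) {
            size_t newcap = b->cap ? b->cap : 16;
            while (newcap < b->len + n + 1)
                newcap *= 2;
            char *p = static_cast<char *>(std::realloc(b->data, newcap));
            if (!p)
                return -1;        // old allocation is still valid on failure
            b->data = p;
            b->cap = newcap;
        }
        std::memcpy(b->data + b->len, s, n + 1);  // includes the terminating NUL
        b->len += n;
        return 0;
    }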


Has the code been fuzzed and subject to static analysis as well?


Yes. Absolutely. [1]

I use scan-build, but I also used to use Coverity until they had that vulnerability a few months back.

[1]: https://git.yzena.com/gavin/bc/src/branch/master/manuals/dev...


Open a C source file, read five minutes, done. The existence of C-style strings by itself already guarantees that any C code will be 50% memory access exploits. Sadly, instead of adopting one of the hundreds of sane string types that have been around as long as C if not longer, the C standards committee only did the C equivalent of mysql_real_escape_string by introducing dozens of overloads that let you specify a "string size" yourself, without any ability to ensure that the provided size is correct. The only good thing about that is that they cannot mess up anything else while they work on the C standard.
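To make the complaint concrete, a minimal sketch of how a caller-supplied "string size" goes wrong (the buffer sizes here are made up):

    #include <cstring>

    void pitfall(const char *user_input) {
        char small[8];

        // The "sized" variant is only as safe as the size you hand it:
        // nothing ties the 64 to sizeof(small), so this can still overflow.
        std::strncpy(small, user_input, 64);

        // Even with the right bound, strncpy may leave the result unterminated,
        // so termination also has to be done by hand.
        std::strncpy(small, user_input, sizeof small);
        small[sizeof small - 1] = '\0';
    }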


You're not just finding functions, but also figuring out how to affect memory, bypass security controls, and many other things to get a stable exploit.

I kinda feel like you and the other poster suggesting to just search source code learned basic buffer overflows from the 90s and called it a day.


In general, and not talking about the Linux kernel, you would be surprised how much enterprise code, written in C, is susceptible to basic buffer overflows from the 90s.

Especially because the people that wrote it use basic C coding patterns and tooling from the 90s.

Or do you think the 70% figure all comes from code that is highly technical and hard to exploit?


>read five minutes, done.

I mean it's obviously harder... I agree with the point you're making but finding an exploit like this is way harder than just "reading the source for 5 minutes".


> While the vulnerability was found by code auditing, it was also detected once by https://syzkaller.appspot.com/bug?id=a53b68e5178eec469534aca..., however not with a reproducible C code.

This made me pause. I had naively assumed (well, actually, never thought about it) that fuzzing would always expose a clear and obvious error path, but apparently there's a lot of manual digging required to find the error mode?


It depends on the bug. syzkaller does an excellent job finding race conditions, but it can be difficult to generate a reliable reproducer for them. It often succeeds nonetheless. In other cases there can be a wide gap between the proximate and root causes of a crash. For instance some system call bug might corrupt memory in a way that only results in a crash some time later, when some asynchronous task runs, in which case it's also difficult to find a reproducer. Sanitizers can help identify such bugs earlier and so reduce the amount of manual analysis needed in the absence of a reproducer.

I'm not sure what happened in this case. The linked report does indeed have an associated reproducer.


Perhaps the difference between merely observing a crash while fuzzing and being able to exploit it for RCE or whatever?


Did this line get removed from the blog post?


Sorry, it's from [0] linked further down in the comments. I didn't notice that when posting, or I would have made my comment a reply to that post [1].

[0] https://github.com/google/security-research/security/advisor...

[1] https://news.ycombinator.com/item?id=27842381


For folks who are operating at this layer of bounds and overflows and such, are you relying entirely on a mental visualization of the data structures when reasoning about them? Or does this kind of exercise require some sketching etc? I find it to be quite challenging to explore these topics without having to draw on paper, so I was curious as to whether this is something that you eventually graduate from, or whether sketching things out remains a large part of the process.

Are there some tools that help this kind of thought process? Or do you have to use drawing primitives - rectangles, etc to do it?


There's absolutely nothing wrong with drawing it out on paper - why give yourself a hard time by trying to do without?

I occasionally write it out even if it's dead simple if/else logic, but nested.


I actually appreciate seeing someone draw something complex out on paper because it shows they care enough to make sure they get it right.


The old Microsoft "Behind the Code" video series always asked their guests (Microsoft employees who distinguished themselves with technical accomplishments in their careers) to draw and explain their favorite data structure. I loved that bit of the shows. (If you haven't seen them I would highly recommend them.)



I often have loaded memory in a hex editor with most of the data types within marked off so you can visually see the separate elements. Like, if you know four bytes are an integer, they get colored one way, etc. I am not much of a drawer or whiteboarder.


Would the following steps prevent this exploit, and if yes, why haven't they been implemented?

1. When spraying, they make use of the fact that the same arena is used for all structs of similar sizes. This allows them to fill holes in arenas with arbitrary data, creating fake structs. Why not have an arena per struct, or at least per subsystem?

2. Have some secret tag stored before each struct allocated in the arena. When freeing, check that the tag is still intact. This should detect if there was a write spanning multiple adjacent structs.

3. I didn't quite understand how they managed to create the malicious release function. They have control over a buffer where they can write code, but the memory page containing that buffer isn't executable, right?


> 1. When spraying, they make use of the fact that the same arena is used for all structs of similar sizes. This allows them to fill holes in arenas with arbitrary data, creating fake structs. Why not have an arena per struct, or at least per subsystem?

This would help, but it's hard to do for C: there's no type information when you do a malloc. XNU has a lot of C++ code in it and just got separated kalloc heaps (in iOS 15), so it's certainly something that can be done, but I'm not sure if Linux can adopt this easily.

> 2. Have some secret tag stored before each struct allocated in the arena. When freeing, check that the tag is still intact. This should detect if there was a write spanning multiple adjacent structs.

I'm not familiar with kernel allocator hardening, but the post mentions some kind of freelist protections. In general, you can detect some heap corruptions, but not all of them; the better you get the worse performance you have, usually.
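To make the question-2 idea concrete, here's a userspace-flavoured sketch of a per-object tag checked at free time (purely illustrative; the kernel's actual freelist hardening works differently, and every name here is made up):

    #include <cstdint>
    #include <cstdlib>

    // Would be a per-boot random value in a real hardened allocator.
    static const std::uint64_t kSecretTag = 0xdeadbeefcafef00dULL;

    // Reserve space for a tag in front of every object.
    void *tagged_alloc(std::size_t n) {
        auto *p = static_cast<std::uint64_t *>(std::malloc(sizeof(std::uint64_t) + n));
        if (!p)
            return nullptr;
        p[0] = kSecretTag;
        return p + 1;
    }

    // A linear overflow out of the previous object clobbers this tag first,
    // so checking it on free catches that class of corruption (at some cost).
    void tagged_free(void *obj) {
        if (!obj)
            return;
        auto *p = static_cast<std::uint64_t *>(obj) - 1;
        if (p[0] != kSecretTag)
            std::abort();
        std::free(p);
    }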

> 3. I didn't quite understand how they managed to create the malicious release function. They have control over a buffer where they can write code, but the memory page containing that buffer isn't executable, right?

No new code is introduced; the function pointer is pointed at a JOP chain.


Bravo, just amazing. Great detail in the explanation.

I can hardly imagine how much time and effort someone must put into these kinds of things.


No logo, cool name, or vanity url. Clearly not worthy of attention. /s


Is it just me, or is this super similar to Dirty Cow [0] in terms of severity? Insane find!!

[0]: https://en.wikipedia.org/wiki/Dirty_COW


iiuc dirty cow did not require CAP_NET_ADMIN to be exploited, and this one does.

That's a massive difference since CAP_NET_ADMIN is somewhat rare (i.e. you can't get it from a random android app, but a random android app could have exploited dirty cow)


I get that the terms of the contest stipulated payout limitations, but $10,000 really seems like chump change for this bug. Bypasses all protections, executes arbitrary code. That's worth a lot in the right hands.


> $10,000 really seems like chump change for this bug. Bypasses all protections, executes arbitrary code. That's worth a lot in the right hands.

I'm not sure you understand the actual severity of this exploit. I personally am not going out of my way to check if any of my servers are patched for this or verify my kernels are up to date.

This exploit doesn't impact me, or most people. Why? It requires CAP_NET_ADMIN to exploit. Who has CAP_NET_ADMIN? root, root in a container, or any user on a machine with unprivileged user namespaces enabled.

Many kernels still turn off unprivileged user namespaces, and even with them enabled, it also requires running untrusted binaries on your machine.

This isn't remotely exploitable. I don't run untrusted binaries or containers on any of my machines. I feel pretty safe.

It also doesn't "bypass all protections". If you run a container where the user doesn't have CAP_NET_ADMIN, they won't be able to exploit this. If you turn off unprivileged user namespaces, an unprivileged user on the host won't be able to get CAP_NET_ADMIN to exploit this.

Why do you think this is worth a lot? What are "the right hands" and what could they do with this?


Imagine all of the chumps out there having sex for free when they could make good money at it.


To quickly identify affected and patched kernel versions see

https://github.com/google/security-research/security/advisor...


TheFlow is also the person behind many of the PS Vita exploits. Amazing stuff!


Our regular reminder that the union of lifetimes of all known kernel exploits covers the entire history of Linux. There has never been a time during which your Linux systems were not vulnerable to takeover, somehow.


They're computers that are powered on, so yes, they have vulnerabilities. Do you have a point?


Am I understanding correctly that the author has scored a bounty of $10K? If yes, where did it come from in this case?


https://google.github.io/kctf/vrp.html

Looks like a Kubernetes-specific CTF sponsored by Google. And, yes, they won $10k, but donated it to charity, in which case Google matches the donation (https://g.co/vrp):

>We understand that some of you are not interested in money. We offer the option to donate your reward to an established charity. If you do so, we will double your donation - subject to our discretion. Any rewards that are unclaimed after 12 months will be donated to a charity of our choosing.

So they won $20k for charity, essentially.


Why does a Google security engineer get a bounty/reward for finding a security issue? Isn't that supposed to be his job?


Vulnerabilities once you have access to the machine are not really worth spending time on because servers should rely on shell hardening and clients should not be trusted anyway.

What we need are ways to avoid the kernel for networking memory. I figured Oracle would have realized this for Java by now, but they are dragging their feet!


> Vulnerabilities once you have access to the machine are not really worth spending time on

That's absolute nonsense. There are so many platforms, Android/iOS, where applications are compartmentalized: local privilege escalation (LPE) vulnerabilities are what let a rogue application (or remote code execution into one, such as a browser or chat app) take full control of the device and its data. Sometimes LPEs are needed to achieve some level of persistence. In VMs, a guest LPE is frequently needed to directly interface with (and exploit) the emulated/paravirtualized devices and VM-escape into the host OS.

Yes, it is a big deal.


What do you mean by "shell hardening"? In general the shell is not really a security critical component that would give big payoffs to harden.


I presume he meant "defend the perimeter, and they never get execution, so there is no need to defend the inside against privilege escalation".

The opposing line of thought is defense in depth. The idea being that "a hard shell but soft and mushy on the inside" is a fragile setup and only needs one thing to go wrong.



