
> Historically the jump from overflow to RCE was much much shorter.

Not really. I'm about to read the article, but it sounds like return-oriented programming[1]: chaining "gadgets", small bits of existing code that you re-purpose into executing arbitrary code by manipulating the stack. It's an extremely common exploitation technique, even if not a trivial one. Who said the RCE was trivial to exploit?

Edit: I was a bit quick to dismiss. The technique is certainly interesting, although the article doesn't go into the details of how control flow is handled and where that register is stored. However, I'd like to point out that ROP is quite complex on its own: it's like using a computer with an arbitrary instruction set whose pieces you have to combine into higher-level functions, hence my original confusion.

[1] https://en.wikipedia.org/wiki/Return-oriented_programming
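For anyone unfamiliar with the idea, here's a toy Python model of gadget chaining (purely conceptual — real chains are machine-code snippets reached through a corrupted stack, but the control structure is the same: the "program" is just a list of addresses that successive `ret` instructions pop and jump into):

```python
# Toy model of ROP: after a stack smash, the attacker's "program" is a list
# of return addresses. Each gadget is a tiny snippet of *existing* code
# ending in `ret`, modeled here as a Python function.

def gadget_pop_rdi(regs, stack):    # models "pop rdi; ret"
    regs["rdi"] = stack.pop(0)

def gadget_pop_rsi(regs, stack):    # models "pop rsi; ret"
    regs["rsi"] = stack.pop(0)

def gadget_add(regs, stack):        # models "add rdi, rsi; ret"
    regs["rdi"] = regs["rdi"] + regs["rsi"]

def run_chain(stack):
    """Each `ret` pops the next gadget address off the stack and jumps to it."""
    regs = {"rdi": 0, "rsi": 0}
    while stack:
        gadget = stack.pop(0)
        gadget(regs, stack)
    return regs

# A chain that computes 2 + 3 without introducing any new code:
chain = [gadget_pop_rdi, 2, gadget_pop_rsi, 3, gadget_add]
print(run_chain(chain)["rdi"])  # -> 5
```

The point is that no new executable code is ever injected; the computation is stitched together entirely from pieces the binary already contains.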




I think what he means by "historically" is before ASLR, DEP, and other mitigations, when a buffer overflow meant you could simply overwrite the saved return address on the stack, jump into the stack, and run any shellcode. Mitigations have made exploitation much, much more complex nowadays. See for example https://github.com/stong/how-to-exploit-a-double-free
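A schematic sketch of that pre-mitigation situation (the frame layout, addresses, and 32-bit pointers below are purely illustrative; the stack frame is modeled as a byte array rather than real memory):

```python
import struct

# The stack frame of a vulnerable function, modeled as raw bytes:
# a 16-byte local buffer followed by the saved (32-bit) return address.
frame = bytearray(16) + struct.pack("<I", 0x08048ABC)  # legit return addr

def vulnerable_copy(frame, data):
    """No bounds check -- like strcpy() into the 16-byte local buffer."""
    frame[0:len(data)] = data

# Pre-ASLR the stack address was predictable, and pre-DEP the stack was
# executable, so the payload is 16 filler bytes plus a pointer back into
# the stack where the shellcode itself sits:
shellcode_addr = 0xBFFFF000
vulnerable_copy(frame, b"A" * 16 + struct.pack("<I", shellcode_addr))

saved_ret = struct.unpack_from("<I", frame, 16)[0]
print(hex(saved_ret))  # the function now "returns" into attacker shellcode
```

With ASLR the shellcode address is no longer predictable, and with DEP the stack isn't executable, which is exactly why modern exploits need the elaborate machinery discussed in the article.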


Exactly. This escape is, frankly, technically quite cool in terms of sheer creativity.

That said, my own view is that messages from untrusted contacts should be straight ascii, parsed in a memory safe language with no further features until you interact (ie, write back etc).


Safeguards should be applied uniformly to all senders. A trusted sender could have been already exploited.


It's this attitude that is diminishing our security posture. Users want GIFs, they want shared locations, they want heart emojis, they want Unicode.

The problem is that you force EVERY user you interact with to get the same treatment. Some people I let into my house unsupervised. Some as guests. Some I don't let in at all.

We need to start modeling this approach online more.

I don't think you understand how far users will go to work around safeguards if they interfere with their daily life.


> Users want GIFs, they want shared locations, they want heart emojis, they want Unicode.

I want all of those things. I use them every day. I don't trust any of my contacts to not have an infection.


ROP chains are similar in spirit but typically created by hand and thus not all that long (several dozen steps, at most). Creating a 70,000 step program via a Turing tarpit is very interesting.


> 70,000 step program

My initial assumption was that they would compile a program, take the binary output as an image and JBIG2-compress it, as I don't really get how they would use the result of the binary operations to branch to different code. Reading the article a bit more, I think they can loop multiple times over the area, by changing w, h and line dynamically over each pass, which would give them some kind of basic computer. That part is still unclear to me, but that would indeed be a lot more impressive.

There are no details on how control flow is handed over to the program either, so it's possible that they loop multiple times over the scratchpad (1 loop = roughly 1 clock cycle), especially if the memory area is non-executable and they have one shot at computing a jump pointer.

In any case, they can probably copy arbitrary memory addresses into the new "scratchpad" area to defeat ASLR (we'll see in part 2).
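As a toy illustration of the "basic computer" idea above (a hedged sketch, assuming the decoder's canvas operations give you per-row bitwise primitives like the binary operations the parent mentions — the 64-pixel row width is invented): full addition can be built from nothing but AND, XOR, and a one-pixel shift, iterated over multiple passes.

```python
MASK = (1 << 64) - 1   # one 64-pixel row of a bitmap "register"

def add_rows(a, b):
    """Addition from per-row AND, XOR, and a one-pixel shift only,
    repeated over multiple passes (one pass ~ one 'clock cycle')."""
    while b:
        carry = a & b            # an AND pass computes the carry bits
        a = a ^ b                # an XOR pass computes the partial sum
        b = (carry << 1) & MASK  # shift the carries left by one pixel
    return a

print(add_rows(25, 17))  # -> 42
```

Once you have addition and the bitwise ops, comparisons and multiplexers follow, which is how a pile of dumb per-pixel operations turns into a general-purpose (if excruciatingly slow) machine.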


iOS does not allow the modification or generation of new executable code (at least, it will not at this stage of an exploit). So they are likely creating a weird machine to patch various data and then redirecting control flow with the altered state by overwriting a function pointer.


> […] then redirecting control flow with the altered state by overwriting a function pointer.

The analysis calls this out specifically:

> Conveniently since JBIG2Bitmap inherits from JBIG2Segment the seg->getType() virtual call succeed even on devices where Pointer Authentication is enabled

Which is disturbing. Was the code even compiled for the arm64e architecture in the first place, or is it a bug in the LLVM compiler toolchain? ARMv8.3 authenticated pointers were invented precisely to preclude this from happening, yet they don't stop the exploit.


Pointer authentication cannot protect against all pointer substitutions, because doing so to arbitrary C++ code would violate language guarantees. https://github.com/apple/llvm-project/blob/next/clang/docs/P... is a good overview of which things can and can’t be signed because of standards compliance.
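A toy model of why a substituted pointer can still authenticate (everything here is illustrative: HMAC stands in for the PAC hardware, the key and 8-byte tag are invented). The signature proves a pointer was signed under some key and discriminator; it does not prove it's the pointer the caller expects, so two pointers signed under the same discriminator — as vtable pointers for related C++ types can be — are interchangeable.

```python
import hmac, hashlib

KEY = b"per-boot-secret"  # stands in for the CPU's hidden PAC key

def pac_sign(ptr, discriminator=0):
    """Model of PACIA: attach a MAC over (ptr, discriminator)."""
    tag = hmac.new(KEY, f"{ptr}:{discriminator}".encode(),
                   hashlib.sha256).digest()[:8]
    return (ptr, tag)

def pac_auth(signed, discriminator=0):
    """Model of AUTIA: recompute the tag; a mismatch would fault."""
    ptr, tag = signed
    expected = hmac.new(KEY, f"{ptr}:{discriminator}".encode(),
                        hashlib.sha256).digest()[:8]
    if tag != expected:
        raise ValueError("pointer authentication failure")
    return ptr

# Two vtable pointers signed under the SAME discriminator:
segment_vtable = pac_sign(0x1000)
bitmap_vtable  = pac_sign(0x2000)

# Substituting one validly signed pointer for another still authenticates.
print(hex(pac_auth(bitmap_vtable)))  # -> 0x2000, no fault
```

A forged tag (one not produced by `pac_sign`) would fail, but that's exactly what the exploit avoids needing: it reuses a pointer the system signed for it.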


Right, and they get there via a decompression pass over totally untrusted input from the network. This is why it's so crazy that Apple has this huge attack surface.

My own suggestion: ASCII-only messages if the contact is not in your address book and is not someone you've communicated with in your message history (however long you keep that, up to 1 year). Once you reply, these untrusted Saudi contacts can send you the GIF memes.


"Hello this is the state police, your mother just got in a car accident, please respond"


In the US, if something this serious happens, the police will physically notify the next of kin, not send you a text.


The "police" already email and call me about my overdue IRS bill and my imminent arrest. I ignore all that crap.

Never interacted: maybe ASCII only. Interacted: allow Unicode and some other features (basic emojis? photos?). Full contact? Allow the app integrations, heart-rate sensor, animated images, videos, etc.


[calls phone number]


Ah yes, let’s just force ASCII so that anyone using a language that’s not English has to suffer.


I wonder how they test the code? Maybe they write a meta VM in a testable environment (e.g. in C) and transpile it into the instructions that the library uses?


If I were them, I'd test each part of the toolchain (which I assume is a high-level compiler of some sort targeting their RISC VM) independently, as you would for any component of this type. For the actual exploit itself it's probably a regular debugger with facilities tailored to their VM.
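One plausible shape for that kind of component testing (entirely hypothetical — the 3-register instruction set below is invented for illustration): differential-test each VM instruction against an independently written reference semantics, so the interpreter and the spec check each other.

```python
import random

# A hypothetical 3-register VM of the sort the exploit toolchain might
# target. Instructions are (op, src_a, src_b, dst) tuples.
def run_vm(program, regs):
    regs = list(regs)
    for op, a, b, dst in program:
        if op == "add":
            regs[dst] = (regs[a] + regs[b]) & 0xFFFFFFFF
        elif op == "xor":
            regs[dst] = regs[a] ^ regs[b]
        elif op == "and":
            regs[dst] = regs[a] & regs[b]
        else:
            raise ValueError(f"unknown op {op!r}")
    return regs

# Independently written reference semantics for every instruction:
REFERENCE = {
    "add": lambda x, y: (x + y) & 0xFFFFFFFF,
    "xor": lambda x, y: x ^ y,
    "and": lambda x, y: x & y,
}

# Differential testing: random one-instruction programs vs. the reference.
rng = random.Random(1234)
for _ in range(1000):
    op = rng.choice(sorted(REFERENCE))
    x, y = rng.randrange(2**32), rng.randrange(2**32)
    assert run_vm([(op, 0, 1, 2)], [x, y, 0])[2] == REFERENCE[op](x, y)
print("all instruction checks passed")
```

The same idea scales up: once each instruction is trusted, whole compiled programs can be checked against a reference interpreter before being lowered into the target's weird-machine encoding.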


Suffice it to say, this exploit was not simply chaining gadgets.


Right, my bad. I've now read the article; the technique is intriguing, but I can't say much more for lack of details!



