Hacker News

The scary thing is that even though this sounds like a monstrous effort to pull off this hack, it's not out of reach for large governments. It's basically taken as fact that they have loads of these exploits sitting in their toolbox, ready to use when they have an enticing enough target.

Short of rewriting the whole of iOS in a memory-safe language, I'm not sure how they could even solve this problem. Assigning a researcher to search for 6 months only to find one bug is financially prohibitive.




The research would've been much shorter if Apple actually provided researchers with debug symbols. Or, you know, if Apple open sourced their security-critical software.

> One of the most time-consuming tasks of this whole project was the painstaking process of reverse engineering the types and meanings of a huge number of the fields in these objects. Each IO80211AWDLPeer object is almost 6KB; that's a lot of potential fields. Having structure layout information would probably have saved months.

> Six years ago I had hoped Project Zero would be able to get legitimate access to data sources like this. Six years later and I am still spending months reversing structure layouts and naming variables.


It’s intensely frustrating, because for some reason Apple thinks it’s a good idea to strip out security code from the source that they do release (months late), and they tend to strip (and until recently, encrypt) kernel code. This is what a company from the last decade might do to hide security issues, except it’s coming from the world’s largest company with a highly skilled security team. Is there some old-school manager with so much influence that they’re able to override any calls from internal and external sources? It’s gotten to the point where Apple engineers privately brag about their new proprietary security mitigations after researchers who scrounge for accidentally symbolicated kernels (thank you, iOS 14 beta) do the work to find them. Why does this situation exist?


There were some Hacker News threads the other day about Marcan's Patreon campaign for porting Linux to Apple Silicon. Everyone basically expects that Marcan will need to reverse engineer everything on his own, and my gut tells me they're right.

But, if you actually stop and think about it for a moment... isn't this situation completely bizarre? Apple Silicon Macs explicitly support booting alternate OSs, because Apple went out of their way to add a `permissive-security` option to the boot-loader. They know Linux is important—the initial Apple Silicon reveal included a Linux VM demonstration—and now a well-known and talented developer is planning to do a native Linux port, at no cost to Apple, and we all fully expect that Apple won't make any documentation available or answer any questions? And, we're probably right?

The more I consider it, the crazier it all seems. Why is Apple so private about the internals of their products? It won't affect marketing—normal consumers don't care—and I can't think of a plausible scenario where this type of information could help a competitor.

Is Apple using source code stolen from Oracle? Are they scared someone will discover an internal library written in COBOL and make fun of them? Are they worried their documentation could revive Steve Jobs as a vengeful ghost? I just don't get it.


> Why is Apple so private about the internals of their products?

Because they don't care. The extent to which they care is directly linked to the amount of money they will make from caring. They won't sell more Macs if Macs can run Linux better; but they will sell more Apple Music subscriptions if Macs keep running macOS.

> They know Linux is important

No, they know Linux is a pain in the ass. The bootloader option assuages the executives' conscience enough to be able to talk to a journalist and keep a straight face when asked about "openness" or being "hacker-friendly", stuff those 1980s-style Linux hobbyists keep talking about and nobody else gives a shit about.

Apple makes money by selling iDevices to consumers and selling Macs to enough developers to build apps for iDevices. Everything else is a bonus, and not worth spending much time on. They do the minimum and leave it as that. There is no inconsistency or secret motive. They just don't care. When they cared, in the early '00s, they did a bit more; now they do less. The attitude is the same.


> Apple makes money by selling iDevices to consumers

For now. They haven't shipped any revolutionary, "I have to buy this right fucking now!" improvements in the iDevices lineup recently, the Western market for smartphones is near saturation - and with Corona tanking the US economy for wide swaths of people, they don't have hundreds of dollars just lying around to shell out for the latest iteration.

I believe that both the last Mac Pro (horribly expensive at that, and still people kept buying it) and the new M1 lineup are a sign that Apple wants to shift attention back to the non-mobile sector - because the competition there is asleep at the wheel. Everyone uses Intel, which has managed to fuck up its lineup for many years now; Microsoft has thoroughly pissed off the privacy-conscious folks with Win10; and (judging by a random walk through a Mediamarkt) build quality in Windows laptops still hovers barely above "acceptable" - cheap plastics, tiny touchpads and abysmal screens are the norm, whereas Apple offers robust aluminium cases, giant touchpads and crystal clear, bright screens.

What I'm really excited for is when Apple decides to put an Mx chip into an iMac, paired with a decent AMD GPU. The thermals and energy profile should allow a lot more leeway for resource usage than a MacBook...


> they will sell more Apple Music subscriptions if macs keep running macOS.

The type of person who buys an Apple Silicon Mac to run Linux is not going to buy an Apple Silicon Mac to run macOS. However...

> They won't sell more macs if macs can run Linux better.

They would sell some more Macs. Possibly hundreds of thousands more. A drop in the bucket for Apple, but still money—and all they have to do to get it is answer some questions.


Even without being able to compile it I've successfully used their source dumps to debug problems in my code quite a few times (and occasionally find bugs in their code which I have to work around). Having code with comments to read is a huge step up from having to rely on decompilers.


(Quick note for others that my GP comment originally contained a paragraph about the stuff Apple does open source. I edited this out because I felt it was beside the point.)


> P.S. And what's with the stuff Apple does release as open source? Don't get me wrong, I'm glad they do it—because I'll take what I can get—but I have no clue who it's for! A lot of the code is either extremely difficult or impossible to actually compile, because it relies on internal Apple tools or libraries which aren't public

Even when it doesn't rely on anything Apple-specific, it can be unclear how to build it.

I noticed that if I ctrl-z dc, then resume it, it silently exits. I grabbed the source to see if I could build it, and then perhaps debug this.

The source is part of bc. When you extract it there is a directory containing a bc dir, a patches dir, a bc sources tarball, and a Makefile. The bc directory is the contents of the tarball with the patches from the patches directory applied.

Optimistically typing "make" does not work. It runs configure somewhere (in the bc directory, I think), decides that gcc is /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc, and decides that this cannot create executables and exits.

Maybe just going into the bc directory and running configure and make there will do the trick? ./configure works and builds a makefile. Trying to compile with that gets fatal errors, apparently due to missing <string.h> in some C files.

OK, I don't actually care about bc, so how about just trying to build dc, which lives in a subdirectory under the bc directory.

That gets a fatal error due to a conflict between "#define ptrdiff_t size_t" in the config.h that configure made, and "typedef __darwin_size_t size_t" from somewhere. Based on the comments in config.h, apparently it should only be defining that if it is not defined by the system. Commenting it out in config.h and trying again...and all the compiling steps for dc actually finish!

Alas...it then fails because it needs ../lib/libbc.a, which presumably would have been built before building dc if the bc build had worked.

Maybe if I go to ../lib and type make? Nope. In fact, the errors are identical to when I typed make for bc, because it turns out that making libbc.a is the first thing the bc make tries to do.

Tossing in "#include <string.h>" in lib/getopt.c and lib/number.c makes everything build, finally giving me a locally built dc.

Is it too much to ask that when I download the source from Apple for their version of a simple, knows-nothing-about-macOS command line utility like this, I should just be able to type "make" somewhere and have it build? Or at least have a README in the tarball that tells me what I need to do?


In this case, the top-level Makefile includes a bunch of internal junk, and the configure script thinks your system is very broken because the script is old and Xcode 12 promotes the missing-prototype warning to an error. I was able to get it to build with

  $ CC="clang -Wno-implicit-function-declaration" ./configure
  $ make


Lots of the stuff they release, I imagine, is to comply with license obligations.


In a few cases perhaps—they definitely still use some GPLv2 stuff—but it's mostly under a license that doesn't require them to release anything.


I can only speculate, but Apple seems to have very tightly coupled software and hardware. Since this coupling probably holds trade secrets (which we don't know about by definition), it seems likely to me that they are controlling access to as much of the stack as they can while still protecting those secrets.


Yes, but that doesn’t really make sense for things they have already shipped: researchers have to reverse engineer those for what seems like no reason. For example, the newest iPhones have entirely custom privilege levels that are lateral to the typical ARM exception levels and entered using proprietary instructions that their own silicon understands. This is something you can find if you load the kernel into a disassembler and poke at it a bit. But Apple doesn’t mention it at all or document it…what’s the point? Why put up such petty barriers in the face of people trying to audit this?


Likely the documentation that does exist internally would be relatively costly to extract without pulling other stuff with it.


But they go through the effort of stripping all mentions of these things from the source code they release?


I'm late to respond, but the obfuscation of code is almost certainly automated.


Apple doesn't really obfuscate their code outside of their DRM stuff–usually they just remove all the symbols and do a ⌘F for certain terms in their open source releases and strip those out. It really seems to be a manual process, since sometimes they miss things…


Wouldn't the public interest in that be obvious at design time? Why would Apple write internal docs in such a way that they could never be released?


Because you're missing the other half of the exploit market: Selling vulnerabilities for big cheques.

https://zerodium.com/program.html


I'm not sure how that applies to my points?


> Apple thinks it’s a good idea to strip out security code from the source...

Because it makes it easier for attackers to find vulns. That's why they do it. The high payouts on these platforms are evidence that it isn't as simplistic as "only defenders would look at this, why won't they release it!?"


But doesn't it work in some ways? It's not going to save them, but it seems to significantly increase the time/cost of exploiting the vulnerability. One more layer to the security system.


Obfuscating source? No, not at all. It just annoys legitimate security researchers (making them not want to deal with you) and is something that black hat bug finders largely don't care much about. Not only do they have more resources and patience, they are also more willing to use questionable methods to make their lives easier.


What makes it less of an issue for black hats? Do they have access to symbols/source code that security researchers do not/are not willing to use?

I certainly understand the frustration for legitimate researchers, and there's plenty to be said for having the source code available to make auditing easier, but in itself it seems that making a black hat take 6 months instead of 1 to create an exploit raises the skill/patience level needed and keeps them busy for a while during which they are not working on the next exploit.


Yes: black hats have much more incentive and generally larger, more focused teams to find these bugs, and they aren't concerned with the issues of buying stolen devices and source code on the black market. (If you're curious, search for "dev-fused iPhone" and "iBoot source code". The Project Zero team works from about the worst situation possible, choosing to even forgo using services like Corellium.)


Thanks for the detailed explanation!


> The research would've been much shorter if Apple would actually provide researchers with debug symbols.

I believe they're about to do this: https://www.theverge.com/2019/8/8/20756629/apple-iphone-secu...


And Google Project Zero won't get them.

https://twitter.com/benhawkes/status/1286021329246801921

> It looks like we won't be able to use the Apple "Security Research Device" due to the vulnerability disclosure restrictions, which seem specifically designed to exclude Project Zero and other researchers who use a 90 day policy.


Goddammit, a 90 day policy and reasonable rewards would strengthen their security and gain the trust of their advanced users.

For some reason this ridiculous restriction reminds me of when Apple sued Samsung because their phones had rounded corners.


Apple sued Samsung because Samsung had aspirations of being Apple.

Rounded corners are the after-the-strategic-decision legal justification.


Frankly, I think Apple sued Samsung because Steve Jobs was still CEO at the time, and he sometimes acted emotionally instead of rationally.


Advanced users that want a secure device require devices that can be reinitialized to a known state without external input.

This is no longer possible on any phone, tablet, or computer Apple sells: all require online activation with device-specific info. There is no way to put the device back into a known state offline or without Apple having an opportunity to tamper with it (or be forced to tamper with it).


> This is no longer possible on any phone, tablet, or computer Apple sells

It is still possible on all of their computers, just not their phones or tablets. Intel Macs (which are still being sold in large numbers) can always be wiped and restored from USB without an internet connection, and Apple Silicon Macs can do it if you set the boot-loader to "Reduced Security" mode.


This is a false statement. Intel macs have the T2 boot security chip, which requires online activation to be able to access the internal disk after a full system wipe. The M1, even in reduced security mode, also requires online activation after a full system wipe. I've tested this this week; if you know of something I'm missing please tell me the exact steps to take to wipe and reinstall a T2/M1 mac offline, as I am confident now that it is not possible to do so.

I would love to be wrong about this.

This is the case even if you have a full offline boot/restore USB.

I have a post coming out today about just this, and how it renders all current macs unsuitable for long term offline/airgap applications.


These are just phones that you are officially permitted to attach a root shell and kernel debugger to, like any other device that's not an iPhone. Researchers have been working around that for years by using private jailbreaks / exploits to get similar levels of access, and with checkm8/ktrw you yourself can get similar access to any vulnerable iPhone 7/8/X.

No sources, structure layouts, or symbols, so you're still stuck wading through megabytes of compiled code to reverse-engineer everything from scratch.

It's Apple drumming up absolutely nothing, and from my point of view it's mostly a PR stunt.


> It's Apple drumming up absolutely nothing, and from my point of view it's mostly a PR stunt.

Well, I don't think it's quite "nothing". Newer phones don't have access to checkm8, and getting a private jailbreak or exploit working can be non-trivial. And in some cases, researchers may need to avoid reporting that exploit to Apple in order to keep using it.

It's a good step. It's just not sufficient, especially given all the other restrictions.


> And in some cases, researchers may need to avoid reporting that exploit to Apple in order to keep using it.

And this will continue to happen until Apple just starts selling the damn things to anyone who wants them, instead of trying to gatekeep them to people who are playing by their ridiculous security disclosure rules.


Right! It would solve so many issues! Put them on an unlisted page of your online store, charge a 50% markup over a normal iPhone, make the boot screen bright red, and do something ugly and obvious with the phone's exterior.

Sure, some crazy people who aren't security researchers will probably buy them too and use them as daily drivers (I'd probably be one of them). So what? I don't understand why Apple feels the need to hold this stuff so close to their chest. Everyone in this scenario knows exactly what they're buying.


> No sources or structure layout or symbols…

Oh, that's a shame. The slide in the referenced tweet says, "advanced debug capabilities", so I'd assumed that's what it meant. I wonder what else that could mean?


The ability to attach a debugger to the kernel. No, really, that’s “advanced” for an iOS device, because normally you don’t get to do anything even close to that. You can’t even debug userspace processes that aren’t ones that you put there yourself (as a developer writing apps) on normal iPhones.


Believe it or not, open sourcing the security code is actually not a great idea. Most of the world's botnets run on Wordpress, which is open source. Most of the time, legitimate actors are not going to read through an entire code base because they have better things to do. Illegitimate actors, however, have a very high incentive to read through a widely used public code base, and they do so.


OpenBSD [0] is OSS, practices full disclosure, and is considered highly secure by... everyone.

Wordpress is a mess, but being OSS does not inherently make something less secure.

[0] https://www.openbsd.org/security.html


He could have just sent in a bug report saying that the length was not validated.

No need to dig so much if you just want to fix the problem.

But he wanted to prove something. That is a different thing.


By 'wanting to prove something', he caused the vendor to act urgently, instead of sweeping this aside as a maybe-exploitable-maybe-not bug that would get lazily patched whenever.

By 'wanting to prove something', he showed the shortcomings of multiple security mitigations, all defeated by simple bugs.

By 'wanting to prove something', he also discovered two other exploitable 0days, that wouldn't have been discovered otherwise. Those 0days were likely already in the hands of bad actors, too.

Finally, the reason he even discovered the original bug is that Apple once or twice accidentally forgot to strip function names from a binary. If this hadn't happened, that bug very likely would still be out there in the wild.

I'm not sure you understand how security research works.


This is a weird statement, since the premise of this blog post is that these kinds of attacks aren't out of reach for a single talented researcher on a Google salary. It's not out of reach for any government. Nauru, Grenada, Tonga, the Comoros --- they can all afford this.


I believe the point of SulfurHexaFluri's final sentence is that it is cost prohibitive for Apple to dedicate a bunch of employees to search for bugs in order to fix them all. That is, it's cost-effective to find 1 bug, but not to find all of them. The sentence could have been worded better.


I'd personally phrase things a bit differently: an _individual_ was able to pull this off while surrounded by screaming children. A large government, with all its resources and hundreds+ of people, would pull this off regularly and without breaking a sweat.


> Short of rewriting the whole of iOS in a memory-safe language, I'm not sure how they could even solve this problem. Assigning a researcher to search for 6 months only to find one bug is financially prohibitive.

Note that memory-safe languages won't solve security: they only eliminate one class of security bugs, which would be amazing progress, but not all of them.


The OP sounds like it was a memory-safety bug, so this is a bit pedantic.


Didn't they move WiFi drivers, among other things, into userspace in macOS Big Sur? I've heard somewhere that they're moving in the direction of a microkernel for this particular reason of reducing the attack surface.

(Yes, I know I'm talking about macOS while the vulnerability was found in iOS, but there's a lot of shared code between them, especially at levels this low.)


> It's basically taken as fact that they have loads of these exploits sitting in their toolbox, ready to use when they have an enticing enough target.

Do you have a source for this?


Google "NSA TAO" -- Tailored Access Operations. AIUI, among other things they're responsible for developing, discovering, and weaponizing exploits used to access high value targets -- sometimes through fun techniques like "Quantum Insert", a sort of faster-man-in-the-middle attack. The wealth of exploits released in the equation group hack should put all doubts to rest.


Spot on. I expect this was a designed-in feature, but if I could prove it, I wouldn't be able to do so without going to jail.



There's a market for exploits that pays pretty well. Someone is throwing millions of dollars at them, and from what we can glean from investigations, leaks, and whistleblowers, it's states that are buying them. One company in that space made worldwide news[1] by selling to governments.

[1] https://en.wikipedia.org/wiki/Hacking_Team


>[1] https://en.wikipedia.org/wiki/Hacking_Team

Also a good idea to DDG Phineas Phisher. You should turn up an interesting read on pastebin iirc.

Edit: found it on exploit-db

[0] https://www.exploit-db.com/papers/41915


First time I've seen "DDG", well done.


Thanks! I realized a while ago that I don't use Google, so why should I recommend it to others, even in passing?


OK. But you should have said Bing.


The user interface I use and recommend to others is Duck Duck Go. Why would I recommend using Bing directly?


Because that is the engine; the cosmetics are irrelevant. Unless you are choosing for the interface, but that is not the concern of 99% of people. We need a real independent search engine.


I use DDG for the fact that it provides a privacy layer over Bing.

There are of course Yacy and other more independent options, but they aren't ready for most people.


The whole NSA leaks thing proved it. They had a tool built for exploiting Windows boxes, which was leaked and converted into the WannaCry ransomware that spread globally a few years ago.


The NSO Group, the Israeli team behind the Pegasus iOS spyware, have been accused of selling it to the UAE government.

https://www.haaretz.com/middle-east-news/.premium-with-israe...


An interview with a nation-state hacker from TAO at the NSA:

https://podcasts.apple.com/us/podcast/darknet-diaries/id1296...

I believe they described their toolbox as "Metasploit on steroids". Some other episodes of Darknet Diaries also interview former and current government hackers.


Official website with full transcript + some nice pixel art: https://darknetdiaries.com/episode/10/


Who do you think the customers of ZDI, Zerodium, Azimuth and others are?



It is not just not out of reach for large governments; it's probably not even out of reach for most organizations with 5-10 people. As the author says, in 6 months "one person, working alone in their bedroom, was able to build a capability which would allow them to seriously compromise iPhone users they'd come into close contact with". Even if we assume the author is paid $1,000,000 a year, that is still only $500,000 of funding, which is an absolute drop in the bucket for most businesses.

The average small business loan is more than that, at $633,000 [1]. Hell, a single McDonald's restaurant [2] costs more than that to set up. In fact, it is not even out of the reach of vast numbers of individuals. Using the net worth percentiles in the US [3], $500,000 is only the 80th percentile of household net worth. That means in the US alone, which has 129 million households, there are literally 25.8 million households with the resources to bankroll such an effort (assuming they were willing to liquidate their net worth). You need to increase the cost by 1,000x to 10,000x before you get to a point where it is out of reach for anybody except large governments, and you need to increase the cost by 100,000x to 1,000,000x before it actually becomes infeasible for any government to bankroll such attacks.
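The back-of-the-envelope above, spelled out (the $1,000,000 salary is the comment's own deliberately generous assumption):

```python
salary_per_year = 1_000_000        # deliberately generous assumed salary
project_months = 6                 # time the researcher spent on the exploit
cost = salary_per_year * project_months / 12
print(cost)                        # 500000.0

avg_small_business_loan = 633_000  # figure cited from fundera.com [1]
print(cost < avg_small_business_loan)  # True: cheaper than an average loan
```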

tl;dr It is way worse than you say. Every government can fund such an effort. Every Fortune 500 company can fund such an effort. Every multinational can fund such an effort. Probably ~50% of small businesses can fund such an effort. ~20% of people in the US can fund such an effort. The costs of these attacks aren't rookie numbers, they are baby numbers.

[1] https://www.fundera.com/business-loans/guides/average-small-...

[2] https://www.mcdonalds.com/us/en-us/about-us/franchising/new-...

[3] https://dqydj.com/average-median-top-net-worth-percentiles/


For those who don't see why a company would want to use such exploits, consider how valuable it would be to know if a company's employees were planning to organize or strike.

There are also paranoid people in positions of power, and bureaucracies that can justify spying on employees. One of the interesting things about this lockdown was finding out that many companies put spyware on their employee-issued computers to monitor their usage.


How is it financially prohibitive to pay a researcher a salary to find a 0day like this, when the bounty programs pay $100k-$500k for 0days on the same? Source: https://developer.apple.com/security-bounty/payouts/



