Any competent malware developer must have already figured out how to exploit this the first time around. Now that every single one of those malware developers has learned it is still exploitable, the payload they've spent the past month perfecting can now be deployed in the wild.
So, can someone explain why a disastrous worm hasn't already swept the globe and infected 99% of Android devices on the planet within ten minutes of being released in the wild?
1. Text payload to victim
2. Payload executes on victim's phone and texts itself to all of the victim's contacts
3. Repeat
Assuming the average Android phone owner has 20 contacts who also have Android phones, and assuming that texting the payload to those 20 people takes two minutes, the infection would spread exponentially: each two-minute round multiplies the infected population by 20, so it would take only eight rounds (about sixteen minutes) for the initial text to result in the infection of 10 billion devices worldwide.
Why am I not currently being bombarded with MMS video texts from infected devices? It frankly seems a bit miraculous. Did Google set up an emergency arrangement with all of the carriers to block suspicious video texts so this wouldn't happen?
It's not often that Android's fragmentation is touted as a benefit, but the fact that Android isn't a monoculture is certainly one of the hurdles to developing a worm capable of mass deployment. Supposedly the varying implementations of the Android media services mean a libstagefright exploit may or may not end up with root access.
Now, if this were an iOS zero-day exploit...
[Actually, it'd be trivial for the carriers to filter the payload, so I don't reckon SMS/MMS is ever going to be a viable transmission vector.]
Yes, I believe the payload would have to be tailored pretty much exclusively to each type of device. But I doubt that would stop anyone from trying. Company phones are often the same make and model; someone with a particular target in mind could probably put in the hours.
Do carriers already have a mechanism in place to trigger precautions when a message matches worm-like patterns (similarity, mass transmission)? Or would they have to implement one after it's too late?
1. A worm would be highly visible and easily blocked by carriers
2. There's no easy way to make more money from a mass worm than you could make from lower-profile, more-targeted attacks. You could ask a billion people to send you a dollar but that'd leave a paper trail anyone could follow.
3. The more widespread it is, the more pressure there is on vendors & carriers to ship patches for old devices
>So, can someone explain why a disastrous worm hasn't already swept the globe and infected 99% of Android devices on the planet within ten minutes of being released in the wild?
There are a couple of reasons:
1) Just because you have an exploit doesn't guarantee you'll be able to execute code, because you still need to bypass ASLR. The PoCs released so far do not do this.
2) Infecting phones with malware is very rare. The "tech pundits" like to scare the public, but the reality is that smartphones are rarely infected. Besides, the people that write and distribute malware are too busy infecting Windows machines.
From what I've read (and I could be wrong), you could be infected and not even know it. The attacker has the opportunity to "clean up" the MMS so you never even get a notification, because the bug triggers before any of that kicks in.
It can't remove the system libraries, but it can decide not to use them. Yes, this means if you were to open it with any app that isn't protected then you will still be vulnerable, but within the app you're safe.
Uh, no, it disables the Stagefright library. Disabling MMS auto-retrieval is something else that I also did, but QKSMS also removed the app's ability to interface with the Stagefright libraries.
Summary: A little over two weeks ago, it was publicly disclosed that MMS messages can cause Android phones to decode video with libstagefright, which is a C++ library with vulnerabilities and insufficient sandboxing, leading to remote code execution without user interaction. Today, Exodus Intelligence is reporting that the patch to fix one of these vulnerabilities does not, in fact, fix it. Thus, all Android phones are still vulnerable.
You can partially mitigate the risk by disabling auto-downloading of MMS messages in whichever app you have set to handle text messages, such as Messaging or Hangouts. If you have not done so already, this is urgent. Furthermore, you should assume that auto-downloading of MMS messages will not ever be safe, no matter how many individual security fixes are applied, until this component of Android is significantly re-architected.
I'm still unclear on the sandboxing assertion. The mediaserver in current versions is, in fact, pretty well isolated. I've had to work around and defeat lots of this protection for debugging purposes in my professional life, so I know it's there. IIRC you can't read system or app data outside the sdcard area, you can't write anywhere persistent. You can open network sockets and make binder requests, which is not trivial but again rather different from a remote root.
That this is an exploitable bug in libstagefright seems to be uncontested. But AFAICT there are no assertions of an actual sandbox breakout or a practical payload that does something more than e.g. send spam. Are there? Link?
service media /system/bin/mediaserver
class main
user media
group audio camera inet net_bt net_bt_admin net_bw_acct drmrpc mediadrm
ioprio rt 4
You'd need another exploit to escalate past SELinux (and, I think, to send MMSes for a self-propagating worm). Though given Android's abysmal patching, most Android kernels are also terribly outdated...
That's not an SELinux policy. That's just a service statement for Android's init, defining the process's supplementary groups. I can see no explicit seclabel.
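For comparison, a service that is explicitly confined in its init stanza would carry a `seclabel` option; alternatively, init can derive the domain from the label on the service binary in file_contexts. The lines below only illustrate the two syntaxes and are not quoted from any particular device:

```
# Explicit label, directly in the init service stanza:
service media /system/bin/mediaserver
    class main
    user media
    seclabel u:r:mediaserver:s0

# Or via a file_contexts entry, with the domain transition
# computed from the label on the executable:
/system/bin/mediaserver    u:object_r:mediaserver_exec:s0
```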
The policy is linked. The service definition was probably quoted to show other aspects of its isolation and privileges, but the comment is not written very clearly.
I could have sworn I read in the OP a few minutes ago, that mediaserver runs as "system" on some devices, which would probably be worse. I'm not finding it now, so maybe it was edited or I'm mis-remembering.
Anyway, vendor customization diverging from AOSP makes it hard to say.
You are correct. It runs with system and graphics privileges on many devices. It is a native service started at system init and is automatically restarted if it crashes.
I believe mediaserver also has microphone and camera access which seems scary, though I don't know what harm a crook could do with it. Spy agencies might be more interested.
In general though I totally agree, the media hype has been irresponsibly overblown.
Yes, audio and camera are there, which means that you also have access to the graphics memory managed by gralloc/hwcomposer too (though surfaceflinger itself is a separate process -- whether mediaserver has access to all such buffers or just ones passed to it from elsewhere is platform-dependent I think), as well as enough of the display driver to spit audio streams out via HDMI, etc...
All that kernel code tends to be complicated and poorly audited, so it would be a plausible hole. But that's not a "sandboxing" problem exactly.
This specific exploit can be initiated whenever metadata for an MP4 file is processed. Disabling auto-download of MMS is an important first step workaround. Be cautious with any untrusted media files on your Android device. Simply creating the thumbnail preview image is enough to silently trigger privileged code execution.
July 31st - Author noticed patch was not sufficient but could not test (did not notify google)
August 6th - Patch released
August 7th - Author notified google that patch was not adequate
August 13th - Author went public?!?!
They are counting the original date of exploitation as the start date for notification. I would think a more responsible and friendly date would be August 7th. Just me.
I sympathise with your point, but one complicating factor is that when big security vulnerabilities like Stagefright are found a lot of people then turn their attention to that code. Either finding other issues in the same code, or that the patch isn't fully effective. It was similar with Shellshock, where there was a series of patches as more issues were found because suddenly people were looking at this bit of code that had previously been uninteresting.
I'm not sure keeping it secret for long serves much purpose in this kind of situation; the eye of Sauron is already gazing on the code in question. I doubt these people were the only ones to notice that the patch didn't completely fix the problem.
Perhaps I am more alarmed by the assertion of the author that they had given 100+ days notice... it came off like they were talking about the patch and not the original issue.
OTOH, it's highly likely that other people already found this. So by disclosing now, they are still helping users by making sure they don't think this bug was fixed.
But they shouldn't try to justify it based on the timelines. Especially if they noticed a bug in the original patch, but held off on saying anything.
At the same time.... It's business. They didn't act maliciously (exploiting or selling the exploit to bad actors). If the way to build a career is to rack up CVEs, well then that's what people will do, right?
August 13th - Still no response from google(!), disclosing publicly.
Basically, that's an integer overflow; the moment I saw the four-line patch, I knew what it was going to be. Everyone else would see that too, because it's a classic, and they can modify whatever exploit code they already have to work again in a matter of minutes.
I have a nit too: I don't like the term "responsible" used in this context. I prefer "coordinated", if anything. "Responsible", as in responsible disclosure, is such a loaded term.
Given the severity and simplicity here, as well as the attention from the recent talk, this was a fine course of events, in my personal opinion.
What I wish had happened is that someone at Google had done a better code review and caught the bug; it's pretty glaring as these things go. But that happens all the time, so given that the patches were simply applied as suggested, the next thing I wish had happened is that someone at Google had responded. In a case like this I would have liked to see a day or two at the most.
But none of that happened and considering the other concerns laid-out in the post, releasing the info publicly after almost a week is pretty responsible.
I sympathize, but there weren't enough details for Joe Random to build an exploit. This was an attempt to apply pressure, and it did it in the right way, by telling people how they can defend themselves.
Unlike Shellshock which was all over the freaking place, neither I nor my colleagues have gotten any suspicious MMS messages.
Indeed. Even the supposedly quickly updated Moto X had not received a patch two weeks after the fact [1]. I feel sorry for all those people who are still stuck on KitKat, or even worse, Jelly Bean.
I feel for people whose providers won't upgrade them and I think they've got a good complaint, but if you choose to get off the upgrade path I think it's reasonable to assert that you're assuming responsibility for your own choices and security. I expect it can be backported to 4.x via custom ROMs for folks in that spot.
It's not an apples-to-apples comparison, but Android has been around since 2007, and they're up to API level 22 right now. In that same time period, Microsoft has had the following releases of Windows (excluding Mobile/Phone and Server):
Vista, which actually came out in 2006 in the OEM edition
Windows 7
Windows 8
Windows 8.1
Windows 10
Windows has had a much slower release cadence than Android. It's a much, much bigger burden on Google to continue to support older Android versions with bugfixes and security patches than it is Windows. Now, you can counter that Google decided on this release cadence, but still, I don't think it's reasonable to expect Google to support Android versions as long as Microsoft does Windows versions, as some here are stumping for.
(That said, every time I read an article about Android these days I get the urge to buy a Windows Phone.)
I have a Lumia 635, which is nice "for the price" I agree but not nice enough to switch to full-time. I'm hopeful that the new high-end phones provide a real choice.
But there is quite a difference between the 635 and the 640: non-HD vs. HD, possibly 512 MB RAM (depending on the version) vs. 1 GB RAM, 5MP vs. 8MP camera.
I agree that new flagships would be nice though. Especially with Continuum.
The fact that Google created a huge maintenance burden for themselves shouldn't absolve them of responsibility to provide that maintenance.
Microsoft supports Windows versions for ten years, and I agree that's crazy for Android. However, three years I feel is a bare minimum expectation. Devices tend to remain on the market for about a year, and the standard phone contract is two years. So three years from a version release should cover the vast majority of users for the life of their device, should they choose not to take "system upgrades" which may slow their device or change it in an unwanted manner.
I agree that DEVICES should be covered for n years, where n is somewhere between 3 and 5 and we can quibble about the specifics later. I don't think that backporting fixes should be the way we gauge this if newer versions are available instead.
The problem is that newer and better are not synonymous. And most Android users have probably learned by now that their devices get slower with each update, not faster.
Here's the problem: It's a DROID Turbo. It's locked down by Verizon/Motorola, and neither root nor bootloader unlock has been achieved. So it's not even an option.
The problem is that the "upgrade path" and the "security fix" path need to be separate things. People should not be forced to have their device changed in an unacceptable manner (I did not buy a device with 'material design' for a reason, and being forced to get it to get a security fix is an unacceptable situation.)
I think, honestly, the only real answer is "don't buy locked devices if you want to make those choices." Google doesn't control those devices, and that part of Android is open source. You can make those decisions, but you're earning the consequences with them.
This, as it happens, is why I buy phones with unlockable bootloaders. My current phone uses the OEM build, but I like having that choice.
Google *does* control those devices. They're MADA agreement devices, which means Google approves every device that goes on sale, and Google approves every software update they release.
Unfortunately, as a Verizon customer, I don't have a wide variety of options with unlockable bootloaders. And the battery life on the Turbo was simply the only feature that mattered. Usually there's an unlock within a few months, but there still isn't one at this point for the Turbo, I guess.
Obviously it wasn't the only feature that mattered, though? I mean, you're complaining about another feature right now. =) Like, I'm sympathetic, but this is a solvable problem with the information at hand. Buy phones with unlocked bootloaders. (As it happens, this is why I steer clear of Verizon...)
It's not just the providers. I got a couple of el-cheapo $80 Androids just a month ago to use while traveling. They're running 4.4; Android 5 is not being offered for them at all, and probably never will be. I guess I can throw them in the trash if I care about owning them myself.
Being el-cheapos, they aren't even supported by CyanogenMod. Anyone have an idea for how to use them (somewhat) securely?
This was one of the big motivating factors for me getting a Nexus 6 (aside from the size) and putting it on the Fi network. I got two updates in the space of a week.
I think the problem is vendors and carriers wanting to modify and inject their crapware and custom UIs into Android. Google is happy to provide upgrades to the OS in a timely fashion (and are going to start monthly updates), it's the vendors/carriers that delay these updates or don't even bother to port them over to devices.
Google is moving much of the core Android functionality over to specific apps that they can update as needed from the play store, instead of integrating them directly into the OS. This is good, because it allows them to push the updates quicker. They run the risk that carriers will get sick of the lack of control over the "experience" they can provide, and fork android, but it's a necessary step in many ways to ensure things like this MMS exploit can be patched on some devices at all, as vendors abandon some phones quickly, leaving users vulnerable.
If every vendor and carrier used stock nexus android without modification, the updates could be pushed out in days. The linked article blames google for these problems, but that is, in my opinion, misguided.
> Even if Google patches this, there's an incredible delay in getting the patch to users.
I don't think delay is the problem - not being able to get or apply the patch yourself is the problem. Ignoring the somewhat ridiculous requirements to compile Android (200GB of HD space and 16GB of RAM [1]) - you couldn't put it on your Android device due to proprietary drivers for wireless and/or video.
Assuming you can get an updated version - I have noticed that after getting root on my Nexus 6 it won't install new versions of Android. I don't know if it's a by-product of getting root/installing a new recovery or if they have an actual check. I have a legitimate reason for root because I use FreeOTP and it does not have an export feature - so I use Titanium backup to backup and restore the app. Getting the OTP QR codes for: dropbox, gmail, Microsoft account, facebook, srchub, github, paypal etc would take a very long time and considerable effort to recover (hint gitlab).
It's a check - as of Android 5.0, Google's OTA updater scripts refuse to overwrite your /system partition if its checksum isn't on the known-good list. Rooting inevitably involves writing files to it, so OTA updates will stop working with an uninformative "Error!" in recovery. Whoever came up with the idea should be fired, but that's Google for you.
See [1] for details from the author of NRT [2], which can update your phone from factory images without wiping it. The procedure is a bit more involved if you're not on Windows - IIRC, you have to download the correct archive from [3] and modify the update script so it doesn't try to flash userdata.
> Whoever came up with the idea should be fired, but that's Google for you.
To be fair to Google, once you've modified your /system partition, it's really case-by-case how an update will interact with it. The alternative is Google push an update that inadvertently bricks a bunch of rooted phones. Can you imagine the kneejerk reaction from the internet then?
> The alternative is Google push an update that inadvertently bricks a bunch of rooted phones. Can you imagine the kneejerk reaction from the internet then?
That already happens [1] on non-modified devices. So just come up with a "yes, I know what I'm doing and accept that this may bork my phone" option. Or, an even better idea: how about the ability to turn off OTA updates? Right now my phone says there is an update, but I can't apply it due to being rooted.
Regarding exporting OTP settings: My solution was to use Titanium Backup on the Google Authenticator app, extract secret keys from that data, use them to create GA-compatible setup QR codes for each account, and save those QR codes in KeePass.
You could likely do something similar with FreeOTP, once that's done you can easily restore your codes without root. (And from then on be sure to save any future setup QR codes you use.)
> Almost all users would be incapable of applying patches themselves.
There are currently 357,101 registered users on XDA Developers. Saying "almost" every Android user can't apply a patch is somewhat far-fetched.
Most procedures on XDA Developers have much better step-by-step documentation than most SDKs I've seen, and every time I've used them it's been successful. I've only had one issue, flashing a radio, but that was my fault for not reading/paying attention, and Motorola's for not allowing a lower version of the radio to be flashed...
Yes, there are grandmas and people who don't even know what Linux is who would not be able to apply the patch themselves, but you are going to find that in any market.
I think something got lost in translation: I'm not saying manufacturers should NOT release updates.
I'm advocating for the ability for people like me to apply patches manually, and thus the ability to remove and tweak the underlying OS to my liking. As it stands right now I can't do that, due to proprietary drivers.
> What proportion of Android users do you think would be left unpatched?
But I'll humor you. Analyzing the breakdown of Android devices [1], I would argue that at this point devices running 4.2.x and lower will never see another update (4.2 is almost three years old; if an upgrade were available, people either haven't taken it or never will). That is about 34% of Android devices that will, arguably, never see another update.
I do like how they left off Honeycomb (3.x). I know for a fact there are still devices out there running it, so that graph is a little off.
Oh, I get it, so when you said you don't think the delay is the problem, while quoting a sentence talking about getting the patch to users, you were in fact talking about what you wanted, not what would be good for general users.
There is a major issue, at least in Canada (not sure about the rest of the world): the service provider has to request, and commonly pay for, the patch, which the manufacturer completes and the service provider then pushes out to their devices. At least that is how it was when the E911 issue happened. It may be better now, but knowing telecoms in Canada, I wouldn't be surprised if it isn't.
It's not better. Look at the target release dates Telus has proposed for Stagefright patches. And considering that the bug is not even fully patched, this only adds to the insanity of the situation with regard to Android fragmentation and carriers controlling releases.
OEM       Model                  Target Release
HTC       One M7                 August 14th
HTC       One M8                 August 14th
HTC       One M9                 August 14th
HTC       Desire 320a            August 28th
HTC       Desire 601             August 14th
LG        Nexus 4                Completed
LG        Nexus 5                Completed
Motorola  Nexus 6                Completed
Samsung   Galaxy S5              August 11th
Samsung   Galaxy S5 Active       August 11th
Samsung   Galaxy Alpha           August 21st
Samsung   Galaxy Grand Prime     August 21st
Samsung   Galaxy S6              Completed
Samsung   Galaxy S6 Edge         Completed
Samsung   Galaxy S4              August 28th
Samsung   Galaxy Note 3          August 30th
Samsung   Galaxy Note 4          August 11th
Samsung   Galaxy Core            September 4th
Samsung   Galaxy Tab S 8.4       September 4th
Samsung   Galaxy Tab S 10.5      September 4th
Sony      Xperia Z3              August 14th
Did I read that right? They reported the bug to Google on August 7th and disclosed it publicly on August 13th?
Is this still responsible disclosure if they give Google basically 6 days to respond and use the original notification date as justification? I'm not learned enough in the practice of responsible disclosure to know if this is common, but I've not seen that before.
Can you help me understand this? They're complaining about a bug in the patch implementation and the patch implementation did not exist prior to the patch; ergo, if Google didn't patch the code, they wouldn't be able to write the article.
Is that not a new bug almost definitionally? Please help me understand if I am incorrect.
I understand the underlying issue which was first reported did not get patched properly, but, if someone found a bug in the heartbleed patch today and disclosed it immediately with the original patch date as justification, I would imagine many would be screaming bloody murder.
Keeping it secret wouldn't have been very useful. The original issue was already well known, and given the severity and media exposure of the bug, it is very possible malicious actors studied the patch and independently found the problem that came with it. At this point, it is better to let the public at large know they are at risk than to let the skiddies have fun with this pseudo-0-day.
This is a grey area. Not everyone is going to agree.
For example, suppose some email client can cause arbitrary command execution via a malformed CC address, like nobody@file:///calc.exe, the vendor patches it, and then the workaround to make the exploit work again is nobody@file\:\/\/\/calc.exe. I don't consider that a new bug, and it doesn't deserve the same grace period for disclosure, IMHO. Now, if it turned out that the email client's ability to show embedded images in the message body, combined with setting the metadata in a PNG to "file:///calc.exe", caused the calc program to run... I think that IS a new bug and does deserve another grace period, because its point of entry (rendering a PNG and processing its metadata) is very different from parsing the email To/From/CC/BCC fields.
I'm not a Google apologist and I don't appreciate your tone. I'm attempting to orient myself so that I can think about what is right and wrong with respect to responsible disclosure in a clear and coherent fashion.
As stated elsewhere, the original bug was reported in April and not publicly disclosed until July. The issue here is that the patch did not sufficiently remove the flaw. This gets to the crux of the debate over what "responsible" disclosure means. One would assume that the patch would be studied by genuinely malicious attackers, who would presumably realize independently that the flaw still existed and continue abusing it. By stating that the flaw is still present, the public at large can make educated decisions about their handling of MMS messages instead of assuming everything is fixed when in fact it is not. The flip side is that now less capable malicious attackers will also be made aware. And so the endless argument continues.
Eh, fuck google. They still haven't patched the original stagefright for android 4.4.4 on my nexus 5, and I don't want to upgrade to android 5, which I shouldn't be required to do to get security releases.
On an ethical level: Because there's quite a lot of difference between major versions, and you shouldn't be forced to replace one product with another to get basic fitness-for-purpose fixes on something that's only a year or two old.
On a practical level: Tons of devices are stuck on 4 and it's not hard to backport the fix. Once you do that, just compile it for the devices you sell already.
Because I shouldn't be required to accept giant releases (including broken functionality I rely on) to get security updates. Because it noticeably worsens battery life. Because it's a horrid practice to screw with software on production gear just for shits and giggles, and what is your phone if not a production device?
If Microsoft came out and said they weren't fixing a critical Windows 7 security flaw in order to force people to upgrade to Windows 10, HN would have bottomless wells of criticism. Google refuses to issue a security fix for a critical vulnerability in a less-than-two-year-old OS, and what, that's OK?
Because withholding security updates over accepting horrifically invasive UI and branding changes is unacceptable.
Similar to Microsoft still patching Vista nearly ten years later, Google should be obligated to deliver security patches to all versions of Android within a reasonable timeframe.
I think it's more difficult to patch old versions of Android than you're suggesting. You can't just backport a few lines of code and hope for the best. You have to maintain all the testing infrastructure that you had in place back when that version was supported, to avoid introducing new bugs with your change. And if you're releasing a security fix, you now have to coordinate that release across all versions that were vulnerable, because patching the vulnerability in one version usually discloses it in all versions. So you've slowed down fixes for your most up-to-date version, on top of the added expense of all that testing.
Companies like Microsoft make boatloads of money in exchange for supporting old versions of their software like that. But nobody is going to pay Google enough to support old Android phones.
Google released a patch for Stagefright for 4.4.x anyways. So this is not the reason. The problem is that Android is currently flawed in implementation in that it treats security fixes and system upgrades the same way. There's only one "path" to get updates. So by blocking 5.1, I also don't get a Stagefright fix for my 4.4, even though a fix for 4.4 already exists.
Also, companies like Microsoft do charge extra for supporting old versions. Specifically, versions OVER TEN YEARS OLD. (Vista security updates are still free!) However, Google does not properly support OS versions released within the last year, which is drastically worse.
Microsoft is a very unusual case, in that they have a brand that is built on legacy support and enterprise stability.
Very few other companies in this industry are willing to put up with the pain of maintaining multiple versions, especially without paying enterprise users who care.
But Google has been heavily pushing their new "Android for Work" concept. Microsoft's support lifecycle is expected by enterprises today. Why on earth should people trust Google with their business data if they won't provide enterprise-level support?
The problem with general updates is that they tend to screw up your workflow, settings, etc. in different ways (they call that "features"). Personally, I'm also more and more reluctant about installing updates for most apps and systems.
Ideally, security updates would be distinct from general system or application updates. Don't Apple and Microsoft do that for their OSs?
Doesn't seem like very responsible behavior by the reporter. Google accepted the suggested patches and fixed the original cases. Now some other cases have been discovered for these larger numbers; OK, that seems like a new thing to fix next. Not sure why I have to read paragraphs of hate when the company already put the suggested patches in. Seems like just an excuse to ride the page-view wave.
This doesn't seem hateful to me, and the problem is that google took so long to fix anything at all on top of barely caring about the fix. Why not fuzz the fixed version for 10 seconds?
And I was wondering at the beginning of the article why they were doing
if (SIZE_MAX - chunk_size <= size)
and not the more readable
if (size + chunk_size >= SIZE_MAX)
Of course, C integer overflow. The real WTF is that this is possible in C.
What would be more sensible than integer overflow would be to automatically promote integers to a larger type in the context of a comparison, so that they don't overflow. I wonder if you could add that to the language in a backwards-compatible way? Maybe add a new builtin (compiler-specific, but shared by popular implementations?) like
if __no_overflow(x + y > z)
that would make the addition of two ints become long, two shorts become int32, and so on. (Two long longs would internally become BigNums, but that wouldn't be exposed.)
And while we're at it, add a __checked(a+b) construct, that sets a flag if overflow occurs (or maybe raises an assertion - or maybe we should have both options).
You seem to be asking for quite a lot of magic in that __no_overflow idea. It might be possible with a trivial expression like this, but what if there are function calls in the expression, or even library calls? There are lots of places overflow could happen, and the site of your __no_overflow may not have code-gen control over them at all.
Well, it wouldn't reach into functions. It would mainly just change the + and - operators within its scope to return a larger type. So instead of
int32_t plus(int32_t left, int32_t right);
the plus operator would be equivalent to
int64_t plus(int32_t left, int32_t right);
So basically
int32_t a = 2000000000;
int32_t b = 2000000000;
int64_t c = __no_overflow(a+b);
// now c is 4000000000;
I don't claim the idea to be flawless or completely thought out, but I believe something like that could be one of the more useful C language extensions.
Per the wikipedia article[0] ASLR (address space layout randomization) was first added in 4.0 and fully enabled across the OS in 4.1. To go with that, 91% of android phones are on >= 4.1, and 95% are on >= 4.0 [1].
>Deadline exceeded – automatically derestricting
>The flaw was initially reported over 120 days ago to Google, which exceeds even their own 90-day disclosure deadline.
It always seemed likely that Google's hubris[1] would come back to haunt them. I guess this is that day.
It would be funny if it wasn't remote code execution affecting 950 million phones, with no official patch in sight.
They have become a bit more flexible[0] after the Windows issue. They are still living by the 90 day policy, but baked in some flexibility if the vendor is communicating with them.
Wait, either I'm grossly misunderstanding the article, or you are. The flaw the author is talking about is one that Google has been aware of for six days... Not 120. The original bug from 120 days ago was already patched.
We're essentially talking about a bug in the bugfix, which obviously hasn't existed as long as the original bug itself. I'm really not seeing where "hubris" comes into this.
That's the logic the blog author is using. As you can see in the bit of the article I quoted. I didn't say I necessarily agree with their analysis of this still being the same bug.
The fact that they mentioned it a couple of times suggests it was a factor in their decision to release the details today (or at least that they wanted to poke fun at Google).
Why are arithmetic overflows and underflows not exceptions/crashes by default, like division by 0?
Aren't the cases where you actually want an over/underflow the exception? Why not resort to special instructions/macros/operators for these operations?
You could actually have runtime checks by default (in a hypothetical language), and have a smart compiler elide them whenever possible.
int32 a, b = ...;
a+b; // there is a check
// implicitly:
// if (MAX_INT32 - a < b) throw OverflowException;
// a+b;
but
int32 a = ...;
// after here: typeof(a) = int32
int32 b = rand_int(0, 1000);
// after here: typeof(b) = int32[x|where x >=0 and x < 1000]
if (a < 500) {
// in here: typeof(a) = int32[x|where x < 500]
a+b; // the overflow check can be elided
}
So the compiler would be able to narrow down the type of a variable and know which operations are safe to perform. This is probably impossible in the general case (halting problem and so on), but I believe it is very doable if you restrict yourself to a limited number of subranges. It's a kind of dependent typing, but kept completely internal (though you could expose it if you wanted).
Not only can the compiler use this extra information to remove overflow checks; you could also have a language that guarantees there is no overflow: add two int32s and the result is an int64, and so on. And if it can infer that the result fits in an int16, it can put it there. But most of the time you would just use int, meaning an integer variable of enough length to store my data: int8, int16, int32, maybe a BigNum. Kind of like Python does it, but with the ability to pick optimized native types when needed.
There's a performance cost now because processor instruction sets have dropped hardware overflow detection due to disuse. Current processors are largely engineered to just run legacy C code fast. See eg. https://news.ycombinator.com/item?id=7847980
No, overflow is still properly detected by common CPU instructions. There are flags that are set on overflow after the addition and subtraction and they can be tested.
It's only hard to test the flags in "standard" C (I don't know if it's better with newer standards or those in progress) but the CPUs do their work on the hardware level.
But like ploxiln said, you now need a conditional branch to test the overflow flag after the arithmetic operation, unlike the old overflow traps on x86/MIPS/Alpha etc.
I'm not aware that integer operations ever made traps on x86 and nobody actually uses floats for allocation sizes (in C). And checking the flag would be more than enough IF the languages supported it.
It's the languages that should be changed to be able to simply check the overflow after the critical operations, not the CPUs. The overflow checks are needed only where something can "go wrong" not everywhere.
I think it is time for this removal to be reexamined, considering popular scripting languages like Python and Ruby (which check for overflows on all integer operations) face significant performance hits because of lack of hardware support for these instructions.
The naive check-and-branch approach on current x86 only costs a couple of percent in integer-heavy and well tuned C code (see http://danluu.com/integer-overflow/). Integer operations in Python and Ruby are so glacially slow compared to C that the overflow checking overhead doesn't register at all due to all the other overhead there.
That's not how you'd implement it if you had a choice though. You'd rely on a certain page of memory not being mapped on the target platform (the page that starts at address 0 is often a fine choice) and you'd issue a conditional move that touches that address. You'd rely on the CPU exception interrupt mechanism to branch.
I know modern ARM instruction sets don't have conditional instructions like that, but they may have something similar for extremely infrequently triggered control flow. The same kind of trick can be handy in a variety of programming language and GC implementation techniques.
This is a common problem in C. Integer types are inherently type-unsafe and are silently promoted under many different rules which are hard to remember and understand. As seen in this case, even the (borderline paranoid) flag -Wconversion would not catch the bug.
I think this problem in C would be solved with a single flag: -Wwarn-if-using-integers-of-different-types-in-an-operation , forcing you to cast the integer if the types don't match in a arithmetic operation, or an assignment.
Because no truncation happens there. In this case the [] operator doesn't specify any type, only that the expression inside is an integer expression. While normally the type size_t is used for object and array sizes, [] takes any integer expression and the compiler won't complain.
uint8_t *buffer = new (std::nothrow) uint8_t[size + chunk_size];
size + chunk_size is clearly unsafe to truncate to 32 bits, but it truncates anyway. When I say 'inside the new operator' I'm including the allocation function. Something truncates it. If it actually allocated 8GB, or failed to allocate 8GB, there would be no exploit.
Apparently new is a "special" operator, or there is a bug in the compiler. I also can't get a warning with g++.
The problem seems to be that, as I said, [] takes any integer expression, it is there where the value gets truncated when operator sizeof or new is applied on it since they either return or take a size_t value.
The bigger issue with libstagefright is that there's a ton of code involved in media playback at the native level that has access to many system resources. This specific exploit was just looking at a small part of the MP4 handling, one of the many parts within the library. It is very likely more severe exploits like this one will surface from this huge library.
It's a bit surprising because so much of Android is written in Java. Given hardware decoding of the video itself I wonder why Stagefright needs to be written in C++ at all. Media processing code has been notorious for being exploit ridden for years, so it's not like this problem was unpredictable.
Can carriers (and by extension, the Hangouts backend itself) check messages and block "evil" ones? Wouldn't that be an easier way of fixing these things quickly?
At the very least, Google should block any Hangouts message that triggers the bug even on non-updated devices.
if (chunk_size >= SIZE_MAX - size) {
return ERROR_MALFORMED;
}
Due to size being a size_t and SIZE_MAX being, well, the maximum size_t, SIZE_MAX - size is computed correctly. The comparison with chunk_size is also done correctly (due to the C promotion rules; as strange as they are, they do work "as expected" when your values are nonnegative, which they are here).
Also, I am slightly puzzled why one would use SIZE_MAX as a limit rather than some "small" number, like a few megabytes or whatever is a reasonable bound for this buffer. In this case the fix may be a bit more complex than this: if (chunk_size >= SIZE_MAX - size || size + chunk_size > the_limit) .
There was an Android update pushed to my phone recently. I wanted to know if it was an urgent security fix so I checked the diffs. It's hard to tell but it doesn't seem to be. It's a bunch of fixes to do with video out, SIP etc.
I thought maybe the patch fixed this security flaw. It wasn't clear what it was for from the phone. I had to do a fair bit of digging. Are there any change-logs or release notes for these system updates?
> IMO, the Android echosystem is a clusterfuck and Google needs to get a hold of it. I would buy a Windows phone before I would buy an Android device.
This is nowhere near as bad as the situation with Windows XP 10 years ago.
The difference is that Android has the majority marketshare worldwide and Windows phone does not, making Android the more attractive target both for researchers and malicious actors.
Security by obscurity is not entirely without value, but it's not particularly strong as a defense either.
Things like this are why I trust an iPhone enough to handle two-factor auth for banking (in Sweden: "Mobil BankId"), but not an Android device.
I hope Google will raise the security level now that they have reached global dominance, in no small part through lax security (as a consequence of their liberal licensing models).
And before you nominate Apple for security sainthood, let's not forget the bruteforce / unlimited password attempts hack on iCloud that allowed hackers to get ahold of sensitive pictures belonging to celebrities.
What about the other 15% that don't receive patches of any kind? Should they remain vulnerable just because their devices are too old? I guess that "working patching model" has a time limit.
So, can someone explain why a disastrous worm hasn't already swept the globe and infected 99% of Android devices on the planet within ten minutes of being released in the wild?
1. Text payload to victim
2. Payload executes on victim's phone and texts itself to all of the victim's contacts
3. Repeat
Assuming the average Android phone owner has 20 contacts who also have Android phones, and assuming also that texting the payload to those 20 people would take two minutes to complete, the infection would spread twenty-fold every two minutes: roughly 20^5, or about 3 million devices, ten minutes after the initial text, and more devices than exist on the planet within about fifteen.
Why am I not currently being bombarded with MMS video texts from infected devices? It frankly seems a bit miraculous. Did Google set up an emergency arrangement with all of the carriers to block suspicious video texts so this wouldn't happen?