I was going to say this isn’t a big deal, but copying and uploading the libraries is actually illegal (copyright violation), and users likely can’t consent to it even if it is in the Facebook ToS, since many Android phones contain proprietary libraries not licensed for redistribution.
The creators of those various libraries should have a valid legal case against Facebook here, if they want to exercise it. I doubt any users are being harmed by this, but it’s a violation of the software creators’ rights.
Some older android devices running newer lineage/AICP/etc builds include a few libraries I wrote (in their entirety) for compatibility of old vendor prebuilts with new android versions - libdgv1 & libdmitry. Maybe I should C&D FB for laughs?
Sure, by closed source I meant "not licensed under a typical open source license that gives you that right" rather than "I literally can't find the source via google".
Edit: CANCEL MY KNEE-JERK REACTION TO A KNEE-JERK REACTION! A few tweets down (good [deity] twitter is a terrible way to transmit information, this is why I don't generally bother with it) she does say the full library is sent.
--------------------
> copying and uploading the libraries is actually illegal (copyright violation)
That isn't what is happening here - the headline is misleading.
Further into the tweet (FFS, it is a tweet and people aren't reading it all before reacting!) it clearly says "It periodically uploads metadata of system libraries to the server".
Basically that means they are sending and storing what versions of what things you have, not the code or other data in the libraries themselves. I'm pretty sure there is no reasonable copyright claim on the name and version number of a library in this context.
If it is being done for fingerprinting purposes then there might be an unreasonable tracking claim by the users, but not the library copyright holders.
The less worrying explanation is that it is being used for problem analysis: if a new version crashes a lot, but only on devices with a specific version of a specific library, that makes tracking down the bug and implementing a workaround or fix much quicker. Of course, this is Facebook, so even if the data is collected for issue analysis (which it almost certainly is), if it can also be used for fingerprinting, you can bet your bottom dollar that it will be.
At first glance, the amount of damage being done is close to nil: even if they reverse engineered the received files to steal trade secrets therein (lol), it is hard to pinpoint a specific amount of harm dealt to the copyright owners.
But actually... why are Facebook people doing that? If I were to wager a guess, Facebook needs those files to create exact copies of user systems to debug. In other words, they are trying to save on buying real devices for their test lab! Using "pirated" copies of libraries to spin up testing VMs is most likely cheaper than owning lots of real smartphones with all available firmware versions. And also illegal.
I wonder if they gauged the possibility of being sued for this, along with the likely legal expenses, and found that it is still cheaper than buying those devices themselves.
In the US there is a legal doctrine called fair use, which limits the extent of copyright. There are a number of factors but one of the most relevant here is the purpose and character of the use. If you're using the work to create an analysis of it, that's often covered under fair use as it is not a simple reproduction of the original. (not legal advice).
There are numerous exceptions to the exclusive right to copy works of authorship, of which Fair Use is only one case, actually an "affirmative defence" (that is: not a preclusion to civil or criminal proceedings, but a defence which may be presented), based on a four-part test.
There are additional exemptions, including copying which is required in the normal use of software, and possibly other information, on electronic systems. Copying to a malware-scanning service may or may not be included in that, though it would seem a fair argument that it should be (transformative, doesn't impact the market, does involve the whole work, purpose is constructive and not otherwise served, context is specific to the nature of the work).
Not under US Copyright law, but that's because you have a licence to the software which means you have the fair use right to take this security measure.
Absent a rather dubious user agreement allowing Facebook to copy all the data off your phone, Facebook does not have that fair use right. Nor is it likely even remotely ethical to be doing this without explicitly notifying the user. So, illegal and unethical, but I guess unless some PR firm is paying the news media to be outraged about it, they're not likely to care.
I seem to remember a case where files were being uploaded to a server where they were only retained in memory and that qualified as a copyright violation, but I'm having trouble actually finding it.
Note that there is an exception for such a copy when it’s necessary to run the program; I believe this exemption was added because otherwise running an executable on a personal computer would have been a copyright violation.
> (a) Making of Additional Copy or Adaptation by Owner of Copy.— Notwithstanding the provisions of section 106, it is not an infringement for the owner of a copy of a computer program to make or authorize the making of another copy or adaptation of that computer program provided:
> (1) that such a new copy or adaptation is created as an essential step in the utilization of the computer program in conjunction with a machine and that it is used in no other manner, or
> (2) that such new copy or adaptation is for archival purposes only and that all archival copies are destroyed in the event that continued possession of the computer program should cease to be rightful.
Maybe in jurisdictions like the US, where the copyright lobby has been very effective in getting aggressive anti-piracy legislation with huge penalties enacted, the statutory damages alone could be astronomical? Since Facebook could still afford to pay them, it might also offer to settle for a very worthwhile sum without even the risk of going to court.
I'm generally not a fan of hugely disproportionate penalties for copyright infringement, but this isn't some normal person falling victim to opportunist lawyers engaging in a form of barratry, this is a huge company with its own legal team who should know better than to wilfully infringe copyright.
I'd expect that they're doing this because they'd like to diagnose crashes or bugs on systems that they don't have the hardware for. It's still somewhat creepy and possibly a fingerprinting mechanism.
Agreed. This is like how the phone number thing went: "for security." I think a lot of people believed FB was using it just for security, but in reality they were trying to find more connections, possible friends, and to tie you to an identity. A real citizen of a country, which is one of their products. I would suspect this is like browser fingerprinting.
Yeah, when I was working on an SMS app, I briefly considered doing something similar. The variety of ways companies break these shared services is astounding[1], and there's no way to reproduce without having the actual phone on-hand, and/or decompiling the framework and seeing what nonsense they wrote. I never did ship it tho.
There are definitely some non-shady useful reasons to do this, but Facebook has sorta lost my default assumption of not-evil, yea.
Even ignoring the ethical questions, it is a massive waste of bandwidth. They could hash the libraries and, on a cache miss, upload that one library from one person (or perhaps a few people, since everything is in parallel). They would then know what system libraries their users have installed without wasting a ton of bandwidth.
Next step to reduce creepiness is to only upload info on system libraries that actually affect the app (so if some users experience crashes and others don't, they can trace it to differences in system libraries).
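A minimal sketch of that idea, assuming an app can still enumerate the system library directory (newer Android releases restrict this) and with the reporting details purely hypothetical; only name, size, and digest would ever leave the device:

```kotlin
import java.io.File
import java.security.MessageDigest

// Hypothetical sketch: report only name, size, and SHA-256 of each system
// library. The server would keep a cache keyed by digest and request a full
// upload only on a cache miss, and only from a handful of devices.
fun sha256Of(file: File): String {
    val md = MessageDigest.getInstance("SHA-256")
    file.inputStream().use { input ->
        val buf = ByteArray(64 * 1024)
        while (true) {
            val n = input.read(buf)
            if (n < 0) break
            md.update(buf, 0, n)
        }
    }
    return md.digest().joinToString("") { "%02x".format(it) }
}

fun systemLibraryReport(dir: File = File("/system/lib64")): List<Triple<String, Long, String>> =
    dir.listFiles { f -> f.isFile && f.name.endsWith(".so") }
        .orEmpty()
        .map { lib -> Triple(lib.name, lib.length(), sha256Of(lib)) }
```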
But that presumes a human engineer is going through and looking at the libraries in order to maintain fingerprints. I suppose it's possible that's what Facebook is doing, but it strikes me as a massive waste of time, particularly in comparison to all of the other metrics at their disposal.
Why wouldn't they just track the model of the phone + the current software version if fingerprinting was the goal? How would this approach give them any more fingerprinting data than that one?
Except it's not the data that's protected by copyright law. ...and the fact that it's not the original file is what makes it ethically palatable that Facebook is doing this without explicitly notifying the users that it's happening, although they damn well should have, because it represents a profound change in the relationship.
That culture has been built up over years. And probably most of the people they hire don't have the life experiences that would give them pause and allow them to consider or even recognize if what they're tasked to do is creepy or not.
Not at FB but just doing enterprise software development I've had to explain to other developers that capturing and storing user info just because we can is in the "not okay" category. There are plenty of people who don't even consider ethics at work. They get a feature request, so they deliver it, with no second thoughts. These aren't bad people per se, it just doesn't occur to them to question the reasons for a request or what the end result is within a larger context.
I think this is because people don't see the value of data. I've also heard a lot of justification from people saying "well my individual data isn't worth anything" while also being freaked out by the ads they get. I'm always confused by this juxtaposition. But I guess there are similar stances in a lot of places in our culture right now.
Theranos is the prime example of this. Whoever was negative about the product was, in one way or another, removed. If one questioned a promise of the product because it was IMPOSSIBLE, they got fired.
(I'm talking extreme end here, btw). Someone doesn't turn to high crimes overnight, nor are they (usually) born that way. You start off making smaller immoral decisions, which then become normal. You essentially move the bar a little more and more. People sleep at night, frankly, because they no longer think that these things are immoral. That's why there are plenty of sayings along the lines of "the path to heaven is long and narrow and the path to hell is short and wide".
Obviously morals are flexible and need to be to survive in this world. But the problem comes when people are put into environments that encourage this bar to be pushed too far (by what society determines is too far).
Btw, if you like podcasts, Hidden Brain did an episode on this concept that went through how an athlete went from taking no performance-enhancing drugs to being a major dealer. They break down each step and how reasonable it seems in context.
Tldr: it's no longer immoral for them, so the real question is "what keeps them up at night?"
I'm sorry, I can't find it either. Maybe it was a different podcast. I'm trying to think of other podcasts. I don't think it was RadioLab. It could have been another NPR show, but I can't think which one would address this issue. HB seems like the best fit. I can bound the date of when I listened to it: it was definitely between 2015 and 2018. My intuition says 2017, because that was the height of my podcast listening. The time gap is probably why I can't remember it in much detail.
Basically the story I remember is that there was a sports athlete that was in the lower ranks and then got a prescription, that they actually needed, that also improved their performance. Then I think it got banned? So they start buying it from another country like China or something. They then started buying in bulk because it was cheaper. Their friends started asking for some because he was getting it for cheap and he didn't think anything of it because he was just helping them save money too. They're his friends after all and he was already buying the stuff. So what difference did it make if he just ordered a little more? Then it started being friends of friends. Before he knew it he was selling tons of this stuff and to people he had no real previous knowledge of.
I now really want to listen to it again so if someone can help find it let me know. I think they might have also talked about Lance (I'm not sure if they interviewed him or that's another podcast I'm thinking of).
In a way that's what you want for the stereotypical "startup culture".
Nobody in their forties who has the life experience of being married with kids is going to work lots of overtime and put up with their boss acting like a child and treating them like they are nobody. I guess that's why they like hiring college grads (that, and they don't know their value, so they can be paid much less).
On Qualcomm devices in particular (such as the Jolla phone), Qualcomm explicitly forbids you from distributing their OpenGL drivers. So if Facebook copies libGLESv2.so off the device, they are potentially committing straight piracy at that point.
If I recall the damages demanded by the RIAA, it was several hundred thousand dollars per infringement.
They don't upload the library, just its filename and hash. Probably for bug troubleshooting, device fingerprinting, and feature availability data. It's a bit much; device fingerprinting like this should get you banned from the Play Store. However, there are legitimate uses for this.
The exact details would depend on what they do with the uploads and the specific countries they're uploaded to/from. I'd presume they do this for security and debugging purposes, not to 'steal' the libraries. Like an antivirus company uploading samples of 'suspicious' .dlls for analysis, this looks like a fair use exception.
It's not a fair use exception because Facebook is not the party involved in the copyright agreement. They're a completely separate third party so they don't have any rights at all to those files.
As someone who’s built my company’s mobile crash reporting solution, I have a guess why they might do this.
It is extremely difficult to diagnose Android native code crashes. Unlike iOS, where it is both straightforward to unwind on the phone and where Apple makes the iOS system symbols available for symbolizing system frames in a stack trace, neither of these things is true on Android.
My first approach for my company’s Android crash manager SDK was to use Google Breakpad. This works by capturing a snapshot of stack memory at the time of the crash. Unwinding then occurs on a backend server. But to unwind successfully, absent a frame pointer register, you need unwind info to provide to the unwinder. This simply isn’t available except for Nexus devices for which you can download the system images from Google. And even on devices where the code was compiled with a frame pointer, you still need symbols so you know what each frame’s function was.
Another approach is to unwind on the device. In my experience, using libunwind, this is successful about 50% of the time. It also risks hanging the app, which looks even worse to the user than just crashing.
Years ago, I briefly considered having our crash SDK, optionally and with user consent, extract the symbols and unwind data from the libraries on the device and upload them to our backend. I dismissed it as too expensive to do on a user’s phone.
Instead, we crowd source as much as we can from our employee phones.
Android native code crashes remain a bear to diagnose. Especially annoying since Android itself collects a ton of diagnostic data about your app when it crashes - it just doesn’t make it easily, or in some cases at all, accessible to the app itself.
If you have read access, then yes. Conventional desktop and server linux distributions would allow this behavior. As does android. Good luck using dylibs without it, anyways.
Since the android market is so fragmented and customized, this probably saves them from having to buy lots of phones when diagnosing crashes.
The knee-jerk reaction is to feel uncomfortable but these are system files, shipped with the phone, that are accessible to anyone who purchases the phone. This saves FB the trouble of spending $200 every time a new OS update comes out. Personally, with that knowledge, I don't have a problem with this - however, I have a ton of problems with other stuff FB does so I'm happy to keep not using their service.
> If you have read access, then yes. Conventional desktop and server linux distributions would allow this behavior.
The difference is in people's expectations of mobile vs. desktop apps. You'd never install untrusted software on your desktop, but mobile OSes provide the sense that software is isolated. In Android, that's mostly an illusion.
I feel like users install untrusted software on the desktop all the time and it's called closed source software.
It's not like Facebook is some small, unknown malware peddler so that its software should be considered "untrusted". If anything, it's untrusted because it's coming from a scummy company and opaque (due to being closed source).
You're right that it being from Facebook makes things a little different. At the same time, I've never needed to install a native desktop app from Facebook and I'd have some suspicion about doing so if such a thing existed, for exactly this reason.
> The difference is in people's expectations of mobile vs. desktop apps. You'd never install untrusted software on your desktop
I knew many Linux desktop users who had installed the Slack client back in the days we used Slack at work. Myself I have installed Skype. Not that I find Skype particularly good, but sometimes I need to communicate with people who have no clue about software freedom.
So, yes the number of "untrusted apps" is significantly lower on a (Linux) desktop, but "you'd never install" is an incorrect characterization.
I'm not making a moral judgement (FB is a big yikes), just technical. They'd have to:
- build lists of every phone, including carrier variants and internal revisions (pretty common!), to be sure they had a complete library
- rely on the manufacturer to publicly post the ROM (cheaper manufacturers won't do this) (or somehow retrieve the URL from the update mechanism, said URL not easily accessible from userspace)
- handle the multiple different packaging mechanisms that Android phones, especially older versions, use (Google has gone a long way in remediating this, but FB has to support billions of devices that don't adhere to best practices).
- For ROM packages that are encrypted, they'd need to acquire the keys from real devices.
- and they still would not have visibility into non-posted firmware, such as factory versions with day 1 upgrades (aka many many devices)
1. Uploading files from the user phone to their servers is straight up copyright violation in plenty of cases.
2. I have doubts that you need copies of all kinds of system libraries to debug that crash. They won't help you debug a crash dump (assuming they don't have debug symbols left in for some reason). They generally won't help you reproduce the crash unless you actually know reproduction steps - it wouldn't surprise me if they tracked every user action, but I doubt they do - so it takes many of those crashes to even start debugging. At that point you probably know precisely which library you need and can obtain it legally.
That said, I agree that uploading the files themselves is not necessary to fingerprint users (the hashes would totally suffice). Unless they do the uploading as a cover-up story, which doesn't make much sense either.
At the very least, the privacy-respecting solution would be to upload hashes and only upload libraries once some critical mass of users had reported the hash along with a bug. Even then, you would only upload the files themselves from some capped number of users.
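A rough sketch of that gating logic on the server side; the record shape, thresholds, and names here are made up purely for illustration, not anything Facebook is known to use:

```kotlin
// Hypothetical server-side gate: only request the actual file once enough
// distinct users have reported crashes alongside an unrecognised hash, and
// cap how many devices are ever asked for it.
data class LibraryRecord(
    val hash: String,
    val crashingUsers: Int,     // distinct users reporting crashes with this hash
    val uploadsReceived: Int    // full copies already collected
)

fun shouldRequestFullUpload(
    record: LibraryRecord,
    criticalMass: Int = 100,    // assumed "critical mass" threshold
    uploadCap: Int = 3          // assumed cap on full uploads
): Boolean =
    record.crashingUsers >= criticalMass && record.uploadsReceived < uploadCap
```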
But...what about my pitchfork? The knee-jerk reaction to every Facebook blog spam entirely diminishes the harm they've done to nations around the world.
Yeah sorry, they could send ro.build.fingerprint instead if they really wanted to know which builds and devices out there are causing issues.
I can see this as an opt-in but not as a silent, default behavior.
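For reference, a minimal sketch of that less invasive telemetry: ro.build.fingerprint is exposed to every app as Build.FINGERPRINT, and the formatting below is just illustrative.

```kotlin
import android.os.Build

// Answers "which builds and devices are crashing?" with system properties
// already visible to every app, instead of copies of the libraries themselves.
fun buildIdentifier(): String =
    "${Build.MANUFACTURER} ${Build.MODEL} / ${Build.FINGERPRINT}" +
        " (security patch ${Build.VERSION.SECURITY_PATCH})"
```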
Well, Android does have a Linux kernel. The permissions happen at different layers within Android, as apps run in their own VM (Dalvik bytecode). The /system partition is read-only on Android, but I don't /think/ you'd need any special permissions to read most of the system partition. The data partition is what's protected.
I mean, an application has to be able to read standard libraries to function, right? Same with any traditional Linux distro and /lib, /usr/lib. Really tight Apparmor or SELinux profiles can lock this down a bit.
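To illustrate the point, a trivial sketch that any unprivileged process on a conventional Linux install (or, with the path changed, an app reading /system/lib64 on many Android versions) can run; the paths are typical defaults and may vary:

```kotlin
import java.io.File

// Shared libraries are world-readable by default; nothing special is needed
// to open or copy them from an ordinary process.
fun main() {
    File("/usr/lib")
        .listFiles { f -> f.name.contains(".so") }
        .orEmpty()
        .take(5)
        .forEach { lib ->
            println("${lib.name}: ${lib.length()} bytes, readable=${lib.canRead()}")
        }
}
```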
Regarding apparmor/selinux, who creates/audits those profiles to make sure each application only has access to exactly the libraries it needs? It probably defeats the purpose if it's the app authors. Similarly, who validates that these profiles don't break functionality for any device/os version? I could see this being an option for power users who are willing to collaborate on creating the profiles and deal with fixing the occasional incomplete profile. I'm not sure how feasible it'd be as a solution for your typical user though.
You need read access to actually use them. Even if some filesystem trickery were used to prevent it, so that you couldn't fopen() it, the actual binary code will be in memory, accessible to your app, because it has to be there in order to run.
That would only work for the .text section. Other sections like .data, .rodata, and .bss need read access because they contain data required for the library to function (global variables, vtables, constants, etc.).
I'm not sure if Android allows for multiple mappings of the same page, but I could see something where global variables that are internal to the library are accessible only through a mapping that is known only to the binary itself (possibly by hardcoding it into instructions inside .text?). The API needs to be exposed, of course, but ideally that would be the same across different implementations of a library.
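One way to see how this plays out in practice is /proc/self/maps, where each segment of a loaded library appears as its own mapping with its own permissions (r-xp for code, r--p for read-only data, rw-p for writable data). A quick sketch, with the library name chosen purely as an example:

```kotlin
import java.io.File

// Print the mappings a given library contributes to the current process.
// Each line shows address range, permissions, offset, and the backing file.
fun printMappings(libName: String = "libc.so") {
    File("/proc/self/maps").readLines()
        .filter { it.contains(libName) }
        .forEach(::println)
}
```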
What problem are you actually trying to solve though? System libraries are not secrets, there really isn't any good reason we should go through these hoops to prevent reading them.
...system libraries contain identifying information? Is this some alternate usage of the word "system library" I wasn't previously aware of? I'm assuming, and haven't read anything to the contrary, that these are just dynamically linked code libraries that have been a pretty big part of how OSs work for decades.
Therefore, I can only conclude that the identifying information is which precise system libraries are installed on any given machine. Any solution to this problem other than "have fewer system libraries and don't change them as often" is adding far more complexity to computing than it will ever be worth, and computing already has enough of such "for the security gods!" solutions.
> I'm assuming, and haven't read anything to the contrary, that these are just dynamically linked code libraries that have been a pretty big part of how OSs work for decades.
Yes, we're talking about the same thing.
> Therefore, I can only conclude that the identifying information is which precise system libraries are installed on any given machine.
Specifically, it provides very detailed information about your hardware revision, OS version, and security patch level, which is generally something that forms part of most fingerprinting suites. Now, I'm not saying that we can solve this problem easily, but I do think it's important to recognize the privacy implications that uploading them can entail.
Ok, so let's say you've solved this problem tomorrow. You've just made it impossible for an application to know which OS version, security patch level, and hardware revision it is running on, meaning it can't do anything like dynamically adjust itself depending on features (or bugs) it will know are available or not. It also can't submit any of this in crash reports, so fixing bugs that occur in rare combinations of hardware and software is now much more problematic.
All to prevent... what? If you log into Facebook, which is presumably why you have the application, then it already knows who you are. Not to mention the hundreds of other ways to fingerprint you.
There are situations where different people use the same machine to access Facebook. And indeed, where they also (foolishly, I know) expect that Facebook will preserve mutual privacy. Partners. Parents and children. Siblings.
So Facebook, if it's relying on hardware/software fingerprinting, might compromise multiple users to each other.
Whereas clearing the execute flag on a page lets you load data from it but not jump to it, there is nothing that allows one to execute code from a page while preventing a mov from reading it.
System libraries are trivially readable, but "Linux" escapes this problem by not setting itself up as a playground for malicious actors alongside personal use. The Android model is basically to give a shell account to anyone who asks, while trying to protect the privacy of the sysadmin.
It's not my business, as I don't use the FB app, and I won't. But even if the original intent was to help the debugging process, this is not acceptable. This is, to put it plainly, copying files from a user's device without the user's consent.
FB has the means (resources) to route around this and find the ways to properly debug apps.
I hope this finds its way to Google Play blocking the app, plus a class action lawsuit. It's the only fair outcome.
Why is this bad? Don’t most error reporting libraries send this sort of metadata with exception stacktraces? I would think this falls under the usual “improving the quality of the app” language in nearly everybody’s EULA.
You don't need "that version of the system library" to debug anything. If you write decent code that adheres to the published APIs, it works or it's Android's fault. Should it be Android's fault and you deploy a workaround, there's a pretty good chance your workaround will start breaking the moment Android fixes their bug. But this has nothing to do with that.
Let's face it. This is the freakin' Facebook app. It's not doing anything so incredibly revolutionary in the field of computing that it requires being intimately involved with system libraries. It needs to display cat pictures, take in and emit text, and make HTTP and HTTPS calls over the network, oh, and monitor the user's every move, even while they sleep.
One reason to do this would be to discover what other apps the user has on their device which may not be detectable by other methods. That is valuable business intelligence that could be used in various ways for maintaining a competitive advantage. I got this idea from this reply:
To the extent that Facebook has any utility at all, it works fine on a mobile web browser and when you close the tab it's gone. Why does anyone install the app?
I don't use Facebook, but mobile apps tend to add features compared to the web app, such as notifications, integration with other apps (e.g. contacts), and a cleaner UX.
That being said, even if I used Facebook, there's no way I'd install their app because I don't trust them.
Indeed mbasic.facebook.com works, but it doesn't look as good and doesn't support message notifications. It's a trade-off I'll take for now, but not everyone will.
Grabbing rootkit artifacts that could be on the device?
It's just that it's not Facebook's place to do this. I wouldn't expect a Linux app binary to upload the contents of /usr/lib, or a Windows app to start sending system32 DLLs off the system.
FB can try to sell this as a 'lite antivirus'-type service, but that is not its place. There is no indication the app is doing this. It's FB being creepy as usual.
If Google did it, it would be less creepy, just like how Microsoft can grab malicious files detected by Defender, but they write, support, and protect the OS! FB is just an app. It shouldn't be harvesting its users' operating system files!
Google already does what people are spewing apologetics about as justification for Facebook's behaviour, but they do it the right way.
Google's SafetyNet scans the system files, but it looks for _specific_ files that should not be there and ensures that certain files that must remain unaltered actually remained unaltered to ensure that the security model is still intact, so it doesn't need to violate copyright laws by stealing copies of files off the user's phone without permission or user awareness.
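For comparison, a sketch of how an ordinary app is supposed to ask about device integrity, via the SafetyNet Attestation API (since superseded by Play Integrity); the nonce and API key are placeholders, and what comes back is a signed JWS to verify server-side rather than raw system files:

```kotlin
import android.content.Context
import com.google.android.gms.safetynet.SafetyNet

// Ask Play services for a signed statement about device/OS integrity
// instead of copying system files off the phone yourself.
fun requestAttestation(context: Context, nonce: ByteArray, apiKey: String) {
    SafetyNet.getClient(context)
        .attest(nonce, apiKey)
        .addOnSuccessListener { response -> println(response.jwsResult) }
        .addOnFailureListener { e -> println("Attestation failed: $e") }
}
```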
...and funny that you mention Windows Defender because it repeatedly advises the user that it might upload files to Microsoft and asks for their approval for doing so at multiple points. Microsoft is being perfectly transparent about what they're doing and giving users the ability to opt-out. They're also the people who make the entire operating system so they've got an obligation to try real hard to prevent another Blaster incident. Facebook just makes a social media app.
This is the biggest one for me. Anyone who has that data is capable of replaying the 0-days that affect Android. How many Android phones are left out of date?
As another user mentioned, the Android ecosystem is like the Wild West. Given there's a report of 2.5B active devices, how many could be affected by such an attack?
1% would affect 25M devices, around the population of Australia.
10% - 250 million devices.
40% - 1 billion devices...
Perhaps their motivation is to launch a campaign saying "Facebook keeps you safe!" by scanning your phone, and use it to justify people signing up for more surveillance. Perhaps this is a stealth beta test.
That’s one misconception (or misnomer) about Facebook and Google. They don’t “sell or give away your data”. They sell access to you based on your data. The distinction is important if we want to pass laws limiting what they can do with your data. If there was a law passed saying they couldn’t “share your data” they would just shrug.
As other commenters have mentioned, traditional sandboxing mechanisms would do little here. Applications are always given read access to system libraries because they need them to function.
I think one of the biggest nuts to crack is that the end user is in app space and can't blacklist apps (such as FB) from system procs and resources. It would help if we could sniff and/or hook requests to read an entire library at once, or such a request from a particular app, and ~pihole it or give it a honeypot to suck on for data.
Fully homomorphic public key encryption is the solution. We need a crypto scheme that hides the instructions and operands, but does not alter the visible arity of instructions, and allows arithmetic operations on the operands.
Kiss your battery life goodbye then, because HE requires a massive amount of CPU power. So much that it's outright prohibitive for use on a cell phone for at least the next 5-6 years, minimum (Moore's Law)
Buy a second device. $100 will get you a good used phone, $50 a decent used tablet. Then you can segregate your own use, say making your primary device microg+fdroid, and only using the device with the surveillance culture apps where necessary.
If the company leaders and employees have any integrity left, they should quit their jobs and do something that's actually worth doing for humanity and mankind.
We should create a "privacy hall of shame" (I was tempted to call it the "privacy offender registry") and list the names of all the employees who work on these features, along with an easy-to-read blurb which explains how the feature could be misused. Bonus points for linking to their social profile. If you cannot find the actual person, go up the org chart and list the person closest on the hierarchy.
Not that it is going to matter, any more than you can dissuade members of a cult by telling them they should forego their membership. It just seems to bring the cult closer together.
Sandboxing on iOS would not stop this sort of thing. Not that this would be useful on iOS, since all the libraries are combined into one file, mapped into every process, and easy to grab from a firmware image. (I guess this could be useful if you’re trying to debug something on an internal install?)
No it's not. They're compressing the libraries and sending them all upstream. They're also eating into data caps (though probably they're doing this only over Wi-Fi), let alone flouting all kinds of copyright laws.