Sounds like this is more in line with what they did with Apple Pay vs. traditional credit cards, i.e. they give you randomized IDs each time so the other party can't track you from transaction to transaction. Ads can still appear, but they won't know who you are, so it's a direct shot at Google and others looking to give people "targeted" ads based on user behavior. I agree it's an issue that needs to be addressed. Just because I searched for X two days ago doesn't mean I want to see adverts for X for the next two months.
Apple Pay does offer enhanced privacy by not transmitting your name along with your card number, but it doesn't randomize the number with every transaction. In fact you may not want that as it would disrupt email receipt systems and loyalty programs (absent some parallel mechanism).
Apple Pay does something called tokenization, and the goal is more fraud protection than privacy. It generates one new number at card enrollment and uses that exclusively. By using a unique account number which can only be issued by Apple Pay devices, it doesn't matter if someone hacks the merchant and steals your number. They can't use it without the associated Apple Pay-generated cryptogram, secured by your PIN / fingerprint.
Honestly the enhanced security of Apple Pay is underhyped. It's really great.
You should read the iOS security guide portion that explains Apple Pay, the security benefits and amount of thought that has gone into the system are pretty amazing.
Apple Pay is a generation ahead of chip and PIN. It obviates the need to enter a PIN, which is more convenient, and it's invulnerable to attacks on PIN terminals, which have defeated chip and PIN in the past.[1]
I've noticed that some merchants still require a signature no matter how much you spend, which I don't really understand. Others, such as Kroger, require a signature if it's over $50, but they don't take Apple Pay.
What about contactless cards? I personally find contactless cards, common in the UK, much more convenient than having to faff about with my phone, which may be out of battery.
I find the opposite. My phone is just in my pocket; I pull it out, tap the fingerprint scanner, and I'm ready. My card is tucked away in my wallet and takes a bit longer.
Admittedly being out of battery is a hard problem, but I chose my phone partly because it has decent battery life.
Two reasons Apple Pay is better than contactless cards. One is security. For purchases over a certain amount, cards make you enter a PIN. This is inconvenient and also susceptible to attack. Apple Pay secures every single transaction with your PIN, made convenient by Touch ID.
Two, if "faffing about" is your concern (thank you for that expression, by the way; I'm going to start using it), pulling out a card is not really much different from pulling out a phone. But the Apple Watch supports Apple Pay, and that way you don't have to pull out anything. It's really convenient.
If you're wondering how it works securely with the watch, it's pretty smart. You unlock the watch with a PIN when you put it on. It senses when you've removed it from your wrist, so it just stays unlocked until that point. Therefore all Apple Pay purchases are PIN authorized without having to prompt you for it or a fingerprint. All you have to do is wave your wrist by the terminal and confirm.
I am in Australia and we have had contactless payments for years. Apple Pay with an iPhone was more trouble than using a card. On the other hand since I got my Apple Watch I haven't carried credit cards with me. It started out as a 1 month experiment to see if I could survive and it has never been a problem.
Phone based payments seem to have really failed to gain momentum here in Australia too; we've had contactless cards for a few years now and they've become the norm, using a phone seems to have marginal benefit over that.
My understanding is that contactless cards have a limit of £30 per transaction in the UK. Apple Pay (perhaps it's the same with Android Pay?) has a significantly higher limit.
25 € in Austria. But for higher amounts you can still just hover the card above the terminal and then enter your PIN instead of sticking the card in (at least at some stores).
Interesting. In most stores in Holland, my experience is that over a certain amount, or after a certain number of transactions, I get a 'PIN required' message. I then have to wait for the cashier to do some voodoo, only then insert my card (doing it earlier causes issues), and enter my PIN. This is the case pretty much everywhere.
I love contactless payment, and this occasional complication isn't enough trouble to put me off, but it's still really annoying, because I'd expect it to work exactly the way you describe.
Note regarding Apple Pay: due to the way credit card networks implement network tokenization, online merchants can actually track you across multiple transactions. You get a per-device ID, of which you can have at most 10 per card (for Visa). Ideally, unique payment tokens would be derived from these IDs for each transaction. In reality, the ID is sent along with a random cryptogram for each transaction, leaving tracking possible (on the same device only). This is because these tokens have to be in the same format as card numbers, and the ranges available to issuers are rather limited.
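To illustrate that concern, here is a minimal merchant-side sketch in TypeScript. It is not the actual network message format (field names are hypothetical); it just shows why a stable per-device token lets a merchant link purchases even though each transaction carries a fresh cryptogram.

```typescript
// Hypothetical merchant-side sketch: the stable device token acts as a linking
// key across purchases, even though the cryptogram is fresh each time.
// Field names are illustrative, not the real network tokenization format.
interface TokenizedPayment {
  deviceToken: string;   // stable per card/device pair
  cryptogram: string;    // one-time value, useless for linking
  amountCents: number;
  timestamp: Date;
}

const purchaseHistory = new Map<string, TokenizedPayment[]>();

function recordPayment(p: TokenizedPayment): void {
  const history = purchaseHistory.get(p.deviceToken) ?? [];
  history.push(p);
  purchaseHistory.set(p.deviceToken, history);
}

// After a few visits, every payment from the same device is linked together,
// even though the merchant never saw the real card number or the buyer's name.
```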
I've had that problem before as well. My wife was going to Mexico and looking for a new swimsuit, and so I was hitting up the SwimCo website. Cue three months of women's swimwear ads from SwimCo – and nothing else. Almost every single ad on every single page was the same ad in different shapes, all of them for SwimCo.
Recently too I've noticed that Amazon is putting ads in my Instagram feed for specifically things that I've looked at on Amazon within the last day or two. I'll literally click a link to a book or do a search, and then four hours later it'll show up in an ad in the app.
Aside from being kind of pointless (I already know about these items, why are you showing them to me later that day?), it's also all kinds of creepy and unsettling to see Amazon advertising six things I've seen recently, and not even on the same device I was viewing them on.
What's worse is when colleagues at work can look over your shoulder and see everything you are considering buying. Luckily for me, my purchasing habits are pretty mundane, but I can imagine this could get quite embarrassing for people shopping for riskier items.
Indeed. It was especially awkward after I searched for "Willy Wonka costume", which prompted Amazon to also show me the results for "Willy costume". Some of those items were then clearly visible in almost every Amazon ad and recommendation I received during the next couple of months.
That's actually not how Apple Pay works. You get a new randomized credit card number, but only once. Shops can still track you by checking for the number. You can check that yourself by looking at receipts when you pay with Apple Pay: each receipt features the same number (most receipts only show the last 4 digits, but they are always the same when you pay with Apple Pay).
Indeed, I observed this because our local grocer asks for your email address when checking out so it can send receipts there. After providing mine it never asked again.
It would be really cool if it generated new numbers each time and had an amount coded to that number. So when I wave my Apple Pay device over the reader it would display the amount on the device, I would approve, and then a number would be handed back that's only good for that amount.
It's a randomized number per original card. Every merchant sees the same number. According to some other poster, if you have several devices (like an iPhone and an Apple Watch), then you get a new number for each device.
So, not only can a single merchant track you, but all merchants can cross-reference the data they have about you and track your whereabouts, purchasing habits etc. They just don't know who you are anymore, because that information is not transmitted. Unless one merchant asks for your email or home address, and this merchant then adds that email to a shared database, at which point we're back to step 1 and the merchants know everything about you.
Or you just use any kind of loyalty card/account when making a payment using Apple Pay even once. :(
I didn't realize it only randomized once and am now disappointed in the way apple marketed it.
But would you rather see ads for something you might be interested in (however tangentially) or something completely random? Personally I prefer the former, as long as there is some basic sanity filtering involved.
Thumbs up for Apple distinguishing themselves with their pro-privacy stance, as opposed to MS, who don't have anything to gain from Win10's excessive "telemetry", IMHO.
I recently realised that any company taking on Google (e.g. Apple, Mozilla, ...) that is afraid it won't be able to compete in areas like machine learning or sheer size is realising, rightfully so, that being pro-privacy is the one thing it can compete on that Google will never be able to imitate. Pretty sweet, actually.
I think a big thing about AI with Apple is the fact that their stance on privacy makes it a bit harder to compete with the harvested data sets of Google. From what I understand though, according to the reactions on the papers they released a bit back, they're not doing so poorly.
That being said, I've never used Cortana or Alexa, but I hear they're pretty decent compared to Siri.
Exactly. I think they figure that if they have to harvest data themselves and try to beat Google at its own game, they have a larger chance of losing than if they concede on quality by not harvesting data (as much) but try to offset that by being privacy-conscious. And yeah, perhaps they'll still be able to do a pretty decent job that might not give them too much of a disadvantage compared to Google. You often see, I think, that many machine learning advancements can be reproduced offline with a few years' delay.
You do know that Apple makes mobile devices, right? And so it would make sense that Apple is part of the group that supports hardware standards in the mobile device industry; they're not just a lobbying group.
You mean the Walkman, right? Why yes, I heard of it, but thanks for double checking.
Anyway, if they're pro-privacy, it would also make sense for them to raise hell when the group they're part of for some other reason argues that browsing data shouldn't be considered private, of all things.
5 downvotes, huh; anyone able to explain how the above is compatible with a strong pro-privacy stance? That's like saying you're a vegetarian, except on Saturday at noon.
Apple may be a member of CTIA, and CTIA may have argued for a particular anti-privacy stance, but that doesn't mean Apple supports that stance. There are a lot of members of CTIA, and CTIA presumably lobbies for a lot of things, not just ISP privacy rules.
May be a member? They are. And I tried looking for a statement by Apple on that issue, but couldn't find one. So until I see one, I'll say the lobby group speaks for them.
Is the money that lobby group works with organized in such a way that Apple's money doesn't go towards this particular thing? If not, what does "not supporting that stance" even mean? That they outsourced their fight for it, so they can have the nasty outcome and stay morally pure? That as long as there is some bending-over-backwards possibility of Apple being against it, they're against it?
> There are a lot of members of CTIA,
Apple is a giant, not just one member among hundreds. If they are quiet on this, it has their support. Or are you saying they might not even be aware? Just didn't find it important enough to scream bloody murder about it? No matter how you try to spin it, can you spin it into something really good?
> and CTIA presumably lobbies for a lot of things, not just ISP privacy rules.
Unless you mean to say they also set aside part of the budget for lobbying for the opposite of this effort, I can only ask "yes, and?".
You're going to great lengths to argue that the tech company with the best track record for privacy is secretly trying to violate your privacy. You're wrong.
I'm not going to great lengths to $yourstrawman, certainly not in a kiddo phrasing like "is secretly trying to violate your privacy." I say what I say, in the words I use, and apparently none of you can argue with one bit of it, yet you downvote like mad regardless.
Were any of you even aware of the information? I looked at that list out of curiosity, and was surprised to see Apple. I was less surprised to see others, but Apple did surprise me. Not as surprising as the pathetic reaction here so far, but still surprising. And I haven't taken Apple seriously since the one-button mice, you know? I still believed their "privacy in our walled garden" stance; it's not like that required flattering them.
> the tech company with the best track record for privacy
I simply never bought into the premise that I have to pick among the presented turds. At that level of size and desire to be a middleman just to be a middleman, they're all trash sadly, and if you think criticizing one means bolstering the others, that's your outlook, another premise I don't share.
> We believe these additions will help us take the next step toward shipping Tracking Protection in Firefox beyond Private Browsing Mode. Look for that study in late 2017.
Not exactly the same, but there's Privacy Badger [1] from EFF that works on Firefox, Chrome and Opera. If you'd like to see a visualization of the tracking for your browsing habits, there's Lightbeam [2] from Mozilla. Both these have been around for a few years now.
Oh. I am sorry. True, it doesn't appear for normal windows. Anyway, as some other user suggested, visit "about:config" and change the privacy settings there.
I didn't realize it was only activated in private windows. Now that you've made me notice, I consider this misleading; the settings page says nothing about it being private-windows only.
It most certainly does say that this only applies to private windows: "Use Tracking Protection in Private Windows" [1]. If you want tracking protection always on, you can set privacy.trackingprotection.enabled to true in about:config or install the Disconnect extension.
Apple has every reason to do so, as their revenue doesn't come from ads. But I guess Google also has the motivation to enable this in Chrome, to make other ad networks less effective (and Google ads more effective).
If it happens, what does this mean for the internet advertising business?
They are completely different beasts. Internet Explorer merely offers the option to enable[1] "Do Not Track", which websites and advertisers are free to ignore[2], while Safari's new ad tracker blocker "uses machine learning to identify trackers, segregate the cross-site scripting data, put it away so now your privacy — your browsing history — is your own"[3].
Also worth pointing out that Safari has had the 'do not track' feature for years and Twitter recently announced they are going to start ignoring it (a good example of how useless it is). So this new protection is very necessary and a great USP for Safari.
You are actually incorrect. Tracking Protection refers to an IE feature that lets you set "Tracking Protection Lists", which block traffic to specified domains and URLs. You can see a bit about them here: https://msdn.microsoft.com/en-us/library/hh273400(v=vs.85).a...
The whole "Do Not Track" default thing was, of course, a huge fiasco, as Google and others chose to ignore IE's default usage of it.
I don't know why this has been downvoted. Tracking Protection Lists are one of the best and unsung features of IE. People don't realize that they're different from Do Not Track.
Arguably, the fact that you don't have to trust random third-party extension code is a perk. And since it works off a pretty straightforward text file format, it's easy to roll your own list or customize one as you wish.
As I said in one of my comments, it's a bit janky to set up because you have to select one of the Tracking Protection Lists from their add-on gallery to turn it on, there's no default list pre-selected.
Going to my IE right now to activate it, I have to say this is a janky solution. It opens the Add-ons window, where you can see you have no Tracking Protection Lists. Then you can click to browse the add-on gallery for them, and then you have to scroll down and pick a list from a set of options.
While this is flexible, open, and that's all good, the lack of a common sense default and a multi-step setup process is probably why like... even I am not using this right now.
If Apple does this by default, it's gonna make a huge dent in Google Analytics' numbers, whereas probably almost nobody uses the feature in IE.
If you're interested, I created and maintain a tracking protection list based on the Ghostery and Disconnect filter lists. It's concise, fast, and better than anything in the IE gallery. https://amtopel.github.io/tpl/
IE isn't really a browser I use heavily personally. Has Microsoft carried the feature forward to Edge, or are they relying on extensions from the Store for that?
Most sites don't obey the Do Not Track header. Edge, on the other hand, is much more nefarious: by default it sends all data sent by POST requests to Microsoft. I was surprised to find Bing sending data from people who use Edge on my site to try and improve their search results. There are so many security and privacy issues with this it's not funny at all.
I've never heard that Edge sends absolutely all POST requests to any website to Microsoft as well. Could you share an article or something proving this?
You're joking right? Factory reset your iPhone sometime and note all of the prompts to share and access your data. The only difference between MS and Apple is that Microsoft made the mistake of presenting everything all on one screen instead of breaking it up over 5 or 10 prompts as you use the device.
There's a huge difference between being asked if it's ok to share your data and just sharing it by default. Additionally, Apple doesn't offer any kind of way for anyone but you to decrypt your data.
I think he’s referring to aggregated anonymized usage data, which people can opt in or out of with no effect on function. (This is different than messages, etc., which are stored on Apple’s servers but end-to-end encrypted.)
How can you say that when Windows 10 had Cortana turned on by default? Siri is not enabled by default on a Mac (just like all the other services) and you have to purposely choose to have them turned on during install.
So you've never actually installed Windows 10 then? Because from the beginning it's asked for permission to share your data for things like Cortana, the touch keyboard, ink, voice, etc. Based on user response they've evolved the interface and made it clearer, removing anti-patterns.
This is the same stuff Apple asks for permissions on.
Microsoft doesn't let you turn off telemetry data entirely. Things like hardware configurations and installed drivers are still sent, albeit anonymously, so that Microsoft can better support the OS.
Again, Apple does something similar. Spotlight and Safari send data back to Apple even when you're making queries against other services (e.g. DuckDuckGo searches). And like Microsoft, there's no UI to disable it.
Yes, I have. I installed it 2 weeks before it was released to the public as part of the Windows Insider program and then again on several machines after release and, most recently, in the Creator's Update. Until there was a huge backlash against MS, all of those things were opt-out and turned on by default, including Cortana and the ink features. Apple does not turn any of these on by default, nor have they ever, and you have to opt-in to those features to use them.
Spotlight and Safari send anonymous data back that is parsed and separated so that it can't be used to identify the machine, user, or account that they came from. That's wildly different from the MS approach even after all the changes made on MS's end.
You're right that it's opt-out with Microsoft and opt-in with Apple. However, Microsoft's opt-out screen appears during setup before you ever even reach the desktop, whereas Apple's opt-in is a nag that occurs periodically as you use the device or any time you install an update.
I understand that Apple has publicly disclosed how they anonymize data and roll identifiers, but Microsoft hasn't, so you really can't say whether telemetry data can be tied back to a user or not, because you don't know.
Apple's opt-in is only triggered when you attempt to use a feature that relies on a function of the opt-in or when a new OS feature utilizes those functions. It's not a nag. It doesn't ask you until you want to use it. Microsoft assumes you want to use it and hides the opt-out settings under an "Advanced" button during setup when it asks about new features and then promptly asks you again after it assumes that you don't know what you're doing. The "Are you sure?" prompts on Windows are far more egregious.
Absolutely and so does everyone else. Google is great at asking/nagging you to turn features back on that you've turned off in Android. And every time you update iOS, if you have Location Services or some other feature like iCloud turned off then you get nagged to turn them back on.
There are ways to disable all of the various data collection mechanisms but they aren't sanctioned by Microsoft and MS provides no user interface for them. There are several projects on Github that perform varying degrees of this.
This suggests that Google/Facebook/Twitter will still be able to track you, assuming you use their websites regularly, but advertising companies that don't have pages frequented by the average internet user won't.
If I understand this correctly, the obvious counter-measure is for all links on example-recipes.com to go through example-tracker.com, which then immediately redirects to the original website with the linked-to content. Sort of like the weird link URLs in Google's SERPs.
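A rough sketch of what the publisher side of that counter-measure could look like (hypothetical domains, nothing a real site is confirmed to do): rewrite every outbound link so the click bounces through the tracker domain in a first-party context before reaching the destination.

```typescript
// Hypothetical client-side sketch: rewrite outbound links on example-recipes.com
// so every click passes through example-tracker.com in a first-party context
// before landing on the real destination, much like the URLs in Google's SERPs.
document.querySelectorAll<HTMLAnchorElement>("a[href^='http']").forEach(link => {
  const destination = encodeURIComponent(link.href);
  link.href = `https://example-tracker.com/r?to=${destination}`;
});
```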
This is great, but unfortunately, until Apple ups its browser security game, Safari is a non-starter. On macOS, switching from any other browser to Chrome is in the top 3 things you can do to materially improve your security in ways that actually matter in the real world.
Just to add some context: on macOS you can look at the seat-belt policy as a rough analog for basic sandboxing guarantees, where the fewer exceptions you have, the stronger your sandbox is. From that perspective, Chrome's policy has around 1/10th the exceptions of Safari's.
And of course, that's before we get into more complex forms of isolation that Chrome implements, such as the sandboxed GPU process, or ongoing work into things like network sandboxing, the macOS bootstrap sandbox, and site isolation (origin-bound renderer sandboxing).
For anyone following, this is Justin Schuh of the Chrome security team (and co-author of TAOSSA, probably still the best book in all of software security).
Another thing Chrome does out of the box that Safari doesn't is U2F.
Still another is Chrome's industry-leading TLS management, including the pioneering of HPKP and the Chrome/Firefox pin list, and the aggressive policing of the WebPKI CAs.
I've been pretty aggressively terse in this thread, because I didn't even realize this was a live argument anymore. Safari is simply not as secure as Chrome, and it's less secure in ways that are meaningful to normal users.
The question is a bit more complex than a simple reading of these files. macOS sandboxing allows dynamic extension of the sandbox, which would not be reflected in the profile (I'd bet Safari does more of this than Blink, though). Also, as you mentioned, it's relevant to look at what's factored into separate processes, and how those processes are sandboxed. Safari's Network process has been sandboxed since 2013, so I don't think you can count Chrome's ongoing work to do so as a Chrome advantage.
If you add these things up, the difference in practical effectiveness is not as wide as one might think.
I don't keep up too much on Safari these days, so congrats on moving the network stack out of the content process. But looking at the current WebProcess seat-belt policy and what gets initialized, it looks like there's still far too much attack surface relative to Chrome. Things like audio/video capture and other permissioned Web APIs appear to be permitted directly inside the sandbox. And the GPU attack surface alone is a giant vector for escape--plus all the other potential escape vectors posed by that very long list of mach services.
So yeah, the seat belt policies alone aren't determinative, which is why I called them "a rough analog". And it's hard to say what gets pulled in through warmup (which is why we'll be eliminating it with our v2 bootstrap sandbox). Accepting that, it's pretty clear that there's just dramatically less attack surface exposed from inside Chrome's sandbox versus Safari's.
The network stack has been out of the content process for a super long time, it is not a new thing. (Ironically, Chrome engineers argued strenuously against doing it when we first started).
You're right that a separate GPU process is a huge advantage for Chrome. Kudos on that, and we'll likely have to move in the same direction sooner or later.
Audio/video capture is temporary and not in currently shipping Safari. It was just the simplest path to getting WebRTC up and running. We plan to fix it before we ship. I agree with you that it's risky attack surface.
Also agree with you that we expose more mach services and for lots of them it would be better not to expose them. A tradeoff here is that Chrome (as I understand it) provides most of those facilities via brokers that are often not sandboxed themselves. It used to be many of those things were just done by the application process.
I suspect we'll see our respective sandbox models become more similar over time, especially on macOS.
> The network stack has been out of the content process for a super long time, it is not a new thing.
FWIW, Chrome's network stack doesn't live in the content process either. It's not currently sandboxed, but it's in a process that has no scripting runtime or other dynamic content, so it's still pretty high bar for exploit. The exact reasons for the current situation have to do with some legacy Windows support that has since been removed, which is why the sandboxing work is now moving forward. So, I definitely appreciate your situation with adding some sandbox exceptions for WebRTC.
> I suspect we'll see our respective sandbox models become more similar over time, especially on macOS.
Fair. But I will say that Chrome being cross-platform tends to naturally push us in the direction of eliminating sandbox attack surface. Our supported platforms just differ so much that it's easiest to lock down the OS as much as possible and implement narrower, origin-bound capability brokers inside Chrome. If I were more tightly bound to a given OS implementation, I expect I'd have a lot more fights about sandboxing, because it's easier for devs to just standardize on what the OS gives you.
It does seem like being cross-platform makes it more natural for Chrome to lock down the content process very tightly, and provides a strong incentive to do so. On the other hand, it may make it more difficult or less natural to lock down some of the other processes.
On our end, it's natural to sandbox every new process we introduce, but also easy to fudge what is allowed in sandbox profiles. Sometimes we have a choice of accessing a service through a separate process, or working to make sure that service itself is more secure (sandboxed itself, offers thinner and properly validated IPC interface, etc). In many cases, the real right choice may be to do both. As well as fuzzing the heck out of every IPC boundary.
Can you give specific examples of why Chrome is significantly better than other browsers, including Firefox and Opera? Chrome is a non-starter for me because of its resource usage and battery hunger.
One specific area where Safari is better than Chrome is in private browsing mode. In Safari, each tab is completely separate, and the cookies aren't shared (as far as I can tell) whereas in Chrome, it's only separate as a whole "private browsing session." They each have their pros/cons but I prefer Safari's model.
Then try to work back either Edge's or Chrome's approach to security to specific Safari features and design.
The Chrome security team is probably the most sophisticated software security team in the industry (lest you think I'm in the tank for Google, I'd say the iOS platform security team is a close 2nd --- and, to be clear: Safari is a different story on iOS).
If you don't have the kind of security Chrome provides, then in reality everyone can track you, because all they have to do to own up your machine is get you to look at a web page.
If that's true, then I'd rather go down fighting (no matter how futile that is) than willingly give up any more private information to Google. I think the time for pragmatism when it comes to privacy is long over.
I'm saying what I said: if your browser isn't adequately secure, all the anti-tracking features don't much matter, because the people you really need to worry about will be able to own up your entire machine and quietly persist themselves into it.
Google doesn't need backdoors into Chrome, in the same way that it's technically not cheating if you adjust the rules to fit your demands better than others (see f.i. AMP).
> because the people you really need to worry about
I think we disagree about who to really worry about. I worry about persistent low-level corporate surveillance more than hacker attacks, because while the latter is more acute and can cause great financial harm, the former is what's going to damage my freedom and right to privacy once the government decides it wants to firehose all that data.
This post has lots of info about Chrome and Edge RCE defenses. Super informative on this front. But it is surprisingly light on detail about what makes their sandbox more robust than Edge's. (I don't know nearly enough about the Edge sandbox to assess this claim for myself.)
You work on the Apple Safari team. Are you really saying you feel like Safari's sandbox and anti-exploit features are comparable to those of Chrome? That would be a newsworthy claim.
Safari's sandbox is weaker in some ways and stronger in others. Saying which is overall stronger would be a judgment call. I wouldn't make a claim like that without spelling out at least some of the details.
This subthread is about the sandbox so I'm not sure why you threw in "and anti-exploit features". I'd probably say without qualification that Chrome has better memory corruption mitigations.
I hoped you might have concrete feedback on what aspects of our sandbox we should shore up. We have our own ideas but of course an informed outside view would be valuable.
In what ways would you say the Safari sandbox is stronger than Chrome's, on macOS?
How would you compare Safari's anti-exploit technology (allocator hardening, Javascript engine hardening, &c) to that of Chrome? Do you think you do anything better than Chrome does on that front?
Your original post here made a bold claim with no qualification and no supporting details. You're not providing any backing to your claim but at the same time you're asking me to give details. Plus you've repeatedly thrown in anti-exploit tech which wasn't the original point of contention.
It would be easy to get the impression that you're trying to shift the burden of proof and move the goal posts. Despite this, I will try to assume good faith.
I think your original post gave the impression that Safari either has no sandbox, or has a wildly ineffective sandbox. You didn't directly state it, but at least some users understandably took away that implication. I think this is inaccurate and unfair.
One piece of evidence we have is grey market prices for end-to-end Safari exploits (with full sandbox escape). By this metric, breaking out of our sandbox on Mac or iOS is not trivial, and is at least comparable in difficulty to Chrome or Edge on Mac, Windows or Android. On the flip side, it seems to be significantly easier to get inside-the-sandbox remote code execution in Safari if you go by market prices, hacking contests, etc. That's something we're working on. Chrome and Edge definitely have materially better mitigations here (as I said in my earlier post).
And finally, to answer your question: one small way Safari has better sandboxing is that we sandbox our network process (something that Chrome is still working on).
My contention was that Safari is less safe than Chrome, not that Safari's sandbox was in particular worse than Chrome's. Nevertheless, on balance, Safari's sandbox is significantly worse than Chrome's. I think --- but you'd know better than I would --- that this is because browser security is a platform problem for Apple, and an application problem at Google. Apple's platform-level mitigations are very powerful on iOS, but substantially less powerful on general-purpose operating systems. Chrome's sandboxing is specific to Chrome itself, and thus finer grained and more powerful.
I think if you create a breakdown of all the facets of browser security, it will look something like this:
You actually did make a claim that Safari's sandbox was in particular worse than Chrome's, in the post I directly replied to. That is what got my dander up. Elsewhere you implied that the Safari sandbox is comparable to the Java sandbox. I hope you will now agree that the Safari sandbox is closer to Chrome's than to Java's.
I don't know enough about the full spectrum of security technologies in all the browsers to have an informed opinion on your rating scorecard, but some thoughts:
Your assumption that browser security is (only) a platform problem for Apple is wrong. If that were true, we wouldn't have dedicated sandbox profiles for the WebKit content process and its various helpers, which are much tighter than the system default app sandbox on both macOS and iOS. Also, macOS has significant system-level defenses, though obviously not as strong as iOS's.
Safari and Chrome both use the same underlying OS facilities on macOS to implement their respective sandboxes, so I don't think it's right that "Chrome's sandboxing is specific to Chrome itself" to any greater degree than Safari's (or really, WebKit's). It's also not more fine-grained. My understanding of the Chrome sandbox model is that their ideal is to deny everything, based on designing around the very coarse-grained mechanisms in Windows. The macOS/iOS sandbox model is intrinsically built around fine-grained permissions, and Safari grants more of them to our content process. So if anything, Safari's sandbox is more fine-grained (but I am not sure this is an advantage).
On the scorecard itself:
- It's really hard to compare sandboxing technologies across platforms. My vague impression is that Safari's is stronger than Edge's and macOS Chrome has perhaps a small overall edge over macOS Safari in terms of effectiveness. I'm also not totally sure you can even do a linear ranking. For instance, only Edge puts their JIT outside the content process, but I am not sure this means they have the strongest sandbox overall.
- Anti-exploit: agree with the top two, not sure I'd put Firefox over Safari.
- UX: I'm not totally sure how you are grading, but you should be aware that Safari has a really good built-in password manager. Passwords are securely stored in Keychain and we offer to generate random per-site passwords at account creation or password change time. I don't even know the vast majority of my website passwords. With iOS 11 this will be expanded to sharing website passwords with corresponding native apps for those sites, removing the main remaining reason to have a simple password.
- TLS: Not knowledgeable enough here, but note that we're moving to BoringSSL in the upcoming OSes and have cert pinning and HSTS and all that good stuff.
- Library security: not entirely sure what you mean by that.
I broadly agree with Justin Schuh's point in the post you linked that isolation technologies are more important on a philosophical level. Also, I would give kudos to Chrome and Edge for having excellent overall security.
Sorry for the delayed response. Also: I have to be terse about some of these things for work reasons.
First, regarding isolation: using the same OS facility to block system calls is a superficial similarity between Chromium and Safari. Chromium and Safari are divided into process components differently, and block different system calls. Chromium exposes much less to its renderer process than Safari does to WebProcess. Not only that, but Chromium has finer-grained components; the GPU isn't exposed to Chromium renderers the way it is to Safari WebProcesses. This isn't a theoretical difference, as you know (but readers here don't): IOKit has been a source of WebProcess sandbox escapes for Safari. Safari isolates the network process and Chrome doesn't, but the network process is a low-priority attack surface. The highest priority attack surface is the one reachable directly from content-controlled Javascript. You say Chromium's edge over Safari is "small overall". We agree that the edge exists, but I strongly disagree that the delta is small.
We agree on anti-exploitation. In fact, if Safari got better here, I'd be less nervous about people running Safari. What are the plans here? The combination of (1) general purpose operating system, (2) rich attack surface exposed to WebProcess, and (3) lack of serious runtime hardening is most of my argument against using Safari. The rest of this list is "nice to have" stuff.
Regarding UX: Chromium has a well-regarded security UX team. Does Apple staff a dedicated security UX team for Safari? Chromium supports U2F natively. When will Safari? I think Chromium, Safari, and Firefox are closer together here than the browsers are on other facets of this list; I don't think Safari does a bad job here, just not as good of a job as Chromium.
Regarding TLS: Adopting Google's BoringSSL library is a fine start and I know Apple has strong crypto people on the Secure Transport team. But does Safari support HPKP? (If so, when did that happen?) Why is it virtually always Google's TLS team detecting and punishing rogue CAs? What CA BR violations were detected by the Safari team, or any other team at Apple? Has Safari done anything like the Google PQ handshake experiment? It feels a little unfair holding Apple to the standard of what is basically the most sophisticated Web PKI team on the planet, but that's a real part of browser security.
Regarding "Library Security": I don't know what to call this item and so I'm not surprised that you're confused, but: how does Apple's work fuzzing and doing vulnerability research in the underlying libraries that the browser depends on compare to Google's work doing the same thing? I think we both know the answer: nothing Apple is doing is close to what Google's in-house offensive researchers are doing. Apple benefits from the work Google does here and so can draft off Google's team here, but Google prioritizes their in-house offensive work to help Chromium.
I could make a similar scorecard for iOS versus Android and I think you'd see the reverse on these rankings, with Apple in the lead on basically everything. But browser security isn't hardware security, and on macOS, I don't think Safari and Chrome are close. I think Chrome is significantly more secure.
I know your replies here are probably somewhat aggravating and definitely time-consuming, but I appreciate your level head and the detail and information you provide. I don't have a complex understanding of any of these technical details and you explain things in a clear and concise way.
Just because both features are named "sandbox" doesn't make them equivalent; the exact same argument says you should also be happy to run hostile Java applets, which, after all, are sandboxed.
You make this claim over and over again and then double down on it when pressed (you literally call Chrome and macOS Safari "incomparable"), but you fail to provide any reason or evidence to support your claim and instead always insist on counter-evidence. When pressed (further down in this comment thread), the only thing you share is this vague idea that Apple isn't incentivized to make Safari secure on macOS. You also compare Safari/Chrome sandboxes several times after declaring them incomparable.
You then, unprompted, create a comparison of all the major browsers again with no citations or supportive reasoning.
While I'm not sure how effective they are on OS X, I have a hard time agreeing with any suggestion that Chrome is anything but the worst possible option for browser security right now. Chrome's official extension store is full of malware which collect not just your browsing data, but the contents of every page you view, and Google has shown almost zero interest in policing it. And a large percentage of malicious websites are designed to get users to install these malicious extensions.
Chrome may be relatively decent at preventing a webpage from compromising your OS, but in the modern era, a compromised browser is as bad or worse anyways, since that's where most of your sensitive activity goes.
While many HN readers will know to avoid the perils of this crud, I don't feel Chrome can be recommended over IE6 to the wider Internet while this remains so commonplace. Safe use of Chrome requires constant vigilance.
In light of Chrome's issues, I feel like a claim that switching to Chrome is important for security requires exceptional evidence of vulnerability in the other browser.
I'm not saying it doesn't suck. But it's going to keep sucking as long as people are willing to pretend that Safari already has comparable security to Chrome.
Then why not fix the horrendous browser performance of Chrome? It's not like people's complaints about how much energy it uses relative to Safari are new. People have been complaining for YEARS.
According to another comment you work on the Chrome team. So unlike most everyone here, you're actually in a position to fix one of the two options.
I'd combine those two, and then my #3 would probably be making sure that you can't easily click on things in emails that open documents in local applications, and my #4 would be some combination of FDE and encrypted DMGs for projects and sensitive files.
Stopping fingerprinting right now is essentially impossible for a motivated attacker. It's enough to block the dumb trackers, but as long as performance is a consideration caches will exist. And as long as caches exist, so will fingerprinting.
I doubt the next Safari also addresses browser fingerprinting. Otherwise, Apple would have mentioned it. Most likely, ad networks will adopt browser fingerprinting over the next few months, and then Apple will introduce some solution to that in a year or two.
A simple way would be to taint any JS/DOM data that interacts with the font metrics API (or one of a number of other similar APIs) and then not allow tainted data to be used as parameters in network requests.
You don't even need the font metrics API. Draw a span containing the character "m", measure the width of the span using Element.clientWidth. Unless you taint (almost literally) the entire CSSOM, you can pull off similar things.
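For the curious, something along these lines is all it takes. This is a minimal sketch; the probed font name is hypothetical, and real fingerprinting scripts probe long lists of fonts.

```typescript
// Minimal sketch of font detection via layout measurement, no font metrics API needed.
// If the niche font is installed, the rendered width differs from the fallback width.
function textWidth(fontFamily: string): number {
  const span = document.createElement("span");
  span.style.fontFamily = fontFamily;
  span.style.fontSize = "72px";
  span.style.position = "absolute";
  span.style.visibility = "hidden";
  span.textContent = "mmmmmmmmmm";
  document.body.appendChild(span);
  const width = span.clientWidth;
  span.remove();
  return width;
}

const baseline = textWidth("monospace");
const probed = textWidth("SomeNicheFont, monospace"); // hypothetical font name
const hasFont = probed !== baseline; // one more bit for the fingerprint
```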
Alternately: why not anonymize CSSOM return values? Your browser might have access to OS fonts A+B+C, but if your JS asked the CSSOM about the size of characters on the page, the answer it would give would come from an "alternate world" where the browser only has access to the web-safe fonts, and so is using one of them.
Pixel-correct measurement of fonts / text is a must-have for certain specific applications like subtitle renderers. (I maintain one.)
For a specific example, it's more pleasing to split a long line of text in a way that all the split lines have roughly the same length - "a a a b b b" -> "a a a\nb b b". But CSS only gives you one way to split lines - as much text as possible in all but the last line and whatever's leftover in the last line - "a a a b b\nb". This means a renderer library has to be able to measure the width of text to be able to insert linebreaks itself.
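As a rough illustration of why a renderer needs real measurement (a simplified sketch, not any actual library's code): you try candidate break points and keep the one that makes the two lines most even, which only works if you can measure the pixel width of arbitrary substrings. `measureWidth` here is an assumed helper, e.g. backed by CanvasRenderingContext2D.measureText with the subtitle font.

```typescript
// Simplified sketch: choose the word boundary that makes the two lines most balanced.
// measureWidth is assumed to return the rendered pixel width of a string.
function balancedBreak(words: string[], measureWidth: (s: string) => number): [string, string] {
  let best: [string, string] = [words.join(" "), ""];
  let bestDelta = Infinity;
  for (let i = 1; i < words.length; i++) {
    const first = words.slice(0, i).join(" ");
    const second = words.slice(i).join(" ");
    const delta = Math.abs(measureWidth(first) - measureWidth(second));
    if (delta < bestDelta) {
      bestDelta = delta;
      best = [first, second];
    }
  }
  return best;
}
```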
Huge amounts of the web will break: anything doing anything layout-related with JS will likely break.
Changing line-lengths will cause odd bits of layout breakage, so just giving bogus results as if rendered with a different set of fonts won't work properly either.
Browsers already treat first-party cookies differently than third-party ones. Report a homogenized fingerprint to Google Analytics, but a real one to the main site.
Looks like this will stop (after 24 hours) some companies from doing an initial redirection to set cookies for tracking purposes... Example:
1. Search Google for hockey sticks
2. Click on search result hockeystick.com
3. hockeystick.com issues a 302 to adcompany.com which then issues a 302 back to hockeystick.com
Why the 302? Because in Safari, you could only access cookies in a 3rd party context if you've seen a domain in a 1st party context. Setting a cookie in adcompany.com in a 1st party context gives you the ability to read that cookie in a 3rd party context which could be used for tracking purposes.
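For concreteness, the adcompany.com leg of that bounce could look something like this. This is a hypothetical sketch using Express (with cookie-parser assumed for reading the existing cookie); real ad-tech implementations obviously differ.

```typescript
// Hypothetical sketch of the tracker's redirect endpoint.
// The cookie is set while adcompany.com is the first party, which is what made it
// readable later in a third-party context under Safari's old rule.
import express from "express";
import { randomUUID } from "crypto";

const app = express();

app.get("/r", (req, res) => {
  const uid = req.cookies?.uid ?? randomUUID(); // requires cookie-parser middleware in practice
  res.cookie("uid", uid, { maxAge: 1000 * 60 * 60 * 24 * 365 });
  res.redirect(302, String(req.query.to ?? "https://hockeystick.com/"));
});

app.listen(3000);
```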
The URLS would be different. Companies also rewrite internal links as you're navigating a site to accomplish the same thing. Example: https://baycloud.com/thirdparty-redirect
They're just being a little sophisticated in how they block third-party cookies. This will hardly stop other tracking scripts, tracking images, widely-used fingerprinting techniques and related JS calls. So nothing remotely close to even Brave, let alone Tor or the Epic Privacy Browser.
We're trying to do the most extreme thing we can do short of blocking ads. To be more effective, you end up blocking ads, whether intentionally or as a side effect.
This blocks more than just cookies by the way, it affects all client-side state. And client-side state is still the primary and most reliable tool used for tracking, even though other methods exist, such as browser fingerprinting, behavioral fingerprinting, and IP-based tracking.
The big question to me is whether it's enabled by default, and whether it blocks requests to Google Analytics. If so, that's an interesting shot across the bow.
Maybe I'm overly paranoid, but I assume Google does all sorts of fingerprinting (documented and not) via GA. Why else would it be free if it didn't provide a big upside for Google?
Why do they need fingerprinting? They can just give you an identifier and combine it with your login on the Google sites to connect it to your identity.
Isn't it the case that all the data is gathered anyway, but site owners can only access the more advanced trackers with paid subscriptions?
I don't intend to spread FUD; I seriously don't know. This would seem like a rational choice for Google, since they need this data to run their business.
This is actually really concerning to me. If they blocked Google Analytics, it would severely damage that data. It'd be bad news for site owners who just want to quantify their traffic.
....so? Site owners are not guaranteed this access; their script runs on the client computer.
I say this as someone who does a lot of analytical research and re-targeting and would be hurt if this was rolled out on a larger scale; I just don't think I have a right to the data.
Imagine you were a police officer with this mentality.
"As someone who investigates lots of crimes, it's totally fine if someone invokes the fifth amendment, I don't have the right to compel them to answer."
"I mean if you don't care about something prevents you from doing your job, why even join the force?"
Doesn't seem that ridiculous a comparison to me. You don't have a right to compel something from someone else, but that doesn't mean you just have to give up on whatever task you are trying to accomplish.
When I read a comment, the first thing I do is read the ones above it so I understand the limited scope of the comment. I don't immediately rub my hands with glee and go about compiling a list of situations to which the comment doesn't apply. I guess that sort of thing appeals to some.
Complaining about losing data is in no way the same as assuming entitlement status or forcing someone to do something. The job of a police officer is to enforce laws, and exercise human judgement. The issue with the fifth amendment would be handled by lawyers in courts, not by the officers on the ground. IT jobs have completely different parameters. The comparison with police officer is entirely irrelevant and as such I don't want to continue that discussion.
Kind of presuming the wrong thing there. There's still work to be done, right? Just because something would make the job easier does not mean it should be done, ethics come first.
I was guessing that the reason you did not feel entitled to the data was ethical (at least that's how I feel). I also feel it's ethical to use the data, as long as it's freely given.
Why is it wrong for people to want to protect something they have of value from someone else just harvesting it from them? I can see why this is annoying, but can you really not see the other side of this situation?
Cookies are one way to track users. They are not the only one. Google Analytics is so ubiquitous...I can't see Google missing the opportunity to leverage it.
The cynic in me sees this as cutting off Google, and then tracking within the browser so they become the source of cross-internet tracking. I'd be on the lookout for any new 'personalization' feature that comes in to the browser. E.g. WWDC 2018: 'Today we're happy to announce Siri integration with safari! She will provide personalized recommendations and results by applying machine learning to your documents and data!'
> Siri now suggests searches in Safari based on what you were just reading. And when you confirm an appointment or a flight on a travel website, Siri asks if you want to add it to your calendar.
Search for "Smarter about you." on this page: https://www.apple.com/ios/ios-11-preview/ Looks like it's done on the device, though, end-to-end encrypted with your other devices.
Firefox (Nightly at least, I don't follow stable :D) also has built-in tracking protection, only in Private Browsing by default (about:config to enable everywhere).
It says a lot about the state of the web that both Apple and Google are looking at publishers and saying "Look, if you won't fix your websites, we'll fix them for you" (Google in the form of AMP on mobile devices). However, as one of those who subscribes to the opinion that AMP breaks the web, I greatly prefer Apple's approach.
It makes me wonder how many publishers at national newspapers and magazines are even aware of what’s going on.
It is well-known that Apple uses Omniture (acquired by Adobe, aka SiteCatalyst, aka 2o7.net, etc.).
As in 192.168.0.2o7.net. Remember, "SWF" stands for Small Web File. Yes, they actually tried to get users to swallow this when Shockwave Flash started to be used in devious ways, such as to track users.
Omniture's business is third-party tracking cookies, similar to Google Analytics or KISSmetrics. Not sure and don't care whether Flash is used so much anymore. If you're too young to remember, search and ye shall find information about "permanent" Flash cookies that could not be removed.
Apple is not saying "We will not engage with companies selling third party tracking cookie services." Clearly they are not opposed to third party tracking cookies in principle.
Instead they are announcing some change to their browser. Wow, exciting. It is not clear what exactly this announcement accomplishes for users. Probably nothing. If you are trying to avoid ads and tracking, popular browsers (without extensions, etc.) are not your friends.
In practice, User-Agent strings (which are just HTTP headers) have been shown to be pretty effective at uniquely identifying and tracking most people. So even disabling JavaScript and Cookies only goes so far.
Source? Because the only information contained in user-agent strings in modern browsers are browser version (realistically limited to vendor since browsers auto-update) and operating system version. So basically all you're going to get is (Chrome/Firefox/Edge/Internet Explorer/Safari on Windows/Linux/Mac), which isn't much.
It's more than just the browser, it's the exact, EXACT version of the browser which can be very revealing if you're not updating your browser (almost) every day. For example: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.85 Safari/537.36
I can't speak for Chrome or Safari, but Firefox's UA is pretty sparse:
Mozilla/5.0 (X11; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0
This is a totally custom, self-compiled build, and there is absolutely no reflection of that in the UA. Also note that the Mozilla/5.0 and Gecko/20100101 fields are frozen and are only there because sites break if they're not there.
>It's more than just the browser, it's the exact, EXACT version of the browser which can be very revealing if you're not updating your browser (almost) every day
Is there a reason why you don't have auto-update enabled in your browser?
Also, auto-updaters don't apply updates right away, so as long as you're not a few versions behind the latest, you will blend into the crowd.
A quick Wikipedia search turns up more fields [1], although some of these fields are not 100% accurate due to historical reasons (I'm looking at you, IE). I'd bet there are a couple of other data points they gather via JS to fingerprint.
Example:
Mozilla/5.0 (iPad; U; CPU OS 3_2_1 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Mobile/7B405
I compared 2 Chrome versions and it seems that most of the version numbers there are static.
Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2979.0 Safari/537.36
Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.10 Safari/537.36
As for iOS 10 it's pretty sparse as well.
Mozilla/5.0 (iPhone; CPU iPhone OS 10_0_1 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Mobile/14A403 Safari/602.1
It's slightly worse than Windows because it probably discloses your device type, but there are tens (hundreds?) of thousands of users for each iPhone variant.
Good read here.[1] An example could be using your installed fonts. Like I was saying, they probably use a bunch of other JS tricks. These 3rd parties aren't going to disclose anything.
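As a rough sketch of the sort of signals such scripts combine (standard browser APIs only; the real scripts gather far more, e.g. canvas rendering, WebGL, and font probing):

```typescript
// Rough sketch: combine a few standard browser signals into one fingerprint hash.
// Real fingerprinting scripts add canvas, WebGL, plugin and font probing, etc.
async function fingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    String(new Date().getTimezoneOffset()),
    String(navigator.hardwareConcurrency),
  ].join("|");

  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(signals));
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
}
```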
Because the site takes into account ALL user-agent strings ever collected, it overestimates how unique a user-agent string is. Realistically, at any given point in time there are only a few dozen user-agent strings in widespread use (due to how few bits of information actually get put into them). Unless you're using a special snowflake browser/operating system, you should be fine.
I misspoke when I said it was simply User-Agent; they appear to be fingerprinting based on other items such as installed fonts, etc. I believe when they say it's unique, they mean "unique", not "reasonably uncommon". And if that's the case, it's been up for years and has never encountered a system exactly like my current one. I'm on a very popular Linux distro used by most of my co-workers at a mid-size company, and I have the same set of work-related plugins installed as all of them, plus LastPass and Ad Block Pro. So not mainstream by any means, but also not going out of my way to be a snowflake, either.
Maybe the next frontier in browser anti-tracking is to stop sending a User-Agent header, or to build in functionality like one of the browser add-ons that randomly pretend to be different browsers.
There are a number of ways. Cookies are one, but you can also collect other kinds of data from a web browser to uniquely identify a user across multiple sessions. Generally speaking, if you can run JavaScript, you can track the user. This is done by all advertisers and most little widgets like Facebook or Disqus comments, like and tweet buttons, etc.
It's cookies and the change isn't really earth shattering but it does close the "redirection trick" loophole that some companies were using to track you across domains. See my example here for more specific details: https://news.ycombinator.com/item?id=14493373
Most trackers use cookies or other client-side state to track you across the web. There's also various fingerprinting techniques but they are less reliable.
Glad to see a player big enough to cause some damage taking this up. At this point, anything that harms Facebook/Google and those trying to mimic their data collection tactics should be considered good for the web and internet.
Later they demoed that they can track your interests based on what you've read on the web to show you personalized news in their news app and keyboard autocompletion.
And if this is true, this is a huge plus, because nobody else is doing it. Windows is cloud-based for this stuff, just like Google. None of the 'smart' features of Windows 10 work unless you enable what amounts to a keylogger and the ability to send a lot of extra data about what you do to their cloud service. (Well above and beyond the telemetry most people worry about here.)
That'd be first-party tracking. If I'm using an Apple app I expect Apple to know it, but I don't necessarily expect Mixpanel or Google Analytics or Segment.
Do you want crappy ads? Then go ahead, make tracking more difficult. Tracking helps you see ads for things you actually want to see. It's not some kind of grand conspiracy.
I have yet to see the supposedly relevant ads that all this targeting is supposed to get me. The closest to targeted I've gotten is seeing stuff I already just bought on Amazon.
For example: the user will no longer see the ads for items they may have previously searched for (even if this is unwanted/unpopular) - that's a change. Sorry to be so pedantic, but it's not accurate to say nothing behaves differently.
There is no option to turn off phoning home to Apple in Apple's pre-installed operating systems. Every user of iOS is constantly pinging Apple servers all day every day.
Connect an iOS device to the internet and watch the network. The user is given no control over this. All users are assumed to need Apple's help setting the system time.
The networking functionality of NeXT/Apple's operating systems is based on open source BSD operating system code.
But BSD does not phone home to some organization when you install it. Why not? Surely Apple's approach is the best one for all users, right?
It is amusing to watch these companies proclaim they will block others from tracking and serving ads while continuing to siphon user data themselves, often in ways that are anything but transparent to users. Apple can block everyone else; then I can block Apple. OK by me.
Someone in this thread made some comment about Microsoft Edge not tracking users. Do people seriously believe nonsense like that? MS was dumping debug output via DrWatson to the network long before collecting user data for profit was even a strategy.
Connect a Windows computer to the internet and watch the network. All on by default. Unlike Apple, they have no prepared explanation/justification why they need to do this.
And even if they did, who cares? Users prefer not to be tracked. Companies are admitting they know this.
Users could opt-in to tracking if they believed they were getting some benefit.
But that is not how this game works. There is no "opt-in". It is on by default. There was no intention to make tracking a "choice".
Probably because companies know what the choice of users would be and it would not be favorable to the company.
But that is not something we are allowed to discuss.