My $0.02: (a) join (or start) a startup; I suspect most would love to have someone like you. Or (b) at a larger organization, join a team carved out from the overall org to work on a new initiative. Depending on the company, job posts may specifically call out this sort of team.
Anything by Judea Pearl, but especially the Book of Why[1], is good. He comes at causality from a CS perspective, which I think would make sense for most people on here.
Economists also have a big causality literature which might be less accessible for HN folks but I think is still interesting and important. For a good intro to all that, I suggest Scott Cunningham’s “Causal Inference: The Mixtape.”[2]
Pearl's model-based causality uses essentially the same setup as economists' structural econometric models.
Here, economists largely care about identification and partial identification (see Manski's work), which is related to causation but is also a more practical matter of data and model choice.
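To make the identification point concrete, here is a toy Pearl-style confounding example (all numbers invented for illustration): a confounder Z drives both treatment X and outcome Y, so the naive observational contrast overstates the effect, while the backdoor adjustment formula recovers the true one.

```python
# Toy confounding example (all probabilities made up for illustration).
# Z is a confounder: it raises both the chance of treatment X and of
# outcome Y, so naively comparing treated vs untreated mixes in Z's effect.

P_Z = 0.5                       # P(Z = 1)
P_X_given_Z = {0: 0.2, 1: 0.8}  # P(X = 1 | Z = z)

def p_y(x, z):
    """P(Y = 1 | X = x, Z = z); the true causal effect of X is +0.2."""
    return 0.1 + 0.2 * x + 0.5 * z

def p_z_given_x(x):
    """Bayes' rule: P(Z = 1 | X = x)."""
    px_z1 = P_X_given_Z[1] if x else 1 - P_X_given_Z[1]
    px_z0 = P_X_given_Z[0] if x else 1 - P_X_given_Z[0]
    return px_z1 * P_Z / (px_z1 * P_Z + px_z0 * (1 - P_Z))

def e_y_given_x(x):
    """Observational E[Y | X = x], which is biased by Z."""
    pz1 = p_z_given_x(x)
    return p_y(x, 1) * pz1 + p_y(x, 0) * (1 - pz1)

# Naive observational contrast: inflated by confounding.
naive = e_y_given_x(1) - e_y_given_x(0)

# Backdoor adjustment: average the within-stratum effect over P(Z),
# which identifies E[Y | do(X=1)] - E[Y | do(X=0)].
adjusted = sum(
    (p_y(1, z) - p_y(0, z)) * (P_Z if z else 1 - P_Z) for z in (0, 1)
)

print(round(naive, 2))     # 0.5 -- confounded
print(round(adjusted, 2))  # 0.2 -- the true causal effect
```

The naive contrast here is 0.5, while stratifying on Z recovers the true 0.2; "identification" is exactly the question of when an adjustment like this is justified by the assumed causal structure.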
This is delightful. As someone who uses AI/ML/MI/... for security, I find there is not nearly enough understanding of how attackers can subvert decision systems in practice.
Tripwire is a Linux utility for doing just that. However, you need read-only media to store the hashes, and I think rootkits can still just intercept the read calls.
Some years ago, I had Tripwire installed for a few days but quickly removed it again: whenever I upgraded installed packages, I'd get a storm of messages about files that had changed. That was just annoying, since I was the one who had initiated the action that caused the files to change. But so many files changed that I had no way of distinguishing the legitimate changes (which they all were) from any potential illegitimate ones.
That's precisely what it's supposed to do. If you update the Tripwire db every time you "initiate an action that causes monitored files to change" - then it does a _magnificent_ job of telling you when someone _else_ changes those files.
You need to run 'tripwire --update' every time you run 'apt-get upgrade' or 'pip install foo' or 'npm install bah' or whatever - then you won't get that storm of false positives.
Yes - a few times a year when normal and authorised things or people have unexpectedly changed files in tripwire-protected places, and in ~20 years I think three times when I'd had an intrusion.
Those three timely notifications of real breaches have made 20+ years worth of occasional false positives 100% worth it.
I'm on mobile so links are annoying to get, but Apple has a great PDF on iOS security in general, which includes details on their protections against kernel patching. Windows has KPP, and macOS has SIP. Not familiar with anything for Linux, but I'd be shocked if there weren't multiple incompatible implementations of similar features.
Realistically, this is also something virtualization can help guard against. If your OS is initialized from a known-good image external to the VM every time the VM starts, you greatly increase the difficulty for an attacker to get persistent root.
Doubt it. Once you have control of the system, why would you not be able to just disable the check? What they should do is enforce code signing at the processor level.
Too bad that has its own issues. Android/Google definitely has the clout to call the shots (i.e. force the processor manufacturer to issue them a CA cert for their own use), but what would happen if ARM then became a dominant server architecture?
Also, putting code signing in the processor wouldn't fix the problem: code signing already happens at higher levels (i.e. higher than userland), so moving the verification down a level would likely take the exploitable bugs with it. The problem remains.
If the checks are enforced by the bootloader which is signed, then you cannot disable them. I think this is how the system partition is protected on recent Android versions. As this attack demonstrates, simply requiring all code to be signed isn't enough.
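The chain-of-trust idea is simple to sketch (a toy model, not Android's actual verified-boot implementation): each stage ships the expected hash of the next stage, so an attacker who modifies a later stage breaks verification at boot, and the chain is only as strong as the immutable root.

```python
import hashlib

# Toy verified-boot chain (illustrative only): the signed, immutable
# bootloader embeds the kernel's expected hash, and the kernel embeds
# the system partition's expected hash. Tampering with either later
# stage is detected before it runs.

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

kernel_code = b"kernel v1"
system_code = b"system partition v1"

# Hashes recorded at build time inside the signed bootloader/kernel.
trusted = {
    "kernel": digest(kernel_code),
    "system": digest(system_code),
}

def boot(kernel: bytes, system: bytes) -> bool:
    """Boot only if every stage matches its recorded hash."""
    if digest(kernel) != trusted["kernel"]:
        return False  # bootloader refuses a modified kernel
    if digest(system) != trusted["system"]:
        return False  # kernel refuses a modified system partition
    return True

print(boot(kernel_code, system_code))          # untampered chain boots
print(boot(kernel_code, b"system + rootkit"))  # modified system refused
```

Note what this does and doesn't buy you: it stops persistent modification of protected partitions, but (as the article's exploit shows) it does nothing about code that is validly signed yet exploitable at runtime.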
Oh, "this important security update requires a Wi-Fi network connection to download". Really? It's only 40.5 MB. Let me decide, please, how I use my data.
Am I missing a setting that allows me to install an important security update on a network of my choosing?
Yeah, Apple needs to fix this. They have been bumping the maximum app size for non-Wi-Fi downloads over the years, from 10 MB to 100 MB, but they haven't kept up for the actual security updates.
For extra hilarity, if you have two iPhones available, you can use the Personal Hotspot feature between them and install the updates even though it's all 3G/4G anyway.
Put your SIM card in another (i)phone, make a personal hotspot, update.
If you don't have another phone, use a friend's phone and/or data via a shared access point.
If you don't have a SIM card, move to a country where they protect consumer rights, so you can change phone without asking your carrier.
I don't get the point you're trying to make here. We've lost control because there's a serious vulnerability? We've lost control because Apple can patch the OS?
Well, it's sort of a general thing. We can't even control what runs on our devices, and they run so fast you might not even notice something new running. Also, stopping hackers from getting in remotely is hard for 24/7-connected devices.
Even on desktop machines (Linux or Mac for me), there are processes running that I don't really know what they are doing. The OS is very complex; you could insert another process that sends stuff out, and it would be hard to notice. I was also thinking of Windows 10 sending out who knows what all the time (I don't use Windows, but I think they call it telemetry).
In the past when everything wasn't connected together and the connections were slower this wasn't as much of an issue. Although being connected does allow us to patch quickly and easily, and Apple sees to it you'll be hounded till you update.
It doesn't seem easy to fix. Maybe safer languages will lead to less hackable code.
> Even on desktop machines (Linux or Mac for me), there are processes running that I don't really know what they are doing.
That's been the case pretty much since Windows 2000 (or even 98).
> In the past when everything wasn't connected together and the connections were slower this wasn't as much of an issue
Viruses were really bad even when everything was pretty much airgapped. They were not vectors for state-level attacks only because of cultural elements (you weren't walking with an exploitable beacon in your pocket; there was little value in exploiting what were basically glorified typewriters; and established interests weren't taking this sort of thing particularly seriously outside of the US).
> Maybe safer languages will lead to less hackable code.
JavaScript is fairly safe: it runs in a VM, right? Guess what was used to persist this exploit across reboots...
I don't think this is something that we can "fix" at all. Door locks are ridiculously ineffective and exploitable, but very few people feel the need to use anything different. Similarly, computing devices will always be exploitable one way or another, but people will keep using them; what we can do is limit the attack surface as much as possible, and avoid placing everything online (hello, IoT!) just for the hell of it.
> Even on desktop machines (Linux or Mac for me), there are processes running that I don't really know what they are doing.
I don't run Linux so I can't comment on that one, but surely there are "simple" Linux distributions that don't start countless unrecognizable processes?
Mac is a hopeless case; a veritable plethora of inexplicable processes.
In contrast, I just logged in to my OpenBSD firewall. I was able to easily recognize everything that was running. The OpenBSD startup procedure is very simple to understand. It's easy to know exactly what processes are started and why.
This happened the moment you bought an iPhone. Not that Android is much better: Apple (and to a certain extent, previous feature phone manufacturers) set the stage for treating consumers as too dumb to use their phones as they like, and the rest of the smartphone arena happily followed suit. There's never been a point in time where I was satisfied with the heavy constraints placed on users by smartphone OS makers. And I'm not approaching this from a Stallmanesque, philosophical perspective, but a plain old ease-of-use one.
One upside of this is that a large percentage of devices are up to date. It's quite a contrast to other platforms (mobile or otherwise). Just how benign is big brother though?
I wrote a deep dive that demystifies what I believe is going on in their heads and shared it privately earlier in the year.
By popular request, I'm posting it publicly now. I hope it resonates.