If you have a Mac, then "Image Transfer" is what you are looking for. It lets you simply sync photos and images as their raw files (original quality, if so configured) to a directory.
From there you can do whatever you want with them (put them on multiple separate disks, upload them to galleries, strip them down to JPG and distribute them, etc.).
You do miss out on face detection, but something like 'locations' can easily be recreated from the GPS data in the EXIF: just loop through your files, rip the details out of the EXIF, store them in a DB and put a little interface on top, et voila. A rough sketch of that loop follows below.
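This is only a sketch of the idea, using Pillow and SQLite; the `photos/` directory and `photos.db` filename are placeholders I made up, and it assumes a recent Pillow where the EXIF rationals convert cleanly to float:

```python
#!/usr/bin/env python3
"""Sketch: pull GPS coordinates out of JPEG EXIF data and store them in SQLite."""
import sqlite3
from pathlib import Path

from PIL import Image
from PIL.ExifTags import GPSTAGS

def to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals into a signed decimal degree."""
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ('S', 'W') else deg

def gps_of(path):
    """Return (lat, lon) from a file's EXIF, or None if no GPS data is present."""
    exif = Image.open(path)._getexif() or {}
    gps = {GPSTAGS.get(k, k): v for k, v in (exif.get(34853) or {}).items()}  # 34853 = GPSInfo
    try:
        return (to_degrees(gps['GPSLatitude'], gps['GPSLatitudeRef']),
                to_degrees(gps['GPSLongitude'], gps['GPSLongitudeRef']))
    except KeyError:
        return None

db = sqlite3.connect('photos.db')  # placeholder filename
db.execute('CREATE TABLE IF NOT EXISTS locations (path TEXT PRIMARY KEY, lat REAL, lon REAL)')
for jpg in Path('photos').rglob('*.jpg'):  # placeholder directory; adjust pattern for .JPG etc.
    coords = gps_of(jpg)
    if coords:
        db.execute('INSERT OR REPLACE INTO locations VALUES (?, ?, ?)', (str(jpg), *coords))
db.commit()
```

The 'little interface' is then just a map page querying that table.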
But yes, in the end, I would love to be able to say "store my shit at X" where X is my private colocated box, while still using Apple's pretty cool photo management services.
Till then (which will never come ;) ) I just have the small 3 CHF/month plan, which gives 200 GiB; good enough for 'the best photos of the last few months'.
PS: Indeed, one has to plug the phone into the computer and then do the copy/sync, but that also gives one a moment to run iTunes for a full backup; always a good idea, just in case stuff breaks/goes missing/etc.
> If you have a Mac, then "Image Transfer" is what you are looking for. It lets you simply sync photos and images as their raw files (original quality, if so configured) to a directory.
The Image Capture application (probably previously named Image Transfer) will not preserve Albums and Favorites as far as I can tell.
Anyway, as mentioned, after this unfortunate event I bought the third-party application that speaks the backup protocol and is able to transfer the data into folders based on the albums and favorites I have on the phone. But I appreciate the comment nonetheless.
Submitting to HN, as I've noticed a few folks here have asked about this too.
Primarily pushed to GitHub for historical reasons, and to show how relatively simple the code was that forwarded all the packets on the SixXS PoPs... ;)
See README.md for a few more details about the system; though the code is light on comments, it should be relatively straightforward.
DNSSEC is primarily, like TLS, about message integrity. TLS adds encryption of content so that sniffers can't read along. With DNSSEC everybody still reads along.
DNSCurve is not really deployed, thus alternatives will keep springing up.
DNSCurve is also not end-to-end until all authoritative servers support it.
Google has their DNS over HTTPS thing btw, which is scary, but it is another alternative if you want to hide what you're doing (except from the server you ask the questions to, though you can do that from Tor ;) ).
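For reference, a minimal sketch of a lookup against Google's DNS-over-HTTPS JSON endpoint (the name being resolved is just an example):

```python
#!/usr/bin/env python3
"""Sketch: resolve a name via Google's DNS-over-HTTPS JSON API."""
import requests

resp = requests.get('https://dns.google/resolve',
                    params={'name': 'example.com', 'type': 'AAAA'},
                    timeout=5)
resp.raise_for_status()
for answer in resp.json().get('Answer', []):
    # 'data' holds the record contents, e.g. the IPv6 address for an AAAA record
    print(answer['name'], answer['type'], answer['data'])
```

Only the HTTPS endpoint sees the question, which is exactly the 'except the server you ask' caveat above.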
The best alternative you currently have, and will have for a long time, is a VPN or Tor though: get a tunnel to a host/net you trust not to betray the content of your connections (be that through logging or network analysis).
Passive DNS will always exist (it happens at the recursor, hence DNSCurve does not help). And due to the caching and scalability properties of DNS, it will never be encrypted internally, otherwise those two properties are gone. The moment they are gone it is not DNS anymore; maybe that is a good thing, and maybe it is even possible in today's world, where bandwidth is less of an issue and most people use Google to search for things anyway.
Heck, Google could just include the IP addresses of the servers in the HTTPS response; that way one only needs to know where Google lives, and the rest gets transported over HTTPS....
Long live the fact that the Internet is not only the web, though. And I think there is a great future ahead for .onion-like sites once their usability and accessibility improve; currently it is mostly like the BBS days: you need to know the correct number, while DNS is human readable and Google is what most people use to find sites.
TLS is not "primarily about message integrity". To see why this isn't true, observe the targets of most (all?) of the recent TLS attacks: recovery of session tokens.
You do trust the origin site to send you to the correct next site right? :)
The big problem here is that you'll still need DNS in a lot of cases: webpages have long since stopped being single-origin resources (most have to load all those tracking domains), every webpage would have to adopt that mechanism, and it only works for the web; the Internet is more than that.
I am looking forward to "DNS" pointing to more than just IPv4 and IPv6 though like in the above silly example ;)
Being one of those "DNS people" (having given a few talks etc. on the subject), I have also been telling people that I cannot privately deploy DNSSEC, simply because the cost of failure is too high.
Currently, with BIND and Knot, one can set up automatic signing of zones, which makes things a bit easier, but it relies on setting up a master-slave relationship. My current deployment of authoritative DNS servers is simply a bunch of master nsd's that get rsynced with their configs and reloaded (after a config test :). If one of them breaks, the others keep on running happily. In the master-slave case though, if the master (the only thing holding the current keys) breaks (box dead, IP down, routing issue), after a while the slaves no longer know what to do either....
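For comparison, a rough sketch of what that push-style deployment boils down to, assuming SSH access and NSD 4's nsd-checkconf / nsd-control on the targets; hostnames and paths are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: push config + zones to independent master NSDs; reload only on a clean config test."""
import subprocess
import sys

SERVERS = ['ns1.example.net', 'ns2.example.net', 'ns3.example.net']  # placeholder hostnames
SRC = '/srv/dns/'   # local directory holding nsd.conf and zone files (placeholder)
DST = '/etc/nsd/'   # target directory on the servers (placeholder)

for host in SERVERS:
    try:
        # Copy config and zone files over; each server is a full master on its own
        subprocess.run(['rsync', '-az', '--delete', SRC, f'{host}:{DST}'], check=True)
        # Only apply when the config passes the syntax check, so a broken push
        # leaves the still-running old configuration untouched
        subprocess.run(['ssh', host, 'nsd-checkconf', f'{DST}nsd.conf'], check=True)
        subprocess.run(['ssh', host, 'nsd-control', 'reconfig'], check=True)  # apply config changes
        subprocess.run(['ssh', host, 'nsd-control', 'reload'], check=True)    # re-read changed zones
    except subprocess.CalledProcessError as err:
        # An unreachable or broken box does not stop the others from being updated
        print(f'{host}: {err}', file=sys.stderr)
```

An unreachable box just misses one push while the rest keep serving; that is exactly the property the signing master takes away.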
So with DNSSEC, after all your hard work, you go on vacation for a bit.... you come back and everything is gone, as your domain no longer resolves properly: no valid keys are there, nothing properly signed, nothing properly rotated...
The cost of failure is too big; even if you run a 24/7 NOC, they will only notice problems once those problems hit you, and they have to monitor DNSSEC validation specifically to notice them, instead of hearing about it through Twitter.
From a client perspective the failure mode is a whole lot worse: when DNSSEC validation fails, the answer is a flat-out denial of service. At least with TLS, when a cert is invalid the client gets a chance to peek at it and go 'meh, looks okay to me' (even though that is a 'badidea' for Chrome users ;) )
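To make that failure mode concrete, a small sketch using dnspython against a validating public resolver; dnssec-failed.org is a commonly used deliberately-broken test zone:

```python
#!/usr/bin/env python3
"""Sketch: a validating resolver answers SERVFAIL for a broken DNSSEC zone,
i.e. the client gets a flat-out denial of service, not a warning it can override."""
import dns.flags
import dns.message
import dns.query
import dns.rcode

RESOLVER = '8.8.8.8'          # a validating public resolver
NAME = 'dnssec-failed.org'    # deliberately broken DNSSEC test zone

# Normal query: validation fails, so the resolver returns SERVFAIL
query = dns.message.make_query(NAME, 'A')
print('validating:', dns.rcode.to_text(dns.query.udp(query, RESOLVER, timeout=5).rcode()))

# Same query with the CD (checking disabled) bit set: the data comes back after all
query = dns.message.make_query(NAME, 'A')
query.flags |= dns.flags.CD
print('checking disabled:', dns.rcode.to_text(dns.query.udp(query, RESOLVER, timeout=5).rcode()))
```

Typically the first prints SERVFAIL and the second NOERROR; there is no 'proceed anyway' button in that first answer.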
The problem with DNSSEC though is that, given how the system works and with delegation in mind, it is hard to come up with something 'better' than NSEC3: you want to prevent people from enumerating your whole DNS zone, but you also want to be able to delegate subdomains to other folks.
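For those who have not looked at it, a minimal sketch of the NSEC3 owner-name hashing from RFC 5155 (iterated SHA-1 over the wire-format name plus a salt); the salt and iteration count below are made up:

```python
#!/usr/bin/env python3
"""Sketch: the NSEC3 owner-name hash from RFC 5155 (iterated, salted SHA-1)."""
import base64
import hashlib

# base32hex (RFC 4648) uses the same bit groups as base32, only a different alphabet
_B32_TO_B32HEX = str.maketrans('ABCDEFGHIJKLMNOPQRSTUVWXYZ234567',
                               '0123456789ABCDEFGHIJKLMNOPQRSTUV')

def wire_name(name):
    """DNS wire format: lowercased, length-prefixed labels, terminated by a zero byte."""
    out = b''
    for label in name.lower().rstrip('.').split('.'):
        out += bytes([len(label)]) + label.encode('ascii')
    return out + b'\x00'

def nsec3_hash(name, salt, iterations):
    """IH(salt, x, 0) = H(x || salt); IH(salt, x, k) = H(IH(salt, x, k-1) || salt)."""
    digest = hashlib.sha1(wire_name(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).decode('ascii').translate(_B32_TO_B32HEX).lower()

# Made-up parameters: salt 0xDEAD and 10 extra iterations
print(nsec3_hash('www.example.com', bytes.fromhex('dead'), 10))
```

The zone only publishes these hashes (plus 'next hashed name' gaps), so a walker can no longer simply enumerate the real names, yet delegations still hash and chain like everything else.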
And that is also what makes the root special: the keys have to be deployed in resolvers world-wide, and everybody needs them before they can be used (similar to TLS root certs). The same goes for crypto options: everything needs to support them before you can upgrade. For the web that is 'solved' by aggressive browser updates, though as can be seen on SSL Labs, browsers are not the only clients, and people primarily update their Chrome and Firefox; there is a long tail beyond that, which is why people do not normally configure the Mozilla 'Modern' TLS configuration, as it breaks those other, not-updated, clients.
For resolvers, getting new crypto in there, let alone new keys, is even worse: they are typically embedded in the OS (macOS has mDNSResponder, which has had bunches of problems over the years; Windows has its own; on Linux it depends on the day which one you get). And worse still: large swaths of people rarely run upgrades on those systems...
Plus, to add more pain: there are all these people running Docker and other container images that never ever get updates. Oh, and then there is this magic called Android; congrats on 15% deployment for a one-year-old OS....
To finalize my rant: unless somebody figures out an easy way to 'upgrade the world' in a relatively short time frame (~2 to 3 months), we'll always be stuck with older software and configs (keys, etc.).
And older software means: broken implementations that do not rotate keys properly, that do not have the latest keys, that do not have the latest TLS certs, that do not have the latest security fixes.
And thus also: even if somebody replaced or fixed DNSSEC, there will always be clients that do not play along...
It is fun to report those things to Google Project Zero and then find that people on that side obviously do not understand that security bypasses are... well... security issues.
Full submission reproduced below, just in case they radar-disappear the item... duping items is apparently what Project Zero does, so that they disappear from Google results...
---
PREAMBLE
Thank you for an amazingly solid looking ChromeOS. Happy that I picked up a nice little Acer CB3-111; I thought about plonking GalliumOS/QubesOS or heck OpenBSD on it, but with the TPM model and the disk wiping, I am not going to.
Just wanted to note this discovery so that you are aware of it and hopefully can address the problem as it would improve the status quo. Keep up the good work!
Greets,
Jeroen Massar <jeroen@massar.ch>
VULNERABILITY DETAILS
By disabling Wireless on the login screen, or simply not being connected, only a username and password are required to log in to ChromeOS, instead of the otherwise normally required 2FA token.
This design might be because some of the "Second Factors" (SMS/Voice) rely on network connectivity to work, and/or because token details are not cached locally?
But for FIDO U2F (e.g. Yubikeys, aka "Security Key"[1]) and TOTP, no connectivity is technically needed (outside of a reasonable time sync); see the sketch below. The ChromeOS host must have cached the authentication tokens/details though, to know that they exist.
The article at [2] even mentions "No connection, no problem... It even works when your device has no phone or data connectivity."
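To illustrate why TOTP works offline, a minimal RFC 6238 sketch; the shared secret below is made up, and only the secret plus the local clock are used:

```python
#!/usr/bin/env python3
"""Sketch: TOTP (RFC 6238) computed fully offline from a cached shared secret."""
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    """Return the current TOTP code; needs only the secret and the local clock."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period
    mac = hmac.new(key, struct.pack('>Q', counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack('>I', mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return f'{code:0{digits}d}'

# Made-up secret purely for illustration
print(totp('JBSWY3DPEHPK3PXP'))
```

Nothing in there touches the network, so 'no connectivity' is not a reason to skip the check, as long as the host has the secret (or an equivalent cached challenge) available.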
First the normal edition:
- Take a ChromeOS based Chromebook (tested with version mentioned above)
- Have a "Security Key" (eg Yubikeo NEO etc) enabled on the Google Account as one of the 2FA methods.
- Have Wireless enabled
- Login with username, then enter password, then answer the FIDO U2F ("Security Key") token challenge
All good as it should be.
Now the bad edition:
- Logout & shutdown the machine
- Turn it on
- Disconnect the wireless from the menu (or just make connectivity otherwise unavailable)
- Login with username, then password
- Do NOT get a question about Second Factors, just see a ~5 second "Please wait..." that disappears
- Voila, logged in.
That is BAD, as you just logged in without 2FA while that is configured on the account.
Now the extra fun part:
- Turn on wireless
- Log in to Gmail/Google+ etc., and all your credentials are there, as the machine is trusted and cookies etc. are cached.
And just in case (we are now 'online' / wireless is active):
- Logout (no shutdown/reboot)
- Login with username, password.... and indeed it asks for 2FA now.
Thus showing that toggling wireless affects the requirement for 2FA.... and that is bad.
EXPECTED SITUATION
- Being asked for a Second Factor even though one is not "online".
As it is now, when you are walking through, say, an airport with no connectivity, and even with the token at home, just the username and password would be sufficient to log in.
SIDE NOTE
For the Google Account (jeroen@massar.ch) I have configured:
- "strong" password
and as Second Factors:
- FIDO U2F: Two separate Yubikeys configured
- TOTP ("Google Authenticator") configured
- SMS/Voice verification to cellphone
- Backup codes on a piece of paper in a secure place.
Normally, when connected to The Internet(tm), one will need the username (email), password and one of the Second Factors. But disconnect, and none of the Second Factors are needed anymore.
SIDE NOTE2
The Google Account password changer considers "GoogleChrome" a "strong" password.... it might be worth checking against a dictionary so that such simple things cannot be used, especially as 2FA can be bypassed this easily.....
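A minimal sketch of the kind of dictionary check I mean; the wordlist filename is a placeholder:

```python
#!/usr/bin/env python3
"""Sketch: flag passwords that are just dictionary words or product names glued together."""

def load_wordlist(path='wordlist.txt'):  # placeholder path to a common-words/brands list
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}

def too_simple(password, words):
    """True if the password is one dictionary word, or two of them concatenated."""
    p = password.lower()
    return p in words or any(p[:i] in words and p[i:] in words for i in range(1, len(p)))

words = load_wordlist()
print(too_simple('GoogleChrome', words))  # True when 'google' and 'chrome' are in the list
```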
Likely they’ll just ban your account for some "vulnerability abuse" or "hacking", even though they themselves said just days ago "this is not a vulnerability".
I’ve seen it before, reported a vulnerability to Google, got a "not a vulnerability, not eligible for anything" back, published the PoC on my website, and Google subsequently blacklisted my domain, IP range, and everything.
While login happens with a Google account, guest logins are possible.
The advice typically given is to have a separate account for the device 'ownership'; that way, even if your main account gets blocked, you still have access to the device.
As there is a Linux kernel underneath, basically anything could in theory work; but non-standard options require 'developer mode' operation which kinda destroys the security model of ChromeOS.
Android apps (on some models of the laptops) might work though.
Having support for WireGuard would be pretty neat, but AFAIK that is not possible yet.
OpenVPN works quite fine for my use cases up to now.
Neat tool, and glad it's here for others to see. It doesn't look like it matches the specific kind of ovpn setup I'm using (for example, I don't have a client key/cert).
When I last looked at the situation, I couldn't find an ONC equivalent for some of the options in the .ovpn file. It might be that the docs, or even the functionality, have gotten better. (I remember looking at some PDF, for example, and now there's https://chromium.googlesource.com/chromium/src/+/master/comp...) It's also possible I just missed something last time. Either way, I should take another look.
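For a rough idea of what such a conversion involves, a sketch mapping a couple of common .ovpn directives onto the ONC structure as I understand it from the spec; the field names should be double-checked against the ONC documentation, and the filename, defaults and GUID are placeholders:

```python
#!/usr/bin/env python3
"""Rough sketch: map a few common .ovpn directives onto an ONC structure.

Field names follow my reading of the ONC spec and should be verified against
the current documentation; filename and defaults are placeholders.
"""
import json
import uuid

def parse_ovpn(path):
    """Collect simple 'key value...' directives; inline <blocks> are ignored."""
    opts = {}
    with open(path) as fh:
        for line in fh:
            line = line.split('#', 1)[0].strip()
            if line and not line.startswith('<'):
                key, *values = line.split()
                opts[key] = values
    return opts

opts = parse_ovpn('client.ovpn')                      # placeholder filename
remote = opts.get('remote', ['vpn.example.net'])      # placeholder default host

onc = {
    'Type': 'UnencryptedConfiguration',
    'NetworkConfigurations': [{
        'GUID': str(uuid.uuid4()),
        'Name': 'Imported OpenVPN',
        'Type': 'VPN',
        'VPN': {
            'Type': 'OpenVPN',
            'Host': remote[0],
            'OpenVPN': {
                'Port': int(remote[1]) if len(remote) > 1 else 1194,
                'Proto': remote[2] if len(remote) > 2 else 'udp',
                'CompLZO': 'true' if 'comp-lzo' in opts else 'false',
            },
        },
    }],
}
print(json.dumps(onc, indent=2))
```

It is the less common directives beyond these basics where I could not find an ONC counterpart.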