They probably tested their migrator for months on hundreds or thousands of test devices.
If I remember right, it was said at the time that they had included the migrator code in one or more of the releases before the one that actually performed the migration. During those upgrades it ran in a "do as much of the migration as we can without actually committing the changes" mode, so that any errors encountered could be reported back. So they had most of it tested on millions of devices.
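The pattern described above, running the full conversion logic without committing results and collecting failures for telemetry, can be sketched roughly like this. All names here are hypothetical, not Apple's actual code:

```python
# Hypothetical sketch of a "dry run" migration pass: exercise the
# conversion logic on every record, but never write results back.
def dry_run_migration(records, convert):
    errors = []
    for record in records:
        try:
            convert(record)          # do all the work...
        except Exception as exc:     # ...but only collect failures
            errors.append((record, exc))
    return errors                    # report back instead of committing

# Example: converting string sizes to integers, with one bad record.
records = ["100", "250", "oops"]
print(len(dry_run_migration(records, int)))  # 1 failing record
```

The key property is that the input is never mutated, so a failed dry run costs nothing except the report sent home.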
I have a similar problem where the rollover region is not at the actual position of the rendered element but much higher, so the area you have to hover over is above the actual element. Really annoying. It happens a lot on the Twitter and YouTube websites, but also in other places.
Because if you look at OP's comment, they were testing their migrator on real users' devices (in what I presume is a blue-green manner, without explicitly informing them except in small print) before the "official launch". If the beta test hadn't gone well, they would have had to roll back the change, consuming more writes.
I sometimes listen to the All-In podcast. The four VCs recently blasted Apple, saying they don't know why it has so many engineers, or why an iPhone is so expensive. iPhones last longer now and don't need to be upgraded as often. In their eyes the iPhone has become boring.
My reply is that Apple supports close to 2 billion devices. They have to work and never crash. They also need to be upgraded with security patches and new features. The fact that these VCs find them "boring" is a good thing: it means iPhones have become an intrinsic part of their lives.
This article describes one of the many things Apple has to do to make them "boring". Indeed a feat of engineering.
Now, when they replicate that success with Siri, that will be a game changer. Hopefully advances in LLMs will bring that day forward.
After seeing so many failed file system projects at VMware, I was pretty skeptical when I first heard about APFS. The fact that it succeeded with so little noise just blew me away. It shows Apple has some of the best systems engineers in the world today.
Here's some info from gherkin0 back from June of 2016:
> Fun fact: Dominic Giampalo (who wrote the BeOS file system) is on the APFS team. His book "Practical File System Design" is an excellent description of a traditional UNIX file system design. May be out of print now but I think used copies turn up on Amazon.
It looks like he has a PDF up on his website:
http://www.nobius.org/~dbg/practical-file-system-design.pdf
Absolutely. I doubt we'll ever see it, but it'd be really cool to see a blog post or write-up about just how many edge cases they found and had to work around--especially the more obscure ones. Obviously at Apple's scale, something that only hits one out of a million users still has the potential to be a big problem.
As someone who went through a macOS APFS migration, I can certainly say it wasn't smooth.
Maybe the limited capabilities that mobile device APIs give apps and users helped avoid issues: you can't have multiple-partition or weird Time Machine problems on something without partitions or Time Machine.
I had no problems with the macOS upgrade, but then I didn't have a custom partition setup, and Time Machine on an external drive has always worked for me.
Worked on my hackintosh back then. But again, just one partition. I was triple booting but always disconnected the other drives when doing OS upgrades.
Honestly I think that's because Apple cares far more about the iOS devices than it does about Mac OS (and did even in 2017). Mac OS is a second-class citizen for Apple. The main focus is on devices, and then services.
It would be more impressive if I understood how big of a change this actually was. There's very little context in this post.
What was the previous filesystem, and how different was APFS?
If they both operated on the same core principles or underlying code, then the migration might have been trivial. Is it just a new "skin" on the prior FS?
If it's a radically new filesystem based on all new concepts, then that's a much bigger feat of engineering.
What did this new FS accomplish? Was it faster? More reliable? Easier to work with? Added functionality?
It was a massive change, but one they prepared for by having the same migration happen on laptops/desktops.
The old filesystem was HFS+, released in 1998; APFS was released in 2017.
APFS added native encryption, metadata checksums, copy-on-write, snapshots, and logical volume support. The migration wasn't just a "search and replace the name and ship it", but an intensive rewriting of the filesystem metadata during the migration.
Released in 1998 for Mac OS 8.1. That HFS+ worked at all for Mac OS X was a minor miracle; it was absolutely not designed for use on a modern UNIX, and support for some features like deleting in-use files involved some egregious hacks (like temporarily stashing files in invisible directories).
I'm still shocked that they went with HFS+ for so long, for all its shortcomings.
Early Mac OS X did support UFS (not sure which variant, probably an early BSD?) but never fully and eventually removed it. HFS support for backwards compatibility was necessary, but making it the boot FS for so many years did hold the platform back.
There was talk about ZFS at one point, but it never happened, maybe due to licensing. A large variety of FSes is definitely something Linux has over BSD and permissively licensed software. Even ZFS isn't fully permissive, which only leaves HAMMER(2) as the FS with next-gen features and a BSD license.
> I'm still shocked that they went with HFS+ for so long, for all its shortcomings.
At the same time, it's impressive that the basic design of HFS held up as well as it did! HFS was initially introduced in 1985, and HFS+ was a fairly conservative update to support larger volumes (and, later, metadata journaling).
> There was talk about ZFS at one point, but it never happened - maybe due to licensing.
That seems very likely. Apple's experiments with ZFS ended around 2009, right about the same time that Oracle finalized their acquisition of Sun.
At the time ZFS on FreeBSD came with some pretty serious caveats regarding performance and memory usage. It was pretty clearly designed for servers and not really the sort of thing designed to run on your laptop, much less iPhone. Apple probably made the right choice for technical reasons alone.
Since then ZFS has improved and machines have become faster.
Apple wrote an interesting article[0] way back in 2000 about some of the difficulties in integrating Unix and the Mac, especially files and filesystems.
e.g. case sensitivity vs. insensitivity, different path separators (: vs /), lack of file IDs on UFS, resource forks, hard links, and the above-mentioned deleting of open files.
I believe it has checksums on its own metadata, but no checksums on actual user data. So you won't lose a whole folder or volume due to bitrot, but it doesn't provide protection against a flipped bit in one of your photos.
I wonder why they don't do checksums on data. Too complex? Is bitrot not a common problem on modern SSDs? It's also not clear what a user should do after receiving a notification that a scrub found a mismatch. It's not like you can replace the soldered-on SSD... It would still be great for external ones, though.
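To illustrate the metadata-only distinction: a filesystem that checksums its own structures can catch corruption there, but a flipped bit in file contents passes through silently. A toy sketch (not how APFS actually stores its checksums):

```python
import zlib

# Toy illustration: checksum the metadata block but not the data block.
metadata = b"inode 42: size=3, blocks=[7]"
data = b"cat"
meta_sum = zlib.crc32(metadata)  # recorded at write time

# A later read can verify the metadata against its stored checksum,
# so rot in filesystem structures is detected:
assert zlib.crc32(metadata) == meta_sum

# ...but nothing recorded a checksum of the file contents, so a
# flipped bit in the data goes unnoticed:
corrupted = b"cab"
# no check exists here; the read just returns the corrupted bytes
print(corrupted)
```

With data checksums (as in ZFS or btrfs), the second case would also be caught at read or scrub time.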
They claimed it's not needed because Apple hardware is very good; from [1]:
"The APFS engineers I talked to cited strong ECC protection within Apple storage devices. Both flash SSDs and magnetic media HDDs use redundant data to detect and correct errors. The engineers contend that Apple devices basically don’t return bogus data."
I don't think it's a good decision either; obviously you can run systems without checksums (we've been doing so for decades), but "better safe than sorry" seems to be the smart thing. It's pretty cheap to checksum data, which is why all the next-gen filesystems (ZFS, btrfs, bcachefs) do so.
Yeah it's strange and quite disappointing. Even if Apple devices never returned bogus data (doubtful), there's a whole industry of external disks, largely fuelled by Apple's ridiculous SSD prices, that you're putting APFS on and which are often of dubious quality. Then again, the interview is now 8 years old and they must have been working on something since.
HFS+ was an old journaling filesystem. Simplifying a bit: it writes the changes you ask for directly to disk (e.g., overwriting the content of a file), with the catch that it first writes what it's about to do to the journal. The journal mainly helps figure out what was going on if you pull the power mid-write, so the inevitable damage can be minimized rather than leaving you with a completely bricked device.
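The journal-then-write sequence can be sketched like this. It's a heavy simplification (HFS+ journals metadata blocks, not whole files, and all names here are made up):

```python
# Minimal write-ahead journal sketch: record the intent first, then
# apply it, so a crash mid-write leaves enough info to recover.
journal = []
disk = {"file.txt": b"old contents"}

def journaled_write(path, new_data):
    journal.append(("write", path, new_data))  # 1. log the intent
    disk[path] = new_data                      # 2. overwrite in place
    journal.append(("done", path))             # 3. mark complete

def replay(journal, disk):
    # On boot after a crash: re-apply any write that was logged
    # but never marked done, so the disk matches the journal.
    pending = {}
    for entry in journal:
        if entry[0] == "write":
            pending[entry[1]] = entry[2]
        elif entry[0] == "done":
            pending.pop(entry[1], None)
    for path, data in pending.items():
        disk[path] = data

journaled_write("file.txt", b"new contents")
print(disk["file.txt"])  # b'new contents'
```

If the power dies between steps 1 and 3, `replay` finds the intent in the journal and finishes the write instead of leaving the file half-overwritten.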
APFS is a copy-on-write filesystem. The entire disk is organized as a tree starting at the root. Every time you change a file, no matter how small the change is, you do it by writing a new copy of the file, creating a new tree of the whole disk with your new file in it, and finally updating the root pointer to point to the new tree. Old copies are only reclaimed later, when they are garbage collected after no longer being part of any tree for a while.
This means that if you pull the power, the filesystem either looks exactly like it did before because the root pointer was not changed or has all your written changes exactly as you described them. The caveat is that Apple chose not to protect against certain types of bitrot.
It also means that you can take snapshots by just saving a reference to a tree, and roll back to or inspect that snapshot later, without having to pay for a copy of the files - creating the snapshot is instantaneous, and the consumed storage is just the sum of unique files in all trees.
This can also be used to have multiple roots (e.g., one for the current OS version, one for the new update being prepared) and subvolumes (e.g., one for the OS, one for the user, one for each app, whatever), again only paying for the sum of unique files.
This may sound expensive over allowing small writes to be done directly, but SSDs can basically only be written in whole 4k blocks, and the SSD controller moves your data around whenever you write to avoid wearing out the same block anyway. As such, there is a lower bound for I/O overhead, and there's a bunch of tricks the OS can pull to make the CoW overhead virtually disappear, like write caches/coalescing.
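The update scheme described above, build a new tree that shares unchanged entries and then swap the root pointer last, can be sketched with a tiny in-memory model. This is purely illustrative, not APFS's actual on-disk layout:

```python
# Toy copy-on-write "filesystem": each version is an immutable dict,
# and the root pointer names the current version. An update writes a
# new tree (sharing unchanged files), then swaps the pointer last.
versions = {}
root = None

def commit(base, path, data):
    new_tree = dict(versions.get(base, {}))  # share unchanged files
    new_tree[path] = data
    version_id = len(versions)
    versions[version_id] = new_tree          # write the new tree first
    return version_id                        # caller swaps root last

root = commit(root, "a.txt", b"v1")
snapshot = root                  # a snapshot is just a saved reference
root = commit(root, "a.txt", b"v2")

print(versions[snapshot]["a.txt"])  # b'v1': the old tree is untouched
print(versions[root]["a.txt"])      # b'v2'
```

Pulling the power before the final pointer swap leaves `root` at the old, fully consistent tree, which is exactly the crash-safety property described above; and the snapshot costs nothing beyond the blocks unique to it.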
Yes, it's radically different. The difference is similar to that between ext4 and btrfs or ZFS.
It replaces HFS+, which itself predates macOS/OS X and over the course of its life got a lot of Unix-specific modern features bolted on in the form of various clever/scary hacks (e.g. hard links).
This seems like goalpost moving. The question was whether the migration was complicated, and the answer is yes.
Whether or not APFS is revolutionary or the best option is not the discussion. The core fact is that Apple needed to move away from HFS+ and they decided to move to their own FS, which brought in a bunch of changes that are standard in other OSs.
And they did it in a smooth and fairly uneventful way, so that's remarkable.
No smartphone these days gives you access at the mass storage level anymore (because it causes issues for concurrent access from the phone and computer, because there's often encryption etc.), so the underlying file system in this scenario is completely irrelevant and has no bearing on you being able to access the phone's data on Linux.
Android uses MTP or PTP (can't remember which) for that type of access, while iOS uses something proprietary, but there's a Linux implementation that worked reasonably well for me last time I had a Linux laptop: https://libimobiledevice.org/
You're way offtopic just because you wanted to be all dumb and tribal. Nobody claimed APFS was better than your favorite filesystem, simmer down.
But also, I wanted to let you know that even in your tribalism you're being stupid. APFS is not a clone of ZFS. Not even close! It has a fraction of ZFS's features. Why? Because it doesn't need them. Why? Because Apple is tightly focused on client devices, and those do not need the huge list of server-oriented ZFS features, or the overhead that comes with them. And that overhead is considerable! People are often advised that 8GB RAM is the bare minimum for a server running ZFS filesystems, and much more RAM is desirable for performance. Apple deployed APFS to iPhones with as little as 1GB RAM. ZFS was simply not an option.
APFS is its own thing. Deal with it.
Also, your whole schtick here sucks. Only one FS gets to be "FIRST!" at anything. Other filesystems which implement that feature are not necessarily "clones". To actually make the judgement that cloning occurred, you'd have to get real technical and look at both the algorithms and the on-disk layout, and if you actually did this there is no way in a million years you could come away thinking APFS is anything other than original work.
Ideas? Well duh, people build on other people's ideas all the time. If idea stealing was completely forbidden, once Gutenberg invented the printing press, nobody else could have built them, and where would humanity be now?
It’s a completely different file system. It supports things like zero-cost copying, snapshots, full-disk encryption, and multiple volumes within a single partition, and is generally better suited to modern SSDs.
I think it was also helped by the fact that iPhones and iPads are basically consoles. It's significantly easier to predict the state of the filesystem on those than on general purpose PCs.
A lot fewer devices than Apple, but with changing a device's entire operating system a lot more can go wrong. I wasn't on the team at the time, but maybe someone else can chime in with more details.
Windows also had FAT16 -> FAT32 and FAT32 -> NTFS converters, back in the 90s. But nobody produced blog posts praising Microsoft for them; it was just a Windows feature.
For additional context, APFS is only supported on SSDs. My iMac didn't have an SSD, but it did have an upgraded HDD that I chose while buying the machine from Apple. The OS upgrade was supposed to determine that APFS wasn't applicable; instead it trashed the bootloader and wouldn't proceed (something about not finding the unmounted partition? IDK, it was several years ago).
I hadn't been keeping proper backups either, but was able to manually backup the most important stuff using Target Disk Mode. Then, to Apple's credit, I eventually did a full reinstallation of the broken macOS, which amazingly kept my files intact.
EDIT: I originally said the migration "bricked my computer", but greedo is correct that it wasn't permanent damage.
I guess not, but it did render it inoperable for several months. Someone less technical wouldn't have been able to recover it at all (nor could the Apple Geniuses over my 2-3 visits there).
Can you please not post in the flamewar style like this? It's not what this site is for, and destroys what it is for, so we have to ban accounts that do it repeatedly.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to them, that would be good. You can still make your substantive points thoughtfully. But please avoid generic tangents (e.g. $BigCo-boo-or-yay) in addition to name-calling, fulmination, and the other things the guidelines ask you not to do.