>It allows servers to integrate with the strong authenticators now built into devices, like Windows Hello or Apple’s Touch ID.
The problem I have with this is that some consumers won't understand the degree to which all of their access depends on their MS/Apple account. Consider the hypothetical person using this for all their access, smacking their finger down on the reader habitually for every pop-up. OK, now they lose their phone. Do they remember their Apple password? Do they know how to recover their account? Are they locked out of their digital lives now? For some percentage of people the answer would seem to be yes.
For example, the Apple account recovery process [1] seems particularly vulnerable to scenarios where your devices and wallet are lost together, so don't get your bag stolen while traveling if you don't remember your Apple password.
I agree that this is an important next step in making WebAuthn accessible enough to supplant passwords. Note that once you're recovering your password, you've already lost access to your end-to-end encrypted data like Keychain, and that's a good thing. It's still a solvable problem, and there are three things that I'd like to see in this area.
a) The ability to share passkeys across vendors, including the ability to implement a "sync fabric" as some folks in the WebAuthn working group have called it, so it's interoperable beyond the major vendors.
b) For these vendors to strengthen their own login experience. Apple only allows their own TOTP implementation and SMS fallback to authenticate to iCloud. I'd like to use WebAuthn exclusively here, so I could back up access to my now-precious Keychain, which holds all my FIDO credentials, with a YubiKey.
c) A better story about backing up security keys. Implementing a) would give us that. Devices that can be initialized with a given seed, like some common hardware crypto wallets, would give us that, albeit not without introducing changes to the threat model -- you have to store the seed and input it somehow -- and https://www.yubico.com/blog/yubico-proposes-webauthn-protoco... would give us that as well.
The external dependency, needing organizations to manage implementations of this 'sync fabric', makes the whole thing quite tenuous and subject to political manoeuvring and the other unforeseen factors that come with maintaining hosted solutions.
Based on past observation of large tech companies, I just cannot see this happening:
> so it's interoperable beyond the major vendors.
(though of course time will tell.) Instead I only see this good-faith initiative being turned into a tool to further promote user lock-in to respective ecosystems and platforms.
I don't disagree that it will be hard, but I think that ultimately all that is needed for this to be possible is a standard TPM API that lets you export key material wrapped in a public key (corresponding to another TPM, presumably) only if it's signed by the TPM itself. This would let implementers build something equivalent to Apple's circle of trust (https://support.apple.com/guide/security/secure-keychain-syn...), and use the new API to share 'Passkeys' between devices.
Whether having an open syncing fabric is enough for vendors to want to interoperate with it, I don't know, but if they ship TPM-conformant hardware, you as a consumer would have the option to use either fabric.
I glossed over a lot of details and the implementation might not end up looking like that, but I believe something similar would be sufficient to kickstart the effort.
> The ability to share passkeys across vendors, including the ability to implement a "sync fabric" as some folks in the WebAuthn working group have called it, so it's interoperable beyond the major vendors.
This is a bit of a new challenge, because websites that consume authentication sometimes need to be able to reason about that authentication's strengths and risks. The models we use for this today, built around countering risks through multiple factors, are not set up to map to these new credentials, which are abstracted by software and don't strongly represent any single factor.
It may be better to have third-party sync fabrics that are cross-platform, like 1Password or LastPass or Bitwarden, and to indicate this so that the quality of the authentication can be reasoned about.
This is already true. iPhones, and Macs for the past few years, are so integrated with iCloud that losing access to your Apple ID is going to give you a very, very bad day. Apple clearly recognizes this with their work on physically pairing devices via iCloud.
WebAuthn is, on the whole, a very good thing. Cross-browser authentication that gets lifted up to a biometric or token level will eliminate the vast, vast majority of customer support load and friction in creating accounts. Additionally, it reduces the dependence on OAuth2, having to "register" applications, and requiring customers to remember "did I use Google to sign in here? Did I use Facebook?" etc.
A sane password manager uses a local database which you can't be banned from. You can easily make a backup of that database to a USB drive and put it in a safe. You can't make a backup of your iCloud account and be sure that Apple won't ban you because you're living in a place that the US does not like.
I'm sure the user who remembers to back up their password manager will also remember their Apple (or whatever) password. Parent was talking about failure cases of forgetting your "master password" - that's problematic regardless of the tech.
> I'm sure the user who remembers to back up their password manager will also remember their Apple (or whatever) password. Parent was talking about failure cases of forgetting your "master password" - that's problematic regardless of the tech.
An Apple password is not enough to recover access; Apple usually requires two-factor auth.
Does the service have access to your Apple ID email? I presume that it does, which means that theoretically I should be able to restore access by sending a password reset link to that email.
That's presuming the user didn't use an iCloud email for their iCloud account, of course.
Suggestion for https://webauthn.io/ - right now it has a form front and center which asks me to select "Attestation Type" (from "None", "Indirect" or "Direct") and "Authenticator Type" (from "Unspecified", "Cross Platform" or "Platform (TPM)").
I'm a big nerd, and I had absolutely no idea what to do when faced with those choices.
I strongly suggest dropping those options from that initial demo and picking the most likely defaults instead. You can tuck them away in the "Advanced settings" section.
1. Website X (email, DMV, etc.) wants to log you in.
2. It accepts only Apple, Microsoft, and Google as brokers. With direct attestation! Because otherwise there's no way to prevent spam. Ha!
3. You are redirected to these sites, which will require a DRM plugin to run native code behind your browser and check the crypto module in your device (whatever is proprietary for Apple or Google phones, or a TPM device on Windows and, if you are very very very lucky, Linux).
4. Now you are redirected back to site X.
How does attestation prevent spam? By leaking your identity via side channels:
"""
Generally speaking, attestation keys have associated attestation certificates, and those certificates chain to a root certificate that the service trusts. This is how the service establishes its trust in the authenticator’s attestation key.
"""
Of course the pitch sells you misleading promises about device IDs ("must identify the model, not the serial number", etc.), but none of that is actually part of the spec, and even if it were included in FIDO3, guess which parts Apple and Google will screw up or ignore?
This article reads like every other article in this space. Heavy on the cryptography, light on everyday business problems.
Okay, so Microsoft is listed as a partner.
I have an ASP.NET web site and an Azure tenant.
How do I make it use WebAuthn?
Then… what will the help desk do when a user calls up because they can't authenticate?
A user signed up on their phone and sat down at their corporate PC. How can they use the credential stored in their phone to log on to the site via the PC?
Is there a way to tell, from JavaScript, what kind of authentication the user is going to be doing?
Do I have to tell my users "Login with webauthn" (which none of them will ever do), or can I say "Login with FaceID" by detecting that they are on an iPhone with FaceID?
I tested on webauthn.io using a MacBook with TouchID. On Safari it uses TouchID, whereas on Firefox it prompts me to insert a security key. My users think security keys are expensive and scary, but they really like TouchID. How do I only show the option on Safari when there is TouchID, but not on Firefox, which only supports security keys? Am I back to UA sniffing and building my own lookup tables of available features like back in 2005?
I don't understand why authentication usually requires you to type in some 6-digit number from your phone. From an ideal user experience point of view, why not just pop up a dialog on your phone, wait 1 second (to prevent accidental taps), show "decline" or "approve" options, and that triggers the authentication to proceed? This seems like an experience that Apple would design.
Even better, use a thumbprint to authorize on the phone, to add one more layer of security. Then you hit the trifecta of verifying 1) something you know (the password entered on the website), 2) something you own (your phone), and 3) something about you (your fingerprint).
Push notification 2FA certainly has some UX benefits over manually entering a TOTP, but there are documented cases of attackers gaining access by triggering the 2nd factor push and a user dutifully pressing "approve". Office365 is an especially bad vector for this risk as it prompts for auth throughout the day at seemingly random intervals while using o365 services. Users are trained to hit accept to keep going even if there isn't a password entry dialog that obviously triggered the 2nd factor ask.
WebAuthn makes this a moot point, though. All the auth is handled under the hood. There is no password or TOTP code to enter, and yet in the right setup it can be 2FA with minimal user interaction. The keys are stored resident on your device (something you have) and there is interaction to unlock them (a fingerprint or face, something you are, or a PIN, something you know). Best of all, it's unphishable, since the keys are unique per domain, so lookalike domains won't work.
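That per-domain scoping is small enough to sketch. The toy model below is made up for illustration (real authenticators store key pairs and sign challenges, not string handles), but it shows why a lookalike domain can't even locate the right credential: the browser, not the page, supplies the rpId from the true origin.

```python
import hashlib

# Toy model of per-domain credential scoping. The authenticator only
# ever looks up credentials under the hash of the rpId the browser
# derived from the real origin, so a phishing domain finds nothing.
class ToyAuthenticator:
    def __init__(self):
        self._credentials = {}  # sha256(rpId) -> credential

    def register(self, rp_id: str) -> None:
        rp_hash = hashlib.sha256(rp_id.encode()).digest()
        self._credentials[rp_hash] = "credential-for-" + rp_id

    def get_credential(self, rp_id: str):
        rp_hash = hashlib.sha256(rp_id.encode()).digest()
        # A lookalike domain hashes differently; there is no shared
        # secret the user could even be tricked into revealing.
        return self._credentials.get(rp_hash)

authenticator = ToyAuthenticator()
authenticator.register("example.com")
assert authenticator.get_credential("example.com") is not None
assert authenticator.get_credential("examp1e.com") is None  # phishing domain fails
```

Contrast this with TOTP, where the 6-digit code is valid anywhere the user can be persuaded to type it.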
Google actually has this implemented for signing into Google with desktop Chrome and Android, but I'm not sure it's standardized yet. Ideally Google will make this mechanism usable with all WebAuthn-supporting sites.
(It seems like there are two different forms of this that Google has implemented. One form is done simply: when you log into Google, Google tells your phone to show a prompt, but this form is still phishable, because if you're using a phishing site while an attacker is proxying your login, they could still trigger it. The other form involves desktop Chrome talking over Bluetooth to your phone to verify what domain you're looking at. This method is immune to phishing domains like other WebAuthn authentication methods, so presumably this is what they'd try to standardize. It does involve multiple moving parts, so it's not too surprising it's not standardized yet.)
> From an ideal user experience point of view, why not just pop up a dialog on your phone, wait 1 second (to prevent accidental taps), show "decline" or "approve" options, and that triggers the authentication to proceed?
Because it's insecure. Because you as a user don't know which login attempt the prompt is for. This is especially bad when combined with applications with persistent connections that occasionally decide they need to re-up their credentials. It allows for attacks where you just spam someone with login requests until they either misclick or just get fed up.
Seems to me the MFA app could, with the approve/deny prompt, display the application the request is for.
If you delay the request of the MFA until after the password has been verified, then even a single "unexpected" MFA would be an indicator of the password having been compromised.
… and if it's insecure, well … MS is using that flow.
Simply displaying the application is insufficient; the spamming issue would remain, as attackers would spam the most common app that asks for auth at random intervals. An actual fix would involve displaying a randomly generated sequence in the app and in the notification and training users to check, but there would still be plenty of people who would just say yes without thinking.
MS has that flow as an option and it can be disabled. In my job life I've already heard from regulators who want it off.
The actual fix is to move to webauthn where the user experience is excellent and the security is much stronger than any password flow could ever be no matter what stuff you pile on top.
For now it is mostly just that "pull-based" TOTP is cheaper/easier and works in more "offline" or "partially-disconnected" scenarios. Your phone and the website only have to directly "communicate" once: that QR code to bootstrap the secret key. After that all of the math is independent: the math to generate codes is done entirely on the phone and the math to verify it is done entirely on the website.
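That independent math is small enough to sketch in full. The following is a minimal stdlib implementation of the HOTP/TOTP algorithms from RFCs 4226 and 6238; the phone runs `totp()` to display a code and the website runs the same function to check it, with the secret from the QR code as the only shared state.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at: float, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    return hotp(secret, int(at // step))

# Phone and website each run this entirely offline; the only shared
# state is the secret exchanged once via the QR code.
secret = b"12345678901234567890"        # RFC test key, for illustration
assert hotp(secret, 0) == "755224"      # RFC 4226 test vector
assert totp(secret, at=59) == "287082"  # T=59 falls in counter window 1
```

A real verifier would typically also accept codes from the adjacent time windows to tolerate clock drift between phone and server.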
There is growing support for "push"-style authorization tools that more directly communicate between the devices. Up to now the tools have been mostly vendor-specific. Google has push notification authorization in the Google ecosystem. Apple has push notification authorization in the Apple ecosystem. Microsoft has push notification authorization in the Microsoft ecosystem. The growing WebAuthn standards (for which the linked post is a guide) are exactly the sorts of standards being built to increase interoperability between vendors and to make "push"-style authorization cheaper, easier, and more ubiquitous on the web. (Those standards aren't 100% there yet for multi-vendor interoperability, as other comments in these threads accurately nitpick, but this is still a giant step forward in that direction.)
Also, if your TOTP Authenticator app isn't already using your device's fingerprint or Face ID biometric locks, consider moving to a TOTP app that does. Most of the major ones do, exactly for that "trifecta" reason of layered security.
I'm not an expert in authentication but afaik TOTP (and HOTP) work completely offline. That means you could store your keys on a device that doesn't have internet access. On that device you can do whatever you want. Some TOTP apps allow you to lock your keys with an additional passphrase or a biometric factor.
From my (maybe naive) POV as a user I tend to agree, it would be nice to have a standard for push-based authentication so that I can actually see when someone else has made it past the password prompt. Although email notifications would largely solve that problem (if more websites used them).
I'm not seeing a huge benefit of the personal device being offline while you're trying to log into an online service. But let's say there was a need for that; what about using Bluetooth or Wi-Fi Direct to push to the device?
Like I said, I don't know the standards, so I don't know the authors' intentions. But there are actually specialized devices which do nothing but generate TOTP tokens, so that seems to be a use case. (The keys don't have to be on a phone or in a particular app.)
A push token usually means you're utilizing a service such as Okta, RSA, Symantec VIP, etc. whereas RFC TOTP can just be managed locally and the user can choose a 2FA app of their liking.
> I don't understand why authentication usually requires you to type in some 6-digit number from your phone.
Because it proves that the user logging in is in control of a device that they've linked to the account. When you add an account to whatever app you're using (Google Authenticator, Authy, etc.) what it's actually doing is receiving a cryptographic key that it uses to generate the 6 digit code based on the current time. Without that key, the proper 6 digit code can't be generated.
I think the procedure I described also can do this, but the 6-digit code is sent in the background. I don't see why a human has to physically write out 6 digits from phone to computer, instead of it just happening automatically.
The main difference here is usability. The current process is going into an app, finding and choosing the website from a list, manually copying from one screen to another, checking that you copied the digits correctly, then confirming. This is stressful and takes about a minute. A process where you just confirm a dialog or use your fingerprint takes 2 seconds and doesn't require the mental effort of memorizing and writing out 6 digits. If the people working on security can't see the enormous difference between the two workflows, then this is hopeless.
It's the same issue that plagues the security-minded people who think regular users will go around copying and storing each others' PGP keys.
The problem, IMO, is the lack of standardization. Okta has its own implementation, Google has its own implementation, etc. TOTPs in stark contrast are pretty universal and not that much harder to deal with, provided you have some way to backup and recover your TOTP keys (this is Authy's selling point, for example).
Why in the world is this API so complex? Devs will continue to implement password authentication, even with general browser support, until there's a simpler way to register and sign in users.
How could it be simpler? You ask the browser for a credential associated with an opaque ID, and get back a public key with its own ID. To use the credential, you ask the browser to sign a challenge (another opaque ID), and the browser gives you back a signature, which you check against the key. It's almost the simplest thing that could possibly work; it's much, much simpler than OAuth2.
This page is an excellent summary introduction to WebAuthn, but it makes the API look more complicated than it is, because it surfaces a lot of options you probably won't care about.
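To make the "simplest thing that could possibly work" claim concrete, here is roughly what the relying-party side boils down to. This is a sketch, not a complete implementation: `verify_signature()` is a placeholder for a real public-key check done by a crypto library, and the challenge store is just module-level state.

```python
import json
import secrets

# Challenges we've handed out and not yet seen back.
ISSUED_CHALLENGES = set()

def begin_login() -> str:
    """Issue a random challenge; the browser passes it to navigator.credentials.get()."""
    challenge = secrets.token_urlsafe(32)
    ISSUED_CHALLENGES.add(challenge)
    return challenge

def finish_login(client_data_json: bytes, signature: bytes,
                 public_key, expected_origin: str) -> bool:
    client_data = json.loads(client_data_json)
    # The signed challenge must be one we issued, used at most once (no replay).
    if client_data.get("challenge") not in ISSUED_CHALLENGES:
        return False
    ISSUED_CHALLENGES.discard(client_data["challenge"])
    # The browser, not the page, fills in the origin, so a lookalike
    # domain fails here even if the user was fooled.
    if client_data.get("origin") != expected_origin:
        return False
    # Finally, check the signature against the stored public key.
    return verify_signature(public_key, client_data_json, signature)

def verify_signature(public_key, message: bytes, signature: bytes) -> bool:
    # Placeholder: a real server verifies an ECDSA/RSA signature over
    # authenticatorData || SHA-256(clientDataJSON) with the stored key.
    return True
```

Registration is the mirror image: ask the browser to create a credential, then store the returned credential ID and public key against the account.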
> Is attestation optional or required? Is there a default? Can I not care? What do I read to not care?
Attestation is optional, the default is not to have it, and you should not use it. It's something some people insisted they needed, and eventually it's easier to let them have the feature than keep explaining why it's a bad idea.
So you can elide that element of the structure, or write "none" instead of "direct" here, to say you don't want attestation.
If you just throw the attestation away (or do something morally equivalent, like log it and then only read the logs once in the first week when you're interested), then you've annoyed the user (it prompts me to permit attestation, and I explicitly refuse) for no purpose. Hopefully in authentication it's not necessary to explain why higher friction is not good.
If you use the attestation to make decisions (e.g. allow Yubico products, refuse everything else) now you've got the burden of those decisions. You need to stay on top of new products, evaluate them and decide what's acceptable. For a private site you can control the burden by e.g. requiring employees to use the company branded Yubico authenticator, and just making sure you allow any new batches Yubico ships when you buy more, but for a public site this is definitely a real piece of work for a small team.
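The maintenance burden is visible even in a sketch. An attestation policy is, in essence, an allow-list keyed on the authenticator model ID (AAGUID) extracted from the attestation statement; the IDs below are made up for illustration, and every new model or batch means revisiting the set:

```python
# Hypothetical allow-list of approved authenticator models, keyed on
# the AAGUID from the attestation statement (IDs are made up).
ALLOWED_AAGUIDS = {
    "11111111-1111-1111-1111-111111111111",  # hypothetical company-issued key
}

def attestation_policy(aaguid: str) -> bool:
    # This set is the ongoing burden: someone has to evaluate every new
    # product on the market and decide whether to add it.
    return aaguid in ALLOWED_AAGUIDS
```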
If you're doing that work - to what end? Do you have the information to make wise choices? I think the user is far more likely to know whether the authenticator suits their purpose than you. They may know that - because they're constantly working with abrasive chemicals using their bare hands - their fingerprints are trash and so the fingerprint reader you recommend doesn't work, they go with the PIN entry device. Or despite your preference for USB-C they work all day with servers that only have USB-A ports so they want a USB-A authenticator.
For a private site you actually might have policy coming from above that's most easily implemented using attestation. Like stupid password policies, at some point you've told the Powers That Be what you recommend, they've ignored it, you just do what they said and roll your eyes. On a public site it's just crazy.
Attestation compromises FIDO's "Relying parties shouldn't care what the authenticator is" design because now you do care what the authenticator is. So as a purist that's enough reason to rule it out.
I was considering trying to implement this in an imageboard engine project, with the idea being that instead of relying on IP addresses as a way to curb abuse without requiring users to make accounts, every user would instead cryptographically sign their posts, and that would serve as a form of identity while still, practically speaking, being more anonymous than having to log IPs. But the reliance on Microsoft and Apple, and there being no documentation I could find about how this would work on Linux or a BSD, made it a no-go for me.
I wish it were possible to authenticate to web services as easily as just sharing a GPG or SSH public key to a server and signing a challenge to prove your identity, but there would probably be security and usability concerns with doing something like that.
I understand how users register and login on a device - but it gets complicated when wanting to allow users to couple multiple devices to an account.
Anyone here with recommendations?
Say you registered via smartphone and now want to log in on the desktop - do you tell the desktop user to grab their mobile Safari, log in, and pull up some PIN number to then type into the desktop client?
And how would people recover accounts if their devices are lost?
I guess technically one can always come up with some solution - but while FIDO gives a unified, cross-device way for users to login and register – it’s the complete opposite when it comes to the aforementioned issues
Microsoft, Google, and Apple recently announced support and commitment for multi-device credentials, which you can share between devices in the same "sync fabric". See some discussion on Hacker News here: https://news.ycombinator.com/item?id=31294316
>Say you registered via smartphone and now want to log in on the desktop - do you tell the desktop user to grab their mobile Safari, log in, and pull up some PIN number to then type into the desktop client?
That's basically how it has to be done, at least for on-device authenticators. Granted, you can replace the PIN code mechanism with some other one, like having the website email you a one-time authentication URL that you can then use to add your desktop authenticator.
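The one-time URL mechanism might look roughly like this; the URL, storage, and names are all hypothetical, and real code would persist the tokens and send the link by email:

```python
import secrets
import time

PENDING_LINKS = {}  # token -> (user_id, expiry timestamp)

def issue_device_link(user_id: str, ttl_seconds: int = 900) -> str:
    """Create a short-lived, single-use URL the user opens on the new device."""
    token = secrets.token_urlsafe(32)
    PENDING_LINKS[token] = (user_id, time.time() + ttl_seconds)
    return "https://example.com/add-device?token=" + token

def redeem_device_link(token: str):
    """Return the user_id if the token is valid; consume it either way."""
    entry = PENDING_LINKS.pop(token, None)  # pop makes the link single-use
    if entry is None:
        return None
    user_id, expiry = entry
    if time.time() > expiry:
        return None
    # This session may now register a new WebAuthn credential for user_id.
    return user_id
```

Note this inherits email's security properties for that one step, which is exactly the kind of fallback trade-off the thread is discussing.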
If you use a portable authenticator (YubiKey), then you can just use the same authenticator on the phone and on the desktop. The ones with NFC will perform the same authentication on mobile and desktop.
According to Mozilla's docs and that issue, it seems like WebAuthn considers the UA and the "authenticator" to be different things. The authenticator, here, is a hardware device, either a separate security key or a TPM. But that really means that the hardware the keys are on is a SPoF, and it shows in the comments on the GitHub issue.
The thread seems to resolve this with "every RP should allow the user to add more authenticators"; this is, to me, something that's not going to work:
1. asking a user to buy two phones, TPMs, or hardware security keys isn't going to happen, realistically, and the consequence of it can't be "the user is locked out of their account"
2. even if I do 1, I need to do the "add a second authenticator" flow for my backup authenticator each time I register an account. In the event that I do have an authenticator become compromised, or simply just succumb to hardware failure (what's the warranty on a modern phone?) I need to do an O(accounts) operation to migrate to a new authenticator?
3. OIDC has taught me that RPs don't get this. I think StackOverflow is the only site I've seen correctly separate "account on the RP" from "auth from OP used to login" into a one-to-many relationship. I doubt RPs will get it right for WebAuthn, but we'll see.
… I should note that MFAs share some of these problems too. I used Google Authenticator for an MFA, and then my phone reached the end of its useful life, after a short 9 years. Turns out: there's no way to migrate Google Authenticator's data to a new device! Thankfully, in my case, I was retiring the device, and it wasn't completely dead, so I could still use it, and thus, migrate the accounts. And it was an O(accounts) operation. ("Thankfully" (/s), many of my "2FA" accounts were using SMS, so there wasn't too much to migrate from Google Auth.)
At least with a password manager, I can back up & move the managed passwords to another device. (Though to be clear, OIDC > each site has its own password, to me.)
I don't see any user verification anywhere here. All I see is Javascript calls that create a credential on the client without ever asking the user about it. How is this not vulnerable to someone else impersonating me?
It's in the other portion of FIDO2: CTAP, short for Client to Authenticator Protocol. Your authenticator (a TPM, a YubiKey, maybe a phone) verifies your biometrics, PIN, or presence before signing a request for the client (the browser, in the case of WebAuthn).
If you request User Verification (UV) rather than just User Presence (UP), then the authenticator does whatever it does to verify this is actually the same user, and not just any human. That's up to the authenticator, and is irrelevant to the JavaScript code.
On the cheapest devices this is typically a PIN entry, so maybe a GUI window appears on their PC and they type in some value the authenticator knows, maybe it's "FuckGoogle" or "870429". But there are devices where it's fingerprint recognition, it's entirely up to the device (except that the PIN entry feature has explicit protocol support so that your OS can help out)
After it sees acceptable verification, the authenticator signs the message it's sending back. It doesn't actually understand most of the message, because it is just a very dumb piece of electronics, but it does understand a series of bit flags, in this case the UP and UV flags we mentioned at the start. So it knows it is signing a message about User Verification (in most cases on the web it is not asked for User Verification, because for e.g. a second factor you needn't have that: "I have the device" is the second factor).
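Those bit flags are simple enough to show. Per the WebAuthn spec, the authenticator data begins with a 32-byte rpId hash, then the flags byte, then a 4-byte big-endian signature counter; a relying party checking UP/UV might do something like:

```python
import struct

# Flag bits in the authenticator data flags byte (WebAuthn spec).
FLAG_UP = 0x01  # User Presence
FLAG_UV = 0x04  # User Verification
FLAG_AT = 0x40  # Attested credential data included

def parse_flags(authenticator_data: bytes) -> dict:
    """Read UP/UV from byte 32 of the authenticator data."""
    flags = authenticator_data[32]
    return {
        "user_present": bool(flags & FLAG_UP),
        "user_verified": bool(flags & FLAG_UV),
    }

# A sample response where the authenticator set both UP and UV:
sample = bytes(32) + bytes([FLAG_UP | FLAG_UV]) + struct.pack(">I", 7)
assert parse_flags(sample) == {"user_present": True, "user_verified": True}
```

A site requiring UV would reject an assertion whose UV bit is unset, even if the signature itself is valid.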
Is there a reason why client side TLS certificates never took off as a form of authentication? Unlike webauthn, it doesn't depend on a specific application level protocol like HTTP.
Because the UX is 100% absolutely terrible. I mean, it's so bad that even die-hard fans can't use it regularly. It's worse than Git's UX... well, OK, maybe it's as bad as Git's UX.
Then why not work on improving the UX in the OS and/or application instead of coming up with an entirely new standard?
For example, when creating accounts on websites like here or reddit, part of the process should involve the application you're using creating a CSR and the website sending the certificate back. The application should then store the certificate and whenever you log in, the application knows to use the certificate and private key when negotiating a connection.
And the application doesn't have to be a web browser. It could also be an email client or IRC client, for example.
All the browsers (well, all the ones anyone ever uses) are now OSS, and nobody has done the work.
As for doing public key crypto, that's exactly what WebAuthn does. It doesn't use X.509 and TLS under the hood, but it essentially is a public-key crypto system. It works in reverse of what you are saying, though. The device (TPM, YubiKey, etc.) stores the private key, and the website just verifies that the device really does have it. Note: there are lots of details I skipped over on purpose.
The problem I have with webauthn is the fact that it depends on the HTTP application protocol. TLS works regardless of the application protocol used. If I'm using Thunderbird to read and send email over IMAP and SMTP respectively, there shouldn't be a requirement that I use HTTP for authentication.
I could never get the FIDO2 implementation of CTAP (Client to Authenticator Protocol) to work on Android phones. I tried registering on many sites, including https://webauthn.io/, and it never worked. I last tried with a Google Pixel 4 on the latest software update. Has anyone gotten it to work on Android phones? If so, where?
But in every FIDO2 registration there is a pop-up for both Android phones and USB security keys with the API call. The prompt for registering a device shows both "Add a new Android phone" and "USB security key". When you click on "Add a new Android phone" it leads to a barcode, but when you scan it nothing happens. Seems real confusing, especially if they want broader adoption.
2nd-factor WebAuthn should work fine with Android phones today. Any Android phone with a current version of Play Services and Chrome should be able to scan the QR code, if it doesn't work then it's a bug (or, at least, an old QR scanner).
When you say "nothing happens": what's the QR scanner? Do you have Chrome installed on the device? What are the app versions (from Settings) of Play Services and Chrome?
I'm not sure what "it" is in this context. I have a Pixel 2 which is (consults list) registered at Google, Login.gov, Dropbox and GitHub. Most other places I have only my physical security keys registered, but at those I added the Pixel 2. In particular it isn't registered at Facebook, because I only book faces inside a dedicated Firefox container on a single machine in a single location, so as to reduce my exposure.
In my experience as the phone gets older features start to "deteriorate". For example, the previous phone didn't make or receive telephone calls after a few weeks without rebooting, this Pixel 2 ceases to work as a Contactless payment device after a few days without rebooting, and FIDO is the same way, so, maybe try rebooting? I figure that one day Google will rewrite their core systems in Rust or some other language that reduces the rate of heisenbugs and they'll have fewer features that "deteriorate" in this way. Also I should buy a new phone at some point.
I cannot stress how much I really do not want this to be a thing. I go to extensive lengths to not use OAuth against Google, Microsoft or anything else - there's no way I want a ban in one place to suddenly percolate to every service I may or may not have used that credential with.
Same with tying things to my phone: phone number is acceptable, that I port between devices, but my phone, it's contents etc.? Essentially disposable: subject to provider whims, API incompatibilities or apps not implementing backup interfaces (i.e. there's no way to get my home screen back exactly how I had it when I switch Android phones). If it doesn't wind up on one of my computers via Syncthing, I pretty much assume it's going to be deleted forever at some point.
FIDO2 and WebAuthn are agnostic here: a FAANG account is not required. Yes, most developers are lazy and will just use the stuff MS or Google or Apple put out that implements all of this, but it's by no means required to use their stuff.
I'm wondering whether we've reached a point where we need to tell people that quantum computers might one day be able to "break" their public keys (i.e., retrieve the corresponding private keys) whenever we tell them to use them for sensitive use cases. The possibility has been known for a long time, but now it doesn't seem crazy distant anymore.
If anyone wants to experiment and add Web Authentication to your web app, we've built an API that makes it really easy to get started:
https://Passwordless.dev
I've spent the last 4 years working on fido2-net-lib and have been running the Passwordless API for about two years now. Happy to answer any questions.
My biggest problem with WebAuthn: it REQUIRES JavaScript. I know it's 2022 and everyone just sweeps the JS security issues under the rug and pretends that running JS for security is 100% totally awesome.
Yes, you can mitigate the security issues a lot with a myriad of security headers; that still doesn't mean you aren't expanding your threat model a lot by enabling JS for authentication.
I for one look forward to the day when you won't be able to turn on a phone or a computer or access the Internet without submitting to a fingerprint and retina scan so that all your activity can be uploaded into the national security state database - for the protection of the children, of course. I believe China has already implemented all of this? Nice to see the 'free world' rushing to catch up.
Not sure how this is relevant? There is a bunch of hardware solutions that don't involve fingerprints/personal data, like YubiKeys, and even software solutions that just require you to generate a key locally that you can then use.
[1] https://support.apple.com/en-us/HT204921