Bring your own key was a lie (edgeless.systems)
33 points by m1ghtym0 on Oct 13, 2022 | hide | past | favorite | 27 comments



What's the HN equivalent of Slashvertisement?

> Fair enough, but then why did you split the trust in the first place using BYOK? Let’s think about the threat model here. Anyone leveraging privilege in the cloud or exploiting vulnerabilities in the isolation of cloud tenants, will inevitably gain access to the cryptographic services and eventually to your keys. So, do you trust the entire public cloud and its tenant? And why go through all that trouble of BYOK and separation of concerns?

Because as owner of the CMK, I can rotate it as often as I want and limit the exposure if one instance of the CMK has been exposed by the CSP.

And I can decide my own policy on having CMK shared across CSP and not be tied to just one CSP.

Confidential Computing is not a replacement for the discussion on who manages the key-encryption-key.


Slashvertisements, I forgot about those. I think they are considered virtuous now, especially on this forum.


Corporations expending energy refuting each other's bullshit/misleading claims means there's less energy available to con prospective customers, so I see that as a win.


You are correct, KMS implements important aspects of key management. The conclusion of the article is not to replace KMS with Confidential Computing. Instead, the idea is to combine them to achieve the ultimate goal of protecting sensitive data. CC does not solve the problem of who manages the KEK; it solves the problems of using the DEK securely, accessing the KEK securely, and ultimately, effectively protecting the processed data.
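To make the KEK/DEK split concrete, here is a minimal envelope-encryption sketch. It is a toy model only: the XOR stream cipher stands in for real AES-GCM, and the key names are illustrative, not any vendor's API. The point it shows is the one from the comments above: the customer-managed KEK only ever wraps DEKs, so rotating it means re-wrapping small keys, not re-encrypting all data.

```python
import os, hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy XOR stream cipher as a stand-in for AES-GCM -- NOT secure.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(stream).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# Envelope encryption: a fresh DEK encrypts the data; the KEK
# (the customer-managed key) only ever wraps the DEK.
kek = os.urandom(32)                 # customer-managed key (CMK/KEK)
dek = os.urandom(32)                 # per-object data-encryption key
ciphertext = toy_encrypt(dek, b"sensitive record")
wrapped_dek = toy_encrypt(kek, dek)  # only this is stored next to the data

# Rotating the KEK means re-wrapping DEKs, not re-encrypting all data.
new_kek = os.urandom(32)
rewrapped = toy_encrypt(new_kek, toy_decrypt(kek, wrapped_dek))
assert toy_decrypt(toy_decrypt(new_kek, rewrapped), ciphertext) == b"sensitive record"
```

Note what the sketch does not solve, which is the article's point: to use the DEK, some environment must hold it in plaintext memory, and that is the gap CC targets.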


And also it is another line of defense, which can only improve security, not worsen it.


I agree, it's defense in depth.

However, suppose I'm a famous carmaker [1]. What are the chances that I screw up and publish my CMK in a public repo, compared to the chances of my CSP screwing up and publishing my tenant's PMK on a public repo?

[1] https://news.ycombinator.com/item?id=33155138


What is the Slashvertisement?


I assume it's like the term "slashdotted", which comes from Slashdot. In this case, the headline baits you into some company's advertisement article intended to make you buy into their product.

I wasn't a heavy Slashdot user, but I read it nearly daily until I found HN.


> What is the Slashvertisement?

I assume that it is a portmanteau of "Slashdot" (https://slashdot.org/) and "advertisement".


Post a technical article highlighting problems with X, and as part of it, mention your product, which happens not to have those problems or deals with them in a certain way.

So a blog post with the author advertising a product.


A targeted form of unsolicited advertising (ie. spam).


If you don't trust your CSP (the threat model discussed in the article), then I'm not sure that confidential computing will save you, as you're relying on the CSP to implement and provide that service faithfully.

The CSP chooses, installs, and manages the hardware, and you can likely only interface with that hardware through CSP-provided software. If the CSP is malicious, it seems likely that they could backdoor this stack to gain access to encryption keys...

If you don't trust the CSPs, surely the right answer is on-prem hardware.


I hear that argument a lot. The key aspect here is remote attestation. Often enough, CC is seen only from a memory-encryption angle. It's maybe not straightforward, but remote attestation, and of course the verifiability of such attestation claims, is what makes CC unique.

The remote attestation capabilities of CC hardware allow you to establish a secure channel from the hardware to the user, taking the CSP fully out of the equation. That applies even though the CSP implements the IaaS in between.

There is documentation that explains this in more detail if that's of interest to anyone following these discussions:

* https://confidentialcomputing.io/wp-content/uploads/sites/85...

* https://content.edgeless.systems/hubfs/Confidential%20Comput...


AMD's SEV supports providing an attestation to the launch state of the VM, including information about whether the hypervisor has any visibility into the contents of the VM. If this works as described it does genuinely let you decouple trust from the CSP, instead placing it purely in the CPU vendor.

But I agree with the general thrust of the post - simply providing your own keys isn't sufficient to remove the CSP from the set of people you need to trust. There are reasons to do this (eg, you want the ability to extract your encrypted data and make use of it, or you want to have a chain of trust back to keys that you control), but the moment you upload a private key anywhere it's obviously no longer private in the same sense it was before you did that.


So with AMD's SEV (and, I'm guessing, similar systems), what's the interface by which a customer gets that information?

What I'm interested in is: is there not a CSP-controlled API between the literal hardware and the CSP customer that might be subject to attack?


The OS running inside the VM hits an external API (one you control, not the CSP's), which returns a challenge; the CPU signs a response that includes that challenge and its state, and you verify that the signature chains back to AMD. The CSP isn't directly involved in the exchange.
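The challenge-response flow described above can be modeled in a few lines. This is a toy sketch, not the real SEV-SNP interface: HMAC with a shared key stands in for the ECDSA signature a real CPU produces with its AMD-fused key (which the verifier checks via AMD's certificate chain), and all names here are illustrative.

```python
import hmac, hashlib, os

# Toy model of the attestation flow -- HMAC stands in for the ECDSA
# signature a real SEV-SNP CPU produces with its manufacturer-fused key.
cpu_key = os.urandom(32)      # burned into the CPU at manufacture
verifier_key = cpu_key        # verifier learns it via the vendor cert chain

def cpu_sign_report(nonce: bytes, vm_measurement: bytes) -> bytes:
    # The CPU binds the verifier's fresh nonce to the launch measurement,
    # so a malicious hypervisor cannot replay an old report.
    return hmac.new(cpu_key, nonce + vm_measurement, hashlib.sha256).digest()

def verifier_check(nonce: bytes, report: bytes, expected_measurement: bytes) -> bool:
    expected = hmac.new(verifier_key, nonce + expected_measurement,
                        hashlib.sha256).digest()
    return hmac.compare_digest(report, expected)

nonce = os.urandom(16)                                # challenge from your own API
measurement = hashlib.sha256(b"vm image v1").digest() # VM launch state
report = cpu_sign_report(nonce, measurement)
assert verifier_check(nonce, report, measurement)
```

The CSP relays the nonce and the report, but since it holds neither the CPU's signing key nor the verifier's expected measurement, it can neither forge nor usefully replay a report.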


So the CSP has physical access to the CPU (and the rest of the hardware), is it possible to attest that it hasn't been tampered with after it leaves the CPU manufacturer's control?

(I'm not saying that it's in any way easy to modify, but if our threat model here is a malicious company with the resources of AWS/Azure/GCP, then it seems sensible to consider even difficult attacks.)


The thing you're looking for is called remote attestation. That means there is a direct channel from the hardware to the user that attests the confidentiality and integrity of the VM. Such an attestation statement is signed by a key burned into the CPU at production time. The remaining attack vector is extracting that key from the hardware itself. There is academic research on this topic; in essence, while technically possible, it is considered impractical, especially at scale.


Do you know of anyone offering this at the moment? I wonder if you could use that for Vault authentication somehow.


Constellation (a Kubernetes distro) [1] on Azure would give you this attestation feature. You could then run something like HashiCorp's Vault in that cluster. Through the attestation statement, you will know that all nodes of that cluster are in the state you expect them to be in.

[1] https://github.com/edgelesssys/constellation

Disclaimer: I work for Edgeless Systems.


This. A thousand times this.


The author promotes deep hardware solutions, like Trusted Computing. I thought we'd already moved past that? TPM, Intel ME, AMD PSP...? If you are in a position to distrust your provider, DON'T USE CLOUD. Build your own infrastructure, use colocation, bring your own server. If you fear you may get poisoned, don't eat in public restaurants. And BYOK is just a minimal norm of basic security.


Isn't the normal practice here to provide a key use transparency log?

HSMs can produce a log entry every time the key is used, and as long as you trust the hardware to be doing the right thing (if you don't, you've got bigger problems), the log should be verifiable, so you know if there's a missing entry.

The cloud provider has the key but gives you verifiable logs of it being used, so you can catch them if they're doing the wrong thing.
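A minimal way to make such a log tamper-evident is a hash chain: each entry commits to the previous one, so a silently dropped or altered entry breaks verification. This sketch is illustrative only; a real HSM transparency log would also sign entries and likely use a Merkle tree, as Certificate Transparency does.

```python
import hashlib

GENESIS = b"\x00" * 32  # placeholder "previous hash" for the first entry

def append(log: list, entry: str) -> None:
    # Each digest covers the previous digest plus the new entry,
    # chaining every key use to the full history before it.
    prev = log[-1][1] if log else GENESIS
    digest = hashlib.sha256(prev + entry.encode()).digest()
    log.append((entry, digest))

def verify(log: list) -> bool:
    # Recompute the chain; any missing or modified entry breaks it.
    prev = GENESIS
    for entry, digest in log:
        if hashlib.sha256(prev + entry.encode()).digest() != digest:
            return False
        prev = digest
    return True

log = []
for op in ["decrypt blob-1", "sign cert-42", "decrypt blob-2"]:
    append(log, op)
assert verify(log)

del log[1]            # the provider tries to hide one key use
assert not verify(log)
```

The customer periodically fetches and verifies the chain; the provider can still use the key, but not without leaving an entry the customer will see.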


They say that part of the solution is that their system provides runtime encryption, so third parties, whatever and whoever they might be, can't see the data. But where, physically, is the key used for that encryption? If it still has to be in the cloud, which I think would be the case, then the problem isn't solved.


There's also a lot of exciting development coming out of academia; see, for example, structured encryption and its applications to encrypted search: https://esl.cs.brown.edu/pubs/


The same argument can be made for encrypted EBS volumes. I see some of these solutions as a way to add a checkmark in some compliance report somewhere - they don't necessarily add a lot of actual security.


BYOK was a lie because it only protected keys at rest. When so many actors can access the environment where keys are in use, customers lose control of their keys and identities. "Bring your own key — share it with everyone."

Confidential Computing fundamentally changes this by providing protection for keys in use and enabling trusted and verifiable runtime environments.

Solutions such as Constellation solve the shortcomings of BYOK using Confidential Computing so you can finally “Bring your own key — keep it yours”.



