No, the point is to get sensitive data out of environment variables, which nowadays get stored in plaintext in an .env file or similar. This is a solution for storing and retrieving secrets, authenticated with your AWS credentials. Essentially an online password manager for your application.
But AWS Secrets Manager does that already, without the Agent. It seems like the main value-add of the Agent is that you don’t have to manage a cache in your application code but still get the performance/cost advantage of having one.
So you don't have to manage a cache, but you do have to manage a network-connected sidecar service? You can make the "N programming languages" argument for why this isn't just a library, but they already ship the AWS SDK, with Secrets Manager clients, in all those languages. What advantage would this hypothetically have over a caching mechanism native to the SDK and internal to the application process?
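For what it's worth, the SDK-native approach being described is basically a TTL cache in front of the GetSecretValue call (AWS does publish caching clients like this for some languages, e.g. aws-secretsmanager-caching for Python). A minimal stdlib sketch of the idea, where `fetch` is a hypothetical stand-in for the real SDK call:

```python
import time


class SecretCache:
    """Tiny in-process TTL cache in front of a secret-fetching call.

    `fetch` stands in for a real SDK call such as
    client.get_secret_value(SecretId=...) -- stubbed here so the
    sketch has no network or SDK dependency.
    """

    def __init__(self, fetch, ttl_seconds=300.0, clock=time.monotonic):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._clock = clock
        self._entries = {}  # secret_id -> (expires_at, value)

    def get(self, secret_id):
        now = self._clock()
        entry = self._entries.get(secret_id)
        if entry is not None and entry[0] > now:
            return entry[1]             # cache hit: no round trip
        value = self._fetch(secret_id)  # cache miss: one real lookup
        self._entries[secret_id] = (now + self._ttl, value)
        return value
```

With something like this inside the process, any language that has an SDK gets the Agent's performance/cost benefit without running a sidecar.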
The Security section of the README actually recommends using the SDK when it is a viable option. Seems like this is meant to fill a small gap for niche scenarios where that isn't an option, for some reason.
Yeah I think this line should really be at the utter-tippy-top of the README
> The Secrets Manager Agent provides compatibility for legacy applications that access secrets through an existing agent or that need caching for languages not supported through other solutions.
This does not appear to be an interesting, state-of-the-art, purported best-practices way to access Secrets Manager; it's just a shim for legacy apps. And that does make sense, as there are many languages which don't have an SDK available but do have an HTTP client library; though I question how much of the demand for something like this comes from how insanely complex authenticating with the AWS API is.
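For context on the "HTTP client is enough" point: the Agent exposes a plain HTTP interface on localhost (port 2773 by default) and expects an SSRF-protection token in an `X-Aws-Parameters-Secrets-Token` header, so any language with an HTTP client can talk to it. A stdlib sketch that only builds the request (port, path, and header name are taken from the project's README; the token value here is a placeholder, in real use it's read from the agent's token file):

```python
import urllib.parse
import urllib.request


def build_agent_request(secret_id, token, port=2773):
    """Build (but don't send) a GET to the local Secrets Manager Agent."""
    query = urllib.parse.urlencode({"secretId": secret_id})
    url = f"http://localhost:{port}/secretsmanager/get?{query}"
    req = urllib.request.Request(url)
    # SSRF-protection token required by the agent
    req.add_header("X-Aws-Parameters-Secrets-Token", token)
    return req


req = build_agent_request("MyDatabaseSecret", "placeholder-token")
```

Passing `req` to `urllib.request.urlopen` (with the agent actually running) would return the secret response; the point is that no AWS SigV4 signing happens in the application at all.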
What about the credentials used to fetch the AWS credentials? I think there's a good case for centralised credentials where they are shared across applications, though I would seriously question the need to share them across applications in the first place. But as far as I can tell, what you're achieving here is just making secret retrieval more convoluted (for both devs and hypothetical attackers). Not to beat the dead horse, but obscurity != security.
When you deploy your code to AWS Lambda or EC2, the code can simply access the appropriate secrets as dictated by its IAM policy. If you haven't bought into AWS as a whole, you're right that there's no good reason to use Secrets Manager.
If you're in AWS, you get credentials from the metadata service.
If you're outside AWS, workloads assume roles using OIDC. If you still have access keys, generally speaking, you're doing it wrong.
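For anyone unfamiliar, the "metadata service" path is IMDSv2: the workload PUTs for a short-lived session token, then GETs the role's temporary credentials using that token. A stdlib sketch that just constructs the two requests (endpoint and header names per the EC2 IMDSv2 docs; nothing is sent, and the role name is a made-up example):

```python
import urllib.request

IMDS = "http://169.254.169.254"


def token_request(ttl_seconds=21600):
    """Step 1: PUT to mint a short-lived IMDSv2 session token."""
    req = urllib.request.Request(f"{IMDS}/latest/api/token", method="PUT")
    req.add_header("X-aws-ec2-metadata-token-ttl-seconds", str(ttl_seconds))
    return req


def credentials_request(role_name, imds_token):
    """Step 2: GET the role's temporary credentials using that token."""
    req = urllib.request.Request(
        f"{IMDS}/latest/meta-data/iam/security-credentials/{role_name}"
    )
    req.add_header("X-aws-ec2-metadata-token", imds_token)
    return req
```

In practice the SDKs do this dance automatically as part of the default credential chain; the relevant point for the thread is that no long-lived access keys are ever configured on the workload.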
Technically true, but in practice the role means you don't have to care about them. They're an implementation detail that's managed by AWS. Could be flying mice for all the app dev cares.
Sure, under the hood it's still access keys. But they're short-lived, and on the normal happy path you never handle them directly. What I really mean by my comment above is that you're not configuring your workload with ACCESS_KEY=abc123 SECRET_ACCESS_KEY=xyz789.
They aren't configured, but they're not as temporary as one might hope (i.e. they don't rotate on every read, for example), and there's a pretty trivial set of exploits that can leak them, especially in Kubernetes clusters with incorrectly configured worker nodes.
A much better solution would be for AWS to offer a domain socket or device inside VMs that will sign requests, such that the private material isn't even available to leak.
There are no credentials; you're supposed to use identity-based auth: your Lambda / EC2 / EKS pods etc. have an IAM role, so there's no secret in any form.