Best practices for managing and storing secrets like API keys and credentials (gitguardian.com)
200 points by mackenzie-gg on June 12, 2020 | 35 comments



This is pretty much why we created encpass.sh (https://github.com/plyint/encpass.sh). It's a single POSIX-compliant shell script whose only real requirement is having OpenSSL installed.

Sometimes when you are hacking on a shell script or you have some configuration management pieces you just need a simple way to store and access secrets locally without having to invest in a lot of infrastructure. (Especially if you are working for an employer where you don't even get to have a say in the infra)

At Plyint (https://plyint.com), we actually use it to manage team-level secrets through Keybase. To make this easier we wrote an extension, encpass-keybase.sh (https://github.com/plyint/encpass.sh/blob/master/extensions/...), which uses Keybase's keys, encryption, and git repos.


https://github.com/plyint/encpass.sh/blob/93d42340/encpass.s...

It uses PBKDF2 with 10,000 iterations if anyone else was wondering.

The code is clean, kudos.
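For anyone curious what that looks like in practice, here's a minimal sketch of the kind of OpenSSL invocation involved (file name and passphrase are made up for illustration; encpass.sh handles key generation and storage for you):

```shell
# Encrypt: derive the key from a passphrase via PBKDF2 (10,000 iterations),
# then encrypt with AES-256-CBC. The salt is stored in the output header.
printf 'my-api-key' | openssl enc -aes-256-cbc -pbkdf2 -iter 10000 -salt \
  -pass pass:hunter2 -out /tmp/secret.enc

# Decrypt it back with the same KDF parameters
openssl enc -d -aes-256-cbc -pbkdf2 -iter 10000 \
  -pass pass:hunter2 -in /tmp/secret.enc
```

The `-pbkdf2` and `-iter` flags require OpenSSL 1.1.1 or newer.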


Is there an easy way to tell the version of an encpass.sh if all you have is the script?

Say I was checking my encpass.sh files on a bunch of different nodes and wanted to make sure they were all based on the current master version - would I have to sha256 them and compare?


Yes, just compute the sha256 checksums and compare. I had been meaning to add a version command to encpass.sh for a while now and just hadn't made time for it. Your comment was the impetus I needed, so I've gone ahead and added support for computing the sha256 checksums of encpass.sh and any extension and also displaying the tag version.

I've cut a new release for the version command, v4.1.0. See the release notes here for additional details -> https://github.com/plyint/encpass.sh/releases/tag/v4.1.0
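Before the version command, the manual check would have looked something like this (the install path and raw URL pattern are assumptions):

```shell
# Checksum of the current master copy on GitHub
master=$(curl -fsSL https://raw.githubusercontent.com/plyint/encpass.sh/master/encpass.sh \
  | sha256sum | cut -d' ' -f1)

# Checksum of the copy installed on this node
local=$(sha256sum /usr/local/bin/encpass.sh | cut -d' ' -f1)

[ "$master" = "$local" ] && echo "up to date" || echo "differs from master"
```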


Great article.

I would point out, though, that the disadvantages listed against Secrets as a Service need not apply to Hashicorp's Vault:

1) Single point of failure: Vault Enterprise offers high availability solutions that should be able to mitigate much of this (at a cost, of course).

2) Codebase must be changed: Vault (and Consul) really shine here: Consul Template -- and Envconsul -- can be used to seamlessly integrate legacy code with Vault.

3) System-level access must be protected carefully: Well, this is always true, but Vault gives you more options than many others here as well: based on risk assessments, you can choose to limit the secrets you issue, particularly when you're talking about system-level access. You can have very short TTLs, one-time-use wrappers that limit the exposure of said secrets, etc.

(I don't mean to sound like a shill, BTW. It's just that these points easily jumped to mind, having just certified as a Vault Associate. YMMV. :-) )
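As a sketch of the Consul Template approach (the secret path and key names are hypothetical; this assumes consul-template is installed and pointed at a running Vault):

```shell
# Template pulling a secret out of Vault's KV v2 engine into a dotenv file
cat > app.env.ctmpl <<'EOF'
{{ with secret "secret/data/myapp" }}DB_PASSWORD={{ .Data.data.db_password }}{{ end }}
EOF

# Render once and hand the file to the legacy app -- no code changes needed
consul-template -template "app.env.ctmpl:app.env" -once
```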


Regarding 1, I don't think you need Enterprise to have HA; I'm pretty sure it comes with Vault OSS. You may be thinking of Vault Disaster Recovery, which fails one cluster over to another, but HA is in OSS.


I was thinking of on-call support being a key part of the overall concept of HA, but, yes, good point: much of this risk can be mitigated, even with OSS.


HA is there in OSS now with their raft store.

I think you could've also done this in the past by using Consul as Vault's storage backend.


HA has always been in OSS


The "Store secrets safely" section seems to propose three alternative solutions: store secrets encrypted in Git, expose secrets to applications with environment variables, and encrypt secrets with KMS. But those are solutions to different problems. You can use all three.

For example you can use sops (https://github.com/mozilla/sops) to store encrypted secrets in Git, using AWS/GCP KMS to encrypt/decrypt them (or encrypt/decrypt the encryption key), and have infrastructure automation that gets the secrets from sops and exposes them to applications as environment variables.


For storing secrets as environment variables on your local machine I recommend envchain: https://github.com/sorah/envchain

It uses gnome-keyring (linux) or keychain (mac).

Unfortunately you need to install it from source, as it's not well known, but how about we give it some love and push to get it included in package repositories (Debian, Arch, etc.)?


https://github.com/mozilla/sops is quite good to store encrypted secrets. Because the decryption key is stored in a KMS the secret access can be revoked any time.


This post is spot on! For anyone looking for a lightweight solution based on environment variables/12-factor, check out EnvKey (I'm the founder) - https://www.envkey.com

It's cloud-hosted, but uses client-side end-to-end encryption to avoid trusting our servers (all clients are open source). The focus is on seamless integration (generally ~1 minute for a new project), intuitive UX, and platform-agnosticism.

We're also close to launching a v2 that can run on your own infrastructure with HA and offers a lot more power and flexibility--version control, audit logs, a CLI for automation, 'config blocks' that can be re-used across apps and services, managing local environments, SSO, teams, and event hooks are some of the highlights.

Also, we're hiring (remote in the USA). Our stack is TypeScript/Go/Polyglot. Please get in touch if you're interested in this stuff! dane@envkey.com


> Secrets management systems such as Hashicorp Vault or AWS Key Management Service

KMS isn't a secrets management system. It's for managing encryption keys, though it can be used in tandem with AWS's Parameter Store, which can be used as a secret management system. As a matter of fact, Task Definitions in ECS can pull those secrets out as environment variables in your containers, which IMO is pretty elegant.
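For reference, the relevant fragment of an ECS task definition looks roughly like this (container name, secret name, and ARN are hypothetical):

```json
{
  "containerDefinitions": [{
    "name": "web",
    "secrets": [
      {
        "name": "DB_PASSWORD",
        "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/db_password"
      }
    ]
  }]
}
```

The task's execution role needs ssm:GetParameters permission on that parameter (and kms:Decrypt if it's a SecureString).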


I laugh every time I see KMS used as the initialism for that, because in my circles that's short for "Kill MySelf."


This post is great! I would also add using the built-in secrets store your infra provider has in staging and production. For example, Heroku has config vars.

Shameless ad: If you guys are looking for a free and easy-to-use secrets manager with powerful features that works on every stack, I would recommend Doppler (YC W19). For transparency, I am the CEO.

Doppler is a cloud-hosted secrets manager designed to win the heart of the developer while meeting all the requirements of your security team. It works great in local development and production, can be nearly 100% automated, and has built-in versioning, reusable configs, audit logs, SSO, and granular access controls, and it can automatically sync with your infra provider's secrets store (ex: Heroku config vars). It also has high-availability features built into every part of our stack, from our open-source CLI creating encrypted fallback files to our servers running on multiple infrastructures.

Feel free to create a free account (no cc required) at https://doppler.com


We built ironhide https://github.com/IronCoreLabs/ironhide to help developers share secrets with each other and with CI.


Shameless plug for a k8s oriented tool that fits this space:

https://github.com/bitnami-labs/sealed-secrets


The current solutions for secrets management just seem to pass the buck; this problem is far from solved. Where are you going to store the secrets that spin up your k8s environment? It eventually comes down to protecting your PGP key, I think. Where do you store the secret for your admin account to AWS? Or your domain name provider? Or bank account? 1Password?

I suppose once your core infrastructure is up you just generate random passwords and store them in k8s for access when bringing up your infrastructure.


For passwords, you can use 1password or another password manager--that's pretty much solved at this point.

Secrets that are needed in server or development contexts are a lot trickier. There's a fundamental tension between making them widely and easily available (which makes development and ops easier), and restricting access (which is necessary for security). You'll also probably have multiple versions of the same secrets in different environments, have teams of developers that all need to stay in sync, etc. etc. Password management, as important as it is, has fewer moving pieces.


For AWS, locally, you store your secrets in your user directory, which shouldn't be anywhere near your git repository. All of the SDKs will automatically find your keys there. When you run your code on EC2, Lambda, or ECS (Docker), the SDK will automatically use keys based on the attached role.
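Concretely, that's the standard shared credentials file (key values below are AWS's documented examples, not real keys):

```ini
# ~/.aws/credentials -- lives in your home directory, never in the repo;
# every AWS SDK and the CLI pick it up automatically
[default]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```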

You should also require MFA to use your admin credentials either programmatically or on the web.

With AWS, you can even use an IAM role to connect to MySQL and Postgres, so you don't need to store a password for database access. You can use the SDK to generate a temporary password for the database.


I think the only really credible answer to this is "vault unseal." The first phase of availability zone turnup is assembling your quorum.


> Using wildcard commands like git add * or git add . can easily capture files that should not enter a git repository

No they can't, because of .gitignore.


Because mistakes happen: a secret can land in a path that isn't covered by .gitignore. If you add files manually, you at least have to consciously add the path that leaks the secret.

It's defense in depth. Ideally the first layer protects you, but this way you have another strong layer.
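One cheap extra check before a wildcard add: ask git whether the sensitive path is actually covered by an ignore rule.

```shell
# Exits 0 (and with -v prints the matching rule) if .env is ignored;
# a non-zero exit means 'git add .' would stage it
git check-ignore -v .env
```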


Giving up "git add ." is a steep price. I prefer to store my secrets outside the working copy. Then when I cd into the working copy, direnv sets environment variables either pointing to or catting the files from my home directory. This works beautifully with Ansible Vault.
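A sketch of that direnv setup (paths and project names are hypothetical):

```shell
# .envrc in the working copy -- direnv runs this on 'cd' into the directory.
# Secrets themselves live under the home directory, outside the repo.
export ANSIBLE_VAULT_PASSWORD_FILE="$HOME/.secrets/myproject/vault_pass"
export DB_PASSWORD="$(cat "$HOME/.secrets/myproject/db_password")"
```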


I'm curious why people use "git add ."? I don't see the convenience over "git add -i" and quickly reading through the items to add.

Realistically, if you have a reasonable git workflow, any add operation should only involve a handful of files that you can quickly scan to make sure nothing fishy is present, and at most interactive mode adds a few seconds for you to read through everything.

Note: This is an entirely serious question and I am hoping for an answer. I promise I'm not trying to antagonise anyone. I just see so many people reaching for "add ." instead of "add -i".


Keeping a file untracked is never what I want. Either it goes in the repo or it goes in .gitignore. My gitignores are usually comprehensive, and when they're not I'll catch it in my self-review of the PR.


That's fair but isn't it more work to go back and have to rebase your commits to remove something that accidentally got skipped in your .gitignore?

I say this because I am also a firm believer of if it shouldn't be committed it should be in the .gitignore but I've only ever been burned by that when either myself or others weren't using interactive add.

Basically, isn't it easier to fix a .gitignore at commit time than at PR time?


I think the idea is "defense in depth" - but you are correct - if you use env vars/vaults, and a proper .gitignore, this isn't an issue.


Rule #1: never give developers production secrets. That way they won't be able to misuse them or push them to the repo. In my experience, devs only care whether their software works; security is often an afterthought.

Rule #2: it's not only secrets management; the whole stack should go through security hardening and regular security review.


Rule #1 is really important. Unfortunately, many small companies and most startups only have developers and no separate operations department. The best those developers can do is keep secrets out of git, Slack, and email. I'm using git-secret for a project of a customer with one internal developer and about five external consultants.


If that's the case, then only a team lead or the most senior dev should have prod secrets. And I would prefer to be very low-tech in terms of secrets management.

Definitely no passing secrets in ENV (because any process running as the same user can read them and exfiltrate) or as command-line arguments (because they will be logged, as all tty commands are).
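On Linux this is easy to demonstrate: any process running as the same user can read another process's environment through /proc.

```shell
# Read this shell's own environment the way another same-user process could:
# /proc/<pid>/environ is NUL-separated, so translate to newlines
tr '\0' '\n' < /proc/$$/environ | head -n 5
```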


> Use local environment variables, when feasible

Hmm, no. Environment vars are very convenient, but they leak easily.


Citation, please.

No situation readily comes to mind where an attacker could gain access to env vars but would not also have access to any other means of persistence.


Great article



