Do not connect with agent forwarding, as doing so would allow the server operator to connect to other locations as you.
Do not forward environment information, though the typical ssh default is not to.
You will likely leak your username. If you connect from an internet-reachable host, and you made the mistake of skipping the first item in this list, they could easily connect back to you, no zero-days required.
Other, probably lower-ROI, attacks might include forcing you down to extremely poor protocol versions or crypto options, resulting in potential information exposure if you stayed online long enough to push a relevant sample of traffic. I would pin the client to a very tight set of allowed protocols and cipher suites.
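To make the pinning concrete, here's a sketch of what that might look like in ~/.ssh/config. The host alias is hypothetical; the algorithm names are valid OpenSSH options today, but check what your client actually supports (`ssh -Q cipher`, `ssh -Q kex`) and adjust to whatever is currently recommended:

```
# Pin an untrusted host to one modern option per category
Host untrusted-demo
    KexAlgorithms curve25519-sha256
    Ciphers chacha20-poly1305@openssh.com
    MACs hmac-sha2-256-etm@openssh.com
    HostKeyAlgorithms ssh-ed25519
```

With this, a malicious or broken server that only offers weaker algorithms simply fails to negotiate, rather than silently downgrading you.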
Your terminal emulator should ideally be sandboxed; iTerm, xterm, rxvt, etc. have all had bugs found, and most aren't regularly fuzzed.
Similarly, having been in the ssh code base plenty, I'm not really sure I would wholly trust the standard openssh(1) client post-auth against a malicious server. It's highly macro-conditioned C with subtle semantics and invariants spread all over the place, extremely large functions, in-line parsing, and in-house crypto. It does some things well, like trying to clear keys from memory early, but it's not written in a safe language, nor is it written in a safe way. As far as I know, the client is not fuzzed (though I'd be happy to find out I'm wrong). It also, depending on configuration, calls out to other libraries with an unfortunate history, zlib in particular; while there hasn't been a known recent issue there, there have been serious issues in the past.
Depending on how it was sourced, there may be other issues too. If you look in the OpenBSD repository for example, you'll find the libz it is linking is from zlib 1.2.3, so a good 10 years older than the last relatively serious zlib exploit, which is about 5 years old. The zlib changelog in OpenBSD does not seem to include the patch for CVE-2016-9841. This doesn't prove anything that significant, only points out the reality that this stuff doesn't get as many eyeballs as it really should. I just went diving for 10 minutes and this is what I found. In case you're wondering, the function in question is called from inflate, which is called from ssh_packet_read_poll2 (one of the aforementioned extremely long and macro-configured ssh functions), and is called in both the server and client dispatch code.
Using a modern web browser is a much safer way to go about this, in the end.
> As far as I know, the client is not fuzzed (though I'd be happy to find out I'm wrong).
Just touching on this one part (the rest still applies): OpenSSH does use fuzzing. [0][1] Both the client and the daemon are fuzzed using AFL, though it seems to be done on an ad-hoc basis rather than automated; it generally happens before a new release.
Unfortunately, to run AFL on OpenSSH they have to patch it a bit, so what gets fuzzed and what gets released aren't 1-to-1. This is because privilege separation tends to defeat AFL's methods of detecting most of those sorts of bugs on its own.
Note that they could only log back into your machine if you use the same credentials between machines.
This is one of the arguments for generating a unique SSH key on each machine you use. It makes it far harder to break in if you mess up somewhere along the way.
Not necessarily. If you have multiple keys active in your local SSH agent, then connect to a malicious host with agent forwarding enabled, the malicious host could try to connect to a third host, and I believe it will try to use all active keys from the local agent.
Personally, my approach is to use a unique GPG authentication key per machine with gpg-agent. They can't log back into the current machine, and unless it's a targeted attack they shouldn't have any knowledge of my other machines.
Of course, there's a list of common services they could probably try and gain access to, like push/pull on github/gitlab. However, as long as those common services have another layer of protection (e.g. mandatory commit signing), it should limit the effective attack surface pretty well.
I also generally find that ssh connections will be one way (i.e. you typically only set up SSH authentication to flow in a specific direction). As long as your SSH authentication graph is directed and acyclic (i.e. no loops and connections only go in one direction), there is little ability for a malicious server to access other nodes in the SSH auth graph provided you connect from a leaf or near leaf node.
I don't use agent forwarding because of these issues, but there are definitely ways to reduce the attack surface it creates.
Docker containers aren't provably secure. If you want isolation, use a VM that doesn't have host file system access. This way, if the VM is compromised, just throw it away and it can't leak out the way containers do.
Not only are they not provably secure (very few things are), they are explicitly not intended for use as a security boundary. Their whole gimmick is lightweight containers you can use instead of VMs if you trust everyone who's going to run code under them.
To disambiguate: I don't mean formal verification like seL4, I mean it hasn't been thoroughly audited to show it is reasonably secure. Docker security of images and running containers is pretty shit as I brought up on GH in the beginning. Developers just shrugged it off and focused on whiz-bang features.
The conflation of what amounts to fancy Linux cgroups trickery with hypervisors is a depressing misunderstanding of isolation.
It's not enabled by default, but unfortunately I've seen many SSH config related articles that advocate some scary stuff like setting ForwardAgent yes for Host * combined with ssh-add <every-key> in .zshrc/.bashrc
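For contrast, here's a sketch of the scary pattern next to a scoped one (the internal host name is hypothetical):

```
# Dangerous: forwards your agent to every server you ever connect to
Host *
    ForwardAgent yes

# Safer: forward only to a specific host you control and trust
Host build.internal.example
    ForwardAgent yes
```

Since ssh_config applies the first matching value, a scoped `Host` block only helps if the `Host *` forwarding line is removed, not merely overridden later in the file.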
A privacy precaution would be to `ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no jobs.hackclub.com`. By default, ssh will offer all of its public keys to a server unless given an identity file to use as an arg or in ~/.ssh/config.
The “public” in public key just means it doesn’t need to be secret, for cryptographic purposes. It’s different to your public identity as a person — I don’t think I’ve ever seen an ssh key used for that, in practice.
I might have multiple ssh key pairs related to my different roles as: high school teacher, two different GitHub users, peer to peer pharmaceuticals distributor, and upstanding private citizen.
I cannot see a scenario where prospective employers would want to connect these identities.
Employers are greatly interested in how prospective employees feel about following the law. Learning about your "street pharmacy side-gig" gives a clear answer of that.
Huh? People routinely have public identification they don’t share with prospective employers. I personally use the same handle everywhere so any employer who doesn’t want me can go to hell without me telling them. A lot of people keep quite public things quite private from their employers. And why shouldn’t they? Their employers are not their owners.
Yes, but there's a reason those are called "public" keys. The reason is that you don't suffer any harm by giving them out.
Except that they may be publicly identified with you. In that case, and only that case, giving them out would involve purporting to be the person who is publicly associated with the keys. (It wouldn't prove it, because, after all, those keys are public; anyone can know and distribute them.)
So this concern appears to be that you want to apply for a job without disclosing your identity. I think that's a strange thing to do.
The other reply to this comment misses the fact that a nonce is used in the client authentication process. Thus, one server to which you successfully authenticate using a public key cannot replay that authentication against a different server that accepts the same key. A unique value is sent to the client, hashed, and then signed with the private key.
Anyone can download your public SSH keys from GitHub (github.com/<username>.keys). The Ubuntu Server installed uses this to make setting up a mostly headless server easier.
If you mean geofft's comment, I don't believe they're talking about a replay attack. thaumasiotes wrote "It wouldn't prove it, because anyone could be presenting the public key", but geofft is saying that if the server claims to recognize the key and requests to continue authentication using it, then your client will potentially provide the proof—invisibly and automatically, if the private key is passwordless/agent-loaded. There is no second server; this is the original server being able to confirm that you are actually in possession of a supposedly-unrelated-to-anything key. (I have not verified whether the order of operations in the protocol actually works this way; I'm just interpreting what geofft is saying.)
> (It wouldn't prove it, because, after all, those keys are public; anyone can know and distribute them.)
I don't believe this is true, right? You do a private key operation demonstrating you possess the private key associated with the public key.
Or, by contradiction: Since the key is public, any server can put the key in an authorized_keys file. It can then challenge you to log in in a way that exactly matches what a real server you'd actually want to log into would do, because a real server doesn't have your private key either. If your client could also authenticate to the server in a way that didn't prove anything beyond possession of the public key, then it could do the same to some actual server, i.e., the SSH protocol would have no meaningful authentication at all. Because we know the SSH protocol is not completely and trivially broken, this cannot be true.
(I think you also overestimate the value of technical deniability - certainly outside a court of law, nobody is obligated to think, "Well, it could be a complete coincidence, so I'm going to disregard this piece of information I just learned." And I wouldn't bet on it inside a court of law either.)
If I had two servers allowing ssh, and you logged into one of them by providing a public key which I added to the authorized_keys file, it would be a good guess that it was still you if you logged into my second server the same way.
Is that what you’re trying to say?
Granted it still wouldn’t prove it, because we are not our ssh keys. We’re all potentially one malware infection away from having our private keys compromised. Also, if someone wanted to "shed" the identity associated with a public key they could always just "accidentally" leak the private key in a public git commit.
> Also, if someone wanted to "shed" the identity associated with a public key they could always just "accidentally" leak the private key in a public git commit.
That would allow anyone to prove that they owned the public key, which prevents the original owner from using it. But it seems like, if you want to stop using the key, it's simpler to just stop using it. What does leaking the private key accomplish that deleting the private key doesn't also accomplish?
> I think you also overestimate the value of technical deniability
Huh? I presented the claim to identity that submitting a public key implicitly makes as being the only thing that our hypothetical applicant is seeking to avoid. I valued the technical deniability at zero.
But I said above, and say again here, that most job applicants are not seeking to avoid disclosing their identity as they apply for a job. They are usually specifically trying to highlight it.
I would say more specifically: if you have any SSH keys that are associated with non-career nyms (which is a perfectly reasonable thing to do) then keep them separate and only include them for known associated hosts, perhaps by using IdentityFile in Host blocks in .ssh/config.
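A sketch of that pattern in ~/.ssh/config (the alias, user, and file names are hypothetical). `IdentitiesOnly yes` is the important part: it stops the client from offering every loaded key to the host:

```
Host github-nym
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_nym
    IdentitiesOnly yes
```

You'd then connect with `ssh github-nym` (or `git clone git@github-nym:...`), and no other key in your agent is ever presented to that host.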
I think the OP's service is pretty cool, it reminds me of ye olde BBS's. I am actually writing my personal resume as a command prompt based on old 80's PCs as well, albeit in HTML/JS, so I do dig the aesthetic.
Application-wise: The statement "In case you want to apply for a job without admitting who you are" assumes that the service is actually for job applications, something we have no trust in or knowledge of beyond the title of a post on a public forum.
Identity-wise: You're also making the assumption that key = person. Keys can be set up to authenticate client applications and remote services with each other. People can have dozens of keys for various things they have installed via wizards or copy-pasting tutorials which they may not even be aware of. Key pairs are also shared by email and internal docs far more often than they should be with limited control over who they are distributed to.
Harm-wise: If I were an evildoer, I would have spent my career obtaining and organising databases full of all sorts of information; email addresses, hashed passwords, usernames / aliases, phone numbers, etc. I'd definitely have a special database set aside for key-pairs I've scraped from various plaintext sources that I haven't found a use for. The opportunity to target a subset of industry professionals (with presumably more privileged access to information than the average joe) to correlate even a small fraction of known public keys with specific IPs, email addresses, even hackernews aliases would be a huge value add to my "services". You could just slurp the data in, then even if you get no hits, maybe a year or two down the line it becomes relevant.
For anyone dealing with this kind of threat vector on the daily the stakes are pretty high and can include bankruptcy and professional ruin. Yeah we all visit random websites, but it's not every day people connect to an SSH server outside of their trust network. Do you really wanna be that guy whose key was used to leak a database full of medical data or something?
The audience of this website includes people who work with PII and may not be familiar with the intricacies of the SSH command-line utility, and the state of affairs in information security in IT-backed organisations is pretty bleak, as we see every single day. So in this context I don't think it's cool to bash people for being privacy conscious.
If you use the same public key across services then there's a good chance your user can be identified. GitHub, for example, publishes users' public keys [0]. So if I re-use the same public key, you know it's me. Re-using the same public key is bad for privacy, and it gets worse if you combine it with other security nightmares:
With agent forwarding the remote can enumerate all of your unlocked keys. The solution is 1) do not enable agent forwarding and 2) do not use key agents.
With X11 forwarding the remote side has basically full access to your local session. The solution is don't enable X11 forwarding.
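Both forwarding options already default to off in stock OpenSSH, but you can make that explicit in ~/.ssh/config as a guard against a stray override (a sketch; with agent forwarding on, anything on the remote that can reach your forwarded $SSH_AUTH_SOCK can run the equivalent of `ssh-add -L` and sign with your keys):

```
Host *
    ForwardAgent no
    ForwardX11 no
```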
I remember back when there were some code-execution bugs in PuTTY: a friend would pose as a naive Linux noob on IRC, go into hacking channels, and ask if people could help him fix some problem, and he would get shells on anyone who tried to log in to his machine.
Not quite security-related, but ssh is very pushy about host key verification and insists on adding keys to known hosts. That isn't always a desired behavior, so I have this:
alias sshn="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
And here I thought this was for applying for jobs :( For people like me who live in SSH and C. Sadly, it appears to be some sort of MUD with a slack channel.
Pretty sweet job listing :) I indirectly worked with Hack Club in High School - really great experience. They helped organize a bunch of events and foster a community.
If you're doing agent forwarding in your .ssh/config for *, this could be a massive security threat. Mods should probably put a disclaimer at the top of this thread.
Not GP, but pretty frequently, although my situation is not normal.
I'd still suggest that sharing your keys by default is ... a bad idea. Sure, fingerprints help, but there's the chance of accidentally or maliciously being redirected to a rogue server.
I agree * is unsafe, but then again the ssh devs chose to include it, which means they deemed it not completely useless; thus there are probably enough people who use it to make this thread somewhat dangerous for them.
Also if you or another script once added * to your .ssh/config long ago it may still be lurking there among other lines without you realizing.
I agree checking fingerprints helps, but that can be turned off pretty easily too.
You should use a 301 (permanent) redirect, not a 302 (temporary) redirect as you currently are.
Just speaking generally, I found the use of an http: link on this very curious, given that it’s about SSH (and thus encryption in no little part), so I’m guessing you’re not a “Web Developer”. So here’s my advice on the matter: everything on the web should be HTTPS now; nothing should be plain-text HTTP, with zero exceptions.
I’m going to be callous and say that Amiga browsers aren’t on the web. It’s a dead platform, if you’re actually targeting something for it you’re kind of not targeting the web proper, but rather a historical artefact that kinda lives in the same space. :-)
In fairness, isn't that a result of your relationship with your ISP? If you stay because of price, or because no other service is available, why wouldn't you use a VPN, knowing your ISP is a hostile actor and probably trying to deeply inspect packets, etc.?
> You can be your own vpn provider if thats a big concern.
You just shift the trust around: now I have to trust the hosting provider, e.g. OVH, instead of my local ISP. Really, the best thing you can do is end-to-end encryption; don't send plaintext over the internet.
> Your isp knows you visited a certain domain with https. That's a concern.
Practically speaking, it is, because some modern browsers (and extensions) will throw up an error (either bypassable or not) if a site is HTTP-only. Yeah, it's not necessary in many cases, but it's growing to be the standard expected behavior.
Because it’s the expected behavior. If you showed up to an interview for an attorney position and your interviewer was shirtless you’d think they’d lost their mind.
There tend to be about 0–2 http: entries on the front page. A fairly large fraction of those are old things. http: submissions on domains that support https: (whether or not they redirect to it by default) are very uncommon.
SSH, once you (verify and) accept the first key exchange, cannot be tampered with in transit, nor can the contents be viewed by anyone sitting in between.
However, I believe everyone's stdout/stderr is available to everyone else if everyone is logged in as the same user and that user has read access to /proc, so confidentiality is only restricted to those with access to the server.
This is actually just a Go app which implements SSH using the Go standard library's SSH module. It's not like you're really SSH-ing into a server. See the source: https://github.com/hackclub/jobs
In "public access single-app" cases like this, the user is generally connected to a restricted shell or other program which (… supposedly—this is very easy to get wrong if you're running anything that isn't specifically built for it) does not allow arbitrary executable access.
It's also relatively trivial to set something like this up, as the SSH server supports auth delegation using PAM, and PAM can be configured with a single line using pam_succeed_if.so to relax auth for only a certain user.
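A sketch of what that single line might look like (the account name is hypothetical, and this assumes sshd was built with PAM support and has `UsePAM yes`; it's illustrative, not a hardened config):

```
# /etc/pam.d/sshd
# "sufficient": if the condition matches, auth succeeds immediately for that user
auth    sufficient    pam_succeed_if.so user = guest
```

Every other user still falls through to the normal auth stack below this line.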
Imagine making a job board work over ssh and then using slack to communicate.
Especially when things like this exist and are much more in the spirit: https://github.com/shazow/ssh-chat