As I start on my knowledge-transition from AWS to GCP (see prior article), one of the first things I wanted to figure out was "how do I re-use my project-related SSH-keys to access Linux-based, GCP-hosted instances similar to how I do it for AWS-hosted instances".
In general, I prefer not to make my cloud-hosted Linux instances directly-reachable via public IP addresses. In the case of AWS, I've spent the last few years leveraging SSH-over-SSM. AWS added the capability to access EC2s' interactive shells via SSM back in 2018. Initially, that access was mediated through a web-browser. However, not long after making the "shell via SSM" capability available through the AWS web consoles, AWS published an article on how you can leverage your local workstation's native SSH client.
Sadly, that article no longer seems to show up in web-searches, at least not high up in the results: there are now a number of people who have published their own blog-entries on the topic, and those somehow manage to rank higher than AWS's own guidance. I've even alluded to it in some of my other, SSH-tagged posts. Bleah. I'm not going to link to any of the "usurpers'" articles because I don't want to reward that behavior. However, my article, "So You Work in Private VPCs and Want CLI Access to Your Linux EC2s?", does link to another, less "easy-button" article that is on an AWS URL.
Lack of link-out to reference articles aside, the ability to use my local (OpenSSH) client tunneled through SSM has meant:
- Not having to deal with the AWS web console and its clunkiness (and, if your AWS accounts are security-focused in their setup, you might not even have access to the AWS web consoles).
- Not having to make your EC2s available via public IPs — be that an ephemeral IP, an elastic IP, or even a public-facing elastic load balancer. This, in turn, meant:
- Not having to make the choice of "do I allow the world to bang on my EC2s' SSH daemon or do I have to ass around with maintaining IP whitelists". Given that I'm a remote worker who's both mobile and a habitual user of VPN services, maintaining IP whitelists was always a freaking chore …especially if an account I was working in didn't allow access to the EC2 web console (so I could update my security-groups' whitelists)
- Not even having to sacrifice VPC-security by allowing public ingress — whether directly or via a bastion-host
- Being able to use scp from wherever I was working whenever I needed to transfer files from home to my EC2(s) …and not having to ass around with copying shit to an internet-reachable file-share from my workstation and then pulling the file(s) to the EC2(s)
Bonus: transiting SSM meant that there was additional, cloud-layer security-logging (even to the point that one can kludge together a keystroke-logger — there's a linkout to a how-to from my previously-linked "So You Work…" post). Providing additional logging capabilities generally makes your security folks happy.
Oof: tangent. Let's try to get back on track…
At any rate, GCP provides a native capability that's analogous to tunneling SSH over SSM. Specifically, the `gcloud` CLI utility makes it easy to leverage GCP's Identity Aware Proxy: it includes an SSH-client wrapper, accessed by using `gcloud compute ssh …` and tacking on the `--tunnel-through-iap` flag. Problem is, its default behavior is to use its own key, rather than one of your keys. Maybe it's just me, but it feels like the documentation for how to override that behavior is somewhere in the "ain't great" neighborhood. Similarly, the various web-searches I did were turning up other, "ain't great" guidance. Now, I'm going to contribute my own, probably "ain't great", guidance.
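To make the key-override concrete, the invocation ends up looking something like the following sketch. The instance-name, zone and key-path are placeholder values I made up; the `--ssh-key-file` flag is what points the wrapper at an existing key of yours rather than letting it fall back to its own:

```
# Placeholder instance/zone/key-path values; substitute your own.
gcloud compute ssh my-dev-instance \
  --zone=us-central1-a \
  --tunnel-through-iap \
  --ssh-key-file="${HOME}/.ssh/my-project-key"
```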
Tangent inbound…
One of the things that `gcloud compute ssh …` does is take whatever SSH key you're using and add it to your GCP project's metadata. You can see any such keys by looking at that metadata. In the GCP web-console, this is found by clicking through the service-menus ("Compute Engine → Metadata") and then clicking on the "SSH Keys" tab. If there are keys attached to the project, you will get a display like:
Similarly, you can get this information from the CLI:
```
gcloud compute project-info describe --format='json(commonInstanceMetadata.items)' | \
  jq -r '.commonInstanceMetadata.items | .[] | select(.key == "ssh-keys") | .value'
```
The above can probably be done more efficiently and wholly within the `gcloud` command (i.e., without needing to use `jq`), but I'm too lazy to continue banging on it.
</tangent>
With the SSH public-key attached to the project's metadata, the Google agent running within the VM will insert it into a user's "${HOME}/.ssh/authorized_keys" file. This allows the VM-administrator to SSH in with the matching private key (whether using `gcloud compute ssh` or some other SSH client). Upon logging in, one can see the results of the agent having done this by looking at the logged-in user's "${HOME}/.ssh/authorized_keys" file:
```
$ gcloud compute ssh \
    --tunnel-through-iap \
    --ssh-key-file="/tmp/rsa_test-20250522" \
    test-user@development-vm
WARNING: To increase the performance of the tunnel, consider installing NumPy. For instructions, please see https://cloud.google.com/iap/docs/using-tcp-forwarding#increasing_the_tcp_upload_bandwidth
Last login: Fri May 23 20:01:48 2025 from 35.235.244.34
[test-user@development-vm ~]$ cat ~/.ssh/authorized_keys
# Added by Google
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyjNt3anbU1XNaUbNLk8sZq+8lOY+WSQ/QHWAN+uzBKxNfZXIi/EnRjvgudMl4tiNVlEGPa+VA+he8TpQAomvSYSTgelNFaCzukiJ0wMJKKCb1u2QXRBV3k8ihbZx8nKE2OBHonOu4lGlMWl7P0mq4m7ir+t/1Pf9lGlbNvx1WgEdp4tnFO/eGIvBupPSwS8ew6p+ulwJBa9Po6KwNWg1UiG5BVLAejWJYZeBZ44dKQCc1i60ziFqr2lC4jktl032ftAGQaT+rA7RhppzErAn53eC5c70skt0EcFVd/y1773f2rjow+9VzSLJ9QKTSMp9meoLyqJpuctiwSLbCb4L2fSdsdXQcn+0ncEkbM4gvvqDWT8l4mL8Ar2xxYcIssEGqJ1uhLQgGPMXlb02PbePU8KIVt2ViW/s3fIwwUdNmewRxIdjPrIa2ddOmTy4SP6Js9lP/Y8yU4et9k9oLbl6eDg95d50uzFCIX5thEgQygWNrqBQjphWcbSvPO3kh1Z0= test-user@development-vm
```
Unfortunately, the `gcloud` utility's SSH wrapper doesn't — or, at least, doesn't reliably — know how to interact with any SSH-agent that might be in use. So, while it will do SSH agent-forwarding just fine (any keys in the local SSH-agent will show up if one executes `ssh-add -l` from the GCP-hosted instance), if the key-file you passed with the `--ssh-key-file` option has a passphrase associated with it, you'll likely be prompted for it.
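As a workaround, you can skip the wrapper entirely and drive your regular OpenSSH client through an IAP tunnel by way of `gcloud compute start-iap-tunnel`. This is just a sketch rather than anything I'm quoting from Google's documentation: the `--zone` and `--project` values below are made-up placeholders, while the user, hostname and key-path match the earlier example. Because plain `ssh` is doing the work, your SSH-agent and passphrase-caching behave the way they normally do:

```
# Plain OpenSSH through an IAP tunnel: gcloud just shuttles the bytes over
# stdin/stdout, so the local ssh client handles keys, agent interaction and
# passphrase prompting as usual. The --zone and --project values are
# placeholders; substitute your own.
ssh -i "/tmp/rsa_test-20250522" \
    -o ProxyCommand='gcloud compute start-iap-tunnel development-vm %p --listen-on-stdin --zone=us-central1-a --project=my-project' \
    test-user@development-vm
```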