Friday, August 22, 2025

PluralSight: Am I Being Scammed?

Well, that's fucking dandy…

I've had a multi-year account with CloudGuru. It was useful when seeking new AWS certifications or brushing up for my re-sits. However, after they got bought by PluralSight, I set my account to not auto-renew as the content under the new site was not nearly as good as what had originally been available through CloudGuru, nor was it as good as what was available through Coursera or the CSP vendors' own learning-systems.

This morning, I get an email from PluralSight saying "You’ve renewed your subscription". Given that I'd set my CloudGuru account to not auto-renew and never set up new billing under PluralSight, this came as a bit of a shock. So, I clicked on the link in the email expecting it to allow me to view my account. I was presented with a login page for which I had no credentials. Similarly, hitting the "forgot my password" button never resulted in the promised password-reset email arriving.

Next, I checked the credit card account I previously had bound to my CloudGuru account. Unsurprisingly, I found a pending-charge from PluralSight. Contacted the phone number listed on the (pending) charge. Got dumped into a call-tree (naturally). Navigated the call-tree to sort out this billing "mistake". Got dumped into a hold-queue for nearly 15 minutes. Finally, the hold music ended and I was dumped over to a "there's no one available to answer your call: would you like to leave a message" automated-response. Uh... You couldn't have just dumped me straight to that if there's no one answering calls??? At any rate, I left a message. However, at this point, this all REALLY feels like a scam (or yet another private-equity enshittification cum cash-grab). I guess the best way that entities like PluralSight can make money off their acquisitions is to ignore the acquired companies' customers' wishes and just "convert"/renew them anyway.

Fuck those guys.

Wednesday, August 20, 2025

WSL Space Recovery

Recently, the backup client I use for my laptop started popping up "operation aborted: out of space" messages. I was confused because, the last time I'd looked — just a few days prior — I'd had over 100GiB of free space on my boot-drive. Yet, when I looked at my disk-utilization, I only had a few tens of MiB free. WHAT??

So, I began the process of tracking down what was suddenly chewing up all my disk-space. Ultimately, I found a nearly 70GiB "ext4.vhdx" buried deep within my Windows home-directory's "AppData" hierarchy.

Did a quick web search and found that what I was seeing was the virtual hard disk for my WSL2 instance. This confused me because I'd been pretty scrupulous in keeping my WSL2 instance's storage in check. In fact, when I checked, my WSL instance's visible storage was only 23GiB, nowhere near the 70GiB+ of the "ext4.vhdx" file that was backing that 23GiB of visible storage-usage.

Further Googling turned up that WSL doesn't really reclaim freed storage. So, the differential between the visible storage and the size of the "ext4.vhdx" file was effectively wastage: presumably, a significant portion of the virtual hard drive's allocated space no longer held live data.

Next thing I looked up was "how to reclaim wasted space in a WSL drive." Ultimately discovered that I needed to:

  1. Stop my WSL instance
  2. Back up my WSL instance
  3. "Optimize" my instance's hard drive
  4. Restart my WSL instance

The first step was dead easy: just fire up a cmd.exe (or PowerShell) session, then issue a `wsl --shutdown`.

Second step was also fairly easy, as it was something I was doing every few months anyway: while my backup client should be backing up the virtual hard disk, I don't trust that those backups are anything beyond "crash consistent". At any rate, I took the opportunity to do a:

wsl --export <WSL_DISTRO> G:\WSL_Backups\<WSL_DISTRO>\backup-$( date '+%Y%m%d' ).tar

Once I found articles on the subject, the VHD compression was also pretty straight-forward (and, thus far, no need for the backups):

  1. Open a PowerShell window (the optimization-command is a PowerShell cmdlet)
  2. Navigate to the directory hosting the VHD file
  3. Execute `Optimize-VHD -Path .\ext4.vhdx -Mode Full`

The optimization crushed the VHD file back down to a hair larger than the (internal to the instance) disk-size. 

Restarting my WSL instance is just going into my Search box and typing in the name of my instance, then clicking on the menu-item that appears.
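Pulled together, the whole cycle looks something like the following PowerShell sketch. The distro name and VHD path are placeholders; your "ext4.vhdx" will live somewhere under your Windows profile's "AppData" hierarchy, with the exact path varying by how the distro was installed.

```shell
# PowerShell sketch of the full reclaim cycle. <WSL_DISTRO> and the
# VHD path are placeholders; find your own ext4.vhdx first.
# Optimize-VHD requires the Hyper-V PowerShell module.
wsl --shutdown

# Belt-and-suspenders backup before touching the disk image
wsl --export <WSL_DISTRO> G:\WSL_Backups\<WSL_DISTRO>\backup.tar

# Compact the (now quiescent) virtual disk
Optimize-VHD -Path "<PATH_TO>\ext4.vhdx" -Mode Full

# Launching the distro again (from the Start menu or via
# `wsl -d <WSL_DISTRO>`) restarts the instance
```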

Situation fixed, my next problem was, "why the hell did I end up with such a huge disk-image in the first place?" The answer to that didn't really come until a few days later when my VHD blew up again.

I primarily use my WSL instance for work-related stuff. Recently, I'd been using podman to do some — a bunch, actually — of container work. Worse, some of that Podman-based container-work was resulting in Buildah images getting generated. Whenever I would run `podman system prune --all --volumes && podman system prune --external`, the tool would tell me I'd recovered 5-20GiB of space (particularly on the `--external` runs).

Those space-recovery numbers made it occur to me, "are my podman activities blowing my disk up?" So, after a fresh `… Optimize-VHD …` run, I decided to see if I could intentionally provoke a "VHD is multiples of my visible-use" situation. And, yes, I could.

Moral of the story: while you can use WSL instances with things like Podman, doing so will likely mean habituating yourself to more-frequent system-cleanup activities.
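The cleanup habit itself is cheap, at least. A sketch of what I now run inside the WSL instance — the prune flags are the same ones mentioned above; `--force` just skips the interactive confirmation prompt:

```shell
# Inside the WSL instance: the prune combo from above, made a habit.
# --force skips the interactive confirmation prompt.
podman system prune --all --volumes --force
podman system prune --external --force
# Sanity-check what the guest filesystem thinks it's using:
df -h /
```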

Friday, May 23, 2025

Using Arbitrary SSH Keys With GCP-Hosted Linux Instances (Or: "WTF: This Works on AWS")

As I start on my knowledge-transition from AWS to GCP (see prior article), one of the first things I wanted to figure out was "how do I re-use my project-related SSH-keys to access Linux-based, GCP-hosted instances similar to how I do it for AWS-hosted instances?"

In general, I prefer not to make my cloud-hosted Linux instances directly-reachable via public IP addresses. In the case of AWS, I've spent the last few years leveraging SSH-over-SSM. AWS added the capability to access EC2s' interactive shells via SSM back in 2018. Initially, that access was mediated through a web-browser. However, not long after making the "shell via SSM" capability available through the AWS web consoles, AWS published an article on how you can leverage your local workstation's native SSH client.

Sadly, this article no longer seems to show up in web-searches; at least, not high up in the search results. There are now a number of people who have published their own blog-entries on the topic, and those somehow manage to rank higher than AWS's own guidance. I've even alluded to it in some of my other, SSH-tagged posts. Bleah. I'm not going to link to any of the "usurpers'" articles because I don't want to reward that behavior. However, my article, "So You Work in Private VPCs and Want CLI Access to Your Linux EC2s?", does link to another, less "easy-button" article that is on an AWS URL.
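For posterity's sake, the crux of that AWS guidance was a single stanza in one's "${HOME}/.ssh/config" file that proxies any instance-ID-looking hostname through an SSM session (reproduced from memory, so verify against current AWS documentation):

```
# ~/.ssh/config: route SSH for EC2/managed-instance IDs through SSM
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```

With that in place, `ssh ec2-user@i-0123456789abcdef0` (and, by extension, `scp`) just works, no public IP required.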

Lack of link-out to reference articles aside, the ability to use my local (OpenSSH) client tunneled through SSM has meant:

  • Not having to deal with the AWS web console and its clunkiness (and, if your AWS accounts are security-focussed in their setup, you might not even have access to the AWS web consoles).
  • Not having to make your EC2s available via public IPs — be that an ephemeral IP, an elastic IP or even a public-facing elastic load-balancer. This in turn meant:
    • Not having to make the choice of "do I allow the world to bang on my EC2s' SSH daemon or do I have to ass around with maintaining IP whitelists". Given that I'm a remote worker who's both mobile and a habitual user of VPN services, maintaining IP whitelists was always a freaking chore …especially if an account I was working in didn't allow access to the EC2 web console (so I could update my security-groups' whitelists)
    • Not even having to sacrifice VPC-security by allowing public ingress — whether directly or via a bastion-host
  • Being able to use scp from wherever I was working whenever I needed to transfer files from home to my EC2(s) …and not having to ass with copying shit to an internet-reachable file-share from my workstation and then pulling the file(s) to the EC2(s)

Bonus: transiting SSM meant that there was additional, cloud-layer security-logging (even to the point that one can kludge together a keystroke-logger — there's a linkout to a how-to from my previously-linked "So You Work…" post). Providing additional logging capabilities generally makes your security folks happy.

Oof: tangent. Let's try to get back on track…

At any rate, GCP provides a native capability that's analogous to tunneling SSH over SSM. Specifically, the gcloud CLI utility includes the ability to easily leverage GCP's Identity Aware Proxy. The gcloud CLI utility includes an SSH-client wrapper, accessed by using `gcloud compute ssh …` and tacking on the `--tunnel-through-iap` flag. Problem is, its default behavior is to use its own key rather than one of your keys. Maybe it's just me, but it feels like the documentation for how to override that behavior is somewhere in the "ain't great" neighborhood. Similarly, the various web-searches I did were turning up other "ain't great" guidances. Now, I'm going to contribute my own, probably "ain't great", guidance.

Tangent inbound…

One of the things that `gcloud compute ssh …` does is that it takes whatever SSH key you're using and adds it to your GCP project's metadata. You can see any such keys by looking at your project's metadata. If you're in the GCP web-console, this is found by clicking through the service-menus ("Compute Engine → Metadata"). Once you're in the metadata console, you can click on the "SSH Keys" tab; any keys attached to the project will be listed there.


Similarly, you can get this information from the CLI:

gcloud compute project-info describe --format='json(commonInstanceMetadata.items)' | \
jq -r '.commonInstanceMetadata.items | .[] | select(.key == "ssh-keys") | .value'

The above can probably be done more efficiently and wholly within the `gcloud` command (i.e., without needing to use `jq`), but I'm too lazy to continue banging on it.
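To illustrate what that `jq` filter is actually plucking out, here's the same expression run against a trimmed-down, wholly made-up sample of the describe output (the key material is fake):

```shell
# Hypothetical, trimmed-down sample of what `gcloud compute
# project-info describe` emits; the jq filter below plucks out just
# the "ssh-keys" item's value.
cat > /tmp/project-info-sample.json <<'EOF'
{
  "commonInstanceMetadata": {
    "items": [
      {"key": "enable-oslogin", "value": "FALSE"},
      {"key": "ssh-keys", "value": "test-user:ssh-rsa AAAAfakekey test-user"}
    ]
  }
}
EOF
jq -r '.commonInstanceMetadata.items | .[] | select(.key == "ssh-keys") | .value' \
  /tmp/project-info-sample.json
# -> test-user:ssh-rsa AAAAfakekey test-user
```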

</tangent>

With the SSH public-key attached to the project's metadata, the Google agent running within the VM will insert it into a user's "${HOME}/.ssh/authorized_keys" file. This allows the VM-administrator to SSH in with the matching private key (whether using `gcloud compute ssh` or another SSH client). Upon logging in, one can see the results of the agent having done this by looking at the logged-in user's "${HOME}/.ssh/authorized_keys" file:

$ gcloud compute ssh \
  --tunnel-through-iap \
  --ssh-key-file="/tmp/rsa_test-20250522" \
  test-user@development-vm
WARNING:

To increase the performance of the tunnel, consider installing NumPy. For instructions,
please see https://cloud.google.com/iap/docs/using-tcp-forwarding#increasing_the_tcp_upload_bandwidth

Last login: Fri May 23 20:01:48 2025 from 35.235.244.34
[test-user@development-vm ~]$ cat ~/.ssh/authorized_keys
# Added by Google
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyjNt3anbU1XNaUbNLk8sZq+8lOY+WSQ/QHWAN+uzBKxNfZXIi/EnRjvgudMl4tiNVlEGPa+VA+he8TpQAomvSYSTgelNFaCzukiJ0wMJKKCb1u2QXRBV3k8ihbZx8nKE2OBHonOu4lGlMWl7P0mq4m7ir+t/1Pf9lGlbNvx1WgEdp4tnFO/eGIvBupPSwS8ew6p+ulwJBa9Po6KwNWg1UiG5BVLAejWJYZeBZ44dKQCc1i60ziFqr2lC4jktl032ftAGQaT+rA7RhppzErAn53eC5c70skt0EcFVd/y1773f2rjow+9VzSLJ9QKTSMp9meoLyqJpuctiwSLbCb4L2fSdsdXQcn+0ncEkbM4gvvqDWT8l4mL8Ar2xxYcIssEGqJ1uhLQgGPMXlb02PbePU8KIVt2ViW/s3fIwwUdNmewRxIdjPrIa2ddOmTy4SP6Js9lP/Y8yU4et9k9oLbl6eDg95d50uzFCIX5thEgQygWNrqBQjphWcbSvPO3kh1Z0= test-user@development-vm

Unfortunately, the `gcloud` utility's SSH wrapper doesn't — or, at least, doesn't reliably — know how to interact with any SSH-agent that might be in use. So, while it will do SSH agent-forwarding just fine (any keys in the local SSH-agent will be shown if one executes `ssh-add -l` on the GCP-hosted instance), if the key-file you passed with the `--ssh-key-file` option has a password associated with it, you'll likely be prompted for it.
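One workaround (my own kludge, not documented guidance, so treat it as a sketch) is to take the wrapper's key-handling out of the picture entirely: stand up the IAP tunnel yourself with `gcloud compute start-iap-tunnel`, then point a plain, agent-aware OpenSSH client at the local end. The instance name, zone and local port below are placeholder values:

```shell
# Terminal 1: stand up an IAP tunnel from localhost:2222 to the VM's
# SSH port (instance name and zone are placeholders).
gcloud compute start-iap-tunnel development-vm 22 \
  --zone=us-central1-a \
  --local-host-port=localhost:2222

# Terminal 2: a plain OpenSSH session, which *does* consult ssh-agent,
# run through the tunnel. Note that host-key checking will key off of
# "localhost", so expect a known_hosts wrinkle on first connect.
ssh -p 2222 test-user@localhost
```

The tunnel stays up until you kill the `start-iap-tunnel` process, so this also works for `scp`/`sftp` against port 2222.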