Thursday, February 12, 2026

I Hate PowerShell (Installment 3769)

One of my customers has a group of developers. They're also tasked with SRE-style responsibilities as things move from development to production. Since the production environment is significantly locked down, these developers cannot access it from their laptops. As such, they don't have the kind of service-management tooling available to them in case shit goes sideways. To work around this issue, they requested the creation of a "jump box" in the production environment that would have a reliable set of tools installed. Even though everything in production is either Linux- or Kubernetes-based, they wanted the "jump box" to be Windows-based. So, one of my peers hand-created a suitable "jump box" for them — installing all of the desired tools and creating the desired, RDP-enabled users.

In general, we don't like to have such hand-made boxes. They tend to be poorly maintained (such that their security auditors become sadder and sadder with the passing of each patching cycle) and, if they're the victim of system-breakage, there's no quick way to reconstitute them. Further, automating builds makes it much easier to do cost-control, since you can better implement an "instantiate as you need it" deployment (or repair/replace) model. As such, they needed someone to automate what had previously been hand-jammed.

Since I'd just come off of a project where I'd delivered a bunch of "hardening" automation, I was the stuckee. I, uh, don't generally work with Windows. I especially don't do automation for Windows-based systems. So, I accepted the task knowing that "fun times be ahead" for me.

Part of the task involved hardening the "jump box" using a framework we'd been pushing our various customers to use for the past decade or so. Since there was already a "bootstrap" (PowerShell) script written for this purpose, I opted to use that script as my starting-point — why (wholly) reinvent the wheel, eh?

That script worked by downloading and executing the hardening framework. The easiest path forward, for me, was to update that script to similarly download and execute my script.
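The pattern itself is dead simple. A minimal sketch (the URL and file-names here are illustrative, not the actual bootstrap script's):

# Fetch the follow-on script, then run it
$ScriptUrl  = 'https://example.com/bootstrap/Install-JumpBoxTools.ps1'
$ScriptPath = "${env:TEMP}\Install-JumpBoxTools.ps1"

Invoke-WebRequest -Uri $ScriptUrl -OutFile $ScriptPath
& $ScriptPath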

At any rate, the first pass at authoring my script was just to install a set of developer-oriented (really, more like "SRE-oriented") tools. I wrote it in such a way that the automation-user could pin specific versions of the desired tooling (by giving the URLs of the tools they wanted installed). It worked well enough. However, it didn't include the creation of RDP-enabled users. As such, once the "jump box" was built, the developers/SREs couldn't log in without someone hand-jamming the creation of local users.
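The "pin by URL" idea looks something like the following sketch (the URLs and the save-directory are made up for illustration; the real script's values differ):

# Caller-supplied installer URLs: version-pinning happens by simply
# pointing at the desired release's download
$ToolUrls = @(
  'https://example.com/downloads/kubectl/v1.29.0/kubectl.exe',
  'https://example.com/downloads/helm/v3.14.0/helm.zip'
)

foreach ($Url in $ToolUrls) {
  # Derive the local file-name from the last URL-component
  $SavePath = "C:\Tools\$($Url.Split('/')[-1])"
  Invoke-WebRequest -Uri $Url -OutFile $SavePath
}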

Today, I extended my script to allow the use of an external, JSON-formatted user-specification file.
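The file's structure (which the parsing function, shown below, expects) looks roughly like the following; the names and passwords are, obviously, illustrative:

{
  "Users": [
    {
      "jdoe": [
        {
          "givenName": "Jane",
          "surname": "Doe",
          "initialPassword": "Sup3rS3cret!",
          "localAdmin": "true"
        }
      ],
      "bsmith": [
        {
          "givenName": "Bob",
          "surname": "Smith",
          "initialPassword": "AlsoS3cret!"
        }
      ]
    }
  ]
}

Note that "bsmith" has no `localAdmin` attribute: omissions like that are exactly what tripped things up.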

I wanted the various users' attributes to be flexible (e.g., the "is an admin user" attribute should allow a value that was any of `true`, `false` or undefined). The version of PowerShell on the automation-targets apparently has some kind of "strict" mode set as the default behavior. So, when three of my JSON-file's testing user-definitions didn't have the "is an admin user" attribute defined, PowerShell noped-out on that "problem".
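A minimal repro of the behavior (assuming `Set-StrictMode` is what's in effect on the targets):

Set-StrictMode -Version Latest

$User = '{ "givenName": "Jane", "surname": "Doe" }' | ConvertFrom-Json

# Fine: the property exists
$User.givenName

# Throws a PropertyNotFoundException under strict mode (rather than
# quietly evaluating to $null, as it would otherwise)
$User.localAdmin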

So, I did some digging around (Microsoft documentation, StackExchange, etc.) and figured out how to write the block in a manner that operates properly when the "strict" mode is set.

The result was, to me, quite ugly compared to automation I've written for Linux-based hosts. To implement things in a way that left the "strict"-mode interpreter happy, I wrote my function like:

function Parse-JsonFile {
  # Where to write the downloaded user-creation spec-file
  $UserCreationFile = "${SaveDir}\$(${UserCreationUrl}.split("/")[-1])"

  # Download user-creation spec-file
  Download-File -Url ${UserCreationUrl} -SavePath ${UserCreationFile}

  # Abort if given file-path is not valid
  if ( -not ( Test-Path $UserCreationFile ) ) {
      Write-Error "File not found: $UserCreationFile"
      return
  }

  # Load JSON-payload from file and convert to PS object
  $JsonStream = Get-Content -Raw -Path "${UserCreationFile}" | ConvertFrom-Json

  # The structure has a 'Users' array containing a single object with dynamic keys
  foreach ($userContainer in $JsonStream.Users) {
    # Iterate through each dynamic key (the usernames)
    foreach ($username in $userContainer.psobject.Properties.Name) {
      # Get the array associated with that username
      $userDetails = $userContainer.$username

      foreach ($detail in $userDetails) {
        # Create "full name" attribute to user-creation function
        $FullName = ${detail}.givenName + " " + ${detail}.surname

        # Safely set WantsAdmin in a Strict-mode safe way
        if ( $detail.psobject.Properties.Name -contains "localAdmin" ) {
          $WantsAdmin = ${detail}.localAdmin
        } else {
          $WantsAdmin = $null
        }

        if ( ${WantsAdmin} -eq "true" ) {
          Create-User -UserUidName "$username" `
            -UserFullName ${FullName} `
            -UserPasswd ${detail}.initialPassword `
            -UserIsAdmin
        } else {
          Create-User -UserUidName "$username" `
            -UserFullName ${FullName} `
            -UserPasswd ${detail}.initialPassword
        }
      }
    }
  }
}

I understand that this is far from an optimal way to do things. I hate how non-terse PowerShell makes me write things. However, I am not a PowerShell guy. It works. This is the first pass at it. I'll probably try to improve it in future iterations.
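For what it's worth, one terser strict-mode-safe idiom I turned up in my digging (treat it as a sketch; I haven't yet exercised it against the automation-targets) is to use the PSObject property-collection's indexer, which returns `$null` for absent names instead of throwing:

# The indexer quietly returns $null for properties that don't exist,
# even under strict mode
$AdminProp = $detail.PSObject.Properties['localAdmin']
if ( $AdminProp ) {
  $WantsAdmin = $AdminProp.Value
} else {
  $WantsAdmin = $null
}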

I still hold out hope that we can convince them to use a Linux-based "jump box" (since, as mentioned earlier, their production environment is wholly Linux- and Kubernetes-based) …at which point, I'll probably be tasked with figuring out how to RDP-enable a box with similar tooling and user-access.

Tuesday, January 6, 2026

Crib Notes: Removing Un-Tracked "Junk" From Git Repositories

I have a couple of git-managed projects where the projects' CI-configuration takes documentation-inputs — usually Markdown files — and renders those inputs into other formats (usually HTML for hosting on platforms like Read The Docs). While the documentation-inputs are tracked in git, the rendered outputs are not.

Indeed, they're normally not even generated in contributors' local copies of the GitHub- or GitLab-hosted repositories. At best, the projects are adequately Docker-enabled to make it easy to generate "preview" renderings (saving one from uploading documentation-updates that contain errors, as well as the time and resources lost to server-side rendering of the contents).
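Such a "preview" run typically looks something like the following (the image-name is a stand-in; substitute whatever renderer the project actually wraps):

# Render the docs locally; output lands in an un-tracked directory
# (e.g., ./site or ./build)
docker run --rm -v "$(pwd):/docs" example/docs-renderer:latest build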

If one does avail themselves of the "preview" capability, it can leave grumph in the local repository copies. This grumph can lead to non-representative (i.e., "stale") content being previewed. To avoid this, one generally wants to ensure that such "preview" content is cleaned up before the next (local) generation of "preview" content is performed.

The git client provides a nifty little method for performing cleanups of such content: `git clean`. Unfortunately, running a bare `git clean` typically won't produce the desired results; one needs to add further flags. I've found that, for my use-cases, the most appropriate/thorough invocation is `git clean -fdx`.
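Since `-f` deletes without further ceremony, it's worth substituting `-n` (dry-run) first to see what would be swept away:

# Preview what would be deleted: -n = dry-run, -d = include un-tracked
# directories, -x = include even .gitignore'd content
git clean -ndx

# Then actually delete it
git clean -fdx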

This is also useful if, in the course of updating a repository — say, as part of a significant refactor — you find you've done a number of `mv <DIR>{,-OLD}` types of operations (not exactly the textbook method to underpin refactors, but it provides an easy path for "before/after" comparisons). Such directories and similar content will also get wiped away by `git clean -fdx`.
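Concretely, that flow looks something like the following (directory-name purely illustrative):

# Keep the old tree around for comparison while refactoring
mv config{,-OLD}

# ...refactor, re-creating a new config/ tree, then compare...
diff -r config-OLD config

# Once satisfied, sweep the un-tracked "-OLD" copy away
git clean -fdx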