
Tuesday, February 11, 2020

"Simple" Doesn't Always Mean It's Actually "Simpler"

Let it be known: if you're part of the group of people who have been foisting "simplified" markup tools on the community at large, I probably want to chop you in the Adam's apple. HTML just ain't that hard to learn, especially the basics you'd need for project documentation. And, if you find that your "simplified" documentation language isn't sufficient for documentation tasks, the solution isn't to continue down the path of making your "simplified" markup language more complex. That's simply a sign that you screwed up and should probably set fire to what you've done to date.

We've been through this before with the whole "SGML was too hard, let's create HTML" debacle. I don't want to be back here again in 10-15 years having to deal with a plethora of new "simplified" markup languages just because today's "simplified" markup languages have become too complex.


  • A dozen-plus flavors of things all claiming to be "markdown" isn't an improvement over knowing basic HTML and CSS.
  • Having to differentiate the subtleties between each of those flavors isn't an improvement over knowing basic HTML and CSS.
  • Relying on bridge markup tools like reStructuredText isn't an improvement over knowing basic HTML and CSS (especially if I have to pollute my Markdown with it). And, frankly, its syntax is clunkier and more gibberish-looking than either HTML or even troff/nroff.

Knock off the sprawling simplifications. You're not improving things; you're making things even more of a shit show (and, by extension, further discouraging people from writing documentation at all).

Thursday, December 5, 2019

Seriously Jenkins^H^H^H^H I Already Used That

A number of months ago, I delivered a set of CloudFormation templates and Jenkins pipelines to drive them. Recently, I was brought back onto the project to help them clean some things up. One of the questions I was asked was, "is there any way we can reduce the number of parameters the Jenkins jobs require?"

While I'd originally developed the pipelines on a Jenkins server that had the "Rebuild" plugin, the Jenkins servers they were trying to use didn't have that plugin. Thus, in order to re-run a Jenkins job, they had two choices: use the built-in "replay" option or the built-in "build with parameters" option. The former precludes the ability to change parameter values. The latter means that you have to repopulate all of the parameters' values. When a Jenkins job has only a very few parameters, using the "build with parameters" option is relatively painless. Once you start topping five parameters, it becomes more and more painful to use when all you want to do is tweak one or two values.

Unfortunately, for the sake of portability across this customer's various Jenkins domains, my pipelines require a minimum of four parameters just to enable tailoring for a specific Jenkins domain's environmental uniqueness. Yeah, you'd think that the various domains' Jenkins services would be sufficiently identical to not require this ...but we don't live in a perfect world. Apparently, even though the same group owns three of the domains in use, each deployment is pretty much wholly unlike the others.


At any rate... I replied back, "I can probably make it so that the pipelines read the bulk of their parameters from an S3-hosted file, but it will take me some figuring out. Once I do, you should only need to specify which Jenkins stored-credentials to use and the S3 path of the parameter file". Yesterday, I set about figuring out how to do that. It was, uh, beastly.

At any rate, what I found was that I could store parameter/value-pairs in a plain-text file posted to S3. I could then stream down that file and use a tool like awk to extract the values and assign them to variables. Only problem is, I like to segment my Jenkins pipelines ...and it's kind of painful (in much the same way that rubbing ghost peppers into an open wound is "kind of" painful) to make variables set in one job-stage available in another job-stage. Ultimately, what I came up with was code similar to the following (I'm injecting explanation within the job-skeleton to hopefully make things easier to follow):

pipeline {

    agent any

    […elided…]

    environment {
        AWS_DEFAULT_REGION = "${AwsRegion}"
        AWS_SVC_ENDPOINT = "${AwsSvcEndpoint}"
        AWS_CA_BUNDLE = '/etc/pki/tls/certs/ca-bundle.crt'
        REQUESTS_CA_BUNDLE = '/etc/pki/tls/certs/ca-bundle.crt'
    }

My customer operates in a couple of different AWS partitions. The environment{} block customizes the job's behavior so that it can work across the various partitions. Unfortunately, I can't really hard-code those values and still maintain portability. Thus, those values are populated from the following parameters{} section:
   parameters {
         string(name: 'AwsRegion', defaultValue: 'us-east-1', description: 'Amazon region to deploy resources into')
         string(name: 'AwsSvcEndpoint',  description: 'Override the AWS service-endpoint as necessary')
         string(name: 'AwsCred', description: 'Jenkins-stored AWS credential with which to execute cloud-layer commands')
         string(name: 'ParmFileS3location', description: 'S3 URL for parameter file (e.g., "s3://<bucket>/<key>")')
    }

The parameters{} section allows a pipeline-user to specify environment-appropriate values for the AwsRegion, AwsSvcEndpoint and AwsCred used for governing the behavior of the AWS CLI utilities. Yes, there are plugins available that would obviate the need for the AWS CLI but, as with the other plugins I can't count on being universally available, I can't rely on the more-advanced AWS-related plugins. Thus, I have to rely on the AWS CLI, since that one actually is available in all of their Jenkins environments. Were it not for the need to work across AWS partitions, I could have made the pipeline require only a single parameter: ParmFileS3location.

What follows is the stage that prepares the run-environment for the rest of the Jenkins job:
    stages {
        stage ('Push Vals Into Job-Environment') {
            steps {
                // Make sure work-directory is clean //
                deleteDir()

                // Fetch parm-file
                withCredentials([[
                    $class: 'AmazonWebServicesCredentialsBinding',
                    accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                    credentialsId: "${AwsCred}",
                    secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
                ]]) {
                    sh '''#!/bin/bash
                        # Use an explicit endpoint-URL override when one has been
                        # supplied (for compatibility with ancient AWS CLI utilities)
                        if [[ -n "${AWS_SVC_ENDPOINT:-}" ]]
                        then
                           AWSCMD="aws s3 --endpoint-url s3.${AWS_SVC_ENDPOINT}"
                        else
                           AWSCMD="aws s3"
                        fi
                        ${AWSCMD} --region "${AwsRegion}" cp "${ParmFileS3location}" Pipeline.envs
                    '''
                }
                // Populate job-env from parm-file
                script {
                    def GitCred = sh script:'awk -F "=" \'/GitCred/{ print $2 }\' Pipeline.envs',
                        returnStdout: true
                    env.GitCred = GitCred.trim()

                    def GitProjUrl = sh script:'awk -F "=" \'/GitProjUrl/{ print $2 }\' Pipeline.envs',
                        returnStdout: true
                    env.GitProjUrl = GitProjUrl.trim()

                    def GitProjBranch = sh script:'awk -F "=" \'/GitProjBranch/{ print $2 }\' Pipeline.envs',
                        returnStdout: true
                    env.GitProjBranch = GitProjBranch.trim()

                    […elided…]
                }
            }
        }

The above stage-definition has three main steps:
  1. The deleteDir() statement ensures that the workspace assigned on the Jenkins agent-node doesn't contain any content left over from prior runs. Leftovers can have bad effects on subsequent runs. Bad juju.
  2. The shell invocation is wrapped in a call to the Jenkins credentials-binding plugin (and the CloudBees AWS helper-plugin). Wrapping the shell invocation this way allows the contained call to the AWS CLI to work as desired. Worth noting:

    • The credentials-binding plugin is a default Jenkins plugin
    • The CloudBees AWS helper-plugin is not

    If the CloudBees plugin is missing, the above won't work. Fortunately, that's one of the optional plugins they do seem to have in all of the Jenkins domains they're using.
  3. The script{} section does the heavy lifting of pulling values from the downloaded parameters file and making those values available to subsequent job-stages.

The really important part to explain is the script{} section, as the prior two steps are easily understood from either the Jenkins pipeline documentation or the innumerable Google-hits you'd get on a basic search. Basically, for each parameter that I need to extract from the parameter file and make available to subsequent job-stages, I have to do a few things (a sample of the parameter file itself follows this list):

  1. I have to define a variable scoped to the currently-running stage
  2. I have to pull value-data from the parameter file and assign it to the stage-local variable. I use a call to a sub-shell so that I can use awk to do the extraction.
  3. I then create a global-scope environment variable from the stage-local variable. I need to do things this way so that I can invoke the .trim() method against the stage-local variable. Failing to do that leaves an unwanted <CRLF> at the end of my environment variable's value. To me, this feels like back when I was writing Perl code for CGI scripts and other utilities and had to call chomp() on everything. At any rate, absent the need to clip off the deleterious <CRLF>, I probably could have done a direct assignment. Which is to say, I might have been able to simply do:
    env.GitProjUrl = sh script:'awk -F "=" \'/GitProjUrl/{ print $2 }\' Pipeline.envs',
        returnStdout: true
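
For reference, the S3-hosted parameter file itself is nothing fancier than one parameter=value pair per line, which is why a simple awk invocation is enough to pull values out of it. A made-up sample (the parameter names match the pipeline above; the values shown are placeholders, not real ones):

GitCred=<jenkins-stored-ssh-credential-id>
GitProjUrl=git@github.com:<org>/<project>.git
GitProjBranch=master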
Once the parameter file's values have all been pushed to the Jenkins job's environment, they're available for use. In this particular case, that means I can then use the Jenkins git SCM sub-module to pull the desired branch/tag from the desired git project, using the Jenkins-stored SSH credential specified within the parameters file:

        stage("Print fetched Info") {
            steps {
                checkout scm: [
                        $class: 'GitSCM',
                        userRemoteConfigs: [[
                            url: "${GitProjUrl}",
                            credentialsId: "${GitCred}"
                        ]],
                        branches: [[
                            name: "${GitProjBranch}"
                        ]]
                    ],
                    poll: false
            }
        }
    }


But, yeah, sorting this out resulted in quite a few more shouts of "seriously, Jenkins?!?"

Friday, January 18, 2019

GitLab: You're Kidding Me, Right?

Some of the organizations I do work for run their own internal/private git servers (mostly GitLab CE or EE, but the occasional GitHub EE). However, the way we try to structure our contracts, we maintain overall ownership of the code we produce. As part of this, we do all of our development in our corporate GitHub.Com account. When customers want the content in their git servers, we set up a replication-job to take care of the requisite heavy-lifting.

One of the side-effects of developing externally this way is that the internal/private git service won't really know about the email addresses associated with the externally-sourced commits. While you can add all of your external email addresses to your account within the internal/private git service, some of those external email addresses may not be verifiable (e.g., if you use GitHub's "noreply" address-hiding option).

GitLab makes having these non-verifiable addresses in your commit-history not particularly fun or easy to resolve. To "fix" the problem, you need to go into the GitLab server's administration CLI and patch things up by hand. So, to add my GitHub "noreply" email, I needed to:

  1. SSH to the GitLab server
  2. Change privileges (sudo) to an account that has the ability to invoke the administration CLI
  3. Start the GitLab administration CLI
  4. Use a query to set a modification-handle for the target account (my contributor account)
  5. Add a new email address (the GitHub "noreply" address)
  6. Tell GitLab "you don't need to verify this" (mandatory: this must be said in an Obi-Wan Kenobi voice)
  7. Hit save and exit the administration CLI
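On an Omnibus-based GitLab install, the first three of those steps amount to something like the following (the host name here is a placeholder, and your privilege-escalation path may differ); the remaining steps all happen inside the console session shown below:

# 1: SSH to the GitLab server
ssh <gitlab-server>

# 2: Escalate to an account that can invoke the administration CLI
sudo -i

# 3: Start the GitLab administration CLI (a Rails console)
gitlab-rails console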
For me, this basically looked like:
-------------------------------------------------------------------------------------
 GitLab:       11.6.5 (237bddc)
 GitLab Shell: 8.4.3
 postgresql:   9.6.10
-------------------------------------------------------------------------------------
Loading production environment (Rails 5.0.7)
irb(main):002:0> user = User.find_by(email: 'my@ldap.email.address')
=> #
irb(main):003:0> user.email = 'ferricoxide@users.noreply.github.com'
=> "ferricoxide@users.noreply.github.com"
irb(main):004:0> user.skip_reconfirmation!
=> true
irb(main):005:0> user.save!
=> true
irb(main):006:0>
Once this is done, when I look at my profile page, my GitHub "noreply" address appears as verified (and all commits associated with that address show up with my avatar).

Thursday, August 16, 2018

Systemd And Timing Issues

Unlike a lot of Linux people, I'm not a knee-jerk hater of systemd. My "salaried UNIX" background, up through 2008, was primarily with OSes like Solaris and AIX. With Solaris, in particular, I was used to systemd-type init-systems, thanks to SMF.

That said, making the switch from RHEL and CentOS 6 to RHEL and CentOS 7 hasn't been without its issues. The change from upstart to systemd is a lot more dramatic than from SysV-init to upstart.

Much of the pain with systemd comes with COTS software originally written to work on EL6. Some vendors really only do fairly cursory testing before saying something is EL7-compatible. Many (especially earlier in the EL7 lifecycle) didn't bother creating systemd services at all. They simply relied on the systemd-sysv-generator utility to do the dirty work for them.

While the systemd-sysv-generator utility does a fairly decent job, one of the places it can fall down is when the legacy-init script (a file hosted in /etc/rc.d/init.d) is actually a symbolic link to someplace else in the filesystem. Even then, it's not much of a problem if "someplace else" is still within the "/" filesystem. However, if your SOPs include "segregate OS and application onto different filesystems", then "someplace else" can very much be a problem, specifically when "someplace else" is on a different filesystem from "/".
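
To make the failure-pattern concrete: the "init script" in /etc/rc.d/init.d is really just a sym-link pointing at a file on the application's own, separately-mounted filesystem. A quick, hypothetical illustration (the paths are made up, but mirror the pattern in question):

# Show where the legacy-init "script" actually lives
# (hypothetical paths; the real ones vary by application)
readlink -f /etc/rc.d/init.d/<script_name>
# => /opt/<APPLICATION>/etc/init.d/<script_name>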

Recently, I was asked to automate the installation of some COTS software with the "it works on EL6 so it ought to work on EL7" type of "compatibility". Not only did the software not come with systemd service files, its legacy-init files linked out to software installed in /opt. Our shop's SOPs are of the "applications on their own filesystems" variety. Thus, the /opt/<APPLICATION> directory is actually its own filesystem hosted on its own storage device. After doing the installation, I'd reboot the system. ...And when the system came back, even though there was a boot script in /etc/rc.d/init.d, the service wasn't starting. Poring over the logs, I eventually found:
systemd-sysv-generator[NNN]: stat() failed on /etc/rc.d/init.d/<script_name>: No such file or directory
This struck me as odd, given that the link and its destination very much did exist.

Turns out, systemd invokes the systemd-sysv-generator utility very early in the system-initialization process. It invokes it so early, in fact, that the /opt/<APPLICATION> filesystem has yet to be mounted when it runs. Thus, when the utility goes to do the conversion, the file the sym-link points to doesn't yet exist.

My first thought was, "screw it: I'll just write a systemd service file for the stupid application." Unfortunately, the application's starter was kind of a rat's nest of suck and fail; complete and utter lossage. Trying to invoke it directly via a systemd service definition resulted in the application's packaged controller-process not knowing where to find a stupid number of its sub-components. Brittle. So, I searched for other alternatives...

Eventually, my searches led me to both the nugget about when systemd invokes the systemd-sysv-generator utility and how to overcome the "sym-link to a yet-to-be-mounted filesystem" problem. Under systemd-enabled systems, there's a new-with-systemd mount-option you can place in /etc/fstab: x-initrd.mount. You also need to make sure that your filesystem's fs_passno is set to "0" ...and, if your filesystem lives on an LVM2 volume, you need to update your GRUB2 config to ensure that the LVM gets onlined prior to systemd invoking the systemd-sysv-generator utility. Fugly.
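
For illustration, the resulting /etc/fstab entry ends up looking something like the following (the device-path and filesystem-type are just examples from my own setup-pattern, not requirements):

# Application filesystem: note the "x-initrd.mount" option and the trailing fs_passno of "0"
/dev/mapper/appVG-optVol  /opt/<APPLICATION>  xfs  defaults,x-initrd.mount  0 0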

At any rate, once I implemented this fix, the systemd-sysv-generator utility became happy with the sym-linked legacy-init script ...And my vendor's crappy application was happy to restart on reboot.

Given that I'm deploying on AWS, I was able to accommodate setting these fstab options by including the following in my cloud-init declaration-block:
mounts:
  - [ "/dev/nvme1n1", "/opt/<application> , "auto", "defaults,x-initrd.mount", "0", "0" ]
This should work in any context that allows you to use cloud-init.

I wish I could say that this was the worst problem I've run into with this particular application. But, really, this application is an all around steaming pile of technology.

Monday, December 12, 2011

Who Cut Off My `finger`

Overall, I'm not trying to make this, my "more serious blog" a dumping-ground for rants. So, please forgive me this rant and please feel free to skip this post....

I've been using UNIX and similar systems for a long time, now. So, I'm kind of set in my ways in the things I do on systems and the tools I expect to be there. When someone capriciously removes a useful tool, I get a touch upset.

`finger` is one of those useful tools. Sadly, because people have, in the mists of time, misconfigured finger, security folks now like to either simply disable it or remove it altogether. Fine. Whatever: I appreciate that there might be security concerns. However, if you're going to remove a given command, at least make sure you're accomplishing something useful for the penalty you make system users pay in frustration and lost time. If you decide to remove the finger command, then you should probably also make it so I can't get the same, damned information via:

• `id`
• `who`
• `whoami`
• `last`
• `getent passwd <USERID>`
• (etc.)

If I can run all of those commands, I've still got the ability to get the data you're trying to hide by removing `finger`. So, what have you accomplished other than to piss me off and make it so I have to get the data via other avenues? Seriously: "WTF"?

Why the hell is it that, when someone reads a "security best practice", they go ahead and blindly implement something without bothering to ask the next, logical questions: "does doing this, by itself, achieve my security goal," "is the potential negative impact on system users more than balanced-out by increased system security", "is there a better way to do this and achieve my security goals" and "what goal am I actually acheiving by taking this measure." If you don't ask these questions (and have good, strong answers to each), you probably shouldn't be following these "best practices."