Unfortunately, my customer is very early in their DevOps journey. While they have some privately-hosted toolchain services, those aren't really fully fleshed out: their GitLab has no runners; their Jenkins isn't generally accessible; etc. In short, there's not a lot of ability to develop in their environment — at least not in a way that allows me to set up automated validation of my work.
Ultimately, I opted to move my initial efforts to my laptop with the goal of exporting the results. Because my customer runs a RHEL environment, I set up RHEL and CentOS 7 and 8 VMs on my laptop via Hyper-V.
Side-note: While on prior laptops I used other virtualization solutions, I'm using Hyper-V because it came with Windows 10, not because I prefer it over other options. Hypervisor selection aside…
As easy as VMs are to rebuild, I've yet to take the time to automate my VMs' builds to make it less painful if I do something that renders one of them utterly FUBAR. Needless to say, I don't particularly want to crap-up my VMs right now. So, how to provide a degree of blast-isolation within those VMs and hopefully avoid a not-yet-automated rebuild?
Containers can be a great approach. And, for something as simple as experimenting with Ansible and writing actual playbooks, it's more than sufficient. That said, since my VMs are all Enterprise Linux 7.8 or higher, Podman seemed an easier path than Docker ...and definitely easier than either full Kubernetes or K3s. After all, Podman is just a `yum install` away from being able to start cranking out containers. Podman also means I can run containers in user-space (without needing to set up Kubernetes or K3s), which further limits how hard I can bone myself.
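For anyone wanting to follow along, getting to that point really is about as simple as it sounds. A rough sketch (the UBI 8 image is just an illustrative choice of base, and repo availability will vary with how your system is subscribed):

```bash
# Podman is in the stock repos for EL 7.8+ (the "extras" repo on EL7)
sudo yum install -y podman

# Spin up a disposable, rootless container to bang on playbooks in;
# --rm means it cleans itself up when I exit the shell
podman run --rm -it --name ansible-sandbox \
    registry.access.redhat.com/ubi8/ubi /bin/bash
```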
At any rate, I've been playing around with Ansible, teaching myself how to author flexible playbooks and even starting to write some content that will eventually go into production for my customer. However, after creating and destroying dozens of containers over the past couple of weeks, I happened to notice that the partition my `${HOME}` is on was nearly full. I'd made the silly assumption that, when I killed and removed my running containers, the associated storage was released. Instead, I found that my `${HOME}/.local/share/containers` was chewing up nearly 4GiB of space. Worse, when I ran `find` (ahead of doing any `rm`s), I was getting all sorts of "permission denied" errors. This kind of surprised me since I thought that, by running in user-space, any files created would be owned by me.
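If you want to check whether you're in the same boat, the damage is easy enough to see without deleting anything. A couple of read-only commands (nothing here is destructive):

```bash
# Podman's own accounting of how much space its images,
# containers and volumes are consuming
podman system df

# ...and the raw, filesystem-level view of the rootless storage area
du -sh ${HOME}/.local/share/containers
```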
So, I hit up the almighty Googs. I ended up finding Dan Walsh's blog entry on the topic. Turns out that, because of how Podman uses user namespaces when running rootless, it creates files that my non-privileged user can't actually directly access. Per the blog entry, instead of being able to just do `find ${HOME}/.local/share/containers -mtime +3 | xargs rm`, I had to invoke `buildah unshare` and do my cleanup from within that context.
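In practice, that looks roughly like the following (the `-mtime +3` age-filter is just what I happened to use; season to taste):

```bash
# "buildah unshare" drops you into a shell inside the user namespace,
# where files owned by your subordinate UIDs show up as owned by "root"
buildah unshare

# ...and, from inside that shell, the cleanup pipeline actually works
find ${HOME}/.local/share/containers -mtime +3 | xargs rm
exit
```

Worth noting: Podman has its own `podman unshare` subcommand that does the same thing, if you'd rather not pull in Buildah just for this.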
So, "today I learned" ...and now I have over 3GiB of the nearly 4GiB of space back.
So, "today I learned" ...and now I have over 3GiB of the nearly 4GiB of space back.