Thursday, August 25, 2016

Use the Force, LUKS

Not like there aren't a bunch of LUKS guides out there already ...mostly posting this one for myself.

Today, I was working on turning the (atrocious - other than a long-past deadline, DISA, do you even care what you're publishing?) RHEL 7 V0R2 STIG specifications into configuration-management elements for our enterprise CM system. Got to the STIG item for "ensure that data-at-rest is encrypted as appropriate". This particular element is only semi-automatable ...since it's one of those "context" rules that has an "if local policy requires it" back-biting element to it. At any rate, this particular STIG-item prescribes the use of LUKS.
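For the part that is automatable, the kind of probe a CM tool might wrap looks something like the following. This is only a sketch - the STIG doesn't prescribe any particular check, and walking every block device (most of which will legitimately be un-encrypted) is exactly why the rule needs a human policy-decision behind it:

#!/bin/sh
# Report the LUKS status of every block device lsblk can see.
# Exits non-zero if any device is not LUKS-formatted so a CM tool
# can flag the host for review.
RET=0
for DEV in $(lsblk --list -npo NAME)
do
   if cryptsetup isLuks "${DEV}" 2>/dev/null
   then
      printf "%s:\tLUKS\n" "${DEV}"
   else
      printf "%s:\tnot LUKS\n" "${DEV}"
      RET=1
   fi
done
exit ${RET}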

As I set about to write the code for this security-element, it occurred to me, "we typically use array-based storage encryption - or things like KMS in cloud deployments - so often that I can't remember how to configure LUKS ...least of all configure it so it doesn't require human intervention to mount volumes." So, like any good Linux-tech, I petitioned the gods of Google. Lo, there were many results — most falling into either the "here's how you encrypt a device" or the "here's how you take an encrypted device and make the OS automatically remount it at boot" camps. I was looking to do both so that my test-rig could be rebooted and just have the volume there. I was worried about testing whether devices were encrypted, not about whether leaving keys on a system was adequately secure.

At any rate, at least for testing purposes (and in case I need to remember these later), here's what I synthesized from my Google searches.

  1. Create a directory for storing encryption key-files. Ensure that directory is readable only by the root user:
    # install -d -m 0700 -o root -g root /etc/crypt.d
  2. Create a 4KB key-file from random data (a stronger unlock-secret than the typical, password-based mechanisms):
    # dd if=/dev/urandom of=/etc/crypt.d/cryptFS.key bs=1024 count=4
    ...writing the key to the previously-created, protected directory. Up the key-length by increasing the value of the count parameter.
     
  3. Use the key to create an encrypted raw device:
    # cryptsetup --key-file /etc/crypt.d/cryptFS.key \
    --cipher aes-cbc-essiv:sha256 luksFormat /dev/CryptVG/CryptVol
  4. Activate/open the encrypted device for writing:
    # cryptsetup luksOpen --key-file /etc/crypt.d/cryptFS.key \
    /dev/CryptVG/CryptVol CryptVol_crypt
    Pass the location of the encryption-key using the --key-file parameter.
     
  5. Add a mapping to the crypttab file:
    # ( printf "CryptVol_crypt\t/dev/CryptVG/CryptVol\t" ;
       printf "/etc/crypt.d/cryptFS.key\tluks\n" ) >> /etc/crypttab
    The OS will use this mapping-file at boot-time to open the encrypted device and ready it for mounting. The four column-values to the map are:
    1. Device-mapper Node: the name of the writable block-device used for creating filesystem structures and for mounting. The value is relative to /dev/mapper: when the device is activated, it will be assigned the device-name /dev/mapper/<key_value>
    2. Hosting-Device: the physical device that hosts the encrypted pseudo-device. This can be a basic hard disk, a partition on a disk, or an LVM volume.
    3. Key Location: where the device's decryption-key is stored.
    4. Options: how the device should be handled at unlock-time (typically just "luks", marking it as a LUKS-formatted device)
     
  6. Create a filesystem on the opened encrypted device:
    # mkfs -t ext4 /dev/mapper/CryptVol_crypt
  7. Add the encrypted device's mount-information to the host's /etc/fstab file:
    # ( printf "/dev/mapper/CryptVol_crypt\t/cryptfs\text4" ;
       printf "defaults\t0 0\n" ) >> /etc/fstab
  8. Create the /cryptfs mount-point directory (`mkdir /cryptfs`), then verify that everything works by hand-mounting the device (`mount -a`)
  9. Reboot the system (`init 6`) to verify that the encrypted device(s) automatically mount at boot-time
Keys and mappings in place, the system will reboot with the LUKSed devices opened and mounted. The above method's also good if you want to give each LUKS-protected device its own, device-specific key-file.
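For a quick post-reboot sanity check (nothing the procedure strictly requires - the names below just follow the CryptVol_crypt and /cryptfs example used above), something like the following confirms that the mapping opened and the filesystem mounted:

# cryptsetup status CryptVol_crypt
# lsblk -o NAME,FSTYPE,MOUNTPOINT /dev/CryptVG/CryptVol
# grep /cryptfs /proc/mounts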

Note: You will really want to back up these key-files. If you somehow lose the host OS but not the encrypted devices, the only way you'll be able to re-open those devices is if you're able to restore the key-files to the new system. Absent those keys, you'd better have good backups of the unencrypted data - because you're starting from scratch.
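A minimal way to capture them - assuming /etc/crypt.d holds all of the key-files and that "backuphost:/secure/keystore/" is a stand-in for whatever protected destination local policy allows:

# tar czf "/root/crypt-keys-$(hostname).tgz" -C /etc crypt.d
# scp "/root/crypt-keys-$(hostname).tgz" backuphost:/secure/keystore/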

Tuesday, August 2, 2016

Supporting Dynamic root-disk in LVM-enabled Templates - EL7 Edition

In my previous article, Supporting Dynamic root-disk in LVM-enabled Templates, I discussed the challenges around supporting LVM-enabled VM-templates in cloud-based deployments of Enterprise Linux 6 VMs. The kernel used for Enterprise Linux 7 distributions makes template-based deployment of LVM-enabled VMs a bit easier. Instead of having to add an RPM from EPEL and then hack that RPM to make it support LVM2-encapsulated root volumes/filesystems, one need only ensure that the cloud-utils-growpart RPM is installed and do the same launch-time massaging via cloud-init. By way of example:
#cloud-config
runcmd:
  - /usr/bin/growpart /dev/xvda 2
  - pvresize /dev/xvda2
  - lvresize -r -L +2G VolGroup00/logVol
  - lvresize -r -L +2G VolGroup00/auditVol
This will cause the launched instance to:
  1. Grow the second partition on the boot disk to the end of the disk
  2. Instruct LVM to resize the PV to match the new partition-size
  3. Instruct LVM to grow the VolGroup00/logVol volume — and the filesystem on top of it — by 2GiB
  4. Instruct LVM to grow the VolGroup00/auditVol volume — and the filesystem on top of it — by 2GiB
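If there's any doubt that the template already carries cloud-utils-growpart, the same cloud-config can pull it in at launch - a sketch, assuming the instance can reach a repository that provides the RPM (cloud-init installs packages before the runcmd entries execute):

#cloud-config
packages:
  - cloud-utils-growpart
runcmd:
  - /usr/bin/growpart /dev/xvda 2
  - pvresize /dev/xvda2
  - lvresize -r -L +2G VolGroup00/logVol
  - lvresize -r -L +2G VolGroup00/auditVol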
Upon login, the above launch-time configuration-actions can be verified by using `vgdisplay -s` and `lvs --segments -o +devices`:
# vgdisplay -s
  "VolGroup00" 29.53 GiB [23.53 GiB used / 6.00 GiB free]
# lvs --segments -o +devices
  LV       VG         Attr       #Str Type   SSize Devices
  auditVol VolGroup00 -wi-ao----    1 linear 8.53g /dev/xvda2(2816)
  auditVol VolGroup00 -wi-ao----    1 linear 2.00g /dev/xvda2(5512)
  homeVol  VolGroup00 -wi-ao----    1 linear 1.00g /dev/xvda2(1536)
  logVol   VolGroup00 -wi-ao----    1 linear 2.00g /dev/xvda2(2304)
  logVol   VolGroup00 -wi-ao----    1 linear 2.00g /dev/xvda2(5000)
  rootVol  VolGroup00 -wi-ao----    1 linear 4.00g /dev/xvda2(0)
  swapVol  VolGroup00 -wi-ao----    1 linear 2.00g /dev/xvda2(1024)
  varVol   VolGroup00 -wi-ao----    1 linear 2.00g /dev/xvda2(1792)

Supporting Dynamic root-disk in LVM-enabled Templates

One of the main customers I support has undertaken an adoption of cloud-based services. This customer's IA team also requires that the OS drive be carved up to keep logging and audit activities separate from the rest of the OS disk. Prior to the adoption of cloud-based services, this was a non-problem.

Since moving to the cloud — and using a build-method that generates launch-templates directly in the cloud (EL6 and EL7) — the use of LVM has proven problematic, particularly with EL6. Out-of-the-box, EL6 does not support dynamic resizing of the boot disk. This means that specifying a larger-than-default root-disk when launching a template is pointless if using a "stock" EL6 template. This can be overcome by creating a custom launch-template that uses the dracut-modules-growroot RPM from EPEL.
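Getting the RPM into the template is the easy part - a sketch, assuming the EPEL repository has already been configured in the template:

# Pull in the growroot dracut module from EPEL
yum install -y dracut-modules-growroot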

Unfortunately, this EPEL RPM is only part of the picture. The downloaded dracut-modules-growroot RPM only supports growing the "/" partition to the size of the larger-than-default disk if the template's root disk is either wholly unpartitioned or partitioned such that the "/" partition is the last partition on the disk. It does not support the case where the "/" filesystem is hosted within an LVM2 volume-group. To get around this, it is necessary to patch the growroot.sh script that the dracut-modules-growroot RPM installs:
--- /usr/share/dracut/modules.d/50growroot/growroot.sh  2013-11-22 13:32:42.000000000 +0000
+++ growroot.sh 2016-08-02 15:56:54.308094011 +0000
@@ -18,8 +18,20 @@
 }

 _growroot() {
-       # Remove 'block:' prefix and find the root device
-       rootdev=$(readlink -f "${root#block:}")
+       # Compute root-device
+       if [ -z "${root##*mapper*}" ]
+       then
+               set -- "${root##*mapper/}"
+               VOLGRP=${1%-*}
+               ROOTVOL=${1#*-}
+               rootdev=$(readlink -f $(pvs --noheadings | awk '/'${VOLGRP}'/{print $1}'))
+               _info "'/' is hosted on an LVM2 volume: setting \$rootdev to ${rootdev}"
+       else
+               # Remove 'block:' prefix and find the root device
+               rootdev=$(readlink -f "${root#block:}")
+       fi
+
+       # root arg was nulled at some point...
        if [ -z "${rootdev}" ] ; then
                _warning "unable to find root device"
                return
Once the growroot.sh script has been patched, it will be necessary to regenerate the template's initramfs files with the grow functionality enabled. If the template has multiple kernels installed, it will be desirable to ensure that each kernel's initramfs is regenerated. A quick way to ensure that all of the initramfs files in the template are properly enabled is to execute:

rpm -qa kernel | sed 's/^kernel-//' | \
   xargs -I {} dracut -f /boot/initramfs-{}.img
Note1: The above will likely put data into the template's /var/log/dracut.log file. It is likely desirable to null-out this file (along with all other log files) prior to sealing the template.
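One way to do that sweep - a sketch, assuming that truncating (rather than deleting) the log files is acceptable for the sealing process:

# Zero out every regular file under /var/log prior to sealing
find /var/log -type f -exec truncate -s 0 {} \;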

Note2: Patching the growroot.sh script will cause RPM-verification to fail in VMs launched from the template. This can either be handled as a known/expected exception or can be averted by performing a `yum reinstall dracut-modules-growroot` in the template or in the VMs launched from the template.

Credit: The above is an extension of a solution that I found at Backslasher.Net (my Google-fu was strong the day that I wanted to solve this problem!)