The current SCAP guidelines for Red Hat Enterprise Linux (RHEL) 6 draw the bulk of their content straight from the DISA STIGs for RHEL 6. There are a few differences here and there, but the commonality between the SCAP and STIG guidance - at least as of SCAP XCCDF 1.1.4 and STIG Version 1, Release 5, respectively - is probably just shy of 100% when measured on the recommended tests and fixes. In turn, automating the guidance in these specifications allows you to quickly crank out predictably-secure Red Hat, CentOS, Scientific Linux or Amazon Linux systems.
For the privately-hosted cloud initiatives, supporting this guidance was a straightforward matter. The solutions my customer uses all support the capability to network-boot and provision a virtual machine (VM) from which to create a template. Amazon didn't provide similar functionality to my customer, limiting some of the things that can be done to create a fully-customized instance or resulting template (Amazon Machine Image - or "AMI" - in EC2 terminology).
For the most part this wasn't a problem for my customer. Perhaps the biggest sticking point was that it meant that, at least initially, partitioning schemes used on the privately-hosted VMs couldn't be easily replicated on the EC2 instances.
Section 2.1.1 of the SCAP guidance calls for "/tmp", "/var", "/var/log", "/var/log/audit", and "/home" to each be on their own, dedicated partitions, separate from the "/" partition. On the privately-hosted cloud solutions, a common, network-based KickStart was used to carve the boot-disk into a /boot partition and an LVM volume-group (VG). The boot VG was then carved up to create the SCAP-mandated partitions.
With the lack of network-booting/provisioning support, we didn't have the capability to extend our KickStart methodologies to the EC2 environment. Further, at least initially, Amazon didn't provide support for use of LVM on boot disks. The combination of the two limitations meant my customer couldn't easily meet the SCAP partitioning requirements. Lack of LVM meant that the boot disk had to be carved up using bare /dev/sdX devices. Lack of console access defeated the ability to repartition an already-built system to create the requisite partitions on the boot disk. Initially, this meant that the AMIs we could field were limited to "/boot" and "/" partitions. That created config-drift between the hosting environments and meant we had to get security-waivers for the Amazon-hosted environment.
Not being one who well-tolerates this kind of arbitrary-feeling deviance, I got to cracking with my Google searches. Most of what I found were older documents that focused on how to create LVM-enabled, S3-backed AMIs. These weren't at all what I wanted - they were a pain in the ass to create, were stoopidly time-consuming to transfer into EC2, and the resultant AMIs hamstrung me on the instance-types I could spawn from them. So, I kept scouring around. In the comments section of one of the references for S3-backed AMIs, I saw a mention of doing a chroot() build. So, I used that as my next branch of Googling about.
Didn't find a lot for RHEL-based distros - mostly Ubuntu and some others. That said, it gave me the starting point that I needed to find my ultimate solution. Basically, that solution comes down to:
- Pick an EL-based AMI from the Amazon Marketplace (I chose a CentOS one - I figured that using an EL-based starting point would ease creating my EL-based AMI since I'd already have all the tools I needed and in package names/formats I was already familiar with)
- Launch the smallest instance possible from the Marketplace AMI (its root volume was 8GB when I was researching the problem)
- Attach an EBS volume to the running instance - I set mine to the minimum size possible (8GB) figuring I could either grow the resultant volumes or, once I got my methods down/automated, use a larger EBS for my custom AMI.
- Carve the attached EBS up into two (primary) partitions. I like using `parted` for this, since I can specify the desired, multi-partition layout (and all the offsets, partition types/labels, etc.) in one long command-string.
- I kept "/boot" in the 200-400MB range. It could probably be kept smaller, since the plan wasn't so much to patch running instantiations as to periodically use automated build tools to launch instances from updated AMIs and re-deploy the applications onto the new/updated instances.
- I gave the rest of the disk to the partition that would host my root VG.
- I `vgcreate`d my root volume group, then carved it up into the SCAP-mandated partitions (minus "/tmp" - we do that as a tmpfs filesystem since the A/V tools that SCAP wants you to have tend to kill system performance if "/tmp" is on disk - probably not relevant in EC2, but consistency across environments was a goal of the exercise)
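A sketch of that carve-up follows. The VG/LV names and sizes are my own illustrative choices, not mandated values. The block echoes the commands by default (a dry run); set RUN to an empty string on the build instance to execute them for real:

```shell
# Dry run by default: commands are echoed rather than executed.
# Set RUN= (empty) on the build instance to run them for real.
RUN="${RUN:-echo}"
PART=/dev/xvdf2   # assumed: the LVM partition from the parted step

$RUN pvcreate "${PART}"
$RUN vgcreate VolGroup00 "${PART}"

# The SCAP-mandated filesystems (minus /tmp, done as tmpfs);
# LV names and sizes are illustrative only.
$RUN lvcreate -n rootVol  -L 2G VolGroup00
$RUN lvcreate -n varVol   -L 2G VolGroup00
$RUN lvcreate -n logVol   -L 1G VolGroup00
$RUN lvcreate -n auditVol -L 1G VolGroup00
$RUN lvcreate -n homeVol  -L 1G VolGroup00
```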
- Create ext4 filesystems on each of my LVs and my "/boot" partition.
- Mount all of the filesystems under "/mnt" to support a chroot-able install (i.e., "/mnt/root", "/mnt/root/var", etc.)
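The mkfs and mount steps, sketched below with the same echo-style dry run (device and LV names assume the VolGroup00 layout from the earlier sketch). Mount order matters - each child filesystem has to land inside its already-mounted parent:

```shell
# Dry run by default (commands are echoed); set RUN= to execute.
RUN="${RUN:-echo}"

$RUN mkfs -t ext4 /dev/xvdf1   # the /boot partition
for LV in rootVol varVol logVol auditVol homeVol; do
  $RUN mkfs -t ext4 "/dev/VolGroup00/${LV}"
done

# Mount in dependency order.
$RUN mkdir -p /mnt/root
$RUN mount /dev/VolGroup00/rootVol /mnt/root
$RUN mkdir -p /mnt/root/boot /mnt/root/var /mnt/root/home
$RUN mount /dev/xvdf1 /mnt/root/boot
$RUN mount /dev/VolGroup00/varVol /mnt/root/var
$RUN mkdir -p /mnt/root/var/log
$RUN mount /dev/VolGroup00/logVol /mnt/root/var/log
$RUN mkdir -p /mnt/root/var/log/audit
$RUN mount /dev/VolGroup00/auditVol /mnt/root/var/log/audit
$RUN mount /dev/VolGroup00/homeVol /mnt/root/home
```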
- Create base device-nodes within my chroot-able install-tree (you'll want/need "/dev/console", "/dev/null", "/dev/zero", "/dev/random", "/dev/urandom", "/dev/tty" and "/dev/ptmx" - modes, ownerships and major/minor numbers should match your live OS's)
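The `mknod` calls with the standard major/minor numbers look like the following (cross-check against `ls -l /dev` on your live OS; needs root to execute, so it's echoed by default):

```shell
# Dry run by default (commands are echoed); set RUN= to execute.
RUN="${RUN:-echo}"
ROOT=/mnt/root

# Standard character-device major/minor numbers on EL6.
$RUN mknod -m 600 "${ROOT}/dev/console" c 5 1
$RUN mknod -m 666 "${ROOT}/dev/null"    c 1 3
$RUN mknod -m 666 "${ROOT}/dev/zero"    c 1 5
$RUN mknod -m 666 "${ROOT}/dev/random"  c 1 8
$RUN mknod -m 666 "${ROOT}/dev/urandom" c 1 9
$RUN mknod -m 666 "${ROOT}/dev/tty"     c 5 0
$RUN mknod -m 666 "${ROOT}/dev/ptmx"    c 5 2
```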
- Set up loopback mounts for "/proc", "/sys", "/dev/pts" and "/dev/shm".
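Those loopback mounts are simple bind mounts from the host into the install-tree (again echoed by default, since they need root):

```shell
# Dry run by default (commands are echoed); set RUN= to execute.
RUN="${RUN:-echo}"
ROOT=/mnt/root

# Bind the host's pseudo-filesystems into the chroot tree.
for FS in proc sys dev/pts dev/shm; do
  $RUN mkdir -p "${ROOT}/${FS}"
  $RUN mount -o bind "/${FS}" "${ROOT}/${FS}"
done
```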
- Create "/etc/fstab" and "/etc/mtab" files within my chroot-able install-tree (should resemble the mount-scheme you want in your final AMI - dropping the "/mnt/root" from the paths)
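For the layout sketched above, a minimal "/mnt/root/etc/fstab" might look like the following. The VolGroup00 names are my assumed layout, and the /boot device name in particular is an assumption - device naming varies with the virtualization type, so verify it after the first boot:

```
/dev/VolGroup00/rootVol   /               ext4    defaults        1 1
/dev/xvda1                /boot           ext4    defaults        1 2
/dev/VolGroup00/varVol    /var            ext4    defaults        1 2
/dev/VolGroup00/logVol    /var/log        ext4    defaults        1 2
/dev/VolGroup00/auditVol  /var/log/audit  ext4    defaults        1 2
/dev/VolGroup00/homeVol   /home           ext4    defaults,nodev  1 2
tmpfs                     /tmp            tmpfs   defaults        0 0
tmpfs                     /dev/shm        tmpfs   defaults        0 0
devpts                    /dev/pts        devpts  gid=5,mode=620  0 0
sysfs                     /sys            sysfs   defaults        0 0
proc                      /proc           proc    defaults        0 0
```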
- Use `yum` to install the same package-sets to the chroot that our normal KickStart processes would install.
- The `yum` install should have created all of your "/boot" files with the exception of your "grub.conf" type files.
- Create a "/mnt/boot/grub.conf" file with vmlinuz/initramfs references matching the ones installed by `yum`.
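A minimal "/mnt/root/boot/grub.conf" for a pv-grub-hd00 kernel might look like this - the kernel/initramfs version strings below are placeholders that must match what `yum` actually installed, and the root= device matches my assumed LVM layout:

```
default=0
timeout=1
hiddenmenu
title CentOS 6 (2.6.32-431.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/VolGroup00/rootVol
        initrd /initramfs-2.6.32-431.el6.x86_64.img
```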
- Create links to your "grub.conf" file:
- You should have an "/mnt/root/etc/grub.conf" file that's a sym-link to your "/mnt/root/boot/grub.conf" file (be careful how you create this sym-link so you don't create an invalid link)
- Similarly, you'll want a "/mnt/root/boot/grub/grub.conf" linked up to "/mnt/root/boot/grub.conf" (not always necessary, but it's a belt-and-suspenders solution to some issues related to creating PVM AMIs)
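The careful way to create those links is with chroot-relative targets, so they remain valid once the image boots (an absolute link through "/mnt/root/..." would dangle on the running instance). Shown here against a scratch tree under /tmp for a safe dry run; use ROOT=/mnt/root on the build instance:

```shell
# ROOT=/tmp/root for a safe dry run; use ROOT=/mnt/root for real.
ROOT=/tmp/root
mkdir -p "${ROOT}/etc" "${ROOT}/boot/grub"
touch "${ROOT}/boot/grub.conf"

# /etc/grub.conf -> ../boot/grub.conf (i.e., /boot/grub.conf in-chroot)
ln -sf ../boot/grub.conf "${ROOT}/etc/grub.conf"
# /boot/grub/grub.conf -> ../grub.conf (the belt-and-suspenders link)
ln -sf ../grub.conf "${ROOT}/boot/grub/grub.conf"
```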
- Create a basic eth0 config file at "/mnt/root/etc/sysconfig/network-scripts/ifcfg-eth0". EC2 instances require the use of DHCP for networking to work properly. A minimal network config file should look something like:
    DEVICE=eth0
    BOOTPROTO=dhcp
    ONBOOT=on
    IPV6INIT=no
- Create a basic network-config file at "/mnt/root/etc/sysconfig/network". A minimal network config file should look something like:
    NETWORKING=yes
    NETWORKING_IPV6=no
    HOSTNAME=localhost.localdomain
- Append "UseDNS no" and "PermitRootLogin without-password" to the end of your "/mnt/root/etc/ssh/sshd_config" file. The former fixes connection-delay problems caused by reverse-DNS lookups against EC2's private IPs. The latter allows you to SSH in as root for the initial login - but only with a valid SSH key (you don't want to make newly-launched instances instantly ownable!)
- Assuming you want instances started from your AMI to use SELinux:
- Do a `touch /mnt/root/.autorelabel`
- Make sure that the "SELINUX" value in "/mnt/root/etc/selinux/config" is set to either "permissive" or "enforcing"
- Create an unprivileged login user within the chroot-able install-tree. Make sure a password is set and that the user is able to use `sudo` to access root (since I recommend setting root's password to a random value).
- Create a boot init script that will download your AWS public key into the root and/or maintenance user's ${HOME}/.ssh/authorized_keys file. At its most basic, this should be a run-once script that looks like:
    # KEYDESTDIR is the target user's .ssh directory - e.g., /root/.ssh
    curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/pubkey
    install --mode 0700 -d ${KEYDESTDIR}
    install --mode 0600 /tmp/pubkey ${KEYDESTDIR}/authorized_keys
- Clean up the chroot-able install-tree:
    yum --installroot=/mnt/root/ -y clean packages
    rm -rf /mnt/root/var/cache/yum
    rm -rf /mnt/root/var/lib/yum
    cat /dev/null > /mnt/root/root/.bash_history
- Unmount all of the chroot-able install-tree's filesystems.
- Use `vgchange` to deactivate the root VG
- Using the AWS console, create a snapshot of the attached EBS.
- Once the snapshot completes, you can then use the AWS console to create an AMI from the EBS-snapshot using the "Create Image" option. It is key that you set the "Root Device Name", "Virtualization Type" and "Kernel ID" parameters to appropriate values.
- The "Root Device Name" value will auto-populate as "/dev/sda1" - change this to "/dev/sda"
- The "Virtualization Type" should be set as "Paravirtual".
- The appropriate value for the "Kernel ID" parameter will vary from AWS availability-region to AWS availability-region (for example, the value for "US-East (N. Virginia)" will be different from the value for "EU (Ireland)"). In the drop-down, look for a description field that contains "pv-grub-hd00". There will be several. Look for the highest-numbered option that matches your AMI's architecture (for example, I would select the kernel with the description "pv-grub-hd00_1.04-x86_64.gz" for my x86_64-based EL 6.x custom AMI).
- Click the "Create" button, then wait for the AMI-creation to finish.
- Once the "Create" finishes, the AMI should be listed in your "AMIs" section of the AWS console.
- Test the new AMI by launching an instance. If the instance successfully completes its launch checks and you are able to SSH into it, you've successfully created a custom, PVM AMI (HVM AMIs are fairly easily created, as well, but require some slight deviations that I'll cover in another document).
I've automated most of the above tasks using some simple shell scripts and the Amazon EC2 tools. Use of the EC2 tools is well documented by Amazon, and they allow me to automate everything within the instance launched from the Marketplace AMI (I keep all my scripts in Git, so prepping a Marketplace AMI for building custom AMIs takes maybe two minutes on top of launching the generic Marketplace AMI). Automated this way, you can go from launching your Marketplace AMI to having a launchable custom AMI in as little as twenty minutes.
Properly automated, generating updated AMIs as security fixes or other patch bundles come out is as simple as kicking off a script, hitting the vending machine for a fresh Mountain Dew, then coming back to launch new, custom AMIs.