Thursday, March 26, 2015

Could You Make It Just A Little More Difficult?

Every so often, you need to share out an HTML file via a public web-server. Back in the pre-cloud days, this meant that you'd toss it up on your vanity web server and call it a day. In the cloud era, you have other options. Amazon's S3 was the first one that I used, but other cloud-storage services can be leveraged in a similar manner for static content.

One of those others is Google's "Drive" service. Unfortunately, Google doesn't exactly seem to want to make it a straightforward affair to share static web content straight from Drive. It's easy if you want viewers to see the raw HTML, but not so great if you want them to see rendered HTML.

At any rate, as of the writing of this document (warning: judging by my Google results, the method seems to change over time and even Google's own help pages aren't really kept up to date), this was what I had to do:

  1. Create a new folder in Google Drive (optional: if you're willing to make your top-level gDrive folder publicly-browsable, or already have a folder that is - or that you're willing to set as - publicly-browsable, you can skip this step)
  2. Edit the sharing on the folder, setting the permission to "Public View"
  3. Navigate into the folder. Take note of its path. It will look something like:
    https://drive.google.com/drive/#folders/0F3SA-qkPpztNflU1bUtyekYYC091a2ttHZJpMElwTm9UcFNqN1pNMlf3iUlTUkJ0UU5PUVk
  4. Delete the left part of the URL, up to and including "/#folders/"
  5. Replace the deleted portion of the original URL with:
    http://www.googledrive.com/host/
    The browsable URL to your publicly-viewable folder will now look like:
    http://www.googledrive.com/host/0F3SA-qkPpztNflU1bUtyekYYC091a2ttHZJpMElwTm9UcFNqN1pNMlf3iUlTUkJ0UU5PUVk
  6. Clicking on that link will take you to the folder holding your static web content. To get the shareable URL for your file(s), click on the link to the file.
  7. When the file opens in your browser, copy the URL for the file, then send it out (for the sake of your recipients' sanity, you might want to run it through a URL-shortener service first - like maybe goo.gl)
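The URL surgery in steps 4 and 5 boils down to a one-line string substitution. A minimal sketch (the folder ID is just the example from above; substitute your own):

```shell
# Rewrite a copied Drive folder URL into its googledrive.com/host form
# by stripping everything up to and including "/#folders/".
drive_url='https://drive.google.com/drive/#folders/0F3SA-qkPpztNflU1bUtyekYYC091a2ttHZJpMElwTm9UcFNqN1pNMlf3iUlTUkJ0UU5PUVk'
host_url="http://www.googledrive.com/host/${drive_url##*'/#folders/'}"
echo "$host_url"
```

The `${var##pattern}` expansion is plain POSIX shell, so this works without any external tools.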
In my case, I was trying to collaborate on a system-hardening toolset. I'd run my system through a security scanner that had flagged a number of false findings (actually, all the "fail" findings turned out to be bogus). I wanted to share the report, along with the rules files the security tool had reported against, with my collaborator. So, I sorted out the above so I could post links into our collaboration tool.

Maybe one day Google will make sharing static web content from Drive as (relatively) easy as Amazon has with S3. A "share as web page" button sure would be nice.

Wednesday, March 25, 2015

So You Don't Want to BYOL

The Amazon Web Services MarketPlace is pretty awesome. There are oodles of pre-made machine templates to choose from. Even in the face of all that choice, it's not unusual to find that none quite fit your needs. That's the scenario I found myself in.

Right now, I'm supporting a customer that's a heavy user of Linux for their business support systems. They're in the process of migrating from our legacy hosting environment to hosting things on AWS. During their development phase, use of CentOS was sufficient for their needs. As they move to production, however, they want "real" Red Hat Enterprise Linux.

Go up on the MarketPlace and there are plenty of options to choose from. However, my customer doesn't want to deal with buying a stand-alone entitlement to patch-support for their AWS-hosted systems. This requirement considerably cuts down on the useful choices in the MarketPlace. Still, there are "license included" Red Hat options to choose from.

Unfortunately, my customer also has fairly specific partitioning requirements that are not met by the "license included" AMIs. When using CentOS, this wasn't a problem - CentOS's patch repos are open-access. Creating an AMI with suitable partitioning and access to those public repos is about a 20-minute process. While some of that process is reusable for creating a Red Hat AMI, making the resultant AMI "license included" is a bit more challenging.

When I tried to simply re-use my CentOS process, supplemented by the Amazon repo RPMs, I ended up with a system whose yum queries returned 401 errors. I was missing something.

Google searches weren't terribly helpful in solving my problem. I found a lot of "how do I do this" posts, but damned few that actually included the answer. Ultimately, what it turns out to be is that if you generate your AMI from an EBS snapshot, instances launched from that AMI don't have an entitlement key to access the Amazon yum repos. You can see this by looking at your launched instance's metadata:
# curl http://169.254.169.254/latest/dynamic/instance-identity/document
{
  "accountId" : "717243568699",
  "architecture" : "x86_64",
  "availabilityZone" : "us-west-2b",
  "billingProducts" : null,
  "devpayProductCodes" : null,
  "imageId" : "ami-9df0ec7a",
  "instanceId" : "i-51825ba7",
  "instanceType" : "t1.micro",
  "kernelId" : "aki-fc8f11cc",
  "pendingTime" : "2015-03-25T19:04:51Z",
  "privateIp" : "172.31.19.148",
  "ramdiskId" : null,
  "region" : "us-east-1",
  "version" : "2010-08-31"
}

Specifically, what you want to look at is the value for "billingProducts". If it's "null", your yum isn't going to be able to access the Amazon RPM repositories. Where I came up close to empty on my Google searches was "how to make this attribute persist across images".
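A quick way to make that check scriptable is to test the identity document for a null "billingProducts". A small sketch - the helper function and canned sample document below are illustrative; on a real instance you'd pipe in the output of the curl command shown above:

```shell
# entitled() reads an instance-identity document on stdin and succeeds
# only if "billingProducts" is something other than null. On a real
# instance, feed it:
#   curl -s http://169.254.169.254/latest/dynamic/instance-identity/document
entitled() {
  ! grep -q '"billingProducts" : null'
}

# Canned sample mimicking an AMI registered from an EBS snapshot:
doc='{ "billingProducts" : null, "region" : "us-east-1" }'
if printf '%s\n' "$doc" | entitled; then
  echo "billingProducts set - Amazon yum repos should work"
else
  echo "billingProducts null - expect 401s from the Amazon yum repos"
fi
```

The grep pattern assumes the spacing the metadata service actually emits (as in the document above); a JSON-aware tool would be more robust, but this keeps the check dependency-free.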

I found a small note in a community forum post indicating that AMIs generated from an EBS snapshot will always have "billingProducts" set to "null". This is due to a limitation in the tool used to register an image from a snapshot.

To get around this limitation, one has to create an AMI from an instance of an entitled AMI. Basically, after you've readied the EBS volume for your custom AMI, you do a disk-swap with a properly-entitled instance. You then use the "create image" option from that instance. Once you launch the AMI you created via the EBS-swap, your instance's metadata will look something like:
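In AWS CLI terms, the swap looks roughly like the following. This is a hedged sketch rather than a turnkey script: every ID is a placeholder you'd replace with your own, and the run() wrapper just echoes each command in dry-run mode so the sequence can be reviewed before anything is executed.

```shell
# Dry-run wrapper: echo commands instead of executing them.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

ENTITLED_INSTANCE='i-51825ba7'   # instance launched from a "license included" RHEL AMI
ENTITLED_ROOT_VOL='vol-00000000' # that instance's original root volume (placeholder)
CUSTOM_VOLUME='vol-11111111'     # the EBS volume prepared with custom partitioning

# Stop the entitled instance, then swap its root volume for the custom one.
run aws ec2 stop-instances --instance-ids "$ENTITLED_INSTANCE"
run aws ec2 detach-volume --volume-id "$ENTITLED_ROOT_VOL"
run aws ec2 attach-volume --volume-id "$CUSTOM_VOLUME" \
    --instance-id "$ENTITLED_INSTANCE" --device /dev/sda1

# "Create image" from the swapped instance; an AMI registered this way
# carries the billingProducts entitlement forward.
run aws ec2 create-image --instance-id "$ENTITLED_INSTANCE" --name 'custom-rhel'
```

The device name (/dev/sda1) is an assumption that matches the Red Hat HVM AMIs discussed here; check your entitled instance's actual root-device mapping before detaching anything.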
# curl http://169.254.169.254/latest/dynamic/instance-identity/document
{
  "accountId" : "717243568699",
  "architecture" : "x86_64",
  "availabilityZone" : "us-west-2b",
  "billingProducts" : [ "bp-6fa54006" ],
  "devpayProductCodes" : null,
  "imageId" : "ami-9df0ec7a",
  "instanceId" : "i-51825ba7",
  "instanceType" : "t1.micro",
  "kernelId" : "aki-fc8f11cc",
  "pendingTime" : "2015-03-25T19:04:51Z",
  "privateIp" : "172.31.19.148",
  "ramdiskId" : null,
  "region" : "us-east-1",
  "version" : "2010-08-31"
}

Once that "billingProducts" is set, the cloud-init related first-boot scripts will take that "billingProducts" and use it to register the system with the Amazon yum repos. Voilà: you now have a fully custom AMI that uses Amazon-billed access to Red Hat updates.

Note on Compatibility: the Red Hat-provided PVM AMIs do not yield well to this method. Those AMIs are all designed with their boot/root device set to /dev/sda1. To date, attempts to leverage the above technique for PVM AMIs that require their boot/root device set to /dev/sda (used when a single, partitioned EBS hosts a bare /boot partition and LVM-managed root partitions) have not met with success.