As part of our tenancy process, we'd come up with a standard build request form. In general, we prefer a model that separates application data from OS data, so in addition to the usual "how much RAM and CPU do you need" information, the form includes configuration-capture items for the storage used by applications hosted on the VMs. The table has inputs for both the requested supplemental storage sizes and where/how to mount those chunks.
This first tenant simply filled in the sum of their total additional storage request, with no indication as to how they expected to use it. After several iterations of "if you have specific requirements, we need you to detail them" emails, I sent a final "absent the requested configuration specifications, the storage will be added but left unconfigured". It was only at this point that the tenant responded, saying "there's a setup guide at this URL - please read that and configure accordingly".
Normally, this is not how we do things. The solution we offer tends to be more of an extended IaaS model: in addition to providing a VM container, we provide a basic, hardened OS configuration (installing an OS and patching it to a target state), configure basic networking and name resolution, and perform basic storage configuration tasks.
This first tenant was coming from a physical Windows/Red Hat environment and was testing the waters of virtualization. As a result, most of their configuration expectations were based on physical servers (SAN-based storage with native multipathing support). The reference documents they pointed us to were designed for implementing Oracle on a physical system using ASM on top of Linux dm-multipath storage objects ...not something normally done within an ESX-hosted Red Hat Linux configuration.
We weren't going to layer on dm-multipath support, but the tenant still had the expectation of using "friendly" storage object names for ASM. The easy path to "friendly" storage object names is to use LVM. However, Oracle generally recommends against using ASM in conjunction with third-party logical volume management systems, so LVM was off the table. How best to provide the desired storage configuration?
I opted to let udev do the work for me. Unfortunately, because we weren't anticipating this particular requirements-set, the VM templates we'd created didn't have some of the hooks available that would allow udev to do its thing. Specifically, no UUIDs were being presented to the Linux guests. Further complicating things is the fact that, with the hardened Linux build we furnish, most of the udev tools and the various hardware-information tools are not present. The downside is that this made things more difficult than they absolutely needed to be. The upside is that the following procedures should be portable across a fairly wide variety of Linux implementations:
- To have VMware provide serial number information - from which UUIDs can be generated by the guest operating system - it's necessary to modify the VM's advanced configuration options. Ensure that the “disk.EnableUUID” parameter has been created for the VM and its value set to “TRUE”. The specific method for doing so varies depending on whether you use the vSphere web UI or the VPX client (or even the vmcli or direct editing of config files) to do your configuration tasks. Google for the specifics of your preferred management method.
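Once applied, the option appears in the VM's .vmx file as a single line (shown for reference only; make the change through your management tooling rather than hand-editing the file of a registered VM):

```
disk.EnableUUID = "TRUE"
```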
- If you had to create/change the value in the prior step, reboot the VM so that the config changes take effect
- Present the disks to be used by ASM to the Linux guest - if adding SCSI controllers, this step will need to be done while the guest is powered off.
- Verify that VM is able to see new VMDKs. If suplemental disk presentation was done while the VM was running, initiate a SCSI-bus rescan (e.g., `echo "- - -" > /sys/class/scsi_host/host1/rescan`)
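If several SCSI host adapters are present, it can be easier to rescan all of them than to guess which host the new VMDKs landed on. A minimal sketch (the `rescan_scsi` helper name is ours, not a standard tool; run as root on the guest):

```shell
#!/bin/sh
# Trigger a SCSI-bus rescan on every host adapter the guest can see.
# Assumes the standard sysfs layout: /sys/class/scsi_host/hostN/rescan
rescan_scsi() {
    base="${1:-/sys/class/scsi_host}"
    for node in "$base"/host*/rescan; do
        # Write the wildcard triplet (channel target LUN) to force a rescan
        [ -w "$node" ] && echo "- - -" > "$node"
    done
}
```

Invoke it as `rescan_scsi`, then check `/proc/partitions` (or `fdisk -l`) for the newly-arrived devices.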
- Lay down an aligned, full-disk partition with the `parted` utility for each presented VMDK/disk. For example, if one of the newly-presented VMDKs was seen by the Linux OS as /dev/sdb:
# parted -s /dev/sdb -- mklabel msdos mkpart primary ext3 1024s 100%
Specifying an explicit starting block (at 1024 or 2048 blocks) and using the relative ending location, as above, will help ensure that your partition is created on an even storage boundary. Google around for discussions of storage alignment and its positive impact in virtualization environments for details on why the immediately-prior is usually a Good Thing™.
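To sanity-check the result, pull the partition's starting sector from `parted /dev/sdb unit s print` and verify it falls on the boundary you intended. A small sketch (the `is_aligned` helper is ours, not a standard tool; assumes 512-byte sectors):

```shell
#!/bin/sh
# is_aligned START_SECTOR BOUNDARY_SECTORS
# Succeeds when the partition's starting sector is an even multiple of
# the chosen boundary (e.g., 2048 sectors = 1MiB with 512-byte sectors).
is_aligned() {
    [ $(( $1 % $2 )) -eq 0 ]
}
```

For example, `is_aligned 2048 2048 && echo aligned` confirms a 1MiB-aligned start; the 1024s start used above lands on a 512KiB boundary instead.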
- Ensure that the “options=” line in the “/etc/scsi_id.config” file contains the “-g” option
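On the Red Hat builds we were working with, that line ends up looking like the following (add the line if the file lacks one; the file's location and syntax may differ on other distributions):

```
options=-g
```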
- For each newly-presented disk, execute the command `/sbin/scsi_id -g -s /block/{sd_device}` and capture the output.
- Edit the “/etc/udev/rules.d/99-oracle-udev.rules” file, ensuring that each serial number captured in the prior step has an entry similar to:
KERNEL=="sd*",BUS=="scsi",ENV{ID_SERIAL}=="{scsi_id}", NAME="ASM/disk1", OWNER="oracle", GROUP="oinstall", MODE="660"
The "{scsi_id}" shown above is a variable name: substitute the values previously captured via the `/sbin/scsi_id` command. The "NAME=" field should be similarly edited to suit and should be unique for each SCSI serial number.
Note: If attempting to create per-disk friendly names (e.g., “/dev/db1p1”, “/dev/db2p1”, “/dev/frap1”, etc.), it will be necessary to match LUNs by size to the appropriate ‘NAME=’ entries
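Writing these entries by hand gets tedious past a couple of disks. A sketch of generating them from the captured serials (the `emit_rules` helper and the placeholder serials are ours; review the output before appending it to the rules file):

```shell
#!/bin/sh
# Print one udev rule per SCSI serial number passed as an argument,
# numbering the ASM device names sequentially. Substitute the serials
# captured earlier via /sbin/scsi_id.
emit_rules() {
    i=1
    for serial in "$@"; do
        printf 'KERNEL=="sd*",BUS=="scsi",ENV{ID_SERIAL}=="%s", NAME="ASM/disk%d", OWNER="oracle", GROUP="oinstall", MODE="660"\n' "$serial" "$i"
        i=$((i + 1))
    done
}
```

For example: `emit_rules <serial1> <serial2> >> /etc/udev/rules.d/99-oracle-udev.rules`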
- Reboot the system so that the udev service can process the new rule entries
- Verify that the desired “/dev/ASM/<NAME>” entries exist
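A quick way to script that verification (the node names are whatever you used in your ‘NAME=’ entries; `disk1`/`disk2` here are placeholders):

```shell
#!/bin/sh
# check_asm_nodes [BASE_DIR]
# Report whether each expected udev-created node exists under BASE_DIR
# (defaults to /dev/ASM); returns non-zero if any are missing.
check_asm_nodes() {
    base="${1:-/dev/ASM}"
    missing=0
    for name in disk1 disk2; do
        if [ -e "$base/$name" ]; then
            echo "$base/$name ok"
        else
            echo "$base/$name MISSING"
            missing=1
        fi
    done
    return $missing
}
```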
- Configure storage-consumers (e.g., “ASM”) to reference the aligned udev-defined device-nodes.