Monday, December 12, 2011

Who Cut Off My `finger`

Overall, I'm not trying to make this, my "more serious blog," a dumping-ground for rants. So, please forgive me this rant and please feel free to skip this post....

I've been using UNIX and similar systems for a long time, now. So, I'm kind of set in my ways in the things I do on systems and the tools I expect to be there. When someone capriciously removes a useful tool, I get a touch upset.

`finger` is one of those useful tools. Sadly, because people have, in the mists of time, misconfigured `finger`, security folks now like to either simply disable it or remove it altogether. Fine. Whatever: I appreciate that there might be security concerns. However, if you're going to remove a given command, at least make sure you're accomplishing something useful for the penalty you make system users pay in frustration and lost time. If you decide to remove the `finger` command, then you should probably also make it so I can't get the same, damned information via:

• `id`
• `who`
• `whoami`
• `last`
• `getent passwd <USERID>`
• (etc.)
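
A quick, placeholder example ("jdoe" being whichever account you're curious about) of just how little `finger`'s removal actually hides:

   getent passwd jdoe      # GECOS/full name, home directory, login shell
   last -5 jdoe            # recent logins, with originating hosts
   who | grep jdoe         # current sessions and terminals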

If I can run all of those commands, I've still got all the ability to get the data you're trying to hide by removing `finger`. So, what have you accomplished other than to piss me off and make it so I have to get data via other avenues? Seriously: "WTF"?

Why the hell is it that, when someone reads a "security best practice", they go ahead and blindly implement something without bothering to ask the next, logical questions: "does doing this, by itself, achieve my security goal?", "is the potential negative impact on system users more than balanced out by increased system security?", "is there a better way to do this and achieve my security goals?" and "what goal am I actually achieving by taking this measure?" If you don't ask these questions (and have good, strong answers to each), you probably shouldn't be following these "best practices."

Wednesday, September 7, 2011

NetBackup with Active Directory Authentication on UNIX Systems

While the specific hosts that I used for this exercise were all RedHat-based, it should work for any UNIX platform that both NetBackup 6.0/6.5/7.0/7.1 and Likewise Open are installed onto.

I'm a big fan of leveraging centralized-authentication services wherever possible. It makes life in a multi-host environment - particularly where hosts can number from the dozens to the thousands - a lot easier when you only have to remember one or two passwords. It's even more valuable in modern security environments where policies require frequent password changes (or, if you've ever been through the whole "we've had a security incident, all the passwords on all of the systems and applications need to be changed, immediately" exercise). Over the years, I've used things like NIS, NIS+, LDAP, Kerberos and Active Directory to do my centralized authentication. If your primary platforms are UNIX-based, NIS, NIS+, LDAP and Kerberos have traditionally been relatively straight-forward to set up and use.

I use the caveat of "relatively" because, particularly in the early implementations of each service, things weren't always dead-simple. Right now, we seem to be mid-way through the "easiness" life-cycle of using Active Directory as a centralized authentication source for UNIX operating systems and UNIX-hosted applications. Linux and OSX seem to be leading the charge in the OS space for ease of integration via native tools. There's also a number of third-party vendors out there who provide commercial and free solutions to do it for you, as well. In our enterprise, we chose LikeWise, because, at the time, it was the only free option that also worked reasonably-well with our large and complex Active Directory implementation. Unfortunately, not all of the makers of software that runs on the UNIX hosts seem to have been keeping up on the whole "AD-integration within UNIX operating environment" front.

My latest pain in the ass, in this arena, is Veritas NetBackup. While Symantec likes to tout the value of NetBackup Access Control (NBAC) in a multi-administrator environment - particularly one where different administrators may have radically different NetBackup skill sets or other differentiating factors - using it in a mixed-platform environment is kind of sucktackular to set up. While modern UNIX systems have the PAM framework to make writing an application's authentication framework relatively trivial, Symantec seems to still be stuck in the pre-PAM era. NBAC's group lookup components appear to still rely on direct consultation of a server's locally-maintained group files rather than just making a call to the host OS's authentication frameworks.

When I discovered this problem, I opened a support case with Symantec. Unfortunately, their response was "set up a Windows-based authentication broker". My NetBackup environment is almost entirely RedHat-based (actually, unless/until we implement BareMetal Restore (BMR) or other backup modules that require specific OSes be added into the mix, it is entirely RedHat-based). The idea of having to build a Windows server just to act as an authentication broker struck me as a rather stupid way to go about things. It adds yet another server to my environment and, unless I cluster that server, it introduces a single point of failure into an otherwise fairly resilient NetBackup design. I'd designed my NetBackup environment with a virtualized master server (with DRS and SRM supporting it) and multiple media servers for both throughput and redundancy.

We already use LikeWise Open to provide AD-based user and group management services for our Linux and Solaris hosts. When I was first running NetBackup through my engineering process, using the old Java auth.conf method for login management worked like a champ. Java auth.conf-based systems just assume that any users trying to access the Java UI are users that are managed through /etc/passwd. All you have to do is add the requisite user/rights entries into the auth.conf file and Java treats AD-provided users the same as it treats locally-managed users. Because of this, I suspected that I could work around Symantec's authorization coding lameness.
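
For reference, auth.conf entries are about as simple as config files get - something along these lines, where "backupadmin" stands in for whichever AD-provided account you're granting rights to (treat the exact syntax as an assumption and check the comments in the shipped auth.conf):

   backupadmin ADMIN=ALL JBP=ALL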

After a bit of playing around with NBAC, I discovered that, so long as the UNIX group I wanted to map rights to existed in /etc/group, NBAC would see it as a valid, mappable "UNIX PWD" group. I tested by seeing if it would at least let me map the UNIX "wheel" group to one of the NBAC privilege groups. Conversely, even if I could look up a group via getent, NBAC would tell me it was an invalid group if it didn't exist in /etc/group. Having already verified that a group's presence in /etc/group allowed NBAC to use it, I proceeded to use getent to copy my NetBackup-related groups out of Active Directory and into my /etc/group file (all you have to do is a quick `getent group [GROUPNAME] >> /etc/group` and you've populated your /etc/group file).

Unfortunately, I didn't quite have the full groups picture. When I logged in using my AD credentials, I didn't have any of the expected mapped privileges. I remembered that I'd explicitly emptied the userids from the group entries I'd added to my /etc/group file (I'd actually sed'ed the getent output to do it ...can't remember why, at this point - probably just anticipating the issue of not including userids in the /etc/group file entries). So, I logged out of the Java UI and reran my getent's - this time leaving the userids in place. I logged back into the Java UI and this time I had my mapped privileges. Eureka.

Still, I wasn't quite done. I knew that, if I was going to roll this solution into production, I'd have to cron-out a job to keep the /etc/group file up to date with the changing AD group memberships. I'd also noticed, in the getent output, that only my userid was on the group line and not every member of the group. Ultimately, I tracked it down to LikeWise not doing full group enumeration by default. So, I was going to have to force LikeWise to enumerate the group's membership before running my getent's.

I proceeded to dig around in /opt/likewise/bin for likely candidates for forcing the enumeration. After trying several lw*group* commands, I found that doing a `lw-find-group-by-name [ADGROUP]` did the trick. Once that was run, my getent's produced fully-populated entries in my /etc/group file. I was then able to map rights to various AD groups and wrote a cron script to take care of keeping my /etc/group file in sync with Active Directory.
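
A minimal sketch of that kind of sync job looks something like the following (the group names are placeholders and the /etc/group rewrite is deliberately simplistic - it's illustrative, not the exact script I deployed):

   #!/bin/sh
   # Refresh AD-managed NetBackup groups in /etc/group
   NBU_GROUPS="nbu_admins nbu_operators"
   for GROUP in ${NBU_GROUPS}
   do
      # Force LikeWise to fully enumerate the group's membership...
      /opt/likewise/bin/lw-find-group-by-name "${GROUP}" > /dev/null
      # ...then replace any existing copy of the group line with a fresh one
      ENTRY=$(getent group "${GROUP}")
      if [ -n "${ENTRY}" ]
      then
         grep -v "^${GROUP}:" /etc/group > /etc/group.new
         echo "${ENTRY}" >> /etc/group.new
         cat /etc/group.new > /etc/group && rm -f /etc/group.new
      fi
   done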

In other words, I was able to get NBAC to work with Active Directory in an all-RedHat environment, with no need to set up a Windows server just to be an authentication broker. Overall, I was able to create a much lighter-weight, portable solution.

Friday, July 8, 2011

CLARiiON Report Data Verification

Earlier this year, the organization I work for decided to put into production an enterprise-oriented storage resource management (SRM) system. The tool we bought is actually pretty cool. We install collectors into each of our major data centers and they pull storage utilization data off of all of our storage arrays, SAN switches and storage clients (you know: the Windows and UNIX boxes that use up all that array-based storage). Then, all those collectors pump out the collected data to a reporting server at our main data center. The reporting server is capable of producing all kinds of nifty/pretty reports: configuration snapshots, performance reports, trending reports, utilization profiles, etc.

As cool as all this is, you have the essential problem of "how do I know that the data in all those pretty reports is actually accurate?" Ten or fifteen years ago, when array-based storage was fairly new and storage was still the realm of systems administrators with coding skills, you'd ask your nearest scruffy misanthrope, "could you verify the numbers on this report," and get an answer back within a few hours (and then within minutes each subsequent time you asked). Unfortunately, in the modern, GUI-driven world, asking your storage guys to verify numbers can be like pulling teeth. Many modern storage guys aren't really coders and frequently don't know the quick and easy way to get you hard numbers out of the devices they manage. In some cases, you may watch them cut and paste from the individual array's management web UIs into something like Microsoft Calculator. So, you'll have to wait and, often times, you'll have to continually prod them for the data because it's such a pain in the ass for them to produce.

With our SRM rollout, I found myself in just such a situation. Fortunately, I've been doing UNIX system administration for the best part of 20 years and, therefore, am rather familiar with scripting. I frequently wish I was able to code in better reporting languages, but I just don't have the time to keep my "real" coding skills up to par. I'm also not terribly patient. So, after waiting a couple weeks for our storage guys to get me the numbers I'd asked for, I said to myself, "screw it: there's gotta be a quicker/better way."

In the case of our CLARiiONs, that better way was to use the NaviCLI (or, these days, the NaviSECCLI). This is a tool set that has been around a looooooong time, in one form or another, and has been available for pretty much any OS that you might attach to a CLARiiON as a storage client. These days, it's a TCP/IP-based commandline tool - prior to NaviCLI, you either had platform-specific tools (IRIX circa 1997 had a CLI-based tool that did queries through the SCSI bus to the array) or you logged directly into the array's RS232 port and used its onboard tools (hopefully, you had a terminal or terminal program that allowed you to capture output) ...but I digress.

If you own EMC equipment, you've hopefully got maintenance contracts that give you rights to download tools and utilities from the EMC support site. NaviCLI is one such tool. Once you install it, you have a nifty little command-line tool that you can wrap inside of scripts. You can write these scripts to do both provisioning tasks and reporting tasks. My use, in this case, was reporting.

The SRM we bought came with a number of canned-reports - including ones for CLARiiON devices. Unfortunately, the numbers we were getting from our SRM were indicating that we only had about 77TiB on one of our arrays when the EMC order sheets said we should have had about 102TiB. That's a bit of a discrepancy. I was able to wrap some NaviCLI commands into a couple scripts (one that reported on RAID-group capacity and one that reported physical and logical disk capacities [ed.: please note that these scripts are meant to be illustrative of what you can do, but aren't really something you'd want to have as the nexus of your enterprise-reporting strategy. They're slow to run, particularly on larger arrays]) and verify that the 77TiB was sort of right and that the 102TiB was also sorta right. The group capacity script basically just spits out two numbers - total raw capacity and total capacity allocatable to clients (without reporting on how much of either is already allocated to clients). The disk capacity script reports how the disks are organized (e.g., RAID1, RAID5, Spare, etc.) - printing total number of disks in each configuration category and how much raw capacity that represented. Basically, the SRM tool was reporting the maximum number of blocks that were configured into RAID groups, not the total raw physical blocks in the array that we'd thought it was supposed to report.
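
The RAID-group capacity script amounted to something like the sketch below. The getrg output labels and the 512-byte block size are assumptions you should verify against your own array's output, and it also assumes you've stored credentials in a Navisphere security file so none appear on the command line:

   #!/bin/sh
   # Sum raw and allocatable capacity across all RAID groups on one SP
   SP=${1:?"need an SP address or hostname"}
   naviseccli -h "${SP}" getrg |
      awk -F: '
         /Raw Capacity \(Blocks\)/     { raw += $2 }
         /Logical Capacity \(Blocks\)/ { logical += $2 }
         END {
            # 512-byte blocks converted to TiB
            printf "Raw capacity:         %.2f TiB\n", raw * 512 / (1024 ^ 4)
            printf "Allocatable capacity: %.2f TiB\n", logical * 512 / (1024 ^ 4)
         }'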

Having these numbers in hand allowed us to tear apart the SRM's database queries and tables so that we could see what information it was grabbing, how it was storing/organizing it and how to improve on the vendor-supplied standard reports. Mostly, it consisted of changing the titles of some existing fields and adding some fields to the final report.

Yeah, all of this begs the question "what was the value of buying an SRM when you had to reverse-engineer it to make the presented data meaningful?" To be honest, "I dunno." I guess, at the very least, we bought a framework through which we could put together pretty reports and ones that were more specifically meaningful to us (though, to be honest, I'm a little surprised that we're the only customers of the SRM vendor to have found the canned-reports to be "sadly lacking"). It also gave me an opportunity to give our storage guys a better idea of the powerful tools they had available to them if only they were willing to dabble at the command line (even on Windows).

Still the vendor did provide a technical resource to help us get things sorted out faster than we might have done without that assistance. So, I guess that's something?

Wednesday, July 6, 2011

Show Me the Boot Info!

For crusty old systems administrators (such as yours truly), the modern Linux boot sequence can be a touch annoying. I mean, the graphical boot system is pretty and all, but, I absolutely hate having to continually click on buttons just to see the boot details. And, while I know that some Linux distributions give you the option of viewing the boot details by either disabling the graphical boot system completely (i.e., nuke out the "rhgb" option from your grub.conf's kernel line) or switching to an alternate virtual console configured to show boot messages, that's just kind of a suck solution. Besides, if your default Linux build is like the one my company uses, you don't even have the alternate VCs as an option.

Now, this is a RedHat-centric blog, since that's what we use at my place of work (we've a few devices that use embedded SuSE, but, I probably void the service agreement any time I directly access the shell on those!). So, my "solution" is going to be expressed in terms of RedHat (and, by extension, CentOS, Scientific Linux, Fedora and a few others). For many things in RedHat, they give you nifty files in /etc/sysconfig that allow you to customize behaviors. So, I'd made the silly assumption that there'd be an /etc/sysconfig/rhgb type of file. No such luck. So, I dug around in the init scripts (grep -li is great for this, by the way) to see if there were any mentions of rhgb. There was. Well, there was mention of rhgb-client in /etc/init.d/functions.

Unfortunately, even though our standard build seems to include manual pages for every installed component, I couldn't find a manual page for rhgb-client (or an infodoc, for that matter). The best I was able to find was a /usr/share/doc/rhgb-${VERSION}/HOW_IT_WORKS file (I'm assuming that ${VERSION} is consistent with the version of the RHGB RPM installed - it seemed to be). While an interesting read, it's not exactly the best, most exhaustive document I've ever read. It's about what you'd expect from a typical README file, I guess. Still, it didn't display what, if any, arguments the rhgb-client would take.

Not wanting to do anything too calamitous, I called `rhgb-client --help` as a non-privileged user. I was gladdened to see that it didn't give me one of those annoying "you must be root to run this command" errors. It also gave some usage details:

rhgb-client --help
Usage: rhgb-client [OPTION...]
  -u, --update=STRING      Update a service's status
  -d, --details=STRING     Show the details page (yes/no).
  -p, --ping               See if the server is alive
  -q, --quit               Tells the server to quit
  -s, --sysinit            Inform the server that we've finished rc.sysinit

Help options:
  -?, --help               Show this help message
  --usage                  Display brief usage message

I'd hoped that, since /etc/init.d/functions had shown an "--update" argument, rhgb-client might take other arguments (and had correctly assumed one of them would be "--help"). So, armed with the usage output above, I updated my /etc/init.d/functions script to add "--details=yes" and rebooted. Lo and behold: I get the graphical boot session but get to see all the detailed boot messages, too! Hurrah.
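
If you want to make the same tweak, something along these lines gets you there (hedged: the exact rhgb-client invocation inside /etc/init.d/functions varies by release, so eyeball the result before rebooting, and keep a backup since an RPM update will happily clobber your change):

   cp -p /etc/init.d/functions /etc/init.d/functions.orig
   sed -i 's/rhgb-client --update/rhgb-client --details=yes --update/' /etc/init.d/functions
   grep rhgb-client /etc/init.d/functions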

Still, it seemed odd that, since the RHGB components are (sorta) configurable, there wasn't a file in /etc/sysconfig to set the requisite options. I hate having to hack config files that are likely to get overwritten the next time the associated RPM gets updated. I also figure that I can't be the only person out there that wants the graphical boot system and details. So, why haven't the RHGB maintainers fixed this (and, yes, I realize that Linux is a community thing and I'm free to contribute fixes to it - I'd just hoped that someone like RedHat or SuSE would have had enough complaints from commercial UNIX converts to have already done it for me)? Oh well, one of these days, I suppose.

Monday, May 9, 2011

`iptables` for the Obsessive Compulsive

While I probably don't meet the clinical definition for being obsessive compulsive, I do tend to like to keep things highly organized. This is reflected heavily in the way I like to manage computer systems.
My employer, as part of the default security posture for production Linux systems, requires the use of iptables. If you've ever looked at an iptables file, they tend to be a spaghetti of arcana. Most tables start out fairly basic and might look something like:
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 587 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 993 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
This would probably be typical of a LAMP server that's also providing DNS and mail services. As it stands, it's fairly manageable and easy to follow if you've got even a slight familiarity with iptables or even firewalls in general.
Where iptables starts to become unfun is when you start to get fancy with it. I started going down this "unfun" path when I put in place a defense against SSHD brute-forcers. I had to add a group of rules just to handle what, above, was done with a single line. Initially, this started to "spaghettify" my iptables configuration. It ended up making the above look like:
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name ssh_safe --rsource
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 300 --hitcount 3 --name ssh_safe --rsource -j LOG --log-prefix "SSH CONN. REJECT: "
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 300 --hitcount 3 --name ssh_safe --rsource -j DROP
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 587 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 993 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
Not quite as straight-forward any more. Not "tidy", as I tend to refer to these kinds of things. It gave me that "tick" I get whenever I see something messy. So, how to fix it? Well, my first step was to use iptables' comment module. That allowed me to make the configuration a bit more self-documenting (if you ever look at my shell scripts or my "real" programming, they're littered with comments - makes it easier to go back and remember what the hell you did and why). However, it still "wasn't quite right". So, I decided, "I'll dump all of those SSH-related rules into a single rule group" and then reference that group from the main iptables policy:
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -p tcp -m comment --comment "Forward to SSH attack-handler" -m tcp --dport 22 -j ssh-defense
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 587 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 993 -j ACCEPT
-A ssh-defense -p tcp -m comment --comment "SSH: track" -m tcp --dport 22 -m state --state NEW -m recent --set --name ssh_safe --rsource
-A ssh-defense -p tcp -m comment --comment "SSH: attack-log" -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 300 --hitcount 3 --name ssh_safe --rsource -j LOG --log-prefix "SSH CONN. REJECT: "
-A ssh-defense -p tcp -m comment --comment "SSH: attack-block" -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 300 --hitcount 3 --name ssh_safe --rsource -j DROP
-A ssh-defense -p tcp -m comment --comment "SSH: accept" -m tcp --dport 22 -j ACCEPT
Ok, so the above doesn't really look any less spaghetti-like. That's ok. This isn't exactly where we de-spaghettify things. The above is mostly meant to be machine-read. If you want to see the difference in things, use the `iptables -L` command. Or, to really see the difference, issue `iptables -L INPUT ; iptables -L ssh-defense`:
Chain INPUT (policy ACCEPT)
   target     prot opt source               destination
   ACCEPT     all  --  anywhere             anywhere
   DROP       all  --  anywhere             loopback/8
   ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
   ACCEPT     udp  --  anywhere             anywhere            udp dpt:domain
   ssh-defense  tcp  --  anywhere             anywhere            /* Forward to SSH attack-handler */ tcp dpt:ssh
   ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:domain
   ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:http
   ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:https
   ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:smtp
   ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:submission
   ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:imaps

   Chain ssh-defense (1 references)
   target     prot opt source               destination
              tcp  --  anywhere             anywhere            /* SSH: track */ tcp dpt:ssh state NEW recent: SET name: ssh_safe side: source
   LOG        tcp  --  anywhere             anywhere            /* SSH: attack-log */ tcp dpt:ssh state NEW recent: UPDATE seconds: 300 hit_count: 3 name: ssh_safe side: source LOG level warning prefix `SSH CONN. REJECT: '
   DROP       tcp  --  anywhere             anywhere            /* SSH: attack-block */ tcp dpt:ssh state NEW recent: UPDATE seconds: 300 hit_count: 3 name: ssh_safe side: source
   ACCEPT     tcp  --  anywhere             anywhere            /* SSH: accept */ tcp dpt:ssh
Even if you don't find the above any more self-documenting or easier to handle (in a wide xterm, it looks much better), it does have one other value: it makes it harder for people to muck up whatever flow or readability that your iptables configuration has. Because you've externalized a group of directives, someone's going to have to go out of their way to intersperse random rules into your iptables configuration. If it's just your own server, this probably has little value (unless you've got MPD). However, if you have shared administration duties, it can be a sanity-saver.

Thursday, May 5, 2011

Linux Active Directory Integration and PAM

Previously, I've written about using LikeWise to provide Active Directory integration to Linux and Solaris hosts. One of the down sides of LikeWise (and several other similar integration tools) is that it tends to make it such that, if a user has an account in Active Directory, they can log into the UNIX or Linux boxes you've bound to your domain. In fact, while walking someone through setting up LikeWise with the automated configuration scripts I'd written, that person asked, "you mean anyone with an AD account can log in?"

Now, this had occurred to me when I was testing the package for the engineer who was productizing LikeWise for our enterprise build. But, it hadn't really been a priority, at the time. Unfortunately, when someone who isn't necessarily a "security first" kind of person hits you with that question/observation, you know that the folks for whom security is more of a "Job #1" are eventually going to come for you (even if you weren't the one who was responsible for engineering the solution). Besides, I had other priorities to take care of.

This week was a semi-slack week at work. There was some kind of organizational get-together going on that had most of the IT folks out of town discussing global information technology strategies. Fortunately, I'd not had to take part in that event. So, I've spent the week revisiting some stuff I'd rolled out (or been part of the rollout of) but wasn't completely happy with. The "AD integration giving everyone access" thing was one of them. So, I began by consulting the almighty Google. When I'd found stuff that seemed promising, I fired up a test VM and started trying it out.

Now, SSH (and several other services) weren't really a problem. Many applications allow you to internally regulate who can use the service. For example, with OpenSSH, you can modify the sshd_config file to explicitly define which users and groups can and cannot access your box through that service (for those of you who hit this page looking for tips, do a `man sshd_config` and grep for AllowUsers and AllowGroups for more in-depth information). Unfortunately, it's predictable enough to figure that people that are gonna whine about AD integration giving away the farm are gonna bitch if you tell them they have to modify the configuration of each and every service they want to protect. No, most people want to be able to go to one place and take care of things with one action or one set of consistent actions. I can't blame them: I feel the same way. Everyone wants things done easily. Part of "easily" generally implies "consistently" and/or "in one place".
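
For the sshd_config case, that restriction is as simple as a single directive (the group names below are placeholders):

   AllowGroups wheel unix_admins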

Fortunately, any good UNIX or Linux implementation leverages the Pluggable Authentication Modules system (aka. PAM). There's about a bazillion different PAM modules out there that allow you to configure any given service's authentication to do or test a similar variety of attributes. My assumption for solving this particular issue was that, while there might be dozens or hundreds of groups (and thousands of users) in an Active Directory forest, one would only want to grant a very few groups access to an AD-bound UNIX/Linux host. So, I wasn't looking for something that made it easy to grant lots of groups access in one swell-foop. In fact, I was kind of looking for things that made that not an easy thing to do (after all, why lock stuff down if you're just going to blast it back open, again?). I was also looking for something that I could fairly reliably find on generic PAM implementations. The pam_succeed_if module is just about tailor-made for those requirements.

LikeWise (and the other AD integration methods) add entries into your PAM configuration to allow users approved by those authentication subsystems to log in, pretty much, unconditionally. Unfortunately, those PAM modules don't often include methods for controlling which users are able to log in once their AD authentication has succeeded. Since PAM uses a stackable module framework, you can insert access controls earlier in the stack to cause a user's access to fail out before the AD module would otherwise grant it. If you wanted to allow users in AD_Group1 and AD_Group2 to log in, but not other groups, you'd modify your PAM stack to insert the controls ahead of the AD allow module.

     account    [default=ignore success=2] pam_succeed_if.so user ingroup AD_Group1 quiet_success
     account    [default=ignore success=1] pam_succeed_if.so user ingroup AD_Group2 quiet_success
     account    [default=bad success=ignore] pam_succeed_if.so user ingroup wheel quiet_success
     account    sufficient    /lib/security/pam_lsass.so

The above is processed such that, if a user is a member of the AD-managed group "AD_Group1" or "AD_Group2", the stack skips straight ahead to the pam_lsass module (which, for an AD-provided user, will succeed). If the user isn't a member of either of those groups, testing falls through to the next group check - is the user a member of the group wheel (if yes, fall through to the pam_lsass module; if no, then there's a failure and the user's access is denied). The downside of using this particular PAM module is that it's only available to you on *N*X systems with a plethora of PAM modules. This is true for many Linux releases - and I know it to be part of RedHat-related releases - but probably won't be available on less PAM-rich *N*X systems (yet one more reason to cast Solaris on the dung-heap, frankly). If your particular *N*X system doesn't have it, you can probably find the source code for it and build yourself the requisite module for your OS.

Monday, May 2, 2011

Vanity Linux Servers and SSH Brute-Forcers

Let me start by saying that, for years (think basically since OpenSSH became available), I have run my personal, public-facing SSH services relatively locked-down. No matter what the default security posture for the application was - whether compiled from source or using the host operating system's defaults - the first thing I did was to ensure that PermitRootLogin was set to "no". I used to allow tunneled clear-text passwords (way back in the day), but even that I've habitually disabled for (probably) a decade, now. In other words, if you wanted to SSH into one of my systems, you had to do so as a regular user and you had to do it using key-based logins. Even if you did manage to break in as one of those non-privileged users, I used access controls to limit which users could elevate privileges to root.

Now, I never went as far as changing the ports my SSH servers listened on. This always seemed kind of pointless. I'm sure there's plenty of script kiddies whose cracking-scripts don't look for services running on alternate ports, but I've never found much value relying on "security by obscurity".

At any rate, I figured this was enough to keep me basically safe. And, to date, it seems to have. That said, I do periodically get annoyed at seeing my system logs filled with the "Too many authentication failures for root" and "POSSIBLE BREAK-IN ATTEMPT" messages. However, most of the solutions to such problems seemed to be log-scrapers that then blacklisted the attack sources. As I've indicated in prior posts, I'm lazy. Going through the effort of setting up log-scrapers and tying them to blacklisting scripts was more effort than I felt necessary to address something that seemed, primarily to be only a nuisance. So, I never bothered.

I've also been a longtime user of tools like PortSentry (and its equivalents). So, I usually picked up attacks before they got terribly far. Unfortunately, as Linux has become more popular, there seems to be a lot more service-specific attacks and less broad-spectrum attacks (attacks preceded by probing of all possible entry points). Net result: I'm seeing more of the nuisance alerts in my logs.

Still, there's that laziness thing. Fortunately, I'd recently sat through a RedHat training class. And, while I was absolutely floored when the instructor told me that even RHEL 6 still ships with PermitRootLogin set to "yes", he let me know that recent RHEL patch levels included iptables modules that made things like fail2ban somewhat redundant. Unfortunately, he didn't go into any further detail. So, I had to go and dig around for how to do it.

Note: previously, I'd never really bothered with using iptables. I mean, for services that don't require Internet-at-large access, I'd always used things like TCPWrappers or configuring to only listen on loopback or domain sockets to prevent exposing the services. Thus, with my systems, the only Internet-reachable ports were the ones that had to be. There never really seemed to be a point in enabling a local firewall when the system wasn't acting as a gateway to other systems. However, the possibility of leveraging iptables in a useful way kind of changed all that.

Point of honesty, here: the other reason I'd never bothered with iptables was that its syntax was a tad arcane. While I'd once bothered to learn the syntax for ipfilter - a firewall solution with similarly arcane syntax - so that I could use a Solaris-based system as a firewall for my house, converting my ipfilter knowledge to iptables didn't seem worth the effort.

So, I decided to dig into it. I read through manual pages. I looked at websites. I dug through my Linux boxes netfilter directories to see if I could find the relevant iptables modules and see if they were internally documented. Initially, I thought the iptables module my instructor had been referring to was the ipt_limit module. Reading up on it, the ipt_limit module looked kind of nifty. So, I started playing around with it. As I played with it (and dug around online), I found there was an even better iptables module, ipt_recent. I now assume the better module was the one he was referring to. At any rate, dinking with both, I eventually set about getting things to a state I liked.

First thing I did, when setting up iptables, was decide to be a nazi about my default security stance. That was accommodated with one simple rule: `iptables -P INPUT DROP`. If you start up iptables with no rules, you get the equivalent default INPUT filter rule of `iptables -P INPUT ACCEPT`. I'd seen some documentation where people like to set theirs to `iptables -P INPUT REJECT`. I like "DROP" better than "REJECT" - probably because it suits the more dickish side of me. I mean, if someone's going to chew up my system's resources by probing me or attempting to break in, why should I do them the favor of telling their TCP stack to end the connection immediately? Screw that: let their TCP stack send out SYNs and be ignored. Depending on whether they've cranked down their TCP stack, those unanswered SYNs will mean that they end up with a bunch of connection attempts stuck in a wait sequence. Polite TCP/IP behavior says that, when you send out a SYN, you wait for the answering SYN/ACK for some predetermined period before you consider the attempt to be failed and execute your TCP/IP abort and cleanup sequence. That can be several tens of seconds to a few hours. During that interval, the attack source has resources tied up. If I sent a REJECT, they could go into immediate cleanup, meaning they can more quickly move onto their next attack with all their system resources freed up.

The down side of setting your default policy to either REJECT or DROP is that it applies to all your interfaces. So, not only will your public-facing network connectivity cease, so will your loopback traffic. Depending on how tightly you want to secure your system, you could bother to iterate all of the loopback exceptions. Most people will probably find it sufficient to simply set up the rule `iptables -A INPUT -i lo -j ACCEPT`. Just bear in mind that more wiley attackers can spoof things to make it appear to come through loopback and take advantage of that blanket exception to your DROP or REJECT rules (though, this can be mitigated by setting up rules to block loopback traffic that appears on your "real" interfaces - something like `-A INPUT -i eth0 -d 127.0.0.0/8 -j DROP` will do it).

The next thing you'll want to bear in mind with the default REJECT or DROP is that, without further fine-tuning, it will apply to each and every packet hitting that filterset. Some TCP/IP connections start on one port, but then get moved off to or involve other ports. If that happens, your connection's not gonna quite work right. One way to work around that is to use a state table to manage established connections or related connections. Use a rule like `iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT` to accommodate that.

At this point you're ready to start punching the service-specific holes in your default-deny firewall. On a hobbyist or vanity type system, you might be running things like DNS, HTTP(S), SMTP, and IMAP. That will look like:

-A INPUT -p udp -m udp --dport 53 -j ACCEPT       # DNS via UDP (typically used for individual DNS lookups)
-A INPUT -p tcp -m tcp --dport 53 -j ACCEPT       # DNS via TCP (typically used for large zone transfers)
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT       # HTTP
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT       # HTTP over SSL
-A INPUT -p tcp -m tcp --dport 25 -j ACCEPT       # SMTP
-A INPUT -p tcp -m tcp --dport 587 -j ACCEPT       # SMTP submission via STARTTLS
-A INPUT -p tcp -m tcp --dport 993 -j ACCEPT       # IMAPv4 + SSL

What the ipt_limit module gets you is the ability to rate-limit connection attempts to a service. This can be as simple as ensuring that only "so many connections" per second are allowed access to the service, limiting the number of connections per time interval per source or outright blacklisting a source that too frequently connects.

Doing the first can be done within the SSH daemon and/or TCP Wrappers (or, for services run through xinetd, through your xinetd config). The downside of this is that, since it's not distinguishing sources, if you're being attacked, you won't be able to get in since the overall number of connections will have been exceeded. Generally, potentially allowing others to lock you out of your own system is considered to be "not a Good Thing™ to do". But, if you want to risk it, add a rule that looks something like `-A INPUT -m limit --limit 3/minute -m tcp -p tcp --dport 22 -j ACCEPT` to your iptables configuration and be on about your way (using the ipt_limit module).

If you want to be a bit more targeted in your approach, the ipt_recent module can be leveraged. I used a set of rules like the following:
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name sshtrack --rsource
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 3 --name sshtrack --rsource -j LOG --log-prefix "ssh rejection: "
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 3 --name sshtrack --rsource -j DROP
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
What the above four rules do is:
  • For each new connection attempt to port 22, add the remote source address to the "sshtrack" tracking table
  • If this is the third such new connection within 60 seconds, update the remote source address entry in the tracking table and log rejection action
  • If this is the third such new connection within 60 seconds, update the remote source address entry in the tracking table and DROP the connection
  • Otherwise, accept the new connection.
I could have chosen to simply "rcheck" rather than "update" the "sshtrack" table. However, by using "update", it essentially resets the time to live from the last connect attempt packet to whatever might be the next attempt. This way, you get the full sixty second window rather than (60 - ConnectInterval). If it becomes apparent that attackers start to use slow attacks to get past the rule, one can up the "seconds" from 60 to some other value. I chose 60 as a start. It might be reasonable to up it to 300 or even 900 since it's unlikely that I'm going to want to start more than three SSH sessions to the box within a 15 minute interval.

As a bit of reference: on RHEL-based systems, you can check what iptables modules are available by listing out '/usr/include/linux/netfilter_ipv4/ipt_*'. You can then (for most) use `iptables -m [MODULE] --help` to show you the options for a given module. For example:
# iptables -m recent --help | sed -n '/^recent v.*options:/,$p'
     recent v1.3.5 options:
     [!] --set                       Add source address to list, always matches.
     [!] --rcheck                    Match if source address in list.
     [!] --update                    Match if source address in list, also update last-seen time.
     [!] --remove                    Match if source address in list, also removes that address from list.
         --seconds seconds           For check and update commands above.
                                     Specifies that the match will only occur if source address last seen within
                                     the last 'seconds' seconds.
         --hitcount hits             For check and update commands above.
                                     Specifies that the match will only occur if source address seen hits times.
                                     May be used in conjunction with the seconds option.
         --rttl                      For check and update commands above.
                                     Specifies that the match will only occur if the source address and the TTL
                                     match between this packet and the one which was set.
                                     Useful if you have problems with people spoofing their source address in order
                                     to DoS you via this module.
         --name name                 Name of the recent list to be used.  DEFAULT used if none given.
         --rsource                   Match/Save the source address of each packet in the recent list table (default).
         --rdest                     Match/Save the destination address of each packet in the recent list table.
     ipt_recent v0.3.1: Stephen Frost .  http://snowman.net/projects/ipt_recent/
That gives you the options for the "recent" iptables module and a URL for further information lookup.

Friday, April 22, 2011

Automating Yum Setup

Recently, as part of my employer's "employee development" programs, I took Red Hat's RH255 class so that I could prep for getting my RHCSA and RHCE (my opinions on the experience are grist for another post - and I don't know, yet, whether that post should be in this blog or one of my personal blogs). The RH255 class I took was based on RHEL 6. In the US, they moved to training based on RHEL 6.0 about six months ago. One of the interesting things (I thought) my instructor said was that RHEL 6.0 included a function that tried to automatically configure `yum` to take mounted CDROMs and ISOs and treat them as installation repositories.

I may have misheard or misinterpreted what he said. It may also be a case that, since my instructor is in the RHEL 6.1 beta program, he was referring to a feature in RHEL 6.1 and not RHEL 6.0. Whatever the case may be, popping an RHEL 6.0 DVD (or ISO) into an RHEL 6.0 machine does not (yet) cause that DVD to be automatically included in `yum`'s repository search. Fortunately, the RHEL 6.x media has been set up (prior RHELs may also have been, it's just been so long since I've built an RHEL 5 system from other than an automated-build system) so that it's easy enough to set up the DVD or ISO for inclusion in `yum`'s searches. All of the yum repository metadata is already on the media. So, you needn't muck about with doing the createrepo stuff (and the time that running createrepo against a multi-gigabyte DVD can take). Below is the script I wrote to take this already-present data and make it available to `yum`:

#!/bin/sh
MNTDIR=${1:-UNDEF}
# Make sure we passed a location to check
if [ "${MNTDIR}" = "UNDEF" ]
then
   echo "Usage: $0 <ISO Mount Location>"
   exit 1
fi
# Make sure it's actually a directory
if [ ! -d "${MNTDIR}" ]
then
   echo "No such directory"
   exit 1
fi
# ID directories containing repository data
REPODIRS=$(find "${MNTDIR}" -type d -name repodata)
if [ -z "${REPODIRS}" ]
then
   echo "No repository data found"
   exit 1
fi
# Emit a repo stanza for each repodata directory found
for DIR in ${REPODIRS}
do
   DIRNAME=$(echo ${DIR} | sed -e 's#'${MNTDIR}'/##' -e 's#/repodata##')
   BASEURL=$(dirname ${DIR})
   echo "[${DIRNAME}]"
   echo "name=${DIRNAME}"
   echo "baseurl=file://${BASEURL}"
   echo "enabled=1"
   echo
done

The above script takes a directory location (presumably where you mounted your DVD or ISO) and produces yum repository configuration output. I opted to have it output as a capturable stream rather than as a file. That way, I'd have the option of either capturing to a temp location or writing directly into /etc/yum.repos.d. Doing so gave more flexibility in what to name a file (so that I didn't clobber any existing files). Granted, I could have parameterized the name and location of the output file, but chose not to. If you want to use the above and would rather have the output go directly to an output file in /etc/yum.repos.d (or elsewhere), it's a trivial modification to the above. Have at it.
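
Usage ends up being something like the following (the script and repo-file names here are just placeholders - call yours whatever you like):

   mount -o loop /path/to/rhel-server-6.0-x86_64-dvd.iso /media/rheldvd
   sh mk-media-repo.sh /media/rheldvd > /etc/yum.repos.d/rhel-dvd.repo
   yum repolist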

At any rate, once you run the above and take the resultant output and move it into a file in /etc/yum.repos.d, subsequent invocations of the `yum` command should now include your DVD or ISO. If it fails to do so (or errors result) it's probably because the new .repo file contains section IDs that collide with existing ones. Just edit your new file to eliminate any collisions.

While I've done some basic error-checking in the script (specifically: ensuring the user passed a location and that the location is a valid directory), I didn't write it to gracefully handle repository directories that have spaces in their path-names. It's fairly trivial to fix the script to accommodate that. However, it's probably even easier to just make sure you don't mount your DVD or ISO with spaces in the path-names. I'm lazy, so, I'm choosing the latter option and avoiding the coding exercise. If you find you need to have spaces in your path-names and want to use my script, feel free to make the necessary modifications to accommodate your use of space-containing path names.

Tuesday, April 19, 2011

Handling Resized LUNs on RHEL 5

Perhaps it's a reflection of it being nearly two years since touching a non-Linux *NIX operating system, but I feel like I recall Solaris, Irix, HP/UX and AIX all handling resized LUNs far more gracefully than Linux currently seems to. I seem to recall that, if you resized a LUN on your storage array and presented it to your *N*X host, it was fairly non-disruptive to make that storage available to the filesystems residing on top of the LUN. Maybe I'm mis-remembering, but I can't help but feel that Linux is still showing its lack of Enterprise-level maturity in this area.

Don't get me wrong, you can get Linux to make use of re-sized LUNs without having to reboot the box. So, that's definitely a good start. But, thus far, I haven't managed to tease loose a method that doesn't require me to unmount a filesystem (and, if I'm using LVM or multipathd, having to disrupt them, as well).

That said, if you've ended up on this page, it's probably because you're trying to figure out "how do I get RedHat to make use of the extra space on the LUN my SAN administrator grew for me" and Google (et. al.) sent you here.

In sorting this issue out, it seems like the critically-deficient piece of the puzzle is the way in which Linux updates device geometry. As near as I can tell, it doesn't really notice geometry changes by itself, and, the tools available for making it see geometry changes aren't yet optimized for fully on-the-fly configuration changes. But, at least they do provide a method that saves you the several minutes that a physical host reboot can cost you.

In digging about and playing with my test system, what I've come up with is a workflow something like the following (a rough, command-level sketch appears after the list):

  1. Unmount any filesystems residing on the re-configured LUN (`umount`)
  2. Stop any logical volumes that are currently active on the LUN (`vgchange`)
  3. Nuke out any partitions that reference the last block of the LUN's previous geometry (`fdisk` for this)
  4. Tell linux to reread the geometry info for the LUN (`blockdev --rereadpt` for this)
  5. Re-create the previously-nuked partition, referencing the new ending-block  (typically `fdisk` - particularly if you have to do block offsets for your SAN solution - for this and add `kpartx` if you're using a multipathing solution)
  6. Grow any logical volume elements containing that re-created/grown partition (`pvresize` for this)
  7. Grow and restart any logical volumes containing that re-created/grown partition (`vgchange` and `lvresize` for this)
  8. Grow and remount any filesystems that were on the re-created/grown partition (`resize2fs` for this - unless you're using an incompatible filesystem-type - and then `mount`)
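
Here's that rough, command-level sketch. It assumes a hypothetical, single-pathed LUN at /dev/sdb whose only partition is an LVM physical volume in volume group "appvg", with logical volume "applv" carrying an ext3 filesystem mounted at /app - all of those names (and the lack of multipathing) are illustrative only:

   umount /app
   vgchange -a n appvg
   fdisk /dev/sdb                  # interactively delete the partition that ends at the old last block
   blockdev --rereadpt /dev/sdb
   fdisk /dev/sdb                  # interactively re-create the partition with the new ending block
   pvresize /dev/sdb1
   vgchange -a y appvg
   lvresize -l +100%FREE /dev/appvg/applv   # or an explicit "-L +<size>" if your LVM2 predates %FREE support
   resize2fs /dev/appvg/applv
   mount /app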

Omitted from the list above, but should be inferred by the clueful reader is "stop and restart any processes that use the grown-LUN" (The `fuser` command is really helpful for this).

Obviously, if you're using something other than LVM for your logical volume management (e.g., VxVM), the `vgchange`, `pvresize` and `lvresize` commands have to be replaced with the appropriate logical volume management system's equivalent commands.

At any rate, if anyone knows how I can call `blockdev --rereadpt` without needing to stop filesystems (etc.), please comment. I'm really trying to figure out the least disruptive way of accommodating resized LUNs and haven't quite got to where I think I should be able to get.

Tuesday, March 22, 2011

udev Abuse

I've probably mentioned before that I am a lazy systems administrator. So, I tend to like things to be "self-documenting" and I like things to be as consistent as possible across platforms. I particularly like when a command that's basically common between two operating system versions gives me all of the same kinds of information - particularly if it's information that helps me avoid running multiple other commands.

I've also probably mentioned that, while I've managed a number of different UNIX and UNIX-like operating systems, over the years, the bulk of that has been on Sun systems (not that I prefer Sun systems - I actually always preferred IRIX with AIX a close second). So, I'm used to the Sun way of doing things (and, no, I will never accept that as now being the "Oracle way").

As someone coming from a heavy-Solaris background, I got used to NIC devices being assigned names that reflected the vendor/driver/architecture of the NIC in question. The fact that I could have ten NICs from ten different vendors, each with their own set of capabilities, but all just show up with a NIC device name of ethX, under Linux, always drove me kind of nuts. Yes, I know that I can get the information from other tools (`ethtool`, `kudzu`, looking through "/sys/class/net", etc.) but why should I have to when a crappy OS like Solaris allows me to get all that kind of stuff just by typing `ifconfig -a`?

Fortunately, Linux does provide a way to "fix" this grievous lack of self-documenting output. You just have to mess with the udev device-naming rules. These rules are stored under "/etc/udev/rules.d". In my particular case, I had a system that was equipped with a pair of dual-ported 10Gbps Mellanox Ethernet cards, a pair of Broadcom NetXtreme 10Gbps Ethernet NICs and a quad-port Broadcom card with 1Gbps Ethernet NICs on it. Now, for what I was using the system for, I didn't particularly care about the 1Gbps NICs, but I did care about the 10Gbps NICs. I had specific plans for laying out my system. Even more importantly, once I turned the system over, I didn't want to be pestered by (less Linux-savvy) people about "which device is which kind of NIC." So, I improvised. I created my own rule file, "61-net_custom.rules", to make udev give more self-documenting (Solaris-esque) names to the 10Gbps NICs. Two simple rules:

DRIVER=="bnx2x", NAME="bnx%n"
DRIVER=="mlx4_en", NAME="mlxiv%n"

And my Broadcom 10Gbps NICs started showing up as bnxX devices and my Mellanox 10Gbps NICs started showing up as mlxivX devices in my `ifconfig -a` output. Well... I did have to tell udev to update itself so it would rename the devices, but you get the general idea. Unfortunately, Linux purists (not sure you can have such a thing, given how much of a mongrel Linux is) would probably whine about this. Furthering the misfortune is that, because Linux doesn't have standard driver-specific device naming for NICs (e.g., unlike Solaris where someone sees "ce0" and they know it's the first Cerdes 1Gbps Ethernet NIC in the system), the names I've chosen won't necessarily be inherently meaningful. Oh well, that's what a run-book is for, I suppose.
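
If you're wondering what the "tell udev to update itself" piece looks like, something along these lines should do it - treat it as a sketch, since the tooling differs between releases (RHEL 5 used `udevcontrol reload_rules`) and re-plugging the driver is only safe if the NICs in question aren't your only way into the box:

   udevadm control --reload-rules             # RHEL 6-era syntax
   modprobe -r mlx4_en && modprobe mlx4_en    # reload the driver so the rule fires and the devices get renamed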

Wednesday, March 9, 2011

Issues With LikeWise Open for UNIX/Linux Active Directory Integration

Recently it came up on management's radar that, "we're starting to get more UNIX systems out in our farms and we need some way to manage logins for those hosts in much the same way we do for our Windows hosts." Previous central login management was done through the traditional tools like NIS or Kerberos (but mostly, just relied on ever-expiring local password tables). In a security conscious environment, NIS really isn't a responsible choice, any longer. Standing up a Kerberos infrastructure just to support UNIX systems is kind of an inefficient use of resources - especially when you already have a serviceable Active Directory infrastructure out there.

Fortunately, in the UNIX world, particularly in the Open Systems (Linux, BSD, etc.) realm, there are a number of choices available for joining a system to Active Directory. Many modern UNIX operating systems include this kind of functionality through Winbind. However, the various UNIX vendors or packagers don't necessarily keep fully up to date with the Winbind version they ship in their OS (looking in your direction, here, Sun Oracle). So, if you're running or looking to run Active Directory 2008, the Winbind included with your UNIX may not be a workable solution. Even if your UNIX does include an up to date version of Winbind (but not the bleeding-edge ones that are part of the next-generation Samba project), it may not be up to the task if you have a particularly large or complex Active Directory namespace. In this case, you'll probably be stuck using a commercial offering (or, a commercial AD-integration product's "free" version).
We were kind of stuck in the latter boat. Our AD namespaces tend to be rather large and rather complex. So, we started experimenting with the free version of the LikeWise product, LikeWise Open. In the test labs, with the smaller and simpler Active Directory deployment that supports it, we ran into no issues in testing. However, when we tried to take it into production, we ran into some issues in environments that had "messy" Active Directory deployments.

Specifically, the error we were getting (and could never really find a workaround for in the LikeWise Open support forums) was the "LW_ERROR_ENUM_DOMAIN_TRUSTS_FAILED" error. The domain that was giving us fits was a large domain (tens of thousands of users and thousands more groups, server objects and other, miscellaneous AD objects) that consisted of many AD servers scattered around the globe. Further complicating matters was the fact that these ADs were members of multiple domains and therefore had cross-realm/domain-trust components, as well. Further complicating it is the fact that not all of these AD servers, particularly the ones that had trust relationships with other realms, fully agreed on what time it was.
On the down side, the problem and the lack of easily-Googleable solutions cast doubt as to whether we'd be able to use this product in our enterprise. On the plus side, it gave me a chance to do some troubleshooting. I'm one of those masochists that likes a good challenge and digging into the guts of things. So, I did a bunch of online research as well as poring through the LikeWise administration guides.

Older versions of LikeWise used to allow configuration-tweaking through an lsassd.conf file (indeed, other vendors, such as VMware, still do this). The latest iteration of LikeWise, unfortunately, does not. Instead, the makers of LikeWise have decided to implement a Windows-esque "registry" for their product. Dunno why plain text files don't work (or even XML files, for that matter) - probably just wanted to make things more familiar for Windows admins that might get saddled with bringing those evil UNIX boxes into their domains. Whatever. It's painful but not insurmountable. LikeWise provides tools for hacking this registry: lwregshell and lw-edit-reg. For me, lw-edit-reg was a more comfortable tool to use. All I had to do was make sure my EDITOR environment variable was set to vi and I could hack the file to my heart's content with `vi`.

At any rate, I fired up lw-edit-reg and began to dig around for likely tweakables. Given that my error mentioned "TRUSTS", I did a global search for any parameters mentioning "TRUSTS". I found the parameter, "DomainManagerIgnoreAllTrusts", and saw that it was set to false (well, the registry equivalent, which was "dword:00000000"). So, I tried changing that to "true" (modifying the value to "dword:00000001"). I then bounced all of the LikeWise processes and re-attempted to join my box to the messy domain. Voilà, it worked and all was happy in UNIX-as-AD-Client land!

Of course, it wasn't until after I'd found my fix through the lw-edit-reg manipulations that I thought to check to see if lwconfig could be used. It seems like there are two variants of the lwconfig command in the LikeWise 6 release family. One supports the "--dump" option (which shows all tunable parameters along with their currently-set values) and one does not (thus, seeing available tunables and their current values is a bit less straight-forward). At any rate, upon further investigation, I found that I could use lwconfig to make my settings changes. That makes the software installation and configuration a lot more scriptable. I can modify my automated installation policies to do an `lwconfig DomainManagerIgnoreAllTrusts true` operation if it has issues with the normal `domainjoin-cli ...` operation.
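
As a sketch of what that might look like in an install hook (the domain name, join account, LikeWise paths and the lwsm restart step are all assumptions to adjust for your own site and LikeWise version):

   /opt/likewise/bin/domainjoin-cli join AD.EXAMPLE.COM joinsvc || {
      # First attempt choked (e.g., on messy cross-realm trusts); flip the knob and retry
      /opt/likewise/bin/lwconfig DomainManagerIgnoreAllTrusts true
      /opt/likewise/bin/lwsm restart lsass
      /opt/likewise/bin/domainjoin-cli join AD.EXAMPLE.COM joinsvc
   }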