
Thursday, June 20, 2024

Keeping It Clean: EKS and `kubectl` Configuration

Previously, I was worried about "how do I make it so that kubectl can talk to my EKS clusters?" However, after several days of standing up and tearing down EKS clusters across several accounts, I discovered that my ~/.kube/config file had absolutely exploded in size and become all but unmanageable. And, while `aws eks update-kubeconfig --name <CLUSTER_NAME>` is great, its lack of a `--delete` suboption is kind of horrible when you want or need to clean long-since-deleted clusters out of your environment. So, on to the "next best thing", I guess…

Ultimately, that "next best thing" was setting a KUBECONFIG environment-variable as part of my per-account configuration/setup tasks (e.g., something like `export KUBECONFIG=${HOME}/.kube/config.d/MyAccount.conf`). While not as good as I'd like to think an `aws eks update-kubeconfig --name <CLUSTER_NAME> --delete` would be, it at least means that:

  1. Each AWS account's EKS configuration-stanzas are kept wholly separate from each other
  2. Cleanup is reduced to simply overwriting – or straight-up nuking – the per-account ${HOME}/.kube/config.d/MyAccount.conf files

…I tend to like to keep my stuff "tidy". This kind of configuration-separation facilitates scratching that (OCDish) itch. 
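
In practice, the per-account setup looks something like the following sketch (the account and cluster names are, obviously, just placeholders):

# Point kubectl – and aws eks update-kubeconfig – at a per-account kubeconfig file
export KUBECONFIG="${HOME}/.kube/config.d/MyAccount.conf"

# Writes/updates the cluster's stanza in the file named by KUBECONFIG
aws eks update-kubeconfig --name <CLUSTER_NAME>

# "Cleanup" is now just a matter of nuking that one file
rm -f "${HOME}/.kube/config.d/MyAccount.conf"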

The above is derived, in part, from the Kubernetes project's Organizing Cluster Access Using kubeconfig Files document.

Monday, June 17, 2024

Crib Notes: Accessing EKS Cluster with `kubectl`

While AWS does provide a CLI tool – eksctl – for talking to EKS resources, it's not suitable for all of the Kubernetes actions one might wish to engage in. Instead, one must use the more-generic access provided through the more-broadly-used tool, kubectl. Both tools will generally be needed, however.

If, like me, your AWS resources are only reachable through IAM roles – rather than IAM user credentials – it will be necessary to use the AWS CLI tool's eks update-kubeconfig subcommand. The general setup workflow will look like:

  1. Set up your profile definition(s)
  2. Use the AWS CLI's sso login to authenticate your CLI into AWS (e.g., `aws sso login --no-browser`)
  3. Verify that you've successfully logged in to your target IAM role (e.g., `aws sts get-caller-identity` …or any AWS command, really)
  4. Use the AWS CLI to update your ~/.kube/config file with the `eks update-kubeconfig` subcommand (e.g., `aws eks update-kubeconfig --name thjones2-test-01`)
  5. Validate that you're able to execute kubectl commands and get back the kind of data that you expect to get (e.g., `kubectl get pods --all-namespaces` to get a list of all running pods in all namespaces within the target EKS cluster)
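
Strung together – and assuming an SSO-backed profile named "my-eks-admin" (the profile name, start URL, account ID, and role below are purely illustrative) – step 1 is a profile stanza in ~/.aws/config along these lines:

[profile my-eks-admin]
sso_start_url  = https://my-org.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 111111111111
sso_role_name  = EKSAdmin
region         = us-east-1

…followed by the login/verify/update/validate sequence for steps 2 through 5:

aws sso login --profile my-eks-admin --no-browser
aws sts get-caller-identity --profile my-eks-admin
aws eks update-kubeconfig --name thjones2-test-01 --profile my-eks-admin
kubectl get pods --all-namespaces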

Tuesday, September 20, 2022

Crib Notes: Quick Audit of EC2 Instance-Types

Was recently working on a project for a customer who was having performance issues. Noticed the customer was using t2.* instance-types for the problematic system. Also knew that I'd seen them using pre-Nitro instance-types on some other systems they'd previously complained about performance problems with. Wanted to put together a quick list of "you might want to consider updating these guys" EC2s. Ended up executing:


$ aws ec2 describe-instances \
   --query 'Reservations[].Instances[].{Name:Tags[?Key == `Name`].Value,InstanceType:InstanceType}' \
   --output text | \
sed -e 'N;s/\nNAME//;P;D'

Because the describe-instances command's output is multi-line – even with the applied --query filter – adding the sed filter (which joins each instance-type/name pair onto a single line) was necessary to provide a nice, table-like output:

t3.medium       ingress.dev-lab.local
t2.medium       etcd1.dev-lab.local
m5.xlarge       k8snode.dev-lab.local
m6i.large       runner.dev-lab.local
t2.small        dns1.dev-lab.local
t3.medium       k8smaster.dev-lab.local
t2.medium       bastion.dev-lab.local
t3.medium       ingress.dev-lab.local
t2.medium       etcd0.dev-lab.local
m5.xlarge       k8snode.dev-lab.local
m6i.large       runner.dev-lab.local
m5.xlarge       k8snode.dev-lab.local
t2.xlarge       workstation.dev-lab.local
t2.medium       proxy.dev-lab.local
t2.small        dns0.dev-lab.local
t3.medium       ingress.dev-lab.local
t2.medium       etcd2.dev-lab.local
m5.xlarge       k8snode.dev-lab.local
t2.medium       mail.dev-lab.local
m6i.large       runner.dev-lab.local
t2.small        dns2.dev-lab.local
t3.medium       k8smaster.dev-lab.local
t2.medium       bastion.dev-lab.local
t2.medium       proxy.dev-lab.local

Friday, January 18, 2019

GitLab: You're Kidding Me, Right?

Some of the organizations I do work for run their own, internal/private git servers (mostly GitLab CE or EE but the occasional GitHub EE). However, the way we try to structure our contracts, we maintain overall ownership of code we produce. As part of this, we do all of our development in our corporate GitHub.Com account. When customers want the content in their git servers, we set up a replication-job to take care of the requisite heavy-lifting.

One of the side-effects of developing externally this way is that the internal/private git service won't really know about the email addresses associated with the externally-sourced commits. While you can add all of your external email addresses to your account within the internal/private git service, some of those external email addresses may not be verifiable (e.g., if you use GitHub's "noreply" address-hiding option).

GitLab doesn't make having these non-verifiable addresses in your commit-history particularly fun or easy to resolve. To "fix" the problem, you need to go into the GitLab server's administration CLI and fix things there. So, to add my GitHub "noreply" email, I needed to do:

  1. SSH to the GitLab server
  2. Change privileges (sudo) to an account that has the ability to invoke the administration CLI
  3. Start the GitLab administration CLI
  4. Use a query to set a modification-handle for the target account (my contributor account)
  5. Add a new email address (the GitHub "noreply" address)
  6. Tell GitLab "you don't need to verify this" (mandatory: this must be said in an Obi-Wan Kenobi voice)
  7. Hit save and exit the administration CLI
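
Getting to the console (steps 1 through 3) is – assuming an Omnibus-style install, which this one appears to be – usually just a couple of commands (the hostname is illustrative):

ssh gitlab.example.com      # step 1
sudo gitlab-rails console   # steps 2 and 3 in one go
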
For me, that console session basically looked like:
-------------------------------------------------------------------------------------
 GitLab:       11.6.5 (237bddc)
 GitLab Shell: 8.4.3
 postgresql:   9.6.10
-------------------------------------------------------------------------------------
Loading production environment (Rails 5.0.7)
irb(main):002:0> user = User.find_by(email: 'my@ldap.email.address')
=> #
irb(main):003:0> user.email = 'ferricoxide@users.noreply.github.com'
=> "ferricoxide@users.noreply.github.com"
irb(main):004:0> user.skip_reconfirmation!
=> true
irb(main):005:0> user.save!
=> true
irb(main):006:0>
Once this is done, when I look at my profile page, my GitHub "noreply" address appears as verified (and all commits associated with that address show up with my avatar).

Tuesday, December 5, 2017

I've Used How Much Space??

A customer of mine needed me to help them implement a full CI/CD tool-chain in AWS. As part of that implementation, they wanted to have daily backups. Small problem: the enterprise backup software that their organization normally uses isn't available in their AWS-hosted development account/environment. That environment is mostly "support it yourself".

Fortunately, AWS has a number of tools that can help with things like backup tasks. The customer didn't have strong specifications on how they wanted things backed up, retention-periods, etc. – just "we need daily backups". So I threw together some basic "pump it into S3"-type jobs for them, with the caveat "you'll want to keep an eye on this because, right now, there are no data-lifecycle elements in place".
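
Conceptually, those jobs amounted to little more than the following sketch (the local path is hypothetical; the bucket name and the Backups/YYYYMMDD/ prefix-layout match what shows up in the queries further down):

# Hypothetical daily "pump it into S3" job
aws s3 sync /var/backups/ \
    "s3://toolbox-s3res-12wjd9bihhuuu-backups-q5l4kntxp35k/Backups/$( date '+%Y%m%d' )/"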

For the first several months things ran fine. Then, as they often do, problems began popping up. Their backup jobs started experiencing periodic errors. Wasn't able to find underlying causes. However, in my searching around, it occurred to me "wonder if these guys have been aging stuff off like I warned them they'd probably want to do."

AWS provides a nifty GUI option in the S3 console that will show you storage utilization. A quick look in their S3 backup buckets told me, "doesn't look like they have".

Not being much of a GUI-jockey, I wanted something I could run from the CLI that could be fed to an out-of-band notifier. The AWS CLI offers the `s3api` tool-set, which comes in handy for such actions. After a first dig through the documentation (and some Googling), I sorted out "how do I get a total-utilization view for this bucket". It looks something like:

aws s3api list-objects --bucket toolbox-s3res-12wjd9bihhuuu-backups-q5l4kntxp35k \
    --output json --query "[sum(Contents[].Size), length(Contents[])]" | \
    awk 'NR!=2 {print $0;next} NR==2 {print $0/1024/1024/1024" GB"}'
[
1671.5 GB
    423759
]

The above agreed with the GUI and was more space than I'd assumed they'd be using at this point. So, I wanted to see "can I clean up".

aws s3api list-objects --bucket toolbox-s3res-12wjd9bihhuuu-backups-q5l4kntxp35k \
    --prefix Backups/YYYYMMDD/ --output json \
    --query "[sum(Contents[].Size), length(Contents[])]" | \
    awk 'NR!=2 {print $0;next} NR==2 {print $0/1024/1024/1024" GB"}'
[
198.397 GB
    50048
]
That "one day's worth of backups" was also more than expected. Last time I'd censused their backups (earlier in the summer), they had maybe 40GiB worth of data. They wanted a week's worth of backups. However, at 200GiB/day worth of backups, I could see that I really wasn't going to be able to trim the utilization. Also meant that maybe they were keeping on top of aging things off.
Note: yes, S3 has lifecycle policies that allow you to automate moving things to lower-cost tiers. Unfortunately, the auto-tiering (at least from regular S3 to S3-IA) has a minimum age of 30 days. Not helpful, here.
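
For reference, had that 30-day floor not been an issue, applying such a transition-rule from the CLI would look something like the following (the rule-ID is arbitrary; the bucket and prefix are the ones from above):

aws s3api put-bucket-lifecycle-configuration \
    --bucket toolbox-s3res-12wjd9bihhuuu-backups-q5l4kntxp35k \
    --lifecycle-configuration '{
        "Rules": [
            {
                "ID": "tier-down-old-backups",
                "Filter": { "Prefix": "Backups/" },
                "Status": "Enabled",
                "Transitions": [
                    { "Days": 30, "StorageClass": "STANDARD_IA" }
                ]
            }
        ]
    }'
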
Saving grace: at least I snuffled up a way to get those metrics without the web GUI. As a side effect, it also meant I had a way to verify that the amount of data reaching S3 matches the amount being exported from their CI/CD applications.