Thursday, June 20, 2024

Keeping It Clean: EKS and `kubectl` Configuration

Previously, I was worried about "how do I make it so that `kubectl` can talk to my EKS clusters?" However, after several days of standing up and tearing down EKS clusters across several accounts, I discovered that my `~/.kube/config` file had absolutely exploded in size and become all but unmanageable. And, while `aws eks update-kubeconfig --name <CLUSTER_NAME>` is great, its lack of a `--delete` suboption is kind of horrible when you want or need to clean long-since-deleted clusters out of your environment. So, on to the "next best thing", I guess…
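For what it's worth, stale entries *can* be scrubbed by hand with `kubectl config` subcommands, but it takes three separate deletions per cluster (context, cluster, and user), which gets old fast. A sketch of the manual route (the ARN below is a made-up example; list your real ones with `kubectl config get-contexts`):

```shell
# Hypothetical cluster ARN -- substitute one from `kubectl config get-contexts`
CLUSTER_ARN="arn:aws:eks:us-east-1:111111111111:cluster/thjones2-test-01"

# Each EKS cluster leaves three entries behind in ~/.kube/config:
kubectl config delete-context "${CLUSTER_ARN}"   # the context entry
kubectl config delete-cluster "${CLUSTER_ARN}"   # the cluster entry
kubectl config unset "users.${CLUSTER_ARN}"      # the user entry
```

Note: `kubectl config unset` splits its argument on dots, so this last step misbehaves if your cluster name itself contains a `.`.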

Ultimately, that "next best thing" was setting a `KUBECONFIG` environment variable as part of my configuration/setup tasks (e.g., something like `export KUBECONFIG=${HOME}/.kube/config.d/MyAccount.conf`). While not as good as I'd like to think an `aws eks update-kubeconfig --name <CLUSTER_NAME> --delete` would be, it at least means that:

  1. Each AWS account's EKS configuration-stanzas are kept wholly separate from the others
  2. Cleanup reduces to simply overwriting – or straight-up nuking – the per-account `${HOME}/.kube/config.d/MyAccount.conf` files

…I tend to like to keep my stuff "tidy". This kind of configuration-separation facilitates scratching that (OCDish) itch. 
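Concretely, the per-account separation can be sketched like so (the account and cluster names here are just examples):

```shell
# Keep each AWS account's EKS kubeconfig stanzas in their own file
mkdir -p "${HOME}/.kube/config.d"
export KUBECONFIG="${HOME}/.kube/config.d/MyAccount.conf"

# With KUBECONFIG set, update-kubeconfig writes into the per-account file
# instead of piling everything into ~/.kube/config
aws eks update-kubeconfig --name thjones2-test-01

# "Cleanup" for that whole account is now just:
rm -f "${HOME}/.kube/config.d/MyAccount.conf"
```

Sticking the `export` into a per-account profile/setup script means the right file gets picked up automatically whenever you switch accounts.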

The above is derived, in part, from the Kubernetes Organizing Cluster Access Using kubeconfig Files document.

Monday, June 17, 2024

Crib Notes: Accessing EKS Cluster with `kubectl`

While AWS does provide a CLI tool – `eksctl` – for talking to EKS resources, it's not suitable for all Kubernetes actions one might wish to engage in. Instead, one must use the more-generic access provided through the more broadly-used tool, `kubectl`. Both tools will generally be needed, however.

If, like me, your AWS resources are only reachable through IAM roles – rather than IAM user credentials – it will be necessary to use the AWS CLI tool's eks update-kubeconfig subcommand. The general setup workflow will look like:

  1. Set up your profile definition(s)
  2. Use the AWS CLI's sso login to authenticate your CLI into AWS (e.g., `aws sso login --no-browser`)
  3. Verify that you've successfully logged in to your target IAM role (e.g., `aws sts get-caller-identity` …or any AWS command, really)
  4. Use the AWS CLI to update your ~/.kube/config file with the `eks update-kubeconfig` subcommand (e.g., `aws eks update-kubeconfig --name thjones2-test-01`)
  5. Validate that you're able to execute kubectl commands and get back the kind of data that you expect to get (e.g., `kubectl get pods --all-namespaces` to get a list of all running pods in all namespaces within the target EKS cluster)
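Strung together, the workflow above looks something like the following (the profile and cluster names are, again, just examples):

```shell
# Step 1's profile definition lives in ~/.aws/config; select it here.
# "my-sso-profile" is a hypothetical profile name.
export AWS_PROFILE="my-sso-profile"

# Step 2: authenticate the CLI into AWS via SSO
aws sso login --no-browser

# Step 3: confirm you've landed in the expected IAM role
aws sts get-caller-identity

# Step 4: write the cluster's stanzas into your kubeconfig
aws eks update-kubeconfig --name thjones2-test-01

# Step 5: smoke-test cluster access
kubectl get pods --all-namespaces
```

Nothing exotic, but having the sequence in one place saves re-deriving it every time a fresh cluster comes up.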