A few weeks ago, I got assigned to a new project. Like a lot of my work, it's fully remote. Unlike most of my prior such gigs, this customer, while implementing network-isolation for their cloud-hosted resources, isn't leveraging any kind of trusted developer desktop solution (cloud-hosted or otherwise). Instead, they have per-environment bastion-clusters and leverage IP white-listing to allow remote access to those bastions. To make that white-listing more-manageable, they require each of their vendors to coalesce all of the vendor-employees behind a single origin-IP.
Working for a small company, the way we ended up implementing things was to put a Linux-based EC2 (our "jump-box") behind an EIP. The customer adds that IP to their bastions' whitelist-set. That EC2 is also configured with a default-deny security-group with each of the team members' home IP addresses whitelisted.
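In AWS CLI terms, that whitelisting looks something like the following (the group-ID and home-IP below are illustrative stand-ins, not our real values). Since security-groups deny all inbound traffic by default, each `authorize-security-group-ingress` call punches exactly one 22/tcp hole:

```
# One rule per team-member home-IP (203.0.113.10 is a stand-in)
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-XXXXXXXXXXXXXXXXX \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.10/32
```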
Not wanting to incur pointless EC2 charges, we put the EC2 in a single-node AutoScaling Group (ASG) with scheduled scaling actions. At the beginning of each business day, the scheduled scaling-action takes the instance-count from 0 to 1. Similarly, at the end of each business day, the scheduled scaling-action takes the instance-count from 1 to 0.
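For the curious, those scheduled actions look something like the following (the ASG name and recurrence-times are illustrative; note that `--recurrence` cron-expressions are evaluated in UTC):

```
# Start of business-day: take the instance-count from 0 to 1
$ aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name jump-box-asg \
    --scheduled-action-name start-of-day \
    --recurrence "0 12 * * MON-FRI" \
    --min-size 1 --max-size 1 --desired-capacity 1

# End of business-day: take it back from 1 to 0
$ aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name jump-box-asg \
    --scheduled-action-name end-of-day \
    --recurrence "0 23 * * MON-FRI" \
    --min-size 0 --max-size 0 --desired-capacity 0
```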
This deployment-management choice not only reduces compute-costs but also ensures that there's not a host available to attack outside of business hours (in case the default-deny + whitelisted source IPs aren't enough protection). Since the auto-scaled instance's launch-automation includes an "apply all available patches" action, each day's EC2 comes up fully current with respect to security and other patches. Further, it means that, on the off chance that someone had broken into a given instantiation, any beachhead they establish goes "poof!" when the end-of-day scale-to-zero action occurs.
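The "apply all available patches" bit is nothing exotic. A minimal sketch of what the launch-time userData script might contain, assuming a Red Hat-family AMI (the `needs-restarting` utility comes from the `yum-utils` package):

```
#!/bin/bash
# Bring the freshly-launched instance fully up-to-date...
yum -y update
# ...and reboot if any of the updates require it
# ('needs-restarting -r' exits non-zero when a reboot is needed)
needs-restarting -r || reboot
```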
Obviously, it's not an absolutely 100% bulletproof safety-setup, but it does raise the bar fairly high for would-be attackers.
At any rate, beyond our "jump box" are the customer's bastion nodes and their trusted IPs list. From the customer-bastions, we can then access the hosts they've set up for running development activities from. While they don't rebuild their bastions or the "developer host" instances as frequently as we do our "jump box", we have been trying to nudge them in a similar direction.
For further fun, the customer-systems require using a 2FA token to access. Fortunately, they use PIN-protected PIVs rather than RSA fobs.
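As an aside: where a PIV is presented to SSH by way of PKCS#11 (one plausible wiring; your customer's setup may front the card differently), OpenSSH can read the card directly. The library path below is OpenSC's usual location on RHEL-family systems and will vary by distro:

```
# One-off, at connect-time:
$ ssh -I /usr/lib64/opensc-pkcs11.so <USER>@<CUSTOMER_BASTION>

# Or, persistently, in the relevant configuration-stanza:
PKCS11Provider /usr/lib64/opensc-pkcs11.so
```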
Overall, to get to the point where I'm able to either SSH into the customer's "developer host" instances or use VSCode's git-over-ssh capabilities, I have to go:
- Laptop
- (Employer's) Jump Box
- (Customer's) Bastion
- (Customer's) Development host
…and they do it for each of hosts 2-4 (or just 3 & 4 for the consultants that are VPNing to a trusted network). Further, to keep each hop's connection open, they fire up `top` (or similar) after each hop's connection is established. And, when port-forwarding is needed (e.g., for VSCode's git-over-ssh capabilities), that means manually chaining together invocations like:

```
$ ssh -N -L <LOCAL_PORT>:<REMOTE_HOST>:<REMOTE_PORT> <USER>@<REMOTE_HOST> -i ~/.ssh/key.pub
```
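Assuming, purely for illustration, that local port 2222 has been forwarded to the development host's port 22, git can then be pointed at the loopback end of the tunnel:

```
$ git clone ssh://<USER>@localhost:2222/path/to/repo.git
```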
Me, I'd rather not have to babysit a pile of terminal windows. With the hop-chain captured in my SSH client's configuration, getting all the way to the customer's "developer host" is just:

```
$ ssh <CUSTOMER>-dev
```
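What makes that one-liner possible is OpenSSH's `ProxyJump` directive (available in OpenSSH 7.3 and later). A minimal sketch of the relevant `~/.ssh/config` stanzas (all of the host aliases, addresses and key-paths below are illustrative stand-ins, not the real values):

```
Host jump-box
  HostName <JUMP_BOX_EIP>
  User <USER>
  IdentityFile ~/.ssh/key

Host <CUSTOMER>-bastion
  HostName <BASTION_ADDRESS>
  User <USER>
  ProxyJump jump-box

Host <CUSTOMER>-dev
  HostName <DEV_HOST_ADDRESS>
  User <USER>
  ProxyJump <CUSTOMER>-bastion
```

Each `ProxyJump` rides the hop before it, so a single `ssh <CUSTOMER>-dev` transparently sets up the whole laptop → jump box → bastion → development-host chain.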
Because our "jump box" gets rebuilt every day, its host-key is forever changing. To keep those key-changes from generating scary warnings or junking-up my `known_hosts` file, I also add:

```
LogLevel error
UserKnownHostsFile /dev/null
StrictHostKeyChecking false
```

to a given host's configuration-stanza. Warning: the accorded convenience does come with the potential cost of exposing you to undetected man-in-the-middle attacks.