Most of the AWS projects I work on, both currently and historically, have deployed most, if not all, of their EC2s into private VPC subnets. This means that, if one wants to log in directly to their Linux EC2s' interactive shells, they're out of luck. Historically, to get something akin to direct access, one had to set up bastion-hosts in a public VPC subnet and then jump through them to the EC2s one actually wanted to log in to. How well one secured those bastion-hosts could make or break how well-isolated their private VPC subnets – and associated resources – were.
If you were the sole administrator or part of a small team, or were part of an arbitrarily-sized administration-group that all worked from a common network (i.e., from behind a corporate firewall or through a corporate VPN), keeping a bastion-host secure was fairly easy. All you had to do was set up a security-group that allowed only SSH connections and only allowed them from one or a few source IP addresses (e.g., your corporate firewall's outbound NAT IP address). For a bit of extra security, one could even prohibit password-based logins on the Linux bastions (instead using SSH key-based login, SmartCards, etc. to authenticate logins). However, if you were a member of a team of non-trivial size and your team members were geographically distributed, maintaining whitelists to protect bastion-hosts could become painful. That painfulness would be magnified if the distributed team's members were either frequently changing-up their work locations or were coming from locations where their outbound IP address changed with any frequency (e.g., work-from-home staff whose ISPs would frequently change their routers' outbound IPs).
A few years ago, AWS introduced SSM (Systems Manager) Session Manager and, with it, the ability to tunnel SSH connections through SSM (see the re:Post article for more). With appropriate account-level security-controls, the need for dedicated bastion-hosts and maintenance of whitelists effectively vanished. Instead, all one had to do was:
- Register an SSH key to the target EC2s' account
- Set up their local SSH client to allow SSH-over-SSM (a sample client-configuration appears just after this list)
- Then SSH "directly" to their target EC2s
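That client-side setup usually amounts to a short stanza in one's ~/.ssh/config that proxies SSH through an SSM session. The following is a minimal sketch, assuming the AWS CLI and the Session Manager plugin are already installed on the workstation:

    # ~/.ssh/config: route SSH to any EC2 instance-ID style hostname through SSM
    Host i-* mi-*
        ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

With that in place, ssh <USERID>@<EC2_INSTANCE_ID> behaves like a normal SSH login, provided the instance's IAM role and your own IAM credentials permit SSM sessions.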
SSM would, effectively, "take care of the rest" …including logging of connections. If one were feeling really enterprising, one could enable key-logging for those SSM-tunneled SSH connections (a good search-engine query should turn up configuration guides; one such guide is toptal's). This would, undoubtedly, make your organization's IA team really happy (and may even be required, depending on the security-requirements your organization is legally obligated to adhere to) – especially if they haven't yet purchased an enterprise session-logging tool.
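For reference, connection-logging is governed by the account's regional Session Manager preferences document. The sketch below, which assumes that document already exists in the region and uses a purely illustrative CloudWatch log-group name, shows the general shape of turning that logging on:

    # Illustrative only: point Session Manager's session logs at CloudWatch
    # ("ssm-session-logs" is a made-up log-group name)
    aws ssm update-document \
        --name "SSM-SessionManagerRunShell" \
        --document-version '$LATEST' \
        --content '{
            "schemaVersion": "1.0",
            "description": "Document to hold regional settings for Session Manager",
            "sessionType": "Standard_Stream",
            "inputs": {
                "cloudWatchLogGroupName": "ssm-session-logs",
                "cloudWatchEncryptionEnabled": true
            }
        }'

Note that this covers interactive Session Manager sessions; the inside of an SSH-over-SSM tunnel is encrypted end-to-end, so keystroke-level capture generally has to be configured on the instances themselves.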
But what if your EC2s are hosting applications that require GUI-based access to set up and/or administer? Generally, you have two choices:
- X11 display-redirection
- SSH port-forwarding
Unfortunately, SSM is a fairly low-throughput transport. So, while doing X11 display-redirection from an EC2 in a public VPC subnet may be more than adequately performant, the same cannot be said when it's done through an SSH-over-SSM tunnel. Doing X11 display-redirection of a remote browser session – or, worse, an entire graphical desktop session (e.g., a KDE or Gnome desktop) – is painfully slow. For my own tastes, it's uselessly slow.
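For completeness, X11 display-redirection over that tunnel needs nothing beyond SSH's stock -X (or -Y) switch. A minimal sketch, assuming an X server is running on the workstation, xauth is installed on the instance, and using a browser purely as a hypothetical example application:

    # X11 display-redirection over the SSH-over-SSM tunnel (slow, per the above)
    ssh -X <USERID>@<EC2_INSTANCE_ID>
    # ...then, from the resulting remote shell, launch the GUI application:
    firefox &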
Alternatively, one can use SSH port-forwarding as part of that SSH-over-SSM session. Then, instead of trying to send rendered graphics over the tunnel, one only sends the not-yet-rendered application data (e.g., the HTML a remote web server returns), leaving the rendering to the local client. It's a much lighter traffic load, with the result being a much quicker/livelier response. It's also pretty easy to set up. Something like:
    ssh -L localhost:8080:$( aws ec2 describe-instances \
          --query 'Reservations[].Instances[].PrivateIpAddress' \
          --output text \
          --instance-ids <EC2_INSTANCE_ID> ):80 <USERID>@<EC2_INSTANCE_ID>
is all you need. In the above, the argument to the -L flag says, "set up a tcp/8080 listener on my local machine and forward its connections to the remote machine's tcp/80". The local and remote ports can be varied for your specific needs. You can even set up dynamic-forwarding by creating a SOCKS proxy (this document is meant to be a starting point, not a dive into the weeds, but a quick sketch follows).
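That dynamic-forwarding variant looks roughly like the following, with tcp/1080 chosen purely as an illustrative local port:

    # Open a SOCKS proxy on local tcp/1080; anything sent through it
    # egresses from the remote EC2
    ssh -D localhost:1080 <USERID>@<EC2_INSTANCE_ID>

Pointing a browser's SOCKS-proxy setting at localhost:1080 then lets it browse as though it were sitting in the instance's subnet.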
Note that, while the above uses command-substitution (the $( … ) shell-syntax) to snarf the remote EC2's private IP address, one should be able to simply substitute "localhost". I simply prefer to try to speak to the remote's Ethernet, rather than loopback, interface, since doing so can help identify firewall-type issues that might interfere with others' use of the target service.
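In other words, the simpler form below should work just as well, assuming the target service also listens on the instance's loopback address:

    # Simpler variant: forward local tcp/8080 to the instance's own loopback tcp/80
    ssh -L localhost:8080:localhost:80 <USERID>@<EC2_INSTANCE_ID>

Either way, browsing to http://localhost:8080 on the workstation reaches the remote service.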