Part of the concept behind Opsworks is the ability to create and destroy instances dynamically. If your instances are configured by Chef recipes all the way from AMI to processing production workload, this is probably something you do pretty regularly.
But this probably means that the IP addresses behind your instances change regularly. At some point you might get tired of constantly going back to the Opsworks console to get an IP address; I know I did.
It turns out it's not too difficult to generate an ssh config file using boto3 to pull down the instances' IP addresses. I chose to do this in Python, and an example script is below. In my case, our instances all have private IP addresses, so that's the property I'm using.
import os

import boto3

ssh_config_filename = '/home/meuser/.ssh/config'

if os.path.exists(ssh_config_filename):
    os.remove(ssh_config_filename)

if not os.path.exists('/home/meuser/.ssh/'):
    os.mkdir('/home/meuser/.ssh/')

# Map each AWS credentials profile to the Opsworks stacks it contains.
profiles = {
    'NoPHI': [{'StackName': 'My-Dev-Stack',
               'IdentityFile': 'my-dev-private-key.pem',
               'ShortName': 'dev'}],
    'PHI': [{'StackName': 'My-prod-stack',
             'IdentityFile': 'my-prod-private-key.pem',
             'ShortName': 'prod'}]
}

for profile in profiles.keys():
    session = boto3.Session(profile_name=profile)
    opsworks_client = session.client('opsworks')
    opsworks_stacks = opsworks_client.describe_stacks()['Stacks']
    for opsworks_stack in opsworks_stacks:
        for stack in profiles[profile]:
            if opsworks_stack['Name'] == stack['StackName']:
                instances = opsworks_client.describe_instances(StackId=opsworks_stack['StackId'])
                for instance in instances['Instances']:
                    # Append one Host entry per instance, named <shortname>-<hostname>.
                    with open(ssh_config_filename, "a") as ssh_config_file:
                        ssh_config_file.write("Host " + (stack['ShortName'] + '-' + instance['Hostname']).lower() + '\n')
                        ssh_config_file.write("    Hostname " + instance['PrivateIp'] + '\n')
                        ssh_config_file.write("    User ubuntu\n")
                        ssh_config_file.write("    IdentityFile " + '/home/meuser/keys/' + stack['IdentityFile'] + '\n')
                        ssh_config_file.write("\n")
                        ssh_config_file.write("\n")
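Each entry the script appends to the ssh config file will look roughly like this (the hostname and IP address here are just illustrative):

Host dev-myinstance1
    Hostname 10.0.0.12
    User ubuntu
    IdentityFile /home/meuser/keys/my-dev-private-key.pem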
This script will run through the AWS account profiles you specify, find the instances in the stacks you list for each one, and let you ssh into them using
ssh dev-myinstance1
if you have an instance in your Opsworks stack named "myinstance1". If you run Linux on your working machine, you're done at this point. But if you're on Windows like me, there's another step that can make things even easier: running this script in a Linux Docker container and doing your ssh'ing from there.
First, you'll need to install Docker for Windows. It might be helpful to go through some of their walkthroughs as well if you aren't familiar with Docker.
Once you have the Docker daemon installed and running, you'll need to create a Docker image from a Dockerfile that can run the Python script we have above. I've got an example below that uses the ubuntu:latest image, installs Python, copies over your AWS credentials and the private keys used for ssh, and runs the Python script.
You will need to put the files being copied over (ssh_config_updater.py, my-prod-private-key.pem, my-dev-private-key.pem, and credentials) in the same directory as the Dockerfile.
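The credentials file needs to define the AWS profile names the script references ('NoPHI' and 'PHI' in this example). A minimal version might look like this, with placeholder keys:

[NoPHI]
aws_access_key_id = <your dev account access key>
aws_secret_access_key = <your dev account secret key>

[PHI]
aws_access_key_id = <your prod account access key>
aws_secret_access_key = <your prod account secret key>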
FROM ubuntu:latest

# Create the user the ssh config and keys will belong to.
RUN useradd -d /home/meuser -m meuser

RUN apt-get update
RUN apt-get install -y python-pip
RUN pip install --upgrade pip
RUN apt-get install -y vim
RUN apt-get install -y ssh
RUN pip install --upgrade awscli

# Copy in the ssh private keys and lock down their permissions.
ADD my-dev-private-key.pem /home/meuser/keys/my-dev-private-key.pem
ADD my-prod-private-key.pem /home/meuser/keys/my-prod-private-key.pem
RUN chmod 600 /home/meuser/keys/*
RUN chown meuser /home/meuser/keys/*

ADD ssh_config_updater.py /home/meuser/ssh_config_updater.py
ADD credentials /home/meuser/.aws/credentials
RUN pip install boto3

USER meuser
WORKDIR /home/meuser

# Generate /home/meuser/.ssh/config at image build time.
RUN python /home/meuser/ssh_config_updater.py

CMD /bin/bash
Once you have your Dockerfile and build directory set up, you can run the command below with the Docker daemon running.
docker build -t opsworks-manage .
Once that command finishes, you can ssh into your instances with
docker run -it --name opsworks-manage opsworks-manage ssh dev-myinstance1
This creates a running container with the name opsworks-manage. You can re-use this container to ssh into instances using
docker exec -it opsworks-manage ssh dev-myinstance1
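One thing to keep in mind: the container only stays up while its main process is running, so once the ssh session started by docker run ends, the container stops. If docker exec complains that the container isn't running, start it again first:

docker start opsworks-manage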
A couple of notes: I'm using the default "ubuntu" account AWS builds into Ubuntu instances for simplicity. This account has full root privileges via sudo, and in practice you should create another account to use for normal management, either through an Opsworks recipe or by using Opsworks to create the user account.
Another note: because this example copies ssh keys and credentials files into the Docker image, you should never push this image to a container registry. If you plan on version controlling the Dockerfile, you should make sure to use a .gitignore file to keep that sensitive information out of source control.
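For example, a .gitignore next to the Dockerfile along these lines (matching the filenames used above) keeps the keys and credentials out of the repository:

# ssh private keys and AWS credentials copied into the image at build time
*.pem
credentials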