Despite using automated deploys for most things I work on, I had put off setting up such a mechanism for this site. I'm not sure what took so long.
With CircleCI, I added a circle.yml file:
dependencies:
  override:
    - pip install -r requirements.txt
test:
  override:
    - make build
deployment:
  deploy:
    branch: master
    commands:
      - make upload
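The make build and make upload targets aren't shown here; as a rough sketch, assuming a static site generated into an _output directory (a name I've made up), the upload step might wrap the AWS CLI like this:

# Hypothetical body of the upload target: sync the generated site to S3
aws s3 sync _output/ s3://philipcristiano.com/ --delete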
And then created an IAM user with the right S3 permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1492350849000",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::philipcristiano.com",
        "arn:aws:s3:::philipcristiano.com/*"
      ]
    }
  ]
}
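For completeness, here is a minimal sketch of wiring that up with the AWS CLI; the user name site-deploy and the policy.json filename are my own assumptions:

# Hypothetical: create a deploy-only IAM user and attach the policy above
$ aws iam create-user --user-name site-deploy
$ aws iam put-user-policy --user-name site-deploy \
    --policy-name site-s3-deploy --policy-document file://policy.json
# The resulting keys go into CircleCI's AWS settings
$ aws iam create-access-key --user-name site-deploy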
Git workflows come in many flavors. Once the code hits a continuous integration system, your workflow will need to trigger a deploy to production. A common way of handling this is to create a Git tag that triggers the deployment. Relying on a Git tag this way, though, adds risk to deploying your code safely.
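For illustration, with tag-triggered deploys, cutting a release is nothing more than a tag push (the version number here is made up):

# Tagging a commit and pushing the tag is all it takes to trigger production
$ git tag -a v1.2.0 -m "Release v1.2.0"
$ git push origin v1.2.0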
These risks can be countered in multiple ways, but the following are patterns I've seen in the deployment processes of various services.
Your process may allow anyone to trigger a deploy to production. In many ways this is a good thing. In GitHub, though, certain branches can be protected in order to enforce a particular workflow, such as requiring that each pull request receive approval from one other person.
Tags in GitHub do not have such protection. Anyone with write access can push a tag, bypassing the GitHub workflow.
Any commit in the repository can be tagged. There is little difference (to Git) between a tag on the latest commit and a tag on a commit from 3 months ago. If your process relies on some semantic meaning for these tags, you will have to encode that information and handle it in your deployment automation.
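As a sketch of one such check (the TAG variable and deploy.sh script are hypothetical), deployment automation could at least refuse tags that don't point at a commit reachable from master:

# Resolve the tag to a commit and require that it be an ancestor of master
TAG_SHA=$(git rev-list -n 1 "$TAG")
if git merge-base --is-ancestor "$TAG_SHA" origin/master; then
    ./deploy.sh "$TAG"
else
    echo "Refusing to deploy $TAG: commit is not on master" >&2
    exit 1
fi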
As I started to use OmniOS on AWS, I ran into the problem that it does not, by default, include a way to execute the AWS User Data script when starting an instance. User Data scripts provide a wonderful mechanism to bootstrap new instances. Without them, you may be required to use a configuration management tool or manually configure instances after they have booted.
The script is helpfully available through the instance metadata API at 169.254.169.254, at the URL http://169.254.169.254/2016-09-02/user-data. It should be simple to pull that down and execute the script with SMF!
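At its core that is only a few commands; roughly (this is my own sketch, not the script itself):

# Fetch the User Data and execute it; -f makes curl fail on HTTP errors,
# since the metadata service returns 404 when no User Data was set
curl -sf http://169.254.169.254/2016-09-02/user-data -o /tmp/user-data
chmod +x /tmp/user-data
/tmp/user-data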
I've put together a script to do this. It runs under SMF with a default timeout of 15 minutes and will restart if there are errors.
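Once installed, the usual SMF tooling applies; the service name awsmf-data is my assumption here:

# Check the service state and restart it if needed
$ svcs -l awsmf-data
$ svcadm restart awsmf-data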
There is a handy-dandy install script in the repo that will download and install the needed files. At the moment this isn't packaged, as this script is needed before I would set up a package repository.
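If you'd rather not run an installer blind, cloning the repo and running the script locally works too; the repo URL and script name here are my guesses based on the project name:

# Hypothetical: clone the AWSMF-Data repo and run its install script
$ git clone https://github.com/philipcristiano/AWSMF-Data.git
$ cd AWSMF-Data && ./install.sh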
There is still the problem of how to get this into an AWS AMI. Packer can build the image for us so that the AMI we launch already has this script. The buildfile for this image is rather simple, but the whole process is a powerful one.
To get your own OmniOS AMI with AWSMF-Data installed, you can use this Packer build:
1. Install Packer
2. Clone the repo:
   $ git clone https://github.com/philipcristiano/packer-omnios.git
3. Run build.sh after setting a few variables:
   $ export AWS_ACCESS_KEY_ID=...
   $ export AWS_SECRET_ACCESS_KEY=...
   $ export VPC_ID=...
   $ export SUBNET_ID=...
   $ ./build.sh
VPC_ID and SUBNET_ID are only required if you have a need to specify them (for example, when there is no default VPC in your account), in which case build.sh can be modified.
From here we can create User Data scripts in AWS and have new EC2 instances run code when they start!
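For example, launching an instance from the resulting AMI with a bootstrap script might look like this (the AMI ID and bootstrap.sh are placeholders):

# Pass a local script as User Data when launching from the new AMI
$ aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --user-data file://bootstrap.sh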