Git workflows come in many flavors. Once code hits a continuous integration system, your workflow needs a way to trigger a deploy to production. A common way of handling this is to create a Git tag that triggers the deployment. Using a Git tag this way, however, can add risk to deploying your code safely.
These risks can be countered in multiple ways; the patterns below are ones I've seen in the deployment processes of various services.
Your process may allow anyone to trigger a deploy to production. In many ways this is a good thing. In GitHub, though, certain branches can be protected in order to enforce a workflow, such as requiring that each pull request receive approval from one other person.
Tags in GitHub have no such protection. Anyone with write access can push a tag, bypassing the GitHub workflow.
Any commit in the repository can be tagged. There is little difference (to Git) between a tag on the latest commit and a tag on a commit from 3 months ago. If your process relies on some semantic meaning for these tags you will have to encode that information and handle it in your deployment automation.
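If tag semantics matter to your deployment automation, one check worth encoding is whether a tag is actually reachable from your mainline branch. A minimal sketch (the function and its names are mine, not part of any particular CI system):

```shell
# Refuse to act on a tag unless it is an ancestor of the given ref.
# Usage: check_tag_on_branch <tag> <ref>
check_tag_on_branch() {
  tag="$1"
  ref="$2"
  if git merge-base --is-ancestor "refs/tags/$tag" "$ref"; then
    echo "ok: $tag is on $ref"
  else
    echo "refusing: $tag is not on $ref" >&2
    return 1
  fi
}
```

In a real pipeline you would run this against origin/master (or whatever your mainline is) before triggering the deploy.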
I spent some time recently attempting to set up some software on a NixOS system I have at home. It looks like declarative containers were removed in an earlier version of NixOS, as they weren't quite ready for use. After some searching I was able to find an example with rkt!
Setting up a container can be as simple as adding this to your /etc/nixos/configuration.nix:
virtualisation.rkt.enable = true;
systemd.services."rkt-nginx" = {
  description = "Nginx (rkt)";
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    Slice = "machine.slice";
    ExecStart = ''
      ${pkgs.rkt}/bin/rkt run --insecure-options=image \
        --net=host \
        docker://nginx
    '';
    KillMode = "mixed";
    Restart = "always";
  };
};
I recently found that my DHCP leasing on OVH was unreliable. The address worked at one point, but after a few months/reboots I found that the instance could no longer obtain a lease. After a few attempts to release/renew, I decided to set a static IP.
The General Administration page has general information about setting this. You will need the IP of the specific server from your OVH control panel; from that information the routing gateway can be determined. The gateway is the same as the IP of the server with the last octet replaced with 254. If the IP is 10.2.3.4, the gateway is 10.2.3.254. To set this on the host:
ipadm create-addr -T static -a $SERVER_IP/32 ixgbe0/v4
route -p add default $GATEWAY_IP
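The last-octet substitution is easy to script; a sketch using plain shell parameter expansion (the address is the example from above):

```shell
# Derive the OVH gateway by replacing the final octet of the
# server's IPv4 address with 254.
SERVER_IP=10.2.3.4
GATEWAY_IP="${SERVER_IP%.*}.254"
echo "$GATEWAY_IP"   # → 10.2.3.254
```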
Listing all packages (with their FMRIs) can be useful to see what you could install. It wasn't immediately obvious to me how to do this, and I couldn't easily find it documented.
pkg list -afv $PACKAGE
-af lists all versions, regardless of installation state. -v includes the FMRI in the output.
If you don't see a newer version you think should be there, try a pkg refresh!
With the release of OmniOSce I've found myself needing packages from OmniTI's Managed Services repository.
My first attempt was to copy the packages with pkgrecv. This caused problems, however, because the IPS server doesn't know about the source repository; adding the repository to the IPS server didn't fix the problem.
This can be fixed by changing the repository FMRI of each package before uploading.
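A sketch of that fix, assuming the packages were first received into a local filesystem repository with pkgrecv: rewrite the publisher portion of each package's pkg.fmri with a pkgmogrify edit transform before publishing to your own depot. The publisher names below are placeholders:

```
<transform set name=pkg.fmri -> edit value pkg://ms.omniti.com/ pkg://mypublisher/>
```

Run each received manifest through this transform with pkgmogrify, then republish with pkgsend publish.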
Despite using automated deploys for most things I work on, I had put off setting up such a mechanism for this site. I'm not sure what took so long.
With CircleCI I added a circle.yml file of:
dependencies:
  override:
    - pip install -r requirements.txt
test:
  override:
    - make build
deployment:
  deploy:
    branch: master
    commands:
      - make upload
And then an S3 user with the right permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1492350849000",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::philipcristiano.com",
        "arn:aws:s3:::philipcristiano.com/*"
      ]
    }
  ]
}
As I started to use OmniOS on AWS I ran into the problem that it does not, by default, include a way to execute the AWS User Data script when starting an instance. The User Data script provides a wonderful mechanism to bootstrap new instances. Without it, you may be required to use a configuration management tool or manually configure instances after they have booted.
The script is helpfully available through the instance metadata API at 169.254.169.254 with the URL http://169.254.169.254/2016-09-02/user-data. It should be simple to pull that down and execute the script with SMF!
I've put together a script to do this. It runs under SMF with a default timeout of 15 minutes and will restart if there are errors.
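The core of such a script boils down to fetching the URL and executing whatever comes back. This is a simplified sketch (the fetch_and_run helper is my own naming; the real AWSMF-Data script adds SMF integration and error handling beyond this):

```shell
# Fetch a user-data script from a URL and execute it if the
# fetch succeeds; do nothing when no user data is present.
fetch_and_run() {
  url="$1"
  script="$(mktemp)"
  if curl -sf -o "$script" "$url"; then
    sh "$script"
  fi
}

# On an EC2 instance this would be:
#   fetch_and_run "http://169.254.169.254/2016-09-02/user-data"
```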
There is a handy-dandy install script in the repo that will download and install the needed files. At the moment this isn't packaged, as the script is needed before I would set up a package repository.
There is still the problem of how to get this into an AWS AMI. Packer can build the image for us so that the AMI we launch will already have this script. The buildfile for this image is rather simple but the whole process is a powerful one.
To get your own OmniOS AMI with AWSMF-Data installed you can use the above Packer build.
Install Packer
Clone the repo:
$ git clone https://github.com/philipcristiano/packer-omnios.git
Run build.sh after setting a few variables:
$ export AWS_ACCESS_KEY_ID=...
$ export AWS_SECRET_ACCESS_KEY=...
$ export VPC_ID=...
$ export SUBNET_ID=...
$ ./build.sh
VPC_ID and SUBNET_ID are only required if you need to specify them (like when there is no default VPC in your account), in which case build.sh can be modified.
From here we can create User Data scripts in AWS and have new EC2 instances run code when they start!
A previous post showed how to install files. If you want to run a service from that package there are a few more steps.
The Service Management Facility provides a way to manage services in OmniOS. If you are running a service you installed from a package, this is the way to do it.
We will need to complete a few steps to package up a service and deploy it with IPS:
Create an SMF manifest that instructs SMF how to run our service
Deploy the SMF manifest
Start the service
Optionally, the service can be modified to read SMF properties so that it can be configured through svccfg.
A service manifest is an XML document that contains the information required to run a command as a service. This would normally mean creating a new XML document for each service. Thankfully there is a tool, Manifold, that can create a manifest from answers to the relevant questions.
Packaging for OmniOS goes over how to create a package using the same build system that is used for building OmniOS itself. The layout of that repository seems designed for building already-written software for use in OmniOS. If you need to package your own software, this can be more overhead than you are looking for. The tools used by that GitHub repository are included in the default installation of OmniOS, and Oracle's site has plenty of documentation about how to use IPS. It turns out you can start making packages for OmniOS with only a few commands.
This post will cover the tools required to create a package, not necessarily best practices in packaging for OmniOS.
I've created an example repository that can build and upload a package to an IPS package depot if you want to skip ahead.
The packaging commands we will be using are:
pkgsend - generates the package manifest and publishes the package
pkgmogrify - transforms the package manifest
pkglint - linter for package manifests
pkgfmt - formatter for package manifests
pkgrepo - (optional) refreshes the repository search index after upload
We will be packaging a Hello World script stored in hello-world.sh:
#!/usr/bin/bash
echo Hello World!
This file needs an execute bit as well, so we will run chmod +x hello-world.sh.
pkgsend will generate a manifest for us if we build a directory that mimics the deployed layout. If we put our script in build/usr/bin (and remove the extension) then run pkgsend generate build, we will get a manifest of files and directories to package.
$ /usr/bin/pkgsend generate build
dir group=bin mode=0755 owner=root path=usr
dir group=bin mode=0755 owner=root path=usr/bin
file usr/bin/hello-world group=bin mode=0755 owner=root path=usr/bin/hello-world
Our manifest so far says we need two directories and a file. This would be enough of a manifest to start with, but can be problematic if the directories don't line up with the host used to install the package. It would be better to remove the directories and assume that /usr/bin already exists on the system, since it really should already be there.
The command pkgmogrify can take a manifest and a transform file and output a transformed manifest. A simple transform to do this will be stored in transform.mog:
<transform dir path=usr -> drop>
This will drop any directories that include the path usr. If you are building a more complex directory structure, then using something like usr/bin$ as the path will drop only the common /usr/bin elements from the manifest.
For this we will write the manifest to a file, then mogrify it to remove the directories.
$ /usr/bin/pkgsend generate build > manifest.pm5.1
$ /usr/bin/pkgmogrify manifest.pm5.1 transform.mog
file usr/bin/hello-world group=bin mode=0755 owner=root path=usr/bin/hello-world
This now has just our script in the manifest. Using pkgmogrify we can easily script changes to manifests instead of relying on manual edits to clean up a generated manifest.
We'll write the updated manifest to a new file:
$ /usr/bin/pkgmogrify manifest.pm5.1 transform.mog > manifest.pm5.2
We have the manifest for what the package should contain but we still need to describe the package with metadata. We will need to include at least a name, version, description, and summary for the package.
The name and version are contained in a Fault Management Resource Identifier, or FMRI.
I recommend reading the link above about proper format and conventions for FMRIs, but for now we will write metadata.mog to contain:
set name=pkg.fmri value=example/hello-world@0.1.0,0.1.0-0.1.0:20160915T211427Z
set name=pkg.description value="Hello World"
set name=pkg.summary value="Hello World shell script"
We can use pkgmogrify to combine our metadata and current manifest file into the final manifest used for publishing our package. In this case we use pkgfmt to format the file as well:
$ /usr/bin/pkgmogrify metadata.mog manifest.pm5.2 | pkgfmt > manifest.pm5.final
The manifest we have now should work for publishing the package. We can verify by running pkglint on the final manifest:
$ /usr/bin/pkglint manifest.pm5.final
Lint engine setup...
Starting lint run...
$ echo $?
0
No errors or warnings, wonderful!
We now have a directory structure for the package we would like to create, as well as a manifest saying how to install the files. We can publish these components to an IPS package depot with pkgsend:
$ pkgsend publish -s PKGSERVER -d build/ manifest.pm5.final
pkg://myrepo.example.com/example/hello-world@0.1.0,0.1.0-0.1.0:20160916T182806Z
PUBLISHED
-s specifies the package server, -d specifies the directory to read, and we pass along the path to our manifest. Our package was then published!
If you are using an HTTP depotd server to publish and see the error pkgsend: Publisher 'default' has no repositories that support the 'open/0' operation, you will need to disable read-only mode for the server or publish to a filesystem repository.
The HTTP depotd interface doesn't refresh the search index when a package is published. This can be done with the pkgrepo command:
$ pkgrepo refresh -s PKGSERVER
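Putting the walkthrough together, the whole pipeline can be sketched as one script, using only commands shown above (PKGSERVER is a placeholder for your depot URL or repository path):

```
/usr/bin/pkgsend generate build > manifest.pm5.1
/usr/bin/pkgmogrify manifest.pm5.1 transform.mog > manifest.pm5.2
/usr/bin/pkgmogrify metadata.mog manifest.pm5.2 | pkgfmt > manifest.pm5.final
/usr/bin/pkglint manifest.pm5.final
/usr/bin/pkgsend publish -s PKGSERVER -d build/ manifest.pm5.final
/usr/bin/pkgrepo refresh -s PKGSERVER
```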
After uploading a package to an OmniOS package repository I was unable to find the package by searching. The package could be installed, and local searching would find it, but the depotd server didn't know how to find the package when searching. Restarting pkg/server would work around the issue, but having to do that after each publish would get annoying.
There is a command, pkgrepo, that will refresh the search index remotely! Running pkgrepo refresh -s PKGSRVR is enough to reload the search index.
When publishing to an IPS depotd server you may see the line:
pkgsend: Publisher 'default' has no repositories that support the 'open/0' operation.
If the depotd server shows you a web page but publishing does not work with pkgsend, you may have the server set up in read-only mode. svccfg will allow you to change the property with:
svccfg -s pkg/server setprop pkg/readonly = false
Don't do this to a server on the internet, though. Placing an HTTP server in front of depotd will allow you to add authentication; it is otherwise insecure!
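As a sketch of that fronting setup, an nginx fragment with basic auth might look like this (the backend port, realm, and htpasswd path are all assumptions on my part):

```
location / {
    proxy_pass http://127.0.0.1:10000;
    auth_basic "IPS depot";
    auth_basic_user_file /etc/nginx/depot.htpasswd;
}
```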
I've started trying to package some software for OmniOS for personal use. The OmniOS Packaging page in the wiki goes through how to do it using the tools used to build the OS. This is a bit more than I would want to do when publishing software to GitHub. I would rather not rely on a repository used to build the OS just to package one piece of software.
A few months ago I was trying to package a personal project and got most of the way there! So far there is a make target that will package an Erlang release into an IPS package. I think it only got as far as putting the files on disk. I still need to add the SMF manifest and fix permissions, but it's much smaller when used to package a single piece of software.
As part of the process of shaving some yaks today I wound up needing to upgrade my development server to the latest version of OmniOS. I originally installed the LTS version and planned to stay there until the next release. It turns out there isn't much reason not to upgrade to the latest version. You will get needed security updates either way, but will also be able to get around any OS-related bugs that have been fixed in the meantime.
The Upgrading to r151014 or later page had the needed information and worked quickly. I ran into an issue with the datasets for my zones causing the problem pkg: Unable to clone the current boot environment when trying to update with pkg. All the zones I care about are recreated with configuration management, so I didn't have a problem destroying the datasets and recreating them. If it were production I would have at least snapshotted the needed datasets before destroying them.
For the next release I think I'll update a bit sooner!
While updating Ansible I ran into an issue with an extras module for ZFS and Solaris. A playbook that used to work to set a mount point no longer worked. I was seeing errors that ended in
if int(version) >= 34:\r\nValueError: invalid literal for int() with base 10: '-'\r\n", "msg": "MODULE FAILURE"
An issue was filed in June and fixed last month. This change isn't in Ansible 2.1.1.0, the latest release, which I was using. For the time being I've added the extras repository devel branch as a submodule and used ANSIBLE_LIBRARY=... to get a fixed version.
I recently came across a static site generator written in Erlang called LambdaPad. I looked around a bit while trying to find a static site generator that would work with Contentful that I would enjoy working with. Most static site generators expect to source documents from the filesystem but LambdaPad allows any source of data you can write in Erlang!
Contentful is a CMS with an API that is free for small use cases. In my expected case, it is easier to use their API as a source than to have other people edit a Git repository.
A branch on my GitHub can source Contentful entries and provide them to templates. After adding some documentation, examples, and handling for Contentful pagination it should be ready for a PR.
... another example of me spending more time on infrastructure instead of the user-facing project that began this tangent!
This week I started trying to install OmniOS in a Vultr instance. I'm not sure where I first saw Vultr listed but was drawn to it because they offer custom ISO installs. OmniOS isn't supported by most hosting vendors so I would need to install via a custom ISO.
Setting up an account was quick on Vultr, including $5 free credit for opening an account. When creating a new instance you can select the custom ISO after you've added it via URL to your account. They will transfer the ISO to the right datacenter, attach it, then boot up the instance.
The ISO booted fine but installing OmniOS onto the instance didn't work. It turns out that the OmniOS installer doesn't like the way Vultr exposes disks as block devices to the instance. This was mentioned by Dan McDonald in the #omnios channel after he helped me debug. Originally I tweeted about trying to install it and he followed up. He was very helpful and mentioned that the installer is due to be replaced which will work around this issue, but it won't be right away.
It seems just running OmniOS on bare metal is the way to go. I might wind up getting a colo'd box at this point.
Creating an Erlang release with Relx is straightforward and quick but you still need to get it onto a machine. You could script something yourself, maybe even using a configuration management tool. You could also create a Debian package which would make your sysadmin happy and even make it easy to uninstall!
In this example I'll use FPM, although the Debian toolchain would work as well. This will assume that you can already make a release with Relx and that you put your release files into rel within your project. This may not follow all Debian best practices, but it will give you a .deb file that will install and look kind of like any other package. The package will include the Erlang Runtime System, so you won't need to install Erlang on the target system or any other dependencies before installing your package.
You likely already include a sys.config file with your release, but it would be nice to be able to configure the release after the package has been installed. This is usually done with files in /etc or /etc/PACKAGE. Your sys.config can easily be updated to make this happen!
Assuming you aren't configuring anything to start, your sys.config would look like:
[].
With a relx.config including:
{sys_config, "./rel/sys.config"}.
To make this include an /etc file, the config documentation says you can include a file path (absolute path preferred). This would make your Relx sys.config look like:
["/etc/PACKAGE/PACKAGE.config"].
Simple! We don't need any post-install configuration right now, but we should include a config-less file so that Erlang can find it when trying to use sys.config. Create a file rel/etc/PACKAGE/PACKAGE.config containing:
[].
Now this file can be updated with your configuration management tool without requiring changing any files within the release!
On Debian/Ubuntu systems it's not uncommon to have an /etc/default/PACKAGE file as well that allows you to add any environment variables you would like to use for your application. I ran across this needing to set the ulimit. For now we will create a file in rel/etc/default/PACKAGE that sets the ulimit:
ulimit -n 65536
It's nice to have a system user that runs your release without requiring some other tool to create it. This can be done with FPM's --before-install option, passing the path to an appropriate script. More can be included, but for now we will create a file rel/before-install with the contents:
adduser --system PACKAGE
so that dpkg will create the user for us before the package is installed.
Your release should generally start right after the system does, and it is helpful to follow the standard init system of your distribution. This is becoming systemd or Upstart depending on your distribution/derivative, but for this example we will stick with SysV-style init. This gets slightly more complex, so we will start with the example and then walk through each line. This requires that you use the extended start script from Relx, with the option:
{extended_start_script, true}.
#!/bin/sh
HOME=/opt/PACKAGE/
[ -f /etc/default/PACKAGE ] && . /etc/default/PACKAGE
mkdir -p /var/log/PACKAGE
chown -R PACKAGE /opt/PACKAGE /var/lib/PACKAGE /var/log/PACKAGE
su PACKAGE -mc "/opt/PACKAGE/bin/PACKAGE $@"
First, #!/bin/sh: use sh to execute the script.
Erlang and your release really want a HOME variable. For now we will install the application into /opt so that /opt/PACKAGE will be used as HOME.
Next we test for the defaults file we created before and, if it exists, source it into this script. While the package will create the file, it's still polite to check that it exists before sourcing.
mkdir and chown are used so that the log/var directories and the release itself all belong to the user we created in before-install. More directories can be added if you need something specific.
Finally, with su we pass the arguments of the init script through to the extended start script from Relx. The extended start script includes commands like start and stop that are familiar for an init script, but also includes ways to easily get a remote console connected to the Erlang VM!
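For example, with the wrapper above installed as /etc/init.d/PACKAGE, the extended start script's remote console can be reached through it:

```
/etc/init.d/PACKAGE remote_console
```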
Since this script uses a directory in /var/lib, create the corresponding directory within rel: rel/var/lib/PACKAGE.
Until now we just created files that will be used by FPM; now we can tell FPM to create the package. This could be done on any OS, not just the one you intend to distribute the package to, but it's generally easier to use the same OS since we include the Erlang Runtime System in the package as well.
fpm -s dir -t deb -n PACKAGE -v VERSION \
--before-install=rel/before-install \
_rel/PACKAGE=/opt/ \
rel/init=/etc/init.d/PACKAGE \
rel/var/lib/PACKAGE/=/var/lib/PACKAGE/ \
rel/etc/PACKAGE/PACKAGE.config=/etc/PACKAGE/PACKAGE.config \
rel/etc/default/PACKAGE=/etc/default/PACKAGE
Going through some of the options:
-s dir says to create this package from a directory, instead of some other packaging format (of which FPM supports many!).
-t deb creates a Debian package as output.
-n PACKAGE names the package.
-v VERSION gives the package this version. This should probably be determined by your Makefile or build system rather than hardcoded.
--before-install=rel/before-install adds the before-install script for FPM so that it can be executed when you are installing the package.
The rest of the arguments tell FPM to take the relative file location and place it at the absolute location when installing the package. This includes the release into /opt/, our init script, the var/lib directory, the etc config, and the defaults file.
Running this command will create the package and output a .deb file you can install on another machine. It includes ERTS and requires no dependencies beyond what comes in a fresh install! If you've found this helpful please let me know on Twitter!
I've started trying to test ETSDB with Common Test and found that it wasn't terribly straightforward to test the Riak Core vnode. The vnode is managed by a Riak Core gen_fsm and isn't a built-in OTP behavior.
I wanted to include the Riak Core gen_fsm to make sure that I integrated it properly. First you want to spin up the riak_core_vnode with your vnode implementation and save the config in the Pid.
Similarly, to tear it down you should send a message to stop the FSM. This requires a teardown call and adding a handler in your vnode to return a stop.
That includes the send_command, which is a variation on the Riak Core source. It will handle sending the message in a way that gets the response sent back to the sending process. Riak Core does some mucking around to deal with running with the full application.
Now you can call send_command with the Pid of the FSM and, with the ref returned, pull the messages out of the mailbox!
I've started splitting out useful time-series database functions from ETSDB into their own library as LevelTSDB. This is mostly so I don't have to test everything again for some things I would eventually like to make.
I'm starting to use a new static site generator now that there are a bunch of good ones in Python. I find Python/pip more sane to use than Ruby/bundler/rbenv.