Error Publishing to pkg.depotd

Posted on

When publishing to an IPS depotd server you may see the line

pkgsend: Publisher 'default' has no repositories that support the 'open/0' operation.

If the depotd server will show you a web page but publishing with pkgsend does not work, you may have the server set up in read-only mode. svccfg will let you change the property with

svccfg -s pkg/server setprop pkg/readonly = false

Don't do this to a server on the internet, though; it is otherwise insecure. Placing an HTTP server in front of depotd will let you add authentication.


Building IPS Packages For OmniOS

Posted on

I've started trying to package some software for OmniOS for personal use. The OmniOS Packaging page in the wiki goes through how to do it using the tools used to build the OS. This is a bit more than I would want to do when publishing software to GitHub. I would rather not rely on a repository used to build the OS just to package one piece of software.

A few months ago I was trying to package a personal project and got most of the way there! So far there is a make target that will package an Erlang release into an IPS package, though I think it only got as far as putting the files on disk. I still need to add the SMF manifest and fix permissions, but it's a much smaller setup for packaging a single piece of software.


Upgrading OmniOS is Surprisingly Easy

Posted on

As part of the process of shaving some yaks today I wound up needing to upgrade my development server to the latest version of OmniOS. I originally installed the LTS version and planned to stay there until the next release. It turns out there isn't much reason not to upgrade to the latest version: you get the needed security updates either way, but you also pick up fixes for OS-level bugs that have been resolved in the meantime.

The Upgrading to r151014 or later page had the needed information and worked quickly. I ran into an issue with the datasets for my zones causing the error pkg: Unable to clone the current boot environment when trying to update with pkg. All the zones I care about are recreated with configuration management, so I didn't have a problem destroying their datasets and recreating them. If it were production I would have at least snapshotted the needed datasets before destroying them.

For the next release I think I'll update a bit sooner!


Ansible ZFS Bug For Solaris

Posted on

While updating Ansible I ran into an issue with an extras module for ZFS on Solaris. A playbook I had been using to set a mount point no longer worked, with errors that ended in

if int(version) >= 34:\r\nValueError: invalid literal for int() with base 10: '-'\r\n", "msg": "MODULE FAILURE"

An issue was filed in June and fixed last month. The change isn't in the latest Ansible release, 2.1.1.0, which is what I was using. For the time being I've added the extras repository's devel branch as a submodule and used ANSIBLE_LIBRARY=... to point at the fixed version.


LambdaPad

Posted on

I recently came across a static site generator written in Erlang called LambdaPad. I had been looking around for a static site generator that would work with Contentful and that I would enjoy working with. Most static site generators expect to source documents from the filesystem, but LambdaPad allows any source of data you can write in Erlang!

Contentful is a CMS with an API and is free for small use cases. In my expected case it is easier to use their API as a source than to have other people edit a Git repository.

I have a branch on GitHub that can source Contentful entries and provide them to templates. After adding some documentation, examples, and handling for Contentful pagination it should be ready for a PR.

... another example of me spending more time on infrastructure instead of the user-facing project that began this tangent!


OmniOS on Vultr

Posted on

This week I started trying to install OmniOS on a Vultr instance. I'm not sure where I first saw Vultr mentioned, but I was drawn to it because they offer custom ISO installs. OmniOS isn't supported by most hosting vendors, so I would need to install via a custom ISO.

Setting up an account on Vultr was quick, and included a $5 credit for opening an account. When creating a new instance you can select a custom ISO after you've added it to your account via URL. They transfer the ISO to the right datacenter, attach it, then boot the instance.

The ISO booted fine, but installing OmniOS onto the instance didn't work. It turns out the OmniOS installer doesn't like the way Vultr exposes disks as block devices to the instance. This was mentioned by Dan McDonald in the #omnios channel after he helped me debug it; I had originally tweeted about trying the install and he followed up. He was very helpful and mentioned that the installer is due to be replaced, which will work around this issue, but that won't happen right away.

It seems running OmniOS on bare metal is the way to go. I might wind up getting a colo'd box at this point.


Debian Packaging an Erlang Relx Release

Posted on

Creating an Erlang release with Relx is straightforward and quick but you still need to get it onto a machine. You could script something yourself, maybe even using a configuration management tool. You could also create a Debian package which would make your sysadmin happy and even make it easy to uninstall!

In this example I'll use FPM although the Debian toolchain would work as well. This will assume that you can already make a release with Relx and that you put your release files into rel within your project. This may not follow all Debian best-practices but it will give you a .deb file that will install and look kind of like any other package. The package will include the Erlang Runtime System so you won't need to install Erlang on the target system or any other dependencies before installing your package.

Application configuration

You likely already include a sys.config file with your release but it would be nice to be able to configure the release after the package has been installed. This is usually done with files in /etc or /etc/PACKAGE. Your sys.config can be easily updated to make this happen!

Assuming you aren't configuring anything to start with, your sys.config would look like:

[].

With a relx.config including

{sys_config, "./rel/sys.config"}.

To have this include a file from /etc, the config documentation says you can include a file path (an absolute path is preferred). This would make your Relx sys.config look like:

["/etc/PACKAGE/PACKAGE.config"].

Simple! We don't need any post-install configuration right now, but we should still ship the empty config file so that Erlang can find it when loading sys.config. Create a file at rel/etc/PACKAGE/PACKAGE.config:

[].

Now this file can be updated with your configuration management tool without requiring changing any files within the release!

On Debian/Ubuntu systems it's not uncommon to also have an /etc/default/PACKAGE file that lets you set any environment variables you would like to use for your application. I ran across this when needing to set the ulimit. For now we will create a file at rel/etc/default/PACKAGE that sets the ulimit:

ulimit -n 65536

Making a user

It's nice to have a system user that runs your release without requiring some other tool to create it. This can be done with FPM's --before-install option, which takes the path to an appropriate script. More could be included, but for now we will create a file rel/before-install with the contents:

adduser PACKAGE --system

Now, before the package is installed, dpkg will create the user for us.

Init

Your release should generally start right after the system does, and it is helpful to follow the standard init system of your distribution. That is becoming systemd or Upstart depending on your distribution/derivative, but for this example we will stick with a SysV-style init script. This gets slightly more complex, so we will start with the example and then walk through each line. It requires that you use the extended start script from Relx with the option {extended_start_script, true}.

#!/bin/sh
HOME=/opt/PACKAGE/

[ -f /etc/default/PACKAGE ] && . /etc/default/PACKAGE

mkdir -p /var/log/PACKAGE

chown -R PACKAGE /opt/PACKAGE /var/lib/PACKAGE /var/log/PACKAGE

su PACKAGE -mc "/opt/PACKAGE/bin/PACKAGE $@"

First, #!/bin/sh: execute the script with sh.

Erlang and your release really want a HOME variable. For now we will install the application into /opt, so /opt/PACKAGE will be used as HOME.

Next we test for the defaults file we created before and, if it exists, source it into this script. While the package will create the file, it's still polite to check that it exists before sourcing it.

mkdir and chown are used so that the log/var directories and the release itself all belong to the user we created in before-install. More directories can be added if you need something specific.

Finally with su we will pass the arguments to the init script through to the extended start script from Relx. The extended start script includes things like start and stop that are familiar for an init script but also includes ways to easily get a remote console connected to the Erlang VM!

Since this script uses a directory in /var/lib, create the corresponding directory within rel: rel/var/lib/PACKAGE.

Creating the Package

Until now we have just created files for FPM to use; now we can tell FPM to create the package. This could be done on any OS, not just the one you intend to distribute the package to, but since the package will also include the Erlang Runtime System it's generally easier to build on the same OS you deploy to.

fpm -s dir -t deb -n PACKAGE -v VERSION \
	--before-install=rel/before-install \
	_rel/PACKAGE=/opt/ \
	rel/init=/etc/init.d/PACKAGE \
	rel/var/lib/PACKAGE/=/var/lib/PACKAGE/ \
	rel/etc/PACKAGE/PACKAGE.config=/etc/PACKAGE/PACKAGE.config \
	rel/etc/default/PACKAGE=/etc/default/PACKAGE

Going through some of the options:

-s dir says to create this package from a directory, instead of some other packaging format (of which FPM supports many!)

-t deb creates a Debian package as output

-n PACKAGE names the package

-v VERSION gives the package this version. This should probably be determined by your Makefile or build system rather than hardcoded.

--before-install=rel/before-install adds the before-install script so that it is executed when the package is installed.

The rest of the options tell FPM to take each relative file location and place it at the absolute location when installing the package. This includes the release into /opt/, our init script, the /var/lib directory, the /etc config, and the defaults file.

You now have a package!

Running this command will create the package for you and output a .deb file you can install on another machine. This includes ERTS and requires no dependencies beyond what comes in a fresh install! If you've found this helpful please let me know on Twitter!


Testing Riak Core VNodes

Posted on

I've started trying to test ETSDB with Common Test and found that it wasn't terribly straightforward to test the Riak Core vnode. The vnode is managed by a Riak Core gen_fsm and isn't a built-in OTP behavior.

I wanted to include the Riak Core gen_fsm to make sure that I integrated it properly. First you want to spin up riak_core_vnode with your vnode implementation and save the Pid in the Common Test Config.

Similarly, to tear it down you should send a message to stop the FSM. This requires a teardown call and adding a handler in your vnode that returns a stop.

That includes send_command, which is a variation on the one in the Riak Core source. It handles sending the message in a way that gets the response sent back to the sending process. Riak Core does some mucking around to deal with running with the full application.

Now you can call send_command with the Pid of the FSM and, with the ref it returns, pull the messages out of the mailbox!


LevelTSDB

Posted on

I've started splitting the useful time-series database functions from ETSDB out into their own library, LevelTSDB. This is mostly so I don't have to re-test everything for some things I would eventually like to build.


Nikola Generator

Posted on

Starting to use a new static site generator now that there are a bunch of good ones in Python. I find Python/pip saner to use than Ruby/bundler/rbenv.


Deploying Python Without Downtime

Posted on

When you first start out deploying your application it can be easy to just run supervisor restart all or service my_app restart to get your current version into production. This is fine when you are starting out, but eventually you will try to connect while your application is booting and see HTTP 503s.

Eventually you might discover that Gunicorn and uWSGI can reload your application without closing the socket, so your web requests will just be delayed a bit as your application starts. This works fine as long as your application doesn't take too long to start. Unfortunately some applications at work can take a minute to start, too long to have connections waiting at the socket.

Gunicorn's reloading via kill -HUP $PID will stop all worker processes and then start them again, and the slow init for workers tends to cause problems. uWSGI has chain reloading, which restarts workers one at a time, but I need support for Tornado, which doesn't fit well with uWSGI.

With a Load Balancer

A common technique is to remove a single server from the load balancer, upgrade/restart the application, then bring it back. We are using load balancers, but scheduling this would require coordinating with the HAProxy management socket while provisioning nodes. Our deploys also currently go to all nodes simultaneously, not one by one, so this would be an even larger change. It would also be possible to fool the healthcheck by 404'ing the status page and waiting for the LBs to take the node out of the pool. That requires more waiting than I want: two healthcheck failures at 5-second intervals for each server, plus time to reintegrate the web process once the upgrade is finished.

Gunicorn Reload ++

Gunicorn will automatically restart failed web processes so it would be possible to just kill each process, sleeping in between, until you get through all the child processes. This works but if application start times change significantly we are either waiting too long for restarts or not long enough and risking some downtime.

Since Gunicorn lets you hook Python code into the worker lifecycle, it should be possible to write a snippet that notifies the restart process when the worker application is ready. Gunicorn didn't have the needed hook, but it was simple to contribute the change. It requires running from master until a new release is made.

Now our restart process takes advantage of the fact that a single socket has multiple processes accepting connections. Restarting slightly diminishes our capacity (by 1/N), but we continue to handle traffic without letting connections wait too long.

The general process for this is

  for child_pid of gunicorn-master:
    kill child_pid
    wait for app startup

My first version of this used shell and nc to listen on UDP for an application startup. This worked well, although integrating our process manager into shell was a bit more than I would like to do.
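The loop itself is small. A sketch of the idea in Python (psutil, the UDP port, and SIGTERM per worker are assumptions of the sketch, not our exact script):

#!/usr/bin/env python
"""Rolling-restart sketch: kill Gunicorn workers one at a time and wait for
the post_worker_init hook to report that the replacement has booted."""
import os
import signal
import socket
import sys

import psutil  # any way of listing the master's child PIDs would do

NOTIFY_ADDR = ('127.0.0.1', 8999)  # must match the address the hook sends to


def rolling_restart(master_pid):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(NOTIFY_ADDR)
    try:
        for worker in psutil.Process(master_pid).children():
            # The master will notice the dead worker and spawn a replacement.
            os.kill(worker.pid, signal.SIGTERM)
            # Block until a worker finishes booting and pings us.
            sock.recv(1024)
    finally:
        sock.close()


if __name__ == '__main__':
    rolling_restart(int(sys.argv[1]))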

Whichever form it takes, the restart script should be called with the PID of the Gunicorn master: restart.sh $PID

and it works in tandem with a post_worker_init hook that notifies the restart script when the app is running.
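The hook side is only a few lines in the Gunicorn config file. A sketch (the UDP address is arbitrary and just has to match whatever the restart script listens on):

# gunicorn.conf.py
import socket

NOTIFY_ADDR = ('127.0.0.1', 8999)  # where the restart script is listening


def post_worker_init(worker):
    # Called by Gunicorn once a worker has finished loading the application.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(('worker-ready %d' % worker.pid).encode(), NOTIFY_ADDR)
    finally:
        sock.close()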

If we had a minimal WSGI application like this, for example (a stand-in sketch; the /_status route is an assumption):
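# app.py - a stand-in WSGI application with a /_status health route
def application(environ, start_response):
    if environ.get('PATH_INFO') == '/_status':
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'OK']
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, World!']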

We could even do things like check the /_status page to verify the application is working.

Be careful about running too much of your application in this healthcheck: if your post_worker_init raises an error for any reason, the worker will exit, preventing your application from starting. This can be a problem if you check something like a DB connection that may go away; even if your application could work, it won't be able to boot.

Now with our applications that take a minute to start we can do a rolling restart without taking the application down or dropping any connections!


Cropping Faces at SeatGeek

Posted on

I wrote a blog post for SeatGeek's Dev blog about my recent work automating image cropping with OpenCV. We use this to generate images for our iPhone app, upcoming iPad app, and throughout our site.

It reached the top 10 on Hacker News and was generating 20% of the traffic to our site for a period of time after it was submitted. Given its success I'm working on a post about recent changes I've made to our deploy process as well.


Ordering of Rebar Dependencies

Posted on

As I'm starting out with Erlang I've just been adding dependencies to the end of my Rebar config, and everything just kind of worked. I added each dependency one by one and didn't have a problem until I cleaned out the deps folder and tried to recompile. Then I ran into this error:

src/ranch_protocol.erl:none: undefined parse transform 'lager_transform'

I knew that it was working before and that the parse transform wasn't the issue. It turns out the dependency ordering matters! That shouldn't be too big a surprise: Rebar uses the list of dependencies as the ordering for compilation, not any kind of introspection. I just had to put the Lager dependency above Ranch and everything worked out.


SeatGeek RSS

Posted on

I've setup an RSS feed for local concerts powered by SeatGeek. We (at SeatGeek) don't have one built-in but we do have an API. The page isn't pretty but I find it useful for finding any events I may want to go to. With tagging in NewsBlur I can filter events more easily.

I built this with Erlang as a way to test out the language. There isn't really a direct need for high concurrency, but it was a good chance to give it a try. I've learned that I really like Erlang; it's rather terse and has constructs built into OTP that make writing software a joy. At some point I need to tackle using releases, but I'm not there yet.

When I spend more time on the RSS feed I'll eventually include affiliate links. It takes a lot of traffic to make money with affiliates especially at most concert prices. But maybe it will be an incentive for me to turn this into something even more useful.


More on RabbitMQ Priorities

Posted on

With a single process consuming from multiple queues, the prefetch count can be a good enough tool for balancing the work from each queue.

After you have set up priorities with multiple queues you still need to consume from them. You could set up separate processes for each queue, or a single process that consumes from multiple queues.

I usually set consumers to a prefetch count of 10; it works well enough and latency isn't much of a concern. When consuming from multiple queues, setting each queue to the same prefetch count will give you a fair distribution of work to that consumer.

What I finally took the time to try this week was changing the prefetch counts based on priority. In my case we had two queues, high and low priority. The high-priority work came from user actions that we wanted to happen quickly. There was only one set of processes consuming from both queues, with the same prefetch counts; since messages are sent to the consumer ahead of time, there were 20 messages held by each process. Adjusting the low-priority queue to a prefetch of 2 meant that 12 items would be sent to the consumer, still plenty of work. These 12 items are put into a single internal queue by the client library, with no work needed in your code, and give a 5:1 distribution of work in the consumer.
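With pika, for example, the per-consumer prefetch can be set right before each basic_consume call. A sketch (pika 1.x keywords; the queue names and handler are placeholders):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()


def handle(channel, method, properties, body):
    print('processing %r' % body)  # stand-in for the real work
    channel.basic_ack(delivery_tag=method.delivery_tag)


# A non-global basic_qos applies to consumers started after it, so each
# queue's consumer gets its own prefetch window on the same channel.
channel.basic_qos(prefetch_count=10)
channel.basic_consume(queue='work-high', on_message_callback=handle)

channel.basic_qos(prefetch_count=2)
channel.basic_consume(queue='work-low', on_message_callback=handle)

channel.start_consuming()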

With the adjusted prefetch counts we are able to control which portion of the work we do when the queues start to back up. You do sacrifice some latency: the high-priority queue may give more work to a busy consumer when other consumers could be empty. In practice this did not matter for us, and we set the prefetch on the high-priority queue to 10 anyway.

This has the nice property that low-priority items are still processed while high-priority items exist, and they will be consumed at the highest rate as soon as the high-priority items are drained. With more than two queues this technique may cause more latency than you would like, but it has been working well and required no code changes. I had been planning on building a locking mechanism, and you would still need one if you didn't want any low-priority work in progress while high-priority work exists, but I don't think it will be needed anytime soon.


Conference Going

Posted on

I just returned from a week of conferences, first Monitorama and then Emerging Technologies for the Enterprise.

This was the first Monitorama event, held in Boston, and it was a great chance to meet the people behind a lot of the software and blogs I follow. The first day was a single-track set of talks on open source monitoring and the second day a hackathon to help improve the state of open source monitoring. I contributed a bit to correct a small pain point I had. I didn't enter it into the judging, partially because I believed it to be a rather simple hack and partially because I didn't want to have to rush to get back to Amtrak for my train back to NY. I was pleasantly surprised to see food always available, including plenty of healthy bits.

PhillyETE was definitely a big change from Monitorama: more people, more talks, not as great food. Part of the benefit of PhillyETE for me is the trip to Philadelphia to see my family, especially as an Easter trip. The best part of the conference had to be seeing the push for Clojure as an enterprise language (and, of slightly less interest to me, Scala). Given that they are JVM languages they fit in very well and can work side by side for evaluation. I tried Scala for a bit since I wanted to try out Akka, but I ran into JVM memory issues during the Play "Hello, World!" tutorial, which really soured me on Scala.

Basho's sponsorship of Monitorama also helped convince me (with a 25% discount) to attend Ricon East. That may be the end of my conference going this year; I haven't decided yet about Surge.


Graphite Pager - v0.0.6 - Links to Documentation

Posted on

I've released version v0.0.6 of Graphite Pager, my tool for alerting based on Graphite metrics.

The change for this release was to add links to documentation for each alert. Currently the format of the URL is {docs_url}/{alert name}#{alert legend name} where the docs_url is specified in the YAML config and the rest is based on the alert that is triggering.

While people at work haven't jumped at creating metrics and alerts for various things, this will at least make it easier for them to know why an alert was created and how to fix the problem. Right now I have only documented a few alerts and will add more as existing alerts fire. If anyone needs alerts made I will make sure the wiki page exists ahead of time.


Provisioning AMQP

Posted on

AMQP clients will let you declare your exchanges, queues, and bindings at the consumer level, but that can cause problems as your usage grows. You may get to the point where you have to grep for all the declare calls in your code, or run into problems trying to migrate to a new broker.

An alternative is to have consumers and producers take only the name of a queue or exchange and handle the rest outside of the application. This lets you see and change the configuration for all of your applications in one place. When you need to provision a new broker it is done in a few seconds, instead of having to migrate some consumers, then all producers, then the rest of the consumers.
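The provisioning itself can be as simple as one script that declares everything, run from your configuration management. This isn't the Declare AMQP format, just the idea sketched in plain pika with example names:

#!/usr/bin/env python
"""Declare all exchanges, queues, and bindings in one place."""
import pika

TOPOLOGY = {
    'exchanges': [('events', 'topic')],
    'queues': ['email-high', 'email-low'],
    'bindings': [
        ('events', 'email-high', 'email.transactional'),
        ('events', 'email-low', 'email.bulk'),
    ],
}


def provision(host='localhost'):
    connection = pika.BlockingConnection(pika.ConnectionParameters(host))
    channel = connection.channel()
    for name, kind in TOPOLOGY['exchanges']:
        channel.exchange_declare(exchange=name, exchange_type=kind, durable=True)
    for queue in TOPOLOGY['queues']:
        channel.queue_declare(queue=queue, durable=True)
    for exchange, queue, routing_key in TOPOLOGY['bindings']:
        channel.queue_bind(queue=queue, exchange=exchange, routing_key=routing_key)
    connection.close()


if __name__ == '__main__':
    provision()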

I've started writing and using Declare AMQP so that I can provision everything within Chef. It only supports the features I'm using but is very simple.

The migration is now much simpler, as provisioning the server once is enough to make it ready for all applications. When I need to change exchanges or bindings I don't have to update any code. There is still the need to know which applications publish which routing keys, but that's not a huge concern.

This has also helped with configuring queues with specific priorities for the same type of task. Each application can be started with a queue to listen to, and the configuration for both the broker and the applications remains in one place.


Prioritizing Emails with RabbitMQ

Posted on

After you move a few tasks to the background with RabbitMQ you may realize that you eventually need to support different priorities for the same type of task, such as sending bulk email only after you send transactional email. RabbitMQ doesn't have priorities, so you wind up using separate queues for each priority.

You should already have a worker that can send the email; now you just need to set up RabbitMQ with priorities.

The main exchange you use, email, should be declared as either a topic or direct exchange and will take all of the messages you intend to send, but when declaring it you should include an alternate exchange of email-undeliverable, declared as a fanout exchange. Then you need a default queue bound to the default routing key on the email exchange and also bound to email-undeliverable. Now every email you try to send that doesn't have a specifically prioritized queue will be routed to the default queue.
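Declared with pika it looks something like this (a sketch with pika 1.x keywords; the routing keys and the email-bulk queue are example names):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Messages published to 'email' that match no binding get re-routed to the
# alternate exchange and fan out to the default queue from there.
channel.exchange_declare(
    exchange='email', exchange_type='direct', durable=True,
    arguments={'alternate-exchange': 'email-undeliverable'})
channel.exchange_declare(
    exchange='email-undeliverable', exchange_type='fanout', durable=True)

channel.queue_declare(queue='email-default', durable=True)
channel.queue_bind(queue='email-default', exchange='email', routing_key='email')
channel.queue_bind(queue='email-default', exchange='email-undeliverable',
                   routing_key='')

# A prioritized queue only needs its own binding.
channel.queue_declare(queue='email-bulk', durable=True)
channel.queue_bind(queue='email-bulk', exchange='email', routing_key='email.bulk')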

All you need now is to start your workers consuming from each queue you create.


Example Tornado AMQP Client with Pika

Posted on

I've used AMQP for a couple of years now but never used Pika in production. Recently I've been using Haigha in my AMQP Dispatcher project, but I needed a client for Tornado, which Pika supports. There is an example in the Pika docs of using the TornadoConnection, but it doesn't provide as usable an interface as I'd like.

I wrote a client for internal use that handles the conditions I needed by default (including callbacks with the result of RabbitMQ publish confirmations) and, after talking with a former coworker, put it into a gist.
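The rough shape of such a client, sketched with pika's TornadoConnection (pika 1.x assumed; the names here are placeholders rather than the gist's API), is:

import pika
from pika.adapters.tornado_connection import TornadoConnection
from tornado import ioloop


class Publisher(object):
    """Minimal async publisher: connect, open a channel, enable confirms."""

    def __init__(self, amqp_url):
        self._params = pika.URLParameters(amqp_url)
        self._connection = None
        self._channel = None

    def connect(self):
        self._connection = TornadoConnection(
            self._params, on_open_callback=self.on_connection_open)

    def on_connection_open(self, connection):
        connection.channel(on_open_callback=self.on_channel_open)

    def on_channel_open(self, channel):
        self._channel = channel
        # Ask the broker for publish confirmations; results arrive below.
        channel.confirm_delivery(self.on_delivery_confirmation)

    def on_delivery_confirmation(self, method_frame):
        # method_frame.method is a Basic.Ack or Basic.Nack
        print('publish result: %s' % method_frame.method.NAME)

    def publish(self, exchange, routing_key, body):
        self._channel.basic_publish(exchange, routing_key, body)


if __name__ == '__main__':
    publisher = Publisher('amqp://guest:guest@localhost:5672/%2F')
    publisher.connect()
    ioloop.IOLoop.current().start()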

It doesn't handle some things (like publishing a content type with the encoded JSON) and could have some better names, but it may be of use to more people.