Automatic Config Replication With Mercurial

I've done a lot of neat things since I started my new job earlier this month. I'm really excited about the things I've learned and experimented with, and I would like to share some of the concepts with my visitors.

At work we use a lot of virtual machines in our individual development environments. Most of these virtual machines use very similar configuration settings, but the settings are not a standard part of the installation. That is because we build our virtual machines using the same installation tools that our customers would use. The configuration I'm talking about is just stuff specific to our development environment.

Creating and configuring these virtual machines is one of the first things my mentor showed me how to do my first day on the job. He commented on how quickly I would probably start learning all of the configuration tasks because we tend to set up our development VMs several times a month. That was all fine and dandy, and I did get a pretty good feel for what needed to go into a development VM that first day.

However, after doing it so many times, I realized how much time I was spending just trying to get each VM set up right. It wasn't hard to configure--it was just time-consuming. It wasn't long before I started thinking of ways to optimize the process.

One of the ideas I came up with, which seems to be serving my purposes perfectly, is that of using Mercurial to quickly and easily get the exact same configuration from one box to another. It also has the added benefit of keeping a history of the changes I make to my configuration as time goes on.

I won't go into exact detail on how I have things set up at work, but I would like to try to describe a similar scenario that should illustrate my goal just as well.

Getting Started

One of the first things I would encourage you to do is follow along. It will make the concept sink in much faster, and you will probably see other applications very quickly. Please note, however, that if you're following along exactly, it could be a very time-consuming process. I will be using 3 virtual machines as I write this, but you could just as easily use 5, 10, or 100,000. Likewise, you could eliminate the virtual machines altogether if you're in an environment with several physical computers.

One virtual machine will act as the "master" server, or the one that will be configured first. The other virtual machines will act as "slave" servers, which will simply receive configuration updates that happen on the master server. We will also modify this behavior to be a bit more interesting toward the end of the article.

Virtual Machines Galore!

First off, I will create some basic virtual machines using the net install version of Debian 5.0.3. I really only need to create 1 VM and then clone it a couple of times. I am willing to furnish my virtual machines to those who are interested in using them. I will install some additional software in the VM to make sure the demo works smoothly. Among the packages that I will install are:

  • Python
  • Mercurial
  • OpenSSH server

Initialize a Repository

Once I have all of that set up in my virtual machines, I will initialize a Mercurial repository on the master server to maintain the configuration files that I am interested in. Let's just use the /etc directory for the time being. There's a pretty good chance that most of our system-wide configuration will all be contained somewhere beneath /etc.

cd /etc
hg init

Now let's have a gander at the files that we can have Mercurial manage for us:

hg st

Wow! That is quite a set of files, isn't it? Thankfully, they should mostly be plain text files. Mercurial is very efficient at managing text files. Let's now add all of the files in /etc to our repository, so they can be tracked and easily pushed out to other systems.

hg add

That command will happily add everything that hg st printed. Obviously, we can get a little more picky about what we do and do not add to our repository, but that's not the goal of this article. Now, this step merely tells Mercurial that it needs to pay attention to changes in these files. The files have not yet been committed to the repo. Let's do that, so we have a backup of our configuration files in their pristine state:

hg ci -m "Initial import"

The -m "Initial import" is just a comment, to describe what happened to warrant a commit to the repository. It is for your use and the use of anyone who has access to your repo.

Clone The Configuration

Now let's try to push the configuration we just committed on the master server to one of the slave servers. Since my virtual machines are all essentially in the same state, there should be no conflicts, right? Try running the following command on the master server:

hg push ssh://root@slave1//etc
root@slave1's password:
remote: abort: There is no Mercurial repository here (.hg not found)!
abort: no suitable response from remote hg!

Blast! We can't simply push the configuration files out to another computer. For that to work, the repository would first have to exist on the slave server. Let's try this another way. On the slave server, run this command:

hg clone ssh://root@master//etc /etc
root@master's password:
abort: destination '/etc/' is not empty

Doh! Mercurial won't let us clone the repository from the master server! That's because Mercurial wants to clone to a new directory, with nothing already in it. One way to get around this hairball of a show-stopper is to just copy the repo using conventional UNIX utilities. Execute this command on one of your slave servers:

scp -r root@master:/etc/.hg /etc/

The .hg directory contains all of the repository information, and it's really all we need to snag in order to clone the repository. This might not be the most elegant solution in the world, but it will suffice for the time being. Once the scp command completes, we should have a full copy of the configuration file repository. Run this command to verify:

hg st

If your setup is anything like mine, you'll probably have a few files that are listed as being modified. Chances are that these files will vary from host to host anyway, and they are probably not worth keeping in a version control system. That would just be begging for conflicts.

I wrote an extension for Mercurial that should make this part of my tutorial a little less hacky. On your other slave server, run the following commands:

hg clone http://bitbucket.org/codekoala/hgext /root/hgext
echo "[extensions]" >> /root/.hgrc
echo "neclone = /root/hgext/neclone.py" >> /root/.hgrc

This extension gives you a new Mercurial command called neclone (N. E. Clone, or "not empty clone"). As we saw earlier, Mercurial doesn't let us clone a repository into a directory that is not empty. This extension allows us to do that. It works almost identically to the regular clone command... takes the same options and everything.

Still on your second slave server, run these additional commands:

hg neclone ssh://root@master//etc /etc
cd /etc
hg up -C

The last step is optional, and soon to be included as part of the extension. It will update your working copy to the latest revision in the repository. Beware that it overwrites any uncommitted changes you may have made to files that are tracked by Mercurial.

So now both slave servers should have a clone of the configuration repository from the master server.
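
If you want a quick sanity check that all three machines really are in sync, compare the latest changeset on each one. hg tip is a standard Mercurial command; the changeset line it prints should be identical on the master and on both slaves:

cd /etc
hg tip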

Being Picky

Let's start to be a little picky about the files we are tracking in our repository. Some of the files that appeared as modified on my slave server after copying the .hg directory from the master server are:

  • adjtime
  • alternatives/pager
  • alternatives/pager.1.gz
  • mailcap
  • network/run/ifstate
  • udev/rules.d/70-persistent-net.rules

I think it's safe to remove these from the repository, to avoid conflicts with other systems. To tell Mercurial to stop tracking a file without actually deleting it from the filesystem, use the hg forget command:

hg forget adjtime
hg forget mailcap

And so on. Go ahead and do that for each of the files that appeared to be modified on your slave server immediately after copying the .hg directory. I'm going to add /etc/hostname to the list of files to forget too.
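
If forgetting each file by hand gets tedious, something like the following one-liner should take care of all the modified ones at once (you'd still forget /etc/hostname by hand). This is just a sketch: hg status -m -n prints the bare names of the modified files, and xargs feeds them to hg forget, so be careful if any file names contain spaces:

hg status -m -n | xargs hg forget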

After doing that, each of those files should appear as marked for removal when you run hg st. Don't worry, this is normal. The files will not be deleted from the filesystem; Mercurial will simply stop tracking them. Go ahead and commit those changes to the repository on your slave server.

hg ci -Am "Removed some files from version control"

Now let's push those changes out to the master server:

hg push
abort: repository default-push not found!

Since we copied the .hg directory directly using scp, our slave won't know where the changes need to go when we run the push command with no explicit destination repository. To fix that, let's create a file in /etc/.hg/ called hgrc on the slave server. In that file, put the following text:

[paths]
default = ssh://root@master//etc

The hg push command should now push directly to the master server. Yay! The problem we face now is that every other slave server in the group is out of date. How can we fix that? We'll use Mercurial hooks.

Automating Config Replication

Mercurial offers some very useful hooks that we can use to automatically push configuration changes out to each of our slave servers. We will use the commit and changegroup hooks to do the magic. Let's create a script that will live on the master server to take care of pushing our changes out to each slave server. Create a new file in /etc/ on the master server called propagate.sh:

#!/bin/bash
# bring the master's working copy up to the latest changeset
hg up
# then have each slave pull from the master and update its working copy
for node in 'slave1' 'slave2'
do
    ssh root@$node "cd /etc; hg pull -u"
done

Let's also make sure this script is executable:

chmod +x /etc/propagate.sh

This script assumes that your /etc/hosts file or your nameserver is configured appropriately to allow slave1 and slave2 to be resolved to IP addresses. The reason we SSH into each slave server and use hg pull, instead of simply running hg push ssh://root@$node//etc, is that you can't force an update of the working copy on a remote server when you push. You can, however, request an update when you pull.

Obviously, this script is not the most sophisticated of scripts. It might work well for my demonstration, with only a few servers, but once you get beyond that it would be a nightmare to maintain the list of servers the script has to connect to. You can use whatever means you'd like to keep track of the servers you want to replicate your configuration to. I don't want to bother with all of the crap I'd get for suggesting one thing over another, so it's now your call.
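
Just to give one concrete option, the loop could read its host list from a separate file instead of hard-coding it. Something along these lines would do it; /etc/replication-nodes is a name I made up, holding one slave hostname per line:

#!/bin/bash
# same idea as propagate.sh, but the slave list lives in its own file
# (/etc/replication-nodes is a hypothetical name -- one hostname per line)
hg up
while read node
do
    ssh root@$node "cd /etc; hg pull -u"
done < /etc/replication-nodes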

Now it's time to configure the Mercurial hook to execute that script when the master server sees a changeset get into its repository. Open up /etc/.hg/hgrc on the master server, or create it if it doesn't exist. Make sure it has at least the following in it:

[hooks]
commit.propagate = /etc/propagate.sh
changegroup.propagate = /etc/propagate.sh

Let's try it out! Run these commands on your master server:

echo "" >> /etc/hosts
hg ci -m "Added a blank line to the hosts file"
root@slave1's password:
remote: Permission denied, please try again.
remote: Permission denied, please try again.
remote: Permission denied (publickey,password).
abort: no suitable response from remote hg!
Connection closed by slave2
warning: commit.propagate hook exited with status 255

Blast! The script failed because it wanted us to type in a password, but it was not running in interactive mode. Let's fix that with a little SSH key magic. I won't go into the details about how this works, but the following commands on your master server should get us rolling:

ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys2
scp -r ~/.ssh root@slave1:~
scp -r ~/.ssh root@slave2:~

Warning

Keep in mind this is not secure and should probably not be how your production machines are configured, especially with the root user.

For simplicity's sake, just accept all of the defaults and don't set a passphrase. These commands enable us to SSH into our slave servers without using a password. If you get an error such as:

remote: Host key verification failed.
abort: no suitable response from remote hg!

...it just means you need to manually log into your master server from the slave machine that threw that error. When doing so, you will have to answer "yes" to a question about the authenticity of the host you're logging into.
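
Alternatively, if you're comfortable skipping the fingerprint check, you can pre-seed the known_hosts file instead of logging in manually. ssh-keyscan is a standard OpenSSH tool; run something like this on each slave (and the equivalent on the master for each slave, if needed):

ssh-keyscan -H master >> ~/.ssh/known_hosts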

Testing It Out

It is now time to see if we can make a configuration change on one slave server and have it show up on the other slave server. Let's update the hosts file a bit by adding the following line on the second slave server:

10.0.0.5        nonexistanthost

Now let's commit the change and push it off to the master server:

hg ci -m "Added a dumb line to the hosts file"
hg push

My system actually told me that it had copied the change out to another host. I know because I saw these lines:

remote: pulling from ssh://root@master//etc
remote: searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

Now when I look at the first slave server, I should see that new line in my /etc/hosts file. Also, the log on each server should have the same entry that I just made about adding "a dumb line to the hosts file."
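
If you want to verify it from the command line on the first slave server rather than opening the file, a quick check like this should show both the new line and the commit that carried it:

cd /etc
tail -n 1 hosts
hg log -l 1 hosts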

Seem Like A Lot of Work?

A lot of what we just did probably seemed like more work than it is worth, right? Well, being a nerd typically comes with a few qualities. One quality which I have observed many a time in my most geeky of friends is that they will spend hours and hours up front on a program or script just so they can save 2 minutes in the future. They work hard to be lazy.

There is a lot of boilerplate configuration that takes place in this particular scenario. I realize that. What I haven't shared with you, though, is how I automated the boilerplate configuration as well as the propagation of configuration. I'm tired of putting this article off, so I will have to leave those details for another article. Sorry!

Why?! There's a Better Way (tm)

There is always a better way. Always. Go ahead and use whatever you feel is the most efficient method for keeping configuration files in sync across several computers. This is just one more option to add to your toolkit. Don't worry, I won't be offended if you don't like it or don't use it. It works perfectly for me and it's free, and I just wanted to share!

Mercurial 1.3 Released

Today marks the official release of Mercurial 1.3, an awesome distributed version control system. This release comes with several nifty features, including the following, straight from the What's New wiki page:

Major Changes

  • experimental support for sub-repositories
  • Python 2.3 is no longer supported; now requires Python 2.4-2.6

Commands

  • merge: add -P/--preview option
  • update: don't unlink added files when -C/--clean is specified
  • update: added -c/--check option to abort on local changes
  • update: allow merges going backwards
  • push: improved handling of named branches
  • branches/heads: add a -c/--closed option to show closed branches
  • help: new extensions topic

General

  • add patch.eol config setting to work with cross-platform patches
  • fixed support for SSL through proxies
  • add ability to load hooks from arbitrary Python modules
  • hide passwords for HTTP repositories in error and log output
  • fix Python 2.6 support in the Windows installer
  • add mechanism for specifying HTTP authentication details in hgrc
  • prompts and choices are now shown even in non-interactive mode
  • performance improvements, especially on Windows
  • much improved zsh completion
  • improved Danish, Japanese, Italian and simplified Chinese translations
  • new German, French, Greek, Brazilian Portuguese and traditional Chinese translations

Web interface

  • read configuration data from webdir configs
  • add branches page to hgweb
  • pluggable templater engine support
  • refresh hgwebdir configuration periodically
  • let web.encoding override ui.encoding setting
  • deal with dicts/lists like webdir config paths

I'm quite stoked about this release :) For additional information, please check the project's wiki.

Checking In

I suppose I should update everyone out there about what I've been up to lately. It seems strange to me that I post articles much less frequently now than I did when I was a full-time university student. You'd think I'd have a whole lot more time to blog about whatever I've been working on. I suppose I do indeed have that time; it's just that I usually like to wait until my projects are "ready" for the public before I write about them.

The biggest reason I haven't posted much of anything lately is a small Twitter client I've been working on. Its purpose is to be a simple, out-of-the-way Twitter client that works equally well on Windows, Linux, and OSX. The application is written in Python and wxPython, and it has been coming along quite well. It works great in Linux (in GNOME and KDE at least), but Windows and OSX have issues with windows stealing focus when I don't want them to. I'm still trying to figure it out--any advice would be greatly appreciated.

Chirpy currently does nothing more than check your Twitter accounts for updates periodically. It notifies you of new updates using blinking buttons (which can be configured to not blink). I think the interface is pretty nice and easy to use, but I am its developer so it's only proper that I think that way.

Anyway, that project has been sucking up a lot of my free time. It's been frustrating to build it in Linux only to find that Windows and OSX both act stupidly when I go to test it. That frustration inspired me to tinker with a different approach to a Twitter client. I began fooling around with it last night, and I think the idea has turned out to be more useful than Chirpy is after a month of development!

I'm calling this new project "Tim", which is short for "Twitter IM". This one also periodically checks your Twitter account(s) for updates (of course). However, Tim will send any Twitter updates to any Jabber-enabled instant messenger client that you are signed into. If you're like me, you have Google Talk open most of the day, so you can just have Twitter updates go straight there! You can also post updates to Twitter using your Jabber instant messenger when Tim is running by simply sending a message back!!

The really neat stuff comes in when you start to consider the commands that I've added to Tim tonight. I've made it possible for you to filter out certain hashtags, follow/unfollow users, and specify from which Twitter account to post updates (when you have multiple accounts enabled). I hate all of those #FollowFriday tweets... they drive me crazy. So all I have to do is type ./filter followfriday and no tweet that contains #FollowFriday will be sent to my Jabber client. I love it.

More commands are on the way. Also on the way is a friendly interface for configuring Tim. Getting it up and running the first time is... a little less than pleasant :) Once you have it configured it seems to work pretty well though.

If you're interested in trying it out, just head on over to the project's page (http://bitbucket.org/codekoala/twitter-im/). Windows users can download an installer from the Downloads tab. I plan on putting up a DMG a little later tonight for OSX users. Linux users can download the .tar.gz file and install the normal Python way :) Enjoy!

Update: The DMG for OSX is a little bigger than I thought it would be, so I won't be hosting it on bitbucket. Instead, you can download it from my server.

Don't forget to read the README !!!

Syntax Highlighting, ReST, Pygments, and Django

Some of you regulars out there may have noticed an interesting change in the presentation of some of my articles: source code highlighting. I've been interested in doing this for quite some time; I just never really got around to implementing it until last night.

I found this implementation process to be a bit more complicated than I had anticipated. For my own benefit as well as for anyone else who wants to do the same thing, I thought I'd document my findings in a thorough article on how to add syntax highlighting to an existing Django- and reStructuredText-powered Web site.

The power behind the syntax highlighting comes from:

  • Python
  • reStructuredText (ReST)
  • Pygments
  • Django

Python is a huge player in this feature because reStructuredText (ReST) was built for Python, Pygments is the source highlighter (written in Python), and Django is written in Python (and my site is powered by Django). Some of you may recall that I converted all of my articles to ReST not too long ago because it suited my needs better than Textile, my previous markup processor. At the time, I was not aware that the conversion to ReST would make it all the easier for me to implement the syntax highlighting, but last night I figured out that that conversion probably saved me a lot of frustration. Cascading Stylesheets (CSS) are responsible for making the source code actually look good, while Pygments takes care of assigning classes to various parts of the designated source code and generating the CSS.
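
To make that division of labor a bit more concrete, here is roughly what Pygments does on its own, outside of ReST and Django. This is just a sketch using the plain Pygments API: it wraps each token in a span carrying a CSS class, and the stylesheet decides what those classes actually look like.

from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import PythonLexer

# the output is HTML full of <span class="..."> elements, not colors
html = highlight("print 'Hello world!'", PythonLexer(), HtmlFormatter())
print html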

So, the first set of requirements, which I will not document in this article, are that you already have a Django site up and running and that you're familiar with ReST syntax. If you have the django.contrib.flatpages application installed already, you can type up some ReST documents there and apply the concepts discussed in this article.

Next, you should ensure that you have Pygments installed. There are a variety of ways to install this. Perhaps the easiest and most platform-independent method is to use easy_install:

$ easy_install pygments

This command should work essentially the same on Windows, Linux, and Macintosh computers. If you don't have it installed, you can get it from its website. If you're using a Debian-based distribution of Linux, such as Ubuntu, you could do something like this:

$ sudo apt-get install python-pygments

...and it should take care of downloading and installing Pygments. Alternatively, you can download it straight from the PyPI page and install it manually.

Now we need to install the Pygments ReST directive. A ReST directive is basically like a special command to the ReST processor. I think this part was the most difficult aspect of the implementation, simply because I didn't know where to find the Pygments directive or how to write my own. Eventually, I ended up downloading the Pygments-1.0.tar.gz file from PyPI, opening the Pygments-1.0/external/rst-directive.py file from the archive, and copying the stuff in there into a new file within my site.

For my own purposes, I made some small adjustments to the directive over what comes with the Pygments distribution. I think it would save us all a lot of hassle if I just copied and pasted the directive, as I currently have it, so you can see it first-hand.

"""
    The Pygments reStructuredText directive
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    This fragment is a Docutils_ 0.4 directive that renders source code
    (to HTML only, currently) via Pygments.

    To use it, adjust the options below and copy the code into a module
    that you import on initialization.  The code then automatically
    registers a ``code-block`` directive that you can use instead of
    normal code blocks like this::

    .. code:: python

            My code goes here.

    If you want to have different code styles, e.g. one with line numbers
    and one without, add formatters with their names in the VARIANTS dict
    below.  You can invoke them instead of the DEFAULT one by using a
    directive option::

    .. code:: python
       :number-lines:

            My code goes here.

    Look at the `directive documentation`_ to get all the gory details.

    .. _Docutils: http://docutils.sf.net/
    .. _directive documentation:
       http://docutils.sourceforge.net/docs/howto/rst-directives.html

    :copyright: 2007 by Georg Brandl.
    :license: BSD, see LICENSE for more details.
"""

# Options
# ~~~~~~~

# Set to True if you want inline CSS styles instead of classes
INLINESTYLES = False

from pygments.formatters import HtmlFormatter

# The default formatter
DEFAULT = HtmlFormatter(noclasses=INLINESTYLES)

# Add name -> formatter pairs for every variant you want to use
VARIANTS = {
    'linenos': HtmlFormatter(noclasses=INLINESTYLES, linenos=True),
}


from docutils import nodes
from docutils.parsers.rst import directives

from pygments import highlight
from pygments.lexers import get_lexer_by_name, TextLexer

def pygments_directive(name, arguments, options, content, lineno,
                       content_offset, block_text, state, state_machine):
    try:
        lexer = get_lexer_by_name(arguments[0])
    except ValueError:
        # no lexer found - use the text one instead of an exception
        lexer = TextLexer()
    # take an arbitrary option if more than one is given
    formatter = options and VARIANTS[options.keys()[0]] or DEFAULT
    parsed = highlight(u'\n'.join(content), lexer, formatter)
    parsed = '<div class="codeblock">%s</div>' % parsed
    return [nodes.raw('', parsed, format='html')]

pygments_directive.arguments = (1, 0, 1)
pygments_directive.content = 1
pygments_directive.options = dict([(key, directives.flag) for key in VARIANTS])

directives.register_directive('code-block', pygments_directive)

I won't explain what that code means, because, quite frankly, I'm still a little hazy on the inner workings of ReST directives myself. Suffice it to say that this snippet allows you to easily highlight blocks of code on ReST-powered pages.

The question now is: where do I put this snippet? As far as I'm aware, this code can be located anywhere so long as it is loaded at one point or another before you start your ReST processing. For the sake of simplicity, I just stuffed it in the __init__.py file of my Django site. This is the __init__.py file that lives in the same directory as manage.py and settings.py. Putting it in that file just makes sure it's loaded each time you start your Django site.
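
If stuffing the whole snippet into __init__.py feels a little cluttered, you could just as easily save it as its own module and import it from there; the import alone is enough to register the directive. Something like this, where rst_pygments.py is just a name I made up for a module containing the snippet above:

# __init__.py -- lives next to manage.py and settings.py
# importing the module runs directives.register_directive('code-block', ...)
# as a side effect, so nothing else is needed here
import rst_pygments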

To make Pygments highlight a block of code, all you need to do is something like this:

.. code:: python

    print 'Hello world!'

...which would look like...

print 'Hello world!'

If you have a longer block of code and would like line numbers, use the :number-lines: option:

.. code:: python
    :number-lines:

    for i in range(100):
        print i

...which should look like this...

for i in range(100):
    print i

That's all fine and dandy, but it probably doesn't look like the code is highlighted at all just yet (on your site, not mine). It's just been marked up by Pygments to have some pretty CSS styles applied to it. But how do you know which styles mean what?

Luckily enough, Pygments takes care of generating the CSS files for you as well. There are several attractive styles that come with Pygments. I would recommend going to the Pygments demo to see which one suits you best. You can also roll your own styles, but I haven't braved that yet so I'll leave that for another day.

Once you choose a style (I chose native for Code Koala), you can run the following commands:

$ pygmentize -S native -f html > native.css
$ cp native.css /path/to/site/media/css

(Obviously, you'd want to replace native with the name of the style you like the most.) Finally, add a line to your HTML templates to load the newly created CSS file. In my case, it's something like this:

<link rel="stylesheet" type="text/css" href="/static/styles/native.css" />

Now you should be able to see nicely-formatted source code on your Web pages (assuming you've already got ReST processing your content).
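
As a side note, if you'd rather generate that stylesheet from Python instead of the pygmentize command, something along these lines should produce equivalent CSS. The .highlight selector matches the class Pygments puts on its wrapper div by default; adjust it if you've changed that.

from pygments.formatters import HtmlFormatter

# dump the CSS rules for the 'native' style, prefixed with .highlight
with open('native.css', 'w') as css_file:
    css_file.write(HtmlFormatter(style='native').get_style_defs('.highlight'))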

If you haven't been using ReST to generate nicely-formatted pages, you should make sure a couple of things are in place. First, you must have the django.contrib.markup application installed. Second, your templates should be set up to process ReST markup into HTML. Here's a sample templates/flatpages/default.html:

{% extends 'base.html' %}
{% load markup %}

{% block title %}{{ flatpage.title }}{% endblock %}

{% block content %}
<h2>{{ flatpage.title }}</h2>

{{ flatpage.content|restructuredtext }}
{% endblock %}

So that short template should allow you to use ReST markup for your flatpages, and it should also take care of the magic behind the .. code:: python directive.

I should also note that Pygments can handle a TON of languages. Check out the Pygments demo for a list of languages it knows how to highlight.
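
If you're curious which lexers your local install knows about, you can also ask Pygments directly; get_all_lexers is part of its public API:

from pygments.lexers import get_all_lexers

# each entry is (name, aliases, filename patterns, mime types)
for name, aliases, filenames, mimetypes in get_all_lexers():
    print name, aliases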

I think that about does it. Hopefully this article will help some other poor chap who is currently in the same situation as I was last night, and hopefully it will save you a lot more time than it took me to figure out all this junk. If it looks like I've missed something, or maybe that something needs further clarification, please comment and I'll see what I can do.