Thoughts on Docker

Docker has been causing a lot of ripples in all sorts of ponds in recent years. I first started playing with it nearly a year ago now, after hearing about it from someone else at work. At first I didn't really understand what problems it was trying to solve. The more I played with it, however, the more interesting it became.

Gripes About Docker

There were plenty of things that I didn't care for about Docker. The most prominent strike against it was how slow it was to start, stop, and destroy containers. I soon learned that if I stored my Docker data on a btrfs partition, things became much faster. And it was great! Things that used to take 10 minutes started taking 2 or 3 minutes. A very significant improvement.
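If you want to try the same thing, the gist (a sketch--exactly where the daemon flags live depends on your init setup) is to put /var/lib/docker on a btrfs partition and start the daemon with the btrfs storage driver:

# /var/lib/docker sits on a btrfs partition
docker -d --storage-driver=btrfs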

But then it was still slow to actually build any containers that were less than trivial. For example, we've been using Docker for one of my side projects since April 2014 (coming from Vagrant). Installing all of the correct packages and whatnot inside of our base Docker image took several minutes--much longer than it does on bare metal or even in virtual machines. It was just slow. Anytime we had to update dependencies, we'd invalidate the image cache and spend a large chunk of time just waiting for an image to build. It was/is painful.

On top of that, pushing and pulling from the public registry is much slower than a lot of us would like it to be. We set up a private registry for that side project, but it was still slower than it should be for something like that.

Many of you reading this article have probably read most or all of those gripes from other Docker critics. They're fairly common complaints.

Lately, one of the things about using Docker for development that's become increasingly frustrating is communication between containers on different hosts. Docker uses environment variables to tell one container how to reach services on another container running on the same host. Using environment variables is a great way to avoid hardcoding IPs and ports in your applications. I love it. However, when your development environment consists of 8+ distinct containers, the behavior around those environment variables is annoying (in my opinion).
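If you haven't seen those variables in action, the linking workflow looks roughly like this (container and image names here are just for illustration):

docker run -d --name db postgres
docker run -d --link db:db my-web-app

Inside the my-web-app container, Docker injects variables like DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT that point at the linked container--but only for containers running on the same host.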

Looking For Alternatives

I don't really feel like going into more detail on that right now. Let's just say it was frustrating enough for me to look at alternatives (more out of curiosity than really wanting to switch away from Docker). This search led me to straight Linux containers (LXC), upon which Docker was originally built.

I remembered trying to use LXC for a little while back in 2012, and it wasn't a very successful endeavor--probably because I didn't understand containers very well at the time. I also distinctly remember being very fond of Docker when I first tried it because it made LXC easy to use. That's actually how I pitched it to folks.

Long story short, I have been playing with LXC for the past while now. I'm quite happy with it this time around. It seems to better fit the bill for most of the things we have been doing with Docker. In my limited experience with LXC so far, it's generally faster, more flexible, and more mature than Docker.

What proof do I have that it's faster? I have no hard numbers right now, but building one of our Docker images could take anywhere from 10 to 20 minutes. And that was building on top of an already existing base image. The base image took a few minutes to build too, but it was rebuilt much less regularly than this other image. So 10-20 minutes just to install the application-specific packages--not the core packages, not configuration, just the additional packages.

Building an entire LXC container from scratch, installing all dependencies, and configuring basically an all-in-one version of the 8 different containers (along with a significant number of other things for monitoring and such) has consistently taken less than 3 minutes on my 2010 laptop. The speed difference is phenomenal, and I don't even need btrfs. Launching the full container is basically as fast as launching a single-purpose Docker container.

What proof do I have that LXC is more flexible than Docker? Have you tried running systemd inside of a Docker container? Yeah, it's not the most intuitive thing in the world (or at least it wasn't the last time I bothered to try it). LXC will let you use systemd without any fuss (that I've noticed, anyway). This probably isn't the greatest example of flexibility in the world of containers, but it certainly works for me.
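For what it's worth, getting a full-distro container where systemd runs as the init process is only a couple of commands (a sketch assuming LXC 1.0's download template; swap in whichever distro and release you like):

lxc-create -n test1 -t download -- --dist fedora --release 20 --arch amd64
lxc-start -n test1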

You also get some pretty interesting networking options, from what I've read. Not all of your containers need to be NAT'ed: some can be NAT'ed while others are bridged to appear on the same network as the host. I'm still exploring all of these goodies, so don't ask for details about them from me just yet ;)
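As a taste of the bridged option, here's the kind of thing that goes in a container's config file (a hypothetical excerpt--the bridge name and address are made up):

# /var/lib/lxc/test1/config
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.1.50/24

With a setup like that, the container shows up on the host's LAN like any other machine.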

What proof do I have that LXC is more mature than Docker? Prior to Docker version 0.9, its default execution environment was LXC. Version 0.9 introduced libcontainer, which eliminated Docker's need for LXC. The LXC project has been around since August 2008; Docker has been around since March 2013. That's nearly 5 entire years that LXC has had to mature before Docker was even a thing.

What Now?

Does all of this mean I'll never use Docker again? That I'll use LXC for everything that Docker used to handle for me? No. I will still continue to use Docker for the foreseeable future. I'll just be more particular about when I use it vs when I use LXC.

I still find Docker to be incredibly useful and valuable. I just don't think it's as suitable for long-running development environments, or as a replacement for a fair amount of what folks have been using Vagrant to do. It can certainly handle that stuff, but LXC seems better suited to the task, at least in my experience.

Why do I think Docker is still useful and valuable? Well, let me share an example from work. We occasionally use a program with rather silly Java requirements: it requires a specific revision, and it must be 32-bit. It's really dumb. Installing and using this program on Ubuntu is really quite easy. Using the program on CentOS, however, is... quite an adventure. But not an adventure you really want to take. You just want to use that program.

All I had to do was compose a Dockerfile based on Ubuntu, toss a couple of apt-get lines in there, build an image, and push it to our registry. Now any of our systems with Docker installed can happily use that program without having to deal with any of its particularities. The only real requirement now is an operational installation of Docker.
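The Dockerfile itself was about as simple as they come--something along these lines (a sketch; the package and program names are placeholders rather than the real ones):

FROM ubuntu:14.04
RUN apt-get update && \
    apt-get install -y openjdk-7-jre:i386
CMD ["silly-java-program"]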

Doing something like that is certainly doable with LXC, but it's not quite as cut and dried. In addition to having LXC installed, you also have to make sure that the container configuration file is suitable for each system where the program will run. This means making sure there's a bridged network adapter on the host, that the configuration file uses the correct interface name, that it doesn't try to use an IP address that's already claimed, and so on.

Also, Docker gives you port forwarding, bind mounts, and other good stuff with some simple command line parameters. Again, port forwarding and bind mounts are perfectly doable with straight LXC, but it's more complicated than just passing some additional command line parameters.
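For example (image name and paths made up):

docker run -d -p 8080:80 -v /srv/app/data:/data some-image

One flag for the port forward, one for the bind mount. With straight LXC, you're editing the container's config and/or wiring up iptables rules by hand to get the same effect.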

Anyway. I just wanted to get that out there. LXC will likely replace most of my Linux-based virtual machines for the next while, but Docker still has a place in my toolbox.

"systemctl status foo" was too slow

For quite a while now, running any sort of systemctl status foo command seemed to take forever on any and all of my systems. That exact command would sometimes take as long as 30 seconds to complete, despite foo not even being an available service. I noticed it more on my aging laptop than on my other systems, but I just attributed the slowness to my hard drive maybe preparing to fail.

Anyway, I finally got frustrated enough to actually put some effort into seeing what the problem might really be and how I could avoid the terrible delay for something so simple. It dawned on me that the actual status of the service was coming back pretty fast; it was getting any recent output from the service that took forever.

This led me to look into systemd's journald. I checked the /var/log/journal/xxxxx... directory on my laptop. It was massive--4.5GB of logs. I know better than to just go deleting files out from under a running process, so I looked into ways to simply truncate the logs. That search led me to a few pages that all suggested modifying /etc/systemd/journald.conf to rein things in a bit.

The configuration options that I kept seeing were SystemMaxUse and RuntimeMaxUse. When I set these both to 10M and restarted journald (systemctl restart systemd-journald), my /var/log/journal/xxxxx... directory was nice and tidy again. And systemctl status foo-like commands returned muuuch faster.
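In case it saves someone a search, the relevant bits of /etc/systemd/journald.conf look like this:

[Journal]
SystemMaxUse=10M
RuntimeMaxUse=10M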

I suppose I'll be adding this stuff to my configuration script!

uWSGI FastRouter and nginx

Lately I've been spending a lot of time playing with Docker, particularly with Web UIs and "clustering" APIs. I've been using nginx and uWSGI for most of my sites for quite some time now. My normal go-to for distributing load is nginx's upstream directive.

This directive can be used to specify the address/socket of backend services that should handle the same kinds of requests. You can configure the load balancing pretty nicely right out of the box. However, when using Docker containers, you don't always know the exact IP for the container(s) powering your backend.
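For anyone who hasn't used it, a typical upstream setup looks something like this (addresses are made up):

upstream workers {
    server 10.0.0.10:8000;
    server 10.0.0.11:8000;
}

server {
    location / {
        include     /etc/nginx/uwsgi_params;
        uwsgi_pass  workers;
    }
}

Nice and declarative--until the set of backends changes and you have to edit it all over again.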

I played around with some fun ways to automatically update the nginx configuration and reload nginx each time a backend container appeared or disappeared. This was really, really cool to see in action (since I'd never attempted it before). But it seemed like there had to be a better way.

Mongrel2 came to mind. I've played with it in the past, and it seemed to handle my use cases quite nicely until I tried using it with VirtualBox's shared folders. At the time, it wasn't quite as flexible as nginx when it came to working with those shared folders (might still be the case). Anyway, the idea of having a single frontend that could seamlessly pass work along to any number of workers without being reconfigured and/or restarted seemed like the ideal solution.

As I was researching other Mongrel2-like solutions, I stumbled upon yet another mind-blowing feature tucked away in uWSGI: The uWSGI FastRouter.

This little gem makes it super easy to get the same sort of functionality that Mongrel2 offers. Basically, you create a single uWSGI app that will route requests to the appropriate workers based on the domain being requested. Workers can "subscribe" to that app to be added to the round-robin pool of available backends. Any given worker app can actually serve requests for more than one domain if you so desire.

On the nginx side of things, all you need to do is use something like uwsgi_pass with the router app's socket. That's it. You can then spawn thousands of worker apps without ever restarting nginx or the router app. Whoa.

So let's dig into an example. First, some prerequisites. I'm currently using:

  • nginx 1.6.0
  • uwsgi 2.0.4
  • bottle 0.12.7
  • Python 3.4.1
  • Arch Linux

The first thing we want is that router app. Here's a uWSGI configuration file I'm using:

uwsgi-fastrouter/router.ini

[uwsgi]
plugins = fastrouter
master = true
shared-socket = 127.0.0.1:3031
fastrouter-subscription-server = 0.0.0.0:2626
fastrouter = =0
fastrouter-cheap = true
vacuum = true

# vim:ft=dosini et ts=2 sw=2 ai:

So, quick explanation of the interesting parts:

  • shared-socket: we're setting up a shared socket on 127.0.0.1:3031. This is the socket that we'll use with nginx's uwsgi_pass directive, and it's also used for our fastrouter socket (=0 implies that we're using socket 0).
  • fastrouter-subscription-server: this is how we make it possible for our worker apps to become candidates to serve requests.
  • fastrouter-cheap: this disables the fastrouter when we have no subscribed workers. Supposedly, you can get the actual fastrouter app to also be a subscriber automatically, but I was unable to get this working properly.

Now let's look at a sample worker app configuration:

uwsgi-fastrouter/worker.ini

[uwsgi]
plugins = python
master = true
processes = 2
threads = 4
heartbeat = 10
socket = 192.*:0
subscribe2 = server=127.0.0.1:2626,key=foo.com
wsgi = app
vacuum = true
harakiri = 10
max-requests = 100
logformat = %(addr) - %(user) [%(ltime)] "%(method) %(uri) %(proto)" %(status) %(size) "%(referer)" "%(uagent)"

# vim:ft=dosini et ts=2 sw=2 ai:

  • socket: we're automatically allocating a socket on our NIC with an IP address that looks like 192.x.x.x. This whole syntax was a new discovery for me as part of this project! Neat stuff!!
  • subscribe2: this is one of the ways that we can subscribe to our fastrouter. Based on the server=127.0.0.1:2626 bit, we're working on the assumption that the fastrouter and workers are all going to be running on the same host. The key=foo.com is how our router app knows which domain a worker will serve requests for.
  • wsgi: our simple Bottle application.

Now let's look at our minimal Bottle application:

uwsgi-fastrouter/app.py

from bottle import route, default_app


application = default_app()
application.catchall = False


@route('/')
def index():
    return 'Hello World!'

All very simple. The main thing to point out here is that we've imported the default_app function from bottle and use it to create an application instance that uWSGI's wsgi option will use automatically.

Finally, our nginx configuration:

uwsgi-fastrouter/nginx.conf

daemon                  off;
master_process          on;
worker_processes        1;
pid                     nginx.pid;

events {
    worker_connections  1024;
}


http {
    include             /etc/nginx/mime.types;

    access_log          ./access.log;
    error_log           ./error.log;

    default_type        application/octet-stream;
    gzip                on;
    sendfile            on;
    keepalive_timeout   65;

    server {
        listen 80 default;
        server_name localhost foo.com;

        location / {
            include     /etc/nginx/uwsgi_params;
            uwsgi_pass  127.0.0.1:3031;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}

# vim:filetype=nginx:

Nothing too special about this configuration. The only thing to really point out is the uwsgi_pass with the same address we provided to our router's shared-socket option. Also note that it will bind to port 80 by default, so you'll need root access for nginx.

Now let's run it all! In different terminal windows, run each of the following commands:

sudo nginx -c nginx.conf -p $(pwd)
uwsgi --ini router.ini
uwsgi --ini worker.ini

If all goes well, you should see no output from the nginx command. The router app should have some output that looks something like this:

spawned uWSGI master process (pid: 4367)
spawned uWSGI fastrouter 1 (pid: 4368)
[uwsgi-subscription for pid 4368] new pool: foo.com (hash key: 11571)
[uwsgi-subscription for pid 4368] foo.com => new node: :58743
[uWSGI fastrouter pid 4368] leaving cheap mode...

And your worker app should have output containing:

subscribing to server=127.0.0.1:2626,key=foo.com

For the purpose of this project, I quickly edited my /etc/hosts file to include foo.com as an alias for 127.0.0.1. Once you have something like that in place, you should be able to hit the nginx site and see requests logged in your worker app's terminal:

curl foo.com
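The /etc/hosts line itself is nothing exotic--just an alias tacked onto the loopback entry:

127.0.0.1   localhost foo.com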

The really cool part is when you spin up another worker (same command as before, since the port is automatically assigned). Again, there's no need to restart nginx or the router app--the new worker will be detected automatically! After doing so, each request will be spread across all of the subscribed workers.

Here's a quick video of all of this in action, complete with multiple worker apps subscribing to one router app. Pay close attention to the timestamps in the worker windows.

While this is all fine and dandy, there are a couple of things that seem like they should have better options. Namely, I'd like to get the single FastRouter+worker configuration working. I think it would also be nice to be able to use host names or DNS entries for the workers to know how to connect to the FastRouter instance. Any insight anyone can offer would be greatly appreciated! I know I'm just scratching the surface of this feature!

Minion-Specific Data With etcd

So I've been spending a fair amount of my free time lately learning more about salt, docker, and CoreOS. Salt has been treating me very well. I mostly only use it at home, but more opportunities to use it at work are on the horizon.

The first time I remember really hearing about Docker was when one of my co-workers tried using it for one of his projects. I didn't really spend much time with it until after SaltConf earlier this year (where lots of others brought it up). I'm pretty excited about Docker. I generally go out of my way to make sure my stuff will work fine on various versions of Linux, and Docker makes testing on various platforms insanely easy.

CoreOS is one of my more recent discoveries. I stumbled upon it in the wee hours of the night a few weeks ago, and I've been very curious to see how CoreOS and my fairly limited knowledge of Docker could help me. For those of you who haven't heard of CoreOS yet, it's kinda like a "hypervisor" for Docker containers with some very cool clustering capabilities.

I was able to attend a SaltStack and CoreOS meetup this past week. Most of the CoreOS developers stopped by on their way to GopherCon, and we all got to see a very cool demo of CoreOS in action.

One of the neat projects that the CoreOS folks have given us is called etcd. It is a "highly-available key value store for shared configuration and service discovery." I'm still trying to figure out how to effectively use it, but what I've seen of it is very cool. Automatic leader election, rapid synchronization, built-in dashboard, written in Go.

Anyway, I wanted to be able to use information stored in an etcd cluster in my Salt states. techhat committed some initial support for etcd in Salt about a month ago, but the pillar support was a bit more limited than I had hoped. Last night I submitted a pull request for getting minion-specific information out of etcd. This won't be available for a little while--it's only in the develop branch for now.

To use it, you'll need a couple of things in your Salt master's configuration file (/etc/salt/master). First, you must configure your etcd host and port. In order to use this information in our pillar, we need to configure this using a named profile. We'll call the profile "local_etcd":

local_etcd:
  etcd.host: 127.0.0.1
  etcd.port: 4001

Now we can tell Salt to fetch pillar information from this etcd server like so:

ext_pillar:
  - etcd: local_etcd root=/salt

Be sure to restart your Salt master after making these modifications. Let's add some information to etcd to play with:

etcdctl set salt/foo/bar baz
etcdctl set salt/foo/baz qux

After doing so, you should be able to grab this information from any minion's pillar:

salt "*" pillar.items foo
test1:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux
test2:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux

Ok, that's great! We've achieved shared information between etcd and our Salt pillar. But what do we do to get minion-specific data out of etcd? Well, we need to start by modifying our master's configuration again. Replace our previous ext_pillar config with the following:

ext_pillar:
  - etcd: local_etcd root=/salt/shared
  - etcd: local_etcd root=/salt/private/%(minion_id)s

Note that the original etcd root changed from /salt to /salt/shared. We do this so we don't inadvertently end up with all minion-specific information from etcd in the shared pillar. Now let's put the sample data back in (again, noting the addition of shared/):

etcdctl set salt/shared/foo/bar baz
etcdctl set salt/shared/foo/baz qux

To override the value of one of these keys for a specific minion, we can use that minion's ID in the key:

etcdctl set salt/private/test2/foo/baz demo

Now when we inspect our pillar, it should look like this:

salt "*" pillar.items foo
test1:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux
test2:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            demo

Notice that the value for foo.baz is qux for minion test1, while its value is demo for test2. Success!

Whew.

I work on a test automation framework at my day job. It's Django-powered, and there's a lot of neat stuff going on with it. I love building it!

Anyway, yesterday during a meeting, I got an email from a co-worker who seemed to be in a bit of a panic. He wrote that he had accidentally deleted the wrong thing and, the backend being Django, a nice cascading delete went with it (why he ignored the confirmation page is beyond me). He asked if we had any database backups that we could restore, and he was curious how long a restore would take.

Well, lucky for him (and me!), I decided very early on while working on the project that I would implement a custom database driver that never actually deletes stuff (mostly for auditing purposes). Instead, it simply marks any record the user asks to delete as inactive, thus hiding it from the UI. Along with this, nightly database backups were put in place.
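The gist of the driver--not my actual code, just a minimal sketch of the same pattern expressed at Django's model layer--looks something like this:

from django.db import models


class ActiveManager(models.Manager):
    def get_queryset(self):
        # Hide "deleted" rows from every normal query
        return super(ActiveManager, self).get_queryset().filter(is_active=True)


class SoftDeleteModel(models.Model):
    is_active = models.BooleanField(default=True)

    objects = ActiveManager()       # what the UI sees
    all_objects = models.Manager()  # escape hatch for undeleting

    class Meta:
        abstract = True

    def delete(self, *args, **kwargs):
        # Flag the record instead of issuing a real DELETE
        self.is_active = False
        self.save(update_fields=['is_active'])

Note that bulk queryset deletes bypass a model-level delete() like this, so treat it as a sketch of the idea rather than a complete safety net.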

I'll be quite honest--I had a moment of fear as I considered how long it had been since I really checked that either of these two things was still working as designed. I implemented the database driver before I learned to appreciate unit testing, and I haven't yet made it to that piece as I've been backfilling my unit test suite. As for the nightly database backups, I had never actually needed to restore one, so for probably the last year I didn't really bother checking a) that they were still being produced or b) that they were valid backups.

Thankfully, both pieces were still working perfectly. All I had to do was undelete a few things from the database, as I hadn't built a UI for this. After doing that, I realized that one set of relationships was not handled by the custom driver. To fix this, I just restored the most recent nightly backup to a separate database and extracted just the relationships I was interested in. And it worked!

This is the first time I've really been bitten by a situation like this personally. I'm very pleased that I had the foresight to implement the precautionary measures early on in my project. I've also learned that I should probably keep up with those measures a bit better. I definitely plan to make some changes to help mitigate the potential for bad stuff to happen in the future. But it looks like I have a good foundation to build upon now.

TL;DR: unseen magic and valid database backups FTW.

SaltConf 2014

Being one to try to automate all teh things, I'm always curious to find and experiment with new tools that appear which are supposed to help me be lazy. SaltStack is one such tool.

I first stumbled upon references to SaltStack sometime before the summer of 2012. At the time, I only put enough effort into SaltStack to be aware of what it does and a little bit of its history. I remember telling a few of my friends about it, and adding it to my TODO list. At some point, I even installed it on a couple of my computers.

The problem was that I never made time to actually learn how to use it. I kept telling myself that I'd experiment with it, but something else always got in the way--kids, work, gaming... Also, I had briefly used tools like chef and puppet (or tried to), and I had a bad taste in my mouth about configuration management utilities. I'm sure part of my hesitation had to do with those tools.

Anyway, fast forward to the beginning of January 2014. Salt is still installed on my main computer, but I've never even launched or configured it. I decided to uninstall salt and come back to it another time. Just a few short days after uninstalling salt, my supervisor at work sent me an email, asking if I'd be interested in attending SaltConf. I was more than happy to jump on the opportunity to finally learn about this tool that I had been curious and hesitant to use (and get paid to do it!).

The Training

I was able to sign up for an introductory course for SaltStack, which took place on Tuesday, January 28th. This was an all-day ordeal, but it was very intriguing to me. Normally, I'm one of the quiet ones in a classroom setting. I rarely ask questions or comment on this or that. This was not the case with the training course. I was all over everything our instructors had to say. I was hooked.

A lot of topics were quickly reviewed during the training. What normally takes 3 days was compressed into a single-day course. It was rather brutal in that sense--tons of material to digest. I think they did a fantastic job of explaining the core concepts and giving a fair number of examples during the training.

The Conference

SaltConf really began on Wednesday, and there were some absolutely fantastic sessions. I was particularly impressed with a demo of VMware's vCloud Application Director, which can orchestrate the creation of entire clusters of inter-related servers.

Other sessions that were quite interesting to me mostly related to virtualization using Docker, straight LXC, and libvirt. I'm very excited to become proficient with salt when dealing with virtualized environments.

The Certification

SaltStack officially introduced its first certification, known as SSCE (SaltStack Certified Engineer). The certification fee was included in the registration for the conference. Despite only having a matter of hours' worth of rudimentary experience with SaltStack, I decided I might as well take a stab at the exam. I fully expected to fail, but I had absolutely nothing to lose other than the hour spent taking the exam.

Well, I took the exam Wednesday night, after the full day of training and another full day of seeing real-world uses for salt. I did spend an hour or two reading docs, installing, and configuring salt on my home network too. Eighty questions and 56 minutes later, I learned my score.

I got 68 out of the 80 questions correct--85%! Not bad for a newbie. I hear the pass/fail threshold is 80%, but I've yet to receive my SSCE number or anything like that. Hopefully by Monday I'll receive that information.

Moving Forward

It actually occurred to me that I've basically built my own version of the platform-independent remote execution portion of SaltStack (for work). Many of the same concepts exist in both salt and my own implementation. I will say that I am partial to my own design, but I'll most likely be phasing it out to move toward salt in the long term.

After attending all three days of SaltStack deliciousness, I'm absolutely convinced that salt will be a part of my personal and professional toolkit for a long time to come. It's an extremely powerful and modular framework.

In the little bit of experimentation that I've done with salt on my home network, I've already found a few bugs that appear to be low-hanging fruit. I plan on working closely with the community to verify that they are indeed bugs, and I most definitely plan on contributing back everything I can. This is such an exciting project!!

If you haven't used it yet, you must research it and give it a try. It is a game-changer.

Startech: Scammer Scammed

My wife and I took the kids out to visit family out in California for Thanksgiving break this past year. It was a fantastic visit. We all had a great time. I even got a fun story out of the first evening there! I made sure to write down the details in my phone shortly after this occurred, and I've decided to post them here for others to enjoy.


My wife's grandfather received a phone call from a "Startech" company (213-330-0187, according to the phone's caller ID). The caller yammered on about having detected a bunch of viruses on grandpa's computer, and he claimed that he was calling to help us get rid of them. Being the resident tech guy, grandpa handed the phone off to me to deal with the situation.

The Indian guy on the other end again explained that he had found several viruses on our computer and was going to walk us through how to get rid of them. He asked if my computer was turned on, to which I responded that no, the computer wasn't currently on. He asked if I could go turn it on and sit in front of it. I told him I would. While the computer was "booting," he asked how old the computer was. I told him it was maybe three years old. Eventually, I told him the computer was ready.

At this point, he asked me if I saw a key on my keyboard with the letters C, T, R, and L. Obviously, I did. Then he asked if I could see a key near that with a flag on it. When I said that I could see it, he asked me to find the R key. Once discovered, he instructed me to push the flag key and the R key.

I told him that I pushed the keys, but nothing happened on my computer. He patiently asked me to try again. When I again stated that nothing happened, he asked me to describe which keys I was pushing. I told him I held down the flag key and the R key at the same time, and he claimed that it was not possible for nothing to happen when I pushed those keys.

I believe that's when he instructed me to hold the flag key down with one finger then hold down the R key with another. Again nothing. He asked me to try a few more times, because maybe my computer was just slow. For each attempt, I claimed that nothing had happened, and he muttered something about this not being possible. Mind you, I wasn't even looking at a computer during any of this.

Eventually, he gave up trying to get that dialog to pop up. He said there was another option. He asked which Web browser I use, whether it's called Mozilla Firefox, Google Chrome, or Microsoft Internet Explorer. I said, "Uhm, I think it's called Midori..." He was a bit confused, asking me to repeat the name. I did repeat it, and I even spelled it out for him. Apparently it wasn't important, because he just shrugged it off and continued with his script.

He asked me to type the following address into the address bar: www.appyy.com. I told him that I typed it in and it just said "Page Not Found." He was a bit skeptical at first, asking me to verify what I had typed into the address bar. He asked me to try again, insisting that it was not possible for the page to not load.

That's when I asked him if I had to be connected to the Internet to follow this step, because I couldn't be on the phone and on the Internet at the same time. He let out a sort of exasperated sigh, then asked if there was any other number he could use to call me while I was on the Internet. I told him I only have the one number, and he diligently asked if I had any friends or family who could come over so I could use their phone. I said everyone I know is out of town for the holidays.

I believe he then went on a little rant about them calling everyone in my state about their viruses. No doubt in my mind :)

Then, trying to be helpful, I asked if maybe he could email me the instructions so I could walk through them after we hung up. He said he would just say them over the phone for me to write down. I told him I was okay with that, and then he started listing off the steps: "the first thing you'll need to do is hang up, then...." That's when I hung up on him. He called back, but we just laughed with each other instead of answering.

InstArch

My blog has obviously been quite inactive the past year. I've started a bunch of new projects and worked on some really interesting stuff in that time. I'm going to try to gradually describe the things I've been playing with here.

One project I started in the summer of 2013 is a personal Arch-based LiveCD. My goal in building this LiveCD was purely personal: I wanted to have a LiveCD with my preferred programs and settings just in case I hosed my main system somehow. I want to have minimal downtime, particularly when I need to keep producing for work. That was the idea behind this project. I called it InstArch, as in "instant Arch".

The build scripts for this project are hosted on bitbucket, while the ISOs I build are hosted on sourceforge. InstArch was recently added to Softpedia, and it has received a bit of interest because of that. Again, the idea behind this project was entirely personal--I'm not trying to make a new distribution or community because I'm dissatisfied with Arch or anything like that. There is some erroneous information about InstArch on Softpedia, but I haven't yet written them to ask them to fix it. Soon enough :)

If you're interested in playing with my live CD, feel free to download it and offer suggestions on the issue tracker. I may or may not implement any suggestions :) I've already had one person email me asking about the default username and password for InstArch. If you also find yourself needing this information:

  • username: inst
  • password: arch

You shouldn't need this information unless you try to use sudo or try to switch desktop sessions.

Here's a video of my live CD in action.

Also note that I haven't built/published a new live CD for several months.


Another part of the InstArch project, which I started looong before the actual LiveCD, was to create my own personal Arch repository. It tracks a bunch of packages that I build from the AUR and other personal Arch packages. Anyone is free to use this repo, and it's one that's built into my live CD.

If you wish to use this repository, add my key:

pacman-key -r 051680AC
pacman-key --lsign-key 051680AC

Then add this to your /etc/pacman.conf:

[instarch]
SigLevel = PackageRequired
Server = http://instarch.codekoala.com/$arch/

Check Your Receipts

This morning I stopped for gas at a gas station that is associated with a grocery store. Buy more groceries, save a few cents off each gallon pumped at their station. That sort of deal. I found a gas voucher in my coat from a grocery shopping trip that should have allowed 25 cents off each gallon, so I figured I might as well use it before it expired.

When I scanned the little barcode on the voucher, I noticed that the display only registered a 20-cent-per-gallon discount. I also noticed that it would let me pump only ~7.5 gallons instead of the 20 that the voucher was good for. Luckily, there was an attendant in the gas station's tiny shack even that early. I approached him and asked what was going on--why I wasn't getting my full discount.

Obviously, he didn't believe my claims and had to see things for himself. He scanned the voucher and saw exactly what I described. Confused, he scuttled off to his shack to investigate. He couldn't figure out the exact cause, but ultimately he decided that someone else also had the same code or something from their own shopping trip. He was kind enough to actually give me the cash value of the 25-cents-per-gallon discount right then and there, so that's cool.

Moral of the story: if you use such gas vouchers, be sure to check the displayed discount with what you see on the voucher. If you notice a discrepancy, maybe you'll be lucky enough to get the cash value like I did! What makes it even more exciting is that I rarely use the full "up to 20 gallons" part of the voucher before the expiration. Bonus!

Test-Driven Development With Python

Earlier this year, I was approached by the editor of Software Developer's Journal to write a Python-related article. I was quite flattered by the opportunity, but, being extremely busy at the time with work and family life, I was hesitant to agree. However, after much discussion with my wife and other important people in my life, I decided to go for it.

I had a lot of freedom to choose a topic to write about in the article, along with a relatively short timeline. I think I had two weeks to write the article after finally agreeing to do so, and I was supposed to write some 7-10 pages about my chosen topic.

Having recently been converted to the wonders of test-driven development (TDD), I decided that should be my topic. Several of my friends were also interested in getting into TDD, and they were looking for a good, simple way to get their feet wet. I figured the article would be as good a time as any to write up something to help my friends along.

I set out with a pretty grand plan for the article, but as the article progressed, it became obvious that my plan was a bit too grandiose for a regular magazine article. I scaled back my plans a bit and continued working on the article. I had to scale back again, and I think one more time, before I finally had something that was simple enough to not write a book about.

Well, that didn't exactly turn out as planned either. I ended up writing nearly 40 single-spaced pages (12pt Times New Roman in LibreOffice) worth of TDD stuff. Granted, a fair portion of the article's length consists of code snippets and command output.

Anyway, I have permission to repost the article here, and I wanted to do so because I feel that the magazine formatting kinda butchered the formatting I had in mind for my article (and understandably so). To help keep the formatting more pristine, I've turned it into a PDF for anyone who's interested in reading it.

So, without much further ado, here's the article! Feel free to download or print the PDF as well.