uWSGI FastRouter and nginx

Lately I've been spending a lot of time playing with Docker, particularly with Web UIs and "clustering" APIs. I've been using nginx and uWSGI for most of my sites for quite some time now. My normal go-to for distributing load is nginx's upstream directive.

This directive can be used to specify the address/socket of backend services that should handle the same kinds of requests. You can configure the load balancing pretty nicely right out of the box. However, when using Docker containers, you don't always know the exact IP for the container(s) powering your backend.
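For reference, a minimal upstream setup looks something like this (the worker addresses here are made up for illustration):

upstream workers {
    server 10.0.0.2:3031;
    server 10.0.0.3:3031;
}

server {
    listen 80;

    location / {
        include     /etc/nginx/uwsgi_params;
        uwsgi_pass  workers;
    }
}

nginx will round-robin requests across the server entries out of the box--but those hardcoded entries are exactly the problem when containers come and go.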

I played around with some fun ways to automatically update the nginx configuration and reload nginx each time a backend container appeared or disappeared. This was really, really cool to see in action (since I'd never attempted it before). But it seemed like there had to be a better way.

Mongrel2 came to mind. I've played with it in the past, and it seemed to handle my use cases quite nicely until I tried using it with VirtualBox's shared folders. At the time, it wasn't quite as flexible as nginx when it came to working with those shared folders (might still be the case). Anyway, the idea of having a single frontend that could seamlessly pass work along to any number of workers without being reconfigured and/or restarted seemed like the ideal solution.

As I was researching other Mongrel2-like solutions, I stumbled upon yet another mind-blowing feature tucked away in uWSGI: The uWSGI FastRouter.

This little gem makes it super easy to get the same sort of functionality that Mongrel2 offers. Basically, you create a single uWSGI app that will route requests to the appropriate workers based on the domain being requested. Workers can "subscribe" to that app to be added to the round-robin pool of available backends. Any given worker app can actually serve requests for more than one domain if you so desire.

On the nginx side of things, all you need to do is use something like uwsgi_pass with the router app's socket. That's it. You can then spawn thousands of worker apps without ever restarting nginx or the router app. Whoa.

So let's dig into an example. First, some prerequisites. I'm currently using:

  • nginx 1.6.0
  • uwsgi 2.0.4
  • bottle 0.12.7
  • Python 3.4.1
  • Arch Linux

The first thing we want is that router app. Here's a uWSGI configuration file I'm using:

uwsgi-fastrouter/router.ini

[uwsgi]
plugins = fastrouter
master = true
shared-socket = 127.0.0.1:3031
fastrouter-subscription-server = 0.0.0.0:2626
fastrouter = =0
fastrouter-cheap = true
vacuum = true

# vim:ft=dosini et ts=2 sw=2 ai:

So, quick explanation of the interesting parts:

  • shared-socket: we're setting up a shared socket on 127.0.0.1:3031. This is the socket that we'll use with nginx's uwsgi_pass directive, and it's also used for our fastrouter socket (=0 implies that we're using socket 0).
  • fastrouter-subscription-server: this is how we make it possible for our worker apps to become candidates to serve requests.
  • fastrouter-cheap: this disables the fastrouter when we have no subscribed workers. Supposedly, you can get the actual fastrouter app to also be a subscriber automatically, but I was unable to get this working properly.

Now let's look at a sample worker app configuration:

uwsgi-fastrouter/worker.ini

[uwsgi]
plugins = python
master = true
processes = 2
threads = 4
heartbeat = 10
socket = 192.*:0
subscribe2 = server=127.0.0.1:2626,key=foo.com
wsgi = app
vacuum = true
harakiri = 10
max-requests = 100
logformat = %(addr) - %(user) [%(ltime)] "%(method) %(uri) %(proto)" %(status) %(size) "%(referer)" "%(uagent)"

# vim:ft=dosini et ts=2 sw=2 ai:

Again, the interesting parts:

  • socket: we're automatically allocating a socket on our NIC with an IP address that looks like 192.x.x.x. This whole syntax was a new discovery for me as part of this project! Neat stuff!!
  • subscribe2: this is one of the ways that we can subscribe to our fastrouter. Based on the server=127.0.0.1:2626 bit, we're working on the assumption that the fastrouter and workers are all going to be running on the same host. The key=foo.com is how our router app knows which domain a worker will serve requests for.
  • wsgi: our simple Bottle application.

Now let's look at our minimal Bottle application:

uwsgi-fastrouter/app.py

from bottle import route, default_app


application = default_app()
application.catchall = False


@route('/')
def index():
    return 'Hello World!'

All very simple. The main thing to point out here is that we've imported the default_app function from bottle and used it to create an application instance that uWSGI's wsgi option will pick up automatically.

Finally, our nginx configuration:

uwsgi-fastrouter/nginx.conf

daemon                  off;
master_process          on;
worker_processes        1;
pid                     nginx.pid;

events {
    worker_connections  1024;
}


http {
    include             /etc/nginx/mime.types;

    access_log          ./access.log;
    error_log           ./error.log;

    default_type        application/octet-stream;
    gzip                on;
    sendfile            on;
    keepalive_timeout   65;

    server {
        listen 80 default;
        server_name localhost foo.com;

        location / {
            include     /etc/nginx/uwsgi_params;
            uwsgi_pass  127.0.0.1:3031;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}

# vim:filetype=nginx:

Nothing too special about this configuration. The only thing to really point out is the uwsgi_pass directive with the same address we provided to our router's shared-socket option. Also note that nginx is configured to listen on port 80 (as the default server), so you'll need root access to start it.
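If you'd rather not deal with root for a quick experiment, one option is to swap the listen directive for an unprivileged port and hit that port in the curl test later:

        listen 8080 default;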

Now let's run it all! In different terminal windows, run each of the following commands:

sudo nginx -c nginx.conf -p $(pwd)
uwsgi --ini router.ini
uwsgi --ini worker.ini

If all goes well, you should see no output from the nginx command. The router app should have some output that looks something like this:

spawned uWSGI master process (pid: 4367)
spawned uWSGI fastrouter 1 (pid: 4368)
[uwsgi-subscription for pid 4368] new pool: foo.com (hash key: 11571)
[uwsgi-subscription for pid 4368] foo.com => new node: :58743
[uWSGI fastrouter pid 4368] leaving cheap mode...

And your worker app should have output containing:

subscribing to server=127.0.0.1:2626,key=foo.com

For the purpose of this project, I quickly edited my /etc/hosts file to include foo.com as an alias for 127.0.0.1. Once you have something like that in place, you should be able to hit the nginx site and see requests logged in your worker app's terminal:

curl foo.com
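In case it helps, the /etc/hosts entry I'm describing is just a single line (yours may vary):

127.0.0.1    foo.com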

The really cool part is when you spin up another worker (same command as before, since the port is automatically assigned). Again, there's no need to restart nginx or the router app--the new worker will be detected automatically! After doing so, each request will be spread across all of the subscribed workers.

Here's a quick video of all of this in action, complete with multiple worker apps subscribing to one router app. Pay close attention to the timestamps in the worker windows.

While this is all fine and dandy, there are a couple of things that seem like they should have better options. Namely, I'd like to get the single FastRouter+worker configuration working. I think it would also be nice to be able to use host names or DNS entries for the workers to know how to connect to the FastRouter instance. Any insight anyone can offer would be greatly appreciated! I know I'm just scratching the surface of this feature!

Minion-Specific Data With etcd

So I've been spending a fair amount of my free time lately learning more about Salt, Docker, and CoreOS. Salt has been treating me very well. I mostly only use it at home, but more opportunities to use it at work are on the horizon.

The first time I remember really hearing about Docker was when one of my co-workers tried using it for one of his projects. I didn't really spend much time with it until after SaltConf earlier this year (where lots of others brought it up). I'm pretty excited about Docker. I generally go out of my way to make sure my stuff will work fine on various versions of Linux, and Docker makes testing on various platforms insanely easy.

CoreOS is one of my more recent discoveries. I stumbled upon it in the wee hours of the night a few weeks ago, and I've been very curious to see how CoreOS and my fairly limited knowledge of Docker could help me. For those of you who haven't heard of CoreOS yet, it's kinda like a "hypervisor" for Docker containers with some very cool clustering capabilities.

I was able to attend a SaltStack and CoreOS meetup this past week. Most of the CoreOS developers stopped by on their way to GopherCon, and we all got to see a very cool demo of CoreOS in action.

One of the neat projects that the CoreOS folks have given us is called etcd. It is a "highly-available key value store for shared configuration and service discovery." I'm still trying to figure out how to effectively use it, but what I've seen of it is very cool. Automatic leader election, rapid synchronization, built-in dashboard, written in Go.

Anyway, I wanted to be able to use information stored in an etcd cluster in my Salt states. techhat committed some initial support for etcd in Salt about a month ago, but the pillar support was a bit more limited than I had hoped. Last night I submitted a pull request for getting minion-specific information out of etcd. This won't be available for a little while--it's only in the develop branch for now.

To use it, you'll need a couple of things in your Salt master's configuration file (/etc/salt/master). First, you must configure your etcd host and port. In order to use this information in our pillar, we need to configure this using a named profile. We'll call the profile "local_etcd":

local_etcd:
  etcd.host: 127.0.0.1
  etcd.port: 4001

Now we can tell Salt to fetch pillar information from this etcd server as so:

ext_pillar:
  - etcd: local_etcd root=/salt

Be sure to restart your Salt master after making these modifications. Let's add some information to etcd to play with:

etcdctl set salt/foo/bar baz
etcdctl set salt/foo/baz qux
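If you'd like to sanity-check the data before involving Salt, you can read a value back directly (this should print baz):

etcdctl get salt/foo/bar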

After doing so, you should be able to grab this information from any minion's pillar:

salt "*" pillar.items foo
test1:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux
test2:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux

Ok, that's great! We've achieved shared information between etcd and our Salt pillar. But what do we do to get minion-specific data out of etcd? Well, we need to start by modifying our master's configuration again. Replace our previous ext_pillar config with the following:

ext_pillar:
  - etcd: local_etcd root=/salt/shared
  - etcd: local_etcd root=/salt/private/%(minion_id)s

Note that the original etcd root changed from /salt to /salt/shared. We do this so we don't inadvertently end up with all minion-specific information from etcd in the shared pillar. Now let's put the sample data back in (again, noting the addition of shared/):

etcdctl set salt/shared/foo/bar baz
etcdctl set salt/shared/foo/baz qux

To override the value of one of these keys for a specific minion, we can use that minion's ID in the key:

etcdctl set salt/private/test2/foo/baz demo

Now when we inspect our pillar, it should look like this:

salt "*" pillar.items foo
test1:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux
test2:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            demo

Notice that the value for foo.baz is qux for minion test1, while its value is demo for test2. Success!
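To show why this is handy, here's a hypothetical state that renders one of these pillar values into a file (the state and file names are made up):

demo_config:
  file.managed:
    - name: /tmp/demo.conf
    - contents: foo.baz is {{ salt['pillar.get']('foo:baz') }}

On test1 the resulting file would say qux; on test2, demo.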

Whew.

I work on a test automation framework at my day job. It's Django-powered, and there's a lot of neat stuff going on with it. I love building it!

Anyway, yesterday during a meeting, I got an email from a co-worker who seemed to be in a bit of a panic. He wrote that he had accidentally deleted the wrong thing, and, since Django powers the backend, a nice cascading delete went with it (why he ignored the confirmation page is beyond me). He asked if we had any database backups that we could restore, also curious as to how long it would take.

Well, lucky for him (and me!), I decided very early on while working on the project that I would implement a custom database driver that never actually deletes stuff (mostly for auditing purposes). Instead, it simply marks any record the user asks to delete as inactive, thus hiding it from the UI. Along with this, nightly database backups were put in place.
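I won't reproduce the actual driver here, but the general soft-delete pattern in Django looks something like this sketch (the names are hypothetical, and this is a simplification of what I actually built):

from django.db import models


class ActiveManager(models.Manager):
    """Hide soft-deleted rows from normal queries."""

    def get_queryset(self):
        return super(ActiveManager, self).get_queryset().filter(is_active=True)


class SoftDeleteModel(models.Model):
    """Base class whose delete() marks records inactive instead of removing them."""

    is_active = models.BooleanField(default=True)

    objects = ActiveManager()        # default manager: active rows only
    all_objects = models.Manager()   # unfiltered, for audits and undeletes

    class Meta:
        abstract = True

    def delete(self, using=None):
        self.is_active = False
        self.save(using=using)

Note that a delete() override like this doesn't intercept bulk queryset deletes or database-level cascades, so related records can still slip through--more on that in a moment.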

I'll be quite honest--I had a moment of fear as I considered how long it had been since I really checked that either of these two things was still working as designed. I implemented the database driver before I learned to appreciate unit testing, and I haven't yet made it to that piece as I've been backfilling my unit test suite. As for the nightly database backups, I had never actually needed to restore one, so for probably the last year I didn't really bother checking a) that they were still being produced or b) that they were valid backups.

Thankfully, both pieces were still working perfectly. All I had to do was undelete a few things from the database directly, since I hadn't built a UI for this. After doing that, I realized that one set of relationships was not handled by the custom driver. To fix this, I just restored the most recent nightly backup to a separate database and extracted the relationships I was interested in. And it worked!

This is the first time I've really been bitten by a situation like this personally. I'm very pleased that I had the foresight to implement the precautionary measures early on in my project. I've also learned that I should probably keep up with those measures a bit better. I definitely plan to make some changes to help mitigate the potential for bad stuff to happen in the future. But it looks like I have a good foundation to build upon now.

TL;DR: unseen magic and valid database backups FTW.

Test-Driven Development With Python

Earlier this year, I was approached by the editor of Software Developer's Journal to write a Python-related article. I was quite flattered by the opportunity, but, being extremely busy at the time with work and family life, I was hesitant to agree. However, after much discussion with my wife and other important people in my life, I decided to go for it.

I had a lot of freedom to choose a topic to write about in the article, along with a relatively short timeline. I think I had two weeks to write the article after finally agreeing to do so, and I was supposed to write some 7-10 pages about my chosen topic.

Having recently been converted to the wonders of test-driven development (TDD), I decided that should be my topic. Several of my friends were also interested in getting into TDD, and they were looking for a good, simple way to get their feet wet. I figured the article would be as good a time as any to write up something to help my friends along.

I set out with a pretty grand plan for the article, but as the article progressed, it became obvious that my plan was a bit too grandiose for a regular magazine article. I scaled back my plans a bit and continued working on the article. I had to scale back again, and I think one more time, before I finally had something that was simple enough to not write a book about.

Well, that didn't exactly turn out as planned either. I ended up writing nearly 40 single-spaced pages (12pt Times New Roman in LibreOffice) worth of TDD stuff. Granted, a fair portion of the article's length consists of code snippets and command output.

Anyway, I have permission to repost the article here, and I wanted to do so because I feel that the magazine layout kinda butchered the formatting I had in mind for the article (and understandably so). To help keep the formatting more pristine, I've turned it into a PDF for anyone who's interested in reading it.

So, without much further ado, here's the article! Feel free to download or print the PDF as well.

I'm Using Nikola Now

For anyone who still might be visiting my site with any regularity, you might have noticed some changes around here. For the past several years, I've been blogging on a Django-based system that I wrote a very long time ago. I wrote it because, at the time, there weren't many Django-based blogging platforms, and certainly none of the few were quite as robust as I thought I wanted.

I set out to build my own blogging platform, and I think it worked out fairly well. As with all things, however, it became obsolete as the ecosystem around it flourished. I simply didn't have the time to continue maintaining it as I should have. That's also part of the reason for my lack of activity here these past couple of years.

Anyway, in an effort to keep this blog alive, I've switched to a much simpler blogging system known as Nikola. It's not your run-of-the-mill WordPress clone. No, it's far simpler than that, but it doesn't sacrifice much of what I had with django-articles. I still get to write my posts in a format that I enjoy (reStructuredText). I get to write my posts in an editor that I enjoy (vim). I get to keep my posts in a "database" that I enjoy (git). I get to deploy using an interface that I enjoy (the command line). And I don't have to try to keep up with what is happening in the blogging ecosystem--there are plenty of other people handling that with Nikola for me!
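For the curious, the day-to-day workflow boils down to a few commands (this is the standard Nikola CLI; the deploy step depends on how conf.py is configured):

nikola new_post
nikola build
nikola deploy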

So, you can expect more posts in the coming year. Call it a new year's resolution.

Long Time No See

Hello again everyone! Soooo much has happened since I last posted on my blog. I figured it was about time to check in and actually be active on my own site again. What follows is just a summary of what has happened in our lives since the beginning of February this year.

Leaving ScienceLogic

First of all, my wife and I decided toward the end of 2011 that it was time for us to move away from Virginia. For reasons that we did not quite understand yet, we both wanted to move to Utah. I applied for my first Utah-based job opportunity just before Christmas 2011. Several of my friends in the Salt Lake City area were kind enough to get me a few interviews here and there, but none of the opportunities were very serious.

Probably about the time I wrote my last blog post, I was contacted by a recruiter in Boise. I would have loved to move back to Idaho, but my wife would have nothing to do with me if I did that. When I shared this information with the recruiter, he said he had a recruiter friend in the SLC area and that he'd pass my information along to him. Within a day, his friend had me set up with a screening problem for a company just outside of SLC.

I was a little hesitant about that particular opportunity, because it was a Ruby on Rails development shop and an advertising company. However, our timeline was getting smaller and smaller--we had to be out of our apartment by the 1st of April--and I didn't see any other serious opportunities on the horizon. So anyway, I completed the programming problem in both Python and Ruby and had a few video chats with some guys at the company. I guess they liked my work, even though I hadn't touched Ruby in several years.

Sometime in the middle of February, the company extended me an offer letter, which my wife and I considered for a few days before accepting. My last day with ScienceLogic was at the end of February. My first day with the new company was the 12th of March, so we had a couple of weeks to pack everything up and drive across the country. Packing was ridiculously stressful, but the drive was actually quite enjoyable (my wife wouldn't agree). I drove my Mazda 3 with my 2-year-old son in the back, and my wife drove the Dodge Grand Caravan with our 7-month-old twins.

The New Job

We arrived in Utah on the 10th of March and immediately fell in love with the little town house we're renting and the surrounding community. It's a really nice area. We spent the first couple of days exploring the area and learning our routes to various locations.

My first week on the new job was interesting. They didn't have much for me to do, and we were all scheduled to go to a local tech conference for the last three days of the week. Very appealing way to begin a new job!

As time went on, I did a bit of work here and there, but most of my time on the job was just spent warming a chair in between requests for things to do. Eventually, I just got fed up with the amount of work I was (or, rather, wasn't) doing. By the beginning of May, I was already looking for another job where I could feel useful.

I got in touch with a guy I had worked with for a couple of weeks at the company that brought us to Utah (I'm intentionally avoiding the use of the company's name). He was only able to stand working there for about 3 weeks before he quit and went back to his prior company. He referred me for an interview with his managers, and by the middle of May, I had a new job lined up.

The Better New Job

While I was initially hesitant about the job (test automation), I looked at it as a major step up from what I had been doing since March. That and it cut my commute in half. And they provide excellent hardware. Anyway, I started working for StorageCraft Technology Company at the end of May as a Senior Software Engineer in Test.

My task was to build a framework to make the jobs of the manual testers easier. I had no requirements document to refer to, nor any specific guidance beyond that. I was simply asked to build something that would make lives easier. StorageCraft had recently hired another test automation developer, and the two of us worked together to come up with a design plan for the framework.

We built a lot of neat things into the framework, gave a couple of demos, and it seems like people are really quite pleased with the direction we've gone. I gave a demo of the (Django) UI just the other day, and my supervisors basically gave me the green light to keep building whatever I wanted to. Since the other test automation guy got the boot for being unreliable, I will get to see many of my plans through exactly the way I want! I'm really excited about that.

Enough About Work

Aside from all of the excitement in my career decisions, things are going very well with the family. We live about 3 hours away from my mom, and we've been out there to visit a few times already. It's really fun to see the kids playing with their grandma! The last time we were out for a visit, for my grandmother's 80th birthday, my son and I took my dad's Rhino for a spin. We got stuck, and it was sooo much fun!

Mudding in the Rhino

The twins are growing so well too. They're crawling and getting around very well now. Jane has started to stand up on her own, and she tries to take a step every once in a while. Claire prefers to sit, but she loves to wave, clap, and repeat noises that she hears.

My wife is planning on starting up a new website soon, and she keeps taunting me with the possibility of having me build it for her. Yes, taunting.

Okay, Back to Hobbies

My wife also picked up a Dremel Trio for Dad's Day. To get used to it, I made some little wooden signs with the kids' names on them. Being the quasi-perfectionist that I am, I'm not completely satisfied with how they all turned out. I suppose they'll do for a "first attempt" sort of result though!

First project with the Dremel Trio

I've still got various projects in the works with my Arduino and whatnot. A couple of months ago, I finished a project that helps me see where I'm walking when I go down to my mancave at night. The light switches for the basement are all at the stairs, and my setup is on the opposite side of the basement. I typically prefer to have the lights off when I'm on my computer, and it was annoying and horribly inefficient to turn the lights on when entering the basement, go to my computer, then go back to turn the lights off.

To solve that problem, I re-purposed one of my PIR motion sensors and picked up an LED strip from eBay. I have the motion sensor pointing at the entrance to the basement, and the LED strip strung across the ceiling along the path that I take to get to my desk. When the motion sensor detects movement, it fades the LED strip on, continues to power it for a few seconds, and gradually fades it out when it no longer detects movement. It's all very sexy, if I do say so myself.

Lazy man's light switch

I've tried to capture videos of the setup, but my cameras all have poor light sensors or something, so it's difficult to really show what it's like. The LED strip illuminates the basement perfectly just long enough for me to get to my desk, but the videos just show a faint outline of my body lurking in the dark. :(

One project that is in the works right now is a desk fan that automatically turns on when the ambient temperature reaches a certain level. The fan's speed will vary depending on the temperature, and there will be an LCD screen to allow simple reporting and configuration of thresholds and whatnot. I'm pretty excited about it, but I want to order a few things off of eBay before I go much further with it.

Obviously, much more had happened in the past months, but this post is long enough already. Things are calming down quite a bit now that we're settled in, so I hope to resume activity on my open source projects as well as this blog.

Command Line Progress Bar With Python

Wow, it's been a very long time since I've posted on my blog. It's amazing how much time twins take up! Now they're almost 6 months old, and things are starting to calm down a bit. I will try to make time to keep my blog updated more regularly now.

Earlier today, a good friend asked me how I would handle displaying a progress bar on the command line in a Python program. I suggested a couple of things, including leaving his already working program alone for the most part and just having a separate thread display the progress bar. I'm not sure exactly why, but he decided to not use a separate thread. It didn't seem like it would be very difficult, so I spent a few minutes mocking up a quick little demo.

This is by no means a perfect solution, but I think it might be useful for others who find themselves in similar circumstances. Please feel free to comment with your suggestions! This is literally my first stab at something like this, and I only spent about 5 minutes hacking it up.
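The demo isn't embedded in this post, but the core trick is just a carriage return to redraw a single line. Here's a minimal sketch of that idea (not necessarily the exact code from my demo):

import sys
import time


def draw_progress(current, total, width=40):
    """Redraw a one-line progress bar using a carriage return."""
    filled = int(width * current / total)
    bar = '=' * filled + ' ' * (width - filled)
    sys.stdout.write('\r[%s] %3d%%' % (bar, 100 * current // total))
    sys.stdout.flush()


if __name__ == '__main__':
    total = 50
    for step in range(total + 1):
        draw_progress(step, total)
        time.sleep(0.1)  # stand-in for real work
    sys.stdout.write('\n')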

Arduino-Powered Webcam Mount

Earlier this month, I completed yet another journey around the sun. Some of my beloved family members thought this would be a good occasion to send me some cash, and I also got a gift card for being plain awesome at work. Even though we really do need a bigger car and whatnot, my wife insisted that I only spend this money on myself and whatever I wanted.

Little did she know the can of worms she just opened up.

I took pretty much all of the money and blew it on stuff for my electronics projects. Up to this point, my projects have all been pretty boring simply because nothing ever moved--it was mostly just lights turning on and off or changing colors. Sure, that's fun, but things really start to get interesting when you actually interact with the physical world. With the birthday money, I was finally able to buy a bunch of servos to begin living out my childhood dream of building robots.

My first project since getting all of my new toys was a motorized webcam mount. My parents bought me a Logitech C910 for my birthday because they were tired of trying to see their grandchildren with the crappy webcam that is built into my laptop. It was a perfect opportunity to use SparkFun's tutorial for some facial tracking (thanks to OpenCV) using their Pan/Tilt Servo Bracket.

It took a little while to get everything set up properly, but SparkFun's tutorial explains perfectly how to do it if you want to repeat this project.

The problem I had with the SparkFun tutorial, though, is that it basically only gives you a standalone program that does the facial tracking and displays your webcam feed. What good is that? I actually wanted to use this rig to chat with people!! That's when I set out to figure out how to do this.

While the Processing sketch ran absolutely perfectly on Windows, it didn't want to work on my Arch Linux system due to some missing dependencies that I didn't know how (or care) to satisfy. As such, I opted to rewrite the sketch in Python so I could do the facial tracking in Linux.

This is still a work in progress, but here's the current facial tracking program which tells the Arduino where the webcam should be pointing, along with the Arduino sketch.
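For those who just want the gist without opening the full program, the core loop looks roughly like the sketch below. The cascade path, serial port, and message format here are illustrative assumptions rather than the exact details of my program:

import cv2
import serial  # pyserial


# Illustrative values; adjust for your own system.
CASCADE = '/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml'
SERIAL_PORT = '/dev/ttyUSB0'


def main():
    detector = cv2.CascadeClassifier(CASCADE)
    camera = cv2.VideoCapture(0)
    arduino = serial.Serial(SERIAL_PORT, 9600)

    while True:
        grabbed, frame = camera.read()
        if not grabbed:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.3, 5)
        if len(faces):
            x, y, w, h = faces[0]
            # Send the center of the first detected face; the Arduino
            # sketch maps this point to pan/tilt servo angles.
            arduino.write(('%d,%d\n' % (x + w // 2, y + h // 2)).encode())


if __name__ == '__main__':
    main()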

Now that I could track a face and move my webcam in Linux, I still faced the same problem as before: how can I use my face-tracking, webcam-moving program during a chat with my mom? I had no idea how to accomplish this. I figured I would have to either intercept the webcam feed as it was going to Skype or the Google Talk Plugin, or I'd have to somehow consume the webcam feed and proxy it back out as a V4L2 device that the Google Talk Plugin could then use.

Trying to come up with a way of doing that seemed rather impossible (at least in straight Python), but I eventually stumbled upon a couple little gems.

So the GStreamer tutorial walks you step-by-step through different ways of using the gst-launch utility, and I found this information very useful. I learned that you can use the tee element to split a webcam feed and do two different things with it. I wondered if it would be possible to split one webcam feed and send it to two other V4L2 devices.

Enter v4l2loopback.

I was able to install this module from Arch's AUR, and using it was super easy (you should be root for this):

modprobe v4l2loopback devices=2

This created two new /dev/video* devices on my system, which happened to be /dev/video4 and /dev/video5 (yeah... been playing with a lot of webcams and whatnot). One device, video4, is for consumption by my face-tracking program. The other, video5, is for VLC, Skype, Google+ Hangouts, etc. After creating those devices, I simply ran the following command as a regular user:

gst-launch-0.10 v4l2src device=/dev/video1 ! \
    'video/x-raw-yuv,width=640,height=480,framerate=30/1' ! \
    tee name=t_vid ! queue ! \
    v4l2sink sync=false device=/dev/video4 t_vid. ! \
    queue ! videorate ! 'video/x-raw-yuv,framerate=30/1' ! \
    v4l2sink device=/dev/video5

There's a whole lot of stuff going on in that command that I honestly do not understand. All I know is that it made it so both my face-tracking Python program AND VLC can consume the same video feed via two different V4L2 devices! A co-worker of mine agreed to have a quick Google+ Hangout with me to test this setup under "real" circumstances (thx man). It worked :D Objective reached!

I had really hoped to find a way to handle this stuff inside Python, but I have to admit that this is a pretty slick setup. A lot of things are still hardcoded, but I do plan on making things a little more generic soon enough.

So here's my little rig (why yes, I did mount it on top of an old Kool-Aid powder thingy lid):

And a video of it in action. Please excuse the subject of the webcam video, I'm not sure where that guy came from or why he's playing with my webcam.

Firefox 6.0 Is My Default Browser Again

I've just made Firefox my default browser again (after using Chrome/Chromium as my default for quite some time). FF6 renders things almost as fast as Chrome, even on our ridiculously complicated pages at work. Slow rendering was my primary reason for not using FF for such a long time.

However, there are other things that I've decided are worth going back to FF at least for the time being. If any of you know of ways to accomplish this stuff with Chrome, please enlighten me!

  • Tab groups. I often have dozens and dozens of tabs open. Some I like to have open for my "night life." Some I like to have open for research. Some I like to have open for work. Tab groups let me organize these, and hide the ones I'm currently not interested in.
  • Vimperator. This was one thing I missed dearly when I decided to ditch FF for Chrome back in the day. Vimium just doesn't work as well. It's close, but still not as robust. For example, I can still use Vimperator shortcuts when viewing a raw file. Not so with Vimium.
  • Permanent exceptions for bad certificates. Normally this wouldn't be a big deal. You visit a site that has a bad certificate, your browser warns you that bad things could be happening and advises you to be careful. That's great! However, for work we all use several VMs for development (I often have 7 VMs running all the time just for my day-to-day routine). They all share the same certificate, which is bad. Chrome doesn't let me tell it to stop bothering me about the bad certificate. Firefox does.
  • Memory usage. Strangely enough, memory usage isn't something that most people say is awesome in FF these days. But I've personally discovered that it is better than Chrome on "poorly-endowed" systems. Both Chrome and FF consume gobs of RAM on my regular 8GB+ RAM machines, but I have plenty to spare, so it doesn't bother me much. I recently inherited my wife's old MacBook, which only has 1GB of RAM. Chrome is absolutely painful to use--it's slow and brings the rest of the system to a crawl. Firefox is noticeably more responsive on that machine.
  • Selenium. I don't do much UI testing these days, but I do still use Selenium to automate some mundane tasks for work. Haven't checked to see if this is available for Chrome for a while...

So far, the only regret I really have with my decision to move back to Firefox (again, for the time being) is the amount of chrome. Chrome does a good job of staying out of the way and letting me see the web page. Firefox just has a bit more going on at the top and bottom of the window. Definitely not a deal-breaker on my 1920x1080 laptop though ;)

Free Professional Web Development

I'm pleased to announce my immediate availability for some freelance Web development projects. Lately, I've been quite swamped with a very awesome project at my day job. Many things had to be put on hold as a result of that project. However, it looks like the worst is now behind me, and I find myself with much more free time than before!

As such, I'm opening the doors to any and all (except full-time, because I love my day job) Web development opportunities. I guarantee absolute satisfaction and reliable, efficient Web sites, and I deliver quickly. All you need to do is provide the requirements for your project in the comments section of this article. I'll pump out at the very least a proof of concept that becomes your property (free of charge), no questions asked.

You will not be disappointed. Use this as an opportunity 1) to get free stuff and 2) to test your requirements-gathering/communicating skills. Let the battle begin!