uWSGI FastRouter and nginx

Lately I've been spending a lot of time playing with Docker, particularly with Web UIs and "clustering" APIs. I've been using nginx and uWSGI for most of my sites for quite some time now. My normal go-to for distributing load is nginx's upstream directive.

This directive can be used to specify the address/socket of backend services that should handle the same kinds of requests. You can configure the load balancing pretty nicely right out of the box. However, when using Docker containers, you don't always know the exact IP for the container(s) powering your backend.
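For example (with made-up addresses), a typical upstream block looks something like this, and nginx will round-robin across the listed backends by default:

upstream backend_pool {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_pool;
    }
}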

I played around with some fun ways to automatically update the nginx configuration and reload nginx each time a backend container appeared or disappeared. This was really, really cool to see in action (since I'd never attempted it before). But it seemed like there had to be a better way.

Mongrel2 came to mind. I've played with it in the past, and it seemed to handle my use cases quite nicely until I tried using it with VirtualBox's shared folders. At the time, it wasn't quite as flexible as nginx when it came to working with those shared folders (might still be the case). Anyway, the idea of having a single frontend that could seamlessly pass work along to any number of workers without being reconfigured and/or restarted seemed like the ideal solution.

As I was researching other Mongrel2-like solutions, I stumbled upon yet another mind-blowing feature tucked away in uWSGI: The uWSGI FastRouter.

This little gem makes it super easy to get the same sort of functionality that Mongrel2 offers. Basically, you create a single uWSGI app that will route requests to the appropriate workers based on the domain being requested. Workers can "subscribe" to that app to be added to the round-robin pool of available backends. Any given worker app can actually serve requests for more than one domain if you so desire.

On the nginx side of things, all you need to do is use something like uwsgi_pass with the router app's socket. That's it. You can then spawn thousands of worker apps without ever restarting nginx or the router app. Whoa.

So let's dig into an example. First, some prerequisites. I'm currently using:

  • nginx 1.6.0
  • uwsgi 2.0.4
  • bottle 0.12.7
  • Python 3.4.1
  • Arch Linux

The first thing we want is that router app. Here's a uWSGI configuration file I'm using:

uwsgi-fastrouter/router.ini

[uwsgi]
plugins = fastrouter
master = true
shared-socket = 127.0.0.1:3031
fastrouter-subscription-server = 0.0.0.0:2626
fastrouter = =0
fastrouter-cheap = true
vacuum = true

# vim:ft=dosini et ts=2 sw=2 ai:

So, a quick explanation of the interesting parts:

  • shared-socket: we're setting up a shared socket on 127.0.0.1:3031. This is the socket that we'll use with nginx's uwsgi_pass directive, and it's also the socket our fastrouter binds to (the =0 in fastrouter = =0 means "use shared socket 0").
  • fastrouter-subscription-server: this is how we make it possible for our worker apps to become candidates to serve requests.
  • fastrouter-cheap: this disables the fastrouter when we have no subscribed workers. Supposedly, you can get the actual fastrouter app to also be a subscriber automatically, but I was unable to get this working properly.

Now let's look at a sample worker app configuration:

uwsgi-fastrouter/worker.ini

[uwsgi]
plugins = python
master = true
processes = 2
threads = 4
heartbeat = 10
socket = 192.*:0
subscribe2 = server=127.0.0.1:2626,key=foo.com
wsgi = app
vacuum = true
harakiri = 10
max-requests = 100
logformat = %(addr) - %(user) [%(ltime)] "%(method) %(uri) %(proto)" %(status) %(size) "%(referer)" "%(uagent)"

# vim:ft=dosini et ts=2 sw=2 ai:

Again, a quick explanation of the interesting parts:

  • socket: we're automatically allocating a socket on our NIC with an IP address that looks like 192.x.x.x. This whole syntax was a new discovery for me as part of this project! Neat stuff!!
  • subscribe2: this is one of the ways that we can subscribe to our fastrouter. Based on the server=127.0.0.1:2626 bit, we're working on the assumption that the fastrouter and workers will all run on the same host. The key=foo.com part is how our router app knows which domain a worker will serve requests for (more on serving multiple domains just after this list).
  • wsgi: our simple Bottle application.
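Earlier I mentioned that a worker can serve more than one domain. As far as I can tell from uWSGI's option handling, that's just a matter of repeating subscribe2 with different keys (bar.com here is just a placeholder):

subscribe2 = server=127.0.0.1:2626,key=foo.com
subscribe2 = server=127.0.0.1:2626,key=bar.com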

Now let's look at our minimal Bottle application:

uwsgi-fastrouter/app.py

from bottle import route, default_app


application = default_app()
application.catchall = False


@route('/')
def index():
    return 'Hello World!'

All very simple. The main thing to point out here is that we've imported the default_app function from bottle and used it to create an application instance that uWSGI's wsgi option will pick up automatically.

Finally, our nginx configuration:

uwsgi-fastrouter/nginx.conf

daemon                  off;
master_process          on;
worker_processes        1;
pid                     nginx.pid;

events {
    worker_connections  1024;
}


http {
    include             /etc/nginx/mime.types;

    access_log          ./access.log;
    error_log           ./error.log;

    default_type        application/octet-stream;
    gzip                on;
    sendfile            on;
    keepalive_timeout   65;

    server {
        listen 80 default;
        server_name localhost foo.com;

        location / {
            include     /etc/nginx/uwsgi_params;
            uwsgi_pass  127.0.0.1:3031;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}

# vim:filetype=nginx:

Nothing too special about this configuration. The only thing to really point out is the uwsgi_pass with the same address we provided to our router's shared-socket option. Also note that this configuration binds nginx to port 80, so you'll need root privileges to start it.

Now let's run it all! In different terminal windows, run each of the following commands:

sudo nginx -c nginx.conf -p $(pwd)
uwsgi --ini router.ini
uwsgi --ini worker.ini

If all goes well, you should see no output from the nginx command. The router app should have some output that looks something like this:

spawned uWSGI master process (pid: 4367)
spawned uWSGI fastrouter 1 (pid: 4368)
[uwsgi-subscription for pid 4368] new pool: foo.com (hash key: 11571)
[uwsgi-subscription for pid 4368] foo.com => new node: :58743
[uWSGI fastrouter pid 4368] leaving cheap mode...

And your worker app should have output containing:

subscribing to server=127.0.0.1:2626,key=foo.com

For the purpose of this project, I quickly edited my /etc/hosts file to include foo.com as an alias for 127.0.0.1. Once you have something like that in place, you should be able to hit the nginx site and see requests logged in your worker app's terminal:

curl foo.com
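In case it's useful, the relevant /etc/hosts line looked something like this:

127.0.0.1    localhost foo.com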

The really cool part is when you spin up another worker (same command as before, since the port is automatically assigned). Again, there's no need to restart nginx or the router app--the new worker will be detected automatically! After doing so, each request will be spread across all of the subscribed workers.
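If you want to actually watch the requests alternate, something like this does the trick (assuming bash and the /etc/hosts entry from earlier):

# in a new terminal: spin up a second worker on another auto-assigned port
uwsgi --ini worker.ini

# then fire off a handful of requests and watch the worker terminals
for i in $(seq 1 10); do curl -s foo.com; echo; done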

Here's a quick video of all of this in action, complete with multiple worker apps subscribing to one router app. Pay close attention to the timestamps in the worker windows.

While this is all fine and dandy, there are a couple of things that seem like they should have better options. Namely, I'd like to get the single FastRouter+worker configuration working. I think it would also be nice to be able to use host names or DNS entries for the workers to know how to connect to the FastRouter instance. Any insight anyone can offer would be greatly appreciated! I know I'm just scratching the surface of this feature!

Minion-Specific Data With etcd

So I've been spending a fair amount of my free time lately learning more about salt, docker, and CoreOS. Salt has been treating me very well. I mostly only use it at home, but more opportunities to use it at work are on the horizon.

The first I remember really hearing about Docker was when one of my co-workers tried using it for one of his projects. I didn't really spend much time with it until after SaltConf earlier this year (where lots of others brought it up). I'm pretty excited about Docker. I generally go out of my way to make sure my stuff will work fine on various versions of Linux, and Docker makes testing on various platforms insanely easy.

CoreOS is one of my more recent discoveries. I stumbled upon it in the wee hours of the night a few weeks ago, and I've been very curious to see how CoreOS and my fairly limited knowledge of Docker could help me. For those of you who haven't heard of CoreOS yet, it's kinda like a "hypervisor" for Docker containers with some very cool clustering capabilities.

I was able to attend a SaltStack and CoreOS meetup this past week. Most of the CoreOS developers stopped by on their way to GopherCon, and we all got to see a very cool demo of CoreOS in action.

One of the neat projects that the CoreOS folks have given us is called etcd. It is a "highly-available key value store for shared configuration and service discovery." I'm still trying to figure out how to effectively use it, but what I've seen of it is very cool. Automatic leader election, rapid synchronization, built-in dashboard, written in Go.

Anyway, I wanted to be able to use information stored in an etcd cluster in my Salt states. techhat committed some initial support for etcd in Salt about a month ago, but the pillar support was a bit more limited than I had hoped. Last night I submitted a pull request for getting minion-specific information out of etcd. This won't be available for a little while--it's only in the develop branch for now.

To use it, you'll need a couple of things in your Salt master's configuration file (/etc/salt/master). First, you must configure your etcd host and port. In order to use this information in our pillar, the connection needs to be defined as a named profile. We'll call the profile "local_etcd":

local_etcd:
  etcd.host: 127.0.0.1
  etcd.port: 4001

Now we can tell Salt to fetch pillar information from this etcd server like so:

ext_pillar:
  - etcd: local_etcd root=/salt

Be sure to restart your Salt master after making these modifications. Let's add some information to etcd to play with:

etcdctl set salt/foo/bar baz
etcdctl set salt/foo/baz qux

After doing so, you should be able to grab this information from any minion's pillar:

salt "*" pillar.items foo
test1:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux
test2:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux

Ok, that's great! We've achieved shared information between etcd and our Salt pillar. But how do we get minion-specific data out of etcd? Well, we need to start by modifying our master's configuration again. Replace our previous ext_pillar config with the following:

ext_pillar:
  - etcd: local_etcd root=/salt/shared
  - etcd: local_etcd root=/salt/private/%(minion_id)s

Note that the original etcd root changed from /salt to /salt/shared. We do this so we don't inadvertently end up with all minion-specific information from etcd in the shared pillar. Now let's put the sample data back in (again, noting the addition of shared/):

etcdctl set salt/shared/foo/bar baz
etcdctl set salt/shared/foo/baz qux

To override the value of one of these keys for a specific minion, we can use that minion's ID in the key:

etcdctl set salt/private/test2/foo/baz demo

Now when we inspect our pillar, it should look like this:

salt "*" pillar.items foo
test1:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux
test2:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            demo

Notice that the value for foo.baz is qux for minion test1, while its value is demo for test2. Success!
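Since this is ordinary pillar data at this point, states can consume it like any other pillar value. A minimal sketch (the target path and fallback value are made up for illustration):

/tmp/pillar-demo.txt:
  file.managed:
    - contents: {{ salt['pillar.get']('foo:baz', 'fallback') }}

On test1 this file would contain qux; on test2, demo.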

Whew.

I work on a test automation framework at my day job. It's Django-powered, and there's a lot of neat stuff going on with it. I love building it!

Anyway, yesterday during a meeting, I got an email from a co-worker who seemed to be in a bit of a panic. He wrote that he accidentally deleted the wrong thing, and, with Django on the backend, a nice cascading delete went with it (why he ignored the confirmation page is beyond me). He asked if we had any database backups that we could restore, and he was curious how long that would take.

Well, lucky for him (and me!), I decided very early on while working on the project that I would implement a custom database driver that never actually deletes stuff (mostly for auditing purposes). Instead, it simply marks any record the user asks to delete as inactive, thus hiding it from the UI. Along with this, nightly database backups were put in place.
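For the curious: the real implementation lives down at the database-driver level, but the effect is roughly what a model-level sketch like this would give you (all names here are illustrative, not from the actual codebase):

from django.db import models


class ActiveManager(models.Manager):
    """Hide soft-deleted rows from everyday queries."""

    def get_queryset(self):
        return super().get_queryset().filter(is_active=True)


class SoftDeleteModel(models.Model):
    is_active = models.BooleanField(default=True)

    objects = ActiveManager()       # default manager: active records only
    all_objects = models.Manager()  # escape hatch for undeletes and audits

    class Meta:
        abstract = True

    def delete(self, *args, **kwargs):
        # mark the record inactive instead of issuing a real DELETE
        self.is_active = False
        self.save(update_fields=['is_active'])

Undeleting, then, is just a matter of flipping is_active back via all_objects.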

I'll be quite honest--I had a moment of fear as I considered how long it had been since I really checked that either of these two things was still working as designed. I implemented the database driver before I learned to appreciate unit testing, and I haven't yet made it to that piece as I've been backfilling my unit test suite. As for the nightly database backups, I had never actually needed to restore one, so for probably the last year I didn't really bother checking a) that they were still being produced or b) that they were valid backups.

Thankfully, both pieces were still working perfectly. All I had to do was undelete a few things in the database, as I hadn't built a UI for this. After doing that, I realized that one set of relationships was not handled by the custom driver. To fix this, I just restored the most recent nightly backup to a separate database and extracted just the relationships I was interested in. And it worked!

This is the first time I've really been bitten by a situation like this personally. I'm very pleased that I had the foresight to implement the precautionary measures early on in my project. I've also learned that I should probably keep up with those measures a bit better. I definitely plan to make some changes to help mitigate the potential for bad stuff to happen in the future. But it looks like I have a good foundation to build upon now.

TL;DR: unseen magic and valid database backups FTW.

SaltConf 2014

Being one to try to automate all teh things, I'm always curious to find and experiment with new tools that appear which are supposed to help me be lazy. SaltStack is one such tool.

I first stumbled upon references to SaltStack sometime before the summer of 2012. At the time, I only put enough effort into SaltStack to be aware of what it does and a little bit of its history. I remember telling a few of my friends about it, and adding it to my TODO list. At some point, I even installed it on a couple of my computers.

The problem was that I never made time to actually learn how to use it. I kept telling myself that I'd experiment with it, but something else always got in the way--kids, work, gaming... Also, I had briefly used tools like chef and puppet (or tried to), and I had a bad taste in my mouth about configuration management utilities. I'm sure part of my hesitation had to do with those tools.

Anyway, fast forward to the beginning of January 2014. Salt is still installed on my main computer, but I've never even launched or configured it. I decided to uninstall salt and come back to it another time. Just a few short days after uninstalling salt, my supervisor at work sent me an email, asking if I'd be interested in attending SaltConf. I was more than happy to jump on the opportunity to finally learn about this tool that I had been curious and hesitant to use (and get paid to do it!).

The Training

I was able to sign up for an introductory course for SaltStack, which took place on Tuesday, January 28th. This was an all-day ordeal, but it was very intriguing to me. Normally, I'm one of the quiet ones in a classroom setting. I rarely ask questions or comment on this or that. This was not the case with the training course. I was all over everything our instructors had to say. I was hooked.

A lot of topics were quickly reviewed during the training. What normally takes 3 days was compressed into a single-day course. It was rather brutal in that sense--tons of material to digest. I think they did a fantastic job of explaining the core concepts and giving a fair number of examples during the training.

The Conference

SaltConf really began on Wednesday, and there were some absolutely fantastic sessions. I was particularly impressed with a demo of VMware's vCloud Application Director, which can orchestrate the creation of entire clusters of inter-related servers.

Other sessions that were quite interesting to me mostly related to virtualization using Docker, straight LXC, and libvirt. I'm very excited to become proficient with salt when dealing with virtualized environments.

The Certification

SaltStack officially introduced its first certification, known as SSCE (SaltStack Certified Engineer). The certification fee was included in the registration for the conference. Despite having only a few hours' worth of rudimentary experience with SaltStack, I decided I might as well take a stab at the exam. I fully expected to fail, but I had absolutely nothing to lose other than an hour taking the exam.

Well, I took the exam Wednesday night, after the full day of training and another full day of seeing real-world uses for salt. I did spend an hour or two reading docs, installing, and configuring salt on my home network too. Eighty questions and 56 minutes later, I learned my score.

I got 68 out of the 80 questions correct--85%! Not bad for a newbie. I hear the pass/fail threshold is 80%, but I've yet to receive my SSCE number or anything like that. Hopefully by Monday I'll receive that information.

Moving Forward

It actually occurred to me that I've basically built my own version of the platform-independent remote execution portion of SaltStack (for work). Many of the same concepts exist in both salt and my own implementation. I will say that I am partial to my design, but I'll most likely be phasing it out to move toward salt in the long term.

After attending all three days of SaltStack deliciousness, I'm absolutely convinced that salt will be a part of my personal and professional toolkit for a long time to come. It's an extremely powerful and modular framework.

In the little bit of experimentation that I've done with salt on my home network, I've already found a few bugs that appear to be low-hanging fruit. I plan on working closely with the community to verify that they are indeed bugs, and I most definitely plan on contributing back everything I can. This is such an exciting project!!

If you haven't used it yet, you must research it and give it a try. It is a game-changer.

Startech: Scammer Scammed

My wife and I took the kids out to visit family out in California for Thanksgiving break this past year. It was a fantastic visit. We all had a great time. I even got a fun story out of the first evening there! I made sure to write down the details in my phone shortly after this occurred, and I've decided to post them here for others to enjoy.


My wife's grandfather received a phone call from a "Startech" company (213-330-0187, according to the phone's caller ID). The caller yammered on about having detected a bunch of viruses on grandpa's computer, and he claimed that he was calling to help us get rid of them. Since I'm the resident tech guy, grandpa handed the phone off to me to deal with the situation.

The Indian guy on the other end again explained that he had found several viruses on our computer and was going to walk us through how to get rid of them. He asked if my computer was turned on, to which I responded that no, the computer wasn't currently on. He asked if I could go turn it on and sit in front of it. I told him I would. While the computer was "booting," he asked how old the computer was. I told him it was maybe three years old. Eventually, I told him the computer was ready.

At this point, he asked me if I saw a key on my keyboard with the letters C, T, R, and L. Obviously, I did. Then he asked if I could see a key near that with a flag on it. When I said that I could see it, he asked me to find the R key. Once discovered, he instructed me to push the flag key and the R key.

I told him that I pushed the keys, but nothing happened on my computer. He patiently asked me to try again. When I again stated that nothing happened, he asked me to describe which keys I was pushing. I told him I held down the flag key and the R key at the same time, and he claimed that it was not possible for nothing to happen when I push those keys.

I believe that's when he instructed me to hold the flag key down with one finger and then hold down the R key with another. Again, nothing. He asked me to try a few more times, because maybe my computer was just slow. For each attempt, I claimed that nothing had happened, and he muttered something about this not being possible. Mind you, I wasn't even looking at a computer during any of this.

Eventually, he gave up trying to get that dialog to pop up. He said there was another option. He asked which Web browser I use--whether it's called Mozilla Firefox, Google Chrome, or Microsoft Internet Explorer. I said, "Uhm, I think it's called Midori..." He was a bit confused, asking me to repeat the name. I did repeat it, and I even spelled it out for him. Apparently it wasn't important, because he just shrugged it off and continued with his script.

He asked me to type the following address into the address bar: www.appyy.com. I told him that I typed it in and it just said "Page Not Found." He was a bit skeptical at first, asking me to verify what I had typed into the address bar. He asked me to try again, once more claiming that it was not possible for the page to not load.

That's when I asked him if I had to be connected to the Internet to follow this step, because I couldn't be on the phone and on the Internet at the same time. He let out a sort of exasperated sigh, then asked if there was any other number he could use to call me while I was on the Internet. I told him I only had the one number, and he diligently asked if I had any friends or family who could come over so I could use their phone. I said everyone I knew was out of town for the holidays.

I believe he then went on a little rant about them calling everyone in my state about their viruses. No doubt in my mind :)

Then, trying to be helpful, I asked if maybe he could email me the instructions so I could walk through them after we hung up. He said he would just say them over the phone for me to write down. I told him I was okay with that, and then he started listing off the steps: "the first thing you'll need to do is hang up, then...." That's when I hung up on him. He called back, but we just laughed with each other instead of answering.

InstArch

My blog has obviously been quite inactive the past year. I've started a bunch of new projects and worked on some really interesting stuff in that time. I'm going to try to gradually describe the things I've been playing with here.

One project I started in the summer of 2013 is a personal Arch-based LiveCD. My goal in building this LiveCD was purely personal: I wanted to have a LiveCD with my preferred programs and settings just in case I hosed my main system somehow. I want to have minimal downtime, particularly when I need to keep producing for work. That was the idea behind this project. I called it InstArch, as in "instant Arch".

The build scripts for this project are hosted on Bitbucket, while the ISOs I build are hosted on SourceForge. InstArch was recently added to Softpedia, and it has received a bit of interest because of that. Again, the idea behind this project was entirely personal--I'm not trying to make a new distribution or community because I'm dissatisfied with Arch or anything like that. There is some erroneous information about InstArch on Softpedia, but I haven't yet written them to ask them to fix it. Soon enough :)

If you're interested in playing with my live CD, feel free to download it and offer suggestions on the issue tracker. I may or may not implement any suggestions :) I've already had one person email me asking about the default username and password for InstArch. If you also find yourself needing this information:

  • username: inst
  • password: arch

You shouldn't need this information unless you try to use sudo or try to switch desktop sessions.

Here's a video of my live CD in action.

Also note that I haven't built/published a new live CD for several months.


Another part of the InstArch project, which I started looong before the actual LiveCD, was to create my own personal Arch repository. It tracks a bunch of packages that I build from the AUR and other personal Arch packages. Anyone is free to use this repo, and it's one that's built into my live CD.

If you wish to use this repository, add my key:

pacman-key -r 051680AC
pacman-key --lsign-key 051680AC

Then add this to your /etc/pacman.conf:

[instarch]
SigLevel = PackageRequired
Server = http://instarch.codekoala.com/$arch/
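Then refresh your package databases (a full -Syu avoids partial-upgrade weirdness) and install whatever you like from the repo:

pacman -Syu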

Check Your Receipts

This morning I stopped for gas at a gas station that is associated with a grocery store. Buy more groceries, save a few cents off each gallon pumped at their station. That sort of deal. I found a gas voucher in my coat from a grocery shopping trip that should have allowed 25 cents off each gallon, so I figured I might as well use it before it expired.

When I scanned the little barcode on the voucher, I noticed that the display only registered a 20-cent-per-gallon discount. I also noticed that it would let me pump only ~7.5 gallons instead of the 20 that the voucher was good for. Luckily, there was an attendant in the gas station's tiny shack, even at that early hour. I approached him and asked what was going on--why I wasn't getting my full discount.

Obviously, he didn't believe my claims and had to see things for himself. He scanned the voucher and saw exactly what I described. Confused, he scuttled off to his shack to investigate. He couldn't figure out the exact cause, but ultimately he decided that someone else also had the same code or something from their own shopping trip. He was kind enough to actually give me the cash value of the 25-cents-per-gallon discount right then and there, so that's cool.

Moral of the story: if you use such gas vouchers, be sure to check the displayed discount with what you see on the voucher. If you notice a discrepancy, maybe you'll be lucky enough to get the cash value like I did! What makes it even more exciting is that I rarely use the full "up to 20 gallons" part of the voucher before the expiration. Bonus!

Test-Driven Development With Python

Earlier this year, I was approached by the editor of Software Developer's Journal to write a Python-related article. I was quite flattered by the opportunity, but, being extremely busy at the time with work and family life, I was hesitant to agree. However, after much discussion with my wife and other important people in my life, I decided to go for it.

I had a lot of freedom to choose a topic to write about in the article, along with a relatively short timeline. I think I had two weeks to write the article after finally agreeing to do so, and I was supposed to write some 7-10 pages about my chosen topic.

Having recently been converted to the wonders of test-driven development (TDD), I decided that should be my topic. Several of my friends were also interested in getting into TDD, and they were looking for a good, simple way to get their feet wet. I figured the article would be as good a time as any to write up something to help my friends along.

I set out with a pretty grand plan for the article, but as the article progressed, it became obvious that my plan was a bit too grandiose for a regular magazine article. I scaled back my plans a bit and continued working on the article. I had to scale back again, and I think one more time, before I finally had something that was simple enough to not write a book about.

Well, that didn't exactly turn out as planned either. I ended up writing nearly 40 single-spaced pages (12pt Times New Roman in LibreOffice) worth of TDD stuff. Granted, a fair portion of the article's length consists of code snippets and command output.

Anyway, I have permission to repost the article here, and I wanted to do so because I feel that the magazine formatting kinda butchered the formatting I had in mind for my article (and understandably so). To help keep the formatting more pristine, I've turned it into a PDF for anyone who's interested in reading it.

So, without much further ado, here's the article! Feel free to download or print the PDF as well.

I'm Using Nikola Now

For anyone who still might be visiting my site with any regularity, you might have noticed some changes around here. For the past several years, I've been blogging on a Django-based system that I wrote a very long time ago. I wrote it because, at the time, there weren't many Django-based blogging platforms, and certainly none of the few were quite as robust as I thought I wanted.

I set out to build my own blogging platform, and I think it worked out fairly well. As with all things, however, it became obsolete as the ecosystem around it flourished. I simply didn't have the time to continue maintaining it as I should have. That's also part of the reason for my lack of activity here these past couple of years.

Anyway, in an effort to keep this blog alive, I've switched to a much simpler blogging system known as nikola. It's not your run-of-the-mill WordPress clone. No, it's much simpler than that, but it doesn't sacrifice much of what I had with django-articles. I still get to write my posts using a format that I enjoy (reStructuredText). I get to write my posts in an editor that I enjoy (vim). I get to keep my posts in a "database" that I enjoy (git). I get to deploy using an interface that I enjoy (the command line). And I don't have to try to keep up with what is happening in the blogging ecosystem--there are plenty of other people handling that with nikola for me!
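My whole publishing flow now looks something like this (a rough sketch of the commands, not an exact transcript):

nikola new_post   # write the post in reStructuredText (with vim, of course)
git add posts && git commit -m "new post"
nikola build      # render the static site
nikola deploy     # push it live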

So, you can expect more posts in the coming year. Call it a new year's resolution.

Django Projects

Over the past 6 years, I've built a lot of things with Django. It has treated me very well, and I have very much enjoyed seeing it progress. I got into Django when I helped the company I was working for transition away from a homegrown PHP framework toward something more reliable and flexible. It was very exciting to learn more about Django at a time when the ecosystem was very young.

When I started with Django, there weren't a lot of pluggable apps to fill the void for things like blogs, event calendars, and other useful utilities for the kinds of sites I was building. That has changed quite a bit since then. The ecosystem has evolved and progressed like mad, and it's wonderful. We have so many choices for simple things to very complex things. It's amazing!

Unfortunately, during this whole time period, my development efforts have shifted from creating my own open source projects to share with the world toward more proprietary solutions for my employers. If it's not obvious to you from my blog activity in recent years, I've become very busy with family life and work. I have very little time to give my open source projects the attention they deserve.

For at least 4 years, I've been telling myself that I'd have/make time to revamp all of my projects--to make them usable with what Django is today instead of what it was when I built them. Yeah, that time never showed up. Take a look at the last time I wrote a blog article!

I have decided to disown pretty much all of my open source Django projects. I've basically done this with one or two of the more popular projects already--let someone else take the reins while I lurk in the background and occasionally comment on an issue here or there. I truly appreciate those who have taken the initiative here. But there are still plenty of projects that people may find useful that need some attention. I'm leaving it up to the community to take these projects over if you find them useful so they can get the love and attention they need.

Here is a list of Django projects that anyone is free to assume responsibility for. Most of them are silly and mostly useless now. Some are unpleasant to look at and could use an entire rewrite.

The fact that I'm giving up these projects does not mean I'm giving up on Django. On the contrary, I'm still using it quite heavily. I'm just doing it in such a way that I can't necessarily post my work for everyone to use. I honestly don't expect much to come of this disowning effort, since the projects are mostly stale and incompatible with recent versions of Django. But please let me know if you do want to take over one of my projects and care for it.