More django-articles Updates

I've spent a little more time lately adding new features to django-articles. There are two major additions in the latest release (2.0.0-pre2).

  • Article attachments
  • Article statuses

That's right, folks! You can finally attach files to your articles. That includes attachments on emails that you send in, if you have the articles-from-email feature properly configured. To prove it, I'm going to attach a file to this article (which I'm posting via email).

Next, I've decided that it's worth allowing users to specify different statuses for their articles. One of the neat things about this feature is that if you are a superuser and you're logged in, an article you save with a status designated as "non-live" will still be visible to you on the site. This gives users a way to preview their work before making it live. Out of the box, there are only two statuses: draft and finished. You're free to add more statuses if you feel so inclined (they're in the database, not hardcoded).

The article status is still separate from the "is_active" flag when saving an article. Any article that is marked as inactive will not appear on the site regardless of the article's "status".
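To make that behavior concrete, here's a minimal sketch of how such filtering might look in a Django manager. The field and manager names here are my own hypothetical illustration, not necessarily what django-articles actually uses:

from django.db import models

class ArticleManager(models.Manager):
    def visible_to(self, user):
        # inactive articles are hidden from everyone, regardless of status
        articles = self.filter(is_active=True)
        if user.is_authenticated() and user.is_superuser:
            # superusers can preview their "non-live" articles on the site
            return articles
        return articles.filter(status__is_live=True)

A view would then call something like Article.objects.visible_to(request.user) instead of filtering on is_active alone.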

On a slightly less impressive note (although still important), this release includes some basic unit tests. Most of the tests currently revolve around article statuses and making sure that the appropriate articles appear on the site.

2Ze.us Updates

There has been quite a bit of recent activity in my 2ze.us project since I first released it nearly a year ago. My intent was never to compete with bit.ly, is.gd, or anyone else in the URL-shortening arena. I created the site as a way for me to learn more about Google's AppEngine. It didn't take very long to get it up and running, and it seemed to work fairly well.

AppEngine and Extensions

I was able to basically leave the site alone on AppEngine for several months--through about September 2009. In that time, I came up with a Firefox extension to make its use more convenient.

The extension allows you to quickly get a shortened URL for the page you're currently looking at, and a couple of context menu items let you get a short URL for things like specific images on a page. Also included in the extension is a preview for 2ze.us links. The preview can tell you the title and domain of the link's target. It can tell you how much smaller the 2ze.us URL is compared to the full URL. Finally, it displays how many times that particular 2ze.us link has been clicked.

That was all fine and dandy. It was the second Firefox extension I had ever written, and it's still running strong. In June or July of 2009, I started working on a little program to make it easier for me to interact with Twitter the way I wanted to. This was a great opportunity for me to incorporate 2ze.us into the application, so any URL I wanted to post to Twitter would automatically be shortened for me, using my own shortener.

Porting to WebFaction and PHP

Anyway, around the end of September 2009, I noticed that there were a lot of problems with 2ze.us. It was slow and sometimes completely unresponsive. Certain URLs would redirect to their full URLs, while others wouldn't. The Firefox extension stopped working nicely. Oh yeah, and AppEngine rolled back to a previous revision of the code without me telling it to. That's when everything just died. It didn't take long for me to decide to migrate my project from AppEngine onto my awesome WebFaction hosting.

At this point, I was faced with a small dilemma: keep the code in Python, or port it to PHP. I opted to port it over to PHP, because I didn't want all of the overhead of a full Django instance for a site that needed to be very zippy. And I was unacquainted with other Python options.

By early October 2009, I had managed to turn the project into a PHP beast, running on Apache. It was a lot more responsive than AppEngine ever let 2ze.us be. There were a few bumps along the road, what with the extension and Twitter client relying on various parts of the site. Eventually it got to a point where I could just let it sit and work.

Chromium Extension

Sometime around the end of December, I decided to write another extension for 2ze.us, only for Google Chrome and Chromium this time. This extension isn't quite as feature-packed as its Firefox brother, but it gets the job done.

Clip2Zeus

Shortly after "completing" the Chromium extension, I had what seemed like a pretty original idea. Who knows if it really is, but I still haven't seen another tool quite like the one that I made as a result of this idea. I thought, "Now, why should I need to install an extension in each Web browser I use on each computer I use? Is there a better way?"

The answer came quickly: a standalone, desktop application. Write one program that handles shortening URLs for you. My laziness told me to make a program that monitors your system clipboard for URLs. If a URL is detected, try to shorten it, and update the clipboard contents in place. Boom. Done. All extensions become useless beyond things like the URL preview (which is very useful, imo).

The next question I asked was, "Do I make it platform-dependent? Should I stick it to the majority of computer users and write my tool for Linux only? For OSX only? For, uh... Windows only?" Again, an easy question to answer. Support them all or don't even bother writing the application.

A week's worth of midnight hacking saw the birth of Clip2Zeus 1.0a. It's a cross-platform desktop application that does exactly what I just mentioned. When it's running and detects a URL on your system clipboard, it will try to shorten it and update it in your clipboard. If you copy a block of text, the application will only modify the URLs in that block of text--meaning the block of text will still be in your clipboard, but it will have shorter URLs.
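The core idea is simple enough to sketch in a few lines of Python. This is not Clip2Zeus's actual code; it's just a minimal illustration of the clipboard-polling approach, using the third-party pyperclip module for clipboard access and a stub where the real 2ze.us API call would go:

import re
import time

import pyperclip  # third-party, cross-platform clipboard access

URL_RE = re.compile(r'https?://\S+')

def shorten(url):
    # stub: a real implementation would call the 2ze.us API here
    return url

def monitor(interval=1.0):
    last_seen = None
    while True:
        text = pyperclip.paste()
        if text != last_seen:
            # replace each URL in place; the rest of the text is untouched
            new_text = URL_RE.sub(lambda m: shorten(m.group(0)), text)
            if new_text != text:
                pyperclip.copy(new_text)
            last_seen = new_text
        time.sleep(interval)

if __name__ == '__main__':
    monitor()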

I use the program every day at work (on OSX). It's been very fun for me to see a short URL any time I copy a nasty URL to my clipboard. Imagine that; I'm a big fan of my own work...

Tornado

Lately, I had noticed that the site was getting kind of slow again. Sometimes it would take several seconds for Clip2Zeus to shorten URLs in my clipboard, when it was normally instantaneous. Every once in a while, Clip2Zeus would completely fail to connect to the website.

One of my friends has asked me a lot of questions about the Tornado framework in the past months. I had read a few things about Tornado when it was open-sourced last year, but I didn't really feel the need to dabble with it. These questions prompted me to tinker a little.

Last night I re-ported 2ze.us to Python, using the Tornado framework this time. So far I'm very impressed with its responsiveness. The framework offers a lot of neat little utilities, and it is very fast (as reported by dozens of other reputable sources).
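For the curious, the heart of a URL shortener in Tornado is tiny. This is not the actual 2ze.us code, just a sketch of the general shape of a Tornado redirect application, with a hard-coded dictionary standing in for the real database lookup:

import tornado.ioloop
import tornado.web

# stand-in for the real database of shortened URLs
SHORTCUTS = {'abc': 'http://example.com/some/very/long/url'}

class RedirectHandler(tornado.web.RequestHandler):
    def get(self, slug):
        url = SHORTCUTS.get(slug)
        if not url:
            raise tornado.web.HTTPError(404)
        self.redirect(url)

application = tornado.web.Application([
    (r'/([A-Za-z0-9]+)', RedirectHandler),
])

if __name__ == '__main__':
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()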

On top of the speed increase that came with the transition to Tornado, my RAM usage on WebFaction has come down by nearly 100MB. Just by turning off the one Apache-backed website. Now I'm nowhere near my RAM cap! Wahoo!!

Enough rambling. Like I said at the beginning of this article, a lot has been happening with this project in the past year. I didn't even think about all of the time I put into projects related to my simple little side project. Looking back, I'm quite satisfied with how things have unfolded.

Statistics

Here are some simple statistics for 2ze.us. Since March 2009...

  • 5,252 URLs have been shortened using 2ze.us
  • 2ze.us links have been clicked 198,267 times
  • 315,951 URL characters have been turned into 11,532 characters

In April 2009...

  • 217 URLs were shortened
  • 2ze.us links were clicked 617 times

In February 2010...

  • 1,182 URLs were shortened
  • 2ze.us links were clicked 32,830 times

Not too shabby for a side project.

Tip: easy_install / pip

With all of the exciting updates to Mercurial recently, I've been on a rampage, updating various boxes everywhere I go. I'm in the habit of using easy_install and/or pip to install most of my Python-related packages. It's pretty easy to install packages that are in well-known locations (like PyPI or Google Code, for example). It's also pretty easy to update packages using either utility. Both take a -U parameter, which, to my knowledge, tells them to actually check for updates and install the latest version.

That's all fine and dandy, but what happens when you want to install an "unofficial" version of some package? I mean, what if your favorite project all of a sudden includes some feature you absolutely must have, and the next official release is weeks or months away? There are typically a few avenues you can take to satisfy your needs, but I wanted to bring up something that I think not many people are aware of: easy_install and pip can both understand URLs to installable Python packages.

What do I mean by that, you ask? Well, when you get down to the basics of what both utilities do, they just take care of downloading some Python package and installing it with the setup.py file contained therein. In many cases, these utilities will search various package repositories, such as PyPI, to download whatever package you specify. If the package is found, it will be downloaded and extracted.

In most cases, you can do all of that yourself:

$ wget http://pypi.python.org/someproject/somepackage.tar.gz
$ tar zxf somepackage.tar.gz
$ cd somepackage
$ python setup.py install

Both easy_install and pip obviously do a lot of other magic, but that is perhaps the most basic way to understand what they do. To answer that last question, you can help your utility of choice out by specifying the exact URL to the specific package you want it to install for you:

$ easy_install http://pypi.python.org/someproject/somepackage.tar.gz
$ pip install http://pypi.python.org/someproject/somepackage.tar.gz

For me, this feature comes in very handy with projects that are hosted on BitBucket, for example, because you can always get any revision of the project in a tidy .tar.gz file. So when I'm updating Mercurial installations, I can do this to get the latest stable revision:

$ easy_install http://selenic.com/repo/hg-stable/archive/tip.tar.gz

It's pretty slick. Here's a full example:

[user@web ~]$ hg version
Mercurial Distributed SCM (version 1.2.1)

Copyright (C) 2005-2009 Matt Mackall <mpm@selenic.com> and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[user@web ~]$ easy_install http://selenic.com/repo/hg-stable/archive/tip.tar.gz
Downloading http://selenic.com/repo/hg-stable/archive/tip.tar.gz
Processing tip.tar.gz
Running Mercurial-stable-branch--8bce1e0d2801/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Gnk2c9/Mercurial-stable-branch--8bce1e0d2801/egg-dist-tmp--2VAce
zip_safe flag not set; analyzing archive contents...
mercurial.help: module references __file__
mercurial.templater: module references __file__
mercurial.extensions: module references __file__
mercurial.i18n: module references __file__
mercurial.lsprof: module references __file__
Removing mercurial unknown from easy-install.pth file
Adding mercurial 1.4.1-4-8bce1e0d2801 to easy-install.pth file
Installing hg script to /home/user/bin

Installed /home/user/lib/python2.5/mercurial-1.4.1_4_8bce1e0d2801-py2.5-linux-i686.egg
Processing dependencies for mercurial==1.4.1-4-8bce1e0d2801
Finished processing dependencies for mercurial==1.4.1-4-8bce1e0d2801
[user@web ~]$ hg version
Mercurial Distributed SCM (version 1.4.1+4-8bce1e0d2801)

Copyright (C) 2005-2009 Matt Mackall <mpm@selenic.com> and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Notice the version change from 1.2.1 to 1.4.1+4-8bce1e0d2801. w00t.

Edit: devov pointed out that pip is capable of installing packages directly from a project's source repository. I've never used this functionality, but I'm interested in trying it out sometime! Thanks, devov!
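If that's the case, the invocation would presumably look something like the following (I haven't tried it myself, so consider the exact syntax an assumption):

$ pip install -e hg+http://selenic.com/repo/hg-stable#egg=mercurial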

Automatic Config Replication With Mercurial

I've done a lot of neat things since I started my new job earlier this month. I'm really excited about the things I've learned and experimented with, and I would like to share some of the concepts with my visitors.

At work we use a lot of virtual machines in our individual development environments. Most of these virtual machines use very similar configuration settings, but the settings are not a standard part of the installation. That is because we build our virtual machines using the same installation tools that our customers would use. The configuration I'm talking about is just stuff specific to our development environment.

Creating and configuring these virtual machines is one of the first things my mentor showed me how to do on my first day on the job. He commented on how quickly I would probably start learning all of the configuration tasks, because we tend to set up our development VMs several times a month. That was all fine and dandy, and I did get a pretty good feel for what needed to go into a development VM that first day.

However, after doing it so many times, I realized how much time I was using just trying to get the VM set up just right. It wasn't hard to configure--it was just time-consuming. It wasn't long before I started thinking of ways to optimize the process.

One of the ideas I came up with, which seems to be serving my purposes perfectly, is that of using Mercurial to quickly and easily get the exact same configuration from one box to another. It also has the added benefit of keeping a history of the changes I make to my configuration as time goes on.

I won't go into exact detail on how I have things setup at work, but I would like to try to describe a similar scenario that should illustrate my goal just as well.

Getting Started

One of the first things I would encourage you to do is follow along. It will make the concept sink in much faster, and you will probably see other applications very quickly. Please note, however, that if you're following along exactly, it could be a very time-consuming process. I will be using 3 virtual machines as I write this, but you could just as easily use 5, 10, or 100,000. Likewise, you could eliminate the virtual machines altogether if you're in an environment with several physical computers.

One virtual machine will act as the "master" server, or the one that will be configured first. The other virtual machines will act as "slave" servers, which will simply receive configuration updates that happen on the master server. We will also modify this behavior to be a bit more interesting toward the end of the article.

Virtual Machines Galore!

First off, I will create some basic virtual machines using the net install version of Debian 5.0.3. I really only need to create 1 VM and then clone it a couple of times. I am willing to furnish my virtual machines to those who are interested in using them. I will install some additional software in the VM to make sure the demo works smoothly. Among the packages that I will install are:

  • Python
  • Mercurial
  • OpenSSH server

Initialize a Repository

Once I have all of that set up in my virtual machines, I will initialize a Mercurial repository on the master server to maintain the configuration files that I am interested in. Let's just use the /etc directory for the time being. There's a pretty good chance that most of our system-wide configuration will all be contained somewhere beneath /etc.

cd /etc
hg init

Now let's have a gander at the files that we can have Mercurial manage for us:

hg st

Wow! That is quite a set of files, isn't it? Thankfully, they should mostly be plain text files. Mercurial is very efficient at managing text files. Let's now add all of the files in /etc to our repository, so they can be tracked and easily pushed out to other systems.

hg add

That command will happily add everything that hg st printed. Obviously, we can get a little more picky about what we do and do not add to our repository, but that's not the goal of this article. Now, this step merely tells Mercurial that it needs to pay attention to changes in these files. The files have not yet been committed to the repo. Let's do that, so we have a backup of our configuration files in their pristine state:

hg ci -m "Initial import"

The -m "Initial import" is just a comment, to describe what happened to warrant a commit to the repository. It is for your use and the use of anyone who has access to your repo.

Clone The Configuration

Now let's try to push the configuration we just committed on the master server to one of the slave servers. Since my virtual machines are all essentially in the same state, there should be no conflicts, right? Try running the following command on the master server:

hg push ssh://root@slave1//etc
root@slave1's password:
remote: abort: There is no Mercurial repository here (.hg not found)!
abort: no suitable response from remote hg!

Blast! We can't simply push the configuration files out to another computer. For that to work, the repository would first have to exist on the slave server. Let's try this another way. On the slave server, run this command:

hg clone ssh://root@master//etc /etc
root@master's password:
abort: destination '/etc/' is not empty

Doh! Mercurial won't let us clone the repository from the master server! That's because Mercurial wants to clone to a new directory, with nothing already in it. One way to get around this hairball of a show-stopper is to just copy the repo using conventional UNIX utilities. Execute this command on one of your slave servers:

scp -r root@master:/etc/.hg /etc/

The .hg directory contains all of the repository information, and it's really all we need to snag in order to clone the repository. This might not be the most elegant solution in the world, but it will suffice for the time being. Once the scp command completes, we should have a full copy of the configuration file repository. Run this command to verify:

hg st

If your setup is anything like mine, you'll probably have a few files that are listed as being modified. Chances are that these files will vary from host to host anyway, and they are probably not worth keeping in a version control system. That would just be begging for conflicts.

I wrote an extension for Mercurial that should make this part of my tutorial a little less hacky. On your other slave server, run the following commands:

hg clone http://bitbucket.org/codekoala/hgext /root/hgext
echo "[extensions]" >> /root/.hgrc
echo "neclone = /root/hgext/neclone.py" >> /root/.hgrc

This extension gives you a new Mercurial command called neclone (N. E. Clone, or "not empty clone"). As we saw earlier, Mercurial doesn't let us clone a repository into a directory that is not empty. This extension allows us to do that. It works almost identically to the regular clone command... takes the same options and everything.
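If you're curious what a Mercurial extension looks like, the general shape is quite small. This skeleton is only an outline of the extension mechanism (the real neclone.py does considerably more work, of course):

# neclone.py -- skeletal outline only, not the actual extension
def neclone(ui, source, dest, **opts):
    """clone a repository into a directory that need not be empty"""
    ui.status('cloning %s into %s\n' % (source, dest))
    # copy the remote repository's .hg store into dest, then update

cmdtable = {
    'neclone': (neclone, [], 'hg neclone SOURCE DEST'),
}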

Still on your second slave server, run these additional commands:

hg neclone ssh://root@master//etc /etc
cd /etc
hg up -C

The last step is optional, and soon to be included as part of the extension. It will update your working copy to the latest revision in the repository. Beware that it overwrites any uncommitted changes you may have made to files that are tracked by Mercurial.

So now both slave servers should have a clone of the configuration repository from the master server.

Being Picky

Let's start to be a little picky about the files we are tracking in our repository. Some of the files that appear as modified on my slave server after copying the .hg directory from the master server are:

  • adjtime
  • alternatives/pager
  • alternatives/pager.1.gz
  • mailcap
  • network/run/ifstate
  • udev/rules.d/70-persistent-net.rules

I think it's safe to remove these from the repository to avoid conflicts with other systems. To tell Mercurial to stop tracking files without actually deleting them from the filesystem, you can use the following command:

hg forget adjtime
hg forget mailcap

And so on. Go ahead and do that for each of the files that appeared to be modified on your slave server immediately after copying the .hg directory. I'm going to add /etc/hostname to the list of files to forget too.

After doing that, each of those files should appear as being marked for removal when you run hg st. Don't worry, this is normal. The files will not be deleted from the filesystem, but they will be deleted from the repository. Go ahead and commit those changes to the repository on your slave server.

hg ci -Am "Removed some files from version control"
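To keep host-specific files like these from sneaking back in the next time someone runs a blanket hg add, you could also drop an .hgignore file in /etc. Something along these lines (using the file list from above) would do:

# contents of /etc/.hgignore
syntax: glob
adjtime
mailcap
hostname
network/run/ifstate
udev/rules.d/70-persistent-net.rules
alternatives/pager*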

Now let's push those changes out to the master server:

hg push
abort: repository default-push not found!

Since we copied the .hg directory directly using scp, our slave won't know where the changes need to go when we run the push command with no explicit destination repository. To fix that, let's create a file in /etc/.hg/ called hgrc on the slave server. In that file, put the following text:

[paths]
default = ssh://root@master//etc

The hg push command should now push directly to the master server. Yay! The problem we face now is that every other slave server in the group is out of date. How can we fix that? We'll use Mercurial hooks.

Automating Config Replication

Mercurial offers some very useful hooks that we can use to automatically push configuration changes out to each of our slave servers. We will use the commit and changegroup hooks to do the magic. Let's create a script that will live on the master server to take care of pushing our changes out to each slave server. Create a new file in /etc/ on the master server called propagate.sh:

#!/bin/bash
hg up
for node in 'slave1' 'slave2'
do
    ssh root@$node "cd /etc; hg pull -u"
done

Let's also make sure this script is executable:

chmod +x /etc/propagate.sh

This script assumes that your /etc/hosts file or your nameserver is configured appropriately to allow slave1 and slave2 to be resolved to IP addresses. The reason we SSH into each slave server and use hg pull instead of simply using hg push ssh://root@$node//etc is that you can't force an update on a remote server using push. You can, however, request an update when you're using pull.

Obviously, this script is not the most sophisticated of scripts. It might work well for my demonstration, with only a few servers, but once you get beyond that it would be a nightmare to maintain the list of servers the script has to connect to. You can use whatever means you'd like to keep track of the servers you want to replicate your configuration to. I don't want to bother with all of the crap I'd get for suggesting one thing over another, so it's now your call.

Now it's time to configure the Mercurial hook to execute that script when the master server sees a changeset get into its repository. Open up /etc/.hg/hgrc on the master server, or create it if it doesn't exist. Make sure it has at least the following in it:

[hooks]
commit.propagate = /etc/propagate.sh
changegroup.propagate = /etc/propagate.sh

Let's try it out! Run these commands on your master server:

echo "" >> /etc/hosts
hg ci -m "Added a blank line to the hosts file"
root@slave1's password:
remote: Permission denied, please try again.
remote: Permission denied, please try again.
remote: Permission denied (publickey,password).
abort: no suitable response from remote hg!
Connection closed by slave2
warning: commit.propagate hook exited with status 255

Blast! The script failed because it wanted us to type in a password, but it was not in interactive mode. Let's fix that with a little preshared key magic. I won't go into the details about how this works, but the following commands on your master server should get us rolling:

ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys2
scp -r ~/.ssh root@slave1:~
scp -r ~/.ssh root@slave2:~

Warning

Keep in mind this is not secure and should probably not be how your production machines are configured, especially with the root user.

For simplicity's sake, just accept all of the details and don't set a passphrase. These commands enable us to SSH into our slave servers without using a password. If you get an error such as:

remote: Host key verification failed.
abort: no suitable response from remote hg!

...it just means the machine that threw the error hasn't accepted the other host's key yet. Manually SSH from that machine into the host it was trying to reach, and answer "yes" to the question about the authenticity of the host you're logging into.

Testing It Out

It is now time to see if we can make a configuration change on one slave server and have it show up on the other slave server. Let's update the hosts file a little bit. Let's add the following line on the second slave server:

10.0.0.5        nonexistanthost

Now let's commit the change and push it off to the master server:

hg ci -m "Added a dumb line to the hosts file"
hg push

My system actually told me that it had copied the change out to another host. I know because I saw these lines:

remote: pulling from ssh://root@master//etc
remote: searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files

Now when I look at the first slave server, I should see that new line in my /etc/hosts file. Also, the log on each server should have the same entry that I just made about adding "a dumb line to the hosts file."

Seem Like A Lot of Work?

A lot of what we just did probably seemed like more work than it is worth, right? Well, being a nerd typically comes with a few qualities. One quality which I have observed many a time in my most geeky of friends is that they will spend hours and hours up front on a program or script just so they can save 2 minutes in the future. They work hard to be lazy.

There is a lot of boilerplate configuration that takes place in this particular scenario. I realize that. What I haven't shared with you, though, is how I automated the boilerplate configuration as well as the propagation of configuration. I'm tired of putting this article off, so I will have to leave those details for another article. Sorry!

Why?! There's a Better Way (tm)

There is always a better way. Always. Go ahead and use whatever you feel is the most efficient method for keeping configuration files in sync across several computers. This is just one more option to add to your toolkit. Don't worry, I won't be offended if you don't like it or don't use it. It works perfectly for me, it's free, and I just wanted to share!

Google Code + Mercurial = Many Happies

Last night I noticed that Google Code is actually offering the Mercurial project hosting that they promised back in April. I guess it's been around for most of May, but I never saw any news to suggest that it was actually public. As soon as I noticed it, I converted one of my less-known, less-used SVN projects to Mercurial. I'm really liking it.

I need to do a bit more work on this particular project before I announce it to the world, but it's out there, and it's Mercurial powered now babay. I think I will be leaving most of my other projects in SVN so I don't upset all of the other people who actually use them.

Oh, I also noticed that the project quotas were bumped up quite a bit. Now each project seems to get a whopping 1GB of space for free!!! What do you have to say about that, BitBucket/GitHub/Assembla/[insert dirty, rotten free open source project hosting host name here]?!

Hooray for Google Code!

My Fedora 11 Adventures: Part III

Alrighty folks. Good night's rest? Check. Need to get work done? Check. Today's adventure will be about getting my computer set up for the regular development tasks that I need to do every day for my work and hobbies.

Getting Work Done

The first thing I noticed this morning when I turned on my computer was that it took exactly 1 minute from the time I hit the power button to the time I hit the enter key to log into my computer. Logging in took an additional 15-20 seconds. That was quite nice.

The next thing I noticed was that I was not connected to my network as I should be. Clicking the system tray menu item as I did last night did the trick, but I'm going to have to investigate how to make it connect automatically at boot.

Automatic Network Connectivity

It looks like I can have my Ethernet be activated automatically by right clicking on the network manager icon in my system tray, selecting "Edit Connections," selecting "System eth0," clicking the "Edit" button, and finally checking the "Connect automatically" option in the subsequent window. We'll see if this truly activates my connection next time I boot.

In an effort to get my wireless working, I poked around a little more in the "Edit Connections" screen, but I didn't see anything that seemed useful. I did find something that seemed a bit more interesting by selecting Applications > Administration > Network Configuration from the KDE menu. This utility suggested that my wireless adapter was actually wlan1 instead of the wlan0 that the tray icon seemed to think it was.

I tweaked a few settings for my wireless adapter, such as checking "Activate device when computer starts" and "Allow all users to enable and disable the device." In the Hardware Device tab, I selected my actual Broadcom wireless adapter instead of the nonexistent wlan0. I also hit the probe button next to the "Bind to MAC address" box.

My network manager tray icon still shows no wireless networks (of which there is no shortage around here), and running iwlist scan as root says "Network is down" next to wlan1. I think I will just mess with it later. Maybe it will "just work" when I reboot next time.

Installing/Configuring The Tools

As I previously mentioned, I prefer to use things that work well without getting in my way. When talking about text editors, VIM is just fine for me, and VIM 7.2.148 is already installed on my Fedora 11. One less thing to install.

Next up comes the installation of all of the goods for Firefox. It turns out that Fedora comes with Firefox 3.5 Beta 4--a bold move. I hope my extensions all work! The extensions I will be installing right now include:

  • AdBlock Plus: get rid of pesky ads that slow down my computer
  • Firebug: an amazing tool when debugging Web pages
  • Web Developer: has some niceties that Firebug doesn't come with
  • Screengrab: fantastic for taking screenshots of full Web pages
  • 2Zeus: my own little extension that allows me to quickly get short URLs a la tinyurl.com and is.gd

When I plugged in my external 1TB Seagate hard drive, I got a delicious Fatal Error message:

[Screenshot: /images/fedora/p3/fatal_error.png]

All appears to be in order, however, as I have access to all of the partitions on the external drive.

Next I want to install Opera. It appears that the place to look is Applications > System > Software Management in the KDE menu. Let's see what we have. Searching for Opera in the only obvious search box sent my computer into a crazy "let me do something without telling you" cycle. I have no idea what's really going on, but my processor has been maxed out for the past 3 minutes and my network has been working a little here and there. Can it really be that difficult to find a simple package? Oh! It finished! It took 6 minutes and 54 seconds to find nothing. Excellent. Let me look somewhere else.

Awesome. My computer is non-responsive. The hard drive is still working, but my GUI is doing nothing. I love it. Attempts to drop back to a trusty console using Control, Alt, and F1-F6 rendered no results. I wonder if I can SSH in from here... I sure can! Fantastic. Let's see what's happening.

It appears that X is taking up 90% of my processing power, but my computer is still not responding to any of my input. Dang it! Now my SSH session isn't working. Looks like the only option I have now is to do a hard reset. Joy of joys. Thank you for this opportunity, Fedora. Last time I did a hard reset, I was in Windows and it trashed my 1TB external.

So far rebooting seems to be going well. I wonder if my network will be setup properly still... Fantastic! It works! Wireless is still not available though. I can live without that for the time being.

Back in the Software Management utility, searching for Opera again proved to work much more quickly, but I didn't get any results. I suppose I'll just go download it from their site. The download for Opera 10 beta 1 is a mere 7.2MB, and it looks like it will open in the same Software Management utility that I've been dinking around in.

When I downloaded the Opera package, I asked it to open directly in the default program, KPackageKit. That doesn't seem to be working in the least, so I am going to try to just save it to my home directory and install it some other way. Sorry guys and gals, I ended up just dropping back to a terminal to run rpm -Uvh opera-10.00-b1.gcc4-shared-qt3.x86_64.rpm and that seemed to work fine. Opera appeared in my KDE menu, and it runs well now.

Next up is Pidgin. Pidgin 2.5.5 is installed by default, and getting it up and running was as trivial as ever.

Now to test Flash... YouTube, here I come!! Beh, Flash is not installed by default, and it's also not in the Software Management tool. What use is that thing?! Maybe if I apply all of the updates in the "Software Updates" section it will feel more useful... Here it goes.

Cool. System is unresponsive again. Let's see if I can reboot from here. Nope! Thank you, Fedora, for making me hard reset my system more in 2 hours than I have had to in YEARS. Yeah, thanks buddy.

10:50 AM So the software updates continue to not work. It appears that a ypbind package is the culprit which is causing everything to hang... I disabled it and tried to install the software updates again.

10:53 AM GUI is non-responsive again. Yay.

10:56 AM Third hard reset in 3 hours. Maybe I will have to modify my original parameters and try GNOME to see if that makes the computer usable for more than an hour at a time.

11:00 AM That's it! I'm getting rid of KDE 4... sorry folks, GNOME is my only hope of getting work done. Second clean shutdown out of 5 since the installation completed last night.

My Fedora 11 Adventures: Part I

Today I decided that I would deliberately put myself outside of my comfort zone. No, not by intentionally putting myself on a telephone for more than 5 minutes this month... I will need a lot more preparation before I can attempt that one. No no, today's experiment has to do with Linux. If you're new around here, I am a very big fan of Linux. It has been my primary operating system for over 8 years (but I still use Windows and Mac occasionally, when I need to test my programs and the cross-platform behavior).

A Little Background On Yours Truly

There was a time when I was what you would call a distro-hopper. I would download any and every Linux distribution I could get my hands on. Most of them would hang around on my computer for a few days at best, but a select few actually impressed me enough to have them stick around for longer. Among those few are Slackware and Sidux. Many other distros are nice and pretty, but when it comes to me being productive on them, there always seems to be something lacking.

I am addicted to speed and reliability--two things that originally urged me to tinker with Linux all those years ago. I am more than willing to sacrifice looks and features for being able to just get something done quickly and efficiently. As a matter of fact, I'm writing this article in VIM, one of the most "light-weight" editors around these days. It allows me to do exactly what I want to do without getting in my way. That's how I like things.

That's probably the main reason I love Slackware. It won't do anything I don't tell it to do. No crazy background processes updating some package repository, slowing down my system. No pestering me about security updates that I will install in my own due time. Slackware only does what I want it to, and I have learned a ton about Linux because of it. If I decide I want something automated in the background, I have to tell the computer to do it. If one of my programs has been updated on the Internet, I download and install the package manually instead of using a "package manager." If one of my programs doesn't work because of a missing dependency, I am the one who finds and downloads the dependency. It's a lot of work initially, but I'm of the persuasion that this work is well worth it for my situation.

In today's day and age, that sort of setup seems to scare a lot of people off. People like to have things "just work." People like to not have to worry about keeping up to speed with what security threats are out there. People like having things to keep them entertained instead of getting things done. People like to see their desktop turn into a cube and spin around. People like to see things glow and wiggle on their computer. It's aesthetically pleasing. There's nothing wrong with that. Unless you want to get things done instead of just stare at your computer.

The Challenge

With that background in mind, you should be equipped to better understand the information and articles that follow. My challenge to myself is this: install Fedora 11 and use it for at least a week. To add to the challenge, I'm installing the 64-bit version. In my past experience with 64-bit operating systems, there has been no real motivation or necessity for 64-bit computing. It just means more compatibility problems, which reduces productivity. This will be the first 64-bit operating system I actually plan to keep around beyond the exploratory period.

There are a few things about this that will bring me waaaay out of my comfort zone. They are (in no particular order):

  • Fedora
  • RPMs
  • KDE 4

I have a strong dislike for each of these items. There was a time when I considered Fedora to be a respectable platform--back when it was Fedora Core 2 or 3. Ever since then, I feel that it has gone down the tubes. RPMs have always seemed grossly lacking in the speed department to me, and it only got worse after I found out about Debian and Slackware. Finally, KDE 4 seems like one of the absolute worst window managers I have ever encountered. I love KDE 3.5.x. I wish I could use it everywhere I go. But KDE 4 has yet to appeal to my desire for efficient productivity--it gets in my way almost as much as GNOME does.

Starting today, I plan to look all of these opinions (as biased as they may be) straight in the eye and take 'em head-on. I am going to work on learning to enjoy using Fedora. I'm going to work on learning how to appreciate RPMs. I am going to learn to be productive in the window manager "of the future."

And I will keep you all apprised of my progress.

Checking In

I suppose I should update everyone out there about what I've been up to lately. It seems strange to me that I post articles much less frequently now than I did when I was a full-time university student. You'd think I'd have a whole lot more time to blog about whatever I've been working on. I suppose I do indeed have that time; it's just that I usually like to wait until my projects are "ready" for the public before I write about them.

The biggest reason I haven't posted much of anything lately is a small Twitter client I've been working on. Its purpose is to be a simple, out-of-the-way Twitter client that works equally well on Windows, Linux, and OSX. The application is written in Python and wxPython, and it has been coming along quite well. It works great in Linux (in GNOME and KDE at least), but Windows and OSX have issues with windows stealing focus when I don't want them to. I'm still trying to figure it out--any advice would be greatly appreciated.

Chirpy currently does nothing more than check your Twitter accounts for updates periodically. It notifies you of new updates using blinking buttons (which can be configured to not blink). I think the interface is pretty nice and easy to use, but I am its developer so it's only proper that I think that way.

Anyway, that project has been sucking up a lot of my free time. It's been frustrating as I build it in Linux only to find that Windows and OSX both act stupidly when I go to test it. That frustration inspired me to tinker with a different approach to a Twitter client. I began fooling around with it last night, and I think the idea has turned out to be more useful than Chirpy is after a month of development!

I'm calling this new project "Tim", which is short for "Twitter IM". This one also periodically checks your Twitter account(s) for updates (of course). However, Tim will send any Twitter updates to any Jabber-enabled instant messenger client that you are signed into. If you're like me, you have Google Talk open most of the day, so you can just have Twitter updates go straight there! You can also post updates to Twitter using your Jabber instant messenger when Tim is running by simply sending a message back!!

The really neat stuff comes in when you start to consider the commands that I've added to Tim tonight. I've made it possible for you to filter out certain hashtags, follow/unfollow users, and specify from which Twitter account to post updates (when you have multiple accounts enabled). I hate all of those #FollowFriday tweets... they drive me crazy. So all I have to do is type ./filter followfriday and no tweet that contains #FollowFriday will be sent to my Jabber client. I love it.
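Under the hood, that kind of command handling doesn't take much code. Here's a hypothetical sketch of the dispatch logic (not Tim's actual source), just to show the idea:

FILTERS = set()

def post_to_twitter(text):
    # stub: a real implementation would call the Twitter API here
    print 'posting: %s' % text

def should_deliver(tweet):
    """Drop tweets that contain a filtered hashtag."""
    return not any('#' + tag in tweet.lower() for tag in FILTERS)

def handle_message(body):
    """Handle an incoming Jabber message: a command or a status update."""
    if body.startswith('./'):
        command, _, argument = body[2:].partition(' ')
        if command == 'filter' and argument:
            FILTERS.add(argument.strip().lower())
            return
    post_to_twitter(body)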

More commands are on the way. Also on the way is a friendly interface for configuring Tim. Getting it up and running the first time is... a little less than pleasant :) Once you have it configured it seems to work pretty well though.

If you're interested in trying it out, just head on over to the project's page (http://bitbucket.org/codekoala/twitter-im/). Windows users can download an installer from the Downloads tab. I plan on putting up a DMG a little later tonight for OSX users. Linux users can download the .tar.gz file and install the normal Python way :) Enjoy!

Update: The DMG for OSX is a little bigger than I thought it would be, so I won't be hosting it on bitbucket. Instead, you can download it from my server.

Don't forget to read the README!!!

Send E-mails When You Get @replies On Twitter

I just had a buddy of mine ask me to write a script that would send an e-mail to you whenever you get an "@reply" on Twitter. I've recently been doing some work on a Twitter application, so I feel relatively comfortable using the python-twitter project to access Twitter. It didn't take very long to come up with this script, and it appears to work fine for us (using a cronjob to run the script periodically).
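For reference, a crontab entry along these lines (the path is just an example) would run the script every ten minutes:

*/10 * * * * python /home/user/bin/twitter_email_replies.py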

I thought others on the Internets might enjoy the script as well, so here it is!

#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""
A simple script to check your Twitter account for @replies and send you an email
if it finds any new ones since the last time it checked.  It was developed using
python-twitter 0.5 and Python 2.5.  It has been tested on Linux only, but it
should work fine on other platforms as well.  This script is intended to be
executed by a cron manager or scheduled task manager.

Copyright (c) 2009, Josh VanderLinden
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

- Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
- Neither the name of the organization nor the names of its contributors may
be used to endorse or promote products derived from this software without
specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""

import twitter
import ConfigParser
import os
import sys
from datetime import datetime
import smtplib
from email.MIMEMultipart import MIMEMultipart
from email.MIMEText import MIMEText
from email.Utils import formatdate

# get the user's "home" directory
DIRNAME = os.path.expanduser('~')
CONFIG = os.path.join(DIRNAME, '.twitter_email_replies.conf')
FORMAT = '%a %b %d %H:%M:%S +0000 %Y'
REPLY_TEMPLATE = """%(author)s said: %(text)s
Posted on %(created_at)s
Go to http://twitter.com/home?status=@%(screen_name)s%%20&in_reply_to_status_id=%(id)s&in_reply_to=%(screen_name)s to post a reply
"""

# sections
AUTH = 'credentials'
EXEC = 'exec_info'
EMAIL = 'email_info'

# make the code a bit "cleaner"
O = lambda s: sys.stdout.write(s + '\n')
E = lambda s: sys.stderr.write(s + '\n')
str2dt = lambda s: datetime.strptime(s, FORMAT)

def get_dict(status):
    my_dict = status.AsDict()
    my_dict['screen_name'] = my_dict['user']['screen_name']
    my_dict['author'] = my_dict['user']['name']
    return my_dict

def main():
    O('Reading configuration from %s' % CONFIG)
    parser = ConfigParser.SafeConfigParser()
    config = parser.read(CONFIG)

    # make sure we have the proper sections
    if not parser.has_section(AUTH): parser.add_section(AUTH)
    if not parser.has_section(EMAIL): parser.add_section(EMAIL)
    if not parser.has_section(EXEC): parser.add_section(EXEC)

    try:
        # get some useful settings from the configuration file
        username = parser.get(AUTH, 'username')
        password = parser.get(AUTH, 'password')

        to_address = parser.get(EMAIL, 'to_address')
        from_address = parser.get(EMAIL, 'from_address')
        smtp_server = parser.get(EMAIL, 'smtp_server')
        smtp_user = parser.get(EMAIL, 'smtp_user')
        smtp_pass = parser.get(EMAIL, 'smtp_pass')

        if '' in [username, password, to_address, from_address, smtp_server]:
            raise Exception('Not configured')
    except Exception:
        E('Please configure your credentials and e-mail information in %s!' % CONFIG)

        # create some placeholders in the configuration file to make it easier
        sections = {
            AUTH: ('username', 'password'),
            EMAIL: ('to_address', 'from_address', 'smtp_server', 'smtp_user', 'smtp_pass')
        }

        for section in sections.keys():
            for opt in sections[section]:
                if not parser.has_option(section, opt):
                    parser.set(section, opt, '')
    else:
        # determine the last time we checked for replies
        try:
            last_check = str2dt(parser.get(EXEC, 'last_run'))
        except ConfigParser.NoOptionError:
            last_check = datetime.utcnow()
        last_check_str = last_check.strftime(FORMAT)

        info = 'Fetching updates for %s since %s' % (username,
                                                       last_check_str)
        O(info)

        # attempt to connect to Twitter
        api = twitter.Api(username=username, password=password)

        # not using the `since` parameter for more backward-compatibility
        timeline = api.GetReplies()
        new_replies = []
        for reply in timeline:
            post_time = str2dt(reply.GetCreatedAt())
            if post_time > last_check:
                new_replies.append(reply)

        count = len(new_replies)
        if count:
            # send out an email for this user
            O('Found %i new replies... sending e-mail to %s' % (count, to_address))
            reply_list = '\n\n'.join([REPLY_TEMPLATE % get_dict(r) for r in new_replies])
            is_are = 'is'
            plural = 'y'
            if count != 1:
                is_are = 'are'
                plural = 'ies'

            params = {
                'is_are': is_are,
                'count': count,
                'replies': plural,
                'username': username,
                'reply_list': reply_list,
                'last_check': last_check_str
            }

            text = """There %(is_are)s %(count)i new @repl%(replies)s for %(username)s on Twitter since %(last_check)s:

%(reply_list)s""" % params

            # compose the e-mail
            msg = MIMEMultipart()
            msg['From'] = from_address
            msg['To'] = to_address
            msg['Date'] = formatdate(localtime=True)
            msg['Subject'] = 'New @Replies for %s' % username
            msg.attach(MIMEText(text))

            # try to send the e-mail message out
            email = smtplib.SMTP(smtp_server)
            if smtp_user and smtp_pass:
                email.login(smtp_user, smtp_pass)
            email.sendmail(from_address,
                           to_address,
                           msg.as_string())
            email.close()

        # save the current time so we know where to pick up next time
        parser.set(EXEC, 'last_run', datetime.utcnow().strftime(FORMAT))

    # write the config
    O('Saving settings...')
    out = open(CONFIG, 'wb')
    parser.write(out)
    out.close()

if __name__ == '__main__':
    main()

Feel free to copy this script and modify it to your desires. Also, please comment if you have issues using it.

Downtime and django-tracking 0.2.7

The Foul Side

Some of you may have noticed the ~11 hours of intermittent downtime that codekoala.com experienced from early on the 24th of January to just a little while ago. I was doing some work on my django-tracking application, which somehow seemed to break my site. CodeKoala.com uses PostgreSQL as the database backend, and as soon as I tried to apply the changes to django-tracking to my site, everything just seemed to die.

The weird thing was that the site would work if I put it on a SQLite or MySQL backend. I didn't change the database schema at all as part of my changes to django-tracking, so it made absolutely no sense. I was in touch with WebFaction's awesome support squad for a good deal of today trying to get things sorted out. We tried just about everything we could think of, short of porting the entire site to a different backend or restoring a recent backup.

Just as things were looking very grim, I tried this command: ./manage.py reset tracking. Voilà! The site started working again. I guess I just had some super funky junk in my tracking application's tables.

On the Brighter Side

As a result of all this work and toil, you all can now enjoy django-tracking 0.2.7! There were a lot of minor code optimizations that went into this release. The biggest change, however, is the fancy "active users map" that you see here.

This feature allows you to display a map of where your recently active users are likely to be, based upon their IP addresses. A list is also available below the map which displays further information about each active visitor. The page updates itself every 5 seconds or so, which means that if a visitor hasn't been active for 10 minutes (or whatever your timeout happens to be), their marker will disappear from the map and their entry in the list will go away too! Pretty dang fancy if you ask me!
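The timeout logic behind the map boils down to a simple cutoff query. This is a hypothetical sketch rather than django-tracking's actual models, but it shows the idea:

from datetime import datetime, timedelta

from django.db import models

class Visitor(models.Model):
    ip_address = models.IPAddressField()
    last_update = models.DateTimeField()

def active_visitors(timeout_minutes=10):
    # anyone seen within the timeout window; older markers fall off the map
    cutoff = datetime.now() - timedelta(minutes=timeout_minutes)
    return Visitor.objects.filter(last_update__gte=cutoff)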

If you're interested in downloading and using django-tracking, please check out the links at the end of the article. The Google Code link explains what you need to do and how to configure things.

So folks!! Please play with it!