2Ze.us Updates

There has been quite a bit of activity in my 2ze.us project since I first released it nearly a year ago. My intent was never to compete with bit.ly, is.gd, or anyone else in the URL-shortening arena; I created the site as a way to learn more about Google's AppEngine. It didn't take very long to get it up and running, and it seemed to work fairly well.

AppEngine and Extensions

I was able to basically leave the site alone on AppEngine for several months--through about September 2009. In that time, I came up with a Firefox extension to make its use more convenient.

The extension allows you to quickly get a shortened URL for the page you're currently looking at, and a couple of context menu items let you get a short URL for things like specific images on a page. Also included in the extension is a preview for 2ze.us links. The preview can tell you the title and domain of the link's target. It can tell you how much smaller the 2ze.us URL is compared to the full URL. Finally, it displays how many times that particular 2ze.us link has been clicked.

That was all fine and dandy. It was only the second Firefox extension I had ever written, and it's still going strong. In June or July of 2009, I started working on a little program to make it easier for me to interact with Twitter the way I wanted to. That was a great opportunity to incorporate 2ze.us into the application, so any URL I wanted to post to Twitter would automatically be shortened using my own shortener.

Porting to WebFaction and PHP

Anyway, around the end of September 2009, I noticed that there were a lot of problems with 2ze.us. It was slow and sometimes completely unresponsive. Certain URLs would redirect to their full URLs, while others wouldn't. The Firefox extension stopped working nicely. Oh yeah, and AppEngine rolled back to a previous revision of the code without me telling it to. That's when everything just died. It didn't take long for me to decide to migrate my project from AppEngine onto my awesome WebFaction hosting.

At this point, I was faced with a small dilemma: keep the code in Python, or port it to PHP? I opted to port it to PHP, because I didn't want the overhead of a full Django instance for a site that needed to be very zippy, and I was unacquainted with lighter Python options at the time.

By early October 2009, I had managed to turn the project into a PHP beast, running on Apache. It was a lot more responsive than AppEngine ever let 2ze.us be. There were a few bumps along the road, what with the extension and Twitter client relying on various parts of the site. Eventually it got to a point where I could just let it sit and work.

Chromium Extension

Sometime around the end of December, I decided to write another extension for 2ze.us, only for Google Chrome and Chromium this time. This extension isn't quite as feature-packed as its Firefox brother, but it gets the job done.

Clip2Zeus

Shortly after "completing" the Chromium extension, I had what seemed like a pretty original idea. Who knows if it really is, but I still haven't seen another tool quite like the one that I made as a result of this idea. I thought, "Now, why should I need to install an extension in each Web browser I use on each computer I use? Is there a better way?"

The answer came quickly: a standalone, desktop application. Write one program that handles shortening URLs for you. My laziness told me to make a program that monitors your system clipboard for URLs. If a URL is detected, try to shorten it, and update the clipboard contents in place. Boom. Done. All extensions become useless beyond things like the URL preview (which is very useful, imo).

The next question I asked was, "Do I make it platform-dependent? Should I stick it to the majority of computer users and write my tool for Linux only? For OSX only? For, uh... Windows only?" Again, an easy question to answer. Support them all or don't even bother writing the application.

A week's worth of midnight hacking saw the birth of Clip2Zeus 1.0a. It's a cross-platform desktop application that does exactly what I just described. When it's running and detects a URL on your system clipboard, it tries to shorten the URL and update it in your clipboard. If you copy a block of text, the application only modifies the URLs within it--the rest of the text stays in your clipboard as-is, just with shorter URLs.
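That "only modify the URLs" behavior is exactly the kind of thing a regular expression substitution handles nicely in Python. Here's a minimal sketch of the idea--not Clip2Zeus's actual code, and shorten() is just a stub standing where the real 2ze.us call would go:

import re

URL_RE = re.compile(r'https?://\S+')

def shorten(url):
    # Stub: pretend every URL shortens to the same made-up address.
    return 'http://2ze.us/xx'

text = 'Check this out: http://example.com/a/really/long/path -- enjoy!'
# Each URL is replaced in place; the rest of the text is untouched.
print URL_RE.sub(lambda m: shorten(m.group(0)), text)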

I use the program every day at work (on OSX). It's been very fun for me to see a short URL any time I copy a nasty URL to my clipboard. Imagine that; I'm a big fan of my own work...

Tornado

Lately, I had noticed that the site was getting kind of slow again. Sometimes it would take several seconds for Clip2Zeus to shorten the URLs in my clipboard, when it was normally nearly instantaneous. Every once in a while, Clip2Zeus would fail to connect to the website altogether.

One of my friends had been asking me a lot of questions about the Tornado framework over the past few months. I had read a few things about Tornado when it was open-sourced last year, but I never really felt the need to dabble with it. His questions finally prompted me to tinker a little.

Last night I re-ported 2ze.us to Python, using the Tornado framework this time. So far I'm very impressed with its responsiveness. The framework offers a lot of neat little utilities, and it is very fast (as reported by dozens of other reputable sources).
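To give you an idea of why the port went so quickly, here's roughly what the redirecting end of a URL shortener looks like in Tornado. This is a simplified sketch rather than the actual 2ze.us code--the in-memory dictionary stands in for the real datastore:

import tornado.httpserver
import tornado.ioloop
import tornado.web

# Stand-in for the real datastore of short codes.
SHORT_URLS = {'abc': 'http://example.com/some/very/long/path'}

class RedirectHandler(tornado.web.RequestHandler):
    def get(self, code):
        target = SHORT_URLS.get(code)
        if not target:
            raise tornado.web.HTTPError(404)
        self.redirect(target)  # send the visitor on to the full URL

application = tornado.web.Application([
    (r'/([a-zA-Z0-9]+)', RedirectHandler),
])

if __name__ == '__main__':
    http_server = tornado.httpserver.HTTPServer(application)
    http_server.listen(8888)
    tornado.ioloop.IOLoop.instance().start()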

On top of the speed increase that came with the transition to Tornado, my RAM usage on WebFaction has come down by nearly 100MB, just from turning off that one Apache-backed website. Now I'm nowhere near my RAM cap! Wahoo!!

Enough rambling. Like I said at the beginning of this article, a lot has been happening with this project in the past year. I didn't even think about all of the time I put into projects related to my simple little side project. Looking back, I'm quite satisfied with how things have unfolded.

Statistics

Here are some simple statistics for 2ze.us. Since March 2009...

  • 5,252 URLs have been shortened using 2ze.us
  • 2ze.us links have been clicked 198,267 times
  • 315,951 URL characters have been turned into 11,532 characters

In April 2009...

  • 217 URLs were shortened
  • 2ze.us links were clicked 617 times

In February 2010...

  • 1,182 URLs were shortened
  • 2ze.us links were clicked 32,830 times

Not too shabby for a side project.

Monitor Multiple Remote Files Using Multitail

There comes a time in each of our individual lives that we just learn to love log files. We learn to love utilities like tail and grep as we pore over countless lines of information, seeking out the stuff that really matters. We like to show off our debugging prowess as innocent bystanders look on in absolute wonderment.

While that's all fine and dandy, I'm always on the lookout for utilities to make my log monitoring less painful. A few weeks ago, my supervisor introduced me to a program that he's been using for quite some time: multitail. In essence, it's tail with some really neat features, such as the ability to:

  • "tail" multiple files (or commands, like netstat) independently in the same terminal
  • highlight text using regular expressions
  • search log messages and see only the matching lines
  • merge multiple files into one log window
  • scroll back through the history of a log file
  • apply highlighting "themes"

I've been using multitail for a couple of weeks now (it took me a while to warm up to it after my supervisor introduced it), and I'm quite satisfied with it. One thing I really, really like about multitail is that I can kinda sorta almost monitor multiple remote files. What does that mean, you ask?

Well, my development environment includes at least 5 virtual machines, each of which will be logging different but equally important information. I want to be able to "tail" a specific log file on each of the virtual machines in one window. Now, it took me a while to learn how to do this, which is why I'm sharing the information with you.

And here comes my usual disclaimer: this may not be the most efficient way to do what I want to do, but it's currently working for me. I'm open to other solutions too!

Anyway, I can run a command like the following to monitor multiple remote log files:

multitail -l 'ssh user@host1 "tail -f /path/to/log/file"' -l 'ssh user@host2 "tail -f /path/to/log/file"'

Such a command would ssh into two computers, host1 and host2, and run tail -f /path/to/log/file on each. Multitail allows you to monitor the output of both tail commands in a single window, reducing clutter on your desktop. You can also arrange the files/commands you're "tailing" into various rows and columns. I tend to have a 2x2 grid of log files when I use multitail at work.
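As an example, an incantation along these lines gives me four windows in two columns--the 2x2 grid I mentioned. The hosts and paths are placeholders, and if I remember the flags correctly, -s 2 is what splits the terminal into two columns (check multitail's man page if your version disagrees):

multitail -s 2 \
    -l 'ssh user@host1 "tail -f /path/to/log/file"' \
    -l 'ssh user@host2 "tail -f /path/to/log/file"' \
    -l 'ssh user@host3 "tail -f /path/to/log/file"' \
    -l 'ssh user@host4 "tail -f /path/to/log/file"'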

I've also started using multitail to monitor the access and error logs for my Django sites on WebFaction. I simply ssh into my account, run an alias for a ridiculous multitail command, and watch as both log files scroll on by.

Again, this is just another aspect of my work environment that is fun and useful to me, and I wanted to spread the joy. Multitail may or may not be a utility you like to use, but it suits my current needs and desires quite well. YMMV. And, once again, I'm always on the look-out for other tools to make my work life more interesting and productive!

Afloat: Window Management For OSX

Today I wanted to be able to watch the PyCon Live Stream while using OS X at work. A quick Google search returned an awesome program: Afloat. It lets me change the window settings for just about any OS X application. Right now I've got the live feed just lingering there in the background, pinned to my desktop. Loving it!

Auto-Generating Documentation Using Mercurial, ReST, and Sphinx

I often find myself taking notes about various aspects of my job that I know I'd forget as soon as I moved on to another project. I've gotten into the habit of taking my notes using reStructuredText, which shouldn't come as a surprise to any of my regular visitors. On several occasions, some of the other guys in the company have asked me for clarification on things I had taken notes about. Lucky for me, I had taken some nice notes!

However, these individuals probably wouldn't appreciate reading raw ReST markup as much as I do, so I decided to do something nice for them: I set up Sphinx to prettify my documentation, and then wrote a small Web server in Python so people within the company network could access the latest version of my notes without much hassle.

Just like I take notes to remind myself of stuff at work, I'm taking notes on this automated ReST->HTML magic so that I can set it all up again in the future. I figured I would make these notes public this time, so you all can enjoy similar bliss.

Platform Dependence

I am writing this article with UNIX-like operating systems in mind. Please forgive me if you're a Windows user and some of this is not consistent with what you're seeing. Perhaps one day I'll try to set this sort of thing up on Windows.

Installing Sphinx

The first step that we want to take is installing Sphinx. This is the project that Python itself uses to generate its online documentation. It's pretty dang awesome. Feel free to skip this section if you have already installed Sphinx.

Depending on your environment of choice, you may or may not have a package manager that offers python-sphinx or something along those lines. I personally prefer to install it using pip or easy_install:

$ sudo pip install sphinx

Running that command will likely respond with a bunch of output about downloading Sphinx and various dependencies. When I ran it in my sandbox VM, I saw it install the following packages:

  • pygments
  • jinja2
  • docutils
  • sphinx

It should be a pretty speedy installation.

Installing Mercurial

We'll be using Mercurial to keep track of changes to our ReST documentation. Mercurial is a distributed version control system that is built using Python. It's wonderful! Just like with Sphinx, if you have already installed Mercurial, feel free to skip to the next section.

I personally prefer to install Mercurial using pip or easy_install--it's usually more up-to-date than what you would have in your package repositories. To do that, simply run a command such as the following:

$ sudo pip install mercurial

This will download and install the latest stable Mercurial. You may need python-dev or something similar for your platform in order for that command to work. If you're on Windows, however, I highly recommend TortoiseHg instead. The TortoiseHg installer sets up a graphical Mercurial client along with the command line tools.

Create A Repository

Now let's create a brand new Mercurial repository to house our notes/documentation. Open a terminal/console/command prompt to the location of your choice on your computer and execute the following commands:

$ hg init mydox
$ cd mydox

Configure Sphinx

The next step is to configure Sphinx for our project. Sphinx makes this very simple:

$ sphinx-quickstart

This is a wizard that will walk you through the configuration process for your project. It's pretty safe to accept the defaults, in my opinion. Here's the output of my wizard:

$ sphinx-quickstart
Welcome to the Sphinx quickstart utility.

Please enter values for the following settings (just press Enter to
accept a default value, if one is given in brackets).

Enter the root path for documentation.
> Root path for the documentation [.]:

You have two options for placing the build directory for Sphinx output.
Either, you use a directory "_build" within the root path, or you separate
"source" and "build" directories within the root path.
> Separate source and build directories (y/N) [n]: y

Inside the root directory, two more directories will be created; "_templates"
for custom HTML templates and "_static" for custom stylesheets and other static
files. You can enter another prefix (such as ".") to replace the underscore.
> Name prefix for templates and static dir [_]:

The project name will occur in several places in the built documentation.
> Project name: My Dox
> Author name(s): Josh VanderLinden

Sphinx has the notion of a "version" and a "release" for the
software. Each version can have multiple releases. For example, for
Python the version is something like 2.5 or 3.0, while the release is
something like 2.5.1 or 3.0a1.  If you don't need this dual structure,
just set both to the same value.
> Project version: 0.0.1
> Project release [0.0.1]:

The file name suffix for source files. Commonly, this is either ".txt"
or ".rst".  Only files with this suffix are considered documents.
> Source file suffix [.rst]:

One document is special in that it is considered the top node of the
"contents tree", that is, it is the root of the hierarchical structure
of the documents. Normally, this is "index", but if your "index"
document is a custom template, you can also set this to another filename.
> Name of your master document (without suffix) [index]:

Please indicate if you want to use one of the following Sphinx extensions:
> autodoc: automatically insert docstrings from modules (y/N) [n]:
> doctest: automatically test code snippets in doctest blocks (y/N) [n]:
> intersphinx: link between Sphinx documentation of different projects (y/N) [n]:
> todo: write "todo" entries that can be shown or hidden on build (y/N) [n]:
> coverage: checks for documentation coverage (y/N) [n]:
> pngmath: include math, rendered as PNG images (y/N) [n]:
> jsmath: include math, rendered in the browser by JSMath (y/N) [n]:
> ifconfig: conditional inclusion of content based on config values (y/N) [n]:

A Makefile and a Windows command file can be generated for you so that you
only have to run e.g. `make html' instead of invoking sphinx-build
directly.
> Create Makefile? (Y/n) [y]:
> Create Windows command file? (Y/n) [y]: n

Finished: An initial directory structure has been created.

You should now populate your master file ./source/index.rst and create other documentation
source files. Use the Makefile to build the docs, like so:
   make builder
where "builder" is one of the supported builders, e.g. html, latex or linkcheck.

If you followed the same steps I did (I separated the source and build directories), you should see three new items in your mydox repository:

  • build/
  • Makefile
  • source/

We'll do our work in the source directory.

Get Some ReST

Now is the time to write some ReST that we want to turn into HTML using Sphinx. Create a new file, like first_doc.rst, and put some ReST in it. If nothing comes to mind, or you're not familiar with ReST syntax, try the following:

=========================
This Is My First Document
=========================

Yes, this is my first document.  It's lame.  Deal with it.

Save the file (keep in mind that it should be within the source directory if you used the same settings I did). Now it's time to add it to the list of files that Mercurial will pay attention to. While we're at it, let's add the other files that were created by the Sphinx configuration wizard:

$ hg add
adding ../Makefile
adding conf.py
adding first_doc.rst
adding index.rst
$ hg st
A Makefile
A source/conf.py
A source/first_doc.rst
A source/index.rst

Don't worry that we don't see all of the directories in the output of hg st--Mercurial tracks files, not directories.

Automate HTML-ization

Here comes the magic in automating the conversion from ReST to HTML: Mercurial hooks. We will use the precommit hook to fire off a command that tells Sphinx to translate our ReST markup into HTML.

Edit your mydox/.hg/hgrc file. If the file does not yet exist, go ahead and create it. Add the following content to it:

[hooks]
precommit.sphinxify = ~/bin/sphinxify_docs.sh

I've opted to call a Bash script instead of using an inline Python call. Now let's create the Bash script, ~/bin/sphinxify_docs.sh:

#!/bin/bash
# Rebuild the HTML docs from the ReST sources before each commit.
cd $HOME/mydox
sphinx-build source/ docs/

Notice that I used the $HOME environment variable. This means that I created the mydox directory at /home/myusername/mydox. Adjust that line according to your setup. You'll probably also want to make that script executable:

$ chmod +x ~/bin/sphinxify_docs.sh
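For what it's worth, Mercurial can also run hooks in-process if you point it at a Python function instead of a shell command. I haven't tested this exact snippet, and the path and function name are placeholders, but the equivalent of the Bash script should look something like this in mydox/.hg/hgrc:

[hooks]
precommit.sphinxify = python:/home/myusername/bin/sphinxify_docs.py:sphinxify

...with a matching function in that file:

import os
import subprocess

def sphinxify(ui, repo, **kwargs):
    # Build from <repo>/source into <repo>/docs, just like the Bash version.
    source = os.path.join(repo.root, 'source')
    target = os.path.join(repo.root, 'docs')
    # Mercurial treats a true (non-zero) return value from a pre-hook as
    # failure, so a broken sphinx-build will abort the commit.
    return subprocess.call(['sphinx-build', source, target])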

Three, Two, One...

You should now be at a stage where you can safely commit changes to your repository and have Sphinx build your HTML documentation. Execute the following command somewhere under your mydox repository:

$ hg ci -m "Initial commit"

If your setup is anything like mine, you should see some output similar to this:

$ hg ci -m "Initial commit"
Making output directory...
Running Sphinx v0.6.4
No builder selected, using default: html
loading pickled environment... not found
building [html]: targets for 2 source files that are out of date
updating environment: 2 added, 0 changed, 0 removed
reading sources... [100%] index
looking for now-outdated files... none found
pickling environment... done
checking consistency... /home/jvanderlinden/mydox/source/first_doc.rst:: WARNING: document isn't included in any toctree
done
preparing documents... done
writing output... [100%] index
writing additional files... genindex search
copying static files... done
dumping search index... done
dumping object inventory... done
build succeeded, 1 warning.
$ hg st
? docs/.buildinfo
? docs/.doctrees/environment.pickle
? docs/.doctrees/first_doc.doctree
? docs/.doctrees/index.doctree
? docs/_sources/first_doc.txt
? docs/_sources/index.txt
? docs/_static/basic.css
? docs/_static/default.css
? docs/_static/doctools.js
? docs/_static/file.png
? docs/_static/jquery.js
? docs/_static/minus.png
? docs/_static/plus.png
? docs/_static/pygments.css
? docs/_static/searchtools.js
? docs/first_doc.html
? docs/genindex.html
? docs/index.html
? docs/objects.inv
? docs/search.html
? docs/searchindex.js

If you see something like that, you're in good shape. Go ahead and take a look at your new mydox/docs/index.html file in the Web browser of your choosing.

Not very exciting, is it? Notice how your first_doc.rst doesn't appear anywhere on that page? That's because we didn't tell Sphinx to put it there. Let's do that now.

Customizing Things

Edit the mydox/source/index.rst file that was created during Sphinx configuration. In the section that starts with .. toctree::, let's tell Sphinx to include everything we ReST-ify:

.. toctree::
   :maxdepth: 2
   :glob:

   *

That should do it. Now, I don't know about you, but I don't really want the output HTML, images, CSS, or JavaScript taking up space in my documentation repository; it would just grow with each change to an .rst file. Let's tell Mercurial not to pay attention to the output HTML--it'll just sit on our filesystem, static and always up-to-date.

Create a new file called mydox/.hgignore. In this file, put the following content:

syntax: glob
docs/

Save the file, and you should now see something like the following when running hg st:

$ hg st
M source/index.rst
? .hgignore

Let's include the .hgignore file in the list of files that Mercurial will track:

$ hg add .hgignore
$ hg st
M source/index.rst
A .hgignore

Finally, let's commit one more time:

$ hg ci -m "Updating the index to include our .rst files"
Running Sphinx v0.6.4
No builder selected, using default: html
loading pickled environment... done
building [html]: targets for 1 source files that are out of date
updating environment: 0 added, 1 changed, 0 removed
reading sources... [100%] index
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] index
writing additional files... genindex search
copying static files... done
dumping search index... done
dumping object inventory... done
build succeeded.

Tada!! The first_doc.rst should now appear on the index page.

Serving Your Documentation

Who seriously wants to have HTML files that are hard to get to? How can we make it easier to access those HTML files? Perhaps we can create a simple static file Web server? That might sound difficult, but it's really not--not when you have access to Python!

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""A dead-simple static file server for the generated docs."""

from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler

def main():
    try:
        # Serve files out of the current directory on port 80.
        server = HTTPServer(('', 80), SimpleHTTPRequestHandler)
        server.serve_forever()
    except KeyboardInterrupt:
        # Close the socket cleanly when the user hits Ctrl+C.
        server.socket.close()

if __name__ == '__main__':
    main()

I created this simple script and put it in my ~/bin/ directory, also making it executable. Once that's done, you can navigate to your mydox/docs/ directory and run the script. Since I called the script webserver.py, I just do this:

$ cd ~/mydox/docs
$ sudo webserver.py

This makes it possible for you to visit http://localhost/ on your own computer, or to use your computer's IP in place of localhost to access your documentation from a different computer on your network. Pretty slick, if you ask me.
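If running a Web server as root makes you nervous, bind it to an unprivileged port instead--or skip the script entirely and use the one-liner that ships with Python 2:

$ cd ~/mydox/docs
$ python -m SimpleHTTPServer 8080

Then just visit http://localhost:8080/ instead.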

I suppose there's more I could add, but that's all I have time for tonight. Enjoy!

Announcing: Clip2Zeus

Sometime last year, I embarked on a mission to create my own TinyURL or bit.ly. This project had no real purpose other than to help me learn how to use Google's AppEngine. All of the URL-shortening services I had tried up to that point were perfectly satisfactory for my needs, but I wanted to explore a little.

It didn't take long for me to come up with the site that is now 2ze.us. I learned some neat things about AppEngine, and the site worked well enough for my needs (just like the others). Eventually I wrote a Firefox extension to make it easier to use the site. It offers the ability to quickly shorten "any" URL, and it also has a preview utility. This allows you to hover your cursor over a 2ze.us link and learn various bits of information about it--target domain name, the target page's title, number of hits, etc.

Toward the end of 2009, I started writing the same sort of extension for Chrome/Chromium. It offers pretty much the same sort of functionality as its Firefox brother, minus keyboard shortcuts.

Before long, I found myself embarking on another 2zeus-related endeavor. This new project is one that I am actually quite proud of and satisfied with. I wrote a program that will run in the background on your computer. I call it "Clip2Zeus". This program will periodically poll your clipboard, looking for URLs in whatever text you currently have on it. If any URLs are found, the program will run out to 2ze.us and try to shorten them. Once a valid result comes back from 2ze.us, your clipboard is automatically updated with the original URLs replaced by the shortened version.
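In case you're curious how that works, the core loop is surprisingly small. Here's an illustrative sketch (this is not Clip2Zeus's actual source)--it uses Tkinter's clipboard bindings from the standard library, and shorten_url() is a placeholder for the real call to 2ze.us:

import re
import time
import Tkinter

URL_RE = re.compile(r'https?://\S+')

def shorten_url(url):
    # Placeholder for the call out to the 2ze.us API.
    return url

def main():
    root = Tkinter.Tk()
    root.withdraw()  # we only want the clipboard, not a window

    while True:
        try:
            text = root.clipboard_get()
        except Tkinter.TclError:
            text = ''  # empty clipboard, or non-text data

        if text:
            updated = URL_RE.sub(lambda m: shorten_url(m.group(0)), text)
            if updated != text:
                # Swap the clipboard contents in place.
                root.clipboard_clear()
                root.clipboard_append(updated)

        time.sleep(5)  # the polling interval

if __name__ == '__main__':
    main()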

It doesn't stop there, though. You can control the program through a couple of interfaces. One is a Tk GUI, which allows you to set the polling interval or turn off polling altogether. Should you turn polling off, you can click a button in the GUI any time you explicitly want to shorten the URLs in your clipboard. There is also a command line interface that offers the same sort of functionality.

I've been using this program on several computers for a couple of weeks, and I haven't noticed any memory or performance problems at all. It works just as well on Windows and OSX as it does on Linux. It just sits there silently until you give it a URL, and it works with any program that can access the standard clipboard mechanism for whatever OS you're using.

You can download and install it using easy_install or pip, or you can grab it directly from http://pypi.python.org/pypi/Clip2Zeus/
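Installing it is a one-liner, something like:

$ sudo pip install Clip2Zeus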

Another Bash Tip

I just learned yet another goodie about the Bash shell that I must share with you. This trick made my day on so many levels.

You know how annoying it is when you get those ridiculously long commands in a terminal window? You know how much more annoying it is when you generally can't Ctrl+arrow around the command to change bits and pieces when you're on OSX? If you've ever been in that boat, this tip is for you.

Bash allows you to hit Ctrl+x Ctrl+e to edit your current command in your "preferred" editor. Your "preferred" editor is determined from the EDITOR environment variable. Since I'm a fan of VIM, all I need to do is make sure I've got export EDITOR=vim in my .bashrc or something along those lines. Once I do that, I can hit Ctrl+x Ctrl+e anytime I am using Bash and have a smelly, long command I want to manipulate.
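In other words, one line in your ~/.bashrc (or wherever you keep such things) does the trick:

export EDITOR=vim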

See it in action.

OSX, Growl, And Subversion

Today I found myself trying to figure out how to make a terminal window a permanent fixture on my desktop or dashboard in OSX, similar to what I've done in the past on Linux. I just wanted a terminal window monitoring things in the background for me. Actually, all I wanted was to keep track of when my local working copy of our Subversion repository fell out of sync. I wanted a solution that would stay out of my way, but I also wanted it to be easy.

My search seemed to be over almost as soon as it began, when Google suggested a dashboard widget for the Terminal application. The problem was that the widget's download server was either dead or blocked by my company's Internet filter. One way or another, it wasn't long before I went in search of another solution.

At that very instant, I received a Growl notification from some program. That's when it dawned on me--I could tell Growl to tell me when my working copy was out of sync. I had done stuff like that in the past, so I set out to write my solution. This is what I came up with:

#!/bin/bash
MY_BOX=[my IP address]
DEV_ROOT='/path/to/svn/working copy'

# The path may have spaces in it, so the quotes here matter.
cd "$DEV_ROOT" || exit 1

# Most recent revision in the local working copy...
MY_REV=`svn log --limit 1 | awk '/^r/ {print $1}' | sed 's/[^0-9]//g'`
# ...and the latest revision in the repository itself.
SVN_REV=`svn log --limit 1 -r HEAD | awk '/^r/ {print $1}' | sed 's/[^0-9]//g'`

# If they differ, ssh back to the Mac and pop up a Growl notification.
if [[ $MY_REV != $SVN_REV ]]; then
    ssh username@$MY_BOX "growlnotify -s -d47111 -n 'iTerm' -t 'Out Of Sync' -m 'Your working copy is out of sync.  Repository is at revision $SVN_REV, and your working copy is at $MY_REV.'"
fi

Now, a little bit about my environment. As I've mentioned before, all of our development really takes place on Linux-powered virtual machines. We simply use our Macs as the system to interact with those virtual machines. That is why there's the ssh line in that script.

Basically, this script just checks the most recent revision in your local working copy. Then it checks the latest revision in the repository itself. It compares the two revision numbers, and if it finds a difference, it will SSH into my OSX box to send me a Growl notification. On the OSX side, I have Growl and growlnotify installed. Here's a summary of the options to growlnotify:

  • -s: make the notification sticky--don't hide the notification until the user specifically closes it.
  • -d47111: a unique identifier for the notification. This makes it so you can send the same message over and over and it would update any existing notifications with that ID instead of creating a new notification (unless one doesn't exist already).
  • -n 'iTerm': I believe this was supposed to be the "source" application. I don't remember right now.
  • -t 'Out Of Sync': The title for the notification.
  • -m 'Your working copy...': The message to send to my Mac.

This is a fabulous little reminder to me. I have it set up as a cronjob that runs every minute on my Linux-powered development virtual machine.
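For reference, the cronjob itself is nothing fancy. Assuming you saved the script above as ~/bin/svn_growl_check.sh (the name and location are placeholders; use whatever you like), the crontab entry looks like this:

* * * * * $HOME/bin/svn_growl_check.sh

Hopefully this will help others!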

Checking In

I suppose I should update everyone out there about what I've been up to lately. It seems strange to me that I post articles much less frequently now than I did when I was a full-time university student. You'd think I'd have a whole lot more time to blog about whatever I've been working on. I suppose I do indeed have that time; it's just that I usually like to wait until my projects are "ready" for the public before I write about them.

The biggest reason I haven't posted much of anything lately is a small Twitter client I've been working on. Its purpose is to be a simple, out-of-the-way Twitter client that works equally well on Windows, Linux, and OSX. The application is written in Python and wxPython, and it has been coming along quite well. It works great in Linux (in GNOME and KDE at least), but Windows and OSX have issues with windows stealing focus when I don't want them to. I'm still trying to figure it out--any advice would be greatly appreciated.

Chirpy, as I've been calling this client, currently does nothing more than periodically check your Twitter accounts for updates. It notifies you of new updates using blinking buttons (which can be configured not to blink). I think the interface is pretty nice and easy to use, but I am its developer, so it's only proper that I think that way.

Anyway, that project has been sucking up a lot of my free time. It's been frustrating to build it on Linux only to find that Windows and OSX both act stupidly when I go to test it. That frustration inspired me to tinker with a different approach to a Twitter client. I began fooling around with it last night, and I think the idea has turned out to be more useful than Chirpy is after a month of development!

I'm calling this new project "Tim", which is short for "Twitter IM". This one also periodically checks your Twitter account(s) for updates (of course). However, Tim will send any Twitter updates to any Jabber-enabled instant messenger client that you are signed into. If you're like me, you have Google Talk open most of the day, so you can just have Twitter updates go straight there! You can also post updates to Twitter using your Jabber instant messenger when Tim is running by simply sending a message back!!

The really neat stuff comes in when you start to consider the commands that I've added to Tim tonight. I've made it possible for you to filter out certain hashtags, follow/unfollow users, and specify from which Twitter account to post updates (when you have multiple accounts enabled). I hate all of those #FollowFriday tweets... they drive me crazy. So all I have to do is type ./filter followfriday and no tweet that contains #FollowFriday will be sent to my Jabber client. I love it.

More commands are on the way, as is a friendlier interface for configuring Tim. Getting it up and running the first time is... a little less than pleasant :) Once you have it configured, though, it seems to work pretty well.

If you're interested in trying it out, just head on over to the project's page (http://bitbucket.org/codekoala/twitter-im/). Windows users can download an installer from the Downloads tab. I plan on putting up a DMG a little later tonight for OSX users. Linux users can download the .tar.gz file and install the normal Python way :) Enjoy!

Update: The DMG for OSX is a little bigger than I thought it would be, so I won't be hosting it on bitbucket. Instead, you can download it from my server.

Don't forget to read the README!!!