<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>Code Koala</title><link>http://www.codekoala.com/</link><description>Ramblings of yet another nerd</description><atom:link type="application/rss+xml" href="http://www.codekoala.com/rss.xml" rel="self"></atom:link><language>en</language><lastBuildDate>Mon, 14 Dec 2015 22:04:41 GMT</lastBuildDate><generator>https://getnikola.com/</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Thoughts on Docker</title><link>http://www.codekoala.com/posts/thoughts-on-docker/</link><dc:creator>Josh VanderLinden</dc:creator><description>&lt;div&gt;&lt;p&gt;&lt;a class="reference external" href="https://www.docker.com/"&gt;Docker&lt;/a&gt; has been causing a lot of ripples in all
sorts of ponds in recent years. I first started playing with it nearly a year
ago now, after hearing about it from someone else at work. At first I didn't
really understand what problems it was trying to solve. The more I played with
it, however, the more interesting it became.&lt;/p&gt;
&lt;div class="section" id="gripes-about-docker"&gt;
&lt;h2&gt;Gripes About Docker&lt;/h2&gt;
&lt;p&gt;There were plenty of things that I didn't care for about Docker. The most
prominent strike against it was how slow it was to start, stop, and destroy
containers. I soon learned that if I store my Docker data on a btrfs partition,
things become &lt;em&gt;much&lt;/em&gt; faster. And it was great! Things that used to take 10
minutes started taking 2 or 3 minutes. Very significant improvement.&lt;/p&gt;
&lt;p&gt;But then it was still slow to actually &lt;em&gt;build&lt;/em&gt; any containers that are less
than trivial. For example, we've been using Docker for one of my side projects
since April 2014 (coming from Vagrant). Installing all of the correct packages
and whatnot inside of our base Docker image took several minutes. Much longer
than it does on iron or even in virtual machines. It was just slow. Anytime we
had to update dependencies, we'd invalidate the image cache and spend a large
chunk of time just waiting for an image to build. It was/is painful.&lt;/p&gt;
&lt;p&gt;On top of that, pushing and pulling from the public registry is much slower
than a lot of us would like it to be. We set up a private registry for that
side project, but it was still slower than it should be for something like
that.&lt;/p&gt;
&lt;p&gt;Many of you reading this article have probably read most or all of those gripes
from other Docker critics. They're fairly common complaints.&lt;/p&gt;
&lt;p&gt;Lately, one of the things about using Docker for development that's become
increasingly frustrating is communication between containers on different
hosts. Docker uses environment variables to tell one container how to reach
services on another container running on the same host. Using environment
variables is a great way to avoid hardcoding IPs and ports in your
applications. I love it. However, when your development environment consists of
8+ distinct containers, the behavior around those environment variables is
annoying (in my opinion).&lt;/p&gt;
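As a concrete (and hypothetical) sketch of what those environment variables look like from an application's point of view -- the names below follow the pattern Docker's legacy links generate, assuming a container linked under the alias db exposing PostgreSQL on 5432:

```python
import os

# Hypothetical variable names following Docker's legacy link pattern
# (ALIAS_PORT_<port>_TCP_ADDR / ALIAS_PORT_<port>_TCP_PORT). They are
# only populated for containers linked on the SAME host, which is the
# pain point with 8+ containers spread across machines.
db_host = os.environ.get("DB_PORT_5432_TCP_ADDR", "127.0.0.1")
db_port = int(os.environ.get("DB_PORT_5432_TCP_PORT", "5432"))

print("connecting to %s:%d" % (db_host, db_port))
```

Multiply that stitching by every service-to-service connection and the annoyance above becomes clearer.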
&lt;/div&gt;
&lt;div class="section" id="looking-for-alternatives"&gt;
&lt;h2&gt;Looking For Alternatives&lt;/h2&gt;
&lt;p&gt;I don't really feel like going into more detail on that right now. Let's just
say it was frustrating enough for me to look at alternatives (more out of
curiosity than really wanting to switch away from Docker). This search led me
to straight &lt;a class="reference external" href="https://linuxcontainers.org/"&gt;Linux containers (LXC)&lt;/a&gt;, upon
which Docker was originally built.&lt;/p&gt;
&lt;p&gt;I remembered trying to use LXC for a little while back in 2012, and it wasn't a
very successful endeavor--probably because I didn't understand containers very
well at the time. I also distinctly remember being very fond of Docker when I
first tried it because it made LXC easy to use. That's actually how I pitched
it to folks.&lt;/p&gt;
&lt;p&gt;Long story short, I have been playing with LXC for a while now. I'm
quite happy with it this time around. It seems to better fit the bill for most
of the things we have been doing with Docker. In my limited experience with LXC
so far, it's generally faster, more flexible, and more mature than Docker.&lt;/p&gt;
&lt;p&gt;What proof do I have that it's faster? I have no hard numbers right now, but
building one of our Docker images could take anywhere from 10 to 20 minutes.
And that was &lt;em&gt;building on top&lt;/em&gt; of an already existing base image. The base
image took a few minutes to build too, but it was built much less regularly
than this other image. So 10-20 minutes just to install the
application-specific packages. Not the core packages. Not to configure things.
Just to install additional packages.&lt;/p&gt;
&lt;p&gt;Building an entire LXC container from scratch, installing all dependencies, and
configuring basically an all-in-one version of the 8 different containers
(along with a significant number of other things for monitoring and such) has
consistently taken less than 3 minutes on my 2010 laptop. The speed difference
is phenomenal, and I don't even need btrfs. Launching the full container is
basically as fast as launching a single-purpose Docker container.&lt;/p&gt;
&lt;p&gt;What proof do I have that LXC is more flexible than Docker? Have you tried
running systemd inside of a Docker container? Yeah, it's not the most intuitive
thing in the world (or at least it wasn't the last time I bothered to try it).
LXC will let you use systemd without any fuss (that I've noticed, anyway). This
probably isn't the greatest example of flexibility in the world of containers,
but it certainly works for me.&lt;/p&gt;
&lt;p&gt;You also get some pretty interesting networking options, from what I've read. Not
all of your containers need to be NAT'ed. Some can be NAT'ed and some can be
bridged to appear on the same network as the host. I'm still exploring all of
these goodies, so don't ask for details about them from me just yet ;)&lt;/p&gt;
&lt;p&gt;What proof do I have that LXC is more mature than Docker? Prior to Docker
version 0.9, its &lt;a class="reference external" href="http://www.infoq.com/news/2014/03/docker_0_9"&gt;default execution environment was LXC&lt;/a&gt;. Version 0.9 introduced
&lt;tt class="docutils literal"&gt;libcontainer&lt;/tt&gt;, which eliminated Docker's need for LXC. The LXC project has
been around since August 2008; Docker has been around since March 2013. That's
nearly 5 entire years that LXC has had to mature before Docker was even a
thing.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="what-now"&gt;
&lt;h2&gt;What Now?&lt;/h2&gt;
&lt;p&gt;Does all of this mean I'll never use Docker again? That I'll use LXC for
everything that Docker used to handle for me? No. &lt;strong&gt;I will still continue to
use Docker&lt;/strong&gt; for the foreseeable future. I'll just be more particular about
when I use it vs when I use LXC.&lt;/p&gt;
&lt;p&gt;I still find Docker to be incredibly useful and valuable. I don't think it's as
suitable for long-running development environments or for replacing a fair amount
of what folks have been using Vagrant to do. It can certainly handle that
stuff, but LXC seems better suited to the task, at least in my experience.&lt;/p&gt;
&lt;p&gt;Why do I think Docker is still useful and valuable? Well, let me share an
example from work. We occasionally use a program with rather silly Java
requirements. It requires a specific revision, and it must be 32-bit. It's
really dumb. Installing and using this program on Ubuntu is really quite easy.
Using the program on CentOS, however, is ... quite an adventure. But not an
adventure you really want to take. You just want to use that program.&lt;/p&gt;
&lt;p&gt;All I had to do was compose a Dockerfile based on Ubuntu, toss a couple apt-get
lines in there, build an image, and push it to our registry. Now any of our
systems with Docker installed can happily use that program without having to
deal with any of the particularities of that one program. The only real
requirement now is an operational installation of Docker.&lt;/p&gt;
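A sketch of what such a Dockerfile might look like -- the package names, versions, and program path here are made up, since the post doesn't include the real file:

```dockerfile
# Hypothetical sketch -- the actual 32-bit Java package and program
# aren't named in the post; these lines are illustrative only.
FROM ubuntu:14.04

RUN dpkg --add-architecture i386 && \
    apt-get update && \
    apt-get install -y openjdk-7-jre:i386 && \
    rm -rf /var/lib/apt/lists/*

CMD ["java", "-jar", "/opt/that-program.jar"]
```

Build it once, push it to the registry, and every Docker host can run the program without touching the host OS.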
&lt;p&gt;Doing something like that is certainly doable with LXC, but it's not quite as
cut and dried. In addition to having LXC installed, you also have to make sure
that the container configuration file is suitable for each system where the
program will run. This means making sure there's a bridged network adapter on
the host, that the configuration file uses the correct interface name, that it
doesn't try to use an IP address that's already claimed, and so on.&lt;/p&gt;
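To illustrate, the per-host bits live in lines like these in the container's config file (LXC 1.x-era syntax; the bridge name and IPs are examples and would need adjusting on every machine):

```ini
# Per-host network settings -- each value below may differ from
# machine to machine, which is exactly the friction described above.
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.50/24
lxc.network.ipv4.gateway = 192.168.1.1
```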
&lt;p&gt;Also, Docker gives you port forwarding, bind mounts, and other good stuff with
some simple command line parameters. Again, port forwarding and bind mounts are
perfectly doable with straight LXC, but it's more complicated than just passing
some additional command line parameters.&lt;/p&gt;
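For comparison, on the Docker side port forwarding and bind mounts really are just flags (the image name and paths here are illustrative):

```shell
# Forward host port 8080 to container port 80, and bind-mount a host
# directory into the container -- one command, no config file edits.
docker run -d -p 8080:80 -v /srv/app/data:/data registry.example.com/that-java-app
```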
&lt;p&gt;Anyway. I just wanted to get that out there. LXC will likely replace most of my
Linux-based virtual machines for the next while, but Docker still has a place
in my toolbox.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;</description><category>development</category><category>docker</category><category>linux</category><category>lxc</category><guid>http://www.codekoala.com/posts/thoughts-on-docker/</guid><pubDate>Sun, 15 Feb 2015 22:27:42 GMT</pubDate></item><item><title>"systemctl status foo" was too slow</title><link>http://www.codekoala.com/posts/systemctl-status-foo-was-too-slow/</link><dc:creator>Josh VanderLinden</dc:creator><description>&lt;div&gt;&lt;p&gt;For quite a while now, running any sort of &lt;tt class="docutils literal"&gt;systemctl status foo&lt;/tt&gt; command
seemed to take &lt;em&gt;forever&lt;/em&gt; on any and all of my systems. That exact command would
sometimes take as long as 30 seconds to complete, despite &lt;tt class="docutils literal"&gt;foo&lt;/tt&gt; not
even being an available service. I noticed it more on my aging laptop than on
my other systems, but I just attributed the slowness to my hard drive maybe
preparing to fail.&lt;/p&gt;
&lt;p&gt;Anyway, I finally got frustrated enough to actually put some effort into seeing
what the problem might really be and how I could avoid the terrible delay for
something so simple. It dawned on me that the actual status of the service was
coming back pretty fast, but getting any recent output from the service is what
took forever.&lt;/p&gt;
&lt;p&gt;This led me to look into systemd's journald. I checked the
&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;/var/log/journal/xxxxx...&lt;/span&gt;&lt;/tt&gt; directory on my laptop. It was massive--4.5GB of
accumulated logs. I know better than to just go deleting files out from under a
running process, so I looked into ways to simply truncate the logs. This led me
to a few pages that all suggested that I modify &lt;tt class="docutils literal"&gt;/etc/systemd/journald.conf&lt;/tt&gt;
to optimize things a bit.&lt;/p&gt;
&lt;p&gt;The configuration options that I kept seeing were &lt;tt class="docutils literal"&gt;SystemMaxUse&lt;/tt&gt; and
&lt;tt class="docutils literal"&gt;RuntimeMaxUse&lt;/tt&gt;. When I set these both to &lt;tt class="docutils literal"&gt;10M&lt;/tt&gt; and restarted journald
(&lt;tt class="docutils literal"&gt;systemctl restart &lt;span class="pre"&gt;systemd-journald&lt;/span&gt;&lt;/tt&gt;), my &lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;/var/log/journal/xxxxx...&lt;/span&gt;&lt;/tt&gt;
directory was nice and tidy again. And &lt;tt class="docutils literal"&gt;systemctl status foo&lt;/tt&gt;-like commands
returned muuuch faster.&lt;/p&gt;
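For reference, the relevant fragment of /etc/systemd/journald.conf ends up looking like this:

```ini
# /etc/systemd/journald.conf -- cap the persistent journal on disk
# (SystemMaxUse) and the runtime journal in /run (RuntimeMaxUse).
[Journal]
SystemMaxUse=10M
RuntimeMaxUse=10M
```

Then restart journald (systemctl restart systemd-journald) for the limits to take effect.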
&lt;p&gt;I suppose I'll be adding this stuff to my configuration script!&lt;/p&gt;&lt;/div&gt;</description><category>journald</category><category>systemd</category><guid>http://www.codekoala.com/posts/systemctl-status-foo-was-too-slow/</guid><pubDate>Thu, 14 Aug 2014 06:35:37 GMT</pubDate></item><item><title>uWSGI FastRouter and nginx</title><link>http://www.codekoala.com/posts/uwsgi-fastrouter-and-nginx/</link><dc:creator>Josh VanderLinden</dc:creator><description>&lt;div&gt;&lt;p&gt;Lately I've been spending a lot of time playing with &lt;a class="reference external" href="http://www.docker.io/"&gt;Docker&lt;/a&gt;, particularly with
Web UIs and "clustering" APIs. I've been using &lt;a class="reference external" href="http://www.nginx.com/"&gt;Nginx&lt;/a&gt; and &lt;a class="reference external" href="http://uwsgi-docs.readthedocs.org/en/latest/"&gt;uWSGI&lt;/a&gt; for most of my
sites for quite some time now. My normal go-to for distributing load is with
nginx's &lt;a class="reference external" href="http://nginx.com/resources/admin-guide/load-balancer/"&gt;upstream&lt;/a&gt; directive.&lt;/p&gt;
&lt;p&gt;This directive can be used to specify the address/socket of backend services
that should handle the same kinds of requests. You can configure the load
balancing pretty nicely right out of the box. However, when using Docker
containers, you don't always know the exact IP for the container(s) powering
your backend.&lt;/p&gt;
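A minimal sketch of that upstream setup, with hypothetical backend addresses -- this is what breaks down when container IPs change under you:

```nginx
# Round-robin a pool of uWSGI backends; addresses are illustrative.
upstream app_backends {
    server 10.0.0.11:8000;
    server 10.0.0.12:8000;
}

server {
    listen 80;
    location / {
        include     /etc/nginx/uwsgi_params;
        uwsgi_pass  app_backends;
    }
}
```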
&lt;p&gt;I played around with some fun ways to automatically update the nginx
configuration and reload nginx each time a backend container appeared or
disappeared. This was really, really cool to see in action (since I'd never
attempted it before). But it seemed like there had to be a better way.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://mongrel2.org/"&gt;Mongrel2&lt;/a&gt; came to mind. I've played with it in the past, and it seemed to
handle my use cases quite nicely until I tried using it &lt;a class="reference external" href="https://www.virtualbox.org/ticket/9069"&gt;with VirtualBox's
shared folders&lt;/a&gt;. At the time, it wasn't quite as flexible as nginx when it
came to working with those shared folders (might still be the case). Anyway,
the idea of having a single frontend that could seamlessly pass work along to
any number of workers without being reconfigured and/or restarted seemed like
the ideal solution.&lt;/p&gt;
&lt;p&gt;As I was researching other Mongrel2-like solutions, I stumbled upon yet another
mind-blowing feature tucked away in uWSGI: &lt;a class="reference external" href="http://uwsgi-docs.readthedocs.org/en/latest/Fastrouter.html"&gt;The uWSGI FastRouter&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This little gem makes it super easy to get the same sort of functionality that
Mongrel2 offers. Basically, you create a single uWSGI app that will route
requests to the appropriate workers based on the domain being requested.
Workers can "subscribe" to that app to be added to the round-robin pool of
available backends. Any given worker app can actually serve requests for more
than one domain if you so desire.&lt;/p&gt;
&lt;p&gt;On the nginx side of things, all you need to do is use something like
&lt;tt class="docutils literal"&gt;uwsgi_pass&lt;/tt&gt; with the router app's socket. That's it. You can then spawn
thousands of worker apps without ever restarting nginx or the router app. Whoa.&lt;/p&gt;
&lt;p&gt;So let's dig into an example. First, some prerequisites. I'm currently using:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;nginx 1.6.0&lt;/li&gt;
&lt;li&gt;uwsgi 2.0.4&lt;/li&gt;
&lt;li&gt;bottle 0.12.7&lt;/li&gt;
&lt;li&gt;Python 3.4.1&lt;/li&gt;
&lt;li&gt;Arch Linux&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The first thing we want is that router app. Here's a uWSGI configuration file
I'm using:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://www.codekoala.com/listings/uwsgi-fastrouter/router.ini.html"&gt;uwsgi-fastrouter/router.ini&lt;/a&gt;&lt;/p&gt;
&lt;pre class="code ini"&gt;&lt;a name="rest_code_3c8b52f799a3434c9a0d601cfe9cae3c-1"&gt;&lt;/a&gt;&lt;span class="k"&gt;[uwsgi]&lt;/span&gt;
&lt;a name="rest_code_3c8b52f799a3434c9a0d601cfe9cae3c-2"&gt;&lt;/a&gt;&lt;span class="na"&gt;plugins&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;fastrouter&lt;/span&gt;
&lt;a name="rest_code_3c8b52f799a3434c9a0d601cfe9cae3c-3"&gt;&lt;/a&gt;&lt;span class="na"&gt;master&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;true&lt;/span&gt;
&lt;a name="rest_code_3c8b52f799a3434c9a0d601cfe9cae3c-4"&gt;&lt;/a&gt;&lt;span class="na"&gt;shared-socket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1:3031&lt;/span&gt;
&lt;a name="rest_code_3c8b52f799a3434c9a0d601cfe9cae3c-5"&gt;&lt;/a&gt;&lt;span class="na"&gt;fastrouter-subscription-server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0:2626&lt;/span&gt;
&lt;a name="rest_code_3c8b52f799a3434c9a0d601cfe9cae3c-6"&gt;&lt;/a&gt;&lt;span class="na"&gt;fastrouter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;=0&lt;/span&gt;
&lt;a name="rest_code_3c8b52f799a3434c9a0d601cfe9cae3c-7"&gt;&lt;/a&gt;&lt;span class="na"&gt;fastrouter-cheap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;true&lt;/span&gt;
&lt;a name="rest_code_3c8b52f799a3434c9a0d601cfe9cae3c-8"&gt;&lt;/a&gt;&lt;span class="na"&gt;vacuum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;true&lt;/span&gt;
&lt;a name="rest_code_3c8b52f799a3434c9a0d601cfe9cae3c-9"&gt;&lt;/a&gt;
&lt;a name="rest_code_3c8b52f799a3434c9a0d601cfe9cae3c-10"&gt;&lt;/a&gt;&lt;span class="c1"&gt;# vim:ft=dosini et ts=2 sw=2 ai:&lt;/span&gt;
&lt;/pre&gt;&lt;p&gt;So, quick explanation of the interesting parts:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;shared-socket&lt;/span&gt;&lt;/tt&gt;: we're setting up a shared socket on &lt;tt class="docutils literal"&gt;127.0.0.1:3031&lt;/tt&gt;.
This is the socket that we'll use with nginx's &lt;tt class="docutils literal"&gt;uwsgi_pass&lt;/tt&gt; directive, and
it's also used for our &lt;tt class="docutils literal"&gt;fastrouter&lt;/tt&gt; socket (&lt;tt class="docutils literal"&gt;=0&lt;/tt&gt; implies that we're using
socket 0).&lt;/li&gt;
&lt;li&gt;&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;fastrouter-subscription-server&lt;/span&gt;&lt;/tt&gt;: this is how we make it possible for our
worker apps to become candidates to serve requests.&lt;/li&gt;
&lt;li&gt;&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;fastrouter-cheap&lt;/span&gt;&lt;/tt&gt;: this disables the fastrouter when we have no subscribed
workers. Supposedly, you can get the actual fastrouter app to also be a
subscriber automatically, but I was unable to get this working properly.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now let's look at a sample worker app configuration:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://www.codekoala.com/listings/uwsgi-fastrouter/worker.ini.html"&gt;uwsgi-fastrouter/worker.ini&lt;/a&gt;&lt;/p&gt;
&lt;pre class="code ini"&gt;&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-1"&gt;&lt;/a&gt;&lt;span class="k"&gt;[uwsgi]&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-2"&gt;&lt;/a&gt;&lt;span class="na"&gt;plugins&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;python&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-3"&gt;&lt;/a&gt;&lt;span class="na"&gt;master&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;true&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-4"&gt;&lt;/a&gt;&lt;span class="na"&gt;processes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;2&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-5"&gt;&lt;/a&gt;&lt;span class="na"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;4&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-6"&gt;&lt;/a&gt;&lt;span class="na"&gt;heartbeat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;10&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-7"&gt;&lt;/a&gt;&lt;span class="na"&gt;socket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;192.*:0&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-8"&gt;&lt;/a&gt;&lt;span class="na"&gt;subscribe2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;server=127.0.0.1:2626,key=foo.com&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-9"&gt;&lt;/a&gt;&lt;span class="na"&gt;wsgi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-10"&gt;&lt;/a&gt;&lt;span class="na"&gt;vacuum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;true&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-11"&gt;&lt;/a&gt;&lt;span class="na"&gt;harakiri&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;10&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-12"&gt;&lt;/a&gt;&lt;span class="na"&gt;max-requests&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;100&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-13"&gt;&lt;/a&gt;&lt;span class="na"&gt;logformat&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;%(addr) - %(user) [%(ltime)] "%(method) %(uri) %(proto)" %(status) %(size) "%(referer)" "%(uagent)"&lt;/span&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-14"&gt;&lt;/a&gt;
&lt;a name="rest_code_24c1399bdb2941b69739a894d5aa9cc5-15"&gt;&lt;/a&gt;&lt;span class="c1"&gt;# vim:ft=dosini et ts=2 sw=2 ai:&lt;/span&gt;
&lt;/pre&gt;&lt;ul class="simple"&gt;
&lt;li&gt;&lt;tt class="docutils literal"&gt;socket&lt;/tt&gt;: we're automatically allocating a socket on our NIC with an IP
address that looks like &lt;tt class="docutils literal"&gt;192.x.x.x&lt;/tt&gt;. This whole syntax was a new discovery
for me as part of this project! Neat stuff!!&lt;/li&gt;
&lt;li&gt;&lt;tt class="docutils literal"&gt;subscribe2&lt;/tt&gt;: this is one of the ways that we can subscribe to our
fastrouter. Based on the &lt;tt class="docutils literal"&gt;server=127.0.0.1:2626&lt;/tt&gt; bit, we're working on the
assumption that the fastrouter and workers are all going to be running on the
same host. The &lt;tt class="docutils literal"&gt;key=foo.com&lt;/tt&gt; is how our router app knows which domain a
worker will serve requests for.&lt;/li&gt;
&lt;li&gt;&lt;tt class="docutils literal"&gt;wsgi&lt;/tt&gt;: our simple &lt;a class="reference external" href="http://www.bottlepy.org/"&gt;Bottle&lt;/a&gt; application.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now let's look at our minimal Bottle application:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://www.codekoala.com/listings/uwsgi-fastrouter/app.py.html"&gt;uwsgi-fastrouter/app.py&lt;/a&gt;&lt;/p&gt;
&lt;pre class="code python"&gt;&lt;a name="rest_code_6a361ee9699a4e71bb0a021147d4f6d3-1"&gt;&lt;/a&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;bottle&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;default_app&lt;/span&gt;
&lt;a name="rest_code_6a361ee9699a4e71bb0a021147d4f6d3-2"&gt;&lt;/a&gt;
&lt;a name="rest_code_6a361ee9699a4e71bb0a021147d4f6d3-3"&gt;&lt;/a&gt;
&lt;a name="rest_code_6a361ee9699a4e71bb0a021147d4f6d3-4"&gt;&lt;/a&gt;&lt;span class="n"&gt;application&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;default_app&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;a name="rest_code_6a361ee9699a4e71bb0a021147d4f6d3-5"&gt;&lt;/a&gt;&lt;span class="n"&gt;application&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;catchall&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;
&lt;a name="rest_code_6a361ee9699a4e71bb0a021147d4f6d3-6"&gt;&lt;/a&gt;
&lt;a name="rest_code_6a361ee9699a4e71bb0a021147d4f6d3-7"&gt;&lt;/a&gt;
&lt;a name="rest_code_6a361ee9699a4e71bb0a021147d4f6d3-8"&gt;&lt;/a&gt;&lt;span class="nd"&gt;@route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;a name="rest_code_6a361ee9699a4e71bb0a021147d4f6d3-9"&gt;&lt;/a&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
&lt;a name="rest_code_6a361ee9699a4e71bb0a021147d4f6d3-10"&gt;&lt;/a&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;'Hello World!'&lt;/span&gt;
&lt;/pre&gt;&lt;p&gt;All very simple. The main thing to point out here is that we've imported the
&lt;tt class="docutils literal"&gt;default_app&lt;/tt&gt; function from &lt;tt class="docutils literal"&gt;bottle&lt;/tt&gt; and use it to create an
&lt;tt class="docutils literal"&gt;application&lt;/tt&gt; instance that uWSGI's &lt;tt class="docutils literal"&gt;wsgi&lt;/tt&gt; option will use automatically.&lt;/p&gt;
&lt;p&gt;Finally, our nginx configuration:&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://www.codekoala.com/listings/uwsgi-fastrouter/nginx.conf.html"&gt;uwsgi-fastrouter/nginx.conf&lt;/a&gt;&lt;/p&gt;
&lt;pre class="code nginx"&gt;&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-1"&gt;&lt;/a&gt;&lt;span class="k"&gt;daemon&lt;/span&gt;                  &lt;span class="no"&gt;off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-2"&gt;&lt;/a&gt;&lt;span class="k"&gt;master_process&lt;/span&gt;          &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-3"&gt;&lt;/a&gt;&lt;span class="k"&gt;worker_processes&lt;/span&gt;        &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-4"&gt;&lt;/a&gt;&lt;span class="k"&gt;pid&lt;/span&gt;                     &lt;span class="s"&gt;nginx.pid&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-5"&gt;&lt;/a&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-6"&gt;&lt;/a&gt;&lt;span class="k"&gt;events&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-7"&gt;&lt;/a&gt;    &lt;span class="kn"&gt;worker_connections&lt;/span&gt;  &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-8"&gt;&lt;/a&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-9"&gt;&lt;/a&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-10"&gt;&lt;/a&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-11"&gt;&lt;/a&gt;&lt;span class="k"&gt;http&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-12"&gt;&lt;/a&gt;    &lt;span class="kn"&gt;include&lt;/span&gt;             &lt;span class="s"&gt;/etc/nginx/mime.types&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-13"&gt;&lt;/a&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-14"&gt;&lt;/a&gt;    &lt;span class="kn"&gt;access_log&lt;/span&gt;          &lt;span class="s"&gt;./access.log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-15"&gt;&lt;/a&gt;    &lt;span class="kn"&gt;error_log&lt;/span&gt;           &lt;span class="s"&gt;./error.log&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-16"&gt;&lt;/a&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-17"&gt;&lt;/a&gt;    &lt;span class="kn"&gt;default_type&lt;/span&gt;        &lt;span class="s"&gt;application/octet-stream&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-18"&gt;&lt;/a&gt;    &lt;span class="kn"&gt;gzip&lt;/span&gt;                &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-19"&gt;&lt;/a&gt;    &lt;span class="kn"&gt;sendfile&lt;/span&gt;            &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-20"&gt;&lt;/a&gt;    &lt;span class="kn"&gt;keepalive_timeout&lt;/span&gt;   &lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-21"&gt;&lt;/a&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-22"&gt;&lt;/a&gt;    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-23"&gt;&lt;/a&gt;        &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-24"&gt;&lt;/a&gt;        &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt; &lt;span class="s"&gt;foo.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-25"&gt;&lt;/a&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-26"&gt;&lt;/a&gt;        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-27"&gt;&lt;/a&gt;            &lt;span class="kn"&gt;include&lt;/span&gt;     &lt;span class="s"&gt;/etc/nginx/uwsgi_params&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-28"&gt;&lt;/a&gt;            &lt;span class="kn"&gt;uwsgi_pass&lt;/span&gt;  &lt;span class="n"&gt;127.0.0.1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;3031&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-29"&gt;&lt;/a&gt;        &lt;span class="p"&gt;}&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-30"&gt;&lt;/a&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-31"&gt;&lt;/a&gt;        &lt;span class="kn"&gt;error_page&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt; &lt;span class="mi"&gt;502&lt;/span&gt; &lt;span class="mi"&gt;503&lt;/span&gt; &lt;span class="mi"&gt;504&lt;/span&gt; &lt;span class="s"&gt;/50x.html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-32"&gt;&lt;/a&gt;        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;/50x.html&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-33"&gt;&lt;/a&gt;            &lt;span class="kn"&gt;root&lt;/span&gt; &lt;span class="s"&gt;/usr/share/nginx/html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-34"&gt;&lt;/a&gt;        &lt;span class="p"&gt;}&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-35"&gt;&lt;/a&gt;    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-36"&gt;&lt;/a&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-37"&gt;&lt;/a&gt;
&lt;a name="rest_code_c1165905e5944f618c7ec93df4809d20-38"&gt;&lt;/a&gt;&lt;span class="c1"&gt;# vim:filetype=nginx:&lt;/span&gt;
&lt;/pre&gt;&lt;p&gt;Nothing too special about this configuration. The only thing worth pointing
out is the &lt;tt class="docutils literal"&gt;uwsgi_pass&lt;/tt&gt; directive, which uses the same address we provided to our router's
&lt;tt class="docutils literal"&gt;&lt;span class="pre"&gt;shared-socket&lt;/span&gt;&lt;/tt&gt; option. Also note that nginx will bind to port &lt;tt class="docutils literal"&gt;80&lt;/tt&gt; by
default, so you'll need root access to run it.&lt;/p&gt;
&lt;p&gt;Now let's run it all! In different terminal windows, run each of the following
commands:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
sudo nginx -c nginx.conf -p $(pwd)
uwsgi --ini router.ini
uwsgi --ini worker.ini
&lt;/pre&gt;
&lt;p&gt;If all goes well, you should see no output from the nginx command. The router
app should have some output that looks something like this:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
spawned uWSGI master process (pid: 4367)
spawned uWSGI fastrouter 1 (pid: 4368)
[uwsgi-subscription for pid 4368] new pool: foo.com (hash key: 11571)
[uwsgi-subscription for pid 4368] foo.com =&amp;gt; new node: :58743
[uWSGI fastrouter pid 4368] leaving cheap mode...
&lt;/pre&gt;
&lt;p&gt;And your worker app should have output containing:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
subscribing to server=127.0.0.1:2626,key=foo.com
&lt;/pre&gt;
&lt;p&gt;For the purpose of this project, I quickly edited my &lt;tt class="docutils literal"&gt;/etc/hosts&lt;/tt&gt; file to
include &lt;tt class="docutils literal"&gt;foo.com&lt;/tt&gt; as an alias for &lt;tt class="docutils literal"&gt;127.0.0.1&lt;/tt&gt;. Once you have something like
that in place, you should be able to hit the nginx site and see requests logged
in your worker app's terminal:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
curl foo.com
&lt;/pre&gt;
&lt;p&gt;The really cool part is when you spin up another worker (same command as
before, since the port is automatically assigned). Again, there's no need to
restart nginx &lt;em&gt;or&lt;/em&gt; the router app--the new worker will be detected
automatically! After doing so, requests will be spread across all of
the subscribed workers.&lt;/p&gt;
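&lt;p&gt;To make the subscription mechanics concrete, here's a minimal sketch of what
such a worker config might look like. This is an illustrative guess based on the
output above, not the exact &lt;tt class="docutils literal"&gt;worker.ini&lt;/tt&gt; from this
project--the socket and application settings are assumptions:&lt;/p&gt;

```ini
[uwsgi]
; hypothetical worker sketch -- adjust the app settings to your own project
plugin = python
wsgi-file = app.py

; bind to a random free port so the same file works for every worker
socket = 127.0.0.1:0

; announce this worker to the FastRouter's subscription server,
; using the domain as the subscription key
subscribe-to = 127.0.0.1:2626:foo.com
```

&lt;p&gt;Because the port is chosen automatically, launching another worker is just a
matter of running &lt;tt class="docutils literal"&gt;uwsgi --ini worker.ini&lt;/tt&gt; again.&lt;/p&gt;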
&lt;p&gt;Here's a quick video of all of this in action, complete with multiple worker
apps subscribing to one router app. Pay close attention to the timestamps in
the worker windows.&lt;/p&gt;
&lt;div style="text-align: center"&gt;&lt;iframe width="425" height="344" src="//www.youtube.com/embed/i-AcAQaXMzw?rel=0&amp;amp;hd=1&amp;amp;wmode=transparent"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;p&gt;While this is all fine and dandy, there are a couple of things that seem like
they should have better options. Namely, I'd like to get the single
FastRouter+worker configuration working. I think it would also be nice to be
able to use host names or DNS entries for the workers to know how to connect to
the FastRouter instance. Any insight anyone can offer would be greatly
appreciated! I know I'm just scratching the surface of this feature!&lt;/p&gt;&lt;/div&gt;</description><category>clustering</category><category>docker</category><category>fastrouter</category><category>networking</category><category>nginx</category><category>opensource</category><category>programming</category><category>python</category><category>uwsgi</category><category>whoa</category><guid>http://www.codekoala.com/posts/uwsgi-fastrouter-and-nginx/</guid><pubDate>Wed, 28 May 2014 17:33:58 GMT</pubDate></item><item><title>Minion-Specific Data With etcd</title><link>http://www.codekoala.com/posts/minion-specific-data-with-etcd/</link><dc:creator>Josh VanderLinden</dc:creator><description>&lt;div&gt;&lt;p&gt;So I've been spending a fair amount of my free time lately learning more about
&lt;a class="reference external" href="http://www.saltstack.com/"&gt;salt&lt;/a&gt;, &lt;a class="reference external" href="http://www.docker.io/"&gt;docker&lt;/a&gt;, and &lt;a class="reference external" href="http://www.coreos.com/"&gt;CoreOS&lt;/a&gt;. Salt has been treating very well. I mostly only
use it at home, but more opportunities to use it at work are near.&lt;/p&gt;
&lt;p&gt;The first time I remember really hearing about &lt;a class="reference external" href="http://www.docker.io/"&gt;Docker&lt;/a&gt; was when one of my co-workers
tried using it for one of his projects. I didn't really spend much time with it
until after SaltConf earlier this year (where lots of others brought it up).
I'm pretty excited about Docker. I generally go out of my way to make sure my
stuff will work fine on various versions of Linux, and Docker makes testing on
various platforms insanely easy.&lt;/p&gt;
&lt;p&gt;&lt;a class="reference external" href="http://www.coreos.com/"&gt;CoreOS&lt;/a&gt; is one of my more recent discoveries. I stumbled upon it in the wee
hours of the night a few weeks ago, and I've been very curious to see how
CoreOS and my fairly limited knowledge of Docker could help me. For those of
you who haven't heard of CoreOS yet, it's kinda like a "hypervisor" for Docker
containers with some very cool clustering capabilities.&lt;/p&gt;
&lt;p&gt;I was able to attend a SaltStack and CoreOS meetup this past week. Most of the
CoreOS developers stopped by on their way to &lt;a class="reference external" href="http://gophercon.com/"&gt;GopherCon&lt;/a&gt;, and we all got to see
a very cool demo of CoreOS in action.&lt;/p&gt;
&lt;p&gt;One of the neat projects that the CoreOS folks have given us is called &lt;a class="reference external" href="https://github.com/coreos/etcd"&gt;etcd&lt;/a&gt;.
It is a "highly-available key value store for shared configuration and service
discovery." I'm still trying to figure out how to effectively use it, but what
I've seen of it is very cool. Automatic leader election, rapid synchronization,
built-in dashboard, written in Go.&lt;/p&gt;
&lt;p&gt;Anyway, I wanted to be able to use information stored in an etcd cluster in my
Salt states. &lt;a class="reference external" href="https://github.com/techhat"&gt;techhat&lt;/a&gt; committed some initial support for etcd in Salt about a
month ago, but the pillar support was a bit more limited than I had hoped. Last
night I submitted a pull request for getting minion-specific information out of
etcd. This won't be available for a little while--it's only in the develop
branch for now.&lt;/p&gt;
&lt;p&gt;To use it, you'll need a couple of things in your Salt master's configuration
file (&lt;tt class="docutils literal"&gt;/etc/salt/master&lt;/tt&gt;). First, you must configure your etcd host and
port. To use this information in our pillar, it must be defined as a named
profile. We'll call ours "local_etcd":&lt;/p&gt;
&lt;pre class="literal-block"&gt;
local_etcd:
  etcd.host: 127.0.0.1
  etcd.port: 4001
&lt;/pre&gt;
&lt;p&gt;Now we can tell Salt to fetch pillar information from this etcd server like so:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
ext_pillar:
  - etcd: local_etcd root=/salt
&lt;/pre&gt;
&lt;p&gt;Be sure to restart your Salt master after making these modifications. Let's add
some information to etcd to play with:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
etcdctl set salt/foo/bar baz
etcdctl set salt/foo/baz qux
&lt;/pre&gt;
&lt;p&gt;After doing so, you should be able to grab this information from any minion's
pillar:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
salt "*" pillar.items foo
test1:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux
test2:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux
&lt;/pre&gt;
&lt;p&gt;Ok, that's great! We've achieved shared information between etcd and our Salt
pillar. But what do we do to get minion-specific data out of etcd? Well, we
need to start by modifying our master's configuration again. Replace our
previous &lt;tt class="docutils literal"&gt;ext_pillar&lt;/tt&gt; config with the following:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
ext_pillar:
  - etcd: local_etcd root=/salt/shared
  - etcd: local_etcd root=/salt/private/%(minion_id)s
&lt;/pre&gt;
&lt;p&gt;Note that the original etcd root changed from &lt;tt class="docutils literal"&gt;/salt&lt;/tt&gt; to &lt;tt class="docutils literal"&gt;/salt/shared&lt;/tt&gt;. We
do this so we don't inadvertently end up with &lt;em&gt;all&lt;/em&gt; minion-specific information
from etcd in the shared pillar. Now let's put the sample data back in (again,
noting the addition of &lt;tt class="docutils literal"&gt;shared/&lt;/tt&gt;):&lt;/p&gt;
&lt;pre class="literal-block"&gt;
etcdctl set salt/shared/foo/bar baz
etcdctl set salt/shared/foo/baz qux
&lt;/pre&gt;
&lt;p&gt;To override the value of one of these keys for a specific minion, we can use
that minion's ID in the key:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
etcdctl set salt/private/test2/foo/baz demo
&lt;/pre&gt;
&lt;p&gt;Now when we inspect our pillar, it should look like this:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
salt "*" pillar.items foo
test1:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            qux
test2:
    ----------
    foo:
        ----------
        bar:
            baz
        baz:
            demo
&lt;/pre&gt;
&lt;p&gt;Notice that the value for &lt;tt class="docutils literal"&gt;foo.baz&lt;/tt&gt; is &lt;tt class="docutils literal"&gt;qux&lt;/tt&gt; for minion &lt;tt class="docutils literal"&gt;test1&lt;/tt&gt;, while
its value is &lt;tt class="docutils literal"&gt;demo&lt;/tt&gt; for &lt;tt class="docutils literal"&gt;test2&lt;/tt&gt;. Success!&lt;/p&gt;&lt;/div&gt;</description><category>etcd</category><category>programming</category><category>salt</category><guid>http://www.codekoala.com/posts/minion-specific-data-with-etcd/</guid><pubDate>Sun, 27 Apr 2014 23:35:33 GMT</pubDate></item><item><title>Whew.</title><link>http://www.codekoala.com/posts/whew/</link><dc:creator>Josh VanderLinden</dc:creator><description>&lt;div&gt;&lt;p&gt;I work on a test automation framework at my day job. It's Django-powered, and
there's a lot of neat stuff going on with it. I love building it!&lt;/p&gt;
&lt;p&gt;Anyway, yesterday during a meeting, I got an email from a co-worker who seemed
to be in a bit of a panic. He wrote that he accidentally deleted the wrong
thing, and, with Django on the backend, a nice cascading delete went with it
(why he ignored the confirmation page is beyond me). He asked if we had any
database backups that we could restore, also curious as to how long it would
take.&lt;/p&gt;
&lt;p&gt;Well, lucky for him (and me!), I decided &lt;em&gt;very&lt;/em&gt; early on while working on the
project that I would implement a custom database driver that never actually
deletes stuff (mostly for auditing purposes). Instead, it simply marks any
record the user asks to delete as inactive, thus hiding it from the UI. Along
with this, nightly database backups were put in place.&lt;/p&gt;
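&lt;p&gt;The idea behind that driver can be sketched in a few lines of plain Python.
This is a toy illustration of the soft-delete pattern, not the actual Django
driver--the &lt;tt class="docutils literal"&gt;SoftDeleteTable&lt;/tt&gt; name and its methods
are made up:&lt;/p&gt;

```python
# Toy sketch of soft deletion: "deleting" a row only flags it inactive,
# so the UI hides it but the data survives for auditing and undeletion.
class SoftDeleteTable:
    def __init__(self):
        self.rows = {}  # pk -> dict of column values

    def insert(self, pk, **values):
        self.rows[pk] = {**values, "is_active": True}

    def delete(self, pk):
        # No row is ever destroyed; it is merely hidden.
        self.rows[pk]["is_active"] = False

    def undelete(self, pk):
        self.rows[pk]["is_active"] = True

    def active(self):
        # The UI only ever queries active rows.
        return {pk: r for pk, r in self.rows.items() if r["is_active"]}

table = SoftDeleteTable()
table.insert(1, name="test run")
table.delete(1)
assert table.active() == {}   # hidden from the UI...
table.undelete(1)
assert 1 in table.active()    # ...but trivially recoverable
```

&lt;p&gt;A real implementation would also filter the default queryset so inactive rows
stay invisible everywhere, which is essentially what the driver described above
did.&lt;/p&gt;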
&lt;p&gt;I'll be quite honest--I had a moment of fear as I considered how long it had
been since I really checked that either of these two things were still working
as designed. I implemented the database driver before I learned to appreciate
unit testing, and I haven't yet made it to that piece while backfilling my
unit test suite. As for the nightly database backups, I had never
actually needed to restore one, so for probably the last year I didn't really
bother checking a) that they were still being produced or b) that they were
valid backups.&lt;/p&gt;
&lt;p&gt;Thankfully, both pieces were still working perfectly. All I had to do was
undelete a few things from the database, as I haven't made a UI for this. After
doing that, I realized that one set of relationships was not handled by the
custom driver. To fix this, I just restored the most recent nightly backup to a
separate database and extracted just those relationships I was interested in.
And it worked!&lt;/p&gt;
&lt;p&gt;This is the first time I've really been bitten by a situation like this
personally. I'm very pleased that I had the foresight to implement the
precautionary measures early on in my project. I've also learned that I should
probably keep up with those measures a bit better. I definitely plan to make
some changes to help mitigate the potential for bad stuff to happen in the
future. But it looks like I have a good foundation to build upon now.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: unseen magic and valid database backups FTW.&lt;/p&gt;&lt;/div&gt;</description><category>backup</category><category>django</category><category>logrotate</category><category>mysql</category><category>precaution</category><category>programming</category><category>python</category><guid>http://www.codekoala.com/posts/whew/</guid><pubDate>Thu, 20 Feb 2014 14:30:21 GMT</pubDate></item><item><title>SaltConf 2014</title><link>http://www.codekoala.com/posts/saltconf-2014/</link><dc:creator>Josh VanderLinden</dc:creator><description>&lt;div&gt;&lt;p&gt;Being one to try to automate all teh things, I'm always curious to find and
experiment with new tools that appear which are supposed to help me be lazy.
&lt;a class="reference external" href="http://docs.saltstack.org/"&gt;SaltStack&lt;/a&gt; is one such tool.&lt;/p&gt;
&lt;p&gt;I first stumbled upon references to SaltStack sometime before the summer of
2012. At the time, I only put enough effort into SaltStack to be aware of what
it does and a little bit of its history. I remember telling a few of my friends
about it, and adding it to my &lt;em&gt;TODO&lt;/em&gt; list. At some point, I even installed it
on a couple of my computers.&lt;/p&gt;
&lt;p&gt;The problem was that I never made time to actually learn how to use it. I kept
telling myself that I'd experiment with it, but something else always got in
the way--kids, work, gaming... Also, I had briefly used tools like &lt;a class="reference external" href="http://www.getchef.com/chef/"&gt;chef&lt;/a&gt; and
&lt;a class="reference external" href="http://puppetlabs.com/"&gt;puppet&lt;/a&gt; (or tried to), and I had a bad taste in my mouth about configuration
management utilities. I'm sure part of my hesitation had to do with those
tools.&lt;/p&gt;
&lt;p&gt;Anyway, fast forward to the beginning of January 2014. Salt is still installed
on my main computer, but I've never even launched or configured it. I decided
to uninstall salt and come back to it another time. Just a few short days after
uninstalling salt, my supervisor at work sent me an email, asking if I'd be
interested in attending &lt;a class="reference external" href="http://www.saltconf.com/"&gt;SaltConf&lt;/a&gt;. I was more than happy to jump on the
opportunity to finally learn about this tool that I had been curious and
hesitant to use (and get paid to do it!).&lt;/p&gt;
&lt;div class="section" id="the-training"&gt;
&lt;h2&gt;The Training&lt;/h2&gt;
&lt;p&gt;I was able to sign up for an introductory course for SaltStack, which took
place on Tuesday, January 28th. This was an all-day ordeal, but it was very
intriguing to me. Normally, I'm one of the quiet ones in a classroom setting. I
rarely ask questions or comment on this or that. This was not the case with the
training course. I was all over everything our instructors had to say. I was
hooked.&lt;/p&gt;
&lt;p&gt;A lot of topics were quickly reviewed during the training. What normally takes
3 days was compressed into a single-day course. It was rather brutal in that
sense--tons of material to digest. I think they did a fantastic job of
explaining the core concepts and giving a fair number of examples during the
training.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="the-conference"&gt;
&lt;h2&gt;The Conference&lt;/h2&gt;
&lt;p&gt;&lt;a class="reference external" href="http://www.saltconf.com/"&gt;SaltConf&lt;/a&gt; really began on Wednesday, and there were some absolutely fantastic
sessions. I was particularly impressed with a demo of &lt;a class="reference external" href="https://www.vmware.com/products/vcloud-application-director.html"&gt;VMware's vCloud
Application Director&lt;/a&gt;, which can orchestrate the creation of entire clusters
of inter-related servers.&lt;/p&gt;
&lt;p&gt;Other sessions that were quite interesting to me mostly related to
virtualization using &lt;a class="reference external" href="http://www.docker.io/"&gt;Docker&lt;/a&gt;, straight &lt;a class="reference external" href="http://linuxcontainers.org/"&gt;LXC&lt;/a&gt;, and &lt;a class="reference external" href="http://libvirt.org/"&gt;libvirt&lt;/a&gt;. I'm very excited to
become proficient with salt when dealing with virtualized environments.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="the-certification"&gt;
&lt;h2&gt;The Certification&lt;/h2&gt;
&lt;p&gt;SaltStack officially introduced its first certification, known as SSCE
(SaltStack Certified Engineer). The certification fee was included in the
registration for the conference. Despite only having a matter of hours' worth of
&lt;em&gt;rudimentary&lt;/em&gt; experience with SaltStack, I decided I might as well take a stab
at the exam. I fully expected to fail, but I had absolutely nothing to lose
other than an hour taking the exam.&lt;/p&gt;
&lt;p&gt;Well, I took the exam Wednesday night, after the full day of training and
another full day of seeing real-world uses for salt. I did spend an hour or two
reading docs, installing, and configuring salt on my home network too. Eighty
questions and 56 minutes later, I learned my score.&lt;/p&gt;
&lt;p&gt;I got 68 out of the 80 questions correct--85%! Not bad for a newbie. I hear the
pass/fail threshold is 80%, but I've yet to receive my SSCE number or anything
like that. Hopefully by Monday I'll receive that information.&lt;/p&gt;
&lt;/div&gt;
&lt;div class="section" id="moving-forward"&gt;
&lt;h2&gt;Moving Forward&lt;/h2&gt;
&lt;p&gt;It actually occurred to me that I've basically built my own version of the
platform-independent remote execution portion of SaltStack (for work). Many of
the same concepts exist in both salt and my own implementation. I will say that
I am partial to my design, but I'll most likely be phasing it out to move
toward salt in the long term.&lt;/p&gt;
&lt;p&gt;After attending all three days of SaltStack deliciousness, I'm absolutely
convinced that salt will be a part of my personal and professional toolkit for
a long time to come. It's an extremely powerful and modular framework.&lt;/p&gt;
&lt;p&gt;In the little bit of experimentation that I've done with salt on my home
network, I've already found a few bugs that appear to be low-hanging fruit. I
plan on working closely with the community to verify that they are indeed bugs,
and I most definitely plan on contributing back everything I can. This is such
an exciting project!!&lt;/p&gt;
&lt;p&gt;If you haven't used it yet, you must research it and give it a try. It is a
game-changer.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;</description><category>awesome</category><category>conference</category><category>configuration management</category><category>remote execution</category><category>salt</category><category>technology</category><category>work</category><guid>http://www.codekoala.com/posts/saltconf-2014/</guid><pubDate>Sun, 02 Feb 2014 04:44:16 GMT</pubDate></item><item><title>Startech: Scammer Scammed</title><link>http://www.codekoala.com/posts/startech-scammer-scammed/</link><dc:creator>Josh VanderLinden</dc:creator><description>&lt;div&gt;&lt;p&gt;My wife and I took the kids out to visit family out in California for
Thanksgiving break this past year. It was a fantastic visit. We all had a great
time. I even got a fun story out of the first evening there! I made sure to
write down the details in my phone shortly after this occurred, and I've
decided to post them here for others to enjoy.&lt;/p&gt;
&lt;hr class="docutils"&gt;
&lt;p&gt;My wife's grandfather received a phone call from a "Startech" company
(&lt;a class="reference external" href="http://800notes.com/Phone.aspx/1-213-330-0187"&gt;213-330-0187&lt;/a&gt;, according to the phone's caller ID). The caller yammered on
about having detected a bunch of viruses on grandpa's computer, and he claimed
that he was calling to help us get rid of them. Being the resident tech guy,
grandpa handed the phone off to me to deal with the situation.&lt;/p&gt;
&lt;p&gt;The Indian guy on the other end again explained that he found out that our
computer has several viruses and was going to walk us through how to get rid of
them. He asked if my computer was turned on, to which I responded that no, the
computer wasn't currently on. He asked if I could go turn it on and sit in
front of it.  I told him I would. While the computer was "booting," he asked
how old the computer is. I told him it was maybe three years old. Eventually, I
told him the computer was ready.&lt;/p&gt;
&lt;p&gt;At this point, he asked me if I saw a key on my keyboard with the letters C, T,
R, and L. Obviously, I did. Then he asked if I could see a key near that with a
flag on it. When I said that I could see it, he asked me to find the R key.
Once discovered, he instructed me to push the flag key and the R key.&lt;/p&gt;
&lt;p&gt;I told him that I pushed the keys, but nothing happened on my computer. He
patiently asked me to try again. When I again stated that nothing happened, he
asked me to describe which keys I was pushing. I told him I held down the flag
key and the R key at the same time, and he claimed that it was not possible for
nothing to happen when I push those keys.&lt;/p&gt;
&lt;p&gt;I believe that's when he instructed me to hold the flag key down with one
finger then hold down the R key with another. Again nothing. He asked me to try
a few more times, because maybe my computer was just slow. For each attempt, I
claimed that nothing had happened, and he muttered something about this not
being possible. Mind you, I wasn't even looking at a computer during any if
this.&lt;/p&gt;
&lt;p&gt;Eventually, he gave up trying to get that dialog to pop up. He said there was
another option. He asked which Web browser I use, if it's called Mozilla
Firefox, Google Chrome, or Microsoft Internet Explorer. I said, "Uhm, I think
it's called &lt;a class="reference external" href="http://midori-browser.org/"&gt;Midori&lt;/a&gt;..." He was a bit confused, asking me to repeat the name. I
did repeat it, and I even spelled it out for him. Apparently it wasn't
important, because he just shrugged it off and continued with his script.&lt;/p&gt;
&lt;p&gt;He asked me to type into the address bar the following address: www.appyy.com.
I told him that I typed it in and it just said "Page Not Found." He was a bit
skeptical at first, asking me to verify what I had typed into the address bar.
He asked me to try again, again claiming that it is not possible for the page
to not load.&lt;/p&gt;
&lt;p&gt;That's when I asked him if I had to be connected to the Internet to follow this
step, because I couldn't be on the phone and on the Internet at the same time.
He let out a sort of exasperated sigh, then asked if there was any other number
he could use to call me while I was on the Internet. I told him I only have the
one number, and he diligently asked if I had any friends or family who could
come over so I could use their phone. I said everyone I know is out of town for
the holidays.&lt;/p&gt;
&lt;p&gt;I believe he then went on a little rant about them calling everyone in my state
about their viruses. No doubt in my mind :)&lt;/p&gt;
&lt;p&gt;Then, trying to be helpful, I asked if maybe he could email me the instructions
so I could walk through them after we hung up. He said he would just say them
over the phone for me to write down. I told him I was okay with that, and then
he started listing off the steps: "the first thing you'll need to do is hang
up, then...." That's when I hung up on him. He called back, but we just laughed
with each other instead of answering.&lt;/p&gt;&lt;/div&gt;</description><category>california</category><category>humor</category><category>phone</category><guid>http://www.codekoala.com/posts/startech-scammer-scammed/</guid><pubDate>Mon, 06 Jan 2014 06:10:03 GMT</pubDate></item><item><title>InstArch</title><link>http://www.codekoala.com/posts/instarch/</link><dc:creator>Josh VanderLinden</dc:creator><description>&lt;div&gt;&lt;p&gt;My blog has obviously been quite inactive the past year. I've started a bunch
of new projects and worked on some really interesting stuff in that time. I'm
going to try to gradually describe the things I've been playing with here.&lt;/p&gt;
&lt;p&gt;One project I started in the summer of 2013 is a personal Arch-based LiveCD. My
goal in building this LiveCD was purely personal: I wanted to have a LiveCD
with my preferred programs and settings just in case I hosed my main system
somehow. I wanted to have minimal downtime, particularly when I need to keep
producing for work. That was the idea behind this project. I called it
InstArch, as in "instant Arch".&lt;/p&gt;
&lt;p&gt;The build scripts for this project are &lt;a class="reference external" href="https://bitbucket.org/instarch/livecd"&gt;hosted on bitbucket&lt;/a&gt;, while the ISOs I
build are &lt;a class="reference external" href="http://sourceforge.net/projects/instarch/files/ISOs/"&gt;hosted on sourceforge&lt;/a&gt;. InstArch was recently added to &lt;a class="reference external" href="http://linux.softpedia.com/get/System/Operating-Systems/Linux-Distributions/Instarch-103049.shtml"&gt;Softpedia&lt;/a&gt;,
and it has received a bit of interest because of that. Again, the idea behind
this project was entirely personal--I'm not trying to make a new distribution
or community because I'm dissatisfied with Arch or anything like that. There is
some erroneous information about InstArch on Softpedia, but I haven't yet
written them to ask them to fix it. Soon enough :)&lt;/p&gt;
&lt;p&gt;If you're interested in playing with my live CD, feel free to download it and
offer suggestions on the &lt;a class="reference external" href="https://bitbucket.org/instarch/livecd/issues?status=new&amp;amp;status=open"&gt;issue tracker&lt;/a&gt;. I may or may not implement any
suggestions :) I've already had one person email me asking about the default
username and password for InstArch. If you also find yourself needing this
information:&lt;/p&gt;
&lt;ul class="simple"&gt;
&lt;li&gt;username: &lt;tt class="docutils literal"&gt;inst&lt;/tt&gt;&lt;/li&gt;
&lt;li&gt;password: &lt;tt class="docutils literal"&gt;arch&lt;/tt&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You shouldn't need this information unless you try to use &lt;tt class="docutils literal"&gt;sudo&lt;/tt&gt; or try to
switch desktop sessions.&lt;/p&gt;
&lt;p&gt;Here's a video of my live CD in action.&lt;/p&gt;
&lt;div style="text-align: center"&gt;&lt;iframe width="425" height="344" src="//www.youtube.com/embed/aCLQwlOTazY?rel=0&amp;amp;hd=1&amp;amp;wmode=transparent"&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;p&gt;Also note that I haven't built/published a new live CD for several months.&lt;/p&gt;
&lt;hr class="docutils"&gt;
&lt;p&gt;Another part of the InstArch project, which I started looong before the actual
LiveCD, was to create my own personal Arch repository. It tracks a bunch of
packages that I build from the AUR and other personal Arch packages. Anyone is
free to use this repo, and it's one that's built into my live CD.&lt;/p&gt;
&lt;p&gt;If you wish to use this repository, add my key:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
pacman-key -r 051680AC
pacman-key --lsign-key 051680AC
&lt;/pre&gt;
&lt;p&gt;Then add this to your &lt;tt class="docutils literal"&gt;/etc/pacman.conf&lt;/tt&gt;:&lt;/p&gt;
&lt;pre class="literal-block"&gt;
[instarch]
SigLevel = PackageRequired
Server = http://instarch.codekoala.com/$arch/
&lt;/pre&gt;&lt;/div&gt;</description><category>instarch</category><category>linux</category><category>open-source</category><category>projects</category><guid>http://www.codekoala.com/posts/instarch/</guid><pubDate>Thu, 02 Jan 2014 17:30:41 GMT</pubDate></item><item><title>Check Your Receipts</title><link>http://www.codekoala.com/posts/check-your-receipts/</link><dc:creator>Josh VanderLinden</dc:creator><description>&lt;div&gt;&lt;p&gt;This morning I stopped for gas at a gas station that is associated with a
grocery store. Buy more groceries, save a few cents off each gallon pumped at
their station. That sort of deal. I found a gas voucher in my coat from a
grocery shopping trip that should have allowed 25 cents off each gallon, so I
figured I might as well use it before it expired.&lt;/p&gt;
&lt;p&gt;When I scanned the little barcode on the voucher, I noticed that the display
only registered a 20-cent-per-gallon discount. I also noticed that it would let
me pump only ~7.5 gallons instead of the 20 that the voucher was good for.
Luckily, there was an attendant in the tiny shack for the gas station that
early. I approached him and asked what was going on--why I wasn't getting my
full discount.&lt;/p&gt;
&lt;p&gt;Obviously, he didn't believe my claims and had to see things for himself. He
scanned the voucher and saw exactly what I described. Confused, he scuttled off
to his shack to investigate. He couldn't figure out the exact cause, but
ultimately he decided that someone else also had the same code or something
from their own shopping trip. He was kind enough to actually give me the cash
value of the 25-cents-per-gallon discount right then and there, so that's cool.&lt;/p&gt;
&lt;p&gt;Moral of the story: if you use such gas vouchers, be sure to check the
displayed discount with what you see on the voucher. If you notice a
discrepancy, maybe you'll be lucky enough to get the cash value like I did!
What makes it even more exciting is that I rarely use the full "up to 20
gallons" part of the voucher before the expiration. Bonus!&lt;/p&gt;&lt;/div&gt;</description><category>discount</category><category>vigilance</category><category>voucher</category><guid>http://www.codekoala.com/posts/check-your-receipts/</guid><pubDate>Thu, 02 Jan 2014 14:22:36 GMT</pubDate></item><item><title>Test-Driven Development With Python</title><link>http://www.codekoala.com/posts/test-driven-development-with-python/</link><dc:creator>Josh VanderLinden</dc:creator><description>&lt;div&gt;&lt;p&gt;Earlier this year, I was approached by the editor of &lt;a class="reference external" href="http://sdjournal.org/"&gt;Software Developer's
Journal&lt;/a&gt; to write a Python-related article. I was quite flattered by the
opportunity, but, being extremely busy at the time with work and family life, I
was hesitant to agree. However, after much discussion with my wife and other
important people in my life, I decided to go for it.&lt;/p&gt;
&lt;p&gt;I had a lot of freedom to choose a topic to write about in the article, along
with a relatively short timeline. I think I had two weeks to write the article
after finally agreeing to do so, and I was supposed to write some 7-10 pages
about my chosen topic.&lt;/p&gt;
&lt;p&gt;Having recently been converted to the wonders of test-driven development (TDD),
I decided that should be my topic. Several of my friends were also interested
in getting into TDD, and they were looking for a good, simple way to get their
feet wet. I figured the article would be as good a time as any to write up
something to help my friends along.&lt;/p&gt;
&lt;p&gt;I set out with a pretty grand plan for the article, but as the article
progressed, it became obvious that my plan was a bit too grandiose for a regular
magazine article. I scaled back my plans a bit and continued working on the
article. I had to scale back again, and I think one more time before I finally
had something that was simple enough to &lt;em&gt;not&lt;/em&gt; write a book about.&lt;/p&gt;
&lt;p&gt;Well, that didn't exactly turn out as planned either. I ended up writing nearly
40 single-spaced pages (12pt Times New Roman, in LibreOffice) of TDD stuff.
Granted, a fair portion of the article's length is comprised of code snippets
and command output.&lt;/p&gt;
&lt;p&gt;Anyway, I have permission to repost the article here, and I wanted to do so
because I feel that the magazine's layout kinda butchered the formatting I
had in mind for my article (and understandably so). To help keep the formatting
more pristine, I've turned it into a PDF for anyone who's interested in reading
it.&lt;/p&gt;
&lt;p&gt;So, without much further ado, here's the article! Feel free to &lt;a class="reference external" href="http://www.codekoala.com/pdfs/tdd.pdf"&gt;download&lt;/a&gt; or
print the PDF as well.&lt;/p&gt;
&lt;iframe src="http://docs.google.com/gview?url=http://www.codekoala.com/pdfs/tdd.pdf&amp;amp;embedded=true" style="width:100%; height:700px;" frameborder="0"&gt;&lt;/iframe&gt;&lt;/div&gt;</description><category>howto</category><category>mock</category><category>open-source</category><category>programming</category><category>python</category><category>tdd</category><category>testing</category><category>unittest</category><category>unittesting</category><guid>http://www.codekoala.com/posts/test-driven-development-with-python/</guid><pubDate>Sun, 29 Dec 2013 21:14:43 GMT</pubDate></item></channel></rss>