Cat Time Lapse

My cat is usually waiting at the door when I come home. I hoped he wasn’t waiting there long, so I set up a time lapse to find out. Initially I tried an old webcam connected to my Linux server with a script to periodically capture an image, but that failed after 4 hours. The kernel logged something about video URB and UVC probe control. (Not USB. Dunno what it was. Didn’t care to find out.) Continually connecting and disconnecting from the camera probably isn’t the friendliest of use cases for a video subsystem.

What I ended up doing was using the built-in Windows 10 camera application, which I was happy to discover supports time lapse. I used it to save images at 5 second intervals. In contrast with my hack using the other webcam, this application continually captures images, meaning the camera stays active, and it only saves them at the interval. The first time I tried it the camera captured fine until I got home, but I got home late enough that it was too dark to see the cat anymore. Today I got home early enough to find this sequence right before I got home:

He enters from the left of frame, coming either from behind the camera or from below its field of view, perhaps after eating from his bowl there.

He looks out the window for close to a minute. Judging by the timing, this was just before and while I was parking (visible from his position) and walking up to the building.

He then waits at the door for about 20 seconds:

Then I greet him and turn off the capture:

A fun experiment! It resulted in 1.3 GiB of JPEGs. I wonder what cues he has learned for my arrival. The stairs up to the second floor squeak so I assume he notices that at least. Earlier on he was on the couch looking around, and at the window looking around. This sequence here was just the last few minutes. He certainly didn’t seem to be actively waiting for long, which I’m glad to see.

Filesystem Transplant

Not having snapshots on ext4 finally got too annoying. I was able to copy the root filesystem off, then format with btrfs and copy it back on. I used System Rescue CD to do this, turn off Copy on Write on database files, and edit /etc/fstab with the new filesystem and UUIDs. It worked! Eventually. There were a few snags:

  • Grub gets angry when you wipe all your partitions. I’m still not clear on what UUID it was looking for, because it didn’t look like the old root filesystem’s. Using System Rescue CD’s Super Grub Disk image I was able to boot into the system and run update-grub and grub-install, which fixed it.
  • tar with bzip2 is slow. Using tar without compression ended up being much, much faster.
  • Taking out the drive with the swap partition caused the boot to hang until timeout. There isn’t an mkfs.swap, but there is a mkswap.
  • The script I used to disable CoW didn’t preserve ownership information, so I had to re-chown things appropriately. Oddly PostgreSQL still started, but MySQL did not. That was nice because it alerted me to the problem.
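In outline, the transplant looked something like this. This is a sketch from the rescue environment; the device name, mount points, and backup path are illustrative, not the ones actually used:

```shell
# Copy the old root off. Plain tar, no compression -- much faster.
mount /dev/sda1 /mnt/old
tar -C /mnt/old -cf /backup/root.tar .
umount /mnt/old

# Reformat as btrfs and copy everything back.
mkfs.btrfs -f /dev/sda1
mount /dev/sda1 /mnt/new
tar -C /mnt/new -xf /backup/root.tar

# chattr +C only affects newly created files, so recreate the database
# directories to disable copy-on-write on their contents. (This is the
# step where my script lost ownership information; cp -a preserves it.)
mv /mnt/new/var/lib/mysql /mnt/new/var/lib/mysql.cow
mkdir /mnt/new/var/lib/mysql
chattr +C /mnt/new/var/lib/mysql
cp -a /mnt/new/var/lib/mysql.cow/. /mnt/new/var/lib/mysql/
rm -r /mnt/new/var/lib/mysql.cow

# Point /etc/fstab at the new filesystem.
blkid /dev/sda1                # note the new UUID
editor /mnt/new/etc/fstab      # set the type to btrfs and the new UUID
```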

Hooray for snapshots! I’m hoping to set up snapshot backups Soon™.


I recently resigned as the Freenet project release manager. The reduction in the amount of obligations I have is refreshing, but I do feel I’m lacking a project to focus on. I’m curious to see how this turns out.

Treacherous Variable Names

Now for the chronicle of a bug clearly caused by poor variable names. (And arguably also the lack of semantic meaning given to string types.) This buggy function searches for a device with a given uuid:

/*
 * Search the specified glob for devices; return error code. On
 * success dev->fd is set to a valid file descriptor and the file
 * is locked.
 */
static int __find_dev(struct vdev *dev, struct uuid *uuid,
                      const char *const path)
{
	glob_t paths;
	int ret;
	size_t i;

	ret = glob(path, 0, NULL, &paths);
	/* Error handling omitted... */

	for (i = 0; i < paths.gl_pathc; i++) {
		const char *const fname = paths.gl_pathv[i];

		dev->fd = mopen(fname, O_RDWR);
		if (dev->fd < 0)
			continue;

		if (is_requested_device(dev->fd)) {
			ret = __try_dev(dev, uuid, path);
			if (!ret)
				goto found;
		}
	}

	ret = -ENOENT;
found:
	globfree(&paths);
	return ret;
}
mopen() is given fname, the path produced by glob(), but __try_dev() is given path, which is actually a glob. This caused the device to import correctly but report the glob used to find it as its path, instead of its actual path. path is now named search_glob, and fname is now named path. Our existing tests did check that the device path was reported correctly, but they didn’t catch this because they all specified the exact path to the device as the search pattern, so the search glob was the correct path. If the search glob had been some kind of glob type instead of a string, the patch that caused this bug wouldn’t have compiled.

Fun With Concurrency

First a bug, then an exercise in assuming the common case.

We were parallelizing a single-threaded operation to increase throughput and came upon a race condition that at first manifested as crashing only on Ubuntu. One thread read jobs from the disk and passed them to an existing worker pool for processing. Once all processing was complete, the reading thread exited. When a worker finished processing, it marked its job as complete and then released its locks. This could result in releasing a freed lock: because the thread pool was also needed for other purposes, it wasn’t joined first. My best guess as to why it only showed up on Ubuntu (and, it turned out, Debian) is different defaults for what we assume to be stack protection, though that wasn’t clear from listing them with gcc -Q --help=target. Changing the order of lock releases solved the immediate problem, but there were also deadlock issues due to dependencies involving the thread pool. We ended up adding a dedicated processing thread that was joined before exiting, which avoided the problem.
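The shape of the fix can be sketched in a few lines. This is plain Python standing in for the real (C) code, with illustrative names; the point is only the ordering: join the dedicated thread before the lock it uses can go away.

```python
import threading

def process_jobs(jobs, results, lock):
    """Worker: mark each job done under the lock."""
    for job in jobs:
        with lock:            # safe: the lock outlives this thread
            results.append(job * 2)

jobs = [1, 2, 3]
results = []
lock = threading.Lock()

# A dedicated thread rather than a shared pool: a shared pool that is
# needed for other purposes can't be joined here, which is what allowed
# a worker to release a lock after it had been freed.
worker = threading.Thread(target=process_jobs, args=(jobs, results, lock))
worker.start()
worker.join()   # join *before* tearing down the lock
del lock        # now safe: no thread can still be holding or releasing it
```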

While this parallelization does get its speed from doing previously single-threaded work in a thread pool, it has a twist. A final processing step must be done in disk order, but waiting for the processing slowed down reading. By reading speculatively as though processing had succeeded, we achieved a ~2x performance increase. Yay!

Network Interfaces

If I set the scene with remotely configuring network interfaces at 3 AM, you’ll probably already guess it was a bad idea. In retrospect I should have heeded my reservations and waited until I was better rested and had given it some thought. As it was, I was configuring a bridge for KVM according to the excellent Debian Handbook.
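The bridge configuration itself is small. A sketch of the /etc/network/interfaces stanza in the style of the Debian Handbook; the interface name eth0 and the use of DHCP are assumptions, and it needs the bridge-utils package:

```
auto br0
iface br0 inet dhcp
    bridge_ports eth0
```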

Reloading the network interfaces with /etc/init.d/networking didn’t bring the new ones up, so I tried restarting them. One of the last messages I saw was "Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces". The last message I saw was a DHCP release. Whoops. What I should have done is run ifup on the new interfaces directly instead of risking bringing down the interface I was connected over.


Freenet is a medium for censorship-resistant communication. It allows people to communicate by publishing and retrieving data securely and anonymously. When someone runs Freenet on their computer it is called a node. Each node connects with a limited number of other nodes. When two nodes are connected they are one another’s peers. Every node communicates with the rest of the network solely through its peers.

Each node has some amount of storage reserved for a datastore. A datastore is a shared space in which each node keeps data. Freenet can be thought of as a distributed, encrypted storage device. It allows inserting data into and fetching data from the network-wide datastore made up of all the individual nodes’ datastores.

In order to do this, Freenet must be able to determine which nodes to store data on, and later be able to find that data again. The process of finding a piece of data, or a place to store it, is called routing.

This is where math comes in. In graph theory, there is a type of network called a small-world network. A small-world network contains relatively short routes between any two nodes. This is good, because longer routes are slower and less reliable. Some types of small-world networks are especially interesting because they allow finding short routes with only locally available information. This is essential for Freenet because its nodes must perform routing with only locally available information through their limited number of peers.

Here’s the concept: all nodes have a network location, which is unrelated to geographical location. An inherent characteristic of every request sent into the network is that it has an ideal location to be routed to. Nodes route requests by giving them to their peer whose location is closest to that ideal location. In order for this to be effective, the network must have a specific characteristic: it must have a good distribution of “link lengths,” which are differences between the locations of connected nodes.

Locations can be thought of as wrapped around a circle: 0 at one point, approaching 1 as it goes around, then wrapping back to 0. 0.3 is 0.2 away from 0.5, and 0.1 is 0.2 away from 0.9. This distance between peers’ locations is called the connection’s link length. On average, nodes must have many connections with shorter link lengths, and a few connections with longer link lengths. One can think of this as being able to quickly make large leaps on the location circle and also make small adjustments.
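The circular distance and the greedy routing rule are simple enough to sketch. This is plain Python illustrating the concept, not Freenet’s actual implementation:

```python
def link_length(a, b):
    """Distance between two locations on the circle [0, 1):
    the shorter way around."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def next_hop(peer_locations, ideal):
    """Greedy routing: hand the request to the peer whose location
    is closest to the request's ideal location."""
    return min(peer_locations, key=lambda loc: link_length(loc, ideal))

# The examples from the text:
assert abs(link_length(0.3, 0.5) - 0.2) < 1e-9
assert abs(link_length(0.1, 0.9) - 0.2) < 1e-9

# A request with ideal location 0.9 goes to the peer at 0.95,
# not the numerically closer-looking 0.6:
assert next_hop([0.2, 0.6, 0.95], 0.9) == 0.95
```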

Depending on the network security level, a node can run in “opennet” mode. It will connect with nodes run by untrusted people the node’s operator does not know, called strangers. This is in contrast to the preferred mode of operation, called “darknet,” in which the node only connects to people the node operator knows in person, at least enough to be pretty sure they aren’t a secret agent or incredibly bad at securing their computer.

Okay, so this is all fine and good, but so what – what can it do? The most straightforward use is to insert a file and share the key with others so that they can retrieve it. The problem becomes how to tell other people. If all Freenet can do is act as a file storage device in which one can only retrieve files one already knows about, Freenet can’t do much.

While this is a limitation, many people have still built useful applications on top of Freenet. They tend to use a web of trust to discover files inserted by identities people create, then assemble those files locally into something like the database that would usually live on a centralized server, presenting a responsive user experience.

A bunch of plugins and external applications allow interactive communication over Freenet. There’s real-time chat, email, a cross between Twitter and a Facebook wall, and other applications which provide completely decentralized forum systems.

I collect and analyze data about Freenet to provide estimates of things like the size of the network and help better understand the network’s behaviour. Feel free to take a look! The links in the footer starting with “USK@” are links to things in Freenet and won’t work here.


URXVT is a lightweight terminal emulator (with an equally excellent page on the Arch Wiki), but I didn’t like the default color set, especially when using WeeChat. Here is why:

After putting the scrollbar on the right, using a larger XFT font, and using the same color set as Gnome Terminal’s “Tango” theme, things look much nicer:

Here’s the .Xresources to do so:

URxvt.background: #300a24
URxvt.foreground: #FFFFFF
URxvt.font: xft:DejaVu Sans Mono:size=12
URxvt.iconFile: /usr/share/icons/Humanity/apps/48/terminal.svg
URxvt.scrollBar_right: true
! gnome-terminal Tango theme
! black
URxvt.color0 : #2E2E34343636
URxvt.color8 : #555557575353
! red
URxvt.color1 : #CCCC00000000
URxvt.color9 : #EFEF29292929
! green
URxvt.color2 : #4E4E9A9A0606
URxvt.color10 : #8A8AE2E23434
! yellow
URxvt.color3 : #C4C4A0A00000
URxvt.color11 : #FCFCE9E94F4F
! blue
URxvt.color4 : #34346565A4A4
URxvt.color12 : #72729F9FCFCF
! magenta
URxvt.color5 : #757550507B7B
URxvt.color13 : #ADAD7F7FA8A8
! cyan
URxvt.color6 : #060698209A9A
URxvt.color14 : #3434E2E2E2E2
! white
URxvt.color7 : #D3D3D7D7CFCF
URxvt.color15 : #EEEEEEEEECEC

Copy and paste with URXVT took a bit of getting used to. Without additional setup, it requires using the Xorg paste buffer: selecting text copies; middle (scroll wheel) click pastes.
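For more conventional clipboard behavior, recent rxvt-unicode versions (9.20 and later, if I recall correctly) can bind the familiar shortcuts in .Xresources. Treat this as a sketch to verify against your version; older versions need a perl extension instead:

```
! Ctrl+Shift+C / Ctrl+Shift+V clipboard bindings (rxvt-unicode 9.20+)
URxvt.keysym.Shift-Control-C: eval:selection_to_clipboard
URxvt.keysym.Shift-Control-V: eval:paste_clipboard
```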

More Freenet Stats

Freenet is decentralized, so while it’s (intended to be) a small-world network and thus short routes exist between any two nodes, it can be difficult to have a routing algorithm which can find those routes using only local information. As far as I understand it, Jon Kleinberg’s work (brief, paper) forms the basis of Freenet’s networking model. We don’t have local connections, as they’re expensive to form and in empirical testing actually detrimental to performance, but apparently the model still holds. A core finding from this work is that if connections are distributed so that more-distant connections are less likely, an implicit structure forms which allows forwarding a message to the closest peer at each hop to form a short path. As Freenet uses 1-dimensional locations, this distribution is based on a probability proportional to the inverse of the distance. The ideal distribution is logarithmic, but from what I’ve gathered, Freenet’s actual distribution isn’t very close to it. Making the actual match the ideal is difficult: the network size is a factor in the distribution, but it cannot (practically) be known. Techniques by Oskar Sandberg (paper) are intended to produce this distribution without such knowledge, but Freenet’s implementation seems not to behave as intended. I hope to help discover why, and how to fix it.
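Sampling from that ideal distribution is a one-liner once the density proportional to 1/d is integrated. A sketch of my own for illustration (not Freenet’s or the simulator’s code; the cutoff d_min is an assumption, since 1/d isn’t normalizable down to zero):

```python
import random

def sample_link_length(d_min=0.0001, d_max=0.5):
    """Draw a link length with density proportional to 1/d on
    [d_min, d_max]. Integrating 1/d gives a log, so inverting the
    CDF yields d = d_min * (d_max/d_min)**u for uniform u."""
    u = random.random()
    return d_min * (d_max / d_min) ** u

random.seed(42)
samples = [sample_link_length() for _ in range(10000)]
short = sum(1 for d in samples if d < 0.01)
long_ = sum(1 for d in samples if d >= 0.1)
# Many short links, a few long ones -- the shape routing needs.
```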

Edit May 2, 2012: I replaced a rather dubious ideal distribution plot made with a quick Python script with a much better-looking one made with a real simulator. Here are both distributions on the same plot:

Edit August 1, 2012: Corrected Y axis label to refer to the percent of links, not nodes.

Both plots on the same graph

Adventures in Python

I’ve been spending most of my waking hours with Python over break, and I really like the language. Unlike the standard C++ library that schoolwork is limited to, in Python I can generally find a library to make my task a great deal simpler. The assumptions I make about syntax while figuring things out tend to hold, and it’s incredibly convenient to pop open an interactive shell to try out an idea before dropping it into a larger program. I actually like Python’s whitespace-sensitivity for the baseline level of organization, style, and readability it enforces. There seems to be much less boilerplate code and syntax compared with something like C++. That said, it can be odd for the type of a value, or the attributes it has, to be an open question, and for that to lead to problems. It can be frustrating to change code and not know whether the types are right until that part runs. These problems would not be present in a statically typed language, but such a language would probably not be so flexible.

I’ve managed to get one project into a state in which I’m willing to show it the light of day: RelayBot. Not finding a working IRC bridge bot, I worked off an existing (but for me non-functional) implementation which heavily informed its design. I built my version by removing parts until it connected properly, then writing more functionality and removing still more until it did what I had in mind. I hope to use it to bridge a channel on FLIP and Irc2P.

The project (again in Python) which is not yet ready is a network probing and analysis application. It collects network topology information (optionally in a threaded fashion) and commits the results to a sqlite database for later analysis. It’s hoped that this will allow evanbd to replace a collection of Bash scripts which take an incredibly long time to run and are prone to breaking. The basic functionality is there, but it has many rough edges still. I’m partial to the peer distribution graph:

Histogram of Number of Nodes vs Claimed Number of Peers

GNUPlot really does give lovely images. What I find interesting about this is how there are clear peaks – many nodes claim 12 or 36 peers, which seems very likely to be a function of the peer connection caps and bandwidth limits. There were some outliers, with one node claiming 92 peers! What’s encouraging is that this overall pattern seemed quite stable even as many more probes were collected.

This project has made clear to me how much I need to learn SQL properly. I initially wrote a collection of three queries to generate this: one query retrieved keys which were used to iterate over the other two. Generating this graph took about two hours. I figured out how to rewrite it using the proper SQL for getting the result, and the exact same graph generated in approximately 30 seconds! What’s more, there’s a query I’d like to write that I haven’t figured out: “Take the sum of the count of the distinct traceNums for each probeID.” It sounds so SQL-y that I’m not quite sure how I haven’t managed it.
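For what it’s worth, that sentence translates almost word for word into a GROUP BY inside a subquery. A sketch using sqlite3, with the table and column names assumed from the description:

```python
import sqlite3

# Toy data: each row is one trace observation belonging to a probe.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE traces (probeID INTEGER, traceNum INTEGER)")
conn.executemany("INSERT INTO traces VALUES (?, ?)",
                 [(1, 10), (1, 10), (1, 11),   # probe 1: 2 distinct
                  (2, 10),                     # probe 2: 1 distinct
                  (3, 20), (3, 21), (3, 22)])  # probe 3: 3 distinct

# "The count of the distinct traceNums for each probeID" is the inner
# GROUP BY; "the sum of" those counts is the outer SELECT.
(total,) = conn.execute("""
    SELECT SUM(n) FROM (
        SELECT COUNT(DISTINCT traceNum) AS n
        FROM traces
        GROUP BY probeID
    )
""").fetchone()
# total is 2 + 1 + 3 = 6
```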

It’s been a fun break, and a shame it couldn’t last longer. Learning in this kind of an organic way with immediate results and self-demonstrating practicality is fantastic.