First post after being “away” – my server was down for a little while too :/ No matter, I wasn’t really in a position to blog anyway. Real life has interrupted for a bit, although I’ve managed a bit of hacking to distract myself.

So, we’ve had various troubles with IMAP. Somewhat hilariously, I did write a message to -devel about this, but I sent it from my trunk Bongo, which hasn’t actually been sending e-mail since the smtpc code was committed (which I think is a configuration issue over here rather than a bug of Pat’s). So, d’oh to that. I’ll blog about it instead, then.

The problem

In one word – “Thunderbird”. Tbird is a pretty smart IMAP client in many ways, and one of the smart things it does is use multiple IMAP connections to do several things at once: for example, filtering mail in your Inbox in the background while you’re reading messages in another folder. The trouble with doing many things at once is that you enter the world of concurrency, which is relatively Hard.

To prevent concurrent tasks from corrupting each other’s data, “locks” are put in place. You can think of these a bit like signals in a railway shunting yard: you don’t want to risk collisions between trains, so you put signals on the track in such a way that trains obeying the signals will never meet. We have locks around access to store collections, access to the database, access to the text index, and access to internal data structures: all of these locks exist to ensure access happens consistently.

Following this metaphor, locks are great, but there are problems you want to avoid – one of which is “starvation”. If a train waits for a green signal but only rarely gets one, it’s effectively “starved of track” – similarly, a process which waits on a lock but rarely gets it is starved of a resource – usually, some piece of data.

Starvation was effectively what was happening in our IMAP. When the IMAP agent is starved of access to a store object for too long, it returns an error to the IMAP client, and the IMAP client assumes that everything has failed and starts again. Sadly, with Tbird, starvation wasn’t a rare occurrence – it was effectively guaranteed. If Tbird was doing something like moving mail on one IMAP connection, that connection would work the Store quite hard and would effectively shut the other IMAP connection out, unless by fluke of timing it managed to grab a lock in the short window the other process didn’t hold it.

The solution

So, if we care about making sure that starvation doesn’t occur, we need some system other than just basic signalling: a system that ensures some kind of basic fairness. Now, in computer science, you can write about “fairness” until the cows come home, but in this case it’s probably good enough to put a queue in place – that ensures that everyone gets a turn at some point.

It turned out that our existing locking system wasn’t really able to do this. In fact, it was pretty complex, for another good reason: it had both “shared” and “exclusive” locks. The underlying semaphore system works like signals in the train yard: only one process is allowed through at a time. But the code is written to a “look, but don’t touch” rule: if a process isn’t altering anything, you can have many of them looking at the same time without fear of corruption, which is good for performance. You can’t do that with basic semaphores alone, though – you have to build it on top. In fact, you can look at the previous code and see how complex it is: a lot of that is machinery to allow shared locks, and to mediate access when someone comes along who does want to touch – you have to get rid of all the lookers first.
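To make that concrete, here’s a minimal sketch – this is not the old Bongo code, just an illustration in pthreads – of what “building shared locks on top” of a plain one-at-a-time lock looks like, and of where starvation creeps in when there’s no queue:

```c
/* Illustration only: a naive shared/exclusive lock layered on a plain
 * mutex, with no queue and therefore no fairness. */
#include <pthread.h>
#include <stdbool.h>

typedef struct {
    pthread_mutex_t mutex;    /* the basic "one at a time" lock */
    pthread_cond_t  released; /* broadcast whenever the state changes */
    int             readers;  /* how many are currently "looking" */
    bool            writer;   /* is someone currently "touching"? */
} NaiveRWLock;

void NaiveLockShared(NaiveRWLock *l)
{
    pthread_mutex_lock(&l->mutex);
    while (l->writer) {                     /* look, but don't touch */
        pthread_cond_wait(&l->released, &l->mutex);
    }
    l->readers++;
    pthread_mutex_unlock(&l->mutex);
}

void NaiveLockExclusive(NaiveRWLock *l)
{
    pthread_mutex_lock(&l->mutex);
    /* To touch, you first have to get rid of all the lookers -- and with
     * no queue, a steady stream of lookers can keep this condition false
     * forever: that's the starvation problem. */
    while (l->writer || l->readers > 0) {
        pthread_cond_wait(&l->released, &l->mutex);
    }
    l->writer = true;
    pthread_mutex_unlock(&l->mutex);
}

void NaiveUnlock(NaiveRWLock *l, bool wasExclusive)
{
    pthread_mutex_lock(&l->mutex);
    if (wasExclusive) {
        l->writer = false;
    } else {
        l->readers--;
    }
    pthread_cond_broadcast(&l->released);   /* wake anyone waiting */
    pthread_mutex_unlock(&l->mutex);
}
```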

What I’ve written is along the lines of the original. There is a concept of shared and exclusive locks, and it attempts to mediate between the two. However, when the lock is unavailable, where the previous version waited a while and then gave up, this version puts the process in a queue and waits until access is possible. You might wonder why it needed a ground-up rewrite of the locking – well, you don’t need to be a coder to compare the original code to the new hotness, and you’ll see mine is half the size, better commented and (in theory) more featureful. I did try bolting the new behaviour on top of the old code, and it was buggy and awful.

The science bit

How does it work? Well, it’s relatively simple. Like the old system, there is a pool of locks, each of which is associated with a particular store resource that we want access to. Each lock is made up of two integers (‘readers’, ‘writer’) and a queue. ‘readers’ is the number of processes that hold a shared lock; ‘writer’ is true if a process holds an exclusive lock. If ‘writer’ is false, you acquire and release a shared lock just by incrementing and decrementing ‘readers’. If ‘readers’ is also zero, ‘writer’ can be set to true to acquire an exclusive lock.
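As a sketch (hypothetical names, pthreads for illustration – the real structure lives in the store code), the lock record described above might look something like this; the uncontended cases really are just ‘readers++’ and ‘writer = true’:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* One waiting process's "ticket": its place in the queue. */
typedef struct LockTicket {
    struct LockTicket *next;       /* the ticket behind us in the queue */
    bool               wantsWrite; /* exclusive (writer) or shared (reader)? */
    bool               granted;    /* set once this ticket has been "called" */
    pthread_cond_t     called;     /* signalled when 'granted' becomes true */
} LockTicket;

/* One lock from the pool, associated with a single store resource. */
typedef struct {
    pthread_mutex_t  mutex;     /* guards all of the fields below */
    int              readers;   /* processes currently holding a shared lock */
    bool             writer;    /* true while a process holds the exclusive lock */
    LockTicket      *queueHead; /* front of the queue: the next ticket to call */
    LockTicket      *queueTail; /* back of the queue: where new waiters join */
} StoreLock;
```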

What if those conditions don’t hold – that is, ‘writer’ is true, or ‘readers’ is greater than zero when we want exclusive access? That means someone else has the lock we want, and we’re going to have to wait: we turn to the queue. We place a ticket in the queue, and wait for that ticket to be called. At the point the ticket is called, the lock is free and we can grab it for whatever purpose.
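Continuing the same hypothetical sketch, taking a ticket and waiting for it to be called might look roughly like this – note that whoever “calls” a ticket also grants the lock on the waiter’s behalf, so the woken process just carries on:

```c
/* Join the back of the queue. Must be called with the mutex held. */
static void Enqueue(StoreLock *lock, LockTicket *ticket)
{
    ticket->next = NULL;
    if (lock->queueTail) {
        lock->queueTail->next = ticket;
    } else {
        lock->queueHead = ticket;
    }
    lock->queueTail = ticket;
}

/* Sleep, with the mutex held, until someone calls our ticket. */
static void WaitForTicket(StoreLock *lock, LockTicket *ticket)
{
    pthread_cond_init(&ticket->called, NULL);
    Enqueue(lock, ticket);
    while (!ticket->granted) {
        pthread_cond_wait(&ticket->called, &lock->mutex);
    }
    pthread_cond_destroy(&ticket->called);
}

void LockShared(StoreLock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    /* Queue up if a writer holds the lock, or if anyone else is already
     * waiting (no skipping ahead of the queue -- see below). */
    if (lock->writer || lock->queueHead != NULL) {
        LockTicket ticket = { .wantsWrite = false, .granted = false };
        WaitForTicket(lock, &ticket);
        /* Whoever called our ticket has already counted us in 'readers'. */
    } else {
        lock->readers++;
    }
    pthread_mutex_unlock(&lock->mutex);
}

void LockExclusive(StoreLock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    if (lock->writer || lock->readers > 0 || lock->queueHead != NULL) {
        LockTicket ticket = { .wantsWrite = true, .granted = false };
        WaitForTicket(lock, &ticket);
        /* Whoever called our ticket has already set 'writer' for us. */
    } else {
        lock->writer = true;
    }
    pthread_mutex_unlock(&lock->mutex);
}
```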

When does your ticket get called? Well, if you’re a reader, you get called either when the writer holding the lock lets it go, or when the reader in front of you in the queue gets its shared lock. If you’re a writer, it happens when all the readers have let go of their shared locks. Obviously, for the latter ever to happen, you have to ensure that readers don’t skip in front of waiting writers. This turns out to be quite easy to code, because the rule is simple: when you come to acquire the lock (whether as a reader or a writer), you look at the queue, and if it’s not empty, you join the back of it.
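And here is the release side of the same sketch, “calling” the next ticket(s) according to those rules:

```c
/* Drop the front ticket off the queue. Mutex must be held. */
static void Dequeue(StoreLock *lock)
{
    lock->queueHead = lock->queueHead->next;
    if (lock->queueHead == NULL) {
        lock->queueTail = NULL;
    }
}

/* Call the ticket at the front: grant the lock on its behalf and wake it. */
static void CallTicket(StoreLock *lock, LockTicket *ticket)
{
    Dequeue(lock);
    if (ticket->wantsWrite) {
        lock->writer = true;       /* grant the exclusive lock */
    } else {
        lock->readers++;           /* grant a shared lock */
    }
    ticket->granted = true;
    pthread_cond_signal(&ticket->called);
}

/* Call whichever ticket(s) the rules allow. Mutex must be held. */
static void WakeNext(StoreLock *lock)
{
    if (lock->queueHead == NULL || lock->writer) {
        return;                    /* nobody waiting, or the lock is still held */
    }
    if (lock->queueHead->wantsWrite) {
        /* A waiting writer is called only once every reader has let go. */
        if (lock->readers == 0) {
            CallTicket(lock, lock->queueHead);
        }
        return;
    }
    /* A waiting reader is called when the writer lets go; and once the
     * reader at the front has its shared lock, any readers queued right
     * behind it get theirs too. */
    while (lock->queueHead && !lock->queueHead->wantsWrite) {
        CallTicket(lock, lock->queueHead);
    }
}

void UnlockShared(StoreLock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    if (--lock->readers == 0) {
        WakeNext(lock);            /* the last reader out calls a waiting writer */
    }
    pthread_mutex_unlock(&lock->mutex);
}

void UnlockExclusive(StoreLock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    lock->writer = false;
    WakeNext(lock);                /* call whoever is at the front of the queue */
    pthread_mutex_unlock(&lock->mutex);
}
```

The fairness falls out of the acquire side: because anyone who finds the queue non-empty joins the back of it, neither readers nor writers can overtake someone who was already waiting.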

That’s the idea of the code, anyway, and hopefully if you read through the code you’ll find it follows that logic. It’s already partly hooked into the store as of r680, and at least for me, I don’t seem to get IMAP errors any more, which is great. I’m awaiting feedback from people like Pat who have huge mailboxes, though 😉

Like any locking system, it may well have serious bugs which cause the whole house of cards to fall down. Hopefully these will shake out pretty quickly: I’m using this revision on my system, so if there is a problem it should be obvious. Also, the interaction with the Store WATCH system is potentially dodgy at the moment. If this code works, there is a good amount of legacy code which can come away and again simplify things massively, which will be great – that was just a patch too far at this stage, though 🙂