Alex Hudson

Thoughts on Technology, Product, & Strategy

Month: June 2009

Did you vote in Fedora elections?

Mike McGrath asks a pertinent question: why didn’t you vote? Although I did actually take the time to vote, I can totally understand why someone wouldn’t.

For one thing, as others have said, a lot of the candidates were quite similar: they said many similar things, and I don’t doubt any of them could do the job competently.

Personally, I tried to vote on issues where possible. I voted for people who had a clear view of what “Fedora” means to them, and sided with those who had a specific vision (that is to say, not those who take a pluralistic all-things-to-all-men approach). I tried to vote for better communication (though most candidates were in favour of that), and for those who had a track record of commitment: these various “posts” don’t really have much in terms of power, so I’m mostly interested in (a) what the person would bring in terms of time and resources, and (b) the attitude of the person.

I have to add that many of my votes were coloured by mailing list conduct – that’s mainly where my view of the candidates was formed, because I couldn’t really get through all the town hall stuff, and didn’t think it gave as accurate a picture of behaviour in any event. Where I thought it possible, I voted for “be excellent to each other” candidates, and less favourably for those I didn’t think had been so excellent to people.

It’s a really complex picture, and I suspect many people eligible to vote just don’t come into contact with many of the candidates. I know I have only really had dealings with one, who was indeed excellent to me, and so he definitely got my vote. But beyond that, it was actually quite difficult to choose between them.

Stuff-as-a-service…

I read with interest the various Twitterings about the price of disk space – in particular, about Bitfolk, but it applies to any other service really. Andy’s take on this is really worth reading, although I personally think he’s defending Bitfolk’s pricing unnecessarily.

My basic philosophy on this is that price isn’t an issue; things cost what they cost. What is really at stake is the value of the service: is what I’m paying for worth it? Of course there are plenty of businesses which are effectively discount-merchants – the Ryanairs and Tescos of this world – but they don’t offer an awful lot of value. Trying to compete on price caters for a certain type of customer, but it’s a limited market.

It’s too easy to fixate on the cost of materials. You can look at a sandwich and say, “Oh, that probably cost 20p in bread and filling. £2 is a rip-off”. Things are rarely that simple, though, unless you’re truly price-fixated. If you have time to spend scouring the market for a cheap sandwich of similar quality, fine – personally, my time is much more valuable than that.

Similarly, with VPS services, it’s easy to put together some raw numbers on hardware, divide the cost among a few customers, and conclude that it should all be very cheap. But occupancy, failure and incidental costs don’t factor into that sum, and neither does the time it takes to manage it all. With that kind of service, the economies of scale really matter, and it’s difficult to see how you could provide the same service more cheaply. But at the end of the day, even that doesn’t really matter.

What matters is the value of the service. For me, the value of Bitfolk is excellent: it’s a huge amount of management taken off my shoulders. I don’t want to manage Xen; I don’t want to have to worry about hardware-this and the vast kernel-patch hideousness. I don’t want to have to worry about I/O rates. Bitfolk is a service of good quality, and while it’s never going to offer bullet-proof VPS uptime I suspect you can’t get much better without moving to some canny clustering setup.

This is what I don’t understand: bitching about the pennies. Is £24pa too much for 20 GB of disk space? That’s £2 a month. That’s our sandwich from up there. It’s just not worth worrying about; in budget terms it’s noise. What matters is the quality of the service.

Windows Vista lameness (for future reference)

I’ve run into this problem a couple of times and always end up having to look up the magic incantations, so I’m going to store it here for posterity and in the hope it may also aid other people.

Problem: Windows Vista / XP machine on a wireless network behaving extremely oddly. You can often browse to Google, for example, but basically nowhere else – it’s like other websites just time out.

Issue: For some reason, path MTU discovery doesn’t seem to work – the OS ends up sending packets which are too big, and things stop working. I would imagine this is an issue with one of the wireless routers involved, rather than Windows itself.

Solution:

You’ll need an Administrator shell. Find the ‘Command Prompt’ in the Start Menu, and right-click to select ‘Run as Administrator’. Then, use this command to find the name of the interfaces on the machine:

netsh interface ipv4 show subinterfaces
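
For reference, the output looks something like this (the numbers here are purely illustrative) – the interface name you want is in the last column:

   MTU  MediaSenseState   Bytes In  Bytes Out  Interface
------  ---------------  ---------  ---------  ---------------------------
  1500                1    3941294     876418  Wireless Network Connection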

Using the name we found above, do:

netsh interface ipv4 set subinterface "Name We Just Found In Quotes" mtu=1400 store=persistent
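
As a sanity check (this isn’t part of the fix, just something I find useful), you can test whether a given MTU actually fits down the link: Windows’ ping can send don’t-fragment packets of a chosen size, and 1372 bytes of payload plus 28 bytes of IP/ICMP headers corresponds to an MTU of 1400:

ping -f -l 1372 www.google.com

If that reports “Packet needs to be fragmented but DF set”, the MTU is still too big – drop the number and try again.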

It’s a complete hack, but it works, and since I use Windows about once a year I really don’t care 😀

A few words on CMake…

I did promise Lance that I would blog more on Bongo, and I’m going to try to stick to a post a week at least – however, this first one will only tangentially be about Bongo.

Since the project was initially released, autotools was the build system you needed to build Bongo. There are a variety of benefits to using autotools, and it’s an extremely well-tested and mature system. However, it’s also relatively difficult to understand and not particularly quick. Over time we accreted more and more things into our build which no-one fully understood, and which would occasionally blow up in our faces.

Now, a little while ago I started a new branch of Bongo, called “memmgr-glib”. This was mainly to replace the memory allocator – maybe Pat can blog a bit more about this, but the short story is that this change has highlighted a variety of bugs in Bongo. The branch version of Bongo therefore seems extremely unstable – actually, it’s just a lot less bug-tolerant, which is overall a good thing – but we used that as a cue to make some other changes.

One of those changes was a switch from autotools to CMake as our build system. Just to give one really obvious stat:

                  autotools           cmake
Configure         52 seconds          3 seconds
Initial compile   47 seconds          21 seconds
Install           1m 15 seconds (!)   2 seconds

Clearly, there is a big difference in each category. It must be said that the build systems aren’t doing quite the same things at this point, and I haven’t run these tests properly as benchmarks, but fundamentally the times are extremely different. I expect the compile times will eventually even out – the difference will become minimal, and is already of the same order. Configure and install are clearly quicker, though, and there aren’t any amazing short-cuts being taken on the CMake side.

The huge difference this makes for developers is in the compile-test cycle. A re-compile with either system is pretty quick – something like a few seconds each. However, “make install” is quite different between the two: doing a proper update is much quicker with CMake, which makes the testing cycle much faster.

One other thing that is also big, for me at least: with CMake, we can finally do out-of-source builds. It’s possible with autotools, but our system never quite got it right. With CMake, you can check out Bongo from svn and then do something like ‘mkdir build/; cd build/; cmake ../; make’ (spelled out below). The source we checked out isn’t touched, and nothing gets built in the source tree – it all happens in another directory. Not only does it feel cleaner, but it means you’re not likely to commit any files which shouldn’t be there, and if something goes wrong you can just nuke the build directory and start again easily.
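
Spelled out, the cycle looks like this (run from the top of the checkout – the build/ directory name is arbitrary, just my habit):

mkdir build
cd build
cmake ../
make

If the build gets into a mess, you just cd back up, rm -rf build/, and start from scratch; the source tree itself stays pristine.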

I’ve yet to teach the bongo-build bot on IRC about this new system, but I expect that it will make building there a lot quicker too – mainly because it goes through the cycle above more than once (it configures twice, for example). It will be interesting to see what a difference it makes!

Fedora 11 is unleashed.

It’s been great to see Fedora 11 released; even with a couple of small delays, it didn’t seem to me like a terribly problematic release.

I’ve been using it as Rawhide on and off for a while now, and to be honest, aside from the few things I ended up filing, there hasn’t been an awful lot wrong with it. One thing I’d particularly like to call out is the set of small improvements arriving in virt-manager, which is slowly turning, release by release, into a really tasty piece of software.

I was slightly surprised to see the various Mozilla-based apps included in pre-release versions, because they’re important to me, and Thunderbird in particular doesn’t feel ready. However, to be fair to the maintainers, I haven’t encountered any particularly bad problems – they’re fine packages, and Firefox 3.5 in particular feels quite awesome. My main complaint is really that I’m missing my various plugins (which I’m sure can be hacked into working, but you know).

The artwork also deserves special mention, I feel, because it looks really rather good. I use my laptop professionally a lot, and having it look respectable is very important to me. Right now, I think it looks more than respectable – big thanks to the art team.

It’s going to be a shorter path to Fedora 12, and I hope to involve myself mostly in QA – what small amounts of hardware testing and bug filing I could do seemed to help progress, and Fedora is so good now that I think one of the crucial things in future will be to ensure regressions are as small as possible and picked up as early as possible.