Alex Hudson

Thoughts on Technology, Product, & Strategy


Potential Gna! issues

It looks like someone has been attacking Savane-derived hosting platforms. Recently Savannah has been down, and the page that has now gone up confirms that they had a security breach. Unfortunately, Gna! has a similar code-base, and their site now confirms that they are investigating an issue too.

This has a knock-on effect for Bongo, since we use Gna! hosting. Our download area appears to still be alive, and thankfully we have always signed the releases. You can check a release of Bongo quite simply:

  gpg --verify bongo-0.6.1.tar.bz2.sig bongo-0.6.1.tar.bz2

This should result in a confirmation that the file is correctly signed with the key 9B6913D7, which is available on public keyservers – just search for that ID.
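
If you don’t already have that key in your keyring, you can fetch it from a keyserver first – a minimal example, with the choice of server arbitrary:

  gpg --keyserver pgp.mit.edu --recv-keys 9B6913D7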

We will check the SVN repos and other parts of the project when Gna! comes back online to ensure that these have not been attacked; but I can say with some confidence that we have no reason to expect that r1323, the current HEAD, has anything nefarious in it whatsoever. Certainly, if the security breach was as recent as Savannah’s appears to be, there is nothing to fear.

Copyright changes ahead for the UK? SAS v WPL goes to Europe

I don’t particularly like talking law on this blog; it’s boring and – for the most part – uninteresting. However, recent developments in SAS Institute Inc v World Programming Limited (as written up here – thanks to Cristian for bringing this up at FFII) deserve to be aired.

The basic story is that the Judge in this case is deeply unsure of the boundary of copyright. For those who don’t know, SAS is a statistical package which is both popular and influential, and to a large extent can be thought of as a programming development environment. WPL, the defendants, wrote software which could interpret SAS programs. There is no direct analogy in the free software world, but LibreOffice Calc interpreting Excel spreadsheets is close enough for the purposes of our discussion.

The Judge, unsure of the boundary, has sent a number of questions to the European Court of Justice (ECJ). The questions are hypothetical, but clearly designed to test the waters and figure out where this line falls. As an example of some of the questions in our Calc vs. Excel example, he’s asking:

  • Does accessing the file format of Excel constitute copyright infringement?
  • Does interpreting Excel-compatible formula constitute copyright infringement?
  • Is it copyright infringement to copy the behaviour (intended or otherwise) of Excel when processing spreadsheets?
  • Does it make a difference whether you copy functionality from Excel into Calc by reading Excel’s manual versus observing Excel’s behaviour?
  • Does it make a difference if you have a license to Excel?

(Just to be clear, we’re not actually talking about Calc and Excel; I’m just interpreting J. Arnold’s questions in this context to make them more readily understood.)

Now, the armchair lawyers amongst my readership have probably already thrown their arms up at all these questions and exclaimed, “Copyright doesn’t extend that far!”. And to an extent, they would be correct: originality has always been a defence to copyright infringement, and if any of the questions above were to be answered in the affirmative, we would see the start of that changing.

What makes this different, I think, is that we’re really seeing the weakness of copyright law treating software as a literary work. This has always been bunk, really: software is no more literary than a shopping list, and although the case of verbatim copying (with or without transformation) is open-and-shut copyright infringement, there have always been “grey areas”. One example is the FSF’s position on dynamic linking and the GPL: a dynamically-linked combination does seem to fall under the purview of copyright as a derivative work, but it’s obviously a world away from literary copyright.

The WPL case is also one where the copying was explicit, deliberate and planned: they definitely did copy things. They just didn’t literally copy the software code, or decompile the software: they re-created it from the ground up. So we’re definitely talking about a case of copying here, which it would seem could also be the purview of copyright.

This is going to be a really interesting case, and is going to have a fundamental effect on free software if we get some interesting answers to these questions. On one hand, it casts an immediate dark shadow over a number of projects: Samba being an obvious case in point, which has previously reached legal agreement in Europe about how it can copy Microsoft while still avoiding the patents that Microsoft hold on certain functionality. But while desktop apps which copy Microsoft make the most obvious cases, you could equally see problems for 3D graphics drivers, people implementing compilers, all sorts of areas – particularly where free software is still catching up to proprietary software.

But of course, on the other hand, this would also strengthen the copyright position of free software applications. Companies that currently dance around the (L)GPL-style licenses will find themselves on thin ice indeed, and those making proprietary implementations of leading free software will have to start being extremely careful.

It’s very unlikely that many of the questions will be answered in such a way that the copyright system becomes like the patent system: for one, it would be such a massive change that it would require primary legislation at a European level to become legally sound. And there are few cases exactly like this one, where the copying is so obvious and blatant.

The precedents being set here will be extremely important, though. Our understanding of copyright will almost certainly change from the outcome in this case, and will necessarily become more nuanced. The idea of “clean-room reverse engineering” may become more nebulous, and the “I wrote it from scratch” defence could become weaker.

If nothing else, this highlights that no law is truly ever settled, and possibly portends more movement in this area in the future: I’ve described before how the UK Government is making noises about revisiting intellectual property laws, and in our current weak economic state it is extremely tempting for politicians to beef up some of these laws in order to “create wealth”. Cameron, our Prime Minister, is particularly in thrall to Google, as if they set any good example for our businesses. It’s sometimes very easy to just think about patents and lose sight of the bigger picture.

SparkleShare updates

It’s been a little while since I talked about SparkleShare; since then it has moved project hosting (here’s my fork) and there have been various changes – thankfully, updating the packages for the new version and to get it on Fedora 14 didn’t take too long. There are likely to be problems here and there with the packages – invitations don’t seem to work right now, but I haven’t tracked down that bug yet – but they should be mostly working. Please let me know if you’ve tried them and found any problems.

The situation with repos hasn’t really improved very much. At the moment I use mock to build packages; even SparkleShare (which is quite simple) takes about nine minutes to build in mock. It’s also quite a manual process, and if mock encounters errors then obviously the whole thing becomes very time consuming.
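
For the curious, the manual loop looks roughly like this – the package name and version are only examples:

  # each run builds a clean chroot from scratch, hence the ~9 minutes
  mock -r fedora-14-x86_64 --rebuild sparkleshare-0.2.0-1.fc14.src.rpm
  # ...and again for every other target you care about
  mock -r fedora-14-i386 --rebuild sparkleshare-0.2.0-1.fc14.src.rpm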

I really don’t understand why there isn’t a better solution than this yet. By using something like Koji, you can achieve much more automation – but Koji is hardly straightforward. And Koji uses mock, so is therefore not going to provide anything in the way of a speed increase.
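
(To be fair, if you do have access to a Koji instance the submission side at least is a one-liner – the target name here is illustrative:

  koji build --scratch dist-f14 sparkleshare-0.2.0-1.fc14.src.rpm

– but standing up your own hub and builders is where the pain lives.)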

Now, I understand why it’s involved – I get the whole clean root thing. But there needs to be something better, because the tools are a pain, and I only build for x86_64 and i386. If you’re building something in Fedora, you have access to the main koji and things aren’t too bad. Outside of that, you’re pretty much on your own, and things get complicated, slow and/or manual quickly. If anyone has any better ideas than mock and koji, please please let me know…

A late review of Fedora 14

It seems like everyone has had their word on the latest release, but like a fashionably-late party-goer, I’m going to waltz in at the 11th hour and offer my 2p 🙂

I think it’s well-known at this point that 14 has shaped up to be a very good release, but I’d like to draw attention to one point in particular: the version of Nouveau in this release is another big, big step forward. I have a relatively bog-standard Dell D830, and 14 is the first time that:

  1. suspend/resume has worked out of the box – this is huge for me
  2. the Mesa 3D drivers, although marked experimental, work well enough to run Compiz easily

Is nouveau’s performance great? No; to be honest, it actually feels slightly slower here than on 13 (although that’s almost certainly because I’m now in Compiz, not metacity) – but for me, this doesn’t matter: being able to suspend is massive. I could even envisage the 3D stuff being turned on by default in the next release or two.

If there’s part of the system which sticks out as still being sub-optimal, though, it’s the application install experience. I know I’m not saying anything new here, or probably anything anyone disagrees with. A great example is attempting to install OpenOffice.org on a clean Live install (OOo no longer comes by default on the CD), because you have to negotiate a couple of problems:

  1. you have to figure out where the openoffice packages are (searching on “openoffice” isn’t enough, sadly; it pulls in a large number of non-openoffice packages – see the yum sketch after this list);
  2. once you’ve found the packages, you have to figure out which ones you need amongst a sea of langpacks, extensions and other stuff. Calc is relatively easy to find; Impress less so; Writer comes right at the bottom (alphabetical, see) – not easy. Plus then there are the bits you do actually want – extra graphics filters, extended PDF support, etc.;
  3. then, when you’ve figured out which bits to install, you set it going and the “success” dialog looks an awful lot like “fail”:
    [screenshot: a PackageKit “success” dialog with no actual content – they are lovely icons, though 🙂]
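
For reference, the same hunt from the command line is no better – these were the Fedora package names at the time, and the list is illustrative, not complete:

  # hundreds of hits, mostly langpacks and extensions
  yum search openoffice
  # the actual applications, buried in the noise
  sudo yum install openoffice.org-writer openoffice.org-calc openoffice.org-impress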

I’ve previously said that I don’t really understand why all of these types of installation tasks are grouped together in the same application. For example, my belief is that font installation is much better served by a Google Fonts-alike web service which can be used to browse and try fonts live: you’d then hit an “Install” button or something, which would trigger the excellent PackageKit. However, many people remain unconvinced, even in the face of actual usage numbers.

It’s the same with this. We need some kind of application store. I personally don’t see why this should be conflated with the package mechanism: packages are the how, the app store is the what. It makes no sense, to me, that all packages be treated identically: for example, if an application can talk to the PackageKit interface for its plugins/extensions, there’s no reason to have that stuff in the app store at all. Similarly, just because something isn’t a GUI application doesn’t mean it should be excluded: why can’t we have a “Python Developers’ Corner” in the store to browse libraries and things? That’s what I want as a user (yes, developers are users too).
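
To make that concrete: PackageKit already exposes a session D-Bus interface which any application can call to have its plugins or extensions installed on its behalf. A minimal sketch from the shell – the package name is hypothetical, and imagine the call coming from inside the app rather than a terminal:

  gdbus call --session \
      --dest org.freedesktop.PackageKit \
      --object-path /org/freedesktop/PackageKit \
      --method org.freedesktop.PackageKit.Modify.InstallPackageNames \
      0 "['someapp-extra-plugin']" "show-confirm-search,hide-finished"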

This isn’t going to get fixed quickly, and sadly I think efforts like Ubuntu’s Application Store don’t solve many of the problems: if your application store is just a majorly cut-down view of the package database, I think you’re doing it wrong (for one thing, it doesn’t scale as you add back in all the packages you cut out).

At some point over Christmas I might have a go at attacking this problem; a lot of the pieces needed are already in place, and PackageKit is more than capable of installing things from the web. Having an actual prototype that people could use would do a lot to illustrate the idea: even if the prototype itself were never adopted, it would help push things in a better direction.

bkuhn on Canonical

If you haven’t read it already, Bradley Kuhn’s take on where Canonical are aiming is deeply interesting. There is bound to be push-back against the article, because it does connect a few distant dots, but I found it particularly interesting because it’s apropos of a recent discussion on Surrey LUG’s mailing list – which has no public archive, so I cannot link to it, but the gist of the thread was a discussion of the various approaches Canonical takes to getting income.

I personally disagree that it seems like Canonical are attempting some kind of open core strategy: I think it’s arguable that they’re seeking to leave that avenue open, but for them to sell proprietary software products at any point in the next few years would destroy pretty much their entire brand image. In particular, Shuttleworth has made extremely strong statements in the past about never charging money for software – and it seems pointless to be proprietary if you’re giving the software away (well, for the most part).

However, there are a couple of things Kuhn brings up that I find pretty depressing, particularly the admission that Canonical is still not profitable and would take “some more time” to achieve that. By 2007, some three years into the Ubuntu project, it had already cost some $25 million, and of course we’re another three years on since then – given the growth of Canonical, a total spend of $50 million to this point would seem a particularly conservative estimate.

What I never understood to begin with, and still fail to understand, is why Shuttleworth has hamstrung the project from the start by promising there would never be a pay-for version à la Red Hat. My thinking in this regard is pretty simple: when you’re paying for all this development, you have to recoup those costs somehow. When you’re providing support or doing all these other activities, you leverage that development, but you incur further costs which then also need to be recouped.

Pretty much everyone else who commercially develops free software has some method of recouping costs in place. Red Hat, as an example, sell security updates, support, and the OS itself. Nokia invest in software to sell their phones; similarly the likes of Intel, Oracle, and all sorts of other businesses invest in software to enable hardware sales. Google invest in the likes of Android because it gives them a huge amount of control over the mobile market. And of course, there are plenty of small players too, all of whom have some kind of strategy to ensure their development activity is not some loss-making exercise in the grander scheme of things.

If Canonical’s commercial model confuses me, though, there is one thing which is abundantly clear: for that company to fail would be incredibly damaging. It would be difficult to imagine a worse scenario than seeing Canonical at ten years old, still burning millions of dollars each year and still not making a profit.

Of course, Canonical likely has that luxury; previous break-even targets have been set and then moved for whatever reason, and the vision still seems to be about growing hard, branching out and becoming some “cohesive whole” (which is avoiding the issue). So it’s not likely that they’re going to fold in the next five years. But in a way, that’s almost worse: much like a Government-funded entity, there is this insulation from commercial reality which ensures that “the vision” will be followed. There must be an opportunity cost there: room for a new Ximian or a new Mandrake, and fresher, more innovative visions that are currently squeezed out of the market.

Philip Green report on Govt spending; UK Free Software

For a number of years, the discussion amongst UK-based free software professionals has been about how to do more with Government. The most active discussions happened around the time the UK open-source, standards and re-use policy was developed (around 2004; it has been updated since): it wasn’t great before, and it hasn’t improved an awful lot since.

In very similar ways, people have bemoaned the inaccessibility of Government procurement processes to micro/small businesses, and it’s basically the same problem – the “big guys” tend to be pushing proprietary solutions.

So, it was with some interest I read Philip Green’s recent report on Government efficiency. I had some fears about what I was going to read; “efficiency” generally means driving costs down hard, which usually means bulkier contracts with fatter discounts. And that’s what we’ve got.

People who don’t work with UK Government on IT projects probably don’t have much of an idea of the state of play. Right now, discretionary spending within most of it is limited to contracts up to £10,000 in value. Above that, you need more central approval, and there is an absolute bar on any project worth over a million. In my book, that’s basically micromanagement.

“Government must leverage its name, its credit rating and its buying power”

This is the central theme of the report. Basically, Government could be buying stuff in bulk, or doing things for itself, in areas that it currently is not. And by Government, I really do mean “Government”: right now, the various Departments often do things for themselves, and by business standards each department is a pretty large enterprise. Green is saying that Government, as a whole single unit, should be doing things.

This is a knock-out blow to free software and small business; it’s difficult to see how any SME will be able to supply Government with anything but the most specialist of services. What’s more, the principle of multi-vendorship enshrined (but rarely acted upon) in the open-source policy is more or less going to be a thing of the past.

Just major suppliers?

You see, here’s the problem. The Green review is mainly looking at the major suppliers: contracts worth many millions of pounds. However, that doesn’t mean it isn’t going to affect small contracts.

A good example is the NHS. For the last 12 years, there has been an NHS-wide agreement for Microsoft software. Who knows how much this has cost – obviously hundreds of millions, but equally obviously at a vastly reduced rate. Not only was the NHS able to deploy pretty much any Microsoft software it felt like, NHS staff were virtually free to take it home with them (ok, they had to pay £9 for Office). For those unfamiliar with the NHS: it employs, one way or another, around 7% of the UK workforce. That’s a lot of Office installs.

Are NHS employees really likely to become Fedora users if it costs them a few tens of pounds to get the latest Microsoft operating system and office suite? I’m not sure LibreOffice is quite that compelling, and frankly people spend more money at the pub on a Friday night.

It’s the network (effect)

Once central IT platforms are bought and paid for, Government-wide, that puts a huge and obvious limit on what applications can be deployed on them. No free software desktop, that much is obvious. But what about free software desktop apps? Having seen the likes of Fujitsu deploy Government desktops in what is commonly called “lock down mode”, I don’t think it likely that many people are going to be Firefox users in that environment.

And what about delivering software as a service? Many people have touted that as a way of doing an end-run around the desktop lock-down. But sadly, the picture is probably worse there, if anything. Government will have centralised hosting as an IT service, and they will expect their services to be hosted on their platform for reasons of security and reliability.

That platform is not going to be an Apache-based platform: being realistic, it will either be some grandiose Java application server (Websphere?) or it’s going to be Sharepoint. Which will mean any application you want to sell to Government will have to run on one of those platforms.

Whither the future

If Government acts on the Green report, and implements anything close to the full recommendations, what we will see in the next five to ten years is a central Government IT platform which is homogenised to a much, much greater extent than we have ever seen. Networking and telephony may be provided by a single central Government unit (Green called it “GovTel”), and computing is likely to be outsourced to a single organisation: an IBM, or a Fujitsu, or similar.

No more WordPress running the Number 10 website. No more small enclaves of innovative, independent projects trying to blaze a trail in Government IT. The future is going to be breath-takingly corporate, bland, and hostile to small business. Some readers may remember the APLAWS project: funded by Pathfinder, this was almost a fluke project which showed just what free software could do. It brought together some big open-source companies, charities and local Government, and developed a mostly-free CMS that could have been further developed by any local authority that cared to look at it. Sadly, although still in use, project development seems to be dead, and in this new Green world it would never be procurable.

It’s a very sad picture. Yes, we all want Government to be “efficient” and cost-effective. But there is an opportunity cost here. The opportunity cost is the small businesses and the people innovating in IT, whose products will effectively be beyond UK Government’s reach.

The NHS has also shown us a picture of what happens when these large agreements come to an end. Mass license compliance activity, which never had to be performed before. Large numbers of staff being asked to pony up £100 for an Office suite they already have but no longer have the rights to use. They’re expensive agreements to get out of.

Philip Green has managed to reduce the IT procurement problem to little more than finding the cheapest box of tea bags. I think it’s a poor way of thinking about the problem, because IT moves on so quickly. Centralising and planning IT to the extent being recommended (and this from a Conservative Government!) will remove any and all flexibility to respond to change, and impose an IT inertia on the country as a whole.

Sad times.

LibreOffice

LibreOffice is a wonderful, welcome, huge step forward for what was OpenOffice.org. People are complaining about the name and stuff, but really, that doesn’t matter: what does matter is that this unfetters developers to do anything from firing off drive-by patches to undertaking more fundamental work, and gets the project going at the speed it deserves.

Lots of people have talked about the direction the project ought to be going in; I fear to some extent many of those people think that OpenOffice.org is a better app than it actually is. I’m a heavy user of it, and there’s a lot of it which is really hard to recommend to people. As an example, I’ve been working on a document that’s some sixty pages long over the past week: every time it saves, it takes an absolute age, and the entire suite – including spreadsheets or any other part of it you have open – just locks up entirely while that happens. And because it auto-saves every now and then (a necessity, sadly, because it’s not rock-solid), every ten minutes there’s virtually a minute of down-time while it saves the various files. And yes, this was an OpenDocument file – which should be the best-performing backend (though I haven’t checked if this is actually true!).

However, with all the faults the suite has, it is still extremely valuable, and root-and-branch fundamental change to the codebase is exactly what it doesn’t need right now. I can’t see a Firefox-alike style project doing much other than throwing away what makes it so worthwhile: of course, everyone could name features that could be junked (“Who uses StarBasic?!”), but for every person who thinks a given feature is basically worthless there are likely to be at least two people for whom it’s crucial.

For what it’s worth, I think these points are the most important, though:

  1. OOXML support. Yes, OpenDocument is the native format, and should get the most love. But the output from Microsoft Office should come a close second. Let’s not kid ourselves: interop with Office is the key reason people can use LibreOffice at all. This feature is absolutely, 100% crucial for large numbers of users.
    Support is already in there, of course, but it needs to be better. There’s no technical reason LibreOffice can’t support these files as well as Office.
  2. User interface. It’s not as bad as people make out, for the most part, but some bits of it are clearly stinky sewers. Setting custom colours being the obvious pain-point which hits me every time. There are key bits which need significant work.

Other stuff, like resource usage, speed, etc. will come naturally with time, one hopes, and in most ways the suite is basically usable now. New features are also definitely required – online collaboration being something which comes immediately to mind – but with an easier route into the codebase which doesn’t require libraries of specifications to be filed before work begins, it’s much more likely that people will commit to starting such features. OpenOffice.org 3.3 is a pretty lame release feature-wise – most users won’t identify anything new beyond the “fast find” feature (which on its own is hardly rocket science) – and in a sense, it really doesn’t matter what features LibreOffice starts to deliver. It just needs to deliver new features consistently.

This has, of course, highlighted Oracle’s up-to-now lack of participation in the community. Presumably the web-based Oracle Office will be a tremendously exciting product, but it’s on a totally different track. This also puts IBM at something of a crossroads: OpenOffice.org itself appears to be on life-support (and is potentially dead, depending on what Oracle decide to do). It’s conceivable that they could ship a version of Symphony on top of LibreOffice, although one would suspect it would be harder than what they do now. I hope they take the plunge and donate large parts of Symphony to LibreOffice – though I find it difficult to believe that they might.

In the same way that big businesses like Oracle and IBM ought to step up to the plate, I think it’s also incumbent on pretty much everyone using LibreOffice to contribute. Maybe that contribution could be in the way of templates, graphics, documentation improvements and other non-code stuff. Hopefully, more excitingly, it will be a lot more straightforward to download LibreOffice, compile the bits you’re interested in and make code changes that can be sent upstream. I suspect for most people this will, initially, be little more than correcting obvious faults and one-liners, but it’s certainly something I’m going to have a bash at again, and hopefully the success of the project will encourage many other people to do the same.

Beyond dogfood

I don’t usually like to do a me-too post, but mizmo is right on again with her thoughts on jcm’s post. I raised a similar question at a town hall meeting earlier this year – basically, asking if Fedora is really suitable for day-to-day use as a primary desktop. My personal situation is much like some of those who answered: it works for me, but I would find it difficult to recommend. There was one part of someone’s answer which particularly rang true for me, which just happened to be mizmo’s (how convenient!):

“It makes me really sad to have so many friends and family members who are highly technical and opt not to use Fedora. I know many Fedora contributors whose families use Windows, OS X, or other distros. I feel very strongly this is a problem.”

This is absolutely key. Fedora has to have a role, and I think too often the “<x> isn’t stable” and “stuff is breaking” arguments get batted away with “This isn’t RHEL/CentOS”. Without being strongly usable for something, fewer people will be tempted to use it.

So I would like to make my technical feature plea, along similar lines to the comments about updates. Right now, Fedora revs extremely often, and isn’t supported for long. Yet, the major version upgrade story is really pretty sorry – even with preupgrade. It’s not straightforward to upgrade to a new version and quite often packages will break.
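
(The mechanics themselves are simple enough; assuming preupgrade is installed, the CLI flow is roughly:

  sudo preupgrade-cli "Fedora 14 (Laughlin)"

– it’s what happens to your package set on the other side that’s the lottery.)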

If we’re keeping up this pace, I think it should be much more incumbent on Fedora as a whole to make major version upgrades clean and easy. Packagers should be testing this constantly. I saw figures recently about the versions of Fedora being pulled by yum updates, and they didn’t make pretty reading. If Fedora is about being “First”, we have to bring users with us.

Fedora people repos & Sparkleshare

Recently a new system has been added to people.fp.o, the ability to host yum repositories. It’s not an equivalent of Ubuntu’s PPA system by any stretch of the imagination, but it’s another useful facility to have available.

I’ve been testing this over the past few hours with a new package: SparkleShare. For those who’ve never heard of it before, this is essentially a little tray app that synchronises a local directory tree with one held on a remote server: you can think of this as being very similar to Ubuntu One, Dropbox, iFolder or similar. However, what’s interesting here is that this is built on top of git: so SparkleShare essentially automates the commit and pull/push process, handling it invisibly for you, while still giving you a pretty solid system underneath.
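
In other words, it is driving roughly this git plumbing for you every time something changes – a sketch only, with the directory name illustrative and the real implementation differing in detail:

  cd ~/SparkleShare/myproject       # the watched directory
  git add --all                     # stage every local change
  git commit -m "sync"              # record it
  git pull --rebase origin master   # pick up everyone else’s changes
  git push origin master            # publish yours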

Now, SparkleShare isn’t really ready for Fedora itself yet – it’s still under a large amount of development, there are funky bugs in it now and then, and it’s likely going to change constantly. However, it’s also useful software already, and is something I want to try out on a number of my machines, so the new people-based repos were the obvious candidate.

If you want to try it too, you just need to enable my new SparkleShare repo:

sudo curl -o /etc/yum.repos.d/fedora-sparkleshare.repo \
    http://repos.fedorapeople.org/repos/alexh/sparkleshare/fedora-sparkleshare.repo

And then you can “yum install sparkleshare”. Fedora 13 and i386 only right now – I’ll have x86_64 builds up later today, and I’m going to be doing Fedora 14/Rawhide as soon as I can get builders up for those two.

Thunderbird: Fedora & the future

It’s only been a couple of months since I last wrote about the future of Thunderbird, but I’ve been thinking about it again recently. The immediate issue which prompted me to write this was the disturbing news that a potentially bad crasher bug in Thunderbird has gone unfixed in Fedora, because of sensitivity over trade marks, even though a patch was submitted about a month ago. Although some users on the devel list appear to be dealing out their usual standard of hyperbole on this, it is an extremely difficult position to defend: who knows if the maintainer would have actually released an update by now, but the immediate problem is the mark.

The company I work for moved offices recently, and this also set me thinking about Thunderbird again as we update our e-mail systems. As well as an update breaking one of the add-ons we rely on, there are still basic features missing from this mailer which we need as a business, and doing things like adding good-looking signatures to e-mails is bizarrely difficult and user-unfriendly.

We’re also in the position of still running on Thunderbird 2. We’re there because it’s a reasonable little client, but Thunderbird 3 is not: it comes with bad defaults which need to be switched off, and the search is irritatingly difficult to manage. Every now and then I search and, rather than the useful folder filter, I get the craptastic separate search tab, which doesn’t work because I’ve turned off Gloda.
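
(For reference, Gloda is the global search database, and it’s controlled by a single pref – flip it in the Config Editor, or append it to user.js; the profile directory below is only an example:

  echo 'user_pref("mailnews.database.global.indexer.enabled", false);' \
      >> ~/.thunderbird/xxxxxxxx.default/user.js

– turning it off also stops the indexer churning away at the disk.)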

Thunderbird 3.1 is supposed to be an easier upgrade for Thunderbird 2 users. Two problems: first, I don’t really believe it, and second, there are now no new Thunderbird 2 releases planned. So we’re now on an unsupported product with only an upgrade to a product we’re unhappy with available.

What would moving to Thunderbird 3 involve for our organisation? Well, primarily, it’s a support issue. We’re distributed (as well as having an office), so we would need to be giving users some kind of training so they could support themselves on the new software (avoiding all the inevitable “Where has button X gone?” type support calls), and ideally we’d want some distribution mechanism so we could control the setup of Thunderbird for our users. Of course, no such stuff is readily available – you can’t even buy it from Mozilla Messaging, the business set up to develop Thunderbird. This seems unbelievable to me; we can’t be the only business who’d be willing to pay for a business-ready Thunderbird distribution.

It’s now getting to the point where we will be making decisions. I can guarantee that we will be testing Evolution on Windows, to evaluate its suitability as a cross-platform client. My misgivings about this before have again centred on commercial support and reliability; however, Evolution has a much, much better business story, a clear development roadmap and a solid history of releases.

Evolution would also be an easy sell to our users, with the enhanced address book and calendaring support. It doesn’t look amazingly Windows-native to me, but that’s potentially quite a small problem – the main thing is testing that it’s reliable.

I would have never thought Evolution would even have been a contender on Windows, but to be honest if we’re not in a position to receive commercial support for either suite, the choice becomes a lot more interesting – and obviously for our Linux users, it’s stable and has a great integration story.
