Alex Hudson

Thoughts on Technology, Product, & Strategy


Speculation on Google’s “Dart”

Just yesterday, people jumped on the biographies and abstract for a talk at the goto; conference: the keynote will be Google’s first public information on Dart, a “structured programming language for the world-wide web”. Beyond knowing a couple of the engineers involved – which allows a certain amount of inference – there’s also some speculation that Dart is what the “Future of Javascript” email referred to as “Dash” (this seems entirely possible: a Dash language already exists, and Google previously used ‘Dart’ for an advertising product but have since stopped using that name, potentially to make way for the language).

I thought it would be interesting to have a look at some of the details of this new language. One thing seems quite certain: Google’s Javascript engine, V8, is going to support it, because Dart is going to target client-side application programming to begin with. V8 is, of course, very popular – it’s in Chrome, it’s in Node.js, it’s going to be put into Qt. However, it hasn’t really been brilliant as a standalone project (witness the problems getting Chromium into Fedora, as an example) and the addition of Dart will almost certainly make this worse.

So, what else do we know?

Compiles to Javascript

It seems likely that the language will, at least for a proper subset, compile into Javascript – a lot like Coffeescript does. Personally, I cannot stand Coffeescript, for the same reasons I really don’t like python, but there is an obvious win to this approach: you get backwards compatibility with existing systems and, usually, a method of interacting with existing code and libraries.

I suppose the first question is, how different to Javascript will it be? It will almost certainly be object-oriented, but that need not imply prototypical inheritance – it could be that the Javascript compiler will do some fancy trick with objects to make things appear more classical. Coffee does this to a large extent too, and I think we’ll see a similar approach. I doubt much of Coffee’s syntax would be copied – it’s almost Perl-like in its terseness sometimes – but I think there will be a similar approach to the object model.
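As a rough sketch of the kind of trick I mean – this is purely hypothetical output, not anything Google has published – a compiler for a class-based syntax could lower it onto ordinary prototype-based Javascript along these lines:

  // Hypothetical compiler output: a "class" with single inheritance,
  // lowered onto plain constructor functions and prototypes.
  function Shape(name) {
    this.name = name;
  }
  Shape.prototype.describe = function () {
    return "a shape called " + this.name;
  };

  function Circle(name, radius) {
    Shape.call(this, name);   // the "super" constructor call
    this.radius = radius;
  }
  Circle.prototype = Object.create(Shape.prototype);  // wire up the inheritance chain
  Circle.prototype.constructor = Circle;
  Circle.prototype.area = function () {
    return Math.PI * this.radius * this.radius;
  };

  var c = new Circle("wheel", 2);
  console.log(c.describe());       // "a shape called wheel"
  console.log(c instanceof Shape); // true - still prototypal underneath

To the person writing the class-like syntax it looks classical; underneath it’s exactly the object model Javascript already has.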

There will be other differences. Javascript is relatively typeless; I suspect Dart will have types of some sort, at least optionally. The scoping rules will probably be a bit different as well – the “let” keyword has never caught on widely, but some level of block scoping (as an example) would be an obvious improvement.
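To show what I mean by block scoping – plain Javascript again, and bear in mind “let” only works in engines that support it (at the time of writing, essentially Mozilla’s) – compare the two loops below:

  // With var, the loop counter is function-scoped and leaks out of the block:
  function countWithVar() {
    for (var i = 0; i < 3; i++) {
      // ...
    }
    console.log(i); // 3 - still visible here
  }

  // With let, the counter is confined to the block it was declared in:
  function countWithLet() {
    for (let i = 0; i < 3; i++) {
      // ...
    }
    console.log(i); // ReferenceError: i is not defined
  }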

Not just a language

I think it’s relatively clear from the “Dash” discussion that this isn’t just going to be a language: templating and possibly even MVC will be available alongside it, somehow. I expect to see some interesting things here, actually – there might not be much impact on the language (although a way of embedding HTML templates might be handled specially) but I think it will be closely aligned with these key tools. The Javascript world has been doing some interesting stuff – see Backbone.js and Knockout.js as two obvious examples – but it will be really interesting to see how much “platform” is put into Dart.

There is a worry here, of course, that it’s too restrictive. Knockout is actually a great example: it’s an MVVM architecture, not MVC, and for a lot of jobs I’ve actually been really, really impressed with it. It’s simple, straightforward, but most of all productive. It would be a shame if you couldn’t do something similar in Dart, but I would bet you can. Binding data onto a web page is so fundamental, so basic, that I really think there will be some interesting stuff there.

Binary Dart?

I’m not really sure about this, but I’ll chuck it out there anyway: clearly, writing Dart in a text editor is going to be fine. However, I also expect that there will be alternative delivery mechanisms. Right now, people use tools like Closure to “compile” Javascript into a more compact representation. Clearly, if you’re starting with a new language, you could specify a binary format right from the start. This would also sit beside NaCl/Pepper quite nicely, and would allow multiple resources to be included in a project without requiring multiple file downloads into the browser.

Google are going to be focussed on deployment of large, highly-interactive apps, I think – so although the small page enhancement stuff would still be on the table, really I think Dart is going to be about writing GMail and Google Docs. In that context, being able to wrap up everything into a nice deployment package makes a whole heap of sense.

A month to wait?

Sadly, I don’t think we’re going to know too much more before goto;. I had a look around the V8 source, and there aren’t really many clues in there as to what’s coming. If they’re offering a compile-to-Javascript option, that might be the only thing released at first – so Dart would effectively be a standalone compiler only to begin with, growing features to target specific engines later on.

Who can program?

Over the past couple of weeks, I’ve been pondering the above question for a number of different reasons. For people who really study programming, like I attempt to, there are a number of claims/myths/legends/tales that are commonly held about people who cut code for a living, such as:

  1. some programmers, the “alphas”, are as much as ten times more efficient than the common programmer;
  2. there are people who “get” computers, and those who don’t. Cooper splits these into “humans” and “homo logicus”. Those who don’t grok computers are destined to never be able to program;
  3. there are people who are paid to cut code, and who simply can’t – they rely on auto-complete IDEs, cut’n’paste library code, and so on.

For the purposes of this post, I’ll distinguish between three groups: the “goats” (people who cannot code, at all), the “sheep” (people who code, perhaps professionally, but poorly) and the alphas. Sheep and alphas are collectively referred to as coders.

Saeed Dehnadi’s PhD homepage cropped up recently, on Hacker News I think, and mentions some tests which have had varying degrees of success in differentiating between goats and coders. Somewhat surprisingly, it’s claimed that it’s possible to administer tests to people before they have been taught to code and still determine whether or not they will be able to code. The tests are disturbingly simplistic, and although they involve (pseudo-)code, they’re really designed to determine the mental models people apply to the problem – in particular, whether people apply a consistent model.
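To give a flavour of the questions – and this is a paraphrase of the style from memory, not the actual test wording – they’re mostly tiny assignment puzzles along these lines:

  // After these statements have run, what are the values of a and b?
  var a = 10;
  var b = 20;
  a = b;
  // Untaught candidates give all sorts of answers here - a becomes 30,
  // a and b swap, b becomes 10, and so on. The claim is that it matters
  // less which answer a candidate picks than whether they stick to the
  // same model across all of the questions.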

I have to say, I remain a little bit sceptical about all of that. It reminds me of a question a lecturer once asked our class while working on set theory: “Why do people refer to ‘∀’ as ‘upside down A’, but refer to ‘∃’ as ‘backwards E’? They’re both simply rotated π radians”. I remember thinking to myself that he obviously had little clue how the human mind approaches novelty – and in particular attempts to label things with a communicable tag. “‘A’ transformed by a rotation of π radians about its centre point” doesn’t have quite the same ring to it. But maybe there was a point in there, somewhere, about finding single consistent models that can be re-applied.

It’s really tempting to think about this in terms of tests. This is, after all, one of the reasons vendor certification programmes came into life: to reassure employers of whatever description that the person they’re hiring to do a task with a specific product really is able to do what they say they are able to. And it does work, kind of, after a fashion. If what you need is within the narrow scope of the studies of the certification, you can be generally assured that the person does at least know the theory. However, for programming, this is a bit different – frankly, there is no such thing as “narrow scope” when you’re talking about cutting code. Some people, like Paul Graham, go as far as to say questions like “how do you pick good programmers if you’re not a programmer?” are basically unanswerable (mistake 6 in his classic list of 18 mistakes that start-ups make).

It’s also difficult to talk about how you can tell the difference between sheep and alphas (let’s pretend, for a moment, that there is no spectrum in between there – that’s probably not true, but it might be a somewhat valid simplification). How many people read Reg Braithwaite’s discussion of using FizzBuzz during interviews and didn’t recognise the picture he paints? Let me repeat here his main thesis:

“199 out of 200 applicants for every programming job can’t write code at all. I repeat: they can’t write any code whatsoever.”
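For anyone who hasn’t met it, FizzBuzz is about as gentle as a programming test gets: print the numbers from 1 to 100, but print “Fizz” for multiples of three, “Buzz” for multiples of five, and “FizzBuzz” for multiples of both. A minimal Javascript answer is only a few lines:

  for (var i = 1; i <= 100; i++) {
    if (i % 15 === 0) {
      console.log("FizzBuzz");   // divisible by both 3 and 5
    } else if (i % 3 === 0) {
      console.log("Fizz");
    } else if (i % 5 === 0) {
      console.log("Buzz");
    } else {
      console.log(i);
    }
  }

The point is not that this is hard; the point is that a startling proportion of applicants apparently cannot produce something like it.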

He refers back to another classic, Joel Spolsky’s thoughts on the hiring process from back in 2005. So, what are all these people doing applying for jobs that they are effectively incapable of doing, and how many of them actually end up being hired and contributing to the sheep pool? It’s difficult to know exactly why they are applying, but part of the reason has got to be the modern tools available to programmers: the editing environments and IDEs, and the documentation and programming resources available. Some coders will have no idea about the types of tool I’m talking about, having never used the likes of IntelliSense and visual designers.

Let me give you a clue. Take a long hard look at this abomination. Just to be really clear about what’s going on there, they have a tool which can effectively write code based on semantic guessing – the words you’ve written, the types of the variables involved, the context of the code, that kind of thing. This is like a builder using a brick-laying machine to build a house without any intervening thought about such trifles as “how strong should this wall be?”, “am I putting this wall in the right place?” and even “what if I need to put a door in here later?”. Simplifying the coding process is an admirable goal, and in fact has been an ongoing process since we wrote the first assemblers to translate mnemonic instructions into machine code. However, the ideal has always been to raise the level of abstraction to the height at which it makes sense to solve the problem at hand. You don’t want to write an entire database system in assembler – it’s not a high-level enough language for that – but you may want to dip into assembler in specific parts. Sometimes the abstraction will hide the details of what’s going on underneath from the programmer, and occasionally that will annoy the programmer. In general, you do not want to be writing large pieces of code where you actually have no idea of what’s going on – an alpha would never, ever stand for that; a sheep, on the other hand, would.

Jeff Atwood has another collection of thoughts in his 2007 post about coders who can’t code. The conclusion he reaches is relatively natural, based on his references and the above: you ask people to actually write some code. Amazingly, surprisingly, gob-smackingly, this still isn’t happening – even today. I could name, but won’t, places which are hiring programmers based on not much else but their claimed experience. I know people who’ve been through such processes, and have seen such myself. Do you need to ask a question of each candidate, make them do something original? No, of course not – you could even simply ask for a sample of code and take it on trust that they wrote it. My experience on the hiring end is that it’s actually quite easy to tell whether someone wrote a piece of code or not, and the most telling piece of information is not the code itself, but the problem it solves – seeing the type of code that a candidate thought was appropriate to display actually says an awful lot about their tastes and sensibilities.

If I was doing this again right now, what would I do? Probably about the same as I did last time: ask people who turn up to do some simple tests. It’s shocking how often people with an otherwise interesting CV totally bomb even the simplest request, but it’s nothing much more than a filter. Trying to determine higher-level skills is fundamentally more difficult, because the more skill you’re attempting to detect, the more parochial your test necessarily becomes – to the point where you’re filtering out people who simply don’t have the exact same background and knowledge set as yourself. Much more important is the capacity to learn and apply problem-solving techniques – part of me thinks that asking candidates to write code in a language they’ve never seen before might be an interesting test, but it would be fantastically difficult to pitch it at the right level.

I’m going to end with a link to a discussion on StackExchange about how to detect a passionate programmer. I’m not sure I agree with much more than about 50% of it, but there are a lot of ideas in there for predictors of programming expertise. Interestingly, there are even some companies out there who claim to be able to help out in the hiring process to clear out goats and sheep. I have a lot of sympathy for people like Eric Smith who are rolling their own tests to attempt to judge skill. I have to say, though: it really shouldn’t be this hard.

Short thoughts on the riots.

Last night, we decided to order pizza – we don’t do it often, it’s lazy but sort of a treat. However, out of the three local well-known places, only one was open: the other two had shut down early. Now, we don’t live in London per se, but Croydon (where there were major fires and a member of the public was shot just a night ago) is only a few miles east, and Clapham a few miles north. Sutton, the local town, had some windows broken by youths, but to be honest this isn’t exactly exceptional behaviour in Sutton.

What happened when the pizza came was slightly surreal. The delivery guy was very nice, but asked me to wait at the door until he left. Why? Because there was a group of three kids just down the road, and basically, he was afraid of them. Just to be clear, most people buy using their cards these days, so this guy wouldn’t be carrying much money and didn’t have any other pizza on his bike. He wanted to make sure that if something did happen, there would be someone watching. After he left, I did begin to wonder whether this pizza chain really did make the right decision to stay open. I don’t think he was actually in much danger, but to work in an environment where you believe yourself to be in danger is probably just as bad.

So what’s going on here? I think this has been a long time coming, if we’re very honest. Anti-social behaviour has been a significant political issue for the last fifteen years, and you only need to look back to the student fees marches just last year to see people rioting in London in pretty much the same way. The difference, of course, was that under the flag of another cause, people assumed that this wasn’t a more general problem.

No-one really knows why this is happening. This isn’t a sudden thing, this has been years in the making, and the cause or causes are probably no longer determinable. My personal favourite reason is education: I don’t think this country has ever had an education system which hasn’t failed a significant minority of the young, but in years gone by those failed by the system have been able to get jobs in factories and other places which had a wage that could support a family. Those jobs don’t exist in the same way any more, and they certainly don’t pay a living wage.

What is disturbing, though, is what comes out of the mouths of these people when you listen to them being interviewed. I’ve heard a variety of interviews with people across the country who’ve been rioting, and the message is basically the same no matter who they are. They hate the police (the “Feds”), but this is just a proxy for authority of any sort. They don’t care who gets hurt, they don’t identify with any particular societal structure except for maybe their own gang, and they see an easy opportunity to gain at low risk of getting caught.

This actually reminds me of various pieces of stories from Jon Ronson’s “The Psychopath Test”. I’m certainly not diagnosing these people with antisocial personality disorder, but good lord they certainly display many of the traits. You just need to look down Hare’s list and they tick practically every box at least once over – the main thing that they’re not doing that a real psychopath would do is the lying and manipulation to cover their tracks. Right now, many of these kids don’t feel the need to even cover their faces from the cameras which they know are there.

The radio told me this morning that three men had been run over, attempting to protect their neighbourhood. Many people have come out on the streets, the anti-rioters, the wonderful Riot Wombles, and it’s tremendously sad that rioters are not just attacking buildings and cars but also now their fellow man. I expect a number of commentators to draw immediate parallels with Grand Theft Auto, which had already been linked to these troubles by people who believe games influence real life. I think all it demonstrates is the lack of imagination the rioters have. They’re like a kind of Commercial Zombie, roaming the streets for TVs instead of brains, destroying everything in their way. They don’t know how to deal with people personally, all they can do is imitate and emulate others, whether it’s video game characters, movie characters, hip-hop stars, whoever. I don’t wish to dehumanise them, but they just seem to be incapable of rational thought, they just play out TV stories because they don’t seem to know any other way.

Drag Me to Shell, p2.

(this is part 2; you may want to read part 1 before reading this)

I said last time I would go into the file management side of GNOME 3 a bit more, and I think I would be right in saying that a number of people think this is probably one of the weakest aspects of the release.

The first thing to say is, I vaguely surprised myself by the lack of problems in this area. If you read various reviews, the changes in how accessible file management is and the lack of desktop icons are quite often brought up as serious issues, and as a relatively heavy user of the desktop file space I imagined that this would be the thing which would hurt the most.

Turns out, it didn’t. And I don’t think I’m praising GNOME 3 here: thinking about it, what I’m actually saying is that GNOME 2 was an awful lot less good at this than I remember. So, sure, I did use the desktop file space: it became a dumping ground for current work, and I would move stuff in and out of that area because it was handy to be able to jump into files that way rather than have to go navigating for them. But it turns out I didn’t really use that as much as I thought – what I actually do is go into applications and either use their recent-files lists, or go wandering from their open dialog. That, and I open up terminals and find directories manually, and then run software like geany directly from that terminal.

So, what I’m really saying is yes, it is sucky – but no, the suckiness isn’t actually that much worse than it was. Nautilus, the file manager itself, has not really changed that much: it’s a bit more streamlined, and for me it’s mainly useful for burning CDs, general bits of file-area cleanup, and mounting remote file systems, and that’s really about it. There’s nothing in GNOME 3 that really helps me manage this stuff properly, but this doesn’t seem to be much of a step backwards.

I’m not sure how this is going to get resolved going forwards – it clearly does need to change for the better – but what is extremely reassuring is the little peek into the design process on the GNOME wiki. On the whiteboard space is a little page about finding and reminding, which covers many of the problems I’ve talked about here. I’ve no idea if that’s the direction it will actually go in, but what is nice is seeing design decisions being taken in the context of actual research findings of people who’ve studied this area properly. I’m sure there are plenty of kick-ass designers who can come up with nice flows for this kind of thing, but call me old-fashioned: nothing beats a proper study with some real data and genuine findings.

This is a bit short and sweet. The next post I make is going to be a little bit more on the technical side, looking at the Javascript underpinnings of the system. This is extremely interesting to me: I’m a power user, and I love being able to open stuff up, tinker with it and customise it. GNOME 3 promises more toys in this regard than ever before, and my initial dabblings with GJS and the various libraries in the GNOME stack make me think that there is so much more possible in this area. There is a heck of a lot of power in the current GNOME stack, and although some people have written 3 off as a “newbie’s desktop” I think they do so at their own peril: they literally don’t know what they’re missing. Of course, it’s not all great – the documentation for the Javascript APIs is almost totally absent right now – but the potential is incredible.
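To give a flavour of what I mean, here’s a trivial gjs script – a minimal sketch, assuming you have gjs and the GTK introspection data installed, and bearing in mind that these APIs are still moving:

  #!/usr/bin/env gjs
  // Drive the same GTK+ objects the desktop itself uses, directly from Javascript.
  const Gtk = imports.gi.Gtk;

  Gtk.init(null);

  let win = new Gtk.Window({ title: "Hello from gjs" });
  win.connect("destroy", function () {
    Gtk.main_quit();
  });
  win.add(new Gtk.Label({ label: "GNOME from Javascript" }));
  win.show_all();
  Gtk.main();

It’s not much, but it’s the same environment the shell’s own UI is written in.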

Drag Me to Shell, p1.

This is part one of what will be a multipart blog series: how tremendously exciting, eh?! In all seriousness, with GNOME 3 imminent, I thought rather than do a review of the desktop it would be much more interesting to talk about it from the perspective of a relatively hardened Linux enthusiast actually using it within a business environment.

First up, disclosures: I’m an extremely happy GNOME 2 user. I have a copy of Fedora 12 on my Eeepc 901 netbook, with what is now a relatively ancient version of gnome-shell on it, but to be honest the shell is little more than an interface for launching Firefox on that machine. Other than that, I’ve not really used GNOME 3 / gnome-shell in more than passing. I called this post “Drag me to shell” quite deliberately: honestly, I’m happy with GNOME 2. But I’m somewhat forcibly trying to move myself to GNOME 3 full time. (Yes, I have seen the various KDEs and Unitys and other desktops. No, I’m not interested, and this isn’t meant to be taken as some kind of comparison with other systems. Also, I’m running this on what is to become Fedora 15, which has been changing like wind-blown sand recently, so there’s stuff in here that may well change before the final GNOME 3.)

When you log into a desktop for the first time, when it’s freshly installed, it kind of has this blank newness look to it. It’s a lot like new car smell, and it doesn’t tend to last very long as you install stuff, reconfigure the theme, stick files on the desktop and all that – very soon, it’s looking a bit worse for wear and you have to start tidying things up again. Now, initially, getting into GNOME 3 felt quite restrictive: there are a few things which aren’t there, and there are limited controls over how the desktop feels. But, a few days in, I’m already beginning to appreciate this – like a self-cleaning oven, that new car smell hasn’t yet gone away. It will be interesting to see if it does.

Of course, I have already installed a lot of my business apps. I need Firefox, Evolution and LibreOffice like most knowledge workers, and to be honest things really aren’t much different here. Firefox 4 is an awesome upgrade to 3, and although there isn’t much in the way of genuine GNOME integration, this is all meat and potatoes stuff. However, I also do a lot of development – both web development and “real” software – and it’s relatively crucial to me that these other things work well. I’m a keen user of MySQL Workbench for administration and schema design, and I tend to edit my code in either Geany or vim depending on the project.

Thankfully, the workflow with these apps hasn’t really changed much. Like many “enterprise” apps, Workbench presents a tabbed MDI interface with only a single window: it doesn’t really take advantage of any of the new stuff in the shell, but it doesn’t come a cropper much either. The only slight oddity is that the interface now sprouts entirely out of a “Home” tab in the most peculiar fashion:

I don’t think this is a GNOME 3 thing; it seems to be something that Workbench has acquired all on its own. Very odd, but hardly a huge issue in practice – I just end up with an awful lot of grey space on the toolbar.

So that’s normal apps which tend to run as single windows. However, for apps that have multiple windows, I think things have improved tremendously in GNOME 3. I’ll walk through one example – using the GIMP for graphical editing – the terminal/command line being another obvious one.

Now, right off the bat, there are some nice touches in GNOME 3 that either weren’t in earlier versions or that I just didn’t notice. One thing I really appreciated was the hook-up between the local MIME database and the remote list of applications available. I’d logged into my fresh install, but hadn’t yet put the GIMP on – so when I asked it to open a picture, it didn’t have that in the options. But look what I could do from there:

The window on the left is what I got – the “which application should I open?” dialog – but then, at the bottom, as well as showing me other apps I have available on my system, there’s also the “Find applications online” button. Click on that, and I get the new list on the right. Granted, it’s not amazingly pretty, but it basically just gave me a list of applications which fiddle with images, and right in the middle there was GIMP. Click, click, done. Excellent: I didn’t have to go all the way into add/remove applications, do a search there and come back; the workflow was really smooth.

And once you get into the GIMP things have improved again. GIMP is an MDI application as well, but gives you multiple windows, not tabs. You have a toolbar window, another window for layers, and then a window each for every image you have open. With other window managers, I have to say sometimes it was a bit of a pain to navigate around these windows using either the mouse or the keyboard.

So here’s where it starts getting good. Whether by chance or design, the toolbar opens up stuck to the top-left corner, and the layers window stuck top-right. The picture opens up in the middle. This is a really great, sensible default. But then, as you open up further windows (copying image parts between different buffers is a really common workflow for me), the new Alt-Tab switcher really starts to come into its own.

Sadly, I can’t seem to screenshot this (having to hold down Alt interferes I suppose), but as well as the application icon coming up in the switcher, the two windows appear beneath. I can click on them with the mouse if I want, or – even better – I can use the brand new (to me?) Alt-` combo (the key above tab) to cycle through the application windows. Note, it doesn’t cycle to toolbar or layers – just the actual windows I’m operating in, the images. I cannot sufficiently describe how cool Alt-` is and how it’s already becoming part of my muscle memory.

I have a feeling the window management stuff is going to be a little bit make-or-break for some people. I use workspaces a lot, and it’s crucial to me that they work well. Because of this, having to Alt-Up or Alt-Down to switch feels somewhat unnatural – I don’t know why, but the landscape of left-to-right just makes more sense. Now, I don’t yet have a dual monitor setup, and I have to say that as a power user I would imagine it would make more sense to spin workspaces up and down like a slot machine if I had screens side-by-side. But, right now, this does jar a bit.

Next post, I’m going to go into file management. Right now, I have a lot of problems in this area: as an obvious example, I used the “Desktop” folder as a project space. I have scripts which would move files on and off the desktop while I was working on them. Now, Desktop as a folder still exists, but it just doesn’t mean anything any more, and Nautilus basically prefers to be in the home directory. Accessing files through the GUI is now much more of a pain. However, I’m going to work on this and see how I feel after a week – maybe there are things I’m missing, maybe there will be new ways of doing things, maybe right now it just is that bad. Tune in for part 2 if you care.

Fedora 15 & Gnome leadership

It has been an incredibly interesting week in free desktop-land, in that “interesting like a soap opera” kind of way. I guess it’s not news that different participants have different recollections of the same series of events, but it is a bit sad to see it writ so large on a public stage.

Timing-wise, it’s quite co-incidental, but it’s enlightening (I think) to read Mark Shuttleworth’s latest “Internal competition is healthy, but depends on strong and mature leadership” alongside Mark Wilcox’s “What happened to Nokia?” of a month ago. I’m quite clearly going to side with the latter Mark on this one: internal competition is generally not healthy; in fact, in my experience, it can be one of the most damaging things you can do to a group of people. That’s not to say that it’s always a bad thing – to a large extent, it works for the Linux kernel (which I think is a special case in this regard) – but in a community, telling someone their contribution isn’t wanted is a hurtful thing. You can see the hurt if you read what Mark S. is saying; it’s both implicit and explicit. Internal competition isn’t a solution to this, though, of course – it’s the equivalent of taking the disagreement outside and settling it mano-a-mano, swapping one hurt for another. It’s a red meat solution, a particularly macho form of solving problems.

For me, from the outside, Gnome 3 has been an example of a particularly successful collaborative project. If you go to gnome3.org and “Try it out”, you’re not downloading a copy of Red Hat / Fedora there – it’s OpenSuse underneath, built on their rather wonderful Open Build Service. All of the design has gone on in public (Hylke’s list of designers was interesting), and as a long-term gnome-shell user (I’ve been using it regularly since Fedora 12 on my Netbook) it’s easy for me to appreciate just how much work has really gone into this system.

Of course, Gnome 3 is not going to be for everyone. That’s ok, there’s KDE (and others). This is another example of where the competition isn’t really competition: sure, you can run the same apps in both desktop environments, but generally users of one are not going to be immensely happy in the other environment (particularly power users). They don’t compete head-to-head in that sense. I think the same thing happens with distros. Yes, Fedora 15 will almost certainly lose some users because of the default setup. The inclusion of Gnome 3 will irk some, the inclusion of systemd will irk others, and to a large extent it was always thus (pulseaudio, networkmanager, etc.). Again, there are other distros, the Debians, OpenSuses, and even Ubuntus of this world, and to a great extent they really don’t directly compete with each other. Sure, some people move from one to another like they’re changing underwear. I think this is why Fedora can afford to be an adventurous distro, and why Debian can’t really afford to put out bad releases, etc. – each to their own.

So, how much competition is too much? Where does the line lie? I don’t think it’s easy to tell. What is clear is that the amount of drama on this issue way, way exceeds the amount it deserves. Owen Taylor has said that including “appindicators” in gnome-shell is still on the table – so in that sense, there is a bit of a fuss about nothing (of course it’s arguable, and hypothetical, that his opinion has changed on this subject).

What is really needed, though, is a much clearer vision of where the desktop ought to be going. Mark S. has said that the Gnome 3 “trajectory” is wrong and has already failed. So where should it go? Where is Unity trying to go? Mark S. has already given up on Gnome, but talks about having Unity and KDE co-ordinate closely via freedesktop.org.

I think what is sad about this is the focus on the differences by the participants. Looking from the outside, gnome-shell and Unity are incredibly similar, and have been since release. Anyone looking at screenshots can see that; they obviously have the same influences. The Unity 2D system is even implemented in QML, which is Javascript-based just like gnome-shell’s gjs underpinnings. I find it difficult to believe that either one couldn’t be modified, without a lot of work, to look and behave much like the other. This isn’t a technical dispute.

Thoughts on Nokia & MS

As predicted, Microsoft and Nokia are tying a knot of sorts, and all sorts of people are extremely disappointed by this news. I’m an Android user right now, but I’m particularly disappointed because Android just isn’t the free platform it claims to be.

A lot of people are blaming Microsoft and dreaming up “entryism” conspiracy theories. These people are entirely wrong; the decision to go MS was signalled a long while ago by Nokia’s board. Nokia are a $40B business: decision making doesn’t work like that. What is true, though, is that occasionally the free software community gets the benefit of large corporations putting resources into developing software, and occasionally those corporations change their mind later. We celebrate the former and mourn the latter, it’s only natural – I’m a big GNOME fan, but it seems that GNOME Mobile, MeeGo, and the various related stacks are basically dead in the water at this point.

This match-up makes a huge amount of sense for Nokia, but sadly it is going to alienate some of their current user base. I liken this to Bob Dylan’s move to electric guitar: his fan base called him a sell-out, and never ever forgave him. Fundamentally, his music changed beyond all recognition. Whether this is right or wrong, of course, lies in the eye (or ear) of the beholder.

Amusingly, this also means that of all the development platforms for native mobile apps, Mono is now exceptionally well-placed. It can compile native code and make full use of native APIs, and comes in an Apple flavour already, with Android on the way. I guess this is an additional sting in the tail for some, particularly since Qt could have also played that role exceptionally well, but we must acknowledge that the free software mobile development stack is actually in quite good shape right now. We don’t have the right development environment for HTML5 apps yet, though.

It’s exceptionally sad that a really free mobile OS hasn’t come to fruition. OpenMoko took a long time to come to market and wasn’t developing quickly enough; that same verdict has now been given on MeeGo. Android is close, but it is not developed in an open fashion and, in the manner it is delivered, is not a free OS. The “commoditization” argument has been shown to be wrong.

What the move to WP7 does signal is strong integration into Windows and, I guess, Exchange and Sharepoint. People aren’t going to care about the OS in a couple of years. For free software to matter in this space, the focus has to be on integration and apps. It doesn’t matter, after all, what’s running the hardware: what matters is what you can do with it. By struggling for freedom at the hardware and OS level, it’s very easy to lose sight of the bigger picture – and with it, strive for things which are totally irrelevant to 99.99% of phone users.

Open Source Expo 2011

Today was Open Source Expo day. While I had been asked by one of the organisers whether or not I could propose a .Org to exhibit there, I decided against it for a couple of reasons: mainly, because I hadn’t heard very much about the exhibition, was a bit worried about the timing, and questioned whether or not it would be a good use of time for me or anyone else involved in an open source project to attend. To be clear, this event is held over two working days, and is in the middle of London: not the end of the world for me, I could take time off work, but others I know are consultants and would be literally losing money by going. I have to say, I’m glad I didn’t spend much longer there than my lunch break today allowed.

Now, Open Source Expo this year was “co-located” with Cloud Expo Europe. This immediately sent up red flags; while all of the interesting Cloud stuff is of course being driven by free software projects, the cloud expo speaker list is primarily composed of execs from relatively large corporations attempting to flash their with-it credentials to the kids of today, giving rise to various vacuous talk titles such as “Ahead in the Cloud”, “Networking the Cloud: Is it the Journey or the Destination?” (network and cloud? surely you’re kidding!!), and “SaaS 2.0 Open Cloud Computing”. I apologise to the speakers whose talk titles I have used; it may be that your talk was exceptionally entertaining and at the cutting edge of cloud technology, and certainly there are speakers such as Michael Meeks who I would pay to be informed by, but I’m afraid it was a deeply uninspiring schedule.

However, if the “Cloud” side looked a little bit dated, the “Open Source” sister website was even worse. Even as I write – on the 2nd February as the event is actually taking place – there are still significant proportions of the site missing. In particular:

  • “The conference programme will be available here soon.” – well, no. In fact, there doesn’t seem to be an open source conference at all. There are five or six vaguely open-source talks and speakers; I have no idea whether you have to pay for the conference, but I’m certainly not hanging around for two entire days just to see a couple of open source talks.
  • “This year’s speakers list is to be announced shortly, please check back soon.” – again, no. I’m not sure what the difference is between the “speakers list” and the “conference programme”, anyway – perhaps it’s just the same list sorted alphabetically and then chronologically.
  • [Sponsors] “LinuxIT is seeking Open Source projects to include in the .org village, this exciting area of Cloud Expo” – no list of .orgs, and there are serious problems with the village (see below).

So what part of this conference is really open source? It’s difficult to tell. Aside from the .org village, there is no obvious “open source” area, and it doesn’t look to me like there’s an open source conference happening at all. If you’re interested in open source technology, and not cloud per se, there really isn’t much here for you at all.

But here’s the rub. Ok, there are not many open source companies there. But to be honest, there’s not much here at all. On a single level at the Barbican, it’s a very small and claustrophobia-inducing space, and it took me literally ten minutes to walk around the entire show. Large parts of floor space are taken over by the speaking areas, the meeting rooms and the cafe, and there are only 35 exhibitors. It’s difficult to get excited about this, and don’t get me wrong: I want to be interested in the corporate area. I can name four exhibitors of whom I’m a customer. But what wonderful new stuff is there? There’s only so many hosting companies or people offering SaaS applications I can stomach; I want to come to Expo to see what’s coming, not what’s been around for the past ten years.

Back to the village. I repeat what I said just now: “there’s not much here at all”. The .Org area, which in years gone by was one of the most interesting and cutting-edge areas of the Expo, has been decimated. I kid you not, this is a picture of the entire .Org village:

Yes, I can hear what you’re saying from the other side of the internet. “That can’t be it! That’s just a stand.” No, no it isn’t. Front-left you have a PXE project (I’m sorry guys, I forgot your name – maybe it was gPXE – but you had hardware and a polished-looking presentation), and behind them back-left is Ubuntu UK. In the centre is Drupal, and back-right is Debian. LPI are front-right: I’m not totally sure what LPI are doing there, and since their stand was unattended for the 40 minutes I was there I didn’t get the chance to ask, but it seems like LPI took over sponsorship of the .Org area from LinuxIT. I won’t speculate how or why, but it’s an extremely odd match in my opinion.

Compare and contrast this photo with previous Expos. If you haven’t been, the Open Source Expo website helpfully provides a picture from 2009 on the front page. Or compare to this report from 2005. Or this picture of the .Org village from 2008. This is by far the smallest .Org village, by some significant margin. The question has to be asked: was it really right to bill the “Open Source Expo” as (a) existing and (b) co-located with the Cloud Expo? The honest answer has to be no; open source / free software is a fact of life, and if we’re being honest the various web developer shows have twice the amount of “open source” as was available here.

All this said, I have two major complaints. The first is the missed opportunity. Everyone knows that some of the most cutting-edge “cloud” development is happening in the open source world, but this Expo didn’t give that impression – in fact, you’d get entirely the opposite impression. Here are some of the key cloud technologies that were entirely absent from this Expo:

  • Drizzle, MariaDB, and the other lightweight SQL-ish clustered databases
  • MongoDB, CouchDB, and the other lightweight no-SQL clustered databases
  • GlusterFS, GridFS, or any number of the other shared internet storage systems
  • node.js, nginx, or other innovative server systems
  • jQuery, YUI, Dojo, or any other front-end UI development stuff
  • Eucalyptus, libvirt, Cobbler, or any other virtual machine provisioning/hosting system
  • Hadoop or any other map/reduce style processing system

… and that’s just off the top of my head. Also absent were any developers knowledgeable about the above, or any IT managers who have deployed any of the above, or any business people running enterprises on the above. Yes, there are not many people working with the above because the above technologies are quite new: but that is the entire point of Expo. I don’t want to go to Expo to talk about people deploying virtual machines or network storage. Woop woop! Boring alert. Expo is about stuff to come. New stuff, things people haven’t seen/done before. Sorry, but the cloud happened about five years ago.

Second major complaint: timing. Why on earth early February? Linux Expo was always traditionally on in October, which seemed to work reasonably well. Early February, though, is traditionally FOSDEM time, and FOSDEM starts in a couple of days. Are people really going to come to an Expo in London and then go to Brussels a few days later? No. Are people going to visit the London Expo in preference to FOSDEM? No. So what you’re left with is the people in/around London who can’t get to FOSDEM (like me) but who can come in for Expo.

This whole post is horribly negative and I apologise to people who’ve put time and effort into Open Source Expo / Cloud Expo to make it work. I apologise to people who’ve taken time off work or otherwise invested time to come to Expo to exhibit or to be a speaker at the conference; I don’t write this to belittle your contribution. The question has to be asked, though: how do we solve the problems which have beset this Expo, and make it better in future?

Updates to bongo’s storetool

One thing that has always bothered me is that there has never been an easy way of getting data in and out of Bongo stores. Well, no longer (sort of): I’ve upgraded the storetool a bit to make this easier. Some examples are better than words:

$ ./sbin/bongo-storetool -u admin -s _system list /config
0000000000000011 4096 /config/aliases 0
0000000000000013 7 /config/queue 705
0000000000000014 7 /config/pop3 50
0000000000000015 7 /config/mailproxy 100
0000000000000016 7 /config/antivirus 155
0000000000000017 7 /config/global 76
0000000000000018 7 /config/smtp 512
0000000000000019 7 /config/manager 711
000000000000001a 7 /config/imap 70
000000000000001b 7 /config/antispam 61
$ ./sbin/bongo-storetool -u admin -s _system get /config/pop3 out
$ cat out
{
 "version": 1,
 "port": 110,
 "port_ssl": 995
}
$ # do some editing of out
$ ./sbin/bongo-storetool -u admin -s _system put /config/pop3 out

It’s not totally complete yet: we still need commands to remove and move documents, and to fiddle with types, flags and properties too – but it’s a start!

Potential Gna! issues

It looks like someone has been attacking Savane-derived hosting platforms. Recently Savannah has been down, and the page that has now gone up confirms that they had a security breach. Unfortunately, Gna! has a similar code-base, and their site now confirms that they are investigating an issue too.

This has a knock-on issue for Bongo, since we use Gna! hosting. Our download area appears to still be alive, and thankfully we have always signed the releases. You can check a release of Bongo quite simply:

  gpg --verify bongo-0.6.1.tar.bz2.sig bongo-0.6.1.tar.bz2

This should result in a confirmation that the file is correctly signed with the key 9B6913D7, which is available on public keyservers – just search for that ID.

We will check the SVN repos and other parts of the project when Gna! comes back on line to ensure that these have not been attacked; but I can say with some confidence that we have no reason to expect that r1323, the current HEAD, has anything nefarious in it whatsoever. Certainly, if the security breach was as recent as Savannah’s appears to be, there is nothing to fear.
