Alex Hudson

Thoughts on Technology, Product, & Strategy

Category: bongo

First attempt with Gnome Shell

As part of doing some testing of the upcoming Fedora 12 release, I decided to put it on my EeePC 901 netbook. I had a feeling it would be relatively well supported, and indeed the installation was extremely smooth. I was going to also test the work Peter has been doing on the Moblin packages, but instead I got detoured into Gnome-Shell: my Fedora 12 laptop runs Nouveau (no 3D for me!) and my desktop is ATi but not Fedora 12 yet.

In a word: wow. For what is supposed to be a technical preview for Gnome 3.0 which, we should remember, is about a year away, this is a remarkably polished bit of software. The simple black borders looked a bit basic in screenshots, but in use the understated nature of the interface actually looks a bit classy. The Applications menu and particularly the side panel are a bit rough looking, but I don’t have the side panel turned on anyway.

The workspace metaphor is taking me a bit of time to get used to; it’s quite similar to the Gnome 2 system but has slight differences. I’ve had a bit of trouble launching apps in the way I want, but I keep discovering new features to it (e.g., dragging and dropping the app into the workspace) which give me better ways of doing things. Not being able to access applets from the Activities page is also a bit odd, but eh.

What is particularly praiseworthy, I feel, is that even though it doesn’t appear to be designed particularly with netbooks in mind – unlike, for example, Moblin – it does seem to work really well. I’ve put the Chromium browser on there as well, to save screen real estate, and the combination is much better than the relatively-stock Ubuntu I had on before.

Although there aren’t really any new features compared to Gnome 2, everything seems to be placed in a smarter way, and in particular I could see non-technical users using these workspaces and actually “getting it”. I suspect a lot of what’s remaining is really “small scale” stuff – the polish which really makes things sing. I noticed odd problems with windows occasionally not popping up when they should, and stuff like that – or clicking a link and having it appear in a browser on another workspace with no notification. These are the small fit-and-finish pieces which make an application great, though.

Whatever happened to the Chandler project?

Years ago, Mitch Kapor invested large sums of money into the OSAF – Open Source Applications Foundation – to come up with a new personal information manager called Chandler. Having burned through the $8 million it got via various means ($5M from Kapor originally), the OSAF cut most (all?) of its developers loose early in 2009, as I understand it, leaving the Chandler project – and associated projects – somewhat in limbo.

On a whim, I decided to have a look at the current state of play: turns out Chandler 1.0 is pretty much dead, and the work is now concentrating on re-architecting the software for Chandler 2.0. Having been in a similar position myself with the Bongo project, I can sympathise – though I like to think the work we’ve done has been more evolutionary than revolutionary, except where absolutely necessary (e.g., the Hula store not being able to handle concurrency – d’oh!). The pages don’t seem to say why the rewrite is needed, although I seem to remember Chandler suffering from various pretty severe performance issues (i.e., being unusable).

It also reminds me somewhat of the situation with Mozilla Messaging and Thunderbird 3. MozMess has had a few million injected into it, and now their developers are on a spree of embedding “search”, databases, bizarre active folder systems, and “conversation view” into Thunderbird – literally every version 3 release I’ve used (and I use it as my main client) has been a step backwards for me.

Clearly the injection of substantial money isn’t any particular driver of success; indeed, on those (extremely limited) data points the correlation looks, if anything, negative. However, without resources to have people develop, it’s difficult to see how to build up enough momentum to make this stuff happen. It is something of a conundrum.

litl breaks cover – what to think?

So, litl has finally broken cover – I had written previously some first thoughts based on the various tid-bits that had leaked, and I thought it might be interesting to follow up on this. Sadly, it doesn’t seem that I had missed anything particularly big: this is supposed to sit like a photo frame in various places in your home, can connect up to a TV but doesn’t really do any multimedia stuff itself, doesn’t have non-wifi wireless, and doesn’t have a touchscreen. In short, it’s a big Chumby, and very similar to both that and Joggler.

I still don’t get why they went to the effort to design their own hardware. Ok, moving parts are gone – that’s good (although, it has a CF drive? Aren’t those hard drives writ small?). Aside from that, this is just a bit of a bendy netbook. One hopes they didn’t spend too much on the design of that. And the controller: well, ok, it has a wheel and a button, and the Mac-like dispensing with everything else probably makes it reasonably easy to use. But infra-red: oh dear. This isn’t like the Wii wand; you need line-of-sight.

The software interface is obviously a bit different to what’s on the market at the moment, but not that far a departure from the likes of Joggler. It’s built around channels, which is a familiar concept from TV. I’m not totally sold on this: the channel concept is beginning to die a death precisely because it doesn’t work in the modern era, and it reminds me a bit of early computer UIs which were designed around physical desks and files (which is partly why we still call them “desktops”). However, I’m willing to give it a go and see if it grabs me – the proof of these puddings is always in the eating.

What slightly scares me is the use-cases set out on the website. “View photos”. “View photos on your TV!”. “Get the weather”. “Get movie listings”. “Access the web”. Etc. Oh, and “plug it into your TV”. Ok, I can see some point (although, I’m not really feeling this). But not a word on the point I made previously: battery life. Being portable and light is great if you can move it around freely. Will the battery life keep up with that? How will it work? If the litl ends up being tethered to the mains for the most part, suddenly most of the appeal I can think of has flown out of the window.

And then, the big, big disappointment: litl want $700 for this. Assuming a decent exchange rate, that’s still £450 with the best will in the world. Plus they want an extra £10 or so for the remote control, which bafflingly is an optional extra. Looking at what that would buy me on the High Street if I went out this afternoon:

  • £250 gets a 10″ Atom-based netbook with storage built in (yes, it’s a PC)
  • £450 gets me the same netbook but with added mobile broadband, so not only can I take it everywhere around the house, I can take it around the country and still access my stuff
  • £150 buys me a Joggler, which for my money does a lot of what litl does but has a touch screen and is even more portable. It’s not as flexible or powerful, I guess, but I could buy three of these for one litl and I can text mobiles directly from it.
  • £140 buys me a Chumby – less portable, less flexible, more interactive.
  • £340 wins me a Fizzbook Spin. It’s ruggedized, the screen and webcam are more adjustable, it has a handle, and the screen orientation adjusts. It also has the important touchscreen.
  • £300 also buys me various TV media centres which play photos, video, browse the web etc. Sure, it’s stuck under the TV.

In reality, how many of these are actually in competition with litl? Probably not many directly, although Joggler comes very close. It depends what you mean by “in competition”, though. I know a couple of people who leave a laptop around in their living room for accessing the web, so I vaguely understand what litl are aiming at. I’m just not sure this is on target, though, and for all the nice design it’s missing (unless I’m mistaken, which I could be) a touch screen and wireless wand – both of which I think are crucial features if you want people to either pick this up or use it from the sofa. A keyboard is entirely useless in both those scenarios.

So, I’m inevitably feeling a bit disappointed. I believe they’re missing a trick: not because they don’t have a good idea; I think it is a good idea (in most respects, anyway) and it appears to have been executed well. However, I think also that times have changed since this thing was on the drawing board. One excerpt from the “software essay” on the litl site stood out for me:

“Let’s say you have a litl at home, a litl in your weekend home, and a litl plugged into your TV.”

Wow, what an assumption to start from. Are they really aiming this device at people with a second home who can afford to drop $2000+ on three netbooks? That’s not the economic climate I see around me today.

And, I think that sums up my problem with this device. What functions it has seem to be well designed. But, it’s a complete luxury device. It’s not enabling me to do things I couldn’t do before. It’s simple, yes, but when I think about it, simple isn’t the first word that springs to mind: the first word I thought of when I saw this was “decadent”. Decadence of design, of implementation, and of product. That’s not a good thing.

Update 12:45 – there’s a pretty enlightening interview with John Chuang up now too; and the absolute last sentence is probably the most revealing: “We can have a long-term view because we’re self-funded. We know we have a great device, and right now we want to make sure we sell it to customers who are buying it for the right reasons.”

Not just decadent design of a decadent product, but now decadent marketing. Buying a product for the right reasons?! Whoa. This just smells of multi-millionaire play-thing, a la Chandler, OLPC, to name but a few.

Update 13:40 – last update I’m making on this. Engadget, who leaked the hardware first, are not impressed by the price either.

The other point I wanted to make in this post but forgot: the UI is very similar to the Palm Pre, for me, or the Nokia N900. Both of which devices will be (a) cheaper than this and (b) much more useful to me. But then, I guess I’m just not the right sort of customer 🙂

Last point – it appears possible this thing runs Ubuntu under the hood. The Crunchbase listing for litl has Canonical down as an OS service provider, and there’s a PPA on launchpad for “bfallik-litl” – bfallik on twitter has not tweeted, but is friends with Peter Teichman (of Hula fame) and other known litlers.

Given Havoc Pennington, ex of Red Hat, is the Director of Operating Systems or something at litl, that’s a bit of a surprising choice for me, but I guess the OS layer is likely to be pretty thin anyway – the stuff on top is Javascript-based somehow, possibly the whole thing is running in a Mozilla window much like the Pyro Desktop.

First thoughts on litl’s Easel.

I’ve been waiting for litl to break cover for what seems like forever. The people all seem to be extremely smart, and it sounded like they had such a great idea, even if no-one knew what it was. However, Engadget have seen some FCC information on a new “Easel” product from litl – and I can’t help but feel a bit disappointed, because it’s a netbook.

Of course, it almost certainly isn’t. The FCC photos take the extremely strange step of photographing it standing on the top screen edge: a position many netbooks wouldn’t be able to reach (some do, but not many). That, combined with a strange rubber insert along that top edge and a conveniently located power button, makes me think this thing is designed to spend much of its life in that position. Much like a photograph standing on the mantelpiece, this thing is probably designed to sit in your living space and “do stuff”. I could be wrong; the IR detector on the front of the keyboard is presumably needed for something, but there could be another built into the bezel somewhere (the FCC photos appear to show one). Interestingly, it’s apparently designed by FIC – the same people OpenMoko spun out of.

What “stuff”, I’m not sure. It doesn’t look like it has a touchscreen (and it has the keyboard attached), but it does seem to have a webcam. Presumably you can Skype or something through this thing, but to interact with it you have to pick it up and flip it over? I suppose one possibility is that it has insane battery life, a wake-on-wlan function somehow and doesn’t need to be permanently strapped into a power socket, but I don’t know – having an Intel Atom and stuff probably means it doesn’t do that.

Doesn’t seem to have DVD/TV even though it has an HDMI output, doesn’t have 3G, doesn’t have a touchscreen, etc. If it’s supposed to do something like a Joggler that would make some sense, except that again it doesn’t seem quite so interactive. Can’t believe it’s any good for gaming.

So, I’m kind of at a loss. If you’re going to all the effort of designing your own hardware, there’s usually a reason – a la OLPC. But this doesn’t seem to have many interesting features, except that it’s totally legacy and peripheral free – it only has one USB socket. Presumably the software has tonnes of interesting features, but in these days of Maemo and Moblin I wonder how close to the state of the art this is, or whether it’s in a kind of Daikatana situation. If it turns out to be another Chumby – well, that’s nice, but again disappointing (Chumby already exists, after all).

We only have around a week to wait to find out apparently, but I’m kind of left feeling “what’s the point”? If this is really just a netbook for accessing web content, I don’t understand it at all. You don’t need custom hardware for that, and people already know how to use web browsers. I hope I’m wrong, but it’s vastly underwhelming so far.

WikiReader – “Project B”

Let me start this by saying that I really, really want OpenMoko Inc. to be a raging success. With Android, Palm Pre and other “Linux phones” showing pretty clearly how not to do things (jury’s out on the N900 for me still), the properly free smartphone is an idea whose time is very definitely here. Sadly, with the freeping creaturism of the phone market and the need to develop both a hardware and software stack simultaneously, that didn’t seem to work out so well, so OM are now going to their backup plan: “WikiReader”.

Now, I’m a huge fan of Simple. I don’t particularly like the look of this device, but I respect the design: the reduced form factor, the insane battery life, the readable screen. Not sure on the wedge shape (presumably necessitated by the choice of AAA power supply), not sure on the buttons (surely it could have just been one touch screen?), but those are design choices. It has obviously been designed, and that’s excellent.

However, although it has been designed, who has it been designed for? The wedge shape makes it less pocketable, and most adults I know already have phones which beat this device into a cocked hat. So I’m pretty sure it’s not really designed for me. Because it’s essentially an offline device, presumably the people it is designed for are mostly/entirely offline: however, if they’re offline because they can’t afford it, it’s difficult to see how/why they would pay $100 for one of these things. I’m also deeply sceptical of any project which attempts to address the “IT needs of the developing world” in a fashion which involves shipping basic devices that no-one in London or New York or (other “not developing world” place) would actually use.

So, my conclusion is that this device has been designed for children, and probably children in families who have a pretty high income. But, here’s the thing: if I was designing it for children, I would not make a device that was black and white, had no pictures / illustrations / animations, had no music / sound, etc. I mean, this thing is boring. And is adult wikipedia actually suitable for children? I don’t know what the reading age of the site actually is, but I’d imagine you’d have to be into your teens to understand most of it (particularly without diagrams and stuff).

I hope I’m the one who’s dead wrong about this device. I’m thinking of excuses, right now, I can use to buy one. But, it doesn’t have any kind of connectivity: I couldn’t hack it to store contacts or calendar appointments, and putting stuff like a wifi card into the micro-sd slot (assuming that would even be possible – does it have in-built flash? think not..) would effectively kill the battery life. I have this horrible feeling in the pit of my stomach that this is a brilliantly designed device implementing a wonderful idea that no-one actually will want. And that would be very, very sad.

“Free hardware” seems like an obviously winning idea. Has anyone actually successfully executed it yet though?

A couple of words on Clutter…

For various reasons, I’ve been playing with Clutter over the past week. Rather than the 1.0 release that was announced a little while ago, I’m still on 0.8 – for a number of reasons, but mainly because 1.0 isn’t really available in any distro yet, and because the various language bindings are not yet up to date. LWN has a pretty decent write-up of the 1.0 release (subscriber only for the next week).

Let me talk about the negative things about Clutter first. Here are my main problems with it:

  • documentation for the bindings isn’t great. I haven’t looked at the C++ stuff, which I have a suspicion might be good, but information for pyclutter and Clutter# is extremely hard to find. For the project I’m doing, I’ve ended up writing it in C – I would have much preferred to use C#, Python or Perl. Ho hum.
  • it’s not great on non-accelerated hardware. No, scratch that – it’s terrible. My machine has a decent Radeon, but because I’m using the free software driver I don’t get 3D. Clutter therefore sucks big-time. For the project I’m working on, this showed up quite early on: I couldn’t get animations to work. I would run the animation, but nothing would happen. Turned out that because I had something else going on, it wasn’t getting enough screen update time and was basically just freezing rather than dropping frame rate. I’ve changed things around to deal with this, but you absolutely need 3D acceleration for Clutter.

However, all that said, I’m extremely impressed by Clutter. The API is pretty simple: I haven’t coded using it before, and I think I learned it in about four hours. Obviously, you need to have some kind of understanding of what a scene graph is, and some of Clutter’s terminology is confusing – for me, Alpha was the worst as every time I saw it crop up I assumed it was something to do with opacity. A better name would have been ValueTimeRelationship or something – it’s just a function which computes a value between 0 and 1 given a time parameter t.
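
To give a concrete picture of the idea – this is just a sketch in plain C of what an “alpha” boils down to, not the actual ClutterAlphaFunc signature (which is wrapped up in a ClutterAlpha object tied to a timeline, and changed between 0.8 and 1.0):

#include <math.h>

/* The idea behind a Clutter "alpha": given the timeline's progress t
 * (0.0 at the start, 1.0 at the end), return a value between 0.0 and
 * 1.0 that shapes how the animated property changes over time.
 * Nothing to do with opacity at all. */

/* Linear: the animated property tracks the timeline exactly. */
static double
linear_alpha (double t)
{
    return t;
}

/* Ease-in-out: slow at the start and end, fastest in the middle. */
static double
ease_in_out_alpha (double t)
{
    return (1.0 - cos (t * M_PI)) / 2.0;
}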

What I really like is ClutterScript – again, awful name, because it’s not a script in the programmatic sense – the ability to define parts of the user interface in an external JSON file, bring it in and play with it. It’s quite basic (there are lots of things that I’d like to do that you just can’t do, although some of it is hackable) but a big timesaver, especially in C.
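
As a flavour of what that looks like – a minimal sketch only, where the “box” id, the property list and the error message are made up for the example, though the clutter_script_* calls themselves are the real API – you describe actors in JSON and then fish them back out by id from C:

#include <clutter/clutter.h>

/* A scrap of UI in ClutterScript's JSON format. "box" is an id made
 * up for this example; the code below uses it to find the actor. */
static const gchar *ui =
  "["
  "  { \"id\" : \"box\", \"type\" : \"ClutterRectangle\","
  "    \"x\" : 10, \"y\" : 10, \"width\" : 100, \"height\" : 100 }"
  "]";

int
main (int argc, char *argv[])
{
  ClutterScript *script;
  ClutterActor *stage, *box;
  GError *error = NULL;

  clutter_init (&argc, &argv);

  /* Parse the JSON and build the actors it describes. */
  script = clutter_script_new ();
  clutter_script_load_from_data (script, ui, -1, &error);
  if (error != NULL)
    g_error ("could not load the UI: %s", error->message);

  /* Pull the actor out by its id and put it on the stage. */
  box = CLUTTER_ACTOR (clutter_script_get_object (script, "box"));
  stage = clutter_stage_get_default ();
  clutter_container_add_actor (CLUTTER_CONTAINER (stage), box);

  clutter_actor_show_all (stage);
  clutter_main ();

  return 0;
}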

It does seem to me that Clutter and Gtk need to get it together somehow. I suppose if they continue to work together relatively well it’s not too much of an issue, but I think the Clutter bits really ought to be built into the toolkit, or the other way around: there are still substantial gaps between the two (like accessibility, as mentioned in the LWN article). But it has intrigued me about GNOME 3. I’ve seen gnome-shell and bits of Mutter, and I can see how this could be extremely compelling.

I just wonder how it’s going to work with my video card.

Compiling Bongo

Recently, we changed the build system of Bongo – we’ve moved away from autotools. This isn’t to say that autotools is necessarily that deficient, but the new CMake system we’re using is a lot more suitable for our kind of project. This has brought some immediate benefits – much simpler build system, much quicker compiles and installations (‘make install’ in particular is now much faster), and a slightly simpler source tree. We can also now build binaries out-of-source, which is a huge boon.

However, we haven’t yet properly documented all the different build options and how the new system works. So here it is – your primer to the new Bongo build system.

Once you’ve checked out Bongo, you’ll see a source tree which looks something like this:

$ ls
ABOUT-NLS  cmake           COPYING  import   INSTALL  po      TODO
AUTHORS    CMakeLists.txt  doxygen  include  man      README  zoneinfo
ChangeLog  config.h.cmake  HACKING  init     NEWS     src

The first thing we should do is create a new directory to do our build in: this stops all our build files from littering the source tree.

$ mkdir build
$ cd build/

Now we need to configure the build. There are two ways of doing this, and I use both! One way is good to start off, the other way is good for tweaking. You’ll see what I mean, but let’s start with the initial configuration. This is how I usually start – something along these lines:

$ cmake ../ -DCMAKE_INSTALL_PREFIX=/tmp/build -DBONGO_USER=alex -DCMAKE_BUILD_TYPE=Debug -DDEBUG=ON

The first argument points to the Bongo source directory. Because I made a ‘build’ directory in the source tree and went into it, we’re just pointing at our parent directory. Then come some other options. Every option is prefixed with “-D”, and some of them are CMake options and others are Bongo options. In full:

  • CMAKE_INSTALL_PREFIX: where we want to install to. I use /tmp/build for testing, and /usr/local/bongo when I want to run it in production.
  • BONGO_USER: which user you want Bongo to run as. I use my user account for testing, “bongo” for production. You can also run as “root” if brave (not recommended!)
  • CMAKE_BUILD_TYPE: set this to Debug to generate information for gdb, otherwise leave this option out.
  • DEBUG: enable code paths which generate debugging messages. Both this option and the previous are for either advanced users or developers, really.

There are other options to the Bongo build, but these are the main ones. However, once you’ve configured Bongo, you may want to tweak something: perhaps turn on debugging, or change one of the file paths, or something different. The easiest way to do that is simply:

$ ccmake ./

Note that it’s “ccmake”, not “cmake”. This starts an interactive application where you can change each configuration item. You point it at the build directory, not the source, and it gives you all the various tweakable options. You’ll see that they are the same options that we pass to cmake – and indeed, you can pass them to cmake! There’s even an advanced mode with even more knobs (press ‘t’). When you’re done, press ‘c’ to re-run the configure step, then ‘g’ to generate the updated build files and exit (‘q’ quits without generating).

Once you have configured the build, you have access to the usual make commands:

$ make
$ make install
$ make clean

The first builds Bongo, the second installs it to your prefix, the last removes the built files.

Google Chrome OS

As I drove to a business planning session (a.k.a. entrepreneur funday) this morning, I heard on the radio that Google had announced their intention to release a consumer operating system. This was interesting news, albeit too brief, and being at this event I wasn’t able to check the news until I got back this evening.

Pretty much the first thing I read was Andrew Savory‘s take on this. Wow, how disappointing.

Not technologically, though. A browser on top of a Linux core – ok, that makes sense. It will obviously need a few other bits and pieces too, but intrinsically we’re not a million miles away from where Pyro was going (as Alex Graveley noted). However, the community side of it is deeply, deeply disappointing. But, considering Android, not entirely surprising.

Although the announcement ends “we’re definitely going to need a lot of help from the open source community to accomplish this vision” I somehow doubt that: this is another example of FLYOSS – Fly-tipped Open Source Software. All the vision I’m seeing here is the old joke about Messrs. Bodjit & Scarper dancing in my head.

There is a lot of talk about the security architecture of this thing, but I’m pretty sure that’s not the hardest problem to solve when you’ve junked virtually every app bar the browser, and there is a lot of talk about how great this will be for developers. But will the apps be as compelling as those created using Mozilla Prism or Adobe Air? Or will Google re-invent that wheel too, with their own desktop toolkit? Will Chrome OS need a Google account to work?

I think Google are about three years late with this already, anyway – the Netbook revolution has pretty much come and gone, and people are just installing Windows on them. It’s ridiculous, but people don’t much care about the size/speed of them, and they’re getting them basically for free with broadband contracts. By Q3 2010 that boat will have not only sailed but be half-way around the globe on its world cruise.

Mono and the MCP

It has been interesting watching the debate around Mono over the past few months. As essentially an independent observer – albeit one who has used Mono and can almost code C# – I couldn’t help the sneaking feeling that somehow, some of this was being orchestrated behind the scenes.

Particularly on the “anti-Mono” side, it has been pretty clear that an agenda of agitation has been in effect, with various distributions being prodded into making statements either way and various “users” kicking up a stink on mailing lists – not least a certain infamous blog writer being caught red-handed whilst goading people on to write angry letters. I don’t know if speeches like Stallman’s were co-ordinated – I suspect it was more likely happenstance – but it all seemed very well timed.

And now it seems that Novell have been leading Microsoft into giving a Community Promise surrounding patent claims that cover the “standard” parts of Mono. I predict this is going to have a surprisingly negative effect within the community, however. It validates the arguments of people worried about Mono, and this proposed split of Mono into “Standard bits covered by MCP” and “Other bits not covered by MCP” is actually going to fuel the flames: inevitably, people will assume the non-MCP bits are a total patent mine-field, no matter what is actually in that area. Parts that people are quite happily shipping right now will be targeted next by people “anti” Mono. And for the parts covered by MCP: well, I expect not much to change; certainly, it’s not likely to convert many people to Mono.

In this scenario, I would actually suggest this is a step backwards. People will read the MCP news as an admission that Mono is at risk from Microsoft patents, and it certainly will not unite the community in any fashion.

Personally, though, I think this move signals that Mono is now basically big-time. I don’t know how many will agree, but the smoke-signals have been in the air for a while: major products such as Sims 3 by EA shipping with Mono, for example. It’s big enough that Microsoft is having to sit up and take notice; it cannot be long now before Microsoft starts shipping either parts of Mono or its own implementations of key Mono tools and libraries.

I don’t think “the Mono issue” is going to be resolved in the Linux community any time soon. However, as I’ve said before – I don’t think that matters. The community of people using Mono, and the community for whom Mono is an attractive proposition, is extremely large and probably, for the most part, not made up of Linux users.

Like Firefox before it, Mono is becoming a break-out technology which appeals to an entirely different set of developers. In Firefox’s case, it became the web developer tool par excellence due to its vastly better debugging, inspection and developer tools. Mono isn’t comparable with MS’ development tools yet, but already has a strong appeal to people wanting to use .net development tools in non-Windows environments. You can use Mono to develop for Microsoft’s own Xbox 360 – as far as I’m aware, you can’t do that with Microsoft’s own .net yet.

It will be interesting to see where the Linux community moves to on this issue over the next couple of years. Whether or not Mono gets used much doesn’t really matter any more though; Mono is now an entirely successful project in its own right and it’s going to be here whether we like it or not.

A few words on CMake…

I did promise Lance that I would blog more on Bongo, and I’m going to try to stick to a post a week at least – however, this first one will only tangentially be about Bongo.

Since the project was initially released, the autotools build system was what you needed to create Bongo. There are a variety of benefits to using autotools, and it’s an extremely well-tested and mature system. However, it’s also relatively difficult to understand and not particularly quick. Over time we accreted more and more things into our build which no-one understood fully and that would occasionally blow up in our face.

Now, a little while ago I started a new branch of Bongo, called “memmgr-glib”. This was mainly to replace the memory allocator – maybe Pat can blog a bit more about this, but the short story is that this change has highlighted a variety of bugs in Bongo. The branch version of Bongo therefore seems extremely unstable – actually, it’s just a lot less bug-tolerant, which is overall a good thing – but we used that as a cue to make some other changes.

One of which was a switch from autotools to CMake as our build system. Just to give one really obvious stat:

                     autotools            cmake
  Configure          52 seconds           3 seconds
  Initial compile    47 seconds           21 seconds
  Install            1m 15 seconds(!)     2 seconds

Clearly, there is a big difference in each category: it must be said that the build systems aren’t doing quite the same things at this point, and I haven’t done these tests properly as benchmarks, but fundamentally the times are extremely different. I expect eventually that the compile time will even out – the difference will become minimal, and is already of the same order. Configure and install are clearly quicker though, and there aren’t any amazing short-cuts being taken on the cmake side.

The huge difference this makes for developers is the compile-test cycle. A re-compile with either system is basically pretty quick – something like a few seconds each. However, “make install” with both systems is quite different. To do a proper update, it’s much quicker with cmake, which means the testing cycle is really quick.

One other thing that is also big, for me at least: with CMake, we can finally do out-of-source builds. It’s possible with autotools, but our system never quite got it right. With CMake, you can check out Bongo from svn and then do something like ‘mkdir build/; cd build/; cmake ../; make’. The source we checked out isn’t touched, and nothing gets built in the source tree – it all happens in another directory. Not only does it feel cleaner, but it means that you’re not likely to commit any files which shouldn’t be there and if something goes wrong you can just nuke the build directory and start again easily.

I’ve yet to teach the bongo-build bot on IRC about this new system, but I expect that it will make building there a lot quicker too – mainly because it goes through the cycle above more than once (it configures twice, for example). It will be interesting to see what a difference it makes!
