Back in 2010 I wrote a post about Canonical’s business direction, in response to something Bradley Kuhn had posted. Both he and I were worried about Canonical becoming reliant on an “open core” business model – worried not just that it would dilute the principles of Ubuntu, but because, frankly, every time I have seen that model executed it has been a dismal failure.

The posts are worth re-reading in the context of Mark Shuttleworth’s announcement today that Ubuntu will be dropping a number of their in-house technologies and, more importantly, abandoning the explicit goal of convergence. I would also say, read the comments on the blogs – both Bradley and I found it deeply strange that Canonical wouldn’t follow the RHEL-like strategy, which we both thought they could execute well (and better than an open core one).

Of course, the cause of our confusion is – in hindsight – obvious. We weren’t seeing the wood for the trees. The strategy has since been spelled out by Simon Wardley in his rather good talks; one example is here:

It’s well worth taking the time to watch that and understand the strategy against Red Hat, but it’s pretty easy to state: “own the future, and wait for it to come to us”. Let’s see why this is important.

In Red Hat, Canonical already faced an excellent competitor who was well positioned in the marketplace. I originally believed that there would be space in the market for two such competitors doing approximately the same thing, albeit differently, and that therefore a RHEL-like approach would make sense. The weakness was that RHEL would likely dominate the “Enterprise” and therefore (to some extent) servers, while Ubuntu would get hobbyists and desktops – and with all the goodwill in the world, that’s not where the money is.

Wardley’s argument is excellent, though. Canonical didn’t want to be in the position of going toe-to-toe with Red Hat: Red Hat are well funded, have a great product and good staff. They’re a tough foe. To compete successfully, Canonical needed to ensure that Red Hat would somehow be self-limited – and this is where the “always free” promise kicks in. I had always thought that Canonical were tying their own hands by refusing to make money from a good product; actually, it turned out the other way around – Canonical were tying the hands of Red Hat’s sales force by setting a pricing strategy that Red Hat would never willingly match. Therefore, the two would never compete directly.

None of that would have mattered except that Canonical recognised the future was in virtual machines – and later containers – rather than physical servers. As individual hardware boxes began to run hundreds of virtual servers, the old per-machine licensing models stopped making sense, but Ubuntu was free. The deployment model changed, Ubuntu was suddenly everywhere, and Red Hat still had their hands tied.

Now, back to today, and the decision to drop parts of the project. Personally, I think it makes a bunch of sense, and while I never understood the “always free” position at the time (I now recognise how important it actually was!), a lot of the points I made then still ring true today:

I find it difficult to envisage how they will be truly, properly, successful until they pull in a single direction and communicate that clear message about what Canonical are actually about; because right now no-one knows. Are they open core or free software? Are they a music sales and file-sharing business, or are they enterprise-level server management bods? Right now it’s Canonical Jack of All Trades.

I never expected the Phone / Tablet / TV thing to pan out, and I doubt I was alone in that regard. Convergence as a principle makes sense, and I have no doubt that some good tech demos were put together to demonstrate the principle and the value proposition. But it was a massive chunk to bite off, and too many core technologies were involved for them to go it alone.

Canonical have shown that they will ditch in-house tech before – upstart in favour of systemd, for example – but this is the biggest switch they have made, and it will have taken significant guts to do.

So, what about the future? Mark’s post mentions that the focus is now on Cloud and IoT. Wonderfully buzzword-worthy, but almost certainly a great decision. First, basing decisions on what is resonating with customers and generating growth is never a bad idea. Second, this is much more clearly an “own the future” path again – convergence could have been that, if they had been able to execute and if convergence were actually a good thing; but in practice I think it was too much work, and I’m not really sure people are bothered about it.

On Cloud, Kubernetes is definitely “where it’s going to be”, but as a project it has deliberately limited its scope. To make k8s truly useful, something will need to live on top of it, and that something could be supplied by Canonical (or, at least, a popular version of it could be). I think Docker have demonstrated this – and I also don’t think Docker are executing well on it; they have the brand and mindshare, but are building something that is fundamentally ropey in key areas. The demand is definitely there.
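To make the “limited scope” point concrete, here is a sketch of roughly what raw Kubernetes gives you: low-level primitives such as a Deployment that keeps a set of containers running. Everything above this layer – building images, delivery pipelines, update policy, the developer experience – is the “something on top” that a vendor could own. All names and the image reference below are hypothetical:

```yaml
# Minimal Kubernetes Deployment: keeps three replicas of a container running.
# Everything about how the image is built, shipped, and upgraded lives
# outside k8s itself – that is the platform gap described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                # hypothetical application name
spec:
  replicas: 3                      # k8s keeps three copies alive; nothing more
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0   # hypothetical image reference
          ports:
            - containerPort: 8080
```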

IoT is significantly more interesting, though. Ubuntu Core has been shown to be a good match for production robotics, and frankly the work I’ve done on ARM / BeagleBone in the past has demonstrated this to me pretty conclusively. Not only is Ubuntu a good fit, but the majority of IoT devices out there right now are screaming out for something much better.

Current IoT devices are being pumped out of manufacturing at an astonishing rate, and become obsolescent surprisingly quickly. Lots of people are buying “smart LED lights”, for example, and these things are usually horrendously insecure. They can be recruited into botnets, used as a foothold into people’s home networks, or worse. I couldn’t contemplate having equipment like this on a corporate network, but the truth is it’s coming, or has already arrived – the smart TV, the multifunction printer, Bring Your Own Device: all of these are already vectors.

Ubuntu being free and multi-platform already makes it a good candidate for IoT devices, and the likelihood is that lots of these devices already ship on Ubuntu without Canonical’s help. The key questions are: can Canonical drive take-up, and can they get IoT device manufacturers to pony up for their services?

On one hand, they have some formidable hurdles to get over. A lot of IoT manufacturers are based in the Far East and ship stuff rapidly by chucking it over the fence. They haven’t (so far) demonstrated much appetite for anything with a per-unit cost structure, or for providing anything in the way of long-term support.

From my perspective, the way to change this is, first, to make offering a stable IoT platform that can receive ongoing security updates technically simpler than a manufacturer rolling their own. Second, make sure that platform has useful IoT functionality baked in – and this is where the convergence tech could be useful. Convergence required multiple devices to co-ordinate and co-operate; the same is true in IoT, except that it’s less about UI and more about servers and protocols, where Canonical are on much stronger ground anyway.
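As an illustration of the “simpler than rolling your own” point: on Ubuntu Core, device software ships as snap packages, and signed, transactional updates arrive automatically from the store rather than through a vendor-built firmware pipeline. A minimal sketch of a snapcraft.yaml for a hypothetical device app (all names below are made up, and the app is assumed to be written in Go):

```yaml
# Hypothetical packaging of a smart-light controller as a snap.
# Once published, devices pull signed, transactional updates
# automatically – the manufacturer never builds an update pipeline.
name: smart-light-controller     # hypothetical snap name
version: '0.1'
summary: Example controller app for a smart LED light
description: |
  Packaged as a snap so the device receives ongoing security
  updates from the store instead of vendor-rolled firmware.
grade: stable
confinement: strict              # sandboxed from the rest of the device

apps:
  controller:
    command: bin/controller
    daemon: simple               # run as a background service
    plugs: [network, network-bind]

parts:
  controller:
    plugin: go                   # assumes a Go implementation
    source: .
```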

It’s still a big ask, and I would want to be much more confident that they can build a defensible perimeter around some of this stuff; but I think having a clearer direction and a reduced scope will definitely allow them to focus much more.