Alex Hudson

Thoughts on Technology, Product, & Strategy


Copyright changes ahead for the UK? SAS v WPL goes to Europe

I don’t particularly like talking law on this blog; it’s boring and – for the most part – uninteresting. However, recent developments in SAS Institute Inc v World Programming Limited (as written up here – thanks to Cristian for bringing this up at FFII) deserve to be aired.

The basic story is that the Judge in this case is deeply unsure of the boundary of copyright. For those who don’t know, SAS is a statistical package which is both popular and influential, and to a large extent can be thought of as a programming development environment. WPL, the defendants, wrote software which could interpret SAS programs. There is no direct analogy in the free software world, but LibreOffice Calc interpreting Excel spreadsheets is close enough for the purposes of our discussion.

The Judge, unsure of the boundary, has sent a number of questions to the European Court of Justice (ECJ). The questions are hypothetical, but clearly designed to test the waters and figure out where this line falls. As an example of some of the questions in our Calc vs. Excel example, he’s asking:

  • Does accessing the file format of Excel constitute copyright infringement?
  • Does interpreting Excel-compatible formulae constitute copyright infringement?
  • Is it copyright infringement to copy the behaviour (intended or otherwise) of Excel when processing spreadsheets?
  • Does it make a difference whether you copy functionality from Excel into Calc by reading Excel’s manual versus observing Excel’s behaviour?
  • Does it make a difference if you have a license to Excel?

(Just to be clear, we’re not talking about Calc and Excel; I’m just interpreting J. Arnold’s questions in this context to make them more readily understood.)

Now, the armchair lawyers amongst my readership have probably already thrown their arms up at all these questions and exclaimed, “Copyright doesn’t extend that far!”. And to an extent, they would be correct: originality has always been a defence to copyright infringement, and if any of the questions above were to be answered in the affirmative, we would see the start of that changing.

What makes this different, I think, is that we’re really seeing the weakness of copyright law treating software as a literary work. This has always been bunk, really: software is no more literary than a shopping list, and although the case of verbatim copying (with or without transformation) is open-and-shut copyright infringement, there have always been “grey areas”. One example is the FSF’s position on dynamic linking and the GPL: as a derivative work it does seem to fall under the purview of copyright, but it’s obviously a world away from literary copyright.

The WPL case is also one where the copying was explicit, deliberate and planned: they definitely did copy things. They just didn’t literally copy the software code, or decompile the software: they re-created it from the ground-up. So we’re definitely talking about a case of copying here, which it would seem could also be the purview of copyright.

This is going to be a really interesting case, and is going to have a fundamental effect on free software if we get some interesting answers to these questions. On one hand, it casts an immediate dark shadow over a number of projects: Samba being an obvious case in point, which has previously reached legal agreement in Europe about how it can copy Microsoft while still avoiding the patents that Microsoft hold on certain functionality. But while desktop apps which copy Microsoft make the most obvious cases, you could equally see problems for 3D graphics drivers, people implementing compilers, all sorts of areas – particularly where free software is still catching up to proprietary software.

But of course on the other hand, this would also strengthen the copyright position of free software applications. Companies that currently dance around the (L)GPL-style licenses will find themselves on thin ice indeed, and those proprietary implementations of leading free software will start having to be extremely careful.

It’s very unlikely that many of the questions will be answered in such a way that the copyright system becomes like the patent system: for one, it would be such a massive change that it would require primary legislation at a European level to become legally sound. And there are few cases exactly like this one, where the copying is so obvious and blatant.

The precedents being set here will be extremely important, though. Our understanding of copyright will almost certainly change from the outcome in this case, and will necessarily become more nuanced. The idea of “clean-room reverse engineering” may become more nebulous, and the “I wrote it from scratch” defence could become weaker.

If nothing else, this highlights that no law is truly ever settled, and possibly portends more movement in this area in the future: I’ve described before how the UK Government is making noises about revisiting intellectual property laws, and in our current weak economic state it is extremely tempting for politicians to beef up some of these laws in order to “create wealth”. Cameron, our Prime Minister, is particularly in thrall to Google, as if they set any good example for our businesses. It’s sometimes very easy to just think about patents and lose sight of the bigger picture.

bkuhn on Canonical

If you haven’t read it already, Bradley Kuhn’s take on where Canonical are aiming is deeply interesting. There is bound to be push-back against the article, because it does connect a few distant dots, but I found it particularly interesting because it’s apropos of a recent discussion on Surrey LUG’s mailing list – which has no public archive so therefore I cannot link, but the gist of the thread was a discussion on the various approaches Canonical takes to getting income.

I personally disagree that it seems like Canonical are attempting some kind of open core strategy: I think it’s arguable that they’re seeking to leave that avenue open, but for them to sell proprietary software products at any point in the next few years would destroy pretty much their entire brand image. In particular, Shuttleworth has made extremely strong statements in the past about never charging money for software – and it seems pointless to be proprietary if you’re giving the software away (well, for the most part).

However, there are a couple of things I find pretty depressing that Kuhn brings up, particularly the admission that Canonical is still not profitable and would take “some more time” to achieve that. By 2007, some three years into the Ubuntu project, it had already cost some $25 million, and of course we’re another three years on since then – given the growth of Canonical, a total spend of $50 million to this point would seem particularly conservative.

What I never understood to begin with, and still fail to understand, is why Shuttleworth has hamstrung the project from the start by promising there would never be a pay-for version a la Red Hat. My thinking in this regard is pretty simple: when you’re paying for all this development, you have to recoup those costs somehow. When you’re providing support or doing all these other activities, you leverage that development, but you incur further costs which then also need to be recouped.

Pretty much everyone else who commercially develops free software has some method of recoupment in place. Red Hat, as an example, sell security updates, support, and the OS itself. Nokia invest in software to sell their phones; similarly the likes of Intel, Oracle, and all sorts of other businesses invest in the software to enable hardware sales. Google invest in the likes of Android because it gives them a huge amount of control over the mobile market. And of course, there are plenty of small players too, all of whom have some kind of strategy to ensure their development activity is not some loss-making exercise in the grander scheme of things.

If Canonical’s commercial model confuses me, though, there is one thing which is abundantly clear: for that company to fail would be incredibly damaging. It would be difficult to imagine a worse scenario than seeing Canonical at ten years old, still burning millions of dollars each year and still not making a profit.

Of course, Canonical likely has that luxury; previous break-even targets have been set and then moved for whatever reason, and the vision still seems to be about growing hard, branching out and becoming some “cohesive whole” (which is avoiding the issue). So it’s not likely that they’re going to fold in the next five years. But in a way, that’s almost worse: much like a Government-funded entity, there is this insulation from commercial reality which ensures that “the vision” will be followed. There must be an opportunity cost there – room for a new Ximian or a new Mandrake, for fresher, more innovative visions that are currently squeezed out of the market.

Philip Green report on Govt spending; UK Free Software

For a number of years, the discussion amongst UK-based free software professionals has been about how to do more with Government. The most active discussions happened around the time the UK open-source, standards and re-use policy was developed (around 2004; it has been updated since): it wasn’t great before, and it hasn’t improved an awful lot since.

In very similar ways people have bemoaned the accessibility of Government procurement processes for micro/small businesses, and it’s basically the same problem – the “big guys” tend to be pushing proprietary solutions.

So, it was with some interest I read Philip Green’s recent report on Government efficiency. I had some fears about what I was going to read; “efficiency” generally means driving costs down hard, which usually means bulkier contracts with fatter discounts. And that’s what we’ve got.

People who don’t work with UK Government on IT projects probably don’t have much of an idea of the state of play. Right now, discretionary spending within most of it is limited to contracts of up to £10,000 in value. Above that, you need more central approval, and there is an absolute bar on any project worth over a million. In my book, that’s basically micromanagement.

“Government must leverage its name, its credit rating and its buying power”

This is the central theme of the report. Basically, Government could be buying stuff in bulk, or doing things for itself, in areas that it currently is not. And by Government, I really do mean “Government”: right now, the various Departments often do things for themselves, and by business standards each department is a pretty large enterprise. Green is saying that Government, as a whole single unit, should be doing things.

This is a knock-out blow to free software and small business; it’s difficult to see how any SME will be able to supply Government with anything but the most specialist of services. What’s more, the principle of multi-vendorship enshrined (but rarely acted upon) in the open-source policy is more or less going to be a thing of the past.

Just major suppliers?

You see, here’s the problem. The Green review is mainly looking at major suppliers: contracts worth many millions of pounds. However, that doesn’t really mean that it’s not going to affect small contracts.

A good example is the NHS. For the last 12 years, there has been an NHS-wide agreement for Microsoft software. Who knows how much this has cost – obviously hundreds of millions, but equally obviously at a vastly reduced rate. Not only was the NHS able to deploy pretty much any Microsoft software it felt like, NHS staff were virtually free to take it home with them (ok, they had to pay £9 for Office). For those unfamiliar with the NHS: it employs, one way or another, around 7% of the UK workforce. That’s a lot of Office installs.

Are NHS employees really likely to become Fedora users if it costs them a few tens of pounds to get the latest Microsoft operating system and office suite? I’m not sure LibreOffice is quite that compelling, and frankly people spend more money at the pub on a Friday night.

It’s the network (effect)

Once central IT platforms are bought and paid for, Government-wide, that puts a huge, obvious limit on what applications can be deployed on them. No free software desktop, that much is obvious. But what about free software desktop apps? Having seen the likes of Fujitsu deploy Government desktops in what is commonly called “lock down mode”, I don’t see it as likely that many people are going to be Firefox users in that environment.

And what about delivering software as a service? Many people have touted that as a way of doing an end-run around the desktop lock-down. But sadly, the picture is probably worse there, if anything. Government will have centralised hosting as an IT service, and they will expect their services to be hosted on their platform for reasons of security and reliability.

That platform is not going to be an Apache-based platform: being realistic, it will either be some grandiose Java application server (Websphere?) or it’s going to be Sharepoint. Which will mean any application you want to sell to Government will have to run on one of those platforms.

Whither the future

If Government acts on the Green report, and implements anything close to the full recommendations, what we will see in the next five to ten years is a central Government IT platform which is homogenised to a much, much greater extent than we have ever seen. Networking and telephony may be provided by a single central Government unit (Green called it “GovTel”), and computing is likely to be outsourced to a single organisation: an IBM, or a Fujitsu, or similar.

No more WordPress running the Number 10 website. No more small enclaves of innovative, independent projects trying to blaze a trail in Government IT. The future is going to be breath-takingly corporate, bland, and hostile to small business. Some readers may remember the APLAWS project: funded by Pathfinder, this was almost a fluke project which showed just what free software could do. It brought together some big open-source companies, charities and local Government, and developed a mostly-free CMS system that could have been further developed by any local authority that cared to look at it. Sadly, although still in use, project development seems to be dead and in this new Green world, it would never be procurable now.

It’s a very sad picture. Yes, we all want Government to be “efficient” and cost-effective. But there is an opportunity cost here. The opportunity cost is the small businesses and the people innovating in IT, whose products will effectively be beyond UK Government’s reach.

The NHS has also shown us a picture of what happens when these large agreements come to an end. Mass license compliance activity, which never had to be performed before. Large numbers of staff being asked to pony up £100 for an Office suite they already have but no longer have the rights to use. They’re expensive agreements to get out of.

Philip Green has managed to reduce the IT procurement problem to little more than finding the cheapest box of tea bags. I think it’s a poor way of thinking about the problem, because IT moves on so quickly. By centralising and planning IT to the extent being recommended (and this from a Conservative Government!), it will remove any and all flexibility to respond to changes, and impose an IT inertia on the country as a whole.

Sad times.

ActiveSync & Bongo; patently a problem

This evening I completed a new bit of code which has been uploaded to the bongo-web project; it’s a Z-Push back-end and is only barely functional at this point: however, it works well enough that on my HTC Desire phone I can set up an ActiveSync account, and it authenticates and synchronises contacts into my Bongo. To make it usable for just contacts will take a little more work, because at the moment it’s not storing them in the Bongo-native format, and it’s difficult to test that syncing is actually working without some more clients – then after that we get to do the same dance again with the calendar (although at that point, 60% or so of the code needed will have been written).

However, even though it’s at the “barely walking” stage it is actually doing useful things, so hooray! If nothing else, I already have a system to back up my phone contacts working, and in maybe another week or so I’ll have something pretty awesomely functional which I can take on the road. Plus there’s no reason why shared contacts/etc. won’t be sync-able, which is even more awesome.

But there is a black lining to this cloud. Z-Push, although it seems to be reasonably well-developed and commercially backed software, has a problem: ActiveSync is pretty heavily patented in the US, and it requires a license to develop software which uses the protocol. Hence Google buying a license for the protocol to cover Android 2.2 and above, where the support will be native.

This doesn’t pose much of a problem for us as a project: there’s really nothing in the back-end as designed which would be infringing on anything except good taste. However, it’s also totally useless without Z-Push, and as it stands it looks unlikely that Z-Push will be entirely “clean” for USA-based users and developers to try any time soon.

I’ve really been dabbling with this to scratch the personal itch of being able to sync contacts and events on my phone, and so far it looks like that will be straightforward. I can’t help feeling a bit guilty though…

Bongo & Roundcube

It’s been a little while since I’ve posted anything about Bongo; for much of this year there hasn’t been an awful lot to write about – we’ve all been pretty busy. However, yesterday we had a teleconference which is worth talking about.

One of the problems we’ve had is that working on a number of pieces of the system, including the backend and web front ends, has been difficult – both parts are in development, and having everything subject to change like that is pretty difficult. On top of that, the web parts we were bequeathed from Hula just don’t work well and ideally want to be restarted with a modern JS library underneath.

So, we’ve agreed to take a slightly different short-term path: initially, we’re going to ship a version of Roundcube as our mail client. Yesterday, I demonstrated some of the work that Lance and I had done to this end, which comprises a skin for Roundcube (based on the Dragonfly design and assets), and a Bongo plugin based on the PHP bindings we developed as part of the Dragonfly-NG project. As well as the standard IMAP and SMTP support, then, the plugin connects straight to Bongo and pulls through your address book – as well as your own, you can also access shared address books on the server. This all works right now and is pretty useful.

The direction we’d like to take this is to continue the development of these additional parts to make Roundcube as good a Bongo client as we can make it; including:

  • calendar access;
  • server-side rules, signatures and vacation settings;
  • anti-spam training;
  • removing dependencies we’re not interested in (SQL being the obvious one).

It will be interesting to see to what extent we can do this within the existing Roundcube framework; the plugin API doesn’t seem quite sophisticated enough at this point. However, the intention is definitely not to fork the project: where we can’t do things within the plugin, we’ll need to see if we can put forward proposals which are more generally acceptable to everyone – making Bongo-specific core changes would be pretty easy, but really not the road we want to go down.

The initial version of this plugin and various other bits of code are available in the Bongo-Web project on Gna!. The intention here is that we will release this concurrently with Bongo, so the two pieces fit together well: we will also be developing a separate administration tool to sit alongside this.

What this means for the future web development is unclear at this point. It’s still possible we would take forward our own client development in the future, but that isn’t something we need to think about at this point: and even if we did, it would still be advantageous to maintain good support for Roundcube users in the future.

Thunderbird: Fedora & the future

It’s only been a couple of months since I last wrote about the future of Thunderbird, but I’ve been thinking about it again recently. The immediate issue which prompted me to write this was the disturbing news that a potentially bad crasher bug in Thunderbird has gone unfixed in Fedora – even though a patch was submitted about a month ago – because of sensitivity over trade marks. Although some users on the devel list appear to be dealing out their usual standard of hyperbole on this, it is an extremely difficult position to defend: who knows whether the maintainer would have actually released an update by now, but the immediate problem is the mark.

The company I work for moved offices recently, and this also set me thinking about Thunderbird again as we update our e-mail systems. As well as an update breaking one of the add-ons we rely on, there are still basic features missing from this mailer which we need as a business, and doing things like adding good-looking signatures to e-mails is bizarrely difficult and user-unfriendly.

We’re also in the position of still running on Thunderbird 2. We’re there because it’s a reasonable little client, but Thunderbird 3 is not: it comes with bad defaults which need to be switched off, and the search is irritatingly difficult to manage. Every now and then I search and rather than the useful folder filter I get the craptastic separate search tab, which doesn’t work because I’ve turned off Gloda.

Thunderbird 3.1 is supposed to be an easier upgrade for Thunderbird 2 users. Two problems: first, I don’t really believe it, and second, there are now no new Thunderbird 2 releases planned. So we’re now on an unsupported product with only an upgrade to a product we’re unhappy with available.

What would moving to Thunderbird 3 involve for our organisation? Well, primarily, it’s a support issue. We’re distributed (as well as having an office), so we would need to be giving users some kind of training so they could support themselves on the new software (avoiding all the inevitable “Where has button X gone?” type support calls), and ideally we’d want some distribution mechanism so we could control the setup of Thunderbird for our users. Of course, no such stuff is readily available – you can’t even buy it from Mozilla Messaging, the business set up to develop Thunderbird. This seems unbelievable to me; we can’t be the only business who’d be willing to pay for a business-ready Thunderbird distribution.

It’s now getting to the point where we will be making decisions. I can guarantee that we will be testing Evolution on Windows, to evaluate its suitability as a cross-platform client. My misgivings about this before have again centred on commercial support and reliability: however, Evolution has a much, much better business story, a clear development roadmap and solid history of releases.

Evolution would also be an easy sell to our users with the enhanced address book and calendaring support. It doesn’t look amazingly Windows-native to me, but that’s potentially quite a small problem – the main thing is testing that it’s reliable.

I would have never thought Evolution would even have been a contender on Windows, but to be honest if we’re not in a position to receive commercial support for either suite, the choice becomes a lot more interesting – and obviously for our Linux users, it’s stable and has a great integration story.

On oData

Occasionally there are things that I read about on the web which happen to fit perfectly with some need I have at the time: and “Open Data Protocol”, or just oData, is one of them. I think I got hip to this by reading Miguel’s post on oData, but looking around it has been mentioned in a few other blogs I follow.

What is oData? Put simply, it’s a bit like being able to do SQL queries over the web – for non-technical people it’s deeply uninteresting, but what it effectively promotes is an ability for web-based services to open access to their databases in a pretty straightforward and standards-compliant manner.
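To make the SQL analogy concrete, here’s a rough sketch in Python of how such a query maps onto URL parameters. `$filter`, `$orderby`, `$top` and `$format` are genuine OData query options; the service URL and entity set are made up for illustration.

```python
from urllib.parse import urlencode

def odata_query(service_root, entity_set, filter_expr=None,
                order_by=None, top=None, fmt="json"):
    """Compose an OData query URL from SQL-like pieces.

    $filter, $orderby, $top and $format are standard OData system
    query options; the helper itself is purely illustrative.
    """
    params = {}
    if filter_expr:
        params["$filter"] = filter_expr
    if order_by:
        params["$orderby"] = order_by
    if top is not None:
        params["$top"] = str(top)
    params["$format"] = fmt
    # safe="$" keeps the literal '$' that OData options start with
    return "%s/%s?%s" % (service_root.rstrip("/"), entity_set,
                         urlencode(params, safe="$"))

# Roughly the OData spelling of:
#   SELECT TOP 5 * FROM Products WHERE Price lt 20 ORDER BY Name
url = odata_query("http://example.com/odata.svc", "Products",
                  filter_expr="Price lt 20", order_by="Name", top=5)
print(url)
```

The point being that the whole query lives in the URL, so any HTTP client – including a browser – can make it.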

Now, there is some commentary that this is effectively trying to subvert another set of standards, RDF, OWL & SPARQL. I have to say up-front that I don’t see the comparison particularly: they’re similar in many ways, but also quite different, and I personally think they are more complementary than competitive. However, the people who specified oData are Microsoft – so the “anti-competitive” label is one which sticks easily. It’s a lazy criticism in my opinion, but that’s up to the commentator.

More serious is the problem Miguel raises: while there are a number of free software oData consumer libraries available already, there are limited options for producing oData services. This is a major issue. As relatively light-weight as oData is, it’s still a pretty broad specification: your service needs to be able to produce both XML and JSON, and there are particular schemas and URL structures you have to follow. I’m tempted to start to write an oData producer for PHP, but it’s likely to be a lot of effort for not much immediate gain.
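For a flavour of what the producer side involves, here is a minimal Python sketch of the “verbose” JSON envelope that OData responses use (the `{"d": {"results": [...]}}` wrapper from the v1/v2 spec); the entity data is hypothetical, and a real producer would also have to emit the Atom/XML flavour and honour the URL conventions:

```python
import json

def odata_json_payload(rows):
    """Serialise query results in the OData 'verbose' JSON envelope,
    {"d": {"results": [...]}} -- one of the two wire formats (JSON
    and Atom/XML) a producer is expected to support."""
    return json.dumps({"d": {"results": rows}})

# Hypothetical result set for an entity set called Products
payload = odata_json_payload([{"Name": "Widget", "Price": 9.99}])
print(payload)
```

Trivial for one response, of course; the real effort is in implementing the full set of schemas, metadata documents and URL semantics around it.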

Another problem I’ve seen is that authentication and authorisation are basically not mentioned at all; the nearest we get is in section 8 of the overview:

“The Open Data Protocol does not define a new scheme for authentication or authorization. Instead, implementers of OData services may opt to use the authentication and authorization technologies that fit best with their target scenario.

“The use of authentication mechanisms to prevent the insertion or editing of resources exposed by an OData service by unknown or unauthorized clients is recommended but not required.”

This is a particular problem for me, because the immediate itch I have that this could scratch would need authentication. If one oData service uses HTTP basic auth, and another uses a cookie-based system, that’s an issue – it hinders interoperability. In a way, I understand why they did it – it’s a somewhat orthogonal issue, and once you start prescribing features like that there’s no logical reason to prescribe other HTTP features like if-modified-since, but it does seem to me to be a pretty key issue. Not all data wants to be public.
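By way of illustration: if the service you’re talking to happens to use HTTP basic auth, the client just bolts a base64 `Authorization` header onto each request – but nothing in the spec tells you which scheme to expect, so every consumer ends up special-casing each service. A stdlib-only sketch, with a hypothetical service URL:

```python
import base64
from urllib.request import Request

def with_basic_auth(url, username, password):
    """Attach an HTTP Basic Authorization header to a request --
    one of the ad-hoc schemes an OData service might happen to use,
    since the spec itself stays silent on authentication."""
    credentials = base64.b64encode(
        ("%s:%s" % (username, password)).encode("utf-8")
    ).decode("ascii")
    req = Request(url)
    req.add_header("Authorization", "Basic " + credentials)
    return req

req = with_basic_auth("http://example.com/odata.svc/Products",
                      "user", "pass")
print(req.get_header("Authorization"))
```

If the next service uses cookies or some bespoke token instead, none of this carries over – which is exactly the interoperability gap.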

All that said, I’m planning on digging deeper into oData. It’s extremely interesting, and I think of large potential value in the future – and being honest, there is nothing else like it immediately available. The JSON format alone is of huge value, since it means that browsers can access all this data immediately. There’s just a fair amount of work for it to become usable…

Making Thunderbird sustainable

Tbird is a software product quite close to my heart: I think it’s important for a number of reasons, not least because it’s one of the few cross-platform mail clients that works well on Windows, and feels comfortable for use in a commercial context. Having Mozilla Foundation spin Thunderbird out to a new commercial entity didn’t fill me with cheer because they were essentially cutting it loose, and Thunderbird 3 didn’t excite me before it was released and hasn’t done since it was released.

Slightly more depressing than all of these, though, are the plans put forward for 2010. Development for 3.1 seems to be about making updates from 2.x less painful, and making some of the features better – all things which 3.0 should have been, in all honesty (upgrading from 2.x to 3.0 has put a number of people I know right off Tbird, to the point they’ve switched to something else). The plans to put Thunderbird on an economically sustainable footing also look staggeringly underdeveloped: Mozilla Messaging has been around since September 2008, and from the look of it there is still absolutely no vision about how this is going to happen. What is going to happen is a series of “experiments”, but it’s not really clear to me how you can judge the potential of a business model on that basis.

My specific worry about this is that by trying a series of experiments, they’re basically going to do a prototype-y half-assed version of each, none will work, and the whole thing will come crumbling down. This is specifically why businesses do market research: they test the market before they develop the product, rather than put effort in a direction which isn’t going to be successful. More than this, ideas for development of Thunderbird have been terribly unexciting so far: more experiments in the “web 2.0” direction may be interesting for some people, but I struggle to see how people are going to pony up for any of this.

There has also been seemingly no effort to bring into the core the crucial Thunderbird feature which pretty much everyone clamours for: the Calendar. Yes, the plugin exists, and yes it’s pretty good. But in all honesty, there is absolutely no way on earth I would deploy that setup in a business right now with Tbird auto-updating itself, because at some point something will go wrong on update and people’s calendar plugin will stop working. So either I turn off updates, or I don’t use the plugin, and the balance doesn’t weigh in favour of the plugin.

Joe Brockmeier has written some thoughts of his own on the economic future, which involves basically setting up as a mail service for people to use Thunderbird against. I would worry it’s a little bit late in the game for that; businesses willing to pay for that kind of thing already have plenty of options available to them and it’s difficult to see how Mozilla Messaging can add significant value in that area without carrying horrendous costs.

My take: personally, I would want to see them focus on deployment and management of Thunderbird. Specifically, that means some kind of management system for Windows-based networks, whereby I can control updates, configure accounts, control users’ settings centrally, etc. That’s something critical to broad deployment of Tbird in large organisations, and doesn’t really exist right now. It would also be something worth paying for.

Whether or not enough “things worth paying for” can be created, though, is an open question. Fundamentally, there is a problem with giving away the client for free: it is a development cost, and in order to recoup that cost you have to create value in ways which wouldn’t be possible without the client. Every business model that doesn’t rely on that client being available for free as leverage doesn’t recoup the cost, it shoulders it. And that is the fundamental problem facing “open source business models”.

Why Google is really pulling out of China

Since the news about Google demanding unfiltered search came out earlier today, I’ve speculated in a number of places that Google have broader reasons for wanting out of China, and that the issue of search – and, by extension, free speech – was not exactly #1 on their list of priorities. In particular, I mentioned on LWN my thoughts on what kind of an effect Chinese infiltration of Google Apps would have on the customer base they’re trying to build.

To reprise some of the facts in the reports:

  1. BBC News mentioned “a sophisticated and targeted” attack, but then later says that Google pointed the finger at phishing and malware scams rather than security problems.
  2. The reports are that Gmail accounts were mainly targeted, and these Gmail accounts were of known dissidents.

Now, 2) is quite believable, but 1) is not at all. Phishing and malware are nothing new, there’s not a lot you can do to stop them, and they’re certainly not sophisticated and targeted unless the Chinese Government was intercepting legitimate Google traffic. Even then, with SSL, that’s a difficult proposition.

Wikileaks’ Twitter account added a couple of other suspicions:

  1. “China has been quietly asking for the same access to google logfiles as US intelligence for 2-3 years now.”
  2. “Gossip from within is Shanghai office used as CN gov attack stage in US source code network.”

With this kind of affair, it’s usually instructive to consider the adage of Cui bono. Sure, Chinese intelligence probably could use more information about dissidents and would probably like log file access and things like that. Almost certainly they try to access Gmail accounts too. I have little doubt that Google would have immense trouble distinguishing this from the more general problem of phishing, and to that extent it’s really not their problem – it’s a social engineering problem.

There is a far larger prize at stake, of course. If Google source code is under attack, which seems plausible, this presents two major issues. The first is that Google code could be used by China: there is the issue of a straight-up rip-off, which devalues Google at the very least. However, Google is already #2 in China, well behind Baidu, the native search engine. There is motivation to do this of course, but it’s not exactly the biggest prize on offer.

The larger prize is access to code to work out security issues. Google does not develop code in the open, and while most of it is probably secure there are doubtless issues that a determined attacker could find more easily with access to code. And, once you start getting access, you start being able to gain the stuff of real value: the information stored on Google’s systems, in Gmail, in Google Docs, in Postini, as well as the various logs and other behavioural data associated with advertising.

Put like that, Google simply cannot afford to work in China. In one sense, China is lawless: there is a certain class of “criminal” who is state-sponsored and therefore can do as they please. There is no good technological defence to this, there is only the question of whether to participate or not. And what are the potential costs of participating? Essentially, limitless. Major US and EU firms on Google Apps will not want their business information readable by the Chinese authorities. More than that, firms doing business with other firms using Google Apps will not want their information readable either.

On Facebook, I gave the example of Jaguar Land Rover as one company that uses Google Apps for everything. Will they like the idea of the Chinese authorities being able to see what they’re up to? No. Even if it’s not happening, it will put the wind up them: potentially, it could destroy their business. And let’s remember, there are now millions of businesses on this platform.

This highlights one danger of cloud computing: not only do you have to trust the provider, but you also have to trust that the aggregation of data in one place doesn’t become a sufficiently juicy target for someone else. And Google is a very, very juicy target.

Let me speculate further on a few potential issues in the future (none of which are problematic at this point today):

  • Google controls large amounts of “dark fibre” and “private internet”. That’s a juicy target.
  • Google are putting Android into many handsets. Yum, yum, another juicy target!
  • Google Maps / Street View / other raw data. If, somehow, the collection of data could be controlled by another agent – well, that’s quite a useful tool to have.

This doesn’t even go into the potential issues of having hardware controlled by another agent in your data centre or in your phone, which isn’t outside the realms of practical possibility for the Chinese Government either.

Let’s be clear about this: all Governments have secret services, and the Chinese are by no means the most adept or technologically advanced. However, they may be the most dangerous and the most likely to work clandestinely. Google must know already that they have spies working for them, not just in China but in all their major offices. Most good industrial espionage is internally undetectable, because information only flows out. The Chinese spies clearly have been up to many detectable activities, which puts them in a very different class.

Update: Wired has a very good article up with more details about the attack. In short, there was a specific piece of malware targeted at Google to pull their source code out of the organisation. It doesn’t say what was taken, but it does say that large amounts of data could have been sent out, and it doesn’t say how quickly the attack was detected. Which is exceptionally scary. I would have thought the Windows-based attack would limit the scope of what could have been lifted, but this doesn’t explain many of the other rumours about Google’s Chinese offices, and doesn’t (on its own) explain Google’s seeming decision to withdraw. Possibly, Google were attacked in other ways too.

Fedora 12 & ChromeOS

It has been great to see Fedora 12 release this week. Apart from the major kerfuffle over PackageKit (which I understand, even if I don’t recognise the problem), it seems to have gone really well – especially since the reaction in the critical press has been surprisingly un-critical. Hardware support seems to be good, including graphics, which is slightly surprising given the huge amount of change in this area, viz.:

“So, of the four “major” distributions over the past month (Ubuntu 9.10, Mandriva 2010, openSuSE 11.2 and Fedora 12), the only one that didn’t crash, hang or otherwise misbehave on at least one of my laptop/netbook/nettop systems was Fedora.” – “Saving the ‘Best’ for Last”, J A Watson at ZDNet.

I think this is a tremendous accomplishment by the people working on this stuff, and the release team, given the short and punchy nature of this development schedule. Fedora 13 has a great base to start from, although it also has to live up to what seems to be a pretty good Fedora 12 release.

It was also interesting to see that news of Google’s Chrome OS hasn’t really overshadowed the release: yes, people are writing a bit about it, but it hasn’t really set the world alight in the same way Wave did a few weeks ago. And you know what – I think it’s because it’s really not a terribly hot idea. In fact, I would say that Fedora 12 and Gnome Shell (the preview of part of Gnome 3) is actually a better Chrome OS than Chrome OS.

The stuff that Chrome supposedly brings to the table, I already have. My netbook unsuspends in 5 seconds flat, and it lives its life “on”. This fast boot stuff is basically worthless to me. The user experience of Chrome OS is surprisingly close to what Gnome Shell already offers; and of course I’m able to run proper applications, not just web-based ones. And, actually, I think Moblin gets a lot of this stuff right already: particularly the deep integration between the desktop shell and web applications. If I’m honest, this just looks to me like another Google “re-invent the world and outspend the people already innovating in this area”-type project. If Google were not so fat on advertising revenue, there is no way they could make a play like this.

And, to be honest, I’m not entirely sold on the people already innovating in this area already. I’ve written before about litl’s webbook – which ChromeOS seems to be aimed squarely at. If I were litl, I would be pretty deeply worried, since a web-based OS with Ubuntu underpinnings running on custom hardware is basically a rough description of both projects. And you can bet that Google (or, their hardware partners) aren’t going to be knocking these out at $700 a pop.

I’m deeply unconvinced by this “web apps only” approach, anyway. A browser is not the be all and end all, and files exist on disk for a reason: the “it’s too complex for normal people” argument just does not stack up at all. What ChromeOS, and litl and others, are doing is saying “we can make a computing experience as simple as a television”, which is fine. But then you get a computing experience just like a television, with minimal interactivity and flexibility. I mean, custom hardware is great, but one size doesn’t fit all – you have to talk to cameras, printers, mobile phones, and all manner of other gadgetry. That needs OS support. Are Google going to come up with some kind of USB-to-website system so my camera’s photos can be uploaded via Flickr? In what world of UI interaction does that even make sense?

What you end up with is an emasculated system where apps don’t talk to each other or share content easily (can you insert images into Google Docs directly from Flickr?). It’s entirely retrograde, and a non-compelling view of computing which limits users to various hard-coded paths of functionality. For simple stuff like sending e-mail or browsing the news online, it’s fine. For “harder” stuff like attaching a document to an e-mail: well, you’d better hope that you use Google Docs and GMail and that those apps can talk to each other. Better hope, though, you don’t have a video camera and want to edit some clips – there’s no website for that yet, so you’re stuffed, bud. And even when it arrives, it’s going to take a while to upload those hundreds of megabytes of Little Eric walking for the first time, because that ADSL connection you have with 8Mb download has only a 512K upload.
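That upload arithmetic is easy to check. Here’s a rough back-of-the-envelope sketch (the 500 MB clip size is my own illustrative figure; the 512 kbit/s uplink and 8 Mbit/s downlink are from the ADSL example above):

```python
# Back-of-the-envelope transfer time over an asymmetric ADSL line.
# Assumes decimal megabytes (1 MB = 8,000,000 bits) and ignores protocol overhead,
# so real-world times would be a little worse.

def transfer_hours(size_mb: float, link_kbps: float) -> float:
    """Hours to move size_mb megabytes over a link of link_kbps kilobits/second."""
    bits = size_mb * 8_000_000
    seconds = bits / (link_kbps * 1000)
    return seconds / 3600

# A hypothetical 500 MB home video at 512 kbit/s up: over two hours to send...
print(f"upload: {transfer_hours(500, 512):.1f} hours")

# ...versus well under ten minutes to pull the same clip down at 8 Mbit/s.
print(f"download: {transfer_hours(500, 8000) * 60:.1f} minutes")
```

The asymmetry is the point: a link that feels fast for consuming web apps is still painfully slow for pushing your own data up to them.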

It’s clear these “computers” are only going to have a very limited appeal. And this, to me, is where Fedora (and other free OSes) shine. You can have the Moblin interface, or a proper Gnome desktop. The hardware support is great; it will work on your netbook or on your full desktop. You can make the choice to cut down and simplify the interface, and not lose the ability to run the more complicated apps.

If I were to prognosticate, I would say that litl will be around for maybe a couple of years, but not much more. ChromeOS, Moblin and the like will find their place on cheap photo-frame-like touch PCs which people will scatter around the home, but there will still be a real PC in the corner.
