Alex Hudson

Thoughts on Technology, Product, & Strategy


Stackanetes

There’s a great demo from the recent OpenStack Summit (wish I had been there):

OpenStack is notoriously a massive pain to get up and running, and having it packaged as a reasonable set of containers that might become a default way to deploy it is really interesting to see. This is available in Quay as Stackanetes, which is a pretty awful name (as are Stackenetes and Stackernetes, both of which were googlewhacks earlier today) for some great work.

I’m entirely convinced that I would never actually run anything like this in production for most conceivable workloads, because there’s so much duplication going on. But for those people with a specific need to make AWS-like capability available as a service within their organisation (who are you people?!), this makes a lot of sense.

I can’t help but feel there is a large amount of simplification in this space coming, though. While “Google for everyone else” is an interesting challenge, the truth is that everyone else is nothing like Google, and most challenges people face are actually relatively mundane:

  • how do I persist storage across physical hosts in a way that is relatively ACID?
  • how do I start resilient applications and distribute their configuration appropriately?
  • how do I implement my various ongoing administrative processes?

This is why I’m a big fan of projects like Deis for businesses operating up to quite substantial levels of scale: enforcing some very specific patterns on the application, as far as possible, is vastly preferable to maintaining a platform that has to encompass large amounts of functionality to support its applications. Every additional service and configuration is another thing to go wrong, and while individual pieces can be made pretty bullet-proof, overall you have to expect the failure rate to increase (this is just a mathematical truth).

CoreOS in many ways is such a simplification: universal adoption of cloud-config, and opinionated choices like systemd and etcd, for example. And while we’re not going to go all the way back to Mosix-like visions of cluster computing, it seems clear that many existing OS-level services are actually going to become cluster-level services by default – logging being a really obvious one – and that even at scale, OpenStack-type solutions are much more complicated than you actually want.
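To make the “cluster-level service” idea slightly more concrete, here’s a minimal sketch of configuration treated that way: an application reading a setting from etcd rather than from a local file. It assumes the third-party python-etcd3 client and a reachable etcd endpoint; the key name and address are invented for the example, so treat it as a sketch rather than a recipe.

import etcd3  # third-party client (pip install etcd3) – an assumption for this sketch

# Address of the cluster's etcd endpoint; adjust for your environment
client = etcd3.client(host="127.0.0.1", port=2379)

# A deployment tool publishes configuration once, cluster-wide
client.put("/config/myapp/log_level", "info")

# Any instance of the application, on any host, reads the same value
value, _meta = client.get("/config/myapp/log_level")
print(value.decode() if value else "no value set")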


Some notes on Serverless design: “macro-function oriented architecture”

Over the past couple of days I’ve been engaged in a Twitter discussion about serverless. The trigger for this was Paul Johnston‘s rather excellent series of posts on his experiences with serverless, wrapped up in this decent overview.

First, what is serverless? You can go over and read Paul’s explanation; my take is that there isn’t really a great definition for this yet. Amazon’s Lambda is the canonical implementation, and as the name kind of gives away, it’s very much a function-oriented environment: there are no EC2 instances to manage or anything like that, you upload some code and that code is executed on reception of an event – then you just pay for the compute time used.

This is the “compute as a utility” concept taken more or less to its ultimate extreme: the problem that Amazon (and the others of that ilk) have in terms of provisioning sufficient compute is relatively well-known, and the price of EC2 is artificially quite high compared to where they would likely want to go: there just is not enough supply. The “end of Moore’s law” is partly to blame; we’re still building software like compute power is doubling every 18 months, and it just isn’t.

Fundamentally, efficiency is increasingly the name of the game, and in particular how to get hardware running more at capacity. There are plenty of EC2 instances around doing very little, there are plenty doing way too much (noisy neighbour syndrome), and what Amazon have figured out is that they’re in a pretty decent place to be able to schedule this workload, so long as they can break it down into the right unit.

This is where serverless comes in. I think that’s a poor name for it: the lack of server management is a principal benefit, but it’s a side-effect. I would probably prefer macro-function oriented architecture, as a similar but distinct practice to micro-service oriented architecture. Microservices have given rise to discovery and scheduling systems, like Zookeeper and Kubernetes, and this form of thinking is probably primarily responsible for the popularity of Docker. Breaking monolithic designs into modular services, ensuring that they are loosely coupled with well-documented network-oriented APIs, is an entirely sensible practice and in no small part responsible for the overall success Amazon have had following the famous Bezos edict.

Macrofunction and microservice architectures share many similarities; there is a hard limit on the appropriate scale of each function or service, and the limitation of both resources and capability for each feels like a restriction, but is actually a benefit: with the restrictions in place, more assumptions about the behaviour and requirement of such software can be made, and with more assumptions follow more powerful deployment practices – such as Docker. Indeed, Amazon Lambda can scale your macrofunction significantly – frankly, if you design the thing right, you don’t have to worry about scaling ever again.

However, one weakness Paul has rightly spotted is that this is early days: good practice is really yet to be defined, bad practice is inevitable and difficult to avoid, and people attempting to get the benefits now are also having to figure out the pain points.

It’s worth saying that this architecture will not be for everyone – in fact, if you don’t have some kind of request/response to hook into, frankly it won’t work at all – you’re going to find it very difficult to develop a VPN or other long-lived network service in this environment.

Many of the patterns that should be applied in this scenario are familiar to the twelve-factor aficionado. Functions should be written to be stateless, with persistent data discovered and recorded in other services; configuration is passed in externally; et cetera. Interestingly, no native code is supported – I suggest this is no surprise, given Amazon’s investment in Annapurna and their ARM server line. So, interpreted languages only.
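As a rough illustration of those patterns, here’s a minimal sketch of a Lambda-style handler in Python along twelve-factor lines: stateless, configuration pulled from environment variables, persistent data pushed out to another service. The environment variable and table name are invented for the example; treat this as a sketch, not a recipe.

import json
import os

import boto3  # AWS SDK for Python, available in the Lambda runtime

# Configuration is passed in externally, twelve-factor style (the name is hypothetical)
TABLE_NAME = os.environ.get("EVENTS_TABLE", "events")

dynamodb = boto3.resource("dynamodb")

def handler(event, context):
    """Entry point Lambda invokes per event; no state is kept between invocations."""
    table = dynamodb.Table(TABLE_NAME)
    # Anything persistent lives in an external service, not on "the server"
    table.put_item(Item={"id": event.get("id", "unknown"), "payload": json.dumps(event)})
    return {"status": "stored"}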

A lot of this reminds me of the under-rated and largely unknown PHP framework, Photon. While this is not immediately obvious – Photon’s raison d’etre is more about being able to run long-lived server processes, which is diametrically opposed to Lambda – the fundamental requirement to treat code as event-driven, and the resulting architecture, is very similar. In fact, it surprised me that it doesn’t seem to be possible to subscribe a Lambda handler to an SQS queue – it’s possible to hack around this via SNS or polling, but there is no apparent direct mechanism.
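For what it’s worth, the polling hack looks something like the sketch below: a small worker that drains an SQS queue and hands each message to a Lambda function. The queue URL and function name are placeholders, and of course running a long-lived poller yourself rather defeats part of the point.

import boto3

sqs = boto3.client("sqs")
lam = boto3.client("lambda")

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/example-queue"  # placeholder
FUNCTION_NAME = "example-handler"  # placeholder

while True:
    # Long-poll the queue rather than hammering it
    response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for message in response.get("Messages", []):
        # Hand the message body to the Lambda function as its event, asynchronously
        lam.invoke(FunctionName=FUNCTION_NAME, InvocationType="Event", Payload=message["Body"].encode())
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])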

It’s difficult to disagree that this is the wave of the future: needing to manage fewer resources makes a lot of sense, being able to forget about security updates and the like is also a major win. It also seems unlikely to me that a Lambda-oriented architecture, if developed in the right way, could ever be much more expensive than a traditional one – and ought to be a lot less in practice.

What I realised I’m missing from Gnome

Not that long ago, I did a switch on my Android phone: against all the promises I made to myself beforehand, I switched on the Google account and allowed it to sync up to GCHQ/NSA the cloud. I did this for one main reason: I had just got an Android tablet, and I despised having to do the same stuff on each device, particularly since they weren’t running the same versions of Android, and one was a Nexus – so not all the UI was the same. The benefits, I have to say, were pretty much worth it: I don’t have too much sensitive data on there, but the ease of use is incredible. What was particularly good was that when I broke my phone, and had to have a new one, once the new one was linked up everything was basically back how it was. That’s tremendously powerful.

Now, I recently acquired a bit of Apple equipment and of course installed Fedora 19 on it. Just to digress briefly: installing Fedora 19 on any new Mac hardware, particularly if you want to keep Mac OS X around (I don’t much care for OS X, but keeping it for now seems handy), is tremendously difficult. I had wired ethernet (brilliant, because I was using the netinstall – which, I should note, is a truly wonderful experience in the new Anaconda) which was lucky, since the wifi doesn’t work by default. The disk partitioning is incredibly complex, and the installation documentation is not particularly good. At some point I might try and help update the documentation, but it would feel a little like the blind leading the blind at this stage: although I have Fedora booting, the Mac OS X grub entries don’t work.

Logging into my desktop, though, I realised everything was bare. This was not like the Android experience at all – everything, from my username to my dot config files, needed to be set up again. I rarely change hardware, and previously I saw this as a reason to make a fresh start of things: but actually, now I value the convenience more highly.

It’s not like things are totally bad:

  • Gnome’s account settings can pull in some limited information, from Google or OwnCloud or other similar systems
  • Apps like Firefox have excellent built-in secure synchronisation that’s not a complete pain to set up
  • you can use apps like SparkleShare to make specific directories available elsewhere.

However, this really isn’t the experience I want:

  1. I should be able to use some online “Gnome Account” in the same way I can set up Enterprise Login during install
  2. That “Gnome Account” should have all my key configuration, including the details of other accounts I have linked up (maybe not the passwords, but at least the settings)
  3. If I have online storage / backup somewhere, it should offer to sync that up
  4. I should be able to sync my entire home data, not just specific bits
  5. If the two machines are on, I should be able to access one from the other – even if there’s a firewall in the way

I realise point five above is particularly moon-on-a-stick territory.

Technically speaking, a lot of the basic bits are kind of there, one way or another. Most Gnome apps use the standard dconf settings system, and in theory it’s possible to synchronise that stuff where it makes sense (this is, of course, handwaving: whether or not you want all settings exactly the same on each machine is virtually an impossible question to answer). Discovering and syncing other data shouldn’t be that hard. Remote access to another machine is definitely much harder, but the various protocols and discovery mechanisms at least exist.
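To make the handwaving slightly less vague: the dconf half of this is already scriptable, since the dconf command-line tool can dump the whole settings tree as text and load it back. Here is a minimal sketch, assuming the dconf binary is on the path; how the blob moves between machines, and which keys you actually want, is exactly the hard part.

import subprocess

def dump_settings(path="/"):
    # "dconf dump /" prints the whole settings tree in keyfile format
    return subprocess.run(["dconf", "dump", path],
                          check=True, capture_output=True, text=True).stdout

def load_settings(blob, path="/"):
    # "dconf load /" reads the same keyfile format back in from stdin
    subprocess.run(["dconf", "load", path], input=blob, check=True, text=True)

# e.g. store dump_settings() in a synced location on machine A,
# then pass its contents to load_settings() on machine B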

Annoyingly, there doesn’t seem to be much development in this direction – not even prototypes. There are lots of nasty problems (syncing home directories is fraught with danger), and even if you were willing to buy into a simpler system to get the goodies, making it work in Gnome is probably an awful lot easier than dealing with the other apps that aren’t Gnome aware.

I’m certainly not in much of a position to start developing any of this stuff right now, but it would be interesting to at least attempt to draw out a believable architecture.  A decent 70 or 80% solution might not even be too hard to prototype, given the tools available. It would be interesting to hear from anyone else who is working on this, has worked on it, or knows of relevant projects I’ve missed!


A first look at docker.io

In my previous post about virtualenv, I took a look at a way of making python environments a little bit more generic so that they could be moved around and redeployed at ease. I mentioned docker.io as a new tool that uses a general concept of “containers” to do similar things, but more broadly. I’ve dug a bit into docker, and these are my initial thoughts. Unfortunately, it seems relatively Fedora un-friendly right now.

The first thing to examine is what, exactly, a “container” is. In essence, it’s just a file system: there’s pretty much nothing special about it. I was slightly surprised by this; given the claims on the website I assumed there was something slightly more clever going on, but the only “special sauce” is the use of aufs to layer one file system upon another. So from the point of view of storage alone, there really isn’t much difference between a container and a basic virtual machine.

From the point of view of the runtime, there isn’t an awful lot of difference between a virtual machine and a container either. docker sells itself as a lightweight alternative to virtual machines, but of course there is no standard definition of a “virtual machine”. At one end of the spectrum are the minimal hardware OSen that can be used to assign different host resources, including CPU cores, to virtual machines, and those types of VM are effectively not much different to real hardware – the configuration is set on the fly, but basically it’s real metal. On the other end of the spectrum you have solutions like Xen, which make little to no use of the hardware to provide virtualisation, and instead rely on the underlying OS to provide the resources that they dish out. docker is just slightly further along the spectrum than Xen: instead of using a special guest kernel, you use the host kernel. Instead of paravirtualisation ops, you use a combination of cgroups and lxc containers. Without the direct virtualisation of hardware devices, you don’t need the various special drivers to get performance, but there are also fewer security guarantees.

There are a couple of touted benefits of docker, and I’m not totally sold on all of them. One specific claim is that containers are “hardware independent”, which is only true in a quite weak way. There is no specific hardware independence in containers that I can see – if anything, docker.io only runs on x86_64 hardware. If your container relies on having access to the NX bit, then it seems to me you’re relying on the underlying hardware having such a feature – docker doesn’t solve that problem.

The default container file system is set up to be copy-on-write, which makes it relatively cheap diskspace-wise. Once you have a base operating system file system, the different containers running on top of it are probably going to be pretty thin layers. This is where the general Fedora un-friendliness starts, though: in order to achieve this “layering” of file systems, docker uses aufs (“Another Union File System”), and right now this is not a part of the standard kernel. It looks unlikely to get into the kernel either, as it hooks into the VFS layer in some unseemly ways, but it’s possible some other file system with similar functionality could be used in the future. Requiring a patched kernel is a pretty big turn-off for me, though.

I’m also really unsure about the whole idea of stacking file systems. Effectively, this is creating a new class of dependency between containers, ones which the tools seem relatively powerless to sort out. Using a base Ubuntu image and then stacking a few different classes of daemon over it seems reasonable; having more than three layers begins to seem unreasonable. I had assumed that docker would “flatten out” images using some hardlinking magic or something, but that doesn’t appear to be the case. So if you update that underlying container, you potentially break the containers that use it as a base – it does seem to be possible to refer to images by a specific ID, but the Dockerfile FROM directive doesn’t appear to be able to take those.

The net result of using Dockerfiles appears to be to take various pieces of system configuration out of the realm of SCM and into the build system. As a result, it’s a slightly odd half-way house between a Kickstart file and (say) a Puppet manifest: it’s effectively being used to build an OS image like a Kickstart, but it’s got these hierarchical properties that stratify functionality into separate filesystem layers that look an awful lot like packages. Fundamentally, if all your container does is take a base and install a package, the filesystem is literally going to be that package, unpacked, and in a different file format.

The thing that particularly worries me about this stacking is memory usage – particularly since docker is supposed to be a lightweight alternative. I will preface this with the very plain words that I haven’t spent the time to measure this and am talking entirely theoretically. It would be nice to see some specific numbers, and if I get the time in the next week I will have a go at creating them.

Most operating systems spend a fair amount of time trying to be quite aggressive about memory usage, and one of the nice things about dynamic shared libraries is that they get loaded into process executable memory as a read-only mapping: that is, each shared library will only be loaded once and the contents shared across processes that use it.

There is a fundamental difference between using a slice of an existing file system – e.g., setting up a read-only bind mount – and using a new file system, like aufs. My understanding of the latter approach is that it’s effectively generating new inodes, which would mean that libraries that are loaded through such a file system would not benefit from that memory mapping process.
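This is easy enough to check empirically. Below is a rough sketch that compares the device and inode backing the libc mapping for two processes (say, one on the host and one inside a container) via /proc; if they differ, the kernel is treating them as separate files and the pages won’t be shared. The library name to match is just an example.

import sys

def library_mapping(pid, needle="libc"):
    # Each line of /proc/<pid>/maps is: address perms offset dev inode [pathname]
    with open("/proc/%s/maps" % pid) as maps:
        for line in maps:
            fields = line.split()
            if len(fields) >= 6 and needle in fields[5]:
                return fields[3], fields[4]  # (device, inode) backing the mapping
    return None

if __name__ == "__main__":
    first, second = library_mapping(sys.argv[1]), library_mapping(sys.argv[2])
    print(first, second, "shared" if first is not None and first == second else "not shared")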

My expectation, then, is that running a variety of different containers is going to be more memory intensive than a standard system. If the base containers are relatively light, then the amount of copying will be somewhat limited – the usual libraries like libc and friends – but noticeable. If the base container is quite fat, but has many minor variations, then I expect the memory usage to be much heavier than the equivalent.

This is a similar problem to the “real” virtual machine world, and there are solutions. For virtual machines, the kernel same-page merging subsystem (KSM) does an admirable job of figuring out which sections of a VM’s memory are shared between instances, and evicting copies from RAM. At a cost of doing more compute work, it does a better job than the dynamic loader: shared copies of data can be shared too, not just binaries. This can make virtual machines very cheap to run (although, if suddenly the memory stops being shareable, memory requirements can blow up very quickly indeed!). I’m not sure this same machinery is applicable to docker containers, though, since KSM relies on advisory flagging of pages by applications – and there is no application in the docker system which owns all those pages in the same way (for example) qemu would do.
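For reference, the advisory flagging KSM relies on is the madvise(MADV_MERGEABLE) call: a process has to explicitly mark regions it is happy to have de-duplicated, which is what qemu does for guest RAM. A minimal sketch, assuming a Linux kernel with KSM and a Python build that exposes the constant:

import mmap

# Allocate an anonymous region and mark it as a candidate for KSM merging;
# only regions flagged like this get scanned – that is the "advisory" part.
SIZE = 16 * 1024 * 1024
region = mmap.mmap(-1, SIZE)            # 16 MiB anonymous mapping
region.madvise(mmap.MADV_MERGEABLE)     # has no effect unless ksmd is actually running
region.write(b"\x00" * SIZE)            # identical pages are prime merge candidates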

So, enough with the critical analysis. For all that, I’m still quite interested in the container approach that docker is taking. I think some of the choices – especially the idea about layering – are poor, and it would be really nice to see them implement systemd’s idea of containers (or at least, some of those ideas – a lot of them should be quite uncontroversial). For now, though, I think I will keep watching rather than doing much active: systemd’s approach is a better fit for me, I like the additional features like container socket activation, and I like that I don’t need a patched kernel to run it. It would be amazing to merge the two systems, or at least make them subset-compatible, and I might look into tools for doing that. Layering file systems, for example, is only really of interest if you care a lot about disk space, and disk space is pretty cheap. Converting layered containers into systemd’able containers should be straightforward, and potentially interesting.

Short thoughts on the riots.

Last night, we decided to order pizza – we don’t do it often, it’s lazy but sort of a treat. However, out of the three local well-known places, only one was open: the other two had shut down early. Now, we don’t live in London per se, but Croydon (where there were major fires and a member of the public was shot just a night ago) is only a few miles east, and Clapham a few miles north. Sutton, the local town, had some windows broken by youths, but to be honest this isn’t exactly exceptional behaviour in Sutton.

What happened when the pizza came was slightly surreal. The delivery guy was very nice, but asked me to wait at the door until he left. Why? Because there was a group of three kids just down the road, and basically, he was afraid of them. Just to be clear, most people buy using their cards these days, so this guy wouldn’t be carrying much money and didn’t have any other pizza on his bike. He wanted to make sure that if something did happen, there would be someone watching. After he left, I did begin to wonder whether this pizza chain really did make the right decision to stay open. I don’t think he was actually in much danger, but to work in an environment where you believe yourself to be in danger is probably just as bad.

So what’s going on here? I think this has been a long time coming, if we’re very honest. Anti-social behaviour has been a significant political issue for the last fifteen years, and you only need to look back to the student fees marches just last year to see people rioting in London in pretty much the same way. The difference, of course, was that under the flag of another cause, people assumed that this wasn’t a more general problem.

No-one really knows why this is happening. This isn’t a sudden thing, this has been years in the making, and the cause or causes are probably no longer determinable. My personal favourite reason is education: I don’t think this country has ever had an education system which hasn’t failed a significant minority of the young, but in years gone by those failed by the system have been able to get jobs in factories and other places which had a wage that could support a family. Those jobs don’t exist in the same way any more, and they certainly don’t pay a living wage.

What is disturbing, though, is what comes out of the mouths of these people when you listen to them being interviewed. I’ve heard a variety of interviews with people across the country who’ve been rioting, and the message is basically the same no matter who they are. They hate the police (the “Feds”), but this is just a proxy for authority of any sort. They don’t care who gets hurt, they don’t identify with any particular societal structure except for maybe their own gang, and they see an easy opportunity to gain at low risk of getting caught.

This actually reminds me of various pieces of stories from Jon Ronson’s “The Psychopath Test”. I’m certainly not diagnosing these people with antisocial personality disorder, but good lord they certainly display many of the traits. You just need to look down Hare’s list and they tick practically every box at least once over – the main thing that they’re not doing that a real psychopath would do is the lying and manipulation to cover their tracks. Right now, many of these kids don’t feel the need to even cover their faces from the cameras which they know are there.

The radio told me this morning that three men had been run over, attempting to protect their neighbourhood. Many people have come out on the streets, the anti-rioters, the wonderful Riot Wombles, and it’s tremendously sad that rioters are not just attacking buildings and cars but also now their fellow man. I expect a number of commentators to draw immediate parallels with Grand Theft Auto, which had already been linked to these troubles by people who believe games influence real life. I think all it demonstrates is the lack of imagination the rioters have. They’re like a kind of Commercial Zombie, roaming the streets for TVs instead of brains, destroying everything in their way. They don’t know how to deal with people personally, all they can do is imitate and emulate others, whether it’s video game characters, movie characters, hip-hop stars, whoever. I don’t wish to dehumanise them, but they just seem to be incapable of rational thought, they just play out TV stories because they don’t seem to know any other way.

Getting rid of Google’s annoying “background image”

For some reason, Google have decided to put large images as the backdrop to their search engine. Not only are they large and grating, but they change over time and it’s horrible.

I’ve turned this off by putting the following in Firefox’s userContent.css:

@-moz-document domain(www.google.co.uk) {
 #fpdi { display: none !important; }
 body { background-color: #ccc !important; }
}

This gets rid of most of the nasty. However, sadly, the file you need to edit may or may not exist, and could be in a variety of different places, and of course there doesn’t seem to be any good way of doing this easily. First, you need to find your profile folder – and once you’ve located that, the userContent.css file goes in the chrome directory.
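If you want to script it, something like the sketch below would do the job on Linux, assuming the usual ~/.mozilla/firefox profile layout (profile locations differ on other platforms, so treat the path as an assumption):

import glob
import os

CSS = """@-moz-document domain(www.google.co.uk) {
 #fpdi { display: none !important; }
 body { background-color: #ccc !important; }
}
"""

# Typical Linux profile location; adjust the pattern if your profile is named differently
for profile in glob.glob(os.path.expanduser("~/.mozilla/firefox/*.default*")):
    chrome_dir = os.path.join(profile, "chrome")
    os.makedirs(chrome_dir, exist_ok=True)
    with open(os.path.join(chrome_dir, "userContent.css"), "a") as css_file:
        css_file.write(CSS)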

Make sure it’s the userContent.css file you edit and not the similar userChrome.css file – they’re not the same thing.

Update: just to be clear; you will need to restart your browser after you make this change – it’s not instant. Also, you may need to change the domain given – I use .co.uk, so that’s what I put in there. For google.com users, it seems like you can change your background without needing an account for now – but who knows for how long?

Also, my theory is that Google have done this to be annoying. If you have an iGoogle account, you can set the background to be white again – which a lot of people will want to do. Implicit message: your experience is better with an iGoogle account. Sorry Google, but I don’t actually want one of those.

Why Google is really pulling out of China

Since the news about Google demanding unfiltered search came out earlier today, I’ve speculated in a number of places that Google have broader reasons for wanting out of China, and that the issue of search – and, by extension, free speech – was not exactly #1 on their list of priorities. In particular, I mentioned on LWN my thoughts on what kind of an effect Chinese infiltration of Google Apps would have on the customer base they’re trying to build.

To reprise some of the facts in the reports:

  1. BBC News mentioned “a sophisticated and targeted” attack, but later said that Google pointed the finger at phishing and malware scams rather than security problems.
  2. The reports are that Gmail accounts were mainly targeted, and these Gmail accounts were of known dissidents.

Now, 2) is quite believable, but 1) is not at all. Phishing and malware are nothing new, there’s not a lot you can do to stop them, and they’re certainly not sophisticated and targeted unless the Chinese Government was intercepting legitimate Google traffic. Even then, with SSL, that’s a difficult proposition.

Wikileaks’ Twitter account added a couple of other suspicions:

  1. “China has been quietly asking for the same access to google logfiles as US intelligence for 2-3 years now.”
  2. “Gossip from within google.cn is Shanghai office used as CN gov attack stage in US source code network.”

With this kind of affair, it’s usually instructive to consider the adage of Cui bono. Sure, Chinese intelligence probably could use more information about dissidents and would probably like log file access and things like that. Almost certainly they try to access gmail accounts too. I have little doubt that Google would have immense trouble distinguishing this from the more general problem of phishing, and to that extent it’s really not their problem – it’s a social engineering problem.

There is a far larger prize at stake, of course. If Google source code is under attack, which seems reasonable, this presents two major issues. The first is that Google code could be used by China: there is the issue of straight-up rip-off, which devalues Google at the very least. However, Google is already #2 in China, well behind Baidu, the native search system. There is motivation to do this of course, but it’s not exactly the biggest prize on offer.

The larger prize is access to code to work out security issues. Google does not develop code in the open, and while most of it is probably secure there are doubtless issues that a determined attacker could find more easily with access to code. And, once you start getting access, you start being able to gain the stuff of real value: the information stored on Google’s systems, in Gmail, in Google Docs, in Postini, as well as the various logs and other behavioural data associated with advertising.

Put like that, Google simply cannot afford to work in China. In one sense, China is lawless: there is a certain class of “criminal” who is state-sponsored and therefore can do as they please. There is no good technological defence to this, there is only the question of whether to participate or not. And what are the potential costs of participating? Essentially, limitless. Major US and EU firms on Google Apps will not want their business information readable by the Chinese authorities. More than that, firms doing business with other firms using Google Apps will not want their information readable too.

On Facebook, I gave the example of Jaguar Land Rover as one company who use Google Apps for everything. Will they like the idea of the Chinese authorities being able to see what they’re up to? No. Even if it’s not happening, it will put the wind up them: potentially, it could destroy their business. And let’s remember, there are now millions of businesses on this platform.

This highlights one danger of cloud computing: not only do you have to trust the provider, but you also have to trust that the aggregation of data in one place doesn’t become a sufficiently juicy target for someone else. And Google is a very, very juicy target.

Let me speculate further on a few potential issues in the future (none of which are problematic at this point today):

  • Google controls large amounts of “dark fibre” and “private internet”. That’s a juicy target.
  • Google are putting Android into many handsets. Yum, yum, another juicy target!
  • Google maps / street map / other raw data. If, somehow, the collection of data could be controlled by another agent – well, that’s quite a useful tool to have.

This doesn’t even go into the potential issues of having hardware controlled by another agent in your data centre or in your phone, which isn’t outside the realms of practical possibility for the Chinese Government either.

Let’s be clear about this: all Governments have secret services, and the Chinese are by no means the most adept or technologically advanced. However, they may be the most dangerous and the most likely to work clandestinely. Google must know already that they have spies working for them, not just in China but in all their major offices. Most good industrial espionage is internally undetectable, because it only involves information being taken out. The Chinese spies clearly have been up to many detectable activities, which puts them in a very different class.

Update: Wired has a very good article up with more details about the attack. In short, there was a specific piece of malware targeted at Google to pull their source code out of the organisation. It doesn’t say what was taken, but it does say that large amounts could have been sent out, and it doesn’t say how quickly the attack was detected. Which is exceptionally scary. I would have thought the Windows-based attack would limit the scope of what could have been lifted, but this doesn’t explain many of the other rumours about Google’s Chinese offices, and doesn’t (on its own) explain Google’s seeming decision to withdraw. Possibly, Google were attacked in other ways too.

Patent infringement to be criminalised?

That’s apparently what a group of UK inventors are asking for. On the face of it, their arguments are pretty hard to dismiss: if you have a patent, it is extremely costly to “enforce” it, which essentially means enforcement is only open to the big boys.

Sadly, the article doesn’t really talk much about patent quality or the goals of the patent system, and although it brings up the problem of accidental infringement / independent invention, it doesn’t really explore any possible solutions. Certainly, we do seem to have a system which requires a severe overhaul at the moment, though.

Come on, Facebook – re-instate Tom Brake MP

Now, I’m not a huge one for using web applications as a means of civic communication – I tend to believe that communicating with your representatives is much better done in a public space rather than a private one like Facebook. However, this story (on the face of it) is quite disturbing.

Transport for London recently announced the removal of the N213 night bus service between Croydon and Sutton. For many people, particularly young people going out of a night in Croydon, this service was important even though it wasn’t overcrowded. A number of people on Facebook started a group to protest this, and took to the streets of Wallington last night.

Our local MP, Tom Brake, has been a Facebook user for years now and has tended to be pretty good about using it intelligently: joining good local causes, using it as another way of letting people know what he’s up to, and that kind of thing. So, he also joined the “Save the N213” group and posted various letters that he’d sent to the Mayor / TFL.

Now, however, Facebook has suspended his account: it’s like he doesn’t exist on the site any more. No comments, no profile, unceremoniously de-listed from the various groups.

Fine upstanding local residents

Why has this happened? Well, according to LibDem Voice, “his account was automatically suspended when their system detected an unusually large amount of traffic to and from his account”. That is to say, the protest against the N213 – which Tom was participating in, not really organising – was too successful, and Facebook assumed something bad was happening.

MPs need to be easily accessible by their constituents. On issues like public transport, children and young adults are particularly important because they don’t have the option of driving. Representing them effectively means, realistically, being able to contact the local community via Facebook (and services like it), because that’s what these people use, in the same way older generations write letters to the local newspaper.

It’s difficult to know what to do about this. It’s difficult to see how a kind of public service obligation could be imposed on something like Facebook; equally, setting up something genuinely public and civic-minded is unlikely to attract the demographic we’re talking about.

Mono and the MCP

It has been interesting watching the debate around Mono over the past few months. As essentially an independent observer – albeit one who has used Mono and can almost code C# – I couldn’t help the sneaking feeling that somehow, some of this was being orchestrated behind the scenes.

Particularly on the “anti-Mono” side, it has been pretty clear that an agenda of agitation has been in effect, with various distributions being prodded into making statements either way and various “users” kicking up a stink on mailing lists – not least a certain infamous blog writer being caught red-handed whilst goading people on to write angry letters. I don’t know if speeches like Stallman’s were co-ordinated – I suspect more likely happenstance – but it all seemed very well timed.

And now it seems that Novell have been leading Microsoft into giving a Community Promise surrounding patent claims that cover the “standard” parts of Mono. I predict this is going to have a surprisingly negative effect within the community, however. It validates the arguments of people worried about Mono, and this proposed split of Mono into “Standard bits covered by MCP” and “Other bits not covered by MCP” is actually going to fuel the flames: inevitably, people will assume the non-MCP bits are a total patent mine-field, no matter what is actually in that area. Parts that people are quite happily shipping right now – such as ASP.net – will be targeted next by people “anti” Mono. And for the parts covered by the MCP: well, I expect not much to change: certainly, it’s not likely to convert many people to Mono.

In this scenario, I would actually suggest this is a step backwards. People will read the MCP news as an admission that Mono is at risk from Microsoft patents, and it certainly will not unite the community in any fashion.

Personally, though, I think this move signals that Mono is now basically big-time. I don’t know how many will agree, but the smoke-signals have been in the air for a while: major products such as The Sims 3 by EA shipping with Mono, for example. It’s big enough that Microsoft is having to sit up and take notice; it cannot be long now before Microsoft starts shipping either parts of Mono or its own implementations of key Mono tools and libraries.

I don’t think “the Mono issue” is going to be resolved in the Linux community any time soon. However, as I’ve said before – I don’t think that matters. The community of people using Mono, and the community for whom Mono is an attractive proposition, is extremely large and, for the most part, probably not made up of Linux users.

Like Firefox before it, Mono is becoming a break-out technology which appeals to an entirely different set of developers. In Firefox’s case, it became the web developer tool par excellence due to its vastly better debugging, inspection and developer tools. Mono isn’t comparable with MS’ development tools yet, but already has a strong appeal to people wanting to use .net development tools in non-Windows environments. You can use Mono to develop for Microsoft’s own Xbox 360 – as far as I’m aware, you can’t do that with Microsoft’s own .net yet.

It will be interesting to see where the Linux community moves to on this issue over the next couple of years. Whether or not Mono gets used much doesn’t really matter any more though; Mono is now an entirely successful project in its own right and it’s going to be here whether we like it or not.
