Alex Hudson

Thoughts on Technology, Product, & Strategy

Category: freesoftware

Faster Continuous Integration with some stowage patterns

Everyone wants faster Continuous Integration (CI). The quicker you can get from commit to test results, the better – but there’s a tension here. A CI build should start close to “first principles”, so that you know the entire build is good and repeatable; yet the more work the CI does, the slower it goes.

I’ve mentioned before that I use a combination of stowage and container-based CI systems, and in professional life we’re now very heavily invested in GitLab (which is an excellent system in general). Containers are a good starting point for CI because they’re easy to set up, do some work in, and then throw away again. Getting clean builds is very straightforward.

Today I released version 0.5.0 of stowage, which brings a few small additional features that really simplify usage in CI.
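
As a rough illustration of the container-based pattern, here’s a minimal GitLab CI sketch. The stowage build invocation and the output/ artifact path are assumptions for illustration, not the actual 0.5.0 interface:

```yaml
# Hypothetical .gitlab-ci.yml: each job runs in a throwaway container,
# so every build starts clean.
stages:
  - build
  - test

build:
  stage: build
  image: docker:latest
  script:
    - stowage build          # assumed invocation – check the stowage docs
  artifacts:
    paths:
      - output/              # assumed artifact location

test:
  stage: test
  image: alpine:latest
  script:
    - ./run-tests.sh         # placeholder test entry point
```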


Initial impressions on Symfony 3.3

I wrote a couple of times previously about Symfony 4, particularly the architecture and use of Makefiles. It’s now at the point of being testable, so I took it for a short spin.

One comment for Linux users: your operating system will probably need an upgrade. I’m a Fedora user, and the stable release only includes PHP 7.0. Although that’s recent, it’s not good enough. Thankfully Fedora 26 has been in alpha a little while, and the beta is due at the end of this month. It was an easy upgrade, and includes PHP 7.1.4, a new enough version.


Fedora’s revised mission

Coming up with a convincing vision and mission in a corporate environment is never easy – in fact, I think it’s one of the most difficult things you can do. Setting a clear and crisp vision is crucial to creating an aligned organisation. Refining it down into an elevator-pitch-sized statement while avoiding generalisations, platitudes and (frankly) abstract gibberish is practically impossible.

Doing so outside a corporate environment is, I think, even more difficult – money is at least a straightforward motivation to hang a hat on. One of the best guides to doing this is Guy Kawasaki’s Art of the Start Manifesto; I particularly like the bit about making a mantra.

The new mission for Fedora is:

Fedora creates an innovative platform that lights up hardware, clouds, and containers for software developers and community members to build tailored solutions for their users.

Unfortunately a few weasel words have crept in (“innovative”, “solutions”, “platform”), but I love the overall thrust of it. The four foundations are pretty close to being a mantra, and they’re indirectly reflected in this.

The main thing I find disappointing is that the mission ties in specifically to a few delivery mechanisms. Lighting up containers should be a consequence of the mission, not part of the mission, for my money. Containers are very relevant today – but are they going to be relevant in five years’ time?

Fedora is an operating system, and the role of an operating system is to create an environment in which software can be allocated the resources it needs to run. The end-user value in having an operating system is precisely that you can run software easily. Having great support for containers is, right now, singularly important to being able to do that.

Let’s quickly think about the future, though. Containers are one piece of technology; Kubernetes is a broader one. Day by day, I see more and more investment in k8s – to the extent that relatively soon we’ll need to think about k8s as an operating system in its own right. It’s never going to replace Fedora, and Fedora can run within k8s, but applications that run on k8s need to be quite specifically tailored for it – or even designed for it from the start. The whole story k8s has is really about resource allocation.

Kubernetes isn’t the last word by any means. The true future is Serverless, which is coming at us at a rate of knots. Should this be part of the mission? Right now, almost certainly not – but the mission should be the test. If the mission is about delivering an environment in which people can run software, Serverless software will become increasingly relevant to that mission in the near future.

The really positive thing about the mission is that it can be used to test ideas. If something isn’t aimed at developers or the community, it likely doesn’t fall into the mission. Replacing legacy Unix is not part of the mission either.

If the redefined mission ends up being useful, it will be because it helps align the community and enables better, more consistent decisions to be taken – everyone will help push Fedora in the same direction. It will be great to see that happen.

Symfony 4 and a flex-able approach

The news about plans for Symfony 4 has got a number of my dev team a bit excited about the possibilities – although, so far, there is precious little information about the new tool, Flex, and exactly what “composition over inheritance” will mean in practice.

One similar solution this reminds me strongly of is Django and its approach to “apps” – although it isn’t necessarily highlighted, a composition-type system ships with Django, and it comes with a number of both positive and negative implications.


Habitus: the right way to build containers

So, after my previous slightly ranty post, I’ve been trying out a few different tools and approaches to building containers, attempting to find something which is closer to my idea of what good looks like. One tool stands out from the rest: Habitus.

(Not to be confused with Habitat, an annoyingly similar project with an annoyingly similar name. I have no idea which came first; suffice to say, I had heard of Habitat before and discounted it as irrelevant to my use cases – and therefore almost overlooked Habitus during my research.)

Habitus provides just-enough-Make to bring some sanity to the Docker build process, with the following killer features (sketched in the example build file after this list):

  1. ability to order container builds by expressing a dependency from one to another
  2. first-class support for artefacts created during the container build process, which can be extracted and used as input for later builds
  3. management API to provide build-time secrets into the containers
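
For a flavour of how these features fit together, here’s a rough sketch of a Habitus build file. The schema and field names are approximate and from memory – treat them as assumptions to be checked against the Habitus documentation, not the verified format:

```yaml
# Approximate build.yml sketch – field names are assumptions.
build:
  version: 2016-03-14
  steps:
    - name: builder                 # built first, produces an artefact
      dockerfile: Dockerfile.builder
      artifacts:
        - /build/app                # extracted and fed into later steps
    - name: deployment
      dockerfile: Dockerfile.deployment
      depends_on:
        - builder                   # ordering expressed as a dependency
```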


The ongoing poverty of the Docker environment

I spent a few hours this weekend attempting to re-acquaint myself with the Docker system and best practices by diving in and updating a few applications I run. I wrote up an article not long after Docker’s release, saying that it looked pretty poor, and unfortunately things haven’t really changed – this doesn’t stop me using it, but it’s a shame that the ecosystem apparently has learnt nothing from those that came before.


Containing incestuousness

Having droned on a little the other day about duplication in Stackanetes (in hindsight, I wish I had made an “it’s turtles all the way down” type jibe), I’ve been delighted to read lots of other people spouting the same opinion – nothing quite so gratifying as confirmation bias.

Massimo has it absolutely right when he describes container scheduling as an incestuous orgy (actually, he didn’t – I just did – but I think that was roughly his point). What’s most obvious is that while there is a lot of duplication, there isn’t much agreement about the hierarchy of abstraction: a number of projects have started laying claim to being the lowest level above containers.

It comes back to this: deploying a PaaS (such as Cloudfoundry, which I try so hard to like but which always ends up disappointing) is still way too hard. Even deploying IaaS is too hard – the OpenStack distros are still a complete mess. But while the higher-level abstractions are fighting it out for attention, the people writing tools at a lower level are busy making little incremental improvements and trying to subsume new functionality – witness Docker Swarm – spreading out horizontally instead of doing one thing well and creating a platform.

I don’t think it’s going to take five years to sort out, but I also don’t think the winner is playing the game yet. Someone is going to come along and make this stuff simple, and they’re going to spread like wildfire when they do it.

Stackanetes

There’s a great demo from the recent OpenStack Summit (wish I had been there) of OpenStack being deployed on top of Kubernetes.

OpenStack is famously a massive pain to get up and running, and having it in a reasonable set of containers that might be used to deploy it by default is really interesting to see. This is available on Quay as Stackanetes, which is a pretty awful name (as are Stackenetes and Stackernetes, both of which were googlewhacks earlier today) for some great work.

I’m entirely convinced that I would never actually run anything like this in production for most conceivable workloads because there’s so much duplication going on, but for those people with a specific need to make AWS-like capability available as a service within their organisation (who are you people?!) this makes a lot of sense.

I can’t help but feel there is a large amount of simplification in this space coming, though. While “Google for everyone else” is an interesting challenge, the truth is that everyone else is nothing like Google, and most challenges people face are actually relatively mundane:

  • how do I persist storage across physical hosts in a way that is relatively ACID?
  • how do I start resilient applications and distribute their configuration appropriately?
  • how do I implement my various ongoing administrative processes?

This is why I’m a big fan of projects like Deis for businesses operating up to quite substantial levels of scale: enforcing some very specific patterns on the application, as far as possible, is vastly preferable to maintaining a platform that has to encompass large amounts of functionality to support its applications. Every additional service and configuration is another thing to go wrong, and while things can be made pretty bullet-proof, overall you have to expect the failure rate to increase (this is just a mathematical truth).

CoreOS in many ways is such a simplification: universal adoption of cloud-config, and strong opinions about systemd and etcd, for example. And while we’re not going to go all the way back to MOSIX-like visions of cluster computing, it seems clear that many existing OS-level services are actually going to become cluster-level services by default – logging being a really obvious one – and that even at scale, OpenStack-type solutions are much more complicated than you actually want.


Some notes on Serverless design: “macro-function oriented architecture”

Over the past couple of days I’ve been engaged in a Twitter discussion about serverless. The trigger for this was Paul Johnston’s rather excellent series of posts on his experiences with serverless, wrapped up in this decent overview.

First, what is serverless? You can go over and read Paul’s explanation; my take is that there isn’t really a great definition for this yet. Amazon’s Lambda is the canonical implementation and, as the name rather gives away, it’s very much a function-oriented environment: there are no EC2 instances to manage or anything like that – you upload some code, and that code is executed on reception of an event; you then pay only for the compute time used.
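
In concrete terms, a Lambda function is little more than this – a minimal Python sketch, where the event fields are invented for illustration:

```python
# Minimal AWS Lambda handler: the platform calls this function with the
# triggering event and a context object, and bills only for execution time.
def handler(event, context):
    # 'event' carries whatever the trigger provides – an API Gateway
    # request, an S3 notification, and so on. The 'name' key is made up.
    name = event.get("name", "world")
    return {"message": "hello, {}".format(name)}
```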

This is the “compute as a utility” concept taken more or less to its ultimate extreme: the problem that Amazon (and the others of that ilk) have in terms of provisioning sufficient compute is relatively well-known, and the price of EC2 is artificially quite high compared to where they would likely want to go: there just is not enough supply. The “end of Moore’s law” is partly to blame; we’re still building software like compute power is doubling every 18 months, and it just isn’t.

Fundamentally, efficiency is increasingly the name of the game, and in particular how to get hardware running more at capacity. There are plenty of EC2 instances around doing very little, there are plenty doing way too much (noisy neighbour syndrome), and what Amazon have figured out is that they’re in a pretty decent place to be able to schedule this workload, so long as they can break it down into the right unit.

This is where serverless comes in. I think that’s a poor name for it: the lack of server management is a principal benefit, but it’s a side-effect. I would probably prefer “macro-function oriented architecture”, as a similar but distinct practice to micro-service oriented architecture. Microservices have given rise to discovery and scheduling systems, like Zookeeper and Kubernetes, and this form of thinking is probably primarily responsible for the popularity of Docker. Breaking monolithic designs into modular services, ensuring that they are loosely coupled with well-documented network-oriented APIs, is an entirely sensible practice, and in no small part responsible for the overall success Amazon have had following the famous Bezos edict.

Macrofunction and microservice architectures share many similarities; there is a hard limit on the appropriate scale of each function or service, and the limitation of both resources and capability for each feels like a restriction, but is actually a benefit: with the restrictions in place, more assumptions about the behaviour and requirement of such software can be made, and with more assumptions follow more powerful deployment practices – such as Docker. Indeed, Amazon Lambda can scale your macrofunction significantly – frankly, if you design the thing right, you don’t have to worry about scaling ever again.

However, one weakness Paul has rightly spotted is that this is early days: good practice is really yet to be defined, bad practice is inevitable and difficult to avoid, and people attempting to get the benefits now are also having to figure out the pain points.

It’s worth saying that this architecture will not be for everyone – in fact, if you don’t have some kind of request/response to hook into, frankly it won’t work at all – you’re going to find it very difficult to develop a VPN or other long-lived network service in this environment.

Many of the patterns that should be applied in this scenario are familiar to the twelve-factor aficionado. Functions should be written to be stateless, with persistent data discovered and recorded in other services; configuration is passed in externally; et cetera. Interestingly, no native code is supported – I suggest this is no surprise, given Amazon’s investment in Annapurna and their ARM server line. So, interpreted languages only.
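
As a sketch of those patterns – configuration from the environment, state pushed out to another service – something like the following; the DynamoDB choice and the TABLE_NAME variable are my assumptions, purely for illustration:

```python
import os

import boto3

# Twelve-factor style: configuration arrives via the environment, and
# persistent state lives in an external service (DynamoDB here, as an
# illustrative choice; TABLE_NAME is a hypothetical setting).
TABLE = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])


def handler(event, context):
    # The function itself holds no state, so any concurrent invocation
    # behaves identically.
    TABLE.put_item(Item={"id": event["id"], "payload": event["payload"]})
    return {"status": "stored"}
```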

A lot of this reminds me of the under-rated and largely unknown PHP framework, Photon. While this is not immediately obvious – Photon’s raison d’être is more about being able to run long-lived server processes, which is diametrically different to Lambda – the fundamental requirement to treat code as event-driven, and the resulting architecture, is very similar. In fact, it was very surprising to me that it doesn’t seem to be possible to subscribe a Lambda handler to an SQS queue – it’s possible to hack this via SNS or polling, but there is no apparent direct mechanism.
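
The polling workaround mentioned above looks roughly like this – a function run on a timer that drains the queue. QUEUE_URL and process() are placeholders of mine, not anything from Lambda’s API:

```python
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # hypothetical configuration


def process(body):
    # Placeholder for the real message handling.
    print(body)


def handler(event, context):
    # Invoked on a schedule, since Lambda (at the time of writing) could
    # not subscribe to an SQS queue directly.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])
```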

It’s difficult to disagree that this is the wave of the future: needing to manage fewer resources makes a lot of sense, and being able to forget about security updates and the like is also a major win. It also seems unlikely to me that a Lambda-oriented architecture, if developed in the right way, could ever be much more expensive than a traditional one – and it ought to be a lot less in practice.

Tech2020

For a while now, I have been waxing lyrical (to those who will listen) about the variety of new tools and analyses available to people who want to prognosticate. If nothing else, the current craze for data within most businesses has resulted in people almost literally swimming around in the stuff without an awful lot of an idea about what to do with it, and while this has led to some unspeakably shambolic practices (those who know me will likely have heard me on my hobby horse about proving models with actual experimentation), it has also opened up new horizons for people like me.

So, I’m delighted to have been invited to give a talk I submitted to the Bartech Tech2020 conference, this coming week in London. This is the first meeting of this particular group, and there is a great line-up of speakers, all of whom are going to be reading the runes and describing their vision of the year 2020. Wonderfully, the various talks will be recorded and made available, so there will be significant opportunity come the year 2020 to look back and groan loudly at the errors and omissions that will have piled up nicely by that point.

There are some brilliant speakers lined up, and I have to confess to being particularly eager to hear from this lot (in no particular order):

  • Zoe Cunningham – the old refrain, “culture eats strategy for breakfast”, has never been more true than now. It’s also one of the most difficult things to set right and predict;
  • David Wood – working in healthcare myself, I’m incredibly interested in David’s talk, and am certain that what we will call healthcare in another ten years’ time will in many ways bear little resemblance to what is practised now;
  • Simon Riggs – in all honesty, I’m hoping he’s going to talk at least in part about homomorphic encryption, because I read the Gentry paper recently and it’s fascinating; there is so much to come in this space – particularly now that data is so large and non-local that all sorts of new strategies are needed.

I’m going to attempt to tweet through most of the conference in low volume, probably on #tech2020, and look forward to putting a few more faces to names from Bartech.
