“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” – first principle, Manifesto for Agile Software Development (my emphasis)
There’s a great demo from the recent OpenStack Summit (wish I had been there):
OpenStack is notoriously a massive pain to get up and running, and having it packaged as a reasonable set of containers that could become the default way to deploy it is really interesting to see. This is available on Quay as Stackanetes, which is a pretty awful name (as are Stackenetes and Stackernetes, both of which were googlewhacks earlier today) for some great work.
I’m entirely convinced that I would never actually run anything like this in production for most conceivable workloads because there’s so much duplication going on, but for those people with a specific need to make AWS-like capability available as a service within their organisation (who are you people?!) this makes a lot of sense.
I can’t help but feel there is a large amount of simplification in this space coming, though. While “Google for everyone else” is an interesting challenge, the truth is that everyone else is nothing like Google, and most challenges people face are actually relatively mundane:
- how do I persist storage across physical hosts in a way that is relatively ACID?
- how do I start resilient applications and distribute their configuration appropriately?
- how do I implement my various ongoing administrative processes?
This is why I’m a big fan of projects like Deis for businesses operating up to quite substantial levels of scale: enforcing some very specific patterns on the application, as far as possible, is vastly preferable to maintaining a platform that has to encompass large amounts of functionality to support its applications. Every additional service and configuration is another thing to go wrong, and while things can be made pretty bullet-proof, overall you have to expect the failure rate to increase (this is just a mathematical truth).
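That “mathematical truth” is worth making concrete. If a platform depends on n services that all have to be up at once, and we assume (simplifyingly) that they fail independently, each one multiplies in another chance of failure. A quick sketch with illustrative numbers, not a claim about any particular platform:

```python
def composite_availability(per_service: float, n: int) -> float:
    # Availability of a system that needs all n services up at once,
    # assuming independent failures (a deliberate simplification).
    return per_service ** n

# Even very reliable components add up: at 99.9% each,
# fifty of them together are only up ~95% of the time.
for n in (1, 5, 20, 50):
    print(n, round(composite_availability(0.999, n), 4))
# prints: 1 0.999 / 5 0.995 / 20 0.9802 / 50 0.9512
```

Fewer moving parts really is the simplest route to a more reliable platform.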
CoreOS in many ways is such a simplification: universal adoption of cloud-config, opinion about systemd and etcd, for example. And while we’re not going to go all the way back to Mosix-like visions of cluster computing, it seems clear that many existing OS-level services are actually going to become cluster-level services by default – logging being a really obvious one – and that even at scale, OpenStack-type solutions are much more complicated than you actually want.
The various videos of the speakers from Tech2020 – including yours truly – are up and available for Skillsmatter members. Going back to my previous blog post, I can heartily recommend the speakers who I was excited about, but have to say, I was blown away by the overall quality of the conference. Even those topics I didn’t think would hold much interest or news for me turned out to be incredibly interesting, and I daresay the next edition of this conference will be something to watch out for.
That all aside, let’s look at the highlights:
It’s classically classful
And in fact, it seems more or less mandatory: they’ve used main() as the code entry point again, and like in Java, for anything non-trivial you’re basically going to have to write a class which implements that function.
Wider standard library set
Sugar for asynchronicity
This is probably more important than the Promises interface. Dart has message passing built in, as many suspected. And it looks nice and simple: you have ports, and you can either receive messages from them or send messages to them. The receivers are basically event-driven, in the same way a click handler would be. Seeing the value here is difficult in some ways: it will be interesting to see how the balance is struck, because if you are designing a class you could either make an API which creates Promises, or one which sends/receives messages – the net effect is roughly the same. You probably don’t want to implement both, but which system you use is up to you. The message passing interface is slightly more decoupled, but it’s probably easier to abuse in the longer term.
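The trade-off being weighed here can be sketched without Dart at all. Below is a Python sketch (the `Port`, `listen` and `fetch_future` names are illustrative, not Dart’s actual API): the same computation exposed once as a future-style call tied to a single request, and once as a message posted to a port whose receiver is a registered handler.

```python
import queue

# Future/Promise style: the caller gets a result tied to one request.
def fetch_future(x):
    return x * 2  # trivial stand-in for some async work

# Port style: the caller posts a message; a registered handler reacts.
class Port:
    def __init__(self):
        self._q = queue.Queue()
        self._handler = None

    def listen(self, handler):
        # Like registering a click handler: event-driven receiver.
        self._handler = handler

    def send(self, msg):
        self._q.put(msg)
        # Drain synchronously for this sketch; a real system would
        # dispatch these from an event loop.
        while not self._q.empty():
            self._handler(self._q.get())

results = []
port = Port()
port.listen(lambda msg: results.append(msg * 2))
port.send(21)

# Both styles compute the same value; the port decouples sender from
# receiver, but loses the request/response pairing a future gives you.
assert fetch_future(21) == results[0] == 42
```

That last point is exactly the “easier to abuse” worry: with ports, nothing ties a reply back to the request that prompted it unless you build that discipline yourself.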
It’s all sugar
There are some extras, like the ability to reference needed CSS, which point to a slightly grander future where Dart scripts are rolled up with their related resources into something more easily distributable. And maybe this is the point: the unveiling of Dart was not really a beginning itself, but the beginning of a beginning. They’ve designed the language to attempt to grow with your application: you can start small and simple, but as your application grows you can add more to it (like types and interfaces) to make it more structured and, hopefully, safer to run. And in the same sense, Dart itself is going to grow over time as well.
Apologies, but I couldn’t help but comment on this.
- I don’t believe it’s broadcast delay. There is a delay there, but it’s of the order of seconds. His waffle at the end is about the same length. I don’t see what difference it would make – it could make the switch happen earlier (e.g., at 1:40 on the video above) but I don’t see any gain. With people analysing the video, it doesn’t really matter where it happens.
- I don’t believe it’s split video. That’s horribly difficult to get right, I’m not sure anyone proposing it has actually tried it – particularly since they also broadcast this thing in HD. It’s also not very Derren Brown.
- I don’t believe it’s a 55k take split. There’s no visible edit split – and there would have to be one, somewhere – and he knew he got all six at the end. The stuff Derren was saying about “sorry if I only get five right” – eh. This idea is plain nonsense.
- I don’t believe it’s eInk etc. Not very Derren Brown. It’s certainly not projection or quickly stuck on numbers, either.
- I don’t believe there is a hidden wall. The balls cast a shadow on Derren; if it’s a wall it’s exceptional.
- I don’t believe there is anything dodgy about the stand. Looks like perspex to me, and the ball’s labels are pointed up and out.
- I don’t believe the “last ball moved a bit” stuff. The last number out was the 2, which is actually the ball on the far right when they’re turned. If the balls had been replaced, the 2 would be the last in / labelled, so the ball on the far left as it’s turned around should be the one raised…
- I don’t believe he predicted anything, or used statistics, or that the balls in the lottery were influenced.
- I don’t believe he palmed the balls. Not enough time, not physically possible.
- I don’t believe he’s holding anything in front of the balls. The labels are on the surface of the balls.
- Finally, I don’t believe he’s actually going to give us any insight into how this trick was performed. The fact that people are hanging on the little ball moving or the video thing – clutching at straws. This was a good trick.
However… what I would say:
- I’m interested to know why the TV got turned off. It was moving out of shot, and presumably they could have had a set runner turn it off / down off camera. That makes me think they really couldn’t stay on the show longer than the ball announcement.
- There is a gap between the numbers being announced and turning around the balls. This gap must be necessary; the writing down of the numbers and stuff didn’t have to happen.
- The ball holder has sloped-back sides. The front and back of the box are shallow and don’t raise high, but the sides raise up – there’s no need for that; he only needs a box. The slopes match roughly where the labels on the balls are located. Was something attached to the front of them somehow? Looking in between the balls, it could be a line of tape going across the front (particularly between 35 and 39 at about 2:25) – equally though, it could be compression bleed. Not convinced it would give the 3D effect required either.
- Are they even balls? We know that looking at the inside of a sphere and the outside is very similar – indeed, everyone’s seen those masks at the fair which seem to follow you around. I suspect they probably are balls, though.
Personally, I’m going for tape over the balls, palmed away somehow – the sides of the holder and the apparent grey between the balls both point that way. I doubt we’ll know for sure, though.
I read with interest the various Twitterings about the price of disk space – in particular, about Bitfolk, but it applies to any other service really. Andy’s take on this is really worth reading, although I personally think he’s defending Bitfolk’s pricing unnecessarily.
My basic philosophy on this is that price isn’t an issue; things cost what they cost. What is really at stake is the value of the service: is what I’m paying for worth it? Of course there are plenty of businesses which are effectively discount-merchants – the Ryanairs and Tescos of this world – but they don’t offer an awful lot of value. Trying to compete on price caters for a certain type of customer, but it’s a limited market.
It’s too easy to fixate on the cost of materials. You can look at a sandwich and say, “Oh, that probably cost 20p in bread and filling. £2 is a rip-off”. Things simply are not that simple though, unless you’re really truly price-fixated. If you have time to spend scouring the market for a cheap sandwich that’s similar in quality, fine – personally, my time is much more valuable than that.
Similarly, with VPS services, it’s easy to put together some raw numbers on hardware, divide by a few people, and assume it should be very cheap. That ignores occupancy, failure and incidental costs, not to mention the time to manage it all. With that kind of service the economies of scale really matter, and it’s difficult to see how you could do it yourself for the same price. But at the end of the day, even that doesn’t really matter.
What matters is the value of the service. For me, the value of Bitfolk is excellent: it’s a huge amount of management taken off my shoulders. I don’t want to manage Xen; I don’t want to have to worry about hardware-this and the vast kernel-patch hideousness. I don’t want to have to worry about I/O rates. Bitfolk is a service of good quality, and while it’s never going to offer bullet-proof VPS uptime I suspect you can’t get much better without moving to some canny clustering setup.
This is what I don’t understand: bitching about the pennies. Is £24pa too much for 20 GB of disk space? That’s £2 a month – our sandwich from up there. It’s just not worth worrying about; in budget terms it’s noise. What matters is the quality of the service.
Just the other day, the UK Government published a new paper entitled “Open Source, Open Standards and Re-Use: Government Action Plan” (amusingly, they’re also tracking the tag #ukgovOSS to get responses – so there we go!).
I’ve had experience with the previous consultations: the various versions of the “open source policy” which, at best, were statements of non-discrimination and said very little positive, and the various explorations into default routes for exploitation which as far as I can tell resulted in very little. There was also the abortive “Open Source Academy” whose page is still online but I dare not link for fear people might think it relevant.
I think in general it is extremely positive that the Government continues down this path. In the interests of full disclosure, the company I work for (and co-own) does a lot more public sector work than most, and we do supply Government with open source solutions (we’re currently developing a national-level bespoke database system, as an example). I’m very biased towards seeing them procure more open source, because that’s what we want to sell; but I also know how much money we’ve saved the various organisations we’ve worked for (I don’t put this down to open source per se, because it’s more complex than that, but it’s in there).
But in the interests of trying to be constructive, there is still a lot wrong with this. Let me go through some of the big issues which leap off the page.
The policy on proprietary software says that Government will take into account “exit, rebid and rebuild costs”. I’m not totally sure why these costs are seen as specific to proprietary software – they’re very much not – and I don’t see any way this can be assessed objectively and fairly. I fear this is going to be little more than a nod to later costs; what is really required is a working exit strategy from day one.
Second, the policy of “The Government will look to secure full rights to bespoke software code [etc.]” is really pretty wrong-headed. Our business has been on the receiving end of this policy already because it’s actually pretty common, and the “full rights” translated into the language of lawyers effectively means “we will own the exclusive copyrights/etc. to the stuff you develop for us”. This has a number of side-effects which I doubt those who drafted it realise.
The first is that it completely disadvantages those who actually develop software and hold copyrights. When you give up ownership, you lose all your rights to that work. That means you can no longer re-use what you developed – which wouldn’t be true of any supplier of software who didn’t own the IPRs in the first place.
The second side-effect is that the later “Where appropriate, general purpose software developed for government will be released on an open source basis” is basically a nonsense. Let’s be real about this: no-one in Government is going to be going around releasing stuff as open source on a regular basis. Indeed, Government should not be doing that: it’s not a software house, and it doesn’t have the expertise or the competency to do it. The original developers are the ones who should be doing it.
What’s worst about this aspect of the policy is that it underlines that fundamentally, Government has missed a key aspect of what open source is about: that you don’t need “full rights” in order to be in control. Let’s stop calling it open source for a moment, and call it free software. This means you have the freedom to run it, distribute it and modify it (in any combination). Government doesn’t want the copyrights: Government wants the freedom. As a tax-payer, I applaud that whole-heartedly. So please, don’t ask for the copyrights: ask for the freedom.
I really welcome the approach that Government is taking here, but I do feel that a couple of tricks have been missed. Hopefully they will realise that from reading this – always happy to attend another Government consultation, guys 😉
The awful News of the World managed to shut up about it, although they did give it away to anyone with half an eye, but the Daily Mail can’t – they’ve felt it necessary to spoil the worst-kept secret in TV and name Ben Collins as “The Stig”.
Personally, I hope they keep Collins. I don’t think anyone actually cares who the Stig is; except newspapers it seems.
So, Christmas is now over for another year pretty much – and it has been good fun, albeit interspersed with slight bouts of snottiness.
Amongst some of the very lovely gifts I was given, I have a really nice selection of books that I need to make time for:
- The Art Of The Start, by Guy Kawasaki – very little to say about this one since it is so well-known, but I’ve been meaning to read the full thing since I was given a precis of the first chapter by a friend.
- Don’t Sleep, there are Snakes, by Daniel Everett – this is a story of language and life experienced by a missionary living with a tribe of Amazonians. I haven’t read any of this yet, but was captivated by an abridged version read on the radio.
- Making History, by Stephen Fry – this is Fry’s only work of fiction, I believe, and I have very high hopes for it. Reviews have been invariably flattering, but then Fry’s name alone commands a good amount of benefit of the doubt.
- The Black Swan, by Nassim Nicholas Taleb – an explanation, of sorts, of randomness and how it affects our lives. I’m not totally sure what to expect of this yet.
- Influence: the Psychology of Persuasion, by Robert B. Cialdini – again, a relatively well-known book which doesn’t need an awful lot of discussion here.
I’m looking forward to reading all of these, and will post small reviews as I get there. Slightly interestingly, to me at least, the last book on the list – Influence – was actually recommended to me by the same friend who passed on the TAotS precis, but too late for Christmas. Somewhat bizarrely, I had already put it on a list of books I was interested in sometime in November – a very short list, put together with little thought. Indeed, when I re-did my list from scratch in early December, “Influence” didn’t make it back in, and I didn’t think about it again until I received it on Boxing Day.
Perhaps also what I need is some bookshelf type module for my blog too, to remind myself of what I’ve read through so that I can review it at the end of the year – some kind of new year’s resolution appears to be calling.