Millions of words are expended on software architecture. Fashions come and go; some patterns last a long time, others are a flash in the pan. One day, Model-View-Controller is all the rage. The next, it’s Model-View-ViewModel. So on and so forth – each new architecture is the One True Way or a genuine silver bullet, until it isn’t, at which point it’s legacy, technical debt or a code smell.

Developers talk too much about architecture. In the future tense, it’s always what the next architecture is going to enable them to do, what problems it will solve. In the past tense, it’s usually about what the architecture prevents them doing, why the architect was bad, why it’s the wrong pattern, etc. Static architecture design is the wrong thing to think about, and here’s why.

Change

The only thing constant in a software project is change. Software is never “done” – change might happen more slowly, but like a business, software is either progressing or regressing.

When people think about “architecture” they’re usually thinking about cathedrals or bridges, styles of building and significant levels of future planning. Already, we can see that this is somewhat antithetical to most sensible software development practices.

Excessive engineering has a number of costs. Consider the opposite first: when we don’t put enough work into a feature, and take short-cuts that we know will need to be re-addressed in the future, we rightly call this “technical debt”. We’re entering into a deficit in the knowledge that this will allow us to move more quickly, much as a mortgage enables us to buy a house sooner than we could by saving up the full amount.

It’s notable that there is no common opposite term. People don’t generally talk about “technical investment” or similar. I did read an article the other day attempting to define “technical wealth” as a positive spin on technical debt, and I agree that’s a great way of framing it. But the usual inverse of “technical debt” is what we call “over-engineering”.

There’s no straightforward definition of “over-engineering”. I tend to think of it as delivering features or capabilities that have no immediate value. That’s not to say the work has no value at all – usually, everyone will agree it has no immediate value, but predict that it will have significant future value. Often, people will also predict that some types of over-engineering will increase the efficiency of future deliveries, and they’re not necessarily wrong.

I think it is always dangerous when we move too far away from present value. There needs to be some way of ensuring that we’re working on the most valuable next increment, and I feel that delivering working increments today is more important than chasing efficiency gains for tomorrow’s development, which are often illusory.

Bringing this back to architecture: developers will often invest significant time and effort in architecture that delivers no value today. This raises two problems: first, there may be other features we could deliver that represent more value; second, circumstances change, and the value we predict may never come to fruition.

Planning to scale a piece of software is an obvious example of this. If I’m expecting only 1,000 users in the next quarter, and aiming to have 10,000 the quarter after, should I be engineering for the 10,000 users? Most of the time, my argument is no – of course there are plenty of examples where software has needed to scale much more quickly, but 90% of the time this is simply not the case.

In fact, I think there are more examples of software failing commercially because it wasn’t delivered quickly enough than of software that had to scale much more quickly than expected. Unvalidated opinions and unvalidated product design are the enemy of development teams.

Planning is important, though. It’s absolutely the case that the fastest development teams are speedy because they get started early – they often have the pieces of the jigsaw puzzle to hand, pre-sorted, before they attempt to put everything together. Many projects will have information about what will happen in the future, and it makes no sense to disregard that information; we should use it to find efficiency savings where possible.

How to square this circle? How do we plan for the future while focussing entirely on present value?

In terms of software architecture, I call this “Pokémon Architecture”.

Architecting for Evolution

The key change in mindset is to recognise that no architecture lasts very long; no piece of software is ever perfect or finished, and the main thing we need to achieve is a platform that is straightforward to iterate on as we change it.

In Pokémon (the games, the TV series, etc.), one of the key features of the creatures is that many of them can evolve. Through a process a bit like metamorphosis, they can become bigger, stronger, differently capable versions of the original Pokémon.

For me, this is the key thing we want to achieve with an architecture. We want something that delivers value today and works properly, but where there is a clear evolutionary path to some future state that is much more capable.

A nice example of this is the “monolith first” practice that many advocate when developers are thinking about creating microservices. There are people on both sides of this argument of course, with some good reasons against doing this.

My argument here is that if we take this as an example of Pokémon Architecture, we are doing something slightly different. Yes, start out by creating a monolith, but also set out the ground rules that will protect the ability to evolve into something different in the future. This might mean separation between different pieces of code, internal segregation of modules, or other rules.
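
To make that concrete, here’s a minimal sketch of what those ground rules might look like in a TypeScript codebase. The module, names and types are hypothetical; the point is only the shape: the rest of the monolith depends on a small published contract, and the implementation stays private to the module.

```typescript
// orders/index.ts – the only file the rest of the monolith may import from.
// (Hypothetical module; the names and types are purely illustrative.)
import { randomUUID } from "node:crypto";

export interface Order {
  id: string;
  customerId: string;
  totalPence: number;
}

// The published contract for the orders module.
export interface OrdersApi {
  placeOrder(customerId: string, totalPence: number): Promise<Order>;
  getOrder(id: string): Promise<Order | undefined>;
}

// Today's in-process implementation. Callers only ever see OrdersApi,
// so this class can be swapped out later without touching them.
export class InMemoryOrders implements OrdersApi {
  private readonly orders = new Map<string, Order>();

  async placeOrder(customerId: string, totalPence: number): Promise<Order> {
    const order: Order = { id: randomUUID(), customerId, totalPence };
    this.orders.set(order.id, order);
    return order;
  }

  async getOrder(id: string): Promise<Order | undefined> {
    return this.orders.get(id);
  }
}
```

The ground rule is simply that nothing outside the module reaches past that interface – something a package boundary or a lint rule can enforce.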

There are other examples of Pokémon architectural practices. Doing good domain-driven design and establishing clear bounded contexts is one; establishing clear contracts in an object-oriented system is another. Clear responsibilities for a piece of software, established contracts and decoupled components enable us to replace services easily – and I argue that it’s fundamental to see that replacement as an evolutionary, not a revolutionary, process.
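
Continuing the hypothetical sketch above, the evolution from in-process module to separate service is then just another implementation of the same contract. The service and URL here are made up, and a Node 18+ runtime is assumed for the global fetch.

```typescript
// A later evolution: the orders capability now lives behind an HTTP service.
// Callers still depend only on OrdersApi, so the migration is a wiring change.
import type { Order, OrdersApi } from "./orders"; // hypothetical module path

export class HttpOrders implements OrdersApi {
  constructor(private readonly baseUrl: string) {}

  async placeOrder(customerId: string, totalPence: number): Promise<Order> {
    const res = await fetch(`${this.baseUrl}/orders`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ customerId, totalPence }),
    });
    return (await res.json()) as Order;
  }

  async getOrder(id: string): Promise<Order | undefined> {
    const res = await fetch(`${this.baseUrl}/orders/${id}`);
    return res.ok ? ((await res.json()) as Order) : undefined;
  }
}

// The rest of the system doesn't change:
// const orders: OrdersApi = new HttpOrders("https://orders.internal.example");
```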

Pokémon Architecture is fundamentally about protecting and ensuring the ability to refactor. We probably want to start with something small, so that we can deliver a working increment quickly and validate the idea. Of course we want to scale and grow, but we also understand that often we aren’t going to need it (YAGNI), and on that principle we don’t refactor until we have some idea of why we need to refactor. The architecture is designed to evolve.

This puts a big emphasis on learning, on monitoring and metrics, and, above all, on contact with customers. Personally, I think that’s about right.