It’s interesting watching history repeat itself. There are a number of fashions that come and go in technology: thin client computing comes back every twenty years or so, for example. In the 80s, Unix was very big – it faded a bit in the 90s but then came roaring back with Linux.

Another venerable bit of software is coming back into fashion – good old Make. It’s not the perfect tool by any means, and the niche it once had is no longer that relevant. However, I think we’re going to see a growth in its usage once again. Let me explain why.

Make’s beginning

The origin of Make is relatively simple. Most original Unix commands were written in a statically compiled language, usually C. Writing good code demands that the author structure it into separate files. Compilers don’t understand this so well, though: they need to be told which file to process, how and when. Make serves this purpose: a clear ordering can be defined to build the software. Rules are written within Makefiles (the Make configuration) to explain how to create outputs.

At a basic level, rules within Make are a bit like functions. They take input (usually files) and create output (again, usually files). Ordering is created when a rule needs an input that doesn’t exist: Make tries to find a rule that would create the file as an output, and runs that first.
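
To make that concrete, here’s a minimal sketch of a Makefile for a two-file C program (the file and command names are illustrative, and recipe lines must begin with a tab):

    # Link the final binary from its object files.
    app: main.o util.o
    	cc -o app main.o util.o

    # Compile each object file from its source.
    main.o: main.c
    	cc -c main.c

    util.o: util.c
    	cc -c util.c

Running “make app” when main.o doesn’t exist sends Make off to find a rule whose output is main.o, and that rule runs first – exactly the ordering described above. Run it a second time and nothing happens: every output is already newer than its inputs.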

Of course, like other software, Make grew features like Topsy. There are lots of crufty corners, interesting features, and many implementations of Make. The various versions don’t all quite work the same way, but GNU Make is pretty much the de facto standard at this point.

The decline and rise of Make

As C fell out of fashion, so Make followed. Of course there are other compiled languages, but they often bring their own tools to the show. Makefiles are not always simple to understand, and Make has a number of footguns built in (the famous “Recursive Make Considered Harmful” paper described a few).

Sadly, Make also had a difficult friend – autotools. They usually came together. Based on the archaic m4, autotools are an amazingly powerful way of creating Makefiles – but they’re hideously complex.

Things are beginning to change. I’ve been following Fabien Potencier’s blog on Symfony 4, and one of the major changes is the re-introduction of Makefiles. Fabien describes it as “perhaps the most controversial feature of Symfony 4” – which may well be true.

You’ll find Makefiles in a variety of other projects. Static site generators often use them, and occasionally you’ll see Make used as a wrapper around other tools. It (forgive the pun) makes a lot of sense when you’re working with files, or with any process that creates or converts files.

Introducing ATOM

I’m in the process of reintroducing Makefiles as a build process within my teams. I’ve written before about using Habitus to build containers: I still believe strongly this is The Right Way. I had previously used Make, and I said then that I thought it best to replace Make with Habitus. My view here has changed slightly.

Habitus, in a way, is a lot like other specialist build systems. Tools like phing, npm, and maven are all great – but they’re usually specific to a language. Some tools, like gulp or webpack, are even more specialised.

All of these tools have a place, but they also have a cost. Introducing a new tool to a team is expensive. Introducing an array of different tools to new developers takes time. Using the right tool for the job is always a good idea, but using things that are 80% good enough can be better if you can leverage existing knowledge.

I now think the right approach is to use just enough Make. There are a number of benefits to using GNU Make as a basis for new projects:

  • it’s installed by default on a lot of machines, and easily available if not. On macOS and Linux it’s just there.
  • it’s pretty simple to explain usage, even to people who don’t understand Make. Legions of people learned the “make && make install” incantation.
  • it has enough build-oriented features to make it a better proposition than a bash script (see the sketch after this list).
  • it’s generic enough that you can use it in many different contexts.
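
To make the bash-script comparison concrete, here’s a sketch (the paths and the pandoc invocation are illustrative, not a recommendation): a rule that rebuilds a page only when its source has actually changed.

    # Rebuild the page only when the source is newer than the output.
    site/index.html: src/index.md
    	mkdir -p site
    	pandoc -o site/index.html src/index.md

A bash script would run the conversion every single time it’s invoked; Make compares timestamps and exits immediately when site/index.html is already up to date.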

I’m calling this approach, in the absence of anything better, the ATOM approach – A Touch Of Make. These basic principles apply:

  1. make is the primary developer UI. The 80/20 rule should apply: tasks done 80% of the time should be invoked through make unless it’s impossible.
  2. the project-specific Makefile should be as simple as possible. Common rules should be distilled out into separate, generic Makefiles that are imported across a number of projects (the sketch after this list shows the shape of this).
  3. any dependencies required by the project should be installable with a single invocation. A developer shouldn’t need to understand the whole build, or all of the tools involved, from the get-go.
  4. the Makefile co-ordinates the build, it doesn’t perform it. It is possible to use shell commands to catenate files together instead of webpack – but we shouldn’t do that, unless the saving from avoiding a more specialised tool is substantial.
  5. the project-specific Makefile should be as short as possible; ideally less than thirty lines. A 244-line-long Makefile is not following the ATOM approach.
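
As a sketch of what these principles produce (the included common.mk, the npm and webpack invocations, and the target names are all assumptions, not a prescription):

    # Shared, generic rules live outside the project (principle 2).
    include make/common.mk

    .PHONY: install build test

    # One invocation to fetch every dependency (principle 3).
    install:
    	npm install

    # Co-ordinate the build; let the specialist tool perform it (principle 4).
    build:
    	npx webpack --mode production

    test: build
    	npm test

The whole thing stays comfortably under thirty lines (principle 5), and “make install”, “make build” and “make test” cover the everyday tasks (principle 1).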

Keeping it simple

There are tonnes of features in Make, and I think a lot of them are worth avoiding. Using phony targets is pretty much a necessity. Using as much of the dependency management as possible to speed up builds is wise too. Avoiding traps like recursive Make is a good idea; turning off the in-built ruleset is a given (unless you’re compiling C, I guess).
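
In practice that boils down to a short defensive preamble – a sketch, assuming GNU Make:

    # Skip the built-in C/Fortran rules, and clear the suffix list too.
    MAKEFLAGS += --no-builtin-rules
    .SUFFIXES:

    # These targets are commands, not files, so run them whenever they’re asked for.
    .PHONY: install build test clean

Declaring a target phony tells Make not to look for a file of that name, which is what makes command-style targets like “clean” behave reliably.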

There are still a bunch of problems with using Make. It would be great to be able to inherit rules from other Makefiles. Multi-line shell recipes would be nice, as would a broader selection of dependency management systems.

At some point I’d like to define a more formal subset of Makefiles that exemplifies the ATOM approach, and I think it would be viable to think about what a generic ATOM tool would actually look like – but in the meantime, taking a simple approach to projects using Make delivers a huge amount of value.