“No code” has become increasingly popular over the last couple of years, and I think 2020 is going to be its year: the movement says you can write business logic and even entire applications without the training of a software developer. I empathise with people doing this, and I think some of the “no code” tools are great. But I also think it’s wrong at heart.
Why “No Code”?
On the face of it, the reason for wanting to remove code is obvious: software developers are expensive, in short supply, and fundamentally don’t produce things quickly. The end result can be highly variable and expensive to run and maintain.
But the need for software is clear. A modern “digital organization” needs a lot of software, much of it highly customised or even bespoke (I’m not going to defend this perspective in this post - it would take too long - but my definition of a “digital transformation” is one in which the key processes within an enterprise are updated/transformed to be defined by software primarily).
Wouldn’t it be great if we could create digital businesses, or even digital products, without having to create software in the traditional sense? This is the general way of things: new technologies start out difficult to use, but are made easier and more accessible over time.
There are two main benefits to transitioning business processes into the software domain:
- “change control” becomes a software problem rather than a people problem. Instead of large amounts of retraining to alter existing processes or introduce new ones, you can “just” do a software release. Change becomes faster to implement and to roll back.
- it’s easier to innovate on what makes a business distinct. When businesses are doing the same as their competitors, it’s a sure sign things are becoming commodity. This is fine for some businesses, but most people don’t want to be in commodity services/products.
Many businesses fail when attempting a digital transformation to access these benefits. The downside of trying to make this jump is that suddenly you’re becoming, at least in part, a software development company. Surprise: most companies are not good at this! A software environment is one of infinite possibility, because almost anything is achievable with enough resources (time, money, people). Most people are great at dreaming about the possibilities, but less good at putting in place the constraints needed to achieve realistic results.
What is the basic problem with “no code”?
At the heart of the issue is the concept that “computer programming” - which is to say, writing code - is a constraint on the development of software. That there is some “higher level” on which people can operate, where development is much simpler but the end results are in some way the same.
Specifically, the idea of writing business logic in text form according to the syntax of a technical programming language is anathema. To make an analogy: it’s a little bit like saying all software developers are like car mechanics, who get into the guts of an engine. But most people just need to be able to drive the car, and a simpler interface can be put on top of the machine (a steering wheel and some pedals). Over time, we can abstract away mechanical details (e.g. a mechanic might like to drive stick, but most people prefer an auto gearbox), until the point that anyone can do it. Yay, we democratized it!
Unfortunately, this just bumps into a large set of problems which have already been thoroughly studied in computer science.
The “simpler abstraction”
Attempting to simplify programming has been going on since quite early in the industry: BASIC was an attempt to allow people to write software in what looked like English, and indeed it was extremely successful (cf. Visual Basic).
However, abstraction is already a key concept in coding systems, and it tends not to simplify: in fact, many developers actively attempt to keep code sufficiently concrete so that it remains readily understandable.
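To illustrate the trade-off (this is my own hypothetical sketch, not anyone else’s code), compare a highly abstract helper with its concrete equivalent in Python:

def apply_rules(data, rules):
    # Flexible and reusable, but opaque: you have to read every rule that
    # gets passed in before you know what actually happens to the data.
    for rule in rules:
        data = rule(data)
    return data

def normalise_email(address):
    # Concrete and immediately readable, at the cost of being less general.
    return address.strip().lower()

print(apply_rules(" User@Example.COM ", [str.strip, str.lower]))  # user@example.com
print(normalise_email(" User@Example.COM "))                      # user@example.com

Both produce the same result; the abstract version is more powerful, but a reader has to hold more of the system in their head to predict what it will do.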
The “simpler syntax”
Thinking that the main problem was writing the text, there have been attempts to simplify the syntax or even remove it altogether - there are numerous graphical development systems. Some are only marginally removed from code (e.g. Scratch); others bear no relation to it.
The development of these syntaxes has generally run into the problem of expression: once they are simple enough to pick up quickly, they are no longer expressive enough to use in many scenarios, and vice-versa. Relatedly, some languages have the ability to define a custom language within them, called a domain-specific language (DSL). Few of these languages have ever been truly successful amongst the development community at large, primarily because they again make things extremely complex.
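As an illustration of what an “internal DSL” can look like - the Rule and when names below are invented for this sketch, not taken from any real product - ordinary Python can be arranged so that a business rule reads almost like configuration:

class Rule:
    # A tiny, hypothetical rule object: a condition plus an action to run.
    def __init__(self, condition, action):
        self.condition = condition
        self.action = action

    def apply(self, order):
        if self.condition(order):
            self.action(order)

def when(condition, then):
    return Rule(condition, then)

# Reads almost like a sentence, but it is still code.
free_shipping = when(
    lambda order: order["total"] > 100,
    then=lambda order: order.update(shipping=0),
)

order = {"total": 150, "shipping": 10}
free_shipping.apply(order)
print(order)  # {'total': 150, 'shipping': 0}

The surface notation is friendlier, but the logic it encodes is exactly as complex as it would be in plain code, and it still has to be tested and maintained.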
Configuration over code
Many No Code advocates are building significant systems by pulling together off-the-shelf applications and integrating them. This is made much more straightforward with tools like Zapier, which are able to broadly integrate into different systems.
This begins to fall down in two ways. First, you’ve spread the logic across a variety of different systems, so it becomes more difficult to reason about the application as a whole.
Second, more importantly, the logic has been implemented as configuration as opposed to code. The logic is constrained by the ability of the applications which have been wired together, but it’s still there. Programmers face this dilemma constantly: do we trust an external system and put a lot of configuration effort into it, or attempt to handle more of the logic ourselves?
The logic doesn’t go away. Embedding a decision into the wiring of a Zapier rule doesn’t remove any of the burden of maintenance or correctness.
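To make that concrete, here is a sketch of the same decision written both ways - the dictionary below is an invented stand-in for a Zapier-style “zap”, not Zapier’s real format:

# The "configuration" version: an invented, Zapier-like description of a rule.
zap = {
    "trigger": "form_submitted",
    "filter": {"field": "country", "equals": "GB"},
    "action": "add_row_to_spreadsheet",
}

# The same decision written as code, with a stub standing in for the action.
def add_row_to_spreadsheet(submission):
    print("would append:", submission)

def on_form_submitted(submission):
    if submission["country"] == "GB":
        add_row_to_spreadsheet(submission)

on_form_submitted({"name": "Ada", "country": "GB"})

Whoever owns the zap has to answer exactly the same questions as whoever owns the function: what should happen to non-GB submissions, and who notices when the rule is wrong?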
The equivalence of code
There are reasons why developers still use plain text - mainly to do with productivity, tooling and the ability to be concise. However, I have little doubt that if something came along that was better, many (not all!) developers would drop text like a hot rock.
But the representation of the logic doesn’t reduce the fundamental complexity of the thing that it describes. In the same way I can write “two” and “2” and mean the same thing, there are many ways of writing out business logic.
Which is to say, this supposed process in a visual development environment:
Could be exactly equivalent to:
def process_email(self, address):
    if not self.validate_email(address):
        raise InvalidDataException(_("Address is not valid"))
    self.store(address)
In the first example, I need to know how the visual environment works. In the second, I need to know a language and a development environment. But both of those are skills that are easily acquired. The commonality across both of them is that I need to understand what the logic is supposed to do, and how it will work.
To understand software - of any type - you need to be able to mentally model the system that is being represented, and make predictions about how it will work in different scenarios based on that.
This is exactly the same reason many people have trouble with modern digital devices. The apocryphal “VCR timer” problem arises because the hardware has few buttons with which to give it input, but highly complex internal workings: so the user needs to retain in their head an advanced model of the internal state of the device.
Some people suggest that this is a skill that is not acquirable; that either you can reason about the internal state of something, or you can’t. Those that can’t would be fundamentally unable to program. I’m not sure I buy this - it skates a little too close to “theory of mind” to me - but certainly you can’t do it without a lot of practice. And it doesn’t matter if the logic is in text or is visual, frankly.
Is “No Code” just “No Good”?
Absolutely not.
I think it is unfortunate that over the 70-odd years of programmable computers, we are still roughly stuck using tools similar to those developed in the first 20 years (actually, if anything, we have reduced the power of tools over that time - Lisp didn’t take over the world).
So of course we should still be attempting to improve our languages and environments. Consider these two pieces of code:
#include <string.h>
#include <stdlib.h>

char *add_domain_name(char *source) {
    const size_t size = 1024;
    char *dest = malloc(size + 1);
    if (dest == NULL)
        return NULL;                       /* allocation can fail */
    strncpy(dest, source, size);
    dest[size] = '\0';                     /* strncpy may not null-terminate */
    strncat(dest, "@example.com", size - strlen(dest));
    return dest;
}
And then this:
function add_domain_name(username: string): string {
  return username + "@example.com";
}
The first example is C (invented around 1972), the second is TypeScript (released in 2012, almost exactly 40 years later). They have approximately the same syntax in many places, but TypeScript is much more advanced than C. In particular, the developer doesn’t need to worry about allocating memory, or the character set encoding of the string, or a host of other things.
Indeed, for a large enough application, most of the business logic will have been implemented at quite a high level, and the differences between the languages will be even less obvious. It certainly doesn’t look much like the state of the art has progressed very far in 40 years.
Where does “No code” fail in practice?
There are a large number of systems available, some of which are extremely advanced - as an example, you can define extremely complex software in Salesforce Cloud, without having to write a single line of code. It’s a mix of visual programming, basic rule setting and configuration.
Projects often begin with a “prototype”, to show the platform can do it. These are very quick to put together, and fulfil 80% of the brief. Success? Sadly, no - as coders know, the devil is in the details.
With someone else’s platform, you often end up needing to construct elaborate work-arounds for missing functionality, or indeed cannot implement a required feature at all. As an example, I once used a platform which had an e-mail auto-responder, but one which could not be put behind a spam checker, nor could it examine the SMTP envelope. Using it meant producing spam back-scatter, which is a recipe for getting quickly banned from many e-mail systems. There was no way to use the feature in practice.
As a project progresses, assuming it can navigate the maze of requirements and implement features fully, you then run into trouble with productionizing. Change control is an obvious example.
With code, we’re used to being able to create a change, deploy it into a separate environment for testing, and then deploy it into production. Or, failing that, we can deploy into production but switch the feature on progressively, so that if there are errors we can pick them up quickly and resolve them without impacting all our users.
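A progressive rollout, for example, is often little more than a percentage check behind a flag. The sketch below assumes a simple hash-based bucketing scheme, rather than the API of any particular feature-flag product:

import hashlib

def is_enabled(feature, user_id, rollout_percent):
    # Hash the feature and user together so each user gets a stable decision.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

if is_enabled("new_checkout", user_id="user-123", rollout_percent=10):
    print("new code path")       # roughly 10% of users see the new behaviour
else:
    print("existing behaviour")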
With “no code”, it tends to be difficult or impossible to have a non-production environment. Even if you do have one, accurately copying changes over from one to the other is non-trivial. Salesforce has some excellent tooling available to make this work, and even in that environment it’s extremely difficult to do.
Where does “No code” succeed?
Questioning the need for software in the first instance is always a good idea, but “No Code” systems are extremely good for putting together proofs-of-concept which can demonstrate the value of moving forward with development.
As non-IT systems, they’re also incredibly useful for getting actual business user design input and feedback. While we talk a lot about agile development, it’s very rare that I see end-users on the development teams themselves: giving people access to tools where they can contribute and participate must be an unalloyed good.
There are many tools which, while not “no code” per se, also allow users to produce more technical output. My favourite example is Looker, the business intelligence tool, but there are many such in different niches. As an aside on Looker: I find it extremely interesting that a lot of the model development in that environment happens in plain text, using regular software development tooling. I think this is one of the reasons it has ended up being successful.
Conclusion
I think “no code” as an alternative to most mainstream development is a pipe dream. None of the progress over the last 70 years makes me think we’re anywhere close to replacing text-based development (arguably, since the introduction of the web, we’ve gone backwards).
The various “no code” tools are extremely valuable, but I think they must be deployed somewhat sparingly. They are not a software development silver bullet, and are as likely to make situations worse as better.
In particular, I think business executives without technical backgrounds need to be extremely wary of the siren’s call. But then, I’m a software development CTO, so I would say that, wouldn’t I?
The sweet spot for me is the “power user”: the user of IT who is extremely adept already, and is probably bending tools beyond their design. Giving them an environment in which to deliver is extremely important, but this should be a joint effort with more technical staff - the two sides should not be in opposition.