Sam Altman has published a thoughtful piece about what was previously called the Singularity, which he now refers to as The Merge. I’m not sure these concepts are quite the same – the traditional Singularity was less a statement about humanity than a theory that, at some point, machine intelligence improves so quickly that predicting the future becomes pointless. “The Merge” for Altman is about the point at which human intelligence might start accelerating – which assumes the machines will want to bring us with them. Not quite the same.

However, Altman believes The Merge has already started. I’m pretty convinced it hasn’t, but what is interesting is working out which signs would actually indicate it.

Signs it might have started

Altman points to the following as examples that the Merge has begun.

  • social media feeds influence how we feel
  • search engines influence how we think
  • algorithms influence what we do (usually, what we buy)
  • all this influence and AI technology is effectively co-evolving

I think there are a lot of different things to unpack here, and I’m not sure they all point in the same direction.

It’s very easy to look at social media, for example, and decide that it has had a bad influence on us. People can now converse across the globe, but in practice the technology encourages them to seek out the like-minded, and then to shout in groups at those who don’t think the same. There is a polarising effect at work.

Equally, a lot of the traditional media is also coming in for criticism. TV news offers “equal opportunity” to politicians whose views would have been entirely repellent a couple of decades ago, and we blame the TV for giving them airtime. The politicians get voted in, and we’re told we cannot criticise the voters, because the Liberal Elites simply do not understand why they voted that way.

I think a lot of these things are processes that have been under way for some time. The hand-wringing about climate change is an interesting case in point: we have a huge amount of evidence about this being a problem, but as the evidence has mounted, so has the opposition to any action.

We live in a post-Enlightenment age where we understand fully how limited our knowledge is; scientists are happy to go on TV and say how little they know. We understand there is no such thing as absolute proof. We also increasingly disrespect these “experts”, pointing out where they have been wrong in the past.

As our knowledge has increased, so has our ability to understand the limits of that knowledge, and the fact that much knowledge is never truly provable. Even our most basic principles about the Universe, such as the laws of thermodynamics, are not accepted as hard constants. The breakdown in absolutism has not been matched by an improvement in our ability to discuss facts on a relativist basis; we still have trouble with probability and risk, and our intelligence seems ill-equipped to deal with these concepts accurately.

On this basis, I don’t believe The Merge has begun. A lot of the issues we have now have become problems over a long period of time, and the fact that technological improvements are worsening some of them in unclear ways is not an immediate cause for alarm.

Acceleration is occurring

It would be trite to say that the current rate of technological progress is at all normal or comparable with previous historical periods; it would also be completely wrong. While it’s possible to point to inventive explosions in the past, when the world was suddenly confronted with a vast array of new technologies, it is very clear that computationally we are in a very different place now.

Bitcoin is an excellent example. An entirely ephemeral, mathematical system is suddenly (over a short period of years) consuming measurable amounts of global energy. Bitcoin is a pretty obvious meme in the original sense of the word, and has spread frighteningly fast. It has also held together amazingly well: encoding ideas into software apparently causes them to fragment less often, and when they do split into different memes (like Ethereum) those tend to spread just as fast and with similar robustness.

The technical progress Altman points to is irrefutable. A few years ago, it would not have been believable that a piece of software could learn grandmaster-level chess, from just the rules, in the space of a few hours. While this is still a highly specialised example, I think this AI cycle has been remarkable for keeping the hype down while consistently surpassing expectations.

What might the signs be?

If I disagree with the signs Altman gives, what are the things I would look to, in order to determine if indeed we are beginning to move closer to machines?

For me, use of technology, or deference to technology, is not really a sign. I can use a laser to measure gaps far more easily and accurately than with a tape measure, but that’s simply another tool. If we admit tool use, then The Merge has already started in crows. There’s an unevenness to this as well: plenty of people use calculators and trust the answers. In fact, many adults are not able to manually perform the calculations a calculator can.

I do think that attempting to measure the rate of technological progress is important. The development of new tools obviously helps us design and build new technology, which is a nice virtuous circle, but this still very much relies on a human being directly in the loop.

For me, then, an obvious sign is where technical progress is being made with humans out of the loop. The discovery of new plays in Go and chess by the Alpha programs is a small example of this, I think; it is a very rare occurrence for the use of technology on its own to push our science forward.

We can compare this to the search for proofs in mathematics. There are a couple of examples where theorems have been proven by exhaustive machine search, but by and large the velocity of mathematical progress has not changed.
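As a toy illustration of what “proof by exhaustive machine search” means, here is a sketch that proves a bounded statement by brute force – every even number between 4 and 10,000 is the sum of two primes. The conjecture and the bound are chosen purely for illustration; real machine-assisted proofs (such as the four colour theorem) involve vastly larger case analyses:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_holds(n):
    """Check whether even n can be written as a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# An exhaustive "proof" of the bounded claim: the finite search space
# means checking every case genuinely settles the statement up to 10,000.
assert all(goldbach_holds(n) for n in range(4, 10_001, 2))
```

The point is that the machine contributes raw case-checking, not new mathematical technique – which is why this kind of search has not changed the pace of mathematics.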

Some of the next signs I would look for:

  • discovery or design of novel materials or chemicals. In principle, AI-like techniques can be brought into the materials science world, and could lead to the discovery of new materials. We only recently discovered graphene, even though it had been sitting under our noses for a long time, and only in the last few years have we worked out how to manufacture it. There are probably many amazing materials waiting to be discovered.
  • development of new algorithms or approaches which wring much more performance out of existing hardware. We’ve been stuck at the end of Moore’s Law for a little while now, butting up against the realities of quantum physics. Our tools to design chips and the software that runs on them haven’t changed significantly, really. If we can start getting more performance out of the hardware by software automatically optimizing itself, that would be another indicator that progress is relying less on human input. Arguably, this is happening with AI guiding the layout of chips, but AI hasn’t had much to say about writing software yet.
  • self-designing manufacturing methods. We’ve already had a lot of hoopla about 3D printing, and it hasn’t really gone very far (mainly due to limits in materials…). In principle, a machine that could self-print and assemble parts would be capable of experimenting and developing new processes much more quickly than humans can. We’ve already developed 3D printers that can print most of their own parts, although there are still many significant limitations here.
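The second sign on this list – software squeezing more performance out of fixed hardware without a human in the loop – can be caricatured in a few lines. This is purely an illustrative sketch, not a real autotuner: it times several interchangeable implementations and keeps the fastest, which is the germ of what large-scale optimisation systems do automatically:

```python
import timeit

# Two functionally identical ways of computing the same result; a trivial
# "autotuner" measures both and selects the faster, with no human judgement.
def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_builtin(n):
    return sum(i * i for i in range(n))

def autotune(candidates, arg, repeats=5):
    """Return the candidate with the best measured runtime on `arg`."""
    timings = {
        f: min(timeit.repeat(lambda: f(arg), number=10, repeat=repeats))
        for f in candidates
    }
    return min(timings, key=timings.get)

best = autotune([sum_squares_loop, sum_squares_builtin], 10_000)
# Whichever implementation wins, both compute the same answer,
# so correctness is preserved while performance improves.
assert best(10_000) == sum_squares_loop(10_000)
```

Scaled up – searching over chip layouts, compiler schedules, or algorithmic variants – this kind of closed measurement loop is exactly the “humans out of the loop” progress described above.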

It does seem possible to me that a breakthrough in software-based AI could knock all of this into irrelevance: a true general AI would of course be game-changing, and while no-one believes we’re particularly close to building one, we also don’t know how to build one – so in principle, it could be one year away or a thousand (or more).