You’ve likely heard the term “metaverse” many times over the past few years, and outside the realm of science fiction novels, it has tended to refer to some kind of computer-generated world. There’s often little distinction between a “metaverse” and a relatively interactive virtual reality world.

There are a huge number of people who think this is simply a marketing term, and Facebook’s recent rebranding of its holding company to “Meta” has only reinforced this view. However, I think this view is wrong, and I hope to explain why.

Back in 2011, Marc Andreessen authored his well-known article, titled “Why Software Is Eating the World”. Andreessen was simply pointing out how many services and processes were moving into software - something that was well under way in 2011, and difficult to argue with - and that effectively the “domain of software” was expanding rapidly. I will refer to this domain as the “digital domain”, and the real world as the “physical domain”.

There are plenty of examples of how software was eating the world:

  • before the internet, you would have to go to a shop to buy a book. After the internet, you could visit the shop virtually to order the book. Indeed, the book itself may no longer be in the physical domain - it could easily be an e-book.
  • other types of content - such as news, music, TV shows, and films - have been steadily moving into the digital domain, to the point where few people now own the physical media.
  • more and more hardware devices are simply computers in boxes. Many physical items now cannot run without computers - a good example is the modern car engine, which relies on computers for precise fuel injection timing.

Physical items like cameras moved from capturing physical content to generating digital content, and the entire chain of activity afterwards became a digital process. Why was all this happening? A digital camera is more complex and expensive than a film camera, and at least initially produced much worse images - so what was the benefit?

The speed of change

When physical items are transferred into the digital realm, we gain an immediate advantage in terms of speed of change. How those items behave, and how they work with other items, can be adjusted as a “simple matter of programming”. Obviously this is all within the constraints of whatever hardware remains in the system, but if we think about the digital camera - which was initially a worse item - there are things we can do today that would have been unthinkable previously:

  • we can choose between a variety of filters that produce vastly different output
  • we can take tens of frames in a short period of time, and capture the picture that best represents the scene we were shooting
  • we can select areas of the picture and “erase” people / objects we do not want using only automatic processes
  • we can fix blur, judder, and other capture issues without having to retake pictures

Suddenly, the item has become significantly more useful, and as computing power increases, the capability of the item increases.
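The burst-capture point above can be sketched in code. This is a toy illustration, not any real camera pipeline: frames are plain 2D grayscale arrays, and “best” is approximated by a crude sharpness score (the sum of squared differences between neighbouring pixels - sharper images have stronger local contrast).

```python
# Toy sketch: pick the "best" (sharpest) frame from a burst of captures.
# Frames are hypothetical 2D grayscale arrays (lists of lists of ints);
# real cameras use far more sophisticated scoring than this.

def sharpness(frame):
    """Sum of squared differences between horizontally adjacent pixels."""
    return sum(
        (row[i + 1] - row[i]) ** 2
        for row in frame
        for i in range(len(row) - 1)
    )

def best_frame(burst):
    """Return the frame with the highest sharpness score."""
    return max(burst, key=sharpness)

blurry = [[10, 11, 12], [11, 12, 13]]   # small pixel differences
sharp  = [[0, 200, 0], [200, 0, 200]]   # large pixel differences

print(best_frame([blurry, sharp]) is sharp)  # True
```

The key observation is that this selection step is pure software: as compute improves, the same captured burst can be re-scored with better algorithms, without touching the hardware at all.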

We can make similar arguments about most other things that move into the digital domain. Their capability increases, cost goes down, and more than that - they become vastly more amenable to automation and integration. We can build machines that are not constrained by physical limitations.

The Metaverse is the Interface

So, to begin with, the metaverse is really associated with the interface between the physical and digital domains. It’s largely software, but it’s software designed primarily to represent or interact with physical items.

SecondLife is often touted as an early example of this process, but it’s actually pretty advanced. The physical side of the interface is pretty simple - people just run an application on their computer that presents a 3D world - but the software side is complex. Many people are able to log into a single environment, interact, and communicate. To a cynic it’s a barely more advanced medium than a conference call, but the ability to share and operate on virtual items is advanced.

“Metaverse” for me is a spectrum, not an absolute. An even earlier example of this in my opinion is server rack management software. If you have a rack full of servers, and most of them will have disk drives, you occasionally need to do maintenance - for example, replacing a drive. But they’re all alike! How do you know which is which?

Most management software gives you the ability to turn on lights on specific pieces of hardware. Some pieces of hardware may also control these lights with buttons - so that a person can press the button, causing the indicator to light, and communicating to the software which piece of hardware they’re looking at. This is effectively a bi-directional metaverse interface! There’s a representation of the hardware in the software environment, and we can select hardware items from either the physical or digital domains, in order for us to carry out maintenance tasks accurately.
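The identify-light mechanism can be modelled in a few lines. This is a hypothetical model, not any real management API (real systems expose this via protocols like IPMI): the point is that both domains act on the same shared state, so selection flows in either direction.

```python
# Hypothetical model of a bi-directional "identify light" interface.
# The software holds a representation of each server; selecting a
# server from either domain toggles the same shared indicator state.

class Server:
    def __init__(self, serial):
        self.serial = serial
        self.identify_led = False  # mirrored by the physical LED

    # Digital-domain selection: an operator clicks the server in the UI.
    def select_in_software(self):
        self.identify_led = True

    # Physical-domain selection: a technician presses the chassis button.
    def press_identify_button(self):
        self.identify_led = True

rack = [Server(f"SN-{n:03}") for n in range(3)]

rack[1].select_in_software()   # operator flags a machine for service
lit = [s.serial for s in rack if s.identify_led]
print(lit)  # ['SN-001']
```

Whether the LED was lit by a mouse click or a button press, both the software and the technician now agree on which physical box is meant - which is exactly the interface property I’m describing.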

What singles out the Metaverse?

While I would offer this example of rack hardware being something that you can interact with from both the physical and digital domains - and therefore being metaverse-like - most people probably wouldn’t accept it as being very far along the “metaverse spectrum”. So, what are the attributes that make something “more metaverse”? Here’s what I think about:

  1. Are there digital representations of physical objects? If not, then it’s not really “metaverse” - it’s just a virtual environment. The closer the functional representation, the more “metaverse” it is. Typically, this would include multiple people/avatars.
  2. How rich is the interaction? Closed-input solutions - like the rack equipment example - are much less rich than a standardised open-input solution. The reason the web works so well is that anyone with a browser can access a web site (more or less…). In a metaverse, you should be able to encounter new objects and interact with them, without needing to have knowledge of them previously.
  3. How present is it in the physical domain? We have many ways to bring software out into the physical world - from AR/VR interfaces through to haptic feedback, video displays, etc. There should not be a disconnect between the physical and digital representations, and as far as possible, activity in the digital realm should reflect in the physical.

We can think of many metaverse-like features in things we’d find at home already. Home heating systems are an example: they are primarily software-controlled, but use multiple physical sensors and usually offer a physical interface that represents the state of the system, giving us much more control over our home heating. In some cases, the software representation of the system will be analogous to the physical layout of the system, to make the operation clearer.
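A minimal sketch of that heating example, with entirely hypothetical names: the software state mirrors the physical layout (one sensor/valve pair per room), sensor readings flow in from the physical domain, and valve decisions flow back out.

```python
# Hypothetical sketch: a heating controller whose software state
# mirrors the physical layout - one sensor/valve pair per room.

class Room:
    def __init__(self, name, target_c):
        self.name = name
        self.target_c = target_c
        self.current_c = None    # updated from the physical sensor
        self.valve_open = False  # reflected back to the physical valve

def update(rooms, sensor_readings):
    """Sync physical readings in, and decide valve states out."""
    for room in rooms:
        room.current_c = sensor_readings[room.name]
        room.valve_open = room.current_c < room.target_c

rooms = [Room("lounge", 21.0), Room("bedroom", 18.0)]
update(rooms, {"lounge": 19.5, "bedroom": 19.0})

print([(r.name, r.valve_open) for r in rooms])
# [('lounge', True), ('bedroom', False)]
```

Once the system exists in this form, per-room schedules, remote control, and integration with other systems are all “simple matters of programming” - which is the whole argument about moving things into the digital domain.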

In a future metaverse platform, perhaps our entire homes would be modelled in some kind of virtual environment. This would encompass home heating and security systems for sure, but could equally bring in smart appliances like fridges and washers, home entertainment systems, mains power management, and more.

What is the benefit of this? I mean, clearly, there’s not much benefit to being able to walk around my house in 3D just so that I can go upstairs to turn off a radiator. Maybe the typical 3D interface that people think of as being “metaverse” is not the most efficient - and that’s not really the point. The point is that once everything is available in a software-defined environment that reflects the physical, we suddenly have a lot more capability available.

There are obvious overlaps with IoT technology, and automation in general. And while this is true, I don’t think it detracts from the point about the metaverse - just because I can automate my home heating doesn’t make it metaverse, and equally, if all the things I want to do in the metaverse are achievable with IoT integration, that doesn’t stop it being metaverse either.

Impacts on the physical domain

I think the stand-out feature of what will become “metaverse” will largely be about how the digital domain is reflected back into our physical environment. Being able to address and interact with objects based on their physical location alone is a useful capability - especially for things that might move about! The physical world is rarely static.

I haven’t spoken much about NFTs or “Web3” yet. These are related concepts, though. If we think of an NFT as a digital item given the physical attribute of uniqueness, we’re not far wrong - it’s quite a metaverse concept to apply a physical constraint in the digital domain.
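The “physical attribute of uniqueness” is, at its core, just a registry that maps each token to exactly one owner. This sketch is illustrative only - it is not any real NFT standard, and the method names are made up - but it shows the constraint being enforced in software.

```python
# Illustrative sketch (not a real NFT standard): uniqueness as a
# registry mapping each token to exactly one owner at a time.

class UniqueRegistry:
    def __init__(self):
        self._owner = {}  # token_id -> current owner

    def mint(self, token_id, owner):
        """Create a token; duplicates are refused, enforcing uniqueness."""
        if token_id in self._owner:
            raise ValueError("token already exists")
        self._owner[token_id] = owner

    def transfer(self, token_id, sender, recipient):
        """Only the current owner may hand the token on."""
        if self._owner.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self._owner[token_id] = recipient

    def owner_of(self, token_id):
        return self._owner[token_id]

reg = UniqueRegistry()
reg.mint("artwork-1", "alice")
reg.transfer("artwork-1", "alice", "bob")
print(reg.owner_of("artwork-1"))  # bob
```

Real NFT systems decentralise this registry so no single party controls it, but the constraint being modelled is the same: a digital item that, like a physical one, can only be in one place at a time.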

Potentially, in the near future, we will also see NFT-like concepts for physical items. An NFT is a purely digital item, but some items could exist concurrently in both the physical and digital realms - and NFTs do not address this. If I own a physical object, in what sense is that ownership extended to the digital object? What is the difference between possession/control and ownership? Is it possible to give someone a physical object and for them to verify that it is properly represented by some specific digital object (this is an extremely hard question to even state clearly!)?

To a large extent, I think we can avoid most of the thinking in this area, because NFT-type physical constraints are generally much less useful in the digital domain. There are many examples of art-like and other tradeable digital items relying on NFTs to denote ownership, but most content is (and, I think, will remain) licensed not sold. For the most part, where the “double spend” problem doesn’t apply, we don’t need NFT-style responses.

I do think there will be further impacts on the physical domain. Possibly, AR will become much more prevalent - the speculation around Apple glasses, or even MagicLeap, points this way - but I think we’ll also find that new physical modes of interaction will become more common. Voice is clearly usable now, although we haven’t yet found a way of generalising it as an interface in the same way we have the web.