Everyone is talking about what AI can do right now, and the impact that it is likely to have on us. This weekend’s Semafor Flagship (which is an excellent newsletter; I recommend subscribing!) asks a great question: “What do we teach the AI generation?”. As someone who grew up with computers, knowing he wanted to write software, and knowing that tech was a growth area, I never had to grapple with this type of worry personally. But I do have kids now. And I do worry. I’m genuinely unsure what I would recommend a teenager do today, right now. But here’s my current thinking.

The bear is dancing

Alan Cooper’s book, “The Inmates Are Running The Asylum”, is ostensibly about product development - but really, it’s very much about software products. He used the phrase “Dancing Bearware”, which sticks with me even today. Products in this category are dancing bears: we are amazed that the bear is dancing at all, even if the quality of the dancing is poor.

I think it’s fair to say that ChatGPT is one of the first AIs to leave this phase in the public consciousness. This bear isn’t just shuffling from side to side any more, doing the ursine equivalent of the dad dance. It’s performing a remarkable rendition of Swan Lake, even if every once in a while it makes up its own steps.

But, I would contend, we should have seen this coming. Generative AI is relatively new, but the ability to handle human content has come on in leaps and bounds over the last twenty years. If we look at language models specifically, basic Bayesian statistics turn out to be awesome at differentiating spam from non-spam email - in fact, this was how I first heard of Y-Combinator founder Paul Graham, back in 2002 when he published his “Plan for Spam”. Specific words, and word sequences, can be great clues to aspects of the content.
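
To give a flavour of just how simple that approach can be, here’s a minimal sketch of a Bayesian spam score - not Graham’s actual algorithm, and the word counts are made up - showing how per-word statistics combine into a verdict:

import math
from collections import Counter

# Illustrative only: word counts from messages already labelled spam / ham.
spam_counts = Counter({'viagra': 50, 'free': 80, 'meeting': 2})
ham_counts = Counter({'meeting': 60, 'free': 10, 'report': 40})

def word_spam_probability(word, k=1):
    """P(spam | word), with add-k smoothing so unseen words don't dominate."""
    spam = spam_counts[word] + k
    ham = ham_counts[word] + k
    return spam / (spam + ham)

def spam_score(message):
    """Naively combine per-word probabilities (assumes words are independent)."""
    log_odds = sum(
        math.log(p) - math.log(1 - p)
        for p in (word_spam_probability(w) for w in message.lower().split())
    )
    return 1 / (1 + math.exp(-log_odds))

print(spam_score('free viagra'))     # close to 1.0: probably spam
print(spam_score('meeting report'))  # close to 0.0: probably not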

And, of course, vector language models have transformed (sic) the ability of machines to translate written text between human languages. Rough translation is pretty much a solved problem at this point. In fact, the technology here is now so advanced that on your mobile phone, you can:

  • point the camera at a menu written in any foreign language,
  • have the phone convert that image into text,
  • have the phone detect which language the text is written in,
  • translate the text into your native tongue,
  • re-render the menu image in your native tongue, in a style close enough to the original that it looks like a photo of the same menu.

And it can do all of this in more or less real time, such that the translated menu floats in front of you as a kind of augmented-reality overlay. This is genuine Star Trek-style technology - albeit with some of that processing happening out in the cloud rather than on the phone.
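
Strung together, that pipeline looks something like the sketch below. None of these callables are real APIs - they are hypothetical stand-ins for whatever OCR, language-detection, translation and rendering services the phone (or the cloud) actually provides:

# Hypothetical sketch of the live menu-translation pipeline described above.
# The ocr, detect_language, translate and overlay callables are stand-ins,
# not real library calls.

def translate_menu(photo, ocr, detect_language, translate, overlay, target_language='en'):
    regions = ocr(photo)                         # image -> list of (text, bounding_box)
    for text, box in regions:
        source = detect_language(text)           # e.g. 'fr', 'ja'
        if source == target_language:
            continue
        # Repaint the region with the translated text, in a similar visual style.
        photo = overlay(photo, box, translate(text, source, target_language))
    return photo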

Point is, language models are incredibly powerful, and even very basic statistical approaches (like Bayesian filtering) have turned out to have remarkably good outcomes.

Who’s at risk today?

I think it’s pretty easy to make a list of jobs that AI will make more or less irrelevant: either today, or in the very near future.

This blog post is not written by ChatGPT. But I’m pretty confident that I could have given it an outline of the points I wanted to make, and it would have given me a pretty good rendition. I’ve written a couple of books: I could fine-tune GPT on my particular writing style, and I doubt that any reader of my blog would identify the difference.

Content generation, in general, is clearly going to be very different in the future. Graphic design, copywriting, editing, to name but a few roles - all will be substantively impacted.

People talk about music as being another example. I think, actually, music got there before AI did. Is it possible to make a living as a musician? I mean yes, in principle, but you have to be incredibly original, very entertaining, or extremely accomplished - and in many cases, some mix of all three. These are not well-paying jobs for 99% of artists.

The majority of commercial musical output is also what we might snootily refer to as “culture lite content”. What has sold commercially has been factory output for decades now, and if you listen to music in most places in the world, you’ll be hard-pressed to notice significant differences. To a large extent, post-synthesizer and sampler, music is “complete”: while there will always be new tunes, it’s difficult to imagine new genres or instruments. Perhaps my imagination is just too limited, but the variation in music from 1910-1950 (let’s say) feels significantly greater than 1980-2020.

Examining my own work: GitHub CoPilot - a system I’ve now used for a couple of months - is surprisingly good. I’m not talking about the bear shuffling around on its toes; there are moments when CoPilot writes some code for me and my jaw drops. Even when I can reason about how it got to the answer, it still consistently surprises me that it can get there at all.

As an example: I asked CoPilot to write a Python utility that would process HTML files and write out any tables it found, but in Excel format. I estimate it would probably have taken me an hour or two to do this task myself.

Here’s the prompt I used:

Write a CLI tool that will take in an HTML file, extract any tables within, and convert the tables to Excel file format

Here’s the output:

def main():
    """ main function """
    parser = argparse.ArgumentParser(description='Convert HTML tables to Excel')
    parser.add_argument('html_file', help='HTML file to convert')
    parser.add_argument('-o', '--output', help='Output file name')
    args = parser.parse_args()

    if not os.path.isfile(args.html_file):
        print('Error: {} is not a file'.format(args.html_file))
        sys.exit(1)

    with open(args.html_file, 'r') as html_file:
        soup = BeautifulSoup(html_file, 'html.parser')

    tables = soup.find_all('table')

    if len(tables) == 0:
        print('Error: No tables found in {}'.format(args.html_file))
        sys.exit(1)

    if args.output:
        output_file = args.output
    else:
        output_file = os.path.splitext(args.html_file)[0] + '.xlsx'

    writer = pd.ExcelWriter(output_file)

    for i, table in enumerate(tables):
        df = pd.read_html(str(table))[0]
        df.to_excel(writer, 'Sheet{}'.format(i + 1), index=False)

    writer.save()

(it got the library imports, and the other boilerplate, all correct as well. I did not tweak this output at all)

Is this code right? Yeah, it’s pretty much there. There are a couple of things I might tweak: it’s old-style Python in places - I’d prefer f-strings to .format() - but these are minor issues.

Did I learn anything? Yeah. I would have used beautifulsoup as well, but I did not know that pandas had table and Excel support built in. In hindsight, that seems obvious! It makes sense! But I would have written that stuff more manually.
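
For what it’s worth, the pandas features it leaned on really are that direct. A simplified sketch of the same idea (not the CoPilot output; the filenames are just placeholders):

import pandas as pd

# read_html() parses every <table> in the file into its own DataFrame
# (it needs an HTML parser such as lxml or beautifulsoup installed).
tables = pd.read_html('input.html')

# Write each table to its own sheet in a single workbook.
with pd.ExcelWriter('output.xlsx') as writer:
    for i, df in enumerate(tables):
        df.to_excel(writer, sheet_name=f'Sheet{i + 1}', index=False)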

Now, of course, this is a relatively easy problem to specify, and there are probably more than a few examples in the training set that CoPilot could draw on. But, as a productivity leg-up, GitHub CoPilot is hilariously powerful. Like, ridiculous. This is not the IDE wars all over again; this is not IntelliSense versus vim or whatever. Programmers who don’t use AI in a few years’ time will not be able to compete with those who do - it’s that stark a difference.

What should we learn, then?!

So, this is the problem: if we’re losing out not just on culture and art but also on the sciences (software development, protein folding, drug discovery, and so on), very quickly we get to “what’s the point of the thing in the meat suit anyway?”. I think that’s overly pessimistic for many reasons, but here are a few.

I agree with Semafor that a physical presence has advantages. Plumbers have been well paid for many years, and this doesn’t seem like it’s going to change any time soon. AI is not going to fix the lack of joiners in the building trades, and we will always need shelter and clean water.

But, here’s the thing. People like to joke about the Metaverse as being silly or irrelevant. These people are wrong, and Meta/Apple will prove this in the next five years. The Metaverse is where we can interact with virtual things. Software is eating the world, and everything is becoming virtual - just think how many “things” are now in your phone, from calculators/torches/cameras through to the translation app I talked about earlier. We will need to go into the world of software to interact with these things; phone interfaces and buttons will not be enough. It’s not a question of if, but when. As things become more virtual, the physical world becomes less and less useful / interesting / necessary (sorry).

Will jobs that involve caring for people survive? Honestly, I just do not see this, and I think Semafor are wrong here. Elderly care is amongst the least-paid and least-valued work there is, teaching is tragically paid and getting worse, and nursing is going the same way. From a purely economic perspective, these are not particularly productive jobs, and absent a very different economic model it’s difficult to see that changing. Instead, unfortunately - and I don’t like this world - the work will become more and more automated. Have you been to McDonald’s recently? Did you place your order with a person or a tablet? Exactly.

And I disagree fundamentally on the culture part, too. Music has already demonstrated this to us. When were poets ever well paid? Do you want to work in a fine restaurant kitchen, even today? Do you fancy the high life as an actor? I mean, none of this appeals to me. I’d quite like to write - but authors, again, are not well paid.

Why do I keep coming back to money? I’m not sure how else to rate these things. The value we place on work is measured in money, really. And either it has a market price, or we subsidize it and the money comes from somewhere else. When we’re talking about humans as an overall workforce, that “somewhere else” becomes extremely important.

So, what would you teach?

Primarily, and probably for the next twenty years, I think AI will be augmentation - a force multiplier. Take the example of a calculator: lots of people no longer do routine arithmetic by hand because they have calculators. But you have to know how to use them!

Have you ever watched a shop assistant attempt to take VAT off a total? They know VAT is a 20% tax. They have a calculator. Unless they do this somewhat regularly, I can guarantee you most of them do not know the sequence of buttons to press. Some calculators even have a dedicated “%” button - this is a bear trap for the unwary.
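
The arithmetic itself is the trap. To strip 20% VAT off a VAT-inclusive total you divide by 1.2; subtracting 20% gives the wrong answer:

total = 120.00          # price including 20% VAT

wrong = total * 0.80    # subtracting 20% -> 96.00, too low
right = total / 1.20    # dividing by 1.2 -> 100.00, the ex-VAT price

print(wrong, right)     # 96.0 100.0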

So, for me, the primary thing to teach is how to use AI. It makes sense to ban AI from some subjects or in some classroom circumstances, in the same way that many maths exams have to be done without a calculator. But the only reason that matters is that you need to know enough to tell when the AI is wrong.

Here’s a coding example:

""" write a function to convert an XHTML string
into a plain text string """
def html_to_plain_text(xhtml_str):
    # remove all tags
    text = re.sub(r'<[^>]*>', '', xhtml_str)
    # replace html entities
    text = html.unescape(text)
    # replace multiple spaces with a single space
    text = re.sub(r'\s+', ' ', text)
    # remove leading and trailing whitespace
    text = text.strip()
    return text

I can’t use this function; it’s wrong. Amongst other reasons, it will not work with CDATA sections correctly. But here’s the thing! It does, mostly, work. It’s not completely wrong! But it’s not well-engineered. It’s a 90% solution, and in some specific scenarios may be good enough.
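
For contrast, here’s the sort of version I’d be happier shipping - a sketch that leans on a real HTML parser (BeautifulSoup, as in the earlier example) rather than regexes. Even this has edge cases; genuine XHTML with CDATA sections is better handled by an XML parser:

from bs4 import BeautifulSoup

def html_to_plain_text(xhtml_str):
    """Convert an (X)HTML string to plain text using a real parser."""
    soup = BeautifulSoup(xhtml_str, 'html.parser')
    # Drop elements whose contents should never appear as text.
    for tag in soup(['script', 'style']):
        tag.decompose()
    text = soup.get_text(separator=' ')
    # Collapse the runs of whitespace left behind by the markup.
    return ' '.join(text.split())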

I need to know enough about the problem to identify when a solution is incorrect, even when it works in some obvious cases. And a lot of programming is like this: ask your favourite developer about character encodings, or datetime/timezone conversions, or any number of other niche issues. I found out last week that Azure apps are typically behind a SNAT gateway with only 128 slots! I reckon 90% of developers would not understand why that’s even a problem without research, though.

So, learning how to drive AI is important. And the more content that is generated through AI, the worse this problem becomes. Typically, AI cannot write out a whole program (like the first example from CoPilot above) in one go - it’s much happier writing out pieces: a function, or a specific stanza of code. The overall design and wiring still remain a pretty human endeavour.

(As an aside: as we develop different strategies for prompting AIs, including planning steps, the above point becomes less true over time. And I suppose this is a worry: how many of these AI limitations are fundamental? Probably not many of them.)

I do think the demand for coders will go up with AI, incidentally, but I think this will be a medium-term effect. We’re currently highly constrained: how much software gets built is limited by both salary expectations and the number of available people. If AI enables developers to work at a higher level, and/or senior devs to become more productive, the effective supply will increase - but I still see the limits on demand being the issue for the foreseeable future.

Another crucial skill is going to be critical thinking. For a variety of reasons, I think this is difficult to acquire, and we’ve had a poor record of teaching it in the past. Thinking critically without falling into the trap of becoming a dreadful cynic or pessimist is tough. But making use of AI output without critical examination is going to be dangerous. This will need factual knowledge and critical faculties, but also a sense of ethics.

(As another aside: ethics has been part of engineering since, well, forever. Every computer scientist I know of was taught about the Therac-25 machine at university. There is nothing new here about AI, except that the ethical concerns are of a different nature. We will need a more extensive education on ethics, that’s all. I also think this is a different point to the basic level of safety we’re aiming for with AI - which is relative, and whether we’re happy with As-Low-As-Reasonably-Practicable versus As-Low-As-Possible is the basic dilemma there.)

What will the role of AI be?

When I think about conversational AI, and particularly language models, I think their primary use case is going to be as a kind of universal API. Rather than different pieces of software calling web functions between themselves, integrations between services - or even whole businesses - will take the form of natural-language conversations.

An example of this may well be customer services. Let’s say I bought a pair of shoes online, and they don’t quite fit, and I want to return them. Even now, I’m probably not talking to a human to arrange that, but maybe in the future I will just type into AssistantGPT “Can you arrange for me to return the trainers I bought recently?”.

What it will need to know/do:

  • some form of access to my records, to know which shop and transaction that was.
  • some understanding of the shop’s return processes - maybe AssistantGPT connects directly to StoreGPT and asks it what the process is.
  • maybe there are a few different options - I can mail them back via the local post office, or take them back in store - and AssistantGPT can just ask me which I prefer.
  • if I can take it back in-store, is there a store near me? Perhaps I bought it online, and the shop is in Ireland: it should know that returning in person is not an option…
  • or maybe it figures out there’s a local delivery service that would be willing to pick up my parcel, separately from the services that the store offers.

At the end, AssistantGPT is always going to present the options for agreement before executing anything: “Here are your options: … which do you want to proceed with?”. Once I give permission, it will go away and do all the arranging on my behalf. I think this pattern - “Here’s a task” / “Here’s how I would complete the task” / “Here’s permission” - will become incredibly common.
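
Sketched as code, that loop is tiny - the shape matters more than the detail. Everything below (the assistant object, its propose and execute methods, the plan summaries) is hypothetical:

# Hypothetical sketch of the task -> proposed options -> permission -> execute loop.
# 'assistant' stands in for whatever agent service eventually provides this.

def handle_task(assistant, task):
    options = assistant.propose(task)              # "Here's how I would complete the task"
    for i, option in enumerate(options, start=1):
        print(f'{i}. {option.summary}')
    choice = int(input('Which option do you want to proceed with? '))
    chosen = options[choice - 1]
    if input(f'Execute "{chosen.summary}"? (y/n) ') == 'y':  # "Here's permission"
        return assistant.execute(chosen)
    return None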

I don’t think this scenario is far-fetched, and for a variety of reasons it makes enormous sense from a productivity standpoint: once you think about removing humans as the primary means of communication between people and organisations, there are obvious imperatives to switch to this more asynchronous mode of communication. This is the Jeff Bezos web-services model of team organisation at Amazon, but writ large and applied to entire societies.

So, to some extent, we can look at Amazon as a taste of the future to come. People there typically have one of two types of job: they are either relatively menial physical bodies employed in logistics systems where the robot either doesn’t exist yet or is too expensive, or they are brains-in-jars working behind the door of a transactional web API, developing high-value services for Amazon Web Services.

Of course, there are a variety of other roles - marketing, finance, HR, etc. - but these roles are undifferentiated in many businesses, and I think that’s the hallmark of a role that is likely to disappear. Book-keeping, for example, is already on the way out - accountancy will follow, and finance auditing will happen later. Why have an auditor that just samples and checks/balances the books, when you can have software that literally reviews every single transaction?

Jobs of the future

I think we’re actually slightly too early in this process to identify the boundary of what will remain human endeavour. I can’t see any likelihood of the economic imperatives changing, and while there are still a few craftspeople who make their living weaving baskets today, I think it’s fair to say that some of today’s jobs will be gone in the near future in the same way that basket weaving is no longer a mainstream occupation.

Physical presence may be one key attribute to predict roles that will be difficult to replace, but I think this is a matter of time: both robotics and the Metaverse will encroach on this faster than we think.

The ability to think critically will be a key determinant, and to some extent ethics will be, as well. If we think about jobs that rely on these today - from insurance loss adjusters through to judges sitting in courts - they are unlikely to be displaced by AI. While we might think AI could do a fairer job of their work, the reality is that society will not let software sit in judgement of humans in any substantive way any time soon.

Cultural jobs will not necessarily survive, but the jury is definitely still out on creativity. A lot of creative jobs rely on the ability to remix and build on others’ work, but we still place a lot of emphasis on originality. Of course, is there anything new under the sun? It probably gets a lot more difficult over time, but I think in a world of ubiquitous AI-generated content, uniqueness and authenticity will come to command a premium.

The ability to analyse, make decisions, implement decisions and respond to world events will likely still remain quite human. We don’t yet trust AI to enact its output automatically, and AI can’t yet set goals or pursue them. To some extent, this is a bet against AGI - and I feel like this is a difficult bet to put odds on right now - but I feel somewhat confident in saying that we’re probably two or three major breakthroughs away from something that’s close to AGI. I’m just less confident about how long those breakthroughs may take (I am confident they are possible, and that there are no obvious technological barriers other than knowing what to implement).

My last point, I think, is my strongest. I’ve talked about critical thinking, a strong ethical stance, and so on: but the most important thing we can teach our kids is how to learn. This is literally what they are competing against: machines that are capable of learning. Right now the learning is slow and expensive, and the machines are not readily adaptable, but that will improve over time.

Humans are incredible at learning, but many people pretty much stop learning once they leave school. Many people have never taught themselves anything non-trivial. Many people have never learned by experimentation. Being able to learn in many different modes and across a variety of different skills is going to be the most important skill in the future.

I feel mostly worried about the state of our education system. Are universities, as they exist now, good enough? For undergraduate degrees, I feel like the obvious answer is no. Teaching people via AI is also much more limited than I think many people realise. The PhD, traditionally, is the first point at which a student contributes original thought to the state of the art. Now, I’m not saying that everyone needs to get to PhD level - but I do firmly believe that many PhD-level skills will become extremely important.

So, what do I think people should learn in the AI future?

  • how to drive AI
  • how to critically assess the output of AI
  • how to understand the ethical boundaries of what tasks should/shouldn’t be within the domain of AI
  • how to write code (genuinely, I think this is the new “how to use a calculator”)
  • how to research topics
  • how to analyse information and synthesize hypotheses
  • how to perform experiments
  • how to learn new things

Many of us already do a lot of these things, day-to-day. It still sounds kind of knowledge-worker-y, when I read the list. But I think we’re going to need to do the above to a much higher standard than today.

I foresee a kind of Masters of Business Administration (MBA) of the future revolving around many of these skills. MBAs specifically were designed to equip mid-level business managers with all the skills they need to lead a business, but typically (and you can argue with me here) graduates have a relatively inflexible and overly-operational/financial/economic lens through which they view life.

A potential Masters of AI Administration (MAA) probably can’t exist in the same form, though: it’s not a one-time thing. It’s the life-long pursuit of learning that is key. The era of “a job for life” is already gone; I think we’re now at the end of “skills for life”. Maybe in the future we will divide our lives into decades, and spend the first year in every ten back in school? Now, there’s a thought…