Are we preparing our kids for the age of AI?

If artificial intelligence can write code, analyse financial markets, and act as a solicitor, what is left for our children to do?
As AI triggers a massive shift in the labour market, data shows that university-educated, white-collar workers are the most exposed to displacement. Thriving in this new era requires a radical pivot toward a "skill-based partnership" with technology, and the prioritisation of human connection, ethical reasoning, and creative synthesis.
But are our schools pivoting in the face of this uncertain future?
Read on to learn more, and join the conversation to share your thoughts.
Is AI just hype?
In 2021, Mark Zuckerberg announced that the future of technology was “The Metaverse”.
We were promised that conventional screens would disappear and that we’d work and play in The Metaverse via a headset strapped to our faces.
He was so confident that he renamed the parent company Meta, and in the years since has sunk $80 billion into bringing his vision to reality.
In March 2026, Meta announced that they were shutting it down. So we know that technologies can be overhyped.
However, it’s likely that this lack of traction and eventual failure was down to one simple reason: it was technology that nobody (or not enough people) was asking for.
As a counter-example, ChatGPT broke records for growth in user adoption following its initial release in Nov 2022.
It had 1M users just 5 days after launch,
100M users within 2 months,
400M users by Feb 2025,
and currently has approximately 900M weekly users.
During this historic growth, numerous competitors have sprung up, such as Claude by Anthropic and Gemini by Google (the latter now exceeds ChatGPT’s growth).
AI is not the Metaverse. It’s delivering a product that people actually need and use, and that adds value—and it’s not going anywhere.
The global economy relies on AI not being hype.
The capital investments in AI infrastructure (datacentres) are mind-boggling, and make Zuckerberg’s $80B bet seem trivial.
It is estimated that between 2026 and 2030 roughly $4 trillion will be spent on AI infrastructure.
Whilst spending commitments on this scale have made markets nervous of late, the market has historically rewarded these huge bets through the revenue multiples assigned to “big tech”.
As a result, the major indices (S&P 500, Dow Jones, etc.) have become heavily concentrated in big tech, with Nvidia (the maker of the chips going into these datacentres) becoming the most valuable company in history.
All this is to say, the mega-cap companies are “all-in” on AI and, to a large extent, so too is the global economy.
The whole world is incentivised to make this vision a reality. If AI fails to deliver as a disruptive technology, it would lead to massive write-downs and crashing global indices.
But what happens if it does live up to the hype? What does it mean for our kids’ futures?
It’s not ‘if’, it’s ‘how quickly?’
There was a time when standing in a lift all day to actuate a lever for people wanting to travel to a different floor was a career. That career disappeared completely a long time ago due to improved technology.
Further down the scale is the checkout assistant. We’ve all witnessed their replacement by self-checkouts. They may not have totally disappeared, but they’re certainly under pressure.
Right at the other end of the scale is a plumber. The machine has not been invented that can replace them. Yet…
‘Knowledge work’ (coding, research, junior legal work, etc.) is already under pressure, and the rate of improvement in the technology replacing it (LLMs) is remarkable.
Robotics is the next frontier. There is a lag in robotics compared to LLMs (the ‘brains’ behind the robots), but they too are improving at an incredible rate.
So, in the long term, few careers feel truly ‘safe’ from AI and robotics. It’s not if they’ll be cannibalised, it’s when, and to what degree.
The effects are already being seen
While “mass layoffs” are becoming increasingly common (particularly in software development), the larger impact is in “silent compression” of job opportunities—with companies freezing hiring and expecting existing employees to cover more ground using AI automation.
Today, 37% of companies expect to replace some roles with AI (6), and entry-level roles are the first to go—hitting young people entering the job market hardest.
According to recent research by Anthropic, there has already been a 14% drop in employment among young workers (aged 22-25) in highly AI-exposed industries (2).
These roles are the career ‘on-ramps’ for our graduates, and the trends are heading in the wrong direction. Not only does this squeeze reduce career opportunities, it also leads to short-term wage suppression for those able to land a role as more graduates chase fewer roles.
Which career paths are at the greatest risk?
Anthropic has done some excellent research on this question. Anthropic’s Claude is not as well known to the public as ChatGPT, but it is very popular with enterprises, which gives Anthropic massive amounts of usage data revealing what its technology is actually being used for.
According to their research, career paths at the greatest risk are (1)(2):
Computer programmers.
Customer service representatives.
Data entry keyers.
Medical records specialists.
Market research analysts.
Sales representatives (wholesale and manufacturing).
Financial and investment analysts.
Software QA analysts and testers.
Information security analysts.
Computer user support specialists.
The common themes are that these roles require limited creativity, are highly routine, and involve digesting a large corpus of information and producing an intellectual output.
Sitting in a back office writing code - already under significant pressure.
Working as a junior analyst at McKinsey, Deloitte, Capgemini, etc. - likewise.
Producing and reviewing boilerplate contracts as a junior legal associate - ditto.
Critically, these are not “low-tier” jobs. They are exactly the type of white-collar graduate jobs that act as career “on-ramps” for our kids. These roles earn 47% more than the average salary and are four times more likely to require a degree than the average job (7).
Is anything safe?
In the long term, who knows.
If there is hope that certain skill sets are protected in the long term, it’s by remembering that we’re social primates.
Why are there still any human operated checkouts?
Why did Tesla’s retail model (relying on online sales over physical dealerships) not set a new norm for the automotive industry?
Why did the Metaverse fail?
The answer may be that, as more of our lives are lived online, we crave human contact and will pay a premium for that. Further, it may not be that we simply crave human contact, it might be that only a human can deliver the product effectively.
Mehrabian’s Rule, that only 7% of human communication is verbal (38% is ‘tone’, and 55% is in facial expressions and body language), is well known, but misunderstood. Context is key.
Where communication is “Relational/Emotional”, the dominant form of communication is nonverbal (micro-expressions that communicate trust, attraction or hostility).
Where it is “Informational/Technical”, non-verbal cues provide very little additional data to the recipient.
Where it is “Digital/Text-based”, likewise (and emojis fill the nuance-gaps and provide extra cues for ‘tone’).
It is “relational/emotional” communication that relies most on the human touch, and we see that show up in the research.
The ‘safest’ career paths, according to the same research, include:
Healthcare and Therapy (e.g., Psychiatrists, Nurses, Surgeons).
Education and Human Development (e.g., Teachers, Counselors, Coaches).
Executive Leadership and Management (e.g., CEOs, Founders, Change Directors).
Legal, Governance, and Ethics (e.g., Judges, Mediators, AI Ethicists).
Creative and Strategic Roles (e.g., Creative Directors, Brand Strategists).
High-Touch Personal Services (e.g., Hair Stylists, Elder Care Providers, Personal Trainers).
High-stakes sales and negotiation (e.g., M&A, car sales, realtors).
These careers all involve “relational/emotional” communication.
We typically acknowledge and respect hierarchy, and want to be led by a charismatic leader with a compelling vision.
We want to be sold to by, or negotiate with, someone with whom we have built trust.
We want to be coached, taught, or cared for by someone we think has real care and empathy for us.

Preparing our kids for this new world
Does this mean that it’s pointless gaining skills and knowledge that AI can replace? Not at all.
An AI agent can build a functional website from a simple prompt, but only an experienced engineer with a functional and structural vision, and who leverages agents to write the code, can create a great one.
An AI translator can handle simple transactional communication, but only a culturally and linguistically fluent human can deliver the subtle Guanxi, Mianzi, Ganbei and Mingpian when negotiating with a Chinese business partner. These principles signal respect for hierarchy and protocol, and are critical to doing business in the Chinese context.
Domain expertise will always be valuable, but only as part of a human/machine partnership (3). Ten years ago, an accountant who refused to engage with Excel would have been worthless, no matter how knowledgeable they were. Today, that same accountant is at risk if they cannot or will not engage with AI.
The following skillsets are likely to be the catalysts for opportunity:
Leadership.
Influence skills (e.g. sales and negotiation).
Creative and strategic vision.
Deep-thinking and communication of ideas (e.g. philosophy).
Care.
Use of AI tools to supercharge productivity (3).
The real winners will be:
Those that serve industries that technology struggles to cannibalise (high “relational/emotional” industries such as psychotherapy and negotiation).
Leaders.
Those that know how to leverage the AI tools in combination with their expertise to deliver high productivity.
Founders.
The age of the founder?
“The founder” has always been characterised as an individual with a vision, and the risk-appetite to go after it. Only now, it’s less risky than ever to ‘go after it’.
Historically, to create a startup, a founder needed significant capital upfront to buy the runway to test their idea.
Now, instead of employing 6 developers for 3 months to build a V1 product, they can get a minimum viable product up and running with an agentic coding platform. Instead of having to engage a law firm to create their contracts, they can get a good-enough draft from an LLM. AI doesn’t need an HR department. It’s never sick. It works for pennies a day.
With minimal investment, an idea can be tested very quickly, cheaply, and with very little risk.
Current concerns
So far we have focussed on the impact of AI on the future career prospects for our children, but there are clear and present risks from AI being in their lives right now.
Synthetic relationships and loneliness, radicalisation and polarisation, atrophy of their brains—the list of concerns goes on.
Whilst relational/emotional and leadership skills will be critical, our kids are living in less-social worlds and finding fewer opportunities to develop these strengths (4).
But this post is already long and depressing enough, so this can wait for a follow-up!
Is your school preparing your child for a career that won’t exist?
Over to you.
Do you feel confident that your child is being trained to develop AI fluency, leadership skills, entrepreneurialism, influence and communication skills, or hands-on practical skills?
Or is your school still preparing them for a career as an entry-level lawyer or data analyst, or creating and summarising reports for the Big Four?
How can we fill the gaps? Should we fill the gaps, or should we drive change in our schools?
Would they be better off with £560K in the bank when they hit 21, rather than a private education?
Join the conversation and let us know your thoughts in the comments below.
For what it’s worth…
I think about this a lot.
I’ve seen the world change around me. I started an online retailer with some chums in 2009, and it was a huge risk. We secured enough seed capital to pay a small team of developers, and had to give away equity in the company very early to get other expertise on board.
Today, I know that we could have built a better initial platform, much faster, with many fewer people, and for next to nothing.
This is great news for the founder, but terrible news for the kids we hired straight out of university (who, incidentally, stayed with us for a decade until we sold that company and who did very well out of that transaction when their shares were sold).
The techno-optimists promise us a future in which drudgery is eliminated and we live in a world of universal abundance, leaving us free to pursue our higher-callings. They also told us social media would connect us and lead us to a better world, so forgive me for being cynical.
At the same time, I feel excited and optimistic. I believe that those that can leverage these tools and have a compelling creative vision have endless opportunities ahead of them. I also believe that humans will never be totally redundant, and that there will be lots of professions that require that human je ne sais quoi.
I think it’s clear that these opportunities won’t be available to all though, and that these professions won’t offer enough employment to go around. This will create massive philosophical and political questions (a further opportunity for those that are so inclined).
I still think it’s important for my kids to engage with their traditional lessons. I believe knowledge is good for its own sake whether or not AI can reproduce that knowledge instantly. Learning trains your brain to learn, and the more domains of knowledge you have the more connections you will form between those domains—and those connections are the basis of creativity (5).
I am going out of my way (i.e. during car trips) to expose them to more conceptual ideas. It can be a hard sell, but audio books like Extreme Ownership by Jocko Willink will teach them lessons about leadership they may not get in the classroom.
I’m also trying to support whatever entrepreneurial inclinations they demonstrate. Personalised mouse mats might be a saturated market that relies heavily on supply chain scale to deliver a margin, but I support their efforts anyway.
I encourage them to experiment with AI tools to build familiarity and to understand the underlying technologies and their limitations.
In summary, I’m trying to help them develop their entrepreneurial and creative curiosity, and fluency with AI tools, because I think this is under delivered by the school system.