The 4 Jobs of the Future

MARKETS
The 4 Jobs of The Future

In a significant turn of events, one of the key incumbents in the industry, Sam Altman, openly discussed the impact of AI on jobs.

Often considered a taboo in Silicon Valley to prevent politicians from freaking out, this turnaround implies that leaders are so convinced of AI's transformational nature that they don’t bother pretending anymore because this disruption is strong and coming soon.

However, they do claim that humans will remain, just not in their current form, aka new jobs will be born. But no one is writing down what these jobs will be.

Today, we are doing just that, helping those who want to immerse themselves in the industry or help leaders understand what roles their organization might need shortly.

Covering four jobs, we’ll first discuss ‘AI’s first phase of disruption’ and its societal and macroeconomic impact. Then, for each job, we’ll explain their raison d’être and, at the end of the article, I’ll list a set of primarily-free resources to start your journey today.

When Humans Became Numbers on a Spreadsheet

AI is coming for our jobs. It will probably not wholly substitute humans—except in concrete jobs—but make them so efficient that many current job incumbents will no longer be needed.

And the first people in line to fall are white-collar workers.

Productivity Enhancement

During the first wave of AI job disruption, which we focus on today, AI is synonymous with productivity.

Some enthusiasts like Sam also talk about how AI will go beyond humans, discovering ‘all of physics,’ but there’s little to no evidence AI is even remotely close to that vision. Therefore, we are focusing on the things that AI is clearly going to be capable of doing—if not already.

But most people frame ‘the AI problem’ wrongly. When asked about AI taking our jobs, we all automatically frame this reality as one where ALL humans in a job are replaced.

However, that will largely not be the case, especially when an enterprise’s hierarchy depends on accountability, aka someone taking the blame if things go south.

Who are we going to blame, a machine? Humans will remain. But when people hear that, they go to the opposite extreme, ‘Oh, AI isn’t replacing all of us. Hence, we are safe.’

No you’re not.

You will be safe only if you are better than your peers. In other words, while AI isn’t taking all of us to the unemployed street, it will certainly take many of us.

Therefore, the best way to be better than the rest in the age of AI is to embrace AI. An AI won’t substitute you, but you will be substituted by a human who knows how to use AI.

AI is not the enemy; it’s the weapon.

Long story short, the goal here isn’t to be better than AI, but to be better than your peers when AI reduces the staff count requirements by 70%.

Another common misconception is that AI will have a ‘microeconomic’ driver. That is, enterprises will want to use AI to reduce costs and improve profits. And while they have a point, that’s simply not seeing the bigger picture.

The Great Deflation

Society just loves to use the corporate greed card every time we can. However, it won’t be a matter of greed but survival. It’s not about profits… it’s about revenues.

AI will cause massive deflationary pressures in all industries. In layman’s terms, AI will make everything cheaper. It will make the value creation pipeline much more affordable, dropping barriers of entry—both at the enterprise and worker levels.

Competing will be more accessible than ever. This will lead to an extreme commoditization of most industries, a ‘race to the bottom’ in prices, as your competitors find new, more efficient ways to serve their products or services.

At the human level, AI will create an economy in which bold and action-biased people triumph over those looking no further than the next paycheck. AI will lead to a declarative economy; humans declare what they want, and AI executes. Nevertheless:

And this is just the tip of the iceberg, meaning you are still early if you’re reading this but already behind others. And in this era, humans are no longer compared to other humans but to AI-enhanced humans.

Soon, most humans in any given role will become numbers on a spreadsheet, completely disposable assets, as those who remain leverage AI—most probably the early adopters—become ‘100x workers’.

So, what can you do to prevent that?

As mentioned, AI will streamline preexisting jobs and create new ones. Crucially, many of these will be some of the highest-paying jobs in the world due to their cross-industry impact, their ‘AI brand,’ and, above all, talent scarcity.

Consequently, today, we are covering four jobs, from the more obvious ones to the high-signal, endurable ones that could waterproof you for decades.

The AI Whisperer / Red Teamers

This is, without a doubt, the most predictable of them all. In simple terms, the job of an AI Whisperer is to get the most performance out of AI models by adapting the prompt (what you send the model).

Already coveted in some industries, they know what triggers models and what breaks them… you get the point.

These people aren’t necessarily coders, as the job’s ‘programming language’ is mostly English, the language people like Andrej Karpathy argue will be the future programming language.

AI Whisperers take considerable time crafting the perfect prompt for each use case and must also champion hacks to be used inside the organization. They all probably know by heart the most important hacks I addressed in this newsletter here, here, and here.

On the flip side, it’s a job that could eventually disappear, as the better AI models become on a foundation basis, the less fundamental their role is. For instance, the prompting rules change considerably with the transition from System 1 LLMs (GPT-4o) to System 2 LRM (Large Reasoning Models) like o1-preview or o1-mini, and we seem to be moving toward ‘less-crafty’ prompts.

Source: OpenAI

Still, their job is fundamental and will continue to be for the foreseeable future. Importantly, they are also the most obvious role a company would want to hire, so it’s an easy on-ramp for most people, especially considering prompt engineering is more like a “patience game” than being extremely smart.

But while most AI Whisperers’ job is to ‘extract good from the model,’ Red Teamers put themselves in the shoes of a bad actor and identify ways in which malicious third parties can break the model to make it perform bad things.

Also known as Ethical Hackers, they are already extensively used in AI labs already, being a critical bottleneck for deploying new models.

As more companies become open to deploying in-house Generative AI solutions to their customers, these organizations will need extensive in-house red teaming to ensure their products are safe. In case you’re wondering, Red Teamers are still prompt engineers who play the role of bad actors but have a deeper, more technical understanding of these models' safety weaknesses.

Red Teaming, or hacking models, has become an art that people like Pliny the Liberator take to the extreme, being able to hack any LLM in minutes.

Importantly, the market for this role is expected to grow tremendously year over year for the next decade. According to Grandview Research, the prompt engineering market will grow at 32% CAGR (Compound Annual Growth Rate) through 2030.

But if talking with AIs all day isn’t your dream job, maybe becoming an essential piece of their training appeals more to you. For that, we move on to the Expert Data Generators.

Experts, A Dying Species

If AI is the car’s chassis, data is the engine. Without proper data, an AI is like a Ferrari with a Renault engine, the model might look good from the outside, but its performance will be crap.

Crucially, generalist models are considered better than specialists. In other words, our only obsession as an industry is to train foundation models (one model, hundreds of tasks) that ingest as much data as possible and, for that, become larger and larger, as we saw last Sunday.

While we must question the cost-efficiency of the entire process, the idea that generalists outperform specialists is supported by evidence.

A paper by Cambridge and Flatiron Institute researchers proved that generalist models that are then fine-tuned to a specific task generally outperform models trained specifically in that task (specialists).

However, as models like OpenAI’s o1 have proven, improving data quality (in that case, increasing the amount of synthetic reasoning data) is a fundamental need to continue the step-function improvements.

Sadly, doing this without care can lead to suboptimal results and eventually model collapse, as you can see in the graph below based on a New York Times piece back in August.

Source: NYT

What is model collapse?

Generative AI models are compressed representations of their training data. In layman’s terms, they ‘embody’ their data and can replicate similar sequences to the ones they saw during training. But they can also generate new data they had not seen before. But how?

By statistical interpolation, or combining known data points to form new ones. For instance, combining Shakespeare and iPhones to write a Shakespearean-style poem about iPhones.

However, if we retrain models from data generated by themselves, their data distribution (the data they are capable of generating, the data they know) narrows down, eventually reaching a point the model has forgotten most of its knowledge and collapses.

Sadly, it’s the fish that bites its tail. The better these models are, the more AI-generated content spreads through the Internet, decreasing the overall quality of Internet-scale datasets.

Therefore, while training on self-generated data or data from other models (both forms of synthetic data) is recommended when done with care, as Deepmind or OpenAI do, these labs still want as much real human data as possible.

These people, whom I call EDGs (Expert Data Generators) and AI Critics (when used to build reasoning datasets where every step is crucial), will be a rare sight as very few people are profound experts in a given subject matter and, due to the increasing demand and low supply, will be handsomely rewarded.

Humans annotate the steps that went wrong. Source: OpenAI

Most new AI proprietary datasets that OpenAI or Anthropic use to train the next generations are crafted by human experts who exercise their knowledge to write AI examples that the model can ingest.

In summary, you should strive to become an expert in an exciting field from which AI can learn (Mesopotamian pottery techniques from Alexander the Great’s era probably won’t do the trick) and leverage that knowledge by generating sample data sets these companies can ingest.

In the meantime, companies like Scale AI are already doing this, mainly through the use of manual human labor. However, other companies like Glaive AI strive to fully automate synthetic data generation, although the latter is still largely unproven.

Thus, EDGs present a massive opportunity for individuals as well. At this point, these jobs might feel like obvious, but ‘obvious’ is certainly not an adjective you would use to refer to our next job.

Simply put, it’s one of those that Hollywood will surely depict in their next drama series, as it feels more like a dark art than an actual job and which stems from what’s right now the hottest research area in academia and Silicon Valley, Mechanistic Interpretability, only to permeate soon every organization wishing to leverage neural networks:

AI Surgeons.

Subscribe to Leaders to read the rest.

Become a paying subscriber of Leaders to get access to this post and other subscriber-only content.

Already a paying subscriber? Sign In

A subscription gets you:
High-signal deep-dives into the most advanced AI in the world in a easy-to-understand language
Additional insights to other cutting-edge research you should be paying attention to
Curiosity-inducing facts and reflections to make you the most interesting person in the room