Where's the golden opportunity in AI?

šŸ TheTechOasis šŸ

In AI, learning is winning. While Thursday's newsletter covers the present, the Sunday Leaders segment looks at AI's future trends, with actionable insights to set you apart from the rest.

10-minute weekly reads.

💰 Where's AI's Golden Opportunity? 💰

Whether you are an investor willing to bet on AI, a founder building a start-up, or someone looking to transition into the industry to future-proof your career, the feeling of uncertainty is unavoidable.

But it doesn't have to be that way.

Today, we are delving into the AI industry across its entire creation pipeline, covering all the relevant companies and markets, to answer the great question:

Where in AI should you focus?

💎 Big Announcement 💎


This month, I am launching TheWhiteBox, a community for high-quality, highly curated AI content (no typical bullshit, hype, or ads) covering research, models, markets, future trends, and AI products, all in digestible, straight-to-the-point language.

But why would you want that?

  • Cut the Clutter: Value-oriented, focusing on high-impact news and the clear intuitions to extract from them (why you should care).

  • Connect with Peers: Engage in valuable discussions with like-minded AI enthusiasts from top global companies.

  • Exclusive Insights: As the community grows, gain access to special content like expert interviews and guest posts with unique perspectives.

With TheWhiteBox, we guarantee you won't need anything else.

No credit card information is required to join the waitlist. Premium members get immediate access but are nonetheless very welcome to fill in the waitlist form.

Going back to the previous question, we must break the AI industry down into its main layers, shown below, and evaluate opportunities and risks across the board. That is:

Which companies in each segment are making money, and what events could change the current picture?

To start, let's take a look at the missing layer in the previous picture: hardware.

The Double C

With Nvidia as an extremely dominant player, holding a 90%+ share of the GPU market (the de facto hardware for frontier AI today), companies in this segment just need to exist to see their stocks or private valuations go up (with one notable exception: Intel).

Besides this lagging example (which still beat analyst estimates in Q1 2024), companies in this sector enjoy super healthy margins and are extremely heavily capitalized (and subsidized, given the sector's geopolitical importance).

In fact, if Nvidia isn't making even more money, it's simply because it is constrained by its own supply capacity.

Long story short, it's basically the best market to be in right now, as demand is booming. This gives an incumbent like Nvidia enormous pricing power, to the point that its net profit margin is, brace yourself, 48% as of January 2024.

Unsurprisingly, Nvidia is up 88% since the beginning of the year, almost doubling its already trillion-dollar valuation.

Among the risks: rising global tensions between China and the US, with the former's looming threat over Taiwan, home of TSMC, Nvidia's most important supplier.

The US Government is obviously well aware and is doubling its subsidy efforts to 'bring TSMC into the US'. By the way, Intel is also heavily subsidized by it.

Besides global instability, other possible disruptors to the current state of hardware are:

  1. Quantization. With the potential tendency for models to become '1-bit LLMs', meaning the parameters of the network can only take the values '1', '0', or, in some cases, '-1', the multiplications in the matrix products that dominate LLM training and inference can be replaced by simple additions and subtractions. In that scenario, other types of hardware besides GPUs, like CPUs, can be considered.

In this regard, companies like ARM or Intel benefit the most if this trend materializes; the sketch after this list illustrates why.

  2. Custom hardware. The emergence of AI-specific chips (GPUs were originally meant for gaming), with examples like Groq's Language Processing Units and Cerebras's absurdly specced wafer-scale processors.

Targeting inference and training respectively, these companies stand to take a bite out of Nvidia's cake with very compelling, AI-only alternatives.

  3. Finally (tin hats on, please), companies like Extropic are aiming to kill the transistor era through energy-based models. But this remains unproven.
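
To make the quantization point concrete, here is a minimal numpy sketch (illustrative only, not how production kernels work) of why ternary weights matter: when every parameter is '1', '0', or '-1', a matrix product collapses into adding and subtracting activations, which commodity CPUs handle just fine.

```python
# Minimal sketch: a ternary '1-bit LLM' layer needs no multiplications.
import numpy as np

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))   # ternary weight matrix, values in {-1, 0, 1}
x = rng.standard_normal(8)             # input activations

# Standard matrix-vector product (what a GPU would do for float weights).
y_matmul = W @ x

# Multiplication-free equivalent: add where w == 1, subtract where w == -1.
y_addsub = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

assert np.allclose(y_matmul, y_addsub)  # identical results, zero multiplications
```

Same output, but the expensive multiply units GPUs are built around go unused, which is exactly why CPU-centric players would benefit.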

In summary: high margins and pricing power, one big dominant player, and potential disruptions that could challenge that dominance.

But what about compute?

Shut up and take my Compute!

Companies in this layer run the infrastructure for training and deploying the AI models.

In some cases they build the hardware too, like Google; in others, they procure it from companies like Nvidia; and in others still, they serve as both builders and providers:

  1. Hyperscalers: Amazon (AWS) or Microsoft (Azure). You may also include Oracle, IBM, or Alibaba. These companies are cloud providers, meaning their impact goes well beyond AI.

  2. LLM Providers: AI-only, GPU-rich companies that specialize in providing infrastructure for others to train or deploy Large Language Models. These include companies like Together.ai, Anyscale, and Predibase, among others.

  3. Do it Alls: Companies that build the hardware and 'serve' it, like Nvidia, Google (also a hyperscaler in its own right), Intel, or Groq.

This is a highly capital-intensive sector to compete in: the dominant players are valued in the trillions of dollars, and the rising stars have all raised money in the 8 figures, most of them well above 9 figures.

And what about the margins? Well, here's where the fun ends.

Most of these companies run their services at cost (at best), hoping that network effects, and money drying up in their rivals' pockets, will be their winning ticket.

In case you're wondering, the clearest proof of low margins is the investment rounds themselves. Companies here invest heavily in model-layer companies, but in reality, it's nothing more than a trade.

In other words: you give me equity (and the IP, of course), and I give you compute credits, with Amazon's $4 billion investment in Anthropic and Microsoft's $13 billion in OpenAI as the best examples.

Naturally, the main risk for companies in this segment is the insane stakes required to play the game.

Meta is the starkest example of how scary this is: the stock had its worst day in years despite great Q1 2024 results, with a year-over-year revenue increase of 27%, only because of Zuck's unwavering commitment to spending billions of dollars on compute.

And for what?

Well, all these companies are betting that compute will be the currency of the future (in Sam Altman's words). However, academia is painting a different picture:

  • Production-grade models are getting smaller, with examples like Microsoft's Phi-3 or Apple's OpenELM being small enough to fit on a smartphone, with only a small gap to frontier AI models.

  • Also, very recent breakthroughs like Kolmogorov-Arnold Networks (KANs) aim to substitute MLPs, the most compute-intensive part of frontier AI models, which could cut energy requirements by up to 100x.

  • Additionally, decentralized AI aims to democratize LLM training and serving, allowing individuals to tap into this industry through decentralized incentive mechanisms based on blockchains. Simply put, compute could be democratized.

  • And furthermore, as we discussed last week, new techniques such as Evolutionary Model Merging allow for the creation of new models without actual training, lessening demand (see the toy sketch below).
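
For intuition, here is a toy sketch of the merging idea under heavy simplification: the 'models' are plain weight vectors, the merge is a simple interpolation, and the `fitness` function is a hypothetical stand-in for a real benchmark score. The point is that better models can emerge from cheap merge-and-evaluate loops, with no gradient training at all.

```python
# Toy sketch of evolutionary model merging (all names illustrative).
import numpy as np

rng = np.random.default_rng(0)
model_a = rng.standard_normal(16)        # pretend: weights of specialist model A
model_b = rng.standard_normal(16)        # pretend: weights of specialist model B
target = 0.3 * model_a + 0.7 * model_b   # hypothetical 'ideal' merged model

def fitness(alpha: float) -> float:
    # Stand-in for running a benchmark on the merged model; higher is better.
    merged = alpha * model_a + (1 - alpha) * model_b
    return -np.linalg.norm(merged - target)

# Evolution loop: mutate the merge coefficient, keep the fittest candidate.
# No backprop, no training data pass -- only cheap merge-and-evaluate steps.
alpha = 0.5
for _ in range(200):
    candidate = np.clip(alpha + rng.normal(scale=0.05), 0.0, 1.0)
    if fitness(candidate) > fitness(alpha):
        alpha = candidate

print(f"evolved merge coefficient: {alpha:.2f}")  # converges near 0.3
```

Real systems (like the evolutionary merging we covered last week) search over far richer merge recipes, but the compute bill stays tiny compared to pretraining, which is the whole point.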

Therefore, if the current trend is to go small and efficient, why are incumbents still throwing billions into compute?

Then again, they could very well know something we don't: Microsoft is allegedly ready to invest $100 billion in project Stargate, and the rest are doubling down on this trend too.

In my humble opinion, their strategy only makes sense if you are forward-looking and assume one thing: long inference models.

As I have discussed many times, the industry seems to be transitioning toward models that require extensive amounts of energy and compute to run: long inference models increase accuracy by generating up to millions of candidate responses before delivering a final answer.
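
As a stylized sketch of that idea (the `generate` function below is a hypothetical stand-in for a temperature-sampled LLM call, not any real API), long inference boils down to something like best-of-N sampling with a majority vote:

```python
# Minimal sketch of 'long inference': spend more compute per query by sampling
# many candidate answers and keeping the most consistent one (self-consistency).
from collections import Counter
import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a sampled LLM call; here it just guesses noisily.
    return random.choice(["42", "42", "42", "41", "43"])

def best_of_n(prompt: str, n: int = 100) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    # Majority vote over sampled answers. Accuracy rises with n, and so does
    # the compute bill -- exactly why this trend favors whoever owns the GPUs.
    answer, _ = Counter(candidates).most_common(1)[0]
    return answer

print(best_of_n("What is 6 * 7?"))  # '42' with high probability
```

Every extra point of accuracy is bought with more samples, so if this paradigm wins, the billions being poured into compute suddenly look rational.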

Overall, this layer is incredibly reliant on future developments requiring humongous compute pipelines. Otherwise, they simply might be buying their ticket to doomsday.

In particular, if you buy into the vision that compute will be crucial to the future of AI, here's a golden nugget for you.

Despite many investors shorting the company, Google is better positioned than anyone for this alleged compute-driven world.

Moving on, we now go into the most recognizable segment, the Model layer.

It's Software, but with a Twist

Here you will find the companies everyone has learned by heart by now: mainly OpenAI, Anthropic, Google DeepMind, Mistral, the Chinese labs, and rising stars like Reka.

Margin-wise, they are not running at cost, but their margins are nowhere near what a software company would usually have (around 77% gross margin), as acknowledged by Mistral's CEO during a recent podcast.

Judging by Anthropic's leaked figures, companies in this segment sit at around 55% gross margin, which is not great.

And the revenues are, well, meh too, as echoed by The New York Times recently (again picking on Anthropic).

What's more, investors like Chamath Palihapitiya consider this layer to be in a clear race to the bottom, to the point that AI models might never be a moat, but a commodity.

In particular, as for possible disruptors, open source will play a huge role here. Unless regulatory capture becomes a thing, which can't be ruled out based on current trends at the federal and California levels, open-source models could seriously harm the prospects of proprietary, closed models.

Overall, this is another very tough layer to compete in: unattractive margins, fearsome and heavily capitalized competitors, and, probably most important of all, an unprecedented scarcity of the talent needed to train these models, which explains why such engineers get paid millions.

I know what you're thinking. The first three layers paint quite a discouraging landscape.

Luckily, this abruptly changes if we focus on the last segment, the application layer, where big names like Andrew Ng, venture fund a16z, or David Sacks, among others, are betting most of the value will accrue for people like you and me.

However, playing in this layer has a huge risk. In fact, whether you get insanely rich or steamrolled will depend on key decisions you make.

Subscribe to Leaders to read the rest.
