The 10 Pillars Of A Successful AI Leader



As AI becomes omnipresent in our work and lives, whether you're an employer or an employee, you will have to adapt to some of the most dramatic changes of your career.

And whether these changes are positive or negative for your prospects will depend on a set of traits and approaches built on ten pillars spanning technical intuition, operational capacity, and personal/hiring traits.

By the end of this article, you will have a clear set of guidelines, which I call ‘pillars’, covering what I believe are the necessary traits of a successful leader in the age of machines.

Let’s dive in!

Technical Intuition

Hold your horses one second. I’m not saying you must be an AI engineer or know how to code a script to call APIs. By technical intuition, I mean having a clear notion of how AI works, what it can and can’t do, and being aware of the different alternatives at your disposal when tackling an AI use case.

There’s no way around this. Even if you’re the Senior Vice President of Human Resources, AI will still massively affect you. You will have to make educated decisions about it: AI agents and copilots becoming omnipresent across all parts of the organization is only a matter of time, so the idea of letting IT make these decisions for you is nonsensical. Bosses make the decisions regarding their teams, and your teams will be filled with semi-autonomous AI agents.

Therefore, in the age of machines, all Leaders must understand AI.

Consequently, I am going to provide you with some high-level guidelines that clearly define AI limits, how to regard products intuitively, the importance of open-source as a fundamental weapon in your arsenal, and proof that, luckily, the first step in the process is always the same.

A Simple Guide to AI’s Limits

As we have explained several times before, AI’s essence has remained unchanged for 70+ years since Alan Turing described his view of AI as ‘the imitation game.’ To this day, AI largely remains the capability of machines to imitate our intelligence.

This is extremely powerful already because AIs act intelligently. However, that doesn’t mean they embody that intelligence. Sadly, this entails that an AI model’s performance is largely bounded by its own experience, acquired through its training data, something we discussed in full detail last week, despite consistent efforts from Silicon Valley to ensure this limitation goes unnoticed.

Whether the training data contains the task or knowledge isn’t the only factor. You must also consider frequency: the more often a fact or experience appears in the data, the higher the chance the model behaves as expected.
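
This frequency effect can be illustrated with a deliberately tiny sketch (my own toy analogy, not from the article): a “model” that predicts the next word purely from counts of what it saw during training answers reliably for frequent patterns and has nothing to say about unseen ones.

```python
from collections import Counter

# Toy "model": predicts the next word by picking the most frequent
# continuation seen in training. Purely illustrative -- real LLMs are far
# more complex, but the dependence on data frequency is analogous.
training_text = (
    "the cat sat on the mat . the cat sat on the sofa . "
    "the cat sat on the mat . a dog barked once ."
).split()

bigrams = Counter(zip(training_text, training_text[1:]))

def predict_next(word):
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    if not candidates:
        return None  # unseen context: the model has nothing to go on
    return max(candidates, key=candidates.get)

print(predict_next("the"))    # 'cat' -- seen many times, reliable
print(predict_next("sat"))    # 'on'  -- frequent pattern, reliable
print(predict_next("zebra"))  # None  -- never seen in training
```

The more often a pattern appears in `training_text`, the more confidently and correctly the toy model completes it, which is the same intuition applied at vastly larger scale in real models.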

Nonetheless, this illusion of intelligence becomes even more complex with the introduction of foundation models like GPT-4 or Claude 3. At first glance, it feels like these AIs go beyond their training data and can ‘generalize’, performing equally well in unseen areas. But this, too, is provably false; they are still interpolating between known knowledge to infer plausible new outputs.

For example, if you ask ChatGPT to write a poem in Shakespeare’s voice about iPhones, there’s a good chance that particular sequence isn’t included in its training data. Still, the model knows a lot about Shakespeare and about iPhones, so it performs an interpolation, combining both known concepts to build a new one.

[Figure: OpenAI takes two videos (left and right) to create an interpolation (middle). Source: OpenAI]

While this could be considered a form of generalization, we must differentiate interpolation from extrapolation: the former combines two known concepts into a new one, while the latter uses general knowledge to accurately induce behaviors outside one’s domain of expertise. Or, in AI lingo, the former is in-distribution generalization and the latter is out-of-distribution generalization, which is AI’s holy grail in its quest to match human intelligence.
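A toy numerical sketch (my own illustration, with hypothetical data) makes the interpolation/extrapolation gap concrete: a model that only knows a function at a few sample points estimates well between them, but its estimates drift badly once you leave the known range.

```python
# Toy illustration: a "model" that only knows f at a few sample points.
# Inside the known range it interpolates well; outside it, extrapolation
# from the nearest known points fails badly.

def f(x):
    # The "true" function the model never fully sees.
    return x * x

samples = [(x, f(x)) for x in range(0, 6)]  # known data: x = 0..5

def estimate(x):
    # Take the two nearest known points and extend their line:
    # interpolation inside [0, 5], extrapolation outside it.
    pts = sorted(samples, key=lambda p: abs(p[0] - x))[:2]
    (x0, y0), (x1, y1) = sorted(pts)
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (x - x0)

print(estimate(2.5), f(2.5))  # 6.5 vs 6.25 -- small in-range error
print(estimate(10), f(10))    # 70.0 vs 100 -- extrapolation fails badly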

Leader Pillar nº1: Unfamiliarity = Failure. Keep AIs out of unfamiliar situations; if you don’t, they will fail, period.

Embrace Non-determinism

Another crucial point about AI’s future is that it’s imperfect. In general, AI models no longer try to predict discrete values; they model many possibilities, assigning a likelihood to each outcome being right… or wrong. They are also stochastic, meaning randomness is added by design. All in all, they are beautifully imperfect machines.

The implication is that the idea that we can make AIs perfectly robust should be long forgotten; every time you work with a stochastic AI like ChatGPT, you must assign a non-zero chance that the prediction is wrong.

However, this trade-off unlocks creativity and more nuanced responses from the model, so it’s necessary. Consequently, AI leaders will have to accept (and, crucially, educate others) that in the AI age, machines won’t be flawless; they will be faulty, just like humans are.

Therefore, it will be necessary to wrap statistically based, non-deterministic AIs like large language models (LLMs) in fault-tolerance frameworks, and simply avoid them when errors are not an option, as in the software controlling a plane. Since most AI today is inherently probabilistic, its boundaries are, thankfully, very clear.
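
What a fault-tolerance framework means in practice can be sketched in a few lines. This is a minimal, hedged example with entirely hypothetical names (`flaky_model` stands in for any real LLM API call): validate every answer, retry on failure, and escalate rather than pretend the model succeeded.

```python
import random

# Hedged sketch: wrap a stochastic model behind validation and retries,
# and fail safely (e.g., hand off to a human) when retries run out.

def flaky_model(prompt):
    # Stand-in for an LLM call: returns a wrong answer ~30% of the time.
    return "42" if random.random() > 0.3 else "not sure"

def is_valid(answer):
    # Task-specific check: here we expect a numeric answer.
    return answer.isdigit()

def call_with_tolerance(prompt, retries=5, fallback="ESCALATE_TO_HUMAN"):
    for _ in range(retries):
        answer = flaky_model(prompt)
        if is_valid(answer):
            return answer
    return fallback  # never silently accept an invalid output

random.seed(0)  # fixed seed so the sketch is reproducible
print(call_with_tolerance("What is 6 x 7?"))
```

The design choice is the point: the system assumes a non-zero failure probability per call and plans for it, rather than assuming the model is always right.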

Leader Pillar nº2: A 100% accurate AI does not exist. Educate your boss/team that AIs will make mistakes and, guess what, that’s okay if you planned for it.

Embrace AI As a Tool for Discovery

As we discussed on Thursday, I predict we will see an explosion in the number of start-ups and products that help companies use AI to discover patterns in data instead of using data to perform activities.

While productivity-based AI takes all the spotlight today, from ChatGPT to Tesla’s robots, there’s huge untapped potential in using AI to gain more profound intuition about your data.

And these use cases are already emerging:

  • AIs that predict machinery failure by recognizing patterns across temperature, humidity, and other sensor readings, as we saw Archetype AI wants to do,

  • AIs that identify patients at risk of unexpected death, which a Canadian hospital is using to reduce these deaths by 26%,

  • AIs like AlphaFold (which earned its creators the Nobel Prize) or GNoME, which help us predict protein structures and identify new stable materials, respectively, accelerating industries like drug discovery,

  • Tools like Evo, which can process the entire human genome and identify dependencies that may lead to preventable illnesses or pathologies, or even perform genetic engineering,

  • Or tools like Iprova, meant to help you navigate vast amounts of data and identify key pain points across industries or verticals that might call for new inventions. An AI to speed up invention, so to speak.

And the list goes on. Given AI’s tendency to make mistakes (which hampers its performance and adoption in automation-based use cases) and its excellent pattern-recognition capabilities, using it as a tool to discover patterns in data just makes sense. In discovery, AI becomes a tool of iteration and refinement, a place where mistakes are tolerated and creativity is welcomed; all in all, the perfect home for stochastic AIs.
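
In its simplest statistical form, the discovery use case looks like this sketch (hypothetical sensor data, my own illustration): flag readings that deviate strongly from the norm, the same idea the machinery-failure example above applies at scale.

```python
import statistics

# Illustrative sketch: flag sensor readings that deviate strongly from
# the norm -- pattern discovery in its simplest statistical form.

temps = [70.1, 69.8, 70.3, 70.0, 69.9, 70.2, 83.5, 70.1, 70.0]

mean = statistics.mean(temps)
stdev = statistics.stdev(temps)

# Flag readings more than 2 standard deviations from the mean.
anomalies = [(i, t) for i, t in enumerate(temps)
             if abs(t - mean) / stdev > 2]

print(anomalies)  # the 83.5 reading stands out as the anomaly
```

A false positive here costs an engineer a few minutes of review; this is exactly the kind of mistake-tolerant setting where probabilistic methods shine.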

Leader Pillar nº3: Use AI to discover. Using AI to find patterns in your data is the shortest path to success, even more so than with productivity-based applications like ChatGPT.

Leverage Open-Source’s Power

The convenience of calling an external, simple-to-use web service that receives your text and responds appropriately, which is what OpenAI or Anthropic offer, is undeniable.

But you must acknowledge that this convenience has a price, and that price is performance. Leaning on benchmarks most people don’t even understand, we are tricked into believing these companies have built the perfect symbiosis between ease of use and performance.

And while that’s true for broad performance (OpenAI’s models are generally superior to others), it is absolutely false for task-specific performance. These models lack the proprietary data companies hold on their internal processes, data that is key to unlocking the true power of AI and must be fed to the model in some way. This leaves you with two options:

  1. Send your data to OpenAI to fine-tune (train) their model with it, forcing you to send your data across the open Internet and trust that OpenAI, which has already been hacked, will safeguard that data (or, worse, that it won’t use it to further train other models), or

  2. Deploy an open-source model in your private cloud and train it safely with your data.

What about RAG? Retrieval-Augmented Generation is a nice way to reduce the amount of information you need to send to a model, cutting processing costs. But RAG cannot compete with fine-tuning, which essentially makes the model know the material instead of having to search for it in a vector database, and it doesn’t solve the concern of proprietary data leaving your IT organization.
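
For readers unfamiliar with the mechanics, the essence of RAG fits in a few lines. This is a deliberately minimal sketch with hypothetical documents and plain word overlap instead of a vector database: retrieve the most relevant snippet, then prepend it to the prompt so the model only receives the context it needs.

```python
# Minimal RAG sketch (hypothetical documents; word overlap instead of a
# vector database): retrieve the most relevant snippet and prepend it to
# the prompt, so the model answers from retrieved context.

documents = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: orders over $50 ship free within the EU.",
    "Security: all internal data is encrypted at rest.",
]

def retrieve(question, docs):
    # Score each document by the number of words it shares with the question.
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

question = "how many days do customers have to return items"
context = retrieve(question, documents)
prompt = f"Context: {context}\n\nQuestion: {question}"
print(prompt)
```

Note the contrast with fine-tuning: here the knowledge stays outside the model and is searched for on every request, whereas fine-tuning bakes it into the model’s weights.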

In a nutshell, fine-tuning models on your data is a key performance unlocker, and the only safe way to do it is with open-source models. AI leaders should acknowledge this while being able to convince the executives they report to that the answer isn’t always OpenAI.

Leader Pillar nº4: If you don’t fine-tune AI models with your data, you are doing something wrong. And the only safe way is through open-source.

Key Traits of A Successful AI Leader

Now that we’ve built a good understanding of the high-level intuitions a leader must possess to thrive, we turn to our character, our decisions, and the genuinely human and hiring skills that will determine whether or not we are well-suited to manage humans (and AIs) in the age of machines.
