
Inflection's new Chatbot Pi will Make you Feel... Different

🏝 TheTechOasis 🏝

What do Microsoft, Reid Hoffman (co-founder of LinkedIn), Bill Gates, Eric Schmidt, and Nvidia have in common?

Interestingly enough, they all have invested $1.3 billion combined into Pi.

Of course, I’m not referring to the famous number.

I’m referring to what I consider, after trying it myself (you will too for free soon), the most “human” AI chatbot in the world, even more so than talk-of-the-town ChatGPT.

But besides being the most advanced conversational AI in the world, it also hides very interesting secrets that put Pi in a unique position to become a real giant in the AI space.

Pi has all it takes

Pi is the first product of Inflection, founded by Mustafa Suleyman.

He’s no ordinary founder: he also co-founded DeepMind, today one of the most advanced AI research labs in the world.

And with Pi, what Mustafa is trying to build is the best conversational AI out there.

Pi is extremely kind, has amazingly developed conversational skills, and, most importantly, it’s designed to be endlessly helpful, patient, and willing to listen to you.

This last point is very important and can be easily understood by looking at any of Pi’s responses:

With every question you ask, Pi suggests different directions the conversation can take, and it’s always (and I mean always) focused on learning your opinion about a subject and what you want to talk about.

This clearly reflects the effort the Inflection team put into Reinforcement Learning from Human Feedback (RLHF) on this chatbot.

When giving the model feedback, they shaped its conversational behavior to clearly focus on lengthening the conversation and encouraging the user to share their opinion in return.

In other words, Pi is trained to be helpful and always relevant to the user’s intent.
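Inflection hasn’t published its training details, so purely as an illustration, here’s a minimal sketch of the preference-based objective commonly used to train an RLHF reward model: human labelers rank two candidate responses, and the model is penalized unless it scores the preferred one higher. The reward values below are made-up numbers.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry preference loss used in reward-model training:
    the loss is small when the model scores the human-preferred
    response above the rejected one, and large otherwise."""
    # -log(sigmoid(r_chosen - r_rejected))
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A response that invites the user's opinion back is ranked higher by
# labelers, so training pushes the reward model to score it higher.
low_loss = preference_loss(reward_chosen=2.0, reward_rejected=-1.0)
high_loss = preference_loss(reward_chosen=-1.0, reward_rejected=2.0)
```

Once the reward model reliably prefers engaging, opinion-seeking replies, the chatbot itself is fine-tuned to maximize that reward, which would produce exactly the behavior described above.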

But did you notice something special in the previous image?

The up-to-date AI chatbot

If you’re familiar with LLMs and Generative AI in general, you’ll know they usually have one important limitation: they are not up-to-date with current events.

In general, an AI chatbot can only provide relevant and truthful information up to its training cutoff.

In other words, the model has only seen data up to the point in time when it was trained, and it lacks any knowledge of events after that.

For instance, ChatGPT is famously limited to knowledge prior to September 2021.

In this case, OpenAI offers ChatGPT plugins that let you query APIs for that information, but the process isn’t very smooth, to be honest.

However, for reasons I can only guess at, since Pi is a proprietary model, this chatbot is fully up-to-date with recent events… on a daily basis.

Amazingly, it seems the model isn’t retrieving information from knowledge bases in real time; it actually knows that data.

When asked about this, the model says it uses “continual learning”, a method that allows AIs to continuously update themselves.

Although I can’t confirm it, I suspect they are simply enabling efficient queries to vector databases where Inflection stores data gathered from real-time conversations like the one I had with Pi, as well as news and events as they unfold.
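To make my guess concrete: a hypothetical sketch of that retrieval step would embed the user’s question, find the most similar snippet in a store of freshly ingested news, and prepend it to the prompt. Everything here is illustrative; the `embed` function is a toy character-count stand-in for a real embedding model, and the stored snippets are invented.

```python
import math

def embed(text: str) -> list[float]:
    """Toy embedding: a 26-dimensional bag-of-letters vector.
    A real system would call an embedding model instead."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical store of freshly ingested news snippets.
knowledge_base = [
    "MidJourney launches version 5.2",
    "NASA is building its own ChatGPT",
]

def retrieve(query: str) -> str:
    """Return the stored snippet most similar to the query, which would
    be injected into the chatbot's prompt as up-to-date context."""
    q = embed(query)
    return max(knowledge_base, key=lambda doc: cosine(q, embed(doc)))
```

If something like this is running behind the scenes, the model never needs retraining: it just reads the freshest matching snippet at answer time, which would look to the user exactly like up-to-the-day knowledge.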

Probably the best example of what this continual learning method looks like can be found in NVIDIA’s recent GPT-4-in-Minecraft paper, which describes a model that continuously self-evaluates its actions to guide itself through the Minecraft world, becoming progressively better at the game.

In any case, Pi’s use of continual learning is a seamless experience for the user.

The Different AI Strategies

As a management consultant specializing in Technology, and AI in particular, I delve into AI systems on a daily basis.

And I’ve never seen an AI as developed as Pi when it comes to conversational features.

It’s beyond anything you’ve ever seen.

However, Pi seems to have a very limited context window of only 1000 tokens, or 750 words (according to Pi itself).

The context window is the model’s working memory: it lets the model draw on up to the previous 750 words of the conversation as context before answering.

The longer the context window, the longer and more complex the texts the LLM can handle, enabling more demanding tasks like summarizing a book.
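As a concrete illustration of why the window size matters, here’s a minimal sketch of how a chatbot might trim conversation history to fit a 1,000-token window. The word-to-token conversion uses the rough 750-words-per-1,000-tokens ratio mentioned above; real systems count tokens with an actual tokenizer.

```python
def fit_to_context(messages: list[str], max_tokens: int = 1000) -> list[str]:
    """Keep only the most recent messages whose combined size fits the
    context window; anything older is forgotten by the model."""
    def approx_tokens(text: str) -> int:
        # ~0.75 words per token, i.e. tokens ≈ words * 4/3 (rule of thumb)
        return max(1, round(len(text.split()) * 4 / 3))

    kept: list[str] = []
    budget = max_tokens
    for msg in reversed(messages):   # walk from newest to oldest
        cost = approx_tokens(msg)
        if cost > budget:
            break                    # older history no longer fits
        kept.append(msg)
        budget -= cost
    return list(reversed(kept))      # restore chronological order
```

With a 1,000-token budget, anything said more than roughly 750 words ago simply falls out of the list, which is exactly why Pi can feel forgetful in long sessions while 100k-token models can hold an entire book in view.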

And with context windows like those of ChatGPT or Claude reaching into the multiple thousands of tokens (up to 100k in Claude’s case), we can clearly see what direction each company is taking with Generative AI chatbots.

While some are focusing on becoming key to processing long documents and turning humans into productivity machines, models like Pi are clearly being driven toward day-to-day conversational interfaces that intend to be helpful and calming, like a lifelong companion.

I like to think of Pi as a diary on steroids, a chatbot that will accompany you on your daily reflections to make you feel better and less lonely.

In fact, you can actually journal with Pi, as you can see in the image below:

In summary, Pi seems to be an AI that’s not trying to replace us at our jobs, but one focused on making us feel better, and be better.

And I can get along with that.

Key AI concepts you’ve learned by reading this newsletter:

- Pi, the state-of-the-art in conversational AI

- Continual Learning and training cutoffs

- Context windows

👾Top AI news for the week👾

😍 MidJourney launches 5.2 version and it’s stunning

🚀 NASA is building its own ChatGPT so astronauts can talk to spaceships

❤️ Scientists are using heartbeats to predict hit songs and flops

🖖🏼 OpenAI is planning on building a “supersmart personal assistant”