The AI Future Nobody Wants

🏝 TheTechOasis 🏝

In AI, learning is winning.

While Thursday’s newsletter discusses the present, the Leaders segment on Sundays will inform you of future trends in AI and provide actionable insights to set you apart from the rest.

💎 Important News 💎


As things evolve, I wanted to provide clear details on my content schedule. I’m elevating the content I’ll deliver as part of the subscription.

This content will be divided into four categories:

  • Relevant news on the AI industry

  • Deep dives into trendy or relevant products and companies in the industry, starting with NVIDIA, the crown jewel.

  • Recaps on the state of markets (private and public) and insights derived from them

  • Deep dives on key technological trends you must pay attention to, like today’s issue on the privately-controlled human-knowledge interface.

Importantly, all content will also be published in the TheWhiteBox Community feed so you can access it anytime. Reach out if you have any additional questions or insights you want to discuss.

If you haven’t joined yet, click below for a full-month free trial on the monthly subscription.

Today, I will convince you to become a zealous defender of open-source AI while scaring you quite a bit in the process.

Inevitably, through LLMs, AI is poised to become the interface between humans and knowledge, taking the throne from open search and social media. In other words, everyone will soon obtain their knowledge almost exclusively from AI.

  • Kids will be tutored by AI Agents

  • A Copilot will summarize your work emails and draft your responses

  • You will consult an AI companion that knows everything about you on how to manage your latest fight with your significant other

And so on. At first glance, there’s nothing wrong with that; it will make our lives much more efficient.

The problem? AI is not open, meaning there’s a real risk that a handful of corporations will control that interface.

And that, my dear reader, will turn society into one single-minded being, devoid of any capacity, or desire, for critical and free thinking.

Here’s why, and why we should fight against that future.

A Ubiquitous Censoring Machine

A few days ago, ChatGPT experienced one of the major outages of the year, going down for multiple hours.

Growing dependence

Naturally, all major sites echoed this event, including one that referred to it as ‘millions forced to use the brain as ChatGPT takes morning off’, and the headline got me thinking.

As it happens, over the previous few hours I had been going back and forth with my ChatGPT account, needing the model every ten minutes, not for writing (it’s terrible at that), but to actually help me think.

And then, I realized: this is the world we are heading toward, a world where we are totally dependent on AI to ‘use our brains.’

But aren’t AI products failing miserably as we speak?

Last week, when we discussed whether AI was in a bubble, I argued that demand for GenAI products was, in fact, very low. If you’re using LLMs daily, you can consider yourself a very early adopter.

Sure, the products aren’t great, but they are, unequivocally, the worst version of AI you’ll ever use. I also argued that, despite their issues, people had unpleasant experiences with GenAI products mostly because they used them incorrectly.

They were setting themselves up for failure from the get-go. Still, as I’ve covered previously, these tools are already pretty decent when used for the use cases they were trained for.

But here’s the thing: the new generation of AI, long-inference models, isn’t poised to be a ‘bigger GPT-4’; these models are considered humanity’s first real conquest of AI-supercharged reasoning.

And if they deliver, they will become as essential as your smartphone.

Machines that can reason… and censor

But wait, what are long-inference models? Long considered the industry’s worst-kept secret, these are new types of models that, simply put, are given time to think.

Upon receiving a request, instead of abruptly responding and hoping for the best as today’s frontier models do, the ‘GPT-5s’ of the future will respond only after their answer clears a defined uncertainty threshold.

But what do I mean by that?

When working on a difficult problem, we humans do four things in our reasoning process: explore, commit, compute, and verify. In other words, if you are trying to solve, say, a math problem,

  • you first explore the space of possible solutions,

  • commit to exploring one in particular,

  • compute the solution,

  • and verify if your solution meets a certain ‘plausibility’ threshold you are comfortable with.

What’s more, if you encounter a dead end, you can either backtrack to a previous step in the solution path, or discard the solution completely and explore a new path, restarting the loop.
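To make that loop concrete, here is a minimal, purely illustrative Python sketch of the explore-commit-compute-verify cycle with backtracking. Nothing here mirrors any lab’s actual implementation; `propose_solutions`, `solve`, and `score` are hypothetical stand-ins for what a long-inference model would do internally.

```python
import random

# Hypothetical stand-ins: a real long-inference model would generate and
# score candidate solution paths itself; here we simply fake them.
def propose_solutions(problem: str, n: int = 5) -> list[str]:
    return [f"candidate path {i} for: {problem}" for i in range(n)]

def solve(path: str) -> str:
    return f"answer derived from ({path})"

def score(answer: str) -> float:
    # Plausibility / confidence estimate in [0, 1].
    return random.random()

def reason(problem: str, threshold: float = 0.9, max_attempts: int = 20) -> str | None:
    """Explore -> commit -> compute -> verify, with backtracking."""
    attempts = 0
    while attempts < max_attempts:
        # 1. Explore the space of possible solution paths.
        candidates = propose_solutions(problem)
        for path in candidates:
            attempts += 1
            # 2. Commit to this path and 3. compute an answer along it.
            answer = solve(path)
            # 4. Verify: stop only once the confidence threshold is cleared.
            if score(answer) >= threshold:
                return answer
        # Dead end: discard these paths and restart the loop (backtrack).
    return None  # Compute budget exhausted without a confident answer.

print(reason("a hard math problem"))
```

The key knob is the compute budget: the model keeps spending inference time until its answer clears the confidence bar, or until it runs out of attempts.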

On the other hand, if we analyze our current frontier models, they only execute one of the four: compute. That’s akin to tackling a math problem by simply executing the first solution that comes to mind and hoping you chose the correct one.

Good luck, right?

To make matters worse, our current best models allocate the exact same compute to every single predicted token, no matter how hard the user’s request is. In simple terms, for an LLM, computing “2+2” or deriving Einstein’s Theory of Relativity merits the exact same amount of ‘thought’.
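For contrast, here is a bare-bones sketch of how today’s models generate text. The `toy_model` function is a made-up stand-in for a real transformer, but the structure is the point: every new token costs exactly one forward pass of the same fixed size, regardless of how hard the question is.

```python
import random

def toy_model(tokens: list[str]) -> dict[str, float]:
    # Made-up stand-in for a transformer forward pass. In a real LLM, this
    # call costs a fixed amount of compute no matter what the prompt says.
    vocab = ["4", "the", "theory", "of", "relativity", "<eos>"]
    return {word: random.random() for word in vocab}

def generate(prompt: str, max_new_tokens: int = 8) -> list[str]:
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        scores = toy_model(tokens)            # one forward pass per new token
        next_token = max(scores, key=scores.get)
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return tokens

# Both prompts get exactly the same per-token compute budget.
print(generate("what is 2+2"))
print(generate("derive the theory of general relativity"))
```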

Knowing this, I bet you are no longer surprised by how limited these models are when facing complex problems.

Naturally, researchers knew this and asked: can’t we allow LLMs to execute that loop? And when they did, they realized this was the birth of real AI reasoning.

In point of fact, we have plenty of proof this is the real deal:

And these are just a handful of examples. Simply put, these models are poised to be much, much smarter and, crucially, to reduce hallucinations.

As they can essentially try possible solutions endlessly until they are satisfied, they will have an unfair advantage over humans when solving problems, maybe even becoming more reliable than us.

Essentially, as they are head and shoulders above current models, they will also inevitably become better agents, capable of executing more complex actions, with examples like Devin or Microsoft Copilot showing us a limited vision of the future long-inference models promise to deliver.

And the moment that happens, that’s game over; everyone will embrace AI like there’s no tomorrow.

Long-inference models are the reason your nearest big tech corporation is pouring its hard-earned cash into GPUs.

Make no mistake: they aren’t betting on current LLMs; they are betting on what’s coming next.

But why am I telling you this? Simple: once viable, these models are precisely the interface between humans and knowledge I mentioned earlier.

Read? AI. Write? AI. Work? AI!

Soon, AI will be the answer to everything.

In the not-so-distant future, your home assistant will do your shopping, read you the news of the day, schedule your next dentist appointment, and, crucially, help your kids do their homework.

In the not-so-distant future, AI will determine whether your home accident gets covered by your insurance policy (which was negotiated by your personal AI with the insurer’s AI underwriting bot). AI will even determine which potential mates you will be paired with on Tinder.

Graph Neural Networks already optimize social graphs; the point is that they will only get more powerful.

In the not-so-distant future, Google’s AI Overviews will provide you with the answer to any of your questions, deciding what content you have the right to see or read; Perplexity Pages will draft your next blog entry; ChatGPT will help your uncle research biased data to convince you to vote for {insert left/right extremist party}.

AI, AI, and AI.

Your opinions and your stance on society will all be entirely AI-driven. Privately-owned AI systems will be your source of truth, and boy, will you be mistaken if you think you have an opinion of your own in that world.

With AI’s control in the hands of the few, the temptation to silence contrarian views that put shareholders’ money at risk will be irresistible.

But how will they do this?

Silencing Others’ Thoughts

Last week, we covered Anthropic’s incredible breakthrough in mechanistic interpretability. We are now beginning to understand not only how these models seem to think, but also how to control them.

LLMs are no longer an unpredictable word machine; we now know we can pretty effectively censor what they can or can’t say. As we identify specific features (that is, topics or concepts), we can choose to block them.
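To give a rough intuition for how that kind of control could work, here is a toy sketch in the spirit of feature steering: if interpretability tools reveal the direction in activation space that represents a topic, that direction can be dampened or removed before the model keeps generating. The vectors and the `suppress_feature` helper below are invented for illustration; real systems would operate on the residual stream of a trained transformer, not on toy NumPy arrays.

```python
import numpy as np

def suppress_feature(activation: np.ndarray, feature_dir: np.ndarray,
                     strength: float = 1.0) -> np.ndarray:
    """Remove (or dampen) the component of an activation that points along
    a known feature direction, e.g. a topic someone wants silenced."""
    unit = feature_dir / np.linalg.norm(feature_dir)
    projection = np.dot(activation, unit) * unit
    return activation - strength * projection

# Toy example: made-up vectors standing in for a model's hidden state and a
# feature direction found with interpretability tools.
hidden_state = np.array([0.8, -0.2, 1.5, 0.3])
censored_topic = np.array([1.0, 0.0, 1.0, 0.0])

print(suppress_feature(hidden_state, censored_topic))  # topic component removed
```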

Current alignment methods can already censor content (fun fact: they do). However, they are absurdly easy to jailbreak, as proven by the research we discussed last Thursday.

Now, think for a moment what such a tremendously powerful model would become in the hands of a few select individuals on the West Coast if we let them decide what can and can’t be said.

Worst of all, in many cases, their intentions are as clear as a summer day.

Think like me or perish.

As if we hadn’t learned anything from past experience, society is again divided. We are as polarized as ever, and tolerance for the other’s opinion is nonexistent.

Think like me; otherwise, you’re a fascist or a communist. I, the holder of truth, the beacon of light, despise you for daring to think differently from me.

Everyone is a staunch defender of freedom of thought… as long as you think like they do.

To be clear, I’m not trying to sell you the idea that LLMs will create censorship, because censorship is already alive and well.

The worst trait of social media isn’t really the censorship, but how biased the view of the world becomes through its lens. Social media products are engineered to glue you to the screen and monetize your eyeballs through ads.

Thus, they will either ferociously feed your confirmation bias or enrage you with opinions extremely contrary to your own. Either way, you will feel angrier and more and more polarized… and glued to the screen, of course.

Naturally, AI is already no exception, proving in the process that no cohort of society is safe.

Gemini 1.5 became anti-white to the point of utter foolishness, with the tool losing any capacity to generate anything resembling a white person.

Again, I’m not trying to sell you on any political ideology; I couldn’t care less. I’m not here to educate you about anything outside of tech because I don’t have the credentials for it.

But I do care about censorship, be it religiously or racially motivated, or right- or left-leaning.

The point is that mainstream and social media have shaped society’s understanding for decades, but AI will only worsen matters if it becomes consolidated.

But how consolidated is AI becoming?

SB-1047 and Capital

Most of society is completely blind to what’s at stake. I deeply feel our freedom of thought is in peril if the current trend in AI continues.

AI, arguably the most powerful technology ever created, is on track to be completely controlled by fewer than 100 people, for two reasons: regulation and money.

The end of open-source?

SB-1047 is probably the first major bill to take a stance on how to regulate AI in the US.

Without going into much detail, the bill, sponsored by radical organizations that believe AI will lead to human extinction, is essentially a ban on mathematics (a reductionist view of AI is that LLMs are simply matrix multiplications), as the influential Naval has echoed.

Even the AI Alliance, a conglomerate of corporations like Meta and IBM, influential universities, and agencies supporting open AI, published an article condemning the bill, framing it as an “anti-open-source precedent.”

Long story short, it places enough constraints and liability on model trainers that no one, especially under-capitalized open-source developers, will dare touch an LLM training pipeline with a ten-foot pole, de facto killing open-source AI.

Specifically, instead of burdening the application creators, it places potentially enforceable liability on the model creators; it’s as if someone forged an iron sword, used it to kill someone, and the blame were put on the iron provider.

For a more detailed explanation, read this article.

While I believe AI should be regulated in some way, and I share concerns about AI safety being deprioritized, AI has become a for-profit race between a few private incumbents, which is precisely why safety is left behind.

Safety doesn’t make money. In layman’s terms, making AI more private will inevitably make it less safe, as money becomes the driving factor.

Although proving that big tech lobbied for this bill would be very hard, it takes no genius to realize that they are its main beneficiaries, as they have all the capital and lobbying power in the world to handle the potential exposure for misuse of their models.

And to make matters worse, capital is becoming the main moat for AI supremacy.

Subscribe to Leaders to read the rest.
