The AI Future Nobody Wants
In AI, learning is winning.
While Thursday's newsletter discusses the present, the Leaders segment on Sundays will inform you of future trends in AI and provide actionable insights to set you apart from the rest.
Important News
Before we dive in, I want to give you clear details on my content schedule. I'm elevating the content I'll deliver as part of the subscription.
This content will be divided into four categories:
Relevant news on the AI industry
Deep dives into trendy or relevant products and companies in the industry, starting with a deep dive into NVIDIA, the crown jewel.
Recaps on the state of markets (private and public) and insights derived from them
Deep dives on key technological trends you must pay attention to, like today's issue on the privately-controlled human-knowledge interface.
Importantly, all content will also be published in the TheWhiteBox Community feed so you can access it anytime; reach out if you have any additional questions or insights you want to discuss.
Today, I will convince you to become a zealous defender of open-source AI while scaring you quite a bit in the process.
Inevitably, through LLMs, AI is poised to become the interface between humans and knowledge, taking the throne from open search and social media. In other words, soon, everyone will obtain their knowledge almost exclusively from AI.
Kids will be tutored with AI Agents
A Copilot will summarize your job emails and draft your response
You will consult an AI companion that knows everything about you on how to manage your latest fight with your significant other
And so on. At first glance, there's nothing wrong with that; it will make our lives much more efficient.
The problem? AI is not open, meaning there's a real risk that a handful of corporations will control that interface.
And that, my dear reader, will turn society into a single-minded being, devoid of any capability, or desire, for critical and free thinking.
Hereâs why and why we should fight against that future.
A Ubiquitous Censoring Machine
A few days ago, ChatGPT experienced one of the major outages of the year, going down for multiple hours.
Growing dependence
Naturally, all major sites echoed this event, including one that referred to it as "millions forced to use the brain as ChatGPT takes morning off", and the headline got me thinking.
Coincidentally, over the previous few hours, I had been going back and forth with my ChatGPT account, needing the model every ten minutes (not for writing, because it's terrible at that, but to actually help me think).
And then I realized: this is the world we are heading toward, a world where we are totally dependent on AI to "use our brains."
But aren't AI products failing miserably as we speak?
Last week, when we discussed whether AI was in a bubble, I argued that demand for GenAI products was, in fact, very low. Indeed, if you're using LLMs daily, you can consider yourself a very early adopter.
Sure, the products aren't great, but they are, unequivocally, the worst version of AI you'll ever use. I also argued that, despite their issues, people had unpleasant experiences with GenAI products mostly because they used them incorrectly.
They were setting themselves up for failure from the get-go. Still, as I've covered previously, these tools are already pretty decent when used for the use cases they were trained for.
But here's the thing: the new generation of AI, long-inference models, aren't poised to be a "bigger GPT-4"; they are considered humanity's first real conquest of AI-supercharged reasoning.
And if they deliver, they will become as essential as your smartphone.
Machines that can reason... and censor
But wait, what are long-inference models? Long considered the secret that is no longer a secret, these are new types of models that, simply put, are given time to think.
Upon receiving a request, instead of abruptly responding and hoping for the best as today's frontier models do, the "GPT-5s" of the future will respond only after their uncertainty falls below a defined threshold.
But what do I mean by that?
When working on a difficult problem, we humans do four things in our reasoning process: explore, commit, compute, and verify. In other words, if you are trying to solve, let's say, a math problem,
you first explore the space of possible solutions,
commit to exploring one in particular,
compute the solution,
and verify if your solution meets a certain "plausibility" threshold you are comfortable with.
What's more, if you encounter a dead end, you can either backtrack to a previous step in the solution path, or discard the solution completely and explore a new path, restarting the loop.
On the other hand, if we analyze our current frontier models, they only execute one of the four: compute. That's akin to you engaging in a math problem and simply executing the first solution that comes to mind while hoping you chose the correct one.
Good luck, right?
Worse still, our current best models allocate the exact same compute to every single predicted token, no matter how hard the user's request is. In simple terms, for an LLM, computing "2+2" or deriving Einstein's Theory of Relativity merits the exact same amount of "thought."
Knowing this, I bet you are no longer surprised by how limited these models are when facing complex problems.
Naturally, researchers knew this and asked: can't we allow LLMs to execute that loop? And when they did, they realized this was the birth of real AI reasoning.
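To make that loop concrete, here is a minimal sketch of what wrapping an LLM in an explore-commit-compute-verify loop could look like. Everything in it is illustrative: `ask_llm` is a hypothetical stand-in for whatever LLM API you use, and the confidence threshold is an arbitrary proxy for the uncertainty threshold mentioned earlier. This is not any lab's actual implementation, just the shape of the idea.

```python
from typing import Callable, Optional

def reasoning_loop(
    problem: str,
    ask_llm: Callable[[str], str],  # any function that sends a prompt to an LLM and returns text
    threshold: float = 0.8,         # arbitrary "good enough" confidence level
    max_attempts: int = 5,
) -> Optional[str]:
    # 1. Explore: have the model list several candidate solution paths.
    candidates = ask_llm(
        f"List {max_attempts} distinct approaches to solve:\n{problem}"
    ).splitlines()

    for approach in candidates[:max_attempts]:  # 2. Commit to one path at a time.
        # 3. Compute: work the problem along the committed path.
        solution = ask_llm(
            f"Problem: {problem}\nApproach: {approach}\nSolve it step by step."
        )
        # 4. Verify: ask the model to score its own solution's plausibility.
        verdict = ask_llm(
            f"Problem: {problem}\nProposed solution: {solution}\n"
            "Rate how likely this is correct, from 0 to 1. Reply with a number only."
        )
        try:
            confidence = float(verdict.strip())
        except ValueError:
            continue  # Unparseable verdict: discard this path and explore the next one.
        if confidence >= threshold:
            return solution  # Confident enough: stop searching and answer.
    return None  # Every path hit a dead end; no sufficiently confident answer.
```

The key design difference from today's models is visible in the loop itself: compute is spent per problem, not per token, and the model keeps searching (and backtracking) until its own verification step is satisfied.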
In fact, we have plenty of proof this is the real deal:
Andrew Ng's team showed that wrapping GPT-3.5 in agentic workflows (the loop I just described) lets it considerably outperform GPT-4, despite being markedly inferior in a raw side-by-side comparison.
Google considerably increased Gemini's math performance, embarrassing every other LLM, including Claude 3 Opus and GPT-4, and reaching human-level performance in math problem solving.
Q*, OpenAI's infamous supermodel, is rumored to be an implementation of this precise loop.
Google created an 85th-percentile AI coder in competitive programming by having it iterate over its own solutions.
Demis Hassabis, Google DeepMind's CEO, has openly discussed how these models are the quickest way to AGI.
Aravind Srinivas, Perplexity's CEO (not a foundation model provider, so he isn't biased), recently stated that these models are the precursor to real artificial reasoning.
And these are just a handful of examples. Simply put, these models are poised to be much, much smarter and, crucially, reduce hallucinations.
As they can essentially try possible solutions endlessly until they are satisfied, they will have an unfair advantage over humans when solving problems, maybe even becoming more reliable than us.
Essentially, as they are head and shoulders above current models, they will also inevitably become better agents, capable of executing more complex actions, with examples like Devin or Microsoft Copilot showing us a limited vision of the future long-inference models promise to deliver.
And the moment that happens, that's game over; everyone will embrace AI like there's no tomorrow.
Long-inference models are the reason your nearest big tech corporation is pouring its hard-earned cash into GPUs.
Make no mistake: they aren't betting on current LLMs; they are betting on what's coming soon.
But why am I telling you this? Simple: once they become viable, these models are the spitting image of the interface between humans and knowledge I previously mentioned.
Read? AI. Write? AI. Work? AI!
Soon, AI will be the answer to everything.
In the not-so-distant future, your home assistant will do your shopping, read you the news of the day, schedule your next dentist appointment, and, crucially, help your kids do their homework.
In the not-so-distant future, AI will determine whether your home accident gets covered by your insurance policy (which was negotiated by your personal AI with the insurer's AI underwriting bot). AI will even determine which potential mates you will be paired with on Tinder.
Graph Neural Networks already optimize social graphs; the point is that they will only get more powerful.
In the not-so-distant future, Google's AI Overviews will provide the answer to any of your questions, deciding what content you have the right to see or read; Perplexity Pages will draft your blog's next entry; ChatGPT will help your uncle research biased data to convince you to vote {insert left/right extremist party}.
AI, AI, and AI.
Your opinions and your stance on society will all be entirely AI-driven. Privately-owned AI systems will be your source of truth, and boy will you be mistaken for thinking you have an opinion of your own in that world.
With AI's control in the hands of the few, the temptation to silence contrarian views that put shareholders' money at risk will be irresistible.
But how will they do this?
Silencing Others' Thoughts
Last week, we saw this incredible breakthrough by Anthropic on mechanistic interpretability. Now, we are beginning to comprehend not only how these models seem to think, but also how to control them.
LLMs are no longer an unpredictable word machine; we know we can pretty effectively censor what they can or can't say. As we identify specific features (essentially, topics), we can choose to block them.
Current alignment methods can already censor content (fun fact, they do). However, they are absurdly easy to jailbreak, as proven by the research we discussed last Thursday.
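To give a feel for how mechanically simple such feature-level blocking can be, here is a minimal sketch. It assumes you have already extracted a direction vector representing a feature (a topic) from a model's activations, in the spirit of Anthropic's interpretability work; the function names, dimensions, and data below are hypothetical illustrations, not Anthropic's actual code.

```python
import numpy as np

def ablate_feature(activations: np.ndarray, feature_direction: np.ndarray) -> np.ndarray:
    """Remove a 'feature' direction (e.g., a topic the operator wants silenced)
    from a layer's activations by projecting it out.

    activations: (tokens, hidden_dim) activations at some layer.
    feature_direction: (hidden_dim,) vector representing the feature/topic.
    """
    direction = feature_direction / np.linalg.norm(feature_direction)
    # Component of each token's activation along the feature direction...
    coefficients = activations @ direction                 # shape: (tokens,)
    # ...subtracted out, so downstream layers never "see" the topic.
    return activations - np.outer(coefficients, direction)

# Hypothetical usage with random data, just to show the shapes involved.
rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 8))    # 4 tokens, 8-dim hidden state
topic = rng.normal(size=8)        # stand-in for a learned feature direction
cleaned = ablate_feature(acts, topic)
print(np.allclose(cleaned @ (topic / np.linalg.norm(topic)), 0))  # True: topic muted
```

The point is not the math; it is that once a topic is isolated as a direction, muting it is a one-line projection, applied silently and at scale by whoever controls the model.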
Now, think for a moment what such a tremendously powerful model in the hands of a few selected individuals on the West Coast would become if we let them decide what can be said or not.
Worst of all, in many cases, their intentions are as clear as a summer day.
Think like me or perish.
As if we had learned nothing from past experiences, society is again divided. We are as polarized as ever, and tolerance of others' opinions is nonexistent.
Think like me, otherwise you're a fascist or a communist. I, the holder of truth, the beacon of light, despise you for daring to think differently from me.
Everyone is a staunch defender of freedom of thought... as long as you think like they do.
To be clear, I'm not trying to sell you the idea that LLMs will create censorship; censorship is already alive and well.
The mainstream media's reputation is at an all-time low, as publications are no longer "beacons of truth" but "seekers of virality"; they desperately chase their readers' approval or rage (nothing goes more viral than being relatable or extremely contrarian) to pay the bills one more month.
While 43% of US TikTok users acknowledge getting their news from the app, it has been accused for years of serving as an antisemitic propaganda machine. Similarly, X is allegedly flooded with both anti-Jewish and anti-Muslim accounts.
The worst trait of social media isn't really censorship, but how biased your view of the world becomes through its lens. Social media products are engineered to glue you to the screen and monetize your eyeballs through ads.
Thus, they will either ferociously feed your confirmation bias or enrage you with opinions extremely contrary to yours. Either way, you end up angrier, more and more polarized... and glued to the screen, of course.
Naturally, AI is no exception, and it is already proving in the process that no cohort in society is safe.
Gemini 1.5 became anti-white to the point of utter foolishness, as the tool lost any capacity to generate anything resembling a white person.
Again, I'm not trying to sell you on any political ideology; I couldn't care less. I'm not here to educate you about anything outside of tech because I don't have the credentials for it.
But I do care about censorship, be that religiously or racially motivated, or right or left-leaning.
The point is that mainstream and social media have shaped societyâs understanding for decades, but AI will only worsen matters if it becomes consolidated.
But how consolidated is AI becoming?
SB-1047 and Capital
Most of society is completely blind to what's at stake. I deeply feel our freedom of thought is in peril if the current trend in AI continues.
AI, presumably the most powerful technology ever created, is readying itself to become completely controlled by fewer than 100 people, for two reasons: regulation and money.
The end of open-source?
SB-1047 is probably the first major bill to take a stance on how to regulate AI in the US, as:
Joe Biden's AI Executive Order was a declaration of intentions, not a bill per se.
The EU AI Act focused more on analyzing the suitability of AI on a per-use-case basis (although the original piece was far worse and was only saved in the last mile thanks to tough lobbying by Mistral and the French government).
Sponsored by radical organizations that believe AI will lead to human extinction, and without going into much detail, the bill is essentially a ban on mathematics (a reductionist view of AI is that LLMs are simply matrix multiplications), as echoed by the influential Naval.
Even the AI Alliance, a conglomerate of corporations like Meta and IBM, influential universities, and agencies supporting open AI, published an article condemning the bill and framing it as an "anti-open-source precedent."
Long story short, it places enough constraints and liability on model trainers that no one, especially under-capitalized open-source developers, will dare touch an LLM training pipeline with a ten-foot pole, de facto killing open-source AI.
Specifically, instead of burdening the application creators, it places potentially enforceable liability on the model creators, as if someone forged an iron sword, used it to kill someone, and the blame were put on the iron provider.
For a more detailed explanation, read this article.
While I believe AI should be regulated in some way and share concerns about deprioritizing AI safety, AI has become a for-profit race between a few private incumbents, which is precisely why safety is left behind.
Safety doesn't make money. In layman's terms, making AI more private will inevitably make it less safe as money becomes the driving factor.
Nevertheless, although proving big tech lobbying behind this bill would be very hard, it takes no genius to realize that they are the main beneficiaries, as they have all the capital and lobbying power in the world to handle the potential exposure for misuse of their models.
And to make matters worse, capital is becoming the main moat for AI supremacy.