The 'Am I Getting Steamrolled by AI?' Framework
In AI, learning is winning. While Thursday's newsletter discusses the present, the Leaders segment on Sundays/Mondays will inform you of future trends in AI and provide actionable insights to set you apart from the rest.
10-minute weekly reads.
The 'Am I Getting Steamrolled?' Framework
Probably the most common question I get about AI is whether and how much someone should be afraid. As an engineer by study and advisor by craft, my answer is that… it depends.
However, that's a shit answer to a very important question. So I asked myself, how does someone actually address the elephant in the room?
For those reasons, today, I am providing you with the 'Am I getting steamrolled by AI?' framework, a no-hype, easy-to-understand mental model so that you can rest easy from now on if you happen to be safe… or start finding solutions to the problem before it's too late.
As always, I will examine this question from three perspectives: as an employee, an entrepreneur, and an investor, to ensure that everyone benefits from it.
Let's do this!
Big Announcement
This month, I am launching TheWhiteBox, a community for high-quality, highly curated AI content, free of the typical bullshit, hype, or ads, covering research, models, markets, future trends, and AI products in digestible, straight-to-the-point language.
But why would you want that?
Cut the Clutter: Value-oriented, focusing on high-impact news and clear intuitions to extract from them (why should you care).
Connect with Peers: Engage in valuable discussions with like-minded AI enthusiasts from top global companies.
Exclusive Insights: As the community grows, gain access to special content like expert interviews and guest posts with unique perspectives.
With TheWhiteBox, we guarantee you won't need anything else.
No credit card information is required to join the waitlist. Premium members get immediate access but are nonetheless very welcome to fill in the waitlist form.
Setting the Stage
Clarifying several premises beforehand is fundamental, no matter how tempting it is to go straight to the point.
These premises are based on my understanding after years of working in this industry and the intuitions I've developed over time from my own experience and from the opinions of some of the most respected actors on the main stage.
Simply put, understanding where AI comes from (its current strengths and, more importantly, its main limitations) will make the framework more realistic and interpretable while helping you avoid pointless overhype or wishful thinking.
And that starts with the compression principle, the true answer to what AI is.
The Compression Principle
Although it may initially seem counterintuitive, AI is synonymous with data compression. Without a doubt, those two words are by far the best summary of the current state of AI (and may I insist on the word 'current' for now).
In simple terms, today's AI takes a huge amount of data and, by extracting its underlying patterns, compresses the knowledge derived from it into a digital file called a 'model'.
For instance, ChatGPT is an AI model that was fed the 'entire' body of written human knowledge. By figuring out how words follow each other, it became capable of 'regenerating' that knowledge despite being, on average, three orders of magnitude smaller than the original data.
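To make that ratio tangible with purely illustrative numbers (these are assumptions for the arithmetic, not the model's actual figures): if a training corpus were roughly 10 TB of raw text and the resulting model file roughly 10 GB, the compression would be 10 TB / 10 GB ≈ 1,000x, i.e., three orders of magnitude.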
And the better the compression, the better the model.
Indeed, model compression, or how well the model 'does more with less,' is the best predictor of model performance (even more so than size), as shown in the graph below by Huang et al.
There's almost a 1:1 correlation between model compression and performance.
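If you want to see what 'compression' means in practice, it is typically measured as the model's cross-entropy on held-out text, expressed in bits per character: the fewer bits the model needs to encode text it has never seen, the better the compression. Below is a minimal sketch of that measurement; it assumes the Hugging Face transformers library and uses "gpt2" purely as a placeholder model.

```python
# Minimal sketch (illustrative, not from the article): estimating a language
# model's "compression" of held-out text as bits per character (BPC).
# Assumes the Hugging Face `transformers` library; "gpt2" is a placeholder.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def bits_per_character(model, tokenizer, text: str) -> float:
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        # `loss` is the mean cross-entropy per predicted token, in nats.
        loss = model(input_ids, labels=input_ids).loss.item()
    n_predictions = input_ids.shape[1] - 1            # the first token isn't predicted
    total_bits = loss * n_predictions / math.log(2)   # nats -> bits
    return total_bits / len(text)                     # spread over the raw characters


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

held_out = "Compression is prediction: the better the predictions, the fewer bits needed."
print(f"{bits_per_character(model, tokenizer, held_out):.2f} bits per character")
```

Lower is better: a model that needs fewer bits per character has captured more of the data's underlying structure, which is the kind of quantity such compression-versus-performance comparisons typically plot.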
In fact, compression as a means to intelligence has been studied for decades, from the dawn of information theory with Claude Shannon in 1948 to, more recently, the Hutter Prize (2006).
If you are having trouble understanding the intuition behind compression as a form of intelligence, think about it this way:
The capacity of an AI model (or human) to compress data means that it can extract the key patterns in the data and avoid unnecessary noise.
In other words, instead of paying attention to every detail, effective compression means storing only the essentials to build an effective representation, which in itself is an act of intelligence.
As an example, summarizing a complex document requires understanding and extracting the main ideas and discarding less important details.
Considering compression as an act of intelligence explains why LLMs are currently being explored as a means to reach AGI.
But why am I telling you this? Well, for two reasons:
Even though our frontier models are becoming better data compressors, aka more 'intelligent,' there's huge room for improvement, considering we still consistently drop model sizes by 10x yearly for the same outcomes.
However, frontier models are still being trained based only on imitation learning, which means our current AI is far more limited than some overly enthusiastic people claim.
The Biggest Truth in AI No One is Telling You
Broadly speaking, our 'most intelligent' AI models are still being trained by teaching them to imitate us.
In other words, based on our current best methods, they can eventually reach human-level reasoning capabilities but will not become superhuman.
In a since-deleted tweet, Noam Brown, the main researcher behind OpenAI's efforts to improve LLM reasoning, expressed this view precisely:
Similarly, Andrej Karpathy recently shared the same view (the link goes straight to that moment): while we might have figured out large-scale imitation learning, meaning we now have models that reasonably imitate human intelligence, we haven't figured out how to scale these models to superhuman capability.
I'm telling you all this as a precursor to the framework because you must not let the hype and fear get the best of you when deciding how AI impacts your craft.
The main point I'm trying to convey is that, although you'll see how some people have plenty of reasons to be very scared, frontier AI models are still primitive in most regards (reasoning and planning, to name a couple) and are thus largely unprepared to take on most human cognitive tasks.
But state-of-the-art AI's limitations don't end there. One limitation that has proven extremely persistent is Moravec's paradox.
Simplicity is relative
Articulated by Hans Moravec in the 1980s, the principle states that what is easy for humans is hard for machines, and vice versa.
In layman's terms, while frontier AI can very effectively help you understand the intuitions behind quantum mechanics, we are still amazed whenever a robot learns to fold clothes, something a 5-year-old human can do easily.
Consequently, while people still anthropomorphize ChatGPT every chance they get, in many respects AI is still in its very, very early days, especially in tasks that are, ironically, easy for humans.
In other words, if some parts of your job are very easy for humans, donāt assume that will also be the case for machines. In fact, quite the contrary.
In conclusion, you will agree by now that to evaluate how AI will impact you in the short term, we first had to set the stage on where AI is right now and, importantly, where it's not.
However, don't take these limitations for granted, as AI could easily steamroll your job, company, or investment portfolio in the coming months or years.
Thus, how do we know if that's you?
Let me introduce the 'Am I Getting Steamrolled by AI?' framework.
Navigating the AI Age
As mentioned, we are dividing this high-level framework into three parts:
Employees
Investors
Entrepreneurs
Of course, they overlap if you fall into more than one category. I particularly like the way OpenAI's COO approaches this question:
If OpenAI were to release a model 100 times better than GPT-4, would you be excited or scared for your company/product/job?
Without further ado, let's start with employees.
Employees: Education and Emotion
This is the category into which most people fall. Hence, it's no secret that it's the most controversial. And considering that institutions ranging from McKinsey to the IMF have added fuel to the fire, everyone thinks their job is in peril.
Nonetheless, Gallup concludes that 22% of Americans feel AI will make their jobs obsolete, and another study claims that almost 60% perceive it as a 'threat to humanity.'
Are these claims too dramatic? As always, it depends.
To identify whether that's you, we must analyze three main factors: imitation, digital trace, and emotion.
Subscribe to Leaders to read the rest.