
AI Consolidates Even More, The Misleading Risks of AI, & More

THEWHITEBOX
New Premium Content

TLDR;

Technology:

  • đŸ€š OpenAI loses top executives and delays GPT-5

  • 🧐 AI existential threats are massively exaggerated

  • 😍 Google’s Gemma2 2B is a Titan

AI Hardware markets & venture:

  • đŸ€« Groq’s insane $650 million Series D round is a message to Nvidia

  • 📉 Will Intel Survive this?

AI General markets & venture:

  • đŸ„” Character.ai just showed us the crude reality of AI

  • đŸ’„ Are the markets imploding?

Million-dollar AI strategies packed into this free 3-hour AI Masterclass, designed for founders & professionals. Act fast, because it’s free only for the first 100.

TECHNOLOGY
OpenAI Loses 3 Top Executives in a Day, Lowers Expectations for GPT-5

In a single day, OpenAI saw three top executives leave the company and had to temper excitement around its next model, which will not be released at this year’s DevDay in October, almost guaranteeing a delay until 2025.

Among the departing figures, Greg Brockman, Sam Altman’s right-hand man and company president, will take a sabbatical for the rest of 2024. Additionally, John Schulman, a co-founder, is, surprisingly, joining Anthropic. The third executive to leave was Product Lead Peter Deng.

TheWhiteBox’s take:

Of the three departures, Schulman’s is by far the most concerning. Leaving a company you co-founded to join its biggest rival to “deepen my focus on superalignment” is a rather clear way of saying “OpenAI is no longer focused on safety, so I’m jumping ship.”

This was the same argument Jan Leike, the firm’s previous superalignment lead, made when he too joined Anthropic, although in his case he was much clearer about his motives.

OpenAI seems to be on a never-ending spree of bad PR, becoming the industry’s ugly duckling. Adding insult to injury, it is no longer the one leading the charge, as Google, Meta, and Anthropic have all built models that, at the very least, match OpenAI’s.

At the same time, the company seems to be burning cash like there’s no tomorrow, with The Information, citing sources, estimating losses of around $5 billion for 2024.

And the cherry on top? Just yesterday, Elon Musk announced another lawsuit against Sam Altman and OpenAI, accusing Altman and Brockman, the man who just went on sabbatical, of lying to him about their true intentions (making a profit instead of building AGI). Let’s not forget that Musk is an OpenAI co-founder and that the company was indeed constituted as a non-profit, which is clearly no longer the case.

Regarding GPT-5, the delay is no surprise; Mira Murati hinted at it a few months ago. For whatever reason, people assumed GPT-5 was coming out soon, which is, of course, not going to happen.

AI existential threats are massively exaggerated

This is a highly interesting, easy-to-read piece from the AI Snake Oil newsletter. It argues that the existential-risk (x-risk) threats AI allegedly poses to society stand on very shaky evidence (if any), to the point that, in most cases, they are just guesses.

TheWhiteBox’s take:

Time after time, researchers have shown that most AI x-risks rest on no evidence and that current LLMs are far from dangerous to use; in most cases, their supposedly dangerous uses merely surface information that is already publicly available on the Internet.

This article takes a more scientific approach, examining x-risks from the three perspectives doomers have put forward: inductive, deductive, and subjective risk. The conclusion is clear in all cases: the risks are exaggerated and lack solid evidence.

Moreover, when comparing the estimates of ‘AI experts’ with those of ‘superforecasters,’ people who excel at the notoriously difficult discipline of forecasting, the huge gap between their predictions clearly indicates how blinded by their prejudices most AI experts are.

In my personal opinion, x-risks are more a matter of scaring people away from creating or using open-source models. Proponents want AI treated with the same regulatory scrutiny as drugs, with incumbents like Sam Altman, OpenAI’s CEO, and even the US AI Executive Order hinting at the creation of an FDA (Food and Drug Administration) for AI.

In a similar vein, the US government announced a few days ago that it would monitor open-source models, with OpenAI’s next frontier-level model also set to be scrutinized.

Google’s New Minute Model is a Titan

Google’s new Gemma 2 model, just 2 billion parameters in size, has obtained absolutely remarkable results. Despite being almost 90 times smaller, it beat GPT-3.5, the model that brought ChatGPT into the world, in all categories.

TheWhiteBox’s take:

You may wonder why beating a two-year-old model is remarkable. However, the size of this particular model is what makes it unique.

Looking at the Hugging Face repo, we know that the model weighs about 5 GB. At ‘mixed’ precision, i.e., two bytes per parameter, that works out to roughly 2.5 billion parameters.

That number may not say much to you, but it means you can confidently run a ChatGPT-level model on a laptop with 16 GB of RAM (you could try on an 8 GB machine, but it won’t go smoothly).
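To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The ~2.6 billion parameter count and the 20% runtime overhead for activations and the KV cache are my assumptions for illustration, not figures from the article:

```python
# Back-of-envelope memory math for running an LLM locally.

def weights_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate size of the weights in GB at the given precision."""
    return n_params * bytes_per_param / 1e9

def fits_in_ram(n_params: float, ram_gb: float, overhead: float = 1.2) -> bool:
    """Crude check: weights plus ~20% runtime overhead vs. available RAM."""
    return weights_gb(n_params) * overhead < ram_gb

gemma2_params = 2.6e9  # assumed parameter count for Gemma 2 2B
print(f"Weights at 2 bytes/param: ~{weights_gb(gemma2_params):.1f} GB")  # ~5.2 GB
print("Fits in 16 GB of RAM?", fits_in_ram(gemma2_params, 16))  # True
print("Fits in 8 GB of RAM? ", fits_in_ram(gemma2_params, 8))   # True, but leaves little for the OS
```

The same math explains the article’s numbers: a ~5 GB file at two bytes per parameter implies roughly 2.5 billion parameters.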

This is remarkable: in just two years, the size needed for a given level of quality has dropped by almost 100 times, signaling that there is still much room to compress models further.

In other words, we may soon be able to run GPT-4-level models on our smartphones, putting enormous power in the hands of billions of people without any privacy concerns, since nothing would leave the device.
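If you want to try this yourself, a minimal local-inference sketch with Hugging Face’s transformers library could look like the following. I’m assuming the instruction-tuned checkpoint is published as google/gemma-2-2b-it on the Hub and that you have accepted Gemma’s license there:

```python
# Minimal sketch: running Gemma 2 2B locally with Hugging Face transformers.
# Prerequisites (assumed): pip install torch transformers accelerate,
# plus `huggingface-cli login` after accepting the Gemma license on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # assumed repo id for the instruction-tuned 2B model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # two bytes per parameter, ~5 GB of weights
    device_map="auto",           # CPU on a laptop; GPU if one is available
)

prompt = "Explain in two sentences why small language models matter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On a 16 GB machine this should run comfortably in bfloat16; on an 8 GB one, you would likely want a quantized (e.g., 4-bit) variant instead.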

But the biggest news of the week came from markets.

Subscribe to Leaders to read the rest.
