Personhood Credentials

In partnership with

THEWHITEBOX
TLDR;

  • 📰 News on OpenAI, Google, Nvidia, AMD & Cursor regarding new models, AI game engines, hardware battles, and the best GenAI product.

  • 🚨Trend of the Week🚨 Using zk-proofs to solve the biggest AI problem

The Daily Newsletter for Intellectually Curious Readers

  • We scour 100+ sources daily

  • Read by CEOs, scientists, business owners and more

  • 3.5 million subscribers

OpenAI’s Leaked Strategy

A new piece by The Information leaks OpenAI’s recent moves and its strategy for releasing a new model this fall, codenamed Strawberry. This model, once known as Q* (the model whose improved intelligence spooked the Board and allegedly ignited Sam Altman's firing back in November 2023), will have improved reasoning capabilities, especially in math and coding.

In addition, Strawberry could be used to generate improved reasoning data at scale to train another model, named Orion, which could become OpenAI’s new flagship model when it arrives in 2025. Orion would allegedly be so powerful that it has already been shown to the Feds (acknowledged moments ago by Sam Altman in a tweet).

This leak comes at the same time that OpenAI is reportedly eyeing a new financial round that could see the company valued at over $100 billion.

TheWhiteBox’s take:

OpenAI’s leak tells us that our current data is simply not good enough to achieve a reasoning breakthrough; the next frontier of AI models needs higher-quality data.

Generating that data is so costly and laborious that it is forcing OpenAI to build an intermediate model, Strawberry, with the capabilities needed to create high-quality data at scale to train Orion. It may even force the highly unprofitable company to raise even more money.

But The Information’s piece lacks much of the detail I cover in my newest Notion piece, from what the ‘Strawberry’ models actually look like to how they brilliantly use a technique called distillation to make all this economically viable.

In the meantime, OpenAI announced yet another price cut for its flagship model, GPT-4o, to just $4 per million tokens, a ninefold reduction since the launch of GPT-4 last year. ARK estimates that AI costs fall an average of 86% annually, which could soon make LLMs ‘so cheap costs don’t matter.’

GOOGLE DEEPMIND
Generating Games One Frame at a Time

Google has presented GameNGen, an AI game engine. It’s part Reinforcement Learning (RL), part diffusion model: the engine generates the next frame of the game (DOOM) in real time as the user plays.

The RL model chooses the action, and the diffusion model conditions on that action to generate the next frame. And so forth.
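The loop described above can be sketched as follows, with toy stand-ins for the two components (all names here are hypothetical; the real system uses a trained RL agent and a latent diffusion model over pixels):

```python
import random

class ToyPolicy:
    """Stand-in for the RL model that chooses the player/agent action."""
    ACTIONS = ["forward", "back", "left", "right", "shoot"]

    def act(self, frame):
        return random.choice(self.ACTIONS)

class ToyFrameModel:
    """Stand-in for the diffusion model. A real model denoises pixels
    conditioned on recent frames plus the chosen action; here we just
    return a labeled placeholder."""

    def sample(self, past_frames, action):
        return f"frame_{len(past_frames)}_after_{action}"

def play(policy, model, first_frame, n_frames=5):
    history = [first_frame]
    for _ in range(n_frames):
        action = policy.act(history[-1])            # RL model picks the action
        next_frame = model.sample(history, action)  # diffusion conditions on it
        history.append(next_frame)                  # and so forth
    return history

frames = play(ToyPolicy(), ToyFrameModel(), "frame_0")
```

The key design point is that the frame model never sees game state, only frame history and an action, which is what makes the approach a general "game engine" rather than a renderer.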

The AI game engine generates frames at 20 fps (frames per second), making the game playable while adhering exceptionally well to the original DOOM. Notably, the entire model runs on a single TPU.

TheWhiteBox’s take:

The gaming industry is starting to look like one of the most disruptible industries. Not only are companies building smart AI NPCs for games, but now we have AIs that can create entire games on the fly.

While this model strictly adheres to DOOM, it’s a fantastic step in the right direction. A while back, Google presented Genie, an AI that could take an input image and make it playable to a user, as the next frame was automatically generated based on the current screen plus the user’s action.

Thus, combining Genie and GameNGen could soon allow us to create entire games from text (stating what game you want to play and letting the AI game engine create and run it), aka text-to-game, turning video games into a uniquely personalized experience.

The main issue with this model is that the Reinforcement Learning part is extremely data-inefficient, requiring almost a billion data points to train. Thus, scaling this approach to multiple games will be a considerable challenge.

GOOGLE GEMINI
Google Presents Gems

Yesterday, Google presented Gemini Gems, which allows users to build personalized Gemini chatbots. You give them the instructions, and they act accordingly, similar to OpenAI’s Custom GPTs. They also released an improved version of Imagen 3, their image generation model.

TheWhiteBox’s take: 

I’ve never used Custom GPTs with my OpenAI paid account, so that’s all you need to know about my feelings toward these tools. However, some people swear by them, so maybe they will see something I don’t.

The idea of customizing an LLM automatically through a simple text instruction is compelling, but in my own experience, the interaction feels almost identical to the original GPT-4o (or Gemini now), so why bother?

MARKETS
NVIDIA Crushes Earnings, Yet the Stock Falls. But the Real Concern Isn’t Them

NVIDIA's shares dipped 1.46% in premarket trading after reporting over $30 billion in revenue for the fiscal second quarter, beating analyst estimates of $28.7 billion and representing a 122% year-on-year increase.

Despite this strong performance, the slight dip in gross margins and extremely high market expectations led to the stock's decline.

NVIDIA projected $32.5 billion in revenue for the third quarter, indicating an 80% year-on-year increase but a slowdown from the previous quarter (also beating analyst estimates).

However, the Blackwell GPU delay may still be fresh in investors' minds. Importantly, Nvidia clarified the reason; in the words of CFO Colette Kress, they had to update the GPU mask.

Producing the chips that go into AI accelerators is an acutely error-prone process. One of the most delicate steps is lithography, where EUV (extreme ultraviolet) light is channeled through a set of mirrors and finally onto a mask that projects the light onto the silicon wafer, drawing the transistor circuit.

That mask might have been imperfect, and thus the yield (the proportion of chips that come out functional during manufacturing) fell sharply, so they fixed it to increase TSMC’s yield (Nvidia does not manufacture its chips; TSMC does).

TheWhiteBox’s take:

NVIDIA’s dip can be summarized as a rational decision by investors in a highly irrational market. Never in history has a company with such huge revenues, beating expectations quarter after quarter, dipped on the news.

But the biggest AI news of the week in markets is Super Micro Computer’s downfall.

It is one of the fastest-growing companies in the world (it had grown more than NVIDIA this year) but has had a dreadful month, losing 36% of its value at the time of writing. The reason: Hindenburg Research, a financial-forensics firm, opened a short position on the company and published several reports suggesting that Super Micro’s top leadership is actively manipulating its financials.

The main concern seems to be the dubious relationship between SMC and some of its suppliers, which are ‘casually’ owned by the CEO’s family and may not be appropriately recognized in the accounting, for obvious reasons.

The markets, high on AI cope, didn’t react that badly initially. However, after Super Micro delayed its 10-K filing with the SEC (required after Hindenburg’s initial report), the stock plunged heavily, and continues to plunge, as markets fear something is really off.

Overall, has the golden age of this AI cycle passed us by?

Probably, and unless OpenAI knocks the ball out of the park with its new model, enterprises better start adopting the shit out of GenAI. Otherwise, we are in for a nasty crash.

HARDWARE
AMD Matches NVIDIA’s Performance

AMD has published open results for its highly anticipated MI300 GPUs in the MLCommons v4.1 Inference benchmarks, the go-to suite for evaluating AI hardware performance. For the first time, AMD’s MI300 matches NVIDIA’s H100 when running Llama 2 70B.

Still, NVIDIA reigns supreme in most benchmarks.

You can check overall results here (AMD’s results are the fields ID 4.1-0002/0070).

TheWhiteBox’s take:

Competitors catching up may be why NVIDIA’s gross margins are thinning. Concerningly, gross margins should continue to fall as more competitors match NVIDIA’s prowess.

We also saw NVIDIA’s new platform, Blackwell, post its first results. Despite posting the performance of only one accelerator (deployments usually use many: 8 in NVIDIA’s current Hopper line-up, and up to 36 or 72 per rack once Blackwell ships), the results were astonishing (4.1-0074): over 12k tokens per second on Llama 2 70B from a single B200 accelerator (GPU), almost half of what eight NVIDIA H100s deliver (8 vs. 1).
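Some rough back-of-the-envelope arithmetic on those figures (the 8x H100 node total is inferred from “almost half” and is an assumption, not a published benchmark number):

```python
# One B200 posts ~12k tokens/s on Llama 2 70B, roughly half of an
# 8x H100 node (~24k tokens/s, assumed from "almost half" above).
b200_single_gpu = 12_000
h100_node = 24_000
h100_per_gpu = h100_node / 8          # ~3,000 tokens/s per H100

speedup = b200_single_gpu / h100_per_gpu
print(speedup)                        # ~4x throughput per accelerator
```

Under those assumptions, one Blackwell accelerator delivers roughly four times the per-GPU throughput of Hopper on this workload.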

If you’re an NVIDIA investor, watch closely for Q4, as NVIDIA announced the first deliveries of Blackwell for Q4. The results of those deployments will be crucial to the company's future value.

 

DEVELOPMENT
The Best GenAI Product. And It’s Not Even Close

CursorAI, a company that has just raised a $60 million Series A round (at a $400 million valuation), is the best Generative AI product in the world right now. It’s not ChatGPT; it’s not Claude or MidJourney. Cursor is second to none, and it’s not even close.

In a nutshell, it’s an AI-enhanced IDE that helps programmers use AI effectively to code. It’s similar to GitHub Copilot but feels stupidly more powerful.

  • Drafts entire boilerplates

  • Autocompletes code

  • Debugs its code (and yours)

And more. In my experience building a Claude agent that uses the Brave Search API to retrieve recent context, I may have written 5% or less of the total code, acting more like a code reviewer (you still need to be able to read code to review it, understand issues, and suggest improvements).

I would love to say I have a discount code for you, but I’m not sponsored or affiliated. Yet, the free version is already impressive. While tools like Cosine may be the future, Cursor is undoubtedly the present.

LEARN
The Insights Corner

👩‍🔬 Building Reliable Agents, Webinar by Princeton

TREND OF THE WEEK
Zk-Proofs: Solving The Biggest AI Problem

Finally, we have it: an AI use case for crypto that isn’t pointless or scammy. It could mean blockchains play a vital role in the AI economy, even becoming a basic requirement for a well-functioning society.

It sounds exaggerated, but it’s not. Bear with me, because it’s not me saying it. And it’s not some crappy venture fund or a crypto bro saying it either.

It’s a rockstar group of researchers from OpenAI, Harvard, Microsoft, Berkeley, MIT, a16z, Oxford, and even the American Enterprise Institute, among others.

Because AI’s biggest threat isn’t whether it will kill us; it’s the inappropriate use of AI content at scale.

In an AI world, how will you know what’s human? Who are you going to trust? These questions will be a common theme in your life, as it’s almost impossible to tell if you’re interacting with an AI already.

These researchers have presented Personhood credentials, a proposal to leverage zero-knowledge proofs (zk-proofs) as the ultimate form of ‘proof of humanity.’

So we always know who’s behind what. Moreover, with zk-proofs, you can prove you’re human without revealing your identity. It sounds like magic, but it’s not. And you’ll learn about it today in plain English.

Let’s dive in!

Dealing with Lies

Ever since Jordan Peele, of Key and Peele, faked being Obama almost six years ago, deepfakes have become a real problem, as producing them is now an effortless exercise.

Indistinguishable Material

Recently, the South Korean government announced a ‘much tougher’ stance on deepfakes after sexually explicit deepfake images of real women were distributed across schools and universities, and deepfakes already play a prominent role in the current US election.

But it’s not all about deepfakes.

Today, you can chat with entities online without knowing whether they are real humans. To make matters worse, AI tools are seeing their prices fall by several orders of magnitude in a year (as we discussed earlier), making the ‘art of deceiving’ a ridiculously cheap one.

Overwhelming Scalability

If you have interacted with Claude or ChatGPT once or twice, you know how easily you could be fooled if someone used them to spam bots across several social media sites looking for engagement, with clear examples on X, Instagram, or even a paywalled site like Medium.

For example, GPT-4, a model just over a year old, cost $60 and $120 per 1 million prompted (input) and sampled (output) tokens, respectively.

Today, OpenAI’s GPT-4o mini, which is head and shoulders better, has a price of:

  • Input: $0.15 / million tokens, 400 times cheaper than GPT-4 (March 2023)

  • Output: $0.6 / million tokens, 200 times cheaper than GPT-4 (March 2023)
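A quick sanity check of those ratios, plus what ARK’s 86% annual cost decline (mentioned earlier) implies if it holds; the compounding projection is illustrative, not a forecast:

```python
# Prices in dollars per million tokens.
gpt4_input, gpt4_output = 60.0, 120.0      # GPT-4, March 2023
mini_input, mini_output = 0.15, 0.60       # GPT-4o mini today

print(round(gpt4_input / mini_input))      # 400 (times cheaper on input)
print(round(gpt4_output / mini_output))    # 200 (times cheaper on output)

# ARK: costs fall ~86% per year, i.e. each year retains ~14% of the cost.
annual_retention = 1 - 0.86
for years in (1, 2, 3):
    remaining = gpt4_input * annual_retention ** years
    print(years, round(remaining, 2))      # after 3 years: ~$0.16/M input
```

At that rate, GPT-4’s launch-era input price compounds down to cents within three years, which is exactly the dynamic that makes mass-produced deception so cheap.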

Consequently, a solution to the proliferation of AI-generated content with nefarious intentions is a prime need for a well-functioning society. Several methods have been explored to fight these economically incentivized bad actors.

However, every single one of them has at least one core issue.

But what if there was a perfect way to fight all this?

The Power of Proving without Showing

Zero-knowledge proofs, a method theoretically proposed decades ago, allow a prover to convince a verifier of a statement without revealing the underlying secret.

The Color-Blind Man and the Trickster

In other words, using zk-proofs, a person can prove they’re human without revealing their identity or any trait whatsoever that could lead to its discovery.

A fully privacy-preserving yet undeniable proof of humanity. But how is that possible? We can explain it as if we were five years old.

Picture this: a trickster approaches a color-blind man and hands him two balls of identical shape, texture, weight, and smell… identical in everything except color, which the color-blind man can’t differentiate.

Then, he tells him: “I am a magician and can tell, at all times, whether you have switched balls from one hand to the other, even if you hide them behind your back.”

To the color-blind man, these balls are identical. Hence, he takes the bet, puts his hands behind his back, and switches the balls between them. But once he shows both hands, the trickster immediately guesses that he switched.

“Pure luck, that is,” the color-blind man thinks to himself. It was a 50% chance, right?

But they play again. And again. And again. Every single time, to his surprise, the trickster instantly guesses whether he has switched balls between hands.

Luck, right? Well… it can’t be luck anymore. Guessing this game by pure luck seven times in a row has a probability of 0.5^7 ≈ 0.78%, less than a 1% chance.

Something’s off. And boy, it is.

The color-blind man does not know that the balls are different colors; he sees both as identical, so there’s no possible explanation beyond luck. But at the same time, he knows his statistics and how unlikely seven straight guesses are.

Therefore, the color-blind man is convinced that the trickster is a magician, as he can’t find a reasonable explanation for what’s happening.
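The statistics behind the story are just repeated independent coin flips: each extra round halves the probability that luck explains the streak. A minimal check:

```python
def luck_probability(n_rounds: int) -> float:
    """Chance that pure 50/50 guessing produces n consecutive correct calls."""
    return 0.5 ** n_rounds

print(luck_probability(7))    # 0.0078125, i.e. ~0.78%
print(luck_probability(20))   # ~9.5e-07: effectively impossible
```

This is why zk-proof protocols repeat their challenge many times: a handful of rounds already drives the "lucky cheater" probability below any reasonable doubt.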

And that is what zk-proofs are: a way of proving something to someone without revealing your secret, by making the alternative explanation so statistically unlikely that the other person has no choice but to believe you.

As for our example, zk-proofs can be used to prove ‘I’m human’ without revealing any secrets about my humanity (name, image, nationality, etc.), by making the probability of me not being human so statistically unlikely that it’s simply not plausible.

And how can we implement this?

The Crypto Moment Many Were Waiting For

As a summary, the researchers offer a visual of how this whole process would work:

  1. First, we would have the enrollment phase, the phase where humans contact an issuer, who gives out the personhood credential (one per person).

To receive the credential, the user must provide a ‘proof of humanity,’ which could be as simple as appearing physically at a certain place to collect the credential or, in simple terms, taking any action that could not possibly be exercised unless you’re human.

  2. Then, to access web services, the user presents their personhood credential to validate their humanity. The web service then runs a zk-proof over the credential, validating the user’s claim of humanity.
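The two phases can be sketched with a toy hash commitment standing in for the credential (all names hypothetical; a real scheme would use an actual zero-knowledge proof so the holder proves knowledge of the secret without revealing it, which a bare hash check does not achieve):

```python
import hashlib
import secrets

def enroll(registry: set) -> str:
    """Enrollment: after the issuer verifies humanity (e.g., in person),
    it records a commitment to a secret only the human holds. The
    registry never stores the secret or any identity data."""
    secret = secrets.token_hex(32)
    commitment = hashlib.sha256(secret.encode()).hexdigest()
    registry.add(commitment)            # one credential per person
    return secret

def verify(registry: set, secret: str) -> bool:
    """Usage: a web service checks that the presented credential maps to
    a registered commitment, learning nothing about who the holder is."""
    return hashlib.sha256(secret.encode()).hexdigest() in registry

registry: set = set()
my_secret = enroll(registry)
print(verify(registry, my_secret))     # True: humanity validated
print(verify(registry, "forged"))      # False: no credential was issued
```

The structural point survives the simplification: the registry holds only opaque commitments, so even the verifier learns nothing about who is behind a valid credential.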

Beautifully simple. But what’s the point of all this if the issuer of the personhood credential knows all my information?

While proving humanity could also be privacy-preserving, the risk remains if the issuer is centralized (think of a government or a private company).

Consequently, researchers indirectly hint at using a decentralized issuer that leverages cryptography to work (just like physicality, AI can’t forge advanced cryptography either). In other words, the issuer would have to be a distributed ledger, such as a blockchain.

But what do blockchains offer?

Blockchain 101:

Blockchains are distributed ledgers: a record of all transactions that have occurred on that blockchain.

They work by having a distributed set of validator nodes, each holding a complete copy of the ledger. For a transaction to be approved, a majority of nodes must approve it. Consequently, if one node tries to tamper with the ledger (introducing false transactions), the other nodes reject it.

Therefore, compromising such a network requires controlling a majority of nodes, a multi-billion-dollar exercise (or more, in cases like Bitcoin), disincentivizing malicious actors from even trying.
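The tamper-resistance argument reduces to a simple majority rule, which a toy simulation makes concrete (real consensus protocols are far more involved; this only shows the voting logic):

```python
def transaction_approved(votes: list[bool]) -> bool:
    """A transaction enters the ledger only if a strict majority of
    validator nodes approve it."""
    return sum(votes) > len(votes) / 2

# One malicious node among ten can neither block a valid transaction...
honest_majority = transaction_approved([True] * 9 + [False])   # True
# ...nor push a forged one that the nine honest nodes reject.
forged_attempt = transaction_approved([True] + [False] * 9)    # False

print(honest_majority, forged_attempt)
```

Flipping the outcome requires flipping a majority of the votes, which is precisely why an attacker must own most of the network to tamper with the ledger.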

Thus, with a blockchain network acting as the issuer of the personhood credential, we guarantee three things:

  1. Immutability: Bad actors can’t tamper with the ledger

  2. Distributed economic incentives: Every validator node securing the network gets paid in cryptocurrency to do so, generating economic incentives to protect and secure the entire network (in case you weren’t aware, this is the whole point of cryptocurrencies in the first place)

  3. Privacy: Users can obtain the credentials without revealing their identity even to the issuer, guaranteeing total privacy.

In a nutshell, we finally have a solution to a problem that will become unmanageable if not solved, which is why zk-proofs are this week’s trend of the week.

These are the kinds of problems we should be looking to solve, not whether AI will develop a mind of its own and kill us all.

TheWhiteBox’s take

Of course, the question is, how does this affect the Crypto market?

Honestly, not much. Most of the blockchains we see today have traded away decentralization (or, to be specific, security) in exchange for cheaper and faster transactions.

Frankly, I believe those blockchains are worthless and not fit for this use case (or any, for that matter). And while I think personhood credentials could be the application that finally validates blockchains' importance to AI, only extremely secure blockchains are an option.

Bitcoin seems like the natural choice because it is the most decentralized (and, thus, secure) blockchain, but the expensive costs of running zk-proofs remain a huge challenge.

One way or another, you have to hand it to the researchers: zk-proofs are one of the most elegant methods I’ve ever seen, and they could also be the key to uniting the worlds of AI and crypto.

THEWHITEBOX
Premium

If you like this content, join Premium to receive four times as much content weekly without saturating your inbox. You will even be able to ask the questions you need answers to.

For business inquiries, reach out to me at [email protected]