
AI Proves Your Eyes Are Key to Uncover Your Illnesses

🏝 TheTechOasis 🏝

Breaking down the most advanced AI systems in the world to prepare you for your future.

5-minute weekly reads.

🤯 AI Research of the week 🤯

The saying "The eyes are the window to your soul" is often attributed to William Shakespeare.

And AI is proving it’s actually true.

Source: MidJourney

Published in the world-famous journal Nature, a new study shows that an AI model named RETFound can accurately predict:

  • Diagnosis/Prognosis of sight-threatening diseases

  • And, more spectacularly, the 3-year incidence of systemic diseases (diseases that affect the whole body) like Parkinson’s disease

All this by simply looking at your eyes.

These types of discoveries, and not ChatGPT, are the ones that are really going to change the world for the better.

The new great trend, oculomics

Oculomics is an exciting new field that uses the eye as a non-invasive window for predicting several life-threatening diseases before they actually manifest.

As someone who has Alzheimer’s running in the family, I can’t help but feel extremely emotional about the fact that AI is already improving our prospects for early diagnosis, or even prediction.

But how does RETFound actually work?

MAEs, the gift that keeps on giving

In short, RETFound is the encoder part of a masked autoencoder (MAE).

But what does that mean?

Put simply, an MAE is an AI model that takes an image with most of its patches masked out and learns to reconstruct the original.

By comparing the reconstruction to the ground truth using a differentiable loss (meaning it can be optimized through gradients, like any other neural network), the model gets better at that reconstruction.

Source: Original MAE paper
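To make the recipe concrete, here’s a minimal NumPy sketch of the MAE idea: split an image into patches, hide most of them, and score a reconstruction only on the hidden patches. The 2×2 patch size, the 75% masking ratio (borrowed from the MAE paper) and the stand-in ‘reconstruction’ are illustrative; a real MAE uses a transformer encoder-decoder in place of both.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, p):
    # split an (H, W) image into non-overlapping p x p patches, flattened
    H, W = img.shape
    return img.reshape(H // p, p, W // p, p).swapaxes(1, 2).reshape(-1, p * p)

# toy 'image' and its patches
img = rng.random((8, 8))
patches = patchify(img, 2)                     # 16 patches of 4 pixels each

# mask 75% of the patches, as in the MAE paper
n = len(patches)
masked_idx = rng.choice(n, size=int(0.75 * n), replace=False)
visible = ~np.isin(np.arange(n), masked_idx)

# stand-in 'reconstruction': the mean of the visible pixels everywhere,
# where a real MAE would run the encoder-decoder on the visible patches
recon = np.full_like(patches, patches[visible].mean())

# MAE loss: mean squared error computed ONLY on the masked patches
loss = float(((recon[masked_idx] - patches[masked_idx]) ** 2).mean())
```

Because the target is the original image itself, no human labels are involved anywhere in this loop.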

And why do such an exercise?

Well, MAEs are great for two reasons:

  1. Building models from unlabelled data using automatic supervisory signals, a paradigm called self-supervised learning

  2. Learning rich semantic representations

As for the former, one of the reasons recent models have shown such an impressive performance is because they were fed humongous amounts of data; trillions of words in the case of LLMs (and millions of images in today’s case).

With MAEs, instead of humans pointing to the correct answer for every image, the ground truth is the original image itself.

That way, you can train your model with millions or billions of images with no human effort required.

Additionally, this ‘masking exercise’ forces the model to truly understand what it’s reconstructing.

For instance, if we show the model a patched image where only a German shepherd’s face can be seen, the model needs to ‘understand’ that it’s a German shepherd and, thus, that it should have four legs, a body, and a tail.

In other words, we force the model to learn a complex understanding of our world.

And how was RETFound built?

A long-distance cousin of ChatGPT

RETFound was trained in two steps, as per the image below:

  1. A pre-training phase to reconstruct images from a huge dataset of natural images

  2. A fine-tuning phase using retinal images, OCT (optical coherence tomography) and CFP (color fundus photography), the two most common imaging modalities in ophthalmology

Source: RETFound paper

While the former taught the model to reconstruct images in general, the latter specialized it for retinal image reconstruction.
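A toy sketch of that two-phase idea, with a single parameter vector standing in for the whole model (the ‘training step’ here is a deliberately simplified stand-in for real masked-reconstruction gradient updates; dataset sizes and learning rates are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# a single parameter vector standing in for the full MAE weights
weights = rng.normal(size=16)

def train_step(weights, batch, lr):
    # toy update pulling the weights toward the batch mean -- a
    # stand-in for a real gradient step on the reconstruction loss
    return weights - lr * (weights - batch.mean(axis=0))

# phase 1: pre-train on a large pool of natural images (abundant data)
natural_images = rng.random((1000, 16))
for _ in range(50):
    weights = train_step(weights, natural_images, lr=0.1)

# phase 2: fine-tune the SAME weights on scarce retinal images
retinal_images = rng.random((40, 16))
for _ in range(10):
    weights = train_step(weights, retinal_images, lr=0.01)
```

The key point is that phase 2 starts from the weights phase 1 produced, so the scarce retinal data only has to adapt an already-capable model rather than train one from scratch.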

Why the initial pre-training phase?

Retinal images are famously scarce. Since MAEs capture high-quality semantics from images, knowledge learned from an abundant source like natural images can be transferred to retinal images.

The same approach was used to train GPT and ChatGPT, but with a next-token-prediction objective instead of an MAE.

Finally, you ‘chop off’ the decoder part of the MAE and add a classification head to its encoder.

In other words, instead of reconstructing the image with the decoder part, you just use the encoder and the head to classify the image according to a myriad of classes (diseases in this case).
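In code, that ‘chop and replace’ step might look like the sketch below, where a placeholder function stands in for the pretrained encoder; the 768-dimensional feature size is a common ViT default, and the five disease classes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLASSES = 768, 5   # feature size and number of diseases (illustrative)

# stand-in for the pretrained MAE encoder: image -> feature vector
# (the decoder is discarded entirely at this stage)
def encoder(image):
    return np.tanh(image.flatten()[:DIM])

# the new classification head: a single linear layer + softmax
W = rng.normal(scale=0.01, size=(DIM, N_CLASSES))
b = np.zeros(N_CLASSES)

def classify(image):
    features = encoder(image)
    logits = features @ W + b
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    return probs / probs.sum()

probs = classify(rng.random((32, 32)))      # one probability per disease
```

Only this small head needs training from scratch; the encoder arrives already knowing what retinal structures look like.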

A hopeful future

After evaluating all tasks (diagnosis/prognosis of eye diseases and 3-year prediction of systemic diseases), the results are really encouraging:

For eye conditions, the model achieves over 85% average accuracy across several diseases, with 95% confidence intervals and p-values below 0.001 (meaning it’s extremely unlikely the model is performing this well by pure chance).

More interestingly, it showed very promising results for systemic disease prediction, reaching almost 80% accuracy for Parkinson’s disease.

They also evaluated which parts of the image RETFound was really paying attention to when classifying an illness, finding that the model truly understood how to identify the disease:

For this, they used RELPROP (relevance propagation), a technique that shows which parts of the image matter most to the model when it makes a decision.

That way, if the model points to the regions a clinician would examine to detect a disease, we know it’s genuinely understanding the condition.
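RELPROP itself propagates relevance scores backwards through the network’s layers, which takes some machinery; a simpler relative called occlusion sensitivity gives the same intuition in a few lines: blank out each region and measure how much the model’s score drops. The toy ‘model’ below, which only looks at the top-left corner of the image, is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy 'model': scores an 8x8 image; this one only keys on the top-left corner
def score(img):
    return img[:4, :4].sum()

img = rng.random((8, 8))
base = score(img)

# occlusion map: zero out each 4x4 patch and record the score drop
relevance = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        occluded = img.copy()
        occluded[4*i:4*i+4, 4*j:4*j+4] = 0.0
        relevance[i, j] = base - score(occluded)
```

The biggest score drop lands exactly on the region the model relies on, which is the same kind of evidence the relevance maps in the paper provide.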

I feel we’re heading into a future where doctors won’t diagnose or treat, but they will become the companions of the patients throughout the process.

But what do you think?

Will the role of doctors change dramatically in the near future?


🫡 Key contributions 🫡

  • RETFound proves that life-threatening diseases can be identified or predicted in a non-invasive way using your eyes

  • This signifies the value that MAEs have in building high-quality representations of our world, potentially being used as world models for LLMs

🔮 Practical implications 🔮

  • Most disease diagnoses and prognoses will be done by AI

  • Doctors will support the patient through the course of the illness, an expert playing an emotional support role

👾 Best news of the week 👾

🤔 ChatGPT vs Claude Pro, which should I choose?

🥇 Leaders 🥇

This week’s issue: Why Most AI Investors Are Wrong and Are Going to Get Destroyed

Warren Buffett summed it best, “The stock market game is the easiest game of all, you simply don’t need to play in order to win.”

Still, there’s something about investing that makes it so attractive. However, with AI investing the devil lies in the details.

As Admiral Ackbar from Star Wars would put it: “It’s a trap.” A deadly one, in fact.

AI, despite its prominence, is a completely misunderstood technology, even for those with the capacity to pour billions into a company.

Put simply, people are investing in something they don’t understand. And they are going to pay for it, no pun intended.

Today we are going to delve into the AI market landscape to reach an unequivocal conclusion:

If you think AI will be a moat for your company or the company you invested in, you’re wrong.

In the next five minutes, you’ll understand why most of the AI companies attracting billions in investment today are, actually, terrible investments.
