AI Proves Your Eyes Are Key to Uncovering Your Illnesses
TheTechOasis
Breaking down the most advanced AI systems in the world to prepare you for your future.
5-minute weekly reads.
AI Research of the week
William Shakespeare is famously quoted as saying "The eyes are the window to your soul".
And AI is proving it's actually true.

Source: MidJourney
Published in the world-famous journal Nature, a new study has proved that an AI model, named RETFound, can accurately predict:
Diagnosis/Prognosis of sight-threatening diseases
And more spectacularly, 3-year incidence prediction of systemic diseases (diseases that affect the whole body) like Parkinson's disease
All this by simply looking at your eyes.
These types of discoveries, and not ChatGPT, are the ones that are really going to change the world for the better.
The new great trend, oculomics
Oculomics is an exciting new field that uses your eyes as a non-invasive way to predict several life-threatening diseases before they actually occur.
As someone who's had Alzheimer's run in the family, I can't help but feel extremely emotional about the fact that AI is already improving our prospects for early diagnosis, or even prediction.
But how does RETFound actually work?
MAEs, the gift that keeps on giving
In short, RETFound is the encoder part of a masked autoencoder (MAE).
But what does that mean?
Put simply, an MAE is an AI model that takes in an image split into patches, hides most of them, and learns to reconstruct the original.
By comparing the reconstruction to the ground truth using a differentiable loss (meaning it can be optimized through gradients like any other neural network), the model gets better and better at that reconstruction.

Source: Original MAE paper
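To make this more concrete, here is a minimal sketch of the MAE idea in PyTorch. The patch size, masking ratio, and layer sizes below are illustrative assumptions of mine, not RETFound's actual configuration, but the logic is the same: mask most patches, encode only the visible ones, reconstruct everything, and compare against the original image.

```python
# Minimal sketch of a masked autoencoder (MAE). Shapes and sizes are
# illustrative assumptions, not the actual RETFound configuration.
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    def __init__(self, patch_dim=16 * 16 * 3, embed_dim=128, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, embed_dim), nn.GELU(), nn.Linear(embed_dim, embed_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.GELU(), nn.Linear(embed_dim, patch_dim)
        )
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))

    def forward(self, patches):  # patches: (batch, num_patches, patch_dim)
        B, N, D = patches.shape
        n_keep = int(N * (1 - self.mask_ratio))
        # Randomly choose which patches stay visible; the rest are hidden
        keep = torch.rand(B, N, device=patches.device).argsort(dim=1)[:, :n_keep]
        visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, D))
        encoded = self.encoder(visible)  # encode only the visible patches
        # Start from a learned mask token everywhere, then put the encoded
        # visible patches back into their original positions
        full = self.mask_token.expand(B, N, -1).clone()
        full = full.scatter(1, keep.unsqueeze(-1).expand(-1, -1, encoded.size(-1)), encoded)
        return self.decoder(full)  # reconstruct every patch

model = TinyMAE()
patches = torch.randn(4, 196, 16 * 16 * 3)  # 4 images, each split into 14x14 patches
recon = model(patches)
# The ground truth is the original image itself; the loss is differentiable,
# so it is optimized through gradients like any other neural network.
# (The real MAE scores only the masked patches; we keep it simple here.)
loss = nn.functional.mse_loss(recon, patches)
loss.backward()
```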
And why do such an exercise?
Well, MAEs are great for two reasons:
Building models from unlabelled data using automatic supervisory signals, an approach called self-supervised learning
Learning high-quality semantic representations
As for the former, one of the reasons recent models have shown such impressive performance is that they were fed humongous amounts of data; trillions of words in the case of LLMs (and millions of images in today's case).
With MAEs, instead of humans pointing to the correct answer for every image, the ground truth is the original image itself.
That way, you can train your model with millions or billions of images with no human effort required.
Additionally, this "patching exercise" forces the model to truly understand what it's reconstructing.
For instance, if we show the model a patched image where only a German shepherd's face can be seen, the model needs to "understand" that it's a German shepherd and, thus, that it should have four legs, a body, and a tail.
In other words, we force the model to learn a complex understanding of our world.
And how was RETFound built?
A long-distance cousin of ChatGPT
RETFound was trained in two steps, as per the image below:
A pre-training phase to reconstruct images from a huge dataset of natural images
A fine-tuning phase using retinal images, both OCT (optical coherence tomography) and CFP (colour fundus photography), the two most common imaging modalities in ophthalmology

Source: RETFound paper
While the former allowed the model to learn to reconstruct images correctly in general, the latter grounded the model in retinal imagery so that it performs well at reconstructing those images specifically.
Why the initial pre-training phase?
Retinal images are famously scarce. But since MAEs capture high-quality semantics from images, the knowledge the model learns from an abundant source like natural images can be easily transferred to retinal images.
The same approach was used to train GPT and ChatGPT, but with a token-prediction decoder instead of an MAE.
Finally, you "chop off" the decoder part (shown in the image above) and add a classification head to the encoder of the MAE.
In other words, instead of reconstructing the image with the decoder, you use just the encoder and the head to classify the image into one of many classes (diseases, in this case).
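Sketched in code, and reusing the toy TinyMAE from the earlier snippet, that "chop and replace" step looks roughly like this. The number of disease classes and the mean-pooling over patch embeddings are simplifying assumptions of mine, not the paper's exact head design.

```python
# Hedged sketch: turn a (pre-trained) MAE encoder into a disease classifier.
import torch
import torch.nn as nn

class RetinalClassifier(nn.Module):
    def __init__(self, pretrained_mae, embed_dim=128, num_diseases=10):
        super().__init__()
        self.encoder = pretrained_mae.encoder  # keep the encoder...
        # ...the decoder is simply discarded; it was only needed for reconstruction
        self.head = nn.Linear(embed_dim, num_diseases)

    def forward(self, patches):  # (batch, num_patches, patch_dim), no masking now
        tokens = self.encoder(patches)  # one embedding per patch
        pooled = tokens.mean(dim=1)  # average-pool into a single image vector
        return self.head(pooled)  # one logit per disease class

clf = RetinalClassifier(TinyMAE())  # in practice you pass a *pre-trained* MAE
logits = clf(torch.randn(4, 196, 16 * 16 * 3))  # a batch of 4 retinal images
labels = torch.randint(0, 10, (4,))  # labelled diagnoses for supervised fine-tuning
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
```

Because the encoder already carries rich representations from pre-training, this supervised step needs far fewer labelled retinal images than training a classifier from scratch would.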
A hopeful future
After evaluating all tasks (diagnosis/prognosis of eye diseases and 3-year prediction of systemic diseases), the results are really encouraging:
For eye conditions, the model achieves over 85% accuracy on average across several diseases, reported with 95% confidence intervals and p-values below 0.001 (meaning the chances of the model getting it right by pure luck are very small).
More interestingly, it showed very promising results for systemic disease prediction, reaching almost 80% accuracy for Parkinson's disease.
They also evaluated which parts of the image RETFound was really paying attention to when classifying an illness, finding that the model truly focuses on the regions that identify the disease:

For this, they used RELPROP, a technique that lets us see which parts of the image are most relevant to the model when it makes a decision.
That way, if the model points to the regions a clinician would examine to detect a disease, we know it has genuinely understood the disease's presentation.
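RELPROP itself propagates relevance scores back through the network layer by layer. The sketch below uses plain input-gradient saliency instead, a simpler stand-in for the paper's actual method, reusing the hypothetical classifier from the previous snippet just to illustrate the idea of a relevance heatmap.

```python
# Hedged sketch: which patches most influence the predicted disease?
# This is simple input-gradient saliency, not the RELPROP method itself.
import torch

patches = torch.randn(1, 196, 16 * 16 * 3, requires_grad=True)  # one retinal image
logits = clf(patches)  # classifier from the previous sketch
top_class = logits.argmax(dim=-1).item()  # the disease the model predicts
logits[0, top_class].backward()  # gradient of that score w.r.t. the input
relevance = patches.grad.abs().sum(dim=-1)  # one relevance score per patch
heatmap = relevance.reshape(14, 14)  # relevance map over the 14x14 patch grid
print(heatmap)
```

According to the paper, RETFound's relevance maps did point to the regions where each disease shows up, which is exactly the kind of check the figure above illustrates.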
I feel we're heading into a future where doctors won't diagnose or treat, but will instead become the patients' companions throughout the process.
But what do you think?
Will the role of doctors change dramatically in the near future?
Key contributions
RETFound proves that life-threatening diseases can be identified or predicted in a non-invasive way using your eyes
This underscores the value MAEs have in building high-quality representations of our world, potentially serving as world models for LLMs
Practical implications
Most disease diagnoses and prognoses will be done by AI
Doctors will support the patient through the course of the illness, with the expert playing more of an emotional-support role
Best news of the week
A recent interview with Sam Altman, CEO of OpenAI
ChatGPT vs Claude Pro: which should I choose?
A look at how consumers are using GenAI, by a16z
Leaders
This week's issue: Why Most AI Investors Are Wrong and Are Going to Get Destroyed
Warren Buffett summed it up best: "The stock market game is the easiest game of all; you simply don't need to play in order to win."
Still, there's something about investing that makes it so attractive. With AI investing, however, the devil lies in the details.
As Admiral Ackbar from Star Wars would put it: "It's a trap." A deadly one, in fact.
AI, despite its prominence, is a completely misunderstood technology, even for those with the capacity to pour billions into a company.
Put simply, people are investing in something they donāt understand. And they are going to pay for it, no pun intended.
Today we are going to delve into the AI market landscape to reach an unequivocal conclusion:
If you think AI will be a moat for your company or the company you invested in, youāre wrong.
In the next five minutes, you'll understand why most of the AI companies that have attracted billions in investment today are, in fact, terrible investment decisions.
Subscribe to Leaders to read the rest.