Ophthalmol Sci · 2025 · Journal Article

Independent Evaluation of RETFound Foundation Model's Performance on Optic Nerve Analysis Using Fundus Photography.

Artificial Intelligence · Optic Nerve & Disc

Summary

RETFound accurately predicts cup-to-disc ratio and average retinal nerve fiber layer thickness from fundus photos. This demonstrates its utility for optic nerve evaluation, even without specific training.

Abstract

PURPOSE

This study evaluates RETFound, a retinal image foundation model, as a feature extractor for predicting optic nerve metrics like cup-to-disc ratio (CDR) and retinal nerve fiber layer (RNFL) thickness using an independent clinical dataset.

DESIGN

Retrospective observational study.

PARTICIPANTS

Patients who underwent fundus photography and RNFL OCT at the Byers Eye Institute, Stanford University.

METHODS

Fundus images were paired with RNFL OCT results whose study dates fell within 6 months of each other. Latent features were extracted from full-sized raw fundus images using RETFound and used as inputs to several linear regression models (Ridge, Lasso, Elastic Net, and ordinary least squares). Baseline models using pretrained VGG16 and Vision Transformers (ViTs) as feature extractors were also developed. All models were trained to perform single-output tasks (predicting CDR or average RNFL thickness) and multioutput tasks (predicting RNFL thickness at quadrants and clock hours). Data were split 80:20 at the patient level for training and validation.
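The pipeline described above — pretrained-encoder features feeding a linear regression, with an 80:20 split made at the patient level — can be sketched as follows. This is a minimal illustration, not the study's code: RETFound's encoder and weights are not reproduced here, so the feature matrix, patient IDs, and CDR targets are simulated placeholders with the dimensions reported in the article.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)

# Stand-ins for the study's data: 776 fundus-OCT pairs from 463 patients,
# each image reduced to a latent feature vector (dimension assumed here).
n_images, n_features = 776, 1024
X = rng.normal(size=(n_images, n_features))        # simulated RETFound features
y = rng.uniform(0.2, 0.9, size=n_images)           # simulated CDR targets
patients = rng.integers(0, 463, size=n_images)     # simulated patient IDs

# 80:20 split at the patient level, so all images from one patient
# land entirely in either the training or the validation set.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patients))

# One of the linear models named in METHODS (Ridge); Lasso, ElasticNet,
# and LinearRegression from sklearn.linear_model slot in the same way.
model = Ridge(alpha=1.0)
model.fit(X[train_idx], y[train_idx])
preds = model.predict(X[test_idx])
```

Grouping the split by patient, rather than by image, prevents two visits from the same patient from leaking across the train/validation boundary.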

MAIN OUTCOME MEASURES

Model predictions were evaluated on a test set using the coefficient of determination (R²), mean absolute error, and root mean square error.
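The three outcome metrics can be computed with standard scikit-learn utilities; the toy predictions below are illustrative values, not data from the study.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Hypothetical true vs. predicted cup-to-disc ratios for four eyes
y_true = np.array([0.45, 0.60, 0.30, 0.75])
y_pred = np.array([0.50, 0.55, 0.35, 0.70])

mae = mean_absolute_error(y_true, y_pred)           # mean |error|
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # root mean square error
r2 = r2_score(y_true, y_pred)                       # coefficient of determination
```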

RESULTS

Among the 463 unique participants contributing 776 fundus-OCT data pairs, the mean age was 63 years (±18 years), and 57.24% were female (N = 265). RETFound models demonstrated strong performance on single-output tasks, achieving R² values between 0.706 and 0.898 for CDR prediction and between 0.855 and 0.961 for average RNFL thickness prediction. Performance on multioutput tasks was less robust, with a highest R² of 0.583 for clock-hour RNFL thickness prediction and an R² of 0.811 for quadrant RNFL thickness prediction. RETFound models outperformed VGG16 and ViT models, which achieved maximum R² values of 0.731 and 0.687 in predicting RNFL thickness and CDR.

CONCLUSIONS

Machine learning models leveraging the massively pretrained RETFound foundation model accurately predicted CDR and average RNFL thickness from fundus photographs on an independent clinical dataset. Although RETFound was not trained or fine-tuned for these optic nerve evaluation tasks, it overcomes small-dataset limitations and excels in specialized applications.

FINANCIAL DISCLOSURES

Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

Keywords

Artificial intelligence · Foundation model · Fundus photography · Glaucoma · Optic nerve

