A multimodal visual–language foundation model for computational ophthalmology
Abstract: Early detection of eye diseases is vital for preventing vision loss. Existing ophthalmic artificial intelligence models focus on single modalities, overlooking multi-view information and struggling with rare diseases due to long-tail distributions. We propose EyeCLIP, a multimodal visual-language...
| Main Authors: | Danli Shi, Weiyi Zhang, Jiancheng Yang, Siyu Huang, Xiaolan Chen, Pusheng Xu, Kai Jin, Shan Lin, Jin Wei, Mayinuer Yusufu, Shunming Liu, Qing Zhang, Zongyuan Ge, Xun Xu, Mingguang He |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-06-01 |
| Series: | npj Digital Medicine |
| Online Access: | https://doi.org/10.1038/s41746-025-01772-2 |
Similar Items
- DeepSeek-R1 outperforms Gemini 2.0 Pro, OpenAI o1, and o3-mini in bilingual complex ophthalmology reasoning
  by: Pusheng Xu, et al. Published: (2025-08-01)
- Embodied artificial intelligence in ophthalmology
  by: Yao Qiu, et al. Published: (2025-06-01)
- EyeGPT for Patient Inquiries and Medical Education: Development and Validation of an Ophthalmology Large Language Model
  by: Xiaolan Chen, et al. Published: (2024-12-01)
- Tackling visual impairment: emerging avenues in ophthalmology
  by: Fang Lin, et al. Published: (2025-04-01)
- Neuro-ophthalmology and migraine: visual aura and its neural basis
  by: Hajar Nasir Tukur, et al. Published: (2025-08-01)