Development and Validation of an Automatic Ultrawide-Field Fundus Imaging Enhancement System for Facilitating Clinical Diagnosis: A Cross-Sectional Multicenter Study

Qiaoling Wei, Zhuoyao Gu, Weimin Tan, Hongyu Kong, Hao Fu, Qin Jiang, Wenjuan Zhuang, Shaochi Zhang, Lixia Feng, Yong Liu, Suyan Li, Bing Qin, Peirong Lu, Jiangyue Zhao, Zhigang Li, Songtao Yuan, Hong Yan, Shujie Zhang, Xiangjia Zhu, Jiaxu Hong, Chen Zhao, Bo Yan

Engineering ›› 2024, Vol. 41 ›› Issue (10) : 179-188. DOI: 10.1016/j.eng.2024.05.006
Research Article

Abstract

In ophthalmology, the quality of fundus images is critical for accurate diagnosis, both in clinical practice and in artificial intelligence (AI)-assisted diagnostics. Despite the broad view provided by ultrawide-field (UWF) imaging, pseudocolor images may conceal lesions that are critical for precise diagnosis. To address this, we introduce UWF-Net, an image enhancement algorithm that takes disease characteristics into account. Using the Fudan University ultrawide-field image (FDUWI) dataset, which includes 11 294 Optos pseudocolor and 2415 Zeiss true-color UWF images, each rigorously annotated, UWF-Net combines global style modeling with feature-level lesion enhancement. A pathological consistency loss is also applied to maintain the integrity of fundus features, significantly improving image quality. Quantitative and qualitative evaluations demonstrated that UWF-Net outperforms existing methods such as contrast-limited adaptive histogram equalization (CLAHE) and the structure- and illumination-constrained generative adversarial network (StillGAN), delivering superior retinal image quality, higher quality scores, and preserved feature details after enhancement. In disease classification tasks performed with existing classification systems, images enhanced by UWF-Net outperformed those enhanced by StillGAN, with a 4.62% increase in sensitivity (SEN) and a 3.97% increase in accuracy (ACC). In a multicenter clinical setting, UWF-Net-enhanced images were preferred by ophthalmologic technicians and doctors, yielding a significant reduction in diagnostic time ((13.17 ± 8.40) s for UWF-Net-enhanced images vs (19.54 ± 12.40) s for original images) and an increase in diagnostic accuracy (87.71% vs 80.40% for original images). Our research verifies that UWF-Net markedly improves the quality of UWF imaging, facilitating better clinical outcomes and more reliable AI-assisted disease classification. The clinical integration of UWF-Net holds great promise for enhancing diagnostic processes and patient care in ophthalmology.
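
To make the architecture summarized above concrete, the following Python (PyTorch) sketch outlines one plausible reading of it: an encoder whose features are modulated by a global style branch, reweighted by a lesion-oriented attention module, and decoded back to an enhanced image, with a pathological consistency term that penalizes drift of lesion-related features. All module names, layer sizes, and the exact loss form are illustrative assumptions, not the authors' UWF-Net implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LesionAttention(nn.Module):
    # Channel and spatial attention intended to emphasize lesion-bearing features.
    def __init__(self, channels):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_gate(x)     # reweight channels
        return x * self.spatial_gate(x)  # reweight spatial locations

class EnhancementNet(nn.Module):
    # Encoder -> global style modulation -> lesion attention -> decoder.
    def __init__(self, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, base, 3, padding=1),
            nn.InstanceNorm2d(base), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2), nn.ReLU(inplace=True))
        # Global style branch: predicts per-channel scale and shift from pooled
        # features, modeling image-level illumination and color style.
        self.style = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(base * 2, base * 4, 1))
        self.attention = LesionAttention(base * 2)
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        feat = self.encoder(x)
        scale, shift = self.style(feat).chunk(2, dim=1)
        feat = feat * (1 + scale) + shift   # global style modulation
        feat = self.attention(feat)         # feature-level lesion enhancement
        return self.decoder(feat)

def pathological_consistency_loss(lesion_feat_original, lesion_feat_enhanced):
    # Penalizes changes to lesion-related features between the original and
    # enhanced images, so enhancement does not erase or distort pathology.
    return F.l1_loss(lesion_feat_enhanced, lesion_feat_original)

if __name__ == "__main__":
    net = EnhancementNet()
    dummy_uwf = torch.randn(1, 3, 256, 256)  # stand-in for a UWF fundus image
    print(net(dummy_uwf).shape)              # torch.Size([1, 3, 256, 256])

In this reading, the global branch corrects overall illumination and color cast, the attention module keeps local lesion regions from being washed out, and the consistency term ties the enhancement to diagnostic content rather than appearance alone.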

Keywords

Ultrawide-field imaging / Fundus photography / Image enhancement algorithm / Artificial intelligence / Multicenter study / Artificial intelligence-assisted diagnostics / Diagnostic accuracy

Cite this article

Qiaoling Wei, Zhuoyao Gu, Weimin Tan, Hongyu Kong, Hao Fu, Qin Jiang, Wenjuan Zhuang, Shaochi Zhang, Lixia Feng, Yong Liu, Suyan Li, Bing Qin, Peirong Lu, Jiangyue Zhao, Zhigang Li, Songtao Yuan, Hong Yan, Shujie Zhang, Xiangjia Zhu, Jiaxu Hong, Chen Zhao, Bo Yan. Development and Validation of an Automatic Ultrawide-Field Fundus Imaging Enhancement System for Facilitating Clinical Diagnosis: A Cross-Sectional Multicenter Study. Engineering, 2024, 41(10): 179‒188. https://doi.org/10.1016/j.eng.2024.05.006
