Future Manufacturing with AI-Driven Particle Vision Analysis in the Microscopic World

Guangyao Chen, Fengqi You

Engineering, 2025, Vol. 52, Issue 9: 68–84.


Abstract

Recent advances in artificial intelligence (AI) have led to the development of sophisticated algorithms that significantly improve image analysis capabilities. This combination of AI and microscopic imaging is transforming the way we interpret and analyze imaging data, simplifying complex tasks and enabling innovative experimental methods previously thought impossible. In smart manufacturing, these improvements are especially impactful, increasing precision and efficiency in production processes. This review examines the convergence of AI with particle image analysis, an area we refer to as “particle vision analysis” (PVA). We offer a detailed overview of how this technology integrates into and impacts various fields within the physical sciences and materials sectors, where it plays a crucial role in both innovation and operational improvements. We explore four key areas of advancement—namely, particle classification, detection, segmentation, and object tracking—along with a look into the emerging field of augmented microscopy. This paper also underscores the vital role of the existing datasets and implementations that support these applications, which provide essential insights and resources that drive continuous research and development in this fast-evolving field. Our thorough analysis aims to outline the transformative potential of AI-driven PVA in improving precision in future manufacturing at the microscopic scale, thereby laying the groundwork for significant technological progress and broad industrial applications in nanomanufacturing, biomanufacturing, and pharmaceutical manufacturing. This exploration not only highlights the advantages of integrating AI into conventional manufacturing processes but also anticipates the rise of next-generation smart manufacturing, which is set to revolutionize industry standards and operational practices.
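To make the four PVA stages named above concrete, the following is a minimal illustrative sketch (not taken from the paper) of the detection and segmentation steps on a synthetic microscopy-like frame. It assumes only NumPy and SciPy, and uses classical thresholding plus connected-component labeling as a stand-in for the learned detectors and segmenters the review surveys; the function name and threshold value are hypothetical choices for illustration.

```python
import numpy as np
from scipy import ndimage

def detect_particles(image, threshold):
    """Binarize the image and label connected bright regions ("particles").

    Returns per-particle centroids (row, col) and pixel areas — the kind of
    per-object measurements that downstream classification and tracking
    stages consume.
    """
    mask = image > threshold
    labels, n = ndimage.label(mask)  # connected-component segmentation
    idx = range(1, n + 1)
    centroids = ndimage.center_of_mass(mask, labels, idx)
    areas = ndimage.sum(mask, labels, idx)
    return centroids, areas

# Synthetic frame: two bright "particles" on a dark background.
frame = np.zeros((64, 64))
frame[10:14, 10:14] = 1.0   # 4x4 particle
frame[40:45, 50:55] = 1.0   # 5x5 particle
centroids, areas = detect_particles(frame, threshold=0.5)
print(len(centroids))  # 2
```

In a real PVA pipeline, the threshold-and-label step would be replaced by a trained segmentation model (e.g., of the U-Net or Cellpose family discussed in the review), but the output contract — labeled instances with centroids and morphological measurements — is the same.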

Keywords

Particle vision analysis / AI-driven microscopic imaging / Smart manufacturing

Cite this article

Guangyao Chen, Fengqi You. Future Manufacturing with AI-Driven Particle Vision Analysis in the Microscopic World. Engineering, 2025, 52(9): 68–84. DOI: 10.1016/j.eng.2025.08.005



[158]

Hao Z, Li WN, Hou B, Su P, Ma J. Characterization method for particle extraction from raw-reconstructed images using U-net. Front Phys 2022; 9:816158.

[159]

Dormagen N, Klein M, Schmitz AS, Thoma MH, Schwarz M. Multi-particle tracking in complex plasmas using a simplified and compact U-net. J Imaging 2024; 10(2):40.

[160]

Ravi N, Gabeur V, Hu YT, Hu R, Ryali C, Ma T, et al. SAM 2: segment anything in images and videos. 2024. arXiv:2408.00714.

[161]

Kirillov A, Mintun E, Ravi N, Mao H, Rolland C, Gustafson L, et al. Segment anything. 2023. arXiv:2304.02643.

[162]

Xiao Y, Peng Y, Wang M, Ning Y, Zhou Y, Kong K, et al. A novel method for predicting coarse aggregate particle size distribution based on segment anything model and machine learning. Constr Build Mater 2024; 429:136429.

[163]

Larsen R, Villadsen TL, Mathiesen JK, Jensen KMØ, Boejesen EDNPSAM. implementing the segment anything model for easy nanoparticle segmentation in electron microscopy images. 2023. ChemRxiv: 2023-k73qz-v2.

[164]

Wang X, Yu K, Wu S, Gu J, Liu Y, Dong C, et al. ESRGAN:enhanced superresolution generative adversarial networks. In: Leal-Taixé L, Roth S, editors. Computer vision - ECCV 2018 Workshops; 2018 Sep 8-14; Munich, Germany. Cham: Springer International Publishing; 2019. p. 63-79.

[165]

Saharia C, Chan W, Chang H, Lee C, Ho J, Salimans T, et al. Palette:image-toimage diffusion models. In: Nandigjav M, Mitra NJ, Hertzmann A, editors. SIGGRAPH 2022 Conference Papers Proceedings; 2022 Aug 8-11; Vancouver, BC, Canada. New York City: Association for Computing Machinery; 2022. p. 15.

[166]

Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models. In:Proceedings of the 34th International Conference on Neural Information Processing Systems;2020 Dec 6-12; online conference. Red Hook: Curran Associates, Inc.; 2020. p. 6840-51.

[167]

Antarasen J, Wellnitz B, Kramer SN, Chatterjee S, Kisley L. Cross-correlation increases sampling in diffusion-based super-resolution optical fluctuation imaging. Chem Biomed Imaging 2024; 2(9):640-50.

[168]

Zhuang F, Qi Z, Duan K, Xi D, Zhu Y, Zhu H, et al. A comprehensive survey on transfer learning. Proc IEEE 2021; 109(1):43-76.

[169]

Neyshabur B, Sedghi H, Zhang C. What is being transferred in transfer learning? In:Proceedings of the 34th International Conference on Neural Information Processing Systems;2020 Dec 6-12; online conferenc. Red Hook: Curran Associates, Inc.; 2020. p. 512-23.

[170]

Wang Y, Yao Q, Kwok JT, Ni LM. Generalizing from a few examples: a survey on few-shot learning. ACM Comput Surv 2021; 53(3):1-34.

[171]

Simon C, Koniusz P, Nock R, Harandi M. Adaptive subspaces for few-shot learning. In:Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition;2020 Jun 14-19; Seattle, WA, USA. Piscataway: IEEE; 2020. p. 4135-44.

[172]

Zhang Z, Chen G, Zou Y, Huang Z, Li Y, Li R. MICM:rethinking unsupervised pretraining for enhanced few-shot learning. In:Proceedings of the 32nd ACM International Conference on Multimedia; 2024 Oct 28-Nov 1; Melbourne, VIC, Australia. New York City: Association for Computing Machinery; 2024. p. 7686-95.

[173]

Zhang Z, Chen G, Zou Y, Li Y, Li R. Learning unknowns from unknowns:diversified negative prototypes generator for few-shot open-set recognition. In:Proceedings of the 32nd ACM International Conference on Multimedia; 2024 Oct 28-Nov 1; Melbourne VIC, Australia. New York City: Association for Computing Machinery; 2024. p. 6053-62.

[174]

Holm EA, Cohn R, Gao N, Kitahara AR, Matson TP, Lei B, et al. Overview: computer vision and machine learning for microstructural characterization and analysis. Metall Mater Trans A 2020; 51(12):5985-99.

[175]

Singh V, Patra S, Murugan NA, Toncu DC, Tiwari A. Recent trends in computational tools and data-driven modeling for advanced materials. Mater Adv 2022; 3(10):4069-87.

[176]

Xi Z, Chen W, Guo X, He W, Ding Y, Hong B, et al. The rise and potential of large language model based agents: a survey. Sci China Inf Sci 2025; 68 (2):121101.

[177]

Lewis P, Perez E, Piktus A, Petroni F, Karpukhin V, Goyal N, et al. Retrievalaugmented generation for knowledge-intensive NLP tasks. In:Proceedings of the 34th International Conference on Neural Information Processing Systems; 2020 Dec 6-12; online conferece. New York City: Curran Associates, Inc.; 2020. p. 9459-74.

[178]

Jia C, Yang Y, Xia Y, Chen YT, Parekh Z, Pham H, et al. Scaling up visual and vision-language representation learning with noisy text supervision. In:Proceedings of the 38th International Conference on Machine Learning; 2021 Jul 18-24; online conference; 2021. p. 4904-16.

[179]

Alayrac JB, Donahue J, Luc P, Miech A, Barr I, Hasson Y, et al. Flamingo: a visual language model for few-shot learning. 2022. arXiv:2204.14198.

[180]

Zeng Y, Wu H, Nie W, Chen G, Zheng X, Shen Y, et al. Training-free anomaly event detection via LLM-guided symbolic pattern discovery. 2025. arXiv:2502.05843v1.

[181]

Reed S, Zolna K, Parisotto E, Colmenarejo SG, Novikov A, Barth-Maron G, et al. A generalist agent. 2022. arXiv:2205.06175.

[182]

Archana R, Jeevaraj PSE. Deep learning models for digital image processing: a review. Artif Intell Rev 2024; 57(1):11.

[183]

Fan L, Wang Z, Lu Y, Zhou J. An adversarial learning approach for superresolution enhancement based on AgCl@Ag nanoparticles in scanning electron microscopy images. Nanomaterials 2021; 11(12):3305.
