Supervision System of AI-based Software as a Medical Device

Jiannan Zhang , Yingying Li , Jiahui Zhou , Yelin Zhu , Lanjuan Li

Strategic Study of CAE ›› 2022, Vol. 24 ›› Issue (1): 198–204. DOI: 10.15302/J-SSCAE-2022.01.021

Engineering Management
Original Article


Abstract

A scientific supervision system promotes the vigorous and standardized development of emerging technologies. Artificial intelligence-based software as a medical device (AI-based SaMD) is an important product in the health field enabled by artificial intelligence (AI). As AI develops further, its distinctive black-box algorithms and independent learning ability have posed major supervision challenges. AI-based SaMD supervision needs to keep pace with the times, and a more scientific supervision strategy is urgently required to minimize adverse events involving AI-based SaMDs and their risk impact. This article summarizes the current status of supervision systems and supporting resources for AI-based SaMDs in China and abroad, considering the difficulties in algorithm change management, quality control, and safety traceability. In addition, we explore the problems and challenges of AI-based SaMD regulation in China. Furthermore, we suggest that the AI-based SaMD supervision and support systems in China should be improved to overcome the disadvantages of post-market supervision.

Keywords

software as a medical device (SaMD) / artificial intelligence (AI) / supervision science

Cite this article

Jiannan Zhang, Yingying Li, Jiahui Zhou, Yelin Zhu, Lanjuan Li. Supervision System of AI-based Software as a Medical Device. Strategic Study of CAE, 2022, 24(1): 198–204. DOI: 10.15302/J-SSCAE-2022.01.021


1 Introduction

Software as a medical device (SaMD) is stand-alone software intended for medical purposes that runs on general-purpose computing platforms, independent of medical device hardware [1]. SaMDs based on artificial intelligence (AI) are developing rapidly as AI technologies advance and penetrate the field. AI-based SaMD products have emerged, including common medical image diagnostic software, radiological diagnostic devices, clinical chemistry examination systems, cardiovascular diagnostic and monitoring devices, neurological diagnostic devices, and ophthalmic diagnostic devices [2]. The new technological capabilities of AI have created a need to regulate this special class of products [3]. Regulators from countries leading in AI development and international organizations, such as the US Food and Drug Administration (FDA) and the International Medical Device Regulators Forum (IMDRF), have held discussions that initiated regulatory changes for emerging intelligent medical device software (MDSW). The FDA concluded that changes were necessary because the technical specificity of AI makes it incompatible with the traditional regulatory model. AI technology is characterized by data, algorithms, and computing power, unlike traditional computer-based medical software, which fully conforms to the static medical device regulatory model because its program code is completely fixed. SaMDs driven by data or algorithms can change dynamically as their algorithms and data change, exacerbating uncertainty about product safety and effectiveness and increasing the need for real-time software regulation and high-frequency change control [4]. Post-market regulation, which includes change control, quality control, and safety monitoring, is therefore particularly critical for AI-based SaMDs.

Most national regulators have gradually established regulatory regimes and policies for AI-based SaMDs. In 2019, the US FDA discussed a regulatory framework for AI/machine learning (ML)-based medical device software (AI/ML-based SaMD) to address the regulation of AI-based SaMDs [5]. The FDA inherited the risk classification framework developed by the IMDRF for SaMDs [6] and formulated pre- and post-market regulatory procedures of varying stringency, assigning products to the 510(k), pre-market approval (PMA), or De Novo regulatory approval channels based on AI algorithm characteristics. The FDA classifies AI-based SaMD algorithms as locked or adaptive according to the nature of the algorithm, which grades the level of uncertainty risk. A locked algorithm does not change with use: the same input always yields the same output. An adaptive algorithm can change the fundamental performance of the software through a defined learning process; algorithm updates and validation follow that process, so the same input may yield different outputs before and after an iterative software update. Under the FDA regulatory framework, IDx-DR became the first FDA-approved autonomous AI system to provide diagnostic decisions for diabetic retinopathy [7]. More than 30 other AI-based SaMD products with locked algorithms have been approved by the FDA for marketing [8]. However, the FDA does not yet have a reliable method for regulating AI-based SaMDs with adaptive algorithms [9]. The National Medical Products Administration (NMPA) of China has proposed a different understanding of AI-based SaMD regulation. The NMPA defines an AI-based SaMD as stand-alone software based on medical device data and AI technologies for medical use [10].
The Key Points for the Review of Deep Learning-Aided Decision-Making Medical Device Software categorizes AI-based SaMDs as data-driven and algorithm-driven software and uses algorithm maturity as a core factor for risk classification and grading. The Guidelines for Defining the Classification of Artificial Intelligence Medical Software Products specify that AI medical software with low-maturity algorithms (i.e., algorithms that have not been marketed or whose safety and effectiveness have not been fully confirmed) is managed as a Class III medical device if used for auxiliary decision-making, such as lesion characteristic identification, lesion nature determination, medication guidance, treatment plan development, and other clinical treatment recommendations. Such software is managed as a Class II medical device if used for non-auxiliary decision-making, such as data processing or measurement that provides clinical reference information. According to the FDA classification, the 12 AI-based SaMD products currently approved by the NMPA and managed as Class III medical devices all have locked algorithms [11]. Although specific criteria for judging risk differ among national regulators, high-risk AI-based SaMDs are a shared concern, and a prudent approach is taken toward the regulation of low-risk AI-based SaMDs.
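The locked versus adaptive distinction discussed above can be made concrete with a small sketch. This is an invented illustration only (not regulatory reference code): the classes, thresholds, and update rule are hypothetical, but they show why a locked algorithm is compatible with static pre-market review while an adaptive one is not.

```python
# Hypothetical sketch of the FDA's locked vs. adaptive algorithm distinction.
# All names and numbers are invented for demonstration.

class LockedModel:
    """Locked algorithm: parameters are frozen at approval time, so the
    same input always yields the same output."""
    def __init__(self, threshold):
        self.threshold = threshold  # fixed when the device is marketed

    def predict(self, score):
        return "positive" if score >= self.threshold else "negative"


class AdaptiveModel:
    """Adaptive algorithm: a defined learning process may change decision
    behavior during use, so the same input can yield different outputs
    before and after an update."""
    def __init__(self, threshold, step=0.1):
        self.threshold = threshold
        self.step = step  # magnitude of each pre-specified update

    def predict(self, score):
        return "positive" if score >= self.threshold else "negative"

    def update(self, score, true_label):
        # The "defined learning process": nudge the threshold whenever
        # the model's decision disagrees with field feedback.
        if true_label == "positive" and score < self.threshold:
            self.threshold -= self.step
        elif true_label == "negative" and score >= self.threshold:
            self.threshold += self.step


locked = LockedModel(threshold=0.5)
adaptive = AdaptiveModel(threshold=0.5)

before = adaptive.predict(0.45)    # "negative" under the initial threshold
adaptive.update(0.45, "positive")  # field feedback triggers a learned update
after = adaptive.predict(0.45)     # "positive": same input, different output

print(locked.predict(0.45), before, after)
```

For the locked model, `predict(0.45)` returns the same answer no matter how often it is called; for the adaptive model, the single post-market update changes the output for an identical input, which is exactly the behavior that static approval cannot certify in advance.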

In the early stages of AI-based SaMD development, regulators are actively exploring the key elements affecting regulation over the AI-based SaMD lifecycle. This exploration revolves mainly around (1) a key regulatory strategy, that is, a highly flexible, real-time regulatory system or framework adapted to the special technical properties of AI/ML, namely limited transparency, poor interpretability, generalization ability, robustness, and algorithmic self-adaptation; and (2) regulatory support and guarantees, that is, a compatible support system for scientific regulation, including quality evaluation and safety regulation standards and reserves, testing programs, and standardized datasets that serve as heavily relied-upon technical tools. Pre-market access is relatively mature; however, because medical software developers do not fully understand the errors and risks of the clinical environment, and because the number of products applied for and marketed keeps increasing, we should not overlook scientifically and actively guiding and supervising the post-market development of AI-based SaMDs through regulation, preventing their algorithmic risks, and addressing the ongoing regulation of AI-based SaMD applications after market access. From October 2018 to May 2019, medical device recalls caused by software defects accounted for 16.91% of the 136 international medical device recalls resulting from serious adverse events in the US, UK, Canada, Australia, and China [12]. Unfortunately, the number of such events will undoubtedly increase with the black-box usage model of AI-based SaMDs.

Thus, this article focuses on AI/ML technology features in analyzing the current state of regulatory systems, guidelines, standards, and standardization support for AI medical software. We also examine change control, quality control, and safety monitoring in the post-marketing regulation of AI medical software under China's current regulatory system, and present recommendations to further improve the complete-lifecycle regulation of AI-based SaMDs in China.

2 Analysis of current state of AI-based SaMD regulatory strategies

The regulatory system for AI-based SaMDs is the means by which regulation is implemented. The US, the UK, Japan, and China have developed regulatory regimes for AI-based SaMDs. Regulators, represented by the FDA, have proposed outline strategies and solutions for critical issues such as post-market algorithm changes, quality control, and safety monitoring under the influence of AI/ML technologies.

2.1 Regulatory strategies based on algorithm changes

For AI-based SaMD post-market algorithm changes, the US FDA has proposed a front-loaded change-control scheme in a discussion paper entitled Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) to ensure that algorithms remain controlled as AI-based SaMDs iterate after the product is put on the market. The FDA frames the potential areas of change and the expected change descriptions for software specifications through SaMD pre-specifications (SPS) and the algorithm change protocol (ACP) in the change control plan. Additionally, the FDA has proposed a new total product life cycle approach to ensure that products follow the algorithm change protocol and that algorithm changes meet pre-specified performance goals [13]. Similarly, for the rapid iteration of algorithms, the Japanese Pharmaceuticals and Medical Devices Agency provided an approval process for continuous post-marketing SaMD performance improvement using AI in the December 2019 update of the Pharmaceuticals and Medical Devices Act [14], stipulating that performance must be improved in one direction and managed by the marketing authorization holder (MAH). The MAH can develop a procedure to ensure the improvement process and submit it during pre-marketing approval.
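The pre-specification idea above can be sketched as a simple change-control gate: an update is accepted only if its change type was pre-declared in the SPS and its performance stays inside the pre-specified envelope, per the procedure pledged in the ACP. The field names, metrics, and thresholds below are invented for illustration and are not taken from any FDA document.

```python
# Hypothetical sketch of SPS/ACP-style change control for an AI-based SaMD.
# All field names and numeric thresholds are invented for illustration.

SPS = {  # SaMD pre-specifications: the pre-declared region of change
    "min_sensitivity": 0.90,
    "min_specificity": 0.85,
    "allowed_change_types": {"retrain_on_new_data", "threshold_tuning"},
}

def acp_review(change_type, metrics):
    """Algorithm change protocol: accept a proposed update only if it stays
    within the envelope pre-specified in the SPS; otherwise reject it (a
    rejected change would require a new regulatory submission)."""
    if change_type not in SPS["allowed_change_types"]:
        return False, "change type not pre-specified"
    if metrics["sensitivity"] < SPS["min_sensitivity"]:
        return False, "sensitivity below pre-specified floor"
    if metrics["specificity"] < SPS["min_specificity"]:
        return False, "specificity below pre-specified floor"
    return True, "within pre-specified performance goals"

# A retrain that meets the pre-specified performance goals passes the gate.
ok, reason = acp_review("retrain_on_new_data",
                        {"sensitivity": 0.93, "specificity": 0.88})

# A change of intended use was never pre-specified, so it is rejected even
# though its measured performance is high.
bad, why = acp_review("new_indication",
                      {"sensitivity": 0.95, "specificity": 0.90})

print(ok, reason, bad, why)
```

The design point is that the regulator reviews the SPS envelope and ACP procedure once, up front, rather than reviewing every individual post-market update.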

2.2 Regulatory strategies based on quality control

For post-marketing quality control regulation, quality evaluation of algorithmic models in terms of transparency, interpretability, and dependability remains underexplored in some countries; quality regulation will be realized through new quality evaluation frameworks and systems.

In January 2021, the Digital Health Center of Excellence, an independent division of the FDA Center for Devices and Radiological Health, released the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, which proposed five research points to ensure the safety and monitor the effectiveness of AI-based SaMDs (Table 1) [15]. The EU Medical Devices Regulation 2017/745, which became fully applicable in May 2021, focuses on strengthening the post-marketing obligations of manufacturers by requiring them to reassess their current quality management and documentation strategies and establish comprehensive processes, including quality management systems (QMS) and post-sales monitoring procedures for technical documentation. China's regulation of AI-based SaMDs extends the traditional regulatory model, in which quality is regulated through QMS, clinical performance, and assessment. The NMPA developed and released the Standalone Software Appendix of the Medical Device Good Manufacturing Practice in 2019, which presents requirements in eight areas, including standalone software, software component production management, quality control, and adverse event monitoring and analysis. In March 2021, the State Drug Administration officially released the revised Regulations on the Supervision and Administration of Medical Devices to strengthen the post-marketing supervision of medical devices. The regulations strengthen the obligations of medical device registrants and filers: they must establish a QMS appropriate for the product, maintain its effective operation, develop post-marketing research and risk control plans, and ensure their effective implementation.

Table 1. Summary of Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan.

Action 1: Tailored regulatory framework for AI/ML-based SaMD. Measures: The Agency intends to issue draft guidance for public comment in this area, including a proposal of what should be included in SaMD pre-specifications (SPS) and an algorithm change protocol (ACP) to support the safety and effectiveness of AI/ML-based SaMD algorithms.

Action 2: Good machine learning practice. Measures: Developing a set of AI/ML best practices (data management, feature extraction, training, interpretability, evaluation, and documentation) akin to good software engineering practices or quality system practices; efficiently examining and evaluating AI/ML-based SaMD algorithms.

Action 3: Patient-centered approach incorporating transparency to users. Measures: Holding a public workshop to elicit input from the broader community on how device labeling supports transparency to users; recommending that manufacturers include labeling of AI/ML-based medical devices to support transparency to users.

Action 4: Development of methodology for evaluation and improvement of machine learning algorithms. Measures: Developing methods for identifying and eliminating bias and for ensuring the robustness and resilience of algorithms in withstanding changing clinical inputs and conditions; collaborating with leading researchers, including at the Centers for Excellence in Regulatory Science and Innovation at the University of California San Francisco, Stanford University, and Johns Hopkins University.

Action 5: Real-world performance. Measures: Supporting the piloting of real-world performance monitoring; gathering performance data on the real-world use of SaMDs may allow manufacturers to understand how their products are used, identify improvement opportunities, and respond proactively to safety and usability concerns.

2.3 Regulatory strategies based on safety monitoring

For post-marketing safety regulation, adverse event reporting systems and product recall regimes are the mainstream response mechanisms. Improving post-marketing traceability is another means of obtaining timely AI-based SaMD safety information and attributing AI-induced adverse events. The EU has implemented a unique device identification program that prioritizes support for MDSW. It requires manufacturers to provide a post-market clinical follow-up evaluation report as part of the periodic safety update report for newly introduced products of Class IIa or higher. FDA post-marketing regulatory strategies include traceability and safety monitoring based on real-world data, which are being explored in a pilot program [14]. The Technical Guideline for the Application of Real-World Data in Clinical Evaluation of Medical Devices (Trial), developed and released by the State Drug Administration in November 2020, suggests using real-world data for post-marketing clinical evaluation and adverse event monitoring. New regulations add unique product identification traceability, extended inspection, and other regulatory measures. Post-market safety regulation is implemented through a unique identifier-based traceability system and an adverse event monitoring system.

3 Analysis of current status of AI-based SaMD regulatory support system

3.1 Guidelines and standard specifications

AI-based SaMD-related guidelines or standards are crucial for scientific regulation of AI-based SaMDs. The FDA has established extensive cooperation with international standardization organizations such as the International Organization for Standardization (ISO), the International Electrotechnical Commission, and the Institute of Electrical and Electronics Engineers (IEEE) in the development of international standards for AI-based SaMDs. The FDA is also working with the Association for the Advancement of Medical Instrumentation, the British Standards Institution, and other organizations to develop programs on medical AI terminology and classification, and the medical AI certification processes. The International Telecommunication Union (ITU) and the World Health Organization (WHO) jointly founded the Focus Group on Artificial Intelligence for Health (FG-AI4H) in July 2018, which creates health assessment guidelines covering health ethics, regulatory compliance, requirement specifications, software lifecycle specifications, data specifications, testing practice specifications, evaluation specifications, demonstration applications, general requirements for applications and platforms of AI-based SaMDs, and specific requirements for different types of medical applications [16]. The ISO has published referenceable standards for AI-based SaMDs such as Health Informatics – Applications of Machine Learning Technologies in Imaging and Other Medical Applications (ISO/TR 24291: 2021), Condition Monitoring and Diagnostics of Machine Systems – Data Processing, Communication, and Presentation (ISO 13374-4: 2015), and Health Software and Health IT Systems Safety, Effectiveness, and Security – Part 1: Principles and Concepts (ISO 81001-1: 2021). 
Standards related to quality and safety evaluation are also being developed, including Safe, Effective, and Secure Health Software and Health IT Systems – Assurance Case Application Guidance (ISO/AWI TS 6337) and Health Software – Part 2: Health and Wellness Apps – Quality and Reliability (ISO/PRF TS 82304-2).

Development of AI-based SaMD standards in China is in its initial stages. The Center for Medical Device Evaluation of the State Drug Administration has issued two guidelines to promote AI-based SaMD standardization. The first, the Key Points for the Review of Deep Learning-Aided Decision-Making Medical Device Software, clarifies product approval rules in five areas: scope of application, approval concerns, software updates, relevant technical considerations, and descriptions of registration and application materials. The second, the Key Points for Evaluation of Pneumonia CT Image Triage and Evaluation Software (Trial), further facilitates the review and approval of pneumonia-related SaMD software through green channels and appropriate procedural leniency, in light of the national COVID-19 situation. The Artificial Intelligence Medical Device Working Group, led by the Chinese Academy of Quality Supervision, Inspection, and Quarantine, has established the Standard for the Performance and Safety Evaluation of Artificial Intelligence-Based Medical Devices: Terminology (IEEE P2802) and the Recommended Practice for the Quality Management of Datasets for Medical Artificial Intelligence (IEEE P2801). The focal points of AI medical device standardization technology initially formed the basis of China's AI medical device standards system, which, according to the Medical Device Standards Management measures, is divided into basic standards, management standards, method standards, and product standards. The basic standards are oriented mainly to common problems in the industry foundation and the standardization of terminology, classification, coding, data quality, data annotation, and datasets of AI medical devices. Artificial Intelligence Medical Device Quality Requirements and Evaluation Part 1: Terminology and Artificial Intelligence Medical Device Quality Requirements and Evaluation Part 2: General Requirements for Data Sets are in the draft stage.
Management standards, addressing special problems in AI production quality management, are in the declaration stage and include risk management, algorithm development, infrastructure management, product iteration change management, and personnel management. Method standards are geared primarily toward the quality evaluation of products and components, including product/system performance evaluation methods, product change evaluation methods, safety testing methods, test tool evaluation, and labeling tool evaluation. To date, the Artificial Intelligence Medical Device Innovation and Cooperation Platform has released two technical documents: Performance Indicators and Test Methods for Diabetic Retinopathy-Assisted Decision-Making Products Based on Fundus Color Photography and Performance Index and Test Method of Pulmonary Nodule Image-Aided Decision-Making Products Based on Chest CT. In terms of product standards, first drafts have been created for coronary CT blood flow analysis and neurological imaging-assisted diagnosis, according to the number of products on the market and regulatory needs.

3.2 Information-based regulatory support environment

To systematically improve evaluation of the quality of AI-based SaMD products, some countries and regions are actively promoting research on key regulatory support, including standardized datasets, test cases, test methods, tools, indexes, and platforms. Through the FG-AI4H, established in July 2018, the ITU and WHO have created an online benchmarking platform and open-source software packages with tools for annotation and data collection. The platform collects unpublished test datasets to validate AI models and establishes AI gold-standard datasets [6]. Twenty-one EU countries launched the AI4EU project to share and integrate technical resources such as datasets, algorithms, and technical tools. Through its website (www.ai4europe.eu), the project has provided datasets such as clinical use cases as well as shared resources such as ABELE, an interpreter that supports creating explanations in the form of enhanced images [17]. The Chinese Food and Drug Administration launched the Development and Application Demonstration of a Medical Artificial Intelligence Product Whole-Lifecycle Detection Platform project in July 2020 to provide technical services for regulatory links, including pre-marketing verification and confirmation, post-marketing regulation, clinical in-use quality control, and product change evaluation [18]. The project has made initial progress on data and software interfaces, system architecture, and testing methods, and has developed and integrated major modules, including data upload, statistical analysis, data annotation, and test set extraction, taking medical imaging applications in the Digital Imaging and Communications in Medicine (DICOM) format as an entry point, centered on data, algorithms, and computing power.
The Artificial Intelligence Medical Device Innovation and Cooperation Platform has constructed a public service platform for medical artificial intelligence measurement with the aim of developing an AI standards database of routine fundus color photography for diabetic retinopathy.

4 AI-based SaMD regulatory issues and challenges in China

4.1 Shortcomings of AI-based SaMD regulatory systems

After being put on the market, AI-based SaMDs are subject to frequent algorithm changes and unpredictable algorithm models, requiring greater real-time performance and flexibility than traditional regulatory models provide. Post-marketing regulation based on the flexible AI/ML algorithm change mode deserves further study. However, given that the regulatory frameworks proposed by the FDA and the NMPA do not currently address the regulation of adaptive algorithm-driven AI-based SaMDs, the mechanism and rationale of a "black-box" algorithm before and after an update cannot be clearly explained. This inherent high-risk factor is difficult for medical professionals to accept. Although current interpretation methods based on deep Taylor decomposition and layer-wise relevance propagation can identify which input data (features) play a decisive role in the algorithm [19], they may only be valid for interpretability in data-driven AI-based SaMDs; their real-world performance requires further examination. Algorithmic interpretability is a common problem for AI-based SaMDs in regulatory science worldwide. Given the difficulty of researching and developing algorithm model interpreters for the medical AI field, no mature results have been reported in China.
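As a rough illustration of the layer-wise relevance propagation (LRP) idea mentioned above, the following sketch applies the LRP-ε rule to a toy two-layer ReLU network, redistributing the network's output score back onto its input features. The network, weights, and inputs are invented for demonstration and are not taken from reference [19] or any approved device.

```python
import numpy as np

def lrp_linear(a, W, R_out, eps=1e-9):
    """LRP-epsilon rule for one linear layer: redistribute the output
    relevance R_out onto the inputs a in proportion to each input's
    contribution a_i * W_ij to the pre-activation z_j."""
    z = a @ W                               # pre-activations
    z = np.where(z >= 0, z + eps, z - eps)  # stabilize denominators near zero
    s = R_out / z                           # relevance per unit of activation
    return a * (W @ s)                      # relevance assigned to the inputs

# Toy two-layer ReLU network; weights are invented for illustration.
x  = np.array([1.0, 2.0, 0.5])                        # input "features"
W1 = np.array([[1.0, -1.0], [0.5, 0.5], [1.0, 1.0]])  # input -> hidden
W2 = np.array([[1.0], [2.0]])                         # hidden -> output

h = np.maximum(x @ W1, 0.0)  # hidden layer (ReLU)
y = h @ W2                   # network output score

R_h = lrp_linear(h, W2, y)   # relevance of hidden units
R_x = lrp_linear(x, W1, R_h) # relevance of input features

# Conservation: input relevances sum (approximately) to the output score,
# exposing which features drove the decision and in which direction.
print(R_x, R_x.sum())
```

For these toy weights the second feature carries most of the positive relevance while the first contributes negatively, which is the kind of per-feature attribution such interpreters provide; as noted above, whether such attributions remain trustworthy under real clinical inputs is exactly the open question.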

4.2 Lack of AI-based SaMD regulatory support environment

Most AI-based SaMD standards in China are at the research stage. The lack of standards is a severe problem, compounded by long standards development cycles and other sources of uncertainty. AI-based SaMDs in China lack clear, unified, comprehensive, and detailed post-marketing assessment/review criteria for regulatory support, and the lack of review/approval rules hinders the rapid and accurate implementation of regulation. At present, China has not established an evaluation system for the technical performance and application effects of AI-based SaMD products; only a few researchers have proposed an evaluation framework for the application effects of medical AI products at the research level [20]. In addition, technical safety specification standards and quality evaluation systems for AI-based SaMDs have not been established.

In terms of high-quality data, post-marketing regulation lacks the support of large datasets. The lack of access to sufficient test data for emerging AI-based SaMDs is a major limitation in predicting the performance of algorithmic models. Large standard datasets, test sets, and validation sets are essential for the iteration and validation of post-marketing AI-based SaMDs, yet gold-standard reference datasets are severely lacking in AI healthcare. Because access to health data is largely constrained by laws and privacy protection, as well as by levels of informatization and interconnection, there are few large standard test datasets in China, resulting in regulatory constraints such as difficulties in post-marketing clinical evaluation. Although AI-based SaMDs are extensively reported in terms of model accuracy, evaluation data from actual clinical settings, including clinical effectiveness, cost-effectiveness, and safety assessment data, are lacking. Real-world post-marketing software data are difficult to collect and use, making the regulation of software iterations difficult and greatly increasing the potential risk of using AI-based SaMDs. This is particularly detrimental to the development of AI-based SaMDs based on adaptive algorithms.

5 Suggestions

5.1 Systematic improvement of AI-based SaMD regulatory systems

Introducing a regulatory sandbox mechanism for the complete-lifecycle regulation of AI-based SaMDs is recommended to systematically improve the AI-based SaMD regulatory system. We also recommend accelerating improvement of post-marketing regulatory mechanisms for AI-based SaMDs, strengthening regulation of products' expected post-marketing application scenarios, evaluating application effects, researching the monitoring and early warning of adverse reactions based on real-world data, and establishing a risk warning mechanism for high-risk AI-based SaMDs. It is also advisable to establish a forward-looking regulatory framework for AI-based SaMDs based on the Key Points for the Review of Deep Learning-Aided Decision-Making Medical Device Software and to construct a widely applicable safety and effectiveness regulatory mechanism, built on the complete-lifecycle regulatory process, quality control system, clinical evaluation/tests, post-marketing traceability, and re-examination mechanism, that absorbs other foreseeable technical branches of AI system technologies.

5.2 Deepening of AI-based SaMD regulatory support systems

Accelerating the development of national and industry standards for AI-based SaMDs based on the AI medical device standards system framework, and introducing or adapting international quality, safety, and management standards for AI-based SaMDs, are recommended. We propose a collaborative mechanism for constructing an AI-based SaMD standards system with the international community: jointly building AI-based SaMD standards in international and domestic contexts, facilitating the worldwide harmonization of AI-based SaMD standards, and developing a global standardized monitoring plan for medical AI.

Accelerating AI-based SaMD research based on real-world data is recommended. The construction of functionalized and standardized test dataset clusters should be accelerated for data-driven AI-based SaMDs. We recommend deeper research on the post-marketing regulation of AI-based SaMDs based on real-world data, accelerated construction of real-world data acquisition, analysis, and research tool systems, and data interconnection across trial design, clinical performance, workflow, and data management. We also advise developing a complete-lifecycle monitoring and evaluation system for AI-based SaMDs with adaptive algorithms, constructing a post-marketing safety control mechanism through the real-time monitoring of adaptive algorithms' tracking performance, and increasing the accessibility of algorithm tracking through digital and visual means.

Compliance with ethics guidelines

The authors declare that they have no conflict of interest or financial conflicts to disclose.

References

[1] IMDRF SaMD Working Group. Software as a Medical Device (SaMD): Key definitions [C]. Washington, D.C.: IMDRF, 2013.
[2] Matheny M E, Whicher D, Israni S T. Artificial intelligence in health care: A report from the National Academy of Medicine [J]. The Journal of the American Medical Association, 2020, 323: 509–510.
[3] Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: An online database [J]. npj Digital Medicine, 2020, 3(1): 118.
[4] European Commission. ATI – Artificial intelligence-based software as a medical device [R]. Brussels: European Commission, 2020.
[5] FDA. Proposed regulatory framework for modifications to Artificial Intelligence/Machine Learning (AI/ML)-based Software as a Medical Device (SaMD) [R]. Silver Spring: FDA, 2019.
[6] IMDRF. Software as a Medical Device: Possible framework for risk categorization and corresponding considerations [C]. Washington, D.C.: IMDRF, 2014.
[7] Abràmoff M D, Lavin P T, Birch M, et al. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices [J]. npj Digital Medicine, 2018, 1: 39.
[8] Yan S, Xu D Z, Ouyang Z L. Analysis on the regulation and application of United States intelligent medical devices [J]. China Medical Devices, 2021, 36(2): 117–122. Chinese.
[9] FDA. How is the FDA considering regulation of artificial intelligence and machine learning medical devices? [R/OL]. (2021-09-21)[2021-11-10].
[10] National Medical Products Administration. Guiding principles for the classification and definition of artificial intelligence medical software products [R]. Beijing: National Medical Products Administration, 2021. Chinese.
[11] National Medical Products Administration. Scientific regulatory research has steadily made fruitful results in key projects [N]. China Medical News, 2021-04-15(002). Chinese.
[12] Zhao F, Gao M X, Liu P, et al. Statistics and analysis of 136 cases of adverse events of international medical devices [J]. Chinese Journal of Medical Instrumentation, 2020, 44(2): 166–171. Chinese.
[13] Cai X S, Lyu H, Yu G J. Study on audit guidelines of US FDA medical artificial intelligence software [J]. China Digital Medicine, 2019, 14(11): 34–37, 33. Chinese.
[14] Ministry of Health, Labour and Welfare (MHLW). Update on medical device and IVD regulation in Japan [R/OL]. (2021-03-20)[2021-11-10].
[15] FDA. Artificial Intelligence/Machine Learning (AI/ML)-based Software as a Medical Device (SaMD) action plan [R/OL]. (2021-01-15)[2021-11-10]. https://www.fda.gov/media/145022/download.
[16] ITU. Whitepaper for the ITU/WHO focus group on Artificial Intelligence for health [EB/OL]. (2020-10-13)[2021-11-10].
[17] AI4EU. Verifiable and explainable risk forecasting Artificial Intelligence framework [EB/OL]. (2021-11-26)[2021-11-28].
[18] National Institutes for Food and Drug Control. The kick-off meeting of the "Development and application demonstration of the full life cycle testing platform for medical Artificial Intelligence products" was held online [EB/OL]. (2020-08-03)[2021-11-10].
[19] Wang Z F, Huang X L, Yang J, et al. Universal adversarial perturbation generated by attacking layer-wise relevance propagation [C]. New York: IEEE, 2020: 431–436.
[20] Wang S Q, Li Y S, Su M L, et al. Medical artificial intelligence product application evaluation framework and process research [J]. Chinese Medical Equipment Journal, 2020, 41(1): 62–65. Chinese.

Funding

Chinese Academy of Engineering project "Research on the Application of Artificial Intelligence in the Field of Medicine and Health" (2019-ZD-06)
