Risk Analysis and Response Strategies of Large Language Models for Security Governance

Kun Jia, Yuxin Zhang, Jiyun Chen, Jiayin Qi, Binxing Fang

Strategic Study of CAE: 1-16. DOI: 10.15302/J-SSCAE-2025.06.016

Research article
Abstract

To address the challenges of a fragmented understanding of Large Language Model (LLM) security risks and the inadequacy of existing LLM risk classification and grading frameworks, this study constructs a comprehensive framework that integrates risk mechanism analysis, quantitative assessment, and governance practices. Theoretically, the study synthesizes and reconstructs multiple foundational theories, including socio-technical systems theory, social systems theory, and safety science, to reveal that risks originate from a dual trigger mechanism of the model's "internal complexity" and "external interaction." It consequently dissects risks into two primary dimensions—"internal safety" and "application security"—providing a unified theoretical foundation for a systematic governance framework. Methodologically, the study introduces "Risk Label Cards" as a standardized tool and employs an "Artificial Intelligence + Human Expert Collaboration" approach to structurally analyze real-world security incidents. Combined with an improved DREAD (damage, reproducibility, exploitability, affected users, discoverability) risk matrix model, it establishes a complete assessment methodology that spans from qualitative identification to quantitative grading. The research culminates in a systematic risk classification system and a three-tiered (high, medium, low) risk landscape covering the major risk types. The "dual-dimensional driven" risk analysis and governance framework constructed in this study provides a systematic theoretical tool for the precise assessment and governance of LLM risks, effectively bridging the "theory-practice gap" in governance. Furthermore, with its theoretical compatibility and dynamic characteristics, the framework offers a reference for continuously tracking the evolution of LLM security risks and for security policy research.
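The abstract describes grading risks with an improved DREAD matrix: each risk is scored on the five DREAD dimensions and the aggregate score is mapped to a high/medium/low tier. As a minimal illustrative sketch (the paper's actual scoring scales, weights, and tier thresholds are not given here; the 0-10 scale and cutoffs below are assumptions), the basic mechanics look like this:

```python
from dataclasses import dataclass


@dataclass
class DreadScore:
    """One risk scored on the five DREAD dimensions (0-10 each, assumed scale)."""
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def total(self) -> int:
        # Unweighted sum (0-50); the paper's improved model may weight dimensions.
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability)


def grade(score: DreadScore) -> str:
    """Map a total score to a three-tier grade. Thresholds are illustrative."""
    t = score.total()
    if t >= 35:
        return "high"
    if t >= 20:
        return "medium"
    return "low"


# Hypothetical example: a prompt-injection incident that is easy to
# reproduce and exploit but affects a moderate user population.
risk = DreadScore(damage=7, reproducibility=8, exploitability=9,
                  affected_users=6, discoverability=8)
print(risk.total(), grade(risk))  # 38 high
```

A real assessment pipeline would populate such scores from the structured "Risk Label Cards" the study proposes, rather than by hand.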

Keywords

large language model / security risk / security governance / risk assessment / classification and grading / risk landscape

Cite this article

Kun Jia, Yuxin Zhang, Jiyun Chen, Jiayin Qi, Binxing Fang. Risk Analysis and Response Strategies of Large Language Models for Security Governance. Strategic Study of CAE: 1-16. DOI: 10.15302/J-SSCAE-2025.06.016



Funding

Funding projects: Chinese Academy of Engineering project "Security and Compliance Regulatory Strategies for Artificial Intelligence Large Language Models in Guangdong Province" (2025-XZ-08)

"Research on the National Guardrails and Governance Framework for Large Model Regulation" (2024-GD-04)

Major Project of Philosophy and Social Sciences Research of the Ministry of Education (24JZD040)

National Natural Science Foundation of China project (72293583)
