Research and application progress on the interpretability of medical artificial intelligence

Guangxi Medical Journal, pp. 1088-1098

Author: Zhang Huiyong, doctoral candidate and intermediate-level engineer; research focus: medical artificial intelligence.

Funding: General Program of the Natural Science Foundation of Guangxi Zhuang Autonomous Region (Gui Ke Fa [2025] No. 69); Guangxi Science Fund for Distinguished Young Scholars (2023GXNSFFA026003)

DOI: 10.11675/j.issn.0253-4304.2025.08.04


The deep integration of artificial intelligence (AI) in healthcare has given rise to innovative paradigms such as precision diagnosis, personalized treatment, and proactive health. However, the opacity of AI decision-making (often termed the “black box” effect) can trigger clinical trust crises and regulatory compliance challenges. Explainable artificial intelligence (XAI), by unveiling the logic behind model decisions, has emerged as a pivotal approach to reconciling the tension between AI's technical potential and its application bottlenecks. XAI is evolving from an auxiliary explanatory tool into a systematic solution, poised to shift medical AI from “outcome delivery” to “process transparency” and laying the foundation for a trustworthy intelligent healthcare ecosystem. This paper systematically deconstructs the multidimensional implications of interpretability in medical AI, traces the evolution of key technologies including feature importance analysis, causal inference, and multi-modal fusion, and examines core challenges such as data heterogeneity and accountability demarcation, supported by empirical studies in cutting-edge applications such as medical imaging diagnosis and drug discovery.
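To make the surveyed notion of feature importance analysis concrete, the sketch below computes permutation importance for a stand-in diagnostic classifier. It is a minimal illustration, not a method from the paper: the clinical feature names and the synthetic data are assumptions chosen only to mimic a tabular medical prediction task.

```python
# Minimal sketch of one XAI technique named in the abstract: feature
# importance analysis via permutation importance. All data and feature
# names are synthetic/hypothetical, standing in for a diagnostic model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical clinical features for a binary diagnostic label.
feature_names = ["age", "bmi", "blood_pressure", "glucose", "biomarker_x"]
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature on held-out data and
# measure how much the model's score drops, i.e. how much the model
# relied on that feature for its predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name:>15}: {mean:.3f} +/- {std:.3f}")
```

Permutation importance is model-agnostic, which is why feature-attribution methods of this family are a common entry point for explaining clinical models; attribution approaches such as SHAP follow the same spirit with stronger theoretical grounding.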
