Abstract
Objective.
Artificial intelligence (AI) can enable automation, improve treatment accuracy, allow for a more efficient workflow, and improve the cost-effectiveness of radiotherapy (RT). To implement AI in RT, clinicians have expressed a desire to understand the AI outputs. Explainable AI (XAI) methods have been put forward as a solution, but the multidisciplinary nature of RT complicates the application of trustworthy and understandable XAI methods. The objective of this review is to analyze XAI in the RT landscape and understand how XAI can best support the diverse user groups in RT by exploring challenges and opportunities with a critical lens.
Approach.
We performed a review of XAI in RT, evaluating how explanations are built, validated, and embedded across the RT workflow, with attention to XAI purposes, evaluation and validation, interpretability trade-offs, and RT's multidisciplinary context.
Main results.
XAI in RT serves five purposes: (1) knowledge discovery, (2) model verification, (3) model improvement, (4) clinical verification, and (5) clinical justification/actionability. Many studies favor interpretability but neglect fidelity and seldom include user-specific evaluation. Key challenges include stakeholder diversity, evaluation of XAI, cognitive bias, and causality; we also outline opportunities.
Significance.
By linking XAI purposes to RT tasks and highlighting challenges and opportunities, we provide actionable recommendations and a user-centric framework to guide the development, validation, and deployment of XAI in RT.
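The abstract's point that studies "favor interpretability but neglect fidelity" can be made concrete: fidelity is commonly measured as the agreement between a black-box model and an interpretable surrogate trained to mimic it. The sketch below is purely illustrative and not from the review itself; the data are synthetic stand-ins for dosimetric features, and the model choices (random forest as the black box, shallow decision tree as the surrogate) are assumptions for demonstration only.

```python
# Illustrative sketch: fidelity of a surrogate explanation, i.e. how often an
# interpretable surrogate reproduces the black-box model's predictions.
# All data and model choices here are synthetic/hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                  # synthetic stand-in features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary outcome

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The surrogate is fit to the black box's *predictions*, not the true labels:
# a high-fidelity surrogate explains the model, not the underlying task.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of samples where surrogate and black box agree.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

A shallow tree is easy to read (high interpretability) but may agree with the black box on only part of the input space (low fidelity); reporting only the former, as many of the reviewed studies do, leaves the explanation unvalidated.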
| Original language | English |
|---|---|
| Article number | 03TR01 |
| Number of pages | 31 |
| Journal | Physics in Medicine and Biology |
| Volume | 71 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 14 Feb 2026 |
Keywords
- Black box models
- Clinical implementation
- Deep learning
- Explainable artificial intelligence
- Radiotherapy
Maximizing impact of explainable artificial intelligence in radiotherapy: a critical review