Objective
The study aims to develop an artificial intelligence (AI) framework that stages pressure injuries (PIs) directly from raw clinical images, without requiring manual lesion localisation. By combining object detection with image classification in a two-stage deep learning pipeline, the proposed system seeks to improve diagnostic accuracy, reduce subjectivity and support consistent staging in real-world clinical workflows.
Methods
We conducted a retrospective study using 1807 PI images collected at China Medical University Hospital between 2020 and 2024. Lesions were annotated by five senior nurses, with disagreements resolved by consensus. YOLOv9 was used for lesion detection and DenseNet161 for staging. Performance metrics included accuracy, sensitivity, specificity, precision, F1 score, area under the curve (AUC) and mean average precision at an intersection-over-union threshold of 0.5 (mAP@0.5).
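For multi-class staging, the classification metrics above are typically derived one-vs-rest from a confusion matrix and then averaged across stages. A minimal sketch of that convention follows; the exact averaging scheme used in the study is an assumption here, and the function name is illustrative:

```python
def staging_metrics(cm):
    """Macro-averaged metrics from a confusion matrix.

    cm[i][j] = number of images with true stage i predicted as stage j.
    Sensitivity, specificity, precision and F1 are computed one-vs-rest
    per stage and macro-averaged (an assumed, common convention).
    """
    n = len(cm)
    total = sum(sum(row) for row in cm)
    tp = [cm[i][i] for i in range(n)]
    fn = [sum(cm[i]) - cm[i][i] for i in range(n)]            # missed cases per true stage
    fp = [sum(cm[r][i] for r in range(n)) - cm[i][i] for i in range(n)]
    tn = [total - tp[i] - fn[i] - fp[i] for i in range(n)]

    sens = [tp[i] / (tp[i] + fn[i]) for i in range(n)]
    spec = [tn[i] / (tn[i] + fp[i]) for i in range(n)]
    prec = [tp[i] / (tp[i] + fp[i]) for i in range(n)]
    f1 = [2 * prec[i] * sens[i] / (prec[i] + sens[i]) for i in range(n)]

    mean = lambda xs: sum(xs) / len(xs)
    return {
        "accuracy": sum(tp) / total,
        "sensitivity": mean(sens),
        "specificity": mean(spec),
        "precision": mean(prec),
        "f1": mean(f1),
    }
```

For example, a three-stage confusion matrix `[[8, 2, 0], [1, 9, 0], [0, 1, 9]]` yields an accuracy of 26/30 with macro-averaged per-stage sensitivities of 0.8, 0.9 and 0.9.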
Results
The object detection model (YOLOv9) achieved an mAP@0.5 of 0.796. On the independent test set (n=365), the staging model demonstrated an overall accuracy of 0.775, sensitivity of 0.775, specificity of 0.955, precision of 0.779 and F1 score of 0.775.
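The mAP@0.5 figure counts a predicted lesion box as correct only when its intersection over union (IoU) with the ground-truth box is at least 0.5. A minimal IoU sketch, assuming corner-format `(x1, y1, x2, y2)` boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Corners of the overlap region (empty if the boxes do not intersect).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Two boxes that half-overlap (e.g. `(0, 0, 10, 10)` and `(5, 0, 15, 10)`) have an IoU of 1/3, so such a detection would not count as a match at the 0.5 threshold.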
Discussion
The proposed two-stage AI framework, which uses YOLOv9 for lesion localisation and DenseNet161 for staging, performed comparably to or better than previously reported approaches while offering improved clinical interpretability on real-world images. Separating detection from classification reduced background noise and improved discrimination between visually similar PI stages. Challenges remain, however, in handling intra-wound heterogeneity, subtle early-stage boundaries and variable image quality.
Conclusions
This two-stage AI system may assist clinical staff by providing standardised, reproducible PI staging from clinical images, with potential for integration into nursing workflows.