Explainable AI for mortality prediction: a comparative study using the MIMIC-III dataset

Objectives

Predicting mortality is vital for tailoring treatments, improving care and reducing costs. Machine learning (ML) has shown strong potential, often outperforming traditional severity-of-illness scoring systems in intensive care units (ICUs). However, the black-box nature of ML limits adoption. This study evaluates the accuracy of several ML algorithms on the Medical Information Mart for Intensive Care III (MIMIC-III) dataset and applies explainable artificial intelligence to identify variables influencing ICU mortality.

Methods

A retrospective cohort of 600 MIMIC-III patient records was analysed. ML algorithms tested included support vector machine (SVM), K-nearest neighbour (KNN), decision trees (DT), gradient boosting (GB), random forests (RF), naive Bayes (NB), logistic regression (LR) and extra trees (ET). Models were assessed with threefold cross-validation using F1 score, sensitivity, specificity, confusion matrix and accuracy. SHapley Additive exPlanations (SHAP) was applied to explain key factors influencing predictions.
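The evaluation protocol above can be sketched as follows. This is a minimal illustration using synthetic data, not the actual 600-record MIMIC-III cohort; model hyperparameters are scikit-learn defaults, which the study may not have used.

```python
# Sketch of the study's protocol: eight classifiers compared with
# threefold cross-validation. Synthetic data stands in for MIMIC-III.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# 600 samples mirrors the cohort size; features here are synthetic
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

models = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "NB": GaussianNB(),
    "LR": LogisticRegression(max_iter=1000),
    "ET": ExtraTreesClassifier(random_state=0),
}

scores = {}
for name, model in models.items():
    # threefold CV on F1, as in the paper; accuracy, sensitivity and
    # specificity would be scored the same way with other metrics
    scores[name] = cross_val_score(model, X, y, cv=3, scoring="f1").mean()

for name, f1 in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: mean F1 = {f1:.3f}")
```

Scores from this sketch will not match the paper's results, since both the data and the tuning differ.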

Results

Among the eight algorithms evaluated, ET and GB achieved the highest accuracy (98.33% and 98.23%, respectively) with F1-scores above 96%. SVM also performed strongly (97.50% accuracy, F1=94.34%). RF and DT yielded accuracies of 95.00% and 94.17%, respectively. NB reached 96.67% accuracy with 100% recall, while KNN and LR showed lower discriminative performance (76.67% and 75.83% accuracy, respectively).

SHAP, LIME and Shapley values consistently identified hypertension, tumours, and endocrine and digestive diseases as the leading predictors of mortality.
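A feature-attribution analysis of this kind can be sketched with permutation importance, a model-agnostic stand-in for SHAP/LIME that ranks features by how much shuffling each one degrades model performance. The feature names below are illustrative labels on synthetic data, not the study's MIMIC-III variables, and the resulting ranking is arbitrary.

```python
# Model-agnostic feature ranking: permutation importance as a
# lightweight proxy for the SHAP/LIME attributions used in the study.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.inspection import permutation_importance

# hypothetical comorbidity labels for illustration only
feature_names = ["hypertension", "tumour", "endocrine_disease",
                 "digestive_disease", "age"]
X, y = make_classification(n_samples=600, n_features=5,
                           n_informative=3, random_state=0)

# ET was the top performer in the study, so it anchors this sketch
model = ExtraTreesClassifier(random_state=0).fit(X, y)

# mean accuracy drop over 10 shuffles of each feature column
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

ranking = sorted(zip(feature_names, result.importances_mean),
                 key=lambda kv: -kv[1])
for name, imp in ranking:
    print(f"{name}: importance = {imp:.3f}")
```

Unlike SHAP, permutation importance gives only a global ranking, not per-patient attributions; the study's per-prediction explanations require SHAP values themselves.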

Discussion

Findings highlight ML’s potential to optimise ICU decision-making and support clinicians. However, reliance on retrospective data remains a limitation, and clinical validation is required.

Conclusion

ML algorithms are highly effective for mortality prediction, and explainability is key for trust and adoption. When combined, accuracy and interpretability enable ML to safely support informed ICU decisions and improve patient outcomes.

Shafiabady, N., Akume, D., Haghighat, M., Ud Din, F., Sattarshetty, K., Karim, A., Zhou, J., Alsharaydeh, E.
