RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism

26 Feb 2017 | Edward Choi*, Mohammad Taha Bahadori*, Joshua A. Kulas*, Andy Schuetz†, Walter F. Stewart†, Jimeng Sun*
The paper introduces RETAIN (REverse Time AttentIoN), a predictive model for healthcare applications designed to address the trade-off between accuracy and interpretability in machine learning models. RETAIN is built on a two-level neural attention mechanism that detects influential past visits and the significant clinical variables within those visits, mimicking how physicians review a chart by attending to EHR data in reverse time order. This ordering encourages the model to place higher attention on recent clinical visits, improving interpretability while maintaining high accuracy.

The model was evaluated on a large health system EHR dataset containing 14 million visits from 263,000 patients over an 8-year period. RETAIN achieved predictive accuracy comparable to state-of-the-art methods such as RNNs while offering interpretability superior to traditional models. The paper also presents a method for interpreting RETAIN's end-to-end behavior, showing how it identifies the most meaningful visits and quantifies the visit-specific features that contribute to each prediction. In experiments, RETAIN outperformed traditional machine learning methods and RNN variants in both accuracy and interpretability, particularly on the heart failure prediction task. Its ability to exploit sequence information while producing interpretable predictions makes it a promising tool for clinical applications.

Future work includes developing an interactive visualization system for RETAIN and evaluating its performance in other healthcare contexts.
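To make the two-level attention concrete, below is a minimal PyTorch sketch of the core idea: embed each visit, run two RNNs over the visits in reverse time order, derive scalar visit-level weights (alpha) and vector variable-level weights (beta), and combine them into a context vector for prediction. Layer sizes, class and variable names, and the single end-of-sequence prediction setup are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of RETAIN-style two-level attention (assumed dimensions and names).
import torch
import torch.nn as nn

class RetainSketch(nn.Module):
    def __init__(self, num_codes, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Linear(num_codes, emb_dim, bias=False)          # v_i = W_emb x_i
        self.rnn_alpha = nn.GRU(emb_dim, hidden_dim, batch_first=True)  # visit-level attention RNN
        self.rnn_beta = nn.GRU(emb_dim, hidden_dim, batch_first=True)   # variable-level attention RNN
        self.alpha_fc = nn.Linear(hidden_dim, 1)                        # scalar score per visit
        self.beta_fc = nn.Linear(hidden_dim, emb_dim)                   # per-dimension weight per visit
        self.out = nn.Linear(emb_dim, 1)                                # e.g. heart failure risk

    def forward(self, x):
        # x: (batch, n_visits, num_codes) multi-hot vectors of clinical codes per visit
        v = self.embed(x)                                # (batch, n_visits, emb_dim)
        v_rev = torch.flip(v, dims=[1])                  # attend to visits in reverse time order
        g, _ = self.rnn_alpha(v_rev)
        h, _ = self.rnn_beta(v_rev)
        g = torch.flip(g, dims=[1])                      # restore chronological order
        h = torch.flip(h, dims=[1])
        alpha = torch.softmax(self.alpha_fc(g), dim=1)   # (batch, n_visits, 1): which visits matter
        beta = torch.tanh(self.beta_fc(h))               # (batch, n_visits, emb_dim): which variables matter
        context = torch.sum(alpha * beta * v, dim=1)     # attention-weighted sum of visit embeddings
        return torch.sigmoid(self.out(context))          # risk score in (0, 1)
```

Because alpha, beta, the embedding matrix, and the output layer are all simple linear or elementwise operations, their product can be unrolled to assign a contribution score to each clinical code at each visit, which is the basis of the end-to-end interpretation method described in the paper.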