26 Feb 2017 | Edward Choi*, Mohammad Taha Bahadori*, Joshua A. Kulas*, Andy Schuetz†, Walter F. Stewart†, Jimeng Sun*
The paper introduces RETAIN, a two-level neural attention model for healthcare that achieves high predictive accuracy while remaining clinically interpretable. Designed for Electronic Health Records (EHR) data, RETAIN mimics physician behavior by attending to a patient's record in reverse time order, so that recent clinical visits are prioritized. Its reverse-time attention mechanism identifies the influential past visits and the significant clinical variables within them, making its predictions interpretable.

The model uses two RNNs to generate attention weights at two levels: visit-level weights that score entire encounters, and variable-level weights that score individual clinical codes within each visit. These weights combine the visit embeddings into a context vector from which the outcome is predicted, allowing the model to focus on relevant clinical information and supporting detailed, per-variable interpretation of each prediction.

RETAIN was evaluated on heart failure prediction and encounter sequence modeling using a large health system EHR dataset of 14 million visits from 263,000 patients over 8 years. It achieved accuracy comparable to state-of-the-art methods such as standard RNNs while offering interpretability superior to traditional models. The paper also discusses why interpretability matters in healthcare and the difficulty of interpreting complex models like RNNs; RETAIN addresses these challenges by providing a transparent, interpretable model suitable for clinical settings.
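The two-level attention described above can be sketched numerically. The sketch below is a minimal NumPy illustration, not the paper's implementation: the reverse-order RNN hidden states are stood in for by random matrices, and all weight names (`w_alpha`, `W_beta`, etc.) are assumptions chosen to mirror the paper's notation. It shows how the scalar visit-level weights and vector variable-level weights combine visit embeddings into a context vector, and why each (visit, variable) pair's contribution to the prediction can be read off directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy dimensions: T visits, m embedded features per visit.
T, m = 5, 4

# v[i] = embedded representation of visit i (W_emb @ x_i in the paper).
v = rng.normal(size=(T, m))

# In RETAIN, two RNNs run over the visits in *reverse* time order; here
# their hidden states are simply stood in for by random values.
g = rng.normal(size=(T, m))   # states feeding visit-level attention
h = rng.normal(size=(T, m))   # states feeding variable-level attention

# Visit-level (scalar) attention: alpha = softmax over visits.
w_alpha = rng.normal(size=m)
alpha = softmax(g @ w_alpha)            # shape (T,), sums to 1

# Variable-level (vector) attention: beta_i = tanh(W_beta @ h_i).
W_beta = rng.normal(size=(m, m))
beta = np.tanh(h @ W_beta.T)            # shape (T, m)

# Context vector: c = sum_i alpha_i * (beta_i ⊙ v_i).
c = (alpha[:, None] * beta * v).sum(axis=0)

# Prediction: y_hat = sigmoid(w^T c + b).
w, b = rng.normal(size=m), 0.0
y_hat = 1.0 / (1.0 + np.exp(-(w @ c + b)))

# Interpretability: the logit decomposes exactly into per-(visit,
# variable) contributions alpha_i * w ⊙ beta_i ⊙ v_i.
contrib = alpha[:, None] * beta * v * w
print(y_hat, contrib.shape)
```

The exact additive decomposition in `contrib` is what distinguishes RETAIN from a plain RNN: summing the contribution matrix recovers the prediction logit, so each past visit and clinical variable can be ranked by its influence on the outcome.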