RNN Under the Lens: Attention, Confidence, and Feature Importance
Abstract
Recurrent Neural Networks (RNNs) have enjoyed considerable success in areas such as natural language processing, time-series forecasting, and sentiment analysis. Nevertheless, these models often offer little insight into which parts of the input sequence drive their predictions, how confident they are in those predictions, and which input features play a pivotal role. To address these gaps, we propose a unified framework that augments a baseline RNN with an attention mechanism, a confidence score module, and a feature importance estimation procedure. We evaluate our approach on the IMDB movie review dataset for sentiment classification. Empirical results show that our model not only outperforms a vanilla RNN in accuracy but also produces interpretable outputs at multiple levels. We argue that such a multi-faceted approach can be readily adapted to diverse sequence-related tasks in need of greater transparency and robustness.
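To make the proposed augmentation concrete, the following is a minimal sketch of a baseline RNN extended with additive attention and a softmax-based confidence score, written in PyTorch. It is not the paper's exact implementation: the choice of an LSTM cell, all module names, the dimensions, and the use of the maximum class probability as the confidence score are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveRNN(nn.Module):
    """Illustrative sketch: baseline RNN + additive attention + confidence score."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # An LSTM stands in for the "baseline RNN" here (an assumption).
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)             # scores each time step
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        h, _ = self.rnn(self.embedding(token_ids))       # (batch, time, hidden)
        # Attention weights over time steps: which inputs drive the prediction.
        weights = F.softmax(self.attn(h).squeeze(-1), dim=1)        # (batch, time)
        context = torch.bmm(weights.unsqueeze(1), h).squeeze(1)     # (batch, hidden)
        logits = self.classifier(context)
        # A simple confidence proxy: the maximum softmax probability.
        confidence = F.softmax(logits, dim=-1).max(dim=-1).values   # (batch,)
        return logits, weights, confidence

# Example usage on random token ids (batch of 4 sequences of length 50).
model = AttentiveRNN(vocab_size=20000)
logits, attn_weights, confidence = model(torch.randint(0, 20000, (4, 50)))
```

Under these assumptions, the returned attention weights expose which time steps contributed most to the prediction, while the confidence score flags low-certainty outputs; a feature importance estimate could then be obtained, for example, from input gradients, though the paper's specific procedure is not shown here.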