Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks

Rahul Dey and Fathi M. Salem
The paper evaluates three variants of the Gated Recurrent Unit (GRU) in recurrent neural networks (RNNs), obtained by reducing the number of parameters in the update and reset gates. The variants, named GRU1, GRU2, and GRU3, are compared against the original GRU model on the MNIST and IMDB datasets, with the aim of reducing computational expense while maintaining performance. The results show that GRU1 and GRU2 perform comparably to the original GRU, while GRU3 initially lags but can reach comparable performance with fewer parameters. The study highlights the trade-off between parameter reduction and task performance, suggesting that GRU3 may be better suited to applications with limited computational resources. The experiments also indicate that the primary driving signal for the gates is the recurrent state, which already carries the essential information about the network's internal state.
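To make the parameter reductions concrete, below is a minimal NumPy sketch of a single GRU step under each gating scheme. It assumes the standard GRU equations (update gate z, reset gate r, candidate state h̃) and strips the gate inputs as the summary describes: GRU1 drops the input-to-gate weights, GRU2 additionally drops the gate biases, and GRU3 keeps only the biases. The parameter names (Wz, Uz, bz, ...) and the exact update convention are illustrative, not the paper's notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params, variant="gru0"):
    """One GRU step; `variant` selects how the update/reset gates are formed.

    gru0: z = sigmoid(Wz x + Uz h + bz)   (original GRU)
    gru1: z = sigmoid(Uz h + bz)          (drop input weights)
    gru2: z = sigmoid(Uz h)               (drop input weights and bias)
    gru3: z = sigmoid(bz)                 (bias only)
    The reset gate r is reduced the same way. Sign convention is one common
    choice; the paper's exact formulation may differ in details.
    """
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    if variant == "gru0":
        z = sigmoid(Wz @ x + Uz @ h_prev + bz)
        r = sigmoid(Wr @ x + Ur @ h_prev + br)
    elif variant == "gru1":
        z = sigmoid(Uz @ h_prev + bz)
        r = sigmoid(Ur @ h_prev + br)
    elif variant == "gru2":
        z = sigmoid(Uz @ h_prev)
        r = sigmoid(Ur @ h_prev)
    else:  # "gru3"
        z = sigmoid(bz)
        r = sigmoid(br)
    # Candidate state and convex-combination update are shared by all variants.
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)
    return (1.0 - z) * h_prev + z * h_tilde

# Toy dimensions: input size 4, hidden size 3.
rng = np.random.default_rng(0)
n_in, n_h = 4, 3
params = (rng.standard_normal((n_h, n_in)), rng.standard_normal((n_h, n_h)), np.zeros(n_h),
          rng.standard_normal((n_h, n_in)), rng.standard_normal((n_h, n_h)), np.zeros(n_h),
          rng.standard_normal((n_h, n_in)), rng.standard_normal((n_h, n_h)), np.zeros(n_h))
x, h = rng.standard_normal(n_in), np.zeros(n_h)
for v in ("gru0", "gru1", "gru2", "gru3"):
    print(v, gru_step(x, h, params, variant=v))
```

Counting parameters per gate makes the savings visible: the full gate costs n_h * (n_in + n_h) + n_h weights, GRU1 costs n_h * n_h + n_h, GRU2 costs n_h * n_h, and GRU3 costs only n_h, which is why GRU3 is the cheapest but the slowest to converge.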