Should the Dropout vs. Recurrent Dropout Arguments Be the Same in Keras?

by Jonathan Bechtel   Last Updated June 30, 2020 04:19 AM

I'm learning about recurrent neural networks right now, and am in chapter 6 of Deep Learning with Python by Francois Chollet.

The chapter discusses using dropout in recurrent layers. I understand the logic behind randomizing the inputs the same way at each timestep, since RNNs are used to learn from sequence data, but I'm having a difficult time parsing some of the finer differences between the dropout and recurrent_dropout arguments you can pass in.

Take this simple example:

keras.layers.GRU(32, dropout=0.2, recurrent_dropout=0.2)

Whenever I see snippets like this on the internet, both dropout arguments are usually set to the same value. Is this a best practice or just convention?

I'm assuming the dropout argument is the fraction of inputs that will be zeroed out coming into the recurrent layer. If that's the case, what's the difference between my example and something like this:

keras.Sequential([
    keras.layers.Dropout(0.2),
    keras.layers.GRU(32, recurrent_dropout=0.2),
])

Thank you for all of your help.

Tags: keras, rnn, dropout


1 Answer


Your two snippets are close, but not strictly equivalent. The dropout argument on the GRU layer applies the same dropout mask to the inputs at every timestep (the behavior Chollet describes in the book), whereas a standalone Dropout layer samples a new mask at each timestep by default. You can make the standalone layer match by giving it a noise_shape that is 1 along the time axis.
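
To illustrate, here is a minimal sketch, assuming TensorFlow 2.x with its bundled Keras; the 16-feature input shape and the layer sizes are just placeholders:

from tensorflow import keras

# Variant A: dropout handled inside the GRU layer. Keras samples one
# input-dropout mask per batch and reuses it at every timestep.
model_a = keras.Sequential([
    keras.Input(shape=(None, 16)),  # (timesteps, features)
    keras.layers.GRU(32, dropout=0.2, recurrent_dropout=0.2),
])

# Variant B: a standalone Dropout layer in front of the GRU. By default it
# samples an independent mask for every timestep. Setting noise_shape to 1
# along the time axis makes the mask constant over time, which mimics
# variant A's input dropout.
model_b = keras.Sequential([
    keras.Input(shape=(None, 16)),
    keras.layers.Dropout(0.2, noise_shape=(None, 1, 16)),
    keras.layers.GRU(32, recurrent_dropout=0.2),
])

Either way, dropout is only active during training; at inference time both variants pass inputs through unchanged.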

By the way, be careful with recurrent dropout: it usually makes things worse. In TensorFlow 2.x a non-zero recurrent_dropout also prevents the layer from using the fast cuDNN kernel, so training slows down considerably.

Jindřich
April 06, 2020 08:00 AM
