In `class BahdanauAttnDecoderRNN(nn.Module)`, the GRU is constructed as `self.gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=dropout_p)`, so it expects inputs with `hidden_size` features.
However, the input actually fed to it is `rnn_input = torch.cat((word_embedded, context), 2)`, whose feature dimension is `2 * hidden_size`.
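A minimal sketch that reproduces the mismatch (the shapes, sizes, and variable values here are assumptions for illustration; the surrounding decoder code is not shown in the issue):

```python
import torch
import torch.nn as nn

hidden_size, n_layers, dropout_p, batch_size = 256, 2, 0.1, 4

# GRU constructed as in BahdanauAttnDecoderRNN: input_size == hidden_size
gru = nn.GRU(hidden_size, hidden_size, n_layers, dropout=dropout_p)

# Assumed shapes: both tensors carry hidden_size features on the last dimension
word_embedded = torch.randn(1, batch_size, hidden_size)  # (1, batch, hidden_size)
context = torch.randn(1, batch_size, hidden_size)        # (1, batch, hidden_size)

# Concatenating on dim 2 doubles the feature dimension to 2 * hidden_size
rnn_input = torch.cat((word_embedded, context), 2)       # (1, batch, 2 * hidden_size)

hidden = torch.zeros(n_layers, batch_size, hidden_size)
try:
    gru(rnn_input, hidden)
except RuntimeError as e:
    # e.g. "input.size(-1) must be equal to input_size. Expected 256, got 512"
    print(e)
```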
For reference, the parameters of `nn.GRU` from the PyTorch documentation:

input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1
bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
dropout – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0
bidirectional – If True, becomes a bidirectional GRU. Default: False
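Since `input_size` (the first constructor argument above) only sets the expected feature dimension of the input, one possible fix is to declare the GRU with an input size of `2 * hidden_size` so it matches the concatenated embedding and context. A sketch, not necessarily the fix the author intended:

```python
# Expect 2 * hidden_size input features (word_embedded concatenated with context),
# while still producing hidden_size features per step
self.gru = nn.GRU(hidden_size * 2, hidden_size, n_layers, dropout=dropout_p)
```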