I have built the following LSTM model, but it is very slow on a large dataset. Is it possible to replace the LSTM layers with CuDNNLSTM to take advantage of the GPU?
Here is my LSTM code:
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

def get_lstm_model():
    lstm_model = Sequential()
    lstm_model.add(LSTM(300, dropout=0.4, recurrent_dropout=0.4, input_shape=[1, 300], return_sequences=True))
    lstm_model.add(LSTM(64, recurrent_dropout=0.4))
    lstm_model.add(Dropout(0.5))
    lstm_model.add(Dense(1, activation='relu'))
    lstm_model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['mae'])
    lstm_model.summary()
    return lstm_model
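For context, one possible direction (an assumption about the setup, not a confirmed answer): the standalone CuDNNLSTM layer belongs to TF 1.x-era Keras, while in TensorFlow 2.x the regular tf.keras LSTM layer automatically dispatches to the fused cuDNN kernel on a GPU, provided its arguments satisfy cuDNN's constraints, notably recurrent_dropout=0 and default activations. A sketch of the same model rewritten to stay cuDNN-compatible might look like this (the function name is hypothetical):

    # Sketch (assumption): dropping recurrent_dropout so that, on a GPU with
    # TF 2.x, the LSTM layers can select the fast cuDNN implementation.
    # Regularization is kept via separate Dropout layers instead.
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense, Dropout

    def get_cudnn_friendly_lstm_model():
        model = Sequential()
        model.add(LSTM(300, input_shape=(1, 300), return_sequences=True))
        model.add(Dropout(0.4))   # replaces recurrent_dropout=0.4
        model.add(LSTM(64))
        model.add(Dropout(0.5))
        model.add(Dense(1, activation='relu'))
        model.compile(loss='mean_squared_error', optimizer='rmsprop',
                      metrics=['mae'])
        return model

The model builds and trains on CPU as well; the cuDNN kernel is simply chosen automatically when a GPU is available.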
question from: https://stackoverflow.com/questions/65903731/how-to-replace-lstm-with-cudnnlstm