Using cuDNN with a bidirectional LSTM in PyTorch results in RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

Asked 1 year ago, Updated 1 year ago, 86 views

Run Environment
OS: Windows 10
Python: 3.7.4
NumPy: 1.16.5
PyTorch: 1.3.1

GPU: GeForce GTX 1060
NVIDIA driver: 441.87
CUDA: 10.1
cuDNN: 7.6.5

I will omit the details of the execution code, but I have implemented a bidirectional LSTM.
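The actual model code is omitted, but a minimal sketch of the kind of bidirectional LSTM layer involved might look like the following (the layer name `self.l1` and the call `out, hidden = self.l1(input)` appear in the traceback below; the input and hidden sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class BLSTM(nn.Module):
    """Minimal bidirectional LSTM wrapper (sizes are illustrative)."""
    def __init__(self, input_size=40, hidden_size=128):
        super().__init__()
        # bidirectional=True doubles the output feature dimension
        self.l1 = nn.LSTM(input_size, hidden_size,
                          batch_first=True, bidirectional=True)

    def forward(self, input):
        out, hidden = self.l1(input)  # the line seen in the traceback
        return out, hidden

model = BLSTM()
x = torch.randn(4, 10, 40)  # (batch, seq_len, features)
out, _ = model(x)
print(out.shape)            # torch.Size([4, 10, 256])
```

On a CUDA device, the `self.l1(input)` call is where cuDNN's RNN kernel is invoked, which is why the error surfaces at that frame.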

Error message

File "C:\Users\User\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
File "D:\new\mcep_generator\no_adversal_module\train_sample_BLSTM.py", line 65, in forward
    out, hidden = self.l1(input)
File "C:\Users\User\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
File "C:\Users\User\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\rnn.py", line 543, in forward
    return self.forward_tensor(input, hx)
File "C:\Users\User\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\rnn.py", line 543, in forward_tensor
    output, hidden = self.forward_impl(input, hx, batch_size, max_batch_size, sorted_indices)
File "C:\Users\User\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\rnn.py", line 526, in forward_impl
    self.dropout, self.training, self.bidirectional, self.batch_first)
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

I have tried downgrading the CUDA version (and the PyTorch and cuDNN versions accordingly), but the error still appears after a few training runs.

Setting torch.backends.cudnn.enabled = False makes the error go away when training starts, but training becomes very slow, so I would like to find out the cause.
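For reference, the workaround mentioned above is a single global switch set before building the model; it makes PyTorch fall back to its native RNN kernels instead of cuDNN's:

```python
import torch

# Disable cuDNN globally: PyTorch falls back to its own (slower)
# CUDA kernels, which avoids CUDNN_STATUS_EXECUTION_FAILED here.
torch.backends.cudnn.enabled = False

print(torch.backends.cudnn.enabled)  # False
```

Because this disables cuDNN for every operation (convolutions included), it is a diagnostic tool rather than a fix.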

python cuda pytorch

2022-09-30 16:20

1 Answer

An LSTM (not bidirectional) gave me the same error. I don't know why, but when I reduced the batch size, the error disappeared.
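A sketch of what "reducing the batch size" looks like in practice, assuming a standard DataLoader setup (the dataset, tensor shapes, and batch sizes here are illustrative, not from the question):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset of 64 sequences, each 10 steps of 40 features
dataset = TensorDataset(torch.randn(64, 10, 40))

# A smaller batch reduces the GPU memory the cuDNN RNN call needs
# per step, which can avoid the execution failure (e.g. 32 -> 8).
loader = DataLoader(dataset, batch_size=8, shuffle=True)

batch, = next(iter(loader))
print(batch.shape)  # torch.Size([8, 10, 40])
```

That this helps is consistent with the error being memory-related: cuDNN's RNN workspace grows with batch size, and a GTX 1060 has comparatively little VRAM, so an execution failure can mask an out-of-memory condition inside the kernel.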


2022-09-30 16:20



© 2024 OneMinuteCode. All rights reserved.