Loss calculation fails in a model built with PyTorch

The following error message is troubling me. It appears when I run this line during backpropagation:

loss.backward()

The forward pass completes without any problems, but the backward pass aborts with:

terminate called after throwing an instance of 'std::runtime_error'
  what(): tensorflow/compiler/xla/xla_client/computation_client.cc:280 : Missing XLA configuration
Aborted

The code worked locally and, as mentioned above, it also worked in other GPU environments. It stopped working after the environment was updated.
Please help me...
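For context, a minimal training step of the kind described above (the original code is not shown in the question, so the model and data here are hypothetical) would look like this; with a stray torch_xla install, the forward pass succeeds and the abort occurs at loss.backward():

```python
import torch
import torch.nn as nn

# Hypothetical minimal model and data; stand-ins for the asker's code.
model = nn.Linear(4, 1)
x = torch.randn(8, 4)
y = torch.randn(8, 1)

loss = nn.functional.mse_loss(model(x), y)  # forward pass: no problems
loss.backward()  # in the broken environment, this step aborted with "Missing XLA configuration"
```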
python pytorch gpu
Resolved with the following command:
$ pip uninstall torch_xla

It seems to have been a conflict between pytorch-ignite and torch_xla.
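Before uninstalling, you can confirm whether a stray torch_xla package is present in the environment; a minimal check using only the standard library:

```python
import importlib.util

# If torch_xla is importable, the PyTorch/XLA integration may be picked up
# even when no XLA device is configured, which can produce the
# "Missing XLA configuration" abort seen above.
if importlib.util.find_spec("torch_xla") is not None:
    print("torch_xla is installed; try: pip uninstall torch_xla")
else:
    print("torch_xla is not installed")
```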
© 2024 OneMinuteCode. All rights reserved.