PyTorch: What is the difference between these two codes? Please tell me why only one of them raises an error.

Asked 1 year ago, Updated 1 year ago, 73 views

What is the difference between these two codes? Please tell me if you know why only one of them raises an error.
Both add two torch.Tensor values together, so why does the error occur in only one of them?
Please let me know if you understand what is going on.

Code A

a = torch.tensor([10]).to("cuda:0").half()
b = torch.tensor([2]).to("cuda:0").half()
print(type(a), a)
print(type(b), b)
print(a + b)
print("ok")

Results

<class 'torch.Tensor'> tensor([10.], device='cuda:0', dtype=torch.float16)
<class 'torch.Tensor'> tensor([2.], device='cuda:0', dtype=torch.float16)
tensor([12.], device='cuda:0', dtype=torch.float16)
ok

Code B

targ = (gamma**multireward_steps) * targetQN.forward(memory.buffer[idx][0], "net_v")
rew = memory.buffer[idx][2].to("cuda:0")
print(type(targ), targ)
print(type(rew), rew)
targets[i] = rew + targ

Results

<class 'torch.Tensor'> tensor([0.0208], device='cuda:0', dtype=torch.float16)
<class 'torch.Tensor'> tensor([0.], device='cuda:0', dtype=torch.float16)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-dff82183a33b> in <module>
    420 trin.prioritized_experience_replay(batch_size, gamma, step=episode,
    421     state_size=state_size, action_size=action_size,
--> 422     multireward_steps=multireward_steps)
    423 trin.Done(episode)
    424 mainQN.Done()

<ipython-input-1-dff82183a33b> in prioritized_experience_replay(self, batch_size, gamma, step, state_size, action_size, multireward_steps)
    289 print(type(targ), targ)
    290 print(type(rew), rew)
--> 291 targets[i] = rew + targ
    292 
    293 priority = rank_sum(memory_TDerror.buffer[idx], self.alpha)

~\Anaconda3\envs\pyflan\lib\site-packages\torch\tensor.py in __array__(self, dtype)
    492     return self.numpy()
    493 else:
--> 494     return self.numpy().astype(dtype, copy=False)
    495 
    496 # Wrap Numpy array again in a suitable tensor when done, to support e.g.

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Also, apologies if I haven't pasted the code into the code block properly.

python gpu pytorch

2022-09-30 21:48

1 Answers

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

As the error message says, you need to copy the CUDA tensor to the CPU (with Tensor.cpu()) before it can be converted to NumPy.
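A minimal sketch of the fix, using the question's variable names (`targets`, `rew`, `targ`) as stand-ins; the device selection line is an assumption so the snippet also runs on machines without a GPU:

```python
import numpy as np
import torch

# Fall back to CPU when no GPU is present (assumption for portability)
device = "cuda:0" if torch.cuda.is_available() else "cpu"

rew = torch.tensor([2.0]).to(device)
targ = torch.tensor([10.0]).to(device)

targets = np.zeros(4, dtype=np.float32)

# .cpu() copies the result to host memory; .item() extracts the Python
# scalar, so the assignment into the NumPy array no longer needs to
# convert a CUDA tensor implicitly.
targets[0] = (rew + targ).cpu().item()
print(targets[0])  # 12.0
```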

Code A only computes a (Tensor) + b (Tensor), but Code B assigns the result into targets at targets[i] = rew (Tensor) + targ (Tensor), which triggers an implicit copy to NumPy. That's the difference. The targets[i] assignment suggests targets is an ndarray, so NumPy tries to convert the CUDA tensor and fails.
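This can be seen in a small sketch: adding two tensors never touches NumPy, but storing the sum into a NumPy array calls `Tensor.__array__` under the hood, which only works for CPU tensors (the CPU tensors here are stand-ins for the question's CUDA ones):

```python
import numpy as np
import torch

# Tensor + Tensor alone stays inside PyTorch, so Code A succeeds
# regardless of device:
s = torch.tensor([10.0]) + torch.tensor([2.0])

# Assigning into a NumPy array invokes Tensor.__array__. For a CPU
# tensor this conversion succeeds; for a CUDA tensor it raises the
# "can't convert cuda:0 device type tensor to numpy" TypeError.
targets = np.zeros(1, dtype=np.float32)
targets[0] = s
print(targets[0])  # 12.0
```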


2022-09-30 21:48



© 2024 OneMinuteCode. All rights reserved.