Error using PyTorch Tensor

Asked 2 years ago, Updated 2 years ago, 73 views

PyTorch 1.1: Getting Started: Learning PyTorch with Examples – PyTorch

I wrote the code below based on the tutorial on the site above, but I get an error when I run it.

Run Environment
python 3.7.6
pytorch 1.3.1
numpy 1.17.4

GPU: GeForce GTX 1660 Ti
CUDA: 10.1
cuDNN: 7.6.5

Code executed

# -*- coding: utf-8 -*-

import torch

dtype=torch.float
device=torch.device("cpu")
dtype=torch.device("cuda:0")#Uncomment this to run on GPU
print("dtype:",dtype)

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N,D_in,H,D_out = 64,1000,100,10

# Create random input and output data
x=torch.randn(N,D_in,device=device,dtype=dtype)
y=torch.randn(N,D_out,device=device,dtype=dtype)

# Randomly initialize weights
w1=torch.randn(D_in,H,device=device,dtype=dtype)
w2=torch.randn(H,D_out,device=device,dtype=dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h=x.mm(w1)
    h_relu=h.clamp(min=0)
    y_pred=h_relu.mm(w2)

    # Compute and print loss
    loss=(y_pred-y).pow(2).sum().item()
    print(t,loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred=2.0*(y_pred-y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu=grad_y_pred.mm(w2.t())
    grad_h=grad_h_relu.clone()
    grad_h[h<0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1-=learning_rate*grad_w1
    w2-=learning_rate*grad_w2

Error message

dtype: cuda:0
Traceback (most recent call last):
  File "c:\program files(x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\ptvsd_launcher.py", line 119, in<module>
    vspd.debug(filename, port_num, debug_id, debug_options, run_as)
  File "c:\program files(x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\debugger.py", line 39, in debug
    run()
  File "c:\program files(x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\__main__.py", line 316, in run_file
    runpy.run_path(target, run_name='__main__')
  File "C:\Users\U\.conda\envs\env\lib\runpy.py", line 263, in run_path
    pkg_name = pkg_name, script_name = fname )
  File "C:\Users\U\.conda\envs\env\lib\runpy.py", line 96, in_run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\U\.conda\envs\env\lib\runpy.py", line85, in_run_code
    exec(code, run_globals)
  File "D:\exam\tensor.py", line 16, in <module>
    x=torch.randn(N,D_in,device=device,dtype=dtype)
TypeError: randn() received an invalid combination of arguments - got (int, int, dtype=torch.device, device=torch.device), but expected one of:
 * (tuple of ints size, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
 * (tuple of ints size, torch.Generator generator, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
 * (tuple of ints size, torch.Generator generator, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
 * (tuple of ints size, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)

python pytorch

2022-09-30 20:24

1 Answer

The cause is the line dtype=torch.device("cuda:0").
The tutorial writes device=torch.device("cuda:0") instead: the torch.device object has to be assigned to device, not to dtype, which is why torch.randn complains about receiving a torch.device where a torch.dtype is expected.
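
As a sketch of the fix, here is the setup block from the question with only that one assignment corrected (variable names unchanged):

import torch

dtype = torch.float
device = torch.device("cpu")
device = torch.device("cuda:0")  # assign the device object to device (not dtype) to run on GPU
print("device:", device)

# randn now receives a torch.dtype for dtype and a torch.device for device
x = torch.randn(64, 1000, device=device, dtype=dtype)

With this change dtype stays torch.float, and the torch.randn call no longer raises the TypeError.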

This answer was posted as a community wiki based on @metropolis's comments.


2022-09-30 20:24


