I'm trying to reproduce PyTorch's torch.nn.Linear module with NumPy.
Since the results showed small discrepancies, I rounded the values and printed the result of a boolean comparison.
Why does the comparison come out False after rounding, even though the numbers look the same?
I'm not very familiar with computer science, so I'd appreciate it if someone could explain this.
import numpy as np
import torch
import torch.nn as nn
from module import Linear  # self-made module
np.random.seed(42)
# self-made module
x = np.random.randn(2, 3, 3, 4)
affine = Linear(4, 6)
# torch.nn.Linear
xt = torch.tensor(x).float()
linear = nn.Linear(4, 6)
# Copy nn.Linear's parameters into the self-made module
weight = linear.weight.detach().numpy().copy()
affine.weight = weight
bias = linear.bias.detach().numpy().copy()
affine.bias = bias
# Output of the self-made module
y = affine(x)
print(y[0][0][0][0])  # print one element
>>-0.4672098047691683
# Output of nn.Linear
yt = linear(xt).detach().numpy().copy()
print(yt[0][0][0][0])  # print one element
>>-0.46720976
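For reference, printing the dtypes of the two outputs already hints at what is going on (y and yt as defined above):
print(y.dtype)   # float64 -- NumPy defaults to double precision
print(yt.dtype)  # float32 -- because xt was created with .float()
>>float64
>>float32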
# Checking with ==, it comes out False......
print(np.round(y[0][0], decimals=3) == np.round(yt[0][0], decimals=3))
>>array([[False, False, False, False, False, False],
        [False, False, False, False, False, False],
        [False, False, False, False, False, False]])
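The self-made module itself isn't shown in the question; a minimal NumPy Linear consistent with the usage above might look like this (a hypothetical sketch, assuming torch's (out_features, in_features) weight layout):
class Linear:
    # Minimal NumPy stand-in for torch.nn.Linear (a sketch; the actual
    # self-made module from the question is not shown)
    def __init__(self, in_features, out_features):
        # match torch's weight layout: (out_features, in_features)
        self.weight = np.random.randn(out_features, in_features)
        self.bias = np.random.randn(out_features)

    def __call__(self, x):
        # y = x @ W.T + b, applied over the last axis of x
        return x @ self.weight.T + self.bias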
I solved it myself.
When x was converted to xt, it was converted from np.float64 to torch.float32.
Therefore, when the output yt was converted back from a tensor to an ndarray, it came out as np.float32.
The comparison was judged False because the data types of y (float64) and yt (float32) did not match even though the values looked the same: 0.467 has no exact binary representation, so rounding it in float32 and in float64 produces two slightly different numbers, and the element-wise == check fails.
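To illustrate with the two printed values from above (the names a and b are just for this example), and two ways to make the comparison robust:
import numpy as np

a = np.float64(0.4672098047691683)  # element from the self-made module (float64)
b = np.float32(0.46720976)          # element from nn.Linear (float32)

# Rounding in two different precisions yields two different bit patterns:
print(np.round(a, 3) == np.round(b, 3))      # False
print(f"{np.round(a, 3):.20f}")              # float64 approximation of 0.467
print(f"{float(np.round(b, 3)):.20f}")       # float32 approximation of 0.467 (trailing digits differ)

# Fix 1: cast to a common dtype before rounding and comparing
print(np.round(a, 3) == np.round(np.float64(b), 3))  # True

# Fix 2: compare with a tolerance instead of exact equality
print(np.allclose(a, b))                     # True
In practice, np.allclose (or torch.allclose) is the standard way to compare floating-point outputs from two implementations.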