Question 1
"When I was studying ""Deep Learning from scratch"" and reading 182p, I couldn't understand why I had set the initial value of weight W to np.random.randn until now."So please tell me why np.random.randn is the initial value of the weight.
Question 2
I read online that np.random.randn "returns random numbers that follow the normal distribution (the standard normal distribution) with mean 0 and variance 1 (standard deviation 1)", but I don't understand what this means either. I ran the code below expecting the average to be zero, but it wasn't. Please explain what this sentence means.
import numpy as np

tu = np.random.randn(1, 100)
tuuu = 0
for i in range(100):
    tuuu += tu[0][i]   # sum the 100 samples
print(tuuu / 100)      # sample mean
# Output: 0.22453386331188382  (different every time; I expected the average to be zero, but it isn't)
You may be familiar with the article "Deep Learning from Scratch, Chapter 6: On Initial Weights". If the gradient becomes zero, learning does not progress, so it is important to choose initial values that keep the gradient from vanishing. An article on what neural networks learn might also be helpful. The sketch below gives a rough sense of why the initial weights are drawn at random.
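Here is a minimal sketch (my own illustration, not code from the book; the layer count, unit count, and weight scales are assumptions loosely echoing the Chapter 6 activation-histogram experiment). It compares sigmoid activations after a few layers for all-zero weights, plain randn weights, and scaled-down randn weights: with zeros every unit behaves identically, and with a poor scale the activations saturate where the sigmoid gradient vanishes.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

node_num = 100                        # units per layer (assumed value)
x = np.random.randn(1000, node_num)   # 1000 input samples (assumed value)

for scale in (0.0, 1.0, 0.01):        # initial-weight scales to compare
    a = x.copy()
    for layer in range(5):            # 5 layers, loosely following the book's experiment
        w = np.random.randn(node_num, node_num) * scale  # scale=0.0 means all-zero weights
        a = sigmoid(a @ w)
    # scale=0.0: every activation is exactly 0.5 (perfect symmetry, nothing to learn);
    # scale=1.0: activations pile up near 0 and 1, where the sigmoid gradient vanishes;
    # scale=0.01: activations cluster tightly around 0.5.
    print(f"scale={scale}: activation mean={a.mean():.3f}, spread={a.std():.3f}")

Roughly, np.random.randn breaks the symmetry between units, and the factor you multiply it by controls whether the activations stay in a range where gradients can flow.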
Random numbers do not work like this: roll a die five times, get five different faces, and the sixth roll is guaranteed to be the one face that is left. Likewise, after generating 99 random numbers, numpy.random does not choose the 100th one so that the mean comes out exactly zero; if the last number were picked just to cancel out the other 99, it would no longer be a random number at all. Mean 0 and variance 1 hold only in expectation: the sample mean of 100 draws typically deviates from 0 on the order of 1/√100 = 0.1, so a single run giving 0.22 is nothing strange, and the deviation shrinks as you draw more samples.
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
num = []
mean = []
std = []
for cnt in range(1, 11200):  # start at 1: randn(0) returns an empty array whose mean is NaN
    data = np.random.randn(cnt)
    num.append(cnt)
    mean.append(data.mean())
    std.append(data.std())
ax.scatter(num, mean, label='mean')
ax.set_xlabel('count')
ax.set_ylabel('mean')
ax.legend(loc=4)
ax2 = ax.twinx()
ax2.scatter(num, std, color='r', label='std')
ax2.set_ylabel('std')
ax2.legend(loc=1)
plt.show()
I made this as a quick experiment. As the sample count grows, toward 50,000 or 100,000 samples, the curves smooth out and the mean and standard deviation approach 0 and 1.
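To put numbers on that convergence (a supplementary check of my own, not part of the original answer): the sample mean's typical deviation from 0 shrinks like 1/√n.

import numpy as np

# Supplementary sketch: sample mean and standard deviation of np.random.randn
# at growing sample sizes; they settle toward 0 and 1 as n increases.
for n in (100, 10_000, 100_000):
    data = np.random.randn(n)
    print(f"n={n:>7}: mean={data.mean():+.4f}, std={data.std():.4f}")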