I am studying Poisson regression from this site while implementing it in R. The equivalent method "No. 3" introduced there is reproduced below. It works when the link function is the identity, but is the same approach possible when the link function is log?
x = c(1, 2, 3, 4)
y = c(2, 3, 5, 4)
w = c(1, 1, 1, 1)
for (i in 1:10) {
  r = lm(y ~ x, weights = w)       # weighted least squares step
  lambda = predict(r)              # current fitted means
  print(c(as.numeric(r$coef), -sum(y * log(lambda) - lambda)))
  w = 1 / lambda                   # update the weights
}
# omission
# [1] 1.2783467 0.8886613 -4.0609501
glm(y ~ x, family = poisson(link = "identity"))
# (Intercept)            x
#      1.2784       0.8887
Below, I assumed the link function is log, and simply replaced y with log(y) and set w to lambda, but that does not seem to be correct. Could someone give me some advice?
x = c(1, 2, 3, 4)
y = c(2, 3, 5, 4)
w = c(1, 1, 1, 1)
for (i in 1:10) {
  r = lm(log(y) ~ x, weights = w)
  lambda = predict(r)
  print(c(as.numeric(r$coef), -sum(y * log(lambda) - lambda)))
  w = lambda
}
# omission
# [1] 0.6195090 0.2334289 1.7332280
glm(y ~ x, family = poisson(link = "log"))
# (Intercept)            x
#      0.6393       0.2320
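For reference, here is a minimal sketch (my own addition, not from the original post) of the standard IRLS step for the log link expressed with lm(). Instead of regressing log(y), it regresses the working response z = log(lambda) + (y - lambda)/lambda with weights lambda; this iteration reproduces the glm() estimates:

x <- c(1, 2, 3, 4)
y <- c(2, 3, 5, 4)
lambda <- rep(mean(y), length(y))            # crude starting fit
for (i in 1:25) {
  z <- log(lambda) + (y - lambda) / lambda   # working response on the link scale
  r <- lm(z ~ x, weights = lambda)           # weighted least squares step
  lambda <- exp(predict(r))                  # back to the mean scale
}
as.numeric(coef(r))                          # close to 0.6393, 0.2320

The only change from the poster's attempt is the working response z and the weight lambda; with those, the weighted least squares fixed point coincides with the Poisson maximum-likelihood estimate.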
###############################################################################################
Dear user12399423 (answered June 13th, 13:19):
Thank you for your reply. The explanation was easy to understand, and it cleared up something I had been wondering about for a long time. I am embarrassed to admit I have not studied enough, but there is one more thing about the same topic that I do not understand, and if this question is still open I would appreciate further advice.
I have also been studying the IRLS method for maximum likelihood estimation that I found on this site. The Newton-type iteration below is for the log link. Can it be adapted to the identity link simply by removing exp() from the lambda line? When I run that version it does not give the Poisson result (it seems to be the same as ordinary regression analysis). So why does the log link yield Poisson regression while the identity version does not? I was confused. Could someone give me some advice?
# log
x <- c(1, 2, 3, 4)
X <- cbind(rep(1, length(x)), x)            # design matrix with intercept column
Y <- c(2, 3, 5, 4)
beta <- matrix(0, ncol = 2, nrow = 100)
beta[1, ] <- c(1, 10)                       # starting values
for (m in 2:100) {
  lambda <- exp(X %*% beta[m - 1, ])        # log link: mean = exp(eta)
  W <- diag(lambda[, 1])                    # working weights
  XtWX <- t(X) %*% W %*% X
  U <- t(Y - lambda) %*% X                  # score vector
  beta[m, ] <- beta[m - 1, ] + solve(XtWX) %*% t(U)
}
tail(beta)
# [100,] 0.6392647 0.2320399   # OK, matches glm
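As a sanity check (my own addition), the converged coefficients should satisfy the Poisson score equation for the log link, t(X) %*% (Y - lambda) = 0, which is exactly the quantity U that the iteration drives to zero:

x <- c(1, 2, 3, 4)
X <- cbind(rep(1, length(x)), x)
Y <- c(2, 3, 5, 4)
beta_hat <- c(0.6392647, 0.2320399)   # converged values reported above
lambda <- exp(X %*% beta_hat)
score <- t(X) %*% (Y - lambda)        # Poisson score at beta_hat
print(score)                          # both components are near zero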
How about this?
# identity?
x <- c(1, 2, 3, 4)
X <- cbind(rep(1, length(x)), x)
Y <- c(2, 3, 5, 4)
beta <- matrix(0, ncol = 2, nrow = 100)
beta[1, ] <- c(1, 10)
for (m in 2:100) {
  lambda <- X %*% beta[m - 1, ]             # identity link: mean = eta
  W <- diag(lambda[, 1])
  XtWX <- t(X) %*% W %*% X
  U <- t(Y - lambda) %*% X
  beta[m, ] <- beta[m - 1, ] + solve(XtWX) %*% t(U)
}
tail(beta)
# [100,] 1.499999 0.8000007
Running glm:
# glm
summary(glm(y ~ x))
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)   1.5000     1.1619   1.291    0.326
# x             0.8000     0.4243   1.886    0.200
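One way to see why the identity variant collapses to ordinary least squares (my own sketch, not from the thread): at a fixed point the update requires U = t(Y - lambda) %*% X = 0 with lambda = X %*% beta, which is exactly the OLS normal equation t(X) %*% (Y - X %*% beta) = 0, regardless of the weight matrix W. Solving the normal equations directly gives the same 1.5 and 0.8:

x <- c(1, 2, 3, 4)
X <- cbind(rep(1, length(x)), x)
Y <- c(2, 3, 5, 4)
beta_ols <- solve(t(X) %*% X, t(X) %*% Y)   # OLS normal equations
print(beta_ols)                             # 1.5 and 0.8, matching the iteration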
No. The original method works because with the identity link the response and the linear predictor are on the same scale: y = lambda = a*x + b. With the log link, weighted least squares on log(y) solves sum_i w_i (log(y_i) - x_i' beta) x_i = 0, whereas the Poisson score equation is sum_i (y_i - lambda_i) x_i = 0 with lambda_i = exp(x_i' beta). These stationarity conditions do not coincide (and log(y) is not a good approximation to log(lambda) either), so that iteration lands away from the Poisson regression estimates.
© 2024 OneMinuteCode. All rights reserved.