According to a text I am using, the formula for the variance of the residual is given by:

$$\operatorname{Var}(\hat u_i) = \sigma^2\left(1 - \frac{1}{n} - \frac{(x_i - \bar x)^2}{\sum_{j=1}^n (x_j - \bar x)^2}\right)$$
I find this hard to believe, since the residual is the difference between the observed value and the fitted value; if one were to compute the variance of that difference, at the very least I would expect some "plus" signs in the resulting expression. Any help in understanding the derivation would be appreciated.
Answers:
The intuition about the "plus" signs related to the variance (from the fact that even when we calculate the variance of a difference of independent random variables, we add their variances) is correct but fatally incomplete: if the random variables involved are not independent, then covariances are also involved, and covariances may be negative. There exists an expression that is almost what the OP (and I) thought the expression in the question "should" be, and it is the variance of the prediction error, denote it $e_0 = y_0 - \hat y_0$, where $y_0 = \beta_0 + \beta_1 x_0 + u_0$:

$$\operatorname{Var}(e_0) = \sigma^2\left(1 + \frac{1}{n} + \frac{(x_0 - \bar x)^2}{\sum_{j=1}^n (x_j - \bar x)^2}\right)$$
The critical difference between the variance of the prediction error and the variance of the estimation error (i.e. of the residual) is that the error term of the predicted observation is not correlated with the estimator, since the value $y_0$ was not used in constructing the estimator and computing the estimates, being an out-of-sample value.
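To put that point in symbols (my own restatement of the answer's setup, not a quote): the prediction error decomposes as

$$e_0 = y_0 - \hat y_0 = (\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_0 + u_0,$$

and because $\hat\beta_0, \hat\beta_1$ are built only from the sample $(y_1, \dots, y_n)$, they are uncorrelated with the out-of-sample error $u_0$, so no negative covariance term appears.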
The algebra for the two proceeds in exactly the same way up to a point (using $0$ instead of $i$), but then diverges. Specifically:
In the simple linear regression $y_i = \beta_0 + \beta_1 x_i + u_i$, $\operatorname{Var}(u_i) = \sigma^2$, the variance of the estimator $\hat\beta = (\hat\beta_0, \hat\beta_1)'$ is still

$$\operatorname{Var}(\hat\beta) = \sigma^2 (X'X)^{-1}$$
We have

$$X'X = \begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix}$$
and so

$$(X'X)^{-1} = \frac{1}{n\sum x_i^2 - \left(\sum x_i\right)^2}\begin{pmatrix} \sum x_i^2 & -\sum x_i \\ -\sum x_i & n \end{pmatrix}$$
We have

$$n\sum x_i^2 - \left(\sum x_i\right)^2 = n\sum x_i^2 - n^2\bar x^2 = n\left(\sum x_i^2 - n\bar x^2\right) = n\sum (x_i - \bar x)^2 \equiv n S_{xx}$$
So

$$\operatorname{Var}(\hat\beta) = \frac{\sigma^2}{n S_{xx}}\begin{pmatrix} \sum x_i^2 & -\sum x_i \\ -\sum x_i & n \end{pmatrix}$$
which means that

$$\operatorname{Var}(\hat\beta_0) = \frac{\sigma^2 \sum x_i^2}{n S_{xx}}, \qquad \operatorname{Var}(\hat\beta_1) = \frac{\sigma^2}{S_{xx}}, \qquad \operatorname{Cov}(\hat\beta_0, \hat\beta_1) = -\frac{\sigma^2 \bar x}{S_{xx}}$$
The $i$-th residual is defined as

$$\hat u_i = y_i - \hat y_i = (\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i + u_i$$
The actual coefficients are treated as constants, the regressor is fixed (or we condition on it) and has zero covariance with the error term, but the estimators are correlated with the error term, because the estimators contain the dependent variable, and the dependent variable contains the error term. So we have

$$\operatorname{Var}(\hat u_i) = \left[\operatorname{Var}(\hat\beta_0) + x_i^2\operatorname{Var}(\hat\beta_1) + 2x_i\operatorname{Cov}(\hat\beta_0, \hat\beta_1)\right] + \operatorname{Var}(u_i) + 2\operatorname{Cov}\!\left(\left[(\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i\right], u_i\right)$$
Pack it up a bit to obtain

$$\operatorname{Var}(\hat u_i) = \sigma^2\left(\frac{1}{n} + \frac{(x_i - \bar x)^2}{S_{xx}}\right) + \sigma^2 + 2\operatorname{Cov}\!\left(\left[(\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i\right], u_i\right)$$
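Spelling out the packing step (one of the algebraic manipulations the answer skips; this is my own filling-in, not a quote):

$$\operatorname{Var}(\hat\beta_0) + x_i^2\operatorname{Var}(\hat\beta_1) + 2x_i\operatorname{Cov}(\hat\beta_0,\hat\beta_1) = \frac{\sigma^2}{S_{xx}}\left(\frac{\sum x_j^2}{n} + x_i^2 - 2x_i\bar x\right) = \sigma^2\left(\frac{1}{n} + \frac{(x_i - \bar x)^2}{S_{xx}}\right),$$

using $\sum x_j^2 = S_{xx} + n\bar x^2$.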
The term in the big parenthesis has exactly the same structure as the variance of the prediction error, with the only change being that instead of $x_i$ we will have $x_0$ (and the variance will be that of $e_0$ and not of $\hat u_i$). The last covariance term is zero for the prediction error because $y_0$, and hence $u_0$, is not included in the estimators, but it is not zero for the estimation error because $y_i$, and hence $u_i$, is part of the sample and so is included in the estimator. We have

$$\begin{aligned}
2\operatorname{Cov}\!\left(\left[(\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i\right], u_i\right) &= 2E\!\left(\left[(\beta_0 - \hat\beta_0) + (\beta_1 - \hat\beta_1)x_i\right]u_i\right) \\
&= -2E\!\left(\hat\beta_0 u_i\right) - 2x_i E\!\left(\hat\beta_1 u_i\right) \\
&= -2E\!\left(\left[\bar y - \hat\beta_1 \bar x\right]u_i\right) - 2x_i E\!\left(\hat\beta_1 u_i\right)
\end{aligned}$$
the last substitution coming from how $\hat\beta_0$ is calculated. Continuing,

$$-2E\!\left(\left[\bar y - \hat\beta_1 \bar x\right]u_i\right) - 2x_i E\!\left(\hat\beta_1 u_i\right) = -2E(\bar y\, u_i) - 2(x_i - \bar x)E\!\left(\hat\beta_1 u_i\right) = -\frac{2\sigma^2}{n} - \frac{2(x_i - \bar x)^2\sigma^2}{S_{xx}}$$

using $E(\bar y\, u_i) = \sigma^2/n$ and $E\!\left(\hat\beta_1 u_i\right) = E\!\left(\frac{\sum_j (x_j - \bar x)u_j}{S_{xx}}\, u_i\right) = \frac{(x_i - \bar x)\sigma^2}{S_{xx}}$.
Inserting this into the expression for the variance of the residual, we obtain

$$\operatorname{Var}(\hat u_i) = \sigma^2\left(\frac{1}{n} + \frac{(x_i-\bar x)^2}{S_{xx}}\right) + \sigma^2 - \frac{2\sigma^2}{n} - \frac{2(x_i-\bar x)^2\sigma^2}{S_{xx}} = \sigma^2\left(1 - \frac{1}{n} - \frac{(x_i-\bar x)^2}{S_{xx}}\right)$$
So hats off to the text the OP is using.
(I have skipped some algebraic manipulations, no wonder OLS algebra is taught less and less these days...)
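To see the result numerically, here is a small Monte Carlo sketch (my own addition, not part of either answer; the sample size, coefficients and noise level are arbitrary choices). It simulates simple-regression data repeatedly and compares the empirical variance of each residual with $\sigma^2\left(1 - 1/n - (x_i - \bar x)^2/S_{xx}\right)$:

```python
# Monte Carlo check of Var(u_hat_i) = sigma^2 * (1 - 1/n - (x_i - x_bar)^2 / S_xx).
# All concrete values (n, beta0, beta1, sigma, the x grid) are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, beta0, beta1, sigma = 10, 1.0, 2.0, 3.0
x = np.linspace(-2.0, 2.0, n)           # fixed regressor values
S_xx = np.sum((x - x.mean()) ** 2)

R = 100_000                              # number of simulated samples
resid = np.empty((R, n))
for r in range(R):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=n)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / S_xx   # OLS slope
    b0 = y.mean() - b1 * x.mean()                          # OLS intercept
    resid[r] = y - (b0 + b1 * x)

empirical = resid.var(axis=0)
theoretical = sigma**2 * (1 - 1/n - (x - x.mean())**2 / S_xx)
print(np.round(empirical, 3))
print(np.round(theoretical, 3))          # the two rows should agree closely
```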
SOME INTUITION
So it appears that what works "against" us (larger variance) when predicting, works "for" us (lower variance) when estimating. This is a good starting point for one to ponder why an excellent fit may be a bad sign for the prediction abilities of the model (however counter-intuitive this may sound...).
The fact that we are estimating the expected value of the regressor decreases the variance by $1/n$. Why? Because by estimating, we "close our eyes" to some error-variability existing in the sample, since we are essentially estimating an expected value. Moreover, the larger the deviation of an observation of a regressor from the regressor's sample mean, the smaller the variance of the residual associated with this observation will be... the more deviant the observation, the less deviant its residual... It is the variability of the regressors that works for us, by "taking the place" of the unknown error-variability.
But that's good for estimation. For prediction, the same things turn against us: now, by not taking into account, however imperfectly, the variability in $y_0$ (since we want to predict it), our imperfect estimators obtained from the sample show their weaknesses: we estimated the sample mean, we don't know the true expected value, so the variance increases. We have an $x_0$ that is far away from the sample mean as calculated from the other observations? Too bad, our prediction error variance gets another boost, because the predicted $\hat y_0$ will tend to go astray... in more scientific language, "optimal predictors in the sense of reduced prediction error variance represent a shrinkage towards the mean of the variable under prediction". We do not try to replicate the dependent variable's variability, we just try to stay "close to the average".
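To make the contrast explicit, here are the two expressions derived above side by side; they differ only in the signs of the same two terms:

$$\operatorname{Var}(e_0) = \sigma^2\left(1 + \frac{1}{n} + \frac{(x_0 - \bar x)^2}{S_{xx}}\right), \qquad \operatorname{Var}(\hat u_i) = \sigma^2\left(1 - \frac{1}{n} - \frac{(x_i - \bar x)^2}{S_{xx}}\right).$$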
Sorry for the somewhat terse answer, perhaps overly-abstract and lacking a desirable amount of intuitive exposition, but I'll try to come back and add a few more details later. At least it's short.
Given $H = X(X^TX)^{-1}X^T$, the vector of residuals is $\hat u = y - \hat y = (I - H)y$, so

$$\operatorname{Var}(\hat u) = (I - H)\operatorname{Var}(y)(I - H)^T = \sigma^2 (I - H)(I - H)^T = \sigma^2(I - H),$$

since $(I - H)$ is symmetric and idempotent.
Hence

$$\operatorname{Var}(\hat u_i) = \sigma^2\left(1 - h_{ii}\right),$$

where $h_{ii}$ is the $i$-th diagonal element of $H$.
In the case of simple linear regression ... this gives the answer in your question.
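For concreteness (the answer leaves this step as an ellipsis; the following is the standard leverage formula, not a quote from the answer): in simple linear regression with an intercept,

$$h_{ii} = \frac{1}{n} + \frac{(x_i - \bar x)^2}{\sum_j (x_j - \bar x)^2},$$

so $\sigma^2(1 - h_{ii})$ is exactly the expression derived in the first answer.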
This answer also makes sense: since $\hat y_i$ is positively correlated with $y_i$, the variance of the difference should be smaller than the sum of the variances.
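One way to make the positive-correlation point precise (my own addition, using the same hat-matrix notation): $\operatorname{Var}(\hat y) = \operatorname{Var}(Hy) = \sigma^2 H$ and $\operatorname{Cov}(\hat y, y) = H\operatorname{Var}(y) = \sigma^2 H$, so

$$\operatorname{Var}(y_i - \hat y_i) = \sigma^2 + \sigma^2 h_{ii} - 2\sigma^2 h_{ii} = \sigma^2(1 - h_{ii}) \le \operatorname{Var}(y_i) + \operatorname{Var}(\hat y_i).$$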
--
Edit: Explanation of why $(I - H)$ is idempotent.
(i) $H$ is idempotent:

$$H^2 = X(X^TX)^{-1}X^T \, X(X^TX)^{-1}X^T = X(X^TX)^{-1}X^T = H$$
(ii) $(I - H)^2 = I^2 - IH - HI + H^2 = I - 2H + H = I - H$
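As a quick numerical sanity check (again my own addition, with an arbitrary example design), one can verify both the idempotency of $(I - H)$ and that the diagonal of $\sigma^2(I - H)$ reproduces the simple-regression formula derived in the first answer:

```python
# Check that (I - H) is idempotent and that diag(sigma^2 * (I - H)) equals
# sigma^2 * (1 - 1/n - (x_i - x_bar)^2 / S_xx) for a simple regression design.
# The x values and sigma2 below are arbitrary illustrative choices.
import numpy as np

n, sigma2 = 8, 2.5
x = np.array([0.3, 1.1, 1.9, 2.4, 3.0, 3.6, 4.2, 5.0])
X = np.column_stack([np.ones(n), x])              # design matrix with intercept
H = X @ np.linalg.inv(X.T @ X) @ X.T
M = np.eye(n) - H

print(np.allclose(M @ M, M))                      # True: (I - H) is idempotent
S_xx = np.sum((x - x.mean()) ** 2)
formula = sigma2 * (1 - 1/n - (x - x.mean())**2 / S_xx)
print(np.allclose(sigma2 * np.diag(M), formula))  # True: diagonals match the formula
```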