Note: SST = total sum of squares, SSE = sum of squared errors, and SSR = regression sum of squares. The equation in the title is often written as:

SST = SSR + SSE
Fairly simple question, but I am looking for an intuitive explanation. Intuitively, it seems to me that SST ≥ SSE + SSR would make more sense. For example, suppose point i has corresponding y value yᵢ = 5 and ŷᵢ = 3, where ŷᵢ is the corresponding point on the regression line. Suppose also that the mean y value for the dataset is ȳ = 0. Then for this particular point i, SST = (5 − 0)² = 25, while SSE = (5 − 3)² = 4 and SSR = (3 − 0)² = 9. Obviously, 9 + 4 < 25. Wouldn't this result generalize to the whole dataset? I don't get it.
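The single-point arithmetic can be checked directly. This is a minimal sketch of the pointwise quantities only (the observed value 5, fitted value 3, and mean 0 are the numbers from the example; they are not a full dataset):

```python
# Pointwise terms for a single observation i:
# observed y_i = 5, fitted yhat_i = 3, sample mean ybar = 0 (example values).
y_i, yhat_i, ybar = 5.0, 3.0, 0.0

sst_i = (y_i - ybar) ** 2     # pointwise "total" term
sse_i = (y_i - yhat_i) ** 2   # pointwise error term
ssr_i = (yhat_i - ybar) ** 2  # pointwise regression term

print(sst_i, sse_i, ssr_i)        # 25.0 4.0 9.0
print(sse_i + ssr_i < sst_i)      # True: the identity can fail pointwise
```

So the inequality in the question is real for an individual point; the answers below explain why it nevertheless cannot survive summation over the whole dataset.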
Answers:
Adding and subtracting ŷᵢ gives

∑_{i=1}^n (yᵢ − ȳ)² = ∑_{i=1}^n (yᵢ − ŷᵢ + ŷᵢ − ȳ)² = ∑_{i=1}^n (yᵢ − ŷᵢ)² + 2 ∑_{i=1}^n (yᵢ − ŷᵢ)(ŷᵢ − ȳ) + ∑_{i=1}^n (ŷᵢ − ȳ)²

So we need to show that ∑_{i=1}^n (yᵢ − ŷᵢ)(ŷᵢ − ȳ) = 0.
Actually, I think this is easier to show in matrix notation for general multiple regression, of which the single-variable case is a special case:
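One way to sketch that matrix argument (my notation, assuming the design matrix X has full rank and contains a column of ones, so the hat matrix H = X(XᵀX)⁻¹Xᵀ satisfies H𝟏 = 𝟏):

```latex
% Fitted values and residuals: \hat{y} = Hy, \qquad e = (I - H)y.
% H is symmetric and idempotent (H^2 = H), and HX = X, so H\mathbf{1} = \mathbf{1}.
e^\top (\hat{y} - \bar{y}\mathbf{1})
  = y^\top (I - H)(Hy - \bar{y}\mathbf{1})
  = y^\top (H - H^2)\, y \;-\; \bar{y}\, y^\top (I - H)\mathbf{1}
  = 0.
% Writing y - \bar{y}\mathbf{1} = e + (\hat{y} - \bar{y}\mathbf{1}) and expanding
% the squared norm, the cross term vanishes, giving SST = SSE + SSR.
```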
(1) Intuition for why SST = SSR + SSE
When we try to explain the total variation in Y (SST) with one explanatory variable, X, there are exactly two sources of variability: first, the variability captured by X (the sum of squares due to regression), and second, the variability not captured by X (the sum of squared errors). Hence, SST = SSR + SSE (exact equality).
(2) Geometric intuition
Please see the first few pictures here (especially the third): https://sites.google.com/site/modernprogramevaluation/variance-and-bias
Some of the total variation in the data (the distance from a data point to Ȳ) is captured by the regression line (the distance from the regression line to Ȳ), and the rest is error (the distance from the point to the regression line). There is no room left for SST to be greater than SSE + SSR.
(3) The problem with your illustration
You can't look at SSE and SSR in a pointwise fashion. For a particular point, the residual may be large, so that there is more error than explanatory power from X. For other points, however, the residual will be small, so that the regression line explains a lot of the variability. They will balance out, and ultimately SST = SSR + SSE. Of course this is not rigorous, but you can find proofs like the one above.
Also notice that the regression is not defined for a single point: b₁ = ∑(Xᵢ − X̄)(Yᵢ − Ȳ) / ∑(Xᵢ − X̄)², and you can see that the denominator will be zero, making the estimate undefined.
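The "balancing out" claim can be checked numerically. A minimal sketch on a small made-up dataset, computing b₀ and b₁ from the OLS formula quoted above and verifying that the pointwise imbalances cancel in aggregate:

```python
import math

# Small illustrative dataset (made-up numbers).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n

# OLS slope (the b1 formula above) and intercept.
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

yhat = [b0 + b1 * xi for xi in x]

sst = sum((yi - ybar) ** 2 for yi in y)
sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
ssr = sum((yh - ybar) ** 2 for yh in yhat)

# Individual points may have SSE-terms larger than their SST-terms,
# but the totals satisfy the exact identity.
print(math.isclose(sst, sse + ssr))  # True
```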
Hope this helps.
--Ryan M.
When an intercept is included in the linear regression (so that the sum of residuals is zero), SST = SSE + SSR.
Proof:

SST = ∑_{i=1}^n (yᵢ − ȳ)²
    = ∑_{i=1}^n (yᵢ − ŷᵢ + ŷᵢ − ȳ)²
    = ∑_{i=1}^n (yᵢ − ŷᵢ)² + 2 ∑_{i=1}^n (yᵢ − ŷᵢ)(ŷᵢ − ȳ) + ∑_{i=1}^n (ŷᵢ − ȳ)²
    = SSE + SSR + 2 ∑_{i=1}^n (yᵢ − ŷᵢ)(ŷᵢ − ȳ)
We just need to prove that the last term equals 0:
∑_{i=1}^n (yᵢ − ŷᵢ)(ŷᵢ − ȳ) = ∑_{i=1}^n (yᵢ − β₀ − β₁xᵢ)(β₀ + β₁xᵢ − ȳ)
    = (β₀ − ȳ) ∑_{i=1}^n (yᵢ − β₀ − β₁xᵢ) + β₁ ∑_{i=1}^n (yᵢ − β₀ − β₁xᵢ) xᵢ
In least squares regression, the sum of the squared errors is minimized.
SSE = ∑_{i=1}^n eᵢ² = ∑_{i=1}^n (yᵢ − ŷᵢ)² = ∑_{i=1}^n (yᵢ − β₀ − β₁xᵢ)²
Taking the partial derivative of SSE with respect to β₀ and setting it to zero:

∂SSE/∂β₀ = ∑_{i=1}^n 2(yᵢ − β₀ − β₁xᵢ)(−1) = 0
So
∑_{i=1}^n (yᵢ − β₀ − β₁xᵢ) = 0
Taking the partial derivative of SSE with respect to β₁ and setting it to zero:

∂SSE/∂β₁ = ∑_{i=1}^n 2(yᵢ − β₀ − β₁xᵢ)(−xᵢ) = 0
So
∑_{i=1}^n (yᵢ − β₀ − β₁xᵢ) xᵢ = 0
Hence,
∑_{i=1}^n (yᵢ − ŷᵢ)(ŷᵢ − ȳ) = (β₀ − ȳ) ∑_{i=1}^n (yᵢ − β₀ − β₁xᵢ) + β₁ ∑_{i=1}^n (yᵢ − β₀ − β₁xᵢ) xᵢ = 0
SST = SSE + SSR + 2 ∑_{i=1}^n (yᵢ − ŷᵢ)(ŷᵢ − ȳ) = SSE + SSR
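The first-order conditions above force the cross term to vanish for any dataset, which can be verified directly. A minimal sketch with made-up numbers:

```python
import math

# Made-up data; any dataset with at least two distinct x values works.
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 2.0, 5.0]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# OLS estimates of beta_0 and beta_1.
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar
yhat = [b0 + b1 * xi for xi in x]

# The cross term sum((y_i - yhat_i) * (yhat_i - ybar)) from the proof.
cross = sum((yi - yh) * (yh - ybar) for yi, yh in zip(y, yhat))
print(math.isclose(cross, 0.0, abs_tol=1e-9))  # True
```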
This is just the Pythagorean theorem! The residual vector (with components yᵢ − ŷᵢ) is orthogonal to the vector of fitted deviations (ŷᵢ − ȳ), so the squared length of the hypotenuse (SST) equals the sum of the squared lengths of the two legs (SSE and SSR).