Variance of a bounded random variable

22

Suppose a random variable has a lower and an upper bound, say $[0,1]$. How does one calculate the variance of such a variable?

Piotr
8
The same way as for an unbounded variable - by setting the limits of integration or summation appropriately.
Scortchi - Reinstate Monica
2
As @Scortchi said. But I am curious why you thought it might be different?
Peter Flom - Reinstate Monica
3
Unless you know nothing at all about the variable (in which case an upper bound on the variance can be computed from the existence of bounds), why would the fact that it is bounded enter into the calculation?
Glen_b -Reinstate Monica
6
A useful upper bound on the variance of a random variable that takes values in $[a,b]$ with probability $1$ is $(b-a)^2/4$, and it is attained by a discrete random variable that takes the values $a$ and $b$ with equal probability $\tfrac12$. Another point to keep in mind is that the variance is guaranteed to exist, whereas an unbounded random variable might not have a variance (some, such as Cauchy random variables, do not even have a mean).
Dilip Sarwate
7
There is a discrete random variable whose variance equals $\frac{(b-a)^2}{4}$ exactly: a random variable that takes the values $a$ and $b$ with equal probability $\tfrac12$. So, at the very least, we know that a universal upper bound on the variance cannot be smaller than $\frac{(b-a)^2}{4}$.
Dilip Sarwate

Answers:

46

You can prove Popoviciu's inequality as follows. Use the notation $m = \inf X$ and $M = \sup X$. Define a function $g$ by
$$g(t) = E\left[(X - t)^2\right].$$
Computing the derivative $g'$ and solving
$$g'(t) = -2E[X] + 2t = 0,$$
we find that $g$ attains its minimum at $t = E[X]$ (note that $g'' > 0$).

Now, consider the value of the function $g$ at the special point $t = \frac{M + m}{2}$. It must be the case that
$$\mathrm{Var}[X] = g(E[X]) \le g\!\left(\frac{M+m}{2}\right).$$
But
$$g\!\left(\frac{M+m}{2}\right) = E\left[\left(X - \frac{M+m}{2}\right)^2\right] = \frac{1}{4}\, E\left[\bigl((X - m) + (X - M)\bigr)^2\right].$$
Since $X - m \ge 0$ and $X - M \le 0$, we have
$$\bigl((X-m) + (X-M)\bigr)^2 \le \bigl((X-m) - (X-M)\bigr)^2 = (M - m)^2,$$
which implies that
$$\frac{1}{4}\, E\left[\bigl((X-m)+(X-M)\bigr)^2\right] \le \frac{1}{4}\, E\left[\bigl((X-m)-(X-M)\bigr)^2\right] = \frac{(M-m)^2}{4}.$$
We have therefore proved Popoviciu's inequality
$$\mathrm{Var}[X] \le \frac{(M-m)^2}{4}.$$
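
To make the bound concrete, here is a minimal numerical sketch (not part of the original answer) checking Popoviciu's inequality on a few distributions on $[0,1]$; it assumes NumPy is available, and the three distributions and the seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, M = 0.0, 1.0          # infimum and supremum of the support
n = 100_000

samples = {
    "uniform":         rng.uniform(m, M, n),
    "beta(2, 5)":      rng.beta(2, 5, n),
    "two-point {0,1}": rng.integers(0, 2, n).astype(float),  # mass 1/2 at each endpoint
}

bound = (M - m) ** 2 / 4
for name, x in samples.items():
    # every empirical variance should stay at or below (M - m)^2 / 4
    print(f"{name:>15}: var = {x.var():.4f}  (bound = {bound:.4f})")
```

Only the two-point distribution at the endpoints comes close to the bound, consistent with the comments above.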

Zen
3
Nice approach: it's good to see rigorous demonstrations of this kind of thing.
whuber
22
+1 Nice! I learned statistics well before computers were in vogue, and one idea that was explained to us was that
$$E\left[(X-t)^2\right] = E\left[\bigl((X-\mu) - (t-\mu)\bigr)^2\right] = E\left[(X-\mu)^2\right] + (t-\mu)^2,$$
which allowed the variance to be computed by finding the sum of squared deviations from any convenient point $t$ and then adjusting for the bias. Here, of course, this identity gives a simple proof of the result that $g(t)$ has a minimum value at $t = \mu$, without the need for derivatives, etc.
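
As a side note (not from the thread), this identity is easy to verify by simulation; the bounded sample, the reference points, and the seed below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.beta(2, 3, 200_000)     # any bounded sample will do
mu = x.mean()

for t in (0.0, 0.25, 0.7):      # arbitrary "convenient" reference points
    lhs = np.mean((x - t) ** 2)            # E[(X - t)^2]
    rhs = x.var() + (t - mu) ** 2          # Var(X) + (t - mu)^2
    print(f"t = {t}: lhs = {lhs:.5f}, rhs = {rhs:.5f}")
```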
Dilip Sarwate
18

Let $F$ be a distribution on $[0,1]$. We will show that if the variance of $F$ is maximal, then $F$ can have no support in the interior of the interval, from which it follows that $F$ is Bernoulli and the rest is trivial.

As a matter of notation, let $\mu_k = \int_0^1 x^k\,dF(x)$ be the $k$th raw moment of $F$ (and, as usual, we write $\mu = \mu_1$ and $\sigma^2 = \mu_2 - \mu^2$ for the variance).

We know $F$ does not have all its support at a single point (the variance is minimal in that case). Among other things, this implies $\mu$ lies strictly between $0$ and $1$. In order to argue by contradiction, suppose there is some measurable subset $I$ of the interior $(0,1)$ for which $F(I) > 0$. Without any loss of generality we may assume (by changing $X$ to $1-X$ if need be) that $F(J = I \cap (0, \mu]) > 0$: in other words, $J$ is obtained by cutting off any part of $I$ above the mean, and $J$ has positive probability.

Let us modify $F$ into $F'$ by taking all the probability out of $J$ and placing it at $0$. In so doing, $\mu_k$ changes to

$$\mu_k' = \mu_k - \int_J x^k\,dF(x).$$

As a matter of notation, let us write $[g(x)] = \int_J g(x)\,dF(x)$ for such integrals, whence

$$\mu_2' = \mu_2 - [x^2], \quad \mu' = \mu - [x].$$

Calculate

$$\sigma'^2 = \mu_2' - \mu'^2 = \mu_2 - [x^2] - \bigl(\mu - [x]\bigr)^2 = \sigma^2 + \Bigl(\bigl(\mu[x] - [x^2]\bigr) + \bigl(\mu[x] - [x]^2\bigr)\Bigr).$$

The second term on the right, $\bigl(\mu[x] - [x]^2\bigr)$, is non-negative because $\mu \ge x$ everywhere on $J$. The first term on the right can be rewritten

$$\mu[x] - [x^2] = \mu(1 - [1]) + \bigl([\mu][x] - [x^2]\bigr).$$

The first term on the right is strictly positive because (a) $\mu > 0$ and (b) $[1] = F(J) < 1$, because we assumed $F$ is not concentrated at a point. The second term is non-negative because it can be rewritten as $[(\mu - x)(x)]$, and this integrand is nonnegative from the assumptions $\mu \ge x$ on $J$ and $0 \le x \le 1$. It follows that $\sigma'^2 - \sigma^2 > 0$.

We have just shown that under our assumptions, changing $F$ to $F'$ strictly increases its variance. The only way this cannot happen, then, is when all the probability of $F$ is concentrated at the endpoints $0$ and $1$, with (say) values $1-p$ and $p$, respectively. Its variance is easily calculated to equal $p(1-p)$, which is maximal when $p = 1/2$ and equals $1/4$ there.

Now when $F$ is a distribution on $[a,b]$, we recenter and rescale it to a distribution on $[0,1]$. The recentering does not change the variance, whereas the rescaling divides it by $(b-a)^2$. Thus an $F$ with maximal variance on $[a,b]$ corresponds to the distribution with maximal variance on $[0,1]$: it therefore is a Bernoulli$(1/2)$ distribution rescaled and translated to $[a,b]$, having variance $(b-a)^2/4$, QED.
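
For readers who want to see the mass-shifting step in action, here is a small sketch (my addition, not part of whuber's answer) that computes the variance of a discrete distribution on $[0,1]$ before and after moving interior mass below the mean down to $0$; the support points and probabilities are arbitrary illustrative choices.

```python
import numpy as np

def variance(values, probs):
    """Variance of a discrete distribution given support points and probabilities."""
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    mu = probs @ values
    return probs @ (values - mu) ** 2

# a distribution on [0, 1] with some mass in the interior
print("interior mass:  ", variance([0.0, 0.3, 0.6, 1.0], [0.25, 0.25, 0.25, 0.25]))

# move the interior mass lying below the mean (the point 0.3) down to 0,
# as in the F -> F' step of the proof: the variance strictly increases
print("mass moved to 0:", variance([0.0, 0.6, 1.0], [0.5, 0.25, 0.25]))

# the extreme case: all mass on the endpoints with p = 1/2, variance 1/4
print("Bernoulli(1/2): ", variance([0.0, 1.0], [0.5, 0.5]))
```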

whuber
Interesting, whuber. I didn't know this proof.
Zen
6
@Zen It's by no means as elegant as yours. I offered it because I have found myself over the years thinking in this way when confronted with much more complicated distributional inequalities: I ask how the probability can be shifted around in order to make the inequality more extreme. As an intuitive heuristic it's useful. By using approaches like the one laid out here, I suspect a general theory for proving a large class of such inequalities could be derived, with a kind of hybrid flavor of the Calculus of Variations and (finite dimensional) Lagrange multiplier techniques.
whuber
Perfect: your answer is important because it describes a more general technique that can be used to handle many other cases.
Zen
@whuber said - "I ask how the probability can be shifted around in order to make the inequality more extreme." -- this seems to be the natural way to think about such problems.
Glen_b -Reinstate Monica
There appear to be a few mistakes in the derivation. It should be
$$\mu[x] - [x^2] = \mu(1 - [1])[x] + \bigl([\mu][x] - [x^2]\bigr).$$
Also, $[(\mu - x)(x)]$ does not equal $[\mu][x] - [x^2]$, since $[\mu][x]$ is not the same as $\mu[x]$.
Leo
13

If the random variable is restricted to $[a,b]$ and we know the mean $\mu = E[X]$, then the variance is bounded by $(b - \mu)(\mu - a)$.

Let us first consider the case $a = 0,\ b = 1$. Note that for all $x \in [0,1]$, $x^2 \le x$, wherefore also $E[X^2] \le E[X]$. Using this result,

$$\sigma^2 = E[X^2] - (E[X])^2 = E[X^2] - \mu^2 \le \mu - \mu^2 = \mu(1 - \mu).$$

To generalize to intervals $[a,b]$ with $b > a$, consider $Y$ restricted to $[a,b]$. Define $X = \frac{Y - a}{b - a}$, which is restricted to $[0,1]$. Equivalently, $Y = (b-a)X + a$, and thus

$$\mathrm{Var}[Y] = (b-a)^2\,\mathrm{Var}[X] \le (b-a)^2\,\mu_X(1 - \mu_X),$$
where the inequality is based on the first result. Now, by substituting $\mu_X = \frac{\mu_Y - a}{b - a}$, the bound equals
$$(b-a)^2\,\frac{\mu_Y - a}{b - a}\left(1 - \frac{\mu_Y - a}{b - a}\right) = (b-a)^2\,\frac{\mu_Y - a}{b - a}\cdot\frac{b - \mu_Y}{b - a} = (\mu_Y - a)(b - \mu_Y),$$
which is the desired result.
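
A quick simulation (not part of the answer) can illustrate this mean-dependent bound $(b - \mu)(\mu - a)$; the interval, the example distributions, and the seed below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = 2.0, 5.0                 # arbitrary interval
n = 100_000

samples = {
    "uniform on [a, b]":     rng.uniform(a, b, n),
    "beta(2, 8) rescaled":   a + (b - a) * rng.beta(2, 8, n),
    "two-point, P(Y=b)=0.3": np.where(rng.random(n) < 0.3, b, a),
}

for name, y in samples.items():
    mu = y.mean()
    # the bound depends on the (estimated) mean; two-point distributions attain it
    print(f"{name:>22}: var = {y.var():.4f}  <=  (b-mu)(mu-a) = {(b - mu) * (mu - a):.4f}")
```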
Juho Kokkala
8

At @user603's request....

A useful upper bound on the variance $\sigma^2$ of a random variable that takes on values in $[a,b]$ with probability $1$ is $\sigma^2 \le \frac{(b-a)^2}{4}$. A proof for the special case $a=0,\ b=1$ (which is what the OP asked about) can be found here on math.SE, and it is easily adapted to the more general case. As noted in my comment above and also in the answer referenced herein, a discrete random variable that takes on values $a$ and $b$ with equal probability $\tfrac12$ has variance $\frac{(b-a)^2}{4}$, and thus no tighter general bound can be found.

Another point to keep in mind is that a bounded random variable has finite variance, whereas for an unbounded random variable, the variance might not be finite, and in some cases might not even be definable. For example, the mean cannot be defined for Cauchy random variables, and so one cannot define the variance (as the expectation of the squared deviation from the mean).
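
To illustrate the contrast with the Cauchy case, here is a short sketch (my addition, not part of the answer) comparing running means of a bounded sample with running means of Cauchy draws; the lack of a Cauchy mean shows up as running means that never settle down. The sample sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10 ** 6

bounded = rng.uniform(0, 1, n)        # bounded: mean and variance exist
cauchy = rng.standard_cauchy(n)       # Cauchy: no mean, no variance

for k in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    print(f"n = {k:>7}: uniform mean = {bounded[:k].mean():.4f}, "
          f"Cauchy mean = {cauchy[:k].mean():8.3f}")
```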

Dilip Sarwate
this is a special case of @Juho's answer
Aksakal
It was just a comment, but I could also add that this answer does not answer the question asked.
Aksakal
@Aksakal So??? Juho was answering a slightly different and much more recently asked question. This new question has been merged with the one you see above, which I answered ten months ago.
Dilip Sarwate
0

Are you sure that this is true in general - for continuous as well as discrete distributions? Can you provide a link to the other pages? For a general distribution on $[a,b]$ it is trivial to show that

$$\mathrm{Var}(X) = E\left[(X - E[X])^2\right] \le E\left[(b-a)^2\right] = (b-a)^2.$$
I can imagine that sharper inequalities exist ... Do you need the factor $1/4$ for your result?

On the other hand, one can find it with the factor $1/4$ under the name Popoviciu's inequality on Wikipedia.

This article looks better than the Wikipedia article ...

For a uniform distribution it holds that

$$\mathrm{Var}(X) = \frac{(b-a)^2}{12}.$$
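
For what it's worth, a quick check (my addition, not part of the answer) confirms that the uniform variance sits well below the $(b-a)^2/4$ bound; the interval $[0,1]$ and the seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = 0.0, 1.0
x = rng.uniform(a, b, 200_000)

print("empirical uniform variance:", round(x.var(), 4))   # close to 1/12
print("(b - a)^2 / 12            :", (b - a) ** 2 / 12)    # exact uniform variance
print("(b - a)^2 / 4             :", (b - a) ** 2 / 4)     # Popoviciu upper bound
```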
Ric
This page states the result with the start of a proof that gets a bit too involved for me as it seems to require an understanding of the "Fundamental Theorem of Linear Programming". sci.tech-archive.net/Archive/sci.math/2008-06/msg01239.html
Adam Russell
Thank you for putting a name to this! "Popoviciu's Inequality" is just what I needed.
Adam Russell
2
This answer makes some incorrect suggestions: 1/4 is indeed right. The reference to Popoviciu's inequality will work, but strictly speaking it applies only to distributions with finite support (in particular, that includes no continuous distributions). A limiting argument would do the trick, but something extra is needed here.
whuber
2
A continuous distribution can approach a discrete one (in cdf terms) arbitrarily closely (e.g. construct a continuous density from a given discrete one by placing a little Beta(4,4)-shaped kernel centered at each mass point - of the appropriate area - and let the standard deviation of each such kernel shrink toward zero while keeping its area constant). Such discrete bounds as discussed here will thereby also act as bounds on continuous distributions. I expect you're thinking about continuous unimodal distributions... which indeed have different upper bounds.
Glen_b -Reinstate Monica
2
Well ... my answer was the least helpful, but I will leave it here due to the nice comments. Cheers, R
Ric