How does the saddlepoint approximation work?


How does the saddlepoint approximation work? What sort of problem is it good for?
(Feel free to use a particular example or examples by way of illustration.)

Are there any drawbacks, difficulties, things to watch out for, or traps for the unwary?

Glen_b -Reinstate Monica

Answers:


The saddlepoint approximation to a probability density function (it works likewise for mass functions, but I will only talk here in terms of densities) is a surprisingly well-working approximation, which can be seen as a refinement of the central limit theorem. So it will only work in settings where there is a central limit theorem, but it needs stronger assumptions.

We start from the assumption that the moment generating function exists and is twice differentiable. This implies in particular that all moments exist. Let X be a random variable with moment generating function (mgf)

$$M(t) = E\, e^{tX}$$
and cgf (cumulant generating function) $K(t) = \log M(t)$ (where $\log$ denotes the natural logarithm). In the development I will follow closely Ronald W Butler: "Saddlepoint Approximations with Applications" (CUP). We develop the saddlepoint approximation using the Laplace approximation to a certain integral. Write
$$e^{K(t)} = \int_{-\infty}^{\infty} e^{tx} f(x)\, dx = \int_{-\infty}^{\infty} \exp(tx + \log f(x))\, dx = \int_{-\infty}^{\infty} \exp(-h(t,x))\, dx,$$
where $h(t,x) = -tx - \log f(x)$. Now we Taylor-expand $h(t,x)$ in $x$, considering $t$ as a constant. This gives
$$h(t,x) = h(t,x_0) + h'(t,x_0)(x - x_0) + \frac12 h''(t,x_0)(x - x_0)^2 + \dotsb$$
where $'$ denotes differentiation with respect to $x$. Note that
$$h'(t,x) = -t - \frac{\partial}{\partial x}\log f(x), \qquad h''(t,x) = -\frac{\partial^2}{\partial x^2}\log f(x) > 0$$
(the last inequality by assumption, since it is needed for the approximation to work). Let $x_t$ be the solution to $h'(t,x_t) = 0$. We will assume that this gives a minimum of $h(t,x)$ as a function of $x$. Using this expansion in the integral and forgetting the $\dotsb$ part gives
$$e^{K(t)} \approx \int_{-\infty}^{\infty} \exp\!\Big(-h(t,x_t) - \frac12 h''(t,x_t)(x - x_t)^2\Big)\, dx = e^{-h(t,x_t)} \int_{-\infty}^{\infty} e^{-\frac12 h''(t,x_t)(x - x_t)^2}\, dx,$$
which is a Gaussian integral, giving
$$e^{K(t)} \approx e^{-h(t,x_t)} \sqrt{\frac{2\pi}{h''(t,x_t)}}.$$
This gives a first version of the saddlepoint approximation as
$$f(x_t) \approx \sqrt{\frac{h''(t,x_t)}{2\pi}} \exp\big(K(t) - t x_t\big) \tag{*}$$
Note that the approximation has the form of an exponential family.

Now we need to do some work to get this in a more useful form.

From $h'(t,x_t) = 0$ we get

$$t = -\frac{\partial}{\partial x_t}\log f(x_t).$$
Differentiating this with respect to $x_t$ gives
$$\frac{\partial t}{\partial x_t} = -\frac{\partial^2}{\partial x_t^2}\log f(x_t) > 0$$
(by our assumptions), so the relationship between $t$ and $x_t$ is monotone, and $x_t$ is well defined. What we need is an approximation to $\log f(x_t)$. To that end, solving from (*) gives
$$\log f(x_t) = K(t) - t x_t - \frac12 \log 2\pi + \frac12 \log\!\left(-\frac{\partial^2}{\partial x_t^2}\log f(x_t)\right). \tag{**}$$
Assuming that the last term above depends only weakly on $x_t$, so that its derivative with respect to $x_t$ is approximately zero (we will come back to comment on this), we get
$$\frac{\partial \log f(x_t)}{\partial x_t} \approx \big(K'(t) - x_t\big)\frac{\partial t}{\partial x_t} - t,$$
so that (using $t = -\partial \log f(x_t)/\partial x_t$ from above)
$$0 \approx t + \frac{\partial \log f(x_t)}{\partial x_t} = \big(K'(t) - x_t\big)\frac{\partial t}{\partial x_t}$$
and, since $\frac{\partial t}{\partial x_t} > 0$,
$$K'(t) - x_t = 0, \tag{§}$$
which is called the saddlepoint equation.

What we are still missing, in order to use (*), is $h''(t,x_t)$. Note that
$$h''(t,x_t) = -\frac{\partial^2 \log f(x_t)}{\partial x_t^2} = -\frac{\partial}{\partial x_t}\!\left(\frac{\partial \log f(x_t)}{\partial x_t}\right) = -\frac{\partial}{\partial x_t}(-t) = \left(\frac{\partial x_t}{\partial t}\right)^{-1},$$
and this we can find by implicit differentiation of the saddlepoint equation $K'(t) = x_t$:
$$\frac{\partial x_t}{\partial t} = K''(t).$$
The result is that (up to our approximation)
$$h''(t,x_t) = \frac{1}{K''(t)}.$$
Putting everything together, we have the final saddlepoint approximation of the density f(x) as
$$f(x_t) \approx e^{K(t) - t x_t} \sqrt{\frac{1}{2\pi K''(t)}}.$$
Now, to use this in practice to approximate the density at a specific point $x_t$, we solve the saddlepoint equation (§) for that $x_t$ to find $t$.
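
To make this concrete, here is a minimal generic sketch in R (the helper name saddle_density is mine, not from Butler's book): it solves (§) numerically with uniroot and plugs the root into the formula above.

# Generic saddlepoint density approximation: given K, K' and K'' of X,
# approximate the density f at the point x.
saddle_density <- function(x, K, K1, K2) {
    # solve the saddlepoint equation K'(t) = x for t:
    t <- uniroot(function(s) K1(s) - x,
                 lower = -50, upper = 50, extendInt = "yes")$root
    exp(K(t) - t*x) / sqrt(2*pi*K2(t))
}

# Example: standard normal, K(t) = t^2/2, K'(t) = t, K''(t) = 1
saddle_density(1.3, K = function(t) t^2/2, K1 = function(t) t,
               K2 = function(t) 1)
dnorm(1.3)   # exact value, for comparison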

The saddlepoint approximation is often stated as an approximation to the density of the mean of $n$ iid observations $X_1, X_2, \dotsc, X_n$. The cumulant generating function of the mean is $nK(t/n)$, so, after a change of variable in the saddlepoint, the approximation for the mean becomes

$$f_{\bar X}(\bar x_t) \approx e^{nK(t) - n t \bar x_t} \sqrt{\frac{n}{2\pi K''(t)}},$$

where $t$ solves $K'(t) = \bar x_t$.

Let us look at a first example. What do we get if we try to approximate the standard normal density

$$f(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac12 x^2}?$$
The mgf is $M(t) = \exp(\frac12 t^2)$, so
$$K(t) = \frac12 t^2, \qquad K'(t) = t, \qquad K''(t) = 1,$$
so the saddlepoint equation is $t = x_t$ and the saddlepoint approximation gives
$$f(x_t) \approx e^{\frac12 t^2 - t x_t} \sqrt{\frac{1}{2\pi \cdot 1}} = \frac{1}{\sqrt{2\pi}} e^{-\frac12 x_t^2},$$
so in this case the approximation is exact.
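
As a quick numerical check of the formula for the mean (my sketch, not part of the original development): for the unit exponential, $K(t) = -\log(1-t)$, the saddlepoint equation $K'(t) = 1/(1-t) = \bar x$ has the closed-form solution $t = 1 - 1/\bar x$, and the mean of $n$ unit exponentials has an exact Gamma$(n, n)$ density to compare against.

# Saddlepoint approximation to the density of the mean of n unit exponentials,
# compared with the exact Gamma(n, n) density of that mean.
n <- 10
fhat_mean <- function(xbar) {
    t  <- 1 - 1/xbar            # closed-form solution of K'(t) = xbar
    K  <- -log(1 - t)           # cgf of the unit exponential
    K2 <- 1/(1 - t)^2           # K''(t)
    exp(n*K - n*t*xbar) * sqrt(n/(2*pi*K2))
}
xbar <- seq(0.2, 3, by = 0.1)
max(abs(fhat_mean(xbar)/dgamma(xbar, shape = n, rate = n) - 1))

The maximal relative error is about $1/(12n)$ (the Stirling error for $\Gamma(n)$) and does not depend on $\bar x$: for the gamma family the saddlepoint approximation is exact up to a constant factor.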

Let us look at a very different application: bootstrapping in the transform domain. We can do the bootstrap analytically, using the saddlepoint approximation to the bootstrap distribution of the mean!

Assume we have $X_1, X_2, \dotsc, X_n$ iid from some density $f$ (in the simulated example we will use a unit exponential distribution). From the sample we calculate the empirical moment generating function

$$\hat M(t) = \frac{1}{n} \sum_{i=1}^{n} e^{t x_i}$$
and then the empirical cgf $\hat K(t) = \log \hat M(t)$. We need the empirical mgf for the mean, which is $\hat M(t/n)^n$, and hence the empirical cgf for the mean
$$\hat K_{\bar X}(t) = n \log \hat M(t/n),$$
which we use to construct the saddlepoint approximation. Below is some R code (R version 3.2.3):

set.seed(1234)
x  <-  rexp(10)

require(Deriv)   ### From CRAN
drule[["sexpmean"]]   <-  alist(t=sexpmean1(t))  # adding diff rules to 
                                                 # Deriv
drule[["sexpmean1"]]  <-  alist(t=sexpmean2(t))

###

make_ecgf_mean  <-   function(x)   {
    n  <-  length(x)
    sexpmean  <-  function(t) mean(exp(t*x))
    sexpmean1 <-  function(t) mean(x*exp(t*x))
    sexpmean2 <-  function(t) mean(x*x*exp(t*x))
    emgf  <-  function(t) sexpmean(t)
    ecgf  <-   function(t)  n * log( emgf(t/n) )
    ecgf1 <-   Deriv(ecgf)
    ecgf2 <-   Deriv(ecgf1)
    return( list(ecgf=Vectorize(ecgf),
                 ecgf1=Vectorize(ecgf1),
                 ecgf2 =Vectorize(ecgf2) )    )
}

### Now we need a function solving the saddlepoint equation and constructing
### the approximation:
###

make_spa <-  function(cumgenfun_list) {
    K  <- cumgenfun_list[[1]]
    K1 <- cumgenfun_list[[2]]
    K2 <- cumgenfun_list[[3]]
    # local function for solving the speq:
    solve_speq  <-  function(x) {
        # returns the saddlepoint: solves K'(s) = x numerically
        uniroot(function(s) K1(s) - x,
                lower = -100, upper = 100,
                extendInt = "yes")$root
    }
    # Function finding fhat for one specific x:
    fhat0  <- function(x) {
        # Solve saddlepoint equation:
        s  <-  solve_speq(x)
        # Calculating saddlepoint density value:
        (1/sqrt(2*pi*K2(s)))*exp(K(s)-s*x)
    }
    # Returning a vectorized version:
    return(Vectorize(fhat0))
} # end make_spa

(I have tried to write this as general code which can easily be modified for other cgfs, but it is still not very robust ...)

Then we use this for a sample of ten independent observations from a unit exponential distribution. We do the usual nonparametric bootstrapping "by hand", plot the resulting bootstrap histogram for the mean, and overplot the saddlepoint approximation:

> ECGF  <- make_ecgf_mean(x)
> fhat  <-  make_spa(ECGF)
> boots  <-  replicate(10000, mean(sample(x, length(x), replace=TRUE)), simplify=TRUE)
> hist(boots, prob=TRUE)
> plot(fhat, from=0.001, to=2, col="red", add=TRUE)

Giving the resulting plot:

[Figure: saddlepoint approximation (red) overlaid on the bootstrap histogram of the mean]

The approximation seems to be rather good!

We could get an even better approximation by integrating the saddlepoint approximation and rescaling:

> integrate(fhat, lower=0.1, upper=2)
1.026476 with absolute error < 9.7e-07

Now the cumulative distribution function based on this approximation could be found by numerical integration, but it is also possible to make a direct saddlepoint approximation for that. But that is for another post; this one is long enough.
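
For completeness, here is a minimal sketch of that numerical-integration route (my code, reusing fhat and the integration limits from above):

# Approximate CDF by numerically integrating the rescaled saddlepoint density.
const <- integrate(fhat, lower = 0.1, upper = 2)$value
Fhat  <- Vectorize(function(q)
    integrate(fhat, lower = 0.1, upper = q)$value / const)
Fhat(c(0.5, 1, 1.5))   # approximate P(mean <= q) for a few values of q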

Finally, some comments on things left out of the development above. In (**) we made an approximation by essentially ignoring the last term, treating its derivative as zero. Why can we do that? One observation is that for the normal density function the left-out term contributes nothing, so there the approximation is exact. Since the saddlepoint approximation is a refinement of the central limit theorem, we are somewhat close to the normal, so this should work well. One can also look at specific examples: for the saddlepoint approximation to the Poisson distribution, the left-out term becomes a trigamma function, which indeed is rather flat when its argument is not too close to zero.
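
To see that flatness (a one-line check of mine; trigamma is $\psi'$, available in base R):

# -d^2/dx^2 log f(x) for the Poisson is trigamma(x + 1); it decays slowly,
# so it is nearly flat away from zero.
curve(trigamma(x + 1), from = 0.5, to = 10)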

Finally, why the name? It comes from an alternative derivation using complex-analysis techniques, in which the integration contour is chosen to pass through a saddle point of the integrand. We can look into that later, but in another post!

kjetil b halvorsen
What you have so far is great. The development there is very clear.
Glen_b -Reinstate Monica
kjetil I attempted to fix four small typos 1. "In the development I wil follow" 2. "needed for the approximatrion to work" 3. "What we misses now" 4. "implicit differentiation of the sadlepoint" but in doing so it looks like I broke one of your equations - I have no idea how, since I changed nothing but those text items (as you can see from the edit history). I'd roll it back but since I can't explain how fixing those errors caused a problem I don't want to cause still further problems. My apologies. (It actually looked like it broke as soon as I opened the edit session)
Glen_b -Reinstate Monica
It's possible there's a mathJax bug or a bug in the edit code that leads to this issue.
Glen_b -Reinstate Monica
@Christoph Hanck: To get an approximation at some specific $x_t$, you solve the saddlepoint equation (§) to find $t$.
kjetil b halvorsen
Maybe it is worth pointing out that, when the empirical cgf is used, the resulting saddlepoint approximation is undefined outside the convex hull of the data. See Feuerverger (1989), "On the Empirical Saddlepoint Approximation". This should be the case also in the bootstrap example above.
Matteo Fasiolo

Here I expand on kjetil's answer, focusing on situations where the cumulant generating function (CGF) is unknown but can be estimated from data $x_1, \dotsc, x_n$, where $x \in \mathbb{R}^d$. The simplest CGF estimator is probably that of Davison and Hinkley (1988),

$$\hat{K}(\lambda) = \log\!\left( \frac{1}{n} \sum_{i=1}^{n} e^{\lambda^T x_i} \right),$$
which is the one used in kjetil's bootstrap example above. This estimator has the drawback that the resulting saddlepoint equation
$$\hat{K}'(\lambda) = y$$
can be solved only if $y$, the point at which we want to evaluate the saddlepoint density, falls inside the convex hull of $x_1, \dotsc, x_n$.
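
To see why, in the univariate case (a small illustration of mine): $\hat K'(\lambda)$ is a weighted average of the observations, with weights proportional to $e^{\lambda x_i}$, so it can never leave the interval $[\min_i x_i, \max_i x_i]$, no matter what $\lambda$ is.

# The derivative of the empirical CGF is a weighted mean of the data,
# so the saddlepoint equation Khat'(lambda) = y has no solution for y
# outside [min(x), max(x)].
set.seed(1)
x <- rgamma(20, 2, 1)
Khat1 <- function(lambda) sum(x * exp(lambda * x)) / sum(exp(lambda * x))
range(sapply(seq(-30, 30, by = 0.5), Khat1))   # stays inside range(x)
range(x)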

Wong (1992) and Fasiolo et al. (2016) addressed this problem by proposing two alternative CGF estimators, designed in such a way that the saddlepoint equation can be solved for any $y$. The solution of Fasiolo et al. (2016), called the extended Empirical Saddlepoint Approximation (ESA), is implemented in the esaddle R package; below are a couple of examples.

As a simple univariate example, consider using ESA to approximate a Gamma(2, 1) density.

library("devtools")
install_github("mfasiolo/esaddle")
library("esaddle")

########## Simulating data
x <- rgamma(1000, 2, 1)

# Fixing tuning parameter of ESA
decay <-  0.05

# Evaluating ESA at several points
xSeq <- seq(-2, 8, length.out = 200)
tmp <- dsaddle(y = xSeq, X = x, decay = decay, log = TRUE)

# Plotting true density, ESA and normal approximation
plot(xSeq, exp(tmp$llk), type = 'l', ylab = "Density", xlab = "x")
lines(xSeq, dgamma(xSeq, 2, 1), col = 3)
lines(xSeq, dnorm(xSeq, mean(x), sd(x)), col = 2)
suppressWarnings( rug(x) )
legend("topright", c("ESA", "Truth", "Gaussian"), col = c(1, 3, 2), lty = 1)

This is the fit:

[Figure: ESA (black), true Gamma(2,1) density (green) and Gaussian approximation (red), with a rug of the data]

Looking at the rug, it is clear that we are evaluating the ESA density outside the range of the data. A more challenging example is the following warped bivariate Gaussian.

# Function that evaluates the true log-density
dwarp <- function(x, alpha) {
  d <- length(alpha) + 1
  lik <- dnorm(x[ , 1], log = TRUE)
  tmp <- x[ , 1]^2
  for(ii in 2:d)
    lik <- lik + dnorm(x[ , ii] - alpha[ii-1]*tmp, log = TRUE)
  lik
}

# Function that simulates from true distribution
rwarp <- function(n = 1, alpha) {
  d <- length(alpha) + 1
  z <- matrix(rnorm(n*d), n, d)
  tmp <- z[ , 1]^2
  for(ii in 2:d) z[ , ii] <- z[ , ii] + alpha[ii-1]*tmp
  z
}

set.seed(64141)
# Creating 2d grid
m <- 50
expansion <- 1
x1 <- seq(-2, 3, length=m)* expansion; 
x2 <- seq(-3, 3, length=m) * expansion
x <- expand.grid(x1, x2) 

# Evaluating true log-density on grid
alpha <- 1
dw <- dwarp(x, alpha = alpha)

# Simulate random variables
X <- rwarp(1000, alpha = alpha)

# Evaluating ESA density
dwa <- dsaddle(as.matrix(x), X, decay = 0.1, log = FALSE)$llk

# Plotting true density
par(mfrow = c(1, 2))
plot(X, pch=".", col=1, ylim = c(min(x2), max(x2)), xlim = c(min(x1), max(x1)),
     main = "True density", xlab = expression(X[1]), ylab = expression(X[2]))
contour(x1, x2, matrix(dw, m, m), levels = quantile(as.vector(dw), seq(0.8, 0.995, length.out = 10)), col=2, add=T)

# Plotting ESA density
plot(X, pch=".",col=2, ylim = c(min(x2), max(x2)), xlim = c(min(x1), max(x1)),
     main = "ESA density", xlab = expression(X[1]), ylab = expression(X[2]))
contour(x1, x2, matrix(dwa, m, m), levels = quantile(as.vector(dwa), seq(0.8, 0.995, length.out = 10)), col=2, add=T)

[Figure: simulated points with contours of the true density (left panel) and of the ESA density (right panel)]

The fit is quite good.

Matteo Fasiolo

Building on kjetil's excellent answer, I tried to come up with a little example of my own, which I would like to discuss because it seems to raise a relevant point:

Consider the $\chi^2(m)$ distribution. $K(t)$ and its derivatives can be found here and are reproduced in the functions in the code below.

x <- seq(0.01,20,by=.1)
m <- 5

K  <- function(t,m) -1/2*m*log(1-2*t)
K1 <- function(t,m) m/(1-2*t)
K2 <- function(t,m) 2*m/(1-2*t)^2

saddlepointapproximation <- function(x) {
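  # t solves the saddlepoint equation K1(t,m) = x in closed form:
  # m/(1 - 2t) = x  =>  t = 1/2 - m/(2x)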
  t <- .5-m/(2*x)
  exp( K(t,m)-t*x )*sqrt( 1/(2*pi*K2(t,m)) )
}
plot( x, saddlepointapproximation(x), type="l", col="salmon", lwd=2)
lines(x, dchisq(x,df=m), col="lightgreen", lwd=2)

This produces:

[Figure: saddlepoint approximation (salmon) and exact $\chi^2(5)$ density (light green)]

This evidently produces an approximation that captures the qualitative features of the density but, as confirmed in kjetil's comment, it is not a proper density: it lies above the exact density everywhere. Rescaling the approximation as follows gives the almost negligible approximation error plotted below.

scalingconstant <- integrate(saddlepointapproximation, x[1], x[length(x)])$value

approximationerror_unscaled <- dchisq(x,df=m) - saddlepointapproximation(x)
approximationerror_scaled   <- dchisq(x,df=m) - saddlepointapproximation(x) /
                                                    scalingconstant

plot( x, approximationerror_unscaled, type="l", col="salmon", lwd=2)
lines(x, approximationerror_scaled,             col="blue",   lwd=2)

[Figure: approximation error without rescaling (salmon) and with rescaling (blue)]

Christoph Hanck
That is a feature: the saddlepoint approximation does not need to integrate to one, but it is often close. It can be rescaled by numerical integration.
kjetil b halvorsen
It could be more revealing to plot the relative error!
kjetil b halvorsen
approximationerror_unscaled/approximationerror_scaled turns out to oscillate around 25.90798
Christoph Hanck