# Laplace’s Method – the Saddle-Point Approximation

(This originally appeared over on my physics blog. Enjoy!)

Lately, as a form of review, I’ve been taking a quantum mechanics course on Coursera. (It was, in fact, that course which prompted my post a few weeks ago deriving the Schrödinger equation.) A couple of the lectures were devoted to a brief introduction to Feynman’s path-integral formulation of quantum mechanics, something typically not brought up in courses at that level, which was a refreshing change of pace. A key component of deriving Feynman’s approach is Laplace’s method, a mathematical technique that I’ve probably not thought about since taking Mathematical Methods for Physicists way back in the Dark Ages when I rode a dinosaur to grad school. (Now, where the heck is my copy of Arfken?) A review was definitely in order.

“So, what the heck is Laplace’s method?”

Briefly stated, suppose that you are asked to evaluate an integral of the following form:

$$I = \int_a^b e^{M f(x)}\,dx$$

We are assuming here that f(x) is a twice-differentiable function (an important requirement for the method being discussed here), M is a very large number (the larger, the better the accuracy of this method), and the integration limits a and b may be finite or infinite.

So, how can we analytically evaluate this integral? Well, that depends upon what f(x) is. For some functions, sure, we can evaluate this without problem, but here we are discussing a more general case that is independent of the form of f(x). Unfortunately, there is no straightforward way to analytically evaluate this integral for any old form of f(x). What to do?

This is where Laplace’s method comes into play. Laplace’s method is a technique for constructing an approximation of the integral being evaluated. This is done by finding the global maximum of f(x), which is of course done by setting its first derivative equal to zero and solving for the corresponding value of x, which we shall call x_{0}. (We also double-check that this is a maximum rather than a minimum or an inflection point by verifying that the second derivative is less than zero there.) Then, we take the Taylor series expansion of f(x) around x_{0} up to quadratic order:

$$f(x) \approx f(x_{0}) + f'(x_{0})(x - x_{0}) + \frac{1}{2} f''(x_{0})(x - x_{0})^{2}$$

Of course, we’ve already established that f'(x_{0}) is zero (that is, x_{0} is a stationary point), so we can drop the second term. Remembering that the second derivative is negative at the stationary point, our Taylor series approximation of f(x) then becomes:

$$f(x) \approx f(x_{0}) - \frac{1}{2} \left| f''(x_{0}) \right| (x - x_{0})^{2}$$

Now, when we substitute this approximation of f(x) back into our original integral, something very handy takes place:

$$I \approx e^{M f(x_{0})} \int_a^b e^{-\frac{M}{2} \left| f''(x_{0}) \right| (x - x_{0})^{2}}\,dx$$

Well, hey nonny nonny, our integral is now just a Gaussian integral, and we can evaluate that! In fact, the bigger the value of M, the more closely our integral aligns with a Gaussian integral.

“But, what the heck,” I hear you cry, “is a Gaussian integral?”

Well, that is a whole discussion for another day; but, in brief, the Gaussian integral (named for the legendary mathematician Carl Friedrich Gauss, and which is also quite useful for calculating propagators in the path-integral formulation of QM) is as follows:

$$\int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}$$

Or, for a more generalized form:

$$\int_{-\infty}^{\infty} e^{-a(x - b)^{2}}\,dx = \sqrt{\frac{\pi}{a}}, \qquad a > 0$$

“But wait a minute,” you should now object. “Our definite integral is over the range from a to b, and those bounds may or may not be infinite. You said so earlier!”

An appropriate observation. You’ve been paying attention, and may now move to the head of the class. However, keep in mind that the exponential decays quite rapidly away from our stationary point x_{0}, particularly for large values of M. Portions of the integral far away from the stationary point do not make significant contributions, so we can accept this shift in integration limits for the sake of our approximation. The integration limits just don’t matter so much in this case.
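To put a number on just how little the tails contribute, here’s a quick check (a Python sketch of my own, not from the lectures; the values M = 100 and |f''(x_{0})| = 1 are purely illustrative):

```python
import math

def captured_fraction(a, half_width):
    """Fraction of the full Gaussian integral sqrt(pi/a) captured by
    integrating exp(-a*x^2) over [-half_width, half_width] only.
    This equals erf(sqrt(a) * half_width)."""
    return math.erf(math.sqrt(a) * half_width)

# With M = 100 and |f''(x0)| = 1, the exponent is -(M/2) x^2, so a = 50.
a = 50.0
for half_width in (0.25, 0.5, 1.0):
    frac = captured_fraction(a, half_width)
    print(f"window +/- {half_width}: captures {frac:.8f} of the full integral")
```

Even a window of half a unit on either side of the stationary point already captures essentially all of the area, which is why extending the limits to plus and minus infinity is harmless for large M.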

So, evaluating the Gaussian integral, our approximation becomes as follows:

$$I \approx e^{M f(x_{0})} \sqrt{\frac{2\pi}{M \left| f''(x_{0}) \right|}}$$

And that, my friends, is all there is to it. Except for the other way of doing it.

Well, it is really the same way, but “flipping the saddle,” so to speak. We can replace the large number M with −1/ε, where ε is a tiny positive number. In that formulation, instead of expanding about a maximum, we expand about a minimum (due to the negative sign in the substitution we just made), so we are looking for a positive second derivative. The other steps are the same, and the final form of the approximation becomes the following:

$$\int_a^b e^{-f(x)/\epsilon}\,dx \approx e^{-f(x_{0})/\epsilon} \sqrt{\frac{2\pi\epsilon}{f''(x_{0})}}$$

So, there you have it: a way to integrate the un-integratable!
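As a closing sanity check, the whole recipe above can be sketched numerically (a Python toy of my own devising; f(x) = sin(x) on [0, π] is just an illustrative choice, with x_{0} = π/2, f(x_{0}) = 1, and f''(x_{0}) = −1):

```python
import math

def laplace_approx(M, f_x0, fpp_x0):
    """Laplace's method: exp(M*f(x0)) * sqrt(2*pi / (M*|f''(x0)|))."""
    return math.exp(M * f_x0) * math.sqrt(2.0 * math.pi / (M * abs(fpp_x0)))

def simpson(g, a, b, n=10_000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += g(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3.0

M = 100.0
# f(x) = sin(x): global maximum on [0, pi] at x0 = pi/2,
# where f(x0) = 1 and f''(x0) = -sin(pi/2) = -1.
numeric = simpson(lambda x: math.exp(M * math.sin(x)), 0.0, math.pi)
approx = laplace_approx(M, f_x0=1.0, fpp_x0=-1.0)
print(f"ratio approx/numeric = {approx / numeric:.6f}")  # close to 1 for large M
```

Cranking M higher pushes the ratio ever closer to one, exactly as the method promises.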