**Update (1/26/2014):** Dr. Tony Padilla, one of the folks behind the Numberphile videos, has posted a response to all of the controversy. Plus, an interesting blog entry at Physics Central talking about the applications of this to physics (beyond the prominent mention of string theory).

**Update (1/21/2014):** Mark Chu-Carroll has posted a follow-up, and Evelyn Lamb over at “Roots of Unity” has chimed in as well.

Previously, I had written about the somewhat bizarre behavior of a divergent geometric infinite series, containing no negative numbers, but which appeared to add up to -1:

1 + 2 + 4 + 8 + 16 + … = -1

I discussed how this divergent infinite series actually represents the Taylor Series expansion of the analytic continuation on the complex plane of a function that does actually come out to equal -1 at a specific relevant point. (Both Taylor Series and analytic continuation are topics which warrant their own discussions; but, for now, we’ll hold off on that until another day.)

That article had also contained a video from Minute Physics which discussed this bizarre series, although without delving into details about why the series exhibited such bizarre behavior. Well, the fine folks from Numberphile have posted yet another video about a similarly bizarre series:

“You have to go to infinity, Brady.”

Now, it just so happens that this dovetails nicely into something I’ve been leading up to in my posts: the Riemann zeta function and the Riemann Hypothesis. I have a bit more groundwork to cover before diving into them; but, as it turns out, the Riemann zeta function comes into play in a secondary proof for the sum described in that video. That proof is covered in a secondary video:

So here’s the bizarre series, the sum of all of the natural numbers:

1 + 2 + 3 + 4 + 5 + … = -1/12

Weird, huh? Obviously, the series doesn’t actually add up to this negative sum. It is a divergent series that blows up to infinity. A bit more on that in a moment. But first, here is the proof presented in the first Numberphile video.

First, let us take the following series, known as Grandi’s Series, an infinite series alternating between 1 and -1:

S_{1} = 1 – 1 + 1 – 1 + 1 – 1 + …

This series is a bit challenging to evaluate. The partial sum of an odd number of terms in this series will always yield a value of 1, whereas the partial sum of an even number of terms will always give 0. Now, it can be shown (and is shown in yet another Numberphile video) that the appropriate sum to assign to this series is the average of the two values, 1/2.
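
This behavior is easy to see numerically. Here is a quick sketch (the function and variable names are my own) showing that the partial sums bounce between 1 and 0 while their running average settles toward 1/2:

```python
def grandi_partial_sums(n):
    """Return the first n partial sums of 1 - 1 + 1 - 1 + ..."""
    sums, total = [], 0
    for k in range(n):
        total += (-1) ** k   # terms are +1, -1, +1, -1, ...
        sums.append(total)
    return sums

partials = grandi_partial_sums(1000)
average = sum(partials) / len(partials)  # mean of the partial sums

# Odd-position partial sums are 1, even-position are 0; the average hovers near 1/2.
```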

Let us also consider the following series:

S_{2} = 1 – 2 + 3 – 4 + 5 – 6 + …

And our sum of interest is defined as:

S = 1 + 2 + 3 + 4 + 5 + …

First off, let us add S_{2} to itself, shifting the second copy over by one term before adding:

2S_{2} = (1 – 2 + 3 – 4 + …) + (1 – 2 + 3 – …) = 1 – 1 + 1 – 1 + …

Well, now we are getting somewhere. This yields

2S_{2} = S_{1} = 1/2, which means S_{2} = 1/4

Now, let us subtract S_{2} from S:

S – S_{2} = (1 – 1) + (2 + 2) + (3 – 3) + (4 + 4) + … = 4 + 8 + 12 + … = 4S

Now, we’ve already figured out what S_{2} is, so we substitute that in and simplify:

S – 1/4 = 4S, so -3S = 1/4, and therefore S = -1/12

So, there you have it.

Okay, how about a different, more rigorous proof? Let’s take a look at the proof from the second Numberphile video, a proof first discovered by Leonhard Euler. First of all, let us consider the following series:

1/(1 – x) = 1 + x + x^{2} + x^{3} + x^{4} + …

It can readily be shown (but I’ll leave it as an exercise for the reader for now) that the above is strictly true for values of x with magnitude less than one. Now let us differentiate with respect to x:

1/(1 – x)^{2} = 1 + 2x + 3x^{2} + 4x^{3} + …

Now, let us set x = -1. What does this give us?

1/4 = 1 – 2 + 3 – 4 + …
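
Setting x = -1 lands right on the boundary of convergence, but we can sneak up on it numerically. This sketch (the names are my own) compares the truncated series against the closed form 1/(1 – x)^{2} just inside the radius of convergence:

```python
def series_value(x, terms=10000):
    """Partial sum of n * x**(n-1) for n = 1..terms."""
    return sum(n * x ** (n - 1) for n in range(1, terms + 1))

x = -0.99                             # just inside the radius of convergence
closed_form = 1.0 / (1.0 - x) ** 2    # 1/(1.99)^2, already close to 1/4

# The truncated series tracks the closed form, which approaches 0.25 as x -> -1.
```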

Now we bring the big guns to bear. Here is the Riemann zeta function:

ζ(s) = 1/1^{s} + 1/2^{s} + 1/3^{s} + 1/4^{s} + …

Now, when Euler first worked with this function, he studied it only in the context of s being a real number. However, Riemann extended the analysis of the function into the complex plane. We’ll focus for the time being on Euler’s view of the function, but keep in the back of your head that a more rigorous version of this requires considering the complex plane.

Now, let us do a bit of manipulation of the zeta function, multiplying it by 1/2^{s}:

(1/2^{s})ζ(s) = 1/2^{s} + 1/4^{s} + 1/6^{s} + 1/8^{s} + …

Next, Euler subtracted twice this expression from the original zeta function:

(1 – 2/2^{s})ζ(s) = 1/1^{s} – 1/2^{s} + 1/3^{s} – 1/4^{s} + …

Now we set s = -1 and plug it in:

(1 – 4)ζ(-1) = 1 – 2 + 3 – 4 + …

But we’ve already figured out above that this series is equal to 1/4, so:

-3ζ(-1) = 1/4, and therefore ζ(-1) = -1/12

Now, keep in mind that Euler was looking at this function on the real number line. The series in question is simply the special case of the zeta function at s=-1. However, there is a catch. In the realm of real numbers, the zeta function only converges for values of s larger than 1. What Riemann realized is that he could make the zeta function converge for all values of s, except for a pole at s=1, by switching to the complex plane and performing the analytic continuation of the function. In that context, the value of the zeta function at s=-1 is in fact -1/12.
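
We can at least sanity-check Euler’s manipulation numerically where everything converges. At s = 2, both sides of (1 – 2/2^{s})ζ(s) = 1 – 1/2^{s} + 1/3^{s} – … are well-defined, and a quick sketch (the function names are my own) confirms they agree:

```python
import math

def zeta_partial(s, terms=100000):
    """Truncated zeta function: sum of 1/n**s for n = 1..terms."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

def eta_partial(s, terms=100000):
    """Truncated alternating series 1 - 1/2**s + 1/3**s - ..."""
    return sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))

s = 2.0
lhs = (1 - 2 / 2 ** s) * zeta_partial(s)  # (1 - 2/2^s) * zeta(s)
rhs = eta_partial(s)
# Both sides approach pi^2 / 12 = 0.8224670...
```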

Okay, obviously, the series in question doesn’t REALLY add up to -1/12. That is an impossibility in terms of how we ordinarily define algebraic sums. The point here is that we are using a non-standard definition of sums involving analytic continuation on the complex plane. We aren’t just talking about the real number line. The “proofs” in these videos aren’t really rigorous and break a few rules. (Specifically, if you have a series that does not converge, you can’t go adding and subtracting other series to it willy-nilly. Cauchy taught us this.)

Since this distinction between different types of summation isn’t really made clear in the original video, this has caused a bit of a ruckus online. After I had already started writing this blog article, Phil Plait from the Bad Astronomy blog posted an article about the video, and a firestorm ensued in the comments and on Twitter. Mark Chu-Carroll over at the “Good Math, Bad Math” blog picked up on the topic, and Phil Plait subsequently posted a follow-up to clarify the situation.

So, what’s all of this about different *types* of sums?

Back in the middle of the 18th century, Leonhard Euler came up with a proof that the summation we are discussing does indeed equal -1/12. He also came up with the result that 1 – 2 + 3 – 4 + … = 1/4. Both of these results, although obtained rigorously as far as he could tell, seemed paradoxical to him. What was needed was a new way to deal with divergent sums. Towards the end of the 19th century, several mathematicians started coming up with those techniques, some of which were based upon Euler’s work.

The standard algebraic infinite sum is typically defined by taking the n^{th} partial sum of the series in the limit as n goes to infinity. By and large, these alternative summation techniques involve differing definitions for the limits that are taken when evaluating an infinite sum, as well as modifying the domain over which the sum is taken (such as switching to the complex plane). These summation techniques frequently focus on the properties of partial sums since they address scenarios where the complete sums are not well-defined. By using these techniques, we can assign meaningful values to series which do not converge to a finite value under standard algebraic summation.

For example, suppose we have an infinite series whose n^{th} partial sum oscillates symmetrically as n goes to infinity, never converging on a specific value. An example of this would be the Grandi’s Series we mentioned earlier, whose partial sums alternate between 1 and 0. However, instead of using ordinary algebraic summation, we can define a summation in which we take the first n partial sums of the series, add them up, and divide by n, doing this for all values of n. If this average converges to a value in the limit as n goes to infinity, we say that the series is Cesàro summable. We’ve essentially taken the average of all partial sums in the series.
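
Here is a minimal sketch of that procedure, with Grandi’s Series as input (the function name is my own):

```python
def cesaro_mean(terms):
    """Average of the partial sums of the given sequence of terms."""
    partial, partial_sums = 0, []
    for t in terms:
        partial += t
        partial_sums.append(partial)
    return sum(partial_sums) / len(partial_sums)

grandi = [(-1) ** k for k in range(10001)]  # 1, -1, 1, -1, ...
value = cesaro_mean(grandi)  # tends toward 1/2 as more terms are used
```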

The Grandi’s Series can also be evaluated in terms of something called an Abel sum. Similarly, 1 – 2 + 3 – 4 + … can be evaluated using Abel summation. And for our sum of all natural numbers, it turns out that there are two summation techniques which apply: zeta function regularization (which is a formalization of Euler’s approach) and Ramanujan summation.

But what does it mean when we say that we are finding a value for something that doesn’t have a value? Let us take a look at a generic geometric series:

S = a + ar + ar^{2} + ar^{3} + …

Let’s multiply the whole thing by a factor r, and then subtract the original series from that:

rS – S = (ar + ar^{2} + ar^{3} + …) – (a + ar + ar^{2} + …) = -a

S = a/(1 – r)

For the original series to be convergent, the magnitude of r has to be less than one. In that scenario, evaluating this formula gives us a value equal to the limit of the n^{th} partial sum of the series as n goes to infinity. For other values of r, where the series is divergent, we can still use this formula to get a value. This value is not the limit of the n^{th} partial sum as n goes to infinity, since that limit does not exist. But we still have a value using the same formula. (For the special case of r = -1, we have an oscillating value, as with our Grandi’s Series.)
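
A short sketch (the names are my own) makes the distinction concrete: for |r| < 1 the formula a/(1 – r) matches the partial sums, while for r = 2 it still hands back a value (-1) that the partial sums never approach:

```python
def geometric_partial(a, r, n):
    """Sum of the first n terms: a + a*r + ... + a*r**(n-1)."""
    return sum(a * r ** k for k in range(n))

def geometric_formula(a, r):
    """The closed form a / (1 - r); a limit of partial sums only when |r| < 1."""
    return a / (1 - r)

convergent = geometric_formula(1, 0.5)  # 2.0, matched by the partial sums
divergent = geometric_formula(1, 2)     # -1.0, though the partial sums blow up
```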

As for what all of this has to do with physics, that would be a rather lengthy discussion, but have a look at a series of posts by mathematical physicist John Baez (who happens to be the cousin of singer Joan Baez) listed below in the “For More Information” section. He covers the topic quite nicely. This sum also shows up in other areas of physics, including QED calculations of the Casimir Effect.

- The Euler-Maclaurin formula, Bernoulli numbers, the zeta function, and real-variable analytic continuation | What’s new
- Mathematical Physicist John Baez has a series of articles which hit upon this topic:
- The Reference Frame: Zeta-function regularization
- The Reference Frame: Why is the sum of integers equal to -1/12
- math.arizona.edu/~cais/Papers/Expos/div.pdf
- 1 + 2 + 3 + 4 + ⋯ – Wikipedia, the free encyclopedia
- 1 − 2 + 3 − 4 + · · · – Wikipedia, the free encyclopedia
- Grandi’s series – Wikipedia, the free encyclopedia
- Summation of Grandi’s series – Wikipedia, the free encyclopedia
- Zeta function regularization – Wikipedia, the free encyclopedia
- Ramanujan summation – Wikipedia, the free encyclopedia
- Cesàro summation – Wikipedia, the free encyclopedia
- Divergent series – Wikipedia, the free encyclopedia
- Divergent geometric series – Wikipedia, the free encyclopedia
- Analytic continuation – Wikipedia, the free encyclopedia


(Warning: Much of this was written in a cloud of decongestants. If there are any errors, well, you know why….)

Infinity can be a slippery concept, and it causes no end of woes to mathematicians. But, over the years, they have gotten a better and better handle on the concept. This was helped greatly by the work of Georg Cantor, who developed the basic mathematical tools used today for grappling with infinities. But he was by no means the first, nor the last. The legendary 18th century mathematician Leonhard Euler made great strides in devising methods for dealing with divergent infinite series, and the definitive work on that subject is the 1949 book by G. H. Hardy, *Divergent Series*.

Let’s get a common misconception out of the way: Something divided by zero most certainly does NOT equal infinity. Division by zero is undefined, as the concept is absolutely meaningless by any mathematical definition of division. What is the case is that, in the limit as the divisor of an expression approaches zero, the value of the expression goes to either positive or negative infinity. (Depending upon the function, it can be either positive or negative depending upon which direction the limit is taken from.) A function exhibiting such behavior is said to have a singularity at that point.
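
A trivial numerical illustration of that limiting behavior, sampling 1/x on either side of zero: the expression has no value at zero, but its magnitude grows without bound as the divisor shrinks.

```python
# Approaching zero from the right: 1/x grows toward +infinity.
from_right = [1.0 / x for x in (0.1, 0.01, 0.001)]

# Approaching zero from the left: 1/x plunges toward -infinity.
from_left = [1.0 / x for x in (-0.1, -0.01, -0.001)]
```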

Knowing how to deal with infinities and divergences can be crucial. The need to tame divergences in the blackbody radiation problem led Planck to take the first step in creating quantum mechanics (although, to be fair, Planck basically reformulated the problem in a form that didn’t result in infinities). Quantum electrodynamics calculations required the creation of the “dippy procedure” of renormalization. And, on the bleeding edge of theoretical physics, the challenge of reconciling quantum theory and general relativity is fraught with seemingly intractable divergences. But such heady problems aren’t the only places where divergences crop up. They can arise in the most seemingly simple math problems.

By way of introduction, I would like you to take a moment and watch this video. Go ahead. I’ll wait.

Did you watch it? Did your brain threaten to have an aneurysm partway through it? Yeah, mine did as well.

I can hear your protests already. “That can’t be right! Infinity doesn’t equal -1. That same series can’t be equal to two different values. For crying out loud, there aren’t even any negative values in that sum! I call shenanigans!”

As well you might be tempted to do. Math has a reputation for being among the most rigorous of disciplines. No hand-waving allowed. The basic explanation is that there is more going on under the hood than is being described. Working with infinite series is a tricky business, and there is a lot of bookkeeping to take into account.

But, before I dive into explaining what is going on, here is a recap for those of you who didn’t bother to watch the video (for whatever bizarre reason).

Consider the following infinite series:

S = 1 + 2 + 4 + 8 + 16 + …

Now, it is pretty obvious that this series diverges to infinity. But we know that we can safely multiply anything by 1 and get what we started with. We also know that 1 can be expressed as (2-1), so let’s multiply our series by (2-1):

(2 – 1)S = (2 + 4 + 8 + 16 + …) – (1 + 2 + 4 + 8 + …)

Cancel out the offsetting terms, and we are left with

S = -1

Scary, n’est-ce pas?

To be honest, there is a little hanky-panky in the procedure described above. The cancellation of terms is not entirely rigorous. Every time a positive term and a corresponding negative term are cancelled out, there is still another larger positive term offsetting the initial -1. But then there is another negative term to offset that, and so on. Programmers will recognize this as a race condition. Dealing with infinite series is a tricky business.

But there is a legitimate mathematical procedure roughly equivalent to the above, and which yields the same result. The idea is to take a step back, and consider a more generic expression which reduces to the same problem. More specifically, I’m going to start with a function, and show that the infinite series we are describing is a legitimate power series expansion of that function. Yep, we are working the problem from back to front!

Consider the following expression (shown in the graph to the right, after a variable substitution of y = 2x):

f(y) = 1/(1 – y)

The Maclaurin Series expansion for this is:

1/(1 – y) = 1 + y + y^{2} + y^{3} + y^{4} + …

Now let us substitute y = 2x:

1/(1 – 2x) = 1 + 2x + 4x^{2} + 8x^{3} + 16x^{4} + …

Well, fine and dandy. For x = 1, the expansion reduces back to our original series, and the equivalence shown does in fact work out to -1. But there is a problem. That series still diverges. In fact, it will diverge for any value of x whose magnitude is greater than or equal to 1/2. (In other words, the radius of convergence for this series is 1/2.) So how can this equivalence be correct?
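
A quick numerical sketch of that divergence boundary (the names are my own): inside the radius of convergence the partial sums settle onto 1/(1 – 2x), while at a point beyond it they run away.

```python
def power_series(x, terms=200):
    """Partial sum of the expansion 1 + 2x + 4x^2 + ... = sum((2x)**n)."""
    return sum((2 * x) ** n for n in range(terms))

inside = power_series(0.25)             # converges toward 1/(1 - 0.5) = 2.0
closed = 1 / (1 - 2 * 0.25)             # the closed form, exactly 2.0
outside = power_series(0.75, terms=50)  # partial sums grow without bound
```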

Earlier, we started with a Maclaurin Series expansion for our function, which is simply a Taylor Series expansion around the origin. Now we can take a Taylor Series expansion around any point within the radius of convergence of our original expansion to try to extend the region for which the expansion is valid, but we keep coming up against that discontinuity at x=1/2. Imagine being a tightrope artist, where the real number line is the tightrope, and there is a pole at x=1/2 which, ideally, we would like to get past. Well, the simple solution is to lower the tightrope to the ground, such that we can simply walk *around* the pole! In this case, the ground is the complex plane.

If we extend the domain of f(x) = 1/(1 – 2x) from the real numbers to the complex plane, the function is no longer divergent except at x = 1/2. We can take a Taylor Series expansion around a point off to the side of the real number line, but still within the radius of convergence of our original Maclaurin Series expansion. (With the domain extended to the complex plane, the region of convergence is now a disk on the complex plane.) Then we can take another Taylor Series expansion within the radius of convergence of THAT expansion, and so forth, building overlapping disks of convergence, until we have covered the entire complex plane (except for the pole at x = 1/2), thus patching together a Riemann surface over which the expansion is valid. With the domain thus extended, 1 + 2 + 4 + 8 + … = -1.

This approach of changing the domain of a function to sidestep divergences and other difficulties is known as analytic continuation.

Consider for a moment, the Riemann hypothesis, the proof of which remains one of the longest-standing goals in mathematics. The hypothesis states that the non-trivial roots of the Riemann zeta function lie along a critical line such that the real portion of each root has a value of 1/2. But the series defining the Riemann zeta function is only convergent for values of s with real part greater than 1. Due to this, the zeta function must be analytically continued in order for the hypothesis to be applicable. Not only are the consequences of the Riemann hypothesis of immense importance to the field of number theory, but it turns out that the distribution of roots along the aforementioned critical line bears an uncanny similarity to the distribution of energy levels in atomic nuclei.

- 1 + 2 + 4 + 8 + … – Wikipedia, the free encyclopedia
- Analytic Continuation
- Analytic Continuation — from Wolfram MathWorld
- www.nhn.ou.edu/~milton/p5013/chap6.pdf
- math.uci.edu/~mfried/booklist-ret/chpanal.pdf
- Analytic Continuation – CCRMA – Stanford University
- PlanetMath: analytic continuation
- Analytic Continuation of the Riemann Zeta Function – ProofWiki
- Strange Sum


Lately, as a form of review, I’ve been taking a quantum mechanics course on Coursera. (It was, in fact, that course which prompted me to post a derivation of the Schrödinger equation a few weeks ago.) A couple of the lectures were devoted to a brief introduction to Feynman’s path-integral formulation of quantum mechanics, something typically not brought up in courses at that level, which was a refreshing change of pace. A key component of deriving Feynman’s approach is Laplace’s method, a mathematical technique that I’ve probably not thought about since taking Mathematical Methods for Physicists way back in the Dark Ages when I rode a dinosaur to grad school. (Now, where the heck is my copy of Arfken?) A review was definitely in order.

“So, what the heck is Laplace’s method?”

Briefly stated, suppose that you are presented with evaluating an integral of the following form:

∫_{a}^{b} e^{M f(x)} dx

We are assuming here that f(x) is a twice-differentiable function (an important requirement for the method being discussed here), M is a very large number (the larger the better for the accuracy of this method), and the integration range can be infinite.

So, how can we analytically evaluate this integral? Well, that depends upon what f(x) is. For some functions, sure, we can evaluate this without problem, but here we are discussing a more general case that is independent of the form of f(x). Unfortunately, there is no straightforward way to analytically evaluate this integral for any old form of f(x). What to do?

This is where Laplace’s method comes into play. Laplace’s method is a technique for constructing an approximation of the integral being evaluated. This is done by finding the global maximum of f(x), which is of course done by setting its first derivative equal to zero and finding the corresponding value for x, which we shall call x_{0}. (We also double-check that this is a maximum rather than a minimum or an inflection point by checking that the second derivative is less than zero.) Then, we take the Taylor series expansion of f(x) around x_{0} up to quadratic order:

f(x) ≈ f(x_{0}) + f'(x_{0})(x – x_{0}) + (1/2)f''(x_{0})(x – x_{0})^{2}

Of course, we’ve already established that f'(x_{0}) is zero (that is, x_{0} is a stationary point), so we can drop the second term. Remembering that the second derivative is negative at the stationary point, our Taylor series approximation of f(x) then becomes:

f(x) ≈ f(x_{0}) – (1/2)|f''(x_{0})|(x – x_{0})^{2}

Now when we substitute this approximation of f(x) back into our original integral, something very handy takes place:

∫_{a}^{b} e^{M f(x)} dx ≈ e^{M f(x_{0})} ∫_{a}^{b} e^{-(M/2)|f''(x_{0})|(x – x_{0})^{2}} dx

Well, hey nonny nonny, our integral is now just a Gaussian integral, and we can evaluate that! In fact, the bigger the value of M, the more closely our integral aligns with a Gaussian integral.

“But, what the heck,” I hear you cry, “is a Gaussian integral?”

Well, that is a whole discussion for another day; but, in brief, the Gaussian integral (named for the legendary mathematician Carl Friedrich Gauss, and which is also quite useful for calculating propagators in the path-integral formulation of QM) is as follows:

∫_{-∞}^{∞} e^{-x^{2}} dx = √π

Or, for a more generalized form:

∫_{-∞}^{∞} e^{-ax^{2}} dx = √(π/a)

“But wait a minute,” you should now object. “Our definite integral is over the range from a to b, and those bounds may or may not be infinite. You said so earlier!”

An appropriate observation. You’ve been paying attention, and may now move to the head of the class. However, keep in mind that the exponential decays quite rapidly away from our stationary point x_{0}, particularly for large values of M. Portions of the integral far away from the stationary point do not make significant contributions, so we can accept this shift in integration limits for the sake of our approximation. The integration limits just don’t matter so much in this case.

So, evaluating the Gaussian integral, our approximation becomes as follows:

∫_{a}^{b} e^{M f(x)} dx ≈ e^{M f(x_{0})} √(2π/(M|f''(x_{0})|))

And that, my friends, is all there is to it. Except for the other way of doing it.

Well, it is really the same way, but “flipping the saddle,” so to speak. We can replace the large number M with -1/ε, where ε is a tiny number. In that formulation, instead of expanding about a maximum, we expand about a minimum (due to the negative sign in the substitution we just made), so we are looking for a positive second derivative. The other steps are the same, and the final form of the approximation becomes the following:

∫_{a}^{b} e^{-f(x)/ε} dx ≈ e^{-f(x_{0})/ε} √(2πε/f''(x_{0}))

So, there you have it: a way to integrate the un-integratable!
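
To see the method in action, here is a sketch (entirely my own construction, not from the lectures) applying it to f(x) = sin(x) on [0, π]: the maximum sits at x_{0} = π/2 with f(x_{0}) = 1 and f''(x_{0}) = -1, so the approximation predicts e^{M} √(2π/M), and a brute-force numerical integral agrees to within a fraction of a percent for M = 50.

```python
import math

def numeric_integral(M, steps=200000):
    """Midpoint-rule estimate of the integral of exp(M*sin(x)) over [0, pi]."""
    h = math.pi / steps
    return h * sum(math.exp(M * math.sin((i + 0.5) * h)) for i in range(steps))

def laplace_approx(M):
    """Laplace's method for f(x) = sin(x): f(x0) = 1, |f''(x0)| = 1 at x0 = pi/2."""
    return math.exp(M) * math.sqrt(2 * math.pi / M)

M = 50
ratio = numeric_integral(M) / laplace_approx(M)  # tends toward 1 as M grows
```

The leading correction to the approximation is of order 1/M, which is why the agreement tightens as M increases.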

]]>

“You’ve broken maths, Brady. Stop that!”

Did you follow that? Let’s recap.

Recall that the factorial of a number “n” (denoted as n!) is the product of that number with all of the positive integers smaller than that number. In other words, 5! = 5 x 4 x 3 x 2 x 1, 3! = 3 x 2 x 1, and so forth. In more formal mathematical notation, this is represented by

n! = n x (n-1) x (n-2) x … x 2 x 1.

An alternative definition (which will come in handy a little later) is this:

n! = n x (n-1)!, with 0! = 1.

But 0! = 1? Really? After all, 0 times… Er, wait a minute. There are no positive integers less than zero. That seems to make the first definition fall apart. As for the second definition, that seems a little *ad hoc*.

In the video, Dr. Grime takes two approaches to explaining this. The first one comes across as a bit of hand-waving, and basically involves a variant of the second definition above for how factorials are calculated. He starts off with the example of the written-out form of 5!:

5! = 5 x 4 x 3 x 2 x 1 = 120

He then points out the following relationship between 5! and 4!:

5! = 5 x 4! (and likewise 4! = 4 x 3!, 3! = 3 x 2!, and 2! = 2 x 1!)

So far, so good. Continuing this pattern:

1! = 1 x 0!, which means 0! = 1!/1 = 1

Arooh? If you think about it for a bit, it becomes apparent that this relationship:

(n-1)! = n!/n

is really just a subtle re-arrangement of the second definition we gave above (although this is never explicitly mentioned in the video). Looks like we are good on that count.
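
That re-arranged relationship translates directly into code. Here is a sketch of the factorial as the recurrence n! = n x (n-1)!, with 0! = 1 as the base case:

```python
def factorial(n):
    """n! via the recurrence n! = n * (n-1)!, with 0! = 1 as the base case."""
    if n == 0:
        return 1  # the empty product
    return n * factorial(n - 1)

values = [factorial(n) for n in range(6)]  # [1, 1, 2, 6, 24, 120]
```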

(The video goes on to demonstrate that this pattern breaks for taking the factorial of -1. The result is division by zero, which is undefined.)

But what about the first definition? Well, that is actually covered (with a bit of hand-waving) in the video. Dr. Grime points out that one of the key usages of the factorial is to calculate the number of ways that a given collection of things (whether they are numbers, coins, dice, colors…whatever) can be re-arranged. In other words, the factorial gives the number of permutations of the ordering of a set. Three objects can be arranged in six ways, hence 3! = 6. One object can be arranged one way, so 1! = 1. But zero objects? Well, now we are talking about how many ways we can arrange a collection of nothing. Basically, there is only one arrangement of a collection of nothing.

In other words, we aren’t taking the factorial of the number zero. We are calculating the permutations of the null set. Which is one. Mathematically, we are taking advantage of a convention used by mathematicians of regarding the product of no numbers at all as always being equal to 1. This is also referred to as the empty product.
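
Python’s standard library agrees with this counting argument: itertools.permutations finds exactly six arrangements of three objects and exactly one arrangement of none.

```python
from itertools import permutations

arrangements_of_three = len(list(permutations([1, 2, 3])))  # 3! = 6
arrangements_of_none = len(list(permutations([])))          # one: the empty arrangement
```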

The video could have ended there, but then something happened that I didn’t really see coming in a discussion targeting a general audience. Dr. Grime introduced the Gamma function, something which crops up pretty frequently in higher math. Essentially, the Gamma function generalizes the concept of the factorial to non-integers, with the argument shifted by one such that

Γ(n) = (n-1)!

In fact, the Gamma function extends the concept of factorials even to negative numbers, except for the negative integers, and, by analytic continuation, across the complex plane. The Gamma function is defined as follows:

Γ(z) = ∫_{0}^{∞} t^{z-1} e^{-t} dt

This function is undefined at the origin and the negative integers, where it has poles.
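
The Gamma function is common enough that Python ships it in the standard library, which makes the shifted-factorial property easy to check (the non-integer example is my own):

```python
import math

gamma_five = math.gamma(5)    # Gamma(5) = 4! = 24
gamma_half = math.gamma(0.5)  # Gamma(1/2) = sqrt(pi), a value no integer factorial gives
```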

I won’t delve at this time into the derivation or uses of this function (although I should at some point); but, for the moment, suffice it to say that it was the creation of Leonhard Euler, the Greatest Mathematician Who Ever Lived™. (Seriously, the only other folks who even come close are Carl Friedrich Gauss, David Hilbert, Bernhard Riemann, and, of course, Euclid.)

- Factorial – Wikipedia, the free encyclopedia
- Factorial — from Wolfram MathWorld
- Gamma function – Wikipedia, the free encyclopedia
- Gamma Function — from Wolfram MathWorld


First up, on April 17th (my birthday, no less), the journal *Annals of Mathematics* received a submission from Yitang Zhang of the University of New Hampshire purporting to at least partially prove the twin prime conjecture. The paper (available for preview here if you have access) will appear in an upcoming issue, but it is already creating a stir in the mathematics community. Building upon earlier work by Goldston, Pintz, and Yıldırım (GPY), Zhang’s paper demonstrates that there is some even integer N less than 70 million for which there are infinitely many pairs of primes that differ by N.
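
Zhang’s bound concerns gaps of at most 70 million; the twin prime conjecture itself is the N = 2 case. As a toy illustration (entirely my own), here is a sieve that counts the twin-prime pairs below 100:

```python
def primes_below(limit):
    """Simple sieve of Eratosthenes returning all primes below limit."""
    sieve = [True] * limit
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

ps = primes_below(100)
twin_pairs = [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]
# (3, 5), (5, 7), (11, 13), ..., (71, 73) -- eight pairs below 100
```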

More at these links:

- Yitang Zhang Proves ‘Landmark’ Theorem in Distribution of Prime Numbers | Simons Foundation
- Bounded Gaps Between Primes | The n-Category Café
- Twin Prime Conjecture — from Wolfram MathWorld
- K. Soundararajan, Small gaps between prime numbers: the work of Goldston-Pintz-Yıldırım
- Goldbach Variations | Roots of Unity, Scientific American Blog Network

But wait, there’s more. H. A. Helfgott claims to have proven the ternary Goldbach conjecture.

In its most basic form, Goldbach’s conjecture states that every even integer greater than 2 can be expressed as the sum of two primes. The ternary Goldbach conjecture, also known as the weak or odd Goldbach conjecture, states that every odd number greater than 5 can be expressed as the sum of three primes (with repeats allowed). For example, 7 can be written as 2+2+3.
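
The conjecture is easy to spot-check by brute force for small numbers. This sketch (the function names are my own) verifies that every odd number from 7 up to 500 splits into three primes, with repeats allowed:

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes returning all primes up to limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def three_prime_sum(n, primes, prime_set):
    """Return one triple of primes summing to n (repeats allowed), or None."""
    for a in primes:
        if a > n:
            break
        for b in primes:
            if a + b >= n:
                break
            if (n - a - b) in prime_set:
                return (a, b, n - a - b)
    return None

primes = primes_up_to(500)
prime_set = set(primes)
checked = all(three_prime_sum(n, primes, prime_set) for n in range(7, 500, 2))
```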

More at these links:

- Goldbach Conjecture — from Wolfram MathWorld
- The Prime Glossary: Goldbach’s conjecture
- Goldbach Variations | Roots of Unity, Scientific American Blog Network
- Cracking Goldbach’s Conjecture
- On equivalent forms of the weak Goldbach conjecture | The Aperiodical
