Update (2/20/2014): And this story has now even hit the New York Times. Also, plus.maths.org has a wonderful overview, including details on how this series ties into the Casimir effect, as well as providing a link to another article about Ramanujan.

Update (2/7/2014): The folks at Physics Central have pointed out a way of relating -1/12 to the sum of all natural numbers that I had not seen before. Also, Johannes Koelman revisits the sum of all powers of two.

Update (1/26/2014): Dr. Tony Padilla, one of the folks behind the Numberphile videos, has posted a response to all of the controversy. Plus, an interesting blog entry at Physics Central talking about the applications of this to physics (beyond the prominent mention of string theory).

Update (1/21/2014): Mark Chu-Carroll has posted a follow-up, and Evelyn Lamb over at “Roots of Unity” has chimed in as well.

Previously, I had written about the somewhat bizarre behavior of a divergent geometric infinite series, containing no negative numbers, but which appeared to add up to -1:

$\displaystyle \sum\limits_{n=0}^\infty {2^n} = 1 + 2 + 4 + 8 + \ldots = -1$

I discussed how this divergent infinite series actually represents the Taylor Series expansion of the analytic continuation on the complex plane of a function that does actually come out to equal -1 at a specific relevant point. (Both Taylor Series and analytic continuation are topics which warrant their own discussions; but, for now, we’ll hold off on that until another day.)

That article had also contained a video from Minute Physics which discussed this bizarre series, although without delving into details about why the series exhibited such bizarre behavior. Well, the fine folks from Numberphile have posted yet another video about a similarly bizarre series:

“You have to go to infinity, Brady.”

Now, it just so happens that this dovetails nicely into something I’ve been leading up to in my posts: the Riemann zeta function and the Riemann Hypothesis. I have a bit more groundwork to cover before diving into them; but, as it turns out, the Riemann zeta function comes into play in a second proof of the sum described in that video. That proof is covered in a follow-up video:

So here’s the bizarre series, the sum of all of the natural numbers:

$\displaystyle \sum\limits_{n=1}^\infty {n} = 1 + 2 + 3 + 4 + \ldots = -\frac{1}{12}$

Weird, huh? Obviously, the series doesn’t actually add up to this negative sum. It is a divergent series that blows up to infinity. A bit more on that in a moment. But first, here is the proof presented in the first Numberphile video.

First, let us take the following series, known as Grandi’s Series, an infinite series alternating between 1 and -1:

$\displaystyle S_1 = 1 - 1 + 1 - 1 + 1 - 1 + \ldots$

This series is a bit challenging to evaluate. Looking at the partial sums of an odd number of terms in this series will always yield a value of 1, whereas the partial sum of an even number of terms will always give 0. Now, it can be shown (and is shown in yet another Numberphile video), that the appropriate sum for this series is the average of the two values, 1/2.
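To make this concrete, here’s a quick numerical sketch (in Python, my own illustration, not from the video) of how the partial sums behave:

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ...
# An odd number of terms always sums to 1, an even number to 0.
from itertools import accumulate

terms = [(-1) ** n for n in range(8)]   # 1, -1, 1, -1, 1, -1, 1, -1
partials = list(accumulate(terms))
print(partials)                         # [1, 0, 1, 0, 1, 0, 1, 0]
```

The average of the two values the partial sums bounce between is the 1/2 just mentioned.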

Let us also consider the following series:

$\displaystyle S_2 = 1 -2 + 3 -4 + \ldots$

And our sum of interest is defined as:

$\displaystyle S = 1 + 2 + 3 + 4 + \ldots$

First off, let us add S2 to itself, shifting the second copy over by one term before adding:

$\displaystyle \begin{array}{lcl}2S_2 &=& 1 - 2 + 3 - 4 + 5 - 6 + \ldots \\ & & \phantom{1} + 1 - 2 + 3 - 4 + 5 - \ldots \\ &=& 1 - 1 + 1 - 1 + 1 - 1 + \ldots \\ &=& S_1 \\ &=& \frac{1}{2}\end{array}$

Well, now we are getting somewhere. This yields
$\displaystyle S_2 = \frac{1}{4}$

Now, let us subtract S2 from S:
$\displaystyle \begin{array}{lcr} S - S_2 &=& 1 + 2 + 3 + 4 + 5 + \ldots \\ & & -\left( 1 - 2 + 3 - 4 + \ldots \right)\\ &=& 0 + 4 + 0 + 8 + 0 + 12 + \ldots \\ &=& 4\left( 1 + 2 + 3 + 4 + 5 + \ldots \right) \\ &=& 4S \end{array}$

Now, we’ve already figured out what S2 is, so we substitute that in and simplify:
$\displaystyle \begin{array}{lcr} S - \frac{1}{4} &=& 4S\\ -\frac{1}{4} &=& 3S\\ S &=& -\frac{1}{12} \end{array}$

So, there you have it.

# Wait. What?

Okay, how about a different, more rigorous proof? Let’s take a look at the proof from the second Numberphile video, a proof first discovered by Leonhard Euler. First of all, let us consider the following series:

$\displaystyle 1 + x + x^2 + x^3 + \ldots = \frac{1}{1-x} \text{, for }|x|<1$

It can readily be shown (but I’ll leave it as an exercise for the reader for now) that the above holds strictly for values of x with absolute value less than one. Now let us differentiate with respect to x:

$\displaystyle \begin{array}{lcr}\frac{\mathrm{d}}{\mathrm{d}x}(1 + x + x^2 + x^3 + \ldots ) &=& \frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{1}{1-x}\right) \\ 1 + 2x + 3x^2 + 4 x^3 + \ldots &=& \frac{1}{(1-x)^2} \\ \end{array}$

Now, let us set x = -1 (a value outside the interval of convergence; we will come back to that point). What does this give us?

$\displaystyle 1 - 2 + 3 - 4 + 5 - \ldots = \frac{1}{4}$
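If you’d like to check this numerically, here’s a small Python sketch (my own, not from the video): within the interval of convergence the partial sums match the closed form, and plugging x = -1 into the closed form yields the 1/4 above.

```python
# Compare partial sums of 1 + 2x + 3x^2 + ... with the closed form 1/(1-x)^2.
def series(x, n_terms=1000):
    return sum((k + 1) * x ** k for k in range(n_terms))

for x in (0.5, -0.5):
    print(x, series(x), 1 / (1 - x) ** 2)   # the two values agree for |x| < 1

# The closed form still makes sense at x = -1, where the series itself diverges:
print(1 / (1 - (-1)) ** 2)                  # 0.25
```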

Now we bring the big guns to bear. Here is the Riemann zeta function:

$\displaystyle \zeta(s) = \sum\limits_{n=1}^\infty \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \ldots$

Now, when Euler first worked with this function, he only studied it in the context of s being a real number. However, Riemann extended the analysis of the function into the complex plane. We’ll focus for the time being on Euler’s view of the function, but keep in the back of your head that a more rigorous version of this requires considering the complex plane.

Now, let us do a bit of manipulation of the zeta function:

$\displaystyle 2^{-s}\zeta(s) = 2^{-s} + 4^{-s} + 6^{-s} + 8^{-s} + \ldots$

Next, Euler subtracted twice this expression from the original zeta function:

$\displaystyle \begin{array}{rcl} (1-2\cdot 2^{-s})\zeta(s) &=& 1 + 2^{-s} + 3^{-s} + 4^{-s} + \ldots \\ & & -\ 2\left(2^{-s} + 4^{-s} + 6^{-s} + \ldots \right) \\ &=& 1 - 2^{-s} + 3^{-s} - 4^{-s} + \ldots \end{array}$

Now we set s=-1 and plug it in:
$\displaystyle -3(1+2+3+4+\ldots)=1-2+3-4+5-\ldots$

But we’ve already figured out above that this series is equal to 1/4, so:

$\displaystyle 1+2+3+4+\ldots = -\frac{1}{12}$

Now, keep in mind that Euler was looking at this function on the real number line. The series in question is simply the special case of the zeta function at s=-1. However, there is a catch. In the realm of real numbers, the zeta function only converges for values of s larger than 1. What Riemann realized is that, by switching to the complex plane and performing the analytic continuation of the function, the zeta function could be extended to all values of s, except for a pole at s=1. In that context, the value of the zeta function at s=-1 is in fact -1/12.
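As a sanity check on Euler’s manipulation, we can verify the identity numerically at a value of s where both sides actually converge, say s = 2. (A quick Python sketch of my own; both sides should come out near π²/12 ≈ 0.8225.)

```python
# Check (1 - 2*2**-s) * zeta(s) against the alternating series
# 1 - 2**-s + 3**-s - ... at s = 2, where both series converge.
s = 2
N = 100_000
zeta_partial = sum(n ** -s for n in range(1, N))
eta_partial = sum((-1) ** (n + 1) * n ** -s for n in range(1, N))

lhs = (1 - 2 * 2 ** -s) * zeta_partial
print(lhs, eta_partial)   # both approximately 0.82247 (pi**2 / 12)
```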

# No, Seriously, What’s Really Happening Here?

Okay, obviously, the series in question doesn’t REALLY add up to -1/12. That is an impossibility in terms of how we ordinarily define algebraic sums. The point here is that we are using a non-standard definition of sums involving analytic continuation on the complex plane.  We aren’t just talking about the real number line.  The “proofs” in these videos aren’t really rigorous and break a few rules. (Specifically, if you have a series that does not converge, you can’t go adding and subtracting other series to it willy-nilly. Cauchy taught us this.)

Since this distinction between different types of summation isn’t really made clear in the original video, this has caused a bit of a ruckus online. After I had already started writing this blog article, Phil Plait from the Bad Astronomy blog posted an article about the video, and a firestorm ensued in the comments and on Twitter. Mark Chu-Carroll over at the “Good Math, Bad Math” blog picked up on the topic, and Phil Plait subsequently posted a follow-up to clarify the situation.

So, what’s all of this about different types of sums?

Back in the middle of the 18th century, Leonhard Euler came up with a proof that the summation we are discussing does indeed equal -1/12. He also came up with the result that 1 - 2 + 3 - 4 + … = 1/4. Both of these results, although obtained rigorously as far as he could tell, seemed paradoxical to him. What was needed was a new way to deal with divergent sums. Towards the end of the 19th century, several mathematicians started coming up with those techniques, some of which were based upon Euler’s work.

The standard algebraic infinite sum is typically defined by taking the nth partial sum of the series in the limit as n goes to infinity. By and large, these alternative summation techniques involve differing definitions for the limits that are taken when evaluating an infinite sum, as well as modifying the domain over which the sum is taken (such as switching to the complex plane). These summation techniques frequently focus on the properties of partial sums since they address scenarios where the complete sums are not well-defined. By using these techniques, we can assign meaningful values to series which do not converge to a finite value under standard algebraic summation.

For example, suppose we have an infinite series whose nth partial sum oscillates as n goes to infinity, never converging on a specific value. An example of this would be the Grandi’s Series we mentioned earlier, whose partial sums alternate between 1 and 0. However, instead of using ordinary algebraic summation, we can take the average of the first n partial sums, doing this for all values of n. If that average converges to a value in the limit as n goes to infinity, we say that the series is Cesàro summable. We’ve essentially taken the average of all partial sums of the series.
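Here’s a short Python sketch of Cesàro summation applied to Grandi’s Series (my own illustration): the running average of the partial sums settles down to 1/2.

```python
# Cesàro summation: average the first n partial sums and let n grow.
def cesaro_mean(terms):
    partial = 0.0
    running_total = 0.0     # sum of the partial sums seen so far
    for n, t in enumerate(terms, start=1):
        partial += t
        running_total += partial
    return running_total / n

grandi = [(-1) ** k for k in range(10_000)]   # 1, -1, 1, -1, ...
print(cesaro_mean(grandi))                    # 0.5
```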

The Grandi’s Series can also be evaluated in terms of something called an Abel sum. Similarly, 1 - 2 + 3 - 4 + … can be evaluated using Abel summation. And for our sum of all natural numbers, it turns out that there are two summation techniques which apply: zeta function regularization (which is a formalization of Euler’s approach) and Ramanujan summation.
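Abel summation can also be sketched numerically (again, my own illustration): weight the nth term of 1 - 2 + 3 - 4 + … by xⁿ, which makes the series converge for 0 < x < 1, and then slide x toward 1.

```python
# Abel summation of 1 - 2 + 3 - 4 + ...: the x-weighted sums equal
# 1/(1+x)**2 for |x| < 1, which tends to 1/4 as x approaches 1.
def abel_weighted_sum(x, n_terms=200_000):
    return sum((-1) ** n * (n + 1) * x ** n for n in range(n_terms))

for x in (0.9, 0.99, 0.999):
    print(x, abel_weighted_sum(x))   # values approach 0.25
```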

But what does it mean when we say that we are finding a value for something that doesn’t have a value?  Let us take a look at a generic geometric series:

$\displaystyle S = a + ar + ar^2 + ar^3 + ar^4 + \ldots$

Let’s multiply the whole thing by a factor r, and then subtract that from the original series:

$\displaystyle rS = ar + ar^2 + ar^3 + ar^4 + \ldots$

$\displaystyle S - rS = a \quad\Rightarrow\quad S = \frac{a}{1-r}$

For the original series to be convergent, the absolute value of r has to be less than one. In that scenario, evaluating this formula gives us a value equal to the limit of the nth partial sum of the series as n goes to infinity. For other values of r, where the series is divergent, we can still use this formula to get a value. This value is not the limit of the partial sums, since that limit does not exist, but the same formula still assigns the series a value. (For the special case of r = -1, the partial sums oscillate, as with our Grandi’s Series.)
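A quick Python sketch (my own, not from the original) makes the distinction concrete: inside the radius of convergence the partial sums track the formula, while outside it only the formula yields a finite value.

```python
# Geometric series a + ar + ar^2 + ...: partial sums vs. the closed form a/(1-r).
def partial_sum(a, r, n):
    return sum(a * r ** k for k in range(n))

a = 1.0
print(partial_sum(a, 0.5, 50), a / (1 - 0.5))   # both essentially 2.0

# For r = 2 the partial sums blow up, but the formula still assigns a value,
# matching the 1 + 2 + 4 + 8 + ... = -1 result from the earlier post:
print(a / (1 - 2))                               # -1.0
```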

# So, What Does This Have To Do With String Theory?

That would be a rather lengthy discussion, but have a look at a series of posts by mathematical physicist John Baez (who happens to be the cousin of singer Joan Baez) listed below in the “For More Information” section.  He covers the topic quite nicely.  This sum also shows up in other areas of physics, including QED calculations of the Casimir Effect.

# For Further Information:

(Here’s another tidbit which originally appeared on my physics blog.)

(Warning: Much of this was written in a cloud of decongestants. If there are any errors, well, you know why….)

Infinity can be a slippery concept, and it causes no end of woes to mathematicians. But, over the years, they have gotten a better and better handle on the concept. This was helped greatly by the work of Georg Cantor, who developed the basic mathematical tools used today for grappling with infinities. But he was by no means the first, nor the last. The legendary 18th century mathematician Leonhard Euler made great strides in devising methods for dealing with divergent infinite series, and the definitive work on that subject is G. H. Hardy’s 1949 book, Divergent Series.

Let’s get a common misconception out of the way: Something divided by zero most certainly does NOT equal infinity. Division by zero is undefined, as the concept is absolutely meaningless by any mathematical definition of division. What is the case is that, in the limit as the divisor of an expression approaches zero, the value of the expression goes to either positive or negative infinity. (Depending upon the function, it can be either positive or negative depending upon which direction the limit is taken from.) A function exhibiting such behavior is said to have a discontinuity at that point.

Knowing how to deal with infinities and divergences can be crucial. The need to tame divergences in the blackbody radiation problem led Planck to take the first step in creating quantum mechanics (although, to be fair, Planck basically reformulated the problem in a form that didn’t result in infinities). Quantum electrodynamics calculations required the creation of the “dippy procedure” of renormalization. And, on the bleeding edge of theoretical physics, the challenge of reconciling quantum theory and general relativity is fraught with seemingly intractable divergences. But such heady problems aren’t the only places where divergences crop up. They can arise in the most seemingly simple math problems.

(This originally appeared over on my physics blog. Enjoy!)

Lately, as a form of review, I’ve been taking a quantum mechanics course on Coursera. (It was, in fact, that course which prompted me to post a derivation of the Schrödinger equation a few weeks ago.) A couple of the lectures were devoted to a brief introduction to Feynman’s path-integral formulation of quantum mechanics, something typically not brought up in courses at that level, which was a refreshing change of pace. A key component of deriving Feynman’s approach is Laplace’s method, a mathematical technique that I’ve probably not thought about since taking Mathematical Methods for Physicists way back in the Dark Ages when I rode a dinosaur to grad school. (Now, where the heck is my copy of Arfken?) A review was definitely in order.

If you haven’t caught the Numberphile video series over on YouTube, you don’t know what you are missing. These short videos by Dr. James Grime and Brady Haran provide brief, simple-to-grasp explanations of a variety of somewhat sophisticated mathematical topics. For example, yesterday’s new video covered some territory to which I had not really given any thought in years: the fact that 0! is equal to 1.

“You’ve broken maths, Brady. Stop that!”

# The Twin Prime Conjecture

First up, on April 17th (my birthday, no less), the journal Annals of Mathematics received a submission from Yitang Zhang of the University of New Hampshire purporting to at least partially prove the twin prime conjecture. The paper (available for preview here if you have access) will appear in an upcoming issue, but it is already creating a stir in the mathematics community. Building upon earlier work by Goldston, Pintz, and Yildirim (GPY), Zhang’s paper demonstrates that there are infinitely many pairs of primes that differ by some even integer N less than 70 million.

# The Ternary Goldbach Conjecture

But wait, there’s more. H. A. Helfgott claims to have proven the ternary Goldbach conjecture.

In its most basic form, Goldbach’s conjecture states that every even integer greater than 2 can be expressed as the sum of two primes. The ternary Goldbach conjecture, also known as the weak or odd Goldbach conjecture, states that every odd number greater than 5 can be expressed as the sum of three primes (with repeats allowed). For example, 7 can be written as 2+2+3.