Definition

Given a sequence \(\left\{a_{i}\right\}_{i=m}^{\infty},\) let \(\left\{s_{n}\right\}_{n=m}^{\infty}\) be the sequence defined by

\[s_{n}=\sum_{i=m}^{n} a_{i}.\]

We call the sequence \(\left\{s_{n}\right\}_{n=m}^{\infty}\) an infinite series. If \(\left\{s_{n}\right\}_{n=m}^{\infty}\) converges, we call

\[s=\lim_{n \rightarrow \infty} s_{n}\]

the sum of the series. For any integer \(n \geq m,\) we call \(s_{n}\) a partial sum of the series.

We will use the notation

\[\sum_{i=m}^{\infty} a_{i}\]

to denote either \(\left\{s_{n}\right\}_{n=m}^{\infty},\) the infinite series, or \(s,\) the sum of the infinite series. Of course, if \(\left\{s_{n}\right\}_{n=m}^{\infty}\) diverges, then we say \(\sum_{i=m}^{\infty} a_{i}\) diverges.
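To make the definition concrete, here is a small numeric sketch (the helper `partial_sum` is ours, not from the text): for the series \(\sum_{i=0}^{\infty}\left(\frac{1}{2}\right)^{i}\), the partial sums \(s_{n}=2-\left(\frac{1}{2}\right)^{n}\) increase toward 2, the sum of the series.

```python
# Hypothetical helper, not from the text: computes the partial sum
# s_n = a_m + a_{m+1} + ... + a_n of a series starting at index m.
def partial_sum(a, m, n):
    return sum(a(i) for i in range(m, n + 1))

# For a_i = (1/2)^i starting at m = 0, s_n = 2 - (1/2)^n,
# so the partial sums converge to 2, the sum of the series.
s_10 = partial_sum(lambda i: 0.5 ** i, 0, 10)
s_50 = partial_sum(lambda i: 0.5 ** i, 0, 50)
```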

Exercise \(\PageIndex{1}\)

Suppose \(\sum_{i=m}^{\infty} a_{i}\) converges and \(\beta \in \mathbb{R}.\) Show that \(\sum_{i=m}^{\infty} \beta a_{i}\) also converges and

\[\sum_{i=m}^{\infty} \beta a_{i}=\beta \sum_{i=m}^{\infty} a_{i}.\]

Exercise \(\PageIndex{2}\)

Suppose both \(\sum_{i=m}^{\infty} a_{i}\) and \(\sum_{i=m}^{\infty} b_{i}\) converge. Show that \(\sum_{i=m}^{\infty}\left(a_{i}+b_{i}\right)\) converges and

\[\sum_{i=m}^{\infty}\left(a_{i}+b_{i}\right)=\sum_{i=m}^{\infty} a_{i}+\sum_{i=m}^{\infty} b_{i}.\]

Exercise \(\PageIndex{3}\)

Given an infinite series \(\sum_{i=m}^{\infty} a_{i}\) and an integer \(k \geq m,\) show that \(\sum_{i=m}^{\infty} a_{i}\) converges if and only if \(\sum_{i=k}^{\infty} a_{i}\) converges.

Proposition \(\PageIndex{1}\)

Suppose \(\sum_{i=m}^{\infty} a_{i}\) converges. Then \(\lim_{n \rightarrow \infty} a_{n}=0\).

**Proof.** Let \(s_{n}=\sum_{i=m}^{n} a_{i}\) and \(s=\lim_{n \rightarrow \infty} s_{n}.\) Since \(a_{n}=s_{n}-s_{n-1}\) for \(n>m,\) we have \(\lim_{n \rightarrow \infty} a_{n}=\lim_{n \rightarrow \infty}\left(s_{n}-s_{n-1}\right)=\lim_{n \rightarrow \infty} s_{n}-\lim_{n \rightarrow \infty} s_{n-1}=s-s=0.\) \(\quad\) Q.E.D.

Exercise \(\PageIndex{4}\)

Let \(s=\sum_{n=0}^{\infty}(-1)^{n}.\) Note that

\[s=\sum_{n=0}^{\infty}(-1)^{n}=1-\sum_{n=0}^{\infty}(-1)^{n}=1-s,\]

from which it follows that \(s=\frac{1}{2}.\) Is this correct?

Exercise \(\PageIndex{5}\)

Show that for any real number \(x \neq 1\),

\[s_{n}=\sum_{i=0}^{n} x^{i}=\frac{1-x^{n+1}}{1-x}.\]

(Hint: Note that \(x^{n+1}=s_{n+1}-s_{n}=1+x s_{n}-s_{n}.\))
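The closed form in this exercise is easy to check numerically; the sketch below (helper names are ours, not part of the exercise) compares the direct partial sum against \(\frac{1-x^{n+1}}{1-x}\) for several values of \(x \neq 1\).

```python
# Compare the direct partial sum with the claimed closed form
# s_n = (1 - x^{n+1}) / (1 - x) for several x != 1.
def geometric_partial(x, n):
    return sum(x ** i for i in range(n + 1))

def closed_form(x, n):
    return (1 - x ** (n + 1)) / (1 - x)

checks = [(x, n, geometric_partial(x, n), closed_form(x, n))
          for x in (0.5, -0.3, 3.0) for n in (0, 1, 5, 10)]
```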

Theorem \(\PageIndex{2}\)

For any real number \(x\) with \(|x|<1\),

\[\sum_{n=0}^{\infty} x^{n}=\frac{1}{1-x}.\]

**Proof.** If \(s_{n}=\sum_{i=0}^{n} x^{i},\) then, by the previous exercise,

\[s_{n}=\frac{1-x^{n+1}}{1-x}.\]

Hence

\[\sum_{n=0}^{\infty} x^{n}=\lim_{n \rightarrow \infty} s_{n}=\lim_{n \rightarrow \infty} \frac{1-x^{n+1}}{1-x}=\frac{1}{1-x}.\]
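The proof turns on the fact that \(x^{n+1} \rightarrow 0\) when \(|x|<1\). A minimal numeric sketch of this (illustrative only):

```python
# For |x| < 1, s_n = (1 - x^{n+1})/(1 - x) and x^{n+1} -> 0,
# so the partial sums approach 1/(1 - x).  Checked here at x = 0.9.
def s(x, n):
    return (1 - x ** (n + 1)) / (1 - x)

err = abs(s(0.9, 500) - 1 / (1 - 0.9))
```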

## 2.2.1 Comparison Tests

The following two propositions are together referred to as the comparison test.

Proposition \(\PageIndex{3}\)

Suppose \(\sum_{i=m}^{\infty} a_{i}\) and \(\sum_{i=k}^{\infty} b_{i}\) are infinite series for which there exists an integer \(N\) such that \(0 \leq a_{i} \leq b_{i}\) whenever \(i \geq N.\) If \(\sum_{i=k}^{\infty} b_{i}\) converges, then \(\sum_{i=m}^{\infty} a_{i}\) converges.

**Proof.** By Exercise \(\PageIndex{3}\), we need only show that \(\sum_{i=N}^{\infty} a_{i}\) converges. Let \(s_{n}\) be the \(n\)th partial sum of \(\sum_{i=N}^{\infty} a_{i}\) and let \(t_{n}\) be the \(n\)th partial sum of \(\sum_{i=N}^{\infty} b_{i}.\) Now

\[s_{n+1}-s_{n}=a_{n+1} \geq 0\]

for every \(n \geq N,\) so \(\left\{s_{n}\right\}_{n=N}^{\infty}\) is a nondecreasing sequence. Moreover,

\[s_{n} \leq t_{n} \leq \sum_{i=N}^{\infty} b_{i}<+\infty\]

for every \(n \geq N.\) Thus \(\left\{s_{n}\right\}_{n=N}^{\infty}\) is a nondecreasing, bounded sequence, and so converges. \(\quad\) Q.E.D.

Proposition \(\PageIndex{4}\)

Suppose \(\sum_{i=m}^{\infty} a_{i}\) and \(\sum_{i=k}^{\infty} b_{i}\) are infinite series for which there exists an integer \(N\) such that \(0 \leq a_{i} \leq b_{i}\) whenever \(i \geq N.\) If \(\sum_{i=m}^{\infty} a_{i}\) diverges, then \(\sum_{i=k}^{\infty} b_{i}\) diverges.

**Proof.** By Exercise \(\PageIndex{3}\), we need only show that \(\sum_{i=N}^{\infty} b_{i}\) diverges. Let \(s_{n}\) be the \(n\)th partial sum of \(\sum_{i=N}^{\infty} a_{i}\) and let \(t_{n}\) be the \(n\)th partial sum of \(\sum_{i=N}^{\infty} b_{i}.\) Now \(\left\{s_{n}\right\}_{n=N}^{\infty}\) is a nondecreasing sequence which diverges, and so we must have \(\lim_{n \rightarrow \infty} s_{n}=+\infty.\) Thus given any real number \(M\) there exists an integer \(L\) such that

\[M<s_{n} \leq t_{n}\]

whenever \(n>L.\) Hence \(\lim_{n \rightarrow \infty} t_{n}=+\infty\) and \(\sum_{i=k}^{\infty} b_{i}\) diverges. \(\quad\) Q.E.D.

Example \(\PageIndex{1}\)

Consider the infinite series

\[\sum_{n=0}^{\infty} \frac{1}{n !}=1+1+\frac{1}{2}+\frac{1}{3 !}+\frac{1}{4 !}+\cdots.\]

Now for \(n=1,2,3, \ldots,\) we have

\[0<\frac{1}{n !} \leq \frac{1}{2^{n-1}}.\]

Since

\[\sum_{n=1}^{\infty} \frac{1}{2^{n-1}}\]

converges, it follows that

\[\sum_{n=0}^{\infty} \frac{1}{n !}\]

converges. Moreover,

\[2<\sum_{n=0}^{\infty} \frac{1}{n !}<1+\sum_{n=1}^{\infty} \frac{1}{2^{n-1}}=1+2=3.\]

We let

\[e=\sum_{n=0}^{\infty} \frac{1}{n !}.\]
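The partial sums of this series converge very quickly; a short sketch (the helper is ours, not from the text) shows the first twenty-one terms already pin \(e\) down to machine precision, inside the bound \(2<e<3\) derived above.

```python
# Partial sums of e = sum_{n>=0} 1/n!, using a running factorial.
def e_partial(N):
    total, fact = 0.0, 1
    for n in range(N + 1):
        if n > 0:
            fact *= n          # fact = n!
        total += 1.0 / fact
    return total

approx = e_partial(20)
```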

Proposition \(\PageIndex{5}\)

\(e \notin \mathbb{Q}\).

**Proof.** Suppose \(e=\frac{p}{q}\) where \(p, q \in \mathbb{Z}^{+}.\) Let

\[a=q !\left(e-\sum_{n=0}^{q} \frac{1}{n !}\right).\]

Then \(a \in \mathbb{Z}^{+}\) since \(q ! e=(q-1) ! p\) and \(n !\) divides \(q !\) when \(n \leq q.\) At the same time

\[\begin{aligned} a &=q !\left(\sum_{n=0}^{\infty} \frac{1}{n !}-\sum_{n=0}^{q} \frac{1}{n !}\right) \\ &=q ! \sum_{n=q+1}^{\infty} \frac{1}{n !} \\ &=\frac{1}{q+1}+\frac{1}{(q+1)(q+2)}+\frac{1}{(q+1)(q+2)(q+3)}+\cdots \\ &=\frac{1}{q+1}\left(1+\frac{1}{q+2}+\frac{1}{(q+2)(q+3)}+\cdots\right) \\ &<\frac{1}{q+1}\left(1+\frac{1}{q+1}+\frac{1}{(q+1)^{2}}+\cdots\right) \\ &=\frac{1}{q+1}\left(\frac{1}{1-\frac{1}{q+1}}\right) \\ &=\frac{1}{q+1} \cdot \frac{q+1}{q} \\ &=\frac{1}{q} \\ & \leq 1. \end{aligned}\]

Hence \(0<a<1,\) so \(a \notin \mathbb{Z}^{+}.\) Since this is impossible, we conclude that no such integers \(p\) and \(q\) exist. \(\quad\) Q.E.D.

Definition

We call a real number which is not a rational number an irrational number.

Example \(\PageIndex{2}\)

We have seen that \(\sqrt{2}\) and \(e\) are irrational numbers.

Proposition \(\PageIndex{6}\)

Suppose \(\sum_{i=m}^{\infty} a_{i}\) and \(\sum_{i=k}^{\infty} b_{i}\) are infinite series for which there exists an integer \(N\) and a real number \(M>0\) such that \(0 \leq a_{i} \leq M b_{i}\) whenever \(i \geq N.\) If \(\sum_{i=k}^{\infty} b_{i}\) converges, then \(\sum_{i=m}^{\infty} a_{i}\) converges.

**Proof.** Since \(\sum_{i=k}^{\infty} M b_{i}\) converges whenever \(\sum_{i=k}^{\infty} b_{i}\) does, the result follows from the comparison test. \(\quad\) Q.E.D.

Exercise \(\PageIndex{6}\)

Suppose \(\sum_{i=m}^{\infty} a_{i}\) diverges. Show that \(\sum_{i=m}^{\infty} \beta a_{i}\) diverges for any real number \(\beta \neq 0\).

Proposition \(\PageIndex{7}\)

Suppose \(\sum_{i=m}^{\infty} a_{i}\) and \(\sum_{i=k}^{\infty} b_{i}\) are infinite series for which there exists an integer \(N\) and a real number \(M>0\) such that \(0 \leq a_{i} \leq M b_{i}\) whenever \(i \geq N.\) If \(\sum_{i=m}^{\infty} a_{i}\) diverges, then \(\sum_{i=k}^{\infty} b_{i}\) diverges.

**Proof.** By the comparison test, \(\sum_{i=k}^{\infty} M b_{i}\) diverges. Hence, by the previous exercise, \(\sum_{i=k}^{\infty} b_{i}\) also diverges. \(\quad\) Q.E.D.

We call the results of the next two exercises, which are direct consequences of the last two propositions, the limit comparison test.

Exercise \(\PageIndex{7}\)

Suppose \(\sum_{i=m}^{\infty} a_{i}\) and \(\sum_{i=m}^{\infty} b_{i}\) are infinite series for which \(a_{i} \geq 0\) and \(b_{i}>0\) for all \(i \geq m.\) Show that if \(\sum_{i=m}^{\infty} b_{i}\) converges and

\[\lim_{i \rightarrow \infty} \frac{a_{i}}{b_{i}}<+\infty,\]

then \(\sum_{i=m}^{\infty} a_{i}\) converges.

Exercise \(\PageIndex{8}\)

Suppose \(\sum_{i=m}^{\infty} a_{i}\) and \(\sum_{i=m}^{\infty} b_{i}\) are infinite series for which \(a_{i} \geq 0\) and \(b_{i}>0\) for all \(i \geq m.\) Show that if \(\sum_{i=m}^{\infty} a_{i}\) diverges and

\[\lim_{i \rightarrow \infty} \frac{a_{i}}{b_{i}}<+\infty,\]

then \(\sum_{i=m}^{\infty} b_{i}\) diverges.

Exercise \(\PageIndex{9}\)

Show that

\[\sum_{n=1}^{\infty} \frac{1}{n 2^{n}}\]

converges.

Exercise \(\PageIndex{10}\)

Show that

\[\sum_{n=0}^{\infty} \frac{x^{n}}{n !}\]

converges for any real number \(x \geq 0\).

Exercise \(\PageIndex{11}\)

Let \(S\) be the set of all finite sums of numbers in the set \(\left\{a_{1}, a_{2}, a_{3}, \ldots\right\},\) where \(a_{i} \geq 0\) for \(i=1,2,3, \ldots.\) That is,

\[S=\left\{\sum_{i \in J} a_{i}: J \subset\{1,2,3, \ldots, n\} \text { for some } n \in \mathbb{Z}^{+}\right\}.\]

Show that \(\sum_{i=1}^{\infty} a_{i}\) converges if and only if \(\sup S<+\infty,\) in which case

\[\sum_{i=1}^{\infty} a_{i}=\sup S.\]

One consequence of the preceding exercise is that the sum of a sequence of nonnegative numbers depends only on the numbers being added, and not on the order in which they are added. That is, if \(\varphi: \mathbb{Z}^{+} \rightarrow \mathbb{Z}^{+}\) is one-to-one and onto, then \(\sum_{i=1}^{\infty} a_{i}\) converges if and only if \(\sum_{i=1}^{\infty} a_{\varphi(i)}\) converges, and, in that case,

\[\sum_{i=1}^{\infty} a_{i}=\sum_{i=1}^{\infty} a_{\varphi(i)}.\]
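The rearrangement remark can be illustrated numerically (a sketch only; finite truncations, of course, prove nothing): summing the nonnegative terms \(\left(\frac{1}{2}\right)^{i}\) in a randomly permuted order gives the same value as summing them in order.

```python
import random

# For nonnegative terms, reordering does not change the sum.  Sketch with
# the terms (1/2)^i, i = 0..49, summed in two different orders:
terms = [0.5 ** i for i in range(50)]
shuffled = terms[:]
random.Random(0).shuffle(shuffled)   # fixed seed for reproducibility
s_original = sum(terms)
s_shuffled = sum(shuffled)
```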

## Infinite Series and Convergence

What is an infinite series, and how can you add up endlessly many numbers without getting infinity? These are the questions this article focuses on. First, it explains what an infinite series is, then discusses geometric series and the harmonic series. It also covers the divergence test and elementary convergence rules.


## Related Resources

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: Today we are continuing with improper integrals. I still have a little bit more to tell you about them. What we were discussing at the very end of last time was improper integrals.

Now, these are going to be improper integrals of the second kind. By second kind I mean that they have a singularity at a finite place. That would be something like this. So here's the definition if you like. Same sort of thing as we did when the singularity was at infinity. So if you have the integral from 0 to 1 of f(x) dx, this is going to be the same thing as the limit, as a goes to 0 from above, of the integral from a to 1 of f(x) dx.

And the idea here is the same one that we had at infinity. Let me draw a picture of it. You have, imagine a function which is coming down like this and here's the point 1. And we don't know whether the area enclosed is going to be infinite or finite and so we cut it off at some place a. And we let a go to 0 from above. So really it's 0+. So we're coming in from the right here. And we're counting up the area in this chunk. And we're seeing as it expands whether it goes to infinity or whether it tends to some finite limit.

Right, so this is the example and this is the definition. And just as we did for the other kind of improper integral, we say that this converges -- so that's the key word here -- if the limit exists (is finite), and diverges if not.

Let's just take care of the basic examples. First of all I wrote this one down last time. We're going to evaluate this one. The integral from 0 to 1 of 1 over the square root of x. And this just, you almost don't notice the fact that it goes to infinity. This goes to infinity as x goes to 0. But if you evaluate it -- first of all we always write this as a power. Right? To get the evaluation. And then I'm not even going to replace the 0 by an a. I'm just going to leave it as 0. The antiderivative here is x^(1/2) times 2. And then I evaluate that at 0 and 1. And I get 2. 2 minus 0, which is 2.

All right so this one is convergent. And not only is it convergent but we can evaluate it.

The second example, being not systematic but really giving you the principal examples that we'll be thinking about, is this one here, dx / x. And this one gives you the antiderivative as the logarithm. Evaluated at 0 and 1. And now again you have to have this thought process in your mind that you're really taking the limit. But this is going to be the log of 1 minus the log of 0. Really the log of 0 from above. There is no such thing as the log of 0 from below.

And this is minus infinity. So it's 0 minus minus infinity, which is plus infinity. And so this one diverges.

All right so what's the general-- So more or less in general, let's just, for powers anyway, if you work out this thing for dx / x^p from 0 to 1. What you're going to find is that it's 1/(1-p) when p is less than 1. And it diverges for p >= 1. Now that's the final result. If you carry out this integration it's not difficult.
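As a quick numeric sketch of this p-test at 0 (the helper below is illustrative, not from the lecture): integrate x^(-p) from a cutoff a up to 1 and let a shrink; for p < 1 the values settle near 1/(1-p), while for p >= 1 they blow up.

```python
# Cut-off version of the improper integral of x^(-p) over (0, 1].
def cutoff_integral(p, a):
    # antiderivative of x^(-p) is x^(1-p)/(1-p) for p != 1
    return (1 - a ** (1 - p)) / (1 - p)

# p = 1/2: values approach 1/(1 - 1/2) = 2 as the cutoff a shrinks.
p_half = [cutoff_integral(0.5, a) for a in (1e-2, 1e-4, 1e-8)]
# p = 2: values grow without bound as the cutoff a shrinks.
p_two = [cutoff_integral(2.0, a) for a in (1e-2, 1e-4, 1e-8)]
```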

All right so now I just want to try to help you to remember this. And to think about how you should think about it. So I'm going to say it in a few more ways. All right just repeat what I've said already but try to get it to percolate and absorb itself. And in order to do that I have to make the contrast between the kind of improper integral that I was dealing with before. Which was not as x goes to 0 here but as x goes to infinity, the other side. Let's make this contrast.

First of all, if I look at the angle that we have been paying attention to right now. We've just considered things like this. 1 over x to the 1/2. Which is a lot smaller than 1/x. Which is a lot smaller than say 1/x^2. Which would be another example. This is as x goes to 0. So this one's the smallest one. This one's the next smallest one. And this one is very large.

On the other hand it goes the other way at infinity. As x tends to infinity. All right so try to keep that in mind.

And now I'm going to put a little box around the bad guys here. This one is divergent. And this one is divergent. And this one is divergent. And this one is divergent. The crossover point is 1/x. When we get smaller than that, we get to things which are convergent. When we get smaller than it on this other scale, it's convergent.

All right so these guys are divergent. So they're associated with divergent integrals. The functions themselves are just tending towards-- well these tend to infinity, and these tend toward 0. So I'm not talking about the functions themselves but the integrals.

Now I want to draw this again here, not small enough. I want to draw this again. And, so I'm just going to draw a picture of what it is that I have here. But I'm going to combine these two pictures. So here's the picture for example of y = 1/x. All right. That's y y = 1/x. And that picture is very balanced. It's symmetric on the two ends.

If I cut it in half then what I get here is two halves. And this one has infinite area. That corresponds to the integral from 1 to infinity, dx / x being infinite. And the other piece, which -- this one we calculated last time, this is the one that we just calculated over here at Example 2 -- has the same property. It's infinite. And that's the fact that the integral from 0 to 1 of dx / x is infinite. Right, so both, we lose on both ends.

On the other hand if I take something like -- I'm drawing it the same way but it's really not the same -- y = 1 over the square root of x. y = 1 / x^(1/2). And if I cut that in half here then the x^(1/2) is actually bigger than this guy. So this piece is infinite. And this part over here actually is going to give us an honest number. In fact this one is finite. And we just checked what the number is. It actually happens to have area 2. And what's happening here is if you would superimpose this graph on the other graph what you would see is that they cross. And this one sits on top. So if I drew this one in let's have another color here, orange let's say. If this were orange if I set it on top here it would go this way. OK and underneath the orange is still infinite. So both of these are infinite. On here on the other hand underneath the orange is infinite but underneath where the green is is finite. That's a smaller quantity. Infinity is a lot bigger than 2. 2 is a lot less than infinity. All right so that's reflected in these comparisons here.

Now if you like if I want to do these in green. This guy is good and this guy is good. Well let me just repeat that idea over here in this sort of reversed picture with y = 1/x^2. If I chop that in half then the good part is this end here. This is finite. And the bad part is this part of here which is way more singular. And it's infinite.

All right so again what I've just tried to do is to give you some geometric sense and also some visceral sense. This guy, its tail as it goes out to infinity is much lower. It's much smaller than 1/x. And these guys trapped an infinite amount of area. This one traps only a finite amount of area.

All right so now I'm just going to give one last example which combines these two types of pictures. It's really practically the same as what I've said before but I-- oh have to erase this one too.

So here's another example: if you're in-- So let's take the following example. This is somewhat related to the first one that I gave last time. If you take a function y = 1/(x-3)^2. And you think about its integral.

So let's think about the integral from 0 to infinity, dx / (x-3)^2. And suppose you were faced with this integral. In order to understand what it's doing you have to pay attention to two places where it can go wrong. We're going to split it into two pieces. I'm going to break it up into this one here up to 5, for the sake of argument, and then from 5 to infinity. All right. So these are the two chunks.

Now why did I break it up into those two pieces? Because what's happening with this function is that it's going up like this at 3. And so if I look at the two halves here. I'm going to draw them again and I'm going to illustrate them with the colors we've chosen, which are I guess red and green. What you'll discover is that this one here, which corresponds to this piece here, is infinite. And it's infinite because there's a square in the denominator. And as x goes to 3 this is very much like if we shifted the 3 to 0. Very much like this 1/x^2 here. But not in this context. In the other context where it's going to infinity. This is the same as at the picture directly above with the infinite part in red. All right. And this part here, this part is finite. All right. So since we have an infinite part plus a finite part the conclusion is that this thing, well this guy converges. And this one diverges. But the total unfortunately diverges. Right, because it's got one infinity in it. So this thing diverges. And that's what happened last time when we got a crazy number. If you integrated this you would get some negative number. If you wrote down the formulas carelessly. And the reason is that the calculation actually is nonsense. So you've gotta be aware, if you encounter a singularity in the middle, not to ignore it. Yeah. Question.

AUDIENCE: [INAUDIBLE PHRASE]

PROFESSOR: Why do we say that the whole thing diverges? The reason why we say that is the area under the whole curve is infinite. It's the sum of this piece plus this piece. And so the total is infinite.

AUDIENCE: [INAUDIBLE PHRASE]

PROFESSOR: We're stuck. This is an ill-defined integral. It's one where your red flashing warning sign should be on. Because you're not going to get the right answer by computing it. You'll never get an answer. Similarly you'll never get an answer with this. And you will get an answer with that. OK? Yeah another question.

AUDIENCE: [INAUDIBLE PHRASE]

PROFESSOR: So the question is, if you have a little glance at an integral, how are you going to decide where you should be heading? So I'm going to answer that orally. Although you know, but I'll say one little hint here. So you always have to check x going to infinity and x going to minus infinity, if they're in there. And you also have to check any singularity, like x going to 3 for sure in this case. You have to pay attention to all the places where the thing is infinite. And then you want to focus in on each one separately. And decide what's going on it at that particular place. When it's a negative power-- So remember dx / x as x goes to 0 is bad. And dx / x^2 is bad. dx / x^3 is bad. All of them are even worse. So anything of this form is bad: n = 1, 2, 3. These are the red box kinds. All right.

That means that any of the integrals that we did in partial fractions which had a root, which had a factor of something in the denominator-- those are all divergent integrals if you cross the singularity. Not a single one of them makes sense across the singularity. Right?

If you have square roots and things like that then you can repair things like that. And there's some interesting examples of that. Such as with the arcsine function and so forth. Where you have an improper integral which is really OK. All right. So that's the best I can do. It's obviously something you get experience with. All right.

Now I'm going to move on and this is more or less our last topic. Yay, but not quite. Well, so I should say it's our penultimate topic. Right because we have one more lecture. All right.

So that our next topic is series. Now we'll do it in a sort of a concrete way today. And then we'll do what are known as power series tomorrow.

So let me tell you about series. Remember we're talking about infinity and dealing with infinity. So we're not just talking about any old series. We're talking about infinite series.

There is one infinite series which is without question the most important and useful series. And that's the geometric series, but I'm going to introduce it concretely first in a particular case: the sum 1 + 1/2 + 1/4 + 1/8 + ..., which in principle goes on forever. You can see that it goes someplace fairly easily by marking out what's happening on the number line. The first step takes us to 1 from 0. And then if I add this half, I get to 3/2. Right, so the first step was 1 and the second step was 1/2.

Now if I add this quarter in, which is the next piece then I get some place here. But what I want to observe is that I got, I can look at it from the other point of view. I got, when I move this quarter I got half way to 2 here.

I'm putting 2 in green because I want you to think of it as being the good kind. Right. The kind that has a number. And not one of the red kinds. We're getting there and we're almost there. So the next stage we get half way again. That's the eighth and so forth. And eventually we get to 2. So this sum we write equals two.

All right that's kind of a paradox because we never get to 2. This is the paradox that Zeno fussed with. And his conclusion, you know, with the rabbit and the hare. No, the rabbit and the tortoise. Sorry hare chasing-- anyway, the rabbit chasing the tortoise. His conclusion-- you know, I don't know if you're aware of this, but he understood this paradox. And he said you know it doesn't look like it ever gets there because they're infinitely many times between the time-- you know that the tortoise is always behind, always behind, always behind, always behind. So therefore it's impossible that the tortoise catches up right. So do you know what his conclusion was? Time does not exist. That was actually literally his conclusion. Because he didn't understand the possibility of a continuum of time. Because there were infinitely many things that happened before the tortoise caught up. So that was the reasoning. I mean it's a long time ago but you know people didn't-- he didn't believe in continuum.

All right. So anyway that's a small point. Now the general case here of a geometric series is where I put in a number a instead of 1/2 here. So that's what we had before: 1 + a + a^2 + ... It isn't quite the most general geometric series, but anyway I'll write this down. And you're certainly going to want to remember that the formula for this in the limit is 1/(1-a).

And I remind you that this only works when the absolute value is strictly less than 1. In other words when -1 is strictly less than a is less than 1. And that's really the issue that we're going to want to worry about now. What we're worrying about is this notion of convergence. And what goes wrong when there isn't convergence, when there's a divergence.

So let me illustrate the divergences before going on. And this is what we have to avoid if we're going to understand series. So here's an example when a = 1. You get 1 + 1 + 1 plus et cetera. And that's equal to 1/(1-1). Which is 1 over 0. So this is not bad. It's almost right. Right? It's sort of infinity equals infinity.

At the edge here we managed to get something which is sort of almost right. But you know, it's, we don't consider this to be logically to make complete sense. So it's a little dangerous. And so we just say that it diverges. And we get rid of this. So we're still putting it in red. All right. The bad guy here. So this one diverges. Similarly if I take a equals -1, I get 1 - 1 + 1 - 1 + 1. Because the odd and the even powers in that formula alternate sign.

And this bounces back and forth. It never settles down. It starts at 1. And then it gets down to 0 and then it goes back up to 1, down to 0, back up to 1. It doesn't settle down. It bounces back and forth. It oscillates. On the other hand if you compare the right hand side. What's the right hand side? It's 1 / (1-(-1)). Which is 1/2. All right. So if you just paid attention to the formula, which is what we were doing when we integrated without thinking too hard about this, you get a number here but in fact that's wrong. Actually it's kind of an interesting number. It's halfway between the two, between 0 and 1. So again there's some sort of vague sense in which this is trying to be this answer. All right. It's not so bad but we're still going to put this in a red box. All right. because this is what we called divergence. So both of these cases are divergent. It only really works when alpha-- when a is less than 1. I'm going to add one more case just to see that mathematicians are slightly curious about what goes on in other cases.

So this is 1 + 2 + 2^2 + 2^3 plus etc.. And that should be equal to -- according to this formula -- 1/(1-2). Which is -1. All right. Now this one is clearly wrong, right? This one is totally wrong. It certainly diverges. The left hand side is obviously infinite. The right hand side is way off. It's -1.

On the other hand it turns out actually that mathematicians have ways of making sense out of these. In number theory there's a strange system where this is actually true. And what happens in that system is that what you have to throw out is the idea that 0 is less than 1. There is no such thing as negative numbers. So this number exists. And it's the additive inverse of 1. It has this arithmetic property but the statement that this is, that 1 is bigger than 0 does not make sense.

So you have your choice, either this diverges or you have to throw out something like this. So that's a very curious thing in higher mathematics. Which if you get to number theory there's fun stuff there. All right. OK but for our purposes these things are all out. All right. They're gone. We're not considering them. Only a between -1 and 1. All right.

Now I want to do something systematic. And it's more or less on the lines of the powers that I'm erasing right now. I want to tell you about series which are kind of borderline convergent. And then next time when we talk about powers series we'll come back to this very important series which is the most important one.

So now let's talk about some series-- er, general notations. And this will help you with the last bit. This is going to be pretty much the same as what we did for improper integrals. Namely, first of all I'm going to have S_N which is the sum of a_n, n equals 0 to capital N. And this is what we're calling a partial sum.

And then the full limit, which is capital S if you like-- the sum of a_n, n equals 0 to infinity-- is just the limit as N goes to infinity of the S_N's. And then we have the same kind of notation that we had before. Which is there are these two choices: if the limit exists, that's the green choice, and we say it converges. So we say the series converges.

And then the other case which is the limit does not exist. And we can say the series diverges. Question.

AUDIENCE: [INAUDIBLE PHRASE]

PROFESSOR: The question was how did I get to this? And I will do that next time but in fact of course you've seen it in high school. Right this is-- Yeah. Yeah. We'll do that next time. The question was how did we arrive-- sorry I didn't tell you the question. The question was how do we arrive at this formula on the right hand side here. But we'll talk about that next time. All right.

So here's the basic definition and what we're going to recognize about series. And I'm going to give you a few examples and then we'll do something systematic.

So the first example-- well the first example is the geometric series. But the first example that I'm going to discuss now in a little bit of detail is this sum 1/n^2, n equals 1 to infinity. It turns out that this series is very analogous -- and we'll develop this analogy carefully -- to the integral from 1 to infinity, dx / x^2. And we're going to develop this analogy in detail later in this lecture. And this one is one of the ones-- so now you have to go back and actually remember, this is one of the ones you really want to memorize. And you should especially pay attention to the ones with an infinity in them. This integral is convergent. And this series is convergent. Now it turns out that evaluating the integral is very easy. This is 1. It's easy to calculate. Evaluating the series is very tricky. And Euler did it. And the answer is pi^2 / 6. That's an amazing calculation. And it was done very early in the history of mathematics. If you look at another example-- so maybe example two here, if you look at 1/n^3, n equals-- well you can't start here at 0 by the way. I get to start wherever I want in these series. Here I start with 0. Here I started with 1.
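Euler's value is easy to see creeping up in the partial sums (a numeric sketch; the helper name is ours):

```python
import math

# Partial sums of sum 1/n^2 approach pi^2/6 from below;
# the tail beyond N is roughly 1/N.
def zeta2_partial(N):
    return sum(1.0 / (n * n) for n in range(1, N + 1))

target = math.pi ** 2 / 6
approx = zeta2_partial(100_000)   # tail is about 1e-5
```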

And notice the reason why I started-- it was a bad idea to start with 0 was that 1 over 0 is undefined. Right? So I'm just starting where it's convenient for me. And since I'm interested mostly in the tail behavior it doesn't matter to me so much where I start. Although if I want an exact answer I need to start exactly at n = 1. All right. This one is similar to the integral from 1 to infinity of dx / x^3. Which is convergent again. So there's a number that you get. And let's see, what is it? It's 1/2. Anyway it's an easy number to calculate.

This one over here stumped mathematicians basically for all time. It doesn't have any kind of elementary form like this. And it was only relatively recently proved to be irrational. People couldn't even decide whether this was a rational number or not. But anyway that's been resolved: it is an irrational number, which is what people suspected. Yeah, question.

AUDIENCE: [INAUDIBLE PHRASE]

PROFESSOR: Yeah sorry. OK. I violated a rule of mathematics-- you said why is this similar? I thought that similar was something else. And you're absolutely right. And I violated a rule of mathematics. Which is that I used this symbol for two different things. I should have written this symbol here. All right. I'll create a new symbol here. The question of whether this converges or this converges. These are the same type of question. And we'll see why they're the same question in a few minutes. But in fact the wiggle I used, "similar", I used for the connection between functions. The things that are really similar are that 1/n^2 resembles 1/x^2. So I apologize I didn't--

AUDIENCE: [INAUDIBLE PHRASE]

PROFESSOR: Oh you thought that this was the definition of that. That's actually the reason why these things correspond so closely. That is that the Riemann sum is close to this. But that doesn't mean they're equal. The Riemann sum only works when the delta x goes to 0. The way that we're going to get a connection between these two, as we will see in just a second, is with a Riemann sum with delta x = 1. All right and then that will be the connection between them. All right that's absolutely right. All right.

So in order to illustrate exactly this idea that you've just come up with, and in fact that we're going to use, we'll do the same thing but we're going to do it on the example sum 1/n. So here's Example 3 and it's going to be sum 1/n, n equals 1 to infinity. And what we're now going to see is that it corresponds to this integral here. And we're going to show therefore that this thing diverges. But we're going to do this more carefully. We're going to do this in some detail so that you see what it is, that the correspondence is between these quantities. And the same sort of reasoning applies to these other examples.

So here we go. I'm going to take the integral and draw the picture of the Riemann sum. So here's the level 1 and here's the function y = 1/x. And I'm going to take the Riemann sum. With delta x = 1. And that's going to be closely connected to the series that I have. But now I have to decide whether I want a lower Riemann sum or an upper Riemann sum. And actually I'm going to check both of them because both of them are illuminating.

First we'll do the upper Riemann sum. Now that's this staircase here. So we'll call this the upper Riemann sum. And let's check what its levels are. This is not to scale: this level should be 1/2. So if this is 1 and this is 2, then that level was supposed to be 1/2, and this next level should be 1/3. That's how the Riemann sums are working out.

And now I have the following phenomenon. Let's cut it off at the Nth stage. So that means the integral is from 1 to N, dx / x. And the Riemann sum is something that's bigger than it, because the boxes enclose the area of the curved region. And that's going to be the area of the first box, which is 1, plus the area of the second box, which is 1/2, plus the area of the third box, which is 1/3, all the way up to the last one, which starts at N-1. So it ends with 1/(N-1). There are not N boxes here; there are only N-1 boxes, because the distance between 1 and N is N-1. Right, so this is N-1 terms.

However, if I use the notation for the partial sum, which is S_N = 1 + 1/2 + ... + 1/(N-1) + 1/N-- in other words I go out to the Nth term, which is what I would ordinarily do-- then this sum that I have here certainly is less than S_N, because S_N has one more term. And so here I have an integral which is underneath this sum S_N.

Now this is going to allow us to prove conclusively that the sum diverges. Why is that? Because this term here we can calculate. This is log x evaluated at 1 and N, which is log N - log 1, and log 1 is 0. And so what we have here is that log N is less than S_N. And clearly this goes to infinity: as N goes to infinity, log N goes to infinity. So we're done; we've shown divergence. Now the way I'm going to use the lower Riemann sum is to recognize that we've captured the rate appropriately. That is, not only do I have a lower bound like this, but I have an upper bound which is very similar.

So if I use the lower Riemann sum, again with delta x = 1, then I have that the integral from 1 to N of dx / x is bigger than-- well, what are the terms going to be if I fit the boxes underneath? If I fit them underneath, I'm missing the first term. That is, the first box is going to be at half height; it's going to be this lower piece. So I'm missing this first term. So it'll be 1/2 + 1/3 plus-- it will keep on going, but now the last one, instead of being 1/(N-1), is going to be 1/N. This is again a total of N-1 terms. This is the lower Riemann sum.

And now we can recognize that this is exactly equal to-- well, I'll put it over here-- this is exactly equal to S_N - 1. That is, we missed the first term, which is 1, but we got all the rest of them. So if I put this to the other side-- remember this integral is log N-- I have the other side of this bound. I have that S_N is less than log N + 1, if I reverse it. And so I've trapped it on the other side. And here I have the lower bound. So I'm going to combine those together.

So all told I have this correspondence here. The size of S_N, which is relatively hard to calculate and understand exactly, is trapped between log N and log N + 1. Yeah, question.
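The trap derived here, log N < S_N < log N + 1, is easy to check numerically. A quick sketch in Python (not part of the lecture; the helper name is mine):

```python
import math

def harmonic(N):
    # S_N = 1 + 1/2 + ... + 1/N, the N-th partial sum of the harmonic series
    return sum(1.0 / n for n in range(1, N + 1))

# the two Riemann-sum bounds from the argument above: log N < S_N < log N + 1
for N in [10, 100, 1000, 10**5]:
    S = harmonic(N)
    assert math.log(N) < S < math.log(N) + 1
```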

AUDIENCE: [INAUDIBLE PHRASE]

PROFESSOR: This step here is the step that you're concerned about. So this step is a geometric argument which is analogous to this step. All right it's the same type of argument. And in this case it's that the rectangles are on top and so the area represented on the right hand side is less than the area represented on this side. And this is the same type of thing except that the rectangles are underneath. So the sum of the areas of the rectangles is less than the area under the curve.

All right. So I've now trapped this quantity. And I'm now going to state the general result. So here's what's known as integral comparison. It's this double arrow correspondence for a very general case. There are actually even more cases where it works, but this is a good, convenient case.

Now this is called integral comparison. And it comes with hypotheses, but it follows the same argument that I just gave. If f(x) is decreasing and positive, then the sum of f(n), n equals 1 to infinity, minus the integral from 1 to infinity of f(x) dx, is less than f(1). That's basically what we showed: we showed that the difference between S_N and log N was at most 1. And the sum and the integral converge or diverge together. That is, they either both converge or both diverge. This is the type of test that we like, because then we can just convert the question of convergence over here to the question of convergence on the other side.
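As a sanity check on the integral comparison statement, here is a small Python sketch (my own, assuming nothing beyond the standard library) using f(x) = 1/x^2, whose integral from 1 to N we can do by hand:

```python
def f(x):
    # a decreasing, positive function whose integral is easy by hand:
    # the integral of 1/x^2 from 1 to N is 1 - 1/N
    return 1.0 / x**2

N = 10**5
partial_sum = sum(f(n) for n in range(1, N + 1))
integral = 1.0 - 1.0 / N

# integral comparison: for decreasing positive f,
# 0 <= (sum of f(n)) - (integral of f) <= f(1)
diff = partial_sum - integral
assert 0 <= diff <= f(1)
```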

Now I remind you that it's incredibly hard to calculate these numbers. Whereas these numbers are easier to calculate. Our goal is to reduce things to simpler things. And in this case sums, infinite sums are much harder than infinite integrals.

All right, so that's the integral comparison. And now I have one last bit on comparisons that I need to tell you about. And this is very much like what we did with integrals: a so-called limit comparison. The limit comparison says the following: if f(n) is similar to g(n)-- recall that means f(n) / g(n) tends to 1 as n goes to infinity-- and we're in the positive case, so let's just say g(n) is positive, then sum f(n) and sum g(n) either both converge or both diverge, same as above. This is just saying that if they behave the same way in the tail, which is all we really care about, then they have similar convergence properties.

And let me give you a couple of examples. So here's one: take the sum of 1 over the square root of n^2 + 1. This is going to be replaced by something simpler, namely the main term, which is 1 over the square root of n^2. We recognize that as sum 1/n, which diverges. So this guy is one of the red guys, on the red team. Now we have another example: say 1 over the square root of n^5 - n^2. Now if you have something negative in the denominator, you do have to watch out that the denominator makes sense-- that it isn't 0. So we're going to be careful and start this at n = 2, because I don't like 1/0 as a term in my series. So I'm just going to be a little careful about how I start-- as I said, I was kind of lazy here; I could have started this one at 0, for instance. All right. So here's the picture.

Now this I just replace by its main term, which is 1 over the square root of n^5, which is sum 1/n^(5/2), which converges. The power is bigger than 1, and 1 is the divider for these things; sum 1/n just misses converging. This one converges. So these are the typical ways in which these convergence tests are used. All right.
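The "similarity" condition f(n)/g(n) tending to 1 can be checked numerically for both examples. A small Python sketch (function names are mine):

```python
import math

def term(n):
    return 1.0 / math.sqrt(n**2 + 1)      # first example

def term2(n):
    return 1.0 / math.sqrt(n**5 - n**2)   # second example, needs n >= 2

# "similar" means the ratio against the main term tends to 1
for n in [10, 100, 10**4]:
    assert abs(term(n) * n - 1.0) < 0.01             # compare against 1/n
    assert abs(term2(n) * n**2.5 - 1.0) < 0.01       # compare against 1/n^(5/2)
```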

So I have one more thing for you. Which is an advertisement for next time. And I have this demo here which I will grab. But you will see this next time.

So here's a question for you to think about overnight but don't ask friends, you have to think about it yourself. So here's the problem. Here are some blocks which I acquired when my kids left home. Anyway yeah that'll happen to you too in about four years. So now here you are, these are blocks. So now here's the question that we're going to deal with next time. I'm going to build it, maybe I'll put it over here because I want to have some room to head this way. I want to stack them up so that-- oh didn't work. Going to stack them up in the following way. I want to do it so that the top one is completely to the right of the bottom one. That's the question can I do that? Can I get-- Can I build this up? So let's see here. I just seem to be missing-- but anyway what I'm going to do is I'm going to try to build this and we're going to see how far we can get with this next time.

## Geometric Progression Formulas

In mathematics, a **geometric progression (sequence)** is a sequence of numbers such that the quotient of any two successive members of the sequence is a constant, called the common ratio of the sequence. (It is sometimes inaccurately called a **geometric series**; strictly, a geometric series is the sum of the terms of a geometric progression.)

The geometric progression can be written as:

ar^0 = a, ar^1 = ar, ar^2, ar^3, ...

where r ≠ 0, r is the common ratio and a is a scale factor (also the first term).

#### Examples

A geometric progression with common ratio 2 and scale factor 1 is

1, 2, 4, 8, 16, 32.

A geometric sequence with common ratio 3 and scale factor 4 is

4, 12, 36, 108, 324.

A geometric progression with common ratio -1 and scale factor 5 is

5, -5, 5, -5, 5, -5.

#### Formulas

Formula for the n-th term:

a_n = a · r^(n-1)

Formula for the common ratio:

r = a_n / a_(n-1)

If the common ratio is:

- **Negative**, **the results will alternate between positive and negative**.
  *Example:* 1, -2, 4, -8, 16, -32, ... — the common ratio is -2 and the first term is 1.
- **Greater than 1**, **there will be exponential growth towards infinity (positive)**.
  *Example:* 1, 5, 25, 125, 625, ... — the common ratio is 5.
- **Less than -1**, **there will be exponential growth towards infinity (positive and negative)**.
  *Example:* 1, -5, 25, -125, 625, -3125, 15625, -78125, 390625, -1953125, ... — the common ratio is -5.
- **Between -1 and 1**, **there will be exponential decay towards zero**.
  *Examples:* 4, 2, 1, 0.5, 0.25, 0.125, 0.0625, ... — the common ratio is 1/2.
  4, -2, 1, -0.5, 0.25, -0.125, 0.0625, ... — the common ratio is -1/2.
- **Zero**, **the results will remain at zero**.
  *Example:* 4, 0, 0, 0, 0, ... — the common ratio is 0 and the first term is 4.
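The case analysis above can be illustrated with a small Python helper (my own sketch, not part of the original page):

```python
def geometric_progression(a, r, n):
    """First n terms of a geometric progression with scale factor a
    (the first term) and common ratio r."""
    return [a * r**k for k in range(n)]

# one check per behaviour described above
assert geometric_progression(1, -2, 6) == [1, -2, 4, -8, 16, -32]   # alternating
assert geometric_progression(1, 5, 5) == [1, 5, 25, 125, 625]       # growth
assert geometric_progression(5, -1, 4) == [5, -5, 5, -5]            # r = -1
assert geometric_progression(4, 0, 3) == [4, 0, 0]                  # r = 0
```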

#### Geometric Progression Properties

Formula for the sum of the first n terms of a geometric progression:

S_n = a(1 - r^n) / (1 - r), for r ≠ 1

#### Infinite geometric series where |r| < 1

If |r| < 1 then a_n → 0 as n → ∞.

The sum S of such an infinite geometric series is given by the formula:

S = a / (1 - r)


#### Geometric Progression Problems

Problem 1.

Is the sequence 2, 4, 6, 8, ... a geometric progression? **Solution:** No, it is not: 4/2 = 2 but 6/4 = 3/2, so the ratio is not constant. (2, 4, 8, ... is a geometric progression.)

Problem 2

If 2, 4, 8, ... form a geometric progression, what is the 10-th term? **Solution:** We can use the formula a_n = a_1 · r^(n-1):

a_10 = 2 · 2^(10-1) = 2 · 512 = 1024

Problem 3

Find the scale factor and the common ratio of a geometric progression if

a_{5} - a_{1} = 15

a_4 - a_2 = 6. **Solution:** There are two geometric progressions satisfying these conditions. The first has scale factor 1 and common ratio 2; the second has scale factor -16 and common ratio 1/2.
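Both problems can be verified with a few lines of Python using the n-th term formula (a small sketch; the helper name is mine):

```python
def nth_term(a1, r, n):
    # a_n = a_1 * r^(n-1)
    return a1 * r**(n - 1)

# Problem 2: the 10th term of 2, 4, 8, ...
assert nth_term(2, 2, 10) == 1024

# Problem 3: both solutions satisfy a_5 - a_1 = 15 and a_4 - a_2 = 6
for a1, r in [(1, 2), (-16, 0.5)]:
    assert abs((nth_term(a1, r, 5) - nth_term(a1, r, 1)) - 15) < 1e-9
    assert abs((nth_term(a1, r, 4) - nth_term(a1, r, 2)) - 6) < 1e-9
```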

## Step by step guide to solve Infinite Geometric Series

- Infinite Geometric Series: a geometric series has no finite sum when the absolute value of the ratio is at least (1); it converges when (|r| < 1).
- Infinite Geometric Series formula: (\sum_{i=0}^{\infty} a r^i = \frac{a}{1-r}) for (|r| < 1)
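The formula can be sanity-checked by comparing partial sums against the closed form. A minimal Python sketch, with illustrative values a = 9 and r = 1/3 of my own choosing:

```python
def geometric_sum(a, r, terms):
    # partial sum of a + ar + ar^2 + ... with `terms` terms
    return sum(a * r**i for i in range(terms))

a, r = 9.0, 1.0 / 3.0          # illustrative values, |r| < 1
closed_form = a / (1 - r)      # the formula above gives 13.5
assert abs(geometric_sum(a, r, 60) - closed_form) < 1e-9
```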


## An infinite series of surprises

A sum of the form \(a_1 + a_2 + a_3 + \cdots\) is known as an infinite series. Such series appear in many areas of modern mathematics. Much of this topic was developed during the seventeenth century. Leonhard Euler continued this study and in the process solved many important problems. In this article we will explain Euler's argument involving one of the most surprising series.

This was one of the first, and only, general results known during the seventeenth century. Several other series were also known at the time.

Similar methods were used to find the sums of several related series.

Now all these series converge. That is to say we can make sense of the infinite sum as a finite number. This is not true of a particularly famous series which is known as the *harmonic series*:

\[ \sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots \]

The following medieval proof that the harmonic series diverges was discovered and published by the French monk Nicole Oresme around 1350, and relies on grouping the terms in the series as follows:

\[ 1 + \frac{1}{2} + \left(\frac{1}{3} + \frac{1}{4}\right) + \left(\frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}\right) + \cdots \]

\[ > 1 + \frac{1}{2} + \left(\frac{1}{4} + \frac{1}{4}\right) + \left(\frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8}\right) + \cdots \]

\[ = 1 + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots \]

It follows that the sum can be made as large as we please by taking enough terms. In fact this series diverges quite slowly. A more accurate estimate of the speed of divergence can be made using the following more modern proof. This uses a technique known as the *integral test* which compares the graph of a function with the terms of the series. By integrating the function using calculus we can compare the sum of the series with the integral of the function and draw conclusions from this.
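The integral test also makes the "diverges quite slowly" claim precise. A small Python sketch (my own, using only the standard library) shows that the gap between the partial sums and the logarithm stays bounded, and in fact approaches the Euler-Mascheroni constant, approximately 0.5772:

```python
import math

def harmonic(N):
    return sum(1.0 / n for n in range(1, N + 1))

# the integral test traps H_N between log N and log N + 1; the gap
# H_N - log N in fact converges, to the Euler-Mascheroni constant ~0.5772
for N in [10**3, 10**5]:
    gap = harmonic(N) - math.log(N)
    assert 0 < gap < 1
assert abs(harmonic(10**6) - math.log(10**6) - 0.5772) < 0.001
```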

### The harmonic series generalised

The harmonic series can be described as "the sum of the reciprocals of the natural numbers". Another series that presents itself as being similar is "the sum of the squares of the reciprocals of the natural numbers". That is to say, the series

\[ \sum_{n=1}^{\infty} \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots \]


### "Infinite polynomial" - power series

This is a truly remarkable result. No one expected the value \(\frac{\pi^2}{6}\).

We paraphrase Euler’s next claim as *"what holds for a finite polynomial holds for an infinite polynomial"*. He applies this claim to the "infinite polynomial" given by the power series for \(\frac{\sin x}{x}\), factorising it using its roots \(\pm\pi, \pm 2\pi, \pm 3\pi, \ldots\):

\[ \frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \cdots \]
\[ = \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\cdots \]
\[ = \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right)\cdots \]

The second line pairs the positive and negative roots – the last line uses the difference of two squares to combine these. If you don’t believe this can be done you are right to question the logic here! Euler is being incredibly bold in his assertion that "what holds for a finite polynomial holds for an infinite polynomial". His use turns out to give the correct answer in this case!

Equating the coefficients of \(x^2\) on the two sides gives

\[ \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}. \]

Now Euler didn’t stop here – he expanded the product further and equated other coefficients to sum other series. In this way he obtained

\[ \sum_{n=1}^{\infty} \frac{1}{n^4} = \frac{\pi^4}{90} \]

\[ \sum_{n=1}^{\infty} \frac{1}{n^6} = \frac{\pi^6}{945} \]

by this method. In principle his method solves

\[ \sum_{n=1}^{\infty} \frac{1}{n^{2k}} \quad \text{for any positive integer } k. \]

### Further reading

You can find out more about some of Euler's work on infinite series (including a derivation of the last result) in his paper *Remarques sur un beau rapport entre les séries des puissances tant directes que réciproques*.


## The Ramanujan Summation: 1 + 2 + 3 + ⋯ + ∞ = -1/12?

This is what my mom said to me when I told her about this little mathematical anomaly. And it is just that, an anomaly. After all, it defies basic logic. How could adding positive numbers equal not only a negative, but a negative fraction? What the frac?

**Before I begin**: It has been pointed out to me that when I talk about sums in this article, it is not in the traditional sense of the word. This is because all the series I deal with do not naturally tend to a specific number, so we talk about a different type of sum, namely Cesàro summations. For anyone interested in the mathematics, Cesàro summations assign values to some infinite sums that do not converge in the usual sense. “The Cesàro sum is defined as the limit, as *n* tends to infinity, of the sequence of arithmetic means of the first *n* partial sums of the series” — Wikipedia. I also want to say that throughout this article I deal with the concept of countable infinity, a type of infinity that deals with an infinite set of numbers, but one where, given enough time, you could count to any number in the set. It allows me to use some of the regular properties of mathematics, like commutativity, in my equations (which is an axiom I use throughout the article).

For those of you who are unfamiliar with this series, which has come to be known as the Ramanujan Summation after a famous Indian mathematician named Srinivasa Ramanujan, it states that if you add all the natural numbers, that is 1, 2, 3, 4, and so on, all the way to infinity, you will find that it is equal to -1/12. **Yup, -0.08333333333.**

Don’t believe me? Keep reading to find out how I prove this, by proving two equally crazy claims:

First off, the bread and butter. This is where the real magic happens, in fact the other two proofs aren’t possible without this.

I start with a series, A, which is equal to 1–1+1–1+1–1 repeated an infinite number of times. I’ll write it as such:

A = 1 − 1 + 1 − 1 + 1 − 1 ⋯

Then I do a neat little trick: I take away **A** from 1.

1 − A = 1 − (1 − 1 + 1 − 1 + 1 − 1 ⋯)

So far so good? Now here is where the wizardry happens. If I simplify the right side of the equation, I get something very peculiar:

1 − A = 1 − 1 + 1 − 1 + 1 − 1 ⋯

Look familiar? In case you missed it, that’s **A**. Yes, there on the right side of the equation is the series we started off with. So I can substitute **A** for that right side, do a bit of high school algebra and boom!

1 − A = A  ⟹  2A = 1  ⟹  A = 1/2

This little beauty is Grandi’s series, named after the Italian mathematician, philosopher, and priest Guido Grandi. That’s really everything this series is, and while it is my personal favourite, there isn’t a cool history or discovery story behind it. **However**, it does open the door to proving a lot of interesting things, including a very important equation for quantum mechanics and even string theory. But more on that later. For now, we move on to proving **#2: 1–2+3–4+5–6⋯ = 1/4**.
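The value 1/2 is also exactly what the Cesàro summation mentioned at the top assigns to Grandi's series. Here is a small Python sketch checking that numerically (the helper name is mine):

```python
def cesaro_means(terms):
    # running arithmetic means of the partial sums s_1, ..., s_n
    s = 0.0       # current partial sum
    total = 0.0   # sum of all partial sums so far
    means = []
    for k, t in enumerate(terms, start=1):
        s += t
        total += s
        means.append(total / k)
    return means

grandi = [(-1)**n for n in range(10000)]   # 1, -1, 1, -1, ...
assert abs(cesaro_means(grandi)[-1] - 0.5) < 1e-3
```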

We start the same way as above, letting the series B = 1–2+3–4+5–6⋯. Then we can start to play around with it. This time, instead of subtracting **B** from 1, we are going to subtract it from **A**. Mathematically, we get this:

A − B = (1 − 1 + 1 − 1 + 1 ⋯) − (1 − 2 + 3 − 4 + 5 ⋯)

Then we shuffle the terms around a little bit, and we see another interesting pattern emerge:

A − B = (1 − 1) + (−1 + 2) + (1 − 3) + (−1 + 4) + ⋯ = 1 − 2 + 3 − 4 ⋯

Once again, we get the series we started off with, and from before, we know that **A = 1/2**, so we use some more basic algebra and prove our second mind-blowing fact of today:

A − B = B  ⟹  A = 2B  ⟹  B = A/2 = 1/4

And voila! This equation does not have a fancy name, since it has been proven by many mathematicians over the years while simultaneously being labeled a paradoxical equation. Nevertheless, it sparked a debate amongst academics at the time, and even helped extend Euler’s research on the Basel Problem and led towards important mathematical functions like the Riemann zeta function.
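A more conventional way to attach 1/4 to this series is Abel summation: for |x| < 1 the power series ∑ n(−x)^(n−1) converges in the ordinary sense to 1/(1+x)², which tends to 1/4 as x approaches 1. A quick numerical sketch of this (my own, not from the article):

```python
def abel_value(x, terms=5000):
    # partial sum of n * (-x)^(n-1), which converges for |x| < 1
    return sum(n * (-x)**(n - 1) for n in range(1, terms + 1))

# the power series sums to 1/(1+x)^2 for |x| < 1 ...
for x in [0.9, 0.99]:
    assert abs(abel_value(x) - 1.0 / (1 + x)**2) < 1e-9

# ... and the limit of 1/(1+x)^2 as x -> 1 is 1/4
assert abs(1.0 / (1 + 0.999)**2 - 0.25) < 1e-3
```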

Now for the icing on the cake, the one you’ve been waiting for, the big cheese. Once again we start by letting the series **C** = 1+2+3+4+5+6⋯, and, as you may have guessed, we are going to subtract **C** from **B**:

B − C = (1 − 2 + 3 − 4 + 5 − 6 ⋯) − (1 + 2 + 3 + 4 + 5 + 6 ⋯)

Because math is still awesome, we are going to rearrange the order of some of the numbers in here so we get something that looks familiar, but probably won’t be what you are suspecting:

B − C = (1 − 1) + (−2 − 2) + (3 − 3) + (−4 − 4) + (5 − 5) + (−6 − 6) + ⋯ = −4 − 8 − 12 ⋯

Not what you were expecting, right? Well hold on to your socks, because I have one last trick up my sleeve that is going to make it all worth it. If you notice, all the terms on the right side are multiples of −4, so we can pull out that constant factor, and lo and behold, we get what we started with:

B − C = −4(1 + 2 + 3 ⋯) = −4C

And since we have a value for **B = 1/4**, we simply put that value in and we get our magical result:

1/4 − C = −4C  ⟹  1/4 = −3C  ⟹  C = −1/12

Now, why this is important. Well for starters, it is used in string theory. Not the Stephen Hawking version unfortunately, but actually in the original version of string theory (called Bosonic String Theory). Now unfortunately Bosonic string theory has been somewhat outmoded by the current area of interest, called supersymmetric string theory, but the original theory still has its uses in understanding superstrings, which are integral parts of the aforementioned updated string theory.

The Ramanujan Summation has also had a big impact in the area of general physics, specifically in the solution to the phenomenon known as the Casimir Effect. Hendrik Casimir predicted that, given two uncharged conductive plates placed in a vacuum, there exists an attractive force between these plates due to the presence of virtual particles bred by quantum fluctuations. In Casimir’s solution, he uses the very sum we just proved to model the amount of energy between the plates. And that is the reason why this value is so important.
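A way to see where −1/12 comes from without term shuffling, closer to how the regularization is done in physics, is to damp the series with a factor e^(−εn): for small ε the damped sum behaves like 1/ε² − 1/12, so −1/12 is the finite part left over once the divergent 1/ε² piece is discarded. A small Python sketch of this standard smoothing (my own illustration, not from the article):

```python
import math

def smoothed_sum(eps, terms=10**4):
    # sum of n * exp(-eps * n): a damped version of 1 + 2 + 3 + ...
    return sum(n * math.exp(-eps * n) for n in range(1, terms + 1))

# for small eps this behaves like 1/eps^2 - 1/12: the divergence lives in
# the 1/eps^2 term, and -1/12 is the finite part that survives
eps = 0.01
assert abs(smoothed_sum(eps) - 1.0 / eps**2 + 1.0 / 12.0) < 1e-3
```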

So there you have it, the Ramanujan summation, that was discovered in the early 1900’s, which is still making an impact almost 100 years on in many different branches of physics, and can still win a bet against people who are none the wiser.

P.S. If you are still interested and want to read more, here is a conversation with two physicists trying to explain this crazy equation and their views on its usefulness and validity. It’s nice and short, and very interesting. https://physicstoday.scitation.org/do/10.1063/PT.5.8029/full/

This essay is part of a series of stories on math-related topics, published in Cantor’s Paradise, a weekly Medium publication. Thank you for reading!

## Infinite Geometric Series

An infinite geometric series is the sum of an infinite geometric sequence. This series would have no last term. The general form of the infinite geometric series is a_1 + a_1 r + a_1 r^2 + a_1 r^3 + ⋯, where a_1 is the first term and r is the common ratio.

We can find the sum of any finite geometric series. But in the case of an infinite geometric series, when the common ratio is greater than one, the terms in the sequence will get larger and larger, and if you add the larger numbers, you won't get a final answer. The only possible answer would be infinity. So, we don't deal with a common ratio greater than one for an infinite geometric series.

If the common ratio r lies between −1 and 1, we can have the sum of an infinite geometric series. That is, the sum exists for |r| < 1.

The sum S of an infinite geometric series with −1 < r < 1 is given by the formula

S = a_1 / (1 − r)

An infinite series that has a sum is called a convergent series, and the sum of its first n terms, S_n, is called a partial sum of the series.

You can use sigma notation to represent an infinite series.

For example, ∑_{n=1}^{∞} 10(1/2)^{n−1} is an infinite series. The infinity symbol placed above the sigma notation indicates that the series is infinite.

To find the sum of the above infinite geometric series, first check that the sum exists by using the value of r: here r = 1/2, so |r| < 1 and the sum exists, and it equals 10/(1 − 1/2) = 20.
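For this particular series the check and the sum take only a couple of lines. A small Python sketch (values taken from the example above):

```python
a1, r = 10.0, 0.5
assert abs(r) < 1              # the sum exists

S = a1 / (1 - r)               # S = 10 / (1 - 1/2) = 20
assert S == 20.0

# the partial sums approach S
partial = sum(a1 * r**(n - 1) for n in range(1, 51))
assert abs(partial - S) < 1e-9
```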

Some people hope that Fibonacci numbers will provide an edge in picking lottery numbers or bets in gambling. The truth is that the outcomes of games of chance are determined by random outcomes and have no special connection to Fibonacci numbers.

There are, however, betting systems used to manage the way bets are placed, and the Fibonacci system based on the Fibonacci sequence is a variation on the Martingale progression. In this system, often used for casino and online roulette, the pattern of bets placed follows a Fibonacci progression: i.e., each wager should be the sum of the previous two wagers until a win is made. If a number wins, the bet goes back two numbers in the sequence because their sum was equal to the previous bet.

In the Fibonacci system the bets stay lower than in a Martingale progression, which doubles up every time. The downside is that in the Fibonacci roulette system the bet does not cover all of the losses in a bad streak.

An important caution: betting systems do not alter the fundamental odds of a game, which are always in favor of the casino or the lottery. They may just be useful in making the placing of bets more methodical.
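As a sketch of the mechanics only (the bookkeeping, not an endorsement; the function and rule encoding are my own reading of the description above), the wager sequence might look like this in Python:

```python
def fibonacci_bets(outcomes):
    """Track the wagers for the Fibonacci betting system described above:
    move one step forward in the Fibonacci sequence after a loss,
    two steps back after a win. Returns the list of wagers placed."""
    fib = [1, 1]
    idx = 0
    wagers = []
    for won in outcomes:
        while idx >= len(fib) - 1:          # extend the sequence as needed
            fib.append(fib[-1] + fib[-2])
        wagers.append(fib[idx])
        if won:
            idx = max(idx - 2, 0)           # step back two after a win
        else:
            idx += 1                        # step forward after a loss
    return wagers

# three losses walk up the sequence 1, 1, 2; a win then steps back two
assert fibonacci_bets([False, False, False, True, True]) == [1, 1, 2, 3, 1]
```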