## Section 2.3: Linear Stability Analysis


If you haven't come across Taylor Polynomials before, look at this post I wrote for my first year pure maths class. Look at the posts before and after at the same link for more about them.

OK, we’ve got some first order autonomous differential equation:

ẋ = f(x)

with a fixed point x = x*.

Is there a way that we can tell whether it's stable or unstable?

In fact we've already done that by inspection of the phase portrait. But we can do this in a more formal way. It turns out that we can ask about small fluctuations close to the fixed point.

Let's say that we want to look at some x(t) just a little away from x*. We could write this as:

x(t) = x* + η(t)

where η(t) (the Greek letter eta, pronounced eat-a) is very small.

We take derivatives of both sides of the above equation, and because x* is a constant we have that (try writing it out if you’re confused):

ẋ = η̇
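Writing out the suggested check: since x* is a constant, its time derivative vanishes, so

```latex
\dot{x} \;=\; \frac{d}{dt}\bigl(x^{*} + \eta(t)\bigr)
       \;=\; \underbrace{\frac{dx^{*}}{dt}}_{=\,0} + \dot{\eta}(t)
       \;=\; \dot{\eta}.
```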

Now what can we say about f(x)? Well, let’s sub in x = x* + η, where we have temporarily removed that functional dependence on t, but we know that it’s there.

What can we say about:

f(x* + η)

Well, we have said that η is small, so presumably we can say that, as long as f is continuous,

f(x* + η) ≈ f(x*)

We say that this is a zeroth order approximation. This is just a constant, and we know that in reality f(x* + η) isn’t a constant because η depends on time, so what would the next approximation be? Well, for a small shift η, how much will f shift? It’s going to be related to the gradient of f, along with how much we are moving in η, so we write:

f(x* + η) = f(x*) + ηf'(x*) + O(η²)

The O(η²) is read as “terms of order η²” and is small compared to the second term so long as η is small. Essentially this means that these are terms that we are going to ignore. We have approximated the function at this point by a linear function in η (we threw away higher order terms).

Let's say that we are looking at a point which is η = 0.1 away from a fixed point (in some units). Then η² = 0.01, which is smaller than η, and as long as the first derivative is not very small there compared to the second derivative (see later), we can ignore the O(η²) terms.
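As a quick numerical sanity check of this (using f(x) = sin(x) as a stand-in example function, not anything from the system above), the neglected remainder really is of size η²:

```python
import math

# Illustrative example function and its derivative (not from the text)
f = math.sin
fp = math.cos          # f'(x) = cos(x)

c = 1.0                # expansion point
a = 0.1                # small shift, like eta = 0.1 above

exact = f(c + a)               # true value
linear = f(c) + a * fp(c)      # zeroth + first order approximation
error = abs(exact - linear)    # the neglected O(a^2) remainder

print(error)    # roughly (1/2)|f''(c)| a^2, i.e. a few thousandths
print(a ** 2)   # 0.01 -- the error sits at or below this scale
```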

Let’s understand this visually for a general function f about some point, let’s call it c. The equivalent expression to the above would be:

f(c + a) = f(c) + af'(c) + O(a²)

In[]:=

Show[Plot[x^2,{x,0.5,1.4},PlotStyle->Red],Plot[2x-1,{x,1,1.2},PlotStyle->Blue],Plot[2x-1,{x,0.8,1.4},PlotStyle->{Dashed,Blue}],Graphics[Line[#]]&/@{{{1,1},{0.5,1}},{{1.2,0},{1.2,1+0.22}},{{1.2,1+0.22},{0.5,1+0.22}},{{1,0},{1,1}}},Graphics[Text[Style[#[[1]],15],#[[2]]]]&/@{{"c",{1,-0.2}},{"c+a",{1.2,-0.2}},{" f(c)",{0.4,1}},{" f(c)+f'(c)a",{0.35,1.4}}},AxesOrigin->{0.5,0},PlotRange->{{0.2,1.4},{-0.3,2}},AxesLabel->{Style["x",14],Style["f(x)",14]}]

Out[]=

We see that the value of the function at c + a is very well approximated by the value of the function at c, plus a times the gradient of the function at c. Provided a isn’t very large, this is a good approximation.

This helps explain this:

f(x* + η) = f(x*) + ηf'(x*) + O(η²)

The point is that the first two terms don’t capture it perfectly, but the next term will be of size η², which for small η is even smaller than η itself, so we can ignore it to first approximation.

How on earth does this help us? Well, remember that x* is a fixed point, which means that at that point ẋ(t) = 0 and so f(x*) = 0, so we can write:

η̇ = f(x* + η) = ηf'(x*) + O(η²)

This is carefully approximated to:

η̇ = ηf'(x*)

Keep in the back of your mind when this approximation is valid: it's only true for x close to the fixed point x*, i.e. for small η (remember, the terms we throw away must be smaller than the ones we keep - this isn’t always true).

Make sure you understand and can show this!

Remember f'(x*) is just a constant. It turns out that this is a differential equation that we can solve. It’s actually the same differential equation as that for population growth and radioactive decay, and has solution:

η(t) = η₀ e^(f'(x*)t)
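You can check this solution by substituting it back into the linearised equation:

```latex
\frac{d\eta}{dt}
  = \frac{d}{dt}\Bigl(\eta_{0}\, e^{f'(x^{*})\,t}\Bigr)
  = f'(x^{*})\,\eta_{0}\, e^{f'(x^{*})\,t}
  = f'(x^{*})\,\eta ,
```

exactly as required, and at t = 0 it reduces to η(0) = η₀.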

where η₀ is the value of η at t = 0. So, it tells us what happens if you start off with a small perturbation (η₀) away from x*:

- if f'(x*) > 0, the perturbation will grow exponentially - i.e. you will move further and further away from x = x*

- if f'(x*) < 0, the perturbation will decay to zero exponentially - i.e. you will move closer and closer to x = x*

If, starting close to a fixed point, you move away from it, then that is an unstable fixed point. If you move towards it, then that is a stable fixed point.
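A numerical sketch of both behaviours, using the example system ẋ = x − x³ (an illustrative choice, not from the text): it has f'(0) = 1 > 0 at the fixed point x* = 0, and f'(1) = −2 < 0 at the fixed point x* = 1.

```python
def f(x):
    return x - x**3  # example system: fixed points at x = 0 and x = ±1

def evolve(x0, t_end=5.0, dt=1e-3):
    # Crude forward-Euler integration of x' = f(x)
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * f(x)
    return x

near_unstable = evolve(0.0 + 0.01)  # f'(0) = 1 > 0: perturbation grows
near_stable = evolve(1.0 + 0.01)    # f'(1) = -2 < 0: perturbation decays

print(near_unstable)  # has moved far from 0 (and settled near the stable point at 1)
print(near_stable)    # has been pulled back towards 1
```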

We have now proved what we said previously. We said back then that:

Given a differential equation:

ẋ = f(x)

∘ The fixed points, labeled x*, are given by f(x*) = 0.

∘ The particle will be moving to the right whenever f(x) > 0 and moving to the left whenever f(x) < 0.

∘ A fixed point is stable if f(x) goes from positive to negative through zero as you increase x.

∘ A fixed point is unstable if f(x) goes from negative to positive through zero as you increase x.

Now, in the new language:

∘ If f'(x*) > 0 then x* is an unstable fixed point.

∘ If f'(x*) < 0 then x* is a stable fixed point.
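This rule is mechanical enough to sketch in code. Here `classify` is a hypothetical helper (not from the text) that estimates f'(x*) with a finite difference and reads off the stability:

```python
def classify(f, x_star, h=1e-6):
    # Estimate f'(x*) with a central finite difference and
    # classify the fixed point of x' = f(x) by its sign.
    slope = (f(x_star + h) - f(x_star - h)) / (2 * h)
    if slope > 0:
        return "unstable"
    if slope < 0:
        return "stable"
    return "inconclusive: f'(x*) = 0, look at higher derivatives"

# Example: x' = x - x^3 has fixed points at x = 0 and x = ±1
print(classify(lambda x: x - x**3, 0.0))  # unstable (f'(0) = 1)
print(classify(lambda x: x - x**3, 1.0))  # stable (f'(1) = -2)
```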

Not only does the sign of the derivative tell us whether a fixed point is stable or not, but how large it is tells us how stable or unstable. If the derivative is, for instance, large and positive then not only is it an unstable fixed point, but, because of the exponential solution, we will very quickly move away from the point. The timescale over which the perturbation size grows (or, for a stable point, decays) by a factor of e is given by 1/|f'(x*)|. (Note that f'(x*) is in units of inverse time, since f(x) = dx/dt has units of x per time and the derivative divides by another factor of x, therefore 1/|f'(x*)| being a timescale makes sense.)
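A one-line check of the timescale claim, on the example linear system ẋ = −2x (so f'(0) = −2, giving a timescale of 1/2 in these units):

```python
import math

rate = -2.0                  # f'(x*) for the example system x' = -2x
eta0 = 0.1                   # initial perturbation size
timescale = 1.0 / abs(rate)  # = 0.5: the 1/|f'(x*)| timescale

# Exact solution of the linearised equation, evaluated at t = timescale
eta = eta0 * math.exp(rate * timescale)

print(eta / eta0)    # ratio of perturbation sizes after one timescale
print(1 / math.e)    # matches: decayed to 1/e of its original size
```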

We intuitively knew this already. If we have a function which has a very small slope as it passes through the fixed point (let’s say with a negative gradient) then we would move towards it very, very slowly. If the slope were very large, then we would move towards it much faster.

But hang on, I hear you say! What about if f'(x*) = 0? Well, it turns out that in that case we have to look at f''(x*) to tell if it's stable or unstable. When might that be the case? Well, how about this differential equation?

ẋ = x²

Then it’s clear that x* = 0 is a fixed point, but what kind is it? The derivative of the function (f(x) = x²) is zero at x = 0, so we can’t use that rule.

Let's look at the phase portrait:

In[]:=

Plot[x^2,{x,-2,2},AxesLabel->{Style["x",14],Style["ẋ",14]},AspectRatio->1]

Out[]=

What we see though is that if you start off just to the left of the fixed point you will move towards it (stable fixed point), but if you start just to the right of it you will move away from it (unstable fixed point).

So which is it? Well, it's called a half stable fixed point. The arrows and the fixed point symbol would be drawn like this:

Which shows that it’s stable from the left and unstable from the right.
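A quick numerical sketch of this half-stable behaviour for ẋ = x², integrating from each side of the fixed point (forward Euler, illustrative parameters):

```python
def f(x):
    return x**2  # half-stable fixed point at x = 0

def evolve(x0, t_end=3.0, dt=1e-3):
    # Crude forward-Euler integration of x' = x^2
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * f(x)
    return x

from_left = evolve(-0.1)   # x' > 0 here: drifts right, towards 0
from_right = evolve(0.1)   # x' > 0 here too: drifts right, away from 0

print(from_left)   # closer to 0 than it started (stable from the left)
print(from_right)  # further from 0 than it started (unstable from the right)
```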

If the parabola was the other way around (ẋ = -x²), we would have a fixed point which is stable from the right and unstable from the left.

Make sure that you can understand the arrows, and the stability:

There is one final, trivial case: the case where ẋ = 0, in which every point is a fixed point, and a small perturbation neither grows nor decays.

OK, so that gives us the full classification of fixed points and their stability for dynamical systems on the line. We are almost done with this section.

Next we will look at existence and uniqueness, a subtle, but important little topic.