
Chapter 2

Difference Equations


In this chapter we see an application of diagonalization to solving difference equations. We will develop the theory we need by working out an example. The material in this chapter comes from Strang's Chapter 6.2.

2.1 Fibonacci sequence

You might have heard of the Fibonacci sequence $0, 1, 1, 2, 3, 5, 8, \ldots$ defined recursively as follows: if $F_k$ denotes the $k$th Fibonacci number, where $k = 0, 1, 2, 3, \ldots$, then

\[ F_0 = 0, \quad F_1 = 1, \quad F_{k+2} = F_{k+1} + F_k. \]

This recursive definition allows us to enumerate any Fibonacci number we want:

\[ F_0 = 0,\ F_1 = 1,\ F_2 = 1,\ F_3 = 2,\ F_4 = 3,\ F_5 = 5,\ F_6 = 8,\ F_7 = 13,\ F_8 = 21,\ F_9 = 34,\ F_{10} = 55, \ldots \]

The numbers get large relatively quickly and it becomes cumbersome to use the recursive definition to find a large Fibonacci number like $F_{100}$. However, linear algebra makes this easy as we now see.
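For hand computation the recursion quickly becomes tedious; on a computer, though, it gives us an easy reference value to check our linear-algebra answers against. Here is a minimal Python sketch (an added illustration, not from Strang) that iterates the recurrence with exact integer arithmetic.

```python
# Iterate the recurrence F_{k+2} = F_{k+1} + F_k starting from F_0 = 0, F_1 = 1.
def fibonacci(n):
    """Return the list [F_0, F_1, ..., F_n] using the recursive definition."""
    fibs = [0, 1]
    for _ in range(n - 1):
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[: n + 1]

print(fibonacci(10))       # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print(fibonacci(100)[-1])  # F_100 = 354224848179261915075 (exact Python integers)
```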

Define $w_k := \begin{pmatrix} F_{k+1} \\ F_k \end{pmatrix}$, which means that $w_{k+1} = \begin{pmatrix} F_{k+2} \\ F_{k+1} \end{pmatrix}$. Check that $w_k$ and $w_{k+1}$ can be related via multiplication by a matrix $A$. Indeed, if $A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$, then $Aw_k = w_{k+1}$. Let's check:

\[
Aw_k = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} w_k
     = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} F_{k+1} \\ F_k \end{pmatrix}
     = \begin{pmatrix} F_{k+1} + F_k \\ F_{k+1} \end{pmatrix}
     = \begin{pmatrix} F_{k+2} \\ F_{k+1} \end{pmatrix}
     = w_{k+1}.
\]
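To see the matrix form of the recurrence in action, here is a small numerical check, added as an illustration and assuming NumPy is available: it confirms that multiplying the state vector $w_k$ by $A$ advances it to $w_{k+1}$ for the first few values of $k$.

```python
import numpy as np

A = np.array([[1, 1],
              [1, 0]])

# First few Fibonacci numbers, so we can form the state vectors w_k = (F_{k+1}, F_k).
F = [0, 1, 1, 2, 3, 5, 8, 13]

for k in range(6):
    w_k = np.array([F[k + 1], F[k]])
    w_next = np.array([F[k + 2], F[k + 1]])
    # A w_k should equal w_{k+1}.
    assert np.array_equal(A @ w_k, w_next)

print("A w_k = w_{k+1} verified for k = 0, ..., 5")
```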

Now suppose we start this process with $w_0 = \begin{pmatrix} F_1 \\ F_0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$. We know that $Aw_0 = w_1$, and multiplying both sides of this equation by $A$ we see that

\[ w_2 = Aw_1 = A(Aw_0) = A^2 w_0. \]

Continuing to multiply by $A$ on both sides, we see that $w_k = A^k w_0$ and hence $w_{100} = A^{100} w_0$. Recall that we can "easily" compute $A^{100}$ if $A$ is diagonalizable.

We find the eigenvalues of $A$ by computing the roots of its characteristic polynomial:

\[
\det(A - \lambda I) = \det \begin{pmatrix} 1 - \lambda & 1 \\ 1 & -\lambda \end{pmatrix}
= -\lambda(1 - \lambda) - 1 = \lambda^2 - \lambda - 1 = 0,
\]


which yields the eigenvalues $\lambda_1 = \frac{1+\sqrt{5}}{2} \approx 1.618$ and $\lambda_2 = \frac{1-\sqrt{5}}{2} \approx -0.618$. Now computing eigenvectors, we get that $u_1 = \begin{pmatrix} \lambda_1 \\ 1 \end{pmatrix}$ is an eigenvector of $\lambda_1$ and $u_2 = \begin{pmatrix} \lambda_2 \\ 1 \end{pmatrix}$ is an eigenvector of $\lambda_2$. Since $\{u_1, u_2\}$ is a basis of $\mathbb{R}^2$, we can diagonalize $A$ and write $A = U \Lambda U^{-1}$. This means that

\[ w_k = A^k w_0 = U \Lambda^k U^{-1} w_0. \]
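As a hedge against algebra slips, the eigenvalue computation is easy to confirm numerically. The following sketch (an added illustration, assuming NumPy) compares np.linalg.eig with the formulas $(1 \pm \sqrt{5})/2$; note that NumPy normalizes eigenvectors to unit length, so its columns are scalar multiples of $u_1 = (\lambda_1, 1)^\top$ and $u_2 = (\lambda_2, 1)^\top$.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
lam1 = (1 + np.sqrt(5)) / 2   # ~ 1.618
lam2 = (1 - np.sqrt(5)) / 2   # ~ -0.618

print(eigvals)                                              # [ 1.618... -0.618...]
print(np.allclose(sorted(eigvals), sorted([lam1, lam2])))   # True

# Each column of eigvecs is a unit-length eigenvector; rescaling so the second
# entry equals 1 recovers the vectors u_i = (lambda_i, 1)^T used in the text.
for i in range(2):
    v = eigvecs[:, i] / eigvecs[1, i]
    print(v)   # approximately (lambda_i, 1)
```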

Putting everything together we have that

\[
\begin{pmatrix} F_{101} \\ F_{100} \end{pmatrix} = w_{100} = A^{100} w_0
= U \Lambda^{100} U^{-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix}
= U \begin{pmatrix} \left(\frac{1+\sqrt{5}}{2}\right)^{100} & 0 \\ 0 & \left(\frac{1-\sqrt{5}}{2}\right)^{100} \end{pmatrix} U^{-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
\]

Multiplying everything out we could find $F_{100}$, but this still requires some matrix computations, and computing $\Lambda^{100}$ is not too easy in this example since it requires finding $\left(\frac{1+\sqrt{5}}{2}\right)^{100}$ and $\left(\frac{1-\sqrt{5}}{2}\right)^{100}$.
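Carried out in floating point, the diagonalization route does produce $F_{100}$ up to rounding. This sketch (an added illustration, not Strang's) assembles $U$, $\Lambda^{100}$, and $U^{-1} w_0$ explicitly and compares the result with the exact value obtained from the recurrence.

```python
import numpy as np

lam1 = (1 + np.sqrt(5)) / 2
lam2 = (1 - np.sqrt(5)) / 2

U = np.array([[lam1, lam2],
              [1.0,  1.0]])                  # columns are the eigenvectors u_1, u_2
Lam100 = np.diag([lam1**100, lam2**100])     # Lambda^100
w0 = np.array([1.0, 0.0])                    # w_0 = (F_1, F_0)

w100 = U @ Lam100 @ np.linalg.solve(U, w0)   # U Lambda^100 U^{-1} w_0
F100_float = w100[1]                         # second coordinate is F_100

# Exact value from the recurrence, for comparison.
a, b = 0, 1
for _ in range(100):
    a, b = b, a + b
print(F100_float)   # ~3.5422e+20 (floating point, accurate to ~15 digits)
print(a)            # 354224848179261915075 (exact)
```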

We will now see that there is an easier way to find $F_{100}$. In fact, there is a closed-form expression for the $k$th Fibonacci number $F_k$ purely in terms of the eigenvalues and eigenvectors of $A$.

2.2 The general setting

Before we start, we point out an important interpretation of matrix-vector multiplication. These are very useful facts that we will need over and over again in this class. Please test these facts on some examples and make sure you understand these interpretations very very well.

Matrix-vector multiplication as a linear combination of columns/rows

Let $A \in \mathbb{R}^{m \times n}$ with columns $a_1, \ldots, a_n$ and rows $b_1^\top, b_2^\top, \ldots, b_m^\top$, where $a_i \in \mathbb{R}^m$ and $b_j \in \mathbb{R}^n$.

• The vector $Ax$ is the linear combination of the columns of $A$ by $x_1, x_2, \ldots, x_n$. It is a column vector whose $i$th entry is the dot product of the $i$th row of $A$ with $x$. Mathematically, if $A = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix} \in \mathbb{R}^{m \times n}$ and $x = (x_1, x_2, \ldots, x_n)^\top$, then $Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n$ and $(Ax)_i = b_i^\top x$.

• The vector $y^\top A$ is the linear combination of the rows of $A$ by $y_1, \ldots, y_m$. It is a row vector whose $j$th entry is the dot product of $y$ and the $j$th column of $A$. Mathematically, if $y = (y_1, \ldots, y_m)^\top$ and $A = \begin{pmatrix} b_1^\top \\ b_2^\top \\ \vdots \\ b_m^\top \end{pmatrix} \in \mathbb{R}^{m \times n}$, then $y^\top A = y_1 b_1^\top + y_2 b_2^\top + \cdots + y_m b_m^\top$ and $(y^\top A)_j = y^\top a_j$.

Using the above (check!) we get the following two very useful facts about $\mathrm{Col}(A)$, the column space of $A$, which is the span of the columns of $A$, and $\mathrm{Row}(A)$, the row space of $A$, which is the span of the rows of $A$:

Column and row space of a matrix

\[ \mathrm{Col}(A) = \{Ax : x \in \mathbb{R}^n\} \subseteq \mathbb{R}^m \quad \text{and} \quad \mathrm{Row}(A) = \{y^\top A : y \in \mathbb{R}^m\} \subseteq \mathbb{R}^n. \]
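These interpretations are easy to test on a small example, as suggested above. The following sketch (an added illustration using NumPy) checks both bullet points and both entrywise formulas on a random $3 \times 4$ matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 4))   # a small 3x4 example matrix
x = rng.integers(-3, 4, size=4)
y = rng.integers(-3, 4, size=3)

# Ax as a linear combination of the columns of A with coefficients x_1, ..., x_n.
combo_cols = sum(x[j] * A[:, j] for j in range(A.shape[1]))
print(np.array_equal(A @ x, combo_cols))      # True

# y^T A as a linear combination of the rows of A with coefficients y_1, ..., y_m.
combo_rows = sum(y[i] * A[i, :] for i in range(A.shape[0]))
print(np.array_equal(y @ A, combo_rows))      # True

# Entrywise: (Ax)_i is the dot product of row i with x; (y^T A)_j of y with column j.
print(all((A @ x)[i] == A[i, :] @ x for i in range(3)))   # True
print(all((y @ A)[j] == y @ A[:, j] for j in range(4)))   # True
```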

Now let's come back to difference equations. Suppose a matrix $A \in \mathbb{R}^{n \times n}$ sends an initial state vector $w_0$ to the $k$th state vector $w_k$ via the relation $A^k w_0 = w_k$. Recall that we got this from the difference equation $Aw_k = w_{k+1}$ that relates two consecutive states of some system whose $k$th state vector is $w_k$. If $A$ is diagonalizable, then $A = U \Lambda U^{-1}$ and $w_k = A^k w_0 = U \Lambda^k U^{-1} w_0$.

Set $c := U^{-1} w_0$. Recall that in the eigenbasis of $\mathbb{R}^n$ consisting of the columns of $U$, the coordinate vector of $w_0$ is $c = U^{-1} w_0$. You can find the vector $c$ by solving the system $Uc = w_0$. Now consider $w_k = A^k w_0$ again, where we substitute $c$ for $U^{-1} w_0$:

\[
w_k = U \Lambda^k U^{-1} w_0 = U \Lambda^k c
= \begin{pmatrix} u_1 & u_2 & \cdots & u_n \end{pmatrix}
\begin{pmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}
= \begin{pmatrix} u_1 & u_2 & \cdots & u_n \end{pmatrix}
\begin{pmatrix} \lambda_1^k c_1 \\ \lambda_2^k c_2 \\ \vdots \\ \lambda_n^k c_n \end{pmatrix}
= c_1 \lambda_1^k u_1 + c_2 \lambda_2^k u_2 + \cdots + c_n \lambda_n^k u_n,
\]

which expresses the $k$th state vector $w_k$ as a linear combination of the eigenvectors $u_1, \ldots, u_n$ of $A$. The scalars involved in the combination come from the eigenvalues of $A$ as well as the vector $c$. We summarize the above calculations in the following proposition.

Proposition 2.2.1. Suppose we have a difference equation $Aw_k = w_{k+1}$ with initial state vector $w_0$ and $A \in \mathbb{R}^{n \times n}$. Then $w_k = A^k w_0$. If $A$ is diagonalizable, then we can explicitly solve for $w_k$ as

\[ w_k = c_1 \lambda_1^k u_1 + c_2 \lambda_2^k u_2 + \cdots + c_n \lambda_n^k u_n = \sum_{i=1}^{n} c_i \lambda_i^k u_i. \]

In the above expression, the vector $c = (c_1, c_2, \ldots, c_n)^\top$ is the solution of the linear system $Uc = w_0$, and the vectors $u_1, u_2, \ldots, u_n$ are independent eigenvectors of $A$ with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$.
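Proposition 2.2.1 translates directly into a few lines of code. The sketch below (an added illustration with a made-up function name, solve_difference_equation) computes $w_k$ from an eigendecomposition of a diagonalizable $A$ and checks the answer against repeated multiplication by $A$.

```python
import numpy as np

def solve_difference_equation(A, w0, k):
    """Return w_k = sum_i c_i * lambda_i**k * u_i, assuming A is diagonalizable."""
    eigvals, U = np.linalg.eig(A)        # columns of U are eigenvectors u_1, ..., u_n
    c = np.linalg.solve(U, w0)           # c solves U c = w_0
    return U @ (c * eigvals**k)          # same as sum_i c_i lambda_i^k u_i

# Check against w_k = A^k w_0 on the Fibonacci matrix.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
w0 = np.array([1.0, 0.0])
k = 20

wk_eig = solve_difference_equation(A, w0, k)
wk_pow = np.linalg.matrix_power(A, k) @ w0
print(wk_eig)                       # approximately (F_21, F_20) = (10946, 6765)
print(np.allclose(wk_eig, wk_pow))  # True
```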

2.3 Back to the Fibonacci sequence

Now let's come back to computing $F_{100}$. Solving $Uc = w_0$, we see that

\[ c = \begin{pmatrix} \frac{1}{\lambda_1 - \lambda_2} \\[4pt] \frac{-1}{\lambda_1 - \lambda_2} \end{pmatrix}. \]

Therefore,

\[
w_k = \underbrace{\frac{1}{\lambda_1 - \lambda_2}}_{c_1} \lambda_1^k \underbrace{\begin{pmatrix} \lambda_1 \\ 1 \end{pmatrix}}_{u_1}
+ \underbrace{\frac{-1}{\lambda_1 - \lambda_2}}_{c_2} \lambda_2^k \underbrace{\begin{pmatrix} \lambda_2 \\ 1 \end{pmatrix}}_{u_2}
= \begin{pmatrix} \dfrac{\lambda_1^{k+1} - \lambda_2^{k+1}}{\lambda_1 - \lambda_2} \\[10pt] \dfrac{\lambda_1^{k} - \lambda_2^{k}}{\lambda_1 - \lambda_2} \end{pmatrix}.
\]

In particular, you can check that

\[
w_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}
= \underbrace{\frac{1}{\lambda_1 - \lambda_2}}_{c_1} \underbrace{\begin{pmatrix} \lambda_1 \\ 1 \end{pmatrix}}_{u_1}
+ \underbrace{\frac{-1}{\lambda_1 - \lambda_2}}_{c_2} \underbrace{\begin{pmatrix} \lambda_2 \\ 1 \end{pmatrix}}_{u_2}.
\]

Since $F_k$ is the second coordinate of $w_k = \begin{pmatrix} F_{k+1} \\ F_k \end{pmatrix}$, we conclude that

\[ F_k = \frac{\lambda_1^k - \lambda_2^k}{\lambda_1 - \lambda_2}. \]
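This closed form is often called Binet's formula. Evaluated in floating point it is only approximate, but because $|\lambda_2| < 1$, rounding the result to the nearest integer recovers $F_k$ exactly for moderate $k$ (double precision eventually runs out of digits for very large $k$). A small added sketch:

```python
import numpy as np

lam1 = (1 + np.sqrt(5)) / 2
lam2 = (1 - np.sqrt(5)) / 2

def fib_closed_form(k):
    """Closed-form F_k = (lam1**k - lam2**k) / (lam1 - lam2), rounded to an integer."""
    return round((lam1**k - lam2**k) / (lam1 - lam2))

# Compare with the recurrence for the first 30 values.
a, b = 0, 1
for k in range(30):
    assert fib_closed_form(k) == a
    a, b = b, a + b
print("closed form matches the recurrence for k = 0, ..., 29")
```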


What happens as $k$ goes to infinity, i.e., what is $\lim_{k \to \infty} w_k = \lim_{k \to \infty} \left( c_1 \lambda_1^k u_1 + c_2 \lambda_2^k u_2 \right)$?

Since $\lambda_2 \approx -0.618 \in (-1, 0)$, we have $\lim_{k \to \infty} \lambda_2^k = 0$. Therefore,

\[
\lim_{k \to \infty} \left( c_1 \lambda_1^k u_1 + c_2 \lambda_2^k u_2 \right)
= \lim_{k \to \infty} c_1 \lambda_1^k u_1
= \left( \lim_{k \to \infty} c_1 \lambda_1^k \right) u_1.
\]

This means that $w_k$ becomes proportional to the first eigenvector $u_1$ in the long run. Looking back at the formula for $F_k$, we also see that in the long run,

\[ F_k \sim \frac{\lambda_1^k}{\lambda_1 - \lambda_2} = \frac{1}{\sqrt{5}} \left( \frac{1+\sqrt{5}}{2} \right)^k. \]
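The approximation kicks in quickly. The following added sketch compares $F_k$ with $\lambda_1^k/\sqrt{5}$ and also prints the ratio $F_{k+1}/F_k$, which approaches the dominant eigenvalue $\lambda_1 \approx 1.618034$.

```python
import numpy as np

lam1 = (1 + np.sqrt(5)) / 2

# Exact Fibonacci numbers from the recurrence.
F = [0, 1]
for _ in range(25):
    F.append(F[-1] + F[-2])

for k in (5, 10, 15, 20, 25):
    approx = lam1**k / np.sqrt(5)
    ratio = F[k + 1] / F[k]
    print(f"k={k:2d}  F_k={F[k]:6d}  lam1^k/sqrt(5)={approx:10.3f}  F_(k+1)/F_k={ratio:.6f}")
# The middle column approaches F_k and the last column approaches lam1.
```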

The key observation is that in this Fibonacci example, the powers $\lambda_2^k$ of the second eigenvalue of $A$ tended to $0$ as $k$ got large, and $w_k$ tends to a multiple of the first eigenvector $u_1$ of $A$. In other words, the long-term behavior of $w_k$ is controlled by the eigenvector corresponding to the eigenvalue of largest magnitude (called the dominant eigenvector and eigenvalue). Keep this in mind as we go to the next chapter, where we will see situations in which the dominant eigenvalue and eigenvector are guaranteed to control the long-term behavior of the states.

2.4 Fibonacci numbers and the golden ratio – an aside

Definition 2.4.1. The golden ratio, denoted by the Greek letter $\varphi$, is defined to be $\varphi = \frac{1+\sqrt{5}}{2}$. It is also a root of the quadratic polynomial $x^2 - x - 1$.

The golden ratio came about from the idea of trying to draw the "perfect" rectangle, that is, a rectangle that is the most pleasing to the human eye. It appears everywhere in nature and is intimately related to the Fibonacci sequence. Recall that $\varphi$ is the dominant eigenvalue of the matrix $A$ in the Fibonacci example.

Suppose we were a plant, one that grows upwards and has leaves coming off of its main stem. How would we grow more and more leaves and ensure that the leaf spacing maximized sunlight on the surface of our leaves? We could start with leaf 1, and then rotate halfway around the stem to let leaf 2 grow there. That is, leaf 2 is a rotation of $\frac{1}{2}$ units around the stem from leaf 1. Next, we would want leaf 3 to be positioned so it does not block too much sunlight from leaves 1 and 2. If we drew a picture and thought about it for a bit, we would end up rotating $\frac{3}{5}$ units away from leaf 2. How about leaf 4? We would rotate this one $\frac{5}{8}$ units from leaf 3. Assuming the plant was immortal, we would continue this process indefinitely, with a new spacing for each leaf.

What would happen as we proceed? If we write it out, we see that we obtain a sequence of fractions $\frac{1}{2}, \frac{3}{5}, \frac{5}{8}, \ldots, \frac{F_k}{F_{k+1}}$, and the limit of this sequence as $k$ tends to infinity is

\[ \lim_{k \to \infty} \frac{F_k}{F_{k+1}} = \frac{1}{\varphi}. \]

In other words, $\lim_{k \to \infty} \frac{F_{k+1}}{F_k} = \varphi$, the golden ratio!
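A quick numerical illustration of this limit (an added sketch, not part of the notes): the ratios $F_{k+1}/F_k$ settle down to $\varphi \approx 1.6180$ after only a handful of terms.

```python
# Watch the ratios F_{k+1} / F_k converge to the golden ratio.
phi = (1 + 5 ** 0.5) / 2

a, b = 1, 1            # F_1, F_2
for k in range(1, 21):
    if k % 5 == 0:
        print(f"F_{k+1}/F_{k} = {b / a:.8f}   (phi = {phi:.8f})")
    a, b = b, a + b
```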
