In this chapter we study linear operators \(T : V \to V\) on a finite-dimensional vector space \(V\). For example, quantum mechanics is largely based upon the study of eigenvalues and eigenvectors of operators on finite- and infinite-dimensional vector spaces.

## Chapter 7: Eigenvalues and Eigenvectors

It was shown in Chapter 6 that provided the eigenvalues and eigenvectors of a system can be found, it is possible to transform the coordinates of the system from local or global coordinates to coordinates consisting of normal or 'principal' modes.

Depending on the damping, the eigenvalues and eigenvectors of a system can be real or complex, as discussed in Chapter 6. However, real eigenvalues and eigenvectors, derived from the undamped equations of motion, can be used in most practical cases, and will be assumed here, unless stated otherwise.

In Example 6.3, we used a very basic 'hand' method to demonstrate the derivation of the eigenvalues and eigenvectors of a simple 2-DOF system: solving the characteristic equation for its roots, and substituting these back into the equations to obtain the eigenvectors. In this chapter, we look at methods that can be used with larger systems.
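For larger systems the 'hand' method of solving the characteristic equation becomes impractical, and numerical routines are used instead. As a minimal sketch (the matrix values here are illustrative, not taken from any example in the text), NumPy's `eig` returns all eigenvalues and eigenvectors at once:

```python
import numpy as np

# Illustrative 2-DOF-style system matrix (hypothetical values)
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# eig returns the eigenvalues and a matrix whose COLUMNS are the eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    v = eigenvectors[:, i]
    # Each pair satisfies A v = lambda v
    print(lam, v, np.allclose(A @ v, lam * v))
```

For this symmetric matrix the two eigenvalues come out real (3 and 1, in some order), as expected for an undamped system.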

First, **choose the matrix size** you want to enter. You will see a randomly generated matrix to give you an idea of what your output will look like.

Then, **enter your own numbers** in the boxes that appear. You can enter **integers or decimals**. (More advanced entry and output is in the works, but not available yet.)

On a keyboard, you can use the tab key to easily move to the next matrix entry box.

Click **calculate** when ready.

The **output** will involve real and/or complex eigenvalues and eigenvector entries.

You can change the **precision** (number of significant digits) of the answers, using the pull-down menu.

### Eigenvalues and eigenvectors calculator

**NOTE 1:** The eigenvector output you see here may not be the same as what you obtain on paper. Remember, any scalar multiple of an eigenvector is still an eigenvector. The convention used here is that eigenvectors are scaled so the final entry is 1.
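This scaling convention is easy to reproduce numerically. A minimal sketch (with an arbitrarily chosen matrix; note the convention fails if an eigenvector's final entry happens to be 0):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # hypothetical matrix; its eigenvalues are 5 and 2
w, V = np.linalg.eig(A)

# Divide each column (eigenvector) by its final entry so that entry becomes 1.
# This breaks down if an eigenvector's final entry is 0.
V_scaled = V / V[-1, :]
print(V_scaled)   # every bottom entry is now 1; the columns are still eigenvectors
```

Rescaling a column does not change the fact that it satisfies $A\mathbf{v} = \lambda\mathbf{v}$, which is why calculators are free to pick any such normalization.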

**NOTE 2:** The larger matrices involve a lot of calculation, so expect the answer to take a bit longer.

**NOTE 3:** Eigenvectors are usually column vectors, but the larger ones would take up a lot of vertical space, so they are written horizontally, with a "T" superscript (indicating the **transpose** of the vector).

**NOTE 4:** When there are complex eigenvalues, there's always an **even number** of them, and they always appear as a **complex conjugate pair**, e.g. 3 + 5*i* and 3 &minus; 5*i*.
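For a matrix with real entries this pairing can be checked directly. A 90-degree rotation matrix (a standard example, not one from the calculator) has eigenvalues *i* and &minus;*i*:

```python
import numpy as np

# 90-degree rotation: no real vector keeps its direction, so the
# eigenvalues must be complex -- and they form a conjugate pair.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
w, _ = np.linalg.eig(A)
print(w)   # i and -i, in some order
```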

**NOTE 5:** When there are eigenvectors with complex elements, there's always an **even number** of such eigenvectors, and the corresponding elements always appear as **complex conjugate pairs**. (It may take some manipulating by multiplying each element by a complex number to see this is so in some cases.)

## Eigenvalues and Eigenvectors

We review here the basics of computing eigenvalues and eigenvectors. Eigenvalues and eigenvectors play a prominent role in the study of ordinary differential equations and in many applications in the physical sciences. Expect to see them come up in a variety of contexts!

#### Definitions

Let $A$ be an $n \times n$ matrix. The number $\lambda$ is an **eigenvalue** of $A$ if there exists a non-zero vector $\mathbf{v}$ such that $A\mathbf{v} = \lambda \mathbf{v}$. In this case, vector $\mathbf{v}$ is called an **eigenvector** of $A$ corresponding to $\lambda$.
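The definition can be checked directly in code. A small sketch with a hand-picked matrix and vector (chosen for illustration; they do not come from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])   # candidate eigenvector
lam = 3.0                  # candidate eigenvalue

# v is an eigenvector with eigenvalue lam exactly when A v equals lam * v
print(np.allclose(A @ v, lam * v))   # True: A v = [3, 3] = 3 v
```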

#### Computing Eigenvalues and Eigenvectors

We can rewrite the condition $A\mathbf{v} = \lambda \mathbf{v}$ as $(A - \lambda I)\mathbf{v} = \mathbf{0},$ where $I$ is the $n \times n$ identity matrix. Now, in order for a *non-zero* vector $\mathbf{v}$ to satisfy this equation, $A - \lambda I$ must *not* be invertible.

Otherwise, if $A - \lambda I$ has an inverse, then $\mathbf{v} = (A - \lambda I)^{-1}\mathbf{0} = \mathbf{0}$, contradicting the requirement that an eigenvector be non-zero. Thus $A - \lambda I$ is singular, which means $\det(A - \lambda I) = 0$. The polynomial $p(\lambda) = \det(A - \lambda I)$ is called the **characteristic polynomial** of $A$. The eigenvalues of $A$ are simply the roots of the characteristic polynomial of $A$.
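Numerically, `numpy.poly` returns the coefficients of a matrix's characteristic polynomial and `numpy.roots` finds its roots. A sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[ 2.0,  7.0],
              [-1.0, -6.0]])

# Coefficients of det(lambda*I - A) = lambda^2 + 4*lambda - 5
coeffs = np.poly(A)
print(coeffs)              # [ 1.  4. -5.]

# The eigenvalues are the roots of the characteristic polynomial: 1 and -5
eigenvalues = np.roots(coeffs)
```

In practice libraries compute eigenvalues without forming the polynomial (it is numerically ill-conditioned for large matrices), but the small example makes the definition concrete.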


###### Example

The matrix $A$ of this example has eigenvalues $\lambda_1 = 3$ and $\lambda_2 = -2$. The eigenvectors found for $\lambda_1 = 3$ form a **basis** of the eigenspace corresponding to $\lambda_1 = 3$.

Repeating this process with $\lambda_2 = -2$, we find a basis of the eigenspace corresponding to $\lambda_2 = -2$.

In the following example, we see a two-dimensional eigenspace.

###### Example

Let $A$ be a $3 \times 3$ matrix with eigenvalues $\lambda_1 = 1$ and $\lambda_2 = -3$.

Letting $v_3=t$, we find from the second equation that $v_1=-2t$, and then $v_2=-t$. All eigenvectors corresponding to $\lambda_1=1$ are multiples of $\begin{bmatrix} -2 & -1 & 1 \end{bmatrix}^T$.

Eigenvectors corresponding to $\lambda_2=-3$ must satisfy

The equations here are just multiples of each other! If we let $v_3 = t$ and $v_2 = s$, then $v_1 = -s - 2t$. Eigenvectors corresponding to $\lambda_2=-3$ have the form $s \begin{bmatrix} -1 & 1 & 0 \end{bmatrix}^T + t \begin{bmatrix} -2 & 0 & 1 \end{bmatrix}^T$.

#### Notes

- Eigenvalues and eigenvectors can be complex-valued as well as real-valued.
- The dimension of the eigenspace corresponding to an eigenvalue is less than or equal to the multiplicity of that eigenvalue.
- The techniques used here are practical for $2 \times 2$ and $3 \times 3$ matrices. Eigenvalues and eigenvectors of larger matrices are often found using other techniques, such as iterative methods.
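One such iterative method is power iteration, which repeatedly multiplies a starting vector by the matrix until it aligns with the dominant eigenvector. A minimal sketch, not a production implementation; it assumes the eigenvalue of largest magnitude is unique:

```python
import numpy as np

def power_iteration(A, iters=200):
    """Estimate the dominant eigenpair of A by repeated multiplication."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v = v / np.linalg.norm(v)   # renormalize to avoid overflow
    lam = v @ A @ v                 # Rayleigh quotient estimate of the eigenvalue
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)
print(lam)   # close to 3, the dominant eigenvalue of this matrix
```

Production solvers (QR iteration, Lanczos, Arnoldi) are far more sophisticated, but they share this core idea of iterating matrix-vector products.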

#### Key Concepts

Let $A$ be an $n \times n$ matrix. The eigenvalues of $A$ are the roots of the characteristic polynomial $p(\lambda) = \det(A - \lambda I)$. For each eigenvalue $\lambda$, the eigenvectors $\mathbf{v}$ satisfying $A\mathbf{v} = \lambda\mathbf{v}$, together with the zero vector, form the **eigenspace** of $A$ corresponding to $\lambda$.

## 2 Answers

You don't need to find the characteristic polynomial of $M$ (or indeed the matrix $M$ at all) in order to find the eigenvalues and eigenvectors of $T$. You can work directly from the definition. If $\lambda$ is an eigenvalue of $T$ with associated eigenvector $A$, then by definition $A^T = \lambda A$. Taking transposes of both sides gives $A = \lambda A^T$. Substituting the previous equation into this one, we obtain $A = \lambda^2 A$. Assuming $A$ is nonzero, which is required for any eigenvector, we conclude that the only possible eigenvalues are $\lambda = 1$ and $\lambda = -1$.

Now consider the two cases.

**Case $\lambda = 1$.** In this case, the first equation becomes $A^T = A$, so $A$ is an associated eigenvector if and only if it is nonzero and symmetric, i.e. of the form $A = \begin{bmatrix} a & b \\ b & c \end{bmatrix}$.

Note that there are three degrees of freedom (the values of $a$ , $b$ , and $c$ ), so this eigenspace has geometric multiplicity $3$ .

**Case $\lambda = -1$.** In this case, the first equation becomes $A^T = -A$, so $A$ is an associated eigenvector if and only if it is nonzero and antisymmetric, i.e. of the form $A = \begin{bmatrix} 0 & d \\ -d & 0 \end{bmatrix}$.

Note that there is one degree of freedom (the value of $d$ ), so this eigenspace has geometric multiplicity $1$ .
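The two cases are easy to verify numerically; a quick sketch with one symmetric and one antisymmetric $2 \times 2$ matrix (values chosen arbitrarily):

```python
import numpy as np

# T(A) = A^T. Symmetric matrices are eigenvectors with eigenvalue +1,
# antisymmetric matrices are eigenvectors with eigenvalue -1.
S = np.array([[1.0, 2.0],
              [2.0, 3.0]])    # symmetric: S^T = +1 * S
K = np.array([[0.0, 5.0],
              [-5.0, 0.0]])   # antisymmetric: K^T = -1 * K

print(np.array_equal(S.T, S))    # True
print(np.array_equal(K.T, -K))   # True
```

The geometric multiplicities 3 and 1 add up to 4, the dimension of the space of $2 \times 2$ matrices, which is consistent with $T$ being diagonalizable.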

## Chapter 7: Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors occur frequently in engineering analysis. Consider, for example, that the variables of interest in the analysis of a linear system are *x*_{1}, *x*_{2}, and *x*_{3} and that they are related by three linear simultaneous differential equations with constant coefficients:

These may be solved for the derivatives

and then put into matrix form

The foregoing may be written, with **Ẋ** indicating the derivative of **X** with respect to time, as

The solution to the system of differential equations begins with the determination of the so-called *complementary function*. The procedure is to make the set of equations homogeneous and then, knowing that exponential solutions exist, assume that the complementary function is in the form **x** = **C**e^{λt} where **C** is a vector of arbitrary constants. Thus in

take **X**_{c} = **C**e^{λt} where **C** is a 3 × 1 column vector of arbitrary constants.

Then, with **Ẋ**_{c} = λ**C**e^{λt}, it is observed that

which is in the form of eq (7.1) and where the values of the λ's must be determined.

The foregoing discussion describes what is called the *eigenvalue* or *characteristic value problem.* It occurs frequently in engineering analysis in all disciplines and it does not derive exclusively from a set of differential equations.

## Invertibility and Eigenvalues¶

So far we haven’t probed what it means for a matrix to have an eigenvalue of 0.

This happens if and only if the equation $A\mathbf{x} = 0\mathbf{x}$ has a nontrivial solution.

But that equation is equivalent to $A\mathbf{x} = \mathbf{0}$, which has a nontrivial solution if and only if $A$ is not invertible. Hence:

0 is an eigenvalue of $A$ if and only if $A$ is not invertible.

This draws an important connection between invertibility and zero eigenvalues.

So we have **yet another** addition to the Invertible Matrix Theorem!
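A quick numerical check of this connection, with an arbitrarily chosen singular matrix:

```python
import numpy as np

# Singular matrix: the second row is twice the first, so det(A) = 0
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

w, _ = np.linalg.eig(A)
print(np.linalg.det(A))   # 0 (up to rounding)
print(w)                  # one eigenvalue is 0, the other is 5
```

Conversely, an invertible matrix never has 0 as an eigenvalue, since $A\mathbf{x} = \mathbf{0}$ then forces $\mathbf{x} = \mathbf{0}$.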

## 7: Eigenvalues and Eigenvectors

If you get nothing else out of this quick review of linear algebra, you must get this section. Without this section you will not be able to do any of the differential equations work that is in this chapter.

So, let’s start with the following. If we multiply an \(n \times n\) matrix by an \(n \times 1\) vector we will get a new \(n \times 1\) vector back. In other words,

What we want to know is if it is possible for the following to happen. Instead of just getting a brand new vector out of the multiplication is it possible instead to get the following,

In other words, is it possible, at least for certain \(\lambda \) and \(\vec \eta \), to have matrix multiplication be the same as just multiplying the vector by a constant? Of course, we probably wouldn’t be talking about this if the answer was no. So, it is possible for this to happen, however, it won’t happen for just any value of \(\lambda \) or \(\vec \eta \). If we do happen to have a \(\lambda \) and \(\vec \eta \) for which this works (and they will always come in pairs) then we call \(\lambda \) an **eigenvalue** of \(A\) and \(\vec \eta \) an **eigenvector** of \(A\).

So, how do we go about finding the eigenvalues and eigenvectors for a matrix? Well, first notice that if \(\vec \eta = \vec 0\) then \(A\vec \eta = \lambda \vec \eta \) will be true for any value of \(\lambda \), and so we are going to make the assumption that \(\vec \eta \ne \vec 0\). With that out of the way, let’s rewrite the equation a little,

\[A\vec \eta - \lambda \vec \eta = \vec 0\hspace{0.25in} \Rightarrow \hspace{0.25in}\left( {A - \lambda I} \right)\vec \eta = \vec 0\]

Notice that before we factored out the \(\vec \eta \) we added in the appropriately sized identity matrix. This is equivalent to multiplying things by a one and so doesn’t change the value of anything. We needed to do this because without it we would have had the difference of a matrix, \(A\), and a constant, \(\lambda \), and this can’t be done. We now have the difference of two matrices of the same size which can be done.

So, with this rewrite we see that

\[\left( {A - \lambda I} \right)\vec \eta = \vec 0\]

is equivalent to \(A\vec \eta = \lambda \vec \eta \). In order to find the eigenvectors for a matrix we will need to solve a homogeneous system of equations, and such a system has either exactly one solution, \(\vec \eta = \vec 0\), or infinitely many solutions.

Knowing this will allow us to find the eigenvalues for a matrix. Recall from this fact that we will get the second case only if the matrix in the system is singular. Therefore, we will need to determine the values of \(\lambda \) for which we get,

\[\det \left( {A - \lambda I} \right) = 0\]

Once we have the eigenvalues we can then go back and determine the eigenvectors for each eigenvalue. Let’s take a look at a couple of quick facts about eigenvalues and eigenvectors.

If \(A\) is an \(n \times n\) matrix then \(\det \left( {A - \lambda I} \right) = 0\) is an \(n^{\text{th}}\) degree polynomial. This polynomial is called the **characteristic polynomial**.

If \({\lambda _1},{\lambda _2}, \ldots ,{\lambda _n}\) is the complete list of eigenvalues for \(A\), including repeats, then:

- If \(\lambda \) occurs only once in the list then we call \(\lambda \) **simple**.
- If \(\lambda \) occurs \(k > 1\) times in the list then we say that \(\lambda \) has **multiplicity** \(k\).
- An eigenvalue of multiplicity \(k\) will have anywhere from 1 to \(k\) linearly independent eigenvectors.

The usefulness of these facts will become apparent when we get back into differential equations since in that work we will want linearly independent solutions.

Let’s work a couple of examples now to see how we actually go about finding eigenvalues and eigenvectors.

The first thing that we need to do is find the eigenvalues. That means we need the following matrix,

In particular we need to determine where the determinant of this matrix is zero.

So, it looks like we will have two simple eigenvalues for this matrix.

To find the eigenvectors we simply plug each eigenvalue into \(\left( {A - \lambda I} \right)\vec \eta = \vec 0\) and solve. So, let’s do that.

\({\lambda _1}\):

In this case we need to solve the following system.

Recall that officially to solve this system we use the following augmented matrix.

Upon reducing down we see that we get a single equation

that will yield an infinite number of solutions. This is expected behavior. Recall that we picked the eigenvalues so that the matrix would be singular and so we would get infinitely many solutions.

Notice as well that we could have identified this from the original system. This won’t always be the case, but in the \(2 \times 2\) case we can see from the system that one row will be a multiple of the other and so we will get infinite solutions. From this point on we won’t actually be solving systems in these cases. We will just go straight to the equation and we can use either of the two rows for this equation.

Now, let’s get back to the eigenvector, since that is what we were after. In general then the eigenvector will be any vector that satisfies the following,

To get this we used the solution to the equation that we found above.

We really don’t want a general eigenvector however, so we will pick a value for one of the components to get a specific eigenvector.

Now we get to do this all over again for the second eigenvalue.

\({\lambda _2}\):

We’ll do much less work with this part than we did with the previous part. We will need to solve the following system.

Clearly both rows are multiples of each other and so we will get infinitely many solutions. We can choose to work with either row. We’ll run with the first to avoid having too many minus signs floating around. Doing this gives us,

Note that we can solve this for either of the two variables. However, with an eye towards working with these later on let’s try to avoid as many fractions as possible. The eigenvector is then,

Note that the two eigenvectors are linearly independent as predicted.
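This independence can be confirmed numerically: stack the eigenvectors as columns and check that the resulting matrix is invertible. A sketch with a hypothetical matrix having two distinct eigenvalues (not the matrix of this example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 0.0]])        # eigenvalues 3 and -2
w, V = np.linalg.eig(A)

# Columns of V are the eigenvectors; a nonzero determinant means
# they are linearly independent.
print(np.linalg.det(V))
```

Eigenvectors belonging to distinct eigenvalues are always linearly independent, so this determinant can never be zero for a matrix with two simple eigenvalues.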

This matrix has fractions in it. That’s life, so don’t get excited about it. First, we need the eigenvalues.

So, it looks like we’ve got an eigenvalue of multiplicity 2 here. Remember that the power on the term will be the multiplicity.

Now, let’s find the eigenvector(s). This one is going to be a little different from the first example. There is only one eigenvalue so let’s do the work for that one. We will need to solve the following system,

So, the rows are multiples of each other. We’ll work with the first equation in this example to find the eigenvector.

Recall in the last example we decided that we wanted to make these as “nice” as possible and so should avoid fractions if we can. Sometimes, as in this case, we simply can’t so we’ll have to deal with it. In this case the eigenvector will be,

Note that by careful choice of the variable in this case we were able to get rid of the fraction that we had. This is something that in general doesn’t much matter if we do or not. However, when we get back to differential equations it will be easier on us if we don’t have any fractions so we will usually try to eliminate them at this step.

Also, in this case we are only going to get a single (linearly independent) eigenvector. We can get other eigenvectors by choosing different values of the free variable, but each of those will just be a constant multiple of this one.

Recall from the fact above that an eigenvalue of multiplicity \(k\) will have anywhere from 1 to \(k\) linearly independent eigenvectors. In this case we got one. For most of the \(2 \times 2\) matrices that we’ll be working with this will be the case, although it doesn’t have to be. We can, on occasion, get two.

So, we’ll start with the eigenvalues.

This doesn’t factor, so upon using the quadratic formula we arrive at,

In this case we get complex eigenvalues which are definitely a fact of life with eigenvalue/eigenvector problems so get used to them.

Finding eigenvectors for complex eigenvalues is identical to the previous two examples, but it will be somewhat messier. So, let’s do that.

\({\lambda _1}\):

The system that we need to solve this time is

Now, it’s not super clear that the rows are multiples of each other, but they are. In this case we have,

This is not something that you need to worry about, we just wanted to make the point. For the work that we’ll be doing later on with differential equations we will just assume that we’ve done everything correctly and we’ve got two rows that are multiples of each other. Therefore, all that we need to do here is pick one of the rows and work with it.

We’ll work with the second row this time.

Now we can solve for either of the two variables. However, again looking forward to differential equations, we are going to need the “\(i\)” in the numerator, so solve the equation in such a way that this will happen. Doing this gives,

So, the eigenvector in this case is

As with the previous example we choose the value of the variable to clear out the fraction.

Now, the work for the second eigenvector is almost identical and so we’ll not dwell on that too much.

\({\lambda _2}\):

The system that we need to solve here is

Working with the second row again gives,

The eigenvector in this case is

There is a nice fact that we can use to simplify the work when we get complex eigenvalues. We need a bit of terminology first however.

If we start with a complex number,

\[z = a + bi\]

then the **complex conjugate** of \(z\) is

\[\bar z = a - bi\]

To compute the complex conjugate of a complex number we simply change the sign on the term that contains the “\(i\)”. The complex conjugate of a vector is just the conjugate of each of the vector’s components.

We now have the following fact about complex eigenvalues and eigenvectors.

If \(A\) is an \(n \times n\) matrix with only real entries and if \({\lambda _1} = a + bi\) is an eigenvalue with eigenvector \({{\vec \eta }^{\left( 1 \right)}}\), then \({\lambda _2} = a - bi\) is also an eigenvalue and its eigenvector is the conjugate of \({{\vec \eta }^{\left( 1 \right)}}\).

This fact is something that you should feel free to use as you need to in our work.
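The fact is easy to sanity-check in code; a sketch using an arbitrary real matrix with complex eigenvalues:

```python
import numpy as np

A = np.array([[3.0, -2.0],
              [4.0, -1.0]])       # real matrix; eigenvalues are 1 + 2i and 1 - 2i
w, V = np.linalg.eig(A)

lam, v = w[0], V[:, 0]
# The conjugates of an eigenpair of a real matrix form the other eigenpair
print(np.allclose(A @ np.conj(v), np.conj(lam) * np.conj(v)))   # True
```

In practice this means we only need to do the eigenvector computation for one of the two complex eigenvalues; the other comes for free by conjugation.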

Now, we need to work one final eigenvalue/eigenvector problem. To this point we’ve only worked with \(2 \times 2\) matrices and we should work at least one that isn’t \(2 \times 2\). Also, we need to work one in which we get an eigenvalue of multiplicity greater than one that has more than one linearly independent eigenvector.

Despite the fact that this is a \(3 \times 3\) matrix, it still works the same as the \(2 \times 2\) matrices that we’ve been working with. So, start with the eigenvalues

So, we’ve got a simple eigenvalue and an eigenvalue of multiplicity 2. Note that we used the same method of computing the determinant of a \(3 \times 3\) matrix that we used in the previous section. We just didn’t show the work.

Let’s now get the eigenvectors. We’ll start with the simple eigenvector.

This time, unlike the \(2 \times 2\) cases we worked earlier, we actually need to solve the system. So let’s do that.

Going back to equations gives,

So, again we get infinitely many solutions as we should for eigenvectors. The eigenvector is then,

Now, let’s do the other eigenvalue.

Okay, in this case it is clear that all three rows are the same and so there isn’t any reason to actually solve the system since we can clear out the bottom two rows to all zeroes in one step. The equation that we get then is,

So, in this case we get to pick two of the values for free and will still get infinitely many solutions. Here is the general eigenvector for this case,

Notice the restriction this time. Recall that we only require that the eigenvector not be the zero vector. This means that we can allow one or the other of the two variables to be zero, we just can’t allow both of them to be zero at the same time!

What this means for us is that we are going to get two linearly independent eigenvectors this time. Here they are.

Now, when we talked about linearly independent vectors in the last section we only looked at \(n\) vectors each with \(n\) components. We can still talk about linear independence in this case however. Recall back when we did linear independence for functions, we saw that if two functions were linearly dependent then they were multiples of each other. Well, the same thing holds true for vectors. Two vectors will be linearly dependent if they are multiples of each other. In this case there is no way to write one of these eigenvectors as a multiple of the other, and so they are linearly independent.

So, summarizing up, here are the eigenvalues and eigenvectors for this matrix
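The count of independent eigenvectors for a repeated eigenvalue can be read off from the rank of \(A - \lambda I\). A sketch with a hypothetical \(3 \times 3\) matrix (not necessarily the one used above) whose eigenvalue 2 has multiplicity two:

```python
import numpy as np

A = np.array([[4.0, -1.0, 6.0],
              [2.0,  1.0, 6.0],
              [2.0, -1.0, 8.0]])   # eigenvalues: 2, 2, 9

# A - 2I has all three rows equal, so its rank is 1 and its null space
# (the eigenspace for lambda = 2) is 3 - 1 = 2 dimensional.
rank = np.linalg.matrix_rank(A - 2 * np.eye(3))
print(rank)   # 1, so there are two linearly independent eigenvectors
```

The nullity \(n - \operatorname{rank}(A - \lambda I)\) is exactly the number of linearly independent eigenvectors for \(\lambda\).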

Here is the most important definition in this text.

##### Definition

Let \(A\) be an \(n \times n\) matrix. An **eigenvector** of \(A\) is a nonzero vector \(v\) such that \(Av = \lambda v\) for some scalar \(\lambda\); that scalar \(\lambda\) is the corresponding **eigenvalue**.

The German prefix “eigen” roughly translates to “self” or “own”. An eigenvector of \(A\) is a vector that is taken to a multiple of itself by the matrix transformation \(x \mapsto Ax\), which perhaps explains the terminology. On the other hand, “eigen” is often translated as “characteristic”; we may think of an eigenvector as describing an intrinsic, or characteristic, property of \(A\).

Eigenvalues and eigenvectors are only for square matrices.

Eigenvectors are *by definition nonzero*. Eigenvalues may be equal to zero.

We do not consider the zero vector to be an eigenvector: since \(A\mathbf{0} = \lambda \mathbf{0}\) for *every* scalar \(\lambda\), the associated eigenvalue would be undefined.

If someone hands you a matrix \(A\) and a vector \(v\), it is easy to check whether \(v\) is an eigenvector of \(A\): simply multiply \(v\) by \(A\) and see if \(Av\) is a scalar multiple of \(v\). On the other hand, given just the matrix \(A\), it is not obvious at all how to find the eigenvectors. We will learn how to do this in Section 5.2.

##### Example (Verifying eigenvectors)


##### Example (An eigenvector with eigenvalue

An eigenvector of \(A\) is a nonzero vector \(v\) such that \(Av\) and \(v\) are *collinear with the origin*. So, an eigenvector of \(A\) is a nonzero vector such that it and its image \(Av\) lie on the same line through the origin. In this case, \(Av\) is a scalar multiple of \(v\), and the eigenvalue is the scaling factor.

For matrices that arise as the standard matrix of a linear transformation, it is often best to draw a picture, then find the eigenvectors and eigenvalues geometrically by studying which vectors are not moved off of their line. For a transformation that is defined geometrically, it is not necessary even to compute its matrix to find the eigenvectors and eigenvalues.
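For instance, reflection across the line \(y = x\) fixes vectors on that line and flips vectors perpendicular to it, so the eigenvectors can be read off from the picture before doing any algebra:

```python
import numpy as np

# Standard matrix of reflection across the line y = x
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])

on_line = np.array([1.0, 1.0])    # lies on y = x:  unchanged, eigenvalue +1
perp    = np.array([1.0, -1.0])   # perpendicular:  flipped,   eigenvalue -1

print(np.allclose(R @ on_line, on_line))   # True
print(np.allclose(R @ perp, -perp))        # True
```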

## 3 Answers

Anyway, to solve this one, keep in mind what an eigenvector actually is. It is a non-zero vector which, after being multiplied by $A$, becomes a multiple of itself. Geometrically this means that an output vector needs to be **parallel** to its corresponding input vector.

It seems that the diagonal vectors $\pm \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \end{bmatrix}^T$ get mapped to multiples of themselves.

Then $\pm \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \end{bmatrix}^T$ are eigenvectors.