Consider the following assertion: if A is a square matrix, then there is a scalar λ and a vector x such that:

Ax = λx
λ is then known as an eigenvalue (or "characteristic root") of A while x is known as the associated eigenvector (or "characteristic vector") of A. To find the eigenvalues and eigenvectors which solve the system, we can proceed as follows. We can rewrite the system:

Ax - λx = 0, or (A - λI)x = 0
so we now have a homogeneous system. As we know, either the trivial case holds (i.e. x = 0) or the determinant vanishes, i.e. |A - λI| = 0. Consider the latter possibility. The vanishing determinant can be re-expressed as an nth-degree polynomial in λ known as the "characteristic equation". In the 2 × 2 case, where:

A = [ a11  a12 ]
    [ a21  a22 ]
then:

A - λI = [ a11 - λ   a12     ]
         [ a21       a22 - λ ]
so the characteristic equation is:

|A - λI| = (a11 - λ)(a22 - λ) - a12a21 = 0
or simply:

λ² - (a11 + a22)λ + (a11a22 - a12a21) = 0
which is a simple quadratic equation. Notice that the coefficient attached to λ is merely the negative of the trace of the original matrix A, i.e. -(a11 + a22) = -tr A, while the last term is merely the determinant of the original matrix, i.e. (a11a22 - a21a12) = |A|. Thus, we can write:

λ² - (tr A)λ + |A| = 0
This is generally true for all two-dimensional equation systems. There are always two solutions to a quadratic equation and these can be obtained from the familiar quadratic formula, in this case:

λ1, λ2 = [tr A ± √((tr A)² - 4|A|)] / 2
where the eigenvalues λ1, λ2 are real if (tr A)² ≥ 4|A|; otherwise they are complex conjugates. For higher-dimensional systems, the polynomial is, of course, different. In fact, an n-dimensional matrix A will have n eigenvalues λ with associated eigenvectors x which solve the system Ax = λx. Nonetheless, some general rules apply. For instance, the sum of the n eigenvalues equals the trace of A (λ1 + λ2 + ... + λn = tr A), while the product of the n eigenvalues equals the determinant of A (λ1·λ2·...·λn = |A|).
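To make the two-dimensional recipe concrete, here is a minimal Python sketch that solves λ² - (tr A)λ + |A| = 0 with the quadratic formula; the matrix A below is an arbitrary illustrative choice, not a matrix from the text:

```python
import numpy as np

def eig2x2(A):
    """Eigenvalues of a 2x2 matrix via the characteristic equation
    lambda^2 - (tr A) lambda + |A| = 0 and the quadratic formula."""
    tr = A[0, 0] + A[1, 1]                        # trace of A
    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # determinant of A
    disc = tr**2 - 4 * det                        # real roots iff disc >= 0
    root = np.sqrt(complex(disc)) if disc < 0 else np.sqrt(disc)
    return (tr + root) / 2, (tr - root) / 2

A = np.array([[1.0, 2.0], [3.0, 0.0]])  # illustrative matrix: tr A = 1, |A| = -6
l1, l2 = eig2x2(A)
print(l1, l2)  # 3.0 -2.0
```

When the discriminant is negative the same formula returns the pair of complex conjugates; a library routine such as numpy.linalg.eigvals(A) would of course give the same roots.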
Having established the eigenvalues, the question now turns to the associated eigenvectors. These are obtained by plugging in one of the eigenvalues and solving for the ratios between the xi in x. For instance, if we take the first eigenvalue λ1, then our equation system is:

(A - λ1I)x = 0
For the 2 × 2 case, we can write this out as a system of two equations:

(a11 - λ1)x1 + a12x2 = 0
a21x1 + (a22 - λ1)x2 = 0
Although it might not be obvious, the first equation is a scalar multiple of the second (the system is linearly dependent), thus the ratio x1/x2 will be the same regardless of which equation we solve, i.e.

x1/x2 = -a12/(a11 - λ1) = -(a22 - λ1)/a21
Once x1/x2 is obtained, the only thing that remains in order to obtain actual levels of x1 and x2 is to normalize the system, e.g. we could take x2 = 1 or impose x1 + x2 = 1 or x1² + x2² = 1 as a normalization device. From this we would thus obtain the vector x = [x1 x2]′. This x is the eigenvector associated with the eigenvalue λ1. If we then took the second eigenvalue λ2, we would find another eigenvector x associated with it by the same means. In an n-dimensional system, we would have n eigenvalues with associated eigenvectors.

An Example: Consider the following matrix:
Imposing Ax = λx, we wish to find the λs such that |A - λI| = 0. This reduces to the polynomial:

λ² - λ - 6 = 0
which yields two values, λ1 = 3 and λ2 = -2. Plugging the first eigenvalue into the system (A - λ1I)x = 0, we then obtain the two-equation system:
which are obviously linearly dependent, thus the solution in either case is x1/x2 = 1/2. If we normalize x2 = 1, then x = [1/2, 1]′; normalizing x1 = 1, then x = [1, 2]′; normalizing x1 + x2 = 1, then x = [1/3, 2/3]′; normalizing x1² + x2² = 1, then x = [1/√5, 2/√5]′; and so on for other normalizations. Note that whatever normalization we choose, the resulting eigenvector will be a scalar multiple of any other obtained by a different normalization device. To obtain the second eigenvector, we need to plug the second eigenvalue λ2 = -2 into our system (A - λ2I)x = 0. This yields:
where the solution in either case is x1/x2 = -0.5. If we normalize x2 = 1, for instance, then x = [-0.5, 1]′ is the eigenvector associated with λ2.
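The whole procedure above (find an eigenvalue, solve for the ratio x1/x2, then normalize) can be sketched in Python. The matrix A here is a hypothetical stand-in chosen for illustration (it happens to share the eigenvalues 3 and -2), so the ratio it produces differs from the example's 1/2:

```python
import numpy as np

# Hypothetical illustrative matrix with eigenvalues 3 and -2.
A = np.array([[1.0, 2.0], [3.0, 0.0]])

eigenvalue = 3.0
# Form (A - lambda I) and solve the first equation for the ratio x1/x2;
# the second equation is a scalar multiple of it, so either works.
M = A - eigenvalue * np.eye(2)
ratio = -M[0, 1] / M[0, 0]   # x1/x2 = -a12 / (a11 - lambda)

# Two of the normalization devices described in the text:
x_second_one = np.array([ratio, 1.0])                      # take x2 = 1
x_unit = x_second_one / np.linalg.norm(x_second_one)       # x1^2 + x2^2 = 1

print(ratio)   # 1.0 for this particular matrix
print(x_unit)  # both entries 1/sqrt(2), approximately 0.7071
```

Whatever normalization we pick, the resulting vectors are scalar multiples of one another, and each satisfies Ax = 3x.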
All rights reserved, Gonçalo L. Fonseca