(1) General Differential Equations

Economists use differential equations largely in the context of dynamical systems, i.e. in systems where time, t, is one of the variables. However, differential equations are defined more generally than this. In this section, we provide general definitions and only introduce time as the explicit variable in the next section.
Intuitively, a differential equation is an equation involving derivatives of an unknown function y. The problem is one of finding this function - thus a solution to a differential equation is a function y = φ(x) which satisfies:

ƒ(x, y, y′, y″, ..., y^(n)) = 0
For conditions establishing the existence of a solution φ(x) (the Cauchy-Peano theorem), we refer to any text on the matter and shall pass it over in silence here. We shall concern ourselves throughout with first order differential equations (FODE), so that we have:

ƒ(x, y, y′) = 0
As x is included explicitly, this is a "non-autonomous" system; if x were excluded, we would have an autonomous system. We can convert from non-autonomous to autonomous systems via a change-of-variable technique which we shall not pursue here. We shall also focus the bulk of our attention on linear FODEs. A linear FODE (the "complete equation") is defined as:

a(x)y′ + b(x)y = c(x)

while the corresponding equation "without the second member" is the homogeneous equation a(x)y′ + b(x)y = 0. The general solution y of the complete equation can be written as y = y0 + z, where y0 is a particular solution of the complete equation and z is the general solution of the equation without the second member.
Proof: let y0 be a particular solution to the complete equation. Let y be the general solution, written as y = y0 + z. We must prove that z is a solution to the equation without the second member. As y solves a(x)y′ + b(x)y = c(x), then a(x)[y0′ + z′] + b(x)[y0 + z] = c(x), or:

[a(x)y0′ + b(x)y0 − c(x)] + a(x)z′ + b(x)z = 0
Since y0 is a particular solution to the complete equation, a(x)y0′ + b(x)y0 − c(x) = 0, thus the previous equation reduces to:

a(x)z′ + b(x)z = 0
thus z is a solution to the equation without the second member.§
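As a concrete illustration, this decomposition can be checked with a computer algebra system. The following is a minimal sketch in Python using sympy; the coefficients a(x) = 1, b(x) = 2 and the second member c(x) = x are arbitrary choices made only for the example:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    # Complete (non-homogeneous) linear FODE:  y' + 2y = x
    complete = sp.Eq(y(x).diff(x) + 2*y(x), x)
    # Equation "without the second member":    y' + 2y = 0
    homogeneous = sp.Eq(y(x).diff(x) + 2*y(x), 0)

    general = sp.dsolve(complete, y(x))       # y(x) = C1*exp(-2*x) + x/2 - 1/4
    z = sp.dsolve(homogeneous, y(x))          # z(x) = C1*exp(-2*x)
    particular = general.rhs.subs(sp.Symbol('C1'), 0)   # one particular solution y0

    print(general)     # general solution of the complete equation
    print(z)           # general solution of the homogeneous equation
    print(particular)  # general = particular + homogeneous, as the theorem states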
A Bernoulli equation is a FODE of the form y′ = a(x)y + b(x)y^m, where m ≠ 0, 1. Resolution: if y ≠ 0, rewrite the Bernoulli equation as:

y′/y^m = a(x)/y^(m−1) + b(x)
and let z = 1/y^(m−1), so that z′ = (1−m)y′/y^m, i.e. y′/y^m = z′/(1−m). Thus, rearranging:

z′ = (1−m)a(x)z + (1−m)b(x)
which is a linear FODE we can solve.
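To illustrate, here is a minimal Python/sympy sketch of the reduction; the particular Bernoulli equation y′ = y + x·y^3 (i.e. a(x) = 1, b(x) = x, m = 3) is an arbitrary choice for the example:

    import sympy as sp

    x = sp.symbols('x')
    z = sp.Function('z')
    m = 3   # exponent of the Bernoulli term (arbitrary choice)

    # Bernoulli equation (illustrative):  y' = y + x*y**m
    # The substitution z = y**(1-m) turns it into the linear FODE
    #   z' = (1-m)*z + (1-m)*x
    linear = sp.Eq(z(x).diff(x), (1 - m)*z(x) + (1 - m)*x)
    z_sol = sp.dsolve(linear, z(x)).rhs

    # Recover y = z**(1/(1-m)) and check it solves the original Bernoulli equation
    y_sol = z_sol**sp.Rational(1, 1 - m)
    residual = sp.diff(y_sol, x) - (y_sol + x*y_sol**m)
    print(sp.simplify(residual))   # should print 0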
A Riccati equation is a FODE of the form y′ = a(x)y² + b(x)y + c(x). Resolution: let y1 be a particular solution of the Riccati equation. Then, setting y = y1 + z, the equation becomes:

y1′ + z′ = a(x)[y1 + z]² + b(x)[y1 + z] + c(x) = a(x)y1² + 2a(x)y1z + a(x)z² + b(x)y1 + b(x)z + c(x)
since y1 is a particular solution, y1′ − a(x)y1² − b(x)y1 − c(x) = 0, so, after some algebra, the previous equation becomes:

z′ = [2a(x)y1 + b(x)]z + a(x)z²
which is a Bernoulli equation with m = 2, which we can solve.
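The reduction is easy to verify symbolically. A minimal Python/sympy sketch follows; the Riccati equation y′ = y² − 1 (a(x) = 1, b(x) = 0, c(x) = −1) and its particular solution y1 = 1 are arbitrary illustrative choices:

    import sympy as sp

    x = sp.symbols('x')
    z = sp.Function('z')

    # Riccati equation (illustrative):  y' = y**2 - 1   (a = 1, b = 0, c = -1)
    # A particular solution, found by inspection, is y1 = 1.
    y1 = sp.Integer(1)

    # Set y = y1 + z and substitute into the Riccati equation.  The claim is that
    # what remains is the Bernoulli equation  z' = [2*a*y1 + b]*z + a*z**2 = 2*z + z**2.
    y = y1 + z(x)
    riccati_residual = sp.expand(sp.diff(y, x) - (y**2 - 1))
    bernoulli_residual = sp.expand(z(x).diff(x) - (2*z(x) + z(x)**2))

    print(sp.simplify(riccati_residual - bernoulli_residual))   # 0: the two equations coincide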
(2) Dynamical Systems of Differential Equations

In the previous section, we defined a differential equation as a general function. Now we shall consider time explicitly and thus consider differential equations ƒ(t, x, x′, ..., x^(n)) where, note, time, t ∈ R+, is now the variable and x(t) is a function of time (and x′, x″, etc. are its first and higher order derivatives). In this section we shall focus our attention exclusively on systems of linear first order differential equations. This translates effectively to a system of n differential equations of the following form:

dx1(t)/dt = a11x1(t) + a12x2(t) + ... + a1nxn(t) + b1(t)
dx2(t)/dt = a21x1(t) + a22x2(t) + ... + a2nxn(t) + b2(t)
...
dxn(t)/dt = an1x1(t) + an2x2(t) + ... + annxn(t) + bn(t)

or, letting x′(t) = [dx1(t)/dt, dx2(t)/dt, ..., dxn(t)/dt]′, x(t) = [x1(t), x2(t), ..., xn(t)]′, b(t) = [b1(t), b2(t), ..., bn(t)]′, and letting:

A = [aij], i, j = 1, .., n
be a matrix of (constant) coefficients, then the system can be rewritten as:

x′(t) = Ax(t) + b(t)
or simply:

x′ = Ax + b
Throughout the following, the term t will be dropped as an argument of x′(t) and x(t) when no confusion is risked. If b(t) = 0, then x′(t) = Ax(t) is homogeneous. The solution to a homogeneous system can be expressed as follows: x = ve^(λt) is a solution of x′ = Ax, where λ is an eigenvalue of A and v an associated eigenvector.
Proof: If x = ve^(λt), then x′ = λve^(λt), and thus, substituting for x and x′, the homogeneous system can be rewritten as λve^(λt) = Ave^(λt) which, dividing through by e^(λt), yields the eigenvalue system λv = Av, or (A − λI)v = 0. In other words, for a non-trivial solution it must be that |A − λI| = 0, which is the characteristic equation of matrix A. Thus, λ is an eigenvalue of A and v is its associated eigenvector.§ As the matrix A has n eigenvalues, λ1, .., λn, and n associated eigenvectors, v1, v2, .., vn, each term vie^(λit) is a solution to the homogeneous system x′ = Ax. The following theorem establishes that any linear combination of these terms is also a solution to x′ = Ax: if v1e^(λ1t), v2e^(λ2t), .., vne^(λnt) are solutions to x′ = Ax, then so is z(t) = Σi ci vie^(λit) for any constants c1, .., cn.
Proof: We wish to prove that, as v1e^(λ1t), v2e^(λ2t), .., vne^(λnt) are all independent solutions to the system x′ = Ax, so is their linear combination z(t) = Σi=1..n ci vie^(λit). This is easily seen: taking the first derivative of z(t), we obtain z′(t) = Σi ciλivie^(λit) which, as λivi = Avi, yields z′(t) = Σi ciAvie^(λit) = Az(t) by the definition of z(t). Thus, z(t) is a solution to the system x′ = Ax.§ The matrix Φ(t) = [v1e^(λ1t), v2e^(λ2t), .., vne^(λnt)] is sometimes referred to as the "fundamental matrix", as the vie^(λit) are linearly independent of each other (a result of λ1, λ2, .., λn being distinct eigenvalues). This implies that any solution x(t) to the system x′ = Ax can be expressed as a unique combination of the vectors in the fundamental matrix (we omit the proof). Consequently, what is commonly referred to as the general solution to the system x′ = Ax is given as:

x(t) = c1v1e^(λ1t) + c2v2e^(λ2t) + ... + cnvne^(λnt)
where, as noted earlier, c1, .., cn are arbitrary, possibly complex, constants. If the eigenvalues are not distinct, things get a bit more complicated; nonetheless, as repeated roots are not robust, or are "structurally unstable" (i.e. do not survive small changes in the coefficients of A), they can generally be ignored for practical purposes (cf. Murata, 1977).
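Numerically, this recipe is easy to verify. The sketch below is purely illustrative: the 2 × 2 matrix A and the constants c1, c2 are arbitrary choices, and numpy.linalg.eig supplies the eigenvalues and eigenvectors:

    import numpy as np

    # Arbitrary 2x2 coefficient matrix with distinct eigenvalues (-1 and -2)
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])

    lam, V = np.linalg.eig(A)      # eigenvalues lam[i], eigenvectors V[:, i]
    c = np.array([1.0, -0.5])      # arbitrary constants c1, c2

    def x(t):
        """General solution x(t) = sum_i c_i * v_i * exp(lam_i * t)."""
        return sum(c[i] * V[:, i] * np.exp(lam[i] * t) for i in range(len(lam)))

    # Check that x'(t) = A x(t) at some t, using a central-difference derivative
    t, h = 0.7, 1e-6
    x_dot = (x(t + h) - x(t - h)) / (2 * h)
    print(np.allclose(x_dot, A @ x(t)))   # True (up to finite-difference error)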
Let us now turn to another interesting issue. Recall that a matrix A is "diagonalizable" if there is a matrix P such that P⁻¹AP is a diagonal matrix. We now turn to the following: if the n eigenvectors of A are linearly independent, then the modal matrix P formed from them diagonalizes A, i.e. P⁻¹AP = Λ, where Λ is the diagonal matrix of eigenvalues of A.

Proof: Define the modal matrix P = [v1, v2, .., vn], thus P is an (n × n) matrix whose n columns are the n eigenvectors of A. Thus, as Avi = λivi for i = 1, .., n, then A[v1, v2, .., vn] = [λ1v1, λ2v2, .., λnvn], or simply AP = PΛ, where Λ is a diagonal matrix with the eigenvalues λ1, λ2, .., λn of A arrayed along the diagonal, i.e. Λ = diag(λ1, λ2, .., λn).
As AP = PΛ, then obviously P⁻¹AP = Λ, thus the matrix P diagonalizes A. For P⁻¹ to exist, the columns of P, i.e. the eigenvectors vi, must be linearly independent. Conversely, if P is non-singular, P⁻¹ exists and P⁻¹AP = Λ, i.e. P diagonalizes A.§
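The diagonalization can be checked numerically with a few lines of Python; the matrix A below is the same arbitrary illustration as before:

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])     # arbitrary matrix with distinct eigenvalues

    lam, P = np.linalg.eig(A)        # the columns of P are the eigenvectors of A
    Lambda = np.diag(lam)            # diagonal matrix of eigenvalues

    # P^(-1) A P should equal Lambda (and, equivalently, A = P Lambda P^(-1))
    print(np.allclose(np.linalg.inv(P) @ A @ P, Lambda))   # True
    print(np.allclose(A, P @ Lambda @ np.linalg.inv(P)))   # True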
For the next set of theorems, it is worth noting that Taylor's expansion of the function ƒ(t) = e^(at) around t = 0 is:

e^(at) = 1 + at/1! + a²t²/2! + a³t³/3! + ....

As a consequence, the following theorem can be stated: the solution to the homogeneous system x′(t) = Ax(t) with initial condition x(0) = x0 is x(t) = e^(At)x0, where e^(At) = I + At/1! + A²t²/2! + A³t³/3! + ....
Proof: Taylor's expansion of x(t) around t = 0 yields:

x(t) = x(0) + x′(0)t/1! + x″(0)t²/2! + x‴(0)t³/3! + ....
As x′(t) = Ax(t), then x″(t) = Ax′(t) = AAx(t) = A²x(t). Similarly, x‴(t) = A³x(t), and so on. Thus, at t = 0, we have x′(0) = Ax(0) = Ax0, x″(0) = A²x(0) = A²x0, x‴(0) = A³x(0) = A³x0, etc., from the initial condition x(0) = x0. Thus, replacing these in the Taylor expansion:

x(t) = x0 + Ax0t/1! + A²x0t²/2! + A³x0t³/3! + ....
or, factoring out x0:

x(t) = [I + At/1! + A²t²/2! + A³t³/3! + ....]x0
where I is the identity matrix. But, as established earlier, we know that e^(At) = [I + At/1! + A²t²/2! + A³t³/3! + ....], so x(t) = e^(At)x0.§
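In practice the matrix exponential need not be computed from the series by hand; scipy provides it directly. A minimal sketch, with an arbitrary A and x0, comparing x(t) = e^(At)x0 against a direct numerical integration of x′ = Ax:

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])     # arbitrary illustrative matrix
    x0 = np.array([1.0, 0.0])        # initial condition x(0) = x0
    t1 = 2.0

    # Closed-form solution via the matrix exponential: x(t) = e^(At) x0
    x_closed = expm(A * t1) @ x0

    # Independent check: numerically integrate x' = Ax
    sol = solve_ivp(lambda t, x: A @ x, (0.0, t1), x0, rtol=1e-10, atol=1e-12)
    print(np.allclose(x_closed, sol.y[:, -1], atol=1e-7))   # True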
We can now turn to the following: if the eigenvalues of A are distinct, then e^(At) = Pe^(Λt)P⁻¹, where P is the modal matrix and Λ the diagonal matrix of eigenvalues.

Proof: Distinct eigenvalues ensure linearly independent eigenvectors and hence the non-singularity of P and, by our previous theorem, the diagonalizability of A. Thus, P⁻¹AP = Λ, or A = PΛP⁻¹. Thus, A² = AA = (PΛP⁻¹)(PΛP⁻¹) = PΛ(P⁻¹P)ΛP⁻¹ = PΛ²P⁻¹. Similarly, A³ = PΛ³P⁻¹, and so on. Now, recall that:

e^(At) = I + At/1! + A²t²/2! + A³t³/3! + ....
so, substituting in for A, A², etc. and recalling that I = PP⁻¹, then:

e^(At) = PP⁻¹ + PΛP⁻¹t/1! + PΛ²P⁻¹t²/2! + PΛ³P⁻¹t³/3! + ....
or, factoring out P to the left and P⁻¹ to the right:

e^(At) = P[I + Λt/1! + Λ²t²/2! + Λ³t³/3! + ....]P⁻¹
but, as we know by definition, e^(Λt) = [I + Λt/1! + Λ²t²/2! + Λ³t³/3! + ....], thus this reduces to:

e^(At) = Pe^(Λt)P⁻¹
hence:

x(t) = e^(At)x0 = Pe^(Λt)P⁻¹x0
as was to be shown.§ Now, recall that the fundamental matrix was defined as Φ(t) = [v1e^(λ1t), v2e^(λ2t), .., vne^(λnt)], where each column is an independent solution of the homogeneous system x′(t) = Ax(t). Also, recall that the general solution was:

x(t) = c1v1e^(λ1t) + c2v2e^(λ2t) + ... + cnvne^(λnt)
or, letting c = [c1, .., cn]:

x(t) = Φ(t)c
It is elementary to note, then, that Φ(t) = Pe^(Λt) by the definition of P and Λ. Thus:

x(t) = Φ(t)c = Pe^(Λt)c
But we also know that x(t) = Pe^(Λt)P⁻¹x0, thus it must be that c = P⁻¹x0. Thus, in short, a solution to the homogeneous system x′ = Ax can be obtained by trying a solution x(t) = c1v1e^(λ1t) + c2v2e^(λ2t) + .... + cnvne^(λnt), where λ1, λ2, ..., λn are the eigenvalues of A, v1, v2, .., vn are its eigenvectors, and c1, c2, .., cn are the constants to be determined by the initial conditions.
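The relation c = P⁻¹x0 can likewise be verified numerically. The following sketch reuses the same arbitrary A and x0 as above and checks that the eigenvector sum reproduces e^(At)x0:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])      # arbitrary illustrative matrix
    x0 = np.array([1.0, 0.0])
    t1 = 1.5

    lam, P = np.linalg.eig(A)
    c = np.linalg.solve(P, x0)        # c = P^(-1) x0

    # x(t) = c1*v1*e^(lam1*t) + ... + cn*vn*e^(lamn*t)
    x_eig = sum(c[i] * P[:, i] * np.exp(lam[i] * t1) for i in range(len(lam)))

    # The same solution written as x(t) = P e^(Lambda t) P^(-1) x0 = e^(At) x0
    x_exp = expm(A * t1) @ x0
    print(np.allclose(x_eig, x_exp))  # True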
Let us now turn to a typical non-homogeneous system of linear first order differential equations. Thus, turning away from the homogeneous case, we are now considering the system:

x′ = Ax + b

where b ≠ 0 and, note, b is not a function of time. Consider now the following: if A is non-singular, the solution to this system with initial condition x(0) is x(t) = e^(At)k − A⁻¹b = Pe^(Λt)P⁻¹k − A⁻¹b, where k = x(0) + A⁻¹b.
Proof: Let y = x + A⁻¹b. Then, as b is independent of time, taking the time derivative, y′ = x′. Thus, substituting, y′ = Ax + b = Ax + AA⁻¹b = A(x + A⁻¹b) = Ay, i.e. we obtain a homogeneous system y′ = Ay. We know that the solution to a homogeneous system is y(t) = e^(At)y0 = Pe^(Λt)P⁻¹y0. For the first form, note that y(t) = e^(At)y0 implies x(t) + A⁻¹b = e^(At)[x(0) + A⁻¹b], or simply x(t) = e^(At)[x(0) + A⁻¹b] − A⁻¹b or, by the definition of k, x(t) = e^(At)k − A⁻¹b. For the second, y(t) = Pe^(Λt)P⁻¹y0 implies x(t) + A⁻¹b = Pe^(Λt)P⁻¹[x(0) + A⁻¹b], or x(t) = Pe^(Λt)P⁻¹[x(0) + A⁻¹b] − A⁻¹b, or, once again, by definition of k, x(t) = Pe^(Λt)P⁻¹k − A⁻¹b.§ It can be noticed that the latter term x(t) = Pe^(Λt)P⁻¹k − A⁻¹b can be expressed as:

x(t) = Φ(t)c − A⁻¹b
or:

x(t) = c1v1e^(λ1t) + c2v2e^(λ2t) + .... + cnvne^(λnt) + xp
where λ1, λ2, ..., λn are the eigenvalues of A and v1, v2, .., vn are its associated eigenvectors, so the fundamental matrix Φ(t) = [v1e^(λ1t), v2e^(λ2t), .., vne^(λnt)] = Pe^(Λt); the constants c1, c2, .., cn are determined by the initial conditions, i.e. c = P⁻¹k = P⁻¹[x(0) + A⁻¹b]; and xp is the particular integral (xp = −A⁻¹b).
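Finally, the non-homogeneous solution can be checked numerically. The sketch below is illustrative only: A, b and x(0) are arbitrary choices (with A non-singular and possessing distinct eigenvalues), and scipy.integrate.solve_ivp supplies an independent numerical solution of x′ = Ax + b:

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])      # arbitrary non-singular matrix, distinct eigenvalues
    b = np.array([1.0, 0.5])          # constant forcing term
    x0 = np.array([2.0, -1.0])        # initial condition x(0)
    t1 = 3.0

    # Closed-form solution:  x(t) = e^(At) k - A^(-1) b,  with  k = x(0) + A^(-1) b
    Ainv_b = np.linalg.solve(A, b)    # A^(-1) b
    k = x0 + Ainv_b
    x_closed = expm(A * t1) @ k - Ainv_b

    # The same solution via the eigendecomposition:  x(t) = P e^(Lambda t) P^(-1) k - A^(-1) b
    lam, P = np.linalg.eig(A)
    x_eig = P @ np.diag(np.exp(lam * t1)) @ np.linalg.solve(P, k) - Ainv_b

    # Independent check: numerically integrate x' = Ax + b
    sol = solve_ivp(lambda t, x: A @ x + b, (0.0, t1), x0, rtol=1e-10, atol=1e-12)

    print(np.allclose(x_closed, x_eig))                     # True
    print(np.allclose(x_closed, sol.y[:, -1], atol=1e-6))   # True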
All rights reserved, Gonçalo L. Fonseca