Section 20.2 Coupled Linear Differential Equations

Recall that the linear differential equation

\begin{equation*} \dfrac{dx}{dt} = ax, \, \, a \in \mathbb{R}\text{,} \end{equation*}

has the solution (via separation of variables)

\begin{equation*} x(t) = C e^{at}\text{,} \end{equation*}

where \(C\) is an arbitrary constant. Consider now the system of two linear differential equations

\begin{align*} \dot{x}_1 \amp = \dfrac{dx_1}{dt} = ax_1 + bx_2\\ \dot{x}_2 \amp = \dfrac{dx_2}{dt} = cx_1 + dx_2 \end{align*}

where \(a, \, b, \, c, \, d \in \mathbb{R}\text{,}\) which can be written in matrix notation as

\begin{equation} \dot{\mathbf{x}} = A \mathbf{x}\label{Eq-matrix_form_coupled_linear_DEs}\tag{20.2.1} \end{equation}

where \(\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}\text{,}\) \(\dot{\mathbf{x}} = \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix}\) and \(A = \begin{pmatrix} a \amp b \\ c \amp d \end{pmatrix}\text{.}\) These equations are “coupled”, i.e. the derivative of \(x_1(t)\) depends on both \(x_1(t)\) and \(x_2(t)\) and likewise for the derivative of \(x_2(t)\text{.}\) Thus we can't solve the first equation unless we can solve the second and vice versa. Note that if \(A\) is diagonal then the equations become uncoupled and we could solve each separately.
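
For example, if the system were

\begin{equation*} \dot{\mathbf{x}} = \begin{pmatrix} \lambda_1 \amp 0 \\ 0 \amp \lambda_2 \end{pmatrix} \mathbf{x}\text{,} \end{equation*}

then the two equations \(\dot{x}_1 = \lambda_1 x_1\) and \(\dot{x}_2 = \lambda_2 x_2\) share no variables, and each can be solved on its own (as for the single equation above) to give \(x_1 = C_1 e^{\lambda_1 t}\) and \(x_2 = C_2 e^{\lambda_2 t}\text{.}\)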

If the matrix \(A\) has two distinct eigenvalues \(\lambda_1\) and \(\lambda_2\text{,}\) then by making the change of variable \(\mathbf{y}= P^{-1} \mathbf{x}\text{,}\) where \(P\) is the matrix whose columns are the corresponding eigenvectors \(\mathbf{v_1}\) and \(\mathbf{v_2}\) of \(A\text{,}\) we can transform (20.2.1) into a system in which the matrix is diagonal. By solving that uncoupled system and converting back to our original variables we find that the general solution to (20.2.1) is

\begin{equation} \mathbf{x} = C_1 e^{\lambda_1 t} \mathbf{v_1} + C_2 e^{\lambda_2 t} \mathbf{v_2}\label{Eq-general_solution_coupled_linear_DEs}\tag{20.2.2} \end{equation}

where \(C_1\) and \(C_2\) are arbitrary constants. We can check that (20.2.2) is indeed a solution to (20.2.1). Differentiating (20.2.2) gives

\begin{equation*} \dot{\mathbf{x}} = C_1 \lambda_1 e^{\lambda_1 t} \mathbf{v_1} + C_2 \lambda_2 e^{\lambda_2 t} \mathbf{v_2} \end{equation*}

and

\begin{align*} A \mathbf{x} \amp = A \left( C_1 e^{\lambda_1 t} \mathbf{v_1} + C_2 e^{\lambda_2 t} \mathbf{v_2} \right)\\ \amp = C_1 e^{\lambda_1 t} A \mathbf{v_1} + C_2 e^{\lambda_2 t} A \mathbf{v_2}\\ \amp = C_1 \lambda_1 e^{\lambda_1 t} \mathbf{v_1} + C_2 \lambda_2 e^{\lambda_2 t} \mathbf{v_2}\text{,} \end{align*}

which is exactly \(\dot{\mathbf{x}}\text{,}\) so (20.2.2) does indeed satisfy (20.2.1).
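
To see where (20.2.2) comes from, note that since the columns of \(P\) are the eigenvectors \(\mathbf{v_1}\) and \(\mathbf{v_2}\text{,}\) we have \(P^{-1} A P = \begin{pmatrix} \lambda_1 \amp 0 \\ 0 \amp \lambda_2 \end{pmatrix}\text{.}\) Hence, with \(\mathbf{y} = P^{-1} \mathbf{x}\text{,}\)

\begin{equation*} \dot{\mathbf{y}} = P^{-1} \dot{\mathbf{x}} = P^{-1} A \mathbf{x} = \left( P^{-1} A P \right) \mathbf{y} = \begin{pmatrix} \lambda_1 \amp 0 \\ 0 \amp \lambda_2 \end{pmatrix} \mathbf{y}\text{.} \end{equation*}

This system is uncoupled, so \(y_1 = C_1 e^{\lambda_1 t}\) and \(y_2 = C_2 e^{\lambda_2 t}\text{,}\) and converting back via \(\mathbf{x} = P \mathbf{y}\) gives (20.2.2).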

Find the solution to the initial value problem

\begin{align*} \dfrac{dx_1}{dt} \amp = x_1 + 2x_2\\ \dfrac{dx_2}{dt} \amp = 2x_1 + x_2\text{,} \end{align*}

where \(x_1(0) = 2\) and \(x_2(0) = 3\text{.}\)

Answer.

\(x_1(t) = \dfrac{5}{2} e^{3t} - \dfrac{1}{2} e^{-t}\) and \(x_2(t) = \dfrac{5}{2} e^{3t} + \dfrac{1}{2} e^{-t}\)

Solution.

In matrix notation this system is

\begin{equation*} \dot{\mathbf{x}} = \begin{pmatrix} 1 \amp 2 \\ 2 \amp 1 \end{pmatrix} \mathbf{x}, \quad \mathbf{x}(0) = \begin{pmatrix} 2 \\ 3 \end{pmatrix}\text{.} \end{equation*}
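
For reference, the eigenvalues come from the characteristic equation

\begin{equation*} \det \begin{pmatrix} 1-\lambda \amp 2 \\ 2 \amp 1-\lambda \end{pmatrix} = (1-\lambda)^2 - 4 = \lambda^2 - 2\lambda - 3 = (\lambda + 1)(\lambda - 3) = 0\text{.} \end{equation*}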

The eigenvalues of \(\begin{pmatrix} 1 \amp 2 \\ 2 \amp 1 \end{pmatrix}\) turn out to be \(\lambda_1 = -1\) and \(\lambda_2 = 3\) with associated eigenvectors \(\mathbf{v_1} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}\) and \(\mathbf{v_2} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}\text{.}\) Thus, from (20.2.2) the general solution is

\begin{equation*} \mathbf{x} = C_1 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} + C_2 e^{3t} \begin{pmatrix} 1 \\ 1 \end{pmatrix}\text{.} \end{equation*}

From the initial conditions we have

\begin{equation*} \begin{pmatrix} 2 \\ 3 \end{pmatrix} = C_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} + C_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix}\text{.} \end{equation*}
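
In component form this is the pair of equations

\begin{align*} C_1 + C_2 \amp = 2\\ -C_1 + C_2 \amp = 3\text{.} \end{align*}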

Solving this system of linear equations (by Gauss-Jordan elimination, say) gives

\begin{equation*} C_1 = -\dfrac{1}{2} \, \text{ and } \, C_2 = \dfrac{5}{2}\text{.} \end{equation*}

Thus, the solution to the initial value problem is

\begin{equation*} \mathbf{x} = \dfrac{5}{2} e^{3t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} - \dfrac{1}{2} e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} \end{equation*}

or equivalently

\begin{align*} x_1(t) \amp = \dfrac{5}{2} e^{3t} - \dfrac{1}{2} e^{-t}\\ x_2(t) \amp = \dfrac{5}{2} e^{3t} + \dfrac{1}{2} e^{-t}\text{.} \end{align*}

Note that you can always check your answer by checking that the functions do indeed satisfy the original equations.
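
For example, for the first equation,

\begin{equation*} \dfrac{dx_1}{dt} = \dfrac{15}{2} e^{3t} + \dfrac{1}{2} e^{-t} \quad \text{and} \quad x_1 + 2x_2 = \left( \dfrac{5}{2} + 5 \right) e^{3t} + \left( -\dfrac{1}{2} + 1 \right) e^{-t} = \dfrac{15}{2} e^{3t} + \dfrac{1}{2} e^{-t}\text{,} \end{equation*}

so the first equation is satisfied; the second equation and the initial conditions can be checked in the same way.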

Figure 20.2.2 shows the graph of these solutions. Notice that as \(t\) increases the \(e^{-t}\) term in each solution dies away, so the two functions get closer together, while the \(e^{3t}\) term makes both solutions grow exponentially. Thus the eigenvalues of the matrix \(A\) give us some idea of the qualitative nature of the solutions.

Figure 20.2.2.

Solve the initial value problem

\begin{equation*} \dot{\mathbf{x}} = A \mathbf{x}, \: A = \begin{pmatrix} 0 \amp 1 \\ -1 \amp 0 \end{pmatrix}, \: \mathbf{x}(0) = \begin{pmatrix} -4 \\ 8 \end{pmatrix}\text{.} \end{equation*}
Answer.

\(\mathbf{x} = \begin{pmatrix} 8 \sin(t) - 4 \cos(t) \\ 8 \cos(t) + 4 \sin(t) \end{pmatrix}\)

Solution.

The eigenvalues of \(A\) turn out to be purely imaginary, with \(\lambda_1 = i\) and \(\lambda_2 = -i\text{.}\) The associated eigenvectors are \(\mathbf{v_1} = \begin{pmatrix} -i \\ 1 \end{pmatrix}\) and \(\mathbf{v_2}= \begin{pmatrix} i \\ 1 \end{pmatrix}\text{.}\) Thus, from (20.2.2) the general solution is

\begin{equation*} \mathbf{x} = C_1 e^{it} \begin{pmatrix} -i \\ 1 \end{pmatrix} + C_2 e^{-it} \begin{pmatrix} i \\ 1 \end{pmatrix}\text{.} \end{equation*}

From the initial conditions we have

\begin{equation*} \begin{pmatrix} -4 \\ 8 \end{pmatrix} = C_1 \begin{pmatrix} -i \\ 1 \end{pmatrix} + C_2 \begin{pmatrix} i \\ 1 \end{pmatrix}\text{,} \end{equation*}

which upon solving gives

\begin{equation*} C_1 = 4-2i \, \text{ and } \, C_2 = 4+2i\text{.} \end{equation*}

Thus, the solution to the initial value problem is

\begin{equation*} \mathbf{x} = (4-2i) e^{it} \begin{pmatrix} -i \\ 1 \end{pmatrix} + (4+2i) e^{-it} \begin{pmatrix} i \\ 1 \end{pmatrix}\text{.} \end{equation*}

We can simplify this solution by using Euler's formula

\begin{equation*} e^{i \theta} = \cos( \theta ) + i \sin( \theta )\text{.} \end{equation*}

Thus

\begin{equation*} \mathbf{x} = (4-2i) \left( \cos(t) + i\sin(t) \right) \begin{pmatrix} -i \\ 1 \end{pmatrix} + (4+2i) \left( \cos(t) - i \sin(t) \right) \begin{pmatrix} i \\ 1 \end{pmatrix}\text{,} \end{equation*}

which simplifies to

\begin{equation*} \mathbf{x} = \begin{pmatrix} 8 \sin(t) - 4 \cos(t) \\ 8 \cos(t) + 4 \sin(t) \end{pmatrix}\text{.} \end{equation*}

This is a real solution! As explained below, because all of the entries in \(A\) are real and the initial conditions are real, the solution will also be real.
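
As a quick check against the initial data, setting \(t = 0\) gives

\begin{equation*} \mathbf{x}(0) = \begin{pmatrix} 8 \sin(0) - 4 \cos(0) \\ 8 \cos(0) + 4 \sin(0) \end{pmatrix} = \begin{pmatrix} -4 \\ 8 \end{pmatrix}\text{,} \end{equation*}

as required.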

As shown in Figure 20.2.4, where these solutions are graphed, purely imaginary eigenvalues are associated with periodic solutions. The period of these solutions is \(\dfrac{2 \pi}{| \operatorname{Im}(\lambda) |}\text{,}\) which here is \(2\pi\text{.}\)

Figure 20.2.4.

Consider the system of coupled linear differential equations

\begin{equation} \dot{\mathbf{x}} = A \mathbf{x}\label{Eq3-matri_form_for_system_linear_DEs}\tag{20.2.3} \end{equation}

where the entries in \(A\) are all real. Now imagine that this system has a complex solution, with real and imaginary parts \(\mathbf{x_1}(t)\) and \(\mathbf{x_2}(t)\text{,}\) given by

\begin{equation} \mathbf{x}(t) = \mathbf{x_1}(t) + i \mathbf{x_2}(t)\text{.}\label{Eq4-complex_solution}\tag{20.2.4} \end{equation}

Taking the complex conjugate of both sides of (20.2.3) gives

\begin{equation*} \bar{\dot{\mathbf{x}}} = \overline{A \mathbf{x}} = \bar{A} \bar{\mathbf{x}}\text{.} \end{equation*}

Since \(\bar{\dot{\mathbf{x}}} = \dot{\bar{\mathbf{x}}}\) and \(\bar{A} = A\) (as the entries in \(A\) are all real),

\begin{equation*} \dot{\bar{\mathbf{x}}} = A \bar{\mathbf{x}} \end{equation*}

i.e.

\begin{equation} \bar{\mathbf{x}}(t) =\mathbf{x_1}(t) - i \mathbf{x_2} (t)\label{Eq5-complex_conjugate_solution}\tag{20.2.5} \end{equation}

will also be a solution to (20.2.3). Substituting (20.2.4) into (20.2.3) gives

\begin{equation} \dot{\mathbf{x}}_1 + i \dot{\mathbf{x}}_2 = A \mathbf{x_1} + i A \mathbf{x_2}\label{Eq6-substituting_4_into_3}\tag{20.2.6} \end{equation}

while substituting (20.2.5) into (20.2.3) gives

\begin{equation} \dot{\mathbf{x}}_1 - i \dot{\mathbf{x}}_2 = A \mathbf{x_1} - i A \mathbf{x_2}\text{.}\label{Eq7-substituting_5_into_3}\tag{20.2.7} \end{equation}

Now, adding equations (20.2.6) and (20.2.7) gives

\begin{equation*} \dot{\mathbf{x}}_1 = A \mathbf{x_1}\text{,} \end{equation*}

while subtracting (20.2.7) from (20.2.6) gives

\begin{equation*} \dot{\mathbf{x}}_2 = A \mathbf{x_2}\text{.} \end{equation*}

Thus, if we have a complex solution to (20.2.3), then the real and imaginary parts of this complex solution must separately be solutions, and hence a general solution to (20.2.3) is

\begin{equation*} \mathbf{x}(t) = C_1 \mathbf{x_1}(t) + C_2 \mathbf{x_2}(t)\text{.} \end{equation*}

This gives us another way of proceeding when the eigenvalues of \(A\) are complex.
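
In summary, if \(\lambda = \alpha + i\beta\) is a complex eigenvalue of \(A\) with eigenvector \(\mathbf{v}\text{,}\) then \(e^{\lambda t} \mathbf{v}\) is a complex solution of (20.2.3), and the general real solution can be written as

\begin{equation*} \mathbf{x}(t) = C_1 \operatorname{Re} \left( e^{\lambda t} \mathbf{v} \right) + C_2 \operatorname{Im} \left( e^{\lambda t} \mathbf{v} \right)\text{.} \end{equation*}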

Find the general solution to

\begin{equation*} \dot{\mathbf{x}} = A \mathbf{x}, \: A = \begin{pmatrix} 1 \amp -5 \\ 2 \amp 3 \end{pmatrix}\text{.} \end{equation*}
Answer.

\(\mathbf{x} = C_1 e^{2t} \begin{pmatrix} -5\cos(3t) \\ \cos(3t) - 3 \sin(3t) \end{pmatrix} + C_2 e^{2t} \begin{pmatrix} -5\sin(3t) \\ \sin(3t) + 3\cos(3t) \end{pmatrix}\)

Solution.
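
The characteristic equation here is

\begin{equation*} \det \begin{pmatrix} 1-\lambda \amp -5 \\ 2 \amp 3-\lambda \end{pmatrix} = (1-\lambda)(3-\lambda) + 10 = \lambda^2 - 4\lambda + 13 = 0, \quad \text{so} \quad \lambda = \dfrac{4 \pm \sqrt{16-52}}{2} = 2 \pm 3i\text{.} \end{equation*}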

Here the eigenvalues of \(A\) are complex with \(\lambda_1 = 2+3i\) and \(\lambda_2 = 2-3i\text{.}\) The eigenvector associated with \(\lambda_1\) is \(\mathbf{v_1} = \begin{pmatrix} -5 \\ 1+3i \end{pmatrix}\text{.}\) Thus, one solution to the system is

\begin{equation*} \mathbf{x} = e^{(2+3i)t} \begin{pmatrix} -5 \\ 1+3i \end{pmatrix}\text{.} \end{equation*}

Simplifying this solution using Euler's formula gives

\begin{equation*} \mathbf{x} = e^{2t} \left \{ \begin{pmatrix} -5\cos(3t) \\ \cos(3t) - 3 \sin(3t) \end{pmatrix} + i \begin{pmatrix} -5\sin(3t) \\ \sin(3t) + 3\cos(3t) \end{pmatrix} \right \}\text{.} \end{equation*}
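
To see where the second component comes from, expand \(e^{3it}(1+3i)\) using Euler's formula:

\begin{equation*} \left( \cos(3t) + i\sin(3t) \right)(1+3i) = \left( \cos(3t) - 3\sin(3t) \right) + i \left( \sin(3t) + 3\cos(3t) \right)\text{,} \end{equation*}

while the first component is simply \(-5\left( \cos(3t) + i \sin(3t) \right)\text{.}\)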

Since we know that both the real part and the imaginary part are solutions to the system we know that the general solution is

\begin{equation*} \mathbf{x} = C_1 e^{2t} \begin{pmatrix} -5\cos(3t) \\ \cos(3t) - 3 \sin(3t) \end{pmatrix} + C_2 e^{2t} \begin{pmatrix} -5\sin(3t) \\ \sin(3t) + 3\cos(3t) \end{pmatrix}\text{.} \end{equation*}

Figure 20.2.6 shows a plot of this solution when \(C_1 = C_2 = 1\text{.}\)

Figure 20.2.6.

Note that the solutions to the system oscillate with the period determined by the imaginary part of the eigenvalue. However, since the real part of the eigenvalue is positive, the amplitude of the oscillations grows without bound.

The discussion so far has concentrated on systems of two coupled first-order linear differential equations. However, the ideas carry over to systems with more equations.

A qualitative description of the solutions to the system can be determined from the eigenvalues of \(A\text{.}\)

Remark 20.2.8.

  • If \(A\) has a positive real eigenvalue then the corresponding solution grows without bound.

  • If \(A\) has a negative real eigenvalue then the corresponding solution decays.

  • If \(A\) has a zero eigenvalue then the corresponding solution is constant.

  • If \(A\) has a pair of complex conjugate eigenvalues then the corresponding solution oscillates with period \(2\pi / | \operatorname{Im}(\lambda) |\) and with the amplitude either growing \((\operatorname{Re}(\lambda) > 0)\text{,}\) decaying \((\operatorname{Re}(\lambda) < 0 )\) or staying the same \((\operatorname{Re}(\lambda) = 0)\text{.}\)

Exercises Example Tasks

1.

Describe the long term behaviour of the solutions to the system \(\dot{\mathbf{x}} = A \mathbf{x}\text{,}\) where

\begin{equation*} A = \begin{pmatrix} -1 \amp 2 \amp 3 \\ 0 \amp -2 \amp 4 \\ 0 \amp 0 \amp 0 \end{pmatrix}\text{.} \end{equation*}