- Source: Matrix Exponentials | Unit IV: First-order Systems | Differential Equations | Mathematics | MIT OpenCourseWare
1 For this linear system, how many solutions are there?
1.1 Front
For this linear system, how many solutions are there?
\({\displaystyle \dot{\vb{x}} = A \vb{x}}\), where \(A\) is a \(n \cross n\) matrix
1.2 Back
There are \(n\) linearly independent solutions for the system
2 What does the linear system look like when the coefficients are functions of the independent variable t?
2.1 Front
What does the linear system look like when the coefficients are functions of the independent variable \(t\)?
for \(n \cross n\) linear homogeneous systems
2.2 Back
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)
3 When can we say that the n solutions are linearly independent?
3.1 Front
When can we say that the n solutions are linearly independent?
Solutions for the system \({\displaystyle \dot{\vb{x}} = A \vb{x}}\)
3.2 Back
If \(c_1 \vb{x_1}(t) + \dots + c_n \vb{x_n}(t) = 0\) for all \(t\) \(\implies\) all \(c_i = 0\)
4 What does this expression mean?
4.1 Front
What does this expression mean?
\(c_1 \vb{x_1}(t) + \dots + c_n \vb{x_n}(t) \equiv 0\)
4.2 Back
It’s identically \(0\), meaning zero for all \(t\). The symbol \(\not \equiv\) means not identically \(0\): there is some \(t\text{-value}\) for which it is not zero
5 What is a fundamental set of solutions for this linear system?
5.1 Front
What is a fundamental set of solutions for this linear system?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)
And what is the general solution?
5.2 Back
It is the set of \(n\) linearly independent solutions \(\vb{x_1}, \dots, \vb{x_n}\) for this system.
The general solution is \(\vb{x} = c_1 \vb{x_1} + \dots + c_n \vb{x_n}\) using the superposition principle because it’s a linear system.
6 What is the existence and uniqueness theorem for linear systems?
6.1 Front
What is the existence and uniqueness theorem for linear systems?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)
6.2 Back
If the entries of the square matrix \(A(t)\) are continuous on an open interval \(I\) containing \(t_0\), then the initial value problem
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), \(\vb{x}(t_0) = \vb{x_0}\)
has one and only one solution \(\vb{x}(t)\) on the interval \(I\)
7 What is the linear independence theorem for linear systems?
7.1 Front
What is the linear independence theorem for linear systems?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\),
7.2 Back
Supposing that the entries of \(A(t)\) are continuous on an open interval \(I\)
Let \(\vb{x_1}(t)\) and \(\vb{x_2}(t)\) be two solutions to the linear system on the interval \(I\), such that at some point \(t_0\) in \(I\), the vectors \(\vb{x_1}(t_0)\) and \(\vb{x_2}(t_0)\) are linearly independent.
Then
- the solutions \(\vb{x_1}(t)\) and \(\vb{x_2}(t)\) are linearly independent on \(I\), and
- the vectors \(\vb{x_1}(t_1)\) and \(\vb{x_2}(t_1)\) are linearly independent at every point \(t_1\) of \(I\)
8 What is the general solution theorem for linear systems?
8.1 Front
What is the general solution theorem for linear systems?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), where \(A\) is an \(n \cross n\) matrix
8.2 Back
- This system has \(n\) linearly independent solutions
- If \(\vb{x_1}, \dots, \vb{x_n}\) are \(n\) linearly independent solutions, then every solution \(\vb{x}\) can be written in this form for some choice of \(c_i\)
\({\displaystyle \vb{x} = c_1 \vb{x_1} + \dots + c_n \vb{x_n}}\)
9 What is the Wronskian of these 2 vector functions?
9.1 Front
What is the Wronskian of these 2 vector functions?
Let \(\vb{x_1}(t)\) and \(\vb{x_2}(t)\) be two 2-vector functions
9.2 Back
We define their Wronskian to be the determinant
\({\displaystyle W(\vb{x_1}, \vb{x_2})(t) = \begin{vmatrix}x_1(t) & x_2 (t) \\ y_1(t) & y_2(t)\end{vmatrix}}\)
whose columns are the two vector functions
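As a concrete sketch, the Wronskian can be computed numerically. A minimal example in Python (assuming NumPy is available), using the two normal-mode solutions \(e^{3t}(1, 1)\) and \(e^{2t}(1, 2)\) that appear in later cards of this deck:

```python
import numpy as np

# Two 2-vector solutions (the normal modes used in later cards):
# x1(t) = e^{3t} (1, 1),  x2(t) = e^{2t} (1, 2)
def x1(t):
    return np.exp(3 * t) * np.array([1.0, 1.0])

def x2(t):
    return np.exp(2 * t) * np.array([1.0, 2.0])

def wronskian(t):
    # Determinant of the matrix whose columns are the two vector functions
    return np.linalg.det(np.column_stack([x1(t), x2(t)]))

# Analytically W(t) = 2 e^{5t} - e^{5t} = e^{5t}, which is never zero,
# so these two solutions are linearly independent.
print(wronskian(0.0))  # close to 1.0
```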
10 What does it mean when the Wronskian is \(0\) at a point?
10.1 Front
What does it mean when the Wronskian is \(0\) at a point?
\({\displaystyle W(\vb{x_1}, \vb{x_2}) (t_0) = 0}\)
10.2 Back
\({\displaystyle W(\vb{x_1}, \vb{x_2}) (t_0) = \begin{vmatrix}x_1(t_0) & x_2 (t_0) \\ y_1(t_0) & y_2(t_0)\end{vmatrix} = 0 \Leftrightarrow \vb{x_1}(t_0)}\) and \(\vb{x_2}(t_0)\) are dependent
11 What is the Wronskian vanishing theorem for linear systems?
11.1 Front
What is the Wronskian vanishing theorem for linear systems?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), suppose \(A\) is \(2 \cross 2\)
11.2 Back
On an interval \(I\) where the entries of \(A(t)\) are continuous, let \(\vb{x_1}\) and \(\vb{x_2}\) be two solutions to this linear system and \(W(t)\) their Wronskian
\({\displaystyle W(\vb{x_1}, \vb{x_2})(t) = \begin{vmatrix}x_1(t) & x_2 (t) \\ y_1(t) & y_2(t)\end{vmatrix}}\)
Then either:
- \(W(t) \equiv 0\) on \(I\), and \(\vb{x_1}\) and \(\vb{x_2}\) are linearly dependent on \(I\), or
- \(W(t)\) is never \(0\) on \(I\), and \(\vb{x_1}\) and \(\vb{x_2}\) are linearly independent on \(I\)
12 What is the matrix form of an inhomogeneous linear system?
12.1 Front
What is the matrix form of an inhomogeneous linear system?
12.2 Back
\({\displaystyle \dot{\vb{u}} = A(t) \vb{u} + \vb{F}(t)}\)
13 What is the general solution for this linear system?
13.1 Front
What is the general solution for this linear system?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\), where \(A\) is a \(2 \cross 2\) square matrix
13.2 Back
Its associated homogeneous linear system is \({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), whose solutions are \(\vb{x_h} = c_1 \vb{x_1} + c_2 \vb{x_2}\)
If \(\vb{x_p}\) is a solution to the inhomogeneous linear system, then the general solution to the system is
\({\displaystyle \vb{x} = \vb{x_p} + \vb{x_h}}\)
14 What is the existence and uniqueness theorem for this linear system?
14.1 Front
What is the existence and uniqueness theorem for this linear system?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\)
14.2 Back
We start with an initial time \(t_0\) and the initial value problem
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\), \(\vb{x}(t_0) = \vb{x_0}\)
If \(A(t)\) and \(\vb{F}(t)\) are continuous, then there exists a unique solution to the IVP
15 What is a fundamental matrix for this system?
15.1 Front
What is a fundamental matrix for this system?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\) where \(A(t)\) is \(2 \cross 2\) square matrix
15.2 Back
The fundamental matrix is the matrix whose columns are the 2 linearly independent solutions \(\vb{x_1}\) and \(\vb{x_2}\)
\({\displaystyle \Phi (t) = \begin{pmatrix}\vb{x_1} & \vb{x_2}\end{pmatrix} = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2\end{pmatrix}}\)
16 What is the general solution using fundamental matrix notation?
16.1 Front
What is the general solution using fundamental matrix notation?
Where \(\vb{x_1}\) and \(\vb{x_2}\) are the 2 linearly independent solutions
16.2 Back
General solution \({\displaystyle \vb{x}(t) = c_1 \begin{pmatrix}x_1 \\ y_1\end{pmatrix} + c_2 \begin{pmatrix} x_2 \\ y_2\end{pmatrix} = \begin{pmatrix}x_1 & x_2 \\ y_1 & y_2\end{pmatrix} \begin{pmatrix}c_1 \\ c_2\end{pmatrix}}\)
which becomes using the fundamental matrix
\({\displaystyle \vb{x} = \Phi(t) \vb{c}}\), where \(\vb{c} = \begin{pmatrix}c_1 \\ c_2\end{pmatrix}\)
Note that the vector \(\vb{c}\) must be written on the right, even though the \(c_i\) are usually written on the left when they are the coefficients of the solutions \(\vb{x_i}\)
17 How can we solve this IVP using fundamental matrix notation?
17.1 Front
How can we solve this IVP using fundamental matrix notation?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), where \(\vb{x}(t_0) = \vb{x_0}\)
Show the process
17.2 Back
The general solution is \({\displaystyle \vb{x} = \Phi(t) \vb{c}}\), where \(\vb{c} = \begin{pmatrix}c_1 \\ c_2\end{pmatrix}\)
We choose the \(\vb{c}\) so that the initial condition is satisfied, substituting \(t_0\) gives us the matrix equation for \(\vb{c}\)
\({\displaystyle \Phi(t_0) \vb{c} = \vb{x_0}}\)
Since the determinant \({\displaystyle \abs{\Phi(t_0)}}\) is the value at \(t_0\) of the Wronskian of \(\vb{x_1}\) and \(\vb{x_2}\), it is non-zero since the two solutions are linearly independent. Therefore the inverse matrix exists and the matrix equation above can be solved for \(\vb{c}\)
\({\displaystyle \vb{c} = \Phi(t_0)^{-1} \vb{x_0}}\)
Using the above value of \(\vb{c}\), the solution to the IVP can now be written
\({\displaystyle \vb{x} = \Phi(t) \Phi(t_0)^{-1} \vb{x_0}}\)
Note that when the solution is written in this form, it’s “obvious” that \(\vb{x}(t_0) = \vb{x_0}\), i.e., that the initial condition in the IVP is satisfied
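A small numerical sketch of this process in Python (assuming NumPy/SciPy), using the fundamental matrix built from the normal modes that appear in the last cards of this deck, whose coefficient matrix is \(A = \begin{pmatrix}4 & -1 \\ 2 & 1\end{pmatrix}\):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, -1.0], [2.0, 1.0]])

def Phi(t):
    # Fundamental matrix whose columns are the normal modes
    # e^{3t}(1, 1) and e^{2t}(1, 2)
    return np.column_stack([np.exp(3 * t) * np.array([1.0, 1.0]),
                            np.exp(2 * t) * np.array([1.0, 2.0])])

t0 = 0.0
x0 = np.array([1.0, 0.0])

c = np.linalg.solve(Phi(t0), x0)         # solve Phi(t0) c = x0
t = 0.7
x = Phi(t) @ c                           # x = Phi(t) Phi(t0)^{-1} x0

# Cross-check against the matrix exponential solution e^{At} x0
print(np.allclose(x, expm(A * t) @ x0))  # True
```

Solving the linear system `Phi(t0) c = x0` is the numerical counterpart of forming \(\Phi(t_0)^{-1} \vb{x_0}\).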
18 Why can the determinant of a fundamental matrix never be 0?
18.1 Front
Why can the determinant of a fundamental matrix never be 0?
\({\displaystyle \abs{\Phi(t)}}\)
18.2 Back
Because it’s the value of the Wronskian of the linearly independent solutions \(\vb{x_i}\) of the linear system.
19 How many fundamental matrices can a linear system have?
19.1 Front
How many fundamental matrices can a linear system have?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)
19.2 Back
There is no unique fundamental matrix, since there are many ways to pick two independent solutions of \({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\) to form the columns of \(\Phi\)
20 How can we tell whether a matrix is a fundamental matrix for the system?
20.1 Front
How can we tell whether a matrix is a fundamental matrix for the system?
\({\displaystyle \dot{\vb{x}} = A \vb{x}}\)
20.2 Back
\(\Phi(t)\) is a fundamental matrix for the system if its determinant \({\displaystyle \abs{\Phi(t)}}\) is non-zero and it satisfies the matrix equation
\({\displaystyle \dot{\Phi} = A \Phi}\)
where \(\dot{\Phi}\) means that each entry of \(\Phi\) has been differentiated
21 How can we prove that this matrix equation is true?
21.1 Front
How can we prove that this matrix equation is true?
\({\displaystyle \dot{\Phi} = A \Phi}\), suppose \(A\) is a \(2 \cross 2\) square matrix
21.2 Back
This is true, if \(\Phi(t)\) is a fundamental matrix for the system \({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)
In this case, the determinant of the fundamental matrix is never 0, \(\abs{\Phi(t)} \neq 0\), so its columns \(\vb{x_1}\) and \(\vb{x_2}\) are linearly independent
Let \({\displaystyle \Phi = \begin{pmatrix}\vb{x_1} & \vb{x_2}\end{pmatrix}}\). According to the rules for matrix multiplication
\({\displaystyle \dot{\Phi} = A \Phi \implies \begin{pmatrix}\dot{\vb{x_1}} & \dot{\vb{x_2}}\end{pmatrix} = A \begin{pmatrix}\vb{x_1} & \vb{x_2}\end{pmatrix} = \begin{pmatrix}A \vb{x_1} & A \vb{x_2}\end{pmatrix}}\)
which shows that
\({\displaystyle \dot{\vb{x_1}} = A \vb{x_1}}\) and \({\displaystyle \dot{\vb{x_2}} = A \vb{x_2}}\)
this means that \(\vb{x_1}\) and \(\vb{x_2}\) are solutions to the system \({\displaystyle \dot{\vb{x}} = A \vb{x}}\)
22 Write this linear system in its matrix form
22.1 Front
Write this linear system in its matrix form
\({\displaystyle \dot{\vb{x}} = A \vb{x}}\)
22.2 Back
Let \(\Phi(t)\) be a fundamental matrix of this system
\({\displaystyle \dot{\Phi} = A \Phi}\)
23 What is the best choice of fundamental matrix for this system?
23.1 Front
What is the best choice of fundamental matrix for this system?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\) where \(A\) is a \(2 \cross 2\) square matrix, with initial condition \(\vb{x}(t_0) = \vb{x_0}\)
23.2 Back
The fundamental matrix has the form \({\displaystyle \Phi(t) = \begin{pmatrix}\vb{x_1} & \vb{x_2}\end{pmatrix} = \begin{pmatrix}x_1 & x_2 \\ y_1 & y_2\end{pmatrix}}\)
The solution to IVP is \(\vb{x} = \Phi(t) \Phi^{-1}(t_0) \vb{x_0}\)
There are 2 methods
- If the ODE has constant coefficients, and its eigenvalues are real and distinct
- The fundamental matrix would be the one whose columns are the normal modes
- Normal modes have the form \({\displaystyle \vb{x_i} = \vec{\alpha_i} e^{\lambda_i t}}\), \(i = 1,2\)
- Useful in showing how the solution depends on the initial conditions
- Pick \({\displaystyle \Phi(t_0) = I = \begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}}\)
- \({\displaystyle \vb{x_1}(t_0) = \begin{pmatrix}1 \\ 0\end{pmatrix}}\)
- \({\displaystyle \vb{x_2}(t_0) = \begin{pmatrix}0 \\ 1\end{pmatrix}}\)
- Since the \(\vb{x_i}(t)\) are uniquely determined by these initial conditions, the fundamental matrix \(\Phi(t)\) satisfying \(\Phi(t_0) = I\) is also unique
24 What is the normalized fundamental matrix?
24.1 Front
What is the normalized fundamental matrix?
\({\displaystyle \dot{\vb{u}} = A \vb{u}}\) at \(t = t_0\)
24.2 Back
The unique matrix \(\widetilde{\Phi}_{t_0}(t)\) satisfying:
\({\displaystyle \widetilde{\Phi}_{t_0}' = A \widetilde{\Phi}_{t_0}}\), \({\displaystyle \widetilde{\Phi}_{t_0} (t_0) = I}\)
is called the normalized fundamental matrix at \(t_0\) for \(A\)
\(\widetilde{\Phi}_{t_0}\) must be a fundamental matrix, whose determinant is never 0. This definition is consistent because \({\displaystyle \abs{\widetilde{\Phi}_{t_0}(t_0)} = \abs{I} = 1}\)
25 What does this symbol mean?
25.1 Front
What does this symbol mean?
\(\widetilde{\Phi}_0\)
25.2 Back
It’s the normalized fundamental matrix at \(t = 0\) for \(A\)
26 What is the solution of this system using the normalized fundamental matrix?
26.1 Front
What is the solution of this system using the normalized fundamental matrix?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), where \(\vb{x}(0) = \vb{x_0}\)
26.2 Back
\({\displaystyle \vb{x}(t) = \widetilde{\Phi}_0(t) \vb{x_0}}\)
27 How can we compute the normalized fundamental matrix?
27.1 Front
How can we compute the normalized fundamental matrix?
\(\widetilde{\Phi}_0 (t)\)
Show the process
27.2 Back
Matrix form for the solution to the IVP: \({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), \({\displaystyle \vb{x}(0) = \vb{x_0}}\),
- \({\displaystyle \vb{x} = \widetilde{\Phi}_0 \vb{x_0}}\)
- \({\displaystyle \vb{x} = \Phi(t) \Phi(0)^{-1} \vb{x_0}}\)
Find any fundamental matrix \(\Phi(t)\) and then
\({\displaystyle \widetilde{\Phi}_0(t) = \Phi(t) \Phi(0)^{-1}}\)
To verify this, we have to check that the matrix on the RHS satisfies the 2 conditions
- \({\displaystyle \widetilde{\Phi}_0' = A \widetilde{\Phi}_0}\)
- Use the rule for matrix differentiation
- \({\displaystyle \widetilde{\Phi}_0 (0) = I}\)
- Trivial to check: \({\displaystyle \widetilde{\Phi}_0(0) = \Phi(0) \Phi(0)^{-1} = I}\)
Since \(\Phi(t)\) is a fundamental matrix (any of them)
\({\displaystyle (\Phi(t) \Phi(0)^{-1})' = \Phi'(t)\Phi(0)^{-1} = A \Phi(t) \Phi(0)^{-1} = A(\Phi(t) \Phi(0)^{-1})}\)
showing that \(\Phi(t)\Phi(0)^{-1}\) also satisfies the first condition
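The two defining conditions can be verified numerically. A sketch in Python (assuming NumPy/SciPy), again using the normal modes from the later cards of this deck, whose coefficient matrix is \(A = \begin{pmatrix}4 & -1 \\ 2 & 1\end{pmatrix}\):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, -1.0], [2.0, 1.0]])

def Phi(t):
    # any fundamental matrix: columns e^{3t}(1, 1) and e^{2t}(1, 2)
    return np.column_stack([np.exp(3 * t) * np.array([1.0, 1.0]),
                            np.exp(2 * t) * np.array([1.0, 2.0])])

Phi0_inv = np.linalg.inv(Phi(0.0))

def N(t):
    return Phi(t) @ Phi0_inv                      # candidate normalized matrix

print(np.allclose(N(0.0), np.eye(2)))             # condition N(0) = I: True

t, h = 0.5, 1e-6
dN = (N(t + h) - N(t - h)) / (2 * h)              # central-difference derivative
print(np.allclose(dN, A @ N(t), atol=1e-4))       # condition N' = A N: True
```

Since both conditions hold, \(N(t)\) agrees with \(e^{At}\) (in SciPy, `expm(A * t)`), by uniqueness of the normalized fundamental matrix.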
28 What are the basic properties of fundamental matrix?
28.1 Front
What are the basic properties of fundamental matrix?
Fundamental matrix \(\Phi(t)\) of \({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)
28.2 Back
- \(\operatorname{det}(\Phi(t)) \neq 0\) for any \(t\)
- \({\displaystyle \Phi'(t) = A \Phi(t)}\), so its columns solve the system
29 What is the definition of the exponential matrix?
29.1 Front
What is the definition of the exponential matrix?
\({\displaystyle e^A}\)
29.2 Back
Given an \(n \cross n\) constant matrix \(A\), the exponential matrix \(e^A\) is the \(n \cross n\) matrix defined by
\({\displaystyle e^A = I + A + \frac{A^2}{2!} + \dots + \frac{A^n}{n!} + \dots}\)
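The series can be summed numerically and compared with a library implementation. A sketch in Python (assuming NumPy/SciPy; the sample matrix is an arbitrary illustration, not from the card):

```python
import numpy as np
from scipy.linalg import expm

def exp_series(A, terms=30):
    # e^A = I + A + A^2/2! + ... : accumulate A^n / n! term by term
    result = np.eye(len(A))
    term = np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n          # now term == A^n / n!
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # sample matrix: e^A is a rotation
print(np.allclose(exp_series(A), expm(A)))  # True
```

For this particular \(A\), the series sums to the rotation matrix \(\begin{pmatrix}\cos 1 & \sin 1 \\ -\sin 1 & \cos 1\end{pmatrix}\), illustrating that the series converges.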
30 What are the dimensions of the exponential matrix?
30.1 Front
What are the dimensions of the exponential matrix?
\(e^{At}\) where \(A\) is an \(n \cross n\) square matrix
30.2 Back
It’s an \(n \cross n\) matrix
31 What can we say about this expression?
31.1 Front
What can we say about this expression?
\({\displaystyle e^A = I + A + \frac{A^2}{2!} + \dots + \frac{A^n}{n!} + \dots}\)
31.2 Back
Each term on the RHS is an \(n \cross n\) matrix; adding up the \(ij\text{-th}\) entries of these matrices gives an infinite series whose sum is the \(ij\text{-th}\) entry of \(e^A\)
The series always converges
32 Expand this expression
32.1 Front
Expand this expression
\((At)^2\) where \(A\) is an \(n \cross n\) square matrix
32.2 Back
\({\displaystyle (At)^2 = At \cdot At = A \cdot A \cdot t^2 = A^2 t^2}\)
33 How could we solve this linear system?
33.1 Front
How could we solve this linear system?
\({\displaystyle \dot{\vb{x}} = A \vb{x}}\), where \(\vb{x}(0) = \vb{x_0}\) and \(A\) is a square constant matrix
33.2 Back
As \(A\) is a square constant matrix, we can use this theorem
- \({\displaystyle e^{At} = \widetilde{\Phi}_0 (t)}\) is the normalized fundamental matrix at \(0\)
- the unique solution to this IVP is \({\displaystyle \vb{x} = e^{At} \vb{x_0}}\)
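A quick numerical check of this theorem in Python (assuming NumPy/SciPy), using the constant matrix from the worked example later in this deck and comparing \(e^{At} \vb{x_0}\) with a direct numerical integration:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[1.0, 1.0], [-4.0, 1.0]])
x0 = np.array([1.0, 0.0])
t = 0.5

x_exact = expm(A * t) @ x0               # x = e^{At} x0

# Integrate x' = A x numerically and compare
sol = solve_ivp(lambda s, x: A @ x, (0.0, t), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(x_exact, sol.y[:, -1], atol=1e-6))  # True
```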
34 Could we use this normalized fundamental matrix for any linear system?
34.1 Front
Could we use this normalized fundamental matrix for any linear system?
\({\displaystyle \widetilde{\Phi}_0 = e^{At}}\), \({\displaystyle \dot{\vb{x}} = A \vb{x}}\)
34.2 Back
No, only when \(A\) is a constant square matrix
35 Why can we use this normalized fundamental matrix for this linear system?
35.1 Front
Why can we use this normalized fundamental matrix for this linear system?
\({\displaystyle \widetilde{\Phi}_0 = e^{At}}\) for the linear system \({\displaystyle \dot{\vb{x}} = A \vb{x}}\) where \(A\) is a constant square matrix
Prove it
35.2 Back
If \(A\) is constant, then \({\displaystyle e^{A \cdot 0} = I}\), so \(\widetilde{\Phi}_0(0) = I\)
Letting \(\Phi = e^{At}\), we must show that \(\Phi' = A \Phi\)
We assume that we can differentiate the series \({\displaystyle e^{At} = I + At + A^2 \frac{t^2}{2!} + \cdots + A^n \frac{t^n}{n!} + \cdots}\) term by term
We have for the individual terms
\({\displaystyle \dv{t} A^n \frac{t^n}{n!} = A^n \frac{t^{n-1}}{(n-1)!}}\)
since \(A^n\) is a constant matrix. Differentiating the series term by term then gives
\({\displaystyle \dv{\Phi}{t} = \dv{t} e^{At} = A + A^2 t + \cdots + A^n \frac{t^{n-1}}{(n-1)!} + \cdots = A e^{At} = A \Phi}\)
36 What is the series expansion for this exponential?
36.1 Front
What is the series expansion for this exponential?
\(e^{At}\) where \(A\) is a constant square matrix
36.2 Back
\({\displaystyle e^{At} = I + At + A^2 \frac{t^2}{2!} + \cdots + A^n \frac{t^n}{n!} + \cdots}\)
37 How can we compute this normalized fundamental matrix for a specific system?
37.1 Front
How can we compute this normalized fundamental matrix for a specific system?
\({\displaystyle \widetilde{\Phi}_0 = e^{At}}\)
37.2 Back
There are several techniques available
- In simple cases, it can be calculated directly as an infinite series of matrices
- It can always be calculated as the normalized fundamental matrix
- \({\displaystyle \widetilde{\Phi}_0(t) = \Phi(t) \Phi(0)^{-1}}\)
- Using the exponential law
- \({\displaystyle e^{(B + C)t} = e^{Bt} e^{Ct}}\) valid if \(BC = CB\)
- To use it, one looks for constant matrices \(B\) and \(C\) such that \(A = B + C\), \(BC = CB\) and \(e^{Bt}\) and \(e^{Ct}\) are computable
- \(e^{At} = e^{Bt}e^{Ct}\)
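The exponential-law technique can be sketched numerically in Python (assuming NumPy/SciPy). For the matrix \(A = \begin{pmatrix}1 & 1 \\ -4 & 1\end{pmatrix}\) from the worked example below, take \(B = I\) and \(C = A - I\); \(B\) commutes with every matrix, so the law applies:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0], [-4.0, 1.0]])
B = np.eye(2)                    # B = I commutes with everything
C = A - B                        # C = [[0, 1], [-4, 0]], with C^2 = -4 I

t = 0.3
lhs = expm(A * t)
rhs = expm(B * t) @ expm(C * t)  # e^{(B+C)t} = e^{Bt} e^{Ct} since BC = CB
print(np.allclose(lhs, rhs))     # True
```

Here \(e^{Bt} = e^t I\) and, because \(C^2 = -4I\), \(e^{Ct} = \cos(2t)\, I + \frac{\sin(2t)}{2}\, C\), which reproduces the closed form of \(e^{At}\) quoted in the worked example below.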
38 What is the formula for a particular solution to inhomogeneous systems?
38.1 Front
What is the formula for a particular solution to inhomogeneous systems?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\)
38.2 Back
\({\displaystyle \vb{x_p} = \Phi \cdot \biggl(\int \Phi^{-1} \cdot \vb{F} \dd{t} + \vb{C} \biggr)}\)
39 How can we derive a formula for solving an inhomogeneous linear system?
39.1 Front
How can we derive a formula for solving an inhomogeneous linear system?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\)
39.2 Back
General homogeneous solution: \({\displaystyle \vb{x} = \Phi \cdot \vb{c}}\) for a constant vector \(\vb{c}\)
Make \(\vb{c}\) variable \(\leadsto\) trial solution \({\displaystyle \vb{x} = \Phi \cdot \vb{v}(t)}\)
Plug this into \({\displaystyle \vb{x}' = A \vb{x} + \vb{F}(t) \implies \Phi' \cdot \vb{v} + \Phi \cdot \vb{v}' = A \Phi \cdot \vb{v} + \vb{F}}\)
Now substitute for \({\displaystyle \Phi' = A \Phi}\):
- \({\displaystyle \implies A \Phi \cdot \vb{v} + \Phi \vb{v}' = A \Phi \cdot \vb{v} + \vb{F}}\)
- \({\displaystyle \implies \Phi \cdot \vb{v}' = \vb{F}}\)
- \({\displaystyle \implies \vb{v}' = \Phi^{-1} \cdot \vb{F}}\)
- \({\displaystyle \implies \vb{v} = \int \Phi^{-1} \cdot \vb{F} \dd{t} + \vb{C}}\)
- \({\displaystyle \implies \vb{x} = \Phi \cdot \vb{v} = \Phi \biggl( \int \Phi^{-1} \cdot \vb{F} \dd{t} + \vb{C} \biggr)}\)
40 What is the definite integral version of variation of parameters for solving an inhomogeneous linear system?
40.1 Front
What is the definite integral version of variation of parameters for solving an inhomogeneous linear system?
\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\)
40.2 Back
\({\displaystyle \vb{x}(t) = \Phi(t) \biggl( \int_{t_0}^t \Phi^{-1}(u) \cdot \vb{F}(u) \dd{u} + \vb{C} \biggr)}\), where \({\displaystyle \vb{C} = \Phi^{-1}(t_0) \cdot \vb{x}(t_0)}\)
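The definite-integral formula can be evaluated numerically. A sketch in Python (assuming NumPy/SciPy), with \(\Phi(t) = e^{At}\), \(t_0 = 0\), \(\vb{x}(t_0) = \vb{0}\) (so \(\vb{C} = \vb{0}\)), and the constant input from the worked example below:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

A = np.array([[1.0, 1.0], [-4.0, 1.0]])
F = np.array([5.0, 10.0])                 # constant forcing term
t = 0.4

# x(t) = Phi(t) * ( integral_0^t Phi(u)^{-1} F du + C ), with C = 0
# since x(0) = 0; here Phi(t) = e^{At}, so Phi(u)^{-1} = e^{-Au}.
integral, _ = quad_vec(lambda u: expm(-A * u) @ F, 0.0, t)
x_vp = expm(A * t) @ integral

# Cross-check by integrating x' = A x + F directly
sol = solve_ivp(lambda s, x: A @ x + F, (0.0, t), [0.0, 0.0],
                rtol=1e-10, atol=1e-12)
print(np.allclose(x_vp, sol.y[:, -1], atol=1e-5))  # True
```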
41 Get general solution for this linear system
41.1 Front
Get general solution for this linear system
\({\displaystyle \vb{u}’ = A \vb{u} + \begin{pmatrix}5 \\ 10\end{pmatrix}}\) where \({\displaystyle A = \begin{pmatrix}1 & 1 \\ -4 & 1\end{pmatrix}}\)
Give the general solution if \({\displaystyle e^{At} = \begin{pmatrix}e^t \cos(2t) & 1/2 e^t \sin(2t) \\ -2e^t \sin(2t) & e^t \cos(2t)\end{pmatrix}}\)
And solve for \(\vb{u}(0) = \vb{0}\)
41.2 Back
We guess a constant solution because the input vector is constant, \(\vb{u} = \begin{pmatrix}k_1 \\ k_2\end{pmatrix}\)
Substituting this into the DE gives
\({\displaystyle \begin{pmatrix}0 \\ 0\end{pmatrix} = A \begin{pmatrix}k_1 \\ k_2\end{pmatrix} + \begin{pmatrix}5 \\ 10\end{pmatrix}}\)
This implies
\({\displaystyle \vb{u} = - A^{-1} \begin{pmatrix}5 \\ 10\end{pmatrix} = - \frac{1}{5} \begin{pmatrix}1 & -1 \\ 4 & 1\end{pmatrix} \begin{pmatrix}5 \\ 10\end{pmatrix} = \begin{pmatrix}1 \\ -6\end{pmatrix}}\)
Since all homogeneous solutions are of the form \({\displaystyle e^{At} \begin{pmatrix}a \\ b\end{pmatrix}}\), the general solution is then given by
\({\displaystyle e^{At} \begin{pmatrix}a \\ b\end{pmatrix} + \begin{pmatrix}1 \\ -6\end{pmatrix} = \begin{pmatrix}e^t (a \cos(2t) + b/2 \sin(2t)) + 1 \\ e^t(b \cos(2t) - 2a \sin(2t)) - 6\end{pmatrix}}\)
To find the particular solution with \(\vb{u}(0) = \vb{0}\), we plug \(t = 0\) into the expression, and get that
\({\displaystyle \begin{pmatrix}a + 1 \\ b - 6\end{pmatrix} = \begin{pmatrix}0 \\ 0\end{pmatrix}}\)
so the desired solution is given by the constants \(a = -1\) and \(b=6\)
\({\displaystyle \vb{u} = \begin{pmatrix}e^t (- \cos(2t) + 3 \sin(2t)) + 1 \\ e^t (6 \cos(2t) + 2 \sin(2t)) - 6\end{pmatrix}}\)
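The final answer can be sanity-checked numerically in Python (assuming NumPy/SciPy), by confirming that the claimed \(\vb{u}(t)\) satisfies the initial condition and matches a direct numerical integration of the DE:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, 1.0], [-4.0, 1.0]])
F = np.array([5.0, 10.0])

def u(t):
    # the claimed solution with u(0) = 0
    return np.array([np.exp(t) * (-np.cos(2 * t) + 3 * np.sin(2 * t)) + 1,
                     np.exp(t) * (6 * np.cos(2 * t) + 2 * np.sin(2 * t)) - 6])

print(np.allclose(u(0.0), [0.0, 0.0]))    # initial condition: True

sol = solve_ivp(lambda t, x: A @ x + F, (0.0, 1.0), u(0.0),
                rtol=1e-10, atol=1e-12)
print(np.allclose(u(1.0), sol.y[:, -1], atol=1e-5))  # True
```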
42 What is the particular solution for this linear system?
42.1 Front
What is the particular solution for this linear system?
\({\displaystyle \dot{\vb{u}} = A \vb{u} + \vb{q}}\) where \(\vb{q}\) is constant
42.2 Back
Checking that \(A\) is invertible, we guess that \(\vb{u_p}\) is a constant vector \({\displaystyle \begin{pmatrix}k_1 \\ k_2\end{pmatrix}}\)
Substituting into the matrix equation
\({\displaystyle \dot{\vb{u_p}} = \begin{pmatrix}0 \\ 0\end{pmatrix} = A \vb{u_p} + \vb{q}}\)
\({\displaystyle \vb{u_p} = - A^{-1} \vb{q}}\)
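For instance, with the matrix and constant input from the previous card's worked example, a short computation in Python (assuming NumPy) reproduces the constant particular solution:

```python
import numpy as np

A = np.array([[1.0, 1.0], [-4.0, 1.0]])  # invertible constant matrix
q = np.array([5.0, 10.0])                 # constant input

u_p = -np.linalg.solve(A, q)              # u_p = -A^{-1} q
print(u_p)                                 # approximately [1, -6]

# check: A u_p + q = 0
print(np.allclose(A @ u_p + q, 0.0))       # True
```

Using `np.linalg.solve` avoids forming \(A^{-1}\) explicitly while computing the same vector.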
43 Suppose that these are both normal mode solutions; find the solution of this IVP
43.1 Front
Suppose that these are both normal mode solutions; find the solution of this IVP
- \({\displaystyle e^{3t} \begin{pmatrix}1 \\ 1\end{pmatrix}}\)
- \({\displaystyle e^{2t} \begin{pmatrix}1 \\ 2\end{pmatrix}}\)
satisfy the equation \({\displaystyle \dot{\vb{u}} = A \vb{u}}\)
Find the solution \(\vb{u}\) such that \({\displaystyle \vb{u}(0) = \begin{pmatrix}1 \\ 0\end{pmatrix}}\)
Without using fundamental matrix
43.2 Back
\({\displaystyle \vb{u} = c_1 e^{3t} \begin{pmatrix}1 \\ 1\end{pmatrix} + c_2 e^{2t} \begin{pmatrix}1 \\ 2\end{pmatrix}}\)
\({\displaystyle \vb{u}(0) = \begin{pmatrix}1 \\ 0\end{pmatrix} = c_1 \begin{pmatrix}1 \\ 1\end{pmatrix} + c_2 \begin{pmatrix}1 \\ 2\end{pmatrix} = \begin{pmatrix}c_1 + c_2 \\ c_1 + 2 c_2\end{pmatrix}}\)
Thus \(c_1 = 2\) and \(c_2 = -1\)
44 Suppose that these are both normal mode solutions, find the matrix \(A\)
44.1 Front
Suppose that these are both normal mode solutions, find the matrix A
- \({\displaystyle e^{3t} \begin{pmatrix}1 \\ 1\end{pmatrix}}\)
- \({\displaystyle e^{2t} \begin{pmatrix}1 \\ 2\end{pmatrix}}\)
satisfy the equation \({\displaystyle \dot{\vb{u}} = A \vb{u}}\)
Find the matrix \(A\)
44.2 Back
The matrix \(A\) has eigenvalues \(3\) and \(2\), with eigenvectors \({\displaystyle \begin{pmatrix}1 \\ 1\end{pmatrix}}\) and \({\displaystyle \begin{pmatrix}1 \\ 2\end{pmatrix}}\)
The \({\displaystyle \begin{pmatrix}a & b \\ c & d\end{pmatrix} \begin{pmatrix}1 \\ 1\end{pmatrix} = 3 \begin{pmatrix}1 \\ 1\end{pmatrix}}\) and \({\displaystyle \begin{pmatrix}a & b \\ c & d\end{pmatrix} \begin{pmatrix}1 \\ 2\end{pmatrix} = 2 \begin{pmatrix}1 \\ 2\end{pmatrix}}\)
The top entries give the equations \(a + b = 3\) and \(a + 2b = 2\), which imply \(a = 4\), \(b = -1\)
The bottom entries give the equations \(c + d = 3\) and \(c + 2d = 4\), which imply \(c=2\), \(d=1\)
\({\displaystyle A = \begin{pmatrix}4 & -1 \\ 2 & 1\end{pmatrix}}\)
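Equivalently, \(A\) can be recovered by diagonalization, \(A = S \Lambda S^{-1}\), where the columns of \(S\) are the eigenvectors and \(\Lambda\) holds the eigenvalues. A sketch in Python (assuming NumPy):

```python
import numpy as np

S = np.array([[1.0, 1.0],      # columns: eigenvectors (1, 1) and (1, 2)
              [1.0, 2.0]])
Lam = np.diag([3.0, 2.0])      # eigenvalues 3 and 2

A = S @ Lam @ np.linalg.inv(S)  # A = S Lambda S^{-1}
print(A)                         # approximately [[4, -1], [2, 1]]
```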