Captured On
[2020-02-20 Thu 21:51]
Source
Matrix Exponentials | Unit IV: First-order Systems | Differential Equations | Mathematics | MIT OpenCourseWare

1 For this linear system, how many solutions are there?

1.1 Front

For this linear system, how many solutions are there?

\({\displaystyle \dot{\vb{x}} = A \vb{x}}\), where \(A\) is an \(n \cross n\) matrix

1.2 Back

There are \(n\) linearly independent solutions for the system

2 What is the form of the linear system when the coefficients are functions of the independent variable \(t\)?

2.1 Front

What is the form of the linear system when the coefficients are functions of the independent variable \(t\)?

for \(n \cross n\) linear homogeneous systems

2.2 Back

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)

3 When can we say that the \(n\) solutions are linearly independent?

3.1 Front

When can we say that the \(n\) solutions are linearly independent?

Solutions for the system \({\displaystyle \dot{\vb{x}} = A \vb{x}}\)

3.2 Back

If \(c_1 \vb{x_1}(t) + \dots + c_n \vb{x_n}(t) = 0\) for all \(t\) \(\implies\) all \(c_i = 0\)

4 What does this expression mean?

4.1 Front

What does this expression mean?

\(c_1 \vb{x_1}(t) + \dots + c_n \vb{x_n}(t) \equiv 0\)

4.2 Back

It’s identically \(0\), meaning zero for all \(t\). The symbol \(\not\equiv\) means not identically \(0\): there is some \(t\)-value for which it is not zero

5 What is a fundamental set of solutions for this linear system?

5.1 Front

What is a fundamental set of solutions for this linear system?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)

And what is the general solution?

5.2 Back

It is the set of solutions \(\vb{x_1}, \dots, \vb{x_n}\) which is linearly independent for this system.

The general solution is \(\vb{x} = c_1 \vb{x_1} + \dots + c_n \vb{x_n}\) using the superposition principle because it’s a linear system.

6 What is the existence and uniqueness theorem for linear systems?

6.1 Front

What is the existence and uniqueness theorem for linear systems?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)

6.2 Back

If the entries of the square matrix \(A(t)\) are continuous on an open interval \(I\) containing \(t_0\), then the initial value problem

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), \(\vb{x}(t_0) = \vb{x_0}\)

has one and only one solution \(\vb{x}(t)\) on the interval \(I\)

7 What is the linear independence theorem for linear systems?

7.1 Front

What is the linear independence theorem for linear systems?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)

7.2 Back

Suppose that the entries of \(A(t)\) are continuous on an open interval \(I\).

Let \(\vb{x_1}(t)\) and \(\vb{x_2}(t)\) be two solutions to the linear system on the interval \(I\), such that at some point \(t_0\) in \(I\), the vectors \(\vb{x_1}(t_0)\) and \(\vb{x_2}(t_0)\) are linearly independent.

Then

  1. the solutions \(\vb{x_1}(t)\) and \(\vb{x_2}(t)\) are linearly independent on \(I\), and
  2. the vectors \(\vb{x_1}(t_1)\) and \(\vb{x_2}(t_1)\) are linearly independent at every point \(t_1\) of \(I\)

8 What is the general solution theorem for linear systems?

8.1 Front

What is the general solution theorem for linear systems?

\({\displaystyle \dot{\vb{x}} = A \vb{x}}\), where \(A\) is an \(n \cross n\) matrix

8.2 Back

  1. This system has \(n\) linearly independent solutions
  2. If \(\vb{x_1}, \dots, \vb{x_n}\) are \(n\) linearly independent solutions, then every solution \(\vb{x}\) can be written in this form for some choice of \(c_i\)

\({\displaystyle \vb{x} = c_1 \vb{x_1} + \dots + c_n \vb{x_n}}\)

9 What is the Wronskian of these 2 vector functions?

9.1 Front

What is the Wronskian of these 2 vector functions?

Let \(\vb{x_1}(t)\) and \(\vb{x_2}(t)\) be two 2-vector functions

9.2 Back

We define their Wronskian to be the determinant

\({\displaystyle W(\vb{x_1}, \vb{x_2})(t) = \begin{vmatrix}x_1(t) & x_2 (t) \\ y_1(t) & y_2(t)\end{vmatrix}}\)

whose columns are the two vector functions
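
The determinant above can be evaluated numerically as a sanity check. This is a minimal sketch (not part of the original card), using an assumed example pair \(\vb{x_1}(t) = (\cos t, -\sin t)\), \(\vb{x_2}(t) = (\sin t, \cos t)\):

```python
import numpy as np

# Assumed example pair of 2-vector functions:
# x1(t) = (cos t, -sin t), x2(t) = (sin t, cos t)
def wronskian(t):
    # Columns of the matrix are the two vector functions evaluated at t
    M = np.array([[np.cos(t), np.sin(t)],
                  [-np.sin(t), np.cos(t)]])
    return np.linalg.det(M)

# For this pair, W(t) = cos^2 t + sin^2 t = 1 for every t
print(wronskian(0.0), wronskian(1.7))
```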

10 What does it mean that the Wronskian is 0 at this point?

10.1 Front

What does it mean that the Wronskian is 0 at this point?

\({\displaystyle W(\vb{x_1}, \vb{x_2}) (t_0) = 0}\)

10.2 Back

\({\displaystyle W(\vb{x_1}, \vb{x_2}) (t_0) = \begin{vmatrix}x_1(t_0) & x_2 (t_0) \\ y_1(t_0) & y_2(t_0)\end{vmatrix} = 0 \Leftrightarrow \vb{x_1}(t_0)}\) and \(\vb{x_2}(t_0)\) are dependent

11 What is the Wronskian vanishing theorem for linear systems?

11.1 Front

What is the Wronskian vanishing theorem for linear systems?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), suppose \(A\) is \(2 \cross 2\)

11.2 Back

On an interval \(I\) where the entries of \(A(t)\) are continuous, let \(\vb{x_1}\) and \(\vb{x_2}\) be two solutions to this linear system and \(W(t)\) their Wronskian

\({\displaystyle W(\vb{x_1}, \vb{x_2})(t) = \begin{vmatrix}x_1(t) & x_2 (t) \\ y_1(t) & y_2(t)\end{vmatrix}}\)

Then either:

  1. \(W(t) \equiv 0\) on \(I\), and \(\vb{x_1}\) and \(\vb{x_2}\) are linearly dependent on \(I\), or
  2. \(W(t)\) is never \(0\) on \(I\), and \(\vb{x_1}\) and \(\vb{x_2}\) are linearly independent on \(I\)

12 What is the matrix form of an inhomogeneous linear system?

12.1 Front

What is the matrix form of an inhomogeneous linear system?

12.2 Back

\({\displaystyle \dot{\vb{u}} = A(t) \vb{u} + \vb{F}(t)}\)

13 What is the general solution for this linear system?

13.1 Front

What is the general solution for this linear system?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\), where \(A\) is a \(2 \cross 2\) square matrix

13.2 Back

Its homogeneous linear system is \({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), whose solutions are \(\vb{x_h} = c_1 \vb{x_1} + c_2 \vb{x_2}\)

And \(\vb{x_p}\) is a solution to the inhomogeneous linear system, so the general solution to the system is

\({\displaystyle \vb{x} = \vb{x_p} + \vb{x_h}}\)

14 What is the existence and uniqueness theorem for this linear system?

14.1 Front

What is the existence and uniqueness theorem for this linear system?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\)

14.2 Back

We start with an initial time \(t_0\) and the initial value problem

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\), \(\vb{x}(t_0) = \vb{x_0}\)

If \(A(t)\) and \(\vb{F}(t)\) are continuous, then there exists a unique solution to the IVP

15 What is a fundamental matrix for this system?

15.1 Front

What is a fundamental matrix for this system?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\) where \(A(t)\) is a \(2 \cross 2\) square matrix

15.2 Back

The fundamental matrix is the matrix whose columns are the 2 linearly independent solutions \(\vb{x_1}\) and \(\vb{x_2}\)

\({\displaystyle \Phi (t) = \begin{pmatrix}\vb{x_1} & \vb{x_2}\end{pmatrix} = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2\end{pmatrix}}\)

16 What is the general solution using fundamental matrix notation?

16.1 Front

What is the general solution using fundamental matrix notation?

Where \(\vb{x_1}\) and \(\vb{x_2}\) are the 2 linearly independent solutions

16.2 Back

General solution \({\displaystyle \vb{x}(t) = c_1 \begin{pmatrix}x_1 \\ y_1\end{pmatrix} + c_2 \begin{pmatrix} x_2 \\ y_2\end{pmatrix} = \begin{pmatrix}x_1 & x_2 \\ y_1 & y_2\end{pmatrix} \begin{pmatrix}c_1 \\ c_2\end{pmatrix}}\)

which becomes using the fundamental matrix

\({\displaystyle \vb{x} = \Phi(t) \vb{c}}\), where \(\vb{c} = \begin{pmatrix}c_1 \\ c_2\end{pmatrix}\)

Note that the vector \(\vb{c}\) must be written on the right, even though the \(c\)’s are usually written on the left when they are the coefficients of the solutions \(\vb{x_i}\)

17 How can we solve this IVP using fundamental matrix notation?

17.1 Front

How can we solve this IVP using fundamental matrix notation?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), where \(\vb{x}(t_0) = \vb{x_0}\)

Show the process

17.2 Back

The general solution is \({\displaystyle \vb{x} = \Phi(t) \vb{c}}\), where \(\vb{c} = \begin{pmatrix}c_1 \\ c_2\end{pmatrix}\)

We choose the \(\vb{c}\) so that the initial condition is satisfied; substituting \(t_0\) gives us the matrix equation for \(\vb{c}\)

\({\displaystyle \Phi(t_0) \vb{c} = \vb{x_0}}\)

Since the determinant \({\displaystyle \abs{\Phi(t_0)}}\) is the value at t0t_0 of the Wronskian of \(\vb{x_1}\) and \(\vb{x_2}\), it is non-zero since the two solutions are linearly independent. Therefore the inverse matrix exists and the matrix equation above can be solved for \(\vb{c}\)

\({\displaystyle \vb{c} = \Phi(t_0)^{-1} \vb{x_0}}\)

Using the above value of \(\vb{c}\), the solution to the IVP can now be written

\({\displaystyle \vb{x} = \Phi(t) \Phi(t_0)^{-1} \vb{x_0}}\)

Note that when the solution is written in this form, it’s “obvious” that \(\vb{x}(t_0) = \vb{x_0}\), i.e., that the initial condition in the IVP is satisfied
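
The process above can be sketched numerically. A minimal check (not from the original card), assuming the example system \(\dot{\vb{x}} = A\vb{x}\) with \(A = \begin{pmatrix}0 & 1 \\ -1 & 0\end{pmatrix}\), whose solutions \((\cos t, -\sin t)\) and \((\sin t, \cos t)\) form a fundamental matrix:

```python
import numpy as np

# Assumed example: x' = A x with A = [[0, 1], [-1, 0]];
# x1 = (cos t, -sin t) and x2 = (sin t, cos t) are independent solutions
def Phi(t):
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

t0 = 0.5
x0 = np.array([2.0, -1.0])

def x(t):
    # x = Phi(t) Phi(t0)^{-1} x0
    return Phi(t) @ np.linalg.inv(Phi(t0)) @ x0

# The initial condition holds "obviously": Phi(t0) Phi(t0)^{-1} = I
print(x(t0))
```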

18 Why can the determinant of a fundamental matrix never be 0?

18.1 Front

Why can the determinant of a fundamental matrix never be 0?

\({\displaystyle \abs{\Phi(t)}}\)

18.2 Back

Because it’s the value of the Wronskian of the linearly independent solutions \(\vb{x_i}\) of the linear system.

19 How many fundamental matrices can a linear system have?

19.1 Front

How many fundamental matrices can a linear system have?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)

19.2 Back

There is no unique fundamental matrix, since there are many ways to pick two independent solutions of \({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\) to form the columns of \(\Phi\)

20 How can we tell whether a matrix is a fundamental matrix of the system?

20.1 Front

How can we tell whether a matrix is a fundamental matrix of the system?

\({\displaystyle \dot{\vb{x}} = A \vb{x}}\)

20.2 Back

\(\Phi(t)\) is a fundamental matrix for the system if its determinant \({\displaystyle \abs{\Phi(t)}}\) is non-zero and it satisfies the matrix equation

\({\displaystyle \dot{\Phi} = A \Phi}\)

where \(\dot{\Phi}\) means that each entry of \(\Phi\) has been differentiated

21 How can we prove that this matrix equation is true?

21.1 Front

How can we prove that this matrix equation is true?

\({\displaystyle \dot{\Phi} = A \Phi}\), suppose \(A\) is a \(2 \cross 2\) square matrix

21.2 Back

This is true if \(\Phi(t)\) is a fundamental matrix for the system \({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)

In this case, the determinant of the fundamental matrix is never 0, \(\abs{\Phi(t)} \neq 0\), so its columns \(\vb{x_1}\) and \(\vb{x_2}\) are linearly independent

Let \({\displaystyle \Phi = \begin{pmatrix}\vb{x_1} & \vb{x_2}\end{pmatrix}}\). According to the rules for matrix multiplication,

\({\displaystyle \dot{\Phi} = A \Phi \implies \begin{pmatrix}\dot{\vb{x_1}} & \dot{\vb{x_2}}\end{pmatrix} = A \begin{pmatrix}\vb{x_1} & \vb{x_2}\end{pmatrix} = \begin{pmatrix}A \vb{x_1} & A \vb{x_2}\end{pmatrix}}\)

which shows that

\({\displaystyle \dot{\vb{x_1}} = A \vb{x_1}}\) and \({\displaystyle \dot{\vb{x_2}} = A \vb{x_2}}\)

this means that \(\vb{x_1}\) and \(\vb{x_2}\) are solutions to the system \({\displaystyle \dot{\vb{x}} = A \vb{x}}\)

22 Write this linear system in fundamental matrix form

22.1 Front

Write this linear system in fundamental matrix form

\({\displaystyle \dot{\vb{x}} = A \vb{x}}\)

22.2 Back

Let \(\Phi(t)\) be a fundamental matrix of this system

\({\displaystyle \dot{\Phi} = A \Phi}\)

23 What is the best choice for fundamental matrix from this system?

23.1 Front

What is the best choice for fundamental matrix from this system?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\) where \(A\) is a \(2 \cross 2\) square matrix, and initial condition \(\vb{x}(t_0) = \vb{x_0}\)

23.2 Back

The fundamental matrix has the form \({\displaystyle \Phi(t) = \begin{pmatrix}\vb{x_1} & \vb{x_2}\end{pmatrix} = \begin{pmatrix}x_1 & x_2 \\ y_1 & y_2\end{pmatrix}}\)

The solution to the IVP is \(\vb{x} = \Phi(t) \Phi^{-1}(t_0) \vb{x_0}\)

There are 2 methods

  1. If the ODE has constant coefficients, and its eigenvalues are real and distinct
    • The fundamental matrix would be the one whose columns are the normal modes
    • Normal modes have the form \({\displaystyle \vb{x_i} = \vec{\alpha_i} e^{\lambda_i t}}\), \(i = 1,2\)
  2. Useful in showing how the solution depends on the initial conditions
    • Pick \({\displaystyle \Phi(t_0) = I = \begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}}\)
    • \({\displaystyle \vb{x_1}(t_0) = \begin{pmatrix}1 \\ 0\end{pmatrix}}\)
    • \({\displaystyle \vb{x_2}(t_0) = \begin{pmatrix}0 \\ 1\end{pmatrix}}\)
    • Since the \(\vb{x_i}(t)\) are uniquely determined by these initial conditions, the fundamental matrix \(\Phi(t)\) satisfying \(\Phi(t_0) = I\) is also unique

24 What is the normalized fundamental matrix?

24.1 Front

What is the normalized fundamental matrix?

\({\displaystyle \dot{\vb{u}} = A \vb{u}}\) at \(t = t_0\)

24.2 Back

The unique matrix \(\widetilde{\Phi}_{t_0}(t)\) satisfying:

\({\displaystyle \widetilde{\Phi}_{t_0}' = A \widetilde{\Phi}_{t_0}}\), \({\displaystyle \widetilde{\Phi}_{t_0} (t_0) = I}\)

is called the normalized fundamental matrix at \(t_0\) for \(A\)

\(\widetilde{\Phi}_{t_0}\) must be a fundamental matrix, whose determinant is never 0; this holds at \(t_0\) because \({\displaystyle \abs{\widetilde{\Phi}_{t_0}(t_0)} = \abs{I} = 1}\)

25 What does this symbol mean?

25.1 Front

What does this symbol mean?

\(\widetilde{\Phi}_0\)

25.2 Back

It’s the normalized fundamental matrix at \(t = 0\) for \(A\)

26 What is the solution of this system using the normalized fundamental matrix?

26.1 Front

What is the solution of this system using the normalized fundamental matrix?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), where \(\vb{x}(0) = \vb{x_0}\)

26.2 Back

\({\displaystyle \vb{x}(t) = \widetilde{\Phi}_0(t) \vb{x_0}}\)

27 How can we compute the normalized fundamental matrix?

27.1 Front

How can we compute the normalized fundamental matrix?

\(\widetilde{\Phi}_0 (t)\)

Show the process

27.2 Back

Matrix form for the solution to the IVP: \({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\), \({\displaystyle \vb{x}(0) = \vb{x_0}}\),

  • \({\displaystyle \vb{x} = \widetilde{\Phi}_0 \vb{x_0}}\)
  • \({\displaystyle \vb{x} = \Phi(t) \Phi(0)^{-1} \vb{x_0}}\)

Find any fundamental matrix \(\Phi(t)\), and then

\({\displaystyle \widetilde{\Phi}_0(t) = \Phi(t) \Phi(0)^{-1}}\)

To verify this, we have to check that the matrix on the RHS satisfies the 2 conditions:

  • Φ~0=AΦ~0{\displaystyle \widetilde{\Phi}_0’ = A \widetilde{\Phi}_0}
    • Use of rule for matrix differentiation
  • Φ~0(0)=I{\displaystyle \widetilde{\Phi}_0 (0) = I}
    • Trivial to check Φ~0=Φ(0)Φ(0)1=I{\displaystyle \widetilde{\Phi}_0 = \Phi(0) \Phi(0)^{-1} = I}

Since \(\Phi(t)\) is a fundamental matrix (any of them),

\({\displaystyle (\Phi(t) \Phi(0)^{-1})' = \Phi'(t)\Phi(0)^{-1} = A \Phi(t) \Phi(0)^{-1} = A(\Phi(t) \Phi(0)^{-1})}\)

showing that \(\Phi(t)\Phi(0)^{-1}\) also satisfies the first condition

28 What are the basic properties of fundamental matrix?

28.1 Front

What are the basic properties of fundamental matrix?

Fundamental matrix \(\Phi(t)\) of \({\displaystyle \dot{\vb{x}} = A(t) \vb{x}}\)

28.2 Back

  1. \(\operatorname{det}(\Phi(t)) \neq 0\) for any \(t\)
  2. \({\displaystyle \Phi'(t) = A \Phi(t)}\), so its columns solve the system

29 What is the definition of the exponential matrix?

29.1 Front

What is the definition of the exponential matrix?

\({\displaystyle e^A}\)

29.2 Back

Given an \(n \cross n\) constant matrix \(A\), the exponential matrix \(e^A\) is the \(n \cross n\) matrix defined by

\({\displaystyle e^A = I + A + \frac{A^2}{2!} + \dots + \frac{A^n}{n!} + \dots}\)
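
The series can be summed numerically and compared against a library implementation. A minimal sketch (not part of the original card), assuming numpy/scipy and an arbitrary example matrix:

```python
import numpy as np
from scipy.linalg import expm  # reference implementation of the matrix exponential

def exp_series(A, terms=30):
    """Truncate e^A = I + A + A^2/2! + ... + A^n/n! + ... after `terms` terms."""
    n = A.shape[0]
    total = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ A / k          # term is now A^k / k!
        total = total + term
    return total

A = np.array([[1.0, 1.0], [0.0, 2.0]])   # assumed example matrix
print(np.allclose(exp_series(A), expm(A)))
```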

30 What are the dimensions of the exponential matrix?

30.1 Front

What are the dimensions of the exponential matrix?

\(e^{At}\) where \(A\) is an \(n \cross n\) square matrix

30.2 Back

It’s an \(n \cross n\) matrix

31 What can we say about this expression?

31.1 Front

What can we say about this expression?

\({\displaystyle e^A = I + A + \frac{A^2}{2!} + \dots + \frac{A^n}{n!} + \dots}\)

31.2 Back

Each term of the RHS is an \(n \cross n\) matrix. Adding up the \(ij\)-th entry of each of these matrices gives you an infinite series whose sum is the \(ij\)-th entry of \(e^A\)

The series always converges

32 Expand this expression

32.1 Front

Expand this expression

\((At)^2\) where \(A\) is an \(n \cross n\) square matrix

32.2 Back

\({\displaystyle (At)^2 = At \cdot At = A \cdot A \cdot t^2 = A^2 t^2}\)

33 How could we solve this linear system?

33.1 Front

How could we solve this linear system?

\({\displaystyle \dot{\vb{x}} = A \vb{x}}\), where \(\vb{x}(0) = \vb{x_0}\) and \(A\) is a square constant matrix

33.2 Back

As \(A\) is a square constant matrix, we can use this theorem:

  • \({\displaystyle e^{At} = \widetilde{\Phi}_0 (t)}\) is the normalized fundamental matrix at \(0\)
  • the unique solution to this IVP is \({\displaystyle \vb{x} = e^{At} \vb{x_0}}\)
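
A minimal numeric sketch of this theorem (assuming scipy and an arbitrary example constant matrix): the matrix-exponential solution agrees with a direct numerical integration of the system.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed constant example matrix
x0 = np.array([1.0, 0.0])

def x(t):
    return expm(A * t) @ x0   # x = e^{At} x0

# Cross-check against a numerical solve of x' = A x, x(0) = x0
sol = solve_ivp(lambda t, y: A @ y, (0.0, 1.0), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(x(1.0), sol.y[:, -1], atol=1e-6))
```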

34 Could we use this normalized fundamental matrix for any linear system?

34.1 Front

Could we use this normalized fundamental matrix for any linear system?

\({\displaystyle \widetilde{\Phi}_0 = e^{At}}\), \({\displaystyle \dot{\vb{x}} = A\vb{x}}\)

34.2 Back

No, only when \(A\) is a constant square matrix

35 Why can we use this normalized fundamental matrix for this linear system?

35.1 Front

Why can we use this normalized fundamental matrix for this linear system?

\({\displaystyle \widetilde{\Phi}_0 = e^{At}}\) for the linear system \({\displaystyle \dot{\vb{x}} = A \vb{x}}\) where \(A\) is a constant square matrix

Prove it

35.2 Back

If \(A\) is constant, \({\displaystyle e^{A \cdot 0} = I}\), so \(\widetilde{\Phi}_0(0) = I\)

Letting \(\Phi = e^{At}\), we must show that \(\Phi' = A \Phi\)

We assume that we can differentiate the series \({\displaystyle e^{At} = I + At + A^2 \frac{t^2}{2!} + \cdots + A^n \frac{t^n}{n!} + \cdots}\) term by term

We have for the individual terms

\({\displaystyle \dv{t} A^n \frac{t^n}{n!} = A^n \frac{t^{n-1}}{(n-1)!}}\)

since \(A^n\) is a constant matrix. Differentiating the series term by term then gives

\({\displaystyle \dv{\Phi}{t} = \dv{t} e^{At} = A + A^2 t + \cdots + A^n \frac{t^{n-1}}{(n-1)!} + \cdots = A e^{At} = A \Phi}\)

36 What is the series expansion for this exponential?

36.1 Front

What is the series expansion for this exponential?

\(e^{At}\) where \(A\) is a constant square matrix

36.2 Back

\({\displaystyle e^{At} = I + At + A^2 \frac{t^2}{2!} + \cdots + A^n \frac{t^n}{n!} + \cdots}\)

37 How can we compute this normalized fundamental matrix for a specific system?

37.1 Front

How can we compute this normalized fundamental matrix for a specific system?

\({\displaystyle \widetilde{\Phi}_0 = e^{At}}\)

37.2 Back

Several techniques are available:

  1. In simple cases, it can be calculated directly as an infinite series of matrices
  2. It can always be calculated as the normalized fundamental matrix
    • \({\displaystyle \widetilde{\Phi}_0(t) = \Phi(t) \Phi(0)^{-1}}\)
  3. Using the exponential law
    • \({\displaystyle e^{(B + C)t} = e^{Bt} e^{Ct}}\), valid if \(BC = CB\)
    • To use it, one looks for constant matrices \(B\) and \(C\) such that \(A = B + C\), \(BC = CB\), and \(e^{Bt}\) and \(e^{Ct}\) are computable
    • \(e^{At} = e^{Bt}e^{Ct}\)
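
The exponential-law method can be illustrated numerically. A minimal sketch, assuming the example split \(A = B + C\) with \(B = 2I\) and \(C\) nilpotent (they commute because \(B\) is a multiple of \(I\)):

```python
import numpy as np
from scipy.linalg import expm

t = 0.7
B = 2.0 * np.eye(2)                       # e^{Bt} = e^{2t} I
C = np.array([[0.0, 1.0], [0.0, 0.0]])    # C^2 = 0, so e^{Ct} = I + Ct
A = B + C                                 # BC = CB since B is a multiple of I

eBt = np.exp(2.0 * t) * np.eye(2)
eCt = np.eye(2) + C * t
# Exponential law: e^{At} = e^{Bt} e^{Ct}
print(np.allclose(eBt @ eCt, expm(A * t)))
```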

38 What is the formula for a particular solution to inhomogeneous systems?

38.1 Front

What is the formula for a particular solution to inhomogeneous systems?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\)

38.2 Back

\({\displaystyle \vb{x_p} = \Phi \cdot \biggl(\int \Phi^{-1} \cdot \vb{F} \dd{t} + \vb{C} \biggr)}\)

39 How can we derive a formula for solving an inhomogeneous linear system?

39.1 Front

How can we derive a formula for solving an inhomogeneous linear system?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\)

39.2 Back

General homogeneous solution: \({\displaystyle \vb{x} = \Phi \cdot \vb{c}}\) for a constant vector \(\vb{c}\)

Make \(c\) variable \(\leadsto\) trial solution \({\displaystyle \vb{x} = \Phi \cdot \vb{v}(t)}\)

Plug this into \({\displaystyle \vb{x}’ = A \vb{x} + \vb{F}(t) \implies \Phi’ \cdot \vb{v} + \Phi \cdot \vb{v}’ = A \Phi \cdot \vb{v} + \vb{F}}\)

Now substitute \({\displaystyle \Phi' = A \Phi}\):

  • \({\displaystyle \implies A \Phi \cdot \vb{v} + \Phi \vb{v}’ = A \Phi \cdot \vb{v} + \vb{F}}\)
  • \({\displaystyle \implies \Phi \cdot \vb{v}’ = \vb{F}}\)
  • \({\displaystyle \implies \vb{v}’ = \Phi^{-1} \cdot \vb{F}}\)
  • \({\displaystyle \implies \vb{v} = \int \Phi^{-1} \cdot \vb{F} \dd{t} + \vb{C}}\)
  • \({\displaystyle \implies \vb{x} = \Phi \cdot \vb{v} = \Phi \biggl( \int \Phi^{-1} \cdot \vb{F} \dd{t} + \vb{C} \biggr)}\)

40 What is the definite integral version of variation of parameters for solving an inhomogeneous linear system?

40.1 Front

What is the definite integral version of variation of parameters for solving an inhomogeneous linear system?

\({\displaystyle \dot{\vb{x}} = A(t) \vb{x} + \vb{F}(t)}\)

40.2 Back

\({\displaystyle \vb{x}(t) = \Phi(t) \biggl( \int_{t_0}^t \Phi^{-1}(u) \cdot \vb{F}(u) \dd{u} + \vb{C} \biggr)}\), where \({\displaystyle \vb{C} = \Phi^{-1}(t_0) \cdot \vb{x}(t_0)}\)

41 Get general solution for this linear system

41.1 Front

Get general solution for this linear system

\({\displaystyle \vb{u}' = A \vb{u} + \begin{pmatrix}5 \\ 10\end{pmatrix}}\) where \({\displaystyle A = \begin{pmatrix}1 & 1 \\ -4 & 1\end{pmatrix}}\)

Give the general solution if \({\displaystyle e^{At} = \begin{pmatrix}e^t \cos(2t) & \frac{1}{2} e^t \sin(2t) \\ -2e^t \sin(2t) & e^t \cos(2t)\end{pmatrix}}\)

And solve for \(\vb{u}(0) = \vb{0}\)

41.2 Back

We guess a constant solution because the input vector is constant, \(\vb{u} = \begin{pmatrix}k_1 \\ k_2\end{pmatrix}\)

Substituting this into the DE gives

\({\displaystyle \begin{pmatrix}0 \\ 0\end{pmatrix} = A \begin{pmatrix}k_1 \\ k_2\end{pmatrix} + \begin{pmatrix}5 \\ 10\end{pmatrix}}\)

This implies

\({\displaystyle \vb{u} = - A^{-1} \begin{pmatrix}5 \\ 10\end{pmatrix} = - \frac{1}{5} \begin{pmatrix}1 & -1 \\ 4 & 1\end{pmatrix} \begin{pmatrix}5 \\ 10\end{pmatrix} = \begin{pmatrix}1 \\ -6\end{pmatrix}}\)

Since all homogeneous solutions are of the form \({\displaystyle e^{At} \begin{pmatrix}a \\ b\end{pmatrix}}\), the general solution is then given by

\({\displaystyle e^{At} \begin{pmatrix}a \\ b\end{pmatrix} + \begin{pmatrix}1 \\ -6\end{pmatrix} = \begin{pmatrix}e^t (a \cos(2t) + \frac{b}{2} \sin(2t)) + 1 \\ e^t(b \cos(2t) - 2a \sin(2t)) - 6\end{pmatrix}}\)

To find the particular solution with \(\vb{u}(0) = \vb{0}\), we plug \(t = 0\) into the expression, and get that

\({\displaystyle \begin{pmatrix}a + 1 \\ b - 6\end{pmatrix} = \begin{pmatrix}0 \\ 0\end{pmatrix}}\)

so the desired solution is given by the constants \(a = -1\) and \(b = 6\)

\({\displaystyle \vb{u} = \begin{pmatrix}e^t (- \cos(2t) + 3 \sin(2t)) + 1 \\ e^t (6 \cos(2t) + 2 \sin(2t)) - 6\end{pmatrix}}\)
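<br/>
The worked answer can be sanity-checked numerically (a sketch, not part of the original card): verify that \(\vb{u}(0) = \vb{0}\) and that \(\dot{\vb{u}} = A\vb{u} + (5, 10)^T\), the latter via a finite difference.

```python
import numpy as np

A = np.array([[1.0, 1.0], [-4.0, 1.0]])
F = np.array([5.0, 10.0])

def u(t):
    e = np.exp(t)
    return np.array([e * (-np.cos(2*t) + 3*np.sin(2*t)) + 1,
                     e * (6*np.cos(2*t) + 2*np.sin(2*t)) - 6])

# Check the initial condition, and the ODE at an arbitrary time
t, h = 0.9, 1e-6
du = (u(t + h) - u(t - h)) / (2 * h)      # central-difference derivative
print(np.allclose(u(0.0), 0.0))
print(np.allclose(du, A @ u(t) + F, atol=1e-4))
```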

42 What is the particular solution for this linear system?

42.1 Front

What is the particular solution for this linear system?

\({\displaystyle \dot{\vb{u}} = A \vb{u} + \vb{q}}\) where \(\vb{q}\) is constant

42.2 Back

Checking that \(A\) is invertible, we guess that \(\vb{u_p}\) is a constant \({\displaystyle \begin{pmatrix}k_1 \\ k_2\end{pmatrix}}\)

Substituting into the matrix equation

\({\displaystyle \dot{\vb{u_p}} = \begin{pmatrix}0 \\ 0\end{pmatrix} = A \vb{u_p} + \vb{q}}\)

\({\displaystyle \vb{u_p} = - A^{-1} \vb{q}}\)
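
A one-line numeric check of \(\vb{u_p} = -A^{-1}\vb{q}\), reusing the matrix from the previous card as an assumed example:

```python
import numpy as np

A = np.array([[1.0, 1.0], [-4.0, 1.0]])   # assumed invertible example matrix
q = np.array([5.0, 10.0])

u_p = -np.linalg.solve(A, q)   # solve() avoids forming A^{-1} explicitly
# u_p is constant, so u_p' = 0 and the system requires A u_p + q = 0
print(u_p, np.allclose(A @ u_p + q, 0.0))
```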

43 Suppose that these are both normal modes; find the solution of this IVP

43.1 Front

Suppose that these are both normal modes; find the solution of this IVP

  • \({\displaystyle e^{3t} \begin{pmatrix}1 \\ 1\end{pmatrix}}\)
  • \({\displaystyle e^{2t} \begin{pmatrix}1 \\ 2\end{pmatrix}}\)

satisfy the equation \({\displaystyle \dot{\vb{u}} = A \vb{u}}\)

Find the solution \(\vb{u}\) such that \({\displaystyle \vb{u}(0) = \begin{pmatrix}1 \\ 0\end{pmatrix}}\)

Without using the fundamental matrix

43.2 Back

\({\displaystyle \vb{u} = c_1 e^{3t} \begin{pmatrix}1 \\ 1\end{pmatrix} + c_2 e^{2t} \begin{pmatrix}1 \\ 2\end{pmatrix}}\)

\({\displaystyle \vb{u}(0) = \begin{pmatrix}1 \\ 0\end{pmatrix} = c_1 \begin{pmatrix}1 \\ 1\end{pmatrix} + c_2 \begin{pmatrix}1 \\ 2\end{pmatrix} = \begin{pmatrix}c_1 + c_2 \\ c_1 + 2 c_2\end{pmatrix}}\)

Thus \(c_1 = 2\) and \(c_2 = -1\)

44 Suppose that these are both normal modes; find the matrix \(A\)

44.1 Front

Suppose that these are both normal modes; find the matrix \(A\)

  • \({\displaystyle e^{3t} \begin{pmatrix}1 \\ 1\end{pmatrix}}\)
  • \({\displaystyle e^{2t} \begin{pmatrix}1 \\ 2\end{pmatrix}}\)

satisfy the equation \({\displaystyle \dot{\vb{u}} = A \vb{u}}\)

Find the matrix AA

44.2 Back

The matrix \(A\) has eigenvalues \(3\) and \(2\), with eigenvectors \({\displaystyle \begin{pmatrix}1 \\ 1\end{pmatrix}}\) and \({\displaystyle \begin{pmatrix}1 \\ 2\end{pmatrix}}\)

Then \({\displaystyle \begin{pmatrix}a & b \\ c & d\end{pmatrix} \begin{pmatrix}1 \\ 1\end{pmatrix} = 3 \begin{pmatrix}1 \\ 1\end{pmatrix}}\) and \({\displaystyle \begin{pmatrix}a & b \\ c & d\end{pmatrix} \begin{pmatrix}1 \\ 2\end{pmatrix} = 2 \begin{pmatrix}1 \\ 2\end{pmatrix}}\)

The top entries give the equations \(a + b = 3\) and \(a + 2b = 2\), which imply \(a = 4\), \(b = -1\)

The bottom entries give the equations \(c + d = 3\) and \(c + 2d = 4\), which imply \(c = 2\), \(d = 1\)

\({\displaystyle A = \begin{pmatrix}4 & -1 \\ 2 & 1\end{pmatrix}}\)
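
The recovered matrix can be verified against the two eigenpairs with a quick numpy check:

```python
import numpy as np

A = np.array([[4.0, -1.0], [2.0, 1.0]])
v1 = np.array([1.0, 1.0])   # eigenvector for eigenvalue 3
v2 = np.array([1.0, 2.0])   # eigenvector for eigenvalue 2

print(np.allclose(A @ v1, 3 * v1), np.allclose(A @ v2, 2 * v2))
```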