- Captured On
- Source
- Matrix Methods: Eigenvalues and Normal Modes | Unit IV: First-order Systems | Differential Equations | Mathematics | MIT OpenCourseWare
1 What is the trace of a square matrix?
1.1 Front
What is the trace of a square matrix?
For example, \({\displaystyle \operatorname{tr}\begin{pmatrix}a & b \\ c & d\end{pmatrix}}\)
1.2 Back
It’s the sum of the elements on the main diagonal; it’s denoted \(\operatorname{tr}(A)\):
\({\displaystyle \operatorname{tr}\begin{pmatrix}a & b \\ c & d\end{pmatrix} = a + d}\)
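As a quick check (a minimal NumPy sketch, assuming NumPy is available; the matrix is an example):

```python
import numpy as np

A = np.array([[1, 3],
              [1, -1]])
# Trace = sum of the main-diagonal entries: 1 + (-1)
print(np.trace(A))  # 0
```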
2 What is the difference between the determinant and a matrix?
2.1 Front
What is the difference between the determinant and a matrix?
2.2 Back
The determinant is a number; the matrix is the square array of numbers from which it is computed
3 What does the phrase “the first row of the determinant” really mean?
3.1 Front
What does the phrase “the first row of the determinant” really mean?
3.2 Back
It really means the first row of the corresponding matrix. Since we sometimes write a matrix between vertical lines to indicate its determinant, phrases like this one come up.
4 What are the solutions for this equation?
4.1 Front
What are the solutions for this equation?
\(A \vb{x} = \vb{0}\), where \(A\) is a square matrix
4.2 Back
This always has the solution \(\vb{x} = \vb{0}\), which we call the trivial solution
Also, there is a theorem
\({\displaystyle A \vb{x} = \vb{0} \text{ has a nontrivial solution} \Leftrightarrow \operatorname{det}(A) = 0}\)
in which case we say that \(A\) is singular
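A small SymPy sketch of the theorem (assuming SymPy; the matrix is an assumed singular example):

```python
import sympy as sp

A = sp.Matrix([[1, 2],
               [2, 4]])  # rows are multiples, so det(A) = 0
print(A.det())           # 0 -> A is singular
print(A.nullspace())     # [Matrix([[-2], [1]])]: a nontrivial solution of A x = 0
```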
5 What does linear independence of vectors mean?
5.1 Front
What does linear independence of vectors mean?
For two vectors
5.2 Back
This just means they are not zero and not multiples of each other
Let \(\vb{a} = \ev{a_1, a_2}\) and \(\vb{b} = \ev{b_1, b_2}\) be 2-vectors, and \(A\) the square matrix having these vectors for its rows (or columns)
\({\displaystyle \vb{a}, \vb{b} \text{ are linearly independent} \Leftrightarrow \operatorname{det}(A) \neq 0}\)
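A quick NumPy check of this criterion (a minimal sketch; the two vectors are assumed examples):

```python
import numpy as np

# Rows of M are the two vectors a and b
a = np.array([3, 1])
b = np.array([-1, 1])
M = np.array([a, b])
print(np.linalg.det(M))  # 4.0, nonzero -> a and b are linearly independent
```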
6 What must \(\vb{v_{1}}\) and \(\vb{v_{2}}\) be to form any \(\vb{w}\)?
6.1 Front
What must \(\vb{v_{1}}\) and \(\vb{v_{2}}\) be to form any \(\vb{w}\)?
\(\vb{w} = c_1 \vb{v_1} + c_2 \vb{v_2}\)
6.2 Back
The vectors \(\vb{v_1}\) and \(\vb{v_2}\) must be linearly independent
7 What form of trial solution could these modal solutions have come from?
7.1 Front
What form of trial solution could these modal solutions have come from?
\({\displaystyle e^{2t} \begin{pmatrix}3 \\ 1\end{pmatrix}}\) and \({\displaystyle e^{-2t} \begin{pmatrix}-1 \\ 1\end{pmatrix}}\)
7.2 Back
You could look for solutions of the form \({\displaystyle e^{\lambda t} \begin{pmatrix}a_1 \\ a_2 \end{pmatrix}}\)
8 How could we solve this system of equations?
8.1 Front
How could we solve this system of equations?
- \(\lambda a_1 = a_1 + 3a_2\)
- \(\lambda a_2 = a_1 - a_2\)
8.2 Back
These are a pair of nonlinear equations in three variables. The trick in solving them is to look at them as a pair of linear equations in the unknowns \(a_i\), with \(\lambda\) viewed as a parameter.
\begin{align*} (1 - \lambda) a_1 + 3 a_2 &= 0\\ a_1 + (-1 - \lambda) a_2 &= 0 \end{align*}
In this form, we recognize them as forming a square system of homogeneous linear equations.
\(A \vb{a} = \vb{0}\), where \({\displaystyle A = \begin{pmatrix}1 - \lambda & 3 \\ 1 & -1 - \lambda \end{pmatrix}}\)
There is a nontrivial solution if and only if \(\operatorname{det}(A) = 0\)
\({\displaystyle \begin{vmatrix}1 - \lambda & 3 \\ 1 & -1 - \lambda \end{vmatrix} = 0 \implies \lambda^2 - 4 = 0}\), so if \(\lambda = \pm 2\), there is a nontrivial solution \(\vb{a}\)
Thus, we have two linear systems (checked in the sketch after this list)
- \(\lambda = 2\)
- \(-a_1 + 3 a_2 = 0\)
- \(a_1 - 3 a_2 = 0\)
- Setting \(a_1 = 3 \implies a_2 = 1\)
- \(\lambda = -2\)
- \(3 a_1 + 3 a_2 = 0\)
- \(a_1 + a_2 = 0\)
- Setting \(a_1 = 1 \implies a_2 = -1\)
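A minimal SymPy sketch (assuming SymPy is available) that reproduces these eigenvalues and eigenvectors:

```python
import sympy as sp

A = sp.Matrix([[1, 3],
               [1, -1]])
# eigenvects() returns (eigenvalue, multiplicity, [eigenvectors])
for lam, mult, vecs in A.eigenvects():
    print(lam, [list(v) for v in vecs])
# -2 [[-1, 1]]   (any nonzero multiple works, e.g. (1, -1))
#  2 [[3, 1]]
```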
9 We get this system of equations when solving a DE; why are the equations linearly dependent?
9.1 Front
We get this system of equations when solving a DE; why are the equations linearly dependent?
- \(\lambda = 2\)
- \(-a_1 + 3 a_2 = 0\)
- \(a_1 - 3 a_2 = 0\)
- Setting \(a_1 = 3 \implies a_2 = 1\)
- \(\lambda = -2\)
- \(3 a_1 + 3 a_2 = 0\)
- \(a_1 + a_2 = 0\)
- Setting \(a_1 = 1 \implies a_2 = -1\)
9.2 Back
They are linearly dependent because we treated \(\lambda\) as a parameter in the two nonlinear equations and chose it so that the determinant of the resulting linear system vanishes. If the two equations were linearly independent, the system would have only the trivial solution.
All of our effort has been to locate the two values of \(\lambda\) for which this will not be so. The dependency of the two equations is thus a check on the correctness of the value of \(\lambda\)
10 Write the modal solutions for this DE
10.1 Front
Write the modal solutions for this DE
\({\displaystyle \begin{pmatrix}x' \\ y'\end{pmatrix} = \begin{pmatrix}1 & 3 \\ 1 & -1\end{pmatrix} \begin{pmatrix}x \\ y\end{pmatrix}}\)
- \(\lambda = 2\)
- \(-a_1 + 3 a_2 = 0\)
- \(a_1 - 3 a_2 = 0\)
- Setting \(a_1 = 3 \implies a_2 = 1\)
- \(\lambda = -2\)
- \(3 a_1 + 3 a_2 = 0\)
- \(a_1 + a_2 = 0\)
- Setting \(a_1 = 1 \implies a_2 = -1\)
10.2 Back
We get two solutions
\({\displaystyle e^{2t} \begin{pmatrix}3 \\ 1\end{pmatrix}}\) and \({\displaystyle e^{-2t} \begin{pmatrix}1 \\ -1\end{pmatrix}}\)
We can use superposition to get a general solution: multiply each modal solution by an arbitrary constant and sum them. We can do that because we set one of the \(a_i\) to an arbitrary value, and the other is then determined by the system.
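As a check (a minimal SymPy sketch, assuming SymPy), both modal solutions satisfy \(\vb{x}' = A\vb{x}\):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 3],
               [1, -1]])
x1 = sp.exp(2*t) * sp.Matrix([3, 1])    # modal solution for lambda = 2
x2 = sp.exp(-2*t) * sp.Matrix([1, -1])  # modal solution for lambda = -2
# Both residuals should be the zero vector
print(sp.simplify(x1.diff(t) - A*x1))   # Matrix([[0], [0]])
print(sp.simplify(x2.diff(t) - A*x2))   # Matrix([[0], [0]])
```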
11 How could we write this expression as a homogeneous square system?
11.1 Front
How could we write this expression as a homogeneous square system?
\({\displaystyle \lambda \begin{pmatrix}a_1 \\ a_2\end{pmatrix} = \begin{pmatrix}1 & 3 \\ 1 & -1\end{pmatrix} \begin{pmatrix}a_1 \\ a_2\end{pmatrix}}\)
11.2 Back
If we rewrite it as \({\displaystyle \begin{pmatrix}\lambda & 0 \\ 0 & \lambda\end{pmatrix} \begin{pmatrix}a_1 \\ a_2\end{pmatrix} = \begin{pmatrix}1 & 3 \\ 1 & -1\end{pmatrix} \begin{pmatrix}a_1 \\ a_2\end{pmatrix}}\)
We can subtract the left side from the right
\({\displaystyle \begin{pmatrix}1 - \lambda & 3 \\ 1 & -1 - \lambda \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix}0 \\ 0\end{pmatrix}}\)
The trick therefore is to replace the scalar \(\lambda\) by the diagonal matrix \(\lambda I\)
12 How could we combine the two sides of this expression by subtraction?
12.1 Front
How could we combine the two sides of this expression by subtraction?
\({\displaystyle \lambda \begin{pmatrix}a_1 \\ a_2\end{pmatrix} = \begin{pmatrix}a & b \\ c & d\end{pmatrix} \begin{pmatrix} a_1 \\ a_2\end{pmatrix}}\)
where \(\lambda\) is a scalar constant
12.2 Back
We can replace the scalar \(\lambda\) by the diagonal matrix \(\lambda I\). This gives
\({\displaystyle \begin{pmatrix}\lambda & 0 \\ 0 & \lambda\end{pmatrix} \begin{pmatrix}a_1 \\ a_2\end{pmatrix} = \begin{pmatrix}a & b \\ c & d\end{pmatrix} \begin{pmatrix} a_1 \\ a_2\end{pmatrix}}\)
So now we can subtract the LHS from the RHS using the distributive law for matrix addition and multiplication
\({\displaystyle \begin{pmatrix}a - \lambda & b \\ c & d- \lambda\end{pmatrix} \begin{pmatrix}a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix}0 \\ 0\end{pmatrix}}\)
13 What is the characteristic equation of the matrix A?
13.1 Front
What is the characteristic equation of the matrix A?
\({\displaystyle A = \begin{pmatrix}a & b \\ c & d\end{pmatrix}}\)
Explain how to get it
13.2 Back
It’s the equation for computing the \(\lambda\) values for which this square homogeneous system has a nontrivial solution
\({\displaystyle \begin{pmatrix}a - \lambda & b \\ c & d- \lambda\end{pmatrix} \begin{pmatrix}a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix}0 \\ 0\end{pmatrix}}\)
Thus, \({\displaystyle \begin{vmatrix}A - \lambda I\end{vmatrix} = 0}\):
\({\displaystyle \begin{vmatrix}a - \lambda & b \\ c & d - \lambda\end{vmatrix} = 0}\)
The characteristic equation of the matrix is
\({\displaystyle p_A (\lambda) = \lambda^2 - (a + d)\lambda + (ad - bc) = \lambda^2 - \operatorname{tr}(A) \lambda + \operatorname{det}(A) = 0}\)
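A small SymPy sketch (assuming SymPy) confirming the characteristic polynomial:

```python
import sympy as sp

a, b, c, d, lam = sp.symbols('a b c d lambda')
A = sp.Matrix([[a, b],
               [c, d]])
# charpoly gives lambda**2 - (a + d)*lambda + (a*d - b*c),
# i.e. lambda**2 - tr(A)*lambda + det(A)
print(A.charpoly(lam).as_expr())
```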
14 What are the characteristic values of the matrix A?
14.1 Front
What are the characteristic values of the matrix A?
\({\displaystyle A = \begin{pmatrix}a & b \\ c & d\end{pmatrix}}\)
14.2 Back
They are the values of \(\lambda\) that are roots of its characteristic equation
\({\displaystyle \lambda^2 - \operatorname{tr}(A)\lambda + \operatorname{det}(A) = 0}\)
They are also called the eigenvalues
15 How could we get the eigenvectors for this matrix A?
15.1 Front
How could we get the eigenvectors for this matrix A?
\({\displaystyle A = \begin{pmatrix}a & b \\ c & d\end{pmatrix}}\)
15.2 Back
When you have the eigenvalues of the matrix \(A\), you can compute the nontrivial solutions of the system \((A - \lambda I) \vb{v} = \vb{0}\)
For each of the two eigenvalues (real and distinct) you get a dependent system of equations (one equation will be a constant multiple of the other), because these \(\lambda\) values make \(\operatorname{det}(A - \lambda I) = 0\)
- \((a- \lambda_1) a_1 + b a_2 = 0\)
- \(c a_1 + (d - \lambda_1) a_2 = 0\)
The eigenvector is a column vector with the values \({\displaystyle \begin{pmatrix}a_1 \\ a_2 \end{pmatrix}}\)
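Numerically (a minimal NumPy sketch, assuming NumPy), np.linalg.eig returns eigenvalues and eigenvectors at once:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [1.0, -1.0]])
evals, evecs = np.linalg.eig(A)   # eigenvectors are the columns of evecs
print(evals)                      # e.g. [ 2. -2.] (order may vary)
for i in range(2):
    v = evecs[:, i]
    # Each column satisfies A v = lambda v
    print(np.allclose(A @ v, evals[i] * v))  # True
```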
16 What is an eigenline?
16.1 Front
What is an eigenline?
16.2 Back
If \(\vb{v}\) is an eigenvector for \(\lambda\), then so is \(c \vb{v}\) for any real constant \(c \neq 0\). The line through the origin and \(\vb{v}\) is called the eigenline
17 Can we get solutions for any linear system of DEs using eigenvectors?
17.1 Front
Can we get solutions for any linear system of DEs using eigenvectors?
\({\displaystyle \dot{\vb{u}} = A \vb{u}}\), where \(A\) is a square matrix \(N \cross N\)
17.2 Back
Yes; the eigenvalues come from the equation \({\displaystyle \begin{vmatrix} A - \lambda I\end{vmatrix} = 0}\), and the eigenvectors from solving \((A - \lambda I)\vb{v} = \vb{0}\) for each eigenvalue
18 What are the normal modes and general solution from these eigenvalues and eigenvectors?
18.1 Front
What are the normal modes and general solution from these eigenvalues and eigenvectors?
- \({\displaystyle \lambda_1 = -1}\), whose eigenvector is \({\displaystyle \begin{pmatrix}1 \\ 1\end{pmatrix}}\)
- \({\displaystyle \lambda_2 = 2}\), whose eigenvector is \({\displaystyle \begin{pmatrix} 1 \\ 4\end{pmatrix}}\)
18.2 Back
The normal modes are \({\displaystyle e^{-t} \begin{pmatrix}1 \\ 1\end{pmatrix}}\) and \({\displaystyle e^{2t} \begin{pmatrix}1 \\ 4\end{pmatrix}}\)
And the general solution is
\({\displaystyle \vb{u}(t) = c_1 e^{-t} \begin{pmatrix}1 \\ 1\end{pmatrix} + c_2 e^{2t} \begin{pmatrix}1 \\ 4\end{pmatrix}}\)
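To check this with SymPy (a sketch; the matrix \(A\) below is an assumed example constructed to have exactly these eigenpairs):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
# Assumed matrix with eigenpairs (-1, (1,1)) and (2, (1,4))
A = sp.Matrix([[-2, 1],
               [-4, 3]])
u = c1*sp.exp(-t)*sp.Matrix([1, 1]) + c2*sp.exp(2*t)*sp.Matrix([1, 4])
# The general solution satisfies u' = A u for all c1, c2
print(sp.simplify(u.diff(t) - A*u))  # Matrix([[0], [0]])
```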
19 Given a complex eigenvalue, what are the eigenvectors like?
19.1 Front
Given a complex eigenvalue, what are the eigenvectors like?
\({\displaystyle \lambda = a \pm bi}\)
19.2 Back
We find the corresponding eigenvector \(\vb{v}\) by solving the equations
- \((a - \lambda)a_1 + ba_2 = 0\)
- \(c a_1 + (d - \lambda)a_2 = 0\)
for its components \(a_1\) and \(a_2\). Since \(\lambda\) is complex, the \(a_i\) will also be complex, and therefore the eigenvector \(\vb{v}\) corresponding to \(\lambda\) will have complex components.
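For instance (a NumPy sketch; the matrix is an assumed example with eigenvalues \(1 \pm 2i\)):

```python
import numpy as np

# Characteristic equation: lambda**2 - 2*lambda + 5 = 0 -> 1 +/- 2i
A = np.array([[1.0, -2.0],
              [2.0, 1.0]])
evals, evecs = np.linalg.eig(A)
print(evals)        # [1.+2.j 1.-2.j] (order may vary)
print(evecs[:, 0])  # the components are complex, as expected
```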
20 What is the form of the solution for a complex eigenvalue?
20.1 Front
What is the form of the solution for a complex eigenvalue?
\(\lambda = a \pm bi\)
20.2 Back
We get a complex solution with the form
\({\displaystyle \vb{x} = e^{(a + ib)t} \vb{v}}\), where \(\vb{v}\) is the corresponding eigenvector
21 Given this system, what are the real solutions?
21.1 Front
Given this system, what are the real solutions?
\({\displaystyle \dot{\vb{x}} = A \vb{x}}\), where \(A\) is a real matrix and \(\vb{x}\) is a complex solution
21.2 Back
The real and imaginary parts of \(\vb{x} = \vb{x_1} + i \vb{x_2}\) are also solutions to the system.
Since \(\vb{x_1} + i \vb{x_2}\) is a solution, we have
\({\displaystyle (\vb{x_1} + i \vb{x_2})' = A (\vb{x_1} + i \vb{x_2}) = A \vb{x_1} + i A \vb{x_2}}\)
Equating real and imaginary parts of this equation
\({\displaystyle \vb{x_1}' = A \vb{x_1}}\) and \({\displaystyle \vb{x_2}' = A \vb{x_2}}\)
Which shows exactly that the real vectors \(\vb{x_1}\) and \(\vb{x_2}\) are solutions to \(\vb{x}' = A \vb{x}\)
22 What are the real solutions from the complex eigenvalue?
22.1 Front
What are the real solutions from the complex eigenvalue?
\(\lambda = a \pm ib\)
Show the process with \(\vb{v}\) as complex eigenvector of this eigenvalue
22.2 Back
We can write its corresponding eigenvector \(\vb{v}\) in terms of real and imaginary parts, \(\vb{v} = \vb{v_1} + i \vb{v_2}\), where \(\vb{v_1}\) and \(\vb{v_2}\) are real vectors.
\({\displaystyle \vb{x} = e^{at} (\cos(bt) + i \sin(bt)) (\vb{v_1} + i \vb{v_2})}\)
Where the real and imaginary parts of \(\vb{x}\) give respectively the 2 real solutions
- \({\displaystyle \vb{x_1} = e^{at}(\vb{v_1} \cos(bt) - \vb{v_2} \sin(bt))}\)
- \({\displaystyle \vb{x_2} = e^{at}(\vb{v_1} \sin(bt) + \vb{v_2} \cos(bt))}\)
These solutions are linearly independent: they are two truly different solutions
The general solution is given by their linear combinations
\({\displaystyle c_1 \vb{x_1} + c_2 \vb{x_2}}\)
Note: The complex conjugate eigenvalue \(a - ib\) gives up to sign the same two solutions \(\vb{x_1}\) and \(\vb{x_2}\)
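A SymPy sketch of this process (the matrix and eigenvector below are assumed examples, with \(\lambda = 1 + 2i\)):

```python
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[1, -2],
               [2, 1]])          # assumed example, eigenvalue 1 + 2i
v = sp.Matrix([1, -sp.I])        # an eigenvector for lambda = 1 + 2i
x = sp.exp((1 + 2*sp.I)*t) * v   # complex solution e^{(a+ib)t} v

# Real and imaginary parts give the two real solutions x1 and x2
x1 = x.applyfunc(lambda z: sp.re(sp.expand_complex(z)))
x2 = x.applyfunc(lambda z: sp.im(sp.expand_complex(z)))
print(x1)  # e^t * (cos(2t), sin(2t))
print(x2)  # e^t * (sin(2t), -cos(2t))
print(sp.simplify(x1.diff(t) - A*x1))  # Matrix([[0], [0]])
```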
23 Do we need to compute the real solutions for both complex eigenvalues?
23.1 Front
Do we need to compute the real solutions for both complex eigenvalues?
\(\lambda = a \pm ib\)
23.2 Back
No; we will get the same two real solutions in both cases, so you only need to solve the system for one of the two complex eigenvalues.
24 What could happen when you solve a system which has a repeated eigenvalue?
24.1 Front
What could happen when you solve a system which has a repeated eigenvalue?
\({\displaystyle \dot{\vb{x}} = A \vb{x}}\), where \(A\) is a matrix \(2 \cross 2\)
\(\lambda\) is a double real root
24.2 Back
We need to find two linearly independent solutions to the system. We can get one solution in the usual way. For example, let \(\vb{v_1}\) be an eigenvector corresponding to \(\lambda\). This is found by solving the system:
\({\displaystyle (A - \lambda I)\vb{a} = 0}\), so that gives the solution \({\displaystyle \vb{x_1} = e^{\lambda t} \vb{v_1}}\)
To find a second solution we have to distinguish two cases
- Complete case
- \(\lambda\) is a complete eigenvalue if there are 2 linearly independent eigenvectors \(\vb{v_1}\) and \(\vb{v_2}\) corresponding to \(\lambda\)
- Defective
- In this case there is only one independent eigenvector \(\vb{v_1}\), so \(\vb{x_1}\) is the unique normal mode.
- However, a second order system needs two independent solutions
- Repeated roots in second-order equations suggest we try multiplying our normal solution by \(t\), but we need to fix it as follows
- \({\displaystyle \vb{x_2} = e^{\lambda t} (t \vb{v_1} + \vb{v_2})}\), where \(\vb{v_2}\) is any vector satisfying
- \({\displaystyle (A - \lambda I) \vb{v_2} = \vb{v_1}}\)
- Remember: \(A \vb{v_1} = \lambda \vb{v_1}\)
- The equation for \(\vb{v_2}\) is guaranteed to have a solution, provided that the eigenvalue \(\lambda\) really is defective (see the sketch below)
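A SymPy sketch of the defective case (the matrix below is an assumed example with the double eigenvalue \(\lambda = 1\) and only one independent eigenvector):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 1],
               [0, 1]])               # assumed defective example
lam = 1
v1 = sp.Matrix([1, 0])                # eigenvector: (A - I) v1 = 0
v2 = sp.Matrix([0, 1])                # satisfies (A - I) v2 = v1
print((A - lam*sp.eye(2)) * v2 == v1) # True

# Second independent solution: x2 = e^{lambda t} (t v1 + v2)
x2 = sp.exp(lam*t) * (t*v1 + v2)
print(sp.simplify(x2.diff(t) - A*x2)) # Matrix([[0], [0]])
```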