Ordinary Differential Equations and Linear Algebra
MATH 211

fall, freshman year
Differential equation (DE): an equation relating one or more dependent variables to one or more independent variables through their derivatives.
$y'(t) = t^2 + y(t)$
 
Ordinary DE  (ODE ): DE which contains only one independent variable.
$y'' + 4(y')^2 + t\sin y = 6t$
 
Partial DE : DE which contains >1 independent variable.
A solution to an ODE is a function $y(t)$ that satisfies the ODE on some time interval.
$y' = ky$, $k \in (-\infty, \infty)$
$y = e^{kt}$
$y = 2e^{kt}$
 
ODE: $y' = \frac{-t}{y}$.
$y(t) = \sqrt{1 - t^2}$ is a solution on $t \in [-1, 1]$.
$y'' + 3y' + 2y = 0$
$y = e^{-t}$, $y = e^{-2t}$
$y(t) = c_1 e^{-t} + c_2 e^{-2t}$ where $c_1, c_2 \in \mathbb{R}$
 
 
$y' = 3t^2$
$y = t^3 + C$, $C \in \mathbb{R}$
 
From a general soln, we obtain a particular soln  by specifying the arbitrary constants.
An initial value problem  (IVP ):
$y'' + 3y' + 2y = 0$; $y(0) = 2$, $y'(0) = -3$
Solved by $c_1 = c_2 = 1$:
$y(0) = e^0 + e^0 = 2$
$y'(0) = -e^0 - 2e^0 = -3$
 
 
 
$y' = 3t^2$; $y(2) = 4$
$y = t^3 + C$
$y(2) = 8 + C = 4 \implies C = -4$
$y(t) = t^3 - 4$ solves the IVP
 
 
 
 
 
The order  of a DE is the order of its highest derivative.
Direction Fields  
Consider an ODE of the form $y' = f(t, y)$.
For each $(t, y)$ value, plot a small line segment with slope $f(t, y)$. A collection of such segments is a direction field.
Concavity depends on whether $y''$ is positive or negative.
$y' = t - y$
$\implies y'' = 1 - y' = 1 + y - t$
$y'' = 0 \implies 1 + y - t = 0 \implies y = t - 1$
$y'' > 0 \implies 1 + y - t > 0 \implies y > t - 1$
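The direction field above can also be drawn numerically. A minimal sketch, assuming numpy and matplotlib (neither appears in the notes), for $y' = t - y$ with the inflection line $y = t - 1$ overlaid:

```python
# Direction field for y' = t - y (illustrative sketch; numpy/matplotlib assumed).
import numpy as np
import matplotlib.pyplot as plt

T, Y = np.meshgrid(np.linspace(-3, 3, 21), np.linspace(-3, 3, 21))
slope = T - Y                          # f(t, y) for y' = t - y

# Each segment points in the direction (1, slope), normalized to a common length.
norm = np.sqrt(1 + slope**2)
plt.quiver(T, Y, 1 / norm, slope / norm, angles="xy")
plt.plot(T[0], T[0] - 1, "r--", label="y = t - 1 (y'' = 0)")
plt.xlabel("t")
plt.ylabel("y")
plt.legend()
plt.show()
```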
An equilibrium solution is one that does not change over time: $y(t) \equiv C$.
$y' = -2ty + t$
$y' \equiv 0 \implies t = 2ty$ for all $t$
$\implies y = \frac{1}{2}$
 
 
 
An equilibrium solution is:
stable if solutions near it tend toward it as $t \to \infty$
unstable if solutions near it tend away from it.

$y' = y^2 - 4$
$y' \equiv 0 \implies y^2 - 4 = 0$
$\implies y^2 = 4$
$\implies y = \pm 2$
$y = 2$ is unstable
$y = -2$ is stable
 
 
 
 
 
An isocline of a DE $y' = f(t, y)$ is a curve in the $t$-$y$ plane along which the slope is constant: $f(t, y) \equiv C$.
$y' = y^2 - 4 = C$ for some constant $C$
$y^2 = C + 4$
$y = \pm\sqrt{C + 4}$
 
 
 
Separable Equations  
A separable DE is one which can be written as $y' = f(t) \times g(y)$.
Solving a Separable DE  
Solve $g(y) = 0$ to find equilibrium solns.
Else, assume $g(y) \neq 0$. Then:
$\frac{dy}{dt} = f(t) \times g(y)$
$\frac{dy}{g(y)} = f(t)\,dt$
$\int \frac{dy}{g(y)} = \int f(t)\,dt$
If possible, solve for $y$ in terms of $t$ to get an explicit soln.
If there's an IVP, solve for $C$ using the initial conditions.

Don't forget to check for $g(y) = 0$.
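A minimal sketch of this recipe in sympy (the library choice is an assumption, not part of the notes), applied to $y' = t(1 - 2y)$, the equation from the equilibrium example above:

```python
# Separable DE y' = f(t) g(y) with f(t) = t, g(y) = 1 - 2y (sympy assumed).
from sympy import symbols, Function, Eq, dsolve, integrate, solve

t, C, y = symbols("t C y")

# Equilibrium solutions: g(y) = 0.
print(solve(Eq(1 - 2*y, 0), y))                 # [1/2]

# Integrate dy/g(y) = integral of f(t) dt, then solve for y explicitly.
lhs = integrate(1 / (1 - 2*y), y)               # an antiderivative of 1/g(y)
rhs = integrate(t, t) + C                       # t**2/2 + C
print(solve(Eq(lhs, rhs), y))                   # y = 1/2 + (const)*exp(-t**2)

# Cross-check against sympy's own ODE solver.
Y = Function("y")
print(dsolve(Eq(Y(t).diff(t), t*(1 - 2*Y(t))), Y(t)))   # y(t) = C1*exp(-t**2) + 1/2
```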
 
Picard’s Theorem  
A solution of a DE is unique if it is the only solution: if $y_1$ and $y_2$ both solve the problem, then $y_1(t) = y_2(t)$.
A DE typically has infinitely many solutions; initial conditions usually nail down one of them.
$y' = -2ty$ doesn't have a unique solution. $y' = -2ty$; $y(0) = 1$ does have a unique solution.
$y' = \frac{1}{y}$, $y(0) = 0$ has no solution: $f$ is not defined on the $t$-axis.
Sometimes we care about:
whether a solution exists 
whether a solution is unique (predictability) 
 
Picard's Theorem: Suppose $f(t, y)$ is continuous on the open rectangle $R = \{(t, y) : a < t < b,\ c < y < d\}$ and $(t_0, y_0) \in R$. Then there exists an $h > 0$ such that the IVP $y' = f(t, y)$, $y(t_0) = y_0$ has a solution for $t$ in the interval $(t_0 - h, t_0 + h)$.
If the partial derivative $\frac{\partial}{\partial y}f(t, y)$ is also continuous on $R$, then the solution is unique.
it can prove uniqueness, but can’t prove lack of uniqueness
 
sufficient , but not necessary
 
Linear DEs  
A DE is linear  if it is of the form:
$a_n(t)\frac{d^n y}{dt^n} + a_{n-1}(t)\frac{d^{n-1} y}{dt^{n-1}} + \dots + a_1(t)\frac{dy}{dt} + a_0(t)\,y = f(t)$
where $a_0(t), \dots, a_n(t)$ are continuous (on some interval) functions solely of $t$.
If $f(t) \equiv 0$: homogeneous DE. Else, nonhomogeneous DE.
| Equation | Linear | Homogeneous |
| --- | --- | --- |
| $y'' + ty' - 3y = 0$ | Yep | Yep |
| $y' + y^2 = 0$ | Nope | Yep |
| $y' + \sin y = 1$ | Nope | Nope |
| $y' - t^2 y = 0$ | Yep | Yep |
| $y' + (\sin t)\,y = 1$ | Yep | Nope |
| $y'' - 3y' + y = \sin t$ | Yep | Nope |
 
 
For a DE $F(t, y, y', \dots, y^{(n)}) = f(t)$, the corresponding operator is $L[y] = F(t, y, y', \dots, y^{(n)})$.
operators  are basically higher order functions?
 
All the $y$ terms must be on one side.
An operator is linear if it satisfies:
For $k \in \mathbb{R}$: $L[ky] = k \cdot L[y]$
For any $y_1$, $y_2$: $L[y_1 + y_2] = L[y_1] + L[y_2]$
 
Differentiation is a linear operator.
Superposition Principle  
Suppose $y_1$ and $y_2$ solve the linear and homogeneous DE $L[y] = 0$.
Then for any constants $c_1, c_2 \in \mathbb{R}$: $L[c_1 y_1 + c_2 y_2] = 0$
Given that $L[y_1] = 0$, $L[y_2] = 0$.
Let $c_1, c_2 \in \mathbb{R}$. Then, by linearity:
$L[c_1 y_1 + c_2 y_2] = L[c_1 y_1] + L[c_2 y_2]$
$= c_1 L[y_1] + c_2 L[y_2]$
$= c_1 \cdot 0 + c_2 \cdot 0 = 0$
If $y_1$, $y_2$ solve a homogeneous DE, then any linear combination of $y_1$, $y_2$ (i.e. $c_1 y_1 + c_2 y_2$) solves the same DE.
Nonhomogeneous Principle  
Suppose $y_p(t)$ solves the linear, nonhomogeneous DE $L[y_p] = f(t)$.
Then, for any solution $y_h$ of the homogeneous equation $L[y_h] = 0$, $y_h + y_p$ also solves $L[y_h + y_p] = f(t)$.
Furthermore, every soln to $L[y] = f(t)$ is of the form $y = y_h + y_p$ for some fixed $y_p$ and some homogeneous solution $y_h$.
Variation of parameters  
See pg. 64 of DELA 
 
$y' - y = t$
$y' - y = 0$ has solution $y_h = ce^t$
Observe that $y_p = -t - 1$ solves $y' - y = t$
Then every solution to $y' - y = t$ is of the form $y(t) = y_h + y_p$

$y' + p(t)y = f(t)$
$y_h = ce^{-\int p(t)dt}$ is a solution to $y' + p(t)y = 0$
$y_p(t) = v(t)e^{-\int p(t)dt}$  (1)
$v'e^{-\int p(t)dt} = f(t)$  (2)
Solve (2) for $v$ and plug it into (1) to get $y_p(t)$.
$y(t) = y_h + y_p$
Integrating factor method  
doesn’t work for higher order DEs
 
$y' + p(t)y = f(t)$
$\mu(t) = e^{\int p(t)dt}$
$\mu y' + p(t)\mu y = f(t)\mu$
$\implies (\mu y)' = f(t)\mu$
$\implies \mu y = \int f(t)\mu\,dt + C$
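A sketch of the same steps carried out symbolically (sympy assumed); the specific equation $y' + 2ty = t$ is chosen only for illustration:

```python
# Integrating factor method for y' + p(t) y = f(t) with p = 2t, f = t (sympy assumed).
from sympy import symbols, integrate, exp, simplify

t, C = symbols("t C")
p, f = 2*t, t

mu = exp(integrate(p, t))                  # mu(t) = e^{t^2}
y = (integrate(f * mu, t) + C) / mu        # y = (integral of f*mu dt + C) / mu
print(simplify(y))                         # C*exp(-t**2) + 1/2

# Verify it satisfies y' + p y = f:
print(simplify(y.diff(t) + p*y - f))       # 0
```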
Models  
Linear mixing model  
$x(t)$ is the amount of salt
$x' = r_{in} - r_{out}$
$r_{in}$ = concentration in $\times$ flow rate in
$r_{out}$ = concentration out $\times$ flow rate out
$x(0)$ is the initial amount of salt; $0$ if the water is initially pure
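A worked instance with made-up numbers (none of these values are from the notes): a 100 L tank of initially pure water, brine at 2 g/L flowing in at 3 L/min, and well-mixed solution draining at 3 L/min, solved with sympy:

```python
# Hypothetical mixing tank: x' = r_in - r_out, x(0) = 0 (sympy assumed).
from sympy import symbols, Function, Eq, dsolve

t = symbols("t")
x = Function("x")

r_in = 2 * 3                       # (g/L in) * (L/min in) = 6 g/min
r_out = (x(t) / 100) * 3           # (x/volume g/L out) * (L/min out)

sol = dsolve(Eq(x(t).diff(t), r_in - r_out), x(t), ics={x(0): 0})
print(sol)                         # x(t) = 200 - 200*exp(-3*t/100)
```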
Newton cooling  
$T$ is the temperature of an object surrounded by a uniform ambient temperature $M$. Then:
$\frac{dT}{dt} = k(M - T)$, $k > 0$
$T(0) = T_0$
$T(t) = T_0 e^{-kt} + M(1 - e^{-kt})$
Matrices  
$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$
$m$ rows, $n$ columns
the $a_{ij}$ are the "entries"
notation: row index followed by column index
Operations  
Addition  (and subtraction): entry-wise
Scaling : entry-wise. scale every entry.
dimensions must match for entry-wise operations.
Multiplication  
not entry-wise.
$A \times B$: rows of $A$ $\times$ columns of $B$
if $A$ is $(m \times r)$ and $B$ is $(r \times n)$: $A \times B$ is $(m \times n)$
$c_{ij} = \begin{bmatrix} a_{i1} & \cdots & a_{ir} \end{bmatrix} \times \begin{bmatrix} b_{1j} \\ \vdots \\ b_{rj} \end{bmatrix}$
not commutative: $A \times B$ is not always $= B \times A$
distributive.
$c_{ij} = \sum_{k=1}^{r} a_{ik}\,b_{kj}$
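A small numpy sketch (numpy assumed) of the shape rule, the entry formula, and non-commutativity:

```python
# Matrix multiplication: (m x r)(r x n) -> (m x n); not commutative (numpy assumed).
import numpy as np

A = np.array([[1, 2, 0],
              [0, 1, 3]])           # 2 x 3
B = np.array([[1, 0],
              [2, 1],
              [0, 4]])              # 3 x 2

print((A @ B).shape)                # (2, 2)
print(A @ B)
print((B @ A).shape)                # (3, 3): B @ A is not even the same shape as A @ B

# c_ij = sum_k a_ik * b_kj, checked for c_11:
print(sum(A[0, k] * B[k, 0] for k in range(3)))   # equals (A @ B)[0, 0]
```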
Special matrices  
zero matrix $0_{m \times n}$
all entries $0$: $a_{ij} = 0$

identity matrix $I_n$
principal diagonal is all $1$s, everything else is $0$
$a_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$
$A \times I_n = A$
the only non-zero term in each row-column dot product is $a_{ij}$
 
 
Vectors  
A row vector is a $1 \times n$ matrix
A column vector is an $n \times 1$ matrix
The scalar product (dot product) of a row vector with a column vector:
$\vec a \times \vec b = \sum a_i b_i$
The scalar product is a special case of matrix multiplication.
Systems of linear equations  
$2x + y = -7$
$3x + 4y = 2$
$\begin{bmatrix} 2 & 1 \\ 3 & 4 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -7 \\ 2 \end{bmatrix}$

$a_{11}x_1 + \cdots + a_{1n}x_n = b_1$
$\vdots$
$a_{m1}x_1 + \cdots + a_{mn}x_n = b_m$
$\begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$
 
The system is homogeneous if $\vec b = \vec 0$.
A solution is a vector $\vec x$ that satisfies $A\vec x = \vec b$. We can write $A\vec x = \vec b$ as an augmented matrix.
Solutions of linear equations  
General idea:
if # equations < # unknowns
underdetermined: typically infinitely many solutions

if # equations > # unknowns
overdetermined: typically no solution

if # equations = # unknowns
typically a unique solution
 
 
Elementary row operations  
$R_i \rightarrow R_i + aR_j$
$R_i \rightarrow aR_i$ (with $a \neq 0$)
$R_i \leftrightarrow R_j$
A matrix is in RREF if:
zero rows are at the bottom
the left-most non-zero entry of each non-zero row is a $1$, called a pivot
each pivot is farther to the right than the pivot in the row above it
each pivot is the only non-zero entry in its column, called a pivot column
 
All matrices have an RREF (not just square ones).
The Gauss-Jordan algorithm converts a matrix to RREF.
If the RREF of the augmented matrix for $A\vec x = \vec b$ has a row of the form $[\,0 \;\cdots\; 0 \mid k\,]$ with $k \neq 0$, the equation is inconsistent (because it implies $0 = k$): no values of the variables satisfy all the equations.
Else, the equation is consistent (a solution exists).

If every column of $A$ is a pivot column, the solution (if it exists) is unique.
If there is at least one non-pivot column, there are infinitely many solutions (when consistent).
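A sketch using sympy's `Matrix.rref()` (sympy assumed), first on the augmented matrix of the $2 \times 2$ system from earlier, then on an inconsistent system:

```python
# Row reduction of augmented matrices with sympy (sympy assumed).
from sympy import Matrix

# [A | b] for 2x + y = -7, 3x + 4y = 2.
M = Matrix([[2, 1, -7],
            [3, 4,  2]])
R, pivots = M.rref()
print(R)             # Matrix([[1, 0, -6], [0, 1, 5]]) -> unique solution x = -6, y = 5
print(pivots)        # (0, 1): every column of A is a pivot column

# An inconsistent system: the RREF contains a row [0 0 | 1], i.e. 0 = 1.
M2 = Matrix([[1, 1, 1],
             [2, 2, 3]])
print(M2.rref()[0])  # Matrix([[1, 1, 0], [0, 0, 1]])
```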
Linearity properties of matrices  
$A(c_1\vec x_1 + c_2\vec x_2) = c_1 A\vec x_1 + c_2 A\vec x_2$
Superposition  
$A\vec x_1 = \vec 0$ and $A\vec x_2 = \vec 0 \implies A(c_1\vec x_1 + c_2\vec x_2) = \vec 0$
Nonhomogeneous  
$A\vec x_p = \vec b$ and $A\vec x_h = \vec 0 \implies A(\vec x_p + \vec x_h) = \vec b$
To solve $A\vec x = \vec b$, we can find one particular solution $\vec x_p$ and fully solve $A\vec x_h = \vec 0$. Then every solution to $A\vec x = \vec b$ will be of the form $\vec x = \vec x_p + \vec x_h$.
Solving a linear equation  
Find the RREF of the augmented matrix $[A \mid \vec b]$
Set all free variables $= 0$ to find the particular soln $\vec x_p$
For each free variable, set it $= 1$ (and the other free variables $= 0$), then solve for the basic variables: these give the homogeneous solns
Combine to get the general soln.

if there are free variables, the system is dependent ($\infty$ solns)
if there are no free variables, the system is independent (unique soln)
The rank is the number of pivot columns.
Inverse of a matrix  
$AA^{-1} = A^{-1}A = I$
$A^{-1}$ is the inverse; $A$ is invertible.
$IB = BI = B$
$AX = B \implies X = A^{-1}B$. $X$ is the unique solution.
Find the $i$-th column $\vec x_i$ of $A^{-1}$ by solving $A\vec x_i = \vec e_i$
$AX = B$ has a unique soln iff $A$ is invertible.
if $A$ is not invertible: either no soln or infinitely many.
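A numpy sketch (numpy assumed) of the same ideas: $X = A^{-1}B$, and each column of $A^{-1}$ solving $A\vec x_i = \vec e_i$:

```python
# Inverse and solving AX = B with numpy (numpy assumed).
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, 4.0]])
b = np.array([-7.0, 2.0])

A_inv = np.linalg.inv(A)
print(A_inv @ A)                       # ~ identity (up to floating-point error)
print(A_inv @ b)                       # [-6.  5.]

# In practice, solving directly is preferred over forming the inverse:
print(np.linalg.solve(A, b))           # [-6.  5.]

# Column i of A^{-1} solves A x_i = e_i:
print(np.linalg.solve(A, np.eye(2)))   # equals A_inv, column by column
```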
Conditions for invertibility  
$A$ is an $n \times n$ matrix. The following are equivalent:
$A^{-1}$ exists
$\mathrm{RREF}(A) = I$
$\mathrm{RREF}(A)$ has $n$ pivot columns

$A\vec x = \vec b$ has a unique soln for every $\vec b \in \mathbb{R}^n$
$A\vec x = \vec 0$ has the unique soln $\vec x = \vec 0$
 
Determinants  
square matrices only.
$\det A = |A|$
$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc$
cofactor sign pattern: $\begin{matrix} + & - & + \\ - & + & - \\ + & - & + \end{matrix}$
expand along any row or column.
$|A|$ is defined recursively for higher-order matrices.
triangular matrices  
$\det T = \prod_{i=1}^{n} a_{ii}$ (product of the main diagonal entries)
a diagonal matrix is a special case of a triangular matrix.
Determinant row operations  
if $R_i \leftrightarrow R_j$: $|A^*| = -|A|$
if $R_j^* = R_j + kR_i$: $|A^*| = |A|$
if $R_i^* = kR_i$: $|A^*| = k|A|$
 
 
$\det I = 1$
If $A$ is invertible (non-singular): $\mathrm{RREF}(A) = I$ and $|A| \neq 0$.
If $A$ is not invertible (singular): the RREF has a zero row and $|A| = 0$.
$|A| \neq 0 \implies$ invertible. $|A| = 0 \implies$ not invertible.
Transpose: swap rows and columns (reflect along the main diagonal). $A^T$. $A = (a_{ij}) \implies A^T = (a_{ji})$
A matrix is symmetric if $A^T = A$.
Trace: the sum of the diagonal entries of a square matrix.
Vector spaces and subspaces  
if $f, g \in V$ and $c \in \mathbb{R}$:
$c \circ f \in V$
$f + g \in V$
 
A vector space, $V$, is a collection of objects called vectors with two operations:
vector addition
scalar multiplication

that satisfy the following properties for all $\vec x, \vec y, \vec z \in V$ and $c, d \in \mathbb{R}$:
closure
$\vec x + \vec y \in V$
$c\vec x \in V$

addition
there is a zero vector $\vec 0 \in V$ such that $\vec x + \vec 0 = \vec x$ (additive identity)
for every $\vec x \in V$, there is a $(-\vec x) \in V$ such that $(-\vec x) + \vec x = \vec 0$ (additive inverse)
$(\vec x + \vec y) + \vec z = \vec x + (\vec y + \vec z)$ (associativity)
$\vec x + \vec y = \vec y + \vec x$ (commutativity)

scalar multiplication properties
$1 \cdot \vec x = \vec x$ (scalar multiplicative identity)
$c(\vec x + \vec y) = c\vec x + c\vec y$ (first distributive)
$(c + d)\vec x = c\vec x + d\vec x$ (second distributive)
$c(d\vec x) = (cd)\vec x$ (associativity)
 
 
 
A subspace is a subset $W \subset V$ of a vector space such that:
$W$ is non-empty (equivalently, $\vec 0 \in W$)
$W$ is closed under addition: $v, w \in W \implies v + w \in W$
$W$ is closed under scalar multiplication: $c \in \mathbb{R}, v \in W \implies cv \in W$

(2) and (3) together: $c_1 v_1 + c_2 v_2 \in W$
Spanning theory  
Linear Independence  
A set is linearly independent if no vector in the set can be written as a linear combination of the others; otherwise it is linearly dependent.
Equivalently, a set is linearly independent if $c_1\vec v_1 + \cdots + c_n\vec v_n = \vec 0 \implies c_1 = \cdots = c_n = 0$.
Else, linearly dependent.
a redundant vector $\implies$ there is a non-trivial way to get $\vec 0$
no redundant vector $\implies$ there is no non-trivial way to get $\vec 0$
Vector functions  
$\vec v(t) = \begin{pmatrix} f_1(t) \\ \vdots \\ f_n(t) \end{pmatrix}$
LI if $c_1\vec v_1(t) + \cdots + c_n\vec v_n(t) \equiv \vec 0 \implies c_1 = \cdots = c_n = 0$
Checking a single value of $t$ is sufficient to show linear independence, but not sufficient to show linear dependence.
Wronskian  
Used to check whether functions are LI.
$n$ functions, derivatives up to order $n - 1$, an $n \times n$ matrix:
$W[f_1, \cdots, f_n](t) = \begin{vmatrix} f_1(t) & \cdots & f_n(t) \\ \vdots & \ddots & \vdots \\ f_1^{(n-1)}(t) & \cdots & f_n^{(n-1)}(t) \end{vmatrix}$
If $W[f_1, \cdots, f_n](t) \neq 0$ for some $t$, then $\{f_1, \cdots, f_n\}$ is LI.
If $W(t) \equiv 0$, the test is inconclusive.
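A sympy sketch (sympy assumed) of the Wronskian test, using the pair $e^{-t}, e^{-2t}$ from earlier and an obviously dependent pair:

```python
# Wronskian test for linear independence (sympy assumed).
from sympy import symbols, exp, wronskian, simplify

t = symbols("t")

# e^{-t}, e^{-2t}: W = -e^{-3t} != 0, so the pair is LI.
print(simplify(wronskian([exp(-t), exp(-2*t)], t)))   # -exp(-3*t)

# t and 2t: W is identically 0; the test alone is inconclusive,
# though here dependence is obvious since 2t = 2 * t.
print(wronskian([t, 2*t], t))                         # 0
```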
Bases  
A set $\{\vec v_1, \cdots, \vec v_n\}$ is a basis for $V$ if it is LI and spanning.
Basis theorem: the number of vectors in a basis of $V$ is always the same, called the dimension of $V$.
a basis is a minimal spanning set
if a vector is removed, the set will no longer be spanning

a basis is a maximal LI set
if a vector is added, the set will no longer remain LI

$\dim(V)$ is the size of a basis.
The minimum number of vectors needed to span $V$ is $\dim(V)$.
The maximum number of vectors an LI set can have is $\dim(V)$.
 
Properties of $\mathrm{col}(A)$
The pivot columns of $A$ form a basis for $\mathrm{col}(A)$.
The dimension of $\mathrm{col}(A)$ is called the rank of $A$: the number of pivot columns of $A$.
 
Invertible matrix characteristics  
$A$ is an $n \times n$ matrix. TFAE:
$A$ is an invertible matrix
$A$ has $n$ pivot columns
$\mathrm{RREF}(A) = I_n$
$\mathrm{rank}(A) = n$
the columns of $A$ are linearly independent
$A\vec x = \vec 0$ has a unique solution: $\vec x = \vec 0$
$A\vec x = \vec b$ has a unique solution for every $\vec b \in \mathbb{R}^n$
 
it’s a big world in math —Gregory Lyons (2020)
 
2nd order constant coefficients  
$ay'' + by' + cy = 0$, $a, b, c \in \mathbb{R}$  (1)
Recall that $y' - ry = 0 \implies y = e^{rt}$
Try $y = e^{rt}$ in (1)
$\implies ar^2 + br + c = 0$  (2). The values of $r$ that solve (2) solve (1).
Solutions to (2) are called characteristic roots or eigenvalues of (1). Let $\Delta = b^2 - 4ac$.
Case 1, $\Delta > 0$:
$r_1, r_2 = \frac{-b \pm \sqrt{\Delta}}{2a}$
$\implies y(t) = c_1 e^{r_1 t} + c_2 e^{r_2 t}$

Case 2, $\Delta = 0$: one repeated root $r = \frac{-b}{2a}$
independent solutions $e^{rt}$ and $te^{rt}$, so $y(t) = c_1 e^{rt} + c_2 te^{rt}$

Case 3, $\Delta < 0$:
$r = \alpha \pm \beta i$
$y(t) = e^{\alpha t}\left[c_1\cos(\beta t) + c_2\sin(\beta t)\right]$
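A sympy sketch (sympy assumed) comparing the characteristic-root recipe with `dsolve`, using $a = 1$, $b = 3$, $c = 2$, i.e. the equation with roots $-1$ and $-2$ from the start of these notes:

```python
# Characteristic roots vs dsolve for ay'' + by' + cy = 0 (sympy assumed).
from sympy import symbols, Function, Eq, dsolve, roots

t, r = symbols("t r")
a, b, c = 1, 3, 2

print(roots(a*r**2 + b*r + c, r))    # {-1: 1, -2: 1} -> y = c1 e^{-t} + c2 e^{-2t}

y = Function("y")
ode = Eq(a*y(t).diff(t, 2) + b*y(t).diff(t) + c*y(t), 0)
print(dsolve(ode, y(t)))             # y(t) = C1*exp(-2*t) + C2*exp(-t)

# The IVP y(0) = 2, y'(0) = -3 from earlier picks out c1 = c2 = 1:
print(dsolve(ode, y(t), ics={y(0): 2, y(t).diff(t).subs(t, 0): -3}))
```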
 
 
Dimension of the soln space for a linear 2nd order ODE
Existence and uniqueness theorem  
$y'' + p(t)y' + q(t)y = 0$
if $p(t)$ and $q(t)$ are cts, then for any $A, B \in \mathbb{R}$, there exists a unique $y(t)$ solving the IVP:
$y'' + py' + qy = 0$, $y(t_0) = A$, $y'(t_0) = B$
Solution space theorem (2nd order)  
The soln space $S$ of $y'' + p(t)y' + q(t)y = 0$ has dimension two.
Nonhomogeneous superposition principle  
if $L$ is linear and $y_i$ solves $L[y] = f_i(t)$ ($i = 1, 2, \cdots, n$), then $c_1 y_1 + \cdots + c_n y_n$ solves $L[y] = c_1 f_1 + \cdots + c_n f_n$.
Variation of Parameters  
For $y'' + p(t)y' + q(t)y = f(t)$ with homogeneous solutions $y_1, y_2$:
$u_1' = \frac{-y_2 f}{W}$
$u_2' = \frac{y_1 f}{W}$
$W = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix}$
$u_1 = \int u_1'\,dt$, $u_2 = \int u_2'\,dt$
$y_p = u_1 y_1 + u_2 y_2$
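A sympy sketch (sympy assumed) of these formulas on the illustrative equation $y'' - y = e^t$ (not from the notes), whose homogeneous solutions are $y_1 = e^t$, $y_2 = e^{-t}$:

```python
# Variation of parameters for y'' - y = e^t (sympy assumed; equation is illustrative).
from sympy import symbols, exp, integrate, simplify, wronskian

t = symbols("t")
y1, y2, f = exp(t), exp(-t), exp(t)

W = wronskian([y1, y2], t)                    # -2
u1 = integrate(-y2 * f / W, t)                # t/2
u2 = integrate(y1 * f / W, t)                 # -exp(2*t)/4
yp = u1*y1 + u2*y2

print(simplify(yp))                           # t*exp(t)/2 - exp(t)/4
print(simplify(yp.diff(t, 2) - yp - exp(t)))  # 0, so yp really is a particular solution
```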
Distinct Eigenvalue Theorem  
If $\lambda_1, \cdots, \lambda_m$ are distinct eigenvalues of an $n \times n$ matrix $A$, their corresponding eigenvectors are LI.
Eigenvalues & Eigenvectors  
$\lambda v = Av$, $v \neq \vec 0$
$(A - \lambda I)v = \vec 0$
ODE Systems  
Distinct eigenvalues  
$\vec x' = A\vec x$
solved by $\vec x = e^{\lambda t}\vec v$ for each eigenpair $(\lambda, \vec v)$ of $A$
$\vec x = c_1 e^{\lambda_1 t}\vec v_1 + c_2 e^{\lambda_2 t}\vec v_2$
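A numpy sketch (numpy assumed) that builds this general solution from computed eigenpairs and spot-checks $\vec x' = A\vec x$ numerically; the matrix $A$ is an arbitrary illustrative choice:

```python
# x' = Ax solved via eigenpairs, checked with a finite difference (numpy assumed).
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

lam, V = np.linalg.eig(A)          # eigenvalues 3, -1; columns of V are eigenvectors
print(lam)

def x(t, c1=1.0, c2=1.0):
    """General solution with constants c1, c2."""
    return c1*np.exp(lam[0]*t)*V[:, 0] + c2*np.exp(lam[1]*t)*V[:, 1]

# Check x'(t) ~ A x(t) at one point via a central difference.
t0, h = 0.3, 1e-6
deriv = (x(t0 + h) - x(t0 - h)) / (2*h)
print(np.allclose(deriv, A @ x(t0)))   # True
```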
Repeated eigenvalues  
find the e.vec $\vec v$ for the e.val $\lambda$
find a non-zero $\vec u$ such that $(A - \lambda I)\vec u = \vec v$
$\vec x(t) = c_1 e^{\lambda t}\vec v + c_2 e^{\lambda t}(t\vec v + \vec u)$
 
Complex eigenvalues  
$\overline{a + bi} = a - bi$
$\overline{x + y} = \overline{x} + \overline{y}$
Eigenstuff comes in conjugate pairs.
$\lambda_1, \lambda_2 = \alpha \pm i\beta$
$\vec v_1, \vec v_2 = \vec p \pm i\vec q$
$\vec x_{Re} = e^{\alpha t}\left[\cos(\beta t)\vec p - \sin(\beta t)\vec q\right]$
$\vec x_{Im} = e^{\alpha t}\left[\sin(\beta t)\vec p + \cos(\beta t)\vec q\right]$
$\vec x = c_1\vec x_{Re} + c_2\vec x_{Im}$
Nonlinear first-order ODE systems  
h-nullclines: $y' = 0$
v-nullclines: $x' = 0$
equilibria: $x' = y' = 0$
Jacobian  
$J(f, g) = \begin{bmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{bmatrix}$
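A sympy sketch (sympy assumed) that finds the equilibria of an illustrative nonlinear system, forms the Jacobian, and evaluates it at an equilibrium (the right-hand sides $f$ and $g$ are made up for illustration):

```python
# Equilibria and Jacobian of x' = f(x, y), y' = g(x, y) (sympy assumed; f, g illustrative).
from sympy import symbols, Matrix, solve

x, y = symbols("x y")
f = x*(3 - x - 2*y)
g = y*(2 - x - y)

print(solve([f, g], [x, y], dict=True))     # equilibria, including {x: 1, y: 1}

J = Matrix([f, g]).jacobian(Matrix([x, y]))
print(J)

J_eq = J.subs({x: 1, y: 1})
print(J_eq.eigenvals())                     # eigenvalues of the linearization at (1, 1)
```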