Shreyas’ Notes

Ordinary Differential Equations and Linear Algebra

MATH 211

fall, freshman year

Differential equation (DE): an equation involving derivatives of $\geq 1$ dependent variables with respect to $\geq 1$ independent variables

Ordinary DE (ODE): DE which contains only one independent variable.

Partial DE: DE which contains >1 independent variable.

A solution to an ODE is a function $y(t)$ that satisfies the ODE on some time interval.

$y' = ky$, $k \in (-\infty, \infty)$

ODE: $y' = \frac{-t}{y}$.

$y(t) = \sqrt{1 - t^2}$ is a solution on $t \in (-1, 1)$

$y'' - 3y' + 2y = 0$

$y' = 3t^2$

From a general soln, we obtain a particular soln by specifying the arbitrary constants.

An initial value problem (IVP):

$y'' - 3y' + 2y = 0$; $y(0) = 2$, $y'(0) = -3$

$y' = 3t^2$; $y(2) = 4$

The order of a DE is the order of its highest derivative.

Direction Fields §

Consider an ODE of the form $y' = f(t, y)$.

For each $(t, y)$ value, plot a small line segment with the slope $f(t, y)$. A collection of such segments is a direction field.

Concavity depends on whether $y''$ is +ve or -ve.

$y' = t - y$

$\implies y'' = 1 - y' = 1 + y - t$

when $y'' = 0$: $1 + y - t = 0 \implies y = t - 1$

$y'' > 0 \implies 1 + y - t > 0 \implies y > t - 1$

An equilibrium solution is one which does not change over time: $y(t) \equiv C$

An equilibrium solution is a constant solution: it occurs where $f(t, y) = 0$ for all $t$.

An isocline of a DE $y' = f(t, y)$ is a curve in the $t$-$y$ plane along which the slope is constant: $f(t, y) \equiv C$
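A direction field is easy to generate numerically. A minimal sketch with numpy and matplotlib (assuming both are available; the grid bounds are arbitrary choices), using the example $y' = t - y$ from above:

```python
# Direction field for y' = t - y: a short segment of slope f(t, y) at each
# grid point, drawn with quiver.
import numpy as np
import matplotlib.pyplot as plt

t, y = np.meshgrid(np.linspace(-3, 3, 21), np.linspace(-3, 3, 21))
slope = t - y                      # f(t, y)
norm = np.sqrt(1 + slope**2)       # normalize so all segments have equal length
plt.quiver(t, y, 1 / norm, slope / norm, angles="xy")
plt.plot(t[0], t[0] - 1, "r")      # the isocline y = t - 1, where y'' = 0
plt.xlabel("t"); plt.ylabel("y")
plt.show()
```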

Separable Equations §

A separable DE is one which can be written as $y' = f(t) \times g(y)$.

Solving a Separable DE §

  1. Solve $g(y) = 0$ to find equilibrium solns.
  2. Else, assume $g(y) \neq 0$. Then:
    • $\frac{dy}{dt} = f(t) \times g(y)$
    • $\frac{dy}{g(y)} = f(t) \, dt$
  3. $\int \frac{dy}{g(y)} = \int f(t) \, dt$
  4. If possible, solve for $y$ in terms of $t$ to get an explicit soln
  5. If there’s an IVP, solve for $C$ using the initial conditions

Don’t forget to check for $g(y) = 0$
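For example, $y' = -2ty^2$: solving $g(y) = y^2 = 0$ gives the equilibrium soln $y \equiv 0$. Otherwise, $\int \frac{dy}{y^2} = \int -2t \, dt \implies -\frac{1}{y} = -t^2 + C \implies y = \frac{1}{t^2 - C}$.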

Picard’s Theorem §

A solution of an IVP is unique if it is the only solution: any two solutions must agree, i.e. $y_1(t) = y_2(t)$.

Typically there are infinitely many solutions. Initial value conditions usually nail down a unique one.

$y' = -2ty$ doesn’t have a unique solution. $y' = -2ty$; $y(0) = 1$ does have a unique solution

$y' = \frac{1}{y}$, $y(0) = 0$ has no solution. $f$ is not defined on the $t$ axis.

Sometimes we care about existence, sometimes about uniqueness:

Picard’s Theorem: Suppose $f(t, y)$ is continuous on the region $R = \{(t, y) : a < t < b,\ c < y < d\}$ (open rectangle) and $(t_0, y_0) \in R$. Then there exists an $h > 0$ such that the IVP $y' = f(t, y)$, $y(t_0) = y_0$ has a solution for $t$ in the interval $(t_0 - h, t_0 + h)$.

If the partial derivative $\frac{\partial}{\partial y}f(t, y)$ is also continuous in $R$, then the solution is unique.

it can prove uniqueness, but can’t prove lack of uniqueness

sufficient, but not necessary
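For example, $y' = 3y^{2/3}$, $y(0) = 0$: $f$ is continuous, so a solution exists, but $\frac{\partial f}{\partial y} = 2y^{-1/3}$ is discontinuous at $y = 0$, so Picard doesn’t guarantee uniqueness. Indeed, both $y \equiv 0$ and $y = t^3$ solve this IVP.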

Linear DEs §

A DE is linear if it is of the form:

$a_n(t) \cdot \frac{d^n y}{dt^n} + a_{n - 1}(t) \cdot \frac{d^{n-1} y}{dt^{n-1}} + \dots + a_1(t) \frac{dy}{dt} + a_0(t) y = f(t)$

where $a_0(t), \dots, a_n(t)$ are cts (on some interval) functions solely of $t$.

If $f(t) \equiv 0$: homogeneous DE. Else, inhomogeneous DE.

| Equation | Linear | Homogeneous |
| --- | --- | --- |
| $y'' + ty' - 3y = 0$ | Yep | Yep |
| $y' + y^2 = 0$ | Nope | Yep |
| $y' + \sin y = 1$ | Nope | Nope |
| $y' - t^2 y = 0$ | Yep | Yep |
| $y' + (\sin t) y = 1$ | Yep | Nope |
| $y'' - 3y' + y = \sin t$ | Yep | Nope |

For a DE $F(t, y, y', \dots, y^{(n)}) = f(t)$, the corresponding operator is $L[y] = F(t, y, y', \dots, y^{(n)})$

operators are basically higher order functions?

All the $y$s must be on one side.

An operator is linear if it satisfies:

  1. For $k \in \mathbb{R}$, $L[ky] = k \cdot L[y]$
  2. For any $y_1$, $y_2$: $L[y_1 + y_2] = L[y_1] + L[y_2]$

Differentiation is a linear operator.

Superposition Principle §

Suppose $y_1$ and $y_2$ solve the linear and homogeneous DE $L[y] = 0$.

Then for any constants $c_1, c_2 \in \mathbb{R}$, $L[c_1 y_1 + c_2 y_2] = 0$

Given that $L[y_1] = 0$, $L[y_2] = 0$

Let $c_1, c_2 \in \mathbb{R}$. Then, by linearity:

$L[c_1 y_1 + c_2 y_2] = L[c_1 y_1] + L[c_2 y_2]$

$= c_1 L[y_1] + c_2 L[y_2]$

$= c_1 \cdot 0 + c_2 \cdot 0 = 0$

If $y_1$, $y_2$ solve a homog DE, then any linear combination of $y_1$, $y_2$ (i.e. $c_1 y_1 + c_2 y_2$) solves the same DE.

Nonhomogeneous Principle §

Suppose $y_p(t)$ solves the linear, nonhomogeneous DE $L[y_p] = f(t)$.

Then, for any solution $y_h$ of the homogeneous equation $L[y_h] = 0$, $y_h + y_p$ also solves $L[y_h + y_p] = f(t)$

Furthermore, every soln to $L[y] = f(t)$ is of the form $y = y_h + y_p$ for some fixed $y_p$ and some homogeneous solution $y_h$.

Variation of parameters §

See pg. 64 of DELA

$y' - y = t$

  1. $y' - y = 0$ has solution $y_h = ce^t$
  2. Observe that $y_p = -t - 1$ solves $y' - y = t$
  3. Then, every solution to $y' - y = t$ is of the form $y(t) = y_h + y_p$

$y' + p(t)y = f(t)$

$y_h = ce^{-\int p(t)dt}$ is a solution to $y' + p(t)y = 0$

$y_p(t) = v(t)e^{-\int p(t)dt}$ (1)

$v' e^{-\int p(t)dt} = f(t)$ (2)

Solve (2) for $v'$, integrate to get $v$, and plug that into (1) to get $y_p(t)$.

$y(t) = y_h + y_p$
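Running the same example through these formulas: for $y' - y = t$, $p(t) = -1$, so $y_h = ce^t$ and $y_p = v(t)e^t$. Then (2) reads $v'e^t = t \implies v' = te^{-t} \implies v = -(t + 1)e^{-t}$, giving $y_p = -(t + 1)e^{-t} \cdot e^t = -t - 1$, matching step 2 above.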

Integrating factor method §

doesn’t work for higher order DEs

$y' + p(t)y = f(t)$

$\mu(t) = e^{\int p(t)dt}$

$y' \mu + p(t) \mu y = f(t) \mu$

$\implies (y \mu)' = f(t) \mu$

$\implies y \mu = \int f(t) \mu \, dt + C$
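Same example again: for $y' - y = t$, $\mu = e^{\int -1 \, dt} = e^{-t}$, so $(ye^{-t})' = te^{-t} \implies ye^{-t} = -(t + 1)e^{-t} + C \implies y = Ce^t - t - 1$.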

Models §

Linear mixing model §

$x(t)$ is the amount of salt

$x' = r_{in} - r_{out}$

$r_{in} =$ concentration in $\times$ flow rate in

$r_{out} =$ concentration out $\times$ flow rate out

$x(0)$ is the initial amount of salt: $0$ if the water is initially pure

Newton cooling §

$T$ is the temperature of an object surrounded by a medium at uniform temperature $M$. Then:

$\frac{dT}{dt} = k(M - T)$, $k > 0$

$T(0) = T_0$

$T(t) = T_0 e^{-kt} + M(1 - e^{-kt})$
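The solution is quick to verify symbolically. A minimal sympy sketch (assuming sympy is available; the symbol names are just for this check):

```python
# Solve T' = k(M - T), T(0) = T0 with sympy and compare with the formula above.
import sympy as sp

t, k, M, T0 = sp.symbols("t k M T0")
T = sp.Function("T")
soln = sp.dsolve(sp.Eq(T(t).diff(t), k * (M - T(t))), T(t), ics={T(0): T0})
print(sp.expand(soln.rhs))  # M + (T0 - M)*exp(-k*t) = T0*e^{-kt} + M(1 - e^{-kt})
```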

Matrices §

$\begin{bmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$

$m$ rows, $n$ columns

“entries”

notation: row followed by column

Operations §

Addition (and subtraction): entry-wise

Scaling: entry-wise. scale every entry.

dimensions must match for entry-wise operations.

Multiplication §

not entry-wise.

$A \times B$: rows of $A$ $\times$ columns of $B$

if $A$ is $(m \times r)$ and $B$ is $(r \times n)$: $A \times B$ is $(m \times n)$

$c_{ij} = \begin{bmatrix}a_{i1} & \cdots & a_{ir}\end{bmatrix} \times \begin{bmatrix}b_{1j} \\ \vdots \\ b_{rj}\end{bmatrix}$

not commutative: $A \times B$ is not always $= B \times A$

distributive.

$c_{ij} = \sum_{k=1}^{r} a_{ik} \times b_{kj}$
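The $c_{ij}$ formula translates directly into nested loops. A plain-Python sketch (lists of lists, no libraries; not how you’d compute this in practice, but it shows exactly which entries get multiplied):

```python
def matmul(A, B):
    m, r = len(A), len(A[0])
    assert len(B) == r, "inner dimensions must match"
    n = len(B[0])
    # c_ij = sum over k of a_ik * b_kj: row i of A dotted with column j of B
    return [[sum(A[i][k] * B[k][j] for k in range(r)) for j in range(n)]
            for i in range(m)]

print(matmul([[2, 1], [3, 4]], [[1, 0], [0, 1]]))  # [[2, 1], [3, 4]]
```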

Special matrices §

Vectors §

A row vector is a $1 \times n$ matrix

A column vector is an $n \times 1$ matrix

The scalar product (dot product) of a row vec with a column vec:

$\vec a \cdot \vec b = \sum a_i b_i$

Scalar product is a special case of matrix multiplication.

Systems of linear equations §

$2x + y = -7$

$3x + 4y = 2$

$\begin{bmatrix}2 & 1\\ 3 & 4\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}-7 \\ 2\end{bmatrix}$


$a_{11} x_1 + \cdots + a_{1n} x_n = b_1$

$\vdots$

$a_{m1} x_1 + \cdots + a_{mn} x_n = b_m$

$\begin{bmatrix}a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn}\end{bmatrix}\begin{bmatrix}x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix}b_1 \\ \vdots \\ b_m\end{bmatrix}$


the system is homogeneous if $\vec b = \vec 0$

a solution is a vector $\vec x$ that satisfies $A\vec x = \vec b$. We can write $A\vec x = \vec b$ as an augmented matrix.

Solutions of linear equations §

General[1] idea: $n$ equations in $n$ unknowns typically determine a unique solution.

Elementary row operations §

$R_i \rightarrow R_i + aR_j$

$R_i \rightarrow aR_i$ ($a \neq 0$)

$R_i \leftrightarrow R_j$

Row reduced echelon form §

A matrix is in RREF if:

  1. zero rows are at the bottom
  2. the left-most non-zero entry of each non-zero row is a $1$, called a pivot
  3. each pivot is farther to the right than the pivot in the row above it
  4. each pivot is the only non-zero entry in its column, called a pivot column

all matrices have an RREF (not just square ones).

The Gauss-Jordan algorithm converts a matrix to RREF.

If the RREF of $A\vec x = \vec b$ has a row of the form $\left[\begin{array}{ccc|c}0 & \cdots & 0 & k\end{array}\right]$ ($k \neq 0$), the equation is inconsistent (because it implies $0 = k$). For no values of the params are all equations satisfied.

Else, the equation is consistent (solution exists).


If every column is a pivot column, the solution (if it exists) is unique.

If there is at least one non-pivot column (and the system is consistent), there are infinitely many solutions.
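A quick check with sympy (assuming it’s available), using the earlier $2 \times 2$ system:

```python
# RREF of the augmented matrix [A | b] for 2x + y = -7, 3x + 4y = 2.
from sympy import Matrix

R, pivots = Matrix([[2, 1, -7], [3, 4, 2]]).rref()
print(R)       # Matrix([[1, 0, -6], [0, 1, 5]]) -> x = -6, y = 5
print(pivots)  # (0, 1): both coefficient columns are pivots -> unique soln
```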

Linearity properties of matrices §

$A(c_1 \vec x_1 + c_2 \vec x_2) = c_1 A\vec x_1 + c_2 A\vec x_2$

Superposition §

$A\vec x_1 = \vec 0$ and $A\vec x_2 = \vec 0 \implies A(c_1 \vec x_1 + c_2 \vec x_2) = \vec 0$

Nonhomogeneous §

$A\vec x_p = \vec b$ and $A\vec x_h = \vec 0 \implies A(\vec x_p + c\vec x_h) = \vec b$

To solve $A\vec x = \vec b$, we can find one solution $\vec x_p$ and fully solve $A\vec x_h = \vec 0$. Then every solution to $A\vec x = \vec b$ will be of the form $\vec x = \vec x_p + \vec x_h$

Solving a linear equation §

  1. Find the RREF of the augmented matrix $[A \mid \vec b]$
  2. Set all free variables[2] $= 0$ to find the particular soln $\vec x_p$
  3. For each free variable, set it $= 1$ (set the others $= 0$) and solve for the basic variables[3]. These give the homogeneous solns.
  4. Combine to get the general soln (see the worked example below).

if there are free variables, system is dependent ($\infty$ solns)

if no free variables, system is independent (unique soln)

The rank is the number of pivot columns
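Putting the steps together: suppose the RREF of $[A \mid \vec b]$ is $\left[\begin{array}{ccc|c}1 & 0 & 2 & 1\\ 0 & 1 & -1 & 2\end{array}\right]$. Column 3 is a non-pivot column, so $x_3$ is free. Setting $x_3 = 0$: $\vec x_p = (1, 2, 0)$. Setting $x_3 = 1$ in the homogeneous system: $\vec x_h = (-2, 1, 1)$. General soln: $\vec x = (1, 2, 0) + c(-2, 1, 1)$. The rank here is $2$.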

Inverse of a matrix §

$AA^{-1} = A^{-1}A = I$

$A^{-1}$ is the inverse. $A$ is invertible.

$IB = BI = B$

$AX = B \implies X = A^{-1}B$. $X$ is the unique solution.

Find $\vec x_1$ by solving $A \vec x_1 = \vec e_1$; in general, the $i$th column of $A^{-1}$ solves $A\vec x_i = \vec e_i$

$AX = B$ has a unique soln iff $A$ is invertible.

if AA is not invertible: either no soln or infinitely many.

Conditions for invertibility §

$A$ is an $n \times n$ matrix. The following are equivalent:

  1. $A^{-1}$ exists
  2. $\mathrm{RREF}(A) = I$
    1. $\mathrm{RREF}(A)$ has $n$ pivot columns
  3. $AX = B$ has a unique soln for every $B$ in $\mathbb{R}^n$
  4. $AX = 0$ has a unique soln $X = 0$

Determinants §

square matrices only.

$\det A = |A|$

$\det \begin{bmatrix}a & b \\ c & d \end{bmatrix} = ad - bc$

$\begin{matrix}+ & - & + \\ - & + & - \\ + & - & +\end{matrix}$

expand along any row or column.

$|A|$ is defined recursively for higher order matrices.
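The recursive definition, written out as a plain-Python sketch (cofactor expansion along the first row; fine for small matrices, but it’s $O(n!)$, so nobody computes determinants this way in practice):

```python
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 0, column j
        total += (-1) ** j * A[0][j] * det(minor)         # signs alternate +, -, +, ...
    return total

print(det([[1, 2], [3, 4]]))  # -2 = 1*4 - 2*3
```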

triangular matrices §

$\det T = \prod_{i=1}^{n} a_{ii}$

main diagonal.

diagonal matrix is a special case of triangular matrices.

Determinant row operations §

$\det I = 1$

If $A$ is invertible (non-singular), $\mathrm{RREF}(A) = I$ and $|A| \neq 0$.

If $A$ is not invertible (singular), its RREF has a zero row, so $|A| = 0$

$|A| \neq 0 \implies$ invertible. $|A| = 0 \implies$ not invertible.

Transpose: swap rows and columns (reflect along the main diagonal). $A^T$. $A = (a_{ij}) \implies A^T = (a_{ji})$

a matrix is symmetric if $A^T = A$

Trace: sum of the diagonal entries of a matrix.

Vector spaces and subspaces §

if $f, g \in V$, $c \in \mathbb{R}$:

  1. $c \cdot f \in V$
  2. $f + g \in V$

A vector space, $V$, is a collection of objects called vectors with two operations, vector addition and scalar multiplication, that satisfy the following properties for all $\vec x, \vec y, \vec z \in V$ and $c, d \in \mathbb{R}$:

  1. closure
    1. $\vec x + \vec y \in V$
    2. $c \vec x \in V$
  2. addition
    1. there is a zero vector $\vec 0 \in V$ such that $\vec x + \vec 0 = \vec x$ (additive identity)
    2. for every $\vec x \in V$, there is a $(-\vec x) \in V$ such that $(-\vec x) + \vec x = \vec 0$ (additive inverse)
    3. $(\vec x + \vec y) + \vec z = \vec x + (\vec y + \vec z)$ (associativity)
    4. $\vec x + \vec y = \vec y + \vec x$ (commutativity)
  3. scalar multiplication properties
    1. $1 \times \vec x = \vec x$ (scalar multiplicative identity)
    2. $c(\vec x + \vec y) = c\vec x + c\vec y$ (first distributive)
    3. $(c + d)\vec x = c\vec x + d\vec x$ (second distributive)
    4. $c(d\vec x) = (cd)\vec x$ (associativity)

A subspace is a subset $W \subset V$ of a vector space that is:

  1. non-empty (aka $\vec 0 \in W$)
  2. closed under addition: $v, w \in W \implies v + w \in W$
  3. closed under scalar multiplication: $c \in \mathbb{R}, v \in W \implies cv \in W$

(2) and (3) together: $c_1 v_1 + c_2 v_2 \in W$

Spanning theory §

Linear Independence §

A set is linearly independent if no vector in the set can be written as a linear combination of the others. Else, it is linearly dependent.

A set is linearly independent if $c_1\vec v_1 + \cdots + c_n\vec v_n = \vec 0 \implies c_1 = \cdots = c_n = 0$.

Else, linearly dependent.

a redundant vector $\implies$ there is a non-trivial way to get $\vec 0$

no redundant vector $\implies$ there is no non-trivial way to get $\vec 0$

Vector functions §

$\vec v(t) = \begin{pmatrix}f_1(t) \\ \vdots \\ f_n(t)\end{pmatrix}$

LI if $c_1 \vec v_1 + \cdots + c_n\vec v_n \equiv \vec 0 \implies c_1 = \cdots = c_n = 0$

Checking one value of $t$ is sufficient to show linear independence, but not sufficient to show linear dependence.

Wronskian §

Check whether functions are LI.

$n$ functions, derivatives up to order $n - 1$: an $n \times n$ matrix

$W[f_1, \cdots, f_n](t) = \begin{vmatrix}f_1(t) & \cdots & f_n(t) \\ \vdots & \ddots & \vdots \\ f_1^{(n-1)}(t) & \cdots & f_n^{(n-1)}(t)\end{vmatrix}$

If $W[f_1, \cdots, f_n](t) \neq 0$ for some $t$, then $\{f_1, \cdots, f_n\}$ is LI

if $W(t) \equiv 0$, inconclusive.
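sympy can compute Wronskians directly (assuming sympy is available; `wronskian` is its built-in helper):

```python
# W[e^t, e^{2t}](t): nonzero for some t, so {e^t, e^{2t}} is LI.
from sympy import exp, symbols, wronskian

t = symbols("t")
print(wronskian([exp(t), exp(2 * t)], t).simplify())  # exp(3*t) != 0
```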

Bases §

A set $\{\vec v_1, \cdots, \vec v_n\}$ is a basis for $V$ if it is LI and spanning.

Basis theorem: the number of vectors in a basis of $V$ is always the same; this number is the dimension of $V$

$\dim(V)$ is the size of a basis.

Properties of col(A) §

$col(A)$ is the span of the columns of $A$: $A\vec x = \vec b$ has a solution iff $\vec b \in col(A)$[4]

Invertible matrix characteristics §

$A$ is an $n \times n$ matrix. TFAE:

it’s a big world in math —Gregory Lyons (2020)

2nd order constant coefficients §

$ay'' + by' + cy = 0$, $a, b, c \in \mathbb{R}$ (1)

Recall that $y' - ry = 0 \implies y = e^{rt}$

Try $y = e^{rt}$ in (1)

$\implies ar^2 + br + c = 0$ (2). The values of $r$ that solve (2) give solutions $y = e^{rt}$ of (1).

Solutions to (2) are called characteristic roots or eigenvalues of (1).
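For example, $y'' - 3y' + 2y = 0 \implies r^2 - 3r + 2 = (r - 1)(r - 2) = 0 \implies r = 1, 2$, so $y = c_1 e^t + c_2 e^{2t}$.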

Dimension of the soln space for a linear 2nd order ODE

Existence and uniqueness theorem §

$y'' + p(t)y' + q(t)y = 0$

if $p(t)$ and $q(t)$ are cts, then for any $A, B \in \mathbb{R}$, there exists a unique $y(t)$ solving the IVP:

$y'' + py' + qy = 0$, $y(t_0) = A$, $y'(t_0) = B$

Solution space theorem (2nd order) §

The soln space $S$ of $y'' + py' + qy = 0$ has dimension two.

Nonhomogeneous superposition principle §

if $L$ is linear and $y_i$ solves $L[y] = f_i(t)$ ($i = 1, 2, \cdots, n$), then $c_1 y_1 + \cdots + c_n y_n$ solves $L[y] = c_1 f_1 + \cdots + c_n f_n$.

Variation of Parameters §

For $y'' + p(t)y' + q(t)y = f(t)$ with homogeneous solns $y_1$, $y_2$:

$u_1' = \frac{-y_2 f}{W}$

$u_2' = \frac{y_1 f}{W}$

$W = \begin{vmatrix}y_1 & y_2 \\ y_1' & y_2'\end{vmatrix}$

$u_1 = \int u_1' \, dt$, $u_2 = \int u_2' \, dt$

$y_p = u_1 y_1 + u_2 y_2$
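For example, $y'' - 3y' + 2y = e^{3t}$: $y_1 = e^t$, $y_2 = e^{2t}$, $W = e^t \cdot 2e^{2t} - e^t \cdot e^{2t} = e^{3t}$. Then $u_1' = -e^{2t} \implies u_1 = -\frac{1}{2}e^{2t}$ and $u_2' = e^t \implies u_2 = e^t$, so $y_p = -\frac{1}{2}e^{3t} + e^{3t} = \frac{1}{2}e^{3t}$.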

Distinct Eigenvalue Theorem §

If $\lambda_1, \cdots, \lambda_m$ are distinct e.vals of an $n \times n$ matrix $A$, their corresponding e.vectors are LI.

Eigenvalues & Eigenvectors §

$\lambda \vec v = A\vec v$, $\vec v \neq \vec 0$

$(A - \lambda I)\vec v = \vec 0$
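Numerically, numpy computes eigenpairs (a sketch assuming numpy; the matrix is an arbitrary example):

```python
# Check lambda * v = A v for one eigenpair of a sample symmetric matrix.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)        # columns of eigvecs are e.vectors
v = eigvecs[:, 0]
print(np.allclose(A @ v, eigvals[0] * v))  # True
```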

ODE Systems §

Distinct eigenvalues §

$\vec x' = A \vec x$

solved by $\vec x = e^{\lambda t} \vec v$

$\vec x = c_1 e^{\lambda_1 t} \vec v_1 + c_2 e^{\lambda_2 t} \vec v_2$

Repeated eigenvalues §

  1. find the e.vec $\vec v$ for e.val $\lambda$
  2. find a non-zero $\vec u$ such that $(A - \lambda I) \vec u = \vec v$
  3. $\vec x(t) = c_1 e^{\lambda t} \vec v + c_2 e^{\lambda t} (t \vec v + \vec u)$

Complex eigenvalues §

$\overline{a + bi} = a - bi$

$\overline{x + y} = \overline x + \overline y$

Eigenstuff comes in conjugate pairs.

$\lambda_1, \lambda_2 = \alpha \pm i\beta$

$\vec v_1, \vec v_2 = \vec p \pm i\vec q$

$\vec x_{Re} = e^{\alpha t}\left[\cos(\beta t) \vec p - \sin(\beta t)\vec q\right]$

$\vec x_{Im} = e^{\alpha t}\left[\sin(\beta t) \vec p + \cos(\beta t)\vec q\right]$

$\vec x = c_1 \vec x_{Re} + c_2 \vec x_{Im}$

Nonlinear first-order ODE systems §

h-nullclines: $y' = 0$

v-nullclines: $x' = 0$

equilibria: $x' = y' = 0$

Jacobian §

$J(f, g) = \begin{bmatrix}\frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y}\end{bmatrix}$
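With sympy (assuming it’s available; $f$ and $g$ here are made-up examples, not from the notes):

```python
# Jacobian of a sample nonlinear system, for linearizing about an equilibrium.
from sympy import Matrix, symbols

x, y = symbols("x y")
f = x * (3 - x - 2 * y)   # hypothetical f(x, y)
g = y * (2 - x - y)       # hypothetical g(x, y)
J = Matrix([f, g]).jacobian(Matrix([x, y]))
print(J)  # evaluate at an equilibrium via J.subs({x: 0, y: 0}), etc.
```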


  1. Exceptions exist when some/all equations are redundant ↩︎

  2. non-pivot columns ↩︎

  3. pivot columns ↩︎

  4. $A\vec x = \begin{bmatrix}\vec v_1 & \cdots & \vec v_n\end{bmatrix}\begin{bmatrix}x_1 \\ \vdots \\ x_n\end{bmatrix} = x_1 \vec v_1 + \cdots + x_n \vec v_n$ ↩︎