Shreyas’ Notes

# MATH 211

Differential equation (DE): equation involving the derivatives of $\geq 1$ dependent variable wrt $\geq 1$ independent variable

• $y'(t)=t^2+y(t)$

Ordinary DE (ODE): DE which contains only one independent variable.

• $y''+4(y')^2+t\sin y = 6t$

Partial DE: DE which contains >1 independent variable.

A solution to an ODE is a function $y(t)$ that satisfies the ODE on some time interval.

$y'=ky$, $k \in (-\infty, \infty)$

• $y=e^{kt}$
• $y=2e^{kt}$

ODE: $y'=\frac{-t}{y}$.

$y(t)=\sqrt{1-t^2}$ is a solution on $t \in (-1, 1)$ (at $t = \pm 1$, $y = 0$ and $y' = \frac{-t}{y}$ is undefined)

$y'' + 3y' + 2y = 0$

• $y=e^{-t}$, $y=e^{-2t}$
• $y(t) = c_1e^{-t} + c_2 e^{-2t}$ where $c_1, c_2 \in R$
• general solution

$y' = 3t^2$

• $y=t^3 + C$, $C \in R$

From a general soln, we obtain a particular soln by specifying the arbitrary constants.

An initial value problem (IVP):

• ODE
• initial conditions

$y'' + 3y' + 2y = 0$; $y(0) = 2$, $y'(0) = -3$

• Solved by $c_1 = c_2 = 1$
• $y(0)=e^0 + e^0 = 2$
• $y'(0)=-e^0 - 2e^0 = -3$

$y' = 3t^2$; $y(2) = 4$

• $y=t^3 + C$
• $y(2) = 8 + C = 4 \implies C = -4$
• $y(t) = t^3 - 4$ solves the IVP
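The IVP solution above can be sanity-checked numerically; a minimal sketch (the function names `y` and `dy` are mine):

```python
# Check that y(t) = t^3 - 4 solves y' = 3t^2 with y(2) = 4.
def y(t):
    return t**3 - 4

def dy(t, h=1e-6):
    # central-difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2 * h)

assert y(2) == 4                          # initial condition
assert abs(dy(1.5) - 3 * 1.5**2) < 1e-4   # ODE holds at a sample point
```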

The order of a DE is the order of its highest derivative.

## Direction Fields §

Consider an ODE of the form $y' = f(t, y)$.

For each $(t, y)$ value, plot a small line segment with the slope $f(t, y)$. A collection of such segments is a direction field.

Concavity depends on whether $y''$ is +ve or -ve.

$y' = t -y$

$\implies y'' = 1 - y' = 1 + y - t$

when $y'' = 0 \implies 1 + y - t = 0 \implies y = t - 1$

$y'' > 0 \implies 1 + y - t > 0 \implies y > t -1$

An equilibrium solution is one which does not change over time. $y(t) \equiv C$

• $y' = -2ty + t$
• $y' \equiv 0 \implies t(1 - 2y) = 0$ for all $t$
• $\implies y \equiv \frac{1}{2}$

An equilibrium solution is:

• stable if solutions near it tend toward it as $t \rightarrow \infty$

• unstable if solutions near it tend away from it.

• $y' = y^2 - 4$

• $y' \equiv 0 \implies y^2 - 4 = 0$
• $\implies y^2 = 4$
• $\implies y = \pm 2$
• $y = 2$ is unstable
• $y = -2$ is stable
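The stability claims can be illustrated numerically; a small sketch using Euler's method (the helper name `euler` and the step sizes are my choices):

```python
# Euler's method for y' = y^2 - 4: solutions near y = -2 approach it,
# solutions near y = 2 move away (the equilibria found above).
def euler(y0, dt=0.01, steps=300):
    y = y0
    for _ in range(steps):
        y += dt * (y**2 - 4)
    return y

assert abs(euler(-1.5) + 2) < 0.05   # drawn toward the stable equilibrium -2
assert euler(1.9) < 1.0              # pushed away from the unstable equilibrium 2
```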

An isocline of a DE $y' = f(t, y)$ is a curve in the $t$-$y$ plane along which the slope is constant: $f(t, y) \equiv C$

• $y' = y^2 - 4 = C$ for a constant $C$
• $y^2 = C + 4$
• $y = \pm \sqrt{C + 4}$ (requires $C \geq -4$)

## Separable Equations §

A separable DE is one which can be written as $y' = f(t) \times g(y)$.

### Solving a Separable DE §

1. Solve $g(y) = 0$ to find equilibrium solns.
2. Else, assume $g(y) \neq 0$. Then:
• $\frac{dy}{dt} = f(t) \times g(y)$
• $\frac{dy}{g(y)} = f(t) \cdot dt$
3. $\int \frac{dy}{g(y)} = \int f(t) \cdot dt$
4. If possible, solve for $y$ in terms of $t$ to get an explicit soln
5. If there’s an IVP, solve for $C$ using the initial conditions

Don’t forget to check for $g(y) = 0$

• $y' = \frac{-t}{y}$

• $f(t) = -t$
• $g(y) = \frac{1}{y}$
• $g(y)$ is never equal to $0$
• $y \cdot dy= -t \cdot dt$
• $\int y \cdot dy = \int -t \cdot dt$
• $\frac{y^2}{2} = - \frac{t^2}{2} + C$
• $y^2 = -t^2 + C$
• $y = \pm \sqrt{C - t^2}$
• $y' = \frac{t^2}{1 - y^2}$

• $f(t) = t^2$
• $g(y) = \frac{1}{1 - y^2}$
• $\frac{dy}{dt} = \frac{t^2}{1 - y^2}$
• $(1 - y^2) \cdot dy = t^2 \cdot dt$
• $\int (1 - y^2) \cdot dy = \int t^2 \cdot dt$
• $y - \frac{1}{3}y^3 = \frac{1}{3} t^3 + C$
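The implicit solution above can be checked numerically: $F(t, y) = y - \frac{1}{3}y^3 - \frac{1}{3}t^3$ should stay constant along any solution. A sketch with Euler's method (names and starting values are mine):

```python
# F(t, y) = y - y^3/3 - t^3/3 is constant along solutions of
# y' = t^2 / (1 - y^2).  Integrate numerically and watch F stay fixed.
def F(t, y):
    return y - y**3 / 3 - t**3 / 3

t, y, dt = 0.0, 2.0, 1e-4          # start away from y = ±1 where y' blows up
c0 = F(t, y)
for _ in range(5000):              # integrate out to t = 0.5
    y += dt * t**2 / (1 - y**2)
    t += dt
assert abs(F(t, y) - c0) < 1e-3
```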

## Picard’s Theorem §

A solution of an IVP is unique if it is the only one: if $y_1$ and $y_2$ both solve the IVP, then $y_1(t) = y_2(t)$.

A DE typically has infinitely many solutions. Initial conditions usually nail down a unique solution.

$y' = -2ty$ doesn’t have a unique solution. $y' = -2ty$; $y(0) = 1$ does have a unique solution

$y' = \frac{1}{y}$, $y(0) = 0$ has no solution. $f$ is not defined on the $t$ axis.

• whether a solution exists
• whether a solution is unique (predictability)

Picard’s Theorem: Suppose $f(t, y)$ is continuous on the open rectangle $R = \{(t, y) : a < t < b,\ c < y < d\}$ and $(t_0, y_0) \in R$. Then there exists an $h > 0$ such that the IVP $y' = f(t, y)$, $y(t_0) = y_0$ has a solution for $t$ in the interval $(t_0 - h, t_0 + h)$.

If the partial derivative $\frac{\partial}{\partial y}f(t, y)$ is also continuous in $R$, then the solution is unique.

it can prove uniqueness, but can’t prove lack of uniqueness

the hypotheses are sufficient, but not necessary

## Linear DEs §

A DE is linear if it is of the form:

$a_n(t) \cdot \frac{d^n y}{dt^n} + a_{n - 1}(t) \cdot \frac{d^{n-1} y}{dt^{n-1}} + \dots + a_1(t) \frac{dy}{dt} + a_0(t) y = f(t)$

where $a_0(t), \dots, a_n (t)$ are cts (on some interval) functions solely of $t$.

If $f(t)\equiv 0$: homogeneous DE. Else, inhomogeneous DE.

| Equation | Linear | Homogeneous |
| --- | --- | --- |
| $y'' + ty' - 3y = 0$ | Yep | Yep |
| $y' + y^2 = 0$ | Nope | Yep |
| $y' + \sin y = 1$ | Nope | Nope |
| $y' - t^2 y = 0$ | Yep | Yep |
| $y' + (\sin t) y = 1$ | Yep | Nope |
| $y'' - 3y' + y = \sin t$ | Yep | Nope |

For a DE $F(t, y, y', \dots, y^{(n)}) = f(t)$, the corresponding operator is $L[y] = F(t, y, y', \dots, y^{(n)})$. Operators are basically higher-order functions. All the $y$s must be on one side.

An operator is linear if it satisfies:

1. For $k \in \mathbb{R}$, $L[ky] = k \cdot L[y]$
2. For any $y_1$, $y_2$: $L[y_1 + y_2] = L[y_1] + L[y_2]$

Differentiation is a linear operator.

### Superposition Principle §

Suppose $y_1$ and $y_2$ solve the linear, homogeneous DE $L[y] = 0$. Then for any constants $c_1, c_2 \in \mathbb{R}$, $L[c_1 y_1 + c_2 y_2] = 0$.

Given that $L[y_1] = 0$ and $L[y_2] = 0$, let $c_1, c_2 \in \mathbb{R}$. Then, by linearity:

$L[c_1 y_1 + c_2 y_2] = L[c_1 y_1] + L[c_2 y_2]$

$= c_1 L[y_1] + c_2 L[y_2]$

$= c_1 \cdot 0 + c_2 \cdot 0 = 0$

If $y_1$, $y_2$ solve a homogeneous DE, then any linear combination of $y_1$, $y_2$ (i.e. $c_1 y_1 + c_2 y_2$) solves the same DE.

### Nonhomogeneous Principle §

Suppose $y_p(t)$ solves the linear, nonhomogeneous DE $L[y_p] = f(t)$. Then, for any solution $y_h$ of the homogeneous equation $L[y_h] = 0$, $y_h + y_p$ also solves $L[y_h + y_p] = f(t)$.

Furthermore, every soln to $L[y] = f(t)$ is of the form $y = y_h + y_p$ for some fixed $y_p$ and some homogeneous solution $y_h$.

### Variation of parameters §

See pg. 64 of DELA.

$y' - y = t$

1. $y' - y = 0$ has solution $y_h = ce^t$
2. Observe that $y_p = -t - 1$ solves $y' - y = t$
3. Then, every solution to $y' - y = t$ is of the form $y(t) = y_h + y_p$

For $y' + p(t)y = f(t)$:

$y_h = ce^{-\int p(t)dt}$ is a solution to $y' + p(t)y = 0$

$y_p(t)=v(t)e^{-\int p(t)dt}$ (1)

$v' e^{-\int p(t)dt}=f(t)$ (2)

Solve (2) for $v$ and plug that into (1) to get $y_p(t)$. Then $y(t) = y_h + y_p$.

### Integrating factor method §

doesn’t work for higher order DEs

$y' + p(t)y = f(t)$

$\mu(t) = e^{\int p(t)dt}$

$y' \mu + p(t) \mu y = f(t) \mu$

$\implies (y \mu)' = f(t) \mu$

$\implies y \mu = \int f(t) \mu \, dt + C$

### Models §

#### Linear mixing model §

$x(t)$ is the amount of salt.

$x' = r_{in} - r_{out}$

$r_{in} =$ concentration in $\times$ flow rate in

$r_{out} =$ concentration out $\times$ flow rate out

$x(0)$ is the initial amount of salt. $0$ if the water is initially pure
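A small numerical sketch of the mixing model (all numbers are made up for illustration, and the tank volume is assumed constant):

```python
# Mixing tank: 100 L of initially pure water; brine at 0.5 kg/L flows in
# at 4 L/min, well-mixed solution leaves at 4 L/min, so
# x' = r_in - r_out = 0.5*4 - (x/100)*4 = 2 - 0.04*x.
def simulate(x0=0.0, dt=0.001, t_end=300.0):
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (2 - 0.04 * x)
    return x

# long-run amount: x' = 0 at x = 2 / 0.04 = 50 kg
assert abs(simulate() - 50.0) < 0.1
```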

#### Newton cooling §

$T$ temperature of an object surrounded by a uniform temperature $M$. Then:

$\frac{dT}{dt} = k(M - T)$, $k > 0$

$T(0) = T_0$

$T(t) = T_0 e^{-kt} + M(1 - e^{-kt})$
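The closed-form solution can be evaluated directly; a sketch (the coffee-cup numbers are invented for illustration):

```python
import math

# Newton cooling: T(t) = T0*exp(-kt) + M*(1 - exp(-kt)).
# Example: object at 90°C in a 20°C room, k = 0.1 per minute (made-up values).
def T(t, T0=90.0, M=20.0, k=0.1):
    return T0 * math.exp(-k * t) + M * (1 - math.exp(-k * t))

assert T(0) == 90.0                   # starts at T0
assert abs(T(100) - 20.0) < 0.01      # approaches the ambient temperature M
assert T(0) > T(5) > T(10) > 20.0     # cools monotonically toward M
```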

## Matrices §

$\begin{bmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$

$m$ rows, $n$ columns

“entries”

notation: row followed by column

### Operations §

Scaling: entry-wise. scale every entry.

dimensions must match for entry-wise operations.

#### Multiplication §

not entry-wise.

$A \times B$: rows of $A$ $\times$ columns of $B$

if $A$ is $(m\times r)$ and $B$ is $(r \times n)$: $A \times B$ is $(m \times n)$

$c_{ij}=\left[\,a_{i1} \cdots a_{ir}\,\right] \times \begin{bmatrix}b_{1j} \\ \vdots \\ b_{rj}\end{bmatrix}$

not commutative: $A \times B$ is not always $= B \times A$

distributive.

$c_{ij} = \sum_{k=1}^{r} a_{ik} \, b_{kj}$

### Special matrices §

• zero matrix $0_{m\times n}$

all elements $0$: $(a_{ij} = 0)$

• identity matrix $I_n$

principal diagonal is all $1$s, everything else is $0$

$a_{ij} = \begin{cases} 1 & i=j \\ 0 & i \ne j \end{cases}$

$A \times I_n = A$

the only non-zero term in each row-column dot prod is $a_{ij}$

### Vectors §

A row vector is a $1 \times n$ matrix

A column vector is a $n \times 1$ matrix

The scalar product (dot product) of a row vec with a column vec:

$\vec a \cdot \vec b = \sum a_i b_i$

Scalar product is a special case of matrix multiplication.

### Systems of linear equations §

$2x +y = -7$

$3x + 4y = 2$

$\begin{bmatrix}2 & 1\\ 3 & 4\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}-7 \\ 2\end{bmatrix}$

$a_{11} x_1 + \cdots + a_{1n} x_n = b_1$

$\vdots$

$a_{m1} x_1 + \cdots + a_{mn} x_n = b_m$

$\begin{bmatrix}a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn}\end{bmatrix}\begin{bmatrix}x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix}b_1 \\ \vdots \\ b_m\end{bmatrix}$

system is homogeneous if $\vec b = \vec 0$

a solution is a vector $\vec x$ that satisfies $A\vec x = \vec b$. We can write $A\vec x = \vec b$ as an augmented matrix.

## Solutions of linear equations §

General[1] idea:

• if # equations < # unknowns

underdetermined: infinitely many solutions

• if # equations > # unknowns

overdetermined: no solutions

• if # equations = # unknowns

unique solution

### Elementary row operations §

$R_i\rightarrow R_i+aR_j$

$R_i\rightarrow aR_i$

$R_i \leftrightarrow R_j$

#### Row reduced echelon form §

A matrix is in RREF if:

1. zero rows are at the bottom
2. the left-most non-zero entry of each non-zero row is a $1$, called a pivot
3. Each pivot is farther to the right than the pivot in the row above it
4. Each pivot is the only non-zero entry in its column, called a pivot column

all matrices have a RREF (not just square ones).

Gauss-Jordan algorithm converts a matrix to RREF form.

If the RREF of $A\vec x = \vec b$ has a row of the form $\left[\begin{array}{ccc|c}0 & \cdots & 0 & k\end{array}\right]$ ($k\neq 0$), the equation is inconsistent (because it implies $0 = k$). For no values of the params are all equations satisfied.

Else, the equation is consistent (solution exists).

If every column is a pivot column, the solution (if it exists) is unique.

If there is at least one non-pivot column, there are infinitely many solutions.
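The Gauss-Jordan reduction described above can be sketched in Python (the function name `rref` is mine; this is a bare-bones version with no numerical refinements):

```python
# Gauss-Jordan reduction to RREF.
def rref(M, tol=1e-12):
    M = [row[:] for row in M]           # work on a copy
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # find a row at or below pivot_row with a nonzero entry in this column
        sel = next((r for r in range(pivot_row, rows) if abs(M[r][col]) > tol), None)
        if sel is None:
            continue
        M[pivot_row], M[sel] = M[sel], M[pivot_row]   # row swap
        p = M[pivot_row][col]
        M[pivot_row] = [x / p for x in M[pivot_row]]  # scale the pivot to 1
        for r in range(rows):                         # clear the rest of the column
            if r != pivot_row and abs(M[r][col]) > tol:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[pivot_row])]
        pivot_row += 1
    return M

# augmented matrix for 2x + y = -7, 3x + 4y = 2  ->  x = -6, y = 5
R = rref([[2, 1, -7], [3, 4, 2]])
assert all(abs(R[i][j] - [[1, 0, -6], [0, 1, 5]][i][j]) < 1e-9
           for i in range(2) for j in range(3))
```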

### Linearity properties of matrices §

$A(c_1 \vec x_1 + c_2 \vec x_2) = c_1 A \vec x_1 + c_2 A \vec x_2$

#### Superposition §

$A \vec x_1 = \vec 0$ and $A \vec x_2 = \vec 0$ $\implies A(c_1 \vec x_1 + c_2 \vec x_2) = \vec 0$

#### Nonhomogeneous §

$A \vec x_p = \vec b$ and $A \vec x_h = \vec 0$ $\implies A(\vec x_p + c \vec x_h) = \vec b$

To solve $A\vec x = \vec b$, we can find one solution $\vec x_p$ and fully solve $A\vec x_h = \vec 0$. Then every solution to $A\vec x = \vec b$ will be of the form $\vec x = \vec x_p + \vec x_h$

### Solving a linear equation §

1. Find RREF of $A$
2. Set all free variables[2] $= 0$ to find $x_p$ particular soln
3. For each free variable: set it $= 1$ (set the other free variables $= 0$) and solve for the basic variables[3] to get the homogeneous solns
4. Combine to get the general soln.

if there are free variables, system is dependent ($\infty$ solns)

if no free variables, system is independent (unique soln)

The rank is the number of pivot columns

### Inverse of a matrix §

$AA^{-1} = A^{-1}A = I$

$A^{-1}$ is the inverse. $A$ is invertible.

$IB = BI = B$

$AX=B\implies X=A^{-1}B$ (note: $A^{-1}$ multiplies on the left). $X$ is the unique solution.

Find the $i$th column $\vec x_i$ of $A^{-1}$ by solving $A \vec x_i = \vec e_i$

$AX=B$ has a unique soln iff $A$ is invertible.

if $A$ is not invertible: either no soln or infinitely many.

#### Conditions for invertibility §

$A$ is an $n\times n$ matrix. The following are equivalent:

1. $A^{-1}$ exists
2. $\mathrm{RREF}(A) = I$
    1. $\mathrm{RREF}(A)$ has $n$ pivot columns
3. $AX = B$ has a unique soln for every $B \in \mathbb{R}^n$
4. $AX = 0$ has the unique soln $X = 0$

### Determinants §

square matrices only.

$\det A = | A |$

$\det \begin{bmatrix}a & b \\ c & d \end{bmatrix} = ad - bc$

$\begin{matrix}+ & - & + \\ - & + & - \\ + & - & +\end{matrix}$

expand along any row or column.

$|A|$ is defined recursively for higher order matrices.

#### triangular matrices §

• upper
• lower

$\det T = \prod_{i=1}^{n}a_{ii}$

main diagonal.

diagonal matrix is a special case of triangular matrices.
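The recursive definition can be sketched via cofactor expansion along the first row (the function name `det` is mine; fine for small matrices, far too slow for large ones):

```python
# Cofactor expansion along the first row.
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    # delete row 0 and column j to form each minor
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3        # the 2x2 formula
T = [[2, 7, 1], [0, 3, 5], [0, 0, 4]]                # upper triangular
assert det(T) == 2 * 3 * 4                           # product of the diagonal
```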

#### Determinant row operations §

• if $R_i \leftrightarrow R_j$: $|A^*|=-|A|$

• if $R_j^* = R_j + kR_i$: $|A^*| = |A|$

• if $R_i^* = kR_i$: $|A^*| = k|A|$

$\det I = 1$

If $A$ is invertible (non-singular), $\mathrm{RREF}(A) = I$, $|A| \neq 0$.

If $A$ is not invertible (singular), RREF has a zero row. $|A| = 0$

$|A| \neq 0 \implies$ invertible. $|A| = 0 \implies$ not invertible.

Transpose: swap rows and columns. reflect along main diagonal. $A^T$. $A=(a_{ij})\implies A^T = (a_{ji})$

matrix is symmetric if $A^T = A$

Trace: sum of the diagonal entries of a matrix.

## Vector spaces and subspaces §

if $f,g \in V$, $c \in R$:

1. $c\circ f\in V$
2. $f+g \in V$

A vector space, $V$, is a collection of objects called vectors with two operations:

• vector addition
• scalar multiplication

that satisfy the following properties for all $\vec x, \vec y, \vec z \in V$ and $c,d \in \mathbb{R}$:

1. closure
    1. $\vec x + \vec y \in V$
    2. $c \vec x \in V$
2. addition properties
    1. there is a zero vector $\vec 0 \in V$ such that $\vec x + \vec 0 = \vec x$ (additive identity)
    2. for every $\vec x \in V$, there is a $(- \vec x) \in V$ such that $(-\vec x) + \vec x = \vec 0$ (additive inverse)
    3. $(\vec x + \vec y) + \vec z = \vec x + (\vec y + \vec z)$ (associativity)
    4. $\vec x + \vec y = \vec y + \vec x$ (commutativity)
3. scalar multiplication properties
    1. $1 \cdot \vec x = \vec x$ (scalar multiplicative identity)
    2. $c (\vec x + \vec y) = c \vec x + c \vec y$ (first distributive)
    3. $(c + d)\vec x = c \vec x + d \vec x$ (second distributive)
    4. $c(d \vec x) = (cd)\vec x$ (associativity)

A subspace is a subset $W \subset V$ of a vector space that is:

1. non-empty (aka. $\vec 0 \in W$)
2. closed over addition $v,w\in W \implies v+w \in W$
3. closed over scalar multiplication $c\in \mathbb{R},v \in W \implies cv \in W$

(2) and (3) together: $c_1v_1 + c_2v_2 \in W$

## Linear Independence §

A set is linearly independent if no vector in the set can be written as a linear combination of the others. Else, it is linearly dependent.

A set is linearly independent if $c_1\vec v_1 + \cdots + c_n\vec v_n = 0 \implies c_1 = \cdots = c_n = 0$.

Else, linearly dependent.

redundant vector $\implies$ there is a non-trivial way to get $\vec 0$

no redundant vector $\implies$ there is no non-trivial way to get $\vec 0$

• $A\vec x = \vec 0$ has a unique soln $\vec x = \vec 0$

• $RREF(A)$ has $n$ pivots

• Vanilla LI

• Function LI

### Vector functions §

$\vec v(t) = \left(\begin{matrix}f_1(t) \\ \vdots \\ f_n(t)\end{matrix}\right)$

LI if $c_1 \vec v_1 + \cdots + c_n\vec v_n \equiv \vec 0 \implies c_1 = \cdots = c_n = 0$

Checking one value of $t$ is sufficient to show linear independence, but not sufficient to show linear dependence.

### Wronskian §

Check whether functions are LI.

$n$ functions, $n$ derivatives, $n\times n$ matrix

$W[f_1, \cdots, f_n](t) = \begin{vmatrix}f_1(t) & \cdots & f_n(t) \\ \vdots & \ddots & \vdots \\ f_1^{(n-1)}(t) & \cdots & f_n^{(n-1)}(t)\end{vmatrix}$

If $W[f_1, \cdots, f_n](t) \neq 0$ for some $t$, then $\{f_1, \cdots, f_n\}$ is LI

if $W(t) \equiv 0$, inconclusive.
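For two functions, the Wronskian can be sketched numerically (the helper name `wronskian2` and the central-difference derivative are my choices):

```python
import math

# Wronskian of two functions via central-difference derivatives:
# W = f1 * f2' - f2 * f1'.
def wronskian2(f1, f2, t, h=1e-5):
    d1 = (f1(t + h) - f1(t - h)) / (2 * h)
    d2 = (f2(t + h) - f2(t - h)) / (2 * h)
    return f1(t) * d2 - f2(t) * d1

# e^-t and e^-2t (the solutions from earlier): W(t) = -e^{-3t}, never 0 -> LI
w = wronskian2(lambda t: math.exp(-t), lambda t: math.exp(-2 * t), 0.0)
assert abs(w + 1.0) < 1e-6            # W(0) = -1

# t and 2t are dependent: W vanishes identically (inconclusive by itself)
assert abs(wronskian2(lambda t: t, lambda t: 2 * t, 1.0)) < 1e-9
```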

### Bases §

A set $\{\vec v_1,\cdots, \vec v_n\}$ is a basis for $V$ if it is LI and spanning.

Basis theorem: the number of vectors in a basis of $V$ is always the same—the dimension of $V$

• minimal spanning set

if a vector is removed, the set will no longer be spanning

• basis is a maximal LI set

if a vector is added, the set will no longer remain LI

$dim (V)$ is the size of a basis.

• The minimum number of vectors needed to span $V$ is $dim(V)$
• The maximum number of vectors a LI set can have is $dim(V)$

### Properties of $col(A)$ §

• The pivot columns of $A$ form a basis for $col (A)$
• The dimension of $col (A)$ is called the rank of $A$: the number of pivot columns of $A$

### Invertible matrix characteristics §

$A$ is an $n\times n$ matrix. TFAE:

• A is an invertible matrix
• A has $n$ pivot columns
• $RREF(A) = I_n$
• $rank(A) = n$
• columns of $A$ are linearly independent
• $A \vec x = \vec 0$ [4] has a unique solution: $\vec x = \vec 0$
• $A \vec x = \vec b$ has a unique solution for every $\vec b \in \mathbb{R}^n$

it’s a big world in math —Gregory Lyons (2020)

## 2nd order constant coefficients §

$ay'' + by' + cy =0$, $a, b, c \in \mathbb{R}$ (1)

Recall that $y' - ry = 0 \implies y = e^{rt}$

Try $y = e^{rt}$ in (1)

$\implies ar^2 + br + c = 0$ (2). The values of $r$ solving (2) give solutions $y = e^{rt}$ of (1).

Solutions to (2) are called characteristic roots or eigenvalues of (1).

• Case 1 $\Delta = b^2 - 4ac > 0$:

$r_1, r_2 = \frac{-b\pm \sqrt\Delta}{2a}$

$\implies y(t) = c_1 e^{r_1 t} + c_2 e^{r_2 t}$

• Case 2 $\Delta = 0$:

$y(t) = c_1 e^{rt} + c_2 t e^{rt}$

• Case 3 $\Delta < 0$

$r = \alpha \pm \beta i$

$y = e^{\alpha t}\left[c_1\cos(\beta t)+c_2\sin(\beta t)\right]$

The dimension of the soln space for a linear 2nd order ODE is two.
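The three cases above come down to the sign of the discriminant; a small sketch (the function name `classify` is mine):

```python
import cmath

# Classify ay'' + by' + cy = 0 by the discriminant of ar^2 + br + c = 0.
def classify(a, b, c):
    disc = b**2 - 4 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:
        return "distinct real", r1.real, r2.real
    if disc == 0:
        return "repeated real", r1.real, r2.real
    return "complex pair", r1, r2

assert classify(1, 3, 2)[0] == "distinct real"   # r = -1, -2 (example above)
assert classify(1, 2, 1)[0] == "repeated real"   # r = -1 twice
assert classify(1, 0, 4)[0] == "complex pair"    # r = ±2i
```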

### Existence and uniqueness theorem §

$y'' + p(t)y' + q(t)y = 0$

if $p(t)$ and $q(t)$ are cts, then for any $A,B\in \mathbb{R}$, there exists a unique $y(t)$ solving the IVP:

$y'' + py' + qy = 0$, $y(t_0) = A$, $y'(t_0) = B$

### Solution space theorem (2nd order) §

The soln space $S$ of $y'' + py' + qy = 0$ has dimension two.

## Nonhomogeneous superposition principle §

if $L$ is linear and $y_i$ solves $L[y] = f_i(t)$ ($i = 1, 2, \cdots, n$), then $c_1y_1+\cdots +c_ny_n$ solves $L[y] = c_1f_1 + \cdots + c_n f_n$.

## Variation of Parameters §

For $y'' + p(t)y' + q(t)y = f(t)$ with homogeneous solutions $y_1$, $y_2$:

$u_1' = \frac{-y_2 f}{W}$

$u_2' = \frac{y_1 f}{W}$

$W=\begin{vmatrix}y_1 & y_2 \\ y_1' & y_2'\end{vmatrix}$

$u_1 = \int u_1'$, $u_2 = \int u_2'$

$y_p = u_1 y_1 + u_2 y_2$
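A worked example of my own (not from the notes): $y'' + y = t$, with homogeneous solutions $y_1 = \cos t$, $y_2 = \sin t$:

```latex
% Variation of parameters for y'' + y = t  (illustrative example)
W = \begin{vmatrix}\cos t & \sin t \\ -\sin t & \cos t\end{vmatrix}
  = \cos^2 t + \sin^2 t = 1
\\
u_1' = \frac{-y_2 f}{W} = -t\sin t
\qquad
u_2' = \frac{y_1 f}{W} = t\cos t
\\
u_1 = t\cos t - \sin t
\qquad
u_2 = t\sin t + \cos t
\\
y_p = u_1 y_1 + u_2 y_2
    = (t\cos t - \sin t)\cos t + (t\sin t + \cos t)\sin t = t
```

And indeed $y_p = t$ satisfies $y_p'' + y_p = 0 + t = t$.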

## Distinct Eigenvalue Theorem §

If $\lambda_1, \cdots, \lambda_m$ are distinct e.vals for a $n\times n$ matrix $A$, their corresponding e.vectors are LI.

## Eigenvalues & Eigenvectors §

$\lambda v = Av$

$(A - \lambda I)v = \vec 0$
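For a $2\times 2$ matrix, $\det(A - \lambda I) = 0$ becomes $\lambda^2 - \mathrm{tr}(A)\lambda + \det(A) = 0$; a sketch (the helper name `eig2` is mine, and it assumes real eigenvalues):

```python
import math

# Eigenvalues of a 2x2 matrix from lambda^2 - tr(A) lambda + det(A) = 0.
def eig2(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr**2 - 4 * det)     # assumes real eigenvalues
    return (tr + disc) / 2, (tr - disc) / 2

A = [[2, 1], [1, 2]]
assert eig2(A) == (3.0, 1.0)
# check A v = lambda v for v = (1, 1), an eigenvector for lambda = 3
v = [1, 1]
Av = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
assert Av == [3 * v[0], 3 * v[1]]
```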

## ODE Systems §

### Distinct eigenvalues §

$\vec x' = A \vec x$

solved by $\vec x = e^{\lambda t} \vec v$

$\vec x = c_1 e^{\lambda_1 t} \vec v_1 + c_2 e^{\lambda_2 t}\vec v_2$

### Repeated eigenvalues §

1. find e.vec $\vec v$ for e.val $\lambda$
2. find non-zero $\vec u$ such that $(A - \lambda I) \vec u = \vec v$
3. $\vec x (t) = c_1 e^{\lambda t} \vec v + c_2 e^{\lambda t} (t \vec v + \vec u)$

### Complex eigenvalues §

$\overline{a + bi} = a - bi$

$\overline{x + y} = \overline x + \overline y$

Eigenstuff comes in conjugate pairs.

$\lambda_1, \lambda_2 = \alpha \pm i\beta$

$\vec v_1, \vec v_2 = \vec p \pm i\vec q$

$\vec x_{Re} = e^{\alpha t}\left[\cos(\beta t) \vec p - \sin (\beta t)\vec q\right]$

$\vec x_{Im} = e^{\alpha t}\left[\sin(\beta t) \vec p + \cos (\beta t)\vec q\right]$

$\vec x = c_1 \vec x_{Re} + c_2 \vec x_{Im}$

## Nonlinear first-order ODE systems §

h-nullclines: $y' = 0$

v-nullclines: $x' = 0$

equilibria: $x' = y' = 0$

### Jacobian §

$J(f, g) = \begin{bmatrix}\frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y}\end{bmatrix}$
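The Jacobian can be approximated with finite differences for checking the linearization at an equilibrium; a sketch (the function name `jacobian` and the example system are mine):

```python
# Finite-difference Jacobian of (f, g) at a point (x, y).
def jacobian(f, g, x, y, h=1e-6):
    return [
        [(f(x + h, y) - f(x - h, y)) / (2 * h), (f(x, y + h) - f(x, y - h)) / (2 * h)],
        [(g(x + h, y) - g(x - h, y)) / (2 * h), (g(x, y + h) - g(x, y - h)) / (2 * h)],
    ]

# x' = x(3 - x - y), y' = y(2 - y)   (a made-up competition-style system)
f = lambda x, y: x * (3 - x - y)
g = lambda x, y: y * (2 - y)
J = jacobian(f, g, 1.0, 2.0)          # (1, 2) is an equilibrium: f = g = 0 there
expected = [[-1.0, -1.0], [0.0, -2.0]]
assert all(abs(J[i][j] - expected[i][j]) < 1e-4 for i in range(2) for j in range(2))
```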

1. Exceptions exist when some/all equations are redundant ↩︎

2. non-pivot columns ↩︎

3. pivot columns ↩︎

4. $A\vec x = [\vec v_1 \cdots \vec v_n]\times [\vec x_1 \cdots \vec x_n] = \vec x_1 \vec v_1 + \cdots + \vec x_n \vec v_n$ ↩︎