Unit 1

Section 1 (Vectors and Matrices I)

  • What is an n-vector, how to add and subtract vectors, and how to multiply them by scalars (real numbers).
  • What is the dot product of vectors.
  • How is the dot product related to angle, orthogonality, Euclidean norm (length), and distance of vectors.
  • What is an m × n (“m by n”) matrix and how to correctly determine matrix dimensions.
  • How to add and subtract matrices, and multiply them by scalars.
  • How to use indices when accessing individual entries in vectors and matrices.
  • How to calculate matrix-vector and matrix-matrix products, and when they are defined and undefined.
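
A minimal NumPy sketch of these operations; the vectors and matrices are made-up examples:

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])          # a 3-vector
    v = np.array([4.0, 5.0, 6.0])
    print(2 * u - v)                        # scalar multiplication and subtraction
    print(u @ v)                            # dot product: 1*4 + 2*5 + 3*6 = 32
    print(np.linalg.norm(u - v))            # Euclidean distance between u and v

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])         # a 2 x 3 matrix
    print(A @ u)                            # matrix-vector product (2x3 times 3-vector)
    print(A @ np.ones((3, 2)))              # matrix-matrix product (2x3 times 3x2)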

Section 2 (Vectors and Matrices II)

  • More about vectors and matrices, including zero vectors, zero matrices, identity matrices, and diagonal matrices.
  • About commutative, associative, and distributive properties of vector and matrix operations.
  • That some important vector and matrix operations are not commutative.
  • How to count arithmetic operations involved in various vector and matrix operations.
  • Which vector and matrix operations are most computationally demanding.
  • About matrix transposition, and how to work with transposed and symmetric matrices.
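
A short NumPy check of two of these facts, on illustrative matrices: matrix multiplication is not commutative, and A + Aᵀ is always symmetric.

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    print(np.allclose(A @ B, B @ A))        # False: products do not commute in general

    S = A + A.T                             # sum of a matrix and its transpose
    print(np.allclose(S, S.T))              # True: S is symmetric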

Section 3 (Linear Systems)

  • Basic facts about linear systems and their solutions.
  • Basic terminology, including equivalent, consistent, and inconsistent systems.
  • About the existence and uniqueness of solutions to linear systems.
  • How to transform a linear system to a matrix form.
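
A tiny example of the matrix form, assuming NumPy (per the course toolkit); the system itself is made up:

    import numpy as np

    # The system  x + 2y = 5,  3x + 4y = 6  in matrix form Ax = b:
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    b = np.array([5.0, 6.0])
    x = np.linalg.solve(A, b)               # unique solution, since A is nonsingular
    print(x, np.allclose(A @ x, b))         # consistent system with a unique solution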

Section 4 (Elementary Row Operations)

  • How to perform three types of elementary row operations with matrices.
  • That elementary row operations are reversible, and how to reverse them.
  • How to create the augmented matrix (A|b) of a linear system Ax = b.
  • How applying elementary row operations to an augmented matrix changes the underlying linear system.
  • That applying elementary row operations to an augmented matrix does not change the solution to the underlying linear system. 
  • How to solve linear systems using Gauss elimination.
  • That Gauss elimination has two phases: forward elimination and back substitution.
  • That Gauss elimination is reversible.
  • How to perform Gauss elimination efficiently.
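
A hand-rolled sketch of both phases on a 2×2 system (illustrative only; a library solver would normally be used):

    import numpy as np

    M = np.array([[2.0, 1.0,  5.0],         # augmented matrix (A|b) of the system
                  [4.0, 3.0, 11.0]])        # 2x + y = 5,  4x + 3y = 11
    M[1] = M[1] - 2.0 * M[0]                # row replacement R2 <- R2 - 2 R1 (forward elimination)
    y = M[1, 2] / M[1, 1]                   # back substitution, last equation first
    x = (M[0, 2] - M[0, 1] * y) / M[0, 0]
    print(x, y)                             # solution (2.0, 1.0)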

Section 5 (Echelon Forms)

  • What is an echelon form of a matrix and how to obtain it.
  • How to use the echelon form to infer existence and uniqueness of solution.
  • What are under- and over-determined systems.
  • How to determine pivot positions and pivot columns.
  • What are basic and free variables.
  • What is the reduced echelon form and how to obtain it.
  • About the uniqueness of the reduced echelon form.
  • How to express solution sets to linear systems in parametric vector form.
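
SymPy is not part of the course's stated toolkit (NumPy/SciPy), but its rref method is a convenient way to see reduced echelon forms, pivot columns, and free variables on a small example:

    from sympy import Matrix

    # Augmented matrix of an underdetermined system (2 equations, 3 unknowns):
    M = Matrix([[1, 2, 3,  4],
                [2, 4, 7, 10]])
    R, pivots = M.rref()                    # reduced echelon form, pivot column indices
    print(R)                                # pivots in columns 0 and 2
    print(pivots)                           # x1 and x3 are basic variables; x2 is free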

Unit 2

Section 6 (Linear Combinations)

  • What is a linear combination of vectors.
  • How to write a linear system as a vector equation and vice versa.
  • How to write a vector equation as a matrix equation Ax = b and vice versa.
  • What is a linear span, and how to check if a vector is in the linear span of other vectors.
  • The Solvability Theorem for linear systems.
  • That the linear system Ax = b has a solution for every right-hand side if and only if the matrix A has a pivot in every row.
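
One way to test membership in a span numerically, sketched with NumPy on made-up vectors: stack the vectors as columns of A and check whether Ax = b is solvable.

    import numpy as np

    v1 = np.array([1.0, 0.0, 1.0])
    v2 = np.array([0.0, 1.0, 1.0])
    b  = np.array([2.0, 3.0, 5.0])

    A = np.column_stack([v1, v2])           # b is in span{v1, v2} iff Ax = b is solvable
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(A @ x, b))            # True here: b = 2*v1 + 3*v2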

Section 7 (Linear Independence)

  • What it means for vectors to be linearly independent or dependent.
  • How to determine linear independence or dependence of vectors.
  • How is linear independence related to linear span.
  • How is linear independence related to the matrix equation Ax = 0.
  • About singular and nonsingular matrices.
  • About homogeneous and nonhomogeneous linear systems.
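
A numerical independence check, assuming NumPy; the vectors below are an illustrative dependent set:

    import numpy as np

    v1 = np.array([1.0, 2.0, 3.0])
    v2 = np.array([4.0, 5.0, 6.0])
    v3 = np.array([5.0, 7.0, 9.0])          # v3 = v1 + v2

    A = np.column_stack([v1, v2, v3])
    print(np.linalg.matrix_rank(A))         # 2 < 3 columns => linearly dependent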

Section 8 (Linear Transformations I)

  • About linear vector transformations of the form T(x) = Ax.
  • How to figure out the matrix of a linear vector transformation.
  • About rotations, scaling transformations, and shear transformations. 
  • That surjectivity of the transformation T(x) = Ax is equivalent to the existence of a solution to the linear system Ax = b for any right-hand side.
  • How to easily check if the transformation T(x) = Ax is onto (surjective).
  • That injectivity of the transformation T(x) = Ax is equivalent to the uniqueness of solution to the linear system Ax = b.
  • How to easily check if the transformation T(x) = Ax is one-to-one (injective). 
  • That the solution to the linear system Ax = b is unique if and only if the matrix A has a pivot in every column.
  • That bijectivity of the transformation T(x) = Ax is equivalent to the existence and uniqueness of solution to the linear system Ax = b for any right-hand side.
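
A small sketch of a rotation as T(x) = Ax, plus the rank test behind the pivot criteria (illustrative, with NumPy):

    import numpy as np

    theta = np.pi / 2                       # rotation by 90 degrees
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(A @ np.array([1.0, 0.0]))         # e1 is rotated to (0, 1)

    # For a square A, a pivot in every row and a pivot in every column both
    # amount to full rank, so T(x) = Ax is bijective exactly when:
    print(np.linalg.matrix_rank(A) == A.shape[0])   # True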

Section 9 (Linear Transformations II)

  • How to find a spanning set for the range of a linear transformation.
  • What is the kernel of a linear transformation.
  • How to find a spanning set for the kernel.
  • That the kernel determines injectivity of linear transformations.
  • About compositions of linear transformations.
  • How to obtain the matrix of a composite transformation.
  • About inverse matrices and inverse transformations.
  • How to obtain the matrix of an inverse transformation.
  • What can one do with inverse matrices.
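
Composition and inversion in matrix terms, sketched on made-up transformations:

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])              # a shear
    B = 2.0 * np.eye(2)                     # a scaling

    C = A @ B                               # matrix of the composition x -> A(Bx)
    x = np.array([1.0, 2.0])
    print(np.allclose(C @ x, A @ (B @ x)))  # True

    Ainv = np.linalg.inv(A)                 # matrix of the inverse transformation
    print(np.allclose(Ainv @ (A @ x), x))   # True: undoes the shear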

Section 10 (Special Matrices and Decompositions)

  • What are block matrices and how to work with them.
  • How to count arithmetic operations in Gauss elimination.
  • How to work with block diagonal, tridiagonal and diagonal matrices.
  • What are upper and lower triangular matrices.
  • What are sparse and dense matrices.
  • What are elementary matrices.
  • Gauss elimination in terms of elementary matrices.
  • Matrix inversion in terms of elementary matrices.
  • About LU decomposition of nonsingular matrices.
  • What is the definiteness of symmetric matrices.
  • How to check if a symmetric matrix is positive-definite, positive-semidefinite, negative-semidefinite, negative-definite, or indefinite.
  • About Cholesky decomposition of symmetric positive-definite (SPD) matrices.
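
A sketch of the LU and Cholesky decompositions and a definiteness check via eigenvalues, using SciPy/NumPy on an illustrative SPD matrix:

    import numpy as np
    from scipy.linalg import lu, cholesky

    A = np.array([[4.0, 2.0],
                  [2.0, 3.0]])              # symmetric positive-definite

    P, L, U = lu(A)                         # LU decomposition with row permutation P
    print(np.allclose(P @ L @ U, A))        # True

    C = cholesky(A, lower=True)             # Cholesky factor: A = C C^T
    print(np.allclose(C @ C.T, A))          # True

    print(np.all(np.linalg.eigvalsh(A) > 0))   # all eigenvalues positive => SPD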

Unit 3

Section 11 (Determinants I)

  • What is a permutation and how to calculate its sign.
  • What is a determinant, and the Leibniz formula for computing it.
  • Simplified formulas for 2×2 and 3×3 determinants.
  • How to calculate determinants using Laplace (cofactor) expansion.
  • How to calculate determinants using elementary row operations.
  • How to calculate determinants of diagonal, block diagonal and triangular matrices.
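
Two determinant calculations with NumPy, on made-up matrices:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    print(np.linalg.det(A))                 # 2x2 formula: 1*4 - 2*3 = -2

    T = np.array([[2.0, 5.0, 1.0],
                  [0.0, 3.0, 7.0],
                  [0.0, 0.0, 4.0]])         # upper triangular
    print(np.linalg.det(T))                 # product of diagonal entries = 24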

Section 12 (Determinants II)

  • About the determinants of elementary matrices and their inverses.
  • That a matrix A is invertible if and only if det(A) ≠ 0, and that det(A⁻¹) = 1/det(A).
  • That det(AB) = det(A)det(B) for any two n×n matrices A and B.
  • That in general det(A+B) ≠ det(A) + det(B).
  • That det(A) = det(Aᵀ), and what this means for column operations.
  • How to use Cramer’s rule to solve the linear system Ax = b.
  • About the adjugate (classical adjoint) matrix and how to use it to calculate the matrix inverse.
  • About the relations between det(A), the linear dependence of columns and rows of A, the properties of the linear transformation T(x) = Ax, and the existence and uniqueness of solutions to the linear system Ax = b.
  • How to use determinants to calculate area and volume.
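
A quick numerical confirmation of these determinant rules on random matrices (NumPy):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((3, 3))
    B = rng.random((3, 3))
    detA, detB = np.linalg.det(A), np.linalg.det(B)

    print(np.isclose(np.linalg.det(A @ B), detA * detB))           # True
    print(np.isclose(np.linalg.det(A + B), detA + detB))           # False in general
    print(np.isclose(np.linalg.det(A.T), detA))                    # True
    print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / detA)) # True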

Section 13 (Linear Spaces)

  • What is a linear space, and how it differs from an ordinary set.
  • How to check whether or not a set is a linear space.
  • That linear spaces may contain other types of elements besides vectors.
  • That a linear span is always a linear space.
  • How to check if a set of vectors generates a given linear space.
  • How to work with linear spaces of matrices.
  • How to work with polynomial spaces.
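
A sketch of working in a polynomial space by passing to coordinate vectors (illustrative polynomials, NumPy): is q(t) = 3 + t² in the span of 1 + t, 1 - t, and t²?

    import numpy as np

    # Represent a0 + a1*t + a2*t^2 by the coefficient vector (a0, a1, a2).
    P = np.column_stack([[1.0,  1.0, 0.0],   # 1 + t
                         [1.0, -1.0, 0.0],   # 1 - t
                         [0.0,  0.0, 1.0]])  # t^2
    q = np.array([3.0, 0.0, 1.0])            # 3 + t^2
    c = np.linalg.solve(P, q)                # q = 1.5*(1+t) + 1.5*(1-t) + 1*t^2
    print(c)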

Section 14 (Basis, Coordinates, Subspaces)

  • What is a basis, and that a linear space can have many bases.
  • How to find the coordinates of an element relative to a given basis.
  • How to determine the dimension of a linear space.
  • What is a subspace of a linear space, and how to recognize one.
  • About the union and intersection of subspaces.
  • How the change of basis in a linear space affects the coordinates of its elements.
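
Finding coordinates relative to a basis amounts to solving a linear system; a NumPy sketch with an illustrative basis of R²:

    import numpy as np

    b1 = np.array([1.0,  1.0])
    b2 = np.array([1.0, -1.0])              # a basis of R^2
    B = np.column_stack([b1, b2])

    x = np.array([3.0, 1.0])
    c = np.linalg.solve(B, x)               # coordinates of x relative to {b1, b2}
    print(c, np.allclose(c[0] * b1 + c[1] * b2, x))   # [2. 1.] True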

Section 15 (Null Space, Column Space, Rank)

  • What is the null space, how to check if a vector belongs to the null space.
  • How to determine the dimension of the null space and find its basis.
  • How is the null space related to the uniqueness of solution to the linear system Ax = b.
  • What is the column space, how to check if a vector belongs to the column space.
  • How to determine the dimension of the column space and find its basis.
  • How is the column space related to the existence of solution to the linear system Ax = b.
  • How is the null space related to the kernel of the linear transformation T(x) = Ax.
  • How is the column space related to the range of the linear transformation T(x) = Ax.
  • What is the rank of a matrix, and its implications for the existence and uniqueness of solution to the linear system Ax = b.
  • What does it mean for a matrix to have full rank or be rank-deficient.
  • The Rank Theorem.
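
A sketch with SciPy's null_space on a rank-deficient example, which also illustrates the Rank Theorem (rank + dim Nul = number of columns):

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])         # second row = 2 * first row

    print(np.linalg.matrix_rank(A))         # 1: rank-deficient
    N = null_space(A)                       # orthonormal basis of Nul(A)
    print(N.shape[1])                       # dim Nul(A) = 3 - 1 = 2
    print(np.allclose(A @ N, 0.0))          # True: columns of N solve Ax = 0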

Unit 4

Section 16 (Eigenproblems I)

  • What is an eigenproblem.
  • About important applications of eigenproblems in various areas of science and engineering.
  • How to verify if a given vector is an eigenvector, and if a given value is an eigenvalue. 
  • How to calculate eigenvalues and eigenvectors.
  • About the characteristic polynomial and characteristic equation.
  • How to handle matrices with repeated eigenvalues.
  • About algebraic and geometric multiplicity of eigenvalues.
  • How to determine algebraic and geometric multiplicity of eigenvalues.
  • How to find a basis of an eigenspace. 
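
Computing and verifying an eigenpair with NumPy, on an illustrative symmetric matrix:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    print(np.poly(A))                       # characteristic polynomial coefficients [1, -4, 3]

    vals, vecs = np.linalg.eig(A)           # eigenvalues 1 and 3
    for lam, v in zip(vals, vecs.T):        # eigenvectors are the columns of vecs
        print(np.allclose(A @ v, lam * v))  # True: A v = lambda v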

Section 17 (Eigenproblems II)

  • The geometrical meaning of eigenvalues and eigenvectors in R² and R³.
  • That a matrix is singular if and only if it has a zero eigenvalue.
  • That the null space of a matrix is the eigenspace corresponding to the zero eigenvalue.
  • That eigenvectors corresponding to different eigenvalues are linearly independent.
  • The Cayley–Hamilton theorem (CHT).
  • How to use the CHT to efficiently calculate matrix inverse.
  • How to use the CHT to efficiently calculate matrix powers and matrix exponential.
  • About similar matrices, diagonalizable matrices, and eigenvector basis.
  • How to use the eigenvector basis to efficiently calculate arbitrary matrix functions including matrix powers, the inverse matrix, matrix exponential, the square root of a matrix, etc.
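
A sketch of diagonalization-based matrix functions, f(A) = V f(D) V⁻¹, on a made-up diagonalizable matrix:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    vals, V = np.linalg.eig(A)              # A = V D V^{-1}
    Vinv = np.linalg.inv(V)

    A10 = V @ np.diag(vals**10) @ Vinv      # matrix power via the eigenvector basis
    print(np.allclose(A10, np.linalg.matrix_power(A, 10)))   # True

    sqrtA = V @ np.diag(np.sqrt(vals)) @ Vinv   # square root of a matrix
    print(np.allclose(sqrtA @ sqrtA, A))        # True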

Section 18 (Complex Linear Systems)

  • About complex numbers and basic complex arithmetic.
  • How to solve complex linear equations with real and complex coefficients.
  • How to solve complex linear systems with real and complex matrices.
  • How to perform complex elementary row operations.
  • How to determine the rank of a complex matrix. 
  • How to check if a complex matrix is nonsingular. 
  • How to use Cramer’s rule for complex matrices.
  • How to check if complex vectors are linearly independent.
  • How to invert complex matrices.
  • How to transform complex linear systems into real ones.
  • About the structure of complex roots of real-valued polynomials. 
  • About complex eigenvalues and eigenvectors of real matrices.
  • How to diagonalize matrices with complex eigenvalues. 
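
NumPy handles complex systems directly; a small sketch, including the conjugate-pair structure of complex eigenvalues of a real matrix:

    import numpy as np

    A = np.array([[1 + 1j, 2],
                  [3, 4 - 2j]])             # a complex matrix
    b = np.array([1j, 1])
    z = np.linalg.solve(A, b)               # complex linear system
    print(np.allclose(A @ z, b))            # True
    print(np.linalg.matrix_rank(A))         # 2: nonsingular

    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])             # real rotation by 90 degrees
    print(np.linalg.eigvals(R))             # conjugate pair: i and -i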

Section 19 (Normed Spaces, Inner Product Spaces)

  • Important properties of the Euclidean norm and dot product in Rⁿ.
  • The Cauchy-Schwarz inequality and triangle inequality.
  • How are norm and inner product defined and used in general linear spaces.
  • About important norms and inner products in spaces of matrices and polynomials.
  • How to calculate norms, distances and angles of matrices and polynomials.
  • That every inner product induces a norm, but not every norm induces an inner product.
  • About the parallelogram law.
  • About the norm and inner product of complex vectors in Cⁿ.
  • About the norm and inner product in general complex linear spaces.
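
A sketch of the Frobenius inner product and its induced norm on matrices, plus a Cauchy-Schwarz check (illustrative matrices):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.eye(2)

    ip = np.trace(A.T @ B)                  # <A, B> = trace(A^T B) = 5
    nA = np.linalg.norm(A, 'fro')           # induced norm: sqrt(<A, A>) = sqrt(30)
    nB = np.linalg.norm(B, 'fro')
    print(np.isclose(nA, np.sqrt(np.trace(A.T @ A))))   # True
    print(abs(ip) <= nA * nB)               # Cauchy-Schwarz inequality holds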

Section 20 (Orthogonality and Best Approximation)

  • About orthogonal complements and orthogonal subspaces.
  • How to find a basis in an orthogonal complement.
  • That Row(A) and Nul(A) of an m×n matrix A are orthogonal complements in Rⁿ.
  • About orthogonal sets and orthogonal bases.
  • About orthogonal decompositions and orthogonal projections on subspaces.
  • That orthogonal projection operators are idempotent.
  • How to calculate orthogonal decomposition with and without orthogonal basis.
  • How to calculate the best approximation of an element in a subspace.
  • How to calculate the distance of an element from a subspace.
  • The Gram-Schmidt process.
  • How to orthogonalize vectors, matrices, and polynomials.
  • About the Fourier series expansion of periodic functions.
  • How to obtain Legendre polynomials.
  • How to use Legendre polynomials for best polynomial approximation of functions. 
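
A Gram-Schmidt and best-approximation sketch in R³ (illustrative vectors): project y onto the plane W spanned by u1 and u2.

    import numpy as np

    u1 = np.array([1.0, 1.0, 0.0])
    u2 = np.array([1.0, 0.0, 1.0])
    y  = np.array([1.0, 2.0, 3.0])

    v1 = u1                                 # Gram-Schmidt: orthogonalize {u1, u2}
    v2 = u2 - (u2 @ v1) / (v1 @ v1) * v1

    yhat = (y @ v1) / (v1 @ v1) * v1 \
         + (y @ v2) / (v2 @ v2) * v2        # orthogonal projection = best approximation in W
    print(np.linalg.norm(y - yhat))         # distance of y from W
    print(np.isclose((y - yhat) @ v1, 0.0)) # residual is orthogonal to W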

Unit 5

Section 21 (Spectral Theorem)

  • About the basis and dimension of the complex vector space Cⁿ.
  • What are conjugate-transpose matrices and how to work with them.
  • About Hermitian, orthogonal, and unitary matrices.
  • That the eigenvalues of real symmetric and Hermitian matrices are real.
  • That the eigenvectors of real symmetric and Hermitian matrices can be used to create an orthogonal basis in Rⁿ and Cⁿ.
  • About orthogonal diagonalization of real symmetric matrices.
  • About unitary diagonalization of Hermitian matrices.
  • The Spectral Theorem for real symmetric and Hermitian matrices.
  • About the outer product of vectors.
  • How to perform spectral decomposition of real symmetric and Hermitian matrices.
  • How eigenvalues are related to definiteness of real symmetric and Hermitian matrices.
  • How to calculate eigenvalues of large matrices using NumPy (see the sketch below).
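
A spectral-decomposition sketch with NumPy's eigh (meant for symmetric/Hermitian matrices); the matrix is an illustrative example:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])              # real symmetric
    vals, Q = np.linalg.eigh(A)             # real eigenvalues, orthonormal eigenvectors

    print(vals)                             # [1. 3.]: real, as the Spectral Theorem guarantees
    print(np.allclose(Q.T @ Q, np.eye(2)))  # True: Q is orthogonal

    S = sum(lam * np.outer(q, q) for lam, q in zip(vals, Q.T))
    print(np.allclose(S, A))                # spectral decomposition: sum of outer products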

Section 22 (QR Factorization and Least-Squares Problems)

  • How to perform the QR factorization of rectangular and square matrices.
  • What are Least-Squares problems and how to solve them.
  • Properties of the normal equation, existence and uniqueness of solution.
  • How to fit lines, polynomials, and bivariate polynomials to data.
  • How to fit implicitly given curves to data.
  • How to solve Least-Squares problems via QR factorization (see the sketch below).
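
A line-fitting sketch, solving the same least-squares problem via QR and via lstsq (the data points are made up):

    import numpy as np

    t = np.array([0.0, 1.0, 2.0, 3.0])          # fit y = c0 + c1*t to data
    y = np.array([1.1, 1.9, 3.2, 3.9])
    A = np.column_stack([np.ones_like(t), t])   # design matrix

    Q, R = np.linalg.qr(A)                      # thin QR factorization
    c = np.linalg.solve(R, Q.T @ y)             # solve R c = Q^T y
    c2, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(c, np.allclose(c, c2))                # same intercept and slope both ways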

Section 23 (Singular Value Decomposition)

  • The basic idea of SVD.
  • About the shared eigenvalues of the matrices AᵀA and AAᵀ.
  • What are singular values and how to calculate them efficiently.
  • That the number of nonzero singular values equals the rank of the matrix.
  • How to calculate right and left singular vectors, and perform SVD.
  • The difference between full and compact SVD.
  • About the Moore-Penrose pseudoinverse and its basic properties.
  • What types of problems can be solved using the M-P pseudoinverse.
  • How to calculate the M-P pseudoinverse using SVD.
  • How to calculate the SVD of complex matrices.
  • How are singular values related to the Frobenius norm.
  • What is the spectral (operator) norm of a matrix.
  • How to calculate the error caused by truncating the SVD.
  • How to use SVD for rank estimation in large data sets.
  • How SVD is used in image processing.
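
An SVD sketch covering several of these points at once (NumPy, illustrative 3×2 matrix):

    import numpy as np

    A = np.array([[3.0, 0.0],
                  [0.0, 2.0],
                  [0.0, 0.0]])

    U, s, Vt = np.linalg.svd(A)              # full SVD: A = U diag(s) V^T
    print(s)                                 # singular values [3. 2.]
    print(np.count_nonzero(s > 1e-12))       # number of nonzero singular values = rank

    Ap = np.linalg.pinv(A)                   # Moore-Penrose pseudoinverse (computed via SVD)
    print(np.allclose(Ap @ A, np.eye(2)))    # full column rank, so A+ A = I

    print(np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2))))   # Frobenius norm
    print(np.isclose(np.linalg.norm(A, 2), s[0]))    # spectral norm = largest singular value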

Section 24 (Large Linear Systems)

  • The condition number and how to calculate it using singular values.
  • The condition number of real symmetric and Hermitian matrices. 
  • The role the condition number plays in the solution of linear systems.
  • Mantissa, exponent, and the representation (round-off) error.
  • Machine epsilon and finite computer arithmetic.
  • Instability of Gauss elimination.
  • Memory issues related to storing large matrices. 
  • COO, CSR, and CSC representations of sparse matrices.
  • Sparsity-preserving and sparsity-breaking matrix operations.
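
A sketch of the condition number, machine epsilon, and CSR storage (illustrative matrix; NumPy/SciPy):

    import numpy as np
    from scipy.sparse import csr_matrix

    A = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])            # nearly singular => ill-conditioned
    s = np.linalg.svd(A, compute_uv=False)
    print(s[0] / s[-1])                      # condition number = sigma_max / sigma_min
    print(np.isclose(np.linalg.cond(A), s[0] / s[-1]))   # same value from np.linalg.cond

    print(np.finfo(np.float64).eps)          # machine epsilon, about 2.2e-16

    S = csr_matrix(A)                        # CSR representation of a (here tiny) matrix
    print(S.nnz)                             # number of explicitly stored nonzeros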

Section 25 (Linear Algebra with Python)

  • Import NumPy and SciPy.
  • Define (large) vectors and matrices.
  • Perform standard vector and matrix operations.
  • Edit matrices and vectors using the Python for-loop.
  • Extract row and column vectors from matrices.
  • Use direct and iterative matrix solvers.
  • Determine the rank of a matrix.
  • Perform LU and Cholesky factorizations.
  • Calculate determinants, eigenvalues and eigenvectors.
  • Diagonalize matrices, calculate functions of matrices.
  • Perform QR factorization and orthogonalize sets of vectors.
  • Solve Least-Squares problems.
  • Perform spectral decomposition and SVD.
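
A short end-to-end sketch touching several of these tasks, on an SPD tridiagonal test matrix (the sizes and matrix are illustrative):

    import numpy as np
    from scipy.linalg import cholesky
    from scipy.sparse.linalg import cg

    n = 200                                  # SPD tridiagonal matrix (1D Laplacian)
    A = (np.diag(2.0 * np.ones(n))
         + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))
    b = np.ones(n)

    x_direct = np.linalg.solve(A, b)         # direct solver
    x_iter, info = cg(A, b)                  # iterative solver (conjugate gradients)
    print(info == 0)                         # 0 means the iteration converged
    print(np.linalg.norm(A @ x_iter - b) < 1e-3)   # small residual

    print(np.linalg.matrix_rank(A) == n)     # full rank
    L = cholesky(A, lower=True)              # Cholesky factorization of the SPD matrix
    print(np.allclose(L @ L.T, A))           # True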