Matrices

[1]:
%matplotlib inline
[2]:
import numpy as np
import matplotlib.pyplot as plt


Example 1: Linear equations

Consider a set of \(m\) linear equations in \(n\) unknowns:

\begin{align*}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1\\
\vdots \qquad & \;\;\vdots\\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m
\end{align*}

We can let

\begin{align*} A=\left[\begin{matrix}a_{11}&\cdots&a_{1n}\\ \vdots & &\vdots\\ a_{m1}&\cdots&a_{mn}\end{matrix}\right] \end{align*} \begin{align*} x = \left[\begin{matrix}x_1\\ \vdots\\ x_n\end{matrix}\right] & \;\;\;\;\textrm{ and } & b = \left[\begin{matrix}b_1\\ \vdots\\ b_m\end{matrix}\right] \end{align*}

and re-write the system:

\[Ax = b\]
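As a small, made-up illustration (the numbers are arbitrary), the two-equation system \(2x_1 + 3x_2 = 8\), \(x_1 - x_2 = -1\) in matrix form, together with a check that a candidate solution satisfies it:

A = np.array([[2., 3.],
              [1., -1.]])   # coefficient matrix
b = np.array([8., -1.])     # right-hand side
x = np.array([1., 2.])      # candidate solution
np.allclose(A @ x, b)       # True: x solves the system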

Example 2: Covariance matrix

Suppose we have \(n\) observations of \(p\) variables, collected in an \(n \times p\) matrix. The covariance matrix is a matrix with special, useful properties: it is symmetric and positive semi-definite.

The sample covariance matrix is given by the element-wise formula

\[\mathrm{cov}(X_i, X_j) = \frac{1}{n-1}\sum_{k=1}^{n} (x_{ki} - \bar{x}_i)(x_{kj} - \bar{x}_j)\]

[3]:
n, p = 10, 3
x = np.random.random((n, p))
[4]:
x
[4]:
array([[0.05876721, 0.27802967, 0.35410852],
       [0.73565888, 0.40077871, 0.10766922],
       [0.23761851, 0.87978327, 0.37013641],
       [0.39821508, 0.71982028, 0.16116735],
       [0.8039294 , 0.16688121, 0.62088514],
       [0.01694924, 0.2128597 , 0.10361269],
       [0.27363723, 0.68670227, 0.81350046],
       [0.08434413, 0.89748654, 0.62903273],
       [0.9007611 , 0.87730537, 0.59816395],
       [0.30074854, 0.67312175, 0.90986231]])
[5]:
np.cov(x, rowvar=False)
[5]:
array([[ 0.1042219 , -0.00072016,  0.00692216],
       [-0.00072016,  0.08304724,  0.02946459],
       [ 0.00692216,  0.02946459,  0.08446201]])
[6]:
v = x - x.mean(axis=0)
v.T @ v
[6]:
array([[ 0.93799706, -0.00648144,  0.06229945],
       [-0.00648144,  0.74742519,  0.26518131],
       [ 0.06229945,  0.26518131,  0.76015807]])
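Note that the inner product of the centered data with itself is \((n-1)\) times the sample covariance matrix; dividing by \(n-1\) recovers the np.cov result above:

np.allclose(v.T @ v / (n - 1), np.cov(x, rowvar=False))   # True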

Example 3: Adjacency matrix for graph

You can represent a graph as an adjacency matrix. This is an \(n \times n\) matrix \(A\) for a graph with \(n\) nodes, where a 1 at \(A(i, j)\) indicates that there is an edge between node \(i\) and node \(j\). The adjacency matrix is typically sparse, i.e. most entries are 0 (no edge), and sparse matrix representations are useful for efficient calculations.

[7]:
import networkx as nx
[8]:
g = nx.random_lobster(50,0.7,0.7)
[9]:
nx.draw_kamada_kawai(g, node_size=200, alpha=0.5)
[Figure: Kamada-Kawai layout of the random lobster graph]
[10]:
m = nx.adjacency_matrix(g)
[11]:
m
[11]:
<73x73 sparse matrix of type '<class 'numpy.longlong'>'
        with 144 stored elements in Compressed Sparse Row format>
[12]:
m.todense()
[12]:
matrix([[0, 1, 0, ..., 0, 0, 0],
        [1, 0, 1, ..., 0, 0, 0],
        [0, 1, 0, ..., 0, 0, 0],
        ...,
        [0, 0, 0, ..., 0, 0, 0],
        [0, 0, 0, ..., 0, 0, 1],
        [0, 0, 0, ..., 0, 1, 0]], dtype=int64)

Matrix representation in Python

We simply use ndarrays with two dimensions to represent matrices. There is an np.matrix class, but it is rarely used (and numpy itself now discourages it), because most numpy creation functions return ndarrays, and confusing behavior can result when the two types are mixed.

[13]:
M1 = np.random.random((4,4))
M1
[13]:
array([[0.13466606, 0.31871206, 0.18290284, 0.87409045],
       [0.03222064, 0.17796886, 0.71342889, 0.44179838],
       [0.3030862 , 0.46394261, 0.10391697, 0.65641948],
       [0.8470372 , 0.47871508, 0.22701056, 0.79364801]])
[14]:
M2 = np.matrix(M1)
M2
[14]:
matrix([[0.13466606, 0.31871206, 0.18290284, 0.87409045],
        [0.03222064, 0.17796886, 0.71342889, 0.44179838],
        [0.3030862 , 0.46394261, 0.10391697, 0.65641948],
        [0.8470372 , 0.47871508, 0.22701056, 0.79364801]])

Matrix multiplication

[15]:
M1 @ M1
[15]:
array([[0.82422651, 0.60293722, 0.46944367, 1.07229792],
       [0.60052341, 0.58442762, 0.30729164, 0.92573113],
       [0.64327147, 0.54161397, 0.54623827, 1.05907298],
       [0.87054483, 0.84040848, 0.7002114 , 1.73077398]])
[16]:
M2 * M2
[16]:
matrix([[0.82422651, 0.60293722, 0.46944367, 1.07229792],
        [0.60052341, 0.58442762, 0.30729164, 0.92573113],
        [0.64327147, 0.54161397, 0.54623827, 1.05907298],
        [0.87054483, 0.84040848, 0.7002114 , 1.73077398]])

Transposition

[17]:
M1.T
[17]:
array([[0.13466606, 0.03222064, 0.3030862 , 0.8470372 ],
       [0.31871206, 0.17796886, 0.46394261, 0.47871508],
       [0.18290284, 0.71342889, 0.10391697, 0.22701056],
       [0.87409045, 0.44179838, 0.65641948, 0.79364801]])
[18]:
M2.T
[18]:
matrix([[0.13466606, 0.03222064, 0.3030862 , 0.8470372 ],
        [0.31871206, 0.17796886, 0.46394261, 0.47871508],
        [0.18290284, 0.71342889, 0.10391697, 0.22701056],
        [0.87409045, 0.44179838, 0.65641948, 0.79364801]])

Inverse

[19]:
np.linalg.inv(M1)
[19]:
array([[-0.37507535, -0.25444397, -1.48136379,  1.77995629],
       [-3.11815776,  0.46447657,  5.74842653, -1.57882611],
       [-0.75082365,  1.58303768, -0.24176332,  0.14565961],
       [ 2.49588835, -0.46140694, -1.81718518,  0.27096699]])
[20]:
M2.I
[20]:
matrix([[-0.37507535, -0.25444397, -1.48136379,  1.77995629],
        [-3.11815776,  0.46447657,  5.74842653, -1.57882611],
        [-0.75082365,  1.58303768, -0.24176332,  0.14565961],
        [ 2.49588835, -0.46140694, -1.81718518,  0.27096699]])

Properties of a matrix

[21]:
M = np.arange(16).reshape((4,4))
[22]:
M
[22]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]])
[23]:
M.size
[23]:
16
[24]:
M.shape
[24]:
(4, 4)

Just as with vectors, we can measure the size or norm of a matrix. There are many possible norms.

The following norms can be calculated using np.linalg.norm

=====  ============================  ==========================
ord    norm for matrices             norm for vectors
=====  ============================  ==========================
None   Frobenius norm                2-norm
'fro'  Frobenius norm                --
'nuc'  nuclear norm                  --
inf    max(sum(abs(x), axis=1))      max(abs(x))
-inf   min(sum(abs(x), axis=1))      min(abs(x))
0      --                            sum(x != 0)
1      max(sum(abs(x), axis=0))      as below
-1     min(sum(abs(x), axis=0))      as below
2      2-norm (largest sing. value)  as below
-2     smallest singular value       as below
other  --                            sum(abs(x)**ord)**(1./ord)
=====  ============================  ==========================
[25]:
np.linalg.norm(M)
[25]:
35.21363372331802
[26]:
np.sqrt(np.sum(M**2))
[26]:
35.21363372331802
[27]:
np.linalg.norm(M, ord=2)
[27]:
35.13996365902469
[28]:
np.linalg.svd(M)[1][0]
[28]:
35.13996365902469

The trace of a matrix \(A\) is the sum of its diagonal elements. It is important for a couple of reasons:

  • It is an invariant of a matrix under change of basis (see the numerical check below)

  • It defines a matrix norm

[29]:
M.trace()
[29]:
30
[30]:
M.diagonal().sum()
[30]:
30
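A quick numerical check of the basis-invariance claim: conjugating \(M\) by a random (almost surely invertible) change-of-basis matrix \(P\) leaves the trace unchanged, up to floating-point error.

P = np.random.random((4, 4))             # random change-of-basis matrix (invertible with probability 1)
M_similar = np.linalg.inv(P) @ M @ P     # M expressed in the new basis
np.allclose(np.trace(M), np.trace(M_similar))   # True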

The determinant of a matrix is defined as an alternating sum, over all permutations, of products of matrix elements. Let’s not dwell on that though. It is important to know that the determinant of a \(2\times 2\) matrix is

\[\begin{split}\left|\begin{matrix}a_{11} & a_{12}\\a_{21} & a_{22}\end{matrix}\right| = a_{11}a_{22} - a_{12}a_{21}\end{split}\]

This may be extended to an \(n \times n\) matrix by minor expansion. I will leave that for you to google.

What is most important about the determinant:

  • Like the trace, it is also invariant under change of basis

  • An \(n\times n\) matrix \(A\) is invertible \(\iff\) det\((A)\neq 0\)

  • The rows(columns) of an \(n\times n\) matrix \(A\) are linearly independent \(\iff\) det\((A)\neq 0\)

Geometrically, the determinant is the (signed) volume of the parallelepiped spanned by the column vectors of the matrix.
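A quick check of the \(2\times 2\) formula against np.linalg.det, with arbitrarily chosen entries:

a = np.array([[1., 2.],
              [3., 4.]])
a[0, 0]*a[1, 1] - a[0, 1]*a[1, 0], np.linalg.det(a)   # both give -2.0 (up to floating-point error)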

[31]:
np.linalg.det(M)
[31]:
0.0

Matrix operations

[32]:
A = np.arange(12).reshape((3,4))
B = np.arange(12).reshape((4,3))

Addition

[33]:
A + A
[33]:
array([[ 0,  2,  4,  6],
       [ 8, 10, 12, 14],
       [16, 18, 20, 22]])

Scalar multiplication

[34]:
2 * A
[34]:
array([[ 0,  2,  4,  6],
       [ 8, 10, 12, 14],
       [16, 18, 20, 22]])

Matrix multiplication

[35]:
A @ B
[35]:
array([[ 42,  48,  54],
       [114, 136, 158],
       [186, 224, 262]])
[36]:
A.T @ A
[36]:
array([[ 80,  92, 104, 116],
       [ 92, 107, 122, 137],
       [104, 122, 140, 158],
       [116, 137, 158, 179]])
[37]:
A @ A.T
[37]:
array([[ 14,  38,  62],
       [ 38, 126, 214],
       [ 62, 214, 366]])

First, let’s review some linear algebra topics:

Linear Independence

A collection of vectors \(v_1, \ldots, v_n\) is said to be linearly independent if

\[c_1v_1 + \cdots + c_nv_n = 0 \iff c_1=\cdots=c_n=0\]

In other words, any linear combination of the vectors that results in a zero vector is trivial.

Another interpretation of this is that no vector in the set may be expressed as a linear combination of the others. In this sense, linear independence is an expression of non-redundancy in a set of vectors.

Fact: Any set of \(n\) linearly independent vectors in \(\mathbb{R}^n\) spans \(\mathbb{R}^n\), i.e. the collection of all possible linear combinations of them is \(\mathbb{R}^n\). Such a set of vectors is said to be a basis of \(\mathbb{R}^n\). Another term for basis is minimal spanning set.
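In practice, one way to test linear independence is to stack the vectors as columns of a matrix and check whether that matrix has full rank (rank is discussed in the next section); a minimal sketch with made-up vectors:

V = np.column_stack([[1., 0., 0.],
                     [0., 1., 0.],
                     [1., 1., 0.]])   # each list is one vector, stacked as a column
np.linalg.matrix_rank(V)              # 2 < 3: the third vector is the sum of the first two, so the set is dependent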

Column space, Row space, Rank and Kernel

Let \(A\) be an \(m\times n\) matrix. We can view the columns of \(A\) as vectors, say \(a_1,...,a_n\). The space of all linear combinations of the \(a_i\) is the column space of the matrix \(A\). Now, if \(a_1,...,a_n\) are linearly independent, then the column space is of dimension \(n\). Otherwise, the dimension of the column space is the size of the maximal set of linearly independent \(a_i\). The row space is exactly analogous, but the vectors are the rows of \(A\).

The rank of a matrix \(A\) is the dimension of its column space, which is always equal to the dimension of its row space. Rank can be thought of as a measure of non-degeneracy of a system of linear equations, in that it is the dimension of the image of the linear transformation determined by \(A\).

The kernel of a matrix \(A\) is the space of vectors mapped to zero under the linear transformation that \(A\) represents. The dimension of the kernel is called the nullity.

Rank-nullity (index) theorem: For an \(m\times n\) matrix \(A\),

rank(\(A\)) + nullity(\(A\)) = \(n\).
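We can verify this numerically for the \(4\times 4\) matrix M defined earlier (its rows are arithmetic progressions, so its rank is only 2):

rank = np.linalg.matrix_rank(M)   # 2 for M = np.arange(16).reshape((4,4))
nullity = M.shape[1] - rank       # rank-nullity: nullity = n - rank
rank, nullity                     # (2, 2), and 2 + 2 = 4 = n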

Solving linear equations (Matrix-vector multiplication)

We return to solving the system of equations

\[Ax = b\]

Expanding,

\begin{align*} \left[\begin{matrix}a_{11}&\cdots&a_{1n}\\ \vdots & &\vdots\\ a_{m1}&\cdots&a_{mn}\end{matrix}\right] \left[\begin{matrix}x_1\\ \vdots\\ x_n\end{matrix}\right] = \left[\begin{matrix}b_1\\ \vdots\\ b_m\end{matrix}\right] \end{align*}

which can be rewritten as a weighted sum of the column vectors of \(A\)

\[\begin{split}x_1 \left[\begin{matrix}a_{11} \\ \vdots \\ a_{m1}\end{matrix}\right] + \cdots + x_n \left[\begin{matrix}a_{1n} \\ \vdots \\ a_{mn}\end{matrix}\right] = \left[\begin{matrix}b_1\\ \vdots\\ b_m\end{matrix}\right]\end{split}\]

So solving the system of equations means finding the appropriate weights \(x\) such that the sum of weighted column vectors of \(A\) is the same as \(b\). Put another way, we are trying to express \(b\) as a linear combination of the column vectors of \(A\).
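This column view is easy to check numerically: \(Ax\) equals the sum of the columns of \(A\) weighted by the entries of \(x\). A small sketch with arbitrary values (the names C and w are just for illustration):

C = np.random.random((3, 2))                       # a small example matrix
w = np.array([2.0, -1.0])                          # the weights x
np.allclose(C @ w, w[0]*C[:, 0] + w[1]*C[:, 1])    # True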

Note that we have two different useful perspectives on a matrix - as a collection of column (or row) vectors that span a vector space, and as a function that transforms or maps a vector into another vector.

[38]:
A = np.random.random((3,3))
b = np.random.random((3,1))
[39]:
A
[39]:
array([[0.18326492, 0.88715452, 0.29795212],
       [0.28180974, 0.2799786 , 0.50142232],
       [0.96211145, 0.09523118, 0.91433839]])
[40]:
b
[40]:
array([[0.54495548],
       [0.00754695],
       [0.75174215]])

Using the solve function.

[41]:
np.linalg.solve(A, b)
[41]:
array([[ 2.29453662],
       [ 0.69950611],
       [-1.66510848]])

Using the projection onto the column space of A.

[42]:
np.linalg.inv(A.T @ A) @ A.T @ b
[42]:
array([[ 2.29453662],
       [ 0.69950611],
       [-1.66510848]])

When \(m<n\), the linear system is said to be underdetermined, i.e. there are fewer equations than unknowns. In this case, there are either no solutions (if the system is inconsistent) or infinitely many solutions. A unique solution is not possible.

When \(m>n\), the system may be overdetermined. In other words, there are more equations than unknowns. The system could be inconsistent, or some of the equations could be redundant.
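For an overdetermined system an exact solution generally does not exist, and a common approach is to ask for the least-squares solution instead, e.g. with np.linalg.lstsq (a sketch with random data):

A_over = np.random.random((5, 3))   # m > n: more equations than unknowns
b_over = np.random.random(5)
x_ls, res, rank, sv = np.linalg.lstsq(A_over, b_over, rcond=None)
x_ls                                # minimizes the residual norm ||A_over @ x - b_over||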

There are many techniques to solve and analyze linear systems. Our goal is to understand the theory behind many of the built-in functions, and how they efficiently solve systems of equations.

Special Matrices

Some matrices have interesting properties that allow us either simplify the underlying linear system or to understand more about it.

Square Matrices

Square matrices have the same number of rows and columns (usually denoted \(n\)). We refer to an arbitrary square matrix as an \(n\times n\) matrix, or as a ‘square matrix of dimension \(n\)’. If an \(n\times n\) matrix \(A\) has full rank (i.e. it has rank \(n\)), then \(A\) is invertible and its inverse is unique. This is the situation that leads to a unique solution of a linear system.

Diagonal Matrices

A diagonal matrix is a matrix with all entries off the diagonal equal to zero. Strictly speaking, such a matrix should be square, but we can also consider rectangular \(m\times n\) matrices to be diagonal if all entries \(a_{ij}\) with \(i\neq j\) are zero.
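np.diag is a convenient way to build a diagonal matrix from a vector, and, applied to a matrix, to extract its diagonal:

D = np.diag([1., 2., 3.])   # 3x3 diagonal matrix with 1, 2, 3 on the diagonal
np.diag(D)                  # array([1., 2., 3.]): the diagonal of D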

Symmetric and Skew Symmetric

A matrix \(A\) is (skew) symmetric iff \(a_{ij} = (-)a_{ji}\).

Equivalently, \(A\) is (skew) symmetric iff

\[A = (-)A^T\]
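Any square matrix can be split into a symmetric and a skew-symmetric part; a quick sketch using the random matrix M1 from above:

S = (M1 + M1.T) / 2      # symmetric part: S.T == S
K = (M1 - M1.T) / 2      # skew-symmetric part: K.T == -K
np.allclose(M1, S + K)   # True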

Upper and Lower Triangular

A matrix \(A\) is (upper|lower) triangular if \(a_{ij} = 0\) for all \(i (>|<) j\).
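np.triu and np.tril extract the upper and lower triangular parts of a matrix, zeroing out the rest:

np.triu(M1)   # upper triangular part of M1 (entries with i > j set to zero)
np.tril(M1)   # lower triangular part of M1 (entries with i < j set to zero)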

Orthogonal and Orthonormal

A matrix \(A\) is orthogonal iff

\[A A^T = I\]

In other words, \(A\) is orthogonal iff

\[A^T=A^{-1}\]

Fact: The rows and columns of an orthogonal matrix are an orthonormal set of vectors.
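An easy way to obtain an orthogonal matrix is to take the Q factor of a QR decomposition of a random matrix; we can then check both defining properties numerically:

Q, _ = np.linalg.qr(np.random.random((4, 4)))
np.allclose(Q @ Q.T, np.eye(4)), np.allclose(Q.T @ Q, np.eye(4))   # (True, True)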

Positive Definite

Positive definite matrices are an important class of matrices with very desirable properties. A square matrix \(A\) is positive definite if

\[u^TA u > 0\]

for any non-zero n-dimensional vector \(u\).

A symmetric, positive-definite matrix \(A\) is a positive-definite matrix such that

\[A = A^T\]

IMPORTANT:

  • Symmetric, positive-definite matrices have ‘square-roots’ (in a sense; see the Cholesky sketch after this list)

  • Any symmetric, positive-definite matrix is diagonalizable!!!

  • Covariance matrices are symmetric and positive semi-definite (and positive-definite when no variable is an exact linear combination of the others)
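A sketch of the ‘square-root’ property: for a symmetric positive-definite matrix, np.linalg.cholesky returns a lower-triangular factor L with \(L L^T = A\). Here the matrix is constructed to be positive-definite for illustration:

B = np.random.random((3, 3))
A_spd = B @ B.T + 3*np.eye(3)              # symmetric positive-definite by construction
np.all(np.linalg.eigvalsh(A_spd) > 0)      # True: all eigenvalues are positive
L = np.linalg.cholesky(A_spd)              # lower-triangular ‘square root’
np.allclose(L @ L.T, A_spd)                # True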

Now that we have the basics down, we can move on to numerical methods for solving systems - aka matrix decompositions.