# Linear Algebra Examples¶

This notebook just shows the mechanics of linear algebra calculations with Python. See Lecture 5 for motivation and understanding.

In [1]:

import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline

In [2]:

plt.style.use('ggplot')


## Exact solution of linear system of equations¶

\begin{align}
x + 2y &= 3 \\
3x + 4y &= 17
\end{align}

In [3]:

A = np.array([[1,2],[3,4]])
A

Out[3]:

array([[1, 2],
       [3, 4]])

In [4]:

b = np.array([3,17])
b

Out[4]:

array([ 3, 17])

In [5]:

x = la.solve(A, b)
x

Out[5]:

array([ 11.,  -4.])

In [6]:

np.allclose(A @ x, b)

Out[6]:

True

In [7]:

A1 = np.random.random((1000,1000))
b1 = np.random.random(1000)


### Using solve is faster and more numerically stable than matrix inversion¶

In [8]:

%timeit la.solve(A1, b1)

The slowest run took 5.24 times longer than the fastest. This could mean that an intermediate result is being cached
1 loops, best of 3: 90.3 ms per loop

In [9]:

%timeit la.inv(A1) @ b1

1 loops, best of 3: 140 ms per loop
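
The timings above only show the speed difference. To glimpse the stability difference, here is a minimal sketch using la.hilbert, which builds a notoriously ill-conditioned Hilbert matrix; the residual from solve is typically much smaller than the residual from explicit inversion:

H = la.hilbert(12)
bh = np.ones(12)
print(la.norm(H @ la.solve(H, bh) - bh))   # residual from solve
print(la.norm(H @ (la.inv(H) @ bh) - bh))  # residual from inversion, typically larger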


### Under the hood (Optional)¶

The solve function uses the LAPACK dgesv Fortran routine to do the actual work. Here is an example of how to do this directly with the LAPACK function. There is rarely any reason to use BLAS or LAPACK functions directly, because the linalg package provides more convenient functions that also perform error checking, but you can use Python to experiment with LAPACK or BLAS before using them in a language like C or Fortran.

In [10]:

import scipy.linalg.lapack as lapack

In [11]:

lu, piv, x, info = lapack.dgesv(A, b)
x

Out[11]:

array([ 11.,  -4.])
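
Note that dgesv also returns the LU factorization it computed (lu and piv above). As a minimal sketch, the lapack dgetrs routine can reuse that factorization to solve for a new right-hand side without refactoring A (bnew here is just an illustrative vector):

bnew = np.array([1.0, 0.0])
xnew, info = lapack.dgetrs(lu, piv, bnew)
xnew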


## Basic information about a matrix¶

In [12]:

C = np.array([[1, 2+3j], [3-2j, 4]])
C

Out[12]:

array([[ 1.+0.j,  2.+3.j],
       [ 3.-2.j,  4.+0.j]])

In [13]:

C.conjugate()

Out[13]:

array([[ 1.-0.j,  2.-3.j],
       [ 3.+2.j,  4.-0.j]])

In [14]:

def trace(M):
    return np.diag(M).sum()  # equivalent to np.trace(M)

In [15]:

trace(C)

Out[15]:

(5+0j)

In [16]:

np.allclose(trace(C), la.eigvals(C).sum())

Out[16]:

True

In [17]:

la.det(C)

Out[17]:

(-8-5j)
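
Similarly, the determinant equals the product of the eigenvalues, which is easy to check:

np.allclose(la.det(C), np.prod(la.eigvals(C)))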

In [18]:

np.linalg.matrix_rank(C)

Out[18]:

2

In [19]:

la.norm(C, None) # Frobenius (default)

Out[19]:

6.5574385243020004

In [20]:

la.norm(C, 2) # largest singular value

Out[20]:

6.3890280236012158

In [21]:

la.norm(C, -2) # smallest singular value

Out[21]:

1.4765909770949921

In [22]:

la.svdvals(C)

Out[22]:

array([ 6.38902802,  1.47659098])
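
The 2-norm and -2-norm above are exactly the extreme singular values, which can be checked directly:

sv = la.svdvals(C)
np.allclose((la.norm(C, 2), la.norm(C, -2)), (sv[0], sv[-1]))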


## Least-squares solution¶

In [23]:

la.solve(A, b)

Out[23]:

array([ 11.,  -4.])

In [24]:

x, resid, rank, s = la.lstsq(A, b)
x

Out[24]:

array([ 11.,  -4.])

In [25]:

A1 = np.array([[1,2],[2,4]])
A1

Out[25]:

array([[1, 2],
       [2, 4]])

In [26]:

b1 = np.array([3, 17])
b1

Out[26]:

array([ 3, 17])

In [27]:

try:
    la.solve(A1, b1)
except la.LinAlgError as e:
    print(e)

singular matrix

In [28]:

x, resid, rank, s = la.lstsq(A1, b1)
x  # the minimum-norm least-squares solution

Out[28]:

array([ 1.48,  2.96])

In [29]:

A2 = np.random.random((10,3))
b2 = np.random.random(10)

In [30]:

try:
    la.solve(A2, b2)
except ValueError as e:
    print(e)

expected square matrix

In [31]:

x, resid, rank, s = la.lstsq(A2, b2)
x

Out[31]:

array([ 0.4036226 ,  0.38604513,  0.40359296])


### Normal equations¶

One way to solve the least squares problem $X\beta = y$ for $\beta$ is by using the formula $\beta = (X^TX)^{-1}X^Ty$, as you may have learnt in statistical theory classes (or can derive yourself with a bit of calculus). This is implemented below.

Note: This is not how the la.lstsq function solves least squares problems, since forming $X^TX$ explicitly can be inefficient (and numerically ill-conditioned) for large matrices.

In [33]:

def least_squares(X, y):
    return la.solve(X.T @ X, X.T @ y)

In [34]:

least_squares(A2, b2)

Out[34]:

array([ 0.4036226 ,  0.38604513,  0.40359296])
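
Since $X^TX$ is symmetric positive definite when X has full column rank, a common refinement of the normal equations is to solve them with a Cholesky factorization. A minimal sketch using la.cho_factor and la.cho_solve:

def least_squares_chol(X, y):
    c_and_lower = la.cho_factor(X.T @ X)
    return la.cho_solve(c_and_lower, X.T @ y)

least_squares_chol(A2, b2)  # should agree with least_squares(A2, b2)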


## Matrix Decompositions¶

In [35]:

A = np.array([[1,0.6],[0.6,4]])
A

Out[35]:

array([[ 1. ,  0.6],
       [ 0.6,  4. ]])


### LU¶

In [36]:

p, l, u = la.lu(A)

In [37]:

p

Out[37]:

array([[ 1.,  0.],
       [ 0.,  1.]])

In [38]:

l

Out[38]:

array([[ 1. ,  0. ],
       [ 0.6,  1. ]])

In [39]:

u

Out[39]:

array([[ 1.  ,  0.6 ],
       [ 0.  ,  3.64]])

In [40]:

np.allclose(p@l@u, A)

Out[40]:

True
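
When the same matrix must be solved against many right-hand sides, it is common to factor once and reuse the factorization. A minimal sketch with la.lu_factor and la.lu_solve:

lu2, piv2 = la.lu_factor(A)
x = la.lu_solve((lu2, piv2), np.array([1.0, 2.0]))
np.allclose(A @ x, np.array([1.0, 2.0]))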


### Cholesky¶

In [41]:

U = la.cholesky(A)
U

Out[41]:

array([[ 1.       ,  0.6      ],
       [ 0.       ,  1.9078784]])

In [42]:

np.allclose(U.T @ U, A)

Out[42]:

True

In [43]:

# If working with complex matrices, use the conjugate transpose
np.allclose(U.T.conj() @ U, A)

Out[43]:

True
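
Cholesky only applies to symmetric (or Hermitian) positive definite matrices; as a sketch, la.cholesky raises LinAlgError otherwise (the matrix below has eigenvalues 3 and -1):

try:
    la.cholesky(np.array([[1.0, 2.0], [2.0, 1.0]]))
except la.LinAlgError as e:
    print(e)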


### QR¶

In [44]:

Q, R = la.qr(A)

In [45]:

Q

Out[45]:

array([[-0.85749293, -0.51449576],
       [-0.51449576,  0.85749293]])

In [46]:

np.allclose((la.norm(Q[:,0]), la.norm(Q[:,1])), (1,1))

Out[46]:

True

In [47]:

np.allclose(Q@R, A)

Out[47]:

True
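
QR also gives a numerically stable route to least squares: with $A = QR$, solve the triangular system $Rx = Q^Tb$. A minimal sketch reusing the A2, b2 and least_squares defined earlier:

Q2, R2 = la.qr(A2, mode='economic')
x = la.solve_triangular(R2, Q2.T @ b2)
np.allclose(x, least_squares(A2, b2))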


### Spectral¶

In [48]:

u, v = la.eig(A)  # eigenvalues in u, eigenvectors as columns of v

In [49]:

u

Out[49]:

array([ 0.88445056+0.j,  4.11554944+0.j])

In [50]:

v

Out[50]:

array([[-0.98195639, -0.18910752],
       [ 0.18910752, -0.98195639]])

In [51]:

np.allclose((la.norm(v[:,0]), la.norm(v[:,1])), (1,1))

Out[51]:

True

In [52]:

np.allclose(v @ np.diag(u) @ v.T, A)  # works because A is symmetric, so v is orthogonal and v.T is its inverse

Out[52]:

True


#### Inverting A¶

In [53]:

np.allclose(v @ np.diag(1/u) @ v.T, la.inv(A))

Out[53]:

True


#### Powers of A¶

In [54]:

np.allclose(v @ np.diag(u**5) @ v.T, np.linalg.matrix_power(A, 5))

Out[54]:

True
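
The same trick extends to other matrix functions built from the eigenvalues; for example, a matrix square root, which can be compared against la.sqrtm:

np.allclose(v @ np.diag(np.sqrt(u)) @ v.T, la.sqrtm(A))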


### SVD¶

In [55]:

U, s, V = la.svd(A)  # note: la.svd returns the conjugate transpose of V (often written Vh)

In [56]:

U

Out[56]:

array([[ 0.18910752,  0.98195639],
       [ 0.98195639, -0.18910752]])

In [57]:

np.allclose((la.norm(U[:,0]), la.norm(U[:,1])), (1,1))

Out[57]:

True

In [58]:

s

Out[58]:

array([ 4.11554944,  0.88445056])

In [59]:

V

Out[59]:

array([[ 0.18910752,  0.98195639],
       [ 0.98195639, -0.18910752]])

In [60]:

np.allclose((la.norm(V[:,0]), la.norm(V[:,1])), (1,1))

Out[60]:

True

In [61]:

np.allclose(U @ np.diag(s) @ V, A)

Out[61]:

True
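
One standard use of the SVD is low-rank approximation: keeping only the top singular values gives the best approximation in the 2-norm (the Eckart-Young theorem). A minimal sketch keeping just the first singular value:

A_rank1 = s[0] * np.outer(U[:, 0], V[0, :])
np.allclose(la.norm(A - A_rank1, 2), s[1])  # error equals the discarded singular value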
