Linear Algebra Examples¶
This just shows the mechanics of linear algebra calculations with Python. See Lecture 5 for motivation and understanding.
[1]:
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
%matplotlib inline
[2]:
plt.style.use('ggplot')
Resources¶
Tutorial for scipy.linalg: http://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html
Exact solution of linear system of equations¶
[3]:
A = np.array([[1,2],[3,4]])
A
[3]:
array([[1, 2],
       [3, 4]])
[4]:
b = np.array([3,17])
b
[4]:
array([ 3, 17])
[5]:
x = la.solve(A, b)
x
[5]:
array([11., -4.])
[6]:
np.allclose(A @ x, b)
[6]:
True
[7]:
A1 = np.random.random((1000,1000))
b1 = np.random.random(1000)
Using solve is faster and numerically more stable than using matrix inversion¶
[8]:
%timeit la.solve(A1, b1)
806 ms ± 9.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
[9]:
%timeit la.inv(A1) @ b1
870 ms ± 9.07 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
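The speed difference above is modest, but the stability difference matters more. A minimal sketch of the stability point, using a Hilbert matrix (via la.hilbert) as a deliberately ill-conditioned example; the size 12 and the right-hand side of ones are arbitrary choices for illustration:
[ ]:
H = la.hilbert(12)                  # classic ill-conditioned test matrix
bh = np.ones(12)
x_solve = la.solve(H, bh)           # scipy may warn about ill-conditioning here
x_inv = la.inv(H) @ bh
# compare residual norms; solve typically gives the smaller residual
la.norm(H @ x_solve - bh), la.norm(H @ x_inv - bh)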
Under the hood (Optional)¶
The solve function uses the dgesv Fortran function to do the actual work. Here is an example of how to do this directly with the lapack function. There is rarely any reason to use blas or lapack functions directly because the linalg package provides more convenient functions that also perform error checking, but you can use Python to experiment with lapack or blas before using them in a language like C or Fortran.
[10]:
import scipy.linalg.lapack as lapack
[11]:
lu, piv, x, info = lapack.dgesv(A, b)
x
[11]:
array([11., -4.])
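The lu and piv values returned by dgesv are the pieces of the LU factorization, so they can be reused to solve against a new right-hand side without refactoring A. A minimal sketch using the lapack dgetrs routine; the new right-hand side is an arbitrary example:
[ ]:
b_new = np.array([1.0, 1.0])        # hypothetical new right-hand side
x_new, info = lapack.dgetrs(lu, piv, b_new)
x_new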
Basic information about a matrix¶
[12]:
C = np.array([[1, 2+3j], [3-2j, 4]])
C
[12]:
array([[1.+0.j, 2.+3.j],
       [3.-2.j, 4.+0.j]])
[13]:
C.conjugate()
[13]:
array([[1.-0.j, 2.-3.j],
       [3.+2.j, 4.-0.j]])
[14]:
def trace(M):
    # sum of the diagonal elements
    return np.diag(M).sum()
[15]:
trace(C)
[15]:
(5+0j)
[16]:
np.allclose(trace(C), la.eigvals(C).sum())
[16]:
True
[17]:
la.det(C)
[17]:
(-8-5j)
[18]:
np.linalg.matrix_rank(C)
[18]:
2
[19]:
la.norm(C, None) # Frobenius (default)
[19]:
6.557438524302
[20]:
la.norm(C, 2) # largest singular value
[20]:
6.389028023601216
[21]:
la.norm(C, -2) # smallest singular value
[21]:
1.476590977094992
[22]:
la.svdvals(C)
[22]:
array([6.38902802, 1.47659098])
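As a consistency check, the 2-norm condition number is the ratio of the largest to the smallest singular value, and np.linalg.cond computes the same quantity directly; a quick sketch:
[ ]:
sv = la.svdvals(C)
sv[0] / sv[-1], np.linalg.cond(C)   # the two values should agree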
Least-squares solution¶
[23]:
la.solve(A, b)
[23]:
array([11., -4.])
[24]:
x, resid, rank, s = la.lstsq(A, b)
x
[24]:
array([11., -4.])
[25]:
A1 = np.array([[1,2],[2,4]])
A1
[25]:
array([[1, 2],
       [2, 4]])
[26]:
b1 = np.array([3, 17])
b1
[26]:
array([ 3, 17])
[27]:
try:
    la.solve(A1, b1)
except la.LinAlgError as e:
    print(e)
Matrix is singular.
[28]:
x, resid, rank, s = la.lstsq(A1, b1)
x
[28]:
array([1.48, 2.96])
[29]:
A2 = np.random.random((10,3))
b2 = np.random.random(10)
[30]:
try:
    la.solve(A2, b2)
except ValueError as e:
    print(e)
Input a needs to be a square matrix.
[31]:
x, resid, rank, s = la.lstsq(A2, b2)
x
[31]:
array([ 0.41443305, -0.53024457, 0.61557362])
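The defining property of the least-squares solution is that the residual is orthogonal to the column space of \(A_2\), i.e. \(A_2^T(A_2x - b_2) = 0\). A quick sketch of that check using the solution above:
[ ]:
r = A2 @ x - b2                     # residual vector
np.allclose(A2.T @ r, 0)            # orthogonal to the columns of A2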
Normal equations¶
One way to solve the least squares equations \(X\beta = y\) for \(\beta\) is by using the formula \(\beta = (X^TX)^{-1}X^Ty\), as you may have learnt in statistical theory classes (or can derive yourself with a bit of calculus). This is implemented below.
Note: This is not how the la.lstsq function solves least squares problems, as forming \(X^TX\) squares the condition number and can be inefficient for large matrices.
[32]:
def least_squares(X, y):
    # solve the normal equations (X^T X) beta = X^T y
    return la.solve(X.T @ X, X.T @ y)
[33]:
least_squares(A2, b2)
[33]:
array([ 0.41443305, -0.53024457, 0.61557362])
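As a sanity check, the normal-equations solution should agree (up to floating point) with the one returned by la.lstsq:
[ ]:
beta_ne = least_squares(A2, b2)
beta_ls, *_ = la.lstsq(A2, b2)      # discard resid, rank, s
np.allclose(beta_ne, beta_ls)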
Matrix Decompositions¶
[34]:
A = np.array([[1,0.6],[0.6,4]])
A
[34]:
array([[1. , 0.6],
       [0.6, 4. ]])
LU¶
[35]:
p, l, u = la.lu(A)
[36]:
p
[36]:
array([[1., 0.],
       [0., 1.]])
[37]:
l
[37]:
array([[1. , 0. ],
       [0.6, 1. ]])
[38]:
u
[38]:
array([[1.  , 0.6 ],
       [0.  , 3.64]])
[39]:
np.allclose(p@l@u, A)
[39]:
True
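In practice the factorization is usually computed once and reused for many right-hand sides; scipy wraps this pattern as la.lu_factor and la.lu_solve. A minimal sketch with an arbitrary right-hand side:
[ ]:
lu, piv = la.lu_factor(A)           # factor once
xb = la.lu_solve((lu, piv), np.array([1.0, 2.0]))
np.allclose(A @ xb, [1.0, 2.0])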
Cholesky¶
[40]:
U = la.cholesky(A)
U
[40]:
array([[1.       , 0.6      ],
       [0.       , 1.9078784]])
[41]:
np.allclose(U.T @ U, A)
[41]:
True
[42]:
# If working with complex matrices, use the conjugate transpose
np.allclose(U.T.conj() @ U, A)
[42]:
True
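For a symmetric positive definite matrix, the Cholesky factor can likewise be reused to solve linear systems via la.cho_factor and la.cho_solve. A minimal sketch with an arbitrary right-hand side:
[ ]:
c, low = la.cho_factor(A)           # factor once
xb = la.cho_solve((c, low), np.array([1.0, 2.0]))
np.allclose(A @ xb, [1.0, 2.0])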
QR¶
[43]:
Q, R = la.qr(A)
[44]:
Q
[44]:
array([[-0.85749293, -0.51449576],
       [-0.51449576,  0.85749293]])
[45]:
np.allclose((la.norm(Q[:,0]), la.norm(Q[:,1])), (1,1))
[45]:
True
[46]:
np.allclose(Q@R, A)
[46]:
True
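Because Q is orthogonal and R is upper triangular, \(Ax = b\) can be solved as \(Rx = Q^Tb\) with a single back-substitution. A minimal sketch using la.solve_triangular, with an arbitrary right-hand side:
[ ]:
bq = np.array([1.0, 2.0])
xq = la.solve_triangular(R, Q.T @ bq)   # back-substitution on upper-triangular R
np.allclose(A @ xq, bq)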
Spectral¶
[47]:
u, v = la.eig(A)
[48]:
u
[48]:
array([0.88445056+0.j, 4.11554944+0.j])
[49]:
v
[49]:
array([[-0.98195639, -0.18910752],
       [ 0.18910752, -0.98195639]])
[50]:
np.allclose((la.norm(v[:,0]), la.norm(v[:,1])), (1,1))
[50]:
True
[51]:
np.allclose(v @ np.diag(u) @ v.T, A)
[51]:
True
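The reconstruction above uses v.T only because A is symmetric, so its eigenvectors are orthonormal. For a general diagonalizable matrix you need the inverse of the eigenvector matrix:
[ ]:
# general eigendecomposition: A = V diag(u) V^{-1}
np.allclose(v @ np.diag(u) @ la.inv(v), A)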
SVD¶
[54]:
U, s, V = la.svd(A)
[55]:
U
[55]:
array([[ 0.18910752,  0.98195639],
       [ 0.98195639, -0.18910752]])
[56]:
np.allclose((la.norm(U[:,0]), la.norm(U[:,1])), (1,1))
[56]:
True
[57]:
s
[57]:
array([4.11554944, 0.88445056])
[58]:
V
[58]:
array([[ 0.18910752,  0.98195639],
       [ 0.98195639, -0.18910752]])
[59]:
np.allclose((la.norm(V[:,0]), la.norm(V[:,1])), (1,1))
[59]:
True
[60]:
np.allclose(U @ np.diag(s) @ V, A)
[60]:
True
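A caution on names: la.svd actually returns \(V^H\) (the conjugate transpose of \(V\)) as its third value, which is why the reconstruction above needs no transpose. With the names made explicit:
[ ]:
U, s, Vh = la.svd(A)
V = Vh.conj().T                     # recover V itself
np.allclose(U @ np.diag(s) @ V.conj().T, A)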