Review of solving simultaneous linear equations (HS)

References

  1. David A. Harville, Matrix Algebra from a Statistician's Perspective.
  2. David C. Lay, Linear Algebra and Its Applications, 4th ed.

Linear Systems: Consistency & Solutions

Motivation

In many instances, the solution to a problem in statistics can be reduced to a problem of solving a system of linear equations. For example, the problem of obtaining least squares estimates of the parameters in multiple linear regression can be reduced to solving a system of linear equations called the normal equations (to be discussed in the next lecture), or the constrained normal equations if the parameters are subject to linear constraints.

Consider a set of \(m\) linear equations in \(n\) unknowns:

\begin{align*} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1\\ &\;\;\vdots\\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m \end{align*}

Collectively, these equations are called a system of linear equations or simply a linear system.

We can let:

\begin{align*} A=\left[\begin{matrix}a_{11} & \cdots & a_{1n}\\ \vdots & &\vdots\\ a_{m1}&\cdots & a_{mn} \end{matrix}\right], & & x = \left[\begin{matrix}x_1\\ \vdots\\ x_n\end{matrix}\right] & \;\;\;\;\textrm{ and } & b = \left[\begin{matrix}b_1\\ \vdots\\ b_m\end{matrix}\right] \end{align*}

and rewrite the system as:

\[\mathbf{Ax} = \mathbf{b},\]

where \(\mathbf{A}\) is called the coefficient matrix (or matrix of coefficients) and \(\mathbf{x}\) is the vector of unknowns; any \(\mathbf{x}\) satisfying \(\mathbf{Ax} = \mathbf{b}\) is a solution to the linear system.
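For instance, a small system and a candidate solution can be represented and checked directly with numpy (an illustrative sketch; the numbers are made up):

In [ ]:

import numpy as np

# An illustrative 2x2 system:  x1 + 2*x2 = 5,  3*x1 + 4*x2 = 11
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])     # coefficient matrix
b = np.array([5.0, 11.0])      # right-hand side

x = np.array([1.0, 2.0])       # candidate solution
print(np.allclose(A @ x, b))   # True, so x solves Ax = b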

Solving linear systems

The basic strategy for solving linear systems is Gaussian elimination (GE).

Let’s review how Gaussian elimination (GE) works. We will deal with a \(3\times 3\) system of equations for conciseness, but everything here generalizes to the \(n\times n\) case. Consider the following equation:

\[\begin{split}\left(\begin{matrix}a_{11}&a_{12} & a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{matrix}\right)\left(\begin{matrix}x_1\\x_2\\x_3\end{matrix}\right) = \left(\begin{matrix}b_1\\b_2\\b_3\end{matrix}\right)\end{split}\]

For simplicity, let us assume that the coefficient matrix \(A\) is non-singular. To solve the system using GE, we start with the “augmented matrix”:

\[\begin{split}\left(\begin{array}{ccc|c}a_{11}&a_{12} & a_{13}& b_1 \\a_{21}&a_{22}&a_{23}&b_2\\a_{31}&a_{32}&a_{33}&b_3\end{array}\right)\end{split}\]

We begin at the first entry, \(a_{11}\). If \(a_{11} \neq 0\), then we divide the first row by \(a_{11}\) and then subtract the appropriate multiple of the first row from each of the other rows, zeroing out the first entry of all rows. (If \(a_{11}\) is zero, we need to permute rows. We will not go into details of that here.) The result is as follows:

\[\begin{split}\left(\begin{array}{ccc|c} 1 & \frac{a_{12}}{a_{11}} & \frac{a_{13}}{a_{11}} & \frac{b_1}{a_{11}} \\ 0 & a_{22} - a_{21}\frac{a_{12}}{a_{11}} & a_{23} - a_{21}\frac{a_{13}}{a_{11}} & b_2 - a_{21}\frac{b_1}{a_{11}}\\ 0 & a_{32}-a_{31}\frac{a_{12}}{a_{11}} & a_{33} - a_{31}\frac{a_{13}}{a_{11}} & b_3 - a_{31}\frac{b_1}{a_{11}}\end{array}\right)\end{split}\]

We repeat the procedure for the second row: first divide by the leading entry, then subtract the appropriate multiple of the resulting row from each of the first and third rows, so that the second entry in row 1 and in row 3 is zero. We could continue until the matrix on the left is the identity, in which case we can simply “read off” the solution: the vector \(x\) is the resulting column vector on the right. Usually, however, it is more efficient to stop at row echelon form (upper triangular, with ones on the diagonal) and then use back substitution to obtain the final answer.
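A minimal sketch of this procedure in Python with numpy, assuming \(A\) is square and non-singular and that no zero pivots arise (so no row permutations are needed); the function name and test system are illustrative:

In [ ]:

import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination and back substitution.

    A naive sketch: assumes A is square, non-singular, and that no
    zero pivot is encountered (no row permutations / pivoting).
    """
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)

    # Forward elimination: reduce A to upper triangular with unit diagonal.
    for k in range(n):
        pivot = A[k, k]
        A[k, k:] /= pivot          # scale pivot row so the pivot is 1
        b[k] /= pivot
        for i in range(k + 1, n):  # zero out entries below the pivot
            factor = A[i, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    # Back substitution: solve from the last equation upward.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = b[i] - A[i, i + 1:] @ x[i + 1:]
    return x

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 5.0],
              [2.0, 5.0, -1.0]])
b = np.array([6.0, -4.0, 27.0])
print(gaussian_elimination(A, b))  # [ 5.  3. -2.]
print(np.linalg.solve(A, b))       # agrees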

Note that in some cases it is necessary to permute rows (e.g., when a pivot entry is zero) to reach row echelon form. Choosing which rows to swap is called partial pivoting; if we also manipulate columns, that is called full pivoting.

It should be mentioned that we may obtain the inverse of a matrix using GE, by reducing the matrix \(A\) to the identity with the identity matrix as the augmented portion: row-reducing \([A \mid I]\) yields \([I \mid A^{-1}]\).
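A sketch of this idea in numpy (partial pivoting is added for robustness; the function name is illustrative and assumes \(A\) is non-singular):

In [ ]:

import numpy as np

def inverse_via_ge(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | inv(A)]."""
    A = A.astype(float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])    # augmented matrix [A | I]

    for k in range(n):
        # Partial pivoting: bring the largest pivot candidate into row k.
        p = k + np.argmax(np.abs(aug[k:, k]))
        aug[[k, p]] = aug[[p, k]]
        aug[k] /= aug[k, k]            # scale so the pivot is 1
        for i in range(n):             # zero out the rest of the column
            if i != k:
                aug[i] -= aug[i, k] * aug[k]

    return aug[:, n:]                  # the right half is the inverse

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(inverse_via_ge(A))
print(np.linalg.inv(A))                # agrees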

Example

Solve the linear system below:

\begin{align*} 2x_1 + x_2 - x_3 &= 8\\ -3x_1 - x_2 + 2x_3 &= -11\\ -2x_1 + x_2 + 2x_3 &= -3 \end{align*}

Starting from the augmented matrix and applying elementary row operations:

\[\begin{split}\left(\begin{array}{ccc|c} 2 & 1 & -1 & 8\\ -3 & -1 & 2 & -11\\ -2 & 1 & 2 & -3 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & \frac{1}{2} & -\frac{1}{2} & 4\\ 0 & \frac{1}{2} & \frac{1}{2} & 1\\ 0 & 2 & 1 & 5 \end{array}\right) \rightarrow \left(\begin{array}{ccc|c} 1 & \frac{1}{2} & -\frac{1}{2} & 4\\ 0 & 1 & 1 & 2\\ 0 & 0 & -1 & 1 \end{array}\right)\end{split}\]

Back substitution then gives \(x_3 = -1\), \(x_2 = 2 - x_3 = 3\), and \(x_1 = 4 - \frac{1}{2}x_2 + \frac{1}{2}x_3 = 2\), so the solution is \((x_1, x_2, x_3) = (2, 3, -1)\).
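We can verify the answer numerically; np.linalg.solve uses an LU factorization with partial pivoting, which is GE in matrix form:

In [ ]:

import numpy as np

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])

print(np.linalg.solve(A, b))   # [ 2.  3. -1.]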

Summary

To solve a linear system, we apply elementary row operations to the augmented matrix until row echelon form is obtained, and then use back substitution to find the solutions.

Elementary row operations include (each is illustrated in the sketch below):

  1. (Replacement) Replace one row by the sum of itself and a multiple of another row.
  2. (Interchange) Interchange two rows.
  3. (Scaling) Multiply all entries in a row by a non-zero constant.
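Each operation can be carried out by left-multiplying with a suitable elementary matrix; a small numpy illustration (the matrices are made-up examples):

In [ ]:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# 1. Replacement: add 2 * (row 0) to row 1  <=>  left-multiply by E1.
E1 = np.array([[1.0, 0.0],
               [2.0, 1.0]])
print(E1 @ A)   # row 1 becomes [5, 8]

# 2. Interchange: swap rows 0 and 1  <=>  left-multiply by a permutation matrix.
E2 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
print(E2 @ A)

# 3. Scaling: multiply row 0 by 3  <=>  left-multiply by a diagonal matrix.
E3 = np.array([[3.0, 0.0],
               [0.0, 1.0]])
print(E3 @ A)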

Consistency (Existence of one or more solutions)

A linear system is said to be consistent if it has one or more solutions; a linear system is inconsistent if no solution exists.

Fact: There are ONLY three possible scenarios for a linear system:

  1. no solution
  2. exactly one solution
  3. infinitely many solutions

Linear Independence

  • If \(A\) is an \(m\times n\) matrix and \(m>n\), the system is overdetermined: there are more equations than unknowns (note that at most \(n\) rows can be linearly independent), and in general no \(x\) satisfies all \(m\) equations exactly, i.e., the system is inconsistent unless \(b\) happens to lie in the column space of \(A\) (see the sketch after this list). This is the usual case in data analysis, and why least squares is so important.
  • If \(A\) is an \(m\times n\) matrix, \(m<n\), and all \(m\) rows are linearly independent, then the system is underdetermined and there are infinitely many solutions.
  • If \(A\) is an \(m\times n\) matrix and some of its rows are linearly dependent, then the system is reducible: the redundant equations can be dropped without changing the solution set, provided the corresponding entries of \(b\) agree (otherwise the system is inconsistent).
  • If \(A\) is a square matrix and its rows are linearly independent, the system has a unique solution. (\(A\) is invertible.)
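These cases can be checked numerically by comparing \(\mathrm{rank}(A)\), \(\mathrm{rank}([A \mid b])\), and \(n\) (the Rouché–Capelli criterion); a sketch with illustrative systems:

In [ ]:

import numpy as np

def classify(A, b):
    """Classify Ax = b by ranks: inconsistent if rank(A) < rank([A|b]);
    a unique solution if rank(A) = n; otherwise infinitely many solutions."""
    n = A.shape[1]
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rank_A < rank_Ab:
        return "no solution"
    return "exactly one solution" if rank_A == n else "infinitely many solutions"

# Overdetermined and inconsistent (3 equations, 2 unknowns).
print(classify(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]),
               np.array([1.0, 1.0, 3.0])))               # no solution

# Square with linearly independent rows: unique solution.
print(classify(np.array([[2.0, 1.0], [1.0, 3.0]]),
               np.array([1.0, 2.0])))                    # exactly one solution

# Underdetermined (1 equation, 2 unknowns): infinitely many solutions.
print(classify(np.array([[1.0, 1.0]]), np.array([2.0])))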

Solutions

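One way to obtain the solutions directly is to compute the reduced row echelon form of the augmented matrix; a sketch using sympy (assuming it is installed), applied to the example system above:

In [ ]:

from sympy import Matrix

# Augmented matrix [A | b] for the example system above.
aug = Matrix([[2, 1, -1, 8],
              [-3, -1, 2, -11],
              [-2, 1, 2, -3]])

rref, pivots = aug.rref()   # reduced row echelon form and pivot columns
print(rref)                 # last column holds the solution (2, 3, -1)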