Finding the Inverse Matrix Using Gaussian Elimination Algorithm 📂Matrix Algebra

Algorithm

Input

An invertible matrix $M \in \mathbb{R}^{n \times n}$ is given.


Step 1. Initialization

Create an identity matrix $X$ of the same size as $M$.


Step 2. Echelon Form

Transform $M$ into echelon form through Gaussian elimination. For solving a linear system it would suffice to stop at an upper triangular matrix, but in the algorithm for computing the inverse we continue all the way to a diagonal matrix. Note that every row operation applied to $M$ is applied to $X$ as well.


Output

You obtain $X$, which is the inverse matrix of $M$, satisfying $MX = XM = E_{n}$.

Example

It’s better to see an example than to hear a detailed explanation.

Input

A matrix $M = \begin{bmatrix} 1 & 0 & 1 & 1\\ 2 & 0 & 1 & 0\\ -2& 3 & 4 & 0\\ -5& 5 & 6 & 0\end{bmatrix}$ is given.


Step 1. Initialization

$$ \begin{bmatrix} 1 & 0 & 1 & 1 \\ 2 & 0 & 1 & 0 \\ -2& 3 & 4 & 0 \\ -5& 5 & 6 & 0 \end{bmatrix} \cdots \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

An identity matrix of the same size as the given matrix is created. Above, the left matrix is $M$, and the right one will become the inverse matrix $X$.


Step 2. Echelon Form

Strictly speaking, the method introduced in this post does not create an echelon form in the usual manner. For the sake of computational convenience, a diagonal matrix is created first, and then each row is divided by its pivot.

$$ \begin{align*} & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 2 & 0 & 1 & 0 \\ -2& 3 & 4 & 0 \\ -5& 5 & 6 & 0 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 0 & -1 & -2 \\ 0 & 3 & 6 & 2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \end{align*} $$

Applying the same row operations to $X$ as to $M$ means, concretely: the first row of $M$ is multiplied by $-2$, $2$, and $5$ and added to the second, third, and fourth rows respectively, and the identical operations are performed on $X$.
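This first elimination step can be sketched in Julia, as a minimal illustration on the example matrices above:

```julia
using LinearAlgebra

M = Float64.([1 0 1 1; 2 0 1 0; -2 3 4 0; -5 5 6 0])
X = Matrix(1.0I, 4, 4)

# Add multiples of row 1 to rows 2-4 so that column 1 becomes zero
# below the pivot, applying the identical operations to X.
for i = 2:4
    r = -M[i,1] / M[1,1]
    M[i,:] += r * M[1,:]
    X[i,:] += r * X[1,:]
end
```

After the loop, $M$ and $X$ match the right-hand pair in the display above.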

$$ \begin{align*} & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 0 & -1 & -2 \\ 0 & 3 & 6 & 2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -1 & -2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \end{align*} $$

If the pivot is $0$, that is, if $M_{ii} = 0$, then the $i$-th row is swapped with some $j$-th row ($j > i$) satisfying $M_{ji} \ne 0$. The existence of such a row is guaranteed by the assumption that $M$ is invertible. Naturally, this row exchange must be applied to $X$ as well.
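In Julia the swap above can be written as a single index assignment on both matrices (a sketch, starting from the intermediate matrices of the previous step):

```julia
M = Float64.([1 0 1 1; 0 0 -1 -2; 0 3 6 2; 0 5 11 5])
X = Float64.([1 0 0 0; -2 1 0 0; 2 0 1 0; 5 0 0 1])

# M[2,2] is zero while M[3,2] = 3 is not, so exchange rows 2 and 3
# in both matrices via index swapping.
M[[2,3],:] = M[[3,2],:]
X[[2,3],:] = X[[3,2],:]
```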

$$ \begin{align*} & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -1 & -2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 1 & 5/3 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ 5/3 & 0 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 3 & 0 & -10 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 0 & -1/3 \end{bmatrix} & \cdots & \begin{bmatrix} -1 & 1 & 0 & 0 \\ -10 & 6 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ -1/3 & 1 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1/3 \end{bmatrix} & \cdots & \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -24 & 51 & -30 \\ 0 & -5 & 10 & -6 \\ -1/3 & 1 & -5/3 & 1 \end{bmatrix} \end{align*} $$

Repeat until $M$ becomes a diagonal matrix. In the textbook procedure, $M$ is first brought to upper triangular form, each row is divided by its pivot to produce an echelon form, and only then are the remaining entries above the pivots eliminated. Here the order is changed because the textbook order is cumbersome to code; the theory behind Gaussian elimination guarantees that reordering the operations causes no problems.

$$ \begin{align*} & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1/3 \end{bmatrix} & \cdots & \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -24 & 51 & -30 \\ 0 & -5 & 10 & -6 \\ -1/3 & 1 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} & \cdots & \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -8 & 17 & -10 \\ 0 & 5 & -10 & 6 \\ 1 & -3 & 5 & -3 \end{bmatrix} \end{align*} $$

Once the diagonal matrix is obtained, each row is divided by its pivot, after which $M$ is the identity matrix.
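The final division step, sketched in Julia on the diagonal matrix from the last display:

```julia
M = Float64.([1 0 0 0; 0 3 0 0; 0 0 -1 0; 0 0 0 -1/3])
X = Float64.([0 -2 5 -3; 0 -24 51 -30; 0 -5 10 -6; -1/3 1 -5/3 1])

# Divide each row by its pivot; M becomes the identity and X the inverse.
for i = 1:4
    p = M[i,i]
    M[i,:] /= p
    X[i,:] /= p
end
```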


Output

$$ X = \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -8 & 17 & -10 \\ 0 & 5 & -10 & 6 \\ 1 & -3 & 5 & -3 \end{bmatrix} $$

$X$ is the inverse matrix of $M$. The process up to now can be summarized as follows.

$$ \begin{align*} & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 2 & 0 & 1 & 0 \\ -2& 3 & 4 & 0 \\ -5& 5 & 6 & 0 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 0 & -1 & -2 \\ 0 & 3 & 6 & 2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -1 & -2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 1 & 5/3 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ 5/3 & 0 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 3 & 0 & -10 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 0 & -1/3 \end{bmatrix} & \cdots & \begin{bmatrix} -1 & 1 & 0 & 0 \\ -10 & 6 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ -1/3 & 1 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1/3 \end{bmatrix} & \cdots & \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -24 & 51 & -30 \\ 0 & -5 & 10 & -6 \\ -1/3 & 1 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} & \cdots & \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -8 & 17 & -10 \\ 0 & 5 & -10 & 6 \\ 1 & -3 & 5 & -3 \end{bmatrix} \end{align*} $$
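Since all entries of $M$ and $X$ here are integers, the claim $MX = XM = E_{4}$ can be verified exactly:

```julia
M = [1 0 1 1; 2 0 1 0; -2 3 4 0; -5 5 6 0]
X = [0 -2 5 -3; 0 -8 17 -10; 0 5 -10 6; 1 -3 5 -3]

# Both products give the 4×4 identity matrix.
MX = M * X
XM = X * M
```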

Code

Below is the Julia code for finding the inverse matrix using Gaussian elimination.

using LinearAlgebra

M = [
      1 0 1 1
      2 0 1 0
     -2 3 4 0
     -5 5 6 0
     ]
inv_M = inv(M)          # reference answer from the standard library

n = size(M, 1)
M = Float64.(M)         # work in floating point
X = Matrix(1.0I, n, n)  # X starts as the identity matrix
for j = 1:n
    # If the pivot is zero, swap with a lower row whose entry in column j is nonzero.
    if iszero(M[j,j])
        for i = j+1:n
            if !iszero(M[i,j])
                M[[j,i],:] = M[[i,j],:]
                X[[j,i],:] = X[[i,j],:]
                break
            end
        end
    end

    # Eliminate every other entry in column j, applying the same operations to X.
    pivot = M[j,j]
    for i = 1:n
        if i == j continue end
        r = -M[i,j] / pivot
        M[i,:] += r*M[j,:]
        X[i,:] += r*X[j,:]
    end
end

# Divide each row by its pivot so that M becomes the identity.
for i = 1:n
    pivot = M[i,i]
    M[i,:] /= pivot
    X[i,:] /= pivot
end
M       # now the identity matrix
X       # the computed inverse
inv_M   # agrees with X
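As a final sanity check (not part of the listing above), the elimination result can be compared against the library inverse; `M0` and `expected` are names chosen here for illustration:

```julia
using LinearAlgebra

M0 = [1 0 1 1; 2 0 1 0; -2 3 4 0; -5 5 6 0]
expected = [0 -2 5 -3; 0 -8 17 -10; 0 5 -10 6; 1 -3 5 -3]

# The built-in inverse agrees with the hand-rolled elimination result.
@assert inv(M0) ≈ expected
```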