
Finding the Inverse Matrix Using Gaussian Elimination Algorithm

Algorithm

Input

An invertible matrix $M \in \mathbb{R}^{n \times n}$ is given.


Step 1. Initialization

Create an identity matrix $X$ of the same size as $M$.


Step 2. Echelon Form

Using Gaussian elimination, transform $M$ into echelon form. Ordinarily it would suffice to reach an upper triangular form, but in this inverse-matrix algorithm we continue until $M$ is a diagonal matrix. Every row operation applied to $M$ is also applied to $X$.


Output

You obtain $X$, the inverse matrix of $M$, satisfying $MX = XM = E_{n}$.
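Why this works: each row operation is a left-multiplication by an elementary matrix. If the row operations $R_{1}, \dots, R_{k}$ reduce $M$ to $E_{n}$, then
$$ R_{k} \cdots R_{1} M = E_{n} \implies R_{k} \cdots R_{1} = M^{-1}, $$
and since exactly the same operations are applied to the identity matrix, the right-hand matrix accumulates $R_{k} \cdots R_{1} E_{n} = M^{-1} = X$.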

Example

It’s better to see an example than to hear a detailed explanation.

Input

The matrix
$$ M = \begin{bmatrix} 1 & 0 & 1 & 1 \\ 2 & 0 & 1 & 0 \\ -2& 3 & 4 & 0 \\ -5& 5 & 6 & 0 \end{bmatrix} $$
is given.


Step 1. Initialization

$$ \begin{bmatrix} 1 & 0 & 1 & 1 \\ 2 & 0 & 1 & 0 \\ -2& 3 & 4 & 0 \\ -5& 5 & 6 & 0 \end{bmatrix} \cdots \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

An identity matrix of the same size as the given matrix is created. Above, the left matrix is $M$ and the right one will become the inverse matrix $X$.
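In Julia, this initialization could look like the following (a minimal sketch using the same names as the full script at the end of the post):

using LinearAlgebra

M = Float64.([1 0 1 1; 2 0 1 0; -2 3 4 0; -5 5 6 0])   # the example matrix, as Float64
X = Matrix(1.0I, 4, 4)                                  # identity matrix of the same size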


Step 2. Echelon Form

Strictly speaking, the method introduced in this post does not create an echelon form in the usual manner. For the sake of computational convenience, a diagonal matrix is created first, and then each row is divided by its pivot.

$$ \begin{align*} & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 2 & 0 & 1 & 0 \\ -2& 3 & 4 & 0 \\ -5& 5 & 6 & 0 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 0 & -1 & -2 \\ 0 & 3 & 6 & 2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \end{align*} $$

Applying the same row operations to $M$ and $X$ means, concretely, that the first row of $M$ multiplied by $-2, 2, 5$ is added to the second, third, and fourth rows respectively, and the identical operations are performed on $X$.
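In code, this first elimination step could look like the following (a sketch assuming M and X were set up as above; the multipliers $-2, 2, 5$ are exactly -M[i,1]/M[1,1] for rows i = 2, 3, 4):

pivot = M[1,1]
for i = 2:4
    r = -M[i,1] / pivot      # the multipliers -2, 2, 5
    M[i,:] += r * M[1,:]     # clear column 1 below the pivot
    X[i,:] += r * X[1,:]     # apply the identical operation to X
end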

$$ \begin{align*} & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 0 & -1 & -2 \\ 0 & 3 & 6 & 2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -1 & -2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \end{align*} $$

If the pivot is $0$, that is, if $M_{ii} = 0$, then the $i$-th row is swapped with a $j$-th row satisfying $M_{ji} \ne 0$. The existence of such a row is guaranteed by the assumption that $M$ is invertible. Naturally, this row exchange must also be applied to $X$.
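In Julia, such a swap can be written with index vectors, just as in the full script below; here rows 2 and 3 are exchanged as in the step above (a sketch assuming the state reached so far):

j, i = 2, 3                  # M[2,2] == 0 but M[3,2] != 0
M[[j,i],:] = M[[i,j],:]      # swap rows j and i of M
X[[j,i],:] = X[[i,j],:]      # apply the same swap to X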

$$ \begin{align*} & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -1 & -2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 1 & 5/3 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ 5/3 & 0 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 3 & 0 & -10 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 0 & -1/3 \end{bmatrix} & \cdots & \begin{bmatrix} -1 & 1 & 0 & 0 \\ -10 & 6 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ -1/3 & 1 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1/3 \end{bmatrix} & \cdots & \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -24 & 51 & -30 \\ 0 & -5 & 10 & -6 \\ -1/3 & 1 & -5/3 & 1 \end{bmatrix} \end{align*} $$

Repeat this until $M$ becomes a diagonal matrix. In the usual presentation, $M$ is first brought to upper triangular form, each row is divided by its pivot to obtain the echelon form, and only then are the entries above the pivots eliminated. Here that order is changed because it is more convenient to implement in code; the theory behind Gaussian elimination guarantees that reordering these row operations does not change the result.
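Concretely, for each column the following loop clears every entry above and below the pivot in one pass (a sketch of the inner loop of the full script; the textbook version would clear only the entries below the pivot at this stage):

j = 2                        # e.g. the second column
pivot = M[j,j]
for i = 1:4
    if i == j continue end   # leave the pivot row alone
    r = -M[i,j] / pivot
    M[i,:] += r * M[j,:]     # clear M[i,j], whether i < j or i > j
    X[i,:] += r * X[j,:]
end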

$$ \begin{align*} & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1/3 \end{bmatrix} & \cdots & \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -24 & 51 & -30 \\ 0 & -5 & 10 & -6 \\ -1/3 & 1 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} & \cdots & \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -8 & 17 & -10 \\ 0 & 5 & -10 & 6 \\ 1 & -3 & 5 & -3 \end{bmatrix} \end{align*} $$

Once $M$ is diagonal, divide each row by its pivot; $M$ then becomes the identity matrix.


Output

$$ X = \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -8 & 17 & -10 \\ 0 & 5 & -10 & 6 \\ 1 & -3 & 5 & -3 \end{bmatrix} $$

$X$ is the inverse matrix of $M$. The whole process can be summarized as follows.

$$ \begin{align*} & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 2 & 0 & 1 & 0 \\ -2& 3 & 4 & 0 \\ -5& 5 & 6 & 0 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 0 & -1 & -2 \\ 0 & 3 & 6 & 2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -1 & -2 \\ 0 & 5 & 11 & 5 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ 5 & 0 & 0 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 3 & 6 & 2 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 1 & 5/3 \end{bmatrix} & \cdots & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ 5/3 & 0 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 3 & 0 & -10 \\ 0 & 0 & -1 & -2 \\ 0 & 0 & 0 & -1/3 \end{bmatrix} & \cdots & \begin{bmatrix} -1 & 1 & 0 & 0 \\ -10 & 6 & 1 & 0 \\ -2 & 1 & 0 & 0 \\ -1/3 & 1 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1/3 \end{bmatrix} & \cdots & \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -24 & 51 & -30 \\ 0 & -5 & 10 & -6 \\ -1/3 & 1 & -5/3 & 1 \end{bmatrix} \\ \to & \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} & \cdots & \begin{bmatrix} 0 & -2 & 5 & -3 \\ 0 & -8 & 17 & -10 \\ 0 & 5 & -10 & 6 \\ 1 & -3 & 5 & -3 \end{bmatrix} \end{align*} $$

Code

Below is the Julia code for finding the inverse matrix using Gaussian elimination.

using LinearAlgebra

M = [
      1 0 1 1
      2 0 1 0
     -2 3 4 0
     -5 5 6 0
     ]
inv_M = inv(M)          # reference inverse from the standard library, for comparison

n = size(M)[1]
M = Float64.(M)         # convert to floating point so that division works
X = Matrix(1.0I, n, n)  # X starts as the identity matrix
for j = 1:n
    # If the pivot is zero, swap in a lower row with a nonzero entry in column j.
    if iszero(M[j,j])
        for i = j:n
            if !iszero(M[i,j])
                M[[j,i],:] = M[[i,j],:]
                X[[j,i],:] = X[[i,j],:]
                break
            end
        end
    end

    # Eliminate every entry above and below the pivot in column j,
    # applying the identical row operations to X.
    pivot = M[j,j]
    for i = 1:n
        if i == j continue end
        r = (- M[i,j] / pivot)
        M[i,:] += r*M[j,:]
        X[i,:] += r*X[j,:]
    end
end

# Divide each row by its pivot: M becomes the identity and X the inverse of M.
for i = 1:n
    pivot = M[i,i]
    M[i,:] /= pivot
    X[i,:] /= pivot
end
M
X
inv_M
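
As a quick sanity check (a small sketch, assuming the script above has just been run; inv_M was computed at the top before M was modified):

X ≈ inv_M               # should be true: X agrees with the built-in inverse up to rounding
M ≈ Matrix(1.0I, n, n)  # should be true: M has been reduced to the identity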