Determinants

Definitions

Let’s denote $A$ as the following $2 \times 2$ matrix.

$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$

The determinant of $A$ is defined as follows and is denoted by $\det(A)$.

$$ \det(A) := ad - bc $$
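
For instance, plugging a concrete matrix into this definition (the numbers here are only illustrative):

$$ \det \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = 1 \cdot 4 - 2 \cdot 3 = -2 $$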

Explanation

To talk about determinants, we cannot skip discussing the very purpose of linear algebra. It wouldn’t be an exaggeration to say that most problems in mathematics boil down to ‘can we solve the equation?’ For instance, consider the simple equation

$$ ax = b $$

It’s easy to see that this equation has a solution as long as $a \ne 0$. Similarly, the quadratic equation

$$ a x^2 + b x + c = 0 $$

can be easily solved using the quadratic formula. Mathematicians thus challenged themselves with increasingly difficult problems by raising the degree of $x$. However, the unfortunate genius Abel proved that ‘algebraic equations of degree five or higher do not have a general solution in radicals’.

Meanwhile, another path was left to be explored: increasing the number of unknowns and the number of equations instead. This is where determinants came into play. Despite what the Korean terms might suggest, determinants actually appeared before matrices historically, and the English terms determinant and matrix are not directly related. The name determinant was given to this formula because it can determine whether or not a unique solution exists for a system of two linear equations in two unknowns, as follows.

$$ \left\{ \begin{align*} ax + by &= 0 \\ cx + dy &= 0 \end{align*} \right. $$

Given the simultaneous equations above, if $ad - bc \ne 0$, then only the trivial solution $x = y = 0$ exists, and if $ad - bc = 0$, the system also has non-trivial solutions. Therefore, $ad - bc$ serves as a formula that determines whether a given system of equations has a unique solution, hence the name.
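
As a concrete illustration (the coefficients here are chosen only for demonstration), consider

$$ \left\{ \begin{align*} x + 2y &= 0 \\ 2x + 4y &= 0 \end{align*} \right. \qquad ad - bc = 1 \cdot 4 - 2 \cdot 2 = 0 $$

Every pair $(x, y) = (-2t, t)$ solves this system, so non-trivial solutions exist. Replacing the second equation with $3x + 4y = 0$ gives $ad - bc = 1 \cdot 4 - 2 \cdot 3 = -2 \ne 0$, and only $x = y = 0$ remains.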

As is well-known, systems of equations can be expressed in the form of matrices; in this form, a system of equations can be written as ‘simply’ as follows.

$$ A \mathbf{x} = \mathbf{b} $$

Remembering that the solution to $ax = b$ was $x = \dfrac{b}{a}$, note that $\dfrac{1}{a}$ is the inverse of $a$, so multiplying both sides by it leaves only $x$. Tying this to the condition for the existence of a solution: $a = 0$ has no inverse, so $ax = b$ has no unique solution. Similarly, whether a solution of $A \mathbf{x} = \mathbf{b}$ can be found boils down to whether $A$ has an inverse. The existence of an inverse of $A$ itself signifies the existence of a solution for the linear system expressed by $A$, and multiplying by this inverse is how the solution is found. From this, we can understand that the condition for the existence of an inverse matrix of $A$ and the condition for the linear system expressed by $A$ to have a unique solution are the same.
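
Written out, the analogy is as follows (assuming in each case that the inverse exists):

$$ ax = b \implies x = a^{-1} b, \qquad A \mathbf{x} = \mathbf{b} \implies \mathbf{x} = A^{-1} \mathbf{b} $$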

The inverse matrix of $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is as follows.

$$ A^{-1} = \dfrac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} $$

This is proven simply by directly multiplying $A$ and $A^{-1}$. If $\det(A) = ad - bc = 0$, then no matter what the matrix part looks like, the factor in front of $A^{-1}$ becomes $\dfrac{1}{0}$, hence no inverse can exist. The term invertibility is sometimes referred to as nonsingularity for this reason. Singular translates to ‘peculiar’, but in mathematical usage it roughly amounts to ‘dividing by zero’.
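
A minimal numerical sketch of this check, using numpy (the specific matrices below are only illustrative):

```python
import numpy as np

# A 2x2 matrix with ad - bc != 0 (entries chosen only for illustration)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
det_A = a * d - b * c  # ad - bc = -2

# Inverse from the closed-form 2x2 formula
A_inv = (1 / det_A) * np.array([[ d, -b],
                                [-c,  a]])

# Direct multiplication recovers the identity matrix
print(np.allclose(A @ A_inv, np.eye(2)))     # True
print(np.allclose(A_inv, np.linalg.inv(A)))  # True: matches numpy's inverse

# A singular matrix: ad - bc = 0, so no inverse exists
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.isclose(np.linalg.det(B), 0.0))     # True
# np.linalg.inv(B) would raise LinAlgError("Singular matrix")
```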

On the other hand, from the perspective of a function that maps an $n \times n$ array of real numbers to a single real number, the determinant can be defined as follows:

Definition

A function $\det : \mathbb{R}^{n \times n} \to \mathbb{R}$ is defined as the determinant if it satisfies the following conditions, where $\mathbf{r}_{i}$ denotes the $i$-th row of the matrix:

  • For the identity matrix $I_{n}$, $\det(I_{n}) = 1$
  • For $1 \le i < j \le n$, $\det \begin{bmatrix} \mathbf{r}_{1} \\ \vdots \\ \mathbf{r}_{i} \\ \vdots \\ \mathbf{r}_{j} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix} = - \det \begin{bmatrix} \mathbf{r}_{1} \\ \vdots \\ \mathbf{r}_{j} \\ \vdots \\ \mathbf{r}_{i} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix}$
  • $\det \begin{bmatrix} k \mathbf{r}_{1} + l \mathbf{r}_{1}^{\prime} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix} = k \det \begin{bmatrix} \mathbf{r}_{1} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix} + l \det \begin{bmatrix} \mathbf{r}_{1}^{\prime} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix}$
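
As a sketch of how these conditions are consistent with the $2 \times 2$ formula given earlier: applying linearity to each row in turn (using the swap rule to bring a row to the first position when needed),

$$ \begin{align*} \det \begin{bmatrix} a & b \\ c & d \end{bmatrix} &= a \det \begin{bmatrix} 1 & 0 \\ c & d \end{bmatrix} + b \det \begin{bmatrix} 0 & 1 \\ c & d \end{bmatrix} \\ &= ac \det \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix} + ad \det \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + bc \det \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + bd \det \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} \\ &= 0 + ad \cdot 1 - bc + 0 = ad - bc \end{align*} $$

Here a matrix with two identical rows has determinant $0$, since swapping them flips the sign while leaving the matrix unchanged, and $\det \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = -\det I_{2} = -1$ by the second and first conditions.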

Explanation

As such, generalizing the determinant makes it easier to discuss the existence or absence of solutions to systems of equations. This discussion is perfectly encapsulated in the theorem below.

$$ \forall A \in \mathbb{C}^{n \times n}, \quad \exists A^{-1} \iff \det{A} \ne 0 $$

This may seem so obvious that it is almost taken as a definition. However, if we cannot explain why this theorem holds, or never question its obviousness, it’s as if we haven’t truly grasped the concept of determinants. Especially since, in the case of determinants, the concept precedes the definition, it’s advisable to spend time understanding this equivalence if it doesn’t yet feel natural.
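
A minimal numpy sketch of how this theorem is used in practice (the matrix and right-hand side below are only illustrative):

```python
import numpy as np

# An n x n complex matrix and right-hand side (values chosen only for illustration)
A = np.array([[1 + 1j, 0, 2],
              [0,      3, 1j],
              [2,      1, 0]])
b = np.array([1, 2, 3], dtype=complex)

# det A != 0 guarantees A is invertible, i.e. Ax = b has a unique solution.
# Numerically, a near-zero determinant is treated as singular here.
if not np.isclose(np.linalg.det(A), 0):
    x = np.linalg.solve(A, b)      # the unique solution of Ax = b
    print(np.allclose(A @ x, b))   # True
else:
    print("det A = 0: A is singular, so Ax = b has no unique solution")
```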