
Determinants

Definitions

Let’s denote $A$ as the following $2 \times 2$ matrix.

$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$

The determinant of $A$ is defined as follows and is denoted by $\det(A)$.

$$ \det(A) := ad - bc $$
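As a quick numerical sanity check of this definition, the following sketch (assuming NumPy is available; the matrix entries are arbitrary) compares $ad - bc$ computed by hand against NumPy's built-in determinant:

```python
import numpy as np

# A 2x2 matrix A = [[a, b], [c, d]] with arbitrary entries
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])

a, b = A[0]
c, d = A[1]

# Determinant by the definition: ad - bc = 3*2 - 1*4 = 2
det_by_formula = a * d - b * c

print(det_by_formula)       # 2.0
print(np.linalg.det(A))     # ≈ 2.0 (up to floating-point error)
```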

Explanation

To talk about determinants, we cannot skip discussing the very purpose of linear algebra. It wouldn’t be an exaggeration to say that most problems in mathematics boil down to ‘can we solve the equation?’ For instance, consider the simple equation

$$ ax = b $$

It’s easy to see that this equation has a solution as long as $a \ne 0$. Similarly, the quadratic equation

$$ a x^2 + b x + c = 0 $$

can be solved easily using the quadratic formula. Mathematicians thus challenged themselves with increasingly difficult problems by raising the degree of $x$. However, the ill-fated genius Abel proved that algebraic equations of degree five or higher have no general solution in radicals.

Meanwhile, another path remained to be explored: increasing the number of unknowns or the number of equations. This is where determinants came into play. Despite what the Korean terms might suggest, determinants historically appeared before matrices, and the English terms determinant and matrix are not directly related. The formula was named determinant because it can determine whether a unique solution exists for a system of two linear equations in two unknowns, as follows.

$$ \left\{ \begin{align*} ax + by &= 0 \\ cx + dy &= 0 \end{align*} \right. $$

Given the simultaneous equations above, if $ad-bc \ne 0$, then there exists only the trivial solution $x=y=0$, and if $ad-bc = 0$, the system also has non-trivial solutions (infinitely many of them). Therefore, $ad-bc$ serves as a formula that determines whether a given system of equations has a unique solution, hence the name.
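The determinant's "deciding" role on the homogeneous system can be illustrated numerically; the matrices below are my own examples, one with nonzero determinant and one with determinant zero:

```python
import numpy as np

# Case 1: ad - bc != 0  ->  only the trivial solution x = y = 0
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])           # det = 1*4 - 2*3 = -2
x = np.linalg.solve(A, np.zeros(2))  # unique solution of A x = 0
print(x)                             # effectively [0, 0]

# Case 2: ad - bc == 0  ->  non-trivial solutions exist
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # det = 1*4 - 2*2 = 0
v = np.array([2.0, -1.0])            # a non-trivial solution: B v = 0
print(B @ v)                         # [0. 0.]
```

In the singular case, every multiple of $v$ is also a solution, which is why the system fails to pin down a unique answer.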

As is well-known, systems of equations can be expressed in the form of matrices. A ‘simple’ system of equations can be expressed as follows.

$$ A \mathbf{x} = \mathbf{b} $$

Recall that the solution to $ax = b$ is $x = \dfrac{b}{a}$: since $\dfrac{1}{a}$ is the inverse of $a$, multiplying both sides by it leaves only $x$. Tying this to the condition for the existence of a solution, $a = 0$ has no inverse, so $ax = b$ has no solution in that case. Similarly, whether $A \mathbf{x} = \mathbf{b}$ can be solved boils down to whether $A$ has an inverse. The existence of an inverse of $A$ signifies the existence of a solution to the linear system expressed by $A$, and multiplying by this inverse is how the solution is found. From this, we can see that the condition for $A$ to have an inverse and the condition for the linear system expressed by $A$ to have a unique solution are the same.

The inverse matrix of $A = \begin{bmatrix} a & b \\ c & d\end{bmatrix}$ is as follows.

$$ A^{-1} = \dfrac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} $$
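The claim that multiplying $A$ and $A^{-1}$ directly proves the formula can be checked numerically; this is a minimal sketch (NumPy assumed, matrix entries arbitrary):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [4.0, 2.0]])
a, b = A[0]
c, d = A[1]

det = a * d - b * c                        # 2.0, nonzero so A is invertible

# The 2x2 inverse formula: swap a and d, negate b and c, divide by the determinant
A_inv = (1.0 / det) * np.array([[ d, -b],
                                [-c,  a]])

# A @ A_inv should be the 2x2 identity matrix
print(A @ A_inv)
```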

This is proven simply by multiplying $A$ and $A^{-1}$ directly. If $\det (A) = ad - bc = 0$, then regardless of the matrix's entries, the scalar in front becomes $\dfrac{1}{0}$, so no inverse can exist. For this reason, invertibility is sometimes referred to as nonsingularity. Singular ordinarily means ‘peculiar’, but in mathematical usage it roughly means ‘involves dividing by zero’.

On the other hand, viewing the determinant as a function that maps an $n \times n$ real matrix to a single real number, it can be defined axiomatically as follows:

Definition

A function $ \det : \mathbb{R}^{n \times n } \to \mathbb{R} $ is called the determinant if it satisfies the following conditions, where $\mathbf{r}_{i}$ denotes the $i$-th row of the matrix:

  • For the identity matrix $I_{n}$, $\det(I_{n}) = 1$
  • For $1 \le i,j \le n$, $\det \begin{bmatrix} \mathbf{r}_{1} \\ \vdots \\ \mathbf{r}_{i} \\ \vdots \\ \mathbf{r}_{j} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix} = - \det \begin{bmatrix} \mathbf{r}_{1} \\ \vdots \\ \mathbf{r}_{j} \\ \vdots \\ \mathbf{r}_{i} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix}$
  • $\det \begin{bmatrix} k \mathbf{r}_{1} + l \mathbf{r}_{1}^{\prime} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix} = k \det \begin{bmatrix} \mathbf{r}_{1} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix} + l \det \begin{bmatrix} \mathbf{r}_{1}^{\prime} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix}$
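The three defining properties above can be verified numerically against NumPy's determinant; this sketch uses a random $3 \times 3$ matrix of my own choosing:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# Property 1: det(I_n) = 1
assert np.isclose(np.linalg.det(np.eye(n)), 1.0)

# Property 2: swapping two rows flips the sign
M = rng.standard_normal((n, n))
M_swapped = M[[1, 0, 2], :]          # swap the first two rows
assert np.isclose(np.linalg.det(M_swapped), -np.linalg.det(M))

# Property 3: linearity in a single row (here, the first row)
r, r_prime = rng.standard_normal(n), rng.standard_normal(n)
k, l = 2.0, -3.0
M1, M2, M3 = M.copy(), M.copy(), M.copy()
M1[0], M2[0], M3[0] = k * r + l * r_prime, r, r_prime
assert np.isclose(np.linalg.det(M1),
                  k * np.linalg.det(M2) + l * np.linalg.det(M3))

print("all three defining properties hold numerically")
```

Note that linearity holds row by row, not for the matrix as a whole: in Property 3 only the first row varies while the remaining rows stay fixed.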

Explanation

As such, generalizing the determinant makes it easier to discuss the existence or absence of solutions to systems of equations. This discussion is perfectly encapsulated in the theorem below.

$$ \forall A \in \mathbb{C}^{n \times n},\quad \exists A^{-1} \iff \det{A} \ne 0 $$
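The theorem suggests an invertibility test; a minimal sketch (the helper `is_invertible` and its tolerance are my own illustrative choices, not a standard API):

```python
import numpy as np

def is_invertible(A, tol=1e-12):
    """Decide invertibility by checking det(A) != 0 (illustrative sketch)."""
    return abs(np.linalg.det(A)) > tol

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # det = -2, invertible
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # det = 0, singular

print(is_invertible(A))      # True
print(is_invertible(B))      # False
```

In exact arithmetic the test is precisely the theorem; in floating point, a tiny but nonzero determinant can still mean the matrix is numerically ill-conditioned, so practical code often inspects the condition number instead.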

It is so obvious that it is nearly taken as a definition. However, if we cannot explain why this theorem holds, or never question its obviousness, it is as if we haven’t truly grasped the concept of determinants. Especially since, in the case of determinants, the concept precedes the definition, it is worth spending time on this until it becomes comprehensible.