
Determinants

Definition

Let $A$ be the following $2 \times 2$ matrix.

$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$

The determinant of $A$ is defined as follows and is denoted by $\det(A)$.

$$ \det(A) := ad - bc $$
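The formula is a one-liner in code. A minimal sketch (the function name `det2` is just for illustration, not from the original text):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return a * d - b * c

print(det2(1, 2, 3, 4))  # 1*4 - 2*3 = -2
```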

Explanation

To discuss determinants, we cannot ignore the very purpose of linear algebra. Most problems in mathematics can be summarized as ‘Can we solve the equation?’ A simple equation like

$$ ax = b $$

demonstrates that unless $a = 0$, this equation clearly has a solution. Quadratic equations such as

$$ ax^2 + bx + c = 0 $$

can also be solved easily using the quadratic formula. Consequently, mathematicians increased the degree of $x$, aiming to tackle more challenging problems. However, the unfortunate genius Abel proved that ‘algebraic equations of degree 5 or higher have no general solution in radicals.’

Meanwhile, another path remained open: increasing the number of variables or of equations. And thus came determinants. Although its Korean name might suggest that determinants appeared after matrices, historically determinants emerged before matrices1, and in fact the English words “determinant” and “matrix” are not particularly related. The term “determinant” refers to a formula used to determine whether a system of linear equations in two variables has a solution.

$$ \begin{cases} ax + by = 0 \\ cx + dy = 0 \end{cases} $$

When given a system of equations like the one above, if $ad - bc \ne 0$ then only the trivial solution $x = y = 0$ exists; if $ad - bc = 0$, nontrivial solutions exist as well. Therefore, $ad - bc$ is the formula that determines whether the given system has nontrivial solutions, hence the name determinant.
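This criterion can be checked numerically. A sketch, where the helper name `has_nontrivial_solution` is hypothetical:

```python
def has_nontrivial_solution(a, b, c, d):
    """True when the homogeneous system ax + by = 0, cx + dy = 0
    has a solution other than x = y = 0, i.e. when ad - bc = 0."""
    return a * d - b * c == 0

# Rows (2, 4) and (1, 2) are proportional, so the determinant vanishes:
a, b, c, d = 2, 4, 1, 2
print(has_nontrivial_solution(a, b, c, d))  # True
# For instance (x, y) = (b, -a) solves both equations:
x, y = b, -a
print(a * x + b * y, c * x + d * y)  # 0 0
```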

As you know, systems of equations can be expressed in the form of matrices. A ‘simple’ system of equations can be represented as follows:

$$ A \mathbf{x} = \mathbf{b} $$

Recall that the solution of $ax = b$ was $x = \dfrac{b}{a}$. Since $\dfrac{1}{a}$ is the inverse of $a$, multiplying both sides by it leaves only $x$. As for the condition for a solution to exist: $a = 0$ has no inverse, so in that case $ax = b$ has no unique solution. Similarly, the question of whether $A \mathbf{x} = \mathbf{b}$ is solvable boils down to whether the inverse of $A$ can be found. The existence of $A^{-1}$ determines whether the linear system represented by $A$ has a solution, and finding this inverse is equivalent to finding the solution. It is evident that the condition for the existence of $A^{-1}$ coincides with the condition for the linear system represented by $A$ to have a unique solution.

The inverse of $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is as follows:

$$ A^{-1} = \dfrac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} $$

The proof is simply to multiply $A$ by $A^{-1}$ and check. If $\det(A) = ad - bc = 0$, then no matter what the matrix on the right looks like, the scalar in front of it is $\dfrac{1}{0}$, so no inverse can exist. This is why invertibility is often called nonsingularity. The term “singular” means “peculiar,” which in mathematics suggests a notion akin to “dividing by zero.”
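The inverse formula can be verified by multiplying back. A sketch using exact rational arithmetic so the identity comes out cleanly (the helper name `inverse2` is an illustrative assumption):

```python
from fractions import Fraction

def inverse2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the 2x2 formula,
    assuming det = ad - bc is nonzero."""
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ZeroDivisionError("singular matrix: ad - bc = 0")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

a, b, c, d = 1, 2, 3, 4
inv = inverse2(a, b, c, d)
# Multiplying A by its inverse gives the identity matrix:
product = [[a * inv[0][0] + b * inv[1][0], a * inv[0][1] + b * inv[1][1]],
           [c * inv[0][0] + d * inv[1][0], c * inv[0][1] + d * inv[1][1]]]
print(product == [[1, 0], [0, 1]])  # True
```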

On the other hand, viewing the determinant as a function that maps an $n \times n$ real matrix to a single real number, it can be defined as follows.

General Definition

Definition by Properties

A function $\det : \mathbb{R}^{n \times n} \to \mathbb{R}$ is called a determinant if it satisfies the following conditions:

  • For the identity matrix $I_{n}$, $\det(I_{n}) = 1$
  • For $1 \le i < j \le n$, swapping two rows changes the sign: $\det \begin{bmatrix} \mathbf{r}_{1} \\ \vdots \\ \mathbf{r}_{i} \\ \vdots \\ \mathbf{r}_{j} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix} = - \det \begin{bmatrix} \mathbf{r}_{1} \\ \vdots \\ \mathbf{r}_{j} \\ \vdots \\ \mathbf{r}_{i} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix}$
  • The determinant is linear in each row: $\det \begin{bmatrix} k \mathbf{r}_{1} + l \mathbf{r}_{1}^{\prime} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix} = k \det \begin{bmatrix} \mathbf{r}_{1} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix} + l \det \begin{bmatrix} \mathbf{r}_{1}^{\prime} \\ \vdots \\ \mathbf{r}_{n} \end{bmatrix}$
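For $n = 2$ these properties can be verified directly from the formula $ad - bc$. A minimal sketch (the helper name `det2` is just for illustration):

```python
def det2(a, b, c, d):
    # Determinant of [[a, b], [c, d]]
    return a * d - b * c

r1, r2 = (1, 2), (3, 4)
# Swapping the two rows flips the sign:
print(det2(*r1, *r2), det2(*r2, *r1))  # -2 2
# det(I) = 1:
print(det2(1, 0, 0, 1))  # 1
# Linearity in the first row, with k = 2, l = 3, r1' = (5, 6):
k, l, r1p = 2, 3, (5, 6)
combined = tuple(k * u + l * v for u, v in zip(r1, r1p))
print(det2(*combined, *r2) == k * det2(*r1, *r2) + l * det2(*r1p, *r2))  # True
```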

Definition by Permutations

Let $\sigma$ be a permutation of the set $\left\{ 1, 2, \dots, n \right\}$, and let $A = [a_{ij}]$ be an $n \times n$ matrix. Then the determinant of $A$ is defined as follows:

$$ \begin{align*} \det (A) &= \sum_{\sigma \in S_{n}} \operatorname{sgn}(\sigma) a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)} \\ &= \sum_{\sigma \in S_{n}} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)} \end{align*} $$

Here, $S_{n}$ is the symmetric group and $\operatorname{sgn}$ is the sign of the permutation.
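The permutation definition translates directly into code: sum over all $n!$ permutations, signing each term by its parity. A brute-force sketch, practical only for small $n$:

```python
from itertools import permutations

def sgn(perm):
    """Sign of a permutation (tuple of 0-based indices),
    computed by counting inversions."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Leibniz formula: sum over all sigma of sgn(sigma) * prod_i A[i][sigma(i)]."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += sgn(sigma) * prod
    return total

print(det([[1, 2], [3, 4]]))  # matches ad - bc = -2
print(det([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))  # 25
```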

Explanation

By broadening our understanding of determinants, it becomes significantly easier to discuss whether solutions for a system of equations exist. The culmination of such discussions is encapsulated in the theorem below.

$$ \forall A \in \mathbb{C}^{n \times n},\quad \exists A^{-1} \iff \det A \ne 0 $$

Though it is treated almost as a definition, this fact is so natural that it can be accepted as a theorem. However, if one cannot properly explain why such a theorem holds, or why it is truly obvious, that implies a lack of understanding of determinants. With determinants especially, the concept often precedes the definition, so dedicate ample time to understanding it if necessary.

See Also