
Linear Independence and Linear Dependence

Definition1

Let’s denote a non-empty subset of a vector space $V$ as $S = \left\{ \mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{r} \right\}$. For constants $k_{1}, k_{2}, \dots, k_{r}$, the following equation

$$ k_{1} \mathbf{v}_{1} + k_{2} \mathbf{v}_{2} + \dots + k_{r} \mathbf{v}_{r} = \mathbf{0} $$

has at least one solution

$$ k_{1} = 0,\ k_{2} = 0,\ \dots,\ k_{r} = 0 $$

This is called a trivial solution. If the trivial solution is the only solution, then the vectors $\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{r}$ are called linearly independent. If there is at least one nontrivial solution, they are called linearly dependent.
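The definition can be checked numerically. Stacking the vectors as the columns of a matrix $A$, the equation $k_{1}\mathbf{v}_{1} + \cdots + k_{r}\mathbf{v}_{r} = \mathbf{0}$ has only the trivial solution exactly when $A$ has full column rank. A minimal sketch assuming NumPy, with hypothetical vectors:

```python
import numpy as np

# Hypothetical vectors in R^3, stacked as columns of A.
# Only the trivial solution exists iff rank(A) equals the number of vectors.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 0.0])  # v3 = v1 + v2, so the set is dependent

A = np.column_stack([v1, v2, v3])
r = A.shape[1]

independent = np.linalg.matrix_rank(A) == r
print(independent)  # False: e.g. k = (1, 1, -1) is a nontrivial solution
```

Here $(1)\mathbf{v}_{1} + (1)\mathbf{v}_{2} + (-1)\mathbf{v}_{3} = \mathbf{0}$ is the nontrivial solution that the rank test detects.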

Explanation

A trivial solution is a solution that is immediately obvious and hence considered of little value; as the definition shows, setting every constant to $0$ always works.

From the above definition, the following theorem can be immediately derived.

Let’s denote a non-empty subset of a vector space $V$ as $S = \left\{ \mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{r} \right\}$. If no vector in $S$ can be represented as a linear combination of the other vectors, $S$ is linearly independent. Conversely, if at least one vector can be represented as a linear combination of the others, $S$ is linearly dependent.

Thinking with the theorem in mind, the naming of “independent” and “dependent” makes sense. Some textbooks have the definition and theorem in reverse order.

Interestingly, the reference in the footnote, ‘Elementary Linear Algebra’, adopts this text’s definition in its translated edition, while the original edition defines the terms the other way around. Personally, I find the definition used here cleaner, since the opposite approach requires a separate definition of independence/dependence for sets with only one element. The proof of the theorem is given below.

In simpler terms, two distinct vectors are independent if neither can be obtained from the other by scaling, i.e., enlarging or reducing it. For example, $(1,0)$ and $(0,1)$ cannot be made equal to each other no matter how they are scaled. Rewriting this according to the definition,

$$ k_{1} (1,0) + k_{2} (0,1) = \mathbf{0} $$

Moving the second term to the right-hand side gives

$$ k_{1}(1,0) = - k_{2}(0,1) $$

Rearranging again gives

$$ (k_{1}, 0) = ( 0 , - k_{2}) $$

As the only solution satisfying the above equation is $k_{1} = k_{2} = 0$, $(1,0)$ and $(0,1)$ are linearly independent. This can be proven as a theorem.
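The worked example admits a quick numerical confirmation. A sketch assuming NumPy: with $(1,0)$ and $(0,1)$ as columns, full column rank certifies independence, and solving $A\mathbf{k} = \mathbf{0}$ by least squares returns only the trivial solution.

```python
import numpy as np

# The two example vectors as columns of the coefficient matrix.
A = np.column_stack([(1.0, 0.0), (0.0, 1.0)])

# Full column rank <=> linearly independent.
print(np.linalg.matrix_rank(A) == A.shape[1])  # True

# The least-squares solution of A k = 0 is the trivial one.
k, *_ = np.linalg.lstsq(A, np.zeros(2), rcond=None)
print(k)  # [0. 0.]
```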

Theorem

(a) A finite set that includes the zero vector is linearly dependent.

(b) The necessary and sufficient condition for a single vector $\mathbf{v}$ to be linearly independent is $\mathbf{v} \ne \mathbf{0}$.

(c) The necessary and sufficient condition for two distinct vectors to be linearly independent is that neither vector can be represented as a scalar multiple of the other.

(d) Let’s denote a set containing two or more vectors as $S=\left\{ \mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{r} \right\}$. The necessary and sufficient condition for $S$ to be linearly independent is that no vector in $S$ can be represented as a linear combination of the other vectors.

(e) Let’s say $T \subset S$. If $S$ is linearly independent, then $T$ is also linearly independent.

(e') Let’s say $T \subset S$. If $T$ is linearly dependent, then $S$ is also linearly dependent.
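Theorem (a) is easy to observe through the rank test as well. A sketch assuming NumPy, with hypothetical vectors: appending the zero vector as a column can never increase the rank, so the enlarged set always fails the full-column-rank criterion.

```python
import numpy as np

# Two hypothetical independent vectors in R^3.
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 4.0])

without_zero = np.column_stack([v1, v2])
with_zero = np.column_stack([v1, v2, np.zeros(3)])  # append the zero vector

print(np.linalg.matrix_rank(without_zero) == without_zero.shape[1])  # True
print(np.linalg.matrix_rank(with_zero) == with_zero.shape[1])        # False
```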

Proof

(a)

Let’s say $S=\left\{ \mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{r}, \mathbf{0} \right\}$. Then the following equation holds true.

$$ 0 \mathbf{v}_{1} + 0 \mathbf{v}_{2} + \dots + 0 \mathbf{v}_{r} + 1 \mathbf{0} = \mathbf{0} $$

Since the coefficient of $\mathbf{0}$ is $1 \ne 0$, this is a nontrivial solution. Therefore, by definition, $S$ is linearly dependent.

(b)

$(\Longrightarrow)$ If $\mathbf{v} = \mathbf{0}$, then by (a) the set $\left\{ \mathbf{v} \right\}$ is linearly dependent, so linear independence requires $\mathbf{v} \ne \mathbf{0}$. $(\Longleftarrow)$ If $\mathbf{v} \ne \mathbf{0}$, then $k\mathbf{v} = \mathbf{0}$ forces $k = 0$, so $\left\{ \mathbf{v} \right\}$ is linearly independent.

(c)

$(\Longrightarrow)$

Assume $\mathbf{v}_{1}, \mathbf{v}_{2}$ are linearly independent. Then,

$$ k_{1} \mathbf{v}_{1} + k_{2} \mathbf{v}_{2} = \mathbf{0} $$

The only solution satisfying this equation is $k_{1}=k_{2}=0$. If some constant $k$ satisfied $\mathbf{v}_{1} = k\mathbf{v}_{2}$, then $\mathbf{v}_{1} - k\mathbf{v}_{2} = \mathbf{0}$ would be a nontrivial solution, since the coefficient of $\mathbf{v}_{1}$ is $1$. This is a contradiction, so neither vector can be represented as a constant multiple of the other.

$(\Longleftarrow)$

Assume $\mathbf{v}_{1}$ cannot be represented as a constant multiple of $\mathbf{v}_{2}$, and vice versa. Now suppose the equation

$$ k_{1} \mathbf{v}_{1} + k_{2} \mathbf{v}_{2} = \mathbf{0} $$

has a nontrivial solution, say $k_{1} \ne 0$. Then $\mathbf{v}_{1} = -\frac{k_{2}}{k_{1}}\mathbf{v}_{2}$, so $\mathbf{v}_{1}$ is a constant multiple of $\mathbf{v}_{2}$, contradicting the assumption. The case $k_{2} \ne 0$ is analogous. Therefore the only solution is the trivial solution, and $\mathbf{v}_{1}, \mathbf{v}_{2}$ are linearly independent.

(d)

$(\Longrightarrow)$

Assume $S$ is linearly independent. Then the equation

$$ k_{1} \mathbf{v}_{1} + k_{2} \mathbf{v}_{2} + \dots + k_{r} \mathbf{v}_{r} = \mathbf{0} $$

has only the solution $k_{1}=k_{2}=\cdots=k_{r}=0$. If some vector, say $\mathbf{v}_{1}$, could be represented as a linear combination

$$ \mathbf{v}_{1} = c_{2}\mathbf{v}_{2} + \cdots + c_{r}\mathbf{v}_{r} $$

then $\mathbf{v}_{1} - c_{2}\mathbf{v}_{2} - \cdots - c_{r}\mathbf{v}_{r} = \mathbf{0}$ would be a nontrivial solution, a contradiction. The same argument applies to every $\mathbf{v}_{i}$, meaning no vector can be represented as a linear combination of the others.

$(\Longleftarrow)$

Assume no vector in $S$ can be represented as a linear combination of the others, and suppose the equation

$$ k_{1}\mathbf{v}_{1} + k_{2}\mathbf{v}_{2} + \cdots + k_{r}\mathbf{v}_{r} = \mathbf{0} $$

has a nontrivial solution, say $k_{1} \ne 0$. Then

$$ \mathbf{v}_{1} = -\frac{k_{2}}{k_{1}}\mathbf{v}_{2} - \cdots - \frac{k_{r}}{k_{1}}\mathbf{v}_{r} $$

so $\mathbf{v}_{1}$ is a linear combination of the other vectors, contradicting the assumption. Therefore the only solution satisfying the equation is the trivial solution, and $S$ is linearly independent.
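The dependent case of (d) can be observed numerically. A sketch assuming NumPy, with hypothetical vectors: when one vector is deliberately built as a linear combination of the others, the column matrix has a nontrivial null space, and a nontrivial coefficient vector can be extracted from the SVD.

```python
import numpy as np

# Hypothetical vectors with v3 deliberately dependent: v3 = 2*v1 - v2.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = 2 * v1 - v2

A = np.column_stack([v1, v2, v3])

# The right-singular vector for the smallest singular value spans the
# null space, giving a nontrivial solution k of A k = 0.
_, s, vt = np.linalg.svd(A)
k = vt[-1]

print(np.allclose(A @ k, 0))  # True: k is a nontrivial solution
```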

(e)

Let’s consider the two sets $T$ and $S$ as follows.

$$ T = \left\{ \mathbf{v}_{1},\ \mathbf{v}_{2}, \dots, \mathbf{v}_{r} \right\},\quad S = \left\{ \mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{r}, \mathbf{v}_{r+1}, \dots, \mathbf{v}_{n} \right\} $$

$T$ is a subset of $S$. Assume $S$ is linearly independent, and suppose

$$ c_{1}\mathbf{v}_{1} + c_{2}\mathbf{v}_{2} + \cdots + c_{r} \mathbf{v}_{r} = \mathbf{0} $$

Appending the remaining vectors of $S$ with zero coefficients changes nothing:

$$ \begin{align*} && c_{1}\mathbf{v}_{1} + c_{2}\mathbf{v}_{2} + \cdots + c_{r} \mathbf{v}_{r} =&\ \mathbf{0} \\ \implies && c_{1}\mathbf{v}_{1} + c_{2}\mathbf{v}_{2} + \cdots + c_{r} \mathbf{v}_{r} + \left( 0\mathbf{v}_{r+1} + \cdots + 0 \mathbf{v}_{n} \right) =&\ \mathbf{0} \\ \implies && c_{1}\mathbf{v}_{1} + c_{2}\mathbf{v}_{2} + \cdots + c_{r} \mathbf{v}_{r} + 0\mathbf{v}_{r+1} + \cdots + 0 \mathbf{v}_{n} =&\ \mathbf{0} \end{align*} $$

Since $S$ is linearly independent, the last equation has only the trivial solution, so $c_{1} = c_{2} = \cdots = c_{r} = 0$. Therefore $T$ is linearly independent.
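(e) can likewise be illustrated numerically. A sketch assuming NumPy, with hypothetical vectors: dropping columns from a full-column-rank matrix cannot create a new linear relation among the remaining columns, so the subset stays at full rank.

```python
import numpy as np

# An independent set S of three hypothetical vectors (columns).
S = np.column_stack([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]])

# T keeps only the first two vectors of S.
T = S[:, :2]

print(np.linalg.matrix_rank(S) == S.shape[1])  # True: S is independent
print(np.linalg.matrix_rank(T) == T.shape[1])  # True: so is the subset T
```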

(e')

This is the contrapositive of (e).


  1. Howard Anton, Chris Rorres, Anton Kaul, Elementary Linear Algebra: Applications Version (12th Edition). 2019, pp. 228–229 ↩︎