
Direct Sum in Vector Spaces

Definition

A vector space $V$ is said to be the direct sum of its two subspaces $W_{1}$ and $W_{2}$ if it satisfies the following, denoted by $V = W_{1} \oplus W_{2}$.

(i) Existence: For any $\mathbf{v} \in V$, there exist $\mathbf{v}_{1} \in W_{1}$ and $\mathbf{v}_{2} \in W_{2}$ satisfying $\mathbf{v} = \mathbf{v}_{1} + \mathbf{v}_{2}$.

(ii) Exclusivity: $W_{1} \cap W_{2} = \left\{ \mathbf{0} \right\}$

(iii) Uniqueness: For a given $\mathbf{v}$, there exist unique $\mathbf{v}_{1} \in W_{1}$ and $\mathbf{v}_{2} \in W_{2}$ satisfying $\mathbf{v} = \mathbf{v}_{1} + \mathbf{v}_{2}$.
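
To make the definition concrete, here is a minimal NumPy sketch with an assumed pair of subspaces: $W_{1} = \operatorname{span}\{(1,0)\}$ and $W_{2} = \operatorname{span}\{(1,1)\}$ in $\mathbb{R}^{2}$. They intersect only at $\mathbf{0}$, so solving a small linear system recovers the unique components of any vector.

```python
import numpy as np

# Illustrative (assumed) subspaces of R^2:
#   W1 = span{(1, 0)},  W2 = span{(1, 1)}.
# They intersect only at 0, so R^2 = W1 ⊕ W2,
# even though W1 and W2 are not orthogonal.
w1 = np.array([1.0, 0.0])  # basis vector of W1
w2 = np.array([1.0, 1.0])  # basis vector of W2

v = np.array([3.0, 2.0])   # vector to decompose

# Solve [w1 | w2] (a, b)^T = v for the unique coefficients.
A = np.column_stack([w1, w2])
a, b = np.linalg.solve(A, v)

v1, v2 = a * w1, b * w2    # components in W1 and W2
assert np.allclose(v1 + v2, v)
print(v1, v2)              # [1. 0.] [2. 2.]
```

A non-orthogonal $W_{2}$ is chosen deliberately: a direct sum only requires trivial intersection, not orthogonality.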

Generalization¹

Let $W_{1}, W_{2}, \dots, W_{k}$ be subspaces of the vector space $V$. When these subspaces meet the following conditions, $V$ is called the direct sum of $W_{1}, \dots, W_{k}$, denoted by $V = W_{1} \oplus \cdots \oplus W_{k}$.

  • $\displaystyle V = \sum\limits_{i=1}^{k}W_{i}$

  • $\displaystyle W_{j} \cap \sum\limits_{i \ne j}W_{i} = \left\{ \mathbf{0} \right\}$ for each $j$ $(1\le j \le k)$

Here, $\sum\limits_{i=1}^{k}W_{i}$ is the sum of the $W_{i}$.
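
In practice, $V = W_{1} \oplus \cdots \oplus W_{k}$ holds exactly when the bases of the $W_{i}$, taken together, form a basis of $V$. A minimal sketch of this check, with an assumed illustrative choice of subspaces (the $xy$-plane and the $z$-axis of $\mathbb{R}^{3}$):

```python
import numpy as np

# Illustrative (assumed) subspaces of R^3, given by row bases:
#   W1 = the xy-plane, W2 = the z-axis.
bases = [
    np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]]),  # basis of W1
    np.array([[0.0, 0.0, 1.0]]),  # basis of W2
]

# V = W1 ⊕ ... ⊕ Wk exactly when the concatenated bases form a
# basis of V: the stacked rows must be n vectors of rank n.
stacked = np.vstack(bases)
n = stacked.shape[1]
is_direct_sum = stacked.shape[0] == n and np.linalg.matrix_rank(stacked) == n
print(is_direct_sum)  # True
```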

Explanation

(i) Existence: This condition can be rewritten as $V = W_{1} + W_{2}$, meaning "$V$ is the sum of $W_{1}$ and $W_{2}$".

(iii) Uniqueness: In fact, this condition is redundant. By condition (ii), any nonzero $\mathbf{v}_{1} \in W_{1}$ satisfies $\pm \mathbf{v}_{1} \notin W_{2}$, so the zero vector admits only one representation:

$$ \mathbf{0} = \mathbf{0} + \mathbf{0},\quad \mathbf{0}\in W_{1}, W_{2} $$

Therefore, if two representations $\mathbf{v}_{1} + \mathbf{v}_{2}$ and $\mathbf{v}_{1}^{\prime} + \mathbf{v}_{2}^{\prime}$ of $\mathbf{v}$ exist, then $\mathbf{v}_{1} - \mathbf{v}_{1}^{\prime} \in W_{1}$ and $\mathbf{v}_{2} - \mathbf{v}_{2}^{\prime} \in W_{2}$ sum to $\mathbf{0}$, which forces both differences into $W_{1} \cap W_{2} = \left\{ \mathbf{0} \right\}$:

$$ \mathbf{0} = \mathbf{v} - \mathbf{v} = (\mathbf{v}_{1} - \mathbf{v}_{1}^{\prime}) + (\mathbf{v}_{2} - \mathbf{v}_{2}^{\prime}) = \mathbf{0} + \mathbf{0} \implies \mathbf{v}_{1}=\mathbf{v}_{1}^{\prime},\ \mathbf{v}_{2}=\mathbf{v}_{2}^{\prime} $$

Conversely, if some nonzero $\mathbf{w} \in W_{1} \cap W_{2}$ existed, then $\mathbf{w} = \mathbf{w} + \mathbf{0} = \mathbf{0} + \mathbf{w}$ would be two distinct representations, violating (iii). Thus (i), (ii) $\iff$ (iii) is validated.
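
The converse direction can also be seen numerically. In the following sketch, the subspaces are assumed so that condition (ii) fails ($W_{1} = \operatorname{span}\{(1,0)\}$ and $W_{2} = \mathbb{R}^{2}$, hence $W_{1} \cap W_{2} \neq \{\mathbf{0}\}$), and the same vector then admits two different representations:

```python
import numpy as np

# Assumed counterexample: W1 = span{(1, 0)} and W2 = R^2,
# so W1 ∩ W2 = W1 ≠ {0} and condition (ii) fails.
v = np.array([3.0, 2.0])

# The same v then has two distinct representations v = v1 + v2
# with v1 ∈ W1 and v2 ∈ W2:
decompositions = [
    (np.array([0.0, 0.0]), np.array([3.0, 2.0])),
    (np.array([1.0, 0.0]), np.array([2.0, 2.0])),
]
for v1, v2 in decompositions:
    assert np.allclose(v1 + v2, v)  # both sums give back v
```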

At first glance, the definition might seem complex, but looking at examples in Euclidean space makes it clear that this is a very natural and convenient concept. For example, considering $\mathbb{R}^{3} = \mathbb{R} \times \mathbb{R} \times \mathbb{R}$, the elements of $\mathbb{R}^{3}$ are ordered triples $(x,y,z)$, which can be split into $(x,y)$ and $(z)$.

On the other hand, thinking about how to recombine these pieces, we have $(x,y) \in \mathbb{R}^{2}$ and $(z) \in \mathbb{R}$, so their mere union $\mathbb{R}^{2} \cup \mathbb{R}$ would contain both 2-dimensional vectors and scalars as elements. These symbols alone show how awkward it is to express the expansion and separation of spaces that we want. Once the concept of direct sum is introduced, however, it becomes much easier to describe how subspaces neatly divide a vector space.
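
A minimal sketch of this decomposition, with the illustrative choice of the $xy$-plane as $W_{1}$ and the $z$-axis as $W_{2}$; note that both components remain elements of $\mathbb{R}^{3}$, which is exactly what the union $\mathbb{R}^{2} \cup \mathbb{R}$ fails to capture:

```python
import numpy as np

# R^3 = W1 ⊕ W2 with W1 the xy-plane and W2 the z-axis
# (an assumed illustrative split).
v = np.array([4.0, -1.0, 7.0])

v1 = np.array([v[0], v[1], 0.0])  # component in W1: (x, y, 0)
v2 = np.array([0.0, 0.0, v[2]])   # component in W2: (0, 0, z)

assert np.allclose(v1 + v2, v)
print(v1, v2)  # [ 4. -1.  0.] [0. 0. 7.]
```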


  1. Stephen H. Friedberg, Linear Algebra (4th Edition, 2002), p. 275 ↩︎