
Direct Sum in Vector Spaces

Definition

A vector space $V$ is said to be the direct sum of its two subspaces $W_{1}$ and $W_{2}$, denoted by $V = W_{1} \oplus W_{2}$, if it satisfies the following.

(i) Existence: For any $\mathbf{v} \in V$, there exist $\mathbf{v}_{1} \in W_{1}$ and $\mathbf{v}_{2} \in W_{2}$ satisfying $\mathbf{v} = \mathbf{v}_{1} + \mathbf{v}_{2}$.

(ii) Exclusivity: $W_{1} \cap W_{2} = \left\{ \mathbf{0} \right\}$

(iii) Uniqueness: For a given $\mathbf{v}$, there exists a unique $\mathbf{v}_{1} \in W_{1}$ and $\mathbf{v}_{2} \in W_{2}$ satisfying $\mathbf{v} = \mathbf{v}_{1} + \mathbf{v}_{2}$.
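Conditions (i) and (ii) can be checked numerically for concrete subspaces of $\mathbb{R}^{n}$. The sketch below uses an illustrative choice of my own ($W_{1}$ = the $xy$-plane, $W_{2}$ = the $z$-axis in $\mathbb{R}^{3}$): the sum equals $V$ exactly when the combined spanning vectors have full rank, and the intersection is trivial exactly when the dimensions add up.

```python
import numpy as np

# Spanning sets for two subspaces of R^3 (illustrative choice, not from the text):
# W1 = the xy-plane, W2 = the z-axis.
B1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]]).T   # columns span W1
B2 = np.array([[0.0, 0.0, 1.0]]).T   # column spans W2

combined = np.hstack([B1, B2])

dim_W1 = np.linalg.matrix_rank(B1)
dim_W2 = np.linalg.matrix_rank(B2)
dim_sum = np.linalg.matrix_rank(combined)

# (i) Existence: W1 + W2 = R^3 iff the combined columns have rank 3.
spans_V = (dim_sum == 3)
# (ii) Exclusivity: W1 ∩ W2 = {0} iff dim(W1) + dim(W2) = dim(W1 + W2).
trivial_intersection = (dim_W1 + dim_W2 == dim_sum)

print(spans_V and trivial_intersection)  # True: R^3 = W1 ⊕ W2
```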

Generalization¹

Let $W_{1}, W_{2}, \dots, W_{k}$ be subspaces of the vector space $V$. When these subspaces meet the following conditions, $V$ is called the direct sum of $W_{1}, \dots, W_{k}$, denoted by $V = W_{1} \oplus \cdots \oplus W_{k}$.

  • $\displaystyle V = \sum\limits_{i=1}^{k}W_{i}$

  • $\displaystyle W_{j} \cap \sum\limits_{i \ne j}W_{i} = \left\{ \mathbf{0} \right\}$ for each $j$ $(1 \le j \le k)$

Here, $\sum\limits_{i=1}^{k}W_{i}$ is the sum of the $W_{i}$.

Explanation

(i) Existence: This condition can be rewritten as $V = W_{1} + W_{2}$, meaning "$V$ is the sum of $W_{1}$ and $W_{2}$".

(iii) Uniqueness: In fact, this condition need not be assumed separately, since it follows from (i) and (ii). By condition (ii), the only vector belonging to both $W_{1}$ and $W_{2}$ is the zero vector, so $\mathbf{0}$ has exactly one representation:

$$ \mathbf{0} = \mathbf{0} + \mathbf{0},\quad \mathbf{0}\in W_{1}, W_{2} $$

Therefore, if $\mathbf{v}$ had two expressions $\mathbf{v}_{1} + \mathbf{v}_{2}$ and $\mathbf{v}_{1}^{\prime} + \mathbf{v}_{2}^{\prime}$ with $\mathbf{v}_{1}, \mathbf{v}_{1}^{\prime} \in W_{1}$ and $\mathbf{v}_{2}, \mathbf{v}_{2}^{\prime} \in W_{2}$, then

$$ \mathbf{0} = \mathbf{v} - \mathbf{v} = (\mathbf{v}_{1} - \mathbf{v}_{1}^{\prime}) + (\mathbf{v}_{2} - \mathbf{v}_{2}^{\prime}) $$

Since $\mathbf{v}_{1} - \mathbf{v}_{1}^{\prime} = -(\mathbf{v}_{2} - \mathbf{v}_{2}^{\prime})$ lies in both $W_{1}$ and $W_{2}$, condition (ii) forces it to be $\mathbf{0}$, hence $\mathbf{v}_{1} = \mathbf{v}_{1}^{\prime}$ and $\mathbf{v}_{2} = \mathbf{v}_{2}^{\prime}$.

Conversely, uniqueness forces $W_{1} \cap W_{2} = \left\{ \mathbf{0} \right\}$, so in the presence of (i), conditions (ii) and (iii) are equivalent.
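The unique decomposition can be computed concretely by solving a linear system in a basis adapted to the direct sum. The basis matrices below are my own illustrative choice (again the $xy$-plane and the $z$-axis in $\mathbb{R}^{3}$), not taken from the text.

```python
import numpy as np

# Bases for W1 (xy-plane) and W2 (z-axis) in R^3 -- an illustrative choice.
B1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # columns: basis of W1
B2 = np.array([[0.0], [0.0], [1.0]])                 # column: basis of W2
B = np.hstack([B1, B2])  # since R^3 = W1 ⊕ W2, B is invertible

v = np.array([2.0, 3.0, 5.0])
c = np.linalg.solve(B, v)   # coordinates of v; unique, by the argument above
v1 = B1 @ c[:2]             # component of v in W1
v2 = B2 @ c[2:]             # component of v in W2

assert np.allclose(v1 + v2, v)
print(v1, v2)   # [2. 3. 0.] [0. 0. 5.]
```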

At first glance, the definition might seem complex, but looking at examples in Euclidean space makes it clear that this is a very natural and convenient concept. For example, considering $\mathbb{R}^{3} = \mathbb{R} \times \mathbb{R} \times \mathbb{R}$, elements of $\mathbb{R}^{3}$ are triples $(x,y,z)$, which can be split into a pair $(x,y)$ and a single coordinate $(z)$.

On the other hand, thinking about recombining them, we have $(x,y) \in \mathbb{R}^2$ and $(z) \in \mathbb{R}$, but the mere union $\mathbb{R}^2 \cup \mathbb{R}$ is a set whose elements are a mix of scalars and 2-dimensional vectors. With these symbols alone, it is awkward to express the splitting and recombining of spaces that we have in mind. Once the concept of direct sum is introduced, however, it becomes much easier to describe how subspaces neatly partition a vector space.
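The recombination described above can be sketched in code: embed $\mathbb{R}^{2}$ and $\mathbb{R}$ into $\mathbb{R}^{3}$ as the subspaces of vectors $(x, y, 0)$ and $(0, 0, z)$, then add componentwise. The function names here are hypothetical, chosen only for this illustration.

```python
# Identifying R^3 with W1 ⊕ W2, where W1 ≅ R^2 sits inside R^3 as (x, y, 0)
# and W2 ≅ R sits inside R^3 as (0, 0, z).
def embed_plane(p):
    """Embed R^2 into R^3 as the xy-plane."""
    x, y = p
    return (x, y, 0.0)

def embed_line(z):
    """Embed R into R^3 as the z-axis."""
    return (0.0, 0.0, z)

def direct_sum(p, z):
    """Recombine the two pieces: every (x, y, z) arises exactly once this way."""
    a = embed_plane(p)
    b = embed_line(z)
    return tuple(ai + bi for ai, bi in zip(a, b))

print(direct_sum((1.0, 2.0), 3.0))  # (1.0, 2.0, 3.0)
```

Unlike the union $\mathbb{R}^2 \cup \mathbb{R}$, this recombination produces every element of $\mathbb{R}^{3}$, each in exactly one way.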

See Also


  1. Stephen H. Friedberg, Linear Algebra (4th Edition, 2002), p. 275 ↩︎