
Composition of Linear Transformations

Definition1

Given linear transformations $T_{1} : V \to W$ and $T_{2} : W \to Z$, the transformation $T_{2} \circ T_{1} : V \to Z$ defined as follows is called the composition of $T_{1}$ and $T_{2}$:

$$ (T_{2} \circ T_{1})(\mathbf{x}) = T_{2}\left( T_{1}(\mathbf{x}) \right) \quad \mathbf{x} \in V $$

Explanation

The composition of linear transformations is often denoted simply as follows:

$$ T_{2}T_{1}\mathbf{x} = (T_{2} \circ T_{1}) (\mathbf{x}) $$

In finite dimensions, this is essentially the same as matrix multiplication, making it a natural notation.
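As a quick numerical sketch of this correspondence (the maps and matrices below are chosen arbitrarily for illustration), composing two linear maps and multiplying their matrices give the same result:

```python
import numpy as np

# Hypothetical example: T1 : R^2 -> R^3 and T2 : R^3 -> R^2,
# represented in the standard bases by the matrices A1 and A2.
A1 = np.array([[1, 0], [2, 1], [0, 3]])   # matrix of T1 (3x2)
A2 = np.array([[1, 1, 0], [0, 2, 1]])     # matrix of T2 (2x3)

x = np.array([1, 2])

# (T2 ∘ T1)(x) computed two ways:
step_by_step = A2 @ (A1 @ x)   # first T1(x), then T2 of the result
composed = (A2 @ A1) @ x       # the matrix of the composition, applied once

assert np.array_equal(step_by_step, composed)
```

The matrix $A_{2}A_{1}$ is exactly the matrix of $T_{2} \circ T_{1}$, which is why the juxtaposition notation $T_{2}T_{1}$ reads like a product.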

Properties1 2

Consider linear transformations $T_{1} : V \to W$ and $T_{2} : W \to Z$.

(a) The composition $T_{2} T_{1}$ of $T_{1}$ and $T_{2}$ is also a linear transformation.

(b) The following holds for $T, U_{1}, U_{2} \in \href{../3283}{L(V)}$ and $a \in \mathbb{R}$:

$$ T(U_{1} + U_{2}) = TU_{1} + TU_{2} \quad \text{and} \quad (U_{1} + U_{2})T = U_{1}T + U_{2}T \\[0.5em] T(U_{1}U_{2}) = (TU_{1})U_{2} \\[0.5em] TI = IT = T \\[0.5em] a(U_{1}U_{2}) = (aU_{1})U_{2} = U_{1}(aU_{2}) $$
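Since operators in $L(V)$ correspond to square matrices in finite dimensions, these identities can be spot-checked numerically; the following sketch uses randomly chosen $3 \times 3$ matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Three arbitrary operators on R^3, as random 3x3 matrices
T, U1, U2 = (rng.standard_normal((n, n)) for _ in range(3))
I = np.eye(n)   # the identity operator
a = 2.5         # an arbitrary scalar

# Distributivity over addition, on both sides
assert np.allclose(T @ (U1 + U2), T @ U1 + T @ U2)
assert np.allclose((U1 + U2) @ T, U1 @ T + U2 @ T)
# Associativity of composition
assert np.allclose(T @ (U1 @ U2), (T @ U1) @ U2)
# Identity acts trivially
assert np.allclose(T @ I, T) and np.allclose(I @ T, T)
# Scalars move freely through the product
assert np.allclose(a * (U1 @ U2), (a * U1) @ U2)
assert np.allclose(a * (U1 @ U2), U1 @ (a * U2))
```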

If $T_{1}$ and $T_{2}$ are injective, then the following hold:

(c) $T_{2} T_{1}$ is injective.

(d) If $T_{1}$ and $T_{2}$ are moreover invertible, then $(T_{2} T_{1})^{-1} = T_{1}^{-1} T_{2}^{-1}$
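Note the reversal of order in (d): undoing $T_{2}T_{1}$ means undoing $T_{2}$ first, then $T_{1}$. A small numerical check (with arbitrarily chosen invertible matrices standing in for $T_{1}$ and $T_{2}$):

```python
import numpy as np

# Hypothetical invertible maps on R^2, given by invertible matrices
A1 = np.array([[2.0, 1.0], [1.0, 1.0]])   # T1, det = 1
A2 = np.array([[0.0, 1.0], [-1.0, 3.0]])  # T2, det = 1

inv_of_composition = np.linalg.inv(A2 @ A1)                        # (T2 T1)^{-1}
composition_of_inverses = np.linalg.inv(A1) @ np.linalg.inv(A2)    # T1^{-1} T2^{-1}

assert np.allclose(inv_of_composition, composition_of_inverses)
```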

(e) Let $V, W, Z$ be finite-dimensional vector spaces with ordered bases $\alpha, \beta, \gamma$, respectively, and suppose $T : V \to W$ and $U : W \to Z$ are linear transformations. Then,

$$ [UT]_{\alpha}^{\gamma} = [U]_{\beta}^{\gamma}[T]_{\alpha}^{\beta} $$

Here, $[T]_{\alpha}^{\beta}$ denotes the matrix representation of $T$ with respect to the bases $\alpha$ and $\beta$.
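Property (e) can be sketched numerically. In the snippet below (all bases and maps are hypothetical choices on $\mathbb{R}^{2}$), a basis is stored as a matrix whose columns are the basis vectors, and $[M]_{\alpha}^{\beta} = B_{\beta}^{-1} M B_{\alpha}$ converts $\alpha$-coordinates to $\beta$-coordinates:

```python
import numpy as np

# Hypothetical ordered bases for V, W, Z (all R^2); columns are basis vectors
B_alpha = np.array([[1.0, 1.0], [0.0, 1.0]])
B_beta = np.array([[1.0, 0.0], [1.0, 1.0]])
B_gamma = np.eye(2)

# T : V -> W and U : W -> Z, written in the standard basis
T_std = np.array([[2.0, 1.0], [0.0, 1.0]])
U_std = np.array([[1.0, 1.0], [1.0, 2.0]])

def rep(M_std, B_in, B_out):
    """Matrix representation [M]_{in}^{out}: maps B_in-coordinates
    to B_out-coordinates, i.e. B_out^{-1} @ M_std @ B_in."""
    return np.linalg.solve(B_out, M_std @ B_in)

UT = rep(U_std @ T_std, B_alpha, B_gamma)                       # [UT]_alpha^gamma
product = rep(U_std, B_beta, B_gamma) @ rep(T_std, B_alpha, B_beta)  # [U]_beta^gamma [T]_alpha^beta

assert np.allclose(UT, product)
```

The intermediate basis $\beta$ cancels in the product, which is exactly why the formula holds for any choice of $\beta$.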

Proof

(a)

Let $\mathbf{x}_{1}, \mathbf{x}_{2} \in V$ and let $k$ be an arbitrary scalar. Since $T_{1}$ and $T_{2}$ are linear, the following holds:

$$ \begin{align*} (T_{2} T_{1})(\mathbf{x}_{1} + k \mathbf{x}_{2}) &= T_{2} \left( T_{1} \left( \mathbf{x}_{1} + k \mathbf{x}_{2} \right) \right) \\ &= T_{2} \left( T_{1} ( \mathbf{x}_{1} ) + k T_{1} ( \mathbf{x}_{2} ) \right) \\ &= T_{2} \left( T_{1} ( \mathbf{x}_{1} ) \right) + k T_{2}\left( T_{1} ( \mathbf{x}_{2} ) \right) \\ &= (T_{2} T_{1}) ( \mathbf{x}_{1} ) + k (T_{2} T_{1})( \mathbf{x}_{2} ) \end{align*} $$

(c)

Suppose that $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ are distinct vectors in $V$. Since $T_{1}$ is injective, $T_{1}(\mathbf{x}_{1})$ and $T_{1}(\mathbf{x}_{2})$ are distinct. Then, since $T_{2}$ is also injective, the following two vectors are distinct as well:

$$ (T_{2} T_{1})(\mathbf{x}_{1}) = T_{2} \left( T_{1}(\mathbf{x}_{1}) \right) \quad \text{and} \quad (T_{2} T_{1})(\mathbf{x}_{2}) = T_{2} \left( T_{1}(\mathbf{x}_{2}) \right) $$

Therefore, $T_{2} T_{1}$ is injective.

(d)

Let $\mathbf{z}$ be the image of $\mathbf{x} \in V$ by $T_{2} T_{1}$.

$$ \mathbf{z} = (T_{2} T_{1}) ( \mathbf{x} ) = T_{2} ( T_{1} (\mathbf{x})) $$

Applying $T_{2}^{-1}$ to both sides yields:

$$ T_{2}^{-1}(\mathbf{z}) = ( T_{2}^{-1} T_{2} T_{1}) ( \mathbf{x} ) = T_{1} (\mathbf{x}) $$

Further applying $T_{1}^{-1}$ to both sides yields:

$$ ( T_{1}^{-1} T_{2}^{-1} )(\mathbf{z}) = ( T_{1}^{-1} T_{1} ) ( \mathbf{x} ) = \mathbf{x} $$

Thus, the following is obtained:

$$ (T_{1}^{-1} T_{2}^{-1}) ( (T_{2} T_{1} )(\mathbf{x}) ) = \mathbf{x} $$

By the same argument in the other direction, $(T_{2} T_{1}) \left( (T_{1}^{-1} T_{2}^{-1})(\mathbf{z}) \right) = \mathbf{z}$, so $T_{1}^{-1} T_{2}^{-1}$ is indeed the inverse of $T_{2} T_{1}$.


  1. Howard Anton, Elementary Linear Algebra: Applications Version (12th Edition, 2019), p465-468 ↩︎ ↩︎

  2. Stephen H. Friedberg, Linear Algebra (4th Edition, 2002), p86-89 ↩︎