
The Matrix Representation of a Linear Transformation

Definition¹

Let $V, W$ be finite-dimensional vector spaces. Let $\beta = \left\{ \mathbf{v}_{1}, \dots, \mathbf{v}_{n} \right\}$ and $\gamma = \left\{ \mathbf{w}_{1}, \dots, \mathbf{w}_{m} \right\}$ be ordered bases for $V$ and $W$, respectively, and let $T : V \to W$ be a linear transformation. Then, by the uniqueness of the basis representation, there exist unique scalars $a_{ij}$ satisfying the following.

$$ T(\mathbf{v}_{j}) = \sum_{i=1}^{m}a_{ij}\mathbf{w}_{i} = a_{1j}\mathbf{w}_{1} + \cdots + a_{mj}\mathbf{w}_{m} \quad \text{ for } 1 \le j \le n $$

At this time, the $m \times n$ matrix $A$ defined by $A_{ij} = a_{ij}$ is called the matrix representation for $T$ relative to the ordered bases $\beta$ and $\gamma$, and is denoted by $[T]_{\gamma, \beta}$ or $[T]_{\beta}^{\gamma}$.
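For a concrete instance, consider differentiation on polynomials. Below is a minimal NumPy sketch (the example map and bases are our own illustration, not from the text): $T = d/dx : P_{3}(\mathbb{R}) \to P_{2}(\mathbb{R})$ with the monomial bases $\beta = \{1, x, x^{2}, x^{3}\}$ and $\gamma = \{1, x, x^{2}\}$, where the $j$th column of $[T]_{\beta}^{\gamma}$ holds the $\gamma$-coordinates of $T(\mathbf{v}_{j})$.

```python
import numpy as np

# T = d/dx : P_3 -> P_2 with beta = {1, x, x^2, x^3}, gamma = {1, x, x^2}.
# T(1) = 0, T(x) = 1, T(x^2) = 2x, T(x^3) = 3x^2, so the j-th column of
# [T]_beta^gamma is the gamma-coordinate vector of T(v_j).
A = np.array([
    [0, 1, 0, 0],  # coefficients of 1
    [0, 0, 2, 0],  # coefficients of x
    [0, 0, 0, 3],  # coefficients of x^2
], dtype=float)

# p(x) = 5 + 4x - x^2 + 2x^3 has beta-coordinates (5, 4, -1, 2);
# A @ [p]_beta gives the gamma-coordinates of p'(x) = 4 - 2x + 6x^2.
p_beta = np.array([5, 4, -1, 2], dtype=float)
print(A @ p_beta)  # [ 4. -2.  6.]
```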

Explanation

Every linear transformation between finite-dimensional vector spaces can be represented by a matrix, and conversely every matrix corresponds to some linear transformation, so a linear transformation and its matrix representation are essentially the same object. This is one of the reasons matrices are studied in linear algebra. As the definition shows, the matrix representation is obtained from the images of the basis vectors.

If $V=W$ and $\beta=\gamma$, it is simply denoted as follows.

$$ [T]_{\beta} = [T]_{\gamma, \beta} $$

Properties

Let $V, W$ be finite-dimensional vector spaces with ordered bases $\beta, \gamma$, respectively, let $T, U : V \to W$ be linear transformations, and let $a$ be a scalar. Then the following hold.

  • $[T + U]_{\beta}^{\gamma} = [T]_{\beta}^{\gamma} + [U]_{\beta}^{\gamma}$

  • $[aT]_{\beta}^{\gamma} = a[T]_{\beta}^{\gamma}$

For $T$ and its inverse transformation $T^{-1}$, the following holds.

  • $T$ is invertible if and only if $[T]_{\beta}^{\gamma}$ is invertible. Furthermore, $[T^{-1}]_{\gamma}^{\beta} = ([T]_{\beta}^{\gamma})^{-1}$.

Let $V, W, Z$ be finite-dimensional vector spaces with respective ordered bases $\alpha, \beta, \gamma$, and let $T : V \to W$ and $U : W \to Z$ be linear transformations. Then the following holds.

  • $[UT]_{\alpha}^{\gamma} = [U]_{\beta}^{\gamma}[T]_{\alpha}^{\beta}$
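These rules are easy to sanity-check numerically. The sketch below (the maps are our own illustrations, not from the text) verifies the composition rule with two differentiation maps on polynomial spaces, and the inverse rule with a simple invertible map on $\mathbb{R}^{2}$, all in monomial/standard bases.

```python
import numpy as np

# Composition: [UT]_alpha^gamma = [U]_beta^gamma [T]_alpha^beta, checked with
# T = d/dx : P_3 -> P_2 and U = d/dx : P_2 -> P_1 (monomial bases).
T_mat = np.array([[0, 1, 0, 0],
                  [0, 0, 2, 0],
                  [0, 0, 0, 3]], dtype=float)
U_mat = np.array([[0, 1, 0],
                  [0, 0, 2]], dtype=float)

# UT = d^2/dx^2 : P_3 -> P_1 sends 1, x, x^2, x^3 to 0, 0, 2, 6x,
# so its matrix can be written down directly and compared:
UT_mat = np.array([[0, 0, 2, 0],
                   [0, 0, 0, 6]], dtype=float)
assert np.allclose(U_mat @ T_mat, UT_mat)

# Invertibility: S(x, y) = (x + y, x - y) on R^2 (standard basis) has
# [S] = [[1, 1], [1, -1]]; the matrix of S^{-1} should equal [S]^{-1}.
S_mat = np.array([[1, 1],
                  [1, -1]], dtype=float)
S_inv_mat = np.array([[0.5, 0.5],      # S^{-1}(u, v) = ((u+v)/2, (u-v)/2)
                      [0.5, -0.5]], dtype=float)
assert np.allclose(np.linalg.inv(S_mat), S_inv_mat)
print("composition and inverse rules verified")
```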

Finding the Matrix¹

Let $\beta$ be a basis of $V$ and $\gamma$ a basis of $W$. Let $[\mathbf{x}]_{\beta}$ denote the coordinate vector of $\mathbf{x} \in V$, and $[T(\mathbf{x})]_{\gamma}$ the coordinate vector of $T(\mathbf{x}) \in W$.

Our goal, then, is to find the $m \times n$ matrix $A$ that maps the vector $[\mathbf{x}]_{\beta}$ to the vector $[T(\mathbf{x})]_{\gamma}$ by matrix multiplication. Once $A$ is found, we can apply the linear transformation $T$ with a single matrix multiplication, without evaluating $T(\mathbf{x})$ directly from the formula for $T$.

$$ \begin{equation} A[\mathbf{x}]_{\beta} = [T(\mathbf{x})]_{\gamma} \end{equation} $$

Write the two bases explicitly as $\beta = \left\{ \mathbf{v}_{1}, \dots, \mathbf{v}_{n} \right\}$ and $\gamma = \left\{ \mathbf{w}_{1}, \dots, \mathbf{w}_{m} \right\}$. Then $(1)$ must hold for each $\mathbf{v}_{j}$, so we obtain the following.

$$ \begin{equation} A[\mathbf{v}_{1}]_{\beta} = [T(\mathbf{v}_{1})]_{\gamma},\quad A[\mathbf{v}_{2}]_{\beta} = [T(\mathbf{v}_{2})]_{\gamma},\quad \dots,\quad A[\mathbf{v}_{n}]_{\beta} = [T(\mathbf{v}_{n})]_{\gamma} \end{equation} $$

Let’s say the matrix $A$ is as follows.

$$ A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} $$

Since each $\mathbf{v}_{j}$ is itself the $j$th vector of $\beta$, the coordinate vectors $[\mathbf{v}_{j}]_{\beta}$ are the standard unit vectors:

$$ [\mathbf{v}_{1}]_{\beta} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad [\mathbf{v}_{2}]_{\beta} = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}, \quad \dots,\quad [\mathbf{v}_{n}]_{\beta} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} $$

Therefore, we obtain the following.

$$ \begin{align*} A[\mathbf{v}_{1}]_{\beta} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} &= \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} \\[3em] A[\mathbf{v}_{2}]_{\beta} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix} &= \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix} \\[1em] &\vdots \\[1em] A[\mathbf{v}_{n}]_{\beta} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} &= \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix} \end{align*} $$

Then, by $(2)$, we obtain the following.

$$ [T(\mathbf{v}_{1})]_{\gamma} = \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix},\quad [T(\mathbf{v}_{2})]_{\gamma} = \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix},\quad \dots,\quad [T(\mathbf{v}_{n})]_{\gamma} = \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix} $$

Therefore, the $j$th column of the matrix $A$ is $[T(\mathbf{v}_{j})]_{\gamma}$.

$$ A = \begin{bmatrix} [T(\mathbf{v}_{1})]_{\gamma} & [T(\mathbf{v}_{2})]_{\gamma} & \cdots & [T(\mathbf{v}_{n})]_{\gamma}\end{bmatrix} $$

Thus, the following equation holds.

$$ [T]_{\gamma, \beta} [\mathbf{x}]_{\beta} = [T(\mathbf{x})]_{\gamma} = [T]_{\beta}^{\gamma}[\mathbf{x}]_{\beta} $$

Intuitively, in $[T]_{\beta}^{\gamma} [\mathbf{x}]_{\beta}$ the two adjacent (duplicated in the subscripts) $\beta$'s cancel and $\mathbf{x}$ is substituted into $T$, leaving $[T(\mathbf{x})]_{\gamma}$.
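To see the whole procedure at once, here is a minimal NumPy sketch assuming a made-up transformation $T : \mathbb{R}^{2} \to \mathbb{R}^{3}$ and a non-standard basis $\gamma$ (so that finding $\gamma$-coordinates actually requires solving a linear system): each column of $A$ is $[T(\mathbf{v}_{j})]_{\gamma}$, and the result satisfies $A[\mathbf{x}]_{\beta} = [T(\mathbf{x})]_{\gamma}$.

```python
import numpy as np

# Made-up example: T(x, y) = (x + 2y, 3x, x - y), with beta the standard
# basis of R^2 and gamma a non-standard basis of R^3 (its vectors are the
# columns of the matrix G below).
def T(v):
    x, y = v
    return np.array([x + 2 * y, 3 * x, x - y], dtype=float)

beta = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
G = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)  # columns: gamma_1, gamma_2, gamma_3

# The j-th column of A is [T(v_j)]_gamma: solve G @ c = T(v_j) for c.
A = np.column_stack([np.linalg.solve(G, T(v)) for v in beta])

# Check A [x]_beta = [T(x)]_gamma on a sample vector (here [x]_beta = x,
# since beta is the standard basis).
x = np.array([2.0, -1.0])
assert np.allclose(A @ x, np.linalg.solve(G, T(x)))
print(A)
```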


  1. Stephen H. Friedberg, Linear Algebra (4th Edition, 2002), pp. 80-81 ↩︎ ↩︎