Inverse of Linear Transformations
Definition
Let $V, W$ be vector spaces and let $T : V \to W$ be a linear transformation. If a linear transformation $U : W \to V$ satisfies the following, then $U$ is called the inverse (or inverse transformation) of $T$.
$$ TU = I_{W} \quad \text{and} \quad UT = I_{V} $$
Here $TU$ denotes the composition $T \circ U$, and $I_{X} : X \to X$ is the identity transformation on $X$. If $T$ has an inverse, $T$ is called an invertible transformation. If $T$ is invertible, its inverse $U$ is unique and is denoted by $T^{-1} = U$.
Explanation
By the definition, a linear transformation is invertible if and only if it is a bijective function.
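For example, the linear transformation $T : \mathbb{R}^{2} \to \mathbb{R}^{2}$ given by $T(x, y) = (x + y, y)$ is bijective, and $U(x, y) = (x - y, y)$ is its inverse:
$$ TU(x, y) = T(x - y, y) = (x, y) \quad \text{and} \quad UT(x, y) = U(x + y, y) = (x, y) $$
so $TU = I_{\mathbb{R}^{2}}$ and $UT = I_{\mathbb{R}^{2}}$, that is, $U = T^{-1}$.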
Properties
(a) $(TU)^{-1} = U^{-1}T^{-1}$
(b) $(T^{-1})^{-1} = T$
(c) Let $V, W$ be finite-dimensional vector spaces of the same dimension and let $T : V \to W$ be a linear transformation. Then $T$ is invertible if and only if
$$ \rank (T) = \dim (V) $$
$\rank (T)$ is the rank of $T$.
(d) The inverse $T^{-1} : W \to V$ of a linear transformation $T : V \to W$ is also a linear transformation.
(e) Let $T : V \to W$ be an invertible linear transformation. Then $V$ is finite-dimensional if and only if $W$ is finite-dimensional. In this case, $\dim(V) = \dim(W)$ holds.
(f) Let $V, W$ be finite-dimensional vector spaces with ordered bases $\beta, \gamma$, respectively. Then $T$ being invertible is equivalent to $[T]_{\beta}^{\gamma}$ being invertible. Furthermore, $[T^{-1}]_{\gamma}^{\beta} = ([T]_{\beta}^{\gamma})^{-1}$. In this case, $[T]_{\beta}^{\gamma}$ is the matrix representation of $T$.
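The matrix versions of these properties are easy to check numerically. Below is a minimal NumPy sketch (an added illustration, not from the original text) that verifies (a), (b), and (f) for generic matrices on $\mathbb{R}^{3}$; a generic random matrix is invertible with probability $1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrix representations of two invertible linear transformations on R^3.
# np.linalg.inv would raise LinAlgError for a singular input.
T = rng.normal(size=(3, 3))
U = rng.normal(size=(3, 3))

# (a) (TU)^{-1} = U^{-1} T^{-1}
assert np.allclose(np.linalg.inv(T @ U), np.linalg.inv(U) @ np.linalg.inv(T))

# (b) (T^{-1})^{-1} = T
assert np.allclose(np.linalg.inv(np.linalg.inv(T)), T)

# (f) the representation of T^{-1} is the inverse of the representation of T
assert np.allclose(T @ np.linalg.inv(T), np.eye(3))
```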
Proof
(d)
Let $\mathbf{y}_{1}, \mathbf{y}_{2} \in W$ and let $k$ be any scalar. Since $T$ is bijective, there exist $\mathbf{x}_{1}, \mathbf{x}_{2} \in V$ such that $\mathbf{y}_{1} = T(\mathbf{x}_{1})$ and $\mathbf{y}_{2} = T(\mathbf{x}_{2})$. Then, since $T$ is linear, the following holds.
$$ \begin{align*} T^{-1} \left( T(\mathbf{x}_{1}) + k T(\mathbf{x}_{2}) \right) &= T^{-1} \left( T(\mathbf{x}_{1} + k \mathbf{x}_{2}) \right) \\ &= \mathbf{x}_{1} + k \mathbf{x}_{2} \\ &= T^{-1}\left( T(\mathbf{x}_{1}) \right) + kT^{-1}\left( T(\mathbf{x}_{2}) \right) \end{align*} $$
That is, $T^{-1}(\mathbf{y}_{1} + k\mathbf{y}_{2}) = T^{-1}(\mathbf{y}_{1}) + kT^{-1}(\mathbf{y}_{2})$, so $T^{-1}$ is linear.

■
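For matrices, this linearity identity can be sanity-checked numerically; the following NumPy snippet (an added sketch, not part of the proof) verifies it for a generic invertible matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))        # matrix of a generic (hence invertible) T
A_inv = np.linalg.inv(A)           # matrix of T^{-1}

y1 = rng.normal(size=3)
y2 = rng.normal(size=3)
k = 2.5

# T^{-1}(y1 + k*y2) == T^{-1}(y1) + k*T^{-1}(y2)
assert np.allclose(A_inv @ (y1 + k * y2), A_inv @ y1 + k * A_inv @ y2)
```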
(e)
Assume $V$ is finite-dimensional and let $\beta = \left\{ \mathbf{v}_{1}, \dots, \mathbf{v}_{n} \right\}$ be a basis for $V$. Since $T$ is surjective, the image $T(\beta)$ spans the range $R(T) = W$.
$$ \span (T(\beta)) = R(T) = W $$
Thus, $W$ is finite-dimensional. Conversely, if $W$ is finite-dimensional, applying the same argument to the invertible transformation $T^{-1} : W \to V$ shows that $V$ is finite-dimensional.
Now, assume $V, W$ are finite-dimensional. Since $T$ is invertible, it is injective and surjective, so
$$ \nullity (T) = 0 \quad \text{and} \quad \rank(T) = \dim(R(T)) = \dim(W) $$
By the dimension theorem,
$$ \dim(V) = \rank(T) + \nullity(T) = \dim(W) $$
■
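The dimension-theorem bookkeeping in this proof can be checked numerically for matrices. The sketch below (an added illustration, not from the original) computes the rank and the nullity of a generic square matrix independently and confirms that they sum to $\dim(V)$.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))        # matrix of T : V -> W with dim V = dim W = 4

rank = np.linalg.matrix_rank(A)
# nullity(T) = number of (numerically) zero singular values of A
s = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(s < 1e-10))

# Dimension theorem: dim(V) = rank(T) + nullity(T)
assert rank + nullity == A.shape[1]
# A generic square matrix is invertible, so nullity 0 and full rank:
assert nullity == 0 and rank == 4
```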
(f)
$(\Longrightarrow)$
Assume $T : V \to W$ is invertible. Then, by (e), $\dim(V) = n = \dim(W)$, so $[T]_{\beta}^{\gamma}$ is an $n \times n$ matrix. Since the inverse $T^{-1}$ of $T$ satisfies $T^{-1}T = I_{V}$ and $TT^{-1} = I_{W}$,
$$ I_{n} = [I_{V}]_{\beta} = [T^{-1}T]_{\beta} = [T^{-1}]_{\gamma}^{\beta}[T]_{\beta}^{\gamma} $$
Similarly, $I_{n} = [I_{W}]_{\gamma} = [TT^{-1}]_{\gamma} = [T]_{\beta}^{\gamma}[T^{-1}]_{\gamma}^{\beta}$ holds. Therefore, $[T]_{\beta}^{\gamma}$ is an invertible matrix, and its inverse is $([T]_{\beta}^{\gamma})^{-1} = [T^{-1}]_{\gamma}^{\beta}$.
$(\Longleftarrow)$
Assume $A = [T]_{\beta}^{\gamma}$ is an invertible matrix. Then there exists an $n \times n$ matrix $B$ satisfying $AB = BA = I$. Hence there uniquely exists a linear transformation $U : W \to V$ defined as follows.
$$ U(\mathbf{w}_{j}) = \sum_{i=1}^{n}B_{ij}\mathbf{v}_{i} \quad \text{ for } j = 1,\dots,n $$
Here $\gamma = \left\{ \mathbf{w}_{1}, \dots, \mathbf{w}_{n} \right\}$ and $\beta = \left\{ \mathbf{v}_{1}, \dots, \mathbf{v}_{n} \right\}$ are the ordered bases of $W$ and $V$, respectively. By construction, the matrix representation of $U$ is $[U]_{\gamma}^{\beta} = B$. Then the following holds.
$$ [UT]_{\beta} = [U]_{\gamma}^{\beta} [T]_{\beta}^{\gamma} = BA = I_{n} = [I_{V}]_{\beta} $$
Hence $UT = I_{V}$, and similarly $TU = I_{W}$. Therefore, $T$ is an invertible transformation.
■
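In coordinates, this construction of $U$ from $B$ amounts to taking the inverse matrix; the NumPy sketch below (an added check, not part of the proof) mirrors the final step.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))        # A = [T]_beta^gamma, generic => invertible
B = np.linalg.inv(A)               # the matrix B with AB = BA = I

# U is defined on gamma by U(w_j) = sum_i B_ij v_i, so [U]_gamma^beta = B,
# and composition of transformations corresponds to matrix multiplication.
assert np.allclose(B @ A, np.eye(3))   # [UT]_beta  = BA = I_n  =>  UT = I_V
assert np.allclose(A @ B, np.eye(3))   # [TU]_gamma = AB = I_n  =>  TU = I_W
```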