What is an Inner Product in Real Vector Spaces?
Definition¹
Let $V$ be a real vector space. An inner product on $V$ is a function that assigns to each pair of vectors in $V$ a single real number $\langle \mathbf{u}, \mathbf{v} \rangle$, satisfying the following conditions:
For all $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$ and $k \in \mathbb{R}$,
- $\langle \mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{v}, \mathbf{u} \rangle$
- $\langle \mathbf{u} + \mathbf{v}, \mathbf{w} \rangle = \langle \mathbf{u}, \mathbf{w} \rangle + \langle \mathbf{v}, \mathbf{w} \rangle$
- $\langle k \mathbf{u}, \mathbf{v} \rangle = k \langle \mathbf{u}, \mathbf{v} \rangle$
- $\langle \mathbf{v}, \mathbf{v} \rangle \ge 0 \quad \text{and} \quad \langle \mathbf{v}, \mathbf{v} \rangle = 0 \iff \mathbf{v}=\mathbf{0}$
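As a quick sanity check, here is a minimal numerical sketch (assuming Python with NumPy; the function name `inner` is only illustrative) that verifies the four conditions for the standard dot product on $\mathbb{R}^{3}$ with randomly chosen vectors.

```python
import numpy as np

def inner(u, v):
    """Standard dot product on R^n, used here as a concrete inner product."""
    return float(np.dot(u, v))

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))  # three random vectors in R^3
k = 2.5

# Symmetry: <u, v> = <v, u>
assert np.isclose(inner(u, v), inner(v, u))
# Additivity in the first argument: <u + v, w> = <u, w> + <v, w>
assert np.isclose(inner(u + v, w), inner(u, w) + inner(v, w))
# Homogeneity: <k u, v> = k <u, v>
assert np.isclose(inner(k * u, v), k * inner(u, v))
# Positive-definiteness: <v, v> >= 0, and <0, 0> = 0
assert inner(v, v) >= 0 and inner(np.zeros(3), np.zeros(3)) == 0
```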
Explanation
Reading the definition, these may look like exactly the conditions the inner product we have used since high school should obviously satisfy. Conceptually, the definition does not get any more general than this, apart from extending the scalars from the real numbers to the complex numbers. In a first course on linear algebra, the definition stops at requiring the result to be a real number; in functional analysis, or when studying Hilbert spaces, it is generalized to allow complex values.
In Euclidean space, distance, norm, and the inner product (dot product) are usually defined first, and the relationships among them are then derived as properties. When the inner product is generalized to abstract vector spaces in this way, however, the norm and the distance are in turn naturally defined in terms of it, as follows.
$$ \begin{align*} \| \mathbf{v} \| &:= \sqrt{ \langle \mathbf{v}, \mathbf{v} \rangle } \\ d( \mathbf{u}, \mathbf{v}) &:= \| \mathbf{u} - \mathbf{v} \| = \sqrt{ \langle \mathbf{u} - \mathbf{v}, \mathbf{u} - \mathbf{v} \rangle } \end{align*} $$
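Following these definitions, the norm and distance can be computed directly from any inner product. The sketch below is a minimal illustration (assuming NumPy, with the dot product standing in for the inner product and the helper names `norm`/`dist` chosen only for this example); the results agree with the usual Euclidean norm and distance.

```python
import numpy as np

def inner(u, v):
    # Any valid inner product works here; the dot product is used as an example.
    return float(np.dot(u, v))

def norm(v):
    # ||v|| := sqrt(<v, v>)
    return np.sqrt(inner(v, v))

def dist(u, v):
    # d(u, v) := ||u - v||
    return norm(u - v)

u = np.array([1.0, 2.0, 2.0])
v = np.array([4.0, 6.0, 2.0])
print(norm(u))     # 3.0, matches np.linalg.norm(u)
print(dist(u, v))  # 5.0, matches np.linalg.norm(u - v)
```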
Inner Product in Various Spaces
Euclidean Space
In Euclidean space, a weighted inner product can be defined as follows.
For $\mathbf{u}, \mathbf{v} \in \mathbb{R}^{n}$ and positive weights $w_{1}, \dots, w_{n} \in \mathbb{R}$,
$$ \langle \mathbf{u}, \mathbf{v} \rangle = w_{1}u_{1}v_{1} + w_{2}u_{2}v_{2} + \cdots + w_{n}u_{n}v_{n} $$
If the values observed in a physical experiment are denoted by $x_{1}, \dots, x_{n}$, with $x_{i}$ observed $f_{i}$ times so that the total number of observations is $f_{1}+f_{2}+\cdots+f_{n}=m$, then choosing $w_{1}=w_{2}=\cdots=w_{n}=\frac{1}{m}$ expresses the mean of the observations as a weighted inner product.
$$ \langle \mathbf{x}, \mathbf{f} \rangle = \dfrac{1}{m} \left( f_{1}x_{1} + f_{2}x_{2} + \cdots + f_{n}x_{n} \right) $$
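As a small numerical check (a sketch assuming NumPy; the data values and the name `weighted_inner` are made up for illustration), the weighted inner product with $w_{i} = \frac{1}{m}$ reproduces the ordinary mean of the observations.

```python
import numpy as np

def weighted_inner(u, v, w):
    """Weighted inner product <u, v> = sum_i w_i * u_i * v_i, with weights w_i > 0."""
    return float(np.sum(w * u * v))

x = np.array([2.0, 3.0, 5.0])  # distinct observed values
f = np.array([3.0, 1.0, 2.0])  # how many times each value was observed
m = f.sum()                    # total number of observations
w = np.full_like(x, 1.0 / m)   # w_1 = ... = w_n = 1/m

mean_via_inner = weighted_inner(x, f, w)
# Same as repeating each x_i f_i times and averaging directly:
mean_direct = np.repeat(x, f.astype(int)).mean()
print(mean_via_inner, mean_direct)  # both 3.1666...
```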
Matrix Space
On the matrix space $M_{nn}$, an inner product is defined as follows.
For $U, V \in M_{nn}(\mathbb{C})$,
$$ \langle U, V \rangle = \text{Tr}(U^{\ast} V) $$
Here, $\text{Tr}$ denotes the trace and $U^{\ast}$ the conjugate transpose of $U$. This inner product is also written as follows and is called the Frobenius inner product.
$$ \left\langle U, V \right\rangle_{F} $$
Looking at a $2 \times 2$ example, it is easy to see that the definition above is just the sum of the products of corresponding entries (with the entries of $U$ conjugated). Suppose the two matrices $U, V$ are as follows.
$$ U=\begin{bmatrix} u_{1} & u_{2} \\ u_{3} & u_{4} \end{bmatrix} ,\quad V=\begin{bmatrix} v_{1} & v_{2} \\ v_{3} & v_{4} \end{bmatrix} $$
Then
$$ \begin{align*} && U^{\ast} V &= \begin{bmatrix} u_{1}^{\ast} & u_{3}^{\ast} \\ u_{2}^{\ast} & u_{4}^{\ast} \end{bmatrix} \begin{bmatrix} v_{1}^{\ } & v_{2}^{\ } \\ v_{3}^{\ } & v_{4}^{\ } \end{bmatrix} \\ && &= \begin{bmatrix} u_{1}^{\ast}v_{1}^{\ } + u_{3}^{\ast}v_{3}^{\ } & u_{1}^{\ast}v_{2}^{\ } + u_{3}^{\ast}v_{4}^{\ } \\ u_{2}^{\ast}v_{1}^{\ } + u_{4}^{\ast}v_{3}^{\ } & u_{2}^{\ast}v_{2}^{\ } + u_{4}^{\ast}v_{4}^{\ } \end{bmatrix} \\ \implies && \text{Tr}(U^{\ast}V) &= u_{1}^{\ast}v_{1}^{\ } + u_{2}^{\ast}v_{2}^{\ } + u_{3}^{\ast}v_{3}^{\ } + u_{4}^{\ast}v_{4}^{\ } \end{align*} $$
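The same computation can be checked numerically. The sketch below (a minimal illustration assuming NumPy; the matrices and the name `frobenius_inner` are chosen only for this example) computes $\text{Tr}(U^{\ast}V)$ and compares it with the sum of the entrywise products $u_{i}^{\ast} v_{i}$.

```python
import numpy as np

def frobenius_inner(U, V):
    """Frobenius inner product <U, V> = Tr(U* V), with U* the conjugate transpose."""
    return np.trace(U.conj().T @ V)

U = np.array([[1 + 2j, 0 - 1j],
              [3 + 0j, 2 - 2j]])
V = np.array([[2 - 1j, 1 + 1j],
              [0 + 2j, 1 - 3j]])

via_trace = frobenius_inner(U, V)
via_entries = np.sum(U.conj() * V)  # sum of u_i* v_i over all entries
print(np.isclose(via_trace, via_entries))  # True
```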
See Also
Howard Anton, Elementary Linear Algebra: Applications Version (12th Edition, 2019), pp. 341-349 ↩︎