
Gram-Schmidt Orthonormalization 📂Linear Algebra

Theorem

Every finite-dimensional inner product space has an orthonormal basis.

Explanation

As with most existence proofs, the proof itself is neither long nor complex, but the theorem is extremely important: much of the logical foundation that supports linear algebra rests on the existence of such an orthonormal basis.

Proof

Let $\left\{ \mathbf{x}_{1} , \cdots , \mathbf{x}_{n} \right\}$ be any basis of the inner product space $(V, \left\langle \cdot , \cdot \right\rangle)$. Define new vectors

$$ \mathbf{v}_{1} := {{ \mathbf{x}_{1} } \over { \left\| \mathbf{x}_{1} \right\| }} $$

and

$$ {\color{blue} \mathbf{v}_{2}} := {{ {\color{red} \mathbf{x}_{2} - \left\langle \mathbf{x}_{2} , \mathbf{v}_{1} \right\rangle \mathbf{v}_{1} } } \over { \left\| \mathbf{x}_{2} - \left\langle \mathbf{x}_{2} , \mathbf{v}_{1} \right\rangle \mathbf{v}_{1} \right\| }} = {{ \mathbf{x}_{2} - \text{Proj} ( \mathbf{x}_{2} ) } \over { \left\| \mathbf{x}_{2} - \text{Proj} ( \mathbf{x}_{2} ) \right\| }} $$

where $\text{Proj} ( \mathbf{x}_{2} ) = \left\langle \mathbf{x}_{2} , \mathbf{v}_{1} \right\rangle \mathbf{v}_{1}$ is the projection of $\mathbf{x}_{2}$ onto the span of $\mathbf{v}_{1}$.

(Figure: $\mathbf{x}_{2}$ is decomposed into its projection onto $\mathbf{v}_{1}$ and the orthogonal component ${\color{red} \mathbf{x}_{2} - \left\langle \mathbf{x}_{2} , \mathbf{v}_{1} \right\rangle \mathbf{v}_{1}}$, which is normalized to give ${\color{blue} \mathbf{v}_{2}}$.)

As depicted in the diagram, $\mathbf{v}_{1} \perp \mathbf{v}_{2}$ holds. Repeating this process, define for each $k = 2 , \cdots , n$

$$ \mathbf{v}_{k} := {{ \mathbf{x}_{k} - \sum_{j=1}^{k-1} \left\langle \mathbf{x}_{k} , \mathbf{v}_{j} \right\rangle \mathbf{v}_{j} } \over { \left\| \mathbf{x}_{k} - \sum_{j=1}^{k-1} \left\langle \mathbf{x}_{k} , \mathbf{v}_{j} \right\rangle \mathbf{v}_{j} \right\| }} $$

Each $\mathbf{v}_{k}$ is orthogonal to $\mathbf{v}_{1} , \cdots , \mathbf{v}_{k-1}$, and since each $\mathbf{v}_{k}$ is a linear combination of $\mathbf{x}_{1} , \cdots , \mathbf{x}_{k}$ (and conversely), $\left\{ \mathbf{v}_{1} , \cdots , \mathbf{v}_{n} \right\}$ still spans $V$ and is an orthogonal basis. By definition, all $\mathbf{v}_{1} , \cdots , \mathbf{v}_{n}$ have a magnitude of $1$, making $\left\{ \mathbf{v}_{1} , \cdots , \mathbf{v}_{n} \right\}$ an orthonormal basis.
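To see the orthogonality explicitly at the first step: since $\left\langle \mathbf{v}_{1} , \mathbf{v}_{1} \right\rangle = 1$,

$$ \left\langle \mathbf{x}_{2} - \left\langle \mathbf{x}_{2} , \mathbf{v}_{1} \right\rangle \mathbf{v}_{1} , \mathbf{v}_{1} \right\rangle = \left\langle \mathbf{x}_{2} , \mathbf{v}_{1} \right\rangle - \left\langle \mathbf{x}_{2} , \mathbf{v}_{1} \right\rangle \left\langle \mathbf{v}_{1} , \mathbf{v}_{1} \right\rangle = 0 $$

so the numerator defining $\mathbf{v}_{2}$ is indeed orthogonal to $\mathbf{v}_{1}$, and the same cancellation occurs at every step $k$.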


In fact, normalization simply involves dividing by the norm to make the magnitude 1, and the core idea is to find orthogonal vectors using projection.

Code

The proof of the Gram-Schmidt orthogonalization process is essentially an algorithm. In practice, for a given random matrix X, the function gram_schmidt below orthonormalizes the columns of X; since the resulting square matrix V is then orthogonal, its row vectors are also mutually orthogonal with norm $1$, which is what the following session verifies.

julia> X = randn(3, 3)
3×3 Matrix{Float64}:
  0.0582107   0.414695  0.730785
 -0.740409   -0.885193  0.302674
  1.25107     0.840262  0.176503

julia> V = gram_schmidt(X)
3×3 Matrix{Float64}:
  0.0400097   0.739513  0.671952
 -0.508902   -0.563652  0.650626
  0.859894   -0.367989  0.353788

julia> V[1,:]'V[2,:], V[1,:]'V[3,:], V[2,:]'V[3,:]
(0.0, -8.326672684688674e-17, 1.1102230246251565e-16)

julia> norm(V[1,:]), norm(V[2,:]), norm(V[3,:])
(1.0, 0.9999999999999999, 1.0)

The following is Julia code that performs Gram-Schmidt orthonormalization on the columns of a matrix.

using LinearAlgebra

function gram_schmidt(A)
    U = deepcopy(A)
    for i in axes(A, 2)                  # process the columns from left to right
        for j in 1:(i-1)
            # subtract the projection onto column j, which is already a unit vector
            U[:, i] -= (U[:, i]'U[:, j])*U[:, j]
        end
        U[:, i] /= norm(U[:, i])         # normalize to unit length
    end
    return U
end

X = randn(3, 3)
V = gram_schmidt(X)

V[1,:]'V[2,:], V[1,:]'V[3,:], V[2,:]'V[3,:]
norm(V[1,:]), norm(V[2,:]), norm(V[3,:])
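
As a quick sanity check (a sketch, assuming the gram_schmidt function defined above), classical Gram-Schmidt applied to the columns of a full-rank matrix should reproduce the Q factor of its QR factorization, up to the sign of each column:

using LinearAlgebra

X = randn(3, 3)
V = gram_schmidt(X)
Q = Matrix(qr(X).Q)                 # Householder-based orthonormal factor

# columns agree up to sign, so the absolute values should match
maximum(abs.(abs.(V) .- abs.(Q)))   # ≈ 0 up to floating-point error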

Lyapunov Spectrum

When calculating the Lyapunov spectrum, one needs a function that returns both the orthonormalized matrix U and the orthogonal (but not normalized) matrix V, whose column lengths measure how the axes of a unit ball are stretched, as shown below.

using LinearAlgebra

function gram_schmidt(J)
    N = size(J, 1)
    U, V = deepcopy(J), deepcopy(J)
    U[:,1] = V[:,1] / norm(V[:,1])       # first axis: just normalize
    for j in 2:N
        for jp in 1:(j-1)
            # remove the components along the already-computed unit vectors
            V[:,j] -= (J[:,j]'U[:,jp])*U[:,jp]
        end
        U[:,j] = V[:,j] / norm(V[:,j])   # unit-length version of the j-th axis
    end
    return U, V                          # U: orthonormal, V: orthogonal, lengths kept
end
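
A minimal usage sketch under this setup (the time-averaging over a long trajectory that actually produces the Lyapunov exponents is omitted): the column norms of V are the stretching factors along the orthogonalized axes, and their logarithms are what get accumulated into the spectrum.

using LinearAlgebra

J = randn(3, 3)                      # stand-in for a Jacobian along a trajectory
U, V = gram_schmidt(J)

# stretching factor along each orthogonalized axis, and its log
stretch = [norm(v) for v in eachcol(V)]
log.(stretch)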
