
Decomposition of Symmetric Matrices into LDU

Overview

Since $L^{T}$ is an upper triangular matrix, we can think of replacing $U$ in $A = LU$ with $U := DL^{T}$. The conditions are stricter than for a general LU decomposition, but in return the amount of computation decreases significantly.

Theorem

For the invertible symmetric matrix $A$, there exist a lower triangular matrix $L$ and a diagonal matrix $D$ satisfying $A = LDL^{T}$.

Proof

Since $A$ is an invertible matrix, there exist a lower triangular matrix $L$ and an upper triangular matrix $U$ that satisfy $A = L U$. On the other hand, as $A = A^{T}$ holds,

$$ L U = A = A^{T} = U^{T} L^{T} $$

In the LU decomposition, $L$ is obtained as a lower triangular matrix with all diagonal entries equal to $1$, so $\det L = 1$ and the inverse matrix $L^{-1}$ exists; moreover, $L^{-1}$ is itself a lower triangular matrix. Multiplying both sides of $$ L U = U^{T} L^{T} $$ by $L^{-1}$ on the left gives $$ U = L^{-1} U^{T} L^{T} $$ and multiplying both sides by $(L^{T})^{-1}$ on the right gives $$ U (L^{T})^{-1} = L^{-1} U^{T} $$ The left-hand side is a product of upper triangular matrices and the right-hand side is a product of lower triangular matrices, so $$ D := U (L^{T})^{-1} = L^{-1} U^{T} $$ must be a diagonal matrix. Rearranging this for $U$ gives $U = DL^{T}$, and we obtain the following: $$ A = LU = LDL^{T} $$
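To see concretely how $D$ arises, take for instance the small symmetric matrix below (chosen here only for illustration), whose LU decomposition is

$$ A = \begin{bmatrix} 4 & 2 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1/2 & 1 \end{bmatrix} \begin{bmatrix} 4 & 2 \\ 0 & 2 \end{bmatrix} = LU $$

Then

$$ U (L^{T})^{-1} = \begin{bmatrix} 4 & 2 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} 1 & -1/2 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix} $$

is indeed diagonal, so $A = LDL^{T}$ with $D = \operatorname{diag}(4, 2)$.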

Calculation

Now, to compute $L$ and $D$ explicitly, define them as follows:

$$ L : = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ l_{21} & 1 & 0 & \cdots & 0 \\ l_{31} & l_{32} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & 1 \end{bmatrix},\quad D : = \begin{bmatrix} d_{11} & 0 & 0 & \cdots & 0 \\ 0 & d_{22} & 0 & \cdots & 0 \\ 0 & 0 & d_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & d_{nn} \end{bmatrix} $$

Let’s directly calculate $A = (a_{ij}) = LDL^{T}$.

Since $l_{ij} = 0$ whenever $i < j$ (so every term with $\nu > i$ or $\nu > j$ vanishes) and $l_{ii} = 1$,

$$ \begin{align*} a_{ij} =& \sum_{\nu = 1}^{n} l_{i \nu} \sum_{\mu = 1}^{n} d_{\nu \mu} l_{\mu j}^{T} \\ =& \sum_{\nu = 1}^{n} l_{i \nu} \sum_{\mu = 1}^{n} \delta_{\nu \mu} d_{\nu \mu} l_{ j \mu} \\ =& \sum_{\nu = 1}^{n} l_{i \nu} d_{\nu \nu} l_{ j \nu} \\ =& \sum_{\nu = 1}^{ \min (i,j) } l_{i \nu} d_{\nu \nu} l_{ j \nu} \end{align*} $$

First, consider the case where $j \le i$.

$$ a_{ij} = \sum_{\nu = 1}^{j} l_{i \nu} d_{\nu \nu} l_{j \nu} = \sum_{\nu = 1}^{j-1} l_{i \nu} d_{\nu \nu} l_{j \nu} + l_{ij} d_{jj} l_{jj} = \sum_{\nu = 1}^{j-1} l_{i \nu} d_{\nu \nu} l_{j \nu} + l_{ij} d_{jj} $$

Here, in the case where $i = j = k$,

$$ a_{kk} = \sum_{\nu = 1}^{k-1} l_{k \nu} d_{\nu \nu} l_{k \nu} + d_{kk} = \sum_{\nu = 1}^{k-1} d_{\nu \nu} l_{k \nu}^{2} + d_{kk} $$

rearranging it,

$$ d_{kk} = a_{kk} - \sum_{\nu = 1}^{k-1} d_{\nu \nu} l_{k \nu}^2 $$

Next, for $j = k$ and $i = k+1, \cdots , n$,

$$ a_{ik} = \sum_{\nu = 1}^{k-1} l_{i \nu} d_{\nu \nu} l_{k \nu} + l_{ik} d_{kk} $$

rearranging for $l_{ik}$,

$$ l_{ik} = {{1} \over {d_{kk}}} \left\{ a_{ik} - \sum_{\nu = 1}^{k-1} l_{i \nu} d_{\nu \nu} l_{k \nu} \right\} $$
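As a quick check of these recurrences, apply them to the same $2 \times 2$ matrix used in the illustration above: $d_{11} = a_{11} = 4$, $l_{21} = a_{21}/d_{11} = 1/2$, and $d_{22} = a_{22} - d_{11} l_{21}^{2} = 3 - 4 \cdot \tfrac{1}{4} = 2$, which recovers exactly the $L$ and $D = \operatorname{diag}(4, 2)$ found earlier.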

Algorithm

Suppose $(a_{ij}) \in \mathbb{R}^{n \times n}$ is an invertible symmetric matrix.

**Step 1.** $k = 1$

Substitute $d_{11} = a_{11}$ and, for $i = 2, \cdots , n$, calculate $\displaystyle l_{i1} = {{1} \over {d_{11}}} a_{i1}$.


**Step 2.** $k = 2, 3, \cdots , n-1$

  • Step 2-1. Calculate the following: $$ d_{kk} = a_{kk} - \sum_{\nu = 1}^{k-1} d_{\nu \nu} l_{k \nu}^2 $$

  • Step 2-2. For $i = k+1 , k+2 , \cdots , n$, calculate the following: $$ l_{ik} = {{1} \over {d_{kk}}} \left\{ a_{ik} - \sum_{\nu = 1}^{k-1} l_{i \nu} d_{\nu \nu} l_{k \nu} \right\} $$


**Step 3.** For $k = n$, calculate the following: $$ d_{nn} = a_{nn} - \sum_{\nu = 1}^{n-1} d_{\nu \nu} l_{n \nu}^2 $$


Although the calculation looks complicated when done by hand, organizing it into an algorithm like this keeps the computational workload small. Remember to solve for the diagonal entry of $D$ first at each step, and then compute the entries of $L$ column by column.

Code

Now, let’s take an example similar to the one used in LU decomposition and perform LDU decomposition.

$$ B = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix} $$
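Working the algorithm through by hand for this matrix gives

$$ d_{11} = 2, \quad l_{21} = \frac{1}{2}, \quad l_{31} = 0, \quad d_{22} = 2 - 2 \cdot \left( \frac{1}{2} \right)^{2} = \frac{3}{2}, \quad l_{32} = \frac{1 - 0}{3/2} = \frac{2}{3}, \quad d_{33} = 2 - \frac{3}{2} \cdot \left( \frac{2}{3} \right)^{2} = \frac{4}{3} $$

so that $D = \operatorname{diag} \left( 2, \frac{3}{2}, \frac{4}{3} \right)$.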

The following code implements the above algorithm in R.

B <- matrix(c(2,1,0,1,2,1,0,1,2), ncol = 3); B
 
LDU <- function(A)
{
  n <- length(A[,1])
  L <- diag(1, n)   # unit lower triangular factor
  D <- diag(0, n)   # diagonal factor
  
  # Step 1 (k = 1): first diagonal entry of D and first column of L
  D[1,1] <- A[1,1]
  L[,1]  <- A[,1]/D[1,1]
 
  # Step 2 (k = 2, ..., n-1): diagonal entry first, then the k-th column of L
  for(k in 2:(n-1))
  {
    D[k,k] <- A[k,k] - sum(diag(D)[1:(k-1)]*L[k,1:(k-1)]^2)
    for(i in (k+1):n)
    {
      L[i,k] <- ( A[i,k] - sum(L[i,1:(k-1)]*diag(D)[1:(k-1)]*L[k,1:(k-1)]) )/D[k,k]
    }
  }
  
  # Step 3 (k = n): last diagonal entry of D
  D[n,n] <- A[n,n] - sum(diag(D)[1:(n-1)]*L[n,1:(n-1)]^2)
 
  return(list(L=L, D=D))
}
 
LDU(B)
LDU(B)$L %*% LDU(B)$D %*% t(LDU(B)$L)   # should reproduce B

Since an LDU decomposition isn’t separately implemented in R, I verified the result by multiplying the factors back together as $LDL^{T}$, which reproduces $B$ correctly.
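For a symmetric positive definite matrix such as this $B$, another cross-check is R’s built-in Cholesky factorization: chol(B) returns an upper triangular $R$ with $B = R^{T}R$, so $D$ should be the squared diagonal of $R$ and $L$ should be $R^{T}$ rescaled to have a unit diagonal. A minimal sketch, assuming positive definiteness:

R.chol <- chol(B)                              # upper triangular R with B = t(R) %*% R
D.chk  <- diag(diag(R.chol)^2)                 # D recovered as the squared diagonal of R
L.chk  <- t(R.chol) %*% diag(1/diag(R.chol))   # L recovered as t(R) with unit diagonal
all.equal(LDU(B)$D, D.chk)
all.equal(LDU(B)$L, L.chk)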

Since $B$ is small and the computation is manageable, you can also work out the decomposition by hand if you want to double-check.