Paper Review: Denoising Diffusion Probabilistic Models (DDPM)
Overview and Summary

A generative model is a method for finding the probability distribution $Y$ that a given set of samples $\left\{ y_{i} \right\}$ follows. Since it is very hard to find such a distribution directly from scratch, it is common to approximate it by starting from a well-known distribution. Thus, if a dataset $\left\{ x_{i} \right\}$ drawn from a well-known distribution $X$ is given, the generative model can be regarded as a function $f$, and developing a generative model amounts to finding a function that closely approximates $f$.
$$
f : \left\{ x_{i} \right\} \to \left\{ y_{j} \right\}
$$
The most commonly used well-known distribution is the normal distribution. Therefore, an easy way to understand a generative model is as a method for extracting samples that follow an unknown distribution from a normal distribution.
Denoising Diffusion Probabilistic Models (DDPM) likewise introduce a method for extracting data that follows an unknown distribution from a normal distribution. Consider a noise $\mathbf{z}$ drawn from the normal distribution $N(0, I)$ and an image $\mathbf{x}_{0}$ assumed to be drawn from some unknown distribution $X$. Finding a function $\mathbf{x}_{0} = p(\mathbf{z})$ that creates $\mathbf{x}_{0}$ from $\mathbf{z}$ is hard, but finding the inverse function $\mathbf{z} = q(\mathbf{x}_{0})$ that degrades $\mathbf{x}_{0}$ into $\mathbf{z}$ is not easy either.
However, it is easy to obtain a slightly degraded image $\mathbf{x}_{1} = \mathbf{x}_{0} + \boldsymbol{\epsilon}_{1}$ by adding a small noise $\boldsymbol{\epsilon}_{1}$ drawn from a normal distribution to the image $\mathbf{x}_{0}$. Drawing another small Gaussian noise $\boldsymbol{\epsilon}_{2}$ and adding it to obtain $\mathbf{x}_{2} = \mathbf{x}_{1} + \boldsymbol{\epsilon}_{2}$ is just as easy. After repeating this process many times, $\mathbf{x}_{T}$ looks like noise drawn from a normal distribution. In other words, repeatedly adding Gaussian noise to the image $\mathbf{x}_{0}$ produces an image resembling a sample from a normal distribution, and this process is termed diffusion. The name comes from the fact that, mathematically, Brownian motion, the random spreading of particles, can be described by repeatedly adding values drawn from a Gaussian distribution, and real-world Brownian motion is in turn related to the diffusion (heat) equation.
Since diffusion adds noise repeatedly, its reverse process subtracts noise, which naturally leads to denoising as the inverse process. This paper introduces a method for carrying out the unknown denoising process by exploiting knowledge of the known diffusion process. The idea was originally introduced as diffusion probabilistic models (DPM) in the 2015 paper Deep Unsupervised Learning using Nonequilibrium Thermodynamics, and DDPM complements it with an effective integration with deep learning.
The DDPM paper and its appendix are quite terse in their mathematical exposition, and even the many reviews of this paper often fail to explain clearly how the formulas are derived. This article therefore focuses meticulously on the mathematical details of DDPM rather than on its performance or results. Although the text is somewhat long, there should be no gaps in the derivation of the formulas.
Not a single line of formula has been glossed over. This is arguably the most mathematically accurate and detailed explanation of the DDPM paper’s formulas worldwide.
Preliminaries

In this article, the random variable $\mathbf{x} \in \mathbb{R}^{D}$ and its realization are both denoted by $\mathbf{x} \in \mathbb{R}^{D}$. The probability density function of a random variable $X$ is usually written $p_{X}(x)$; here it will be abbreviated as $p(\mathbf{x}) = p_{\mathbf{x}}(\mathbf{x})$.
A sequence of random variables $(\mathbf{x}_{0}, \mathbf{x}_{1}, \dots, \mathbf{x}_{T}) = \left\{ \mathbf{x}_{t} \right\}_{t=0}^{T}$ is called a stochastic process. In this article, a stochastic process is denoted $\mathbf{x}_{0:T} = (\mathbf{x}_{0}, \mathbf{x}_{1}, \dots, \mathbf{x}_{T})$. The joint probability density function of $\mathbf{x}_{0:T}$ is written as follows:
$$
p(\mathbf{x}_{0:T}) = p_{\mathbf{x}_{0:T}}(\mathbf{x}_{0:T})
$$
Given $\mathbf{x}_{0:T-1}$, the conditional probability density function of $\mathbf{x}_{T}$ is defined as $p(\mathbf{x}_{T} | \mathbf{x}_{0:T-1})$ if it satisfies the following:
$$
p(\mathbf{x}_{T} | \mathbf{x}_{0:T-1}) = \dfrac{p(\mathbf{x}_{0:T})}{p(\mathbf{x}_{0:T-1})}
$$
By repeating the above definition, we obtain the following.
$$
p(\mathbf{x}_{0:T})
= p(\mathbf{x}_{0}) p(\mathbf{x}_{1} | \mathbf{x}_{0}) p(\mathbf{x}_{2} | \mathbf{x}_{0:1}) \cdots p(\mathbf{x}_{T-1} | \mathbf{x}_{0:T-2}) p(\mathbf{x}_{T} | \mathbf{x}_{0:T-1})
\tag{1}
$$
If $\mathbf{x}_{0:T}$ satisfies the following, it is called a Markov chain.
$$
p(\mathbf{x}_{T} | \mathbf{x}_{0:T-1}) = p(\mathbf{x}_{T} | \mathbf{x}_{T-1})
$$
If $\mathbf{x}_{0:T}$ is a Markov chain, then $(1)$ simplifies as follows:
$$
\begin{align*}
p(\mathbf{x}_{0:T})
&= p(\mathbf{x}_{0}) p(\mathbf{x}_{1} | \mathbf{x}_{0}) p(\mathbf{x}_{2} | \mathbf{x}_{0:1}) \cdots p(\mathbf{x}_{T-1} | \mathbf{x}_{0:T-2}) p(\mathbf{x}_{T} | \mathbf{x}_{0:T-1}) \\
&= p(\mathbf{x}_{0}) p(\mathbf{x}_{1} | \mathbf{x}_{0}) p(\mathbf{x}_{2} | \mathbf{x}_{1}) \cdots p(\mathbf{x}_{T-1} | \mathbf{x}_{T-2}) p(\mathbf{x}_{T} | \mathbf{x}_{T-1}) \\
&= p(\mathbf{x}_{0}) \prod\limits_{t=1}^{T} p(\mathbf{x}_{t} | \mathbf{x}_{t-1}) \tag{2}
\end{align*}
$$
2 Background

Section 2 briefly introduces how the conceptual idea of the diffusion model can be expressed mathematically. Since the underlying idea comes from earlier papers, the explanation is not in-depth, and the omitted details make it hard for beginners to follow. I elaborate on these as thoroughly as possible below.
Diffusion Process (Forward Process)

Let us describe the diffusion process mathematically. The image $\mathbf{x}_{t} \in \mathbb{R}^{D}$ at step $t$ is obtained from $\mathbf{x}_{t-1} \in \mathbb{R}^{D}$ at step $t-1$ by adding Gaussian noise $\sqrt{\beta_{t}}\boldsymbol{\epsilon}_{t} \sim \mathcal{N}(\mathbf{0}, \beta_{t}\mathbf{I})$. Here $\beta_{t}$ determines how strongly the noise diffuses, and the step can be written as follows:
$$
\mathbf{x}_{t} = \sqrt{1 - \beta_{t}}\mathbf{x}_{t-1} + \sqrt{\beta_{t}}\boldsymbol{\epsilon}_{t}, \qquad \boldsymbol{\epsilon}_{t} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})
$$
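As a minimal sketch (not the paper's code), this forward step can be simulated with NumPy; the linear $\beta_{t}$ schedule below is only an illustrative assumption:

```python
import numpy as np

def forward_step(x_prev, beta_t, rng):
    """One diffusion step: x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps."""
    eps = rng.standard_normal(x_prev.shape)
    return np.sqrt(1.0 - beta_t) * x_prev + np.sqrt(beta_t) * eps

rng = np.random.default_rng(0)
x = np.ones(4)                                  # stand-in for an image x_0 with D = 4
for beta in np.linspace(1e-4, 0.02, 1000):      # assumed linear schedule beta_1, ..., beta_T
    x = forward_step(x, beta, rng)
print(x)                                        # after many steps, x_T looks like N(0, I) noise
```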
The coefficients are $(\sqrt{1-\beta_{t}}, \sqrt{\beta_{t}})$ rather than $(1-\beta_{t}, \beta_{t})$ because of the variance property $\Var(aX) = a^{2}\Var(X)$: the square roots are needed so that $\mathbf{x}_{t}$ approximately follows a standard normal distribution once $t$ is sufficiently large. Indeed, suppose $t$ is large enough that $\mathbf{x}_{t-1}$ already follows a standard normal distribution. Then the covariance matrix of $\mathbf{x}_{t}$ is preserved as shown below (it still follows a standard normal distribution). Since $\mathbf{x}_{t-1}$ and $\boldsymbol{\epsilon}_{t}$ are independent, the following holds:
$$
\begin{align*}
\Cov(\mathbf{x}_{t})
&= \Cov(\sqrt{1-\beta_{t}}\mathbf{x}_{t-1} + \sqrt{\beta_{t}}\boldsymbol{\epsilon}_{t}) \\
&= (1-\beta_{t})\Cov(\mathbf{x}_{t-1}) + \beta_{t}\Cov(\boldsymbol{\epsilon}_{t}) \\
&= (1-\beta_{t})\mathbf{I} + \beta_{t}\mathbf{I} \\
&= \mathbf{I}
\end{align*}
$$
When $\mathbf{x}_{t-1}$ is fixed (as a constant), the conditional probability density function of $\mathbf{x}_{t}$ is as follows:
$$
q(\mathbf{x}_{t} | \mathbf{x}_{t-1}) = \mathcal{N}(\sqrt{1 - \beta_{t}}\mathbf{x}_{t-1}, \beta_{t}\mathbf{I}) \tag{3}
$$
(The paper uses the notation $\mathcal{N}(\mathbf{x}_{t}; \sqrt{1 - \beta_{t}}\mathbf{x}_{t-1}, \beta_{t}\mathbf{I})$ to state explicitly that this is the distribution of $\mathbf{x}_{t}$.) The following verifies that the mean vector and covariance matrix are as stated. Assume $\mathbf{x}_{t-1}$ is fixed (i.e., it is a constant).
$$
\begin{align*}
\mathbb{E} [\mathbf{x}_{t} | \mathbf{x}_{t-1}]
&= \mathbb{E} \left[ \sqrt{1 - \beta_{t}}\mathbf{x}_{t-1} + \sqrt{\beta_{t}}\boldsymbol{\epsilon}_{t} \right] \qquad (\mathbf{x}_{t-1} \text{ is constant vector})\\
&= \mathbb{E} \left[ \sqrt{1 - \beta_{t}}\mathbf{x}_{t-1} \right] + \mathbb{E} \left[ \sqrt{\beta_{t}}\boldsymbol{\epsilon}_{t} \right] \\
&= \sqrt{1 - \beta_{t}}\mathbf{x}_{t-1} + \sqrt{\beta_{t}}\mathbb{E} \left[ \boldsymbol{\epsilon}_{t} \right] \\
&= \sqrt{1 - \beta_{t}}\mathbf{x}_{t-1}
\end{align*}
$$
$$
\begin{align*}
\Cov (\mathbf{x}_{t} | \mathbf{x}_{t-1})
&= \Cov \left( \sqrt{1 - \beta_{t}}\mathbf{x}_{t-1} + \sqrt{\beta_{t}}\boldsymbol{\epsilon}_{t} \right) \qquad (\mathbf{x}_{t-1} \text{ is constant vector})\\
&= \Cov \left( \sqrt{\beta_{t}}\boldsymbol{\epsilon}_{t} \right) \\
&= \beta_{t}\Cov \left( \boldsymbol{\epsilon}_{t} \right) \\
&= \beta_{t} \mathbf{I}
\end{align*}
$$
Given an image $\mathbf{x}_{0}$, repeatedly adding Gaussian noise produces the degraded images $\mathbf{x}_{1}, \mathbf{x}_{2}, \dots, \mathbf{x}_{T}$ (note that the same notation is used for these random variables and their realizations). The probability density of this entire process is the conditional joint probability density function $q(\mathbf{x}_{1:T} | \mathbf{x}_{0})$, referred to as the forward process or diffusion process. Since $\mathbf{x}_{0:T}$ is Markov, by $(2)$ the diffusion process can be modeled as follows:
$$
q(\mathbf{x}_{1:T} | \mathbf{x}_{0}) = \prod\limits_{t=1}^{T} q(\mathbf{x}_{t} | \mathbf{x}_{t-1})
\tag{4}
$$
Meanwhile, one can also write down explicitly the conditional probability density function of $\mathbf{x}_{t}$ at any step $t$ given $\mathbf{x}_{0}$. Let $\alpha_{t} = 1 - \beta_{t}$; then:
$$
\begin{align*}
\mathbf{x}_{t}
&= \sqrt{\alpha_{t}}\mathbf{x}_{t-1} + \sqrt{1 - \alpha_{t}}\boldsymbol{\epsilon}_{t} \\
&= \sqrt{\alpha_{t}}\left( \sqrt{\alpha_{t-1}}\mathbf{x}_{t-2} + \sqrt{1 - \alpha_{t-1}}\boldsymbol{\epsilon}_{t-1} \right) + \sqrt{1 - \alpha_{t}}\boldsymbol{\epsilon}_{t} \\
&= \sqrt{\alpha_{t}}\sqrt{\alpha_{t-1}}\mathbf{x}_{t-2} + \sqrt{\alpha_{t}}\sqrt{1 - \alpha_{t-1}}\boldsymbol{\epsilon}_{t-1} + \sqrt{1 - \alpha_{t}}\boldsymbol{\epsilon}_{t}
\end{align*}
$$
The sum of two independent normal random variables is likewise normally distributed:
If $X_{1} \sim \mathcal{N}(\mu_{1}, \sigma_{1}^{2})$ and $X_{2} \sim \mathcal{N}(\mu_{2}, \sigma_{2}^{2})$ are independent,
$$
X_{1} + X_{2} \sim \mathcal{N}(\mu_{1} + \mu_{2}, \sigma_{1}^{2} + \sigma_{2}^{2})
$$
The same logic applies to random vectors.
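This fact can also be checked numerically in a few lines (a quick Monte Carlo illustration, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(loc=1.0, scale=2.0, size=1_000_000)    # N(1, 4)
x2 = rng.normal(loc=-3.0, scale=1.0, size=1_000_000)   # N(-3, 1)
s = x1 + x2

print(s.mean(), s.var())   # approximately -2 and 5, i.e. N(1 + (-3), 4 + 1)
```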
Therefore, the following holds:
$$
\begin{align*}
\sqrt{\alpha_{t}}\sqrt{1 - \alpha_{t-1}}\boldsymbol{\epsilon}_{t-1} + \sqrt{1 - \alpha_{t}}\boldsymbol{\epsilon}_{t}
&\sim \mathcal{N}(\mathbf{0}, \alpha_{t}(1-\alpha_{t-1})\mathbf{I} + (1 - \alpha_{t})\mathbf{I}) \\
&= \mathcal{N}(\mathbf{0}, (1 - \alpha_{t}\alpha_{t-1})\mathbf{I})
\end{align*}
$$
Thus, $\mathbf{x}_{t}$ is given by:
$$
\mathbf{x}_{t} = \sqrt{\alpha_{t}\alpha_{t-1}}\mathbf{x}_{t-2} + \sqrt{1 - \alpha_{t}\alpha_{t-1}}\boldsymbol{\epsilon}, \qquad \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})
$$
Repeating this process yields the following:
$$
\begin{align*}
\mathbf{x}_{t}
&= \sqrt{\alpha_{t}\alpha_{t-1}}\mathbf{x}_{t-2} + \sqrt{1 - \alpha_{t}\alpha_{t-1}}\boldsymbol{\epsilon} \\
&= \sqrt{\alpha_{t}\alpha_{t-1}}(\sqrt{\alpha_{t-2}}\mathbf{x}_{t-3} + \sqrt{1 - \alpha_{t-2}}\boldsymbol{\epsilon}_{t-2}) + \sqrt{1 - \alpha_{t}\alpha_{t-1}}\boldsymbol{\epsilon} \\
&= \sqrt{\alpha_{t}\alpha_{t-1}\alpha_{t-2}}\mathbf{x}_{t-3} + \sqrt{\alpha_{t}\alpha_{t-1}(1 - \alpha_{t-2})}\boldsymbol{\epsilon}_{t-2} + \sqrt{1 - \alpha_{t}\alpha_{t-1}}\boldsymbol{\epsilon} \\
&= \sqrt{\alpha_{t}\alpha_{t-1}\alpha_{t-2}}\mathbf{x}_{t-3} + \sqrt{1 - \alpha_{t}\alpha_{t-1}\alpha_{t-2}}\boldsymbol{\epsilon} \\
&\vdots \\
&= \sqrt{\prod\limits_{s=1}^{t} \alpha_{s}}\mathbf{x}_{0} + \sqrt{1 - \prod\limits_{s=1}^{t} \alpha_{s}}\boldsymbol{\epsilon} \\
&= \sqrt{\overline{\alpha}_{t}}\mathbf{x}_{0} + \sqrt{1 - \overline{\alpha}_{t}}\boldsymbol{\epsilon}, \qquad \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), \quad \overline{\alpha}_{t} = \prod\limits_{s=1}^{t} \alpha_{s} \tag{5}
\end{align*}
$$
Hence, given $\mathbf{x}_{0}$, the conditional probability density function of $\mathbf{x}_{t}$ is given as follows:
$$
q(\mathbf{x}_{t} | \mathbf{x}_{0}) = \mathcal{N}(\sqrt{\overline{\alpha}_{t}}\mathbf{x}_{0}, (1 - \overline{\alpha}_{t})\mathbf{I}) \tag{6}
$$
If the $\alpha_{t} \in (0, 1)$ (i.e., $0 < \beta_{t} < 1$) and $T$ is sufficiently large, the product $\overline{\alpha}_{T} = \prod_{s=1}^{T} \alpha_{s}$ becomes close to $0$, and thus the following holds:
$$
q(\mathbf{x}_{T} | \mathbf{x}_{0}) \to \mathcal{N}(\mathbf{0}, \mathbf{I}) \quad \text{as } T \to \infty
$$
These equations support the idea that $\mathbf{x}_{T}$, obtained by repeatedly adding Gaussian noise to $\mathbf{x}_{0}$, follows a standard normal distribution.
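Equations $(5)$ and $(6)$ also give a direct way to sample $\mathbf{x}_{t}$ from $\mathbf{x}_{0}$ in one shot. A small sketch (again assuming a hypothetical linear $\beta_{t}$ schedule):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # assumed schedule, for illustration only
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)          # \bar{alpha}_t = prod_{s <= t} alpha_s

print(alpha_bar[-1])                    # close to 0, so q(x_T | x_0) is nearly N(0, I)

def sample_xt(x0, t, rng):
    """Sample x_t directly from x_0 via eq. (5); t is zero-based (t = 0, ..., T-1)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
print(sample_xt(np.ones(4), T - 1, rng))
```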
Denoising Process (Reverse Process)

Now consider the reverse process (denoising) that recovers $\mathbf{x}_{0}$ by gradually removing noise, starting from Gaussian noise $\mathbf{x}_{T} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. According to Feller, for distributions such as the normal or the binomial, the reversal of the diffusion process has the same functional form as the forward process, provided the diffusion coefficient $\beta$ is sufficiently small.
“For both Gaussian and binomial diffusion, for continuous diffusion (limit of small step size $\beta$) the reversal of the diffusion process has the identical functional form as the forward process.”
– Sohl-Dickstein, Jascha, et al. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. International Conference on Machine Learning. PMLR, 2015.
This implies that the probability density function of denoising, denoted $p$, satisfies the following. Given that $q(\mathbf{x}_{t} | \mathbf{x}_{t-1})$ follows a normal distribution, its reverse process is as follows:
$$
p(\mathbf{x}_{t-1} | \mathbf{x}_{t}) = \mathcal{N}(\boldsymbol{\mu}(\mathbf{x}_{t}, t), \boldsymbol{\Sigma}(\mathbf{x}_{t}, t))
$$
The reverse process is also a Markov chain, so the entire denoising process, starting from standard normal noise $\mathbf{x}_{T}$ and ending in a specific image $\mathbf{x}_{0}$, can be expressed as follows:
$$
p(\mathbf{x}_{0:T}) = p(\mathbf{x}_{T}) \prod\limits_{t=1}^{T} p(\mathbf{x}_{t-1} | \mathbf{x}_{t})
$$
Unlike the diffusion process, the denoising process $p(\mathbf{x}_{0:T})$ is unknown and must be estimated. Let $p_{\theta}$ denote the approximation of $p$ with parameters $\theta$. Since the probability density function of $\mathbf{x}_{T}$ is known to be $p(\mathbf{x}_{T}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$, we have:
$$
p_{\theta} (\mathbf{x}_{0:T}) = p(\mathbf{x}_{T}) \prod\limits_{t=1}^{T} p_{\theta} (\mathbf{x}_{t-1} | \mathbf{x}_{t}) \tag{7}
$$
While we know that each $p(\mathbf{x}_{t-1} | \mathbf{x}_{t})$ follows a normal distribution, its mean vector and covariance matrix are unknown, and therefore also need estimation. Consequently, $p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})$ can be represented as follows:
$$
p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) = \mathcal{N}(\mu_{\theta}(\mathbf{x}_{t}, t), \Sigma_{\theta}(\mathbf{x}_{t}, t))
$$
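As a sketch of how $(7)$ would be used for sampling, assume a fixed covariance $\Sigma_{\theta} = \sigma_{t}^{2}\mathbf{I}$ and some learned mean function `mu_theta`; both are placeholders here, not the paper's actual parameterization:

```python
import numpy as np

def reverse_sample(mu_theta, sigmas, T, shape, rng):
    """Ancestral sampling through p_theta: draw x_T ~ N(0, I), then repeatedly
    draw x_{t-1} ~ N(mu_theta(x_t, t), sigma_t^2 I) down to t = 1.
    `sigmas` is an array of length T + 1, indexed by the step t."""
    x = rng.standard_normal(shape)              # x_T
    for t in range(T, 0, -1):
        mean = mu_theta(x, t)                   # learned mean (placeholder)
        noise = rng.standard_normal(shape) if t > 1 else 0.0
        x = mean + sigmas[t] * noise            # x_{t-1}
    return x                                    # approximate sample from p_theta(x_0)
```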
To generate new data, one seeks the probability distribution $p(\mathbf{x}_{0})$ of the data $\mathbf{x}_{0}$. Knowing it would allow new data to be generated by random sampling from the same distribution as the existing data. The function approximating $p(\mathbf{x}_{0})$ with parameters $\theta$ is $p_{\theta}(\mathbf{x}_{0})$. To make $p_{\theta}(\mathbf{x}_{0})$ close to $p(\mathbf{x}_{0})$, we treat $p_{\theta}(\mathbf{x}_{0})$ as a likelihood and apply maximum likelihood estimation. Since $p$ is assumed normal, working with the log-likelihood simplifies the mathematics and connects naturally with relative entropy (Kullback-Leibler divergence). Maximizing $\log p_{\theta}(\mathbf{x}_{0})$ is equivalent to minimizing $-\log p_{\theta}(\mathbf{x}_{0})$, which is our starting point (gradient descent will be used). Hence the aim is to find $\theta$ that minimizes $\mathbb{E}_{q(\mathbf{x}_{0})}[-\log p_{\theta}(\mathbf{x}_{0})]$. By the definition of marginal and conditional probability density functions, $p_{\theta}(\mathbf{x}_{0})$ can be expressed as follows:
$$
\begin{align*}
p_{\theta}(\mathbf{x}_{0})
&= \int p_{\theta}(\mathbf{x}_{0:T}) d\mathbf{x}_{1:T} \\
&= \int p_{\theta}(\mathbf{x}_{0:T}) \dfrac{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})}{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})} d\mathbf{x}_{1:T} \\
&= \int q(\mathbf{x}_{1:T} | \mathbf{x}_{0}) \dfrac{ p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})} d\mathbf{x}_{1:T} \\
&= \mathbb{E}_{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})} \left[ \dfrac{ p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})} \right]
\end{align*}
$$
We can break down the expectation we intend to minimize as follows. By Jensen's inequality, $-\log (\mathbb{E}[X]) \le \mathbb{E}[-\log X]$, we have:
$$
\begin{align*}
\mathbb{E}_{q(\mathbf{x}_{0})}[-\log p_{\theta}(\mathbf{x}_{0})]
&= \mathbb{E}_{q(\mathbf{x}_{0})} \left[ -\log \left( \mathbb{E}_{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})} \left[ \dfrac{ p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})} \right] \right) \right] \\
&\le \mathbb{E}_{q(\mathbf{x}_{0})} \left[ \mathbb{E}_{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})}\left( -\log \dfrac{ p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})} \right) \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ -\log \dfrac{ p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})} \right]
\end{align*}
$$
The final equality holds by the definition of conditional probability density functions, $q(X, Y) = q(X) q(Y | X)$. Writing the two expectations out as integrals may make this clearer. Substituting $(4)$ and $(7)$ into the bound yields:
$$
\begin{align*}
\mathbb{E}_{q(\mathbf{x}_{0})}[-\log p_{\theta}(\mathbf{x}_{0})]
&\le \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ -\log \dfrac{ p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T} | \mathbf{x}_{0})} \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ -\log \dfrac{ p(\mathbf{x}_{T}) \prod\limits_{t=1}^{T} p_{\theta} (\mathbf{x}_{t-1} | \mathbf{x}_{t})}{\prod\limits_{t=1}^{T} q(\mathbf{x}_{t} | \mathbf{x}_{t-1})} \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ -\log p(\mathbf{x}_{T}) - \sum\limits_{t=1}^{T} \log \dfrac{ p_{\theta} (\mathbf{x}_{t-1} | \mathbf{x}_{t})}{ q(\mathbf{x}_{t} | \mathbf{x}_{t-1})} \right] =: L
\end{align*}
$$
The last equality follows from the properties of logarithms. The left-hand side is our actual minimization goal, but it cannot be computed because the distribution $q(\mathbf{x}_{0})$ of the real data is unknown. The right-hand side, however, involves only computable quantities: $p(\mathbf{x}_{T})$ is assumed to be a standard normal and is therefore known, and $q(\mathbf{x}_{t} | \mathbf{x}_{t-1})$ is obtained by actually running the diffusion process. By the inequality, minimizing the computable right-hand side indirectly minimizes the left-hand side.
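The direction of Jensen's inequality used above, $-\log (\mathbb{E}[X]) \le \mathbb{E}[-\log X]$, can be illustrated numerically (a toy check with an arbitrary positive random variable, unrelated to the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # an arbitrary positive random variable

print(-np.log(x.mean()))       # -log(E[X]), approximately -0.5 here
print((-np.log(x)).mean())     # E[-log X], approximately 0 here, so the bound holds
```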
Through algebraic manipulation, the loss function can be rewritten in a form that is better suited to computation. As it stands, the term $\log \dfrac{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})}{q(\mathbf{x}_{t} | \mathbf{x}_{t-1})}$ in $L$ would have to be estimated by sampling from $p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})$, and the resulting variance destabilizes training. Let us therefore rewrite the bound in terms of the Kullback-Leibler divergence (KLD). Since $p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})$ and $q(\mathbf{x}_{t} | \mathbf{x}_{t-1})$ are assumed normal, the KLD has the closed form given below, so the loss can be computed exactly without sampling variance, which makes training more robust and efficient.
Relative Entropy of Normal Distributions:
The relative entropy between two multivariate normal distributions $N(\boldsymbol{\mu}, \Sigma)$ and $N(\boldsymbol{\mu}_{1}, \Sigma_{1})$, with $\boldsymbol{\mu}, \boldsymbol{\mu}_{1} \in \mathbb{R}^{D}$, is given by the following:
$$
D_{\text{KL}}\big( N(\boldsymbol{\mu}, \Sigma) \| N(\boldsymbol{\mu}_{1}, \Sigma_{1}) \big)
= \dfrac{1}{2} \left[ \log \left( \dfrac{|\Sigma_{1}|}{|\Sigma|} \right) + \Tr(\Sigma_{1}^{-1}\Sigma) + (\boldsymbol{\mu} - \boldsymbol{\mu}_{1})^{\mathsf{T}} \Sigma_{1}^{-1} (\boldsymbol{\mu} - \boldsymbol{\mu}_{1}) - D \right]
$$
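For the diagonal covariances that appear in DDPM, this closed form is only a few lines of code (a sketch for the diagonal case only):

```python
import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ); all arguments are arrays of shape (D,)."""
    d = mu0.shape[0]
    return 0.5 * (
        np.sum(np.log(var1) - np.log(var0))     # log |Sigma_1| / |Sigma|
        + np.sum(var0 / var1)                   # Tr(Sigma_1^{-1} Sigma)
        + np.sum((mu0 - mu1) ** 2 / var1)       # (mu - mu_1)^T Sigma_1^{-1} (mu - mu_1)
        - d
    )

# identical Gaussians have zero divergence
print(kl_diag_gaussians(np.zeros(3), np.ones(3), np.zeros(3), np.ones(3)))   # 0.0
```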
$$
\begin{align*}
L
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ - \log p(\mathbf{x}_{T}) - \sum\limits_{t=1}^{T} \log \dfrac{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})}{q(\mathbf{x}_{t} | \mathbf{x}_{t-1})} \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ - \log p(\mathbf{x}_{T}) - \sum\limits_{t=2}^{T} \log \dfrac{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})}{q(\mathbf{x}_{t} | \mathbf{x}_{t-1})} - \log \dfrac{p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1})}{q(\mathbf{x}_{1} | \mathbf{x}_{0})} \right]
\end{align*}
$$
Examining the denominator of the second term, the following rewriting is possible:
$$
\begin{align*}
q(\mathbf{x}_{t} | \mathbf{x}_{t-1}) &= q(\mathbf{x}_{t} | \mathbf{x}_{t-1}, \mathbf{x}_{0}) &\text{by Markov property} \\
&= \dfrac{q(\mathbf{x}_{t}, \mathbf{x}_{t-1}, \mathbf{x}_{0})}{q(\mathbf{x}_{t-1}, \mathbf{x}_{0})} &\text{by def. of conditional pdf} \\
&= \dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) q(\mathbf{x}_{t}, \mathbf{x}_{0})}{q(\mathbf{x}_{t-1}, \mathbf{x}_{0})} &\text{by def. of conditional pdf} \\
&= q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \dfrac{q(\mathbf{x}_{t}, \mathbf{x}_{0})}{q(\mathbf{x}_{t-1}, \mathbf{x}_{0})} & \\
&= q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \dfrac{q(\mathbf{x}_{t}, \mathbf{x}_{0}) q(\mathbf{x}_{0})}{q(\mathbf{x}_{t-1}, \mathbf{x}_{0}) q(\mathbf{x}_{0})} & \\
&= q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \dfrac{q(\mathbf{x}_{t}, \mathbf{x}_{0}) }{ q(\mathbf{x}_{0})} \dfrac{q(\mathbf{x}_{0})}{q(\mathbf{x}_{t-1}, \mathbf{x}_{0})} & \\
&= q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \dfrac{q(\mathbf{x}_{t}| \mathbf{x}_{0})}{q(\mathbf{x}_{t-1}| \mathbf{x}_{0})} &\text{by def. of conditional pdf}
\end{align*}
$$
Substituting gives the formulation:
$$
\begin{align*}
L
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ - \log p(\mathbf{x}_{T}) - \sum\limits_{t=2}^{T} \log \left( \dfrac{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})}{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})}\dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{0})}{q(\mathbf{x}_{t} | \mathbf{x}_{0})} \right) - \log \dfrac{p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1})}{q(\mathbf{x}_{1} | \mathbf{x}_{0})} \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ - \log p(\mathbf{x}_{T}) - \sum\limits_{t=2}^{T} \log \dfrac{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})}{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})} - \sum\limits_{t=2}^{T} \log \dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{0})}{q(\mathbf{x}_{t} | \mathbf{x}_{0})} - \log \dfrac{p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1})}{q(\mathbf{x}_{1} | \mathbf{x}_{0})} \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ - \log p(\mathbf{x}_{T}) - \sum\limits_{t=2}^{T} \log \dfrac{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})}{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})} \right. \\
&\qquad\qquad\qquad \left. -\log \left( \dfrac{q(\mathbf{x}_{T-1} | \mathbf{x}_{0})}{q(\mathbf{x}_{T} | \mathbf{x}_{0})} \cdot \dfrac{q(\mathbf{x}_{T-2} | \mathbf{x}_{0})}{q(\mathbf{x}_{T-1} | \mathbf{x}_{0})} \cdots \dfrac{q(\mathbf{x}_{1} | \mathbf{x}_{0})}{q(\mathbf{x}_{2} | \mathbf{x}_{0})} \right) - \log \dfrac{p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1})}{q(\mathbf{x}_{1} | \mathbf{x}_{0})} \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ - \log p(\mathbf{x}_{T}) - \sum\limits_{t=2}^{T} \log \dfrac{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})}{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})} \right. \\
&\qquad\qquad\qquad \left. - \log \left( \dfrac{q(\mathbf{x}_{T-1} | \mathbf{x}_{0})}{q(\mathbf{x}_{T} | \mathbf{x}_{0})} \cdot \dfrac{q(\mathbf{x}_{T-2} | \mathbf{x}_{0})}{q(\mathbf{x}_{T-1} | \mathbf{x}_{0})} \cdots \dfrac{q(\mathbf{x}_{1} | \mathbf{x}_{0})}{q(\mathbf{x}_{2} | \mathbf{x}_{0})} \cdot \dfrac{p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1})}{q(\mathbf{x}_{1} | \mathbf{x}_{0})} \right) \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ - \log p(\mathbf{x}_{T}) - \sum\limits_{t=2}^{T} \log \dfrac{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})}{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})} \right. \\
&\qquad\qquad\qquad \left. - \log \left( \dfrac{\color{red}{\cancel{\color{black}q(\mathbf{x}_{T-1} | \mathbf{x}_{0})}}}{q(\mathbf{x}_{T} | \mathbf{x}_{0})} \cdot \dfrac{\color{green}{\bcancel{\color{black}q(\mathbf{x}_{T-2} | \mathbf{x}_{0})}}}{\color{red}{\cancel{\color{black}q(\mathbf{x}_{T-1} | \mathbf{x}_{0})}}} \cdots \dfrac{\color{purple}{\cancel{\color{black}q(\mathbf{x}_{1} | \mathbf{x}_{0})}}}{\color{green}{\bcancel{\color{black}q(\mathbf{x}_{2} | \mathbf{x}_{0})}}} \cdot \dfrac{p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1})}{\color{purple}{\cancel{\color{black}q(\mathbf{x}_{1} | \mathbf{x}_{0})}}} \right) \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ - \log p(\mathbf{x}_{T}) - \sum\limits_{t=2}^{T} \log \dfrac{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})}{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})} - \log \dfrac{p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1})}{q(\mathbf{x}_{T} | \mathbf{x}_{0})} \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ - \log \dfrac{p(\mathbf{x}_{T})}{q(\mathbf{x}_{T} | \mathbf{x}_{0})} - \sum\limits_{t=2}^{T} \log \dfrac{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})}{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})} - \log p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1}) \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ \log \dfrac{q(\mathbf{x}_{T} | \mathbf{x}_{0})}{p(\mathbf{x}_{T})} + \sum\limits_{t=2}^{T} \log \dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})}{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})} - \log p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1}) \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ \log \dfrac{q(\mathbf{x}_{T} | \mathbf{x}_{0})}{p(\mathbf{x}_{T})} \right] + \sum\limits_{t=2}^{T} \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ \log \dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})}{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})} \right] - \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ \log p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1}) \right] \tag{8}
\end{align*}
$$
The first term of $(8)$ can be rewritten in KLD form, as shown below.
$$
\begin{align*}
&\mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ \log \dfrac{q(\mathbf{x}_{T} | \mathbf{x}_{0})}{p(\mathbf{x}_{T})} \right] \\
&= \int q(\mathbf{x}_{0:T}) \log \dfrac{q(\mathbf{x}_{T} | \mathbf{x}_{0})}{p(\mathbf{x}_{T})} d\mathbf{x}_{0:T} \\
&= \int q(\mathbf{x}_{0}) q(\mathbf{x}_{1:T} | \mathbf{x}_{0}) \log \dfrac{q(\mathbf{x}_{T} | \mathbf{x}_{0})}{p(\mathbf{x}_{T})} d\mathbf{x}_{0:T} &\text{by def. of conditional pdf} \\
&= \int\int\int q(\mathbf{x}_{0}) q(\mathbf{x}_{1:T} | \mathbf{x}_{0}) \log \dfrac{q(\mathbf{x}_{T} | \mathbf{x}_{0})}{p(\mathbf{x}_{T})} d\mathbf{x}_{1:T-1} d\mathbf{x}_{T} d\mathbf{x}_{0} \\
&= \int\int q(\mathbf{x}_{0}) q(\mathbf{x}_{T} | \mathbf{x}_{0}) \log \dfrac{q(\mathbf{x}_{T} | \mathbf{x}_{0})}{p(\mathbf{x}_{T})} d\mathbf{x}_{T} d\mathbf{x}_{0} &\text{by def. of marginal pdf} \\
&= \mathbb{E}_{q(\mathbf{x}_{0})}\left[ \int q(\mathbf{x}_{T} | \mathbf{x}_{0}) \log \dfrac{q(\mathbf{x}_{T} | \mathbf{x}_{0})}{p(\mathbf{x}_{T})} d\mathbf{x}_{T} \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{0})} \Big[ D_{\text{KL}} \left( q(\mathbf{x}_{T} | \mathbf{x}_{0}) \| p(\mathbf{x}_{T}) \right) \Big]
\end{align*}
$$
Note that the paper casually writes this as $\mathbb{E}_{q(\mathbf{x}_{0:T})} = \mathbb{E}_{q} = \mathbb{E}_{q(\mathbf{x}_{0})}$. Since the quantity inside the expectation depends only on $\mathbf{x}_{0}$, integrating over the remaining variables changes nothing, but the paper never explains this, so the reader needs to notice it on their own.
Now consider the second term in $(8)$. As with the first term, we rearrange the integral over $q(\mathbf{x}_{0:T})$ so that $q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})$ appears in KLD form.
$$
\begin{align*}
&\sum\limits_{t=2}^{T} \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ \log \dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})}{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})} \right] \\
&= \sum\limits_{t=2}^{T} \int q(\mathbf{x}_{0:T}) \log \dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})}{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})} d\mathbf{x}_{0:T} \\
&= \sum\limits_{t=2}^{T} \int q(\mathbf{x}_{0}) q(\mathbf{x}_{1:T} | \mathbf{x}_{0}) \log \dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})}{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})} d\mathbf{x}_{0:T} \\
&= \sum\limits_{t=2}^{T} \int q(\mathbf{x}_{0}) q(\mathbf{x}_{1:t-1}, \mathbf{x}_{t+1:T} | \mathbf{x}_{t}, \mathbf{x}_{0}) q(\mathbf{x}_{t} | \mathbf{x}_{0}) \log \dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})}{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})} d\mathbf{x}_{0:T} \\
&= \sum\limits_{t=2}^{T} \int\int\int q(\mathbf{x}_{0}) q(\mathbf{x}_{1:t-1}, \mathbf{x}_{t+1:T} | \mathbf{x}_{t}, \mathbf{x}_{0}) q(\mathbf{x}_{t} | \mathbf{x}_{0}) \log \dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})}{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})} d\mathbf{x}_{(1:t-2,t+1:T)} d\mathbf{x}_{t-1:t} d\mathbf{x}_{0} \\
&= \sum\limits_{t=2}^{T} \int\int q(\mathbf{x}_{0}) q(\mathbf{x}_{t-1}| \mathbf{x}_{t}, \mathbf{x}_{0}) q(\mathbf{x}_{t} | \mathbf{x}_{0}) \log \dfrac{q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})}{p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t})} d\mathbf{x}_{t-1:t} d\mathbf{x}_{0} \\
&= \sum\limits_{t=2}^{T} \int\int q(\mathbf{x}_{0}) q(\mathbf{x}_{t} | \mathbf{x}_{0}) D_{\text{KL}} \left( q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \| p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) \right) d\mathbf{x}_{t} d\mathbf{x}_{0} \\
&= \int q(\mathbf{x}_{0}) \left[ \sum\limits_{t=2}^{T} \int q(\mathbf{x}_{t} | \mathbf{x}_{0}) D_{\text{KL}} \left( q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \| p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) \right) d\mathbf{x}_{t} \right] d\mathbf{x}_{0} \\
&= \mathbb{E}_{q(\mathbf{x}_{0})} \left[ \sum\limits_{t=2}^{T} \mathbb{E}_{q(\mathbf{x}_{t} | \mathbf{x}_{0})} \Big[ D_{\text{KL}} \left( q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \| p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) \right) \Big] \right]
\end{align*}
$$
Once again, beware of the abuse of notation: the paper writes this simply as $\displaystyle \mathbb{E}_{q} \left[ \sum\limits_{t=2}^{T} D_{\text{KL}} \left( q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \| p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) \right) \right]$, since adding back integrals over variables that do not appear inside the expectation changes nothing.
Finally, the whole of $(8)$ can be written in terms of KLDs:
$$
\begin{align*}
&L = \mathbb{E}_{q(\mathbf{x}_{0})} \Big[ D_{\text{KL}} \left( q(\mathbf{x}_{T} | \mathbf{x}_{0}) \| p(\mathbf{x}_{T}) \right) \Big] \\
&\qquad +\mathbb{E}_{q(\mathbf{x}_{0})} \left[ \sum\limits_{t=2}^{T} \mathbb{E}_{q(\mathbf{x}_{t} | \mathbf{x}_{0})} \Big[ D_{\text{KL}} \left( q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \| p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) \right) \Big] \right] - \mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ \log p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1}) \right]
\end{align*}
$$
Collecting everything back under a single expectation (again by the same kind of integral manipulation) yields:
$$
\mathbb{E}_{q(\mathbf{x}_{0:T})} \bigg[ \underbrace{D_{\text{KL}} \left( q(\mathbf{x}_{T} | \mathbf{x}_{0}) \| p(\mathbf{x}_{T}) \right)}_{L_{T}} + \sum\limits_{t=2}^{T} \underbrace{\Big[ D_{\text{KL}} \left( q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \| p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) \right) \Big]}_{L_{t-1}} - \underbrace{\log p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1})}_{L_{0}} \bigg]
$$
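Of these terms, $L_{T}$ contains no learnable parameters, since $q(\mathbf{x}_{T} | \mathbf{x}_{0}) = \mathcal{N}(\sqrt{\overline{\alpha}_{T}}\mathbf{x}_{0}, (1 - \overline{\alpha}_{T})\mathbf{I})$ by $(6)$ and $p(\mathbf{x}_{T}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$, so it can be evaluated in closed form. A small sketch (reusing the hypothetical linear schedule from above):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # assumed schedule, for illustration only
alpha_bar_T = np.prod(1.0 - betas)

def L_T(x0):
    """KL( N(sqrt(abar_T) x0, (1 - abar_T) I) || N(0, I) ) for a flattened image x0."""
    d = x0.size
    mu = np.sqrt(alpha_bar_T) * x0
    var = 1.0 - alpha_bar_T
    return 0.5 * (np.sum(mu ** 2) + d * var - d - d * np.log(var))

print(L_T(np.zeros(4)))   # essentially 0, since q(x_T | x_0) is already close to N(0, I)
```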
The expression for $q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})$ appearing above can be derived as follows:
Properties of Conditional Probability
$$
p(\mathbf{x} | \mathbf{y}, \mathbf{z})
= \dfrac{p(\mathbf{x}, \mathbf{y} | \mathbf{z}) }{p(\mathbf{y} | \mathbf{z})}
$$
$$
\begin{align*}
q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})
&= \dfrac{q(\mathbf{x}_{t}, \mathbf{x}_{t-1} | \mathbf{x}_{0})}{q(\mathbf{x}_{t} | \mathbf{x}_{0})} \\
&= \dfrac{q(\mathbf{x}_{t} | \mathbf{x}_{t-1}, \mathbf{x}_{0}) q(\mathbf{x}_{t-1} | \mathbf{x}_{0})}{q(\mathbf{x}_{t} | \mathbf{x}_{0})} \\
&= \dfrac{q(\mathbf{x}_{t} | \mathbf{x}_{t-1}) q(\mathbf{x}_{t-1} | \mathbf{x}_{0})}{q(\mathbf{x}_{t} | \mathbf{x}_{0})}
\end{align*}
$$
The final equality holds by the Markov property of $\left\{ \mathbf{x}_{t} \right\}$. Substituting the probability density functions computed earlier in $(3)$ and $(6)$ gives the following.
$$
\begin{align*}
&q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \\
&= \dfrac{q(\mathbf{x}_{t} | \mathbf{x}_{t-1}) q(\mathbf{x}_{t-1} | \mathbf{x}_{0})}{q(\mathbf{x}_{t} | \mathbf{x}_{0})} \\
&= C \dfrac{\exp\left( - \dfrac{1}{2} \dfrac{\left( \mathbf{x}_{t} - \sqrt{\alpha_{t}}\mathbf{x}_{t-1} \right)^{2}}{(1-\alpha_{t})} \right) \exp\left( - \dfrac{1}{2} \dfrac{\left( \mathbf{x}_{t-1} - \sqrt{\overline{\alpha}_{t-1}}\mathbf{x}_{0} \right)^{2}}{(1-\overline{\alpha}_{t-1})} \right)}{\exp\left( - \dfrac{1}{2} \dfrac{\left( \mathbf{x}_{t} - \sqrt{\overline{\alpha}_{t}}\mathbf{x}_{0} \right)^{2}}{(1-\overline{\alpha}_{t})} \right)} \\
&= C \exp\left[ -\dfrac{1}{2} \left( \dfrac{1}{1-\alpha_{t}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{t} - \dfrac{2\sqrt{\alpha_{t}}}{1-\alpha_{t}} \mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{t-1} + \dfrac{\alpha_{t}}{1-\alpha_{t}} \mathbf{x}_{t-1}^{\mathsf{T}}\mathbf{x}_{t-1} \right.\right.\\
&\qquad\qquad\qquad \quad + \dfrac{1}{1-\overline{\alpha}_{t-1}}\mathbf{x}_{t-1}^{\mathsf{T}}\mathbf{x}_{t-1} - \dfrac{2\sqrt{\overline{\alpha}_{t-1}}}{1-\overline{\alpha}_{t-1}}\mathbf{x}_{0}^{\mathsf{T}}\mathbf{x}_{t-1} + \dfrac{\overline{\alpha}_{t-1}}{1-\overline{\alpha}_{t-1}}\mathbf{x}_{0}^{\mathsf{T}}\mathbf{x}_{0} \\
&\qquad\qquad\qquad\quad \left.\left. - \dfrac{1}{1-\overline{\alpha}_{t}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{t} + \dfrac{2\sqrt{\overline{\alpha}_{t}}}{1-\overline{\alpha}_{t}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{0} - \dfrac{\overline{\alpha}_{t}}{1-\overline{\alpha}_{t}}\mathbf{x}_{0}^{\mathsf{T}}\mathbf{x}_{0} \right)\right] \tag{9}
\end{align*}
$$
Here $C = \dfrac{1}{\sqrt{ \left(2\pi \dfrac{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})}{1-\overline{\alpha}_{t}} \right)^{D} }}$ is a constant. Since we want $q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})$, collect the terms in the exponent that involve $\mathbf{x}_{t-1}$:
$$
\begin{align*}
&\left( \dfrac{\alpha_{t}}{1-\alpha_{t}} + \dfrac{1}{1-\overline{\alpha}_{t-1}} \right) \mathbf{x}_{t-1}^{\mathsf{T}}\mathbf{x}_{t-1} - 2\left( \dfrac{\sqrt{\alpha_{t}}}{1-\alpha_{t}} \mathbf{x}_{t}^{\mathsf{T}} + \dfrac{\sqrt{\overline{\alpha}_{t-1}}}{1-\overline{\alpha}_{t-1}}\mathbf{x}_{0}^{\mathsf{T}} \right)\mathbf{x}_{t-1} \\
&\qquad + \left[ \left( \dfrac{1}{1-\alpha_{t}} - \dfrac{1}{1-\overline{\alpha}_{t}} \right)\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{t} + 2\dfrac{\sqrt{\overline{\alpha}_{t}}}{1-\overline{\alpha}_{t}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{0} + \left( \dfrac{\overline{\alpha}_{t-1}}{1-\overline{\alpha}_{t-1}} - \dfrac{\overline{\alpha}_{t}}{1-\overline{\alpha}_{t}} \right)\mathbf{x}_{0}^{\mathsf{T}}\mathbf{x}_{0}\right]
\end{align*}
$$
Combine constants from each term through common denominators:
( α t 1 − α t + 1 1 − α ‾ t − 1 ) = α t − α t α ‾ t − 1 + 1 − α t ( 1 − α t ) ( 1 − α ‾ t − 1 ) = 1 − α ‾ t ( 1 − α t ) ( 1 − α ‾ t − 1 )
\left( \dfrac{\alpha_{t}}{1-\alpha_{t}} + \dfrac{1}{1-\overline{\alpha}_{t-1}} \right)
= \dfrac{\alpha_{t} - \alpha_{t}\overline{\alpha}_{t-1} + 1 - \alpha_{t}}{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})}
= \dfrac{1 - \overline{\alpha}_{t}}{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})}
\left( \dfrac{1}{1-\alpha_{t}} - \dfrac{1}{1-\overline{\alpha}_{t}} \right)
= \dfrac{\alpha_{t} - \overline{\alpha}_{t}}{(1-\alpha_{t})(1-\overline{\alpha}_{t})}
= \dfrac{\alpha_{t}(1-\overline{\alpha}_{t-1})}{(1-\alpha_{t})(1-\overline{\alpha}_{t})}
\left( \dfrac{\overline{\alpha}_{t-1}}{1-\overline{\alpha}_{t-1}} - \dfrac{\overline{\alpha}_{t}}{1-\overline{\alpha}_{t}} \right)
= \dfrac{\overline{\alpha}_{t-1} - \overline{\alpha}_{t}}{(1-\overline{\alpha}_{t-1})(1-\overline{\alpha}_{t})}
= \dfrac{\overline{\alpha}_{t-1}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t-1})(1-\overline{\alpha}_{t})}
Substituting these back, the exponent becomes:
\begin{align*}
& \dfrac{1 - \overline{\alpha}_{t}}{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})} \mathbf{x}_{t-1}^{\mathsf{T}}\mathbf{x}_{t-1} - 2\left( \dfrac{\sqrt{\alpha_{t}}}{1-\alpha_{t}} \mathbf{x}_{t}^{\mathsf{T}} + \dfrac{\sqrt{\overline{\alpha}_{t-1}}}{1-\overline{\alpha}_{t-1}}\mathbf{x}_{0}^{\mathsf{T}} \right)\mathbf{x}_{t-1} \\
&\qquad + \left[ \dfrac{\alpha_{t}(1-\overline{\alpha}_{t-1})}{(1-\alpha_{t})(1-\overline{\alpha}_{t})}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{t} + 2\dfrac{\sqrt{\overline{\alpha}_{t}}}{1-\overline{\alpha}_{t}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{0} + \dfrac{\overline{\alpha}_{t-1}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t-1})(1-\overline{\alpha}_{t})}\mathbf{x}_{0}^{\mathsf{T}}\mathbf{x}_{0}\right]
\end{align*}
Factoring the coefficient of \mathbf{x}_{t-1}^{\mathsf{T}}\mathbf{x}_{t-1} out of the whole expression gives:
\begin{align*}
& \frac{1 - \overline{\alpha}_{t}}{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})}
\left( \mathbf{x}_{t-1}^{\mathsf{T}}\mathbf{x}_{t-1} - 2 \left( \frac{(1-\overline{\alpha}_{t-1})\sqrt{\alpha_{t}}}{1 - \overline{\alpha}_{t}} \mathbf{x}_{t}^{\mathsf{T}} + \frac{(1-\alpha_{t})\sqrt{\overline{\alpha}_{t-1}}}{1 - \overline{\alpha}_{t}}\mathbf{x}_{0}^{\mathsf{T}} \right)\mathbf{x}_{t-1} \right. \\
&\qquad +\left. \left[ \frac{\alpha_{t}(1-\overline{\alpha}_{t-1})^{2}}{(1-\overline{\alpha}_{t})^{2}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{t} + 2\frac{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})\sqrt{\overline{\alpha}_{t}}}{(1 - \overline{\alpha}_{t})^{2}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{0} + \frac{\overline{\alpha}_{t-1}(1 - \alpha_{t})^{2}}{(1-\overline{\alpha}_{t})^{2}}\mathbf{x}_{0}^{\mathsf{T}}\mathbf{x}_{0}\right] \right)
\end{align*} \tag{10}
The bracketed term can be rewritten as a perfect square:
\begin{align*}
&\left[ \dfrac{\alpha_{t}(1-\overline{\alpha}_{t-1})^{2}}{(1-\overline{\alpha}_{t})^{2}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{t} + 2\dfrac{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})\sqrt{\alpha_{t}\overline{\alpha}_{t-1}}}{(1 - \overline{\alpha}_{t})^{2}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{0} + \dfrac{\overline{\alpha}_{t-1}(1 - \alpha_{t})^{2}}{(1-\overline{\alpha}_{t})^{2}}\mathbf{x}_{0}^{\mathsf{T}}\mathbf{x}_{0}\right] \\
&=\left[ \dfrac{\sqrt{\alpha_{t}}^{2}(1-\overline{\alpha}_{t-1})^{2}}{(1-\overline{\alpha}_{t})^{2}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{t} + 2\dfrac{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})\sqrt{\alpha_{t}\overline{\alpha}_{t-1}}}{(1 - \overline{\alpha}_{t})^{2}}\mathbf{x}_{t}^{\mathsf{T}}\mathbf{x}_{0} + \dfrac{\sqrt{\overline{\alpha}_{t-1}}^{2}(1 - \alpha_{t})^{2}}{(1-\overline{\alpha}_{t})^{2}}\mathbf{x}_{0}^{\mathsf{T}}\mathbf{x}_{0}\right] \\
&=\left[ \dfrac{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{t} + \dfrac{\sqrt{\overline{\alpha}_{t-1}}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{0}\right]^{2} \\
\end{align*}
Plugging this into (10) gives:
\begin{align*}
& \dfrac{1 - \overline{\alpha}_{t}}{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})}
\left( \mathbf{x}_{t-1}^{\mathsf{T}}\mathbf{x}_{t-1} - 2 \left( \dfrac{(1-\overline{\alpha}_{t-1})\sqrt{\alpha_{t}}}{1 - \overline{\alpha}_{t}} \mathbf{x}_{t}^{\mathsf{T}} + \dfrac{(1-\alpha_{t})\sqrt{\overline{\alpha}_{t-1}}}{1 - \overline{\alpha}_{t}}\mathbf{x}_{0}^{\mathsf{T}} \right)\mathbf{x}_{t-1} \right. \\
&\qquad \left. +\left[ \dfrac{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{t} + \dfrac{\sqrt{\overline{\alpha}_{t-1}}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{0}\right]^{2} \right) \\
&= \dfrac{1 - \overline{\alpha}_{t}}{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})} \left( \mathbf{x}_{t-1} - \left[ \dfrac{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{t} + \dfrac{\sqrt{\overline{\alpha}_{t-1}}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{0}\right] \right)^{2}
\end{align*}
Substituting this exponent back into (9) gives:
\begin{array}{l}
q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \\
= \frac{1}{\sqrt{ \left(2\pi \frac{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})}{(1-\overline{\alpha}_{t})} \right)^{D} }} \exp\left[ -\dfrac{1}{2} \dfrac{\left( \mathbf{x}_{t-1} - \left[ \frac{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{t} + \frac{\sqrt{\overline{\alpha}_{t-1}}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{0}\right] \right)^{2}}{\frac{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})}{(1 - \overline{\alpha}_{t})}} \right]
\end{array}
This is exactly the density of a normal distribution:
\begin{align*}
q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0})
&= \mathcal{N} \left( \dfrac{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{t} + \dfrac{\sqrt{\overline{\alpha}_{t-1}}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{0}, \dfrac{(1-\alpha_{t})(1-\overline{\alpha}_{t-1})}{(1 - \overline{\alpha}_{t})}\mathbf{I} \right) \\
&= \mathcal{N} ( \tilde{\boldsymbol{\mu}}_{t}(\mathbf{x}_{t}, \mathbf{x}_{0}), \tilde{\beta}_{t} \mathbf{I})
\end{align*}
\text{where}\qquad \tilde{\boldsymbol{\mu}}_{t}(\mathbf{x}_{t}, \mathbf{x}_{0}) = \frac{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{t} + \frac{\sqrt{\overline{\alpha}_{t-1}}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{0},
\tilde{\beta}_{t} = \frac{(1-\overline{\alpha}_{t-1}) \beta_{t}}{(1 - \overline{\alpha}_{t})}
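To make these formulas concrete, the posterior mean \tilde{\boldsymbol{\mu}}_{t} and variance \tilde{\beta}_{t} can be computed directly from a \beta_{t} schedule. Below is a minimal PyTorch sketch, assuming the paper's linear schedule from \beta_{1} = 10^{-4} to \beta_{T} = 0.02 with T = 1000; the function name posterior_mean_variance is illustrative, not from the paper.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # beta_t for t = 1..T (linear schedule from the paper)
alphas = 1.0 - betas                         # alpha_t = 1 - beta_t
alphas_bar = torch.cumprod(alphas, dim=0)    # alpha_bar_t = prod_{s <= t} alpha_s

def posterior_mean_variance(x0, xt, t):
    """q(x_{t-1} | x_t, x_0) = N(mu_tilde_t, beta_tilde_t * I); t is a 0-based index."""
    a_t, ab_t = alphas[t], alphas_bar[t]
    ab_prev = alphas_bar[t - 1] if t > 0 else torch.tensor(1.0)   # alpha_bar_0 = 1
    mean = (torch.sqrt(a_t) * (1 - ab_prev) / (1 - ab_t)) * xt \
         + (torch.sqrt(ab_prev) * (1 - a_t) / (1 - ab_t)) * x0
    var = (1 - ab_prev) / (1 - ab_t) * betas[t]                   # beta_tilde_t
    return mean, var
```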
3 Diffusion Models and Denoising Autoencoders

3.1 Forward Process and L_{T}

The paper fixes \beta_{t} as a constant rather than treating it as a learnable parameter. Hence, the expression for L_{T} contains no trainable parameters and can be dropped when implementing the loss function.
\begin{align*}
L_{T}
&= D_{\text{KL}} \left( q(\mathbf{x}_{T} | \mathbf{x}_{0}) \| p(\mathbf{x}_{T}) \right) \\
&= D_{\text{KL}} \Big[ \mathcal{N} \left( \sqrt{\overline{\alpha}_{T}}\mathbf{x}_{0}, (1-\overline{\alpha}_{T}) \mathbf{I} \right) \| \mathcal{N} \left( 0, \mathbf{I} \right) \Big] \\
&= \dfrac{1}{2} \left[ -\log (1-\overline{\alpha}_{T})^{D} + D(1-\overline{\alpha}_{T}) + \overline{\alpha}_{T}\mathbf{x}_{0}^{2} - D \right]
\end{align*}
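As a quick numerical illustration (not part of the paper's exposition), one can check that \overline{\alpha}_{T} is essentially zero under the linear schedule, so q(\mathbf{x}_{T} | \mathbf{x}_{0}) is very close to \mathcal{N}(\mathbf{0}, \mathbf{I}) and L_{T} is negligible:

```python
import torch

betas = torch.linspace(1e-4, 0.02, 1000)              # the paper's linear schedule
alpha_bar_T = torch.cumprod(1.0 - betas, dim=0)[-1]   # alpha_bar at the final step T
# alpha_bar_T is close to 0, so q(x_T | x_0) is close to N(0, I) and L_T is both tiny
# and free of trainable parameters, which is why it is dropped from the loss.
print(alpha_bar_T.item())
```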
3.2 Reverse Process and L_{1:T-1}

For p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) = \mathcal{N}(\boldsymbol{\mu}_{\theta}(\mathbf{x}_{t}, t), \boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{t}, t)), the authors fix the covariance matrix to \boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{t}, t) = \sigma_{t}^{2} \mathbf{I}, with no learnable parameters. Two choices of \sigma_{t}^{2} were found to give comparable experimental results:
\sigma_{t}^{2} = \beta_{t} \qquad \text{or} \qquad \sigma_{t}^{2} = \tilde{\beta}_{t} = \dfrac{1 - \overline{\alpha}_{t-1}}{1 - \overline{\alpha}_{t}} \beta_{t}
The first choice, \sigma_{t}^{2} = \beta_{t}, is optimal when \mathbf{x}_{0} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), while the second, \sigma_{t}^{2} = \tilde{\beta}_{t}, is optimal when \mathbf{x}_{0} is deterministically fixed to a single point.
The loss term L_{t-1} takes the form:
\begin{align*}
L_{t-1}
&= \mathbb{E}_{q(\mathbf{x}_{t} | \mathbf{x}_{0})} \Big[ D_{\text{KL}} \left( q(\mathbf{x}_{t-1} | \mathbf{x}_{t}, \mathbf{x}_{0}) \| p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) \right) \Big] \\
&= \mathbb{E}_{q(\mathbf{x}_{t} | \mathbf{x}_{0})} \Big[ D_{\text{KL}} \left( \mathcal{N}( \tilde{\boldsymbol{\mu}}_{t}(\mathbf{x}_{t}, \mathbf{x}_{0}), \tilde{\beta}_{t} \mathbf{I}) \| \mathcal{N}(\boldsymbol{\mu}_{\theta}(\mathbf{x}_{t}, t), \sigma_{t}^{2} \mathbf{I}) \right) \Big] \\
&= \mathbb{E}_{q(\mathbf{x}_{t} | \mathbf{x}_{0})} \left[ \dfrac{1}{2} \left( \log \left( \dfrac{\sigma_{t}^{2}}{\tilde{\beta}_{t}} \right)^{D} + D\dfrac{\tilde{\beta}_{t}}{\sigma_{t}^{2}} + \dfrac{(\tilde{\boldsymbol{\mu}}_{t}(\mathbf{x}_{t}, \mathbf{x}_{0}) - \boldsymbol{\mu}_{\theta}(\mathbf{x}_{t}, t))^{2}}{\sigma_{t}^{2}} - D \right) \right] \\
&= \mathbb{E}_{q(\mathbf{x}_{t} | \mathbf{x}_{0})} \left[ \dfrac{1}{2\sigma_{t}^{2}} (\tilde{\boldsymbol{\mu}}_{t}(\mathbf{x}_{t}, \mathbf{x}_{0}) - \boldsymbol{\mu}_{\theta}(\mathbf{x}_{t}, t))^{2}\right] + C_{2}
\end{align*}
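For reference, the closed-form KL divergence between two isotropic Gaussians used in the expansion above can be written as a small helper. This is a minimal sketch under the assumption that the means are tensors of dimension D and the variances are scalars; the function name is illustrative.

```python
import torch

def kl_isotropic_gaussians(mu1, var1, mu2, var2):
    """KL( N(mu1, var1*I) || N(mu2, var2*I) ) for vectors of dimension D = mu1.numel()."""
    D = mu1.numel()
    return 0.5 * (D * torch.log(torch.as_tensor(var2 / var1))
                  + D * var1 / var2
                  + ((mu1 - mu2) ** 2).sum() / var2
                  - D)
```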
Here, C_{2} does not depend on the parameters \theta. Therefore, \boldsymbol{\mu}_{\theta} is trained to predict \tilde{\boldsymbol{\mu}}_{t}. To make the learning target explicit, expand \tilde{\boldsymbol{\mu}}_{t}.
From (5), \mathbf{x}_{t} and \mathbf{x}_{0} are related by \mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}) = \sqrt{\overline{\alpha}_{t}}\mathbf{x}_{0} + \sqrt{1-\overline{\alpha}_{t}}\boldsymbol{\epsilon}, so that:
\begin{align*}
\tilde{\boldsymbol{\mu}}_{t}(\mathbf{x}_{t}, \mathbf{x}_{0})
&= \frac{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{t} + \frac{\sqrt{\overline{\alpha}_{t-1}}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{0} \\
&= \frac{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{(1-\overline{\alpha}_{t})}\mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}) + \frac{\sqrt{\overline{\alpha}_{t-1}}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t})} \left( \dfrac{1}{\sqrt{\overline{\alpha}_{t}}} \mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}) - \dfrac{\sqrt{1-\overline{\alpha}_{t}}}{\sqrt{\overline{\alpha}_{t}}}\boldsymbol{\epsilon} \right) \\
&= \left( \frac{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{(1-\overline{\alpha}_{t})} + \frac{\sqrt{\overline{\alpha}_{t-1}}(1 - \alpha_{t})}{(1-\overline{\alpha}_{t})\sqrt{\overline{\alpha}_{t}}} \right)\mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}) -
\frac{\sqrt{\overline{\alpha}_{t-1}}(1 - \alpha_{t})\sqrt{1-\overline{\alpha}_{t}}}{(1-\overline{\alpha}_{t})\sqrt{\overline{\alpha}_{t}}}\boldsymbol{\epsilon} \\
&= \left( \frac{\alpha_{t}(1-\overline{\alpha}_{t-1})}{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t})} + \frac{(1 - \alpha_{t})}{(1-\overline{\alpha}_{t})\sqrt{\alpha_{t}}} \right)\mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}) -
\frac{(1 - \alpha_{t})}{\sqrt{1-\overline{\alpha}_{t}}\sqrt{\alpha_{t}}}\boldsymbol{\epsilon} \\
&= \dfrac{1}{\sqrt{\alpha_{t}}}\left( \frac{ (\alpha_{t} - \overline{\alpha}_{t}) + (1 - \alpha_{t})}{(1-\overline{\alpha}_{t})} \mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}) - \frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon} \right) \\
&= \dfrac{1}{\sqrt{\alpha_{t}}}\left( \mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}) - \frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon} \right) \\
\end{align*}
Thus, L_{t-1} can be rewritten as:
L_{t-1} = \mathbb{E}_{\mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon})} \left[ \dfrac{1}{2\sigma_{t}^{2}} \left[ \dfrac{1}{\sqrt{\alpha_{t}}}\left( \mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}) - \frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon} \right) - \boldsymbol{\mu}_{\theta}(\mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}), t) \right]^{2} \right] + C_{2} \tag{11}
Ultimately, the learning target of \boldsymbol{\mu}_{\theta}(\mathbf{x}_{t}, t) is \dfrac{1}{\sqrt{\alpha_{t}}}\left( \mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}) - \frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon} \right). Since \mathbf{x}_{t} is given as input and \beta_{t} is a fixed constant, the only quantity the network actually has to predict is the noise \boldsymbol{\epsilon} = \boldsymbol{\epsilon}_{t}. This leads to the following parameterization, in which \theta is the sole dependency:
\boldsymbol{\mu}_{\theta}(\mathbf{x}_{t}, t)
= \dfrac{1}{\sqrt{\alpha_{t}}}\left( \mathbf{x}_{t} - \frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t}, t) \right) \tag{12}
This parameterization is also intuitively natural: since \mathbf{x}_{0} was corrupted by adding Gaussian noise, restoring it amounts to predicting the added noise \boldsymbol{\epsilon}_{t} and removing it. The derivation above shows that this intuition is backed by a strictly mathematical argument.
Since \boldsymbol{\mu}_{\theta} is the mean vector of p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) = \mathcal{N}(\boldsymbol{\mu}_{\theta}, \sigma_{t}^{2}\mathbf{I}), sampling \mathbf{x}_{t-1} \sim p_{\theta}(\mathbf{x}_{t-1} | \mathbf{x}_{t}) given \mathbf{x}_{t} amounts to computing:
\mathbf{x}_{t-1} = \dfrac{1}{\sqrt{\alpha_{t}}}\left( \mathbf{x}_{t} - \frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t}, t) \right) + \sigma_{t} \mathbf{z},\qquad \mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}) \\
Finally, substituting (12) into (11) gives:
\begin{align*}
& \mathbb{E}_{\mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon})} \left[ \dfrac{1}{2\sigma_{t}^{2}} \left[ \dfrac{1}{\sqrt{\alpha_{t}}}\left( \mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}) - \frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon} \right) - \boldsymbol{\mu}_{\theta}(\mathbf{x}_{t}(\mathbf{x}_{0}, \boldsymbol{\epsilon}), t) \right]^{2} \right] \\
&= \mathbb{E}_{\mathbf{x}_{t}} \left[ \dfrac{1}{2\sigma_{t}^{2}} \left[ \dfrac{1}{\sqrt{\alpha_{t}}}\left( \mathbf{x}_{t} - \frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon} \right) - \dfrac{1}{\sqrt{\alpha_{t}}}\left( \mathbf{x}_{t} - \frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t}, t) \right) \right]^{2} \right] \\
&= \mathbb{E}_{\mathbf{x}_{t}} \left[ \dfrac{1}{2\sigma_{t}^{2}} \left[ \dfrac{1}{\sqrt{\alpha_{t}}}\frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon} - \dfrac{1}{\sqrt{\alpha_{t}}}\frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t}, t) \right]^{2} \right] \\
&= \mathbb{E}_{\mathbf{x}_{t}} \left[ \dfrac{\beta_{t}^{2}}{2\sigma_{t}^{2}\alpha_{t} (1 - \overline{\alpha}_{t})} \left[ \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t}, t) \right]^{2} \right] \\
&= \mathbb{E}_{\mathbf{x}_{0}, \boldsymbol{\epsilon}} \left[ \dfrac{\beta_{t}^{2}}{2\sigma_{t}^{2}\alpha_{t} (1 - \overline{\alpha}_{t})} \left[ \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_{\theta}(\sqrt{\overline{\alpha}_{t}}\mathbf{x}_{0} + \sqrt{1-\overline{\alpha}_{t}}\boldsymbol{\epsilon}, t) \right]^{2} \right] \\
\end{align*}
The results above translate directly into the paper's training procedure (Algorithm 1) and sampling procedure (Algorithm 2); a sampling sketch follows below.
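Below is a minimal PyTorch-style sketch of the sampling loop implied by the equations above, using the \sigma_{t}^{2} = \beta_{t} choice and adding no noise at the final step. Here model stands for any network \boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t}, t); all names are illustrative rather than the paper's.

```python
import torch

@torch.no_grad()
def sample(model, shape, betas):
    """Ancestral sampling: start from x_T ~ N(0, I) and apply the reverse step T times."""
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                                    # x_T ~ N(0, I)
    for t in reversed(range(len(betas))):
        eps_pred = model(x, torch.full((shape[0],), t))       # epsilon_theta(x_t, t)
        mean = (x - betas[t] / torch.sqrt(1 - alphas_bar[t]) * eps_pred) / torch.sqrt(alphas[t])
        if t > 0:
            x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)   # sigma_t^2 = beta_t choice
        else:
            x = mean                                          # no noise is added at the last step
    return x
```

With the \sigma_{t}^{2} = \tilde{\beta}_{t} choice, only the variance used in the noise line changes.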
3.3 Data Scaling, Reverse Process Decoder, and L_{0}

The discussion so far has dealt with continuous probability density functions. Image data, however, is discrete, with pixel values in \left\{ 0, 1, \dots, 255 \right\}, so appropriate scaling is required. The pixel values are first linearly rescaled from \left\{ 0, 1, \dots, 255 \right\} to [-1, 1]. The authors then define the final step of sampling, p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1}), as:
p_{\theta}(\mathbf{x}_{0} | \mathbf{x}_{1}) = \prod\limits_{i = 1}^{D} \int\limits_{\delta_{-}(x_{0}^{i})}^{\delta_{+}(x_{0}^{i})} \mathcal{N}(x; \mu_{\theta}^{i}(\mathbf{x}_{1}, 1), \sigma_{1}^{2}) dx
\delta_{+}(x) = \begin{cases}
\infty & \text{if } x = 1 \\
x + \frac{1}{255} & \text{if } x \lt 1
\end{cases},
\qquad
\delta_{-}(x) = \begin{cases}
x - \frac{1}{255} & \text{if } x \gt -1 \\
-\infty & \text{if } x = -1
\end{cases}
Here x_{0}^{i} denotes the i-th pixel of \mathbf{x}_{0}. In other words, a pixel value x is treated as covering the interval [x - \frac{1}{255}, x + \frac{1}{255}], and the decoder assigns it the Gaussian probability mass on that interval.
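A minimal sketch of this discretized decoder likelihood, assuming x0, mu, and sigma are tensors of the same shape with x0 already scaled to [-1, 1]; the function name is illustrative.

```python
import torch

def discretized_gaussian_log_likelihood(x0, mu, sigma):
    """log p_theta(x_0 | x_1): Gaussian mass on [x - 1/255, x + 1/255] per pixel,
    with the boundary bins extended to -inf / +inf as in the definition above."""
    normal = torch.distributions.Normal(mu, sigma)
    upper = torch.where(x0 >= 1.0, torch.full_like(x0, float("inf")), x0 + 1.0 / 255)
    lower = torch.where(x0 <= -1.0, torch.full_like(x0, float("-inf")), x0 - 1.0 / 255)
    prob = (normal.cdf(upper) - normal.cdf(lower)).clamp(min=1e-12)
    return prob.log().sum()   # the product over pixels becomes a sum of log probabilities
```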
3.4 Simplified Training Objective

In Section 3.2, L_{t-1} was reduced to predicting \boldsymbol{\epsilon}. The authors found that dropping the weighting factor in front of the squared error is both simpler to implement and beneficial for sample quality:
L_{\text{simple}}(\theta) := \mathbb{E}_{t, \mathbf{x}_{0}, \boldsymbol{\epsilon}} \left[ \left( \boldsymbol{\epsilon} - \boldsymbol{\epsilon}_{\theta}(\sqrt{\overline{\alpha}_{t}}\mathbf{x}_{0} + \sqrt{1-\overline{\alpha}_{t}}\boldsymbol{\epsilon}, t) \right)^{2} \right]
As in the paper's training pseudocode, t is drawn uniformly from \left\{ 1, \dots, T \right\}.
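A minimal PyTorch sketch of one training step under L_{\text{simple}}, assuming model is a network \boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t}, t) and alphas_bar holds the cumulative products \overline{\alpha}_{t}; all names are illustrative.

```python
import torch

def l_simple(model, x0, alphas_bar):
    """One Monte Carlo draw of L_simple: t ~ Uniform{1..T}, eps ~ N(0, I)."""
    T = alphas_bar.shape[0]
    t = torch.randint(0, T, (x0.shape[0],))                    # uniform timestep per sample
    ab = alphas_bar[t].view(-1, *([1] * (x0.dim() - 1)))       # broadcast alpha_bar_t over pixels
    eps = torch.randn_like(x0)
    xt = torch.sqrt(ab) * x0 + torch.sqrt(1.0 - ab) * eps      # x_t = sqrt(ab)*x0 + sqrt(1-ab)*eps
    eps_pred = model(xt, t)                                    # the network predicts the added noise
    return ((eps - eps_pred) ** 2).mean()                      # squared error, no weighting factor
```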