Convolution of Distributions, Distributions as Functions Defined on Real Numbers

Buildup¹

The goal of distribution theory is to put objects like the naively defined Dirac delta function on a mathematically rigorous footing. To do so, it becomes necessary to treat a distribution, which is defined on a function space, as a function defined on real space. First, recall how differentiation, translation, and so on were defined for distributions.

Since the domain of a distribution is a function space, operations such as differentiation cannot be defined in the classical sense; instead, they are carried out on the test functions. The convolution of a distribution with a test function is defined in the same spirit. Suppose $u$ is a locally integrable function and $T_{u}$ is the corresponding regular distribution. The convolution of $u$ and a test function $\phi$ is as follows.

$$ u \ast \phi (\mathbf{x}) =\int u(\mathbf{y})\phi (\mathbf{x}-\mathbf{y})d\mathbf{y},\quad \mathbf{x},\mathbf{y}\in \mathbb{R}^{n} $$
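As a quick numerical sketch of this integral, the convolution can be approximated by a Riemann sum. The particular choices below (an indicator function for $u$, the standard smooth bump for $\phi$, and the integration grid) are illustrative only.

```python
import numpy as np

def u(y):
    return np.where(np.abs(y) <= 1.0, 1.0, 0.0)            # indicator of [-1, 1], locally integrable

def phi(y):
    out = np.zeros_like(y, dtype=float)                    # smooth bump supported in (-1, 1)
    inside = np.abs(y) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

def conv(x, ys=np.linspace(-5.0, 5.0, 20001)):
    dy = ys[1] - ys[0]
    return np.sum(u(ys) * phi(x - ys)) * dy                # Riemann sum for the integral above

print([round(conv(x), 4) for x in (0.0, 1.0, 2.5)])        # vanishes once the supports separate
```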

Since $T_{u}$ and $\phi$ cannot literally be convolved, we define the convolution of $T_{u}$ and $\phi$ as the convolution of $\phi$ with the function $u$ corresponding to $T_{u}$.

$$ T_{u} \ast \phi (\mathbf{x}):=\int u(\mathbf{y})\phi (\mathbf{x}-\mathbf{y})d\mathbf{y}=u\ast \phi (\mathbf{x}) $$

Now, for any function $f$, write $\tilde{f}(y)=f(-y)$ for its reflection and $f_{x}(y)=f(y-x)$ for its translation. Then the following holds.

$$ \tilde{f}_{x}(y)=\tilde{f}(y-x)=f(x-y) $$

Therefore, $T_{u} \ast \phi$ can be written as follows.

$$ T_{u}\ast \phi (\mathbf{x})=\int u(\mathbf{y})\tilde{\phi}_{\mathbf{x}}(\mathbf{y})d\mathbf{y}=T_{u}(\tilde{\phi}_{\mathbf{x}}) $$
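This reflection and translation bookkeeping is easy to check concretely; the helper names `reflect` and `translate` below are purely illustrative.

```python
# Check of the identity  \tilde{f}_x(y) = f(x - y)  for a concrete f.
def reflect(f):               # \tilde{f}(y) = f(-y)
    return lambda y: f(-y)

def translate(f, x):          # f_x(y) = f(y - x)
    return lambda y: f(y - x)

f = lambda y: y ** 3 + 2 * y          # any concrete function will do
x, y = 1.7, -0.4
assert abs(translate(reflect(f), x)(y) - f(x - y)) < 1e-12
```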

Thus, the convolution of a distribution and a test function is finally defined as follows.

Definition

Let $T$ be a distribution and $\phi$ be a test function. The convolution of $T$ and $\phi$ is defined as follows.

$$ T \ast \phi (\mathbf{x}) :=T(\tilde{\phi}_{\mathbf{x}})=T(\phi (\mathbf{x}-\cdot)) $$

Explanation

With this definition, $T \ast \phi$ is an honest function on $\mathbb{R}^{n}$, so the distribution $T$, whose domain is a function space, can effectively be treated as something defined on real space. Hence it makes sense to speak of continuity, differentiability, and so on in the classical sense. In fact, the following theorem holds.
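Below is a minimal sketch of how this definition can be written in code, with a distribution modeled simply as a callable acting on test functions. The bump function $\phi$, the Riemann-sum quadrature for regular distributions, and all numerical parameters are illustrative choices, not part of the theory.

```python
import math

def phi(y):
    # Smooth bump test function supported in (-1, 1)
    return math.exp(-1.0 / (1.0 - y * y)) if abs(y) < 1.0 else 0.0

# Distributions modeled as callables acting on test functions
# (an illustrative encoding, not a general implementation of the space D').
delta = lambda psi: psi(0.0)                      # Dirac delta: delta(psi) = psi(0)

def regular(u, a=-5.0, b=5.0, n=20000):
    # Regular distribution T_u(psi), approximated by a midpoint Riemann sum on [a, b]
    dy = (b - a) / n
    return lambda psi: sum(
        u(a + (k + 0.5) * dy) * psi(a + (k + 0.5) * dy) for k in range(n)
    ) * dy

def convolve(T, psi):
    # (T * psi)(x) := T(psi(x - .))
    return lambda x: T(lambda y: psi(x - y))

# delta * phi recovers phi itself: (delta * phi)(x) = phi(x - 0) = phi(x)
print(convolve(delta, phi)(0.3), phi(0.3))

# For a regular distribution T_u, the definition reduces to the usual integral.
u = lambda y: 1.0 if abs(y) <= 1.0 else 0.0
print(convolve(regular(u), phi)(0.5))
```

For the Dirac delta the first line of output shows $(\delta \ast \phi)(0.3)=\phi (0.3)$, i.e. $\delta \ast \phi = \phi$; for $T_{u}$ the definition reduces to the ordinary convolution integral from the Buildup.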

Theorem

Let $T$ be a distribution and $\phi$ a test function. Then the following holds.

$$ T\ast \phi \in C^{\infty} \quad \text{and} \quad \partial^{\alpha}(T\ast \phi)=T\ast \partial^{\alpha}\phi $$

Proof

For simplicity, consider the one-dimensional case. Since $\phi$ is a test function, it has compact support, so there exists $r>0$ such that the following holds.

$$ \mathrm{supp}\phi \subset [-r,r] $$

Moreover, if for $\left| x \right| \le C$ and $\left| h \right| \le 1$ a function $f :\mathbb{R}\to \mathbb{C}$ is defined by $f(y)=\phi (x+h-y)$, then the following holds.

$$ \mathrm{supp}f \subset [-R,R],\quad R=r+C+1 $$

Now, let’s define $\psi$ and $\Psi$ as follows.

$$ \begin{align*} \psi_{x,h}(y) =&\ \phi (x+h-y)-\phi (x-y) \\ \Psi_{x,h}(y) =&\ \frac{\phi (x+h-y)-\phi (x-y) }{h}-\phi^{\prime}(x-y) \end{align*} $$

Then, the following holds.

$$ \begin{align*} \mathrm{supp} \psi_{x,h} &\subset [-R,R] \\ \mathrm{supp} \Psi_{x,h}&\subset[-R,R] \end{align*} $$
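As a numerical preview of the convergence used in the next step, one can watch $\sup_{y}|\Psi_{x,h}(y)|$ shrink as $h \to 0$. The bump function $\phi$, its hand-computed derivative, the fixed point $x$, and the grid below are all illustrative choices.

```python
import math

def phi(y):
    # Smooth bump test function supported in (-1, 1)
    return math.exp(-1.0 / (1.0 - y * y)) if abs(y) < 1.0 else 0.0

def dphi(y):
    # phi'(y), computed by hand from the formula above
    return phi(y) * (-2.0 * y) / (1.0 - y * y) ** 2 if abs(y) < 1.0 else 0.0

def Psi(x, h, y):
    # Psi_{x,h}(y) = [phi(x + h - y) - phi(x - y)] / h - phi'(x - y)
    return (phi(x + h - y) - phi(x - y)) / h - dphi(x - y)

x = 0.4                                            # an arbitrary fixed point
ys = [k / 1000.0 for k in range(-3000, 3001)]      # grid covering [-R, R]
for h in (0.1, 0.01, 0.001):
    print(h, max(abs(Psi(x, h, y)) for y in ys))
```

The printed maxima decrease roughly in proportion to $h$, consistent with the Taylor expansion of $\phi$.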

Furthermore, since $\phi$ is a test function, $\psi_{x,h}$ and $\Psi_{x,h}$ are smooth, and both of them, together with all of their derivatives, converge uniformly to $0$ as $h \to 0$. Therefore, by the continuity condition of distributions, the following formula holds.

$$ \begin{align*} \lim \limits_{h\to 0} \big[ \left( T \ast \phi \right)(x+h)- (T\ast \phi)(x) \big] =&\ \lim \limits_{h\to 0} \big[ T(\tilde{\phi}_{x+h}) -T(\tilde{\phi}_{x}) \big] \\ =&\ \lim \limits_{h\to 0} T(\tilde{\phi}_{x+h}-\tilde{\phi}_{x}) \\ =&\ \lim \limits_{h\to 0} T(\psi_{x,h}) \\ =&\ T(0) \\ =&\ 0 \end{align*} $$

Thus, $T\ast \phi$ is continuous. Also, the following formula holds.

$$ \begin{align*} \lim \limits_{h\to 0} \left[ \frac{ \left( T \ast \phi \right)(x+h)- (T\ast \phi)(x) }{h}- (T\ast \phi^{\prime})(x)\right] =&\ \lim \limits_{h\to 0} \left[ \frac{ T(\tilde{\phi}_{x+h}) -T(\tilde{\phi}_{x}) }{h}- T(\tilde{\phi^{\prime}}_{x})\right] \\ =&\ \lim \limits_{h\to 0} T\left( \frac{ \tilde{\phi}_{x+h} - \tilde{\phi}_{x} }{h}- \tilde{\phi^{\prime}}_{x}\right) \\ =&\ \lim \limits_{h \to 0}T \left( \Psi_{x,h} \right) \\ =&\ T(0) \\ =&\ 0 \end{align*} $$

Therefore, $T\ast \phi$ is differentiable and its derivative is $T\ast \phi^{\prime}$. Repeating the same argument shows that the $n$th derivative of $T\ast \phi$ is as follows.

$$ \left( T \ast \phi \right)^{(n)}=T\ast \phi^{(n)}\quad \forall n\in \mathbb{N} $$
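As a numerical sanity check of the theorem with a distribution that is not regular, take $T=\delta^{\prime}$, that is, $T(\psi)=-\psi^{\prime}(0)$, and compare a finite-difference derivative of $T\ast \phi$ with $T\ast \phi^{\prime}$. The bump function, the centered-difference approximation of $\psi^{\prime}(0)$, and the step sizes below are all illustrative choices.

```python
import math

def phi(y):
    # Smooth bump test function supported in (-1, 1)
    return math.exp(-1.0 / (1.0 - y * y)) if abs(y) < 1.0 else 0.0

def dphi(y):
    # phi'(y), computed by hand
    return phi(y) * (-2.0 * y) / (1.0 - y * y) ** 2 if abs(y) < 1.0 else 0.0

def T(psi, eps=1e-5):
    # delta'(psi) = -psi'(0), with psi'(0) approximated by a centered difference
    return -(psi(eps) - psi(-eps)) / (2.0 * eps)

def convolve(T, psi):
    # (T * psi)(x) = T(psi(x - .))
    return lambda x: T(lambda y: psi(x - y))

x, h = 0.3, 1e-4
lhs = (convolve(T, phi)(x + h) - convolve(T, phi)(x - h)) / (2.0 * h)   # derivative of T * phi
rhs = convolve(T, dphi)(x)                                              # T * phi'
print(lhs, rhs)
```

Both printed values approximate $\phi^{\prime\prime}(0.3)$, in line with $(\delta^{\prime}\ast \phi)^{\prime}=\delta^{\prime}\ast \phi^{\prime}=\phi^{\prime\prime}$.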


  1. Gerald B. Folland, Fourier Analysis and Its Applications (1992), p. 316–317 ↩︎