Proof of Brook's Lemma
Theorem
Let the support of the probability mass function $p : \mathbb{R}^{n} \to \mathbb{R}$ of the random vector $Z : \Omega \to \mathbb{R}^{n}$ be $$ S_{Z} = \left\{ \left( z_{1} , \cdots , z_{n} \right) \in \mathbb{R}^{n} : p \left( z_{1} , \cdots , z_{n} \right) > 0 \right\} \subset \mathbb{R}^{n} $$ Then for all $\mathbf{x} := \left( x_{1} , \cdots , x_{n} \right) \in S_{Z}$ and $\mathbf{y} := \left( y_{1} , \cdots , y_{n} \right) \in S_{Z}$, the following holds. $$ {{ p \left( \mathbf{x} \right) } \over { p \left( \mathbf{y} \right) }} = \prod_{k=1}^{n} {{ p \left( x_{k} | x_{1} , \cdots , x_{k-1} , y_{k+1} , \cdots , y_{n} \right) } \over { p \left( y_{k} | x_{1} , \cdots , x_{k-1} , y_{k+1} , \cdots , y_{n} \right) }} $$ Equivalently, solved for $p \left( \mathbf{x} \right)$: $$ p \left( \mathbf{x} \right) = p \left( \mathbf{y} \right) \prod_{k=1}^{n} {{ p \left( x_{k} | x_{1} , \cdots , x_{k-1} , y_{k+1} , \cdots , y_{n} \right) } \over { p \left( y_{k} | x_{1} , \cdots , x_{k-1} , y_{k+1} , \cdots , y_{n} \right) }} $$
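Since $\mathbf{x}$ and $\mathbf{y}$ range over a finite support, the identity can be checked numerically. The following is a minimal sketch for $n = 2$, assuming NumPy; the grid shape, random seed, and test points are arbitrary choices, not part of the lemma. It builds a joint pmf on a small grid, derives every conditional from it, and compares both sides.

```python
import numpy as np

# Arbitrary joint pmf p(z1, z2) on a 3x4 grid (hypothetical example data).
rng = np.random.default_rng(0)
P = rng.random((3, 4))
P /= P.sum()  # normalize so every grid point lies in the support

def p1_given_2(a, b):
    """p(z1 = a | z2 = b), derived from the joint."""
    return P[a, b] / P[:, b].sum()

def p2_given_1(b, a):
    """p(z2 = b | z1 = a), derived from the joint."""
    return P[a, b] / P[a, :].sum()

# Two arbitrary points x = (x1, x2), y = (y1, y2) of the support.
x1, x2 = 0, 1
y1, y2 = 2, 3

# Brook's lemma for n = 2:
# p(x1, x2) / p(y1, y2) = [p(x1|y2) / p(y1|y2)] * [p(x2|x1) / p(y2|x1)]
lhs = P[x1, x2] / P[y1, y2]
rhs = (p1_given_2(x1, y2) / p1_given_2(y1, y2)) \
    * (p2_given_1(x2, x1) / p2_given_1(y2, x1))
assert np.isclose(lhs, rhs)
```

Because the joint is everywhere positive on the grid, every conditional in the product is well defined for any choice of test points.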
Description
Knowing the joint distribution determines the marginal distributions, but in general, without an independence assumption, the joint distribution cannot be recovered from the marginals alone. Brook's Lemma shows that the joint distribution can nevertheless be recovered, in a limited manner, from the full conditional distributions: the ratio $p \left( \mathbf{x} \right) / p \left( \mathbf{y} \right)$ is expressed entirely in terms of conditionals, so fixing any reference point $\mathbf{y}$ determines $p$ up to a normalizing constant.
Proof
Part 1. Univariate Distribution
Since $\displaystyle p \left( x | y \right) = {{ p \left( x , y \right) } \over { p \left( y \right) }}$ and likewise $\displaystyle p \left( y | x \right) = {{ p \left( x , y \right) } \over { p \left( x \right) }}$, we have $$ \begin{align*} p \left( x , y \right) =& p \left( x | y \right) p (y) \\ =& p \left( y | x \right) p (x) \end{align*} $$ From this, the following is obtained. $$ {{ p(x) } \over { p(y) }} = {{ p \left( x | y \right) } \over { p \left( y | x \right) }} $$ Equivalently, solved for $p (x)$: $$ p(x) = {{ p \left( x | y \right) } \over { p \left( y | x \right) }} p(y) $$
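The two-variable identity of Part 1 can be sanity-checked on any finite joint pmf. The sketch below, assuming NumPy (the seed and test values are arbitrary), computes both sides from a random joint table.

```python
import numpy as np

# Arbitrary joint pmf p(x, y) on a 4x4 grid (hypothetical example data).
rng = np.random.default_rng(1)
P = rng.random((4, 4))
P /= P.sum()

px = P.sum(axis=1)  # marginal p(x)
py = P.sum(axis=0)  # marginal p(y)

x, y = 1, 3                    # arbitrary points of the support
p_x_given_y = P[x, y] / py[y]  # p(x | y) = p(x, y) / p(y)
p_y_given_x = P[x, y] / px[x]  # p(y | x) = p(x, y) / p(x)

# p(x) / p(y) = p(x|y) / p(y|x)
assert np.isclose(px[x] / py[y], p_x_given_y / p_y_given_x)
```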
Part 2. Multivariate Distribution
Essentially, once the bivariate case is worked out, the same substitution can be repeated coordinate by coordinate for the $n$-variate case. The trick is the one from Part 1: multiply and divide by a conditional so that one coordinate at a time is swapped from $x_{k}$ to $y_{k}$. $$ \begin{align*} p \left( x_{1} , x_{2} \right) =& p \left( x_{2} | x_{1} \right) p \left( x_{1} \right) \\ =& {{ p \left( x_{2} | x_{1} \right) } \over { p \left( y_{2} | x_{1} \right) }} p \left( y_{2} | x_{1} \right) p \left( x_{1} \right) \\ =& {{ p \left( x_{2} | x_{1} \right) } \over { p \left( y_{2} | x_{1} \right) }} p \left( x_{1} , y_{2} \right) \\ =& {{ p \left( x_{2} | x_{1} \right) } \over { p \left( y_{2} | x_{1} \right) }} p \left( x_{1} | y_{2} \right) p \left( y_{2} \right) \\ =& {{ p \left( x_{2} | x_{1} \right) } \over { p \left( y_{2} | x_{1} \right) }} {{ p \left( x_{1} | y_{2} \right) } \over { p \left( y_{1} | y_{2} \right) }} p \left( y_{1} | y_{2} \right) p \left( y_{2} \right) \\ =& {{ p \left( x_{2} | x_{1} \right) } \over { p \left( y_{2} | x_{1} \right) }} {{ p \left( x_{1} | y_{2} \right) } \over { p \left( y_{1} | y_{2} \right) }} p \left( y_{1} , y_{2} \right) \\ =& p \left( y_{1} , y_{2} \right) \prod_{k=1}^{2} {{ p \left( x_{k} | x_{1} , \cdots , x_{k-1} , y_{k+1} , \cdots , y_{2} \right) } \over { p \left( y_{k} | x_{1} , \cdots , x_{k-1} , y_{k+1} , \cdots , y_{2} \right) }} \end{align*} $$ Note that each division requires the conditional in the denominator to be positive; implicitly, the mixed points such as $\left( x_{1} , y_{2} \right)$ must also lie in the support. Repeating the same substitution for $k = 1 , \cdots , n$ gives the general statement.
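The coordinate-by-coordinate substitution extends beyond $n = 2$. The following sketch, assuming NumPy (grid shape, seed, and the points $\mathbf{x}$, $\mathbf{y}$ are arbitrary, and coordinates are 0-indexed in code), loops the swap over $k$ for $n = 3$ and checks the resulting product against $p \left( \mathbf{x} \right) / p \left( \mathbf{y} \right)$.

```python
import numpy as np

# Arbitrary joint pmf p(z1, z2, z3) on a 2x3x2 grid (hypothetical data).
rng = np.random.default_rng(2)
P = rng.random((2, 3, 2))
P /= P.sum()

def full_conditional(vals, k):
    """p(z_k = vals[k] | z_j = vals[j] for all j != k), from the joint."""
    idx = list(vals)
    idx[k] = slice(None)  # sum over coordinate k for the normalizer
    return P[tuple(vals)] / P[tuple(idx)].sum()

n = 3
x = (0, 1, 0)  # arbitrary points of the support
y = (1, 2, 1)

ratio = 1.0
for k in range(n):
    # The k-th factor conditions on x_1..x_{k-1} and y_{k+1}..y_n.
    mixed_x = x[:k] + (x[k],) + y[k + 1:]
    mixed_y = x[:k] + (y[k],) + y[k + 1:]
    ratio *= full_conditional(mixed_x, k) / full_conditional(mixed_y, k)

assert np.isclose(P[x] / P[y], ratio)
```

The product telescopes exactly as in the derivation above: consecutive factors share the normalizer over coordinate $k$, leaving $p \left( \mathbf{x} \right) / p \left( \mathbf{y} \right)$.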
■