# **Problem 10.5 Solution (medium)**
Consider a model in which the set of all hidden stochastic variables, denoted collectively by $\mathbf{Z}$ , comprises some latent variables $\mathbf{z}$ together with some model parameters $\boldsymbol{\theta}$ . Suppose we use a variational distribution that factorizes between latent variables and parameters so that $q(\mathbf{z}, \boldsymbol{\theta}) = q_{\mathbf{z}}(\mathbf{z})q_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ , in which the distribution $q_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ is approximated by a point estimate of the form $q_{\boldsymbol{\theta}}(\boldsymbol{\theta}) = \delta(\boldsymbol{\theta} - \boldsymbol{\theta}_0)$ where $\boldsymbol{\theta}_0$ is a vector of free parameters. Show that variational optimization of this factorized distribution is equivalent to an EM algorithm, in which the E step optimizes $q_{\mathbf{z}}(\mathbf{z})$ , and the M step maximizes the expected complete-data log posterior distribution of $\boldsymbol{\theta}$ with respect to $\boldsymbol{\theta}_0$ .
We first recall a property of the Dirac delta function:
$$\int \delta(\boldsymbol{\theta} - \boldsymbol{\theta}_0) f(\boldsymbol{\theta}) d\boldsymbol{\theta} = f(\boldsymbol{\theta}_0)$$
We first calculate the optimal $q_{\mathbf{z}}(\mathbf{z})$ while fixing $q_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ . This is achieved by minimizing the KL divergence given in Eq (10.4):
$$KL(q||p) = -\int \int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{Z}|\mathbf{X})}{q(\mathbf{Z})} \right\} d\mathbf{Z}$$
$$= -\int \int q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta}) \ln \left\{ \frac{p(\mathbf{z}, \boldsymbol{\theta}|\mathbf{X})}{q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta})} \right\} d\mathbf{z} d\boldsymbol{\theta}$$
$$= -\int \int q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta}) \ln \left\{ \frac{p(\mathbf{z}, \boldsymbol{\theta}|\mathbf{X})}{q_{\mathbf{z}}(\mathbf{z})} \right\} d\mathbf{z} d\boldsymbol{\theta} + \int q_{\theta}(\boldsymbol{\theta}) \ln q_{\theta}(\boldsymbol{\theta}) d\boldsymbol{\theta}$$
$$= -\int \int q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta}) \ln \left\{ \frac{p(\mathbf{z}, \boldsymbol{\theta}|\mathbf{X})}{q_{\mathbf{z}}(\mathbf{z})} \right\} d\mathbf{z} d\boldsymbol{\theta} + \text{const}$$
$$= -\int q_{\theta}(\boldsymbol{\theta}) \left\{ \int q_{\mathbf{z}}(\mathbf{z}) \ln \left\{ \frac{p(\mathbf{z}, \boldsymbol{\theta}|\mathbf{X})}{q_{\mathbf{z}}(\mathbf{z})} \right\} d\mathbf{z} \right\} d\boldsymbol{\theta} + \text{const}$$
$$= -\int q_{\mathbf{z}}(\mathbf{z}) \ln \left\{ \frac{p(\mathbf{z}|\boldsymbol{\theta}_{0}, \mathbf{X}) p(\boldsymbol{\theta}_{0}|\mathbf{X})}{q_{\mathbf{z}}(\mathbf{z})} \right\} d\mathbf{z} + \text{const}$$
$$= -\int q_{\mathbf{z}}(\mathbf{z}) \ln \left\{ \frac{p(\mathbf{z}|\boldsymbol{\theta}_{0}, \mathbf{X})}{q_{\mathbf{z}}(\mathbf{z})} \right\} d\mathbf{z} + \text{const}$$
Here 'const' denotes the terms independent of $q_{\mathbf{z}}(\mathbf{z})$ . Note that, as we will show at the end of this problem, this 'const' is actually divergent, because it contains the negative entropy of the Dirac delta function:
$$\int q_{\boldsymbol{\theta}}(\boldsymbol{\theta}) \ln q_{\boldsymbol{\theta}}(\boldsymbol{\theta}) d\boldsymbol{\theta}$$
Now it is clear that the KL divergence is minimized when $q_{\mathbf{z}}(\mathbf{z})$ equals $p(\mathbf{z}|\boldsymbol{\theta}_0, \mathbf{X})$ . This corresponds to the E step. Next, we calculate the optimal $q_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ , i.e., $\boldsymbol{\theta}_0$ , by maximizing $\mathcal{L}(q) = \int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z})}{q(\mathbf{Z})} \right\} d\mathbf{Z}$ while fixing $q_{\mathbf{z}}(\mathbf{z})$ :
$$\begin{split} L(q) &= \int \int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z})}{q(\mathbf{Z})} \right\} d\mathbf{Z} \\ &= \int \int q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{z}, \boldsymbol{\theta})}{q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta})} \right\} d\mathbf{z} d\boldsymbol{\theta} \\ &= \int \int q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{z}, \boldsymbol{\theta})}{q_{\mathbf{z}}(\mathbf{z})} \right\} d\mathbf{z} d\boldsymbol{\theta} - \int q_{\theta}(\boldsymbol{\theta}) \ln q_{\theta}(\boldsymbol{\theta}) d\boldsymbol{\theta} \\ &= \int \int q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta}) \ln \left\{ p(\mathbf{X}, \mathbf{z}, \boldsymbol{\theta}) \right\} d\mathbf{z} d\boldsymbol{\theta} - \int q_{\theta}(\boldsymbol{\theta}) \ln q_{\theta}(\boldsymbol{\theta}) d\boldsymbol{\theta} + \text{const} \\ &= \int q_{\theta}(\boldsymbol{\theta}) \mathbb{E}_{q_{\mathbf{z}}} [\ln p(\mathbf{X}, \mathbf{z}, \boldsymbol{\theta})] d\boldsymbol{\theta} - \int q_{\theta}(\boldsymbol{\theta}) \ln q_{\theta}(\boldsymbol{\theta}) d\boldsymbol{\theta} + \text{const} \\ &= \mathbb{E}_{q_{\mathbf{z}}(\mathbf{z})} [\ln p(\mathbf{X}, \mathbf{z}, \boldsymbol{\theta}_0)] - \int q_{\theta}(\boldsymbol{\theta}) \ln q_{\theta}(\boldsymbol{\theta}) d\boldsymbol{\theta} + \text{const} \end{split}$$
The second term is the entropy of a Dirac delta function, which is $-\infty$ and independent of the value of $\boldsymbol{\theta}_0$ . Loosely speaking, we therefore only need to maximize the first term with respect to $\boldsymbol{\theta}_0$ , which (up to an additive constant independent of $\boldsymbol{\theta}_0$ ) is the expected complete-data log posterior. This is exactly the M step.
One important point needs to be clarified here. One might object that, no matter how we set $\boldsymbol{\theta}_0$ , $L(q)$ will always be $-\infty$ . This is an intrinsic problem whenever we use a point estimate for $q_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ , and it already occurs when we derive the optimal $q_{\mathbf{z}}(\mathbf{z})$ by minimizing the KL divergence in the first step. Therefore, 'maximizing' and 'minimizing' are to be understood loosely in this problem, neglecting the divergent term.
# **Problem 10.6 Solution (medium)**
The alpha family of divergences is defined by $D_{\alpha}(p||q) = \frac{4}{1 - \alpha^2} \left( 1 - \int p(x)^{(1+\alpha)/2} q(x)^{(1-\alpha)/2} dx \right)$. Show that the Kullback-Leibler divergence $\mathrm{KL}(p\|q)$ corresponds to $\alpha \to 1$ . This can be done by writing $p^{\epsilon} = \exp(\epsilon \ln p) = 1 + \epsilon \ln p + O(\epsilon^2)$ and then taking $\epsilon \to 0$ . Similarly show that $\mathrm{KL}(q\|p)$ corresponds to $\alpha \to -1$ .
Let's use the hint, first considering the limit $\alpha \to 1$ .
$$\begin{split} D_{\alpha}(p||q) &= \frac{4}{1-\alpha^2} \Big\{ 1 - \int p^{(1+\alpha)/2} q^{(1-\alpha)/2} \, dx \Big\} \\ &= \frac{4}{1-\alpha^2} \Big\{ 1 - \int \frac{p}{p^{(1-\alpha)/2}} \left[ 1 + \frac{1-\alpha}{2} \ln q + O(\frac{1-\alpha}{2})^2 \right] dx \Big\} \\ &= \frac{4}{1-\alpha^2} \Big\{ 1 - \int p \cdot \frac{1 + \frac{1-\alpha}{2} \ln q + O(\frac{1-\alpha}{2})^2}{1 + \frac{1-\alpha}{2} \ln p + O(\frac{1-\alpha}{2})^2} \, dx \Big\} \\ &\approx \frac{4}{1-\alpha^2} \Big\{ 1 - \int p \cdot \frac{1 + \frac{1-\alpha}{2} \ln q}{1 + \frac{1-\alpha}{2} \ln p} \, dx \Big\} \\ &= \frac{4}{1-\alpha^2} \Big\{ - \int p \cdot \left[ \frac{1 + \frac{1-\alpha}{2} \ln q}{1 + \frac{1-\alpha}{2} \ln p} - 1 \right] \, dx \Big\} \\ &= \frac{4}{1-\alpha^2} \Big\{ - \int p \cdot \frac{\frac{1-\alpha}{2} \ln q - \frac{1-\alpha}{2} \ln p}{1 + \frac{1-\alpha}{2} \ln p} \, dx \Big\} \\ &= \frac{2}{1+\alpha} \Big\{ - \int p \cdot \frac{\ln q - \ln p}{1 + \frac{1-\alpha}{2} \ln p} \, dx \Big\} \\ &\approx - \int p \cdot (\ln q - \ln p) \, dx = - \int p \cdot \ln \frac{q}{p} \, dx \end{split}$$
Here $p$ and $q$ are short for $p(x)$ and $q(x)$ . The case $\alpha \to -1$ is similar and yields $\mathrm{KL}(q\|p)$ . One thing worth mentioning is that if we directly approximate $p^{(1+\alpha)/2}$ by $p$ instead of writing it as $p/p^{(1-\alpha)/2}$ in the first step, we will not obtain the desired result.
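As a quick numerical sanity check of this limit (an illustrative sketch, not part of the original solution; the two discrete distributions below are arbitrary choices), the following code evaluates the alpha-family divergence for values of $\alpha$ approaching $+1$ and $-1$ and compares it with the two KL divergences:

```python
# Hedged sketch: D_alpha(p||q) for two arbitrary discrete distributions, compared with
# KL(p||q) and KL(q||p) as alpha approaches +1 and -1 respectively.
import numpy as np

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])

def d_alpha(p, q, alpha):
    # Alpha-family divergence with the integral replaced by a finite sum.
    return 4.0 / (1.0 - alpha**2) * (1.0 - np.sum(p**((1 + alpha) / 2) * q**((1 - alpha) / 2)))

def kl(p, q):
    return np.sum(p * np.log(p / q))

for alpha in [0.9, 0.99, 0.999]:
    print(alpha, d_alpha(p, q, alpha), kl(p, q))    # tends to KL(p||q)
for alpha in [-0.9, -0.99, -0.999]:
    print(alpha, d_alpha(p, q, alpha), kl(q, p))    # tends to KL(q||p)
```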
# **Problem 10.7 Solution**
Let's begin from the expression for $\ln q_{\mu}^{\star}(\mu)$ derived in the main text:
$$\begin{split} \ln q_{\mu}^{\star}(\mu) &= -\frac{\mathbb{E}[\tau]}{2} \Big\{ \lambda_{0}(\mu - \mu_{0})^{2} + \sum_{n=1}^{N} (x_{n} - \mu)^{2} \Big\} + \text{const} \\ &= -\frac{\mathbb{E}[\tau]}{2} \Big\{ \lambda_{0} \mu^{2} - 2\lambda_{0} \mu_{0} \mu + \lambda_{0} \mu_{0}^{2} + N \mu^{2} - 2(\sum_{n=1}^{N} x_{n}) \mu + \sum_{n=1}^{N} x_{n}^{2} \Big\} + \text{const} \\ &= -\frac{\mathbb{E}[\tau]}{2} \Big\{ (\lambda_{0} + N) \mu^{2} - 2(\lambda_{0} \mu_{0} + \sum_{n=1}^{N} x_{n}) \mu + (\lambda_{0} \mu_{0}^{2} + \sum_{n=1}^{N} x_{n}^{2}) \Big\} + \text{const} \\ &= -\frac{\mathbb{E}[\tau](\lambda_{0} + N)}{2} \Big\{ \mu^{2} - 2\frac{\lambda_{0} \mu_{0} + \sum_{n=1}^{N} x_{n}}{\lambda_{0} + N} \mu + \frac{\lambda_{0} \mu_{0}^{2} + \sum_{n=1}^{N} x_{n}^{2}}{\lambda_{0} + N} \Big\} + \text{const} \end{split}$$
From this expression, we see that $q_{\mu}^{\star}(\mu)$ should be a Gaussian. Suppose that it has the form $q_{\mu}^{\star}(\mu) = \mathcal{N}(\mu|\mu_N, \lambda_N^{-1})$ ; then its logarithm can be written as:
$$\ln q_{\mu}^{\star}(\mu) = \frac{1}{2} \ln \frac{\lambda_N}{2\pi} - \frac{\lambda_N}{2} (\mu - \mu_N)^2$$
We match the terms related to $\mu$ (the quadratic term and linear term), yielding:
$$\lambda_N = \mathbb{E}[\tau] \cdot (\lambda_0 + N) \quad , \quad \lambda_N \mu_N = \mathbb{E}[\tau] \cdot (\lambda_0 + N) \cdot \frac{\lambda_0 \mu_0 + \sum_{n=1}^N x_n}{\lambda_0 + N}$$
Therefore, we obtain:
$$\mu_N = \frac{\lambda_0 \, \mu_0 + N \bar{x}}{\lambda_0 + N}$$
Where $\bar{x}$ is the mean of $x_n$ , i.e.,
$$\bar{x} = \frac{1}{N} \sum_{n=1}^{N} x_n$$
Then we deal with the other factor $q_{\tau}(\tau)$ . Note that there is a typo in Eq $-\frac{\tau}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_{n} - \mu)^{2} + \lambda_{0}(\mu - \mu_{0})^{2} \right] + \text{const} \quad$: the coefficient in front of $\ln \tau$ should be $\frac{N+1}{2}$ . Let's verify this by considering the terms that introduce $\ln \tau$ . The first term inside the expectation, i.e., $\ln p(D|\mu,\tau)$ , gives $\frac{N}{2} \ln \tau$ ; the second term inside the expectation, i.e., $\ln p(\mu|\tau)$ , gives $\frac{1}{2} \ln \tau$ ; and finally the last term, $\ln p(\tau)$ , gives $(a_0-1)\ln \tau$ . Therefore, Eq $a_N = a_0 + \frac{N}{2}$, Eq $\frac{1}{\mathbb{E}[\tau]} = \mathbb{E}\left[\frac{1}{N} \sum_{n=1}^{N} (x_n - \mu)^2\right] = \overline{x^2} - 2\overline{x}\mathbb{E}[\mu] + \mathbb{E}[\mu^2].$ and Eq $= \frac{1}{N-1} \sum_{n=1}^{N} (x_n - \overline{x})^2.$ will also change accordingly. The corrected forms of these equations will be given in this and the following problems.
Now suppose that $q_{\tau}(\tau)$ is a Gamma distribution, i.e., $q_{\tau}(\tau) \sim \text{Gam}(\tau|a_N,b_N)$ , we have:
$$\ln q_{\tau}(\tau) = -\ln \Gamma(a_N) + a_N \ln b_N + (a_N - 1) \ln \tau - b_N \tau$$
Comparing it with Eq $-\frac{\tau}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_{n} - \mu)^{2} + \lambda_{0}(\mu - \mu_{0})^{2} \right] + \text{const} \quad$ and matching the coefficients ahead of $\tau$ and $\ln \tau$ , we can obtain:
$$a_N - 1 = a_0 - 1 + \frac{N+1}{2} \quad\Rightarrow\quad a_N = a_0 + \frac{N+1}{2}$$
And similarly
$$b_N = b_0 + \frac{1}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 + \lambda_0 (\mu - \mu_0)^2 \right]$$
Just as required.
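To make the updates above concrete, here is a minimal coordinate-ascent sketch (an assumed example, not from the original text): the synthetic data, the prior hyperparameters $\mu_0, \lambda_0, a_0, b_0$ and the initial value of $\mathbb{E}[\tau]$ are arbitrary choices, and the iteration uses the formulas derived above with $a_N = a_0 + (N+1)/2$.

```python
# Hedged sketch of variational inference for a univariate Gaussian with unknown mean and
# precision, alternating the q(mu) and q(tau) updates derived above.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200)        # synthetic data (assumed)
N, xbar = len(x), x.mean()
mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0              # prior hyperparameters (assumed)

E_tau = 1.0                                          # initial guess for E[tau]
for _ in range(50):
    # q(mu) = N(mu | mu_N, lambda_N^{-1})
    lam_N = (lam0 + N) * E_tau
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    E_mu, E_mu2 = mu_N, 1.0 / lam_N + mu_N**2

    # q(tau) = Gam(tau | a_N, b_N) with a_N = a_0 + (N + 1)/2
    a_N = a0 + (N + 1) / 2
    E_sq = np.sum(x**2) - 2 * E_mu * np.sum(x) + N * E_mu2      # E_mu[sum_n (x_n - mu)^2]
    b_N = b0 + 0.5 * (E_sq + lam0 * (E_mu2 - 2 * mu0 * E_mu + mu0**2))
    E_tau = a_N / b_N

print(mu_N, 1.0 / E_tau)    # roughly the sample mean and the sample variance
```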
# **Problem 10.8 Solution (easy)**
Consider the variational posterior distribution for the precision of a univariate Gaussian whose parameters are given by $a_N = a_0 + \frac{N}{2}$ and $b_N = b_0 + \frac{1}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 + \lambda_0 (\mu - \mu_0)^2 \right].$. By using the standard results for the mean and variance of the gamma distribution given by (B.27) and (B.28), show that if we let $N \to \infty$ , this variational posterior distribution has a mean given by the inverse of the maximum likelihood estimator for the variance of the data, and a variance that goes to zero.
According to Eq (B.27), we have:
$$\mathbb{E}[\tau] = \frac{a_0 + (N+1)/2}{b_0 + \frac{1}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 + \lambda_0 (\mu - \mu_0)^2 \right]}$$
$$\approx \frac{N/2}{\frac{1}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 \right]}$$
$$= \frac{N}{\mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 \right]}$$
$$= \left\{ \frac{1}{N} \cdot \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 \right] \right\}^{-1}$$
As $N \to \infty$ , the variational posterior $q_{\mu}(\mu)$ concentrates at $\overline{x}$ , so $\frac{1}{N}\mathbb{E}_{\mu}\left[\sum_{n=1}^{N}(x_n - \mu)^2\right] \to \frac{1}{N}\sum_{n=1}^{N}(x_n - \overline{x})^2$ , which is the maximum likelihood estimator of the variance. Hence the mean of the variational posterior over $\tau$ approaches the inverse of the maximum likelihood variance.
According to Eq (B.28), we have:
$$\begin{aligned} \text{var}[\tau] &= \frac{a_0 + (N+1)/2}{\left(b_0 + \frac{1}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 + \lambda_0 (\mu - \mu_0)^2 \right] \right)^2} \\ &\approx \frac{N/2}{\frac{1}{4} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 \right]^2} \end{aligned}$$
Since the numerator grows linearly with $N$ while the denominator grows quadratically (because $\mathbb{E}_{\mu}[\sum_{n=1}^{N}(x_n-\mu)^2]$ itself grows linearly with $N$ ), we have $\text{var}[\tau] \to 0$ as $N \to \infty$ , just as required.
# **Problem 10.9 Solution (medium)**
By making use of the standard result $\mathbb{E}[\tau] = a_N/b_N$ for the mean of a gamma distribution, together with $\mu_N = \frac{\lambda_0 \mu_0 + N\overline{x}}{\lambda_0 + N}$, $\lambda_N = (\lambda_0 + N) \mathbb{E}[\tau].$, $a_N = a_0 + \frac{N}{2}$, and $b_N = b_0 + \frac{1}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 + \lambda_0 (\mu - \mu_0)^2 \right].$, derive the result $= \frac{1}{N-1} \sum_{n=1}^{N} (x_n - \overline{x})^2.$ for the reciprocal of the expected precision in the factorized variational treatment of a univariate Gaussian.
The underlying assumption of this problem is $a_0 = b_0 = \lambda_0 = 0$ . According to Eq $\mu_N = \frac{\lambda_0 \mu_0 + N\overline{x}}{\lambda_0 + N}$, Eq $\lambda_N = (\lambda_0 + N) \mathbb{E}[\tau].$ and the definition of variance, we can obtain:
$$\begin{split} \mathbb{E}[\mu^2] &= \lambda_N^{-1} + \mathbb{E}[\mu]^2 = \frac{1}{(\lambda_0 + N)\mathbb{E}[\tau]} + (\frac{\lambda_0 \mu_0 + N\overline{x}}{\lambda_0 + N})^2 \\ &= \frac{1}{N\mathbb{E}[\tau]} + \overline{x}^2 \end{split}$$
Note that, since there is a typo in Eq $a_N = a_0 + \frac{N}{2}$ as stated in the previous problem (it is missing a term $\frac{1}{2}$ ), $\mathbb{E}[\tau]^{-1}$ actually equals:
$$\frac{1}{\mathbb{E}[\tau]} = \frac{b_N}{a_N} = \frac{b_0 + \frac{1}{2}\mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 + \lambda_0 (\mu - \mu_0)^2 \right]}{a_0 + (N+1)/2}$$
$$= \frac{\frac{1}{2}\mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 \right]}{(N+1)/2}$$
$$= \mathbb{E}_{\mu} \left[ \frac{1}{N+1} \sum_{n=1}^{N} (x_n - \mu)^2 \right]$$
$$= \frac{N}{N+1} \mathbb{E}_{\mu} \left[ \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu)^2 \right]$$
$$= \frac{N}{N+1} \left\{ \overline{x^2} - 2\overline{x}\mathbb{E}[\mu] + \mathbb{E}[\mu^2] \right\}$$
$$= \frac{N}{N+1} \left\{ \overline{x^2} - 2\overline{x}^2 + \frac{1}{N\mathbb{E}[\tau]} + \overline{x}^2 \right\}$$
$$= \frac{N}{N+1} \left\{ \overline{x^2} - \overline{x}^2 + \frac{1}{N\mathbb{E}[\tau]} \right\}$$
$$= \frac{N}{N+1} \left\{ \overline{x^2} - \overline{x}^2 \right\} + \frac{1}{(N+1)\mathbb{E}[\tau]}$$
Rearranging it, we can obtain:
$$\frac{1}{\mathbb{E}[\tau]} = (\overline{x^2} - \overline{x}^2) = \frac{1}{N} \sum_{n=1}^{N} (x_n - \overline{x})^2$$
Note that, unlike the expression $\frac{1}{N-1}\sum_{n=1}^{N}(x_n - \overline{x})^2$ quoted in the problem, this is still a biased estimator of the variance.
# **Problem 11.1 Solution (easy)**
Show that the finite sample estimator $\hat{f}$ defined by $\widehat{f} = \frac{1}{L} \sum_{l=1}^{L} f(\mathbf{z}^{(l)}).$ has mean equal to $\mathbb{E}[f]$ and variance given by $\operatorname{var}[\widehat{f}] = \frac{1}{L} \mathbb{E}\left[ (f - \mathbb{E}[f])^2 \right]$.
Based on definition, we can write down:
$$\begin{split} \mathbb{E}[\widehat{f}] &= \mathbb{E}[\frac{1}{L}\sum_{l=1}^{L}f(\mathbf{z}^{(l)})] \\ &= \frac{1}{L}\sum_{l=1}^{L}\mathbb{E}[f(\mathbf{z}^{(l)})] \\ &= \frac{1}{L}\cdot L\cdot \mathbb{E}[f] = \mathbb{E}[f] \end{split}$$
Where we have used the linearity of expectation to exchange the order of the expectation and the summation, and the fact that $\mathbb{E}[f(\mathbf{z}^{(l)})] = \mathbb{E}[f]$ because all the $\mathbf{z}^{(l)}$ are drawn from $p(\mathbf{z})$ . Next, we deal with the variance:
$$\begin{aligned} & \text{var}[\hat{f}] &= & \mathbb{E}[(\hat{f} - \mathbb{E}[\hat{f}])^2] = \mathbb{E}[\hat{f}^2] - \mathbb{E}[\hat{f}]^2 = \mathbb{E}[\hat{f}^2] - \mathbb{E}[f]^2 \\ &= & \mathbb{E}[(\frac{1}{L} \sum_{l=1}^{L} f(\mathbf{z}^{(l)}))^2] - \mathbb{E}[f]^2 \\ &= & \frac{1}{L^2} \mathbb{E}[(\sum_{l=1}^{L} f(\mathbf{z}^{(l)}))^2] - \mathbb{E}[f]^2 \\ &= & \frac{1}{L^2} \mathbb{E}[\sum_{l=1}^{L} f^2(\mathbf{z}^{(l)}) + \sum_{i,j=1,i\neq j}^{L} f(\mathbf{z}^{(i)}) f(\mathbf{z}^{(j)})] - \mathbb{E}[f]^2 \\ &= & \frac{1}{L^2} \mathbb{E}[\sum_{l=1}^{L} f^2(\mathbf{z}^{(l)})] + \frac{L^2 - L}{L^2} \mathbb{E}[f]^2 - \mathbb{E}[f]^2 \\ &= & \frac{1}{L^2} \sum_{l=1}^{L} \mathbb{E}[f^2(\mathbf{z}^{(l)})] - \frac{1}{L} \mathbb{E}[f]^2 \\ &= & \frac{1}{L^2} \cdot L \cdot \mathbb{E}[f^2] - \frac{1}{L} \mathbb{E}[f]^2 \\ &= & \frac{1}{L} \mathbb{E}[f^2] - \frac{1}{L} \mathbb{E}[f]^2 = \frac{1}{L} \mathbb{E}[(f - \mathbb{E}[f])^2] \end{aligned}$$
Just as required.
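A brief simulation (an assumed example, not part of the original solution) illustrates both results: repeating the estimator many times, its empirical mean matches $\mathbb{E}[f]$ and its empirical variance is close to $\mathbb{E}[(f-\mathbb{E}[f])^2]/L$ ; here $p(z) = \mathcal{N}(0,1)$ and $f(z) = z^2$ are arbitrary choices.

```python
# Hedged sketch: the Monte Carlo estimator f_hat based on L samples has mean E[f] and
# variance var[f]/L.
import numpy as np

rng = np.random.default_rng(1)
f = lambda z: z**2                      # arbitrary test function
L, repeats = 50, 20000

z = rng.normal(size=(repeats, L))       # independent samples from p(z) = N(0, 1)
f_hat = f(z).mean(axis=1)               # one estimate per row

print(f_hat.mean())                     # approximately E[f] = E[z^2] = 1
print(f_hat.var())                      # approximately var[f]/L = 2/50 = 0.04
```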
# **Problem 11.10 Solution (easy)**
Show that the simple random walk over the integers defined by $p(z^{(\tau+1)} = z^{(\tau)}) = 0.5$, $p(z^{(\tau+1)} = z^{(\tau)} + 1) = 0.25$, and $p(z^{(\tau+1)} = z^{(\tau)} - 1) = 0.25$ has the property that $\mathbb{E}[(z^{(\tau)})^2] = \mathbb{E}[(z^{(\tau-1)})^2] + 1/2$ and hence by induction that $\mathbb{E}[(z^{(\tau)})^2] = \tau/2$ .
Figure 11.15 A probability distribution over two variables $z_1$ and $z_2$ that is uniform over the shaded regions and that is zero everywhere else.

|
Based on definition and Eq (11.34)-(11.36), we can write down:
$$\begin{split} \mathbb{E}[(z^{(\tau)})^{2}] &= 0.5 \cdot \mathbb{E}[(z^{(\tau-1)})^{2}] + 0.25 \cdot \mathbb{E}[(z^{(\tau-1)}+1)^{2}] + 0.25 \cdot \mathbb{E}[(z^{(\tau-1)}-1)^{2}] \\ &= \mathbb{E}[(z^{(\tau-1)})^{2}] + 0.5 \end{split}$$
If the initial state is $z^{(0)} = 0$ (there is a typo in the line below Eq $p(z^{(\tau+1)} = z^{(\tau)} - 1) = 0.25$), then by induction we obtain $\mathbb{E}[(z^{(\tau)})^2] = \tau/2$ , just as required.
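A short simulation (an assumed example) of this random walk, started from $z^{(0)} = 0$ , confirms the result numerically:

```python
# Hedged sketch: simulate many independent random walks with the transition probabilities
# (0.5, 0.25, 0.25) and check that E[(z^(tau))^2] is approximately tau/2.
import numpy as np

rng = np.random.default_rng(2)
chains, tau = 100000, 200
steps = rng.choice([0, 1, -1], p=[0.5, 0.25, 0.25], size=(chains, tau))
z = steps.cumsum(axis=1)                 # state after each step, one chain per row

print((z[:, -1] ** 2).mean(), tau / 2)   # both approximately 100
```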
# **Problem 11.11 Solution (medium)**
Show that the Gibbs sampling algorithm, discussed in Section 11.3, satisfies detailed balance as defined by $p^{\star}(\mathbf{z})T(\mathbf{z}, \mathbf{z}') = p^{\star}(\mathbf{z}')T(\mathbf{z}', \mathbf{z})$.
This problem requires you to know the definition of detailed balance, i.e., Eq (11.40):
$$p^{\star}(\mathbf{z})T(\mathbf{z},\mathbf{z}') = p^{\star}(\mathbf{z}')T(\mathbf{z}',\mathbf{z})$$
Note that here **z** and **z**' are the sampled values of $[z_1, z_2, ..., z_M]^T$ in two consecutive Gibbs sampling steps. Without loss of generality, we assume that we are now updating $z_j^{\tau}$ to $z_j^{\tau+1}$ in step $\tau$ :
$$\begin{split} p^{\star}(\mathbf{z})T(\mathbf{z},\mathbf{z}') &= p(z_1^{\tau},z_2^{\tau},...,z_M^{\tau}) \cdot p(z_j^{\tau+1}|\mathbf{z}_{/j}^{\tau}) \\ &= p(z_j^{\tau}|\mathbf{z}_{/j}^{\tau}) \cdot p(\mathbf{z}_{/j}^{\tau}) \cdot p(z_j^{\tau+1}|\mathbf{z}_{/j}^{\tau}) \\ &= p(z_j^{\tau}|\mathbf{z}_{/j}^{\tau+1}) \cdot p(\mathbf{z}_{/j}^{\tau+1}) \cdot p(z_j^{\tau+1}|\mathbf{z}_{/j}^{\tau+1}) \\ &= p(z_j^{\tau}|\mathbf{z}_{/j}^{\tau+1}) \cdot p(z_1^{\tau+1},z_2^{\tau+1},...,z_M^{\tau+1}) \\ &= T(\mathbf{z}',\mathbf{z}) \cdot p^{\star}(\mathbf{z}') \end{split}$$
To be more specific, we write down the first line based on Gibbs sampling, where $\mathbf{z}_{/j}^{\tau}$ denotes all the entries in vector $\mathbf{z}^{\tau}$ except $z_{j}^{\tau}$ . In the second line, we use the conditional property, i.e., $p(a,b) = p(a|b)p(b)$ , for the first term. In the third line, we use the fact that $\mathbf{z}_{/j}^{\tau} = \mathbf{z}_{/j}^{\tau+1}$ . Then we reversely apply the conditional property to the last two terms in the fourth line, and finally obtain what was asked.
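For concreteness, here is a minimal Gibbs sampler (an assumed example, not part of the original solution) for a zero-mean bivariate Gaussian with correlation $\rho$ , whose full conditionals $p(z_1|z_2)$ and $p(z_2|z_1)$ are the univariate Gaussians used below; the value of $\rho$ and the chain length are arbitrary.

```python
# Hedged sketch of Gibbs sampling: alternately draw each variable from its full conditional.
import numpy as np

rng = np.random.default_rng(3)
rho, T = 0.8, 20000
z1, z2 = 0.0, 0.0
samples = np.empty((T, 2))
for t in range(T):
    # For a standard bivariate Gaussian with correlation rho:
    # p(z1 | z2) = N(rho * z2, 1 - rho^2), and symmetrically for z2 | z1.
    z1 = rng.normal(rho * z2, np.sqrt(1 - rho**2))
    z2 = rng.normal(rho * z1, np.sqrt(1 - rho**2))
    samples[t] = z1, z2

print(np.corrcoef(samples.T)[0, 1])     # approximately rho
```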
# **Problem 11.12 Solution (easy)**
Consider the distribution shown in Figure 11.15. Discuss whether the standard Gibbs sampling procedure for this distribution is ergodic, and therefore whether it would sample correctly from this distribution.
Obviously, Gibbs sampling is not ergodic for this specific distribution. The quick reason is that the projections of the two shaded regions onto the $z_1$ axis do not overlap, and neither do their projections onto the $z_2$ axis. For instance, denote the lower-left shaded region as region 1. If the initial sample falls into this region, then no matter how many steps are carried out, all the generated samples will remain in region 1. The same holds for the upper-right region.
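The following sketch makes this concrete with an assumed version of Figure 11.15 in which the two shaded regions are the squares $[0,1]\times[0,1]$ and $[2,3]\times[2,3]$ (these particular squares are my choice, not taken from the figure): started in the lower-left square, the Gibbs chain never reaches the other one.

```python
# Hedged sketch: Gibbs sampling on a distribution that is uniform over two squares whose
# axis projections do not overlap; the chain stays in the square it starts in.
import numpy as np

rng = np.random.default_rng(4)

def lower_edge(v):
    # Lower edge of the interval containing the coordinate: [0,1] or [2,3].
    return 0.0 if v <= 1.0 else 2.0

z1, z2 = 0.5, 0.5                        # start in the lower-left square
for _ in range(10000):
    z1 = rng.uniform(lower_edge(z2), lower_edge(z2) + 1.0)   # p(z1 | z2)
    z2 = rng.uniform(lower_edge(z1), lower_edge(z1) + 1.0)   # p(z2 | z1)

print(z1, z2)                            # still inside [0,1] x [0,1]: the chain is not ergodic
```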
# **Problem 11.13 Solution**
Let's begin by definition.
$$p(\mu|x,\tau,\mu_0,s_0) \propto p(x|\mu,\tau,\mu_0,s_0) \cdot p(\mu|\tau,\mu_0,s_0)$$
$$= p(x|\mu,\tau) \cdot p(\mu|\mu_0,s_0)$$
$$= \mathcal{N}(x|\mu,\tau^{-1}) \cdot \mathcal{N}(\mu|\mu_0,s_0)$$
Where in the first line, we have used Bayes' Theorem:
$$p(\mu|x,c) \propto p(x|\mu,c) \cdot p(\mu|c)$$
Now, using Eq (2.113) through Eq $\Sigma = (\mathbf{\Lambda} + \mathbf{A}^{\mathrm{T}} \mathbf{L} \mathbf{A})^{-1}.$ , we can obtain $p(\mu|x,\tau,\mu_0,s_0)=\mathcal{N}(\mu|\mu^{\star},s^{\star})$ , where we have defined:
$$[s^{\star}]^{-1} = s_0^{-1} + \tau \quad , \quad \mu^{\star} = s^{\star} \cdot (\tau \cdot x + s_0^{-1} \mu_0)$$
It is similar for $p(\tau|x,\mu,a,b)$ :
$$p(\tau|x,\mu,a,b) \propto p(x|\tau,\mu,a,b) \cdot p(\tau|\mu,a,b)$$
$$= p(x|\mu,\tau) \cdot p(\tau|a,b)$$
$$= \mathcal{N}(x|\mu,\tau^{-1}) \cdot \operatorname{Gam}(\tau|a,b)$$
Based on Section 2.3.6, especially Eq (2.150)-(2.151), we can obtain $p(\tau|x,\mu,a,b) = \text{Gam}(\tau|a^*,b^*)$ , where we have defined:
$$a^* = a + 0.5 \quad , \quad b^* = b + 0.5 \cdot (x - \mu)^2$$
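Putting the two conditionals together gives a Gibbs sampler for this model. The sketch below is an assumed example (the observation $x$ and the hyperparameters $\mu_0, s_0, a, b$ are arbitrary values), alternately drawing $\mu$ and $\tau$ from the distributions derived above.

```python
# Hedged sketch of Gibbs sampling for the model of Figure 11.16: a single Gaussian
# observation x with a Gaussian prior on mu and a Gamma prior on tau.
import numpy as np

rng = np.random.default_rng(5)
x, mu0, s0, a, b = 1.3, 0.0, 1.0, 2.0, 2.0      # assumed observation and hyperparameters
mu, tau = 0.0, 1.0
samples = []
for _ in range(20000):
    # mu | x, tau ~ N(mu_star, s_star) with 1/s_star = 1/s0 + tau
    s_star = 1.0 / (1.0 / s0 + tau)
    mu_star = s_star * (tau * x + mu0 / s0)
    mu = rng.normal(mu_star, np.sqrt(s_star))
    # tau | x, mu ~ Gam(a + 1/2, b + (x - mu)^2 / 2); numpy uses shape and scale = 1/rate
    tau = rng.gamma(shape=a + 0.5, scale=1.0 / (b + 0.5 * (x - mu) ** 2))
    samples.append((mu, tau))
```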
# **Problem 11.14 Solution (easy)**
Verify that the over-relaxation update $z_i' = \mu_i + \alpha(z_i - \mu_i) + \sigma_i(1 - \alpha_i^2)^{1/2}\nu$, in which $z_i$ has mean $\mu_i$ and variance $\sigma_i^2$ , and where $\nu$ has zero mean and unit variance, gives a value $z_i'$ with mean $\mu_i$ and variance $\sigma_i^2$ .
Based on definition, we can write down:
$$\begin{split} \mathbb{E}[z_i'] &= \mathbb{E}[\mu_i + \alpha(z_i - \mu_i) + \sigma_i (1 - \alpha_i^2)^{1/2} v] \\ &= \mu_i + \mathbb{E}[\alpha(z_i - \mu_i)] + \mathbb{E}[\sigma_i (1 - \alpha_i^2)^{1/2} v] \\ &= \mu_i + \alpha \cdot \mathbb{E}[z_i - \mu_i] + [\sigma_i (1 - \alpha_i^2)^{1/2}] \cdot \mathbb{E}[v] \\ &= \mu_i \end{split}$$
Where we have used the fact that the mean of $z_i$ is $\mu_i$ , i.e., $\mathbb{E}[z_i] = \mu_i$ , and that the mean of v is 0, i.e., $\mathbb{E}[v] = 0$ . Then we deal with the variance:
$$\begin{aligned} & \text{var}[z_i'] &= & \mathbb{E}[(z_i' - \mu_i)^2] \\ &= & \mathbb{E}[(\alpha(z_i - \mu_i) + \sigma_i(1 - \alpha_i^2)^{1/2}v)^2] \\ &= & \mathbb{E}[\alpha^2(z_i - \mu_i)^2] + \mathbb{E}[\sigma_i^2(1 - \alpha_i^2)v^2] + \mathbb{E}[2\alpha(z_i - \mu_i) \cdot \sigma_i(1 - \alpha_i^2)^{1/2}v] \\ &= & \alpha^2 \cdot \mathbb{E}[(z_i - \mu_i)^2] + \sigma_i^2(1 - \alpha_i^2) \cdot \mathbb{E}[v^2] + 2\alpha \cdot \sigma_i(1 - \alpha_i^2)^{1/2} \cdot \mathbb{E}[(z_i - \mu_i)v] \\ &= & \alpha^2 \cdot \text{var}[z_i] + \sigma_i^2(1 - \alpha_i^2) \cdot (\text{var}[v] + \mathbb{E}[v]^2) + 2\alpha \cdot \sigma_i(1 - \alpha_i^2)^{1/2} \cdot \mathbb{E}[(z_i - \mu_i)] \cdot \mathbb{E}[v] \\ &= & \alpha^2 \cdot \sigma_i^2 + \sigma_i^2(1 - \alpha_i^2) \cdot 1 + 0 \\ &= & \sigma_i^2 \end{aligned}$$
Where we have used the fact that $z_i$ and $v$ are independent and thus $\mathbb{E}[(z_i - \mu_i)v] = \mathbb{E}[z_i - \mu_i] \cdot \mathbb{E}[v] = 0$ , just as required.
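A quick simulation (an assumed example; the values of $\mu_i$ , $\sigma_i$ and $\alpha$ are arbitrary) confirms that the over-relaxation update leaves the first two moments unchanged:

```python
# Hedged sketch: apply the over-relaxation update to samples from N(mu_i, sigma_i^2) and
# check that the mean and variance are preserved.
import numpy as np

rng = np.random.default_rng(6)
mu_i, sigma_i, alpha = 2.0, 3.0, -0.7
z = rng.normal(mu_i, sigma_i, size=1_000_000)
nu = rng.normal(size=z.shape)
z_new = mu_i + alpha * (z - mu_i) + sigma_i * np.sqrt(1 - alpha**2) * nu

print(z_new.mean(), z_new.var())        # approximately mu_i = 2 and sigma_i^2 = 9
```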
# **Problem 11.15 Solution (easy)**
Using $K(\mathbf{r}) = \frac{1}{2} \|\mathbf{r}\|^2 = \frac{1}{2} \sum_{i} r_i^2.$ and $H(\mathbf{z}, \mathbf{r}) = E(\mathbf{z}) + K(\mathbf{r})$, show that the Hamiltonian equation $\frac{\mathrm{d}z_i}{\mathrm{d}\tau} = \frac{\partial H}{\partial r_i}$ is equivalent to $r_i = \frac{\mathrm{d}z_i}{\mathrm{d}\tau}$. Similarly, using $H(\mathbf{z}, \mathbf{r}) = E(\mathbf{z}) + K(\mathbf{r})$ show that $\frac{\mathrm{d}r_i}{\mathrm{d}\tau} = -\frac{\partial H}{\partial z_i}.$ is equivalent to $\frac{\mathrm{d}r_i}{\mathrm{d}\tau} = -\frac{\partial E(\mathbf{z})}{\partial z_i}.$.
Using Eq $H(\mathbf{z}, \mathbf{r}) = E(\mathbf{z}) + K(\mathbf{r})$, we can write down:
$$\frac{\partial H}{\partial r_i} = \frac{\partial K}{\partial r_i} = r_i$$
Comparing this with Eq $r_i = \frac{\mathrm{d}z_i}{\mathrm{d}\tau}$, we obtain Eq $\frac{\mathrm{d}z_i}{\mathrm{d}\tau} = \frac{\partial H}{\partial r_i}$. Similarly, still using Eq $H(\mathbf{z}, \mathbf{r}) = E(\mathbf{z}) + K(\mathbf{r})$, we can obtain:
$$\frac{\partial H}{\partial z_i} = \frac{\partial E}{\partial z_i}$$
Comparing this with Eq $\frac{\mathrm{d}r_i}{\mathrm{d}\tau} = -\frac{\partial E(\mathbf{z})}{\partial z_i}.$, we obtain Eq $\frac{\mathrm{d}r_i}{\mathrm{d}\tau} = -\frac{\partial H}{\partial z_i}.$.
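As an illustration (an assumed example, not part of the original solution), the Hamiltonian equations above can be integrated numerically with the standard leapfrog scheme; for $E(z) = z^2/2$ and $K(r) = r^2/2$ , the value of $H = E + K$ stays approximately constant along the trajectory, as expected from the conservation of the Hamiltonian. The step size and initial state below are arbitrary.

```python
# Hedged sketch: leapfrog integration of dz/dtau = r, dr/dtau = -dE/dz for E(z) = z^2 / 2.
import numpy as np

grad_E = lambda z: z                    # dE/dz
H = lambda z, r: 0.5 * z**2 + 0.5 * r**2

z, r, eps = 1.0, 0.5, 0.01
H0 = H(z, r)
for _ in range(1000):
    r -= 0.5 * eps * grad_E(z)          # half step in the momentum
    z += eps * r                        # full step in the position
    r -= 0.5 * eps * grad_E(z)          # second half step in the momentum

print(H0, H(z, r))                      # nearly equal: H is (approximately) conserved
```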
# **Problem 11.16 Solution (easy)**
By making use of $K(\mathbf{r}) = \frac{1}{2} \|\mathbf{r}\|^2 = \frac{1}{2} \sum_{i} r_i^2.$, $H(\mathbf{z}, \mathbf{r}) = E(\mathbf{z}) + K(\mathbf{r})$, and $p(\mathbf{z}, \mathbf{r}) = \frac{1}{Z_H} \exp(-H(\mathbf{z}, \mathbf{r})).$, show that the conditional distribution $p(\mathbf{r}|\mathbf{z})$ is a Gaussian.
**Figure 11.16** A graph involving an observed Gaussian variable x with prior distributions over its mean $\mu$ and precision $\tau$ .

### 558 11. SAMPLING METHODS
|
According to Bayes' Theorem and Eq $p(\mathbf{z}) = \frac{1}{Z_p} \exp\left(-E(\mathbf{z})\right)$, $p(\mathbf{z}, \mathbf{r}) = \frac{1}{Z_H} \exp(-H(\mathbf{z}, \mathbf{r})).$, we have:
$$p(\mathbf{r}|\mathbf{z}) = \frac{p(\mathbf{z}, \mathbf{r})}{p(\mathbf{z})} = \frac{1/Z_H \cdot \exp(-H(\mathbf{z}, \mathbf{r}))}{1/Z_p \cdot \exp(-E(\mathbf{z}))} = \frac{Z_p}{Z_H} \cdot \exp(-K(\mathbf{r}))$$
where we have used Eq $H(\mathbf{z}, \mathbf{r}) = E(\mathbf{z}) + K(\mathbf{r})$. Moreover, by noticing Eq $K(\mathbf{r}) = \frac{1}{2} \|\mathbf{r}\|^2 = \frac{1}{2} \sum_{i} r_i^2.$, we conclude that $p(\mathbf{r}|\mathbf{z})$ should satisfy a Gaussian distribution.
# **Problem 11.17 Solution (easy)**
Verify that the two probabilities $\frac{1}{Z_H} \exp(-H(\mathcal{R}))\, \delta V\, \frac{1}{2} \min \{1, \exp(-H(\mathcal{R}) + H(\mathcal{R}'))\}$ and $\frac{1}{Z_H} \exp(-H(\mathcal{R}'))\, \delta V\, \frac{1}{2} \min \{1, \exp(-H(\mathcal{R}') + H(\mathcal{R}))\}$ are equal, and hence that detailed balance holds for the hybrid Monte Carlo algorithm.

Appendix A
In Chapter 9, we discussed probabilistic models having discrete latent variables, such as the mixture of Gaussians. We now explore models in which some, or all, of the latent variables are continuous. An important motivation for such models is that many data sets have the property that the data points all lie close to a manifold of much lower dimensionality than that of the original data space. To see why this might arise, consider an artificial data set constructed by taking one of the off-line digits, represented by a $64 \times 64$ pixel grey-level image, and embedding it in a larger image of size $100 \times 100$ by padding with pixels having the value zero (corresponding to white pixels) in which the location and orientation of the digit is varied at random, as illustrated in Figure 12.1. Each of the resulting images is represented by a point in the $100 \times 100 = 10,000$ -dimensional data space. However, across a data set of such images, there are only three *degrees of freedom* of variability, corresponding to the vertical and horizontal translations and the rotations. The data points will therefore live on a subspace of the data space whose *intrinsic dimensionality* is three. Note

Figure 12.1 A synthetic data set obtained by taking one of the off-line digit images and creating multiple copies in each of which the digit has undergone a random displacement and rotation within some larger image field. The resulting images each have $100 \times 100 = 10,000$ pixels.
that the manifold will be nonlinear because, for instance, if we translate the digit past a particular pixel, that pixel value will go from zero (white) to one (black) and back to zero again, which is clearly a nonlinear function of the digit position. In this example, the translation and rotation parameters are latent variables because we observe only the image vectors and are not told which values of the translation or rotation variables were used to create them.
For real digit image data, there will be a further degree of freedom arising from scaling. Moreover there will be multiple additional degrees of freedom associated with more complex deformations due to the variability in an individual's writing as well as the differences in writing styles between individuals. Nevertheless, the number of such degrees of freedom will be small compared to the dimensionality of the data set.
Another example is provided by the oil flow data set, in which (for a given geometrical configuration of the gas, water, and oil phases) there are only two degrees of freedom of variability corresponding to the fraction of oil in the pipe and the fraction of water (the fraction of gas then being determined). Although the data space comprises 12 measurements, a data set of points will lie close to a two-dimensional manifold embedded within this space. In this case, the manifold comprises several distinct segments corresponding to different flow regimes, each such segment being a (noisy) continuous two-dimensional manifold. If our goal is data compression, or density modelling, then there can be benefits in exploiting this manifold structure.
In practice, the data points will not be confined precisely to a smooth low-dimensional manifold, and we can interpret the departures of data points from the manifold as 'noise'. This leads naturally to a generative view of such models in which we first select a point within the manifold according to some latent variable distribution and then generate an observed data point by adding noise, drawn from some conditional distribution of the data variables given the latent variables.
The simplest continuous latent variable model assumes Gaussian distributions for both the latent and observed variables and makes use of a linear-Gaussian dependence of the observed variables on the state of the latent variables. This leads to a probabilistic formulation of the well-known technique of principal component analysis (PCA), as well as to a related model called factor analysis.
Figure 12.2 Principal component analysis seeks a space of lower dimensionality, known as the principal subspace and denoted by the magenta line, such that the orthogonal projection of the data points (red dots) onto this subspace maximizes the variance of the projected points (green dots). An alternative definition of PCA is based on minimizing the sum-of-squares of the projection errors, indicated by the blue lines.

In this chapter we will begin with a standard, nonprobabilistic treatment of PCA, and then we show how PCA arises naturally as the maximum likelihood solution to
a particular form of linear-Gaussian latent variable model. This probabilistic reformulation brings many advantages, such as the use of EM for parameter estimation, principled extensions to mixtures of PCA models, and Bayesian formulations that allow the number of principal components to be determined automatically from the data. Finally, we discuss briefly several generalizations of the latent variable concept that go beyond the linear-Gaussian assumption including non-Gaussian latent variables, which leads to the framework of *independent component analysis*, as well as models having a nonlinear relationship between latent and observed variables.
### 12.1. Principal Component Analysis
Principal component analysis, or PCA, is a technique that is widely used for applications such as dimensionality reduction, lossy data compression, feature extraction, and data visualization (Jolliffe, 2002). It is also known as the *Karhunen-Loève* transform.
There are two commonly used definitions of PCA that give rise to the same algorithm. PCA can be defined as the orthogonal projection of the data onto a lower dimensional linear space, known as the *principal subspace*, such that the variance of the projected data is maximized (Hotelling, 1933). Equivalently, it can be defined as the linear projection that minimizes the average projection cost, defined as the mean squared distance between the data points and their projections (Pearson, 1901). The process of orthogonal projection is illustrated in Figure 12.2. We consider each of these definitions in turn.
### 12.1.1 Maximum variance formulation
Consider a data set of observations $\{x_n\}$ where $n=1,\ldots,N$ , and $x_n$ is a Euclidean variable with dimensionality D. Our goal is to project the data onto a space having dimensionality M < D while maximizing the variance of the projected data. For the moment, we shall assume that the value of M is given. Later in this
There are typos in Eq $\frac{1}{Z_H} \exp(-H(\mathcal{R}))\, \delta V\, \frac{1}{2} \min \{1, \exp(-H(\mathcal{R}) + H(\mathcal{R}'))\}$ and Eq $\frac{1}{Z_H} \exp(-H(\mathcal{R}'))\, \delta V\, \frac{1}{2} \min \{1, \exp(-H(\mathcal{R}') + H(\mathcal{R}))\}$ : the signs in the exponential of the second argument of the min function are not right. To be more specific, the first expression should be:
$$\frac{1}{Z_H} \exp(-H(R)) \delta V \frac{1}{2} \min\{1, \exp(H(R) - H(R'))\} \tag{*}$$
and the second expression should be:
$$\frac{1}{Z_H} \exp(-H(R'))\,\delta V\, \frac{1}{2} \min\{1, \exp(H(R') - H(R))\} \tag{**}$$
When H(R) = H(R'), they are clearly equal. When H(R) > H(R'), (\*) will reduce to:
$$\frac{1}{Z_H} \exp(-H(R))\delta V \frac{1}{2}$$
Because the min function will give 1, and in this case (\*\*) will give:
$$\frac{1}{Z_H} \exp(-H(R')) \,\delta V\, \frac{1}{2} \exp(H(R') - H(R)) = \frac{1}{Z_H} \exp(-H(R)) \,\delta V\, \frac{1}{2}$$
Therefore, they are identical, and it is similar when H(R) < H(R').
# **Problem 11.2 Solution (easy)**
Suppose that z is a random variable with uniform distribution over (0,1) and that we transform z using $y = h^{-1}(z)$ where h(y) is given by $z = h(y) \equiv \int_{-\infty}^{y} p(\widehat{y}) \,\mathrm{d}\widehat{y}$. Show that y has the distribution p(y).
What this problem wants us to prove is that if we use $y = h^{-1}(z)$ to transform the value of z to y, where z satisfies a uniform distribution over [0,1] and $h(\cdot)$ is defined by Eq (11.6), we can enforce y to satisfy a specific desired distribution p(y). Let's prove it, starting from the change-of-variable formula for probability densities and writing $p^{\star}(y)$ for the density induced on y by the transformation:
$$p^{\star}(y) = p(z) \cdot \left| \frac{dz}{dy} \right| = 1 \cdot h'(y) = \frac{d}{dy} \int_{-\infty}^{y} p(\widehat{y}) d\widehat{y} = p(y)$$
Just as required.
# **Problem 11.3 Solution (easy)**
Given a random variable z that is uniformly distributed over (0, 1), find a transformation y = f(z) such that y has a Cauchy distribution given by $p(y) = \frac{1}{\pi} \frac{1}{1 + y^2}.$.
We use what we have obtained in the previous problem.
$$\begin{split} h(y) &= \int_{-\infty}^{y} p(\hat{y}) d\hat{y} \\ &= \int_{-\infty}^{y} \frac{1}{\pi} \frac{1}{1 + \hat{y}^2} d\hat{y} \\ &= \frac{1}{\pi} \tan^{-1}(y) + \frac{1}{2} \end{split}$$
Therefore, since we know that $z = h(y) = \frac{1}{\pi}\tan^{-1}(y) + \frac{1}{2}$ , we can obtain the transformation from $z$ to $y$ by inversion: $y = \tan\left(\pi\left(z - \frac{1}{2}\right)\right)$ .
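A quick check (an assumed example) of this transformation: drawing $z$ uniformly on $(0,1)$ and applying $y = \tan(\pi(z - 1/2))$ should reproduce the quartiles of the standard Cauchy distribution, which lie at $\pm 1$ .

```python
# Hedged sketch: sample from the Cauchy distribution by transforming uniform variates.
import numpy as np

rng = np.random.default_rng(7)
z = rng.uniform(size=1_000_000)
y = np.tan(np.pi * (z - 0.5))

# The Cauchy CDF is F(y) = arctan(y)/pi + 1/2, so the quartiles are at -1 and +1.
print(np.quantile(y, [0.25, 0.5, 0.75]))    # approximately [-1, 0, 1]
```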
# **Problem 11.4 Solution (medium)**
Suppose that $z_1$ and $z_2$ are uniformly distributed over the unit circle, as shown in Figure 11.3, and that we make the change of variables given by $y_1 = z_1 \left(\frac{-2\ln z_1}{r^2}\right)^{1/2}$ and $y_2 = z_2 \left(\frac{-2\ln z_2}{r^2}\right)^{1/2}$. Show that $(y_1, y_2)$ will be distributed according to $= \left[ \frac{1}{\sqrt{2\pi}} \exp(-y_1^2/2) \right] \left[ \frac{1}{\sqrt{2\pi}} \exp(-y_2^2/2) \right]$.
First, I believe there is a typo in Eq $y_1 = z_1 \left(\frac{-2\ln z_1}{r^2}\right)^{1/2}$ and $y_2 = z_2 \left(\frac{-2\ln z_2}{r^2}\right)^{1/2}$. Both $\ln z_1$ and $\ln z_2$ should be $\ln(z_1^2 + z_2^2)$ . In the following, we will solve the problem under this assumption.
We only need to calculate the Jacobian determinant $\left|\partial(z_1, z_2)/\partial(y_1, y_2)\right|$ . When dealing with a problem associated with a circle, it is always convenient to use polar coordinates:
$$z_1 = r\cos\theta \quad , \quad z_2 = r\sin\theta$$
It is easy to obtain:
$$\frac{\partial(z_1, z_2)}{\partial(r, \theta)} = \begin{bmatrix} \partial z_1 / \partial r & \partial z_1 / \partial \theta \\ \partial z_2 / \partial r & \partial z_2 / \partial \theta \end{bmatrix} = \begin{bmatrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta \end{bmatrix}$$
Therefore, we can obtain:
$$\left|\frac{\partial(z_1, z_2)}{\partial(r, \theta)}\right| = r(\cos^2 \theta + \sin^2 \theta) = r$$
Then we substitute r and $\theta$ into Eq $y_1 = z_1 \left(\frac{-2\ln z_1}{r^2}\right)^{1/2}$, yielding:
$$y_1 = r\cos\theta\left(\frac{-2\ln r^2}{r^2}\right)^{1/2} = \cos\theta(-2\ln r^2)^{1/2} \tag{*}$$
Similarly, we also have:
$$y_2 = \sin\theta (-2\ln r^2)^{1/2} \tag{**}$$
It is easy to obtain:
$$\frac{\partial(y_1,y_2)}{\partial(r,\theta)} = \begin{bmatrix} \partial y_1/\partial r & \partial y_1/\partial \theta \\ \partial y_2/\partial r & \partial y_2/\partial \theta \end{bmatrix} = \begin{bmatrix} -2\cos\theta(-2\ln r^2)^{-1/2} \cdot r^{-1} & -\sin\theta(-2\ln r^2)^{1/2} \\ -2\sin\theta(-2\ln r^2)^{-1/2} \cdot r^{-1} & \cos\theta(-2\ln r^2)^{1/2} \end{bmatrix}$$
Therefore, we can obtain:
$$\left|\frac{\partial(y_1, y_2)}{\partial(r, \theta)}\right| = (-2r^{-1}(\cos^2\theta + \sin^2\theta)) = -2r^{-1}$$
Next, we need to use the property of Jacobian Matrix:
$$\begin{aligned} |\frac{\partial(z_1, z_2)}{\partial(y_1, y_2)}| &= |\frac{\partial(z_1, z_2)}{\partial(r, \theta)} \cdot \frac{\partial(r, \theta)}{\partial(y_1, y_2)}| \\ &= |\frac{\partial(z_1, z_2)}{\partial(r, \theta)}| \cdot |\frac{\partial(r, \theta)}{\partial(y_1, y_2)}| \\ &= |\frac{\partial(z_1, z_2)}{\partial(r, \theta)}| \cdot |\frac{\partial(y_1, y_2)}{\partial(r, \theta)}|^{-1} \\ &= r \cdot (-2r^{-1})^{-1} = -\frac{r^2}{2} \end{aligned}$$
By squaring both sides of (\*) and (\*\*) and adding them together, we can obtain:
$$y_1^2 + y_2^2 = -2\ln r^2 \quad\Longrightarrow\quad r^2 = \exp\left\{\frac{y_1^2 + y_2^2}{-2}\right\}$$
Finally, we can obtain:
$$p(y_1,y_2) = p(z_1,z_2) \left| \frac{\partial(z_1,z_2)}{\partial(y_1,y_2)} \right| = \frac{1}{\pi} \cdot \left| -\frac{r^2}{2} \right| = \frac{1}{2\pi} r^2 = \frac{1}{2\pi} \exp\left\{ \frac{y_1^2 + y_2^2}{-2} \right\}$$
Just as required.
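For completeness, here is a sketch (an assumed example) of the corresponding polar Box-Muller sampler: $(z_1, z_2)$ is drawn uniformly inside the unit circle by rejection, and the transformation above (with $r^2 = z_1^2 + z_2^2$ ) produces samples whose empirical mean and variance are close to 0 and 1.

```python
# Hedged sketch of the polar Box-Muller method analysed in this problem.
import numpy as np

rng = np.random.default_rng(8)

def polar_box_muller(n):
    out = []
    while len(out) < n:
        z1, z2 = rng.uniform(-1, 1, size=2)
        r2 = z1**2 + z2**2
        if 0 < r2 < 1:                              # keep only points inside the unit circle
            factor = np.sqrt(-2 * np.log(r2) / r2)
            out.extend([z1 * factor, z2 * factor])
    return np.array(out[:n])

y = polar_box_muller(200_000)
print(y.mean(), y.var())                            # approximately 0 and 1
```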
# **Problem 11.5 Solution (easy)**
Let z be a D-dimensional random variable having a Gaussian distribution with zero mean and unit covariance matrix, and suppose that the positive definite symmetric matrix $\Sigma$ has the Cholesky decomposition $\Sigma = \mathbf{L}\mathbf{L}^T$ where $\mathbf{L}$ is a lower-triangular matrix (i.e., one with zeros above the leading diagonal). Show that the variable $\mathbf{y} = \mu + \mathbf{L}\mathbf{z}$ has a Gaussian distribution with mean $\mu$ and covariance $\Sigma$ . This provides a technique for generating samples from a general multivariate Gaussian using samples from a univariate Gaussian having zero mean and unit variance.
Since this is a linear transformation of $\mathbf{z}$ , $\mathbf{y}$ is still a Gaussian random variable, and we only need to match its moments (mean and covariance). We know that $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ , $\mathbf{\Sigma} = \mathbf{L}\mathbf{L}^T$ , and $\mathbf{y} = \boldsymbol{\mu} + \mathbf{L}\mathbf{z}$ . Now, using $\mathbb{E}[\mathbf{z}] = \mathbf{0}$ , we obtain:
$$\mathbb{E}[\mathbf{y}] = \mathbb{E}[\mu + \mathbf{L}\mathbf{z}]$$
$$= \mu + \mathbf{L} \cdot \mathbb{E}[\mathbf{z}]$$
$$= \mu$$
Moreover, using $\text{cov}[\mathbf{z}] = \mathbb{E}[\mathbf{z}\mathbf{z}^T] - \mathbb{E}[\mathbf{z}]\mathbb{E}[\mathbf{z}^T] = \mathbb{E}[\mathbf{z}\mathbf{z}^T] = \mathbf{I}$ , we can obtain:
$$\begin{aligned}
\operatorname{cov}[\mathbf{y}] &= \mathbb{E}[\mathbf{y}\mathbf{y}^T] - \mathbb{E}[\mathbf{y}]\mathbb{E}[\mathbf{y}^T] \\
&= \mathbb{E}[(\boldsymbol{\mu} + \mathbf{L}\mathbf{z}) (\boldsymbol{\mu} + \mathbf{L}\mathbf{z})^T] - \boldsymbol{\mu}\boldsymbol{\mu}^T \\
&= \mathbb{E}[\boldsymbol{\mu}\boldsymbol{\mu}^T + \boldsymbol{\mu}(\mathbf{L}\mathbf{z})^T + (\mathbf{L}\mathbf{z})\boldsymbol{\mu}^T + (\mathbf{L}\mathbf{z})(\mathbf{L}\mathbf{z})^T] - \boldsymbol{\mu}\boldsymbol{\mu}^T \\
&= \boldsymbol{\mu}\,\mathbb{E}[\mathbf{z}^T]\,\mathbf{L}^T + \mathbf{L}\,\mathbb{E}[\mathbf{z}]\,\boldsymbol{\mu}^T + \mathbb{E}[\mathbf{L}\mathbf{z}\mathbf{z}^T\mathbf{L}^T] \\
&= \mathbf{L} \cdot \mathbb{E}[\mathbf{z}\mathbf{z}^T] \cdot \mathbf{L}^T = \mathbf{L} \cdot \mathbf{I} \cdot \mathbf{L}^T \\
&= \mathbf{\Sigma}
\end{aligned}$$
Just as required.
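The corresponding sampling recipe is straightforward to verify numerically; the sketch below (an assumed example, with an arbitrary choice of $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ ) draws samples via the Cholesky factor and checks the empirical moments.

```python
# Hedged sketch: sample from N(mu, Sigma) using y = mu + L z with Sigma = L L^T.
import numpy as np

rng = np.random.default_rng(9)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)           # lower-triangular Cholesky factor

z = rng.normal(size=(500_000, 2))       # rows of independent N(0, I) samples
y = mu + z @ L.T                        # each row is mu + L z

print(y.mean(axis=0))                   # approximately mu
print(np.cov(y.T))                      # approximately Sigma
```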
# **Problem 11.6 Solution (medium)**
In this exercise, we show more carefully that rejection sampling does indeed draw samples from the desired distribution $p(\mathbf{z})$ . Suppose the proposal distribution is $q(\mathbf{z})$ and show that the probability of a sample value $\mathbf{z}$ being accepted is given by $\widetilde{p}(\mathbf{z})/kq(\mathbf{z})$ where $\widetilde{p}$ is any unnormalized distribution that is proportional to $p(\mathbf{z})$ , and the constant k is set to the smallest value that ensures $kq(\mathbf{z}) \geqslant \widetilde{p}(\mathbf{z})$ for all values of $\mathbf{z}$ . Note that the probability of drawing a value $\mathbf{z}$ is given by the probability of drawing that value from $q(\mathbf{z})$ times the probability of accepting that value given that it has been drawn. Make use of this, along with the sum and product rules of probability, to write down the normalized form for the distribution over $\mathbf{z}$ , and show that it equals $p(\mathbf{z})$ .
This problem is all about definition. According to the description of rejection sampling, we know that: for a specific value $\mathbf{z}_0$ (drawn from $q(\mathbf{z})$ ), we will generate a random variable $u_0$ , which satisfies a uniform distribution in the interval $[0, kq(\mathbf{z}_0)]$ , and if the generated value of $u_0$ is less than $\tilde{p}(\mathbf{z}_0)$ , we will accept this value. Therefore, we obtain:
$$P[\text{accept}|\mathbf{z}_0] = \frac{\widetilde{p}(\mathbf{z}_0)}{kq(\mathbf{z}_0)}$$
Since we know $\mathbf{z}_0$ is drawn from $q(\mathbf{z})$ , we can obtain the total acceptance rate by integral:
$$P[\text{accept}] = \int P[\text{accept}|\mathbf{z}_0] \cdot q(\mathbf{z}_0) d\mathbf{z}_0 = \int \frac{\widetilde{p}(\mathbf{z}_0)}{k} d\mathbf{z}_0$$
It is identical to Eq $= \frac{1}{k} \int \widetilde{p}(z) dz.$. We substitute Eq $p(z) = \frac{1}{Z_p}\widetilde{p}(z)$ into the expression above, yielding:
$$P[\text{accept}] = \frac{Z_p}{k}$$
We define a very small vector $\boldsymbol{\epsilon}$ , and we can obtain:
$$P[\mathbf{x}_{0} \in (\mathbf{x}, \mathbf{x} + \boldsymbol{\epsilon})] = P[\mathbf{z}_{0} \in (\mathbf{x}, \mathbf{x} + \boldsymbol{\epsilon}) | \operatorname{accept}]$$
$$= \frac{P[\operatorname{accept}, \mathbf{z}_{0} \in (\mathbf{x}, \mathbf{x} + \boldsymbol{\epsilon})]}{P[\operatorname{accept}]}$$
$$= \frac{\int_{(\mathbf{x}, \mathbf{x} + \boldsymbol{\epsilon})} q(\mathbf{z}_{0}) P[\operatorname{accept} | \mathbf{z}_{0}] d\mathbf{z}_{0}}{Z_{p}/k}$$
$$= \int_{(\mathbf{x}, \mathbf{x} + \boldsymbol{\epsilon})} \frac{k}{Z_{p}} \cdot q(\mathbf{z}_{0}) \cdot p(\operatorname{accept} | \mathbf{z}_{0}) d\mathbf{z}_{0}$$
$$= \int_{(\mathbf{x}, \mathbf{x} + \boldsymbol{\epsilon})} \frac{k}{Z_{p}} \cdot q(\mathbf{z}_{0}) \cdot \frac{\tilde{p}(\mathbf{z}_{0})}{kq(\mathbf{z}_{0})} d\mathbf{z}_{0}$$
$$= \int_{(\mathbf{x}, \mathbf{x} + \boldsymbol{\epsilon})} \frac{1}{Z_{p}} \cdot \tilde{p}(\mathbf{z}_{0}) d\mathbf{z}_{0}$$
$$= \int_{(\mathbf{x}, \mathbf{x} + \boldsymbol{\epsilon})} p(\mathbf{z}_{0}) d\mathbf{z}_{0}$$
Just as required. **Several clarifications must be made here**: (1) we have used $P[A]$ to represent the probability that event $A$ occurs, and $p(\mathbf{z})$ or $q(\mathbf{z})$ to represent the probability density function (PDF). (2) Please be careful with $P[\mathbf{x}_0 \in (\mathbf{x}, \mathbf{x} + \boldsymbol{\epsilon})] = P[\mathbf{z}_0 \in (\mathbf{x}, \mathbf{x} + \boldsymbol{\epsilon})|\text{accept}]$ ; this is the key point of this problem.
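The acceptance probability can also be checked empirically. The sketch below (an assumed example, not from the original solution) uses the unnormalized Gaussian $\widetilde{p}(z) = \exp(-z^2/2)$ with $Z_p = \sqrt{2\pi}$ , a standard Cauchy proposal $q(z)$ , and $k = 2\pi/\sqrt{e}$ , which is the smallest constant satisfying $kq(z) \ge \widetilde{p}(z)$ for this pair; the observed acceptance rate is close to $Z_p/k$ and the accepted samples behave like draws from $\mathcal{N}(0,1)$ .

```python
# Hedged sketch of rejection sampling with a Cauchy proposal for an unnormalized Gaussian.
import numpy as np

rng = np.random.default_rng(10)
p_tilde = lambda z: np.exp(-0.5 * z**2)             # unnormalized target, Z_p = sqrt(2 pi)
q = lambda z: 1.0 / (np.pi * (1.0 + z**2))          # standard Cauchy proposal density
k = 2 * np.pi / np.sqrt(np.e)                       # smallest k with k q(z) >= p_tilde(z)

n = 200_000
z0 = np.tan(np.pi * (rng.uniform(size=n) - 0.5))    # draws from q via the inverse CDF
u0 = rng.uniform(0.0, k * q(z0))                    # uniform on [0, k q(z0)]
accepted = z0[u0 < p_tilde(z0)]

print(len(accepted) / n, np.sqrt(2 * np.pi) / k)    # acceptance rate approx Z_p / k
print(accepted.mean(), accepted.var())              # approximately 0 and 1
```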
# **Problem 11.7 Solution (easy)**
Suppose that z has a uniform distribution over the interval [0,1]. Show that the variable $y = b \tan z + c$ has a Cauchy distribution given by $q(z) = \frac{k}{1 + (z - c)^2 / b^2}.$.
Notice that the symbols used in the main text are different from those in the problem description. In the following, we will use those in the main text. Namely, $y$ satisfies a uniform distribution on the interval $[0,1]$ , and $z = b \tan y + c$ . We then aim to prove Eq $q(z) = \frac{k}{1 + (z - c)^2 / b^2}.$. Since we know that:
$$q(z) = p(y) \cdot |\frac{dy}{dz}|$$
and that:
$$y = \arctan \frac{z-c}{b} \quad\Rightarrow\quad \frac{dy}{dz} = \frac{1}{b} \cdot \frac{1}{1 + [(z-c)/b]^2}$$
Substituting it back, we obtain:
$$q(z) = 1 \cdot \frac{1}{b} \cdot \frac{1}{1 + [(z-c)/b]^2}$$
In my opinion, Eq $q(z) = \frac{k}{1 + (z - c)^2 / b^2}.$ is an expression for the comparison function $kq(z)$ , not the proposal distribution $q(z)$ . If we wish to use Eq $q(z) = \frac{k}{1 + (z - c)^2 / b^2}.$ to express the proposal distribution, the numerator should be $1/b$ instead of $k$ , because the proposal distribution $q(z)$ is a PDF and should integrate to one. However, in rejection sampling, the comparison function is what we actually care about.
# **Problem 11.8 Solution (medium)**
Determine expressions for the coefficients $k_i$ in the envelope distribution $q(z) = k_i \lambda_i \exp\{-\lambda_i (z - z_{i-1})\} \qquad z_{i-1} < z \leqslant z_i.$ for adaptive rejection sampling using the requirements of continuity and normalization.
There is a typo in Eq $q(z) = k_i \lambda_i \exp\{-\lambda_i (z - z_{i-1})\} \qquad z_{i-1} < z \leqslant z_i.$, which is not difficult to observe, if we carefully examine Fig.11.6. The correct form should be:
$$q_i(z) = k_i \lambda_i \exp\{-\lambda_i (z - z_i)\}, \quad \tilde{z}_{i-1,i} < z \le \tilde{z}_{i,i+1}, \quad \text{where } i = 1, 2, ..., N$$
Here we use $\tilde{z}_{i,i+1}$ to represent the intersection point of the $i$-th and $(i+1)$-th envelope, $q_i(z)$ to represent the comparison function of the $i$-th envelope, and $N$ is the total number of envelopes. Notice that $\tilde{z}_{0,1}$ and $\tilde{z}_{N,N+1}$ could be $-\infty$ and $\infty$ , respectively.
First, from Fig. 11.6, we see that $q(z_i) = \tilde{p}(z_i)$ ; substituting the expression above into this equation yields:
$$k_i \lambda_i = \widetilde{p}(z_i) \tag{*}$$
One important thing to make clear is that we can only evaluate $\tilde{p}(z)$ at a specific point $z$ , but not the normalized PDF $p(z)$ . This is the assumption of rejection sampling. For more details, please refer to Section 11.1.2.
Notice that $q_i(z)$ and $q_{i+1}(z)$ should have the same value at $\widetilde{z}_{i,i+1}$ , we obtain:
$$k_i \lambda_i \exp\{-\lambda_i (\widetilde{z}_{i,i+1} - z_i)\} = k_{i+1} \lambda_{i+1} \exp\{-\lambda_{i+1} (\widetilde{z}_{i,i+1} - z_{i+1})\}$$
After several rearrangement, we obtain:
$$\widetilde{z}_{i,i+1} = \frac{1}{\lambda_i - \lambda_{i+1}} \left\{ \ln \frac{k_i \lambda_i}{k_{i+1} \lambda_{i+1}} + \lambda_i z_i - \lambda_{i+1} z_{i+1} \right\} \tag{**}$$
Before moving on, we should make some clarifications: adaptive rejection sampling begins with several grid points, e.g., $z_1, z_2, ..., z_N$ , at which we evaluate the derivative of $\ln\widetilde{p}(z)$ , giving (up to a sign) the slopes $\lambda_1, \lambda_2, ..., \lambda_N$ . Then we can easily obtain $k_i$ from (\*), and next the intersection points $\widetilde{z}_{i,i+1}$ from (\*\*).
# **Problem 11.9 Solution (medium)**
By making use of the technique discussed in Section 11.1.1 for sampling from a single exponential distribution, devise an algorithm for sampling from the piecewise exponential distribution defined by $q(z) = k_i \lambda_i \exp\{-\lambda_i (z - z_{i-1})\} \qquad z_{i-1} < z \leqslant z_i.$.
In this problem, we will still use the same notation as in the previous one. First, we need to know the probability of sampling from each segment. Since Eq $q(z) = k_i \lambda_i \exp\{-\lambda_i (z - z_{i-1})\} \qquad z_{i-1} < z \leqslant z_i.$ is not correctly normalized, we first calculate its normalization constant $Z_q$ :
$$\begin{split} Z_{q} &= \int_{\widetilde{z}_{0,1}}^{\widetilde{z}_{N,N+1}} q(z) \, dz = \sum_{i=1}^{N} \int_{\widetilde{z}_{i-1,i}}^{\widetilde{z}_{i,i+1}} q_{i}(z_{i}) \, dz_{i} \\ &= \sum_{i=1}^{N} \int_{\widetilde{z}_{i-1,i}}^{\widetilde{z}_{i,i+1}} k_{i} \lambda_{i} \exp\{-\lambda_{i}(z-z_{i})\} \, dz_{i} \\ &= \sum_{i=1}^{N} -k_{i} \exp\{-\lambda_{i}(z-z_{i})\} \Big|_{\widetilde{z}_{i-1,i}}^{\widetilde{z}_{i,i+1}} \\ &= \sum_{i=1}^{N} -k_{i} \left[ \exp\{-\lambda_{i}(\widetilde{z}_{i,i+1}-z_{i})\} - \exp\{-\lambda_{i}(\widetilde{z}_{i-1,i}-z_{i})\} \right] = \sum_{i=1}^{N} \widehat{k}_{i} \end{split}$$
Where we have defined:
$$\widehat{k}_i = -k_i \left[ \exp\{-\lambda_i(\widetilde{z}_{i,i+1} - z_i)\} - \exp\{-\lambda_i(\widetilde{z}_{i-1,i} - z_i)\} \right] \tag{*}$$
From this derivation, we know that the probability of sampling from the i-th segment is given by $\widehat{k}_i/Z_q$ , where $Z_q = \sum_{i=1}^N \widehat{k}_i$ . Therefore, now we define an auxiliary random variable $\eta$ , which is uniform in interval [0,1], and then define:
$$i = j \quad \text{if} \quad \eta \in \left[\frac{1}{Z_q} \sum_{m=0}^{j-1} \hat{k}_m,\; \frac{1}{Z_q} \sum_{m=0}^{j} \hat{k}_m\right], \quad j = 1, 2, ..., N \tag{**}$$
Where we have defined $\hat{k}_0 = 0$ for convenience. Up to now, we have decided which segment, say the $i$-th, to sample from. Next, we should sample from the $i$-th exponential distribution using the technique in Section 11.1.1. According to Eq $z = h(y) \equiv \int_{-\infty}^{y} p(\widehat{y}) \,\mathrm{d}\widehat{y}$, we
can write down:
$$\begin{split} h_i(z) &= \int_{\widetilde{z}_{i-1,i}}^z \frac{q_i(z_i)}{\widehat{k}_i} dz_i \\ &= \frac{1}{\widehat{k}_i} \cdot \int_{\widetilde{z}_{i-1,i}}^z k_i \lambda_i \exp\{-\lambda_i (z-z_i)\} dz_i \\ &= \frac{-k_i}{\widehat{k}_i} \cdot \exp\{-\lambda_i (z-z_i)\}\Big|_{\widetilde{z}_{i-1,i}}^z \\ &= \frac{-k_i}{\widehat{k}_i} \cdot \Big[ \exp\{-\lambda_i (z-z_i)\} - \exp\{-\lambda_i (\widetilde{z}_{i-1,i}-z_i)\} \Big] \\ &= \frac{k_i}{\widehat{k}_i} \cdot \exp(\lambda_i z_i) \Big[ \exp\{-\lambda_i \widetilde{z}_{i-1,i}\} - \exp\{-\lambda_i z\} \Big] \end{split}$$
Notice that $q_i(z)$ is not correctly normalized, and $q_i(z)/\hat{k}_i$ is the correctly normalized form. With some rearrangement, we can obtain:
$$h_i^{-1}(\xi) = -\frac{1}{\lambda_i} \ln\left[ \exp\{-\lambda_i \widetilde{z}_{i-1,i}\} - \frac{\widehat{k}_i\, \xi}{k_i} \exp(-\lambda_i z_i) \right]$$
In conclusion, we first generate a random variable $\eta$ , which is uniform in interval [0,1], and obtain the value i according to (\*\*), and then we generate a random variable $\xi$ , which is also uniform in interval [0,1], and then transform it to z using $z = h_i^{-1}(\xi)$ .
Notice that here, $\lambda_i$ , $\tilde{z}_{i,i+1}$ and $k_i$ can be obtained once the grid points $z_1, z_2, ..., z_N$ are given. For more details, please refer to the previous problem. After these variables are obtained, $\hat{k}_i$ can also be determined using (\*), and thus $h_i^{-1}(\xi)$ can be determined.
# **Problem 12.11 Solution (medium)**
Show that in the limit $\sigma^2 \to 0$ , the posterior mean for the probabilistic PCA model becomes an orthogonal projection onto the principal subspace, as in conventional PCA.
Taking $\sigma^2 \to 0$ in $\mathbf{M} = \mathbf{W}^{\mathrm{T}} \mathbf{W} + \sigma^{2} \mathbf{I}.$ and substituting into $\mathbb{E}[\mathbf{z}|\mathbf{x}] = \mathbf{M}^{-1}\mathbf{W}_{\mathrm{ML}}^{\mathrm{T}}(\mathbf{x} - \overline{\mathbf{x}})$ we obtain the posterior mean for probabilistic PCA in the form
$$(\mathbf{W}_{\mathrm{ML}}^{\mathrm{T}}\mathbf{W}_{\mathrm{ML}})^{-1}\mathbf{W}_{\mathrm{ML}}^{\mathrm{T}}(\mathbf{x}-\overline{\mathbf{x}}).$$
Now substitute for $\mathbf{W}_{\mathrm{ML}}$ using $\mathbf{W}_{\mathrm{ML}} = \mathbf{U}_{M} (\mathbf{L}_{M} - \sigma^{2} \mathbf{I})^{1/2} \mathbf{R}$ in which we take $\mathbf{R} = \mathbf{I}$ for compatibility with conventional PCA. Using the orthogonality property $\mathbf{U}_{M}^{\mathrm{T}}\mathbf{U}_{M} = \mathbf{I}$ and setting $\sigma^{2} = 0$ , this reduces to
$$\mathbf{L}_{M}^{-1/2}\mathbf{U}_{M}^{\mathrm{T}}(\mathbf{x}-\overline{\mathbf{x}})$$
which is the orthogonal projection onto the principal subspace given by the conventional PCA result $\mathbf{y}_n = \mathbf{L}^{-1/2} \mathbf{U}^{\mathrm{T}} (\mathbf{x}_n - \overline{\mathbf{x}})$.
# **Problem 12.28 Solution (medium)**
Use the transformation property $= p_{x}(g(y)) |g'(y)|.$ of a probability density under a change of variable to show that any density p(y) can be obtained from a fixed density q(x) that is everywhere nonzero by making a nonlinear change of variable y = f(x) in which f(x) is a monotonic function so that $0 \le f'(x) < \infty$ . Write down the differential equation satisfied by f(x) and draw a diagram illustrating the transformation of the density.
If we assume that the function y = f(x) is *strictly* monotonic, which is necessary to exclude the possibility for spikes of infinite density in p(y), we are guaranteed that the inverse function $x = f^{-1}(y)$ exists. We can then use $= p_{x}(g(y)) |g'(y)|.$ to write
$$p(y) = q(f^{-1}(y)) \left| \frac{\mathrm{d}f^{-1}}{\mathrm{d}y} \right|. \tag{147}$$
Since the only restriction on f is that it is monotonic, it can distribute the probability mass over x arbitrarily over y. This is illustrated on page 8, as a part of Solution 1.4. From (147) we see directly that
$$|f'(x)| = \frac{q(x)}{p(f(x))}.$$
# **Problem 12.3 Solution (easy)**
Verify that the eigenvectors defined by $\mathbf{u}_i = \frac{1}{(N\lambda_i)^{1/2}} \mathbf{X}^{\mathrm{T}} \mathbf{v}_i.$ are normalized to unit length, assuming that the eigenvectors $\mathbf{v}_i$ have unit length.
According to Eq $\mathbf{u}_i = \frac{1}{(N\lambda_i)^{1/2}} \mathbf{X}^{\mathrm{T}} \mathbf{v}_i.$, we can obtain:
$$\mathbf{u}_i^T \mathbf{u}_i = \frac{1}{N\lambda_i} \mathbf{v}_i^T \mathbf{X} \mathbf{X}^T \mathbf{v}_i$$
We left-multiply both sides of Eq $\frac{1}{N} \mathbf{X} \mathbf{X}^{\mathrm{T}} \mathbf{v}_i = \lambda_i \mathbf{v}_i$ by $\mathbf{v}_i^T$ , yielding:
$$\frac{1}{N} \mathbf{v}_i^T \mathbf{X} \mathbf{X}^T \mathbf{v}_i = \lambda_i \mathbf{v}_i^T \mathbf{v}_i = \lambda_i ||\mathbf{v}_i||^2 = \lambda_i$$
Here we have used the fact that $\mathbf{v}_i$ has unit length. Substituting it back into $\mathbf{u}_i^T \mathbf{u}_i$ , we can obtain:
$$\mathbf{u}_i^T \mathbf{u}_i = 1$$
Just as required.
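A small numerical check (an assumed example with random data; the sizes $N$ and $D$ are arbitrary) confirms both that $\mathbf{u}_i$ has unit length and that it is an eigenvector of the $D \times D$ sample covariance with the same eigenvalue:

```python
# Hedged sketch: the high-dimensional PCA trick, recovering unit-length eigenvectors of the
# D x D covariance from the smaller N x N matrix (1/N) X X^T.
import numpy as np

rng = np.random.default_rng(11)
N, D = 20, 100                          # fewer data points than dimensions
X = rng.normal(size=(N, D))
X = X - X.mean(axis=0)                  # centred data matrix (rows are x_n - x_bar)

lam, V = np.linalg.eigh(X @ X.T / N)    # eigendecomposition of the N x N matrix
i = -1                                  # take the largest eigenvalue and its eigenvector
u = X.T @ V[:, i] / np.sqrt(N * lam[i])

print(np.linalg.norm(u))                            # 1.0 up to numerical precision
print(np.allclose((X.T @ X / N) @ u, lam[i] * u))   # u is an eigenvector of the covariance
```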
# **Problem 12.4 Solution (easy)**
Suppose we replace the zero-mean, unit-covariance latent space distribution $p(\mathbf{z}) = \mathcal{N}(\mathbf{z}|\mathbf{0}, \mathbf{I}).$ in the probabilistic PCA model by a general Gaussian distribution of the form $\mathcal{N}(\mathbf{z}|\mathbf{m}, \boldsymbol{\Sigma})$ . By redefining the parameters of the model, show that this leads to an identical model for the marginal distribution $p(\mathbf{x})$ over the observed variables for any valid choice of $\mathbf{m}$ and $\boldsymbol{\Sigma}$ .
We know $p(\mathbf{z}) = \mathcal{N}(\mathbf{z}|\mathbf{m}, \boldsymbol{\Sigma})$ , and $p(\mathbf{x}|\mathbf{z}) = \mathcal{N}(\mathbf{x}|\mathbf{W}\mathbf{z} + \boldsymbol{\mu}, \sigma^2\mathbf{I})$ . According to Eq (2.113)-(2.115), we have:
$$p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\mathbf{W}\mathbf{m} + \boldsymbol{\mu}, \sigma^2 \mathbf{I} + \mathbf{W}\boldsymbol{\Sigma}\mathbf{W}^T) = \mathcal{N}(\mathbf{x}|\hat{\boldsymbol{\mu}}, \sigma^2 \mathbf{I} + \hat{\mathbf{W}}\hat{\mathbf{W}}^T)$$
where we have defined:
$$\hat{\boldsymbol{\mu}} = \mathbf{Wm} + \boldsymbol{\mu}$$
and
$$\widehat{\boldsymbol{W}} = \boldsymbol{W} \boldsymbol{\Sigma}^{1/2}$$
Therefore, in the general case, the final form of $p(\mathbf{x})$ can still be written as Eq $p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \mathbf{C})$.
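The reparameterization can also be checked numerically. The sketch below (with randomly generated, purely illustrative parameters) verifies that $\sigma^2\mathbf{I} + \mathbf{W}\boldsymbol{\Sigma}\mathbf{W}^{\mathrm{T}} = \sigma^2\mathbf{I} + \widehat{\mathbf{W}}\widehat{\mathbf{W}}^{\mathrm{T}}$ when $\widehat{\mathbf{W}} = \mathbf{W}\boldsymbol{\Sigma}^{1/2}$:

```python
import numpy as np

rng = np.random.default_rng(2)
D, M = 4, 2
W = rng.normal(size=(D, M))
B = rng.normal(size=(M, M))
Sigma = B @ B.T + np.eye(M)                  # a positive definite prior covariance
m = rng.normal(size=M)
mu = rng.normal(size=D)
sigma2 = 0.5

# Marginal covariance of x under the general prior N(z | m, Sigma).
cov_general = sigma2 * np.eye(D) + W @ Sigma @ W.T

# Reparameterized model: mu_hat = W m + mu, W_hat = W Sigma^{1/2}.
lam, Q = np.linalg.eigh(Sigma)
W_hat = W @ (Q * np.sqrt(lam)) @ Q.T         # W times the symmetric square root of Sigma
mu_hat = W @ m + mu

cov_standard = sigma2 * np.eye(D) + W_hat @ W_hat.T
print(np.allclose(cov_general, cov_standard))   # True
```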
| 872
|
12
|
12.6
|
easy
|
Draw a directed probabilistic graph for the probabilistic PCA model described in Section 12.2 in which the components of the observed variable x are shown explicitly as separate nodes. Hence verify that the probabilistic PCA model has the same independence structure as the naive Bayes model discussed in Section 8.2.2.
|
Omitting the parameters W, $\mu$ and $\sigma^2$ , leaving only the stochastic variables z and x, the graphical model for probabilistic PCA is identical to that of the 'naive Bayes' model discussed in Section 8.2.2. Hence these two models exhibit the same independence structure.
| 270
|
13
|
13.1
|
easy
|
Use the technique of d-separation, discussed in Section 8.2, to verify that the Markov model shown in Figure 13.3 having N nodes in total satisfies the conditional independence properties $p(\mathbf{x}_n|\mathbf{x}_1,\dots,\mathbf{x}_{n-1}) = p(\mathbf{x}_n|\mathbf{x}_{n-1})$ for n = 2, ..., N. Similarly, show that a model described by the graph in Figure 13.4 in which there are N nodes in total

satisfies the conditional independence properties
$$p(\mathbf{x}_n|\mathbf{x}_1,\dots,\mathbf{x}_{n-1}) = p(\mathbf{x}_n|\mathbf{x}_{n-1},\mathbf{x}_{n-2})$$
for n = 3, ..., N.
|
Since the arrows on the path from $x_m$ to $x_n$ , with m < n - 1, will meet head-to-tail at $x_{n-1}$ , which is in the conditioning set, all such paths are blocked by $x_{n-1}$ and hence $p(\mathbf{x}_n|\mathbf{x}_1,\dots,\mathbf{x}_{n-1}) = p(\mathbf{x}_n|\mathbf{x}_{n-1})$ holds.
The same argument applies in the case depicted in Figure 13.4, with the modification that m < n - 2 and that paths are blocked by $x_{n-1}$ or $x_{n-2}$ .
| 443
|
13
|
13.13
|
medium
|
- 13.13 (\*\*) Use the definition $\mu_{f_s \to x}(x) \equiv \sum_{X_s} F_s(x, X_s)$ of the messages passed from a factor node to a variable node in a factor graph, together with the expression $p(\mathbf{x}_1, \dots, \mathbf{x}_N, \mathbf{z}_1, \dots, \mathbf{z}_N) = p(\mathbf{z}_1) \left[ \prod_{n=2}^N p(\mathbf{z}_n | \mathbf{z}_{n-1}) \right] \prod_{n=1}^N p(\mathbf{x}_n | \mathbf{z}_n).$ for the joint distribution in a hidden Markov model, to show that the definition $\alpha(\mathbf{z}_n) = \mu_{f_n \to \mathbf{z}_n}(\mathbf{z}_n)$ of the alpha message is the same as the definition $\alpha(\mathbf{z}_n) \equiv p(\mathbf{x}_1, \dots, \mathbf{x}_n, \mathbf{z}_n)$.
|
Using $\mu_{f_s \to x}(x) \equiv \sum_{X_s} F_s(x, X_s)$, we can rewrite $\alpha(\mathbf{z}_n) = \mu_{f_n \to \mathbf{z}_n}(\mathbf{z}_n)$ as
$$\alpha(\mathbf{z}_n) = \sum_{\mathbf{z}_{n-1}} F_n(\mathbf{z}_n, \{\mathbf{z}_1, \dots, \mathbf{z}_{n-1}\}), \tag{148}$$
where $F_n(\cdot)$ is the product of all factors connected to $\mathbf{z}_n$ via $f_n$ , including $f_n$ itself, so that
$$F_n(\mathbf{z}_n, \{\mathbf{z}_1, \dots, \mathbf{z}_{n-1}\}) = h(\mathbf{z}_1) \prod_{i=2}^n f_i(\mathbf{z}_i, \mathbf{z}_{i-1}),$$
(149)
where we have introduced $h(\mathbf{z}_1)$ and $f_i(\mathbf{z}_i, \mathbf{z}_{i-1})$ from $h(\mathbf{z}_1) = p(\mathbf{z}_1)p(\mathbf{x}_1|\mathbf{z}_1)$ and $f_n(\mathbf{z}_{n-1}, \mathbf{z}_n) = p(\mathbf{z}_n | \mathbf{z}_{n-1}) p(\mathbf{x}_n | \mathbf{z}_n).$, respectively. Using the corresponding r.h.s. definitions and repeatedly applying the product rule, we can rewrite (149) as
$$F_n(\mathbf{z}_n, \{\mathbf{z}_1, \dots, \mathbf{z}_{n-1}\}) = p(\mathbf{x}_1, \dots, \mathbf{x}_n, \mathbf{z}_1, \dots, \mathbf{z}_n).$$
Applying the sum rule, summing over $\mathbf{z}_1, \dots, \mathbf{z}_{n-1}$ as on the r.h.s. of (148), we obtain $\alpha(\mathbf{z}_n) \equiv p(\mathbf{x}_1, \dots, \mathbf{x}_n, \mathbf{z}_n)$.
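As an illustrative cross-check (not part of the original solution), the sketch below runs the alpha recursion for a small discrete HMM and compares $\alpha(\mathbf{z}_n)$ at the final step against a brute-force evaluation of $p(\mathbf{x}_1,\dots,\mathbf{x}_n,\mathbf{z}_n)$ obtained by summing the joint over all latent paths. The transition matrix `A`, emission matrix `B` and initial distribution `pi` are randomly generated assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
K, T, V = 3, 4, 2                          # states, time steps, observation symbols

pi = rng.dirichlet(np.ones(K))             # p(z_1)
A = rng.dirichlet(np.ones(K), size=K)      # A[j, k] = p(z_n = k | z_{n-1} = j)
B = rng.dirichlet(np.ones(V), size=K)      # B[k, x] = p(x_n = x | z_n = k)
x = rng.integers(V, size=T)                # an observation sequence

# Forward recursion: alpha_n(k) = p(x_n | k) * sum_j alpha_{n-1}(j) A[j, k].
alpha = pi * B[:, x[0]]
for n in range(1, T):
    alpha = B[:, x[n]] * (alpha @ A)

# Brute force: alpha_T(k) should equal p(x_1, ..., x_T, z_T = k).
brute = np.zeros(K)
for z in product(range(K), repeat=T):
    p = pi[z[0]] * B[z[0], x[0]]
    for n in range(1, T):
        p *= A[z[n - 1], z[n]] * B[z[n], x[n]]
    brute[z[-1]] += p

print(np.allclose(alpha, brute))           # True
```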
| 1,314
|
13
|
13.17
|
easy
|
Show that the directed graph for the input-output hidden Markov model, given in Figure 13.18, can be expressed as a tree-structured factor graph of the form shown in Figure 13.15 and write down expressions for the initial factor $h(\mathbf{z}_1)$ and for the general factor $f_n(\mathbf{z}_{n-1}, \mathbf{z}_n)$ where $2 \le n \le N$ .
|
The emission probabilities over observed variables $\mathbf{x}_n$ are absorbed into the corresponding factors, $f_n$ , analogously to the way in which Figure 13.14 was transformed into Figure 13.15. The factors then take the form
$$h(\mathbf{z}_1) = p(\mathbf{z}_1|\mathbf{u}_1)p(\mathbf{x}_1|\mathbf{z}_1,\mathbf{u}_1)$$
(150)
$$f_n(\mathbf{z}_{n-1}, \mathbf{z}_n) = p(\mathbf{z}_n | \mathbf{z}_{n-1}, \mathbf{u}_n) p(\mathbf{x}_n | \mathbf{z}_n, \mathbf{u}_n).$$
(151)
| 455
|
13
|
13.22
|
medium
|
Using $c_1\widehat{\alpha}(\mathbf{z}_1) = p(\mathbf{z}_1)p(\mathbf{x}_1|\mathbf{z}_1).$, together with the definitions $p(\mathbf{x}_n|\mathbf{z}_n) = \mathcal{N}(\mathbf{x}_n|\mathbf{C}\mathbf{z}_n, \mathbf{\Sigma}).$ and $p(\mathbf{z}_1) = \mathcal{N}(\mathbf{z}_1 | \boldsymbol{\mu}_0, \mathbf{V}_0).$ and the result $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$, derive $c_1 = \mathcal{N}(\mathbf{x}_1 | \mathbf{C}\boldsymbol{\mu}_0, \mathbf{C}\mathbf{V}_0\mathbf{C}^{\mathrm{T}} + \boldsymbol{\Sigma})$.
|
Using $p(\mathbf{x}_n|\mathbf{z}_n) = \mathcal{N}(\mathbf{x}_n|\mathbf{C}\mathbf{z}_n, \mathbf{\Sigma}).$, $p(\mathbf{z}_1) = \mathcal{N}(\mathbf{z}_1 | \boldsymbol{\mu}_0, \mathbf{V}_0).$ and $\widehat{\alpha}(\mathbf{z}_n) = \mathcal{N}(\mathbf{z}_n | \boldsymbol{\mu}_n, \mathbf{V}_n).$, we can write $c_1\widehat{\alpha}(\mathbf{z}_1) = p(\mathbf{z}_1)p(\mathbf{x}_1|\mathbf{z}_1).$, for the case n = 1, as
$$c_1 \mathcal{N}(\mathbf{z}_1 | \boldsymbol{\mu}_1, \mathbf{V}_1) = \mathcal{N}(\mathbf{z}_1 | \boldsymbol{\mu}_0, \mathbf{V}_0) \mathcal{N}(\mathbf{x}_1 | \mathbf{C} \mathbf{z}_1, \boldsymbol{\Sigma}).$$
The r.h.s. defines the joint probability distribution over $\mathbf{x}_1$ and $\mathbf{z}_1$ in terms of a conditional distribution over $\mathbf{x}_1$ given $\mathbf{z}_1$ and a distribution over $\mathbf{z}_1$ , corresponding to $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\mathbf{x} + \mathbf{b}, \mathbf{L}^{-1})$ and $p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1})$, respectively. We need to rewrite this as a conditional distribution over $\mathbf{z}_1$ given $\mathbf{x}_1$ and a distribution over $\mathbf{x}_1$ , corresponding to $p(\mathbf{x}|\mathbf{y}) = \mathcal{N}(\mathbf{x}|\mathbf{\Sigma}\{\mathbf{A}^{\mathrm{T}}\mathbf{L}(\mathbf{y}-\mathbf{b}) + \mathbf{\Lambda}\boldsymbol{\mu}\}, \mathbf{\Sigma})$ and $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$, respectively.
If we make the substitutions
$$\mathbf{x} \Rightarrow \mathbf{z}_1 \quad \boldsymbol{\mu} \Rightarrow \boldsymbol{\mu}_0 \quad \boldsymbol{\Lambda}^{-1} \Rightarrow \mathbf{V}_0$$
$$\mathbf{y}\Rightarrow\mathbf{x}_1\quad\mathbf{A}\Rightarrow\mathbf{C}\quad\mathbf{b}\Rightarrow\mathbf{0}\quad\mathbf{L}^{-1}\Rightarrow\mathbf{\Sigma},$$
in $p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1})$ and $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\mathbf{x} + \mathbf{b}, \mathbf{L}^{-1})$, (2.115) directly gives us the r.h.s. of (13.96).
13.24 This extension can be embedded in the existing framework by adopting the following modifications:
$$\boldsymbol{\mu}_0' = \left[ \begin{array}{c} \boldsymbol{\mu}_0 \\ 1 \end{array} \right] \quad \mathbf{V}_0' = \left[ \begin{array}{cc} \mathbf{V}_0 & \mathbf{0} \\ \mathbf{0} & 0 \end{array} \right] \quad \boldsymbol{\Gamma}' = \left[ \begin{array}{cc} \boldsymbol{\Gamma} & \mathbf{0} \\ \mathbf{0} & 0 \end{array} \right]$$
$$\mathbf{A}' = \left[ \begin{array}{cc} \mathbf{A} & \mathbf{a} \\ \mathbf{0} & 1 \end{array} \right] \quad \mathbf{C}' = \left[ \begin{array}{cc} \mathbf{C} & \mathbf{c} \end{array} \right].$$
This will ensure that the constant terms **a** and **c** are included in the corresponding Gaussian means for $\mathbf{z}_n$ and $\mathbf{x}_n$ for $n = 1, \dots, N$ .
Note that the resulting covariances for $\mathbf{z}_n$ , $\mathbf{V}_n$ , will be singular, as will the corresponding prior covariances, $\mathbf{P}_{n-1}$ . This will, however, only be a problem where these matrices need to be inverted, such as in (13.102). These cases must be handled separately, using the 'inversion' formula
$$(\mathbf{P}_{n-1}')^{-1} = \left[ \begin{array}{cc} \mathbf{P}_{n-1}^{-1} & \mathbf{0} \\ \mathbf{0} & 0 \end{array} \right],$$
nullifying the contribution from the (non-existent) variance of the element in $\mathbf{z}_n$ that accounts for the constant terms $\mathbf{a}$ and $\mathbf{c}$ .
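A minimal NumPy sketch of the augmentation described above, assuming hypothetical dimensions `Dz`, `Dx` and randomly generated $\mathbf{A}$, $\mathbf{a}$, $\mathbf{C}$, $\mathbf{c}$; it checks that the augmented matrices reproduce the affine dynamics $\mathbf{A}\mathbf{z} + \mathbf{a}$ and emissions $\mathbf{C}\mathbf{z} + \mathbf{c}$:

```python
import numpy as np

rng = np.random.default_rng(4)
Dz, Dx = 3, 2                                   # hypothetical latent / observed dimensions
A = rng.normal(size=(Dz, Dz)); a = rng.normal(size=Dz)
C = rng.normal(size=(Dx, Dz)); c = rng.normal(size=Dx)
mu0 = rng.normal(size=Dz)
V0, Gamma = np.eye(Dz), np.eye(Dz)

# Augmented parameters: append a constant unit element to the latent state.
mu0_p = np.append(mu0, 1.0)
V0_p = np.block([[V0, np.zeros((Dz, 1))], [np.zeros((1, Dz)), np.zeros((1, 1))]])
Gamma_p = np.block([[Gamma, np.zeros((Dz, 1))], [np.zeros((1, Dz)), np.zeros((1, 1))]])
A_p = np.block([[A, a[:, None]], [np.zeros((1, Dz)), np.ones((1, 1))]])
C_p = np.hstack([C, c[:, None]])

# The augmented model reproduces the affine dynamics and emissions.
z = rng.normal(size=Dz)
z_p = np.append(z, 1.0)
print(np.allclose(A_p @ z_p, np.append(A @ z + a, 1.0)))   # True
print(np.allclose(C_p @ z_p, C @ z + c))                   # True
```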
| 3,668
|
13
|
13.4
|
medium
|
Consider a hidden Markov model in which the emission densities are represented by a parametric model $p(\mathbf{x}|\mathbf{z},\mathbf{w})$ , such as a linear regression model or a neural network, in which $\mathbf{w}$ is a vector of adaptive parameters. Describe how the parameters $\mathbf{w}$ can be learned from data using maximum likelihood.
|
The learning of w would follow the scheme for maximum likelihood learning described in Section 13.2.1, with w replacing $\phi$ . As discussed towards the end of Section 13.2.1, the precise update formulae would depend on the form of regression model used and how it is being used.
The most obvious situation where this would occur is in an HMM similar to that depicted in Figure 13.18, where the emission densities depend not only on the latent variable **z** but also on some input variable **u**. The regression model could then be used to map **u** to **x**, depending on the state of the latent variable **z**.
Note that when a nonlinear regression model, such as a neural network, is used, the M step for w may not have a closed-form solution.
| 717
|
13
|
13.8
|
medium
|
- 13.8 (\*\*) For a hidden Markov model having discrete observations governed by a multinomial distribution, show that the conditional distribution of the observations given the hidden variables is given by $p(\mathbf{x}|\mathbf{z}) = \prod_{i=1}^{D} \prod_{k=1}^{K} \mu_{ik}^{x_i z_k}$ and the corresponding M step equations are given by $\mu_{ik} = \frac{\sum_{n=1}^{N} \gamma(z_{nk}) x_{ni}}{\sum_{n=1}^{N} \gamma(z_{nk})}.$. Write down the analogous equations for the conditional distribution and the M step equations for the case of a hidden Markov with multiple binary output variables each of which is governed by a Bernoulli conditional distribution. Hint: refer to Sections 2.1 and 2.2 for a discussion of the corresponding maximum likelihood solutions for i.i.d. data if required.
|
Only the final term of $Q(\theta, \theta^{\text{old}})$ given by $Q(\boldsymbol{\theta}, \boldsymbol{\theta}^{\text{old}}) = \sum_{k=1}^{K} \gamma(z_{1k}) \ln \pi_k + \sum_{n=2}^{N} \sum_{j=1}^{K} \sum_{k=1}^{K} \xi(z_{n-1,j}, z_{nk}) \ln A_{jk} + \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \ln p(\mathbf{x}_n | \boldsymbol{\phi}_k).$ depends on the parameters of the emission model. For the multinomial variable $\mathbf{x}$ , whose D components are all zero except for a single entry of 1,
$$\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \ln p(\mathbf{x}_n | \boldsymbol{\phi}_k) = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \sum_{i=1}^{D} x_{ni} \ln \mu_{ki}.$$
Now when we maximize with respect to $\mu_{ki}$ we have to take account of the constraint that, for each value of k, the components $\mu_{ki}$ must sum to one over i. We therefore introduce Lagrange multipliers $\{\lambda_k\}$ and maximize the modified function given by
$$\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \sum_{i=1}^{D} x_{ni} \ln \mu_{ki} + \sum_{k=1}^{K} \lambda_k \left( \sum_{i=1}^{D} \mu_{ki} - 1 \right).$$
Setting the derivative with respect to $\mu_{ki}$ to zero we obtain
$$0 = \sum_{n=1}^{N} \gamma(z_{nk}) \frac{x_{ni}}{\mu_{ki}} + \lambda_k.$$
Multiplying through by $\mu_{ki}$ , summing over i, and making use of the constraint on $\mu_{ki}$ together with the result $\sum_i x_{ni} = 1$ we have
$$\lambda_k = -\sum_{n=1}^N \gamma(z_{nk}).$$
Finally, back-substituting for $\lambda_k$ and solving for $\mu_{ki}$ we again obtain $\mu_{ik} = \frac{\sum_{n=1}^{N} \gamma(z_{nk}) x_{ni}}{\sum_{n=1}^{N} \gamma(z_{nk})}.$.
Similarly, for the case of a multivariate Bernoulli observed variable ${\bf x}$ whose D components independently take the value 0 or 1, using the standard expression for the multivariate Bernoulli distribution we have
$$\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \ln p(\mathbf{x}_n | \boldsymbol{\phi}_k)$$
$$= \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \sum_{i=1}^{D} \left\{ x_{ni} \ln \mu_{ki} + (1 - x_{ni}) \ln(1 - \mu_{ki}) \right\}.$$
Maximizing with respect to $\mu_{ki}$ we obtain
$$\mu_{ki} = \frac{\sum_{n=1}^{N} \gamma(z_{nk}) x_{ni}}{\sum_{n=1}^{N} \gamma(z_{nk})}$$
which is equivalent to $\mu_{ik} = \frac{\sum_{n=1}^{N} \gamma(z_{nk}) x_{ni}}{\sum_{n=1}^{N} \gamma(z_{nk})}.$.
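The M-step updates above are easy to exercise mechanically. The following sketch (with randomly generated responsibilities and data, purely for illustration) computes $\mu_{ki}$ for both the multinomial and the Bernoulli emission models using the same weighted-average form:

```python
import numpy as np

rng = np.random.default_rng(5)
N, K, D = 100, 3, 4

gamma = rng.dirichlet(np.ones(K), size=N)        # responsibilities gamma(z_nk), rows sum to 1

# Multinomial (1-of-D) observations: each x_n is a one-hot vector.
X_mult = np.eye(D)[rng.integers(D, size=N)]
mu_mult = gamma.T @ X_mult / gamma.sum(axis=0)[:, None]   # mu_ki = sum_n gamma_nk x_ni / sum_n gamma_nk
print(np.allclose(mu_mult.sum(axis=1), 1.0))     # each row sums to one

# Bernoulli observations: each component of x_n is independently 0 or 1.
X_bern = rng.integers(2, size=(N, D))
mu_bern = gamma.T @ X_bern / gamma.sum(axis=0)[:, None]   # same form of update
print(((0 <= mu_bern) & (mu_bern <= 1)).all())   # valid probabilities
```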
| 2,368
|
13
|
13.9
|
medium
|
Use the d-separation criterion to verify that the conditional independence properties (13.24)–(13.31) are satisfied by the joint distribution for the hidden Markov model defined by $p(\mathbf{x}_1, \dots, \mathbf{x}_N, \mathbf{z}_1, \dots, \mathbf{z}_N) = p(\mathbf{z}_1) \left[ \prod_{n=2}^N p(\mathbf{z}_n | \mathbf{z}_{n-1}) \right] \prod_{n=1}^N p(\mathbf{x}_n | \mathbf{z}_n).$.
|
We can verify all these independence properties using d-separation by referring to Figure 13.5.
$p(\mathbf{x}_{1}, \dots, \mathbf{x}_{N}|\mathbf{z}_{n}) = p(\mathbf{x}_{1}, \dots, \mathbf{x}_{n}|\mathbf{z}_{n})\,p(\mathbf{x}_{n+1}, \dots, \mathbf{x}_{N}|\mathbf{z}_{n}) \qquad$ follows from the fact that arrows on paths from any of $\mathbf{x}_1, \dots, \mathbf{x}_n$ to any of $\mathbf{x}_{n+1}, \dots, \mathbf{x}_N$ meet head-to-tail or tail-to-tail at $\mathbf{z}_n$ , which is in the conditioning set.
$p(\mathbf{x}_{1}, \dots, \mathbf{x}_{n-1}|\mathbf{x}_{n}, \mathbf{z}_{n}) = p(\mathbf{x}_{1}, \dots, \mathbf{x}_{n-1}|\mathbf{z}_{n}) \qquad$ follows from the fact that arrows on paths from any of $\mathbf{x}_1, \dots, \mathbf{x}_{n-1}$ to $\mathbf{x}_n$ meet head-to-tail at $\mathbf{z}_n$ , which is in the conditioning set.
$p(\mathbf{x}_{1}, \dots, \mathbf{x}_{n-1}|\mathbf{z}_{n-1}, \mathbf{z}_{n}) = p(\mathbf{x}_{1}, \dots, \mathbf{x}_{n-1}|\mathbf{z}_{n-1}) \qquad$ follows from the fact that arrows on paths from any of $\mathbf{x}_1, \dots, \mathbf{x}_{n-1}$ to $\mathbf{z}_n$ meet head-to-tail or tail-to-tail at $\mathbf{z}_{n-1}$ , which is in the conditioning set.
$p(\mathbf{x}_{n+1}, \dots, \mathbf{x}_{N}|\mathbf{z}_{n}, \mathbf{z}_{n+1}) = p(\mathbf{x}_{n+1}, \dots, \mathbf{x}_{N}|\mathbf{z}_{n+1}) \qquad$ follows from the fact that arrows on paths from $\mathbf{z}_n$ to any of $\mathbf{x}_{n+1}, \dots, \mathbf{x}_N$ meet head-to-tail at $\mathbf{z}_{n+1}$ , which is in the conditioning set.
$p(\mathbf{x}_{n+2}, \dots, \mathbf{x}_{N}|\mathbf{z}_{n+1}, \mathbf{x}_{n+1}) = p(\mathbf{x}_{n+2}, \dots, \mathbf{x}_{N}|\mathbf{z}_{n+1}) \qquad$ follows from the fact that arrows on paths from $\mathbf{x}_{n+1}$ to any of $\mathbf{x}_{n+2}, \dots, \mathbf{x}_N$ meet tail-to-tail at $\mathbf{z}_{n+1}$ , which is in the conditioning set.
$p(\mathbf{x}_{1}, \dots, \mathbf{x}_{N}|\mathbf{z}_{n}) = p(\mathbf{x}_{1}, \dots, \mathbf{x}_{n-1}|\mathbf{z}_{n})\,p(\mathbf{x}_{n}|\mathbf{z}_{n})\,p(\mathbf{x}_{n+1}, \dots, \mathbf{x}_{N}|\mathbf{z}_{n}) \qquad$ follows from the factorization above and the fact that arrows on paths from any of $\mathbf{x}_1, \ldots, \mathbf{x}_{n-1}$ to $\mathbf{x}_n$ meet head-to-tail or tail-to-tail at $\mathbf{z}_{n}$ , which is in the conditioning set.
$p(\mathbf{x}_{N+1}|\mathbf{X}, \mathbf{z}_{N+1}) = p(\mathbf{x}_{N+1}|\mathbf{z}_{N+1}) \qquad$ follows from the fact that arrows on paths from any of $\mathbf{x}_1, \dots, \mathbf{x}_N$ to $\mathbf{x}_{N+1}$ meet head-to-tail at $\mathbf{z}_{N+1}$ , which is in the conditioning set.
$p(\mathbf{z}_{N+1}|\mathbf{z}_{N}, \mathbf{X}) = p(\mathbf{z}_{N+1}|\mathbf{z}_{N}) \qquad$ follows from the fact that arrows on paths from any of $\mathbf{x}_1, \dots, \mathbf{x}_N$ to $\mathbf{z}_{N+1}$ meet head-to-tail or tail-to-tail at $\mathbf{z}_N$ , which is in the conditioning set.
| 2,825
|
14
|
14.1
|
medium
|
- 14.1 (\*\*) Consider a set of models of the form $p(\mathbf{t}|\mathbf{x}, \mathbf{z}_h, \boldsymbol{\theta}_h, h)$ in which $\mathbf{x}$ is the input vector, $\mathbf{t}$ is the target vector, h indexes the different models, $\mathbf{z}_h$ is a latent variable for model h, and $\boldsymbol{\theta}_h$ is the set of parameters for model h. Suppose the models have prior probabilities p(h) and that we are given a training set $\mathbf{X} = \{\mathbf{x}_1, \dots, \mathbf{x}_N\}$ and $\mathbf{T} = \{\mathbf{t}_1, \dots, \mathbf{t}_N\}$ . Write down the formulae needed to evaluate the predictive distribution $p(\mathbf{t}|\mathbf{x}, \mathbf{X}, \mathbf{T})$ in which the latent variables and the model index are marginalized out. Use these formulae to highlight the difference between Bayesian averaging of different models and the use of latent variables within a single model.
|
The required predictive distribution is given by
$$p(\mathbf{t}|\mathbf{x}, \mathbf{X}, \mathbf{T}) = \sum_{h} p(h) \sum_{\mathbf{z}_{h}} p(\mathbf{z}_{h}) \int p(\mathbf{t}|\mathbf{x}, \boldsymbol{\theta}_{h}, \mathbf{z}_{h}, h) p(\boldsymbol{\theta}_{h}|\mathbf{X}, \mathbf{T}, h) d\boldsymbol{\theta}_{h}, \quad (154)$$
where
$$p(\boldsymbol{\theta}_{h}|\mathbf{X}, \mathbf{T}, h) = \frac{p(\mathbf{T}|\mathbf{X}, \boldsymbol{\theta}_{h}, h)p(\boldsymbol{\theta}_{h}|h)}{p(\mathbf{T}|\mathbf{X}, h)}$$
$$\propto p(\boldsymbol{\theta}|h) \prod_{n=1}^{N} p(\mathbf{t}_{n}|\mathbf{x}_{n}, \boldsymbol{\theta}, h)$$
$$= p(\boldsymbol{\theta}|h) \prod_{n=1}^{N} \left( \sum_{\mathbf{z}_{nh}} p(\mathbf{t}_{n}, \mathbf{z}_{nh}|\mathbf{x}_{n}, \boldsymbol{\theta}, h) \right)$$
(155)
The integrals and summations in (154) are examples of Bayesian averaging, accounting for the uncertainty about which model, h, is the correct one, the value of the corresponding parameters, $\theta_h$ , and the state of the latent variable, $\mathbf{z}_h$ . The summation in (155), on the other hand, is an example of the use of latent variables, where different data points correspond to different latent variable states, although all the data are assumed to have been generated by a single model, h.
| 1,289
|
14
|
14.13
|
easy
|
Verify that the complete-data log likelihood function for the mixture of linear regression models is given by $\ln p(\mathbf{t}, \mathbf{Z}|\boldsymbol{\theta}) = \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \ln \left\{ \pi_k \mathcal{N}(t_n | \mathbf{w}_k^{\mathrm{T}} \boldsymbol{\phi}_n, \beta^{-1}) \right\}.$.
|
Starting from the mixture distribution in $p(t|\boldsymbol{\theta}) = \sum_{k=1}^{K} \pi_k \mathcal{N}(t|\mathbf{w}_k^{\mathrm{T}} \boldsymbol{\phi}, \beta^{-1})$, we follow the same steps as for mixtures of Gaussians, presented in Section 9.2. We introduce a *K*-nomial latent variable, **z**, such that the joint distribution over **z** and *t* equals
$$p(t, \mathbf{z}) = p(t|\mathbf{z})p(\mathbf{z}) = \prod_{k=1}^{K} \left( \mathcal{N} \left( t | \mathbf{w}_k^{\mathrm{T}} \boldsymbol{\phi}, \beta^{-1} \right) \pi_k \right)^{z_k}.$$
Given a set of observations, $\{(t_n, \phi_n)\}_{n=1}^N$ , we can write the complete likelihood over these observations and the corresponding $\mathbf{z}_1, \dots, \mathbf{z}_N$ , as
$$\prod_{n=1}^{N} \prod_{k=1}^{K} \left( \pi_k \mathcal{N}(t_n | \mathbf{w}_k^{\mathrm{T}} \boldsymbol{\phi}_n, \beta^{-1}) \right)^{z_{nk}}.$$
Taking the logarithm, we obtain $\ln p(\mathbf{t}, \mathbf{Z}|\boldsymbol{\theta}) = \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \ln \left\{ \pi_k \mathcal{N}(t_n | \mathbf{w}_k^{\mathrm{T}} \boldsymbol{\phi}_n, \beta^{-1}) \right\}.$.
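A short sketch evaluating this complete-data log likelihood on synthetic data (the design matrix, weights, mixing coefficients and precision are illustrative assumptions, and SciPy is assumed available for the Gaussian log density):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
N, K, M = 50, 3, 2

Phi = rng.normal(size=(N, M))                 # design matrix with rows phi_n
W = rng.normal(size=(K, M))                   # one weight vector w_k per component
pi = rng.dirichlet(np.ones(K))                # mixing coefficients
beta = 2.0                                    # noise precision

# Sample latent assignments z_n and targets t_n from the generative model.
z = rng.choice(K, size=N, p=pi)
t = rng.normal(loc=np.einsum('nm,nm->n', Phi, W[z]), scale=1 / np.sqrt(beta))

# Complete-data log likelihood:
# sum_n sum_k z_nk ln{ pi_k N(t_n | w_k^T phi_n, beta^{-1}) }.
Z = np.eye(K)[z]                              # one-hot encoding z_nk
log_comp = np.log(pi) + norm.logpdf(t[:, None], loc=Phi @ W.T, scale=1 / np.sqrt(beta))
print(np.sum(Z * log_comp))
```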
| 1,118
|
14
|
14.15
|
easy
|
- 14.15 (\*) We have already noted that if we use a squared loss function in a regression problem, the corresponding optimal prediction of the target variable for a new input vector is given by the conditional mean of the predictive distribution. Show that the conditional mean for the mixture of linear regression models discussed in Section 14.5.1 is given by a linear combination of the means of each component distribution. Note that if the conditional distribution of the target data is multimodal, the conditional mean can give poor predictions.
|
The predictive distribution from the mixture of linear regression models for a new input feature vector, $\hat{\phi}$ , is obtained from $p(t|\boldsymbol{\theta}) = \sum_{k=1}^{K} \pi_k \mathcal{N}(t|\mathbf{w}_k^{\mathrm{T}} \boldsymbol{\phi}, \beta^{-1})$, with $\phi$ replaced by $\hat{\phi}$ . Calculating the expectation of t under this distribution, we obtain
$$\mathbb{E}[t|\widehat{\boldsymbol{\phi}},\boldsymbol{\theta}] = \sum_{k=1}^{K} \pi_k \mathbb{E}[t|\widehat{\boldsymbol{\phi}}, \mathbf{w}_k, \beta].$$
Depending on the parameters, the predictive distribution is potentially K-modal, with one mode for each mixture component. However, the conditional mean, being a weighted combination of the component means, may not be close to any single mode. For example, the combination of the two modes in the left panel of the referenced figure will end up in between the two modes, a region with no significant probability mass.
| 910
|
14
|
14.17
|
medium
|
Consider a mixture model for a conditional distribution $p(t|\mathbf{x})$ of the form
$$p(t|\mathbf{x}) = \sum_{k=1}^{K} \pi_k \psi_k(t|\mathbf{x})$$
$p(t|\mathbf{x}) = \sum_{k=1}^{K} \pi_k \psi_k(t|\mathbf{x})$
in which each mixture component $\psi_k(t|\mathbf{x})$ is itself a mixture model. Show that this two-level hierarchical mixture is equivalent to a conventional single-level mixture model. Now suppose that the mixing coefficients in both levels of such a hierarchical model are arbitrary functions of $\mathbf{x}$ . Again, show that this hierarchical model is again equivalent to a single-level model with $\mathbf{x}$ -dependent mixing coefficients. Finally, consider the case in which the mixing coefficients at both levels of the hierarchical mixture are constrained to be linear classification (logistic or softmax) models. Show that the hierarchical mixture cannot in general be represented by a single-level mixture having linear classification models for the mixing coefficients. Hint: to do this it is sufficient to construct a single counter-example, so consider a mixture of two components in which one of those components is itself a mixture of two components, with mixing coefficients given by linear-logistic models. Show that this cannot be represented by a single-level mixture of 3 components having mixing coefficients determined by a linear-softmax model.
|
If we define $\psi_k(t|\mathbf{x})$ in $p(t|\mathbf{x}) = \sum_{k=1}^{K} \pi_k \psi_k(t|\mathbf{x})$ as
$$\psi_k(t|\mathbf{x}) = \sum_{m=1}^{M} \lambda_{mk} \phi_{mk}(t|\mathbf{x}),$$
we can rewrite $p(t|\mathbf{x}) = \sum_{k=1}^{K} \pi_k \psi_k(t|\mathbf{x})$ as
$$p(t|\mathbf{x}) = \sum_{k=1}^{K} \pi_k \sum_{m=1}^{M} \lambda_{mk} \phi_{mk}(t|\mathbf{x})$$
$$= \sum_{k=1}^{K} \sum_{m=1}^{M} \pi_k \lambda_{mk} \phi_{mk}(t|\mathbf{x}).$$
By changing the indexation, we can write this as
$$p(t|\mathbf{x}) = \sum_{l=1}^{L} \eta_l \phi_l(t|\mathbf{x}),$$
where L=KM, l=(k-1)M+m, $\eta_l=\pi_k\lambda_{mk}$ and $\phi_l(\cdot)=\phi_{mk}(\cdot)$ . By construction, $\eta_l\geqslant 0$ and $\sum_{l=1}^L\eta_l=1$ .
Note that this would work just as well if $\pi_k$ and $\lambda_{mk}$ were to be dependent on $\mathbf{x}$ , as long as they both respect the constraints of being non-negative and summing to 1 for every possible value of $\mathbf{x}$ .
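As a numerical illustration of this flattening (a sketch with randomly chosen coefficients and Gaussian component densities, which are not specified in the exercise), the two-level and single-level mixtures assign identical density to any point:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
K, M = 2, 3

pi = rng.dirichlet(np.ones(K))                # top-level coefficients pi_k
lam = rng.dirichlet(np.ones(M), size=K).T     # lam[m, k]: columns sum to one
means = rng.normal(size=(M, K))               # illustrative Gaussian components phi_mk

# Flattened coefficients eta_l = pi_k * lam_mk, with l = (k - 1)M + m.
eta = (pi[None, :] * lam).T.reshape(-1)
print(np.isclose(eta.sum(), 1.0))             # True: eta is a valid set of coefficients

# Both parameterizations give the same density at any point t.
t = 0.3
p_two = sum(pi[k] * sum(lam[m, k] * norm.pdf(t, means[m, k]) for m in range(M))
            for k in range(K))
p_flat = sum(eta[k * M + m] * norm.pdf(t, means[m, k])
             for k in range(K) for m in range(M))
print(np.isclose(p_two, p_flat))              # True
```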
Finally, consider a tree-structured, hierarchical mixture model, as illustrated in the left panel of the referenced figure. On the top (root) level, this is a mixture with two components. The mixing coefficients are given by a linear logistic regression model and hence are input dependent. The left sub-tree corresponds to a local conditional density model, $\psi_1(t|\mathbf{x})$ . In the right sub-tree, the structure from the root is replicated, with the difference that both sub-trees contain local conditional density models, $\psi_2(t|\mathbf{x})$ and $\psi_3(t|\mathbf{x})$ .
We can write the resulting mixture model on the form $p(t|\mathbf{x}) = \sum_{k=1}^{K} \pi_k \psi_k(t|\mathbf{x})$ with mixing coefficients
$$\pi_1(\mathbf{x}) = \sigma(\mathbf{v}_1^{\mathrm{T}}\mathbf{x}) \\
\pi_2(\mathbf{x}) = (1 - \sigma(\mathbf{v}_1^{\mathrm{T}}\mathbf{x}))\,\sigma(\mathbf{v}_2^{\mathrm{T}}\mathbf{x}) \\
\pi_3(\mathbf{x}) = (1 - \sigma(\mathbf{v}_1^{\mathrm{T}}\mathbf{x}))(1 - \sigma(\mathbf{v}_2^{\mathrm{T}}\mathbf{x})),$$
where $\sigma(\cdot)$ is defined in $\sigma(a) = \frac{1}{1 + \exp(-a)}$ and $\mathbf{v}_1$ and $\mathbf{v}_2$ are the parameter vectors of the logistic regression models. Note that $\pi_1(\mathbf{x})$ is independent of the value of $\mathbf{v}_2$ . This would not be the case if the mixing coefficients were modelled using a single level softmax model,
$$\pi_k(\mathbf{x}) = \frac{e^{\mathbf{u}_k^{\mathrm{T}} \mathbf{x}}}{\sum_{j=1}^{3} e^{\mathbf{u}_j^{\mathrm{T}} \mathbf{x}}},$$
where the parameters $\mathbf{u}_k$ , corresponding to $\pi_k(\mathbf{x})$ , will also affect the other mixing coefficients, $\pi_{j\neq k}(\mathbf{x})$ , through the denominator. This gives the hierarchical model different properties in the modelling of the mixture coefficients over the input space, as compared to a linear softmax model. An example is shown in the right panel of the referenced figure, where the red lines represent borders of equal mixing coefficients in the input space. These borders are formed from two straight lines, corresponding to the two logistic units in the left panel. A corresponding division of the input space by a softmax model would involve three straight lines joined at a single point, looking, e.g., something like the red lines in Figure 4.3 in PRML; note that a linear three-class softmax model could not implement the borders shown in the right panel of the referenced figure.
| 3,368
|
14
|
14.3
|
easy
|
By making use of Jensen's inequality $f\left(\sum_{i=1}^{M} \lambda_i x_i\right) \leqslant \sum_{i=1}^{M} \lambda_i f(x_i)$, for the special case of the convex function $f(x) = x^2$ , show that the average expected sum-of-squares error $E_{AV}$ of the members of a simple committee model, given by $E_{\text{AV}} = \frac{1}{M} \sum_{m=1}^{M} \mathbb{E}_{\mathbf{x}} \left[ \epsilon_m(\mathbf{x})^2 \right].$, and the expected error $E_{COM}$ of the committee itself, given by $= \mathbb{E}_{\mathbf{x}} \left[ \left\{ \frac{1}{M} \sum_{m=1}^{M} \epsilon_m(\mathbf{x}) \right\}^2 \right]$, satisfy
$$E_{\text{COM}} \leqslant E_{\text{AV}}.$$
|
We start by rearranging the r.h.s. of $E_{\text{AV}} = \frac{1}{M} \sum_{m=1}^{M} \mathbb{E}_{\mathbf{x}} \left[ \epsilon_m(\mathbf{x})^2 \right].$, by moving the factor 1/M inside the sum and the expectation operator outside the sum, yielding
$$\mathbb{E}_{\mathbf{x}} \left[ \sum_{m=1}^{M} \frac{1}{M} \epsilon_m(\mathbf{x})^2 \right].$$
If we then identify $\epsilon_m(\mathbf{x})$ and 1/M with $x_i$ and $\lambda_i$ in $f\left(\sum_{i=1}^{M} \lambda_i x_i\right) \leqslant \sum_{i=1}^{M} \lambda_i f(x_i)$, respectively, and take $f(x) = x^2$ , we see from $f\left(\sum_{i=1}^{M} \lambda_i x_i\right) \leqslant \sum_{i=1}^{M} \lambda_i f(x_i)$ that
$$\left(\sum_{m=1}^{M} \frac{1}{M} \epsilon_m(\mathbf{x})\right)^2 \leqslant \sum_{m=1}^{M} \frac{1}{M} \epsilon_m(\mathbf{x})^2.$$
Since this holds for all values of x, it must also hold for the expectation over x, proving $E_{\text{COM}} \leqslant E_{\text{AV}}.$.
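A quick Monte Carlo illustration of this inequality (the random error functions $\epsilon_m(\mathbf{x})$ are an arbitrary assumption; any choice would do):

```python
import numpy as np

rng = np.random.default_rng(8)
M, S = 5, 10000                      # committee members, sampled input points x

eps = rng.normal(size=(M, S))        # errors epsilon_m(x) of the individual models

E_av = np.mean(eps ** 2)                        # average of the individual expected errors
E_com = np.mean(eps.mean(axis=0) ** 2)          # expected error of the committee
print(E_com <= E_av)                 # True, as guaranteed by Jensen's inequality
```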
| 966
|
14
|
14.5
|
medium
|
Consider a committee in which we allow unequal weighting of the constituent models, so that
$$y_{\text{COM}}(\mathbf{x}) = \sum_{m=1}^{M} \alpha_m y_m(\mathbf{x}). \tag{14.55}$$
In order to ensure that the predictions $y_{\text{COM}}(\mathbf{x})$ remain within sensible limits, suppose that we require that they be bounded at each value of $\mathbf{x}$ by the minimum and maximum values given by any of the members of the committee, so that
$$y_{\min}(\mathbf{x}) \leqslant y_{\text{COM}}(\mathbf{x}) \leqslant y_{\max}(\mathbf{x}).$$
Show that a necessary and sufficient condition for this constraint is that the coefficients $\alpha_m$ satisfy
$$\alpha_m \geqslant 0, \qquad \sum_{m=1}^{M} \alpha_m = 1. \tag{14.57}$$
|
To prove that $\alpha_m \geqslant 0, \qquad \sum_{m=1}^{M} \alpha_m = 1.$ is a sufficient condition for $y_{\min}(\mathbf{x}) \leqslant y_{\text{COM}}(\mathbf{x}) \leqslant y_{\max}(\mathbf{x}).$ we have to show that $y_{\min}(\mathbf{x}) \leqslant y_{\text{COM}}(\mathbf{x}) \leqslant y_{\max}(\mathbf{x}).$ follows from $\alpha_m \geqslant 0, \qquad \sum_{m=1}^{M} \alpha_m = 1.$. To do this, consider a fixed set of $y_m(\mathbf{x})$ and imagine varying the $\alpha_m$ over all possible values allowed by $\alpha_m \geqslant 0, \qquad \sum_{m=1}^{M} \alpha_m = 1.$ and consider the values taken by
$y_{\text{COM}}(\mathbf{x})$ as a result. The maximum value of $y_{\text{COM}}(\mathbf{x})$ occurs when $\alpha_k = 1$ where $y_k(\mathbf{x}) \geqslant y_m(\mathbf{x})$ for $m \neq k$ , and hence all $\alpha_m = 0$ for $m \neq k$ . An analogous result holds for the minimum value. For other settings of $\alpha$ ,
$$y_{\min}(\mathbf{x}) < y_{\text{COM}}(\mathbf{x}) < y_{\max}(\mathbf{x}),$$
since $y_{\text{COM}}(\mathbf{x})$ is a convex combination of points, $y_m(\mathbf{x})$ , such that
$$\forall m: y_{\min}(\mathbf{x}) \leqslant y_m(\mathbf{x}) \leqslant y_{\max}(\mathbf{x}).$$
Thus, $\alpha_m \geqslant 0, \qquad \sum_{m=1}^{M} \alpha_m = 1.$ is a sufficient condition for $y_{\min}(\mathbf{x}) \leqslant y_{\text{COM}}(\mathbf{x}) \leqslant y_{\max}(\mathbf{x}).$.
Showing that $\alpha_m \geqslant 0, \qquad \sum_{m=1}^{M} \alpha_m = 1.$ is a necessary condition for $y_{\min}(\mathbf{x}) \leqslant y_{\text{COM}}(\mathbf{x}) \leqslant y_{\max}(\mathbf{x}).$ is equivalent to showing that $y_{\min}(\mathbf{x}) \leqslant y_{\text{COM}}(\mathbf{x}) \leqslant y_{\max}(\mathbf{x}).$ is a sufficient condition for (14.57). The implication here is that if (14.56) holds for any choice of values of the committee members $\{y_m(\mathbf{x})\}$ then (14.57) will be satisfied. Suppose, without loss of generality, that $\alpha_k$ is the smallest of the $\alpha$ values, i.e. $\alpha_k \leqslant \alpha_m$ for $k \neq m$ . Then consider $y_k(\mathbf{x}) = 1$ , together with $y_m(\mathbf{x}) = 0$ for all $m \neq k$ . Then $y_{\min}(\mathbf{x}) = 0$ while $y_{\text{COM}}(\mathbf{x}) = \alpha_k$ and hence from (14.56) we obtain $\alpha_k \geqslant 0$ . Since $\alpha_k$ is the smallest of the $\alpha$ values it follows that all of the coefficients must satisfy $\alpha_k \geqslant 0$ . Similarly, consider the case in which $y_m(\mathbf{x}) = 1$ for all m. Then $y_{\min}(\mathbf{x}) = y_{\max}(\mathbf{x}) = 1$ , while $y_{\text{COM}}(\mathbf{x}) = \sum_m \alpha_m$ . From (14.56) it then follows that $\sum_m \alpha_m = 1$ , as required.
| 2,788
|
14
|
14.6
|
easy
|
By differentiating the error function $= (e^{\alpha_m/2} - e^{-\alpha_m/2}) \sum_{n=1}^N w_n^{(m)} I(y_m(\mathbf{x}_n) \neq t_n) + e^{-\alpha_m/2} \sum_{n=1}^N w_n^{(m)}.$ with respect to $\alpha_m$ , show that the parameters $\alpha_m$ in the AdaBoost algorithm are updated using $\alpha_m = \ln \left\{ \frac{1 - \epsilon_m}{\epsilon_m} \right\}.$ in which $\epsilon_m$ is defined by $\epsilon_{m} = \frac{\sum_{n=1}^{N} w_{n}^{(m)} I(y_{m}(\mathbf{x}_{n}) \neq t_{n})}{\sum_{n=1}^{N} w_{n}^{(m)}}$.
|
If we differentiate $= (e^{\alpha_m/2} - e^{-\alpha_m/2}) \sum_{n=1}^N w_n^{(m)} I(y_m(\mathbf{x}_n) \neq t_n) + e^{-\alpha_m/2} \sum_{n=1}^N w_n^{(m)}.$ w.r.t. $\alpha_m$ we obtain
$$\frac{\partial E}{\partial \alpha_m} = \frac{1}{2} \left( (e^{\alpha_m/2} + e^{-\alpha_m/2}) \sum_{n=1}^{N} w_n^{(m)} I(y_m(\mathbf{x}_n) \neq t_n) - e^{-\alpha_m/2} \sum_{n=1}^{N} w_n^{(m)} \right).$$
Setting this equal to zero and rearranging, we get
$$\frac{\sum_{n} w_n^{(m)} I(y_m(\mathbf{x}_n) \neq t_n)}{\sum_{n} w_n^{(m)}} = \frac{e^{-\alpha_m/2}}{e^{\alpha_m/2} + e^{-\alpha_m/2}} = \frac{1}{e^{\alpha_m} + 1}.$$
Using $\epsilon_{m} = \frac{\sum_{n=1}^{N} w_{n}^{(m)} I(y_{m}(\mathbf{x}_{n}) \neq t_{n})}{\sum_{n=1}^{N} w_{n}^{(m)}}$, we can rewrite this as
$$\frac{1}{e^{\alpha_m} + 1} = \epsilon_m,$$
which can be further rewritten as
$$e^{\alpha_m} = \frac{1 - \epsilon_m}{\epsilon_m},$$
from which $\alpha_m = \ln \left\{ \frac{1 - \epsilon_m}{\epsilon_m} \right\}.$ follows directly.
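The update can be exercised numerically as follows (weights and misclassification indicators are randomly generated for illustration); the final line checks the stationarity condition derived above:

```python
import numpy as np

rng = np.random.default_rng(9)
N = 200
w = rng.random(N)                              # current weights w_n^{(m)} (unnormalised)
miss = rng.random(N) < 0.3                     # I(y_m(x_n) != t_n) for a weak learner

eps_m = np.sum(w * miss) / np.sum(w)           # weighted error rate epsilon_m
alpha_m = np.log((1 - eps_m) / eps_m)          # the update derived above

# Sanity check against the stationarity condition 1 / (e^alpha + 1) = epsilon.
print(np.isclose(1.0 / (np.exp(alpha_m) + 1.0), eps_m))   # True
```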
| 1,018
|
14
|
14.9
|
easy
|
Show that the sequential minimization of the sum-of-squares error function for an additive model of the form $f_m(\mathbf{x}) = \frac{1}{2} \sum_{l=1}^{m} \alpha_l y_l(\mathbf{x})$ in the style of boosting simply involves fitting each new base classifier to the residual errors $t_n - f_{m-1}(\mathbf{x}_n)$ from the previous model.
|
The sum-of-squares error for the additive model of $f_m(\mathbf{x}) = \frac{1}{2} \sum_{l=1}^{m} \alpha_l y_l(\mathbf{x})$ is defined as
$$E = \frac{1}{2} \sum_{n=1}^{N} (t_n - f_m(\mathbf{x}_n))^2.$$
Using $f_m(\mathbf{x}) = \frac{1}{2} \sum_{l=1}^{m} \alpha_l y_l(\mathbf{x})$, we can rewrite this as
$$\frac{1}{2} \sum_{n=1}^{N} \left(t_n - f_{m-1}(\mathbf{x}_n) - \frac{1}{2} \alpha_m y_m(\mathbf{x}_n)\right)^2,$$
where we recognize the first two terms inside the square as the residual from the (m-1)-th model. Minimizing this error w.r.t. $y_m(\mathbf{x})$ is therefore equivalent to fitting $y_m(\mathbf{x})$ to the (scaled) residuals.
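A minimal sketch of this sequential scheme, assuming decision stumps as base learners and absorbing the coefficients $\frac{1}{2}\alpha_m$ into the fitted stump for simplicity; each new stump is fitted by least squares to the current residuals:

```python
import numpy as np

rng = np.random.default_rng(10)
N = 100
x = np.linspace(0, 1, N)
t = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=N)

# A very simple base learner: a decision stump fitted by least squares.
def fit_stump(x, r):
    best = None
    for s in x[1:-1]:                          # candidate split points
        left, right = r[x <= s].mean(), r[x > s].mean()
        err = np.sum((np.where(x <= s, left, right) - r) ** 2)
        if best is None or err < best[0]:
            best = (err, s, left, right)
    _, s, left, right = best
    return lambda u: np.where(u <= s, left, right)

# Sequential additive modelling: each new stump is fitted to the residuals
# t_n - f_{m-1}(x_n) of the model built so far.
f = np.zeros(N)
for m in range(10):
    stump = fit_stump(x, t - f)
    f = f + stump(x)

print(np.mean((t - f) ** 2))                   # residual error shrinks as stumps are added
```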
| 651
|