| chapter | question_number | difficulty | question_text | answer | answer_length |
|---|---|---|---|---|---|
6
|
6.7
|
easy
|
Verify the results $k(\mathbf{x}, \mathbf{x}') = k_1(\mathbf{x}, \mathbf{x}') + k_2(\mathbf{x}, \mathbf{x}')$ and $k(\mathbf{x}, \mathbf{x}') = k_1(\mathbf{x}, \mathbf{x}')k_2(\mathbf{x}, \mathbf{x}')$ for constructing valid kernels.
|
To prove that $k(\mathbf{x}, \mathbf{x}') = k_1(\mathbf{x}, \mathbf{x}') + k_2(\mathbf{x}, \mathbf{x}')$ is a valid kernel, we use the property that a kernel is valid if and only if it can be written as an inner product of feature vectors, $k(\mathbf{x}, \mathbf{z}) = \phi(\mathbf{x})^{\mathrm{T}} \phi(\mathbf{z})$, or equivalently that its Gram matrix is positive semidefinite. Since we know $k_1(\mathbf{x}, \mathbf{x}')$ and $k_2(\mathbf{x}, \mathbf{x}')$ are valid kernels, their Gram matrices $\mathbf{K}_1$ and $\mathbf{K}_2$
are both positive semidefinite. For any vector $\mathbf{v}$ we have $\mathbf{v}^T(\mathbf{K}_1 + \mathbf{K}_2)\mathbf{v} = \mathbf{v}^T\mathbf{K}_1\mathbf{v} + \mathbf{v}^T\mathbf{K}_2\mathbf{v} \geq 0$, so $\mathbf{K} = \mathbf{K}_1 + \mathbf{K}_2$ is also positive semidefinite and thus $k(\mathbf{x}, \mathbf{x}')$ is a valid kernel.
To prove $k(\mathbf{x}, \mathbf{x}') = k_1(\mathbf{x}, \mathbf{x}')k_2(\mathbf{x}, \mathbf{x}')$ is a valid kernel, we assume the feature mapping for kernel $k_1(\mathbf{x}, \mathbf{x}')$ is $\boldsymbol{\phi}^{(1)}(\mathbf{x})$, and similarly $\boldsymbol{\phi}^{(2)}(\mathbf{x})$ for $k_2(\mathbf{x}, \mathbf{x}')$. Moreover, we assume that $\boldsymbol{\phi}^{(1)}(\mathbf{x})$ is $M$-dimensional and $\boldsymbol{\phi}^{(2)}(\mathbf{x})$ is $N$-dimensional. We expand $k(\mathbf{x}, \mathbf{x}')$ based on (6.18):
$$k(\mathbf{x}, \mathbf{x}') = k_1(\mathbf{x}, \mathbf{x}') k_2(\mathbf{x}, \mathbf{x}')$$
$$= \phi^{(1)}(\mathbf{x})^T \phi^{(1)}(\mathbf{x}') \phi^{(2)}(\mathbf{x})^T \phi^{(2)}(\mathbf{x}')$$
$$= \sum_{i=1}^M \phi_i^{(1)}(\mathbf{x}) \phi_i^{(1)}(\mathbf{x}') \sum_{j=1}^N \phi_j^{(2)}(\mathbf{x}) \phi_j^{(2)}(\mathbf{x}')$$
$$= \sum_{i=1}^M \sum_{j=1}^N \left[ \phi_i^{(1)}(\mathbf{x}) \phi_j^{(2)}(\mathbf{x}) \right] \left[ \phi_i^{(1)}(\mathbf{x}') \phi_j^{(2)}(\mathbf{x}') \right]$$
$$= \sum_{k=1}^{MN} \phi_k(\mathbf{x}) \phi_k(\mathbf{x}') = \phi(\mathbf{x})^T \phi(\mathbf{x}')$$
where $\phi_i^{(1)}(\mathbf{x})$ is the $i$th element of $\phi^{(1)}(\mathbf{x})$, and $\phi_j^{(2)}(\mathbf{x})$ is the $j$th element of $\phi^{(2)}(\mathbf{x})$. To be more specific, we have proved that $k(\mathbf{x},\mathbf{x}')$ can be written as $\phi(\mathbf{x})^T\phi(\mathbf{x}')$. Here $\phi(\mathbf{x})$ is an $MN\times 1$ column vector, and the $k$th ($k=1,2,...,MN$) element is given by $\phi_i^{(1)}(\mathbf{x})\times\phi_j^{(2)}(\mathbf{x})$. Moreover, we can express $i, j$ in terms of $k$:
$$i = \left\lfloor \frac{k-1}{N} \right\rfloor + 1, \qquad j = \big((k-1) \bmod N\big) + 1$$
where $\lfloor\cdot\rfloor$ denotes integer division (the floor of the quotient) and $\bmod$ denotes the remainder.
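As a quick numerical sanity check (my addition, not part of the original solution), the sketch below builds two random finite-dimensional feature maps, forms the sum and product kernels, and confirms that both Gram matrices are positive semidefinite and that the stacked feature map $\phi_k(\mathbf{x}) = \phi_i^{(1)}(\mathbf{x})\phi_j^{(2)}(\mathbf{x})$ reproduces the product kernel. All dimensions and data are arbitrary choices for illustration.
```python
import numpy as np

rng = np.random.default_rng(0)
M, N, n_points, D = 3, 4, 6, 2

# Toy "feature maps": fixed random linear maps of the input (placeholders).
W1 = rng.normal(size=(M, D))            # phi1(x) = W1 x, M-dimensional
W2 = rng.normal(size=(N, D))            # phi2(x) = W2 x, N-dimensional
X = rng.normal(size=(n_points, D))

Phi1 = X @ W1.T                         # rows are phi1(x_n)
Phi2 = X @ W2.T                         # rows are phi2(x_n)

K1 = Phi1 @ Phi1.T                      # Gram matrix of k1
K2 = Phi2 @ Phi2.T                      # Gram matrix of k2
K_sum, K_prod = K1 + K2, K1 * K2        # sum kernel and (elementwise) product kernel

# Both Gram matrices are positive semidefinite (eigenvalues >= 0 up to rounding).
print(np.linalg.eigvalsh(K_sum).min() > -1e-10)
print(np.linalg.eigvalsh(K_prod).min() > -1e-10)

# The stacked map phi_k(x) = phi1_i(x) * phi2_j(x) reproduces the product kernel.
Phi_stack = np.einsum('ni,nj->nij', Phi1, Phi2).reshape(n_points, M * N)
print(np.allclose(Phi_stack @ Phi_stack.T, K_prod))
```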
| 2,420
|
6
|
6.8
|
easy
|
Verify the results $k(\mathbf{x}, \mathbf{x}') = k_3(\phi(\mathbf{x}), \phi(\mathbf{x}'))$ and $k(\mathbf{x}, \mathbf{x}') = \mathbf{x}^{\mathrm{T}} \mathbf{A} \mathbf{x}'$ for constructing valid kernels.
|
For $k(\mathbf{x}, \mathbf{x}') = k_3(\phi(\mathbf{x}), \phi(\mathbf{x}'))$ we suppose $k_3(\mathbf{x}, \mathbf{x}') = \mathbf{g}(\mathbf{x})^T \mathbf{g}(\mathbf{x}')$ , and thus we have:
$$k(\mathbf{x}, \mathbf{x}') = k_3(\phi(\mathbf{x}), \phi(\mathbf{x}')) = g(\phi(\mathbf{x}))^T g(\phi(\mathbf{x}')) = f(\mathbf{x})^T f(\mathbf{x}')$$
where we have denoted $g(\phi(\mathbf{x})) = f(\mathbf{x})$, and it is now obvious that $k(\mathbf{x}, \mathbf{x}') = k_3(\phi(\mathbf{x}), \phi(\mathbf{x}'))$ is a valid kernel. To prove that $k(\mathbf{x}, \mathbf{x}') = \mathbf{x}^{\mathrm{T}} \mathbf{A} \mathbf{x}'$ is a valid kernel, we suppose $\mathbf{x}$ is an $N \times 1$ column vector and $\mathbf{A}$ is an $N \times N$ symmetric positive semidefinite matrix. We know that $\mathbf{A}$ can be decomposed as $\mathbf{Q}\mathbf{B}\mathbf{Q}^T$. Here $\mathbf{Q}$ is an $N \times N$ orthogonal matrix, and $\mathbf{B}$ is an $N \times N$ diagonal matrix whose diagonal elements are non-negative. Now we can derive:
$$k(\mathbf{x}, \mathbf{x}') = \mathbf{x}^T \mathbf{A} \mathbf{x}' = \mathbf{x}^T \mathbf{Q} \mathbf{B} \mathbf{Q}^T \mathbf{x}' = (\mathbf{Q}^T \mathbf{x})^T \mathbf{B} (\mathbf{Q}^T \mathbf{x}') = \mathbf{y}^T \mathbf{B} \mathbf{y}'$$
$$= \sum_{i=1}^N B_{ii} y_i y_i' = \sum_{i=1}^N (\sqrt{B_{ii}} y_i) (\sqrt{B_{ii}} y_i') = \boldsymbol{\phi}(\mathbf{x})^T \boldsymbol{\phi}(\mathbf{x}')$$
To be more specific, we have proved that $k(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^T \phi(\mathbf{x}')$, where $\phi(\mathbf{x})$ is an $N \times 1$ column vector whose $i$th ($i = 1, 2, ..., N$) element is given by $\sqrt{B_{ii}} y_i$, i.e., $\sqrt{B_{ii}} (\mathbf{Q}^T \mathbf{x})_i$.
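The following small sketch (my own numerical check, not part of the exercise) verifies the construction $\phi_i(\mathbf{x}) = \sqrt{B_{ii}}(\mathbf{Q}^T\mathbf{x})_i$ for a randomly generated symmetric positive semidefinite $\mathbf{A}$.
```python
import numpy as np

rng = np.random.default_rng(1)
N = 5

G = rng.normal(size=(N, N))
A = G @ G.T                              # random symmetric PSD matrix

# Eigendecomposition A = Q B Q^T, with Q orthogonal and B diagonal, B_ii >= 0.
eigvals, Q = np.linalg.eigh(A)
sqrt_B = np.sqrt(np.clip(eigvals, 0.0, None))

def phi(x):
    """Feature map with elements sqrt(B_ii) * (Q^T x)_i."""
    return sqrt_B * (Q.T @ x)

x, xp = rng.normal(size=N), rng.normal(size=N)
print(np.allclose(x @ A @ xp, phi(x) @ phi(xp)))   # True
```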
| 1,714
|
6
|
6.9
|
easy
|
Verify the results $k(\mathbf{x}, \mathbf{x}') = k_a(\mathbf{x}_a, \mathbf{x}'_a) + k_b(\mathbf{x}_b, \mathbf{x}'_b)$ and $k(\mathbf{x}, \mathbf{x}') = k_a(\mathbf{x}_a, \mathbf{x}'_a)k_b(\mathbf{x}_b, \mathbf{x}'_b)$ for constructing valid kernels.
|
To prove $k(\mathbf{x}, \mathbf{x}') = k_a(\mathbf{x}_a, \mathbf{x}'_a) + k_b(\mathbf{x}_b, \mathbf{x}'_b)$, let's first expand the expression:
$$k(\mathbf{x}, \mathbf{x}') = k_a(\mathbf{x}_a, \mathbf{x}'_a) + k_b(\mathbf{x}_b, \mathbf{x}'_b)$$
$$= \sum_{i=1}^{M} \phi_i^{(a)}(\mathbf{x}_a) \phi_i^{(a)}(\mathbf{x}'_a) + \sum_{j=1}^{N} \phi_j^{(b)}(\mathbf{x}_b) \phi_j^{(b)}(\mathbf{x}'_b)$$
$$= \sum_{k=1}^{M+N} \phi_k(\mathbf{x}) \phi_k(\mathbf{x}') = \phi(\mathbf{x})^T \phi(\mathbf{x}')$$
where we have assumed that the feature map $\phi^{(a)}(\mathbf{x}_a)$ is $M$-dimensional and $\phi^{(b)}(\mathbf{x}_b)$ is $N$-dimensional. The mapping function $\phi(\mathbf{x})$ is an $(M+N)\times 1$ column vector, whose $k$th ($k=1,2,...,M+N$) element $\phi_k(\mathbf{x})$ is:
$$\phi_k(\mathbf{x}) = \begin{cases} \phi_k^{(a)}(\mathbf{x}_a) & 1 \le k \le M \\ \phi_{k-M}^{(b)}(\mathbf{x}_b) & M+1 \le k \le M+N \end{cases}$$
$k(\mathbf{x}, \mathbf{x}') = k_a(\mathbf{x}_a, \mathbf{x}'_a)k_b(\mathbf{x}_b, \mathbf{x}'_b)$ is quite similar to $k(\mathbf{x}, \mathbf{x}') = k_1(\mathbf{x}, \mathbf{x}')k_2(\mathbf{x}, \mathbf{x}')$. We follow the same procedure:
$$k(\mathbf{x}, \mathbf{x}') = k_a(\mathbf{x}_a, \mathbf{x}'_a) k_b(\mathbf{x}_b, \mathbf{x}'_b)$$
$$= \sum_{i=1}^{M} \phi_i^{(a)}(\mathbf{x}_a) \phi_i^{(a)}(\mathbf{x}'_a) \sum_{j=1}^{N} \phi_j^{(b)}(\mathbf{x}_b) \phi_j^{(b)}(\mathbf{x}'_b)$$
$$= \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ \phi_i^{(a)}(\mathbf{x}_a) \phi_j^{(b)}(\mathbf{x}_b) \right] \left[ \phi_i^{(a)}(\mathbf{x}'_a) \phi_j^{(b)}(\mathbf{x}'_b) \right]$$
$$= \sum_{k=1}^{MN} \phi_k(\mathbf{x}) \phi_k(\mathbf{x}') = \phi(\mathbf{x})^T \phi(\mathbf{x}')$$
By analogy to $k(\mathbf{x}, \mathbf{x}') = k_1(\mathbf{x}, \mathbf{x}')k_2(\mathbf{x}, \mathbf{x}')$, the mapping function $\phi(\mathbf{x})$ is a $MN \times 1$ column vector, whose kth (k = 1, 2, ..., MN) element $\phi_k(\mathbf{x})$ is:
$$\phi_k(\mathbf{x}) = \phi_i^{(a)}(\mathbf{x}_a) \times \phi_j^{(b)}(\mathbf{x}_b)$$
To be more specific, $\mathbf{x}_a$ and $\mathbf{x}_b$ are the two sub-vectors that together make up $\mathbf{x}$. Moreover, we can express $i, j$ in terms of $k$:
$$i = \left\lfloor \frac{k-1}{N} \right\rfloor + 1, \qquad j = \big((k-1) \bmod N\big) + 1$$
where $\lfloor\cdot\rfloor$ denotes integer division (the floor of the quotient) and $\bmod$ denotes the remainder.
| 2,450
|
7
|
7.1
|
medium
|
Suppose we have a data set of input vectors $\{\mathbf{x}_n\}$ with corresponding target values $t_n \in \{-1,1\}$ , and suppose that we model the density of input vectors within each class separately using a Parzen kernel density estimator (see Section 2.5.1) with a kernel $k(\mathbf{x}, \mathbf{x}')$ . Write down the minimum misclassification-rate decision rule assuming the two classes have equal prior probability. Show also that, if the kernel is chosen to be $k(\mathbf{x}, \mathbf{x}') = \mathbf{x}^T \mathbf{x}'$ , then the classification rule reduces to simply assigning a new input vector to the class having the closest mean. Finally, show that, if the kernel takes the form $k(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^T \phi(\mathbf{x}')$ , then the classification is based on the closest mean in the feature space $\phi(\mathbf{x})$ .
|
By analogy to Eq $p(\mathbf{x}) = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{h^D} k\left(\frac{\mathbf{x} - \mathbf{x}_n}{h}\right)$, we can obtain:
$$p(\mathbf{x}|t) = \begin{cases} \frac{1}{N_{+1}} \sum_{n=1}^{N_{+1}} \frac{1}{Z_k} \cdot k(\mathbf{x}, \mathbf{x}_n) & t = +1\\ \frac{1}{N_{-1}} \sum_{n=1}^{N_{-1}} \frac{1}{Z_k} \cdot k(\mathbf{x}, \mathbf{x}_n) & t = -1 \end{cases}$$
where $N_{+1}$ is the number of samples with label $t = +1$, the corresponding sum runs over those samples, and similarly for $N_{-1}$. $Z_k$ is a normalization constant for the kernel (in the Parzen case, the volume of the hypercube). Since we have equal prior probabilities for the two classes, i.e.,
$$p(t) = \begin{cases} 0.5 & t = +1 \\ 0.5 & t = -1 \end{cases}$$
Based on Bayes' Theorem, we have $p(t|\mathbf{x}) \propto p(\mathbf{x}|t) \cdot p(t)$ , yielding:
$$p(t|\mathbf{x}) = \begin{cases} \frac{1}{Z} \cdot \frac{1}{N_{+1}} \sum_{n=1}^{N_{+1}} k(\mathbf{x}, \mathbf{x}_n) & t = +1\\ \frac{1}{Z} \cdot \frac{1}{N_{-1}} \sum_{n=1}^{N_{-1}} k(\mathbf{x}, \mathbf{x}_n) & t = -1 \end{cases}$$
where $1/Z$ is a normalization constant that ensures the posterior is properly normalized. To classify a new sample $\mathbf{x}^*$, we find the value $t^*$ that maximizes $p(t|\mathbf{x})$. Therefore, we can obtain:
$$t^{*} = \begin{cases} +1 & \text{if } \frac{1}{N_{+1}} \sum_{n=1}^{N_{+1}} k(\mathbf{x}, \mathbf{x}_{n}) \ge \frac{1}{N_{-1}} \sum_{n=1}^{N_{-1}} k(\mathbf{x}, \mathbf{x}_{n}) \\ -1 & \text{if } \frac{1}{N_{+1}} \sum_{n=1}^{N_{+1}} k(\mathbf{x}, \mathbf{x}_{n}) \le \frac{1}{N_{-1}} \sum_{n=1}^{N_{-1}} k(\mathbf{x}, \mathbf{x}_{n}) \end{cases}$$
(\*)
If we now choose the kernel function as $k(\mathbf{x}, \mathbf{x}') = \mathbf{x}^T \mathbf{x}'$ , we have:
$$\frac{1}{N_{+1}} \sum_{n=1}^{N_{+1}} k(\mathbf{x}, \mathbf{x}_n) = \frac{1}{N_{+1}} \sum_{n=1}^{N_{+1}} \mathbf{x}^T \mathbf{x}_n = \mathbf{x}^T \tilde{\mathbf{x}}_{+1}$$
Where we have denoted:
$$\tilde{\mathbf{x}}_{+1} = \frac{1}{N_{+1}} \sum_{n=1}^{N_{+1}} \mathbf{x}_n$$
and similarly for $\tilde{\mathbf{x}}_{-1}$ . Therefore, the classification criterion (\*) can be written as:
$$t^{*} = \begin{cases} +1 & \text{if } \mathbf{x}^T\tilde{\mathbf{x}}_{+1} \ge \mathbf{x}^T\tilde{\mathbf{x}}_{-1} \\ -1 & \text{if } \mathbf{x}^T\tilde{\mathbf{x}}_{+1} \le \mathbf{x}^T\tilde{\mathbf{x}}_{-1} \end{cases}$$
When we choose the kernel function as $k(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^T \phi(\mathbf{x}')$ , we can similarly obtain the classification criterion:
$$t^* = \begin{cases} +1 & \text{if } \phi(\mathbf{x})^T\tilde{\phi}(\mathbf{x}_{+1}) \ge \phi(\mathbf{x})^T\tilde{\phi}(\mathbf{x}_{-1}) \\ -1 & \text{if } \phi(\mathbf{x})^T\tilde{\phi}(\mathbf{x}_{+1}) \le \phi(\mathbf{x})^T\tilde{\phi}(\mathbf{x}_{-1}) \end{cases}$$
Where we have defined:
$$\tilde{\phi}(\mathbf{x}_{+1}) = \frac{1}{N_{+1}} \sum_{n=1}^{N_{+1}} \phi(\mathbf{x}_n)$$
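As an illustration (my addition, not required by the exercise), the sketch below implements the decision rule (*) on synthetic two-class data and confirms that, with the linear kernel, it coincides with comparing $\mathbf{x}^T\tilde{\mathbf{x}}_{+1}$ against $\mathbf{x}^T\tilde{\mathbf{x}}_{-1}$. The data-generating parameters are made up for the example.
```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic classes (t = +1 and t = -1), equal sizes so the priors are equal.
X_pos = rng.normal(loc=[+2.0, 0.0], size=(40, 2))
X_neg = rng.normal(loc=[-2.0, 0.0], size=(40, 2))

def classify(x, k):
    """Decision rule (*): pick the class with the larger mean kernel value."""
    score_pos = np.mean([k(x, xn) for xn in X_pos])
    score_neg = np.mean([k(x, xn) for xn in X_neg])
    return +1 if score_pos >= score_neg else -1

linear = lambda x, xp: x @ xp
x_new = np.array([1.0, -0.5])
print(classify(x_new, linear))

# With the linear kernel the rule compares x^T x_tilde_{+1} with x^T x_tilde_{-1}.
mean_pos, mean_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)
print((x_new @ mean_pos >= x_new @ mean_neg) == (classify(x_new, linear) == 1))
```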
| 2,812
|
7
|
7.10
|
medium
|
Derive the result $= -\frac{1}{2} \left\{ N \ln(2\pi) + \ln |\mathbf{C}| + \mathbf{t}^{\mathrm{T}} \mathbf{C}^{-1} \mathbf{t} \right\}$ for the marginal likelihood function in the regression RVM, by performing the Gaussian integral over w in $p(\mathbf{t}|\mathbf{X}, \boldsymbol{\alpha}, \beta) = \int p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) p(\mathbf{w}|\boldsymbol{\alpha}) \, d\mathbf{w}.$ using the technique of completing the square in the exponential.
|
We first note that this result is given immediately from (2.113)–(2.115), but the task set in the exercise was to practice the technique of completing the square. In this solution and that of Exercise 7.12, we broadly follow the presentation in Section 3.5.1. Using $p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} p(t_n|\mathbf{x}_n, \mathbf{w}, \beta^{-1}).$ and $p(\mathbf{w}|\boldsymbol{\alpha}) = \prod_{i=1}^{M} \mathcal{N}(w_i|0, \alpha_i^{-1})$, we can write $p(\mathbf{t}|\mathbf{X}, \boldsymbol{\alpha}, \beta) = \int p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) p(\mathbf{w}|\boldsymbol{\alpha}) \, d\mathbf{w}.$ in a form similar to $p(\mathbf{t}|\alpha,\beta) = \left(\frac{\beta}{2\pi}\right)^{N/2} \left(\frac{\alpha}{2\pi}\right)^{M/2} \int \exp\left\{-E(\mathbf{w})\right\} d\mathbf{w}$
$$p(\mathbf{t}|\mathbf{X}, \boldsymbol{\alpha}, \beta) = \left(\frac{\beta}{2\pi}\right)^{N/2} \frac{1}{(2\pi)^{M/2}} \prod_{i=1}^{M} \alpha_i^{1/2} \int \exp\left\{-E(\mathbf{w})\right\} d\mathbf{w}$$
(129)
where
$$E(\mathbf{w}) = \frac{\beta}{2} \|\mathbf{t} - \mathbf{\Phi} \mathbf{w}\|^2 + \frac{1}{2} \mathbf{w}^{\mathrm{T}} \mathbf{A} \mathbf{w}$$
and $\mathbf{A} = \operatorname{diag}(\boldsymbol{\alpha})$ .
Completing the square over w, we get
$$E(\mathbf{w}) = \frac{1}{2} (\mathbf{w} - \mathbf{m})^{\mathrm{T}} \mathbf{\Sigma}^{-1} (\mathbf{w} - \mathbf{m}) + E(\mathbf{t})$$
(130)
where m and $\Sigma$ are given by $\mathbf{m} = \beta \mathbf{\Sigma} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}$ and $\Sigma = (\mathbf{A} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi})^{-1}$, respectively, and
$$E(\mathbf{t}) = \frac{1}{2} \left( \beta \mathbf{t}^{\mathrm{T}} \mathbf{t} - \mathbf{m}^{\mathrm{T}} \mathbf{\Sigma}^{-1} \mathbf{m} \right).$$
(131)
Using (130), we can evaluate the integral in (129) to obtain
$$\int \exp\{-E(\mathbf{w})\} d\mathbf{w} = \exp\{-E(\mathbf{t})\} (2\pi)^{M/2} |\mathbf{\Sigma}|^{1/2}.$$
(132)
Considering this as a function of $\mathbf{t}$, we see from (132) that we only need to deal with the factor $\exp\{-E(\mathbf{t})\}$. Using $\mathbf{m} = \beta \mathbf{\Sigma} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}$, $\Sigma = (\mathbf{A} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi})^{-1}$, (C.7) and $\mathbf{C} = \beta^{-1} \mathbf{I} + \mathbf{\Phi} \mathbf{A}^{-1} \mathbf{\Phi}^{\mathrm{T}}$, we can re-write
(131) as follows
$$E(\mathbf{t}) = \frac{1}{2} (\beta \mathbf{t}^{\mathrm{T}} \mathbf{t} - \mathbf{m}^{\mathrm{T}} \mathbf{\Sigma}^{-1} \mathbf{m})$$
$$= \frac{1}{2} (\beta \mathbf{t}^{\mathrm{T}} \mathbf{t} - \beta \mathbf{t}^{\mathrm{T}} \mathbf{\Phi} \mathbf{\Sigma} \mathbf{\Sigma}^{-1} \mathbf{\Sigma} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t} \beta)$$
$$= \frac{1}{2} \mathbf{t}^{\mathrm{T}} (\beta \mathbf{I} - \beta \mathbf{\Phi} \mathbf{\Sigma} \mathbf{\Phi}^{\mathrm{T}} \beta) \mathbf{t}$$
$$= \frac{1}{2} \mathbf{t}^{\mathrm{T}} (\beta \mathbf{I} - \beta \mathbf{\Phi} (\mathbf{A} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi})^{-1} \mathbf{\Phi}^{\mathrm{T}} \beta) \mathbf{t}$$
$$= \frac{1}{2} \mathbf{t}^{\mathrm{T}} (\beta^{-1} \mathbf{I} + \mathbf{\Phi} \mathbf{A}^{-1} \mathbf{\Phi}^{\mathrm{T}})^{-1} \mathbf{t}$$
$$= \frac{1}{2} \mathbf{t}^{\mathrm{T}} \mathbf{C}^{-1} \mathbf{t}.$$
This gives us the last term on the r.h.s. of (7.85); the two preceding terms are given implicitly, as they form the normalization constant of the Gaussian marginal distribution $p(\mathbf{t}|\mathbf{X}, \boldsymbol{\alpha}, \beta)$.
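A short numerical check (my addition): the completed-square form of the log marginal likelihood, built from (129)-(132), should agree with the direct expression $-\frac{1}{2}\{N\ln(2\pi)+\ln|\mathbf{C}|+\mathbf{t}^{\mathrm{T}}\mathbf{C}^{-1}\mathbf{t}\}$. The design matrix, targets and hyperparameters below are random placeholders.
```python
import numpy as np

rng = np.random.default_rng(3)
N, M, beta = 8, 4, 2.0
alpha = rng.uniform(0.5, 2.0, size=M)
A = np.diag(alpha)
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)

# Completing-the-square route: Sigma, m and E(t) as in (130)-(131).
Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)
m = beta * Sigma @ Phi.T @ t
E_t = 0.5 * (beta * t @ t - m @ np.linalg.inv(Sigma) @ m)
logp_square = (0.5 * N * np.log(beta / (2 * np.pi))
               + 0.5 * np.sum(np.log(alpha))
               + 0.5 * np.linalg.slogdet(Sigma)[1]
               - E_t)

# Direct route via C = beta^{-1} I + Phi A^{-1} Phi^T.
C = np.eye(N) / beta + Phi @ np.linalg.inv(A) @ Phi.T
logp_direct = -0.5 * (N * np.log(2 * np.pi)
                      + np.linalg.slogdet(C)[1]
                      + t @ np.linalg.solve(C, t))

print(np.allclose(logp_square, logp_direct))   # True
```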
| 3,684
|
7
|
7.12
|
medium
|
Show that direct maximization of the log marginal likelihood $= -\frac{1}{2} \left\{ N \ln(2\pi) + \ln |\mathbf{C}| + \mathbf{t}^{\mathrm{T}} \mathbf{C}^{-1} \mathbf{t} \right\}$ for the regression relevance vector machine leads to the re-estimation equations $\alpha_i^{\text{new}} = \frac{\gamma_i}{m_i^2}$ and $(\beta^{\text{new}})^{-1} = \frac{\|\mathbf{t} - \mathbf{\Phi}\mathbf{m}\|^2}{N - \sum_{i} \gamma_i}$ where $\gamma_i$ is defined by $\gamma_i = 1 - \alpha_i \Sigma_{ii}$.
|
According to the previous problem, we can explicitly write down the log marginal likelihood in an alternative form:
$$\ln p(\mathbf{t}|\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}) = \frac{N}{2} \ln \boldsymbol{\beta} - \frac{N}{2} \ln 2\pi + \frac{1}{2} \ln |\boldsymbol{\Sigma}| + \frac{1}{2} \sum_{i=1}^{M} \ln \alpha_i - E(\mathbf{t})$$
We first derive:
$$\begin{split} \frac{dE(\mathbf{t})}{d\alpha_i} &= -\frac{1}{2} \frac{d}{d\alpha_i} (\mathbf{m}^T \mathbf{\Sigma}^{-1} \mathbf{m}) \\ &= -\frac{1}{2} \frac{d}{d\alpha_i} (\beta^2 \mathbf{t}^T \mathbf{\Phi} \mathbf{\Sigma} \mathbf{\Sigma}^{-1} \mathbf{\Sigma} \mathbf{\Phi}^T \mathbf{t}) \\ &= -\frac{1}{2} \frac{d}{d\alpha_i} (\beta^2 \mathbf{t}^T \mathbf{\Phi} \mathbf{\Sigma} \mathbf{\Phi}^T \mathbf{t}) \\ &= -\frac{1}{2} Tr \left[ \frac{d}{d\mathbf{\Sigma}^{-1}} (\beta^2 \mathbf{t}^T \mathbf{\Phi} \mathbf{\Sigma} \mathbf{\Phi}^T \mathbf{t}) \cdot \frac{d\mathbf{\Sigma}^{-1}}{d\alpha_i} \right] \\ &= \frac{1}{2} \beta^2 Tr \left[ \mathbf{\Sigma} (\mathbf{\Phi}^T \mathbf{t}) (\mathbf{\Phi}^T \mathbf{t})^T \mathbf{\Sigma} \cdot \mathbf{I}_i \right] = \frac{1}{2} m_{i}^2 \end{split}$$
In the last step, we have utilized the following equation:
$$\frac{d}{d\mathbf{X}}Tr(\mathbf{A}\mathbf{X}^{-1}\mathbf{B}) = -\mathbf{X}^{-T}\mathbf{A}^{T}\mathbf{B}^{T}\mathbf{X}^{-T}$$
Moreover, here $\mathbf{I}_i$ is a matrix whose elements are all zero except the $i$-th diagonal element, which equals 1. Then we utilize the matrix identity Eq (C.22) to derive:
$$\frac{d\ln|\mathbf{\Sigma}|}{d\alpha_i} = -\frac{d\ln|\mathbf{\Sigma}^{-1}|}{d\alpha_i}$$
$$= -Tr\Big[\mathbf{\Sigma}\frac{d}{d\alpha_i}(\mathbf{A} + \beta\mathbf{\Phi}^T\mathbf{\Phi})\Big]$$
$$= -\Sigma_{ii}$$
Therefore, we can obtain:
$$\frac{d\ln p}{d\alpha_i} = \frac{1}{2\alpha_i} - \frac{1}{2}m_i^2 - \frac{1}{2}\Sigma_{ii}$$
Set it to zero and obtain:
$$\alpha_i = \frac{1 - \alpha_i \Sigma_{ii}}{m_i^2} = \frac{\gamma_i}{m_i^2}$$
Then we calculate the derivatives of $\ln p$ with respect to $\beta$ beginning by:
$$\frac{d\ln|\mathbf{\Sigma}|}{d\beta} = -\frac{d\ln|\mathbf{\Sigma}^{-1}|}{d\beta}$$
$$= -Tr\left[\mathbf{\Sigma}\frac{d}{d\beta}(\mathbf{A} + \beta\mathbf{\Phi}^T\mathbf{\Phi})\right]$$
$$= -Tr\left[\mathbf{\Sigma}\mathbf{\Phi}^T\mathbf{\Phi}\right]$$
Then we continue:
$$\begin{split} \frac{dE(\mathbf{t})}{d\beta} &= \frac{1}{2}\mathbf{t}^T\mathbf{t} - \frac{1}{2}\frac{d}{d\beta}(\mathbf{m}^T\mathbf{\Sigma}^{-1}\mathbf{m}) \\ &= \frac{1}{2}\mathbf{t}^T\mathbf{t} - \frac{1}{2}\frac{d}{d\beta}(\beta^2\mathbf{t}^T\mathbf{\Phi}\mathbf{\Sigma}\mathbf{\Sigma}^{-1}\mathbf{\Sigma}\mathbf{\Phi}^T\mathbf{t}) \\ &= \frac{1}{2}\mathbf{t}^T\mathbf{t} - \frac{1}{2}\frac{d}{d\beta}(\beta^2\mathbf{t}^T\mathbf{\Phi}\mathbf{\Sigma}\mathbf{\Phi}^T\mathbf{t}) \\ &= \frac{1}{2}\mathbf{t}^T\mathbf{t} - \beta\mathbf{t}^T\mathbf{\Phi}\mathbf{\Sigma}\mathbf{\Phi}^T\mathbf{t} - \frac{1}{2}\beta^2\frac{d}{d\beta}(\mathbf{t}^T\mathbf{\Phi}\mathbf{\Sigma}\mathbf{\Phi}^T\mathbf{t}) \\ &= \frac{1}{2}\Big\{\mathbf{t}^T\mathbf{t} - 2\beta\mathbf{t}^T\mathbf{\Phi}\mathbf{\Sigma}\mathbf{\Phi}^T\mathbf{t} - \beta^2\frac{d}{d\beta}(\mathbf{t}^T\mathbf{\Phi}\mathbf{\Sigma}\mathbf{\Phi}^T\mathbf{t})\Big\} \\ &= \frac{1}{2}\Big\{\mathbf{t}^T\mathbf{t} - 2\mathbf{t}^T(\mathbf{\Phi}\mathbf{m}) - \beta^2\frac{d}{d\beta}(\mathbf{t}^T\mathbf{\Phi}\mathbf{\Sigma}\mathbf{\Phi}^T\mathbf{t})\Big\} \\ &= \frac{1}{2}\Big\{\mathbf{t}^T\mathbf{t} - 2\mathbf{t}^T(\mathbf{\Phi}\mathbf{m}) - \beta^2Tr\Big[\frac{d}{d\mathbf{\Sigma}^{-1}}(\mathbf{t}^T\mathbf{\Phi}\mathbf{\Sigma}\mathbf{\Phi}^T\mathbf{t}) \cdot \frac{d\mathbf{\Sigma}^{-1}}{d\beta}\Big]\Big\} \\ &= \frac{1}{2}\Big\{\mathbf{t}^T\mathbf{t} - 2\mathbf{t}^T(\mathbf{\Phi}\mathbf{m}) + \beta^2Tr\Big[\mathbf{\Sigma}(\mathbf{\Phi}^T\mathbf{t})(\mathbf{\Phi}^T\mathbf{t})^T\mathbf{\Sigma} \cdot \mathbf{\Phi}^T\mathbf{\Phi}\Big]\Big\} \\ &= \frac{1}{2}\Big\{\mathbf{t}^T\mathbf{t} - 2\mathbf{t}^T(\mathbf{\Phi}\mathbf{m}) + Tr\Big[\mathbf{m}\mathbf{m}^T \cdot \mathbf{\Phi}^T\mathbf{\Phi}\Big]\Big\} \\ &= \frac{1}{2}\Big\{\mathbf{t}^T\mathbf{t} - 2\mathbf{t}^T(\mathbf{\Phi}\mathbf{m}) + Tr\Big[\mathbf{\Phi}\mathbf{m}\mathbf{m}^T \cdot \mathbf{\Phi}^T\Big]\Big\} \\ &= \frac{1}{2}||\mathbf{t} - \mathbf{\Phi}\mathbf{m}||^2 \end{split}$$
Therefore, we have obtained:
$$\frac{d \ln p}{d \beta} = \frac{1}{2} \left( \frac{N}{\beta} - ||\mathbf{t} - \mathbf{\Phi} \mathbf{m}||^2 - Tr[\mathbf{\Sigma} \mathbf{\Phi}^T \mathbf{\Phi}] \right)$$
Using Eq $\Sigma = (\mathbf{A} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi})^{-1}$, we can obtain:
$$\Sigma \Phi^{T} \Phi = \Sigma \Phi^{T} \Phi + \beta^{-1} \Sigma \mathbf{A} - \beta^{-1} \Sigma \mathbf{A}$$
$$= \Sigma (\beta \Phi^{T} \Phi + \mathbf{A}) \beta^{-1} - \beta^{-1} \Sigma \mathbf{A}$$
$$= \mathbf{I} \beta^{-1} - \beta^{-1} \Sigma \mathbf{A}$$
$$= (\mathbf{I} - \Sigma \mathbf{A}) \beta^{-1}$$
Setting the derivative equal to zero, we can obtain:
$$\beta^{-1} = \frac{||\mathbf{t} - \mathbf{\Phi} \mathbf{m}||^2}{N - Tr(\mathbf{I} - \mathbf{\Sigma} \mathbf{A})} = \frac{||\mathbf{t} - \mathbf{\Phi} \mathbf{m}||^2}{N - \sum_i \gamma_i}$$
Just as required.
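For illustration (my addition, with a toy design matrix and targets invented for this sketch), the re-estimation equations just derived can be iterated as follows; the $\alpha_i$ of irrelevant basis functions grow large, which is the sparsity mechanism of the RVM.
```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 30, 6

# Toy data: only two of the six basis functions are actually relevant.
Phi = rng.normal(size=(N, M))
w_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0, 0.0])
t = Phi @ w_true + 0.1 * rng.normal(size=N)

alpha, beta = np.ones(M), 1.0
for _ in range(10):
    # Posterior over the weights: Sigma = (A + beta Phi^T Phi)^{-1}, m = beta Sigma Phi^T t.
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    m = beta * Sigma @ Phi.T @ t

    # Re-estimation: alpha_i = gamma_i / m_i^2, beta^{-1} = ||t - Phi m||^2 / (N - sum gamma).
    gamma = 1.0 - alpha * np.diag(Sigma)
    alpha = np.minimum(gamma / m**2, 1e12)    # cap pruned alphas for numerical stability
    beta = (N - gamma.sum()) / np.sum((t - Phi @ m) ** 2)

print(np.round(m, 2))       # posterior mean: weights of irrelevant basis functions shrink
print(np.round(alpha, 1))   # their alpha_i grow large, effectively pruning them
```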
| 5,244
|
7
|
7.13
|
medium
|
In the evidence framework for RVM regression, we obtained the re-estimation formulae $\alpha_i^{\text{new}} = \frac{\gamma_i}{m_i^2}$ and $(\beta^{\text{new}})^{-1} = \frac{\|\mathbf{t} - \mathbf{\Phi}\mathbf{m}\|^2}{N - \sum_{i} \gamma_i}$ by maximizing the marginal likelihood given by $= -\frac{1}{2} \left\{ N \ln(2\pi) + \ln |\mathbf{C}| + \mathbf{t}^{\mathrm{T}} \mathbf{C}^{-1} \mathbf{t} \right\}$. Extend this approach by inclusion of hyperpriors given by gamma distributions of the form (B.26) and obtain the corresponding re-estimation formulae for $\alpha$ and $\beta$ by maximizing the corresponding posterior probability $p(\mathbf{t}, \alpha, \beta | \mathbf{X})$ with respect to $\alpha$ and $\beta$ .
|
This problem is quite confusing. In my view, the posterior should be denoted as $p(\mathbf{w}|\mathbf{t}, \mathbf{X}, \{a_i, b_i\}, a_{\beta}, b_{\beta})$, where $a_{\beta}, b_{\beta}$ control the Gamma distribution of $\beta$, and $a_i, b_i$ control the Gamma distribution of $\alpha_i$. What we should do is maximize the marginal likelihood $p(\mathbf{t}|\mathbf{X}, \{a_i, b_i\}, a_{\beta}, b_{\beta})$ with respect to $\{a_i, b_i\}, a_{\beta}, b_{\beta}$. We no longer have a point estimate for the hyperparameters $\beta$ and $\alpha_i$; instead, we have a distribution over them, controlled by the hyperpriors, i.e., by $\{a_i, b_i\}, a_{\beta}, b_{\beta}$.
| 688
|
7
|
7.14
|
medium
|
Derive the result $= \mathcal{N}\left(t|\mathbf{m}^{\mathrm{T}}\boldsymbol{\phi}(\mathbf{x}), \sigma^{2}(\mathbf{x})\right).$ for the predictive distribution in the relevance vector machine for regression. Show that the predictive variance is given by $\sigma^{2}(\mathbf{x}) = (\beta^{*})^{-1} + \phi(\mathbf{x})^{\mathrm{T}} \mathbf{\Sigma} \phi(\mathbf{x})$.
|
We begin by writing down $p(t|\mathbf{x}, \mathbf{w}, \beta^*)$ . Using Eq $p(t|\mathbf{x}, \mathbf{w}, \beta) = \mathcal{N}(t|y(\mathbf{x}), \beta^{-1})$ and Eq $y(\mathbf{x}) = \sum_{i=1}^{M} w_i \phi_i(\mathbf{x}) = \mathbf{w}^{\mathrm{T}} \phi(\mathbf{x})$, we can obtain:
$$p(t|\mathbf{x}, \mathbf{w}, \beta^*) = \mathcal{N}(t|\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x}), (\beta^*)^{-1})$$
Then we write down $p(\mathbf{w}|\mathbf{X},\mathbf{t},\alpha^*,\beta^*)$ . Using Eq $p(\mathbf{w}|\mathbf{t}, \mathbf{X}, \boldsymbol{\alpha}, \beta) = \mathcal{N}(\mathbf{w}|\mathbf{m}, \boldsymbol{\Sigma})$, $\mathbf{m} = \beta \mathbf{\Sigma} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}$ and $\Sigma = (\mathbf{A} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi})^{-1}$, we can obtain:
$$p(\mathbf{w}|\mathbf{X}, \mathbf{t}, \alpha^*, \beta^*) = \mathcal{N}(\mathbf{w}|\mathbf{m}, \Sigma)$$
Where **m** and $\Sigma$ are evaluated using Eq $\mathbf{m} = \beta \mathbf{\Sigma} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}$ and $\Sigma = (\mathbf{A} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi})^{-1}$ given $\alpha = \alpha^*$ and $\beta = \beta^*$ . Then we marginalize over $\mathbf{w}$ to obtain the predictive distribution:
$$p(t|\mathbf{x}, \mathbf{X}, \mathbf{t}, \alpha^*, \beta^*) = \int \mathcal{N}(t|\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x}), (\beta^*)^{-1}) \mathcal{N}(\mathbf{w}|\mathbf{m}, \boldsymbol{\Sigma}) d\mathbf{w}$$
$$= \int \mathcal{N}(t|\boldsymbol{\phi}(\mathbf{x})^T \mathbf{w}, (\beta^*)^{-1}) \mathcal{N}(\mathbf{w}|\mathbf{m}, \boldsymbol{\Sigma}) d\mathbf{w}$$
Using Eq (2.113)-(2.117), we can obtain:
$$p(t|\mathbf{x}, \mathbf{X}, \mathbf{t}, \alpha^*, \beta^*) = \mathcal{N}(\mu, \sigma^2)$$
Where we have defined:
$$\mu = \mathbf{m}^T \boldsymbol{\phi}(\mathbf{x})$$
And
$$\sigma^2 = (\beta^*)^{-1} + \phi(\mathbf{x})^T \mathbf{\Sigma} \phi(\mathbf{x})$$
Just as required.
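As a sanity check of the predictive moments (my addition, with made-up posterior quantities $\mathbf{m}$, $\boldsymbol{\Sigma}$ and $\beta^*$), we can compare the closed-form mean and variance with Monte Carlo samples drawn from the two-stage generative process.
```python
import numpy as np

rng = np.random.default_rng(5)
M, beta_star = 5, 4.0

# Made-up posterior quantities and a test feature vector.
L = rng.normal(size=(M, M))
Sigma = 0.1 * L @ L.T
m = rng.normal(size=M)
phi_x = rng.normal(size=M)

mu = m @ phi_x                                    # predictive mean
var = 1.0 / beta_star + phi_x @ Sigma @ phi_x     # predictive variance

# Monte Carlo check: w ~ N(m, Sigma), then t ~ N(w^T phi(x), 1/beta*).
w = rng.multivariate_normal(m, Sigma, size=200_000)
t = w @ phi_x + rng.normal(scale=np.sqrt(1.0 / beta_star), size=200_000)
print(np.isclose(t.mean(), mu, atol=0.05), np.isclose(t.var(), var, rtol=0.05))
```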
| 2,039
|
7
|
7.15
|
medium
|
Using the results $|\mathbf{C}| = |\mathbf{C}_{-i}||1 + \alpha_i^{-1} \boldsymbol{\varphi}_i^{\mathrm{T}} \mathbf{C}_{-i}^{-1} \boldsymbol{\varphi}_i|$ and $\mathbf{C}^{-1} = \mathbf{C}_{-i}^{-1} - \frac{\mathbf{C}_{-i}^{-1} \boldsymbol{\varphi}_{i} \boldsymbol{\varphi}_{i}^{\mathrm{T}} \mathbf{C}_{-i}^{-1}}{\alpha_{i} + \boldsymbol{\varphi}_{i}^{\mathrm{T}} \mathbf{C}_{-i}^{-1} \boldsymbol{\varphi}_{i}}.$, show that the marginal likelihood $= -\frac{1}{2} \left\{ N \ln(2\pi) + \ln |\mathbf{C}| + \mathbf{t}^{\mathrm{T}} \mathbf{C}^{-1} \mathbf{t} \right\}$ can be written in the form $L(\alpha) = L(\alpha_{-i}) + \lambda(\alpha_i)$, where $\lambda(\alpha_n)$ is defined by $\lambda(\alpha_i) = \frac{1}{2} \left[ \ln \alpha_i - \ln (\alpha_i + s_i) + \frac{q_i^2}{\alpha_i + s_i} \right]$ and the sparsity and quality factors are defined by $s_i = \varphi_i^{\mathrm{T}} \mathbf{C}_{-i}^{-1} \varphi_i$ and $q_i = \boldsymbol{\varphi}_i^{\mathrm{T}} \mathbf{C}_{-i}^{-1} \mathbf{t}.$, respectively.
|
We just follow the hint.
$$\begin{split} L(\pmb{\alpha}) &= -\frac{1}{2} \{ N \ln 2\pi + \ln |\mathbf{C}| + \mathbf{t}^T \mathbf{C}^{-1} \mathbf{t} \} \\ &= -\frac{1}{2} \Big\{ N \ln 2\pi + \ln |\mathbf{C}_{-i}| + \ln |1 + \alpha_i^{-1} \pmb{\varphi}_i^T \mathbf{C}_{-i}^{-1} \pmb{\varphi}_i | \\ &+ \mathbf{t}^T (\mathbf{C}_{-i}^{-1} - \frac{\mathbf{C}_{-i}^{-1} \pmb{\varphi}_i \pmb{\varphi}_i^T \mathbf{C}_{-i}^{-1}}{\alpha_i + \pmb{\varphi}_i^T \mathbf{C}_{-i}^{-1} \pmb{\varphi}_i}) \mathbf{t} \Big\} \\ &= L(\pmb{\alpha}_{-i}) - \frac{1}{2} \ln |1 + \alpha_i^{-1} \pmb{\varphi}_i^T \mathbf{C}_{-i}^{-1} \pmb{\varphi}_i| + \frac{1}{2} \mathbf{t}^T \frac{\mathbf{C}_{-i}^{-1} \pmb{\varphi}_i \pmb{\varphi}_i^T \mathbf{C}_{-i}^{-1}}{\alpha_i + \pmb{\varphi}_i^T \mathbf{C}_{-i}^{-1} \pmb{\varphi}_i} \mathbf{t} \\ &= L(\pmb{\alpha}_{-i}) - \frac{1}{2} \ln |1 + \alpha_i^{-1} s_i| + \frac{1}{2} \frac{q_i^2}{\alpha_i + s_i} \\ &= L(\pmb{\alpha}_{-i}) - \frac{1}{2} \ln \frac{\alpha_i + s_i}{\alpha_i} + \frac{1}{2} \frac{q_i^2}{\alpha_i + s_i} \\ &= L(\pmb{\alpha}_{-i}) + \frac{1}{2} \Big[ \ln \alpha_i - \ln(\alpha_i + s_i) + \frac{q_i^2}{\alpha_i + s_i} \Big] = L(\pmb{\alpha}_{-i}) + \lambda(\alpha_i) \end{split}$$
Where we have defined $\lambda(\alpha_i)$ , $s_i$ and $q_i$ as shown in Eq (7.97)-(7.99).
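A numerical verification of this decomposition (my addition, using a random design matrix and targets): removing basis function $i$ corresponds to sending $\alpha_i \to \infty$, i.e. $\mathbf{C}_{-i} = \mathbf{C} - \alpha_i^{-1}\boldsymbol{\varphi}_i\boldsymbol{\varphi}_i^T$.
```python
import numpy as np

def log_marginal(C, t):
    """L(alpha) = -1/2 { N ln(2*pi) + ln|C| + t^T C^{-1} t }."""
    N = len(t)
    return -0.5 * (N * np.log(2 * np.pi)
                   + np.linalg.slogdet(C)[1]
                   + t @ np.linalg.solve(C, t))

rng = np.random.default_rng(6)
N, M, beta = 10, 4, 3.0
alpha = rng.uniform(0.5, 2.0, size=M)
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)

i = 2
varphi_i = Phi[:, i]
C = np.eye(N) / beta + Phi @ np.diag(1.0 / alpha) @ Phi.T
C_minus = C - np.outer(varphi_i, varphi_i) / alpha[i]     # basis function i removed

s_i = varphi_i @ np.linalg.solve(C_minus, varphi_i)       # sparsity factor
q_i = varphi_i @ np.linalg.solve(C_minus, t)              # quality factor
lam = 0.5 * (np.log(alpha[i]) - np.log(alpha[i] + s_i) + q_i**2 / (alpha[i] + s_i))

print(np.allclose(log_marginal(C, t), log_marginal(C_minus, t) + lam))   # True
```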
| 1,318
|
7
|
7.16
|
easy
|
By taking the second derivative of the log marginal likelihood $\lambda(\alpha_i) = \frac{1}{2} \left[ \ln \alpha_i - \ln (\alpha_i + s_i) + \frac{q_i^2}{\alpha_i + s_i} \right]$ for the regression RVM with respect to the hyperparameter $\alpha_i$ , show that the stationary point given by $\alpha_i = \frac{s_i^2}{q_i^2 - s_i}.$ is a maximum of the marginal likelihood.
|
We first calculate the first derivative of Eq(7.97) with respect to $\alpha_i$ :
$$\frac{\partial \lambda}{\partial \alpha_i} = \frac{1}{2} \left[ \frac{1}{\alpha_i} - \frac{1}{\alpha_i + s_i} - \frac{q_i^2}{(\alpha_i + s_i)^2} \right]$$
Then we calculate the second derivative:
$$\frac{\partial^2 \lambda}{\partial \alpha_i^2} = \frac{1}{2} \left[ -\frac{1}{\alpha_i^2} + \frac{1}{(\alpha_i + s_i)^2} + \frac{2q_i^2}{(\alpha_i + s_i)^3} \right]$$
Next we aim to prove that when $\alpha_i$ is given by Eq $\alpha_i = \frac{s_i^2}{q_i^2 - s_i}.$, i.e., setting the first derivative equal to 0, the second derivative (i.e., the expression above) is negative. First we can obtain:
$$\alpha_i + s_i = \frac{s_i^2}{q_i^2 - s_i} + s_i = \frac{s_i q_i^2}{q_i^2 - s_i}$$
Therefore, substituting $\alpha_i + s_i$ and $\alpha_i$ into the second derivative, we can obtain:
$$\begin{split} \frac{\partial^2 \lambda}{\partial \alpha_i^2} &= \frac{1}{2} \left[ -\frac{(q_i^2 - s_i)^2}{s_i^4} + \frac{(q_i^2 - s_i)^2}{s_i^2 q_i^4} + \frac{2q_i^2 (q_i^2 - s_i)^3}{s_i^3 q_i^6} \right] \\ &= \frac{1}{2} \left[ -\frac{q_i^4 (q_i^2 - s_i)^2}{q_i^4 s_i^4} + \frac{s_i^2 (q_i^2 - s_i)^2}{s_i^4 q_i^4} + \frac{2s_i (q_i^2 - s_i)^3}{s_i^4 q_i^4} \right] \\ &= \frac{1}{2} \frac{(q_i^2 - s_i)^2}{q_i^4 s_i^4} \left[ -q_i^4 + s_i^2 + 2s_i (q_i^2 - s_i) \right] \\ &= \frac{1}{2} \frac{(q_i^2 - s_i)^2}{q_i^4 s_i^4} \left[ -(q_i^2 - s_i)^2 \right] \\ &= -\frac{1}{2} \frac{(q_i^2 - s_i)^4}{q_i^4 s_i^4} < 0 \end{split}$$
Just as required.
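The same calculation can be checked symbolically (my addition, using sympy): the first derivative vanishes at $\alpha_i = s_i^2/(q_i^2 - s_i)$ and the second derivative there equals $-(q_i^2 - s_i)^4/(2q_i^4 s_i^4)$.
```python
import sympy as sp

a, s, q = sp.symbols('alpha s q', positive=True)
lam = sp.Rational(1, 2) * (sp.log(a) - sp.log(a + s) + q**2 / (a + s))

stationary = s**2 / (q**2 - s)
d1 = sp.diff(lam, a)
d2 = sp.diff(lam, a, 2)

# The first derivative vanishes at the stationary point ...
print(sp.simplify(d1.subs(a, stationary)))                 # 0
# ... and the second derivative there matches -(q^2 - s)^4 / (2 q^4 s^4).
target = -(q**2 - s)**4 / (2 * q**4 * s**4)
print(sp.simplify(d2.subs(a, stationary) - target))        # 0
```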
| 1,537
|
7
|
7.17
|
medium
|
Using $\Sigma = (\mathbf{A} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi})^{-1}$ and $\mathbf{C} = \beta^{-1} \mathbf{I} + \mathbf{\Phi} \mathbf{A}^{-1} \mathbf{\Phi}^{\mathrm{T}}.$, together with the matrix identity (C.7), show that the quantities $S_n$ and $Q_n$ defined by $Q_i = \boldsymbol{\varphi}_i^{\mathrm{T}} \mathbf{C}^{-1} \mathbf{t}$ and $S_i = \boldsymbol{\varphi}_i^{\mathrm{T}} \mathbf{C}^{-1} \boldsymbol{\varphi}_i.$ can be written in the form (7.106) and $S_i = \beta \boldsymbol{\varphi}_i^{\mathrm{T}} \boldsymbol{\varphi}_i - \beta^2 \boldsymbol{\varphi}_i^{\mathrm{T}} \boldsymbol{\Phi} \boldsymbol{\Sigma} \boldsymbol{\Phi}^{\mathrm{T}} \boldsymbol{\varphi}_i$.
|
We just follow the hint. According to Eq $Q_i = \boldsymbol{\varphi}_i^{\mathrm{T}} \mathbf{C}^{-1} \mathbf{t}$, Eq $\mathbf{C} = \beta^{-1} \mathbf{I} + \mathbf{\Phi} \mathbf{A}^{-1} \mathbf{\Phi}^{\mathrm{T}}.$ and matrix identity (C.7), we have:
$$Q_{i} = \boldsymbol{\varphi}_{i}^{T} \mathbf{C}^{-1} \mathbf{t}$$
$$= \boldsymbol{\varphi}_{i}^{T} (\boldsymbol{\beta}^{-1} \mathbf{I} + \boldsymbol{\Phi} \mathbf{A}^{-1} \boldsymbol{\Phi}^{T})^{-1} \mathbf{t}$$
$$= \boldsymbol{\varphi}_{i}^{T} (\boldsymbol{\beta} \mathbf{I} - \boldsymbol{\beta} \mathbf{I} \boldsymbol{\Phi} (\mathbf{A} + \boldsymbol{\Phi}^{T} \boldsymbol{\beta} \mathbf{I} \boldsymbol{\Phi})^{-1} \boldsymbol{\Phi}^{T} \boldsymbol{\beta} \mathbf{I}) \mathbf{t}$$
$$= \boldsymbol{\varphi}_{i}^{T} (\boldsymbol{\beta} - \boldsymbol{\beta}^{2} \boldsymbol{\Phi} (\mathbf{A} + \boldsymbol{\beta} \boldsymbol{\Phi}^{T} \boldsymbol{\Phi})^{-1} \boldsymbol{\Phi}^{T}) \mathbf{t}$$
$$= \boldsymbol{\varphi}_{i}^{T} (\boldsymbol{\beta} - \boldsymbol{\beta}^{2} \boldsymbol{\Phi} \boldsymbol{\Sigma} \boldsymbol{\Phi}^{T}) \mathbf{t}$$
$$= \boldsymbol{\beta} \boldsymbol{\varphi}_{i}^{T} \mathbf{t} - \boldsymbol{\beta}^{2} \boldsymbol{\varphi}_{i}^{T} \boldsymbol{\Phi} \boldsymbol{\Sigma} \boldsymbol{\Phi}^{T} \mathbf{t}$$
Similarly, we can obtain:
$$S_{i} = \boldsymbol{\varphi}_{i}^{T} \mathbf{C}^{-1} \boldsymbol{\varphi}_{i}$$
$$= \boldsymbol{\varphi}_{i}^{T} (\beta - \beta^{2} \boldsymbol{\Phi} \boldsymbol{\Sigma} \boldsymbol{\Phi}^{T}) \boldsymbol{\varphi}_{i}$$
$$= \beta \boldsymbol{\varphi}_{i}^{T} \boldsymbol{\varphi}_{i} - \beta^{2} \boldsymbol{\varphi}_{i}^{T} \boldsymbol{\Phi} \boldsymbol{\Sigma} \boldsymbol{\Phi}^{T} \boldsymbol{\varphi}_{i}$$
Just as required.
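A quick numerical check of these identities (my addition, with random placeholder data):
```python
import numpy as np

rng = np.random.default_rng(7)
N, M, beta = 9, 5, 2.5
alpha = rng.uniform(0.5, 2.0, size=M)
A = np.diag(alpha)
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
varphi_i = Phi[:, 1]

C = np.eye(N) / beta + Phi @ np.linalg.inv(A) @ Phi.T
Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)

# Definitions Q_i = varphi_i^T C^{-1} t and S_i = varphi_i^T C^{-1} varphi_i ...
Q_def = varphi_i @ np.linalg.solve(C, t)
S_def = varphi_i @ np.linalg.solve(C, varphi_i)
# ... versus the forms obtained above with the Woodbury identity.
Q_alt = beta * varphi_i @ t - beta**2 * varphi_i @ Phi @ Sigma @ Phi.T @ t
S_alt = beta * varphi_i @ varphi_i - beta**2 * varphi_i @ Phi @ Sigma @ Phi.T @ varphi_i

print(np.allclose(Q_def, Q_alt), np.allclose(S_def, S_alt))   # True True
```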
| 1,771
|
7
|
7.18
|
easy
|
Show that the gradient vector and Hessian matrix of the log posterior distribution $= \sum_{n=1}^{N} \left\{ t_n \ln y_n + (1 - t_n) \ln(1 - y_n) \right\} - \frac{1}{2} \mathbf{w}^{\mathrm{T}} \mathbf{A} \mathbf{w} + \text{const} \quad$ for the classification relevance vector machine are given by $\nabla \ln p(\mathbf{w}|\mathbf{t}, \boldsymbol{\alpha}) = \boldsymbol{\Phi}^{\mathrm{T}}(\mathbf{t} - \mathbf{y}) - \mathbf{A}\mathbf{w}$ and $\nabla \nabla \ln p(\mathbf{w}|\mathbf{t}, \boldsymbol{\alpha}) = -(\boldsymbol{\Phi}^{\mathrm{T}} \mathbf{B} \boldsymbol{\Phi} + \mathbf{A})$.
|
We begin by differentiating the first term in Eq $= \sum_{n=1}^{N} \left\{ t_n \ln y_n + (1 - t_n) \ln(1 - y_n) \right\} - \frac{1}{2} \mathbf{w}^{\mathrm{T}} \mathbf{A} \mathbf{w} + \text{const} \quad$ with respect to $\mathbf{w}$ . This can be easily evaluated based on Eq (4.90)-(4.91).
$$\frac{\partial}{\partial \mathbf{w}} \left\{ \sum_{n=1}^{N} t_n \ln y_n + (1 - t_n) \ln(1 - y_n) \right\} = \sum_{n=1}^{N} (t_n - y_n) \boldsymbol{\phi}_n = \boldsymbol{\Phi}^T (\mathbf{t} - \mathbf{y})$$
The derivative of the second term, $-\frac{1}{2}\mathbf{w}^{\mathrm{T}}\mathbf{A}\mathbf{w}$, with respect to $\mathbf{w}$ is simply $-\mathbf{A}\mathbf{w}$. Therefore, the gradient of the log posterior with respect to $\mathbf{w}$ is:
$$\frac{\partial \ln p}{\partial \mathbf{w}} = \mathbf{\Phi}^T (\mathbf{t} - \mathbf{y}) - \mathbf{A}\mathbf{w}$$
For the Hessian matrix, we can first obtain:
$$\frac{\partial}{\partial \mathbf{w}} \left\{ \mathbf{\Phi}^{T} (\mathbf{t} - \mathbf{y}) \right\} = \sum_{n=1}^{N} \frac{\partial}{\partial \mathbf{w}} \left\{ (t_{n} - y_{n}) \boldsymbol{\phi}_{n} \right\}$$
$$= -\sum_{n=1}^{N} \frac{\partial}{\partial \mathbf{w}} \left\{ y_{n} \cdot \boldsymbol{\phi}_{n} \right\}$$
$$= -\sum_{n=1}^{N} \frac{\partial \sigma(\mathbf{w}^{T} \boldsymbol{\phi}_{n})}{\partial \mathbf{w}} \cdot \boldsymbol{\phi}_{n}^{T}$$
$$= -\sum_{n=1}^{N} \frac{\partial \sigma(a)}{\partial a} \cdot \frac{\partial a}{\partial \mathbf{w}} \cdot \boldsymbol{\phi}_{n}^{T}$$
Where we have defined $a = \mathbf{w}^T \boldsymbol{\phi}_n$ . Then we can utilize Eq $\frac{d\sigma}{da} = \sigma(1 - \sigma).$ to derive:
$$\frac{\partial}{\partial \mathbf{w}} \Big\{ \mathbf{\Phi}^T (\mathbf{t} - \mathbf{y}) \Big\} = -\sum_{n=1}^N \sigma (1 - \sigma) \cdot \boldsymbol{\phi}_n \cdot \boldsymbol{\phi}_n^T = -\mathbf{\Phi}^T \mathbf{B} \mathbf{\Phi}$$
Where **B** is a diagonal $N \times N$ matrix with elements $b_n = y_n(1-y_n)$ . Therefore, we can obtain the Hessian matrix:
$$\mathbf{H} = \frac{\partial}{\partial \mathbf{w}} \left\{ \frac{\partial \ln p}{\partial \mathbf{w}} \right\} = -(\mathbf{\Phi}^T \mathbf{B} \mathbf{\Phi} + \mathbf{A})$$
Just as required.
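These expressions can be verified against finite differences (my addition; the data, $\boldsymbol{\alpha}$ and $\mathbf{w}$ below are arbitrary):
```python
import numpy as np

rng = np.random.default_rng(8)
N, M = 20, 4
Phi = rng.normal(size=(N, M))
t = rng.integers(0, 2, size=N).astype(float)
A = np.diag(rng.uniform(0.5, 2.0, size=M))
w = rng.normal(size=M)

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def log_post(w):                      # log posterior, up to an additive constant
    y = sigmoid(Phi @ w)
    return np.sum(t * np.log(y) + (1 - t) * np.log(1 - y)) - 0.5 * w @ A @ w

def grad(w):                          # gradient derived above: Phi^T (t - y) - A w
    y = sigmoid(Phi @ w)
    return Phi.T @ (t - y) - A @ w

y = sigmoid(Phi @ w)
hess = -(Phi.T @ np.diag(y * (1 - y)) @ Phi + A)   # Hessian derived above

eps, I = 1e-6, np.eye(M)
grad_fd = np.array([(log_post(w + eps * e) - log_post(w - eps * e)) / (2 * eps) for e in I])
hess_fd = np.array([(grad(w + eps * e) - grad(w - eps * e)) / (2 * eps) for e in I])
print(np.allclose(grad(w), grad_fd, atol=1e-5), np.allclose(hess, hess_fd, atol=1e-4))
```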
| 2,459
|
7
|
7.19
|
medium
|
Verify that maximization of the approximate log marginal likelihood function $\simeq p(\mathbf{t}|\mathbf{w}^*)p(\mathbf{w}^*|\alpha)(2\pi)^{M/2}|\Sigma|^{1/2}.$ for the classification relevance vector machine leads to the result $\alpha_i^{\text{new}} = \frac{\gamma_i}{(w_i^{*})^2}$ for re-estimation of the hyperparameters.
|
We begin from Eq $\simeq p(\mathbf{t}|\mathbf{w}^*)p(\mathbf{w}^*|\alpha)(2\pi)^{M/2}|\Sigma|^{1/2}.$.
$$p(\mathbf{t}|\alpha) = p(\mathbf{t}|\mathbf{w}^*)p(\mathbf{w}^*|\alpha)(2\pi)^{M/2}|\mathbf{\Sigma}|^{1/2}$$
$$= \left[\prod_{n=1}^{N} p(t_n|x_n, \mathbf{w})\right] \left[\prod_{i=1}^{M} \mathcal{N}(w_i|0, \alpha_i^{-1})\right] (2\pi)^{M/2}|\mathbf{\Sigma}|^{1/2}\Big|_{\mathbf{w}=\mathbf{w}^*}$$
$$= \left[\prod_{n=1}^{N} p(t_n|x_n, \mathbf{w})\right] \cdot \mathcal{N}(\mathbf{w}|\mathbf{0}, \mathbf{A}^{-1}) \cdot (2\pi)^{M/2}|\mathbf{\Sigma}|^{1/2}\Big|_{\mathbf{w}=\mathbf{w}^*}$$
We further take logarithm for both sides.
$$\begin{split} \ln p(\mathbf{t}|\alpha) &= \left[ \left. \sum_{n=1}^{N} \ln p(t_n|x_n, \mathbf{w}) + \ln \mathcal{N}(\mathbf{w}|\mathbf{0}, \mathbf{A}^{-1}) + \frac{M}{2} \ln 2\pi + \frac{1}{2} \ln |\mathbf{\Sigma}| \right] \right|_{\mathbf{w} = \mathbf{w}^*} \\ &= \left[ \left. \sum_{n=1}^{N} \left[ t_n \ln y_n + (1 - t_n) \ln (1 - y_n) \right] - \frac{1}{2} \mathbf{w}^T \mathbf{A} \mathbf{w} + \frac{1}{2} \ln |\mathbf{A}| + \frac{1}{2} \ln |\mathbf{\Sigma}| + const \right] \right|_{\mathbf{w} = \mathbf{w}^*} \\ &= \left[ \left. \sum_{n=1}^{N} \left[ t_n \ln y_n + (1 - t_n) \ln (1 - y_n) \right] - \frac{1}{2} \mathbf{w}^T \mathbf{A} \mathbf{w} \right] + \left[ \frac{1}{2} \ln |\mathbf{\Sigma}| + \frac{1}{2} \ln |\mathbf{A}| + const \right] \right|_{\mathbf{w} = \mathbf{w}^*} \end{split}$$
Using the Chain rule, we can obtain:
$$\left. \frac{\partial \ln p(\mathbf{t}|\alpha)}{\partial \alpha_i} \right|_{\mathbf{w} = \mathbf{w}^*} = \left. \frac{\partial \ln p(\mathbf{t}|\alpha)}{\partial \mathbf{w}} \frac{\partial \mathbf{w}}{\partial \alpha_i} \right|_{\mathbf{w} = \mathbf{w}^*}$$
Observing Eq $= \sum_{n=1}^{N} \left\{ t_n \ln y_n + (1 - t_n) \ln(1 - y_n) \right\} - \frac{1}{2} \mathbf{w}^{\mathrm{T}} \mathbf{A} \mathbf{w} + \text{const} \quad$ and noting that $\nabla \ln p(\mathbf{w}|\mathbf{t}, \boldsymbol{\alpha}) = \boldsymbol{\Phi}^{\mathrm{T}}(\mathbf{t} - \mathbf{y}) - \mathbf{A}\mathbf{w}$ equals zero at $\mathbf{w}^*$, we can conclude that the first term on the right-hand side of $\ln p(\mathbf{t}|\alpha)$ has zero derivative with respect to $\mathbf{w}$ at $\mathbf{w}^*$. Therefore, we only need to focus on the second term:
$$\left. \frac{\partial \ln p(\mathbf{t}|\alpha)}{\partial \alpha_i} \right|_{\mathbf{w} = \mathbf{w}^*} = \frac{\partial}{\partial \alpha_i} \left[ \frac{1}{2} \ln |\mathbf{\Sigma}| + \frac{1}{2} \ln |\mathbf{A}| \right] \bigg|_{\mathbf{w} = \mathbf{w}^*}$$
It is rather easy to obtain:
$$\frac{\partial}{\partial \alpha_i} [\frac{1}{2} \ln |\mathbf{A}|] = \frac{1}{2} \frac{\partial}{\partial \alpha_i} \left[ \sum_i \ln \alpha_i \right] = \frac{1}{2\alpha_i}$$
Following the same procedure as in Prob. 7.12, we can obtain:
$$\frac{\partial}{\partial \alpha_i} \left[ \frac{1}{2} \ln |\mathbf{\Sigma}| \right] = -\frac{1}{2} \Sigma_{ii}$$
Therefore, we obtain:
$$\frac{\partial \ln p(\mathbf{t}|\alpha)}{\partial \alpha_i} = \frac{1}{2\alpha_i} - \frac{1}{2} \Sigma_{ii}$$
Note: here I reach a different conclusion from the main text. I have also verified my result in another way. You can write the prior as the product of $\mathcal{N}(w_i|0,\alpha_i^{-1})$ factors instead of $\mathcal{N}(\mathbf{w}|\mathbf{0},\mathbf{A}^{-1})$. In this form, since we know that:
$$\frac{\partial}{\partial \alpha_i} \sum_{i=1}^M \ln \mathcal{N}(w_i|0,\alpha_i^{-1}) = \frac{\partial}{\partial \alpha_i} (\frac{1}{2} \ln \alpha_i - \frac{\alpha_i}{2} w_i^2) = \frac{1}{2\alpha_i} - \frac{1}{2} (w_i^*)^2$$
The above expression can be used to replace the derivative of $-\frac{1}{2}\mathbf{w}^T\mathbf{A}\mathbf{w}+\frac{1}{2}\ln|\mathbf{A}|$. Since the derivative of the likelihood with respect to $\alpha_i$ is not zero at $\mathbf{w}^*$, $-\frac{1}{2}(w_i^{*})^2 + \frac{1}{2\alpha_i} - \frac{1}{2}\Sigma_{ii} = 0$ does not seem right in any case.
| 4,137
|
7
|
7.2
|
easy
|
Show that, if the 1 on the right-hand side of the constraint $t_n\left(\mathbf{w}^{\mathrm{T}}\boldsymbol{\phi}(\mathbf{x}_n) + b\right) \geqslant 1, \qquad n = 1, \dots, N.$ is replaced by some arbitrary constant $\gamma > 0$ , the solution for the maximum margin hyperplane is unchanged.
|
Suppose we have found $\mathbf{w}_0$ and $b_0$ such that all points satisfy Eq $t_n\left(\mathbf{w}^{\mathrm{T}}\boldsymbol{\phi}(\mathbf{x}_n) + b\right) \geqslant 1, \qquad n = 1, \dots, N.$ and simultaneously optimize Eq $\underset{\mathbf{w},b}{\operatorname{arg\,max}} \left\{ \frac{1}{\|\mathbf{w}\|} \min_{n} \left[ t_n \left( \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) + b \right) \right] \right\}$. The hyperplane determined by $\mathbf{w}_0$ and $b_0$ is the maximum-margin decision boundary. Now if the constraint in Eq $t_n\left(\mathbf{w}^{\mathrm{T}}\boldsymbol{\phi}(\mathbf{x}_n) + b\right) \geqslant 1, \qquad n = 1, \dots, N.$ becomes:
$$t_n(\mathbf{w}^T\phi(\mathbf{x}_n)+b) \ge \gamma$$
We can conclude that if we rescale the variables, $\mathbf{w}_0 \to \gamma \mathbf{w}_0$ and $b_0 \to \gamma b_0$, the constraints are still satisfied and Eq $\underset{\mathbf{w},b}{\operatorname{arg\,max}} \left\{ \frac{1}{\|\mathbf{w}\|} \min_{n} \left[ t_n \left( \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) + b \right) \right] \right\}$ is still optimized. In other words, if the right-hand side of the constraint changes from 1 to $\gamma$, the new maximum-margin solution is given by $\gamma \mathbf{w}_0$ and $\gamma b_0$. However, the decision boundary, and hence the minimum distance from the points to it, is unchanged.
| 1,413
|
7
|
7.3
|
medium
|
Show that, irrespective of the dimensionality of the data space, a data set consisting of just two data points, one from each class, is sufficient to determine the location of the maximum-margin hyperplane.
|
Suppose $\mathbf{x}_1$ belongs to class one, with target value $t_1 = 1$, and similarly $\mathbf{x}_2$ belongs to class two, with target value $t_2 = -1$. Since we only have two points, both must satisfy $t_i \cdot y(\mathbf{x}_i) = 1$, as shown in Fig. 7.1. Therefore, we have an equality-constrained optimization problem:
minimize
$$\frac{1}{2}||\mathbf{w}||^2$$
s.t. $\begin{cases} \mathbf{w}^T \boldsymbol{\phi}(\mathbf{x}_1) + b = 1 \\ \mathbf{w}^T \boldsymbol{\phi}(\mathbf{x}_2) + b = -1 \end{cases}$
This is a convex optimization problem, and a global optimum is guaranteed to exist.
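A minimal numerical illustration (my addition): for two points the minimum-norm solution can be written in closed form, and the resulting hyperplane is the perpendicular bisector of the segment joining the two points, whatever the dimensionality.
```python
import numpy as np

rng = np.random.default_rng(9)
x1, x2 = rng.normal(size=5), rng.normal(size=5)   # one point per class (t = +1, t = -1)

# Subtracting the two constraints gives w^T (x1 - x2) = 2; the smallest ||w||
# satisfying this is w proportional to (x1 - x2).
d = x1 - x2
w = 2 * d / (d @ d)
b = 1 - w @ x1

print(np.isclose(w @ x1 + b, 1.0), np.isclose(w @ x2 + b, -1.0))
# The margin 1/||w|| is half the distance between the points, so the
# maximum-margin hyperplane is the perpendicular bisector of the segment x1-x2.
print(np.isclose(1.0 / np.linalg.norm(w), 0.5 * np.linalg.norm(d)))
```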
| 641
|
7
|
7.4
|
medium
|
Show that the value $\rho$ of the margin for the maximum-margin hyperplane is given by
$$\frac{1}{\rho^2} = \sum_{n=1}^{N} a_n \tag{7.123}$$
where $\{a_n\}$ are given by maximizing $\widetilde{L}(\mathbf{a}) = \sum_{n=1}^{N} a_n - \frac{1}{2} \sum_{n=1}^{N} \sum_{m=1}^{N} a_n a_m t_n t_m k(\mathbf{x}_n, \mathbf{x}_m)$ subject to the constraints $a_n \geqslant 0, \qquad n = 1, \dots, N,$ and $\sum_{n=1}^{N} a_n t_n = 0.$.
|
Since we know that
$$\rho = \frac{1}{||\mathbf{w}||}$$
Therefore, we have:
$$\frac{1}{\rho^2} = ||\mathbf{w}||^2$$
In other words, we only need to prove that
$$||\mathbf{w}||^2 = \sum_{n=1}^N a_n$$
When we find the optimal solution, the second term on the right-hand side of Eq $L(\mathbf{w}, b, \mathbf{a}) = \frac{1}{2} \|\mathbf{w}\|^2 - \sum_{n=1}^{N} a_n \left\{ t_n(\mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) + b) - 1 \right\}$ vanishes. Based on Eq $\mathbf{w} = \sum_{n=1}^{N} a_n t_n \phi(\mathbf{x}_n)$ and Eq $\widetilde{L}(\mathbf{a}) = \sum_{n=1}^{N} a_n - \frac{1}{2} \sum_{n=1}^{N} \sum_{m=1}^{N} a_n a_m t_n t_m k(\mathbf{x}_n, \mathbf{x}_m)$, we also observe that the dual is given by:
$$\tilde{L}(\mathbf{a}) = \sum_{n=1}^{N} a_n - \frac{1}{2} ||\mathbf{w}||^2$$
Therefore, we have:
$$\frac{1}{2}||\mathbf{w}||^2 = L(\mathbf{w}, b, \mathbf{a}) = \tilde{L}(\mathbf{a}) = \sum_{n=1}^{N} a_n - \frac{1}{2}||\mathbf{w}||^2$$
Rearranging, we obtain the required result.
| 1,020
|
7
|
7.5
|
medium
|
Show that the values of $\rho$ and $\{a_n\}$ in the previous exercise also satisfy
$$\frac{1}{\rho^2} = 2\widetilde{L}(\mathbf{a}) \tag{7.124}$$
where $\widetilde{L}(\mathbf{a})$ is defined by $\widetilde{L}(\mathbf{a}) = \sum_{n=1}^{N} a_n - \frac{1}{2} \sum_{n=1}^{N} \sum_{m=1}^{N} a_n a_m t_n t_m k(\mathbf{x}_n, \mathbf{x}_m)$. Similarly, show that
$$\frac{1}{\rho^2} = \|\mathbf{w}\|^2. \tag{7.125}$$
|
Both results were already proved in the solution to the previous exercise.
| 56
|
7
|
7.6
|
easy
|
Consider the logistic regression model with a target variable $t \in \{-1, 1\}$ . If we define $p(t = 1|y) = \sigma(y)$ where $y(\mathbf{x})$ is given by $y(\mathbf{x}) = \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}) + b$, show that the negative log likelihood, with the addition of a quadratic regularization term, takes the form $\sum_{n=1}^{N} E_{LR}(y_n t_n) + \lambda \|\mathbf{w}\|^2.$.
|
If the target variable can only take values in $\{-1,1\}$ , and we know that
$$p(t=1|y) = \sigma(y)$$
We can obtain:
$$p(t = -1|y) = 1 - p(t = 1|y) = 1 - \sigma(y) = \sigma(-y)$$
Therefore, combining these two situations, we can derive:
$$p(t|y) = \sigma(yt)$$
Consequently, we can obtain the negative log likelihood:
$$-\ln p(\mathbf{D}) = -\ln \prod_{n=1}^N \sigma(y_n t_n) = -\sum_{n=1}^N \ln \sigma(y_n t_n) = \sum_{n=1}^N E_{LR}(y_n t_n)$$
Here **D** represents the dataset, i.e., $\mathbf{D} = \{(\mathbf{x}_n, t_n); n = 1, 2, ..., N\}$ , and $E_{LR}(yt)$ is given by Eq $E_{LR}(yt) = \ln(1 + \exp(-yt)).$. With the addition of a quadratic regularization, we obtain exactly Eq $\sum_{n=1}^{N} E_{LR}(y_n t_n) + \lambda \|\mathbf{w}\|^2.$.
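For completeness (my addition), a one-line numerical check that $-\ln\sigma(yt)$ coincides with $E_{LR}(yt) = \ln(1+\exp(-yt))$:
```python
import numpy as np

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
E_LR = lambda z: np.log1p(np.exp(-z))          # E_LR(yt) = ln(1 + exp(-yt))

rng = np.random.default_rng(10)
y = rng.normal(size=1000)                       # stand-in values of y(x)
t = rng.choice([-1.0, 1.0], size=1000)

# -ln p(t|y) = -ln sigma(yt) coincides with the logistic error E_LR(yt).
print(np.allclose(-np.log(sigmoid(y * t)), E_LR(y * t)))   # True
```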
| 769
|
7
|
7.7
|
easy
|
Consider the Lagrangian $- \sum_{n=1}^{N} a_n (\epsilon + \xi_n + y_n - t_n) - \sum_{n=1}^{N} \hat{a}_n (\epsilon + \hat{\xi}_n - y_n + t_n). \quad$ for the regression support vector machine. By setting the derivatives of the Lagrangian with respect to $\mathbf{w}$ , b, $\xi_n$ , and $\hat{\xi}_n$ to zero and then back substituting to eliminate the corresponding variables, show that the dual Lagrangian is given by $-\epsilon \sum_{n=1}^{N} (a_n + \widehat{a}_n) + \sum_{n=1}^{N} (a_n - \widehat{a}_n)t_n$.
|
The derivatives are easy to obtain. Our main task is to derive Eq $-\epsilon \sum_{n=1}^{N} (a_n + \widehat{a}_n) + \sum_{n=1}^{N} (a_n - \widehat{a}_n)t_n$
using Eq (7.57)-(7.60).
$$\begin{split} L &= C \sum_{n=1}^{N} (\xi_{n} + \widehat{\xi}_{n}) + \frac{1}{2} ||\mathbf{w}||^{2} - \sum_{n=1}^{N} (\mu_{n} \xi_{n} + \widehat{\mu}_{n} \widehat{\xi}_{n}) \\ &\quad - \sum_{n=1}^{N} a_{n} (\epsilon + \xi_{n} + y_{n} - t_{n}) - \sum_{n=1}^{N} \widehat{a}_{n} (\epsilon + \widehat{\xi}_{n} - y_{n} + t_{n}) \\ &= C \sum_{n=1}^{N} (\xi_{n} + \widehat{\xi}_{n}) + \frac{1}{2} ||\mathbf{w}||^{2} - \sum_{n=1}^{N} (a_{n} + \mu_{n}) \xi_{n} - \sum_{n=1}^{N} (\widehat{a}_{n} + \widehat{\mu}_{n}) \widehat{\xi}_{n} \\ &\quad - \sum_{n=1}^{N} a_{n} (\epsilon + y_{n} - t_{n}) - \sum_{n=1}^{N} \widehat{a}_{n} (\epsilon - y_{n} + t_{n}) \\ &= C \sum_{n=1}^{N} (\xi_{n} + \widehat{\xi}_{n}) + \frac{1}{2} ||\mathbf{w}||^{2} - \sum_{n=1}^{N} C \xi_{n} - \sum_{n=1}^{N} C \widehat{\xi}_{n} \\ &\quad - \sum_{n=1}^{N} (a_{n} + \widehat{a}_{n}) \epsilon - \sum_{n=1}^{N} (a_{n} - \widehat{a}_{n}) (y_{n} - t_{n}) \\ &= \frac{1}{2} ||\mathbf{w}||^{2} - \sum_{n=1}^{N} (a_{n} + \widehat{a}_{n}) \epsilon - \sum_{n=1}^{N} (a_{n} - \widehat{a}_{n}) (y_{n} - t_{n}) \\ &= \frac{1}{2} ||\mathbf{w}||^{2} - \sum_{n=1}^{N} (a_{n} - \widehat{a}_{n}) (\mathbf{w}^{T} \boldsymbol{\phi}(\mathbf{x}_{n}) + b) - \sum_{n=1}^{N} (a_{n} + \widehat{a}_{n}) \epsilon + \sum_{n=1}^{N} (a_{n} - \widehat{a}_{n}) t_{n} \\ &= \frac{1}{2} ||\mathbf{w}||^{2} - \sum_{n=1}^{N} (a_{n} - \widehat{a}_{n}) \mathbf{w}^{T} \boldsymbol{\phi}(\mathbf{x}_{n}) - \sum_{n=1}^{N} (a_{n} + \widehat{a}_{n}) \epsilon + \sum_{n=1}^{N} (a_{n} - \widehat{a}_{n}) t_{n} \\ &= \frac{1}{2} ||\mathbf{w}||^{2} - ||\mathbf{w}||^{2} - \sum_{n=1}^{N} (a_{n} + \widehat{a}_{n}) \epsilon + \sum_{n=1}^{N} (a_{n} - \widehat{a}_{n}) t_{n} \\ &= -\frac{1}{2} ||\mathbf{w}||^{2} - \epsilon \sum_{n=1}^{N} (a_{n} + \widehat{a}_{n}) + \sum_{n=1}^{N} (a_{n} - \widehat{a}_{n}) t_{n} \\ &= -\frac{1}{2} \sum_{n=1}^{N} \sum_{m=1}^{N} (a_{n} - \widehat{a}_{n})(a_{m} - \widehat{a}_{m}) k(\mathbf{x}_{n}, \mathbf{x}_{m}) - \epsilon \sum_{n=1}^{N} (a_{n} + \widehat{a}_{n}) + \sum_{n=1}^{N} (a_{n} - \widehat{a}_{n}) t_{n} \end{split}$$
where we have used $a_n + \mu_n = C$, $\widehat{a}_n + \widehat{\mu}_n = C$, $\sum_{n}(a_n - \widehat{a}_n) = 0$ and $\mathbf{w} = \sum_{n}(a_n - \widehat{a}_n)\boldsymbol{\phi}(\mathbf{x}_n)$.
Just as required.
| 2,163
|
7
|
7.8
|
easy
|
For the regression support vector machine considered in Section 7.1.4, show that all training data points for which $\xi_n > 0$ will have $a_n = C$ , and similarly all points for which $\hat{\xi}_n > 0$ will have $\hat{a}_n = C$ .
|
This follows directly from the KKT conditions $(C - a_n)\xi_n = 0$ and $(C - \widehat{a}_n)\widehat{\xi}_n = 0$: if $\xi_n > 0$, the first condition forces $a_n = C$, and if $\widehat{\xi}_n > 0$, the second forces $\widehat{a}_n = C$.
| 146
|
7
|
7.9
|
easy
|
Verify the results $\mathbf{m} = \beta \mathbf{\Sigma} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}$ and $\Sigma = (\mathbf{A} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi})^{-1}$ for the mean and covariance of the posterior distribution over weights in the regression RVM.
|
The prior is given by Eq $p(\mathbf{w}|\boldsymbol{\alpha}) = \prod_{i=1}^{M} \mathcal{N}(w_i|0, \alpha_i^{-1})$.
$$p(\mathbf{w}|\boldsymbol{\alpha}) = \prod_{i=1}^{M} \mathcal{N}(w_i|0, \alpha_i^{-1}) = \mathcal{N}(\mathbf{w}|\mathbf{0}, \mathbf{A}^{-1})$$
Where we have defined:
$$\mathbf{A} = diag(\alpha_i)$$
The likelihood is given by Eq $p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} p(t_n|\mathbf{x}_n, \mathbf{w}, \beta^{-1}).$.
$$p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} p(t_n|\mathbf{x}_n, \mathbf{w}, \beta^{-1})$$
$$= \prod_{n=1}^{N} \mathcal{N}(t_n|\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x}_n), \beta^{-1})$$
$$= \mathcal{N}(\mathbf{t}|\mathbf{\Phi}\mathbf{w}, \beta^{-1}\mathbf{I})$$
Where we have defined:
$$\mathbf{\Phi} = [\boldsymbol{\phi}(\mathbf{x}_1), \boldsymbol{\phi}(\mathbf{x}_2), ..., \boldsymbol{\phi}(\mathbf{x}_N)]^T$$
Our definitions of $\Phi$ and $\mathbf{A}$ are consistent with the main text. Therefore, according to Eq (2.113)-Eq $\Sigma = (\mathbf{\Lambda} + \mathbf{A}^{\mathrm{T}} \mathbf{L} \mathbf{A})^{-1}.$, we have:
$$p(\mathbf{w}|\mathbf{t}, \mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}) = \mathcal{N}(\mathbf{m}, \boldsymbol{\Sigma})$$
Where we have defined:
$$\mathbf{\Sigma} = (\mathbf{A} + \beta \mathbf{\Phi}^T \mathbf{\Phi})^{-1}$$
And
$$\mathbf{m} = \beta \mathbf{\Sigma} \mathbf{\Phi}^T \mathbf{t}$$
Just as required.
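As a numerical cross-check (my addition, with random placeholder data), the log of likelihood times prior should differ from $\ln\mathcal{N}(\mathbf{w}|\mathbf{m},\boldsymbol{\Sigma})$ only by a constant independent of $\mathbf{w}$:
```python
import numpy as np

rng = np.random.default_rng(11)
N, M, beta = 12, 4, 2.0
alpha = rng.uniform(0.5, 2.0, size=M)
A = np.diag(alpha)
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)

Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)
m = beta * Sigma @ Phi.T @ t

def log_joint(w):       # ln p(t|X,w,beta) + ln p(w|alpha), dropping w-independent constants
    return -0.5 * beta * np.sum((t - Phi @ w) ** 2) - 0.5 * w @ A @ w

def log_gauss(w):       # ln N(w|m, Sigma), dropping constants
    return -0.5 * (w - m) @ np.linalg.solve(Sigma, w - m)

# The difference is the same for every w, confirming the posterior mean and covariance.
ws = rng.normal(size=(5, M))
diffs = np.array([log_joint(w) - log_gauss(w) for w in ws])
print(np.allclose(diffs, diffs[0]))   # True
```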
## Problem 7.10&7.11 Solution
It is quite similar to the previous problem. We begin by writing down the prior:
$$p(\mathbf{w}|\boldsymbol{\alpha}) = \prod_{i=1}^{M} \mathcal{N}(w_i|0, \alpha_i^{-1}) = \mathcal{N}(\mathbf{w}|\mathbf{0}, \mathbf{A}^{-1})$$
Then we write down the likelihood:
$$p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} p(t_n|\mathbf{x}_n, \mathbf{w}, \beta^{-1})$$
$$= \prod_{n=1}^{N} \mathcal{N}(t_n|\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x}_n), \beta^{-1})$$
$$= \mathcal{N}(\mathbf{t}|\boldsymbol{\Phi}\mathbf{w}, \beta^{-1}\mathbf{I})$$
Since we know that:
$$p(\mathbf{t}|\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}) = \int p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \boldsymbol{\beta}) p(\mathbf{w}|\boldsymbol{\alpha}) d\mathbf{w}$$
First, as required by Prob. 7.10, we solve it by completing the square. We begin by writing down the expression for $p(\mathbf{t}|\mathbf{X}, \boldsymbol{\alpha}, \beta)$:
$$p(\mathbf{t}|\mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}) = \int \mathcal{N}(\mathbf{w}|\mathbf{0}, \mathbf{A}^{-1}) \mathcal{N}(\mathbf{t}|\mathbf{\Phi}\mathbf{w}, \boldsymbol{\beta}^{-1}\mathbf{I}) d\mathbf{w}$$
$$= \left(\frac{\beta}{2\pi}\right)^{N/2} \cdot \frac{1}{(2\pi)^{M/2}} \cdot \prod_{i=1}^{M} \alpha_i^{1/2} \cdot \int \exp\{-E(\mathbf{w})\} d\mathbf{w}$$
Where we have defined:
$$E(\mathbf{w}) = \frac{1}{2}\mathbf{w}^T \mathbf{A} \mathbf{w} + \frac{\beta}{2}||\mathbf{t} - \mathbf{\Phi} \mathbf{w}||^2$$
We expand $E(\mathbf{w})$ with respect to $\mathbf{w}$ :
$$E(\mathbf{w}) = \frac{1}{2} \left\{ \mathbf{w}^T (\mathbf{A} + \beta \mathbf{\Phi}^T \mathbf{\Phi}) \mathbf{w} - 2\beta \mathbf{t}^T (\mathbf{\Phi} \mathbf{w}) + \beta \mathbf{t}^T \mathbf{t} \right\}$$
$$= \frac{1}{2} \left\{ \mathbf{w}^T \mathbf{\Sigma}^{-1} \mathbf{w} - 2\mathbf{m}^T \mathbf{\Sigma}^{-1} \mathbf{w} + \beta \mathbf{t}^T \mathbf{t} \right\}$$
$$= \frac{1}{2} \left\{ (\mathbf{w} - \mathbf{m})^T \mathbf{\Sigma}^{-1} (\mathbf{w} - \mathbf{m}) + \beta \mathbf{t}^T \mathbf{t} - \mathbf{m}^T \mathbf{\Sigma}^{-1} \mathbf{m} \right\}$$
Where we have used Eq $\mathbf{m} = \beta \mathbf{\Sigma} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}$ and Eq $\Sigma = (\mathbf{A} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi})^{-1}$. Substituting $E(\mathbf{w})$ into the integral, we will obtain:
$$\begin{split} p(\mathbf{t}|\mathbf{X}, \boldsymbol{\alpha}, \beta) &= \left(\frac{\beta}{2\pi}\right)^{N/2} \cdot \frac{1}{(2\pi)^{M/2}} \cdot \prod_{i=1}^{M} \alpha_i^{1/2} \cdot \int \exp\{-E(\mathbf{w})\} \, d\mathbf{w} \\ &= \left(\frac{\beta}{2\pi}\right)^{N/2} \cdot \frac{1}{(2\pi)^{M/2}} \cdot \prod_{i=1}^{M} \alpha_i^{1/2} \cdot (2\pi)^{M/2} \cdot |\mathbf{\Sigma}|^{1/2} \exp\left\{-\frac{1}{2}(\beta \mathbf{t}^T \mathbf{t} - \mathbf{m}^T \mathbf{\Sigma}^{-1} \mathbf{m})\right\} \\ &= \left(\frac{\beta}{2\pi}\right)^{N/2} \cdot |\mathbf{\Sigma}|^{1/2} \cdot \prod_{i=1}^{M} \alpha_i^{1/2} \cdot \exp\left\{-\frac{1}{2}(\beta \mathbf{t}^T \mathbf{t} - \mathbf{m}^T \mathbf{\Sigma}^{-1} \mathbf{m})\right\} \\ &= \left(\frac{\beta}{2\pi}\right)^{N/2} \cdot |\mathbf{\Sigma}|^{1/2} \cdot \prod_{i=1}^{M} \alpha_i^{1/2} \cdot \exp\left\{-E(\mathbf{t})\right\} \end{split}$$
We further expand $E(\mathbf{t})$ :
$$\begin{split} E(\mathbf{t}) &= \frac{1}{2}(\beta \mathbf{t}^T \mathbf{t} - \mathbf{m}^T \mathbf{\Sigma}^{-1} \mathbf{m}) \\ &= \frac{1}{2}\left(\beta \mathbf{t}^T \mathbf{t} - (\beta \mathbf{\Sigma} \mathbf{\Phi}^T \mathbf{t})^T \mathbf{\Sigma}^{-1}(\beta \mathbf{\Sigma} \mathbf{\Phi}^T \mathbf{t})\right) \\ &= \frac{1}{2}\left(\beta \mathbf{t}^T \mathbf{t} - \beta^2 \mathbf{t}^T \mathbf{\Phi} \mathbf{\Sigma} \mathbf{\Sigma}^{-1} \mathbf{\Sigma} \mathbf{\Phi}^T \mathbf{t}\right) \\ &= \frac{1}{2}\left(\beta \mathbf{t}^T \mathbf{t} - \beta^2 \mathbf{t}^T \mathbf{\Phi} \mathbf{\Sigma} \mathbf{\Phi}^T \mathbf{t}\right) \\ &= \frac{1}{2} \mathbf{t}^T \left(\beta \mathbf{I} - \beta^2 \mathbf{\Phi} \mathbf{\Sigma} \mathbf{\Phi}^T\right) \mathbf{t} \\ &= \frac{1}{2} \mathbf{t}^T \left[ \beta \mathbf{I} - \beta \mathbf{\Phi} (\mathbf{A} + \beta \mathbf{\Phi}^T \mathbf{\Phi})^{-1} \mathbf{\Phi}^T \beta \right] \mathbf{t} \\ &= \frac{1}{2} \mathbf{t}^T (\beta^{-1} \mathbf{I} + \mathbf{\Phi} \mathbf{A}^{-1} \mathbf{\Phi}^T)^{-1} \mathbf{t} = \frac{1}{2} \mathbf{t}^T \mathbf{C}^{-1} \mathbf{t} \end{split}$$
Note that in the last step we have used the matrix identity Eq (C.7). Therefore, since the distribution is Gaussian and the exponential term is given by $E(\mathbf{t})$, we can easily write down Eq $= -\frac{1}{2} \left\{ N \ln(2\pi) + \ln |\mathbf{C}| + \mathbf{t}^{\mathrm{T}} \mathbf{C}^{-1} \mathbf{t} \right\}$ once the normalization constants are taken into account.
Moreover, as required by Prob. 7.11, the evaluation of the integral can also be performed directly using Eq (2.113)-Eq (2.117).
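The matrix identity used in the last step can also be checked numerically; the sketch below (with arbitrary random $\mathbf{\Phi}$, $\boldsymbol{\alpha}$ and $\beta$ of my own choosing) verifies that $\beta\mathbf{I} - \beta\mathbf{\Phi}(\mathbf{A}+\beta\mathbf{\Phi}^T\mathbf{\Phi})^{-1}\mathbf{\Phi}^T\beta$ equals $(\beta^{-1}\mathbf{I}+\mathbf{\Phi}\mathbf{A}^{-1}\mathbf{\Phi}^T)^{-1} = \mathbf{C}^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 6, 3
Phi = rng.normal(size=(N, M))
A = np.diag(rng.uniform(0.5, 2.0, size=M))   # A = diag(alpha_i)
beta = 4.0

# Left-hand side of the identity used in the last step of E(t).
lhs = beta * np.eye(N) - beta * Phi @ np.linalg.inv(A + beta * Phi.T @ Phi) @ Phi.T * beta
# C = beta^{-1} I + Phi A^{-1} Phi^T, so the right-hand side is C^{-1}.
C = np.eye(N) / beta + Phi @ np.linalg.inv(A) @ Phi.T
print(np.allclose(lhs, np.linalg.inv(C)))    # True
```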
| 6,386
|
8
|
8.1
|
easy
|
By marginalizing out the variables in order, show that the representation $p(\mathbf{x}) = \prod_{k=1}^{K} p(x_k | \mathbf{pa}_k)$ for the joint distribution of a directed graph is correctly normalized, provided each of the conditional distributions is normalized.
|
We are required to prove:
$$\int_{\mathbf{x}} p(\mathbf{x}) d\mathbf{x} = \int_{\mathbf{x}} \prod_{k=1}^{K} p(x_k | pa_k) d\mathbf{x} = 1$$
Here we adopt the same assumption as in the main text: no arrows lead from a higher-numbered node to a lower-numbered node. According to Eq(8.5), we can write:
$$\begin{aligned} \int_{\mathbf{x}} p(\mathbf{x}) d\mathbf{x} &= \int_{\mathbf{x}} \prod_{k=1}^{K} p(x_k | pa_k) \, d\mathbf{x} \\ &= \int_{\mathbf{x}} p(x_K | pa_K) \prod_{k=1}^{K-1} p(x_k | pa_k) \, d\mathbf{x} \\ &= \int_{x_1, x_2, ..., x_{K-1}} \left[ \int_{x_K} p(x_K | pa_K) \prod_{k=1}^{K-1} p(x_k | pa_k) \, dx_K \right] dx_1 \cdots dx_{K-1} \\ &= \int_{x_1, x_2, ..., x_{K-1}} \left[ \prod_{k=1}^{K-1} p(x_k | pa_k) \int_{x_K} p(x_K | pa_K) \, dx_K \right] dx_1 \cdots dx_{K-1} \\ &= \int_{x_1, x_2, ..., x_{K-1}} \prod_{k=1}^{K-1} p(x_k | pa_k) \, dx_1 \cdots dx_{K-1} \end{aligned}$$
Note that from the third line to the fourth line we have used the fact that $x_1, x_2, ..., x_{K-1}$ do not depend on $x_K$, so the product over k = 1 to K-1 can be moved outside the integral with respect to $x_K$; from the fourth line to the fifth line we have used the fact that the conditional distribution $p(x_K|pa_K)$ is correctly normalized. Repeating this procedure K times integrates out all the variables in turn and leaves 1, just as required.
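As a concrete, purely illustrative finite example (my own, not from the original solution), the same argument can be checked numerically for three binary variables in a chain $x_1 \to x_2 \to x_3$: each conditional table is normalized, and the product summed over all joint states equals 1.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
p_x1 = rng.dirichlet(np.ones(2))               # p(x1)
p_x2_x1 = rng.dirichlet(np.ones(2), size=2)    # row x1 gives p(x2 | x1)
p_x3_x2 = rng.dirichlet(np.ones(2), size=2)    # row x2 gives p(x3 | x2)

total = sum(p_x1[x1] * p_x2_x1[x1, x2] * p_x3_x2[x2, x3]
            for x1, x2, x3 in itertools.product(range(2), repeat=3))
print(total)                                   # 1.0 up to floating-point error
```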
| 1,413
|
8
|
8.10
|
easy
|
Consider the directed graph shown in Figure 8.54 in which none of the variables is observed. Show that $a \perp \!\!\!\perp b \mid \emptyset$ . Suppose we now observe the variable d. Show that in general $a \not\perp \!\!\!\perp b \mid d$ .
|
By examining Fig.8.54, we can obtain:
$$p(a,b,c,d) = p(a)p(b)p(c|a,b)p(d|c)$$
Next, performing summation on both sides with respect to c and d, we obtain:
$$p(a,b) = p(a)p(b) \sum_{c} \sum_{d} p(c|a,b)p(d|c)$$
$$= p(a)p(b) \sum_{c} p(c|a,b) \left[ \sum_{d} p(d|c) \right]$$
$$= p(a)p(b) \sum_{c} p(c|a,b) \times 1$$
$$= p(a)p(b) \times 1$$
$$= p(a)p(b)$$
If a and b were independent conditioned on d, we would have:
$$p(a,b|d) = p(a|d)p(b|d)$$
Multiplying both sides by p(d) and using the product rule gives the equivalent condition:
$$p(a,b,d) = p(a,d)p(b|d) \tag{*}$$
In other words, it suffices to show that the expression above fails to hold in general. Recall that we have:
$$p(a,b,c,d) = p(a) p(b) p(c|a,b) p(d|c)$$
We perform summation on both sides with respect to c, yielding:
$$p(a,b,d) = p(a)p(b)\sum_{c}p(c|a,b)p(d|c)$$
Combining with (\*), conditional independence would require:
$$p(b|d) = \frac{p(a)\,p(b)}{p(a,d)} \sum_{c} p(c|a,b)\, p(d|c)$$
However, the right-hand side depends on a, b and d, while the left-hand side depends only on b and d. In general this equality cannot hold, and thus a and b are not independent conditioned on d.
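To make the "in general" concrete, the brute-force check below uses arbitrary conditional tables of my own choosing (not taken from the book) for the graph of Fig. 8.54: the marginal independence holds exactly, while the conditional independence given d fails.

```python
import itertools
import numpy as np

# Arbitrary illustrative CPTs for binary a, b, c, d.
p_a = np.array([0.6, 0.4])
p_b = np.array([0.7, 0.3])
p_c1_ab = np.array([[0.1, 0.8],        # p(c=1 | a, b), rows indexed by a
                    [0.7, 0.2]])
p_d1_c = np.array([0.1, 0.7])          # p(d=1 | c)

joint = np.zeros((2, 2, 2, 2))         # joint[a, b, c, d]
for a, b, c, d in itertools.product(range(2), repeat=4):
    pc = p_c1_ab[a, b] if c == 1 else 1 - p_c1_ab[a, b]
    pd = p_d1_c[c] if d == 1 else 1 - p_d1_c[c]
    joint[a, b, c, d] = p_a[a] * p_b[b] * pc * pd

p_ab = joint.sum(axis=(2, 3))
print(np.allclose(p_ab, np.outer(p_a, p_b)))        # True: a and b marginally independent

p_abd = joint.sum(axis=2)                           # p(a, b, d)
p_d = p_abd.sum(axis=(0, 1))
p_ab_d0 = p_abd[:, :, 0] / p_d[0]                   # p(a, b | d = 0)
print(np.allclose(p_ab_d0,
                  np.outer(p_ab_d0.sum(axis=1), p_ab_d0.sum(axis=0))))  # False for these tables
```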
| 1,154
|
8
|
8.11
|
medium
|
Consider the example of the car fuel system shown in Figure 8.21, and suppose that instead of observing the state of the fuel gauge G directly, the gauge is seen by the driver D who reports to us the reading on the gauge. This report is either that the gauge shows full D=1 or that it shows empty D=0. Our driver is a bit unreliable, as expressed through the following probabilities
$$p(D=1|G=1) = 0.9 \tag{8.105}$$
$$p(D=0|G=0) = 0.9. \tag{8.106}$$
Suppose that the driver tells us that the fuel gauge shows empty, in other words that we observe D=0. Evaluate the probability that the tank is empty given only this observation. Similarly, evaluate the corresponding probability given also the observation that the battery is flat, and note that this second probability is lower. Discuss the intuition behind this result, and relate the result to Figure 8.54.
|
This problem is quite straightforward, but it needs some patience. According to the Bayes' Theorem, we have:
$$p(F=0|D=0) = \frac{p(D=0|F=0)p(F=0)}{p(D=0)} \tag{*}$$
We will calculate each of the terms on the right-hand side. Let us begin with the denominator p(D = 0). According to the sum rule, we have:
$$p(D=0) = p(D=0,G=0) + p(D=0,G=1)$$
$$= p(D=0|G=0)p(G=0) + p(D=0|G=1)p(G=1)$$
$$= 0.9 \times 0.315 + (1-0.9) \times (1-0.315)$$
$$= 0.352$$
Where we have used Eq(8.30), Eq(8.105) and Eq(8.106). Note that the prior p(F = 0) appearing in the numerator equals 0.1, which can easily be read off from the main text above Eq(8.30). We now only need to calculate p(D = 0|F = 0). Similarly, according to the sum rule, we have:
$$\begin{split} p(D=0|F=0) &= \sum_{G=0,1} p(D=0,G|F=0) \\ &= \sum_{G=0,1} p(D=0|G,F=0) \, p(G|F=0) \\ &= \sum_{G=0,1} p(D=0|G) \, p(G|F=0) \\ &= 0.9 \times 0.81 + (1-0.9) \times (1-0.81) \\ &= 0.748 \end{split}$$
Several clarifications must be made here. First, from the second line to the third line, we simply eliminate the dependence on F=0 because we know that D only depends on G according to Eq(8.105) and Eq(8.106). Second, from the third line to the fourth line, we have used Eq(8.31), Eq(8.105) and Eq(8.106). Now, we substitute all of them back to (\*), yielding:
$$p(F=0|D=0) = \frac{p(D=0|F=0)p(F=0)}{p(D=0)} = \frac{0.748 \times 0.1}{0.352} = 0.2125$$
Next, we are required to calculate the probability conditioned on both D = 0 and B = 0. Similarly, we can write:
$$\begin{split} p(F=0|D=0,B=0) &= \frac{p(D=0,B=0,F=0)}{p(D=0,B=0)} \\ &= \frac{\sum_{G} p(D=0,B=0,F=0,G)}{\sum_{G} p(D=0,B=0,G)} \\ &= \frac{\sum_{G} p(B=0,F=0,G) p(D=0|B=0,F=0,G)}{\sum_{G} p(B=0,G) p(D=0|B=0,G)} \\ &= \frac{\sum_{G} p(B=0,F=0,G) p(D=0|G)}{\sum_{G} p(B=0,G) p(D=0|G)} \quad (**) \end{split}$$
We need to calculate p(B = 0, F = 0, G) and p(B = 0, G), where G = 0, 1.
We begin by calculating p(B = 0, F = 0, G = 0):
$$p(B=0,F=0,G=0) = p(G=0|B=0,F=0) \times p(B=0,F=0)$$
$$= p(G=0|B=0,F=0) \times p(B=0) \times p(F=0)$$
$$= (1-0.1) \times (1-0.9) \times (1-0.9)$$
$$= 0.009$$
Similarly, we can obtain p(B = 0, F = 0, G = 1) = 0.001. Next we calculate p(B = 0, G):
$$\begin{split} p(B=0,G=0) &= \sum_{F=0,1} p(B=0,G=0,F) \\ &= \sum_{F=0,1} p(G=0|B=0,F) \times p(B=0,F) \\ &= \sum_{F=0,1} p(G=0|B=0,F) \times p(B=0) \times p(F) \\ &= (1-0.1) \times (1-0.9) \times (1-0.9) + (1-0.2) \times (1-0.9) \times 0.9 \\ &= 0.081 \end{split}$$
Similarly, we can obtain p(B = 0, G = 1) = 0.019. We substitute them back into (\*\*), yielding:
$$\begin{split} p(F=0|D=0,B=0) &= \frac{\sum_G p(B=0,F=0,G) \, p(D=0|G)}{\sum_G p(B=0,G) \, p(D=0|G)} \\ &= \frac{0.009 \times 0.9 + 0.001 \times (1-0.9)}{0.081 \times 0.9 + 0.019 \times (1-0.9)} \\ &= 0.1096 \end{split}$$
Just as required. This result agrees with intuition: observing that the battery is flat partially "explains away" the empty gauge reading reported by the driver, so the probability that the tank is actually empty decreases. Moreover, by analogy to Fig. 8.54, nodes a and b in Fig. 8.54 represent B and F in our case, node c represents G, and node d represents D. You can use the d-separation criterion to verify the conditional independence properties.
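The entire calculation can be reproduced by brute-force enumeration of the joint distribution. The sketch below assumes the prior and conditional values of the fuel-system example in Section 8.2.1 (p(B=1) = p(F=1) = 0.9 and the gauge table with entries 0.8, 0.2, 0.2, 0.1), together with Eq (8.105)-(8.106) for the driver; the function names are my own.

```python
import itertools

# Priors and CPTs for the fuel-system example plus the unreliable driver.
p_B = {1: 0.9, 0: 0.1}
p_F = {1: 0.9, 0: 0.1}
p_G1_given_BF = {(1, 1): 0.8, (1, 0): 0.2, (0, 1): 0.2, (0, 0): 0.1}
p_D1_given_G = {1: 0.9, 0: 0.1}

def joint(b, f, g, d):
    """Joint probability p(B=b, F=f, G=g, D=d) from the factorization."""
    pg = p_G1_given_BF[(b, f)] if g == 1 else 1 - p_G1_given_BF[(b, f)]
    pd = p_D1_given_G[g] if d == 1 else 1 - p_D1_given_G[g]
    return p_B[b] * p_F[f] * pg * pd

def prob(**fixed):
    """Sum the joint over all states consistent with the fixed values."""
    total = 0.0
    for b, f, g, d in itertools.product([0, 1], repeat=4):
        state = dict(B=b, F=f, G=g, D=d)
        if all(state[k] == v for k, v in fixed.items()):
            total += joint(b, f, g, d)
    return total

print(prob(F=0, D=0) / prob(D=0))              # ~0.2125
print(prob(F=0, D=0, B=0) / prob(D=0, B=0))    # ~0.1096
```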
| 3,089
|
8
|
8.12
|
easy
|
Show that there are $2^{M(M-1)/2}$ distinct undirected graphs over a set of M distinct random variables. Draw the 8 possibilities for the case of M=3.
|
An intuitive solution is to construct a matrix $\mathbf{A}$ of size $M \times M$. If there is a link between node i and node j, the entry in the i-th row and j-th column of $\mathbf{A}$, i.e., $A_{i,j}$, equals 1; otherwise it equals 0. Since the graph is undirected, the matrix $\mathbf{A}$ is symmetric. Moreover, the elements on the diagonal are 0 by definition. Every undirected graph over the M nodes corresponds to exactly one such matrix $\mathbf{A}$, and vice versa, so the mapping is one-to-one.
In other words, we equivalently count the number of possible matrices $\mathbf{A}$ satisfying the following criteria: (i) each entry is either 0 or 1, (ii) the matrix is symmetric, and (iii) all of the entries on the diagonal are already determined (they all equal 0).
Using the symmetry, we only need to count the free entries in the strictly lower triangle of the matrix. In the first column there are (M-1) free entries, in the second column there are (M-2) free entries, and so on. Therefore, the total number of free entries is given by:
$$(M-1)+(M-2)+...+0=\frac{M(M-1)}{2}$$
Each of these free entries can take two values, 0 or 1. Therefore, the total number of such matrices, and hence of distinct undirected graphs, is $2^{M(M-1)/2}$. In the case of M = 3, there are 8 possible undirected graphs:

Figure 3: the undirected graph when M = 3
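The count is easy to confirm by brute force for small M; the throwaway sketch below (not part of the original argument) enumerates every assignment of the free lower-triangle entries.

```python
import itertools

def num_undirected_graphs(M):
    """Count undirected graphs on M labelled nodes by enumerating all
    0/1 assignments of the strictly lower triangle of the adjacency matrix."""
    edges = [(i, j) for i in range(M) for j in range(i)]   # M(M-1)/2 node pairs
    return sum(1 for _ in itertools.product([0, 1], repeat=len(edges)))

for M in range(1, 5):
    print(M, num_undirected_graphs(M), 2 ** (M * (M - 1) // 2))
# M = 3 gives 8, matching the figure above.
```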
| 1,357
|
8
|
8.13
|
easy
|
Consider the use of iterated conditional modes (ICM) to minimize the energy function given by $E(\mathbf{x}, \mathbf{y}) = h \sum_{i} x_i - \beta \sum_{\{i,j\}} x_i x_j - \eta \sum_{i} x_i y_i$. Write down an expression for the difference in the values of the energy associated with the two states of a particular variable $x_j$ , with all other variables held fixed, and show that it depends only on quantities that are local to $x_j$ in the graph.
|
It is straightforward. Suppose that $x_k$ is the target variable, whose state may be $\{-1, +1\}$, while all other variables are held fixed. According to Eq $E(\mathbf{x}, \mathbf{y}) = h \sum_{i} x_i - \beta \sum_{\{i,j\}} x_i x_j - \eta \sum_{i} x_i y_i$, we can obtain:
$$\begin{aligned} E(\mathbf{x}, \mathbf{y}) = &\; h \sum_{i \neq k} x_i - \beta \sum_{\{i,j\}:\, i,j \neq k} x_i x_j - \eta \sum_{i \neq k} x_i y_i \\ &+ h x_k - \beta \sum_{m} x_k x_m - \eta x_k y_k \end{aligned}$$
Note that we have written out the dependence of $E(\mathbf{x}, \mathbf{y})$ on $x_k$ explicitly in the second line. The $x_i x_j$ term in the first line excludes every pair $\{x_i, x_j\}$ in which one of the two variables is $x_k$; those pairs are accounted for by the $x_k x_m$ term in the second line, where $x_m$ runs over the neighbours of $x_k$. Since the first line does not depend on $x_k$, we can obtain:
$$E(\mathbf{x}, \mathbf{y})|_{x_k = 1} - E(\mathbf{x}, \mathbf{y})|_{x_k = -1} = 2h - 2\beta \sum_{m} x_m - 2\eta y_k$$
The difference therefore depends only on quantities local to $x_k$ in the graph: the bias term h, the neighbouring variables $x_m$, and the observed value $y_k$.
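A minimal sketch of how this local difference is used inside ICM on a pixel grid; the function names and parameter values are my own and purely illustrative.

```python
import numpy as np

def neighbours(i, j, shape):
    """4-connected neighbours of pixel (i, j)."""
    H, W = shape
    for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        if 0 <= i + di < H and 0 <= j + dj < W:
            yield i + di, j + dj

def delta_energy(x, y, i, j, h, beta, eta):
    """E(x with x_ij = +1) - E(x with x_ij = -1): only local terms enter."""
    s = sum(x[n] for n in neighbours(i, j, x.shape))
    return 2 * h - 2 * beta * s - 2 * eta * y[i, j]

def icm_sweep(x, y, h=0.0, beta=1.0, eta=2.1):
    """One ICM sweep: set each pixel to whichever state has lower energy."""
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            x[i, j] = -1 if delta_energy(x, y, i, j, h, beta, eta) > 0 else 1
    return x

y = np.sign(np.random.default_rng(3).normal(size=(8, 8))).astype(int)  # toy observation
x = icm_sweep(y.copy(), y)
print((x == y).mean())   # fraction of pixels agreeing with the (noise-free) observation
```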
| 1,162
|
8
|
8.14
|
easy
|
Consider a particular case of the energy function given by $E(\mathbf{x}, \mathbf{y}) = h \sum_{i} x_i - \beta \sum_{\{i,j\}} x_i x_j - \eta \sum_{i} x_i y_i$ in which the coefficients $\beta = h = 0$ . Show that the most probable configuration of the latent variables is given by $x_i = y_i$ for all i.
|
It is quite obvious. When h = 0, $\beta = 0$ , the energy function reduces to
$$E(\mathbf{x}, \mathbf{y}) = -\eta \sum_{i} x_i y_i$$
If there exists some index j for which $x_j \neq y_j$, then, since $x_j, y_j \in \{-1, +1\}$, the product $x_j y_j$ equals -1. By changing the sign of $x_j$ we can always increase $x_j y_j$ from -1 to 1 and thus decrease the energy function $E(\mathbf{x}, \mathbf{y})$ (assuming $\eta > 0$).
Therefore, given the observed binary pixels $y_i \in \{-1, +1\}$, where i = 1, 2, ..., D, the energy is minimized by setting $x_i = y_i$ for every i.
| 636
|
8
|
8.15
|
medium
|
Show that the joint distribution $p(x_{n-1}, x_n)$ for two neighbouring nodes in the graph shown in Figure 8.38 is given by an expression of the form $p(x_{n-1}, x_n) = \frac{1}{Z} \mu_{\alpha}(x_{n-1}) \psi_{n-1,n}(x_{n-1}, x_n) \mu_{\beta}(x_n).$.
|
This problem can be solved by analogy to Eq $p(\mathbf{x}) = \frac{1}{Z} \psi_{1,2}(x_1, x_2) \psi_{2,3}(x_2, x_3) \cdots \psi_{N-1,N}(x_{N-1}, x_N).$ - Eq(8.54). We begin by noticing:
$$p(x_{n-1},x_n) = \sum_{x_1} ... \sum_{x_{n-2}} \sum_{x_{n+1}} ... \sum_{x_N} p(\mathbf{x})$$
We also have:
$$p(\mathbf{x}) = \frac{1}{Z} \psi_{1,2}(x_1, x_2) \psi_{2,3}(x_2, x_3) \dots \psi_{N-1,N}(x_{N-1}, x_N)$$
By analogy to Eq(8.52), we can obtain:
$$\begin{aligned} p(x_{n-1},x_n) = \frac{1}{Z} &\left[ \sum_{x_{n-2}} \psi_{n-2,n-1}(x_{n-2},x_{n-1}) \dots \left[ \sum_{x_2} \psi_{2,3}(x_2,x_3) \left[ \sum_{x_1} \psi_{1,2}(x_1,x_2) \right] \right] \dots \right] \\ &\times \psi_{n-1,n}(x_{n-1},x_n) \\ &\times \left[ \sum_{x_{n+1}} \psi_{n,n+1}(x_n,x_{n+1}) \dots \left[ \sum_{x_N} \psi_{N-1,N}(x_{N-1},x_N) \right] \dots \right] \\ = \frac{1}{Z} &\times \mu_{\alpha}(x_{n-1}) \times \psi_{n-1,n}(x_{n-1},x_n) \times \mu_{\beta}(x_n) \end{aligned}$$
Just as required.
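For a small discrete chain the two messages can be computed explicitly and checked against brute-force marginalization; the potentials below are random, the variable names are my own, and the code is only an illustration of the result above.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 5, 3                                        # chain length, number of states
psi = rng.uniform(0.5, 2.0, size=(N - 1, K, K))    # psi[n][i, j] = psi_{n,n+1}(i, j)

# Brute-force joint: p(x) = (1/Z) * prod_n psi_n(x_n, x_{n+1}).
states = np.array(np.meshgrid(*([np.arange(K)] * N), indexing="ij")).reshape(N, -1).T
weights = np.array([np.prod([psi[n, s[n], s[n + 1]] for n in range(N - 1)])
                    for s in states])
Z = weights.sum()

# Messages for the pair (x_a, x_b) with a = 2, b = 3 (0-based node indices).
a, b = 2, 3
mu_alpha = np.ones(K)
for n in range(a):                     # forward messages up to node a
    mu_alpha = psi[n].T @ mu_alpha
mu_beta = np.ones(K)
for n in range(N - 2, b - 1, -1):      # backward messages down to node b
    mu_beta = psi[n] @ mu_beta

pair_msg = (mu_alpha[:, None] * psi[a] * mu_beta[None, :]) / Z

pair_bf = np.zeros((K, K))             # same pairwise marginal by direct summation
for s, w in zip(states, weights):
    pair_bf[s[a], s[b]] += w / Z

print(np.allclose(pair_msg, pair_bf))  # True
```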
| 939
|
8
|
8.16
|
medium
|
Consider the inference problem of evaluating $p(\mathbf{x}_n|\mathbf{x}_N)$ for the graph shown in Figure 8.38, for all nodes $n\in\{1,\ldots,N-1\}$ . Show that the message passing algorithm discussed in Section 8.4.1 can be used to solve this efficiently, and discuss which messages are modified and in what way.
|
We can simply obtain $p(x_N)$ using Eq(8.52) and Eq(8.54):
$$p(x_N) = \frac{1}{Z} \mu_{\alpha}(x_N) \tag{*}$$
According to Bayes' Theorem, we have:
$$p(x_n|x_N) = \frac{p(x_n, x_N)}{p(x_N)}$$
Therefore, now we only need to derive an expression for $p(x_n, x_N)$ , where n = 1, 2, ..., N - 1. We follow the same procedure as in the previous problem. Since we know that:
$$p(x_n, x_N) = \sum_{x_1} ... \sum_{x_{n-1}} \sum_{x_{n+1}} ... \sum_{x_{N-1}} p(\mathbf{x})$$
We can obtain:
$$p(x_{n},x_{N}) = \frac{1}{Z} \left[ \sum_{x_{n-1}} \psi_{n-1,n}(x_{n-1},x_{n}) \dots \left[ \sum_{x_{2}} \psi_{2,3}(x_{2},x_{3}) \left[ \sum_{x_{1}} \psi_{1,2}(x_{1},x_{2}) \right] \right] \dots \right] \times \left[ \sum_{x_{n+1}} \psi_{n,n+1}(x_{n},x_{n+1}) \dots \left[ \sum_{x_{N-1}} \psi_{N-2,N-1}(x_{N-2},x_{N-1}) \psi_{N-1,N}(x_{N-1},x_{N}) \right] \dots \right]$$
Note that in the second line, the summation term with respect to $x_{N-1}$ is the product of $\psi_{N-2,N-1}(x_{N-2},x_{N-1})$ and $\psi_{N-1,N}(x_{N-1},x_N)$ . So here we can actually draw an undirected graph with N-1 nodes, and adopt the proposed algorithm to solve $p(x_n,x_N)$ . If we use $x_n^{*}$ to represent the new nodes, then the joint distribution can be written as:
$$p(\mathbf{x}^{*}) = \frac{1}{Z^{*}} \psi_{1,2}^{*}(x_{1}^{*}, x_{2}^{*}) \psi_{2,3}^{*}(x_{2}^{*}, x_{3}^{*}) \dots \psi_{N-2, N-1}^{*}(x_{N-2}^{*}, x_{N-1}^{*})$$
Where $\psi_{n,n+1}^{*}(x_n^{*},x_{n+1}^{*})$ is defined as:
$$\psi_{n,n+1}^{*}(x_{n}^{*},x_{n+1}^{*}) = \left\{ \begin{array}{ll} \psi_{n,n+1}(x_{n},x_{n+1}), & n=1,2,...,N-3 \\ \psi_{N-2,N-1}(x_{N-2},x_{N-1})\psi_{N-1,N}(x_{N-1},x_{N}), & n=N-2 \end{array} \right.$$
In other words, we have combined the original node $x_{N-1}$ and $x_N$ . Moreover, we have the relationship:
$$p(x_n, x_N) = p(x_n^*) = \frac{1}{Z^*} \mu_\alpha^*(x_n^*) \mu_\beta^*(x_n^*) \quad n = 1, 2, ..., N-1$$
By adopting the proposed algorithm to the new undirected graph, $p(x_n^*)$ can be easily evaluated, and so is $p(x_n, x_N)$ .
| 2,112
|
8
|
8.17
|
medium
|
Consider a graph of the form shown in Figure 8.38 having N=5 nodes, in which nodes $x_3$ and $x_5$ are observed. Use d-separation to show that $x_2 \perp \!\!\! \perp x_5 \mid x_3$ . Show that if the message passing algorithm of Section 8.4.1 is applied to the evaluation of $p(x_2|x_3,x_5)$ , the result will be independent of the value of $x_5$ .
|
It is straightforward to see that for every path connecting node $x_2$ and $x_5$ in Fig.8.38, it must pass through node $x_3$ . Therefore, all paths are blocked and the conditional property holds. For more details, you should read section 8.3.1. According to Bayes' Theorem, we can obtain:
$$p(x_2|x_3,x_5) = \frac{p(x_2, x_3, x_5)}{p(x_3, x_5)}$$
Using the proposed algorithm in section 8.4.1, we can obtain:
$$p(x_{2}|x_{3},x_{5}) = \frac{p(x_{2},x_{3},x_{5})}{p(x_{3},x_{5})} = \frac{\sum_{x_{1}} \sum_{x_{4}} p(\mathbf{x})}{\sum_{x_{1}} \sum_{x_{2}} \sum_{x_{4}} p(\mathbf{x})}$$
$$= \frac{\sum_{x_{1}} \sum_{x_{4}} \psi_{1,2} \psi_{2,3} \psi_{3,4} \psi_{4,5}}{\sum_{x_{1}} \sum_{x_{2}} \sum_{x_{4}} \psi_{1,2} \psi_{2,3} \psi_{3,4} \psi_{4,5}}$$
$$= \frac{\left(\sum_{x_{1}} \psi_{1,2}\right) \cdot \psi_{2,3} \cdot \left(\sum_{x_{4}} \psi_{3,4} \psi_{4,5}\right)}{\sum_{x_{2}} \left[\left(\sum_{x_{1}} \psi_{1,2}\right) \cdot \psi_{2,3}\right] \cdot \left(\sum_{x_{4}} \psi_{3,4} \psi_{4,5}\right)}$$
$$= \frac{\left(\sum_{x_{1}} \psi_{1,2}\right) \cdot \psi_{2,3}}{\sum_{x_{2}} \left[\left(\sum_{x_{1}} \psi_{1,2}\right) \psi_{2,3}\right]}$$
It is obvious that the right hand side doesn't depend on $x_5$ .
| 1,232
|
8
|
8.18
|
medium
|
Show that a distribution represented by a directed tree can trivially be written as an equivalent distribution over the corresponding undirected tree. Also show that a distribution expressed as an undirected tree can, by suitable normalization of the clique potentials, be written as a directed tree. Calculate the number of distinct directed trees that can be constructed from a given undirected tree.
|
First, the distribution represented by a directed tree can trivially be written as an equivalent distribution over the corresponding undirected tree by moralization; since every node in a directed tree has at most one parent, moralization adds no extra links. You can find more details in section 8.4.2.
Conversely, we now want to represent a distribution defined by an undirected tree via a directed tree. For example, the distribution defined by the undirected tree in Fig.4 can be written as:
$$p(\mathbf{x}) = \frac{1}{Z} \psi_{1,3}(x_1, x_3) \, \psi_{2,3}(x_2, x_3) \, \psi_{3,4}(x_3, x_4) \, \psi_{4,5}(x_4, x_5)$$
We simply choose $x_4$ as the root and the corresponding directed tree is well defined by working outwards. In this case, the distribution defined by the directed tree is:
$$p(\mathbf{x}) = p(x_4) p(x_5|x_4) p(x_3|x_4) p(x_1|x_3) p(x_2|x_3)$$
Thus it is not difficult to change an undirected tree into a directed one by setting:
$$p(x_4)p(x_5|x_4) \propto \psi_{4,5}, \quad p(x_3|x_4) \propto \psi_{3,4}, \quad p(x_2|x_3) \propto \psi_{2,3}, \quad p(x_1|x_3) \propto \psi_{1,3}.$$

Figure 4: Example of changing an undirected tree to a directed one
The symbol $\propto$ indicates equality up to a normalization constant, chosen so that the resulting PDF integrates (or sums) to 1. In summary, in the particular case of an undirected tree there is only one path between any pair of nodes, and thus every maximal clique consists of a single pair of linked nodes. Indeed, if any three nodes $x_1, x_2, x_3$ were mutually linked they would form a loop, giving two paths between $x_1$ and $x_3$: (i) $x_1 \to x_3$ directly and (ii) $x_1 \to x_2 \to x_3$, which is impossible in a tree. In the directed tree, each node
depends on only one other node (except the root), namely its parent. Thus we can easily change an undirected tree into a directed one by matching each pairwise potential with the corresponding conditional PDF, as shown in the example.
Moreover, we can choose any node of the undirected tree to be the root and then work outwards to obtain a directed tree. Therefore, for an undirected tree with n nodes there are n corresponding directed trees in total.
| 2,330
|
8
|
8.2
|
easy
|
Show that the property of there being no directed cycles in a directed graph follows from the statement that there exists an ordered numbering of the nodes such that for each node there are no links going to a lower-numbered node.
Table 8.2 The joint distribution over three binary variables.
| a | b | c | p(a,b,c) |
|---|---|---|----------|
| 0 | 0 | 0 | 0.192 |
| 0 | 0 | 1 | 0.144 |
| 0 | 1 | 0 | 0.048 |
| 0 | 1 | 1 | 0.216 |
| 1 | 0 | 0 | 0.192 |
| 1 | 0 | 1 | 0.064 |
| 1 | 1 | 0 | 0.048 |
| 1 | 1 | 1 | 0.096 |
|
This statement is obvious. Suppose that there exists an ordered numbering of the nodes such that for each node there are no links going to a lower-numbered node, and that there is a directed cycle in the graph:
$$a_1 \rightarrow a_2 \rightarrow \dots \rightarrow a_N$$
To make it a true cycle, we also require the link $a_N \to a_1$. According to the assumption, every link goes from a lower-numbered node to a higher-numbered one, so $a_1 < a_2 < ... < a_N$. The closing link $a_N \to a_1$ would then go to a lower-numbered node, which contradicts the assumption. Hence no directed cycle can exist.
| 473
|
8
|
8.20
|
easy
|
Consider the message passing protocol for the sum-product algorithm on a tree-structured factor graph in which messages are first propagated from the leaves to an arbitrarily chosen root node and then from the root node out to the leaves. Use proof by induction to show that the messages can be passed in such an order that at every step, each node that must send a message has received all of the incoming messages necessary to construct its outgoing messages.
|
We do the induction over the size of the tree and we grow the tree one node at a time while, at the same time, we update the message passing schedule. Note that we can build up any tree this way.
For a single root node, the required condition holds trivially true, since there are no messages to be passed. We then assume that it holds for a tree with N nodes. In the induction step we add a new leaf node to such a tree. This new leaf node need not wait for any messages from other nodes in order to send its outgoing message, and so it can be scheduled to send it first, before any other messages are sent. Its parent node will receive this message, after which the message propagation will follow the schedule for the original tree with N nodes, for which the condition is assumed to hold.
For the propagation of the outward messages from the root back to the leaves, we first follow the propagation schedule for the original tree with N nodes, for which the condition is assumed to hold. When this has completed, the parent of the new leaf node will be ready to send its outgoing message to the new leaf node, thereby completing the propagation for the tree with N+1 nodes.
| 1,180
|
8
|
8.23
|
medium
|
In Section 8.4.4, we showed that the marginal distribution $p(x_i)$ for a variable node $x_i$ in a factor graph is given by the product of the messages arriving at this node from neighbouring factor nodes in the form $= \prod_{s \in ne(x)} \mu_{f_s \to x}(x).$. Show that the marginal $p(x_i)$ can also be written as the product of the incoming message along any one of the links with the outgoing message along the same link.
|
This follows from the fact that the message that a node, $x_i$ , will send to a factor $f_s$ , consists of the product of all other messages received by $x_i$ . From $= \prod_{s \in ne(x)} \mu_{f_s \to x}(x).$ and $= \prod_{l \in \text{ne}(x_m) \setminus f_s} \mu_{f_l \to x_m}(x_m)$, we have
$$p(x_i) = \prod_{s \in ne(x_i)} \mu_{f_s \to x_i}(x_i)$$
$$= \mu_{f_s \to x_i}(x_i) \prod_{t \in ne(x_i) \setminus f_s} \mu_{f_t \to x_i}(x_i)$$
$$= \mu_{f_s \to x_i}(x_i) \mu_{x_i \to f_s}(x_i).$$
| 513
|
8
|
8.28
|
medium
|
The concept of a *pending* message in the sum-product algorithm for a factor graph was defined in Section 8.4.7. Show that if the graph has one or more cycles, there will always be at least one pending message irrespective of how long the algorithm runs.
|
If a graph has one or more cycles, there exists at least one set of nodes and edges such that, starting from an arbitrary node in the set, we can visit all the nodes in the set and return to the starting node, without traversing any edge more than once.
Consider one particular such cycle. When one of the nodes $n_1$ in the cycle sends a message to one of its neighbours $n_2$ in the cycle, this causes a pending message on the edge to the next node $n_3$ in that cycle. Thus sending a pending message along an edge in the cycle always generates a pending message on the next edge in that cycle. Since this is true for every node in the cycle, it follows that there will always exist at least one pending message in the graph.
| 734
|
8
|
8.29
|
medium
|
Show that if the sum-product algorithm is run on a factor graph with a tree structure (no loops), then after a finite number of messages have been sent, there will be no pending messages.
If we define a joint distribution over observed and latent variables, the corresponding distribution of the observed variables alone is obtained by marginalization. This allows relatively complex marginal distributions over observed variables to be expressed in terms of more tractable joint distributions over the expanded space of observed and latent variables. The introduction of latent variables thereby allows complicated distributions to be formed from simpler components. In this chapter, we shall see that mixture distributions, such as the Gaussian mixture discussed in Section 2.3.9, can be interpreted in terms of discrete latent variables. Continuous latent variables will form the subject of Chapter 12.
As well as providing a framework for building more complex probability distributions, mixture models can also be used to cluster data. We therefore begin our discussion of mixture distributions by considering the problem of finding clusters in a set of data points, which we approach first using a nonprobabilistic technique called the K-means algorithm (Lloyd, 1982). Then we introduce the latent variable
view of mixture distributions in which the discrete latent variables can be interpreted as defining assignments of data points to specific components of the mixture. A general technique for finding maximum likelihood estimators in latent variable models is the expectation-maximization (EM) algorithm. We first of all use the Gaussian mixture distribution to motivate the EM algorithm in a fairly informal way, and then we give a more careful treatment based on the latent variable viewpoint. We shall see that the K-means algorithm corresponds to a particular nonprobabilistic limit of EM applied to mixtures of Gaussians. Finally, we discuss EM in some generality.
Gaussian mixture models are widely used in data mining, pattern recognition, machine learning, and statistical analysis. In many applications, their parameters are determined by maximum likelihood, typically using the EM algorithm. However, as we shall see there are some significant limitations to the maximum likelihood approach, and in Chapter 10 we shall show that an elegant Bayesian treatment can be given using the framework of variational inference. This requires little additional computation compared with EM, and it resolves the principal difficulties of maximum likelihood while also allowing the number of components in the mixture to be inferred automatically from the data.
### 9.1. K-means Clustering
We begin by considering the problem of identifying groups, or clusters, of data points in a multidimensional space. Suppose we have a data set $\{\mathbf{x}_1,\ldots,\mathbf{x}_N\}$ consisting of N observations of a random D-dimensional Euclidean variable $\mathbf{x}$ . Our goal is to partition the data set into some number K of clusters, where we shall suppose for the moment that the value of K is given. Intuitively, we might think of a cluster as comprising a group of data points whose inter-point distances are small compared with the distances to points outside of the cluster. We can formalize this notion by first introducing a set of D-dimensional vectors $\mu_k$ , where $k=1,\ldots,K$ , in which $\mu_k$ is a prototype associated with the $k^{\text{th}}$ cluster. As we shall see shortly, we can think of the $\mu_k$ as representing the centres of the clusters. Our goal is then to find an assignment of data points to clusters, as well as a set of vectors $\{\mu_k\}$ , such that the sum of the squares of the distances of each data point to its closest vector $\mu_k$ , is a minimum.
It is convenient at this point to define some notation to describe the assignment of data points to clusters. For each data point $\mathbf{x}_n$ , we introduce a corresponding set of binary indicator variables $r_{nk} \in \{0,1\}$ , where $k=1,\ldots,K$ describing which of the K clusters the data point $\mathbf{x}_n$ is assigned to, so that if data point $\mathbf{x}_n$ is assigned to cluster k then $r_{nk}=1$ , and $r_{nj}=0$ for $j\neq k$ . This is known as the 1-of-K coding scheme. We can then define an objective function, sometimes called a *distortion measure*, given by
$$J = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \|\mathbf{x}_n - \boldsymbol{\mu}_k\|^2$$
which represents the sum of the squares of the distances of each data point to its
assigned vector $\mu_k$ . Our goal is to find values for the $\{r_{nk}\}$ and the $\{\mu_k\}$ so as to minimize J. We can do this through an iterative procedure in which each iteration involves two successive steps corresponding to successive optimizations with respect to the $r_{nk}$ and the $\mu_k$ . First we choose some initial values for the $\mu_k$ . Then in the first phase we minimize J with respect to the $r_{nk}$ , keeping the $\mu_k$ fixed. In the second phase we minimize J with respect to the $\mu_k$ , keeping $r_{nk}$ fixed. This two-stage optimization is then repeated until convergence. We shall see that these two stages of updating $r_{nk}$ and updating $\mu_k$ correspond respectively to the E (expectation) and M (maximization) steps of the EM algorithm, and to emphasize this we shall use the terms E step and M step in the context of the K-means algorithm.
Consider first the determination of the $r_{nk}$ . Because J in $J = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \|\mathbf{x}_n - \boldsymbol{\mu}_k\|^2$ is a linear function of $r_{nk}$ , this optimization can be performed easily to give a closed form solution. The terms involving different n are independent and so we can optimize for each n separately by choosing $r_{nk}$ to be 1 for whichever value of k gives the minimum value of $\|\mathbf{x}_n - \boldsymbol{\mu}_k\|^2$ . In other words, we simply assign the $n^{\text{th}}$ data point to the closest cluster centre. More formally, this can be expressed as
$$r_{nk} = \begin{cases} 1 & \text{if } k = \arg\min_{j} \|\mathbf{x}_n - \boldsymbol{\mu}_j\|^2 \\ 0 & \text{otherwise.} \end{cases}$$
Now consider the optimization of the $\mu_k$ with the $r_{nk}$ held fixed. The objective function J is a quadratic function of $\mu_k$ , and it can be minimized by setting its derivative with respect to $\mu_k$ to zero giving
$$2\sum_{n=1}^{N} r_{nk}(\mathbf{x}_n - \boldsymbol{\mu}_k) = 0 \tag{9.3}$$
which we can easily solve for $\mu_k$ to give
$$\mu_k = \frac{\sum_n r_{nk} \mathbf{x}_n}{\sum_n r_{nk}}.$$
|
We show this by induction over the number of nodes in the tree-structured factor graph.
First consider a graph with two nodes, in which case only two messages will be sent across the single edge, one in each direction. None of these messages will induce any pending messages and so the algorithm terminates.
We then assume that for a factor graph with N nodes, there will be no pending messages after a finite number of messages have been sent. Given such a graph, we can construct a new graph with N+1 nodes by adding a new node. This new node will have a single edge to the original graph (since the graph must remain a tree) and so if this new node receives a message on this edge, it will induce no pending messages. A message sent from the new node will trigger propagation of messages in the original graph with N nodes, but by assumption, after a finite number of messages have been sent, there will be no pending messages and the algorithm will terminate.
| 1,004
|
8
|
8.3
|
medium
|
Consider three binary variables $a, b, c \in \{0, 1\}$ having the joint distribution given in Table 8.2. Show by direct evaluation that this distribution has the property that a and b are marginally dependent, so that $p(a,b) \neq p(a)p(b)$ , but that they become independent when conditioned on c, so that p(a,b|c) = p(a|c)p(b|c) for both c = 0 and c = 1.
|
Based on definition, we can obtain:
$$p(a,b) = p(a,b,c=0) + p(a,b,c=1) = \begin{cases} 0.336, & \text{if } a = 0, b = 0\\ 0.264, & \text{if } a = 0, b = 1\\ 0.256, & \text{if } a = 1, b = 0\\ 0.144, & \text{if } a = 1, b = 1 \end{cases}$$
Similarly, we can obtain:
$$p(a) = p(a, b = 0) + p(a, b = 1) = \begin{cases} 0.6, & \text{if } a = 0 \\ 0.4, & \text{if } a = 1 \end{cases}$$
And
$$p(b) = p(a = 0, b) + p(a = 1, b) = \begin{cases} 0.592, & \text{if } b = 0 \\ 0.408, & \text{if } b = 1 \end{cases}$$
Therefore, we conclude that $p(a,b) \neq p(a)p(b)$ . For instance, we have $p(a=1,b=1)=0.144,\ p(a=1)=0.4$ and p(b=1)=0.408. It is obvious that:
$$0.144 = p(a = 1, b = 1) \neq p(a = 1)p(b = 1) = 0.4 \times 0.408$$
To prove the conditional dependency, we first calculate p(c):
$$p(c) = \sum_{a,b=0,1} p(a,b,c) = \begin{cases} 0.480, & \text{if } c = 0\\ 0.520, & \text{if } c = 1 \end{cases}$$
According to Bayes' Theorem, we have:
$$p(a,b|c) = \frac{p(a,b,c)}{p(c)} = \begin{cases} 0.400, & \text{if } a = 0, b = 0, c = 0\\ 0.277, & \text{if } a = 0, b = 0, c = 1\\ 0.100, & \text{if } a = 0, b = 1, c = 0\\ 0.415, & \text{if } a = 0, b = 1, c = 1\\ 0.400, & \text{if } a = 1, b = 0, c = 0\\ 0.123, & \text{if } a = 1, b = 0, c = 1\\ 0.100, & \text{if } a = 1, b = 1, c = 0\\ 0.185, & \text{if } a = 1, b = 1, c = 1 \end{cases}$$
Similarly, we also have:
$$p(a|c) = \frac{p(a,c)}{p(c)} = \begin{cases} 0.240/0.480 = 0.500, & \text{if } a = 0, c = 0\\ 0.360/0.520 = 0.692, & \text{if } a = 0, c = 1\\ 0.240/0.480 = 0.500, & \text{if } a = 1, c = 0\\ 0.160/0.520 = 0.308, & \text{if } a = 1, c = 1 \end{cases}$$
Where we have used p(a,c) = p(a,b=0,c) + p(a,b=1,c). Similarly, we can obtain:
$$p(b|c) = \frac{p(b,c)}{p(c)} = \begin{cases} 0.384/0.480 = 0.800, \text{ if } b = 0, c = 0\\ 0.208/0.520 = 0.400, \text{ if } b = 0, c = 1\\ 0.096/0.480 = 0.200, \text{ if } b = 1, c = 0\\ 0.312/0.520 = 0.600, \text{ if } b = 1, c = 1 \end{cases}$$
Now we can easily verify the statement p(a,b|c) = p(a|c)p(b|c). For instance, we have:
$$0.1 = p(a = 1, b = 1 | c = 0) = p(a = 1 | c = 0)p(b = 1 | c = 0) = 0.5 \times 0.2 = 0.1$$
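All of the numbers above can be verified mechanically from Table 8.2; a minimal sketch (my own illustration):

```python
import numpy as np

# Joint distribution p(a, b, c) from Table 8.2, indexed as joint[a, b, c].
joint = np.array([[[0.192, 0.144],
                   [0.048, 0.216]],
                  [[0.192, 0.064],
                   [0.048, 0.096]]])

p_ab = joint.sum(axis=2)
p_a = p_ab.sum(axis=1)
p_b = p_ab.sum(axis=0)
print(np.allclose(p_ab, np.outer(p_a, p_b)))             # False: marginally dependent

p_c = joint.sum(axis=(0, 1))
for c in (0, 1):
    p_ab_c = joint[:, :, c] / p_c[c]                     # p(a, b | c)
    p_a_c = p_ab_c.sum(axis=1)
    p_b_c = p_ab_c.sum(axis=0)
    print(c, np.allclose(p_ab_c, np.outer(p_a_c, p_b_c)))  # True for both c = 0 and c = 1
```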
| 2,153
|
8
|
8.4
|
hard
|
Evaluate the distributions p(a), p(b|c), and p(c|a) corresponding to the joint distribution given in Table 8.2. Hence show by direct evaluation that p(a,b,c) = p(a)p(c|a)p(b|c). Draw the corresponding directed graph.
|
This problem follows the previous one. We have already calculated p(a) and p(b|c), we rewrite it here.
$$p(a) = p(a, b = 0) + p(a, b = 1) = \begin{cases} 0.6, & \text{if } a = 0 \\ 0.4, & \text{if } a = 1 \end{cases}$$
And
$$p(b|c) = \frac{p(b,c)}{p(c)} = \begin{cases} 0.384/0.480 = 0.800, & \text{if } b = 0, c = 0 \\ 0.208/0.520 = 0.400, & \text{if } b = 0, c = 1 \\ 0.096/0.480 = 0.200, & \text{if } b = 1, c = 0 \\ 0.312/0.520 = 0.600, & \text{if } b = 1, c = 1 \end{cases}$$
We can also obtain p(c|a):
$$p(c|a) = \frac{p(a,c)}{p(a)} = \begin{cases} 0.24/0.6 = 0.4, & \text{if } a = 0, c = 0\\ 0.36/0.6 = 0.6, & \text{if } a = 0, c = 1\\ 0.24/0.4 = 0.6, & \text{if } a = 1, c = 0\\ 0.16/0.4 = 0.4, & \text{if } a = 1, c = 1 \end{cases}$$
Now we can easily verify the statement that p(a,b,c) = p(a)p(c|a)p(b|c) given Table 8.2. The directed graph looks like:
$$a \rightarrow c \rightarrow b$$
| 903
|
8
|
8.5
|
easy
|
Draw a directed probabilistic graphical model corresponding to the relevance vector machine described by $p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} p(t_n|\mathbf{x}_n, \mathbf{w}, \beta^{-1}).$ and $p(\mathbf{w}|\boldsymbol{\alpha}) = \prod_{i=1}^{M} \mathcal{N}(w_i|0, \alpha_i^{-1})$.
|
It looks quite like Figure 8.6. The difference is that we introduce $\alpha_i$ for each $w_i$ , where i = 1, 2, ..., M.

Figure 1: probabilistic graphical model corresponding to the RVM described in $p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} p(t_n|\mathbf{x}_n, \mathbf{w}, \beta^{-1}).$ and $p(\mathbf{w}|\boldsymbol{\alpha}) = \prod_{i=1}^{M} \mathcal{N}(w_i|0, \alpha_i^{-1})$.
| 493
|
8
|
8.7
|
medium
|
Using the recursion relations $\mathbb{E}[x_i] = \sum_{j \in \text{pa}_i} w_{ij} \mathbb{E}[x_j] + b_i.$ and (8.16), show that the mean and covariance of the joint distribution for the graph shown in Figure 8.14 are given by (8.17) and $\Sigma = \begin{pmatrix} v_1 & w_{21}v_1 & w_{32}w_{21}v_1 \\ w_{21}v_1 & v_2 + w_{21}^2v_1 & w_{32}(v_2 + w_{21}^2v_1) \\ w_{32}w_{21}v_1 & w_{32}(v_2 + w_{21}^2v_1) & v_3 + w_{32}^2(v_2 + w_{21}^2v_1) \end{pmatrix} .$, respectively.
|
Let's just follow the hint. We begin by calculating the mean $\mu$ .
$$\mathbb{E}[x_1] = b_1$$
According to Eq $\mathbb{E}[x_i] = \sum_{j \in \text{pa}_i} w_{ij} \mathbb{E}[x_j] + b_i.$, we can obtain:
$$\mathbb{E}[x_2] = \sum_{j \in pa_2} w_{2j} \mathbb{E}[x_j] + b_2 = w_{21}b_1 + b_2$$
Then we can obtain:
$$\mathbb{E}[x_3] = w_{32}\mathbb{E}[x_2] + b_3$$
$$= w_{32}(w_{21}b_1 + b_2) + b_3$$
$$= w_{32}w_{21}b_1 + w_{32}b_2 + b_3$$
Therefore, we obtain Eq (8.17) just as required. Next, we deal with the covariance matrix.
$$cov[x_1, x_1] = v_1$$
Then we can obtain:
$$cov[x_1, x_2] = \sum_{k \in \text{pa}_2} w_{2k} cov[x_1, x_k] + I_{12}v_2 = w_{21} cov[x_1, x_1] = w_{21}v_1$$
And also $cov[x_2,x_1] = cov[x_1,x_2] = w_{21}v_1$ . Hence, we can obtain:
$$cov[x_2, x_2] = \sum_{k \in \text{pa}_2} w_{2k} cov[x_2, x_k] + I_{22}v_2 = w_{21}^2 v_1 + v_2$$
Next, we can obtain:
$$cov[x_1, x_3] = \sum_{k \in \text{pa}_3} w_{3k} cov[x_1, x_k] + I_{13}v_3 = w_{32}w_{21}v_1$$
Then, we can obtain:
$$cov[x_2, x_3] = \sum_{k \in \text{pa}_3} w_{3k} cov[x_2, x_k] + I_{23}v_3 = w_{32}(v_2 + w_{21}^2 v_1)$$
Finally, we can obtain:
$$\begin{array}{lll} \mathrm{cov}[x_3,x_3] & = & \sum_{k=2} w_{3k} \mathrm{cov}[x_3,x_k] + I_{33} v_3 \\ \\ & = & w_{32} \Big[ w_{32} (v_2 + w_{21}^2 v_1) \Big] + v_3 \end{array}$$
Where we have used the fact that $cov[x_3, x_2] = cov[x_2, x_3]$ . By now, we have obtained Eq $\Sigma = \begin{pmatrix} v_1 & w_{21}v_1 & w_{32}w_{21}v_1 \\ w_{21}v_1 & v_2 + w_{21}^2v_1 & w_{32}(v_2 + w_{21}^2v_1) \\ w_{32}w_{21}v_1 & w_{32}(v_2 + w_{21}^2v_1) & v_3 + w_{32}^2(v_2 + w_{21}^2v_1) \end{pmatrix} .$ just as required.
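The result can also be checked numerically by writing the model as $\mathbf{x} = \mathbf{W}\mathbf{x} + \mathbf{b} + \boldsymbol{\epsilon}$, so that $\mathbf{x} = (\mathbf{I}-\mathbf{W})^{-1}(\mathbf{b}+\boldsymbol{\epsilon})$; the parameter values below are arbitrary and the reformulation is my own illustration.

```python
import numpy as np

b = np.array([1.0, -0.5, 2.0])      # biases b_i (arbitrary)
v = np.array([0.5, 1.0, 1.5])       # conditional variances v_i (arbitrary)
w21, w32 = 0.8, -1.2                # edge weights (arbitrary)

# x = W x + b + eps, with eps_i ~ N(0, v_i), so x = (I - W)^{-1} (b + eps).
W = np.array([[0.0, 0.0, 0.0],
              [w21, 0.0, 0.0],
              [0.0, w32, 0.0]])
M = np.linalg.inv(np.eye(3) - W)
mean = M @ b
cov = M @ np.diag(v) @ M.T

mean_expected = np.array([b[0], w21 * b[0] + b[1], w32 * (w21 * b[0] + b[1]) + b[2]])
cov_expected = np.array([
    [v[0], w21 * v[0], w32 * w21 * v[0]],
    [w21 * v[0], v[1] + w21**2 * v[0], w32 * (v[1] + w21**2 * v[0])],
    [w32 * w21 * v[0], w32 * (v[1] + w21**2 * v[0]), v[2] + w32**2 * (v[1] + w21**2 * v[0])],
])
print(np.allclose(mean, mean_expected), np.allclose(cov, cov_expected))  # True True
```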
| 1,641
|
8
|
8.8
|
easy
|
Show that $a \perp \!\!\!\perp b, c \mid d$ implies $a \perp \!\!\!\!\perp b \mid d$ .
|
According to the definition, we can write:
$$p(a,b,c|d) = p(a|d)p(b,c|d)$$
We marginalize both sides with respect to c, yielding:
$$p(a,b|d) = p(a|d)p(b|d)$$
Just as required.
| 179
|
8
|
8.9
|
easy
|
Using the d-separation criterion, show that the conditional distribution for a node x in a directed graph, conditioned on all of the nodes in the Markov blanket, is independent of the remaining variables in the graph.
Figure 8.54 Example of a graphical model used to explore the conditional independence properties of the head-to-head path a–c–b when a descendant of c, namely the node d, is observed.

|
This statement is easy to see but a little bit difficult to prove. We put Fig 8.26 here to give a better illustration.

Figure 2: Markov blanket of a node $x_i$
Markov blanket $\Phi$ of node $x_i$ is made up of three kinds of nodes:(i) the set $\Phi_1$ containing all the parents of node $x_i$ ( $x_1$ and $x_2$ in Fig.2), (ii) the set $\Phi_2$ containing all the children of node $x_i$ ( $x_5$ and $x_6$ in Fig.2), and (iii) the set $\Phi_3$ containing all the co-parents of node $x_i$ ( $x_3$ and $x_4$ in Fig.2). According to the d-separation criterion, we need to show that all the paths from node $x_i$ to an arbitrary node $\hat{x} \notin \Phi = \{\Phi_1 \cup \Phi_2 \cup \Phi_3\}$ are blocked given that the Markov blanket $\Phi$ are observed.
It is obvious that $\hat{x}$ can only connect to the target node $x_i$ via two kinds of node: $\Phi_1, \Phi_2$ . First, suppose that $\hat{x}$ connects to $x_i$ via some node $x^* \in \Phi_1$ . The arrows definitely meet head-to-tail or tail-to-tail at node $x^*$ because the link from a parent node $x^*$ to $x_i$ has its tail connected to the parent node $x^*$ , and since $x^*$ is in $\Phi_1 \subseteq \Phi$ , we see that this path is blocked.
In the second case, suppose that $\hat{x}$ connects to $x_i$ via some node $x^* \in \Phi_2$ . We need to further divide this situation. If the path from $\hat{x}$ to $x_i$ also goes through a node $x^{**}$ from $\Phi_3$ (e.g., in Fig.2, some node $\hat{x}$ connects to node $x_3$ , and in this example $x^{**} = x_3$ , $x^* = x_5$ ), it is clear that the arrows meet head-to-tail or tail-to-tail at the node $x^{**} \in \Phi_3 \subseteq \Phi$ , so this path is blocked.
In the final case, suppose that $\hat{x}$ connects to $x_i$ via some node $x^* \in \Phi_2$ and the path doesn't go through any node from $\Phi_3$ . An important observation is that the arrows cannot meet head-to-head at node $x^*$ (otherwise, this path will go through a node from $\Phi_3$ ). Thus, the arrows must meet either head-to-tail or tail-to-tail at node $x^* \in \Phi_2 \subseteq \Phi$ . Therefore, the path is also blocked.
| 2,219
|
9
|
9.1
|
easy
|
Consider the K-means algorithm discussed in Section 9.1. Show that as a consequence of there being a finite number of possible assignments for the set of discrete indicator variables $r_{nk}$ , and that for each such assignment there is a unique optimum for the $\{\mu_k\}$ , the K-means algorithm must converge after a finite number of iterations.
|
For fixed n, exactly one of the indicators $r_{nk}$, k = 1, 2, ..., K, equals 1 and the others are all 0, so there are K possible choices for each data point. With N data points there are therefore $K^N$ possible assignments $\{r_{nk}; n=1,2,...,N; k=1,2,...,K\}$. For each assignment, the optimal $\{\mu_k; k=1,2,...,K\}$ are uniquely determined by Eq $\mu_k = \frac{\sum_n r_{nk} \mathbf{x}_n}{\sum_n r_{nk}}.$.
As discussed in the main text, each E step and M step can only decrease, or leave unchanged, the distortion measure in Eq $J = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \|\mathbf{x}_n - \boldsymbol{\mu}_k\|^2$. If an E step leaves the assignment unchanged, the algorithm has converged; otherwise it moves to a new assignment with a value of J no larger than before. Because there are only finitely many ($K^N$) assignments, and the optimal $\{\mu_k\}$ are determined once the assignment is given, the algorithm must therefore converge after a finite number of iterations; in the worst case all $K^N$ assignments are visited.
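A minimal K-means sketch (my own illustration, with arbitrary synthetic data) that stops as soon as the assignment repeats, making the finite-convergence argument concrete:

```python
import numpy as np

def kmeans(X, K, seed=0, max_iter=10_000):
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)].copy()   # initial prototypes
    assign = None
    for it in range(1, max_iter + 1):
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
        new_assign = d2.argmin(axis=1)                 # E step: closest prototype
        if assign is not None and np.array_equal(new_assign, assign):
            return mu, assign, it                      # assignment repeated -> converged
        assign = new_assign
        for k in range(K):                             # M step: cluster means
            if np.any(assign == k):
                mu[k] = X[assign == k].mean(axis=0)
    return mu, assign, max_iter

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
X[:150] += 4.0                                         # two loose clusters
print(kmeans(X, K=2)[2])                               # converges after a handful of iterations
```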
| 915
|
9
|
9.10
|
medium
|
Consider a density model given by a mixture distribution
$$p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k p(\mathbf{x}|k)$$
and suppose that we partition the vector $\mathbf{x}$ into two parts so that $\mathbf{x} = (\mathbf{x}_a, \mathbf{x}_b)$ . Show that the conditional density $p(\mathbf{x}_b|\mathbf{x}_a)$ is itself a mixture distribution and find expressions for the mixing coefficients and for the component densities.
|
According to the property of PDF, we know that:
$$p(\mathbf{x}_b|\mathbf{x}_a) = \frac{p(\mathbf{x}_a, \mathbf{x}_b)}{p(\mathbf{x}_a)} = \frac{p(\mathbf{x})}{p(\mathbf{x}_a)} = \sum_{k=1}^K \frac{\pi_k}{p(\mathbf{x}_a)} \cdot p(\mathbf{x}|k)$$
Note that here $p(\mathbf{x}_a)$ can be viewed as a normalization constant that guarantees that $p(\mathbf{x}_b|\mathbf{x}_a)$ integrates to 1. To exhibit the mixture form explicitly, write $p(\mathbf{x}|k) = p(\mathbf{x}_a|k)\,p(\mathbf{x}_b|\mathbf{x}_a,k)$, so that
$$p(\mathbf{x}_b|\mathbf{x}_a) = \sum_{k=1}^{K} \lambda_k \, p(\mathbf{x}_b|\mathbf{x}_a, k), \qquad \lambda_k = \frac{\pi_k \, p(\mathbf{x}_a|k)}{\sum_{j=1}^{K} \pi_j \, p(\mathbf{x}_a|j)}$$
i.e., the conditional density is itself a mixture, with mixing coefficients $\lambda_k$ and component densities $p(\mathbf{x}_b|\mathbf{x}_a,k)$. Moreover, similarly, we can also obtain:
$$p(\mathbf{x}_a|\mathbf{x}_b) = \sum_{k=1}^{K} \frac{\pi_k}{p(\mathbf{x}_b)} \cdot p(\mathbf{x}|k)$$
| 553
|
9
|
9.11
|
easy
|
In Section 9.3.2, we obtained a relationship between K means and EM for Gaussian mixtures by considering a mixture model in which all components have covariance $\epsilon \mathbf{I}$ . Show that in the limit $\epsilon \to 0$ , maximizing the expected completedata log likelihood for this model, given by $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_k + \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}. \quad$, is equivalent to minimizing the distortion measure J for the K-means algorithm given by $J = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \|\mathbf{x}_n - \boldsymbol{\mu}_k\|^2$.
|
According to the problem description, the expectation, i.e., Eq(9.40), can now be written as:
$$\mathbb{E}_{z}[\ln p] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_{k} + \ln \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \epsilon \mathbf{I}) \right\}$$
In the M-step, we are required to maximize the expression above with respect to $\mu_k$ and $\pi_k$ . In Prob.9.8, we have already proved that $\mu_k$ should be given by Eq (9.17):
$$\boldsymbol{\mu}_k = \frac{1}{N_k} \sum_{n=1}^N \gamma(z_{nk}) \mathbf{x}_n \tag{*}$$
Where $N_k$ is given by Eq $N_k = \sum_{n=1}^{N} \gamma(z_{nk}).$. Moreover, in this case, by analogy to Eq $0 = -\sum_{n=1}^{N} \frac{\pi_k \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{j} \pi_j \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)} \boldsymbol{\Sigma}_k(\mathbf{x}_n - \boldsymbol{\mu}_k)$, $\gamma(z_{nk})$ is slightly different:
$$\gamma(z_{nk}) = \frac{\pi_k \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \epsilon \mathbf{I})}{\sum_j \pi_j \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_j, \epsilon \mathbf{I})}$$
When $\epsilon \to 0$ , we can obtain:
$$\sum_{j} \pi_{j} \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{j}, \epsilon \mathbf{I}) \approx \pi_{m} \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{m}, \epsilon \mathbf{I}), \text{ where } m = \operatorname{argmin}_{j} ||\mathbf{x}_{n} - \boldsymbol{\mu}_{j}||^{2}$$
To be more clear, the summation is dominated by the max of $\pi_j \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_j, \epsilon \mathbf{I})$ , and this term is further determined by the exponent, i.e., $-||\mathbf{x}_n - \boldsymbol{\mu}_j||^2$ . Therefore, $\gamma(z_{nk})$ is given by exactly Eq $r_{nk} = \begin{cases} 1 & \text{if } k = \arg\min_{j} \|\mathbf{x}_n - \boldsymbol{\mu}_j\|^2 \\ 0 & \text{otherwise.} \end{cases}$, i.e., we have $\gamma(z_{nk}) = r_{nk}$ . Combining with (\*), we can obtain exactly Eq $\mu_k = \frac{\sum_n r_{nk} \mathbf{x}_n}{\sum_n r_{nk}}.$. Next, according to Prob.9.9, $\pi_k$ is given by Eq(9.22):
$$\pi_k = \frac{N_k}{N} = \frac{\sum_{n=1}^N \gamma(z_{nk})}{N} = \frac{\sum_{n=1}^N r_{nk}}{N}$$
In other words, $\pi_k$ equals the fraction of the data points assigned to the k-th cluster.
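The hardening of the responsibilities as $\epsilon \to 0$ is easy to see numerically; the data point, component means and mixing coefficients below are arbitrary values of my own choosing.

```python
import numpy as np

def responsibilities(x, mu, pi, eps):
    """gamma(z_nk) for isotropic Gaussian components with covariance eps*I."""
    d2 = ((x - mu) ** 2).sum(axis=1)              # squared distances to each mean
    log_w = np.log(pi) - d2 / (2 * eps)           # log pi_k + log N(x | mu_k, eps I) + const
    log_w -= log_w.max()                          # subtract max for numerical stability
    w = np.exp(log_w)
    return w / w.sum()

x = np.array([0.9, 0.2])
mu = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # three component means
pi = np.array([0.2, 0.3, 0.5])

for eps in [1.0, 0.1, 0.01, 0.001]:
    print(eps, np.round(responsibilities(x, mu, pi, eps), 3))
# As eps -> 0 the responsibilities approach the one-hot r_nk of K-means,
# selecting the closest mean regardless of pi.
```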
| 2,304
|
9
|
9.12
|
easy
|
Consider a mixture distribution of the form
$$p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k p(\mathbf{x}|k)$$
$p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k p(\mathbf{x}|k)$
where the elements of $\mathbf{x}$ could be discrete or continuous or a combination of these. Denote the mean and covariance of $p(\mathbf{x}|k)$ by $\mu_k$ and $\Sigma_k$ , respectively. Show that the mean and covariance of the mixture distribution are given by $\mathbb{E}[\mathbf{x}] = \sum_{k=1}^{K} \pi_k \boldsymbol{\mu}_k$ and $\operatorname{cov}[\mathbf{x}] = \sum_{k=1}^{K} \pi_k \left\{ \mathbf{\Sigma}_k + \boldsymbol{\mu}_k \boldsymbol{\mu}_k^{\mathrm{T}} \right\} - \mathbb{E}[\mathbf{x}] \mathbb{E}[\mathbf{x}]^{\mathrm{T}}$.
|
First we calculate the mean $\mathbb{E}[\mathbf{x}]$ of the mixture:
$$\mathbb{E}[\mathbf{x}] = \int \mathbf{x} p(\mathbf{x}) d\mathbf{x}$$
$$= \int \mathbf{x} \sum_{k=1}^K \pi_k p(\mathbf{x}|k) d\mathbf{x}$$
$$= \sum_{k=1}^K \pi_k \int \mathbf{x} p(\mathbf{x}|k) d\mathbf{x}$$
$$= \sum_{k=1}^K \pi_k \mu_k$$
Then we deal with the covariance matrix. For an arbitrary random variable $\mathbf{x}$ , according to Eq $cov[\mathbf{x}] = \mathbb{E}\left[ (\mathbf{x} - \mathbb{E}[\mathbf{x}])(\mathbf{x} - \mathbb{E}[\mathbf{x}])^{\mathrm{T}} \right].$ we have:
$$cov[\mathbf{x}] = \mathbb{E}[(\mathbf{x} - \mathbb{E}[\mathbf{x}])(\mathbf{x} - \mathbb{E}[\mathbf{x}])^T]$$
$$= \mathbb{E}[\mathbf{x}\mathbf{x}^T] - \mathbb{E}[\mathbf{x}]\mathbb{E}[\mathbf{x}]^T$$
Since $\mathbb{E}[\mathbf{x}]$ is already obtained, we only need to solve $\mathbb{E}[\mathbf{x}\mathbf{x}^T]$ . First we only focus on the k-th component and rearrange the expression above, yielding:
$$\mathbb{E}_{k}[\mathbf{x}\mathbf{x}^{T}] = \operatorname{cov}_{k}[\mathbf{x}] + \mathbb{E}_{k}[\mathbf{x}]\mathbb{E}_{k}[\mathbf{x}]^{T} = \mathbf{\Sigma}_{k} + \boldsymbol{\mu}_{k}\boldsymbol{\mu}_{k}^{T}$$
We further use Eq $\mathbb{E}[\mathbf{x}\mathbf{x}^{\mathrm{T}}] = \boldsymbol{\mu}\boldsymbol{\mu}^{\mathrm{T}} + \boldsymbol{\Sigma}.$, yielding:
$$\mathbb{E}[\mathbf{x}\mathbf{x}^T] = \int \mathbf{x}\mathbf{x}^T \sum_{k=1}^K \pi_k \, p(\mathbf{x}|k) \, d\mathbf{x}$$
$$= \sum_{k=1}^K \pi_k \int \mathbf{x}\mathbf{x}^T \, p(\mathbf{x}|k) \, d\mathbf{x}$$
$$= \sum_{k=1}^K \pi_k \, \mathbb{E}_k[\mathbf{x}\mathbf{x}^T]$$
$$= \sum_{k=1}^K \pi_k \, (\boldsymbol{\mu}_k \boldsymbol{\mu}_k^T + \boldsymbol{\Sigma}_k)$$
Therefore, we obtain Eq $\operatorname{cov}[\mathbf{x}] = \sum_{k=1}^{K} \pi_k \left\{ \mathbf{\Sigma}_k + \boldsymbol{\mu}_k \boldsymbol{\mu}_k^{\mathrm{T}} \right\} - \mathbb{E}[\mathbf{x}] \mathbb{E}[\mathbf{x}]^{\mathrm{T}}$ just as required.
| 1,926
|
9
|
9.13
|
medium
|
Using the re-estimation equations for the EM algorithm, show that a mixture of Bernoulli distributions, with its parameters set to values corresponding to a maximum of the likelihood function, has the property that
$$\mathbb{E}[\mathbf{x}] = \frac{1}{N} \sum_{n=1}^{N} \mathbf{x}_n \equiv \overline{\mathbf{x}}.$$
Hence show that if the parameters of this model are initialized such that all components have the same mean $\mu_k = \widehat{\mu}$ for $k = 1, \ldots, K$ , then the EM algorithm will converge after one iteration, for any choice of the initial mixing coefficients, and that this solution has the property $\mu_k = \overline{\mathbf{x}}$ . Note that this represents a degenerate case of the mixture model in which all of the components are identical, and in practice we try to avoid such solutions by using an appropriate initialization.
|
First, let us make the setting clear. For a mixture of Bernoulli distributions, the complete-data log likelihood is given by Eq $\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\pi}) = \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \left\{ \ln \pi_k + \sum_{i=1}^{D} \left[ x_{ni} \ln \mu_{ki} + (1 - x_{ni}) \ln(1 - \mu_{ki}) \right] \right\}$ and the model parameters are $\pi_k$ and $\boldsymbol{\mu}_k$. To obtain these parameters we can adopt the EM algorithm. In the E-step, we calculate $\gamma(z_{nk})$ as shown in Eq $= \frac{\pi_k p(\mathbf{x}_n | \boldsymbol{\mu}_k)}{\sum_{j=1}^K \pi_j p(\mathbf{x}_n | \boldsymbol{\mu}_j)}.$. In the M-step, we update $\pi_k$ and $\boldsymbol{\mu}_k$ according to Eq $\mu_k = \overline{\mathbf{x}}_k.$ and Eq $\pi_k = \frac{N_k}{N}$, where $N_k$ and $\overline{\mathbf{x}}_k$ are defined in Eq $N_k = \sum_{n=1}^N \gamma(z_{nk})$ and Eq $\overline{\mathbf{x}}_k = \frac{1}{N_k} \sum_{n=1}^N \gamma(z_{nk}) \mathbf{x}_n$. Now let us return to the problem. The expectation of $\mathbf{x}$ is given by Eq (9.49):
$$\mathbb{E}[\mathbf{x}] = \sum_{k=1}^{K} \pi_k^{(opt)} \boldsymbol{\mu}_k^{(opt)}$$
Here $\pi_k^{(opt)}$ and $\pmb{\mu}_k^{(opt)}$ are the parameters obtained when EM is converged.
Using Eq $\overline{\mathbf{x}}_k = \frac{1}{N_k} \sum_{n=1}^N \gamma(z_{nk}) \mathbf{x}_n$ and Eq(9.59), we can obtain:
$$\mathbb{E}[\mathbf{x}] = \sum_{k=1}^{K} \pi_{k}^{(opt)} \boldsymbol{\mu}_{k}^{(opt)}$$
$$= \sum_{k=1}^{K} \pi_{k}^{(opt)} \frac{1}{N_{k}^{(opt)}} \sum_{n=1}^{N} \gamma(z_{nk})^{(opt)} \mathbf{x}_{n}$$
$$= \sum_{k=1}^{K} \frac{N_{k}^{(opt)}}{N} \frac{1}{N_{k}^{(opt)}} \sum_{n=1}^{N} \gamma(z_{nk})^{(opt)} \mathbf{x}_{n}$$
$$= \sum_{k=1}^{K} \frac{1}{N} \sum_{n=1}^{N} \gamma(z_{nk})^{(opt)} \mathbf{x}_{n}$$
$$= \sum_{n=1}^{N} \sum_{k=1}^{K} \frac{\gamma(z_{nk})^{(opt)} \mathbf{x}_{n}}{N}$$
$$= \sum_{n=1}^{N} \frac{\mathbf{x}_{n}}{N} \sum_{k=1}^{K} \gamma(z_{nk})^{(opt)}$$
$$= \frac{1}{N} \sum_{n=1}^{N} \mathbf{x}_{n} = \bar{\mathbf{x}}$$
If we set all $\mu_k$ equal to $\hat{\mu}$ in initialization, in the first E-step, we can obtain:
$$\gamma(z_{nk})^{(1)} = \frac{\pi_k^{(0)} p(\mathbf{x}_n | \boldsymbol{\mu}_k = \widehat{\boldsymbol{\mu}})}{\sum_{j=1}^K \pi_j^{(0)} p(\mathbf{x}_n | \boldsymbol{\mu}_j = \widehat{\boldsymbol{\mu}})} = \frac{\pi_k^{(0)}}{\sum_{j=1}^K \pi_j^{(0)}} = \pi_k^{(0)}$$
Note that here $\hat{\mu}$ and $\pi_k^{(0)}$ are the initial values. In the subsequent M-step, according to Eq (9.57)-(9.60), we can obtain:
$$\boldsymbol{\mu}_{k}^{(1)} = \frac{1}{N_{k}^{(1)}} \sum_{n=1}^{N} \gamma(z_{nk})^{(1)} \mathbf{x}_{n} = \frac{\sum_{n=1}^{N} \gamma(z_{nk})^{(1)} \mathbf{x}_{n}}{\sum_{n=1}^{N} \gamma(z_{nk})^{(1)}} = \frac{\sum_{n=1}^{N} \pi_{k}^{(0)} \mathbf{x}_{n}}{\sum_{n=1}^{N} \pi_{k}^{(0)}} = \frac{\sum_{n=1}^{N} \mathbf{x}_{n}}{N}$$
And
$$\pi_k^{(1)} = \frac{N_k^{(1)}}{N} = \frac{\sum_{n=1}^N \gamma(z_{nk})^{(1)}}{N} = \frac{\sum_{n=1}^N \pi_k^{(0)}}{N} = \pi_k^{(0)}$$
In other words, in this case, after the first EM iteration the new $\boldsymbol{\mu}_k^{(1)}$ are all identical and equal to $\bar{\mathbf{x}}$ . Moreover, the new $\pi_k^{(1)}$ are identical to their corresponding initial values $\pi_k^{(0)}$ . Therefore, in the second EM iteration, we can similarly conclude that:
$$\mu_k^{(2)} = \mu_k^{(1)} = \bar{\mathbf{x}} , \quad \pi_k^{(2)} = \pi_k^{(1)} = \pi_k^{(0)}$$
In other words, the EM algorithm actually stops after the first iteration.
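To make this concrete, here is a minimal numerical sketch (assuming NumPy; the toy binary data set `X` and the initial values chosen below are hypothetical) showing that one EM iteration with identical initial means already gives $\boldsymbol{\mu}_k = \bar{\mathbf{x}}$ and leaves the mixing coefficients unchanged:

```python
import numpy as np

# Minimal numerical check: if every component of a Bernoulli mixture is
# initialised with the same mean vector, one EM iteration already gives
# mu_k = x_bar for all k and leaves pi_k unchanged.
rng = np.random.default_rng(0)
X = (rng.random((200, 5)) < 0.3).astype(float)   # toy binary data (N=200, D=5)
K = 3
pi = np.array([0.2, 0.3, 0.5])                   # arbitrary initial mixing coefficients
mu = np.tile(np.full(5, 0.4), (K, 1))            # identical initial means mu_k = mu_hat

def em_step(X, pi, mu):
    # E-step: responsibilities gamma(z_nk)
    log_p = X @ np.log(mu.T) + (1 - X) @ np.log(1 - mu.T)   # (N, K)
    unnorm = pi * np.exp(log_p)
    gamma = unnorm / unnorm.sum(axis=1, keepdims=True)
    # M-step: Eq (9.57)-(9.60)
    Nk = gamma.sum(axis=0)
    mu_new = (gamma.T @ X) / Nk[:, None]
    pi_new = Nk / X.shape[0]
    return pi_new, mu_new

pi1, mu1 = em_step(X, pi, mu)
x_bar = X.mean(axis=0)
print(np.allclose(mu1, x_bar))   # True: every mu_k^(1) equals the sample mean
print(np.allclose(pi1, pi))      # True: pi_k^(1) equals pi_k^(0)
```

Running a second `em_step` with `pi1, mu1` reproduces exactly the same values, confirming that the algorithm has effectively converged after one iteration.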
| 3,576
|
9
|
9.14
|
easy
|
Consider the joint distribution of latent and observed variables for the Bernoulli distribution obtained by forming the product of $p(\mathbf{x}|\mathbf{z}, \boldsymbol{\mu})$ given by $p(\mathbf{x}|\mathbf{z}, \boldsymbol{\mu}) = \prod_{k=1}^{K} p(\mathbf{x}|\boldsymbol{\mu}_k)^{z_k}$ and $p(\mathbf{z}|\boldsymbol{\pi})$ given by $p(\mathbf{z}|\boldsymbol{\pi}) = \prod_{k=1}^{K} \pi_k^{z_k}.$. Show that if we marginalize this joint distribution with respect to $\mathbf{z}$ , then we obtain $p(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\pi}) = \sum_{k=1}^{K} \pi_k p(\mathbf{x}|\boldsymbol{\mu}_k)$.
|
Let's follow the hint.
$$p(\mathbf{x}, \mathbf{z} | \boldsymbol{\mu}, \boldsymbol{\pi}) = p(\mathbf{x} | \mathbf{z}, \boldsymbol{\mu}) \cdot p(\mathbf{z} | \boldsymbol{\pi})$$
$$= \prod_{k=1}^{K} p(\mathbf{x} | \boldsymbol{\mu}_{k})^{z_{k}} \cdot \prod_{k=1}^{K} \pi_{k}^{z_{k}}$$
$$= \prod_{k=1}^{K} \left[ \pi_{k} p(\mathbf{x} | \boldsymbol{\mu}_{k}) \right]^{z_{k}}$$
Then we marginalize over z, yielding:
$$p(\mathbf{x}|\boldsymbol{\mu}) = \sum_{\mathbf{z}} p(\mathbf{x}, \mathbf{z}|\boldsymbol{\mu}, \boldsymbol{\pi}) = \sum_{\mathbf{z}} \prod_{k=1}^{K} \left[ \pi_k p(\mathbf{x}|\boldsymbol{\mu}_k) \right]^{z_k}$$
The summation over $\mathbf{z}$ is made up of K terms, where the k-th term corresponds to $z_k = 1$ and all other $z_j$ , with $j \neq k$ , equal to 0. Therefore, the k-th term will simply reduce to $\pi_k p(\mathbf{x}|\boldsymbol{\mu}_k)$ . Hence, performing the summation over $\mathbf{z}$ will finally give Eq $p(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\pi}) = \sum_{k=1}^{K} \pi_k p(\mathbf{x}|\boldsymbol{\mu}_k)$ just as required. To be more clear, we summarize the aforementioned statement:
$$\begin{aligned} p(\mathbf{x}|\boldsymbol{\mu}) &= \sum_{\mathbf{z}} \prod_{k=1}^{K} \left[ \pi_k p(\mathbf{x}|\boldsymbol{\mu}_k) \right]^{z_k} \\ &= \prod_{k=1}^{K} \left[ \pi_k p(\mathbf{x}|\boldsymbol{\mu}_k) \right]^{z_k} \Big|_{z_1=1} + \dots + \prod_{k=1}^{K} \left[ \pi_k p(\mathbf{x}|\boldsymbol{\mu}_k) \right]^{z_k} \Big|_{z_K=1} \\ &= \pi_1 p(\mathbf{x}|\boldsymbol{\mu}_1) + \dots + \pi_K p(\mathbf{x}|\boldsymbol{\mu}_K) \\ &= \sum_{k=1}^{K} \pi_k p(\mathbf{x}|\boldsymbol{\mu}_k) \end{aligned}$$
| 1,647
|
9
|
9.15
|
easy
|
Show that if we maximize the expected complete-data log likelihood function $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_{k} + \sum_{i=1}^{D} \left[ x_{ni} \ln \mu_{ki} + (1 - x_{ni}) \ln(1 - \mu_{ki}) \right] \right\}$ for a mixture of Bernoulli distributions with respect to $\mu_k$ , we obtain the M step equation $\mu_k = \overline{\mathbf{x}}_k.$.
|
Noticing that $\pi_k$ doesn't depend on any $\mu_{ki}$ , we can omit the first term inside the braces when calculating the derivative of Eq $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_{k} + \sum_{i=1}^{D} \left[ x_{ni} \ln \mu_{ki} + (1 - x_{ni}) \ln(1 - \mu_{ki}) \right] \right\}$ with respect to $\mu_{ki}$ :
$$\begin{split} \frac{\partial \mathbb{E}_{z}[\ln p]}{\partial \mu_{ki}} &= \frac{\partial}{\partial \mu_{ki}} \sum_{n=1}^{N} \sum_{k=1}^{K} \left\{ \gamma(z_{nk}) \sum_{i=1}^{D} \left[ x_{ni} \ln \mu_{ki} + (1 - x_{ni}) \ln(1 - \mu_{ki}) \right] \right\} \\ &= \frac{\partial}{\partial \mu_{ki}} \sum_{n=1}^{N} \sum_{k=1}^{K} \sum_{i=1}^{D} \left\{ \gamma(z_{nk}) \left[ x_{ni} \ln \mu_{ki} + (1 - x_{ni}) \ln(1 - \mu_{ki}) \right] \right\} \\ &= \sum_{n=1}^{N} \frac{\partial}{\partial \mu_{ki}} \left\{ \gamma(z_{nk}) \left[ x_{ni} \ln \mu_{ki} + (1 - x_{ni}) \ln(1 - \mu_{ki}) \right] \right\} \\ &= \sum_{n=1}^{N} \gamma(z_{nk}) \left( \frac{x_{ni}}{\mu_{ki}} - \frac{1 - x_{ni}}{1 - \mu_{ki}} \right) \\ &= \sum_{n=1}^{N} \gamma(z_{nk}) \frac{x_{ni} - \mu_{ki}}{\mu_{ki}(1 - \mu_{ki})} \end{split}$$
Setting the derivative equal to 0, we can obtain:
$$\mu_{ki} = \frac{\sum_{n=1}^{N} \gamma(z_{nk}) x_{ni}}{\sum_{n=1}^{N} \gamma(z_{nk})} = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk}) x_{ni}$$
Where $N_k$ is defined as Eq $N_k = \sum_{n=1}^N \gamma(z_{nk})$. If we group all the $\mu_{ki}$ as a column vector, i.e., $\boldsymbol{\mu}_k = [\mu_{k1}, \mu_{k2}, ..., \mu_{kD}]^T$ , we will obtain Eq $\mu_k = \overline{\mathbf{x}}_k.$ just as required.
| 1,677
|
9
|
9.16
|
easy
|
Show that if we maximize the expected complete-data log likelihood function $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_{k} + \sum_{i=1}^{D} \left[ x_{ni} \ln \mu_{ki} + (1 - x_{ni}) \ln(1 - \mu_{ki}) \right] \right\}$ for a mixture of Bernoulli distributions with respect to the mixing coefficients $\pi_k$ , using a Lagrange multiplier to enforce the summation constraint, we obtain the M step equation $\pi_k = \frac{N_k}{N}$.
|
We follow the hint beginning by introducing a Lagrange multiplier:
$$L = \mathbb{E}_{z}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\pi})] + \lambda (\sum_{k=1}^{K} \pi_{k} - 1)$$
We calculate the derivative of L with respect to $\pi_k$ and then set it equal to 0:
$$\frac{\partial L}{\partial \pi_k} = \sum_{n=1}^{N} \frac{\gamma(z_{nk})}{\pi_k} + \lambda = 0 \tag{*}$$
Here $\mathbb{E}_z[\ln p]$ is given by Eq $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_{k} + \sum_{i=1}^{D} \left[ x_{ni} \ln \mu_{ki} + (1 - x_{ni}) \ln(1 - \mu_{ki}) \right] \right\}$. We first multiply both sides of the expression by $\pi_k$ and then sum over k, which gives:
$$\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) + \sum_{k=1}^{K} \lambda \pi_{k} = 0$$
Noticing that $\sum_{k=1}^{K} \pi_k$ equals 1, we can obtain:
$$\lambda = -\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk})$$
Finally, substituting it back into (\*) and rearranging it, we can obtain:
$$\pi_k = -\frac{\sum_{n=1}^N \gamma(z_{nk})}{\lambda} = \frac{\sum_{n=1}^N \gamma(z_{nk})}{\sum_{n=1}^N \sum_{k=1}^K \gamma(z_{nk})} = \frac{N_k}{N}$$
Where $N_k$ is defined by Eq $N_k = \sum_{n=1}^N \gamma(z_{nk})$ and N is the summation of $N_k$ over k, which also equals the number of data points.
| 1,423
|
9
|
9.17
|
easy
|
Show that as a consequence of the constraint $0 \le p(\mathbf{x}_n | \boldsymbol{\mu}_k) \le 1$ for the discrete variable $\mathbf{x}_n$ , the incomplete-data log likelihood function for a mixture of Bernoulli distributions is bounded above, and hence that there are no singularities for which the likelihood goes to infinity.
|
The incomplete-data log likelihood is given by Eq $\ln p(\mathbf{X}|\boldsymbol{\mu}, \boldsymbol{\pi}) = \sum_{n=1}^{N} \ln \left\{ \sum_{k=1}^{K} \pi_k p(\mathbf{x}_n | \boldsymbol{\mu}_k) \right\}.$, and $p(\mathbf{x}_n|\boldsymbol{\mu}_k)$ lies in the interval [0, 1], which can be easily verified by its definition, i.e., Eq $p(\mathbf{x}|\boldsymbol{\mu}) = \prod_{i=1}^{D} \mu_i^{x_i} (1 - \mu_i)^{(1 - x_i)}$. Therefore, we can obtain:
$$\ln p(\mathbf{X}|\boldsymbol{\mu}, \boldsymbol{\pi}) = \sum_{n=1}^{N} \ln \left\{ \sum_{k=1}^{K} \pi_k p(\mathbf{x}_n | \boldsymbol{\mu}_k) \right\} \le \sum_{n=1}^{N} \ln \left\{ \sum_{k=1}^{K} \pi_k \times 1 \right\} \le \sum_{n=1}^{N} \ln 1 = 0$$
Where we have used the fact that the logarithm is monotonically increasing, and that the summation of $\pi_k$ over k equals 1. Moreover, to achieve the equality, we would need $p(\mathbf{x}_n|\boldsymbol{\mu}_k)$ to equal 1 for all n=1,2,...,N. However, this is hardly possible.
To illustrate this, suppose that $p(\mathbf{x}_n|\boldsymbol{\mu}_k)$ equals 1 for all data points. Without loss of generality, consider two data points $\mathbf{x}_1 = [x_{11}, x_{12}, ..., x_{1D}]^T$ and $\mathbf{x}_2 = [x_{21}, x_{22}, ..., x_{2D}]^T$ , whose *i*-th entries are different. We further assume $x_{1i} = 1$ and $x_{2i} = 0$ since $x_i$ is a binary variable. According to Eq $p(\mathbf{x}|\boldsymbol{\mu}) = \prod_{i=1}^{D} \mu_i^{x_i} (1 - \mu_i)^{(1 - x_i)}$, if we want $p(\mathbf{x}_1|\boldsymbol{\mu}_k) = 1$ , we must have $\mu_i = 1$ (otherwise it must be less than 1). However, this will make $p(\mathbf{x}_2|\boldsymbol{\mu}_k)$ equal to 0, since there is a factor $1 - \mu_i = 0$ in the product shown in Eq $p(\mathbf{x}|\boldsymbol{\mu}) = \prod_{i=1}^{D} \mu_i^{x_i} (1 - \mu_i)^{(1 - x_i)}$.
Therefore, only when the data set is pathological can EM reach this upper bound. Note that in the main text, the author states that the condition should be a pathological initialization. This is also true. For instance, in the extreme case, even when the data set is not pathological, if we initialize one $\pi_k$ to 1 and the others to 0, and some of the $\mu_i$ to 1 and the others to 0, we may also reach this bound.
| 2,290
|
9
|
9.18
|
medium
|
Consider a Bernoulli mixture model as discussed in Section 9.3.3, together with a prior distribution $p(\mu_k|a_k,b_k)$ over each of the parameter vectors $\mu_k$ given by the beta distribution $\mathrm{Beta}(\mu|a,b) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \mu^{a-1} (1-\mu)^{b-1}$, and a Dirichlet prior $p(\pi|\alpha)$ given by $Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) = \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_K)} \prod_{k=1}^K \mu_k^{\alpha_k - 1}$. Derive the EM algorithm for maximizing the posterior probability $p(\mu,\pi|\mathbf{X})$ .
|
In Prob.9.4, we have proved that if we want to maximize the posterior by EM, the only modification is that in the M-step, we need to maximize $Q'(\theta, \theta^{\text{old}}) = Q(\theta, \theta^{\text{old}}) + \ln p(\theta)$ . Here $Q(\theta, \theta^{\text{old}})$ has already been given by $\mathbb{E}_z[\ln p]$ , i.e., Eq $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_{k} + \sum_{i=1}^{D} \left[ x_{ni} \ln \mu_{ki} + (1 - x_{ni}) \ln(1 - \mu_{ki}) \right] \right\}$. Therefore, we only need to derive $\ln p(\theta)$ . Note that $\ln p(\theta)$ is made up of two parts: (i) the prior for $\boldsymbol{\mu}_k$ and (ii) the prior for $\boldsymbol{\pi}$ . We begin by dealing with the first part. Here we assume that, for fixed k, the Beta prior is the same for every $\mu_{ki}$ , i.e.:
$$p(\mu_{ki}|a_k,b_k) = \frac{\Gamma(a_k+b_k)}{\Gamma(a_k)\Gamma(b_k)} \mu_{ki}^{a_k-1} \left(1-\mu_{ki}\right)^{b_k-1}, \quad i=1,2,...,D$$
Therefore, the contribution of this Beta prior to $\ln p(\theta)$ should be given by:
$$\sum_{k=1}^{K} \sum_{i=1}^{D} (a_k - 1) \ln \mu_{ki} + (b_k - 1) \ln (1 - \mu_{ki})$$
One thing worth mentioning is that since we will maximize $Q'(\theta, \theta^{\text{old}})$ with respect to $\pi, \mu_k$ , we can omit the terms which do not depend on $\pi, \mu_k$ , such as $\Gamma(a_k + b_k) / \Gamma(a_k) \Gamma(b_k)$ . Then we deal with the second part. According to Eq $Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) = \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_K)} \prod_{k=1}^K \mu_k^{\alpha_k - 1}$, we can obtain:
$$p(\boldsymbol{\pi}|\boldsymbol{\alpha}) = \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)...\Gamma(\alpha_K)} \prod_{k=1}^K \pi_k^{\alpha_k - 1}$$
Therefore, the contribution of the Dirichlet prior to $\ln p(\theta)$ should be given by:
$$\sum_{k=1}^{K} (\alpha_k - 1) \ln \pi_k$$
Therefore, now $Q'(\theta, \theta^{\text{old}})$ can be written as:
$$Q'(\theta, \theta^{\text{old}}) = \mathbb{E}_{z}[\ln p] + \sum_{k=1}^{K} \sum_{i=1}^{D} \left[ (a_{k} - 1) \ln \mu_{ki} + (b_{k} - 1) \ln (1 - \mu_{ki}) \right] + \sum_{k=1}^{K} (\alpha_{k} - 1) \ln \pi_{k}$$
Similarly, we calculate the derivative of $Q'(\boldsymbol{\theta}, \boldsymbol{\theta}^{\text{old}})$ with respect to $\mu_{ki}$ . This can be simplified by reusing the deduction in Prob.9.15:
$$\begin{split} \frac{\partial Q^{'}}{\partial \mu_{ki}} &= \frac{\partial \mathbb{E}_{z}[\ln p]}{\partial \mu_{ki}} + \frac{a_{k} - 1}{\mu_{ki}} - \frac{b_{k} - 1}{1 - \mu_{ki}} \\ &= \sum_{n=1}^{N} \gamma(z_{nk}) (\frac{x_{ni}}{\mu_{ki}} - \frac{1 - x_{ni}}{1 - \mu_{ki}}) + \frac{a_{k} - 1}{\mu_{ki}} - \frac{b_{k} - 1}{1 - \mu_{ki}} \\ &= \frac{\sum_{n=1}^{N} x_{ni} \cdot \gamma(z_{nk}) + a_{k} - 1}{\mu_{ki}} - \frac{\sum_{n=1}^{N} (1 - x_{ni}) \gamma(z_{nk}) + b_{k} - 1}{1 - \mu_{ki}} \\ &= \frac{N_{k} \bar{x}_{ki} + a_{k} - 1}{\mu_{ki}} - \frac{N_{k} - N_{k} \bar{x}_{ki} + b_{k} - 1}{1 - \mu_{ki}} \end{split}$$
Note that here $\bar{x}_{ki}$ is defined as the *i*-th entry of $\bar{x}_k$ defined in Eq $\overline{\mathbf{x}}_k = \frac{1}{N_k} \sum_{n=1}^N \gamma(z_{nk}) \mathbf{x}_n$. To be more clear, we have used Eq $N_k = \sum_{n=1}^N \gamma(z_{nk})$ and Eq $\overline{\mathbf{x}}_k = \frac{1}{N_k} \sum_{n=1}^N \gamma(z_{nk}) \mathbf{x}_n$ in the last step:
$$\sum_{n=1}^{N} x_{ni} \cdot \gamma(z_{nk}) = N_k \cdot \left[ \frac{1}{N_k} \sum_{n=1}^{N} x_{ni} \cdot \gamma(z_{nk}) \right] = N_k \cdot \bar{x}_{ki}$$
Setting the derivative equal to 0 and rearranging it, we can obtain:
$$\mu_{ki} = \frac{N_k \bar{x}_{ki} + a_k - 1}{N_k + a_k + b_k - 2}$$
Next we maximize $Q'(\theta, \theta^{\text{old}})$ with respect to $\pi$ . By analogy to Prob.9.16, we introduce Lagrange multiplier:
$$L \propto \mathbb{E}_z[\ln p] + \sum_{k=1}^K (\alpha_k - 1) \ln \pi_k + \lambda (\sum_{k=1}^K \pi_k - 1)$$
Note that the second term on the right hand side of Q' in its definition has been omitted, since that term can be viewed as a constant with regard to $\pi$ . We then calculate the derivative of L with respect to $\pi_k$ by taking advantage of Prob.9.16:
$$\frac{\partial L}{\partial \pi_k} = \sum_{n=1}^{N} \frac{\gamma(z_{nk})}{\pi_k} + \frac{\alpha_k - 1}{\pi_k} + \lambda = 0$$
Similarly, we first multiply both sides of the expression by $\pi_k$ and then sum over k, which gives:
$$\sum_{k=1}^{K} \sum_{n=1}^{N} \gamma(z_{nk}) + \sum_{k=1}^{K} (\alpha_k - 1) + \sum_{k=1}^{K} \lambda \pi_k = 0$$
Noticing that $\sum_{k=1}^{K} \pi_k$ equals 1, we can obtain:
$$\lambda = -\sum_{k=1}^{K} N_k - \sum_{k=1}^{K} (\alpha_k - 1) = -N - \alpha_0 + K$$
Here we have used Eq $\alpha_0 = \sum_{k=1}^K \alpha_k.$. Substituting it back into the derivative, we can obtain:
$\pi_k = \frac{\sum_{n=1}^N \gamma(z_{nk}) + \alpha_k - 1}{-\lambda} = \frac{N_k + \alpha_k - 1}{N + \alpha_0 - K}$
It is not difficult to show that if N is large, the update formulae for $\pi$ and $\mu$ in this case (MAP) reduce to the results given in the main text (MLE).
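For concreteness, here is a minimal sketch (assuming NumPy) of one MAP M-step using the update formulae derived above; the responsibility matrix `gamma` is assumed to come from the usual E-step, and `a`, `b`, `alpha` are the hypothetical arrays of prior hyper-parameters $a_k$ , $b_k$ and $\alpha_k$ :

```python
import numpy as np

# One MAP M-step for the Bernoulli mixture, using the formulae derived above.
# X: (N, D) binary data, gamma: (N, K) responsibilities from the E-step,
# a, b, alpha: (K,) arrays of prior hyper-parameters (hypothetical names).
def map_m_step(X, gamma, a, b, alpha):
    N, D = X.shape
    Nk = gamma.sum(axis=0)                      # N_k
    x_bar = (gamma.T @ X) / Nk[:, None]         # x_bar_k
    # mu_ki = (N_k * x_bar_ki + a_k - 1) / (N_k + a_k + b_k - 2)
    mu = (Nk[:, None] * x_bar + (a - 1)[:, None]) / (Nk + a + b - 2)[:, None]
    # pi_k = (N_k + alpha_k - 1) / (N + alpha_0 - K)
    pi = (Nk + alpha - 1) / (N + alpha.sum() - len(alpha))
    return pi, mu
```

Note that with $a_k = b_k = 1$ and $\alpha_k = 1$ (uniform priors) these updates reduce to the maximum likelihood ones, and for large N the prior terms become negligible, as remarked above.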
| 5,170
|
9
|
9.19
|
medium
|
Consider a D-dimensional variable $\mathbf{x}$ each of whose components i is itself a multinomial variable of degree M so that $\mathbf{x}$ is a binary vector with components $x_{ij}$ where $i=1,\ldots,D$ and $j=1,\ldots,M$ , subject to the constraint that $\sum_j x_{ij}=1$ for all i. Suppose that the distribution of these variables is described by a mixture of the discrete multinomial distributions considered in Section 2.2 so that
$$p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k p(\mathbf{x} | \boldsymbol{\mu}_k)$$
where
$$p(\mathbf{x}|\boldsymbol{\mu}_k) = \prod_{i=1}^{D} \prod_{j=1}^{M} \mu_{kij}^{x_{ij}}.$$
The parameters $\mu_{kij}$ represent the probabilities $p(x_{ij}=1|\boldsymbol{\mu}_k)$ and must satisfy $0 \leqslant \mu_{kij} \leqslant 1$ together with the constraint $\sum_j \mu_{kij} = 1$ for all values of k and i. Given an observed data set $\{\mathbf{x}_n\}$ , where $n=1,\ldots,N$ , derive the E and M step equations of the EM algorithm for optimizing the mixing coefficients $\pi_k$ and the component parameters $\mu_{kij}$ of this distribution by maximum likelihood.
|
We first introduce a latent variable $\mathbf{z} = [z_1, z_2, ..., z_K]^T$ , only one of which equals 1 and others all 0. The conditional distribution of $\mathbf{x}$ is given by:
$$p(\mathbf{x}|\mathbf{z}, \boldsymbol{\mu}) = \prod_{k=1}^{K} p(\mathbf{x}|\boldsymbol{\mu}_k)^{z_k}$$
The distribution of the latent variable is given by:
$$p(\mathbf{z}|\boldsymbol{\pi}) = \prod_{k=1}^K \pi_k^{z_k}$$
If we follow the same procedure as in Prob.9.14, we can show that Eq $p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k p(\mathbf{x} | \boldsymbol{\mu}_k)$ holds. In other words, the introduction of the latent variable is valid. Therefore, according to the product rule, we can obtain:
$$p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\pi}) = \prod_{n=1}^{N} p(\mathbf{z}_n | \boldsymbol{\pi}) p(\mathbf{x}_n | \mathbf{z}_n, \boldsymbol{\mu}) = \prod_{n=1}^{N} \prod_{k=1}^{K} \left[ \pi_k p(\mathbf{x}_n | \boldsymbol{\mu}_k) \right]^{z_{nk}}$$
We further use Eq $p(\mathbf{x}|\boldsymbol{\mu}_k) = \prod_{i=1}^{D} \prod_{j=1}^{M} \mu_{kij}^{x_{ij}}.$, which gives:
$$\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\pi}) = \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \ln \left[ \pi_{k} \prod_{i=1}^{D} \prod_{j=1}^{M} \mu_{kij}^{x_{nij}} \right]$$
$$= \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \left[ \ln \pi_{k} + \sum_{i=1}^{D} \sum_{j=1}^{M} x_{nij} \ln \mu_{kij} \right]$$
Similarly, in the E-step, the responsibilities are evaluated using Bayes' theorem, which gives:
$$\gamma(z_{nk}) = \mathbb{E}[z_{nk}] = \frac{\pi_k p(\mathbf{x}_n | \boldsymbol{\mu}_k)}{\sum_{j=1}^K \pi_j p(\mathbf{x}_n | \boldsymbol{\mu}_j)}$$
Next, in the M-step, we are required to maximize $\mathbb{E}_z[\ln p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\mu}, \boldsymbol{\pi})]$ with respect to $\boldsymbol{\pi}$ and $\boldsymbol{\mu}_k$ , where $\mathbb{E}_z[\ln p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\mu}, \boldsymbol{\pi})]$ is given by:
$$\mathbb{E}_{z}[\ln p(\mathbf{X},\mathbf{Z}|\boldsymbol{\mu},\boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \Big[ \ln \pi_{k} + \sum_{i=1}^{D} \sum_{j=1}^{M} x_{nij} \ln \mu_{kij} \Big]$$
Notice that there exist two constraints: (i) the summation of $\pi_k$ over k equals 1, and (ii) the summation of $\mu_{kij}$ over j equals 1 for any k and i. Hence we need to introduce Lagrange multipliers:
$$L = \mathbb{E}_{z}[\ln p] + \lambda(\sum_{k=1}^{K} \pi_{k} - 1) + \sum_{k=1}^{K} \sum_{i=1}^{D} \eta_{ki}(\sum_{j=1}^{M} \mu_{kij} - 1)$$
First we maximize L with respect to $\pi_k$ . This is actually identical to the case in the main text. To be more clear, we calculate the derivative of L with respect to $\pi_k$ :
$$\frac{\partial L}{\partial \pi_k} = \sum_{n=1}^{N} \frac{\gamma(z_{nk})}{\pi_k} + \lambda$$
As in Prob.9.16, we can obtain:
$$\pi_k = \frac{N_k}{N}$$
Where $N_k$ is defined as:
$$N_k = \sum_{n=1}^N \gamma(z_{nk})$$
N is the summation of $N_k$ over k, and also equals the number of data points. Then we calculate the derivative of L with respect to $\mu_{kij}$ :
$$\frac{\partial L}{\partial \mu_{kij}} = \sum_{n=1}^{N} \frac{\gamma(z_{nk}) x_{nij}}{\mu_{kij}} + \eta_{ki}$$
We set it to 0 and multiply both sides by $\mu_{kij}$ , which gives:
$$\sum_{n=1}^{N} \gamma(z_{nk}) x_{nij} + \eta_{ki} \mu_{kij} = 0$$
By analogy to the derivation of $\pi_k$ , an intuitive idea is to sum the above expression over j, so that we can use the constraint $\sum_j \mu_{kij} = 1$ .
$$\eta_{ki} = -\sum_{j=1}^{M} \sum_{n=1}^{N} \gamma(z_{nk}) x_{nij} = -\sum_{n=1}^{N} \gamma(z_{nk}) \left[ \sum_{j=1}^{M} x_{nij} \right] = -\sum_{n=1}^{N} \gamma(z_{nk}) = -N_k$$
Where we have used the fact that $\sum_{j} x_{nij} = 1$ . Substituting back into the derivative, we can obtain:
$$\mu_{kij} = -\frac{\sum_{n=1}^{N} \gamma(z_{nk}) x_{nij}}{\eta_{ki}} = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk}) x_{nij}$$
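To summarize the derived algorithm, here is a minimal sketch (assuming NumPy) of one EM iteration for this mixture of multinomials; the array shapes and names are hypothetical, with `X` an $(N, D, M)$ binary array whose slices `X[n, i, :]` are one-hot:

```python
import numpy as np

# One EM iteration for the mixture of multinomials derived above.
# X: (N, D, M) one-hot data, pi: (K,), mu: (K, D, M) with mu_kij.
def e_step(X, pi, mu):
    # log p(x_n | mu_k) = sum_{i,j} x_nij * log(mu_kij)
    log_p = np.einsum('nij,kij->nk', X, np.log(mu))
    unnorm = pi * np.exp(log_p)
    return unnorm / unnorm.sum(axis=1, keepdims=True)      # gamma(z_nk)

def m_step(X, gamma):
    Nk = gamma.sum(axis=0)                                  # N_k
    pi = Nk / X.shape[0]                                    # pi_k = N_k / N
    # mu_kij = (1 / N_k) * sum_n gamma(z_nk) * x_nij
    mu = np.einsum('nk,nij->kij', gamma, X) / Nk[:, None, None]
    return pi, mu
```

Because each `X[n, i, :]` sums to one, the updated `mu` automatically satisfies the constraint $\sum_j \mu_{kij} = 1$ , mirroring the role of the multiplier $\eta_{ki}$ above.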
| 3,905
|
9
|
9.2
|
easy
|
Apply the Robbins-Monro sequential estimation procedure described in Section 2.3.5 to the problem of finding the roots of the regression function given by the derivatives of J in $J = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \|\mathbf{x}_n - \boldsymbol{\mu}_k\|^2$ with respect to $\mu_k$ . Show that this leads to a stochastic K-means algorithm in which, for each data point $\mathbf{x}_n$ , the nearest prototype $\mu_k$ is updated using $\boldsymbol{\mu}_k^{\text{new}} = \boldsymbol{\mu}_k^{\text{old}} + \eta_n(\mathbf{x}_n - \boldsymbol{\mu}_k^{\text{old}})$.
|
By analogy to Eq $J = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \|\mathbf{x}_n - \boldsymbol{\mu}_k\|^2$, we can write down:
$$J_N = J_{N-1} + \sum_{k=1}^{K} r_{Nk} ||\mathbf{x}_N - \boldsymbol{\mu}_k||^2$$
In the E-step, we still assign the N-th data point $\mathbf{x}_N$ to the closest center and suppose that this closest center is $\boldsymbol{\mu}_m$ . Therefore, the expression above will reduce to:
$$J_N = J_{N-1} + ||\mathbf{x}_N - \boldsymbol{\mu}_m||^2$$
In the M-step, we set the derivative of $J_N$ with respect to $\mu_k$ to 0, where k = 1, 2, ..., K. We can observe that for those $\mu_k$ , $k \neq m$ , we have:
$$\frac{\partial J_N}{\partial \boldsymbol{\mu}_k} = \frac{\partial J_{N-1}}{\partial \boldsymbol{\mu}_k}$$
In other words, we will only update $\mu_m$ in the M-step by setting the derivative of $J_N$ equal to 0. Utilizing Eq $\mu_k = \frac{\sum_n r_{nk} \mathbf{x}_n}{\sum_n r_{nk}}.$, we can obtain:
$$\begin{split} \boldsymbol{\mu}_{m}^{(N)} &= \frac{\sum_{n=1}^{N-1} r_{nk} \mathbf{x}_{n} + \mathbf{x}_{N}}{\sum_{n=1}^{N-1} r_{nk} + 1} \\ &= \frac{\frac{\sum_{n=1}^{N-1} r_{nk} \mathbf{x}_{n}}{\sum_{n=1}^{N-1} r_{nk}} + \frac{\mathbf{x}_{N}}{\sum_{n=1}^{N-1} r_{nk}}}{1 + \frac{1}{\sum_{n=1}^{N-1} r_{nk}}} \\ &= \frac{\boldsymbol{\mu}_{m}^{(N-1)} + \frac{\mathbf{x}_{N}}{\sum_{n=1}^{N-1} r_{nk}}}{1 + \frac{1}{\sum_{n=1}^{N-1} r_{nk}}} \\ &= \boldsymbol{\mu}_{m}^{(N-1)} + \frac{\frac{\mathbf{x}_{N}}{\sum_{n=1}^{N-1} r_{nk}} - \frac{\boldsymbol{\mu}_{m}^{(N-1)}}{\sum_{n=1}^{N-1} r_{nk}}}{1 + \frac{1}{\sum_{n=1}^{N-1} r_{nk}}} \\ &= \boldsymbol{\mu}_{m}^{(N-1)} + \frac{\mathbf{x}_{N} - \boldsymbol{\mu}_{m}^{(N-1)}}{1 + \sum_{n=1}^{N-1} r_{nk}} \end{split}$$
So far we have obtained a sequential on-line update formula just as required.
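The resulting stochastic K-means update can be sketched as follows (assuming NumPy; the function and variable names are hypothetical). The per-prototype count plays the role of $1 + \sum_{n=1}^{N-1} r_{nk}$ , so the learning rate $\eta_N = 1/\text{count}$ matches the formula derived above:

```python
import numpy as np

# Stochastic (sequential) K-means: for each incoming point, move only the
# nearest prototype towards it with a decaying learning rate.
def stochastic_kmeans_update(x, mu, counts):
    m = np.argmin(((mu - x) ** 2).sum(axis=1))   # index of the closest centre
    counts[m] += 1
    eta = 1.0 / counts[m]
    mu[m] += eta * (x - mu[m])                   # mu_m^new = mu_m^old + eta * (x - mu_m^old)
    return mu, counts
```

Here `mu` is a $(K, D)$ array of prototypes and `counts` keeps how many points have so far been assigned to each prototype; initialising `counts` to ones corresponds to treating the initial prototypes as one observation each.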
| 1,795
|
9
|
9.20
|
easy
|
Show that maximization of the expected complete-data log likelihood function $\mathbb{E}\left[\ln p(\mathbf{t}, \mathbf{w} | \alpha, \beta)\right] = \frac{M}{2} \ln \left(\frac{\alpha}{2\pi}\right) - \frac{\alpha}{2} \mathbb{E}\left[\mathbf{w}^{\mathrm{T}} \mathbf{w}\right] + \frac{N}{2} \ln \left(\frac{\beta}{2\pi}\right) - \frac{\beta}{2} \sum_{n=1}^{N} \mathbb{E}\left[(t_n - \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}_n)^2\right].$ for the Bayesian linear regression model leads to the M step reestimation result $\alpha = \frac{M}{\mathbb{E}\left[\mathbf{w}^{\mathrm{T}}\mathbf{w}\right]} = \frac{M}{\mathbf{m}_{N}^{\mathrm{T}}\mathbf{m}_{N} + \mathrm{Tr}(\mathbf{S}_{N})}.$ for $\alpha$ .
|
We first calculate the derivative of Eq $\mathbb{E}\left[\ln p(\mathbf{t}, \mathbf{w} | \alpha, \beta)\right] = \frac{M}{2} \ln \left(\frac{\alpha}{2\pi}\right) - \frac{\alpha}{2} \mathbb{E}\left[\mathbf{w}^{\mathrm{T}} \mathbf{w}\right] + \frac{N}{2} \ln \left(\frac{\beta}{2\pi}\right) - \frac{\beta}{2} \sum_{n=1}^{N} \mathbb{E}\left[(t_n - \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}_n)^2\right].$ with respect to $\alpha$ and set it to 0:
$$\frac{\partial E[\ln p]}{\partial \alpha} = \frac{M}{2} \frac{1}{2\pi} \frac{2\pi}{\alpha} - \frac{\mathbb{E}[\mathbf{w}^T \mathbf{w}]}{2} = 0$$
We rearrange the equation above, which gives:
$$\alpha = \frac{M}{\mathbb{E}[\mathbf{w}^T \mathbf{w}]} \tag{*}$$
Therefore, we now need to calculate the expectation $\mathbb{E}[\mathbf{w}^T\mathbf{w}]$ . Notice that the posterior has already been given by Eq (3.49):
$$p(\mathbf{w}|\mathbf{t}) = \mathcal{N}(\mathbf{m}_N, \mathbf{S}_N)$$
To calculate $\mathbb{E}[\mathbf{w}^T\mathbf{w}]$ , here we write down a property of a Gaussian random variable: if $\mathbf{x} \sim \mathcal{N}(\mathbf{m}, \mathbf{\Sigma})$ , we have:
$$\mathbb{E}[\mathbf{x}^T \mathbf{A} \mathbf{x}] = \mathbf{Tr}[\mathbf{A} \mathbf{\Sigma}] + \mathbf{m}^T \mathbf{A} \mathbf{m}$$
This property has been shown in Eq(378) in 'the Matrix Cookbook'. Utilizing this property, we can obtain:
$$\mathbb{E}[\mathbf{w}^T\mathbf{w}] = \mathrm{Tr}[\mathbf{S}_N] + \mathbf{m}_N^T\mathbf{m}_N$$
Substituting it back into (\*), we obtain what is required.
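As a small numerical sanity check (assuming NumPy; the posterior parameters `m_N` and `S_N` below are arbitrary hypothetical values), we can verify the identity $\mathbb{E}[\mathbf{w}^T\mathbf{w}] = \mathbf{m}_N^T\mathbf{m}_N + \mathrm{Tr}(\mathbf{S}_N)$ by Monte Carlo and then form the resulting update for $\alpha$ :

```python
import numpy as np

# Monte Carlo check of E[w^T w] = m_N^T m_N + Tr(S_N), and the M-step
# update for alpha derived above.
rng = np.random.default_rng(1)
M = 4
m_N = rng.normal(size=M)
A = rng.normal(size=(M, M))
S_N = A @ A.T + np.eye(M)                      # a valid covariance matrix

exact = m_N @ m_N + np.trace(S_N)
w = rng.multivariate_normal(m_N, S_N, size=200000)
print(exact, (w * w).sum(axis=1).mean())       # the two values should be close

alpha_new = M / exact                          # alpha = M / E[w^T w]
print("alpha_new =", alpha_new)
```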
| 1,529
|
9
|
9.21
|
medium
|
Using the evidence framework of Section 3.5, derive the M-step re-estimation equations for the parameter $\beta$ in the Bayesian linear regression model, analogous to the result $\alpha = \frac{M}{\mathbb{E}\left[\mathbf{w}^{\mathrm{T}}\mathbf{w}\right]} = \frac{M}{\mathbf{m}_{N}^{\mathrm{T}}\mathbf{m}_{N} + \mathrm{Tr}(\mathbf{S}_{N})}.$ for $\alpha$ .
|
We calculate the derivative of Eq $\mathbb{E}\left[\ln p(\mathbf{t}, \mathbf{w} | \alpha, \beta)\right] = \frac{M}{2} \ln \left(\frac{\alpha}{2\pi}\right) - \frac{\alpha}{2} \mathbb{E}\left[\mathbf{w}^{\mathrm{T}} \mathbf{w}\right] + \frac{N}{2} \ln \left(\frac{\beta}{2\pi}\right) - \frac{\beta}{2} \sum_{n=1}^{N} \mathbb{E}\left[(t_n - \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}_n)^2\right].$ with respect to $\beta$ and set it equal to 0:
$$\frac{\partial \ln p}{\partial \beta} = \frac{N}{2} \frac{1}{2\pi} \frac{2\pi}{\beta} - \frac{1}{2} \sum_{n=1}^{N} \mathbb{E}[(t_n - \mathbf{w}^T \boldsymbol{\phi}_n)^2] = 0$$
Rearranging it, we obtain:
$$\beta = \frac{N}{\sum_{n=1}^{N} \mathbb{E}[(t_n - \mathbf{w}^T \boldsymbol{\phi}_n)^2]}$$
Therefore, we are required to calculate the expectation. To be more clear, this expectation is with respect to the posterior defined by Eq (3.49):
$$p(\mathbf{w}|\mathbf{t}) = \mathcal{N}(\mathbf{m}_N, \mathbf{S}_N)$$
We expand the expectation:
$$\begin{split} \mathbb{E}[(t_n - \mathbf{w}^T \boldsymbol{\phi}_n)^2] &= \mathbb{E}[t_n^2 - 2t_n \cdot \mathbf{w}^T \boldsymbol{\phi}_n + \mathbf{w}^T \boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \mathbf{w}] \\ &= \mathbb{E}[t_n^2] - \mathbb{E}[2t_n \cdot \mathbf{w}^T \boldsymbol{\phi}_n] + \mathbb{E}[\mathbf{w}^T (\boldsymbol{\phi}_n \boldsymbol{\phi}_n^T) \mathbf{w}] \\ &= t_n^2 - 2t_n \cdot \mathbb{E}[\boldsymbol{\phi}_n^T \mathbf{w}] + \text{Tr}[\boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \mathbf{S}_N] + \mathbf{m}_N^T \boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \mathbf{m}_N \\ &= t_n^2 - 2t_n \boldsymbol{\phi}_n^T \cdot \mathbb{E}[\mathbf{w}] + \text{Tr}[\boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \mathbf{S}_N] + \mathbf{m}_N^T \boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \mathbf{m}_N \\ &= t_n^2 - 2t_n \boldsymbol{\phi}_n^T \mathbf{m}_N + \text{Tr}[\boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \mathbf{S}_N] + \mathbf{m}_N^T \boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \mathbf{m}_N \\ &= (t_n - \mathbf{m}_N^T \boldsymbol{\phi}_n)^2 + \text{Tr}[\boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \mathbf{S}_N] \end{split}$$
Substituting it back into the derivative, we can obtain:
$$\frac{1}{\beta} = \frac{1}{N} \sum_{n=1}^{N} \left\{ (t_n - \mathbf{m}_N^T \boldsymbol{\phi}_n)^2 + \text{Tr}[\boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \mathbf{S}_N] \right\}$$
$$= \frac{1}{N} \left\{ ||\mathbf{t} - \mathbf{\Phi} \mathbf{m}_N||^2 + \text{Tr}[\mathbf{\Phi}^T \mathbf{\Phi} \mathbf{S}_N] \right\}$$
Note that in the last step, we have performed vectorization. Here the *n*-th row of $\Phi$ is given by $\boldsymbol{\phi}_n^T$ , consistent with the definition given in Chapter 3.
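The resulting M-step update for $\beta$ can be sketched as follows (assuming NumPy; the design matrix `Phi`, targets `t` and posterior parameters `m_N`, `S_N` are assumed to be available):

```python
import numpy as np

# M-step update for beta in the Bayesian linear regression model, using
# 1/beta = ( ||t - Phi m_N||^2 + Tr[Phi^T Phi S_N] ) / N derived above.
def update_beta(Phi, t, m_N, S_N):
    N = Phi.shape[0]
    residual = t - Phi @ m_N
    inv_beta = (residual @ residual + np.trace(Phi.T @ Phi @ S_N)) / N
    return 1.0 / inv_beta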
| 2,657
|
9
|
9.22
|
medium
|
By maximization of the expected complete-data log likelihood defined by $\mathbb{E}_{\mathbf{w}} \left[ \ln p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) p(\mathbf{w}|\alpha) \right]$, derive the M step equations $\alpha_i^{\text{new}} = \frac{1}{m_i^2 + \Sigma_{ii}}$ and $(\beta^{\text{new}})^{-1} = \frac{\|\mathbf{t} - \mathbf{\Phi} \mathbf{m}_N\|^2 + \beta^{-1} \sum_i \gamma_i}{N}$ for re-estimating the hyperparameters of the relevance vector machine for regression.
|
First let's expand the complete-data log likelihood using Eq $p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} p(t_n|\mathbf{x}_n, \mathbf{w}, \beta^{-1}).$, Eq $p(\mathbf{w}|\boldsymbol{\alpha}) = \prod_{i=1}^{M} \mathcal{N}(w_i|0, \alpha_i^{-1})$ and Eq $p(t|\mathbf{x}, \mathbf{w}, \beta) = \mathcal{N}(t|y(\mathbf{x}), \beta^{-1})$.
$$\begin{split} \ln p(\mathbf{t}|\mathbf{X},\mathbf{w},\beta)p(\mathbf{w}|\boldsymbol{\alpha}) &= & \ln p(\mathbf{t}|\mathbf{X},\mathbf{w},\beta) + \ln p(\mathbf{w}|\boldsymbol{\alpha}) \\ &= & \sum_{n=1}^{N} \ln p(t_n|x_n,\mathbf{w},\beta^{-1}) + \sum_{i=1}^{M} \ln \mathcal{N}(w_i|0,\alpha_i^{-1}) \\ &= & \sum_{n=1}^{N} \ln \mathcal{N}(t_n|\mathbf{w}^T\boldsymbol{\phi}_n,\beta^{-1}) + \sum_{i=1}^{M} \ln \mathcal{N}(w_i|0,\alpha_i^{-1}) \\ &= & \frac{N}{2} \ln \frac{\beta}{2\pi} - \frac{\beta}{2} \sum_{n=1}^{N} (t_n - \mathbf{w}^T\boldsymbol{\phi}_n)^2 + \frac{1}{2} \sum_{i=1}^{M} \ln \frac{\alpha_i}{2\pi} - \sum_{i=1}^{M} \frac{\alpha_i}{2} w_i^2 \end{split}$$
Therefore, the expectation of the complete-data log likelihood with respect to the posterior of $\mathbf{w}$ equals:
$$\mathbb{E}_{\mathbf{w}}[\ln p] = \frac{N}{2} \ln \frac{\beta}{2\pi} - \frac{\beta}{2} \sum_{n=1}^{N} \mathbb{E}_{\mathbf{w}}[(t_n - \mathbf{w}^T \boldsymbol{\phi}_n)^2] + \frac{1}{2} \sum_{i=1}^{M} \ln \frac{\alpha_i}{2\pi} - \sum_{i=1}^{M} \frac{\alpha_i}{2} \mathbb{E}_{\mathbf{w}}[w_i^2]$$
We calculate the derivative of $\mathbb{E}_{\mathbf{w}}[\ln p]$ with respect to $\alpha_i$ and set it to 0:
$$\frac{\partial \mathbb{E}_{\mathbf{w}}[\ln p]}{\partial \alpha_i} = \frac{1}{2} \frac{1}{2\pi} \frac{2\pi}{\alpha_i} - \frac{1}{2} \mathbb{E}_{\mathbf{w}}[w_i^2] = 0$$
Rearranging it, we can obtain:
$$\alpha_i = \frac{1}{\mathbb{E}_{\mathbf{w}}[w_i^2]} = \frac{1}{\mathbb{E}_{\mathbf{w}}[\mathbf{w}\mathbf{w}^T]_{(i,i)}}$$
Here the subscript (i,i) represents the entry on the i-th row and i-th column of the matrix $\mathbb{E}_{\mathbf{w}}[\mathbf{w}\mathbf{w}^T]$ . So now, we are required to calculate the expectation. To be more clear, this expectation is with respect to the posterior defined by Eq (7.81):
$$p(\mathbf{w}|\mathbf{t}, \mathbf{X}, \boldsymbol{\alpha}, \boldsymbol{\beta}) = \mathcal{N}(\mathbf{m}, \boldsymbol{\Sigma})$$
Here we use Eq (377) described in 'the Matrix Cookbook'. We restate it here: if $\mathbf{w} \sim \mathcal{N}(\mathbf{m}, \Sigma)$ , we have:
$$\mathbb{E}[\mathbf{w}\mathbf{w}^T] = \mathbf{\Sigma} + \mathbf{m}\mathbf{m}^T$$
According to this equation, we can obtain:
$$\alpha_i = \frac{1}{\mathbb{E}_{\mathbf{w}}[\mathbf{w}\mathbf{w}^T]_{(i,i)}} = \frac{1}{(\mathbf{\Sigma} + \mathbf{m}\mathbf{m}^T)_{(i,i)}} = \frac{1}{\Sigma_{ii} + m_i^2}$$
Now we calculate the derivative of $\mathbb{E}_{\mathbf{w}}[\ln p]$ with respect to $\beta$ and set it to 0:
$$\frac{\partial \mathbb{E}_{\mathbf{w}}[\ln p]}{\partial \beta} = \frac{N}{2} \frac{1}{2\pi} \frac{2\pi}{\beta} - \frac{1}{2} \sum_{n=1}^{N} \mathbb{E}_{\mathbf{w}}[(t_n - \mathbf{w}^T \boldsymbol{\phi}_n)^2] = 0$$
Rearranging it, we obtain:
$$\boldsymbol{\beta}^{(new)} = \frac{N}{\sum_{n=1}^{N} \mathbb{E}_{\mathbf{w}}[(t_n - \mathbf{w}^T \boldsymbol{\phi}_n)^2]}$$
Therefore, we are required to calculate the expectation. By analogy to the deduction in Prob.9.21, we can obtain:
$$\begin{split} \frac{1}{\beta^{(new)}} &= \frac{1}{N} \sum_{n=1}^{N} \left\{ (t_n - \mathbf{m}^T \boldsymbol{\phi}_n)^2 + \text{Tr}[\boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \boldsymbol{\Sigma}] \right\} \\ &= \frac{1}{N} \left\{ ||\mathbf{t} - \boldsymbol{\Phi} \mathbf{m}||^2 + \text{Tr}[\boldsymbol{\Phi}^T \boldsymbol{\Phi} \boldsymbol{\Sigma}] \right\} \end{split}$$
To make it consistent with Eq $(\beta^{\text{new}})^{-1} = \frac{\|\mathbf{t} - \mathbf{\Phi} \mathbf{m}_N\|^2 + \beta^{-1} \sum_i \gamma_i}{N}$, let's first prove a statement:
$$(\boldsymbol{\beta}^{-1}\mathbf{A} + \mathbf{\Phi}^T\mathbf{\Phi})\mathbf{\Sigma} = \boldsymbol{\beta}^{-1}\mathbf{I}$$
This can be easily shown by substituting $\Sigma$ , i.e., Eq(7.83), back into the expression:
$$(\beta^{-1}\mathbf{A} + \mathbf{\Phi}^T\mathbf{\Phi})\,\mathbf{\Sigma} = (\beta^{-1}\mathbf{A} + \mathbf{\Phi}^T\mathbf{\Phi})(\mathbf{A} + \beta\mathbf{\Phi}^T\mathbf{\Phi})^{-1} = \beta^{-1}\mathbf{I}$$
Now we start from this statement and rearrange it, which gives:
$$\mathbf{\Phi}^T \mathbf{\Phi} \mathbf{\Sigma} = \beta^{-1} \mathbf{I} - \beta^{-1} \mathbf{A} \mathbf{\Sigma} = \beta^{-1} (\mathbf{I} - \mathbf{A} \mathbf{\Sigma})$$
Substituting back into the expression for $\beta^{(new)}$ :
$$\begin{split} \frac{1}{\beta^{(new)}} &= \frac{1}{N} \Big\{ ||\mathbf{t} - \mathbf{\Phi} \mathbf{m}||^2 + \mathrm{Tr}[\mathbf{\Phi}^T \mathbf{\Phi} \mathbf{\Sigma}] \Big\} \\ &= \frac{1}{N} \Big\{ ||\mathbf{t} - \mathbf{\Phi} \mathbf{m}||^2 + \mathrm{Tr}[\beta^{-1}(\mathbf{I} - \mathbf{A} \mathbf{\Sigma})] \Big\} \\ &= \frac{1}{N} \Big\{ ||\mathbf{t} - \mathbf{\Phi} \mathbf{m}||^2 + \beta^{-1} \mathrm{Tr}[\mathbf{I} - \mathbf{A} \mathbf{\Sigma}] \Big\} \\ &= \frac{1}{N} \Big\{ ||\mathbf{t} - \mathbf{\Phi} \mathbf{m}||^2 + \beta^{-1} \sum_i (1 - \alpha_i \Sigma_{ii}) \Big\} \\ &= \frac{||\mathbf{t} - \mathbf{\Phi} \mathbf{m}||^2 + \beta^{-1} \sum_i \gamma_i}{N} \end{split}$$
Here we have defined $\gamma_i = 1 - \alpha_i \Sigma_{ii}$ as in Eq $\gamma_i = 1 - \alpha_i \Sigma_{ii}$. Note that there is a typo in Eq $(\beta^{\text{new}})^{-1} = \frac{\|\mathbf{t} - \mathbf{\Phi} \mathbf{m}_N\|^2 + \beta^{-1} \sum_i \gamma_i}{N}$, $\mathbf{m}_N$ should be $\mathbf{m}$ .
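Putting the pieces together, one EM update of the RVM hyper-parameters can be sketched as follows (assuming NumPy; `Phi`, `t`, `alpha`, `beta` are assumed given, and the posterior parameters are taken from Eq (7.81)-(7.83) with $\mathbf{m} = \beta\boldsymbol{\Sigma}\boldsymbol{\Phi}^T\mathbf{t}$):

```python
import numpy as np

# One EM update of the RVM hyper-parameters using the formulae derived above.
def rvm_em_update(Phi, t, alpha, beta):
    N, M = Phi.shape
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)   # posterior covariance, cf. Eq (7.83)
    m = beta * Sigma @ Phi.T @ t                                  # posterior mean
    alpha_new = 1.0 / (m ** 2 + np.diag(Sigma))                   # alpha_i = 1 / (m_i^2 + Sigma_ii)
    gamma = 1.0 - alpha * np.diag(Sigma)                          # gamma_i = 1 - alpha_i * Sigma_ii
    residual = t - Phi @ m
    beta_new = N / (residual @ residual + gamma.sum() / beta)     # from the last expression above
    return alpha_new, beta_new
```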
| 5,656
|
9
|
9.23
|
medium
|
In Section 7.2.1 we used direct maximization of the marginal likelihood to derive the re-estimation equations $\alpha_i^{\text{new}} = \frac{\gamma_i}{m_i^2}$ and $(\beta^{\text{new}})^{-1} = \frac{\|\mathbf{t} - \mathbf{\Phi}\mathbf{m}\|^2}{N - \sum_{i} \gamma_i}$ for finding values of the hyperparameters $\alpha$ and $\beta$ for the regression RVM. Similarly, in Section 9.3.4 we used the EM algorithm to maximize the same marginal likelihood, giving the re-estimation equations $\alpha_i^{\text{new}} = \frac{1}{m_i^2 + \Sigma_{ii}}$ and $(\beta^{\text{new}})^{-1} = \frac{\|\mathbf{t} - \mathbf{\Phi} \mathbf{m}_N\|^2 + \beta^{-1} \sum_i \gamma_i}{N}$. Show that these two sets of re-estimation equations are formally equivalent.
|
Some clarifications must be made here. Eq (7.87)-(7.88) only give the same stationary points, i.e., the same $\alpha^*$ and $\beta^*$ , as those given by Eq (9.67)-(9.68). However, the hyper-parameters estimated at a specific intermediate iteration may differ between the two methods.
When convergence is reached, Eq $\alpha_i^{\text{new}} = \frac{\gamma_i}{m_i^2}$ can be written as:
$$\alpha_i^{*} = \frac{1 - \alpha_i^{*} \Sigma_{ii}}{m_i^2}$$
Rearranging it, we can obtain:
$$\alpha_i^{*} = \frac{1}{m_i^2 + \Sigma_{ii}}$$
This is identical to Eq $\alpha_i^{\text{new}} = \frac{1}{m_i^2 + \Sigma_{ii}}$. When convergence is reached, Eq $(\beta^{\text{new}})^{-1} = \frac{\|\mathbf{t} - \mathbf{\Phi} \mathbf{m}_N\|^2 + \beta^{-1} \sum_i \gamma_i}{N}$ can be written as:
$$(\beta^{*})^{-1} = \frac{||\mathbf{t} - \mathbf{\Phi} \mathbf{m}||^2 + (\beta^{*})^{-1} \sum_{i} \gamma_i}{N}$$
Rearranging it, we can obtain:
$$(\beta^{*})^{-1} = \frac{||\mathbf{t} - \boldsymbol{\Phi} \mathbf{m}||^2}{N - \sum_i \gamma_i}$$
This is identical to Eq $(\beta^{\text{new}})^{-1} = \frac{\|\mathbf{t} - \mathbf{\Phi}\mathbf{m}\|^2}{N - \sum_{i} \gamma_i}$.
| 1,253
|
9
|
9.24
|
easy
|
Verify the relation $\ln p(\mathbf{X}|\boldsymbol{\theta}) = \mathcal{L}(q,\boldsymbol{\theta}) + \mathrm{KL}(q||p)$ in which $\mathcal{L}(q, \theta)$ and $\mathrm{KL}(q||p)$ are defined by $\mathcal{L}(q, \boldsymbol{\theta}) = \sum_{\mathbf{Z}} q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta})}{q(\mathbf{Z})} \right\}$ and $KL(q||p) = -\sum_{\mathbf{Z}} q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta})}{q(\mathbf{Z})} \right\}.$, respectively.
|
We substitute Eq $\mathcal{L}(q, \boldsymbol{\theta}) = \sum_{\mathbf{Z}} q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta})}{q(\mathbf{Z})} \right\}$ and Eq $KL(q||p) = -\sum_{\mathbf{Z}} q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta})}{q(\mathbf{Z})} \right\}.$ into Eq (9.70):
$$\begin{split} L(q, \pmb{\theta}) + \mathrm{KL}(q||p) &= \sum_{\mathbf{Z}} q(\mathbf{Z}) \Big\{ \ln \frac{p(\mathbf{X}, \mathbf{Z}|\pmb{\theta})}{q(\mathbf{Z})} - \ln \frac{p(\mathbf{Z}|\mathbf{X}, \pmb{\theta})}{q(\mathbf{Z})} \Big\} \\ &= \sum_{\mathbf{Z}} q(\mathbf{Z}) \Big\{ \ln \frac{p(\mathbf{X}, \mathbf{Z}|\pmb{\theta})}{p(\mathbf{Z}|\mathbf{X}, \pmb{\theta})} \Big\} \\ &= \sum_{\mathbf{Z}} q(\mathbf{Z}) \ln p(\mathbf{X}|\pmb{\theta}) \\ &= \ln p(\mathbf{X}|\pmb{\theta}) \end{split}$$
Note that in the last step, we have used the fact that $\ln p(\mathbf{X}|\boldsymbol{\theta})$ doesn't depend on $\mathbf{Z}$ , and that the summation of $q(\mathbf{Z})$ over $\mathbf{Z}$ equals 1 because $q(\mathbf{Z})$ is a probability distribution.
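The decomposition can also be checked numerically for a toy discrete model (a minimal sketch assuming NumPy; the joint values and the distribution `q` below are arbitrary hypothetical numbers):

```python
import numpy as np

# Numerical check of ln p(X|theta) = L(q, theta) + KL(q || p) for a toy
# discrete latent variable Z taking K values.
rng = np.random.default_rng(2)
K = 4
joint = rng.random(K)                # p(X, Z=k | theta) for the observed X
q = rng.random(K); q /= q.sum()      # any distribution q(Z)

log_px = np.log(joint.sum())                         # ln p(X | theta)
posterior = joint / joint.sum()                      # p(Z | X, theta)
L = np.sum(q * np.log(joint / q))                    # L(q, theta) as defined above
KL = -np.sum(q * np.log(posterior / q))              # KL(q || p) as defined above
print(np.isclose(log_px, L + KL))                    # True
```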
| 1,096
|
9
|
9.25
|
easy
|
Show that the lower bound $\mathcal{L}(q, \theta)$ given by $\mathcal{L}(q, \boldsymbol{\theta}) = \sum_{\mathbf{Z}} q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta})}{q(\mathbf{Z})} \right\}$, with $q(\mathbf{Z}) = p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{(\text{old})})$ , has the same gradient with respect to $\boldsymbol{\theta}$ as the log likelihood function $\ln p(\mathbf{X}|\boldsymbol{\theta})$ at the point $\boldsymbol{\theta} = \boldsymbol{\theta}^{(\text{old})}$ .
|
We calculate the derivative of Eq $\mathcal{L}(q, \boldsymbol{\theta}) = \sum_{\mathbf{Z}} q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta})}{q(\mathbf{Z})} \right\}$ with respect to $\theta$ , given $q(\mathbf{Z}) = p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{(\text{old})})$ :
$$\begin{split} \frac{\partial L(q, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} &= \frac{\partial}{\partial \boldsymbol{\theta}} \Big\{ \sum_{\mathbf{Z}} p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}^{(\text{old})}) \ln \frac{p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta})}{p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}^{(\text{old})})} \Big\} \\ &= \frac{\partial}{\partial \boldsymbol{\theta}} \Big\{ \sum_{\mathbf{Z}} p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}^{(\text{old})}) \ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta}) - \sum_{\mathbf{Z}} p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}^{(\text{old})}) \ln p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}^{(\text{old})}) \Big\} \\ &= \frac{\partial}{\partial \boldsymbol{\theta}} \Big\{ \sum_{\mathbf{Z}} p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}^{(\text{old})}) \ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta}) \Big\} \\ &= \sum_{\mathbf{Z}} p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}^{(\text{old})}) \frac{\partial \ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \\ &= \sum_{\mathbf{Z}} p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}^{(\text{old})}) \frac{1}{p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta})} \frac{\partial p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \\ &= \sum_{\mathbf{Z}} p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}^{(\text{old})}) \frac{1}{p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta})} \frac{\partial p(\mathbf{X} | \boldsymbol{\theta}) \cdot p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \\ &= \sum_{\mathbf{Z}} \frac{p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}^{(\text{old})})}{p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\theta})} \Big[ p(\mathbf{X} | \boldsymbol{\theta}) \frac{\partial p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} + p(\mathbf{Z} | \mathbf{X}, \boldsymbol{\theta}) \frac{\partial p(\mathbf{X} | \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big] \end{split}$$
We evaluate this derivative at $\theta = \theta^{\text{old}}$ :
$$\begin{split} \frac{\partial L(q, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{\text{old}}} &= \Big\{ \sum_{\mathbf{Z}} \frac{p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{(\text{old})})}{p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta})} \Big[ p(\mathbf{X}|\boldsymbol{\theta}) \frac{\partial p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} + p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}) \frac{\partial p(\mathbf{X}|\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big] \Big\} \Big|_{\boldsymbol{\theta}^{\text{old}}} \\ &= \sum_{\mathbf{Z}} \frac{p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{(\text{old})})}{p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}^{(\text{old})})} \Big[ p(\mathbf{X}|\boldsymbol{\theta}^{(\text{old})}) \frac{\partial p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} + p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{(\text{old})}) \frac{\partial p(\mathbf{X}|\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} \Big] \\ &= \sum_{\mathbf{Z}} \frac{1}{p(\mathbf{X}|\boldsymbol{\theta}^{(\text{old})})} \Big[ p(\mathbf{X}|\boldsymbol{\theta}^{(\text{old})}) \frac{\partial p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} + p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{(\text{old})}) \frac{\partial p(\mathbf{X}|\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} \Big] \\ &= \sum_{\mathbf{Z}} \frac{\partial p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} + \sum_{\mathbf{Z}} \frac{p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{(\text{old})})}{p(\mathbf{X}|\boldsymbol{\theta}^{(\text{old})})} \cdot \frac{\partial p(\mathbf{X}|\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} \\ &= \sum_{\mathbf{Z}} \frac{\partial p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} + \frac{1}{p(\mathbf{X}|\boldsymbol{\theta}^{(\text{old})})} \cdot \frac{\partial p(\mathbf{X}|\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} \\ &= \sum_{\mathbf{Z}} \frac{\partial p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} + \frac{\partial \ln p(\mathbf{X}|\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} \\ &= \left\{ \frac{\partial}{\partial \boldsymbol{\theta}} \sum_{\mathbf{Z}} p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}) \right\} \Big|_{\boldsymbol{\theta}^{(\text{old})}} + \frac{\partial \ln p(\mathbf{X}|\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} \\ &= \frac{\partial \ln p(\mathbf{X}|\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} \Big|_{\boldsymbol{\theta}^{(\text{old})}} \end{split}$$
This problem can be much easier to prove if we view it from the perspective of KL divergence. Note that when $q(\mathbf{Z}) = p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{(\text{old})})$ , the KL divergence vanishes, and that in general the KL divergence is greater than or equal to zero. Therefore, we must have:
$$\frac{\partial KL(q||p)}{\partial \boldsymbol{\theta}}\Big|_{\boldsymbol{\theta}^{(\text{old})}} = 0$$
Otherwise, there would exist a point $\theta$ in the neighborhood of $\theta^{\text{(old)}}$ at which the KL divergence becomes negative, which is impossible. Then, using Eq $\ln p(\mathbf{X}|\boldsymbol{\theta}) = \mathcal{L}(q,\boldsymbol{\theta}) + \mathrm{KL}(q||p)$, the result follows immediately.
| 6,224
|
9
|
9.26
|
easy
|
Consider the incremental form of the EM algorithm for a mixture of Gaussians, in which the responsibilities are recomputed only for a specific data point $\mathbf{x}_m$ . Starting from the M-step formulae $\boldsymbol{\mu}_k = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk}) \mathbf{x}_n$ and $N_k = \sum_{n=1}^{N} \gamma(z_{nk}).$, derive the results $\boldsymbol{\mu}_{k}^{\text{new}} = \boldsymbol{\mu}_{k}^{\text{old}} + \left(\frac{\gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk})}{N_{k}^{\text{new}}}\right) \left(\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}}\right)$ and $N_k^{\text{new}} = N_k^{\text{old}} + \gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk}).$ for updating the component means.
|
From Eq $N_k = \sum_{n=1}^{N} \gamma(z_{nk}).$, we have:
$$N_k^{\mathrm{old}} = \sum_n \gamma^{\mathrm{old}}(z_{nk})$$
If now we just re-evaluate the responsibilities for one data point $\mathbf{x}_m$ , we can obtain:
$$\begin{split} N_k^{\text{new}} &= \sum_{n \neq m} \gamma^{\text{old}}(z_{nk}) + \gamma^{\text{new}}(z_{mk}) \\ &= \sum_{n} \gamma^{\text{old}}(z_{nk}) + \gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk}) \\ &= N_k^{\text{old}} + \gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk}) \end{split}$$
Similarly, according to Eq $\boldsymbol{\mu}_k = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk}) \mathbf{x}_n$, we can obtain:
$$\begin{split} \boldsymbol{\mu}_{k}^{\text{new}} &= \frac{1}{N_{k}^{\text{new}}} \sum_{n \neq m} \gamma^{\text{old}}(\boldsymbol{z}_{nk}) \mathbf{x}_{n} + \frac{\gamma^{\text{new}}(\boldsymbol{z}_{mk}) \mathbf{x}_{m}}{N_{k}^{\text{new}}} \\ &= \frac{1}{N_{k}^{\text{new}}} \sum_{n} \gamma^{\text{old}}(\boldsymbol{z}_{nk}) \mathbf{x}_{n} + \frac{\gamma^{\text{new}}(\boldsymbol{z}_{mk}) \mathbf{x}_{m}}{N_{k}^{\text{new}}} - \frac{\gamma^{\text{old}}(\boldsymbol{z}_{mk}) \mathbf{x}_{m}}{N_{k}^{\text{new}}} \\ &= \frac{N_{k}^{\text{old}}}{N_{k}^{\text{new}}} \frac{1}{N_{k}^{\text{old}}} \sum_{n} \gamma^{\text{old}}(\boldsymbol{z}_{nk}) \mathbf{x}_{n} + \frac{\gamma^{\text{new}}(\boldsymbol{z}_{mk}) \mathbf{x}_{m}}{N_{k}^{\text{new}}} - \frac{\gamma^{\text{old}}(\boldsymbol{z}_{mk}) \mathbf{x}_{m}}{N_{k}^{\text{new}}} \\ &= \frac{N_{k}^{\text{old}}}{N_{k}^{\text{new}}} \boldsymbol{\mu}_{k}^{\text{old}} + \left[ \gamma^{\text{new}}(\boldsymbol{z}_{mk}) - \gamma^{\text{old}}(\boldsymbol{z}_{mk}) \right] \frac{\mathbf{x}_{m}}{N_{k}^{\text{new}}} \\ &= \boldsymbol{\mu}_{k}^{\text{old}} - \frac{N_{k}^{\text{new}} - N_{k}^{\text{old}}}{N_{k}^{\text{new}}} \boldsymbol{\mu}_{k}^{\text{old}} + \left[ \gamma^{\text{new}}(\boldsymbol{z}_{mk}) - \gamma^{\text{old}}(\boldsymbol{z}_{mk}) \right] \frac{\mathbf{x}_{m}}{N_{k}^{\text{new}}} \\ &= \boldsymbol{\mu}_{k}^{\text{old}} - \frac{\gamma^{\text{new}}(\boldsymbol{z}_{mk}) - \gamma^{\text{old}}(\boldsymbol{z}_{mk})}{N_{k}^{\text{new}}} \boldsymbol{\mu}_{k}^{\text{old}} + \left[ \gamma^{\text{new}}(\boldsymbol{z}_{mk}) - \gamma^{\text{old}}(\boldsymbol{z}_{mk}) \right] \frac{\mathbf{x}_{m}}{N_{k}^{\text{new}}} \\ &= \boldsymbol{\mu}_{k}^{\text{old}} + \frac{\gamma^{\text{new}}(\boldsymbol{z}_{mk}) - \gamma^{\text{old}}(\boldsymbol{z}_{mk})}{N_{k}^{\text{new}}} \cdot \left( \mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}} \right) \end{split}$$
Just as required.
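As a small numerical check of these incremental formulae (a sketch assuming NumPy; `gamma_old` and `gamma_new` are hypothetical responsibility matrices differing only in row m), we can compare them with a full recomputation of $N_k$ and $\boldsymbol{\mu}_k$:

```python
import numpy as np

# Check the incremental updates against a full recomputation when only the
# responsibilities of a single point x_m are recomputed.
rng = np.random.default_rng(3)
N, D, K, m = 50, 2, 3, 7
X = rng.normal(size=(N, D))
gamma_old = rng.random((N, K)); gamma_old /= gamma_old.sum(axis=1, keepdims=True)
gamma_new = gamma_old.copy()
gamma_new[m] = rng.random(K); gamma_new[m] /= gamma_new[m].sum()

Nk_old = gamma_old.sum(axis=0)
mu_old = (gamma_old.T @ X) / Nk_old[:, None]

# Incremental updates derived above
Nk_new = Nk_old + gamma_new[m] - gamma_old[m]
mu_new = mu_old + ((gamma_new[m] - gamma_old[m]) / Nk_new)[:, None] * (X[m] - mu_old)

# Full recomputation for comparison
print(np.allclose(Nk_new, gamma_new.sum(axis=0)))                              # True
print(np.allclose(mu_new, (gamma_new.T @ X) / gamma_new.sum(axis=0)[:, None])) # True
```

Both checks return `True`, reflecting the fact that the incremental update for the means is exact, not an approximation.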
| 2,600
|
9
|
9.27
|
medium
|
Derive M-step formulae for updating the covariance matrices and mixing coefficients in a Gaussian mixture model when the responsibilities are updated incrementally, analogous to the result $\boldsymbol{\mu}_{k}^{\text{new}} = \boldsymbol{\mu}_{k}^{\text{old}} + \left(\frac{\gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk})}{N_{k}^{\text{new}}}\right) \left(\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}}\right)$ for updating the means.
|
By analogy to the previous problem, we use Eq (9.24)-Eq (9.27), beginning by deriving an update formula for the mixing coefficients $\pi_k$ :
$$\begin{split} \pi_k^{\text{new}} &= \frac{N_k^{\text{new}}}{N} = \frac{1}{N} \Big\{ N_k^{\text{old}} + \gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk}) \Big\} \\ &= \pi_k^{\text{old}} + \frac{\gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk})}{N} \end{split}$$
Here we have used the conclusion (the update formula for $N_k^{\text{new}}$ ) in the previous problem. Next we deal with the covariance matrix $\Sigma$ . By analogy to
the previous problem, we can obtain:
$$\begin{split} \boldsymbol{\Sigma}_{k}^{\text{new}} &= \frac{1}{N_{k}^{\text{new}}} \sum_{n \neq m} \gamma^{\text{old}}(z_{nk}) (\mathbf{x}_{n} - \boldsymbol{\mu}_{k}^{\text{new}}) (\mathbf{x}_{n} - \boldsymbol{\mu}_{k}^{\text{new}})^{T} + \frac{1}{N_{k}^{\text{new}}} \gamma^{\text{new}}(z_{mk}) (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{new}}) (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{new}})^{T} \\ &\approx \frac{1}{N_{k}^{\text{new}}} \sum_{n \neq m} \gamma^{\text{old}}(z_{nk}) (\mathbf{x}_{n} - \boldsymbol{\mu}_{k}^{\text{old}}) (\mathbf{x}_{n} - \boldsymbol{\mu}_{k}^{\text{old}})^{T} + \frac{1}{N_{k}^{\text{new}}} \gamma^{\text{new}}(z_{mk}) (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}}) (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}})^{T} \\ &= \frac{1}{N_{k}^{\text{new}}} \sum_{n} \gamma^{\text{old}}(z_{nk}) (\mathbf{x}_{n} - \boldsymbol{\mu}_{k}^{\text{old}}) (\mathbf{x}_{n} - \boldsymbol{\mu}_{k}^{\text{old}})^{T} + \frac{\gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk})}{N_{k}^{\text{new}}} (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}}) (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}})^{T} \\ &= \frac{N_{k}^{\text{old}}}{N_{k}^{\text{new}}} \boldsymbol{\Sigma}_{k}^{\text{old}} + \frac{\gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk})}{N_{k}^{\text{new}}} (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}}) (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}})^{T} \\ &= \left\{ 1 - \frac{\gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk})}{N_{k}^{\text{new}}} \right\} \boldsymbol{\Sigma}_{k}^{\text{old}} + \frac{\gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk})}{N_{k}^{\text{new}}} (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}}) (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}})^{T} \\ &= \boldsymbol{\Sigma}_{k}^{\text{old}} + \frac{\gamma^{\text{new}}(z_{mk})}{N_{k}^{\text{new}}} \left\{ (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}}) (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}})^{T} - \boldsymbol{\Sigma}_{k}^{\text{old}} \right\} - \frac{\gamma^{\text{old}}(z_{mk})}{N_{k}^{\text{new}}} \left\{ (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}}) (\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}})^{T} - \boldsymbol{\Sigma}_{k}^{\text{old}} \right\} \end{split}$$
One important thing worth mentioning is that in the second step there is an approximate equality. Note that in the previous problem, we have
shown that if we only recompute the responsibilities of the data point $\mathbf{x}_m$ , all the centers $\boldsymbol{\mu}_k$ will also change from $\boldsymbol{\mu}_k^{\text{old}}$ to $\boldsymbol{\mu}_k^{\text{new}}$ , and the update formula is given by Eq $\boldsymbol{\mu}_{k}^{\text{new}} = \boldsymbol{\mu}_{k}^{\text{old}} + \left(\frac{\gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk})}{N_{k}^{\text{new}}}\right) \left(\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}}\right)$. However, for computational convenience, we have made an approximation here. Other approximations can also be applied. For instance, you can replace $\boldsymbol{\mu}_k^{\text{new}}$ with $\boldsymbol{\mu}_k^{\text{old}}$ wherever it occurs.
The exact solution would be given by substituting Eq $\boldsymbol{\mu}_{k}^{\text{new}} = \boldsymbol{\mu}_{k}^{\text{old}} + \left(\frac{\gamma^{\text{new}}(z_{mk}) - \gamma^{\text{old}}(z_{mk})}{N_{k}^{\text{new}}}\right) \left(\mathbf{x}_{m} - \boldsymbol{\mu}_{k}^{\text{old}}\right)$ into the right-hand side of the first equality and then rearranging it, in order to construct a relation between $\Sigma_k^{\mathrm{new}}$ and $\Sigma_k^{\mathrm{old}}$ . However, this is rather complicated.
| 5,281
|
9
|
9.3
|
easy
|
Consider a Gaussian mixture model in which the marginal distribution $p(\mathbf{z})$ for the latent variable is given by $p(\mathbf{z}) = \prod_{k=1}^{K} \pi_k^{z_k}.$, and the conditional distribution $p(\mathbf{x}|\mathbf{z})$ for the observed variable is given by $p(\mathbf{x}|\mathbf{z}) = \prod_{k=1}^{K} \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)^{z_k}.$. Show that the marginal distribution $p(\mathbf{x})$ , obtained by summing $p(\mathbf{z})p(\mathbf{x}|\mathbf{z})$ over all possible values of $\mathbf{z}$ , is a Gaussian mixture of the form $p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k \mathcal{N}(\mathbf{x} | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k).$.
|
We simply follow the hint.
$$p(\mathbf{x}) = \sum_{\mathbf{z}} p(\mathbf{z}) p(\mathbf{x}|\mathbf{z})$$
$$= \sum_{\mathbf{z}} \prod_{k=1}^{K} \left[ (\pi_k \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)) \right]^{z_k}$$
Note that we have used 1-of-K coding scheme for $\mathbf{z} = [z_1, z_2, ..., z_K]^T$ . To be more specific, only one of $z_1, z_2, ..., z_K$ will be 1 and all others will equal 0. Therefore, the summation over $\mathbf{z}$ actually consists of K terms and the k-th term corresponds to $z_k$ equal to 1 and others 0. Moreover, for the k-th term, the product will reduce to $\pi_k \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)$ . Therefore, we can obtain:
$$p(\mathbf{x}) = \sum_{\mathbf{z}} \prod_{k=1}^{K} \left[ \pi_k \mathcal{N}(\mathbf{x} | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right]^{z_k} = \sum_{k=1}^{K} \pi_k \mathcal{N}(\mathbf{x} | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)$$
Just as required.
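This marginalization can also be verified numerically (a sketch assuming NumPy and SciPy; the mixture parameters and the test point below are arbitrary hypothetical values):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Check that summing p(z) p(x|z), written in the product forms quoted in the
# problem, over the K one-hot settings of z recovers the mixture density.
rng = np.random.default_rng(4)
K, D = 3, 2
pi = rng.random(K); pi /= pi.sum()
mus = rng.normal(size=(K, D))
Sigmas = [np.eye(D) * s for s in (0.5, 1.0, 2.0)]
x = rng.normal(size=D)

def joint(x, z):
    # p(z) p(x|z) = prod_k [ pi_k * N(x | mu_k, Sigma_k) ]^{z_k}
    val = 1.0
    for k in range(K):
        val *= (pi[k] * multivariate_normal.pdf(x, mus[k], Sigmas[k])) ** z[k]
    return val

sum_over_z = sum(joint(x, np.eye(K)[k]) for k in range(K))
mixture = sum(pi[k] * multivariate_normal.pdf(x, mus[k], Sigmas[k]) for k in range(K))
print(np.isclose(sum_over_z, mixture))   # True
```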
| 985
|
9
|
9.4
|
easy
|
Suppose we wish to use the EM algorithm to maximize the posterior distribution over parameters $p(\theta|\mathbf{X})$ for a model containing latent variables, where $\mathbf{X}$ is the observed data set. Show that the E step remains the same as in the maximum likelihood case, whereas in the M step the quantity to be maximized is given by $\mathcal{Q}(\theta, \theta^{\text{old}}) + \ln p(\theta)$ where $\mathcal{Q}(\theta, \theta^{\text{old}})$ is defined by $Q(\boldsymbol{\theta}, \boldsymbol{\theta}^{\text{old}}) = \sum_{\mathbf{Z}} p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{\text{old}}) \ln p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}).$.
|
According to Bayes' Theorem, we can write:
$$p(\boldsymbol{\theta}|\mathbf{X}) \propto p(\mathbf{X}|\boldsymbol{\theta})p(\boldsymbol{\theta})$$
Taking logarithm on both sides, we can write:
$$\ln p(\boldsymbol{\theta}|\mathbf{X}) \propto \ln p(\mathbf{X}|\boldsymbol{\theta}) + \ln p(\boldsymbol{\theta})$$
Further utilizing Eq $\ln p(\mathbf{X}|\boldsymbol{\theta}) = \ln \left\{ \sum_{\mathbf{Z}} p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}) \right\}.$, we can obtain:
$$\ln p(\boldsymbol{\theta}|\mathbf{X}) \propto \ln \left\{ \sum_{\mathbf{Z}} p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}) \right\} + \ln p(\boldsymbol{\theta})$$
$$= \ln \left\{ \left[ \sum_{\mathbf{Z}} p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}) \right] \cdot p(\boldsymbol{\theta}) \right\}$$
$$= \ln \left\{ \sum_{\mathbf{Z}} p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}) p(\boldsymbol{\theta}) \right\}$$
In other words, in this case, the only modification is that the term $p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta})$ in Eq $\ln p(\mathbf{X}|\boldsymbol{\theta}) = \ln \left\{ \sum_{\mathbf{Z}} p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}) \right\}.$ will be replaced by $p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta})p(\boldsymbol{\theta})$ . Therefore, in the E-step, we still need to calculate the posterior $p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{old})$ and then in the M-step, we are required to maximize $Q'(\boldsymbol{\theta}, \boldsymbol{\theta}^{old})$ . In this case, by analogy to Eq $Q(\boldsymbol{\theta}, \boldsymbol{\theta}^{\text{old}}) = \sum_{\mathbf{Z}} p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{\text{old}}) \ln p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}).$, we can write down $Q'(\boldsymbol{\theta}, \boldsymbol{\theta}^{old})$ :
$$\begin{aligned} Q'(\boldsymbol{\theta}, \boldsymbol{\theta}^{old}) &= \sum_{Z} p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{old}) \ln \left[ p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}) p(\boldsymbol{\theta}) \right] \\ &= \sum_{Z} p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{old}) \left[ \ln p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}) + \ln p(\boldsymbol{\theta}) \right] \\ &= \sum_{Z} p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{old}) \ln p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}) + \sum_{Z} p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{old}) \ln p(\boldsymbol{\theta}) \\ &= \sum_{Z} p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{old}) \ln p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}) + \ln p(\boldsymbol{\theta}) \cdot \sum_{Z} p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{old}) \\ &= \sum_{Z} p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\theta}^{old}) \ln p(\mathbf{X}, \mathbf{Z}|\boldsymbol{\theta}) + \ln p(\boldsymbol{\theta}) \\ &= Q(\boldsymbol{\theta}, \boldsymbol{\theta}^{old}) + \ln p(\boldsymbol{\theta}) \end{aligned}$$
Just as required.
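To make the modified M-step concrete, the sketch below treats the mixing coefficients of a Gaussian mixture under a symmetric Dirichlet prior $\text{Dir}(\boldsymbol{\pi}|\alpha_0)$, a prior chosen here only for illustration. The E-step (computation of the responsibilities) is unchanged; the M-step maximizes $Q + \ln p(\boldsymbol{\pi})$ instead of $Q$ alone, and the closed-form weight update quoted in the comment is the standard MAP result, not derived in the text above.

```python
import numpy as np

def m_step_weights_map(resp, alpha0=2.0):
    """M-step for the mixing coefficients when maximizing Q + ln p(pi).

    resp  : (N, K) responsibilities gamma(z_nk) from the unchanged E-step.
    alpha0: parameter of an assumed symmetric Dirichlet prior on pi.
    The MAP closed form pi_k = (N_k + alpha0 - 1) / (N + K*(alpha0 - 1))
    is a standard result, quoted here for illustration only.
    """
    N, K = resp.shape
    Nk = resp.sum(axis=0)
    pi_ml = Nk / N                                           # maximizes Q alone
    pi_map = (Nk + alpha0 - 1.0) / (N + K * (alpha0 - 1.0))  # maximizes Q + ln p(pi)
    return pi_ml, pi_map

# Illustrative call with made-up responsibilities
resp = np.random.default_rng(0).dirichlet(np.ones(3), size=100)
print(m_step_weights_map(resp))
```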
| 2,859
|
9
|
9.5
|
easy
|
Consider the directed graph for a Gaussian mixture model shown in Figure 9.6. By making use of the d-separation criterion discussed in Section 8.2, show that the posterior distribution of the latent variables factorizes with respect to the different data points so that
$$p(\mathbf{Z}|\mathbf{X}, \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi}) = \prod_{n=1}^{N} p(\mathbf{z}_n | \mathbf{x}_n, \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi}).$$
|
Notice that the condition on $\mu$ , $\Sigma$ and $\pi$ can be omitted here, and we only need to prove $p(\mathbf{Z}|\mathbf{X})$ can be written as the product of $p(\mathbf{z}_n|\mathbf{x}_n)$ . Correspondingly, the small dots representing $\mu$ , $\Sigma$ and $\pi$ can also be omitted in Fig 9.6. Observing Fig 9.6 and based on definition, we can write:
$$p(\mathbf{X}, \mathbf{Z}) = p(\mathbf{x}_1|\mathbf{z}_1)p(\mathbf{z}_1)...p(\mathbf{x}_N|\mathbf{z}_N)p(\mathbf{z}_N) = p(\mathbf{x}_1, \mathbf{z}_1)...p(\mathbf{x}_N, \mathbf{z}_N)$$
Moreover, since there is no link from $\mathbf{z}_m$ to $\mathbf{z}_n$ , from $\mathbf{x}_m$ to $\mathbf{x}_n$ , and from $\mathbf{z}_m$ to $\mathbf{x}_n$ ( $m \neq n$ ), we can obtain:
$$p(\mathbf{Z}) = p(\mathbf{z}_1)...p(\mathbf{z}_N), \quad p(\mathbf{X}) = p(\mathbf{x}_1)...p(\mathbf{x}_N)$$
These can also be verified by calculating the marginal distribution from $p(\mathbf{X}, \mathbf{Z})$ , for example:
$$p(\mathbf{Z}) = \sum_{\mathbf{X}} p(\mathbf{X}, \mathbf{Z}) = \sum_{\mathbf{x}_1, \dots, \mathbf{x}_N} p(\mathbf{x}_1, \mathbf{z}_1) \dots p(\mathbf{x}_N, \mathbf{z}_N) = p(\mathbf{z}_1) \dots p(\mathbf{z}_N)$$
According to Bayes' Theorem, we have
$$p(\mathbf{Z}|\mathbf{X}) = \frac{p(\mathbf{X}|\mathbf{Z})p(\mathbf{Z})}{p(\mathbf{X})}$$
$$= \frac{\left[\prod_{n=1}^{N} p(\mathbf{x}_{n}|\mathbf{z}_{n})\right] \left[\prod_{n=1}^{N} p(\mathbf{z}_{n})\right]}{\prod_{n=1}^{N} p(\mathbf{x}_{n})}$$
$$= \prod_{n=1}^{N} \frac{p(\mathbf{x}_{n}|\mathbf{z}_{n})p(\mathbf{z}_{n})}{p(\mathbf{x}_{n})}$$
$$= \prod_{n=1}^{N} p(\mathbf{z}_{n}|\mathbf{x}_{n})$$
Just as required. The essence behind the problem is that in the directed graph, there are only links from $\mathbf{z}_n$ to $\mathbf{x}_n$ . The deeper reason is that (i) the mixture model is given by Fig 9.4, and (ii) we assume the data $\{\mathbf{x}_n\}$ are i.i.d., and thus there is no link from $\mathbf{x}_m$ to $\mathbf{x}_n$ .
| 1,984
|
9
|
9.6
|
medium
|
Consider a special case of a Gaussian mixture model in which the covariance matrices $\boldsymbol{\Sigma}_k$ of the components are all constrained to have a common value $\boldsymbol{\Sigma}$. Derive the EM equations for maximizing the likelihood function under such a model.
|
By analogy to Eq $\Sigma_k = \frac{1}{N_k} \sum_{n=1}^N \gamma(z_{nk}) (\mathbf{x}_n - \boldsymbol{\mu}_k) (\mathbf{x}_n - \boldsymbol{\mu}_k)^{\mathrm{T}}$, we calculate the derivative of Eq $\ln p(\mathbf{X}|\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \sum_{n=1}^{N} \ln \left\{ \sum_{k=1}^{K} \pi_k \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}.$ with respect to $\Sigma$ :
$$\frac{\partial \ln p}{\partial \Sigma} = \frac{\partial}{\partial \Sigma} \Big\{ \sum_{n=1}^{N} \ln a_n \Big\} = \sum_{n=1}^{N} \frac{1}{a_n} \frac{\partial a_n}{\partial \Sigma}$$
(\*)
Where we have defined:
$$a_n = \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma})$$
Recall that in Prob.2.34, we have proved:
$$\frac{\partial \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma})}{\partial \boldsymbol{\Sigma}} = -\frac{1}{2} \boldsymbol{\Sigma}^{-1} + \frac{1}{2} \boldsymbol{\Sigma}^{-1} \mathbf{S}_{nk} \boldsymbol{\Sigma}^{-1}$$
Where we have defined:
$$\mathbf{S}_{nk} = (\mathbf{x}_n - \boldsymbol{\mu}_k)(\mathbf{x}_n - \boldsymbol{\mu}_k)^T$$
Therefore, we can obtain:
$$\begin{split} \frac{\partial a_n}{\partial \boldsymbol{\Sigma}} &= \frac{\partial}{\partial \boldsymbol{\Sigma}} \Big\{ \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}) \Big\} \\ &= \sum_{k=1}^K \frac{\partial}{\partial \boldsymbol{\Sigma}} \Big\{ \pi_k \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}) \Big\} \\ &= \sum_{k=1}^K \pi_k \frac{\partial}{\partial \boldsymbol{\Sigma}} \Big\{ \exp \big[ \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}) \big] \Big\} \\ &= \sum_{k=1}^K \pi_k \cdot \exp \big[ \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}) \big] \cdot \frac{\partial}{\partial \boldsymbol{\Sigma}} \Big[ \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}) \Big] \\ &= \sum_{k=1}^K \pi_k \cdot \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}) \cdot (-\frac{1}{2} \boldsymbol{\Sigma}^{-1} + \frac{1}{2} \boldsymbol{\Sigma}^{-1} \mathbf{S}_{nk} \boldsymbol{\Sigma}^{-1}) \end{split}$$
Substitute the equation above into (\*), we can obtain:
$$\begin{split} \frac{\partial \ln p}{\partial \boldsymbol{\Sigma}} &= \sum_{n=1}^{N} \frac{1}{a_n} \frac{\partial a_n}{\partial \boldsymbol{\Sigma}} \\ &= \sum_{n=1}^{N} \frac{\sum_{k=1}^{K} \pi_k \cdot \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}) \cdot (-\frac{1}{2} \boldsymbol{\Sigma}^{-1} + \frac{1}{2} \boldsymbol{\Sigma}^{-1} \mathbf{S}_{nk} \boldsymbol{\Sigma}^{-1})}{\sum_{j=1}^{K} \pi_j \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_j, \boldsymbol{\Sigma})} \\ &= \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \cdot (-\frac{1}{2} \boldsymbol{\Sigma}^{-1} + \frac{1}{2} \boldsymbol{\Sigma}^{-1} \mathbf{S}_{nk} \boldsymbol{\Sigma}^{-1}) \\ &= -\frac{1}{2} \Big\{ \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \Big\} \boldsymbol{\Sigma}^{-1} + \frac{1}{2} \boldsymbol{\Sigma}^{-1} \Big\{ \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \mathbf{S}_{nk} \Big\} \boldsymbol{\Sigma}^{-1} \end{split}$$
If we set the derivative equal to 0, we can obtain:
$$\boldsymbol{\Sigma} = \frac{\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \mathbf{S}_{nk}}{\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk})}$$
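Note that the denominator equals N, since the responsibilities sum to one over k for every data point. The resulting tied-covariance M-step is easy to sketch in NumPy; in the snippet below the data, responsibilities and means are made up, and the responsibilities are assumed to come from the usual E-step.

```python
import numpy as np

def m_step_shared_cov(X, resp, mu):
    """Tied-covariance M-step: Sigma = sum_{n,k} gamma_nk S_nk / N.

    X: (N, D) data, resp: (N, K) responsibilities, mu: (K, D) component means.
    """
    N, D = X.shape
    Sigma = np.zeros((D, D))
    for k in range(mu.shape[0]):
        diff = X - mu[k]                           # (N, D)
        Sigma += (resp[:, k, None] * diff).T @ diff
    return Sigma / N                               # sum_{n,k} gamma_nk = N

# Illustrative usage with made-up quantities
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
resp = rng.dirichlet(np.ones(3), size=200)
mu = rng.normal(size=(3, 2))
print(m_step_shared_cov(X, resp, mu))
```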
| 3,394
|
9
|
9.7
|
easy
|
Verify that maximization of the complete-data log likelihood $\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi}) = \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \left\{ \ln \pi_k + \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}.$ for a Gaussian mixture model leads to the result that the means and covariances of each component are fitted independently to the corresponding group of data points, and the mixing coefficients are given by the fractions of points in each group.
|
We begin by calculating the derivative of Eq $\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi}) = \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \left\{ \ln \pi_k + \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}.$ with respect to $\mu_k$ :
$$\frac{\partial \ln p}{\partial \boldsymbol{\mu}_{k}} = \frac{\partial}{\partial \boldsymbol{\mu}_{k}} \left\{ \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \left[ \ln \pi_{k} + \ln \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) \right] \right\}$$
$$= \frac{\partial}{\partial \boldsymbol{\mu}_{k}} \left\{ \sum_{n=1}^{N} z_{nk} \left[ \ln \pi_{k} + \ln \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) \right] \right\}$$
$$= \sum_{n=1}^{N} \frac{\partial}{\partial \boldsymbol{\mu}_{k}} \left\{ z_{nk} \ln \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) \right\}$$
$$= \sum_{\mathbf{x}_{n} \in C_{k}} \frac{\partial}{\partial \boldsymbol{\mu}_{k}} \left\{ \ln \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) \right\}$$
Where we have used $\mathbf{x}_n \in C_k$ to denote the data points $\mathbf{x}_n$ that are assigned to the k-th cluster. Therefore, $\boldsymbol{\mu}_k$ is given by the mean of those $\mathbf{x}_n \in C_k$ , just as in the case of a single Gaussian. Exactly the same holds for the covariance. Next, we maximize Eq $\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi}) = \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \left\{ \ln \pi_k + \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}.$ with respect to $\pi_k$ by introducing a Lagrange multiplier:
$$L = \ln p + \lambda (\sum_{k=1}^{K} \pi_k - 1)$$
We calculate the derivative of L with respect to $\pi_k$ and set it to 0:
$$\frac{\partial L}{\partial \pi_k} = \sum_{n=1}^{N} \frac{z_{nk}}{\pi_k} + \lambda = 0$$
We multiply both sides by $\pi_k$ and sum over k making use of the constraint Eq $\sum_{k=1}^{K} \pi_k = 1$, yielding $\lambda = -N$ . Substituting it back into the expression, we can obtain:
$$\pi_k = \frac{1}{N} \sum_{n=1}^N z_{nk}$$
Just as required.
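The conclusion translates directly into code: with the latent assignments known, each component is fitted only to its own group of points, and $\pi_k$ is the fraction of points in that group. A minimal sketch with hypothetical hard assignments z:

```python
import numpy as np

def complete_data_mle(X, z):
    """ML fit of a Gaussian mixture given the latent assignments (complete data).

    X: (N, D) data, z: (N,) integer component labels.
    """
    N, D = X.shape
    K = z.max() + 1
    pi, mu, Sigma = np.zeros(K), np.zeros((K, D)), np.zeros((K, D, D))
    for k in range(K):
        Xk = X[z == k]                 # the group of points assigned to component k
        pi[k] = len(Xk) / N            # fraction of points in the group
        mu[k] = Xk.mean(axis=0)        # group mean
        diff = Xk - mu[k]
        Sigma[k] = diff.T @ diff / len(Xk)   # group covariance
    return pi, mu, Sigma

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
z = rng.integers(0, 3, size=300)       # made-up assignments for illustration
print(complete_data_mle(X, z)[0])
```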
| 2,243
|
9
|
9.8
|
easy
|
Show that if we maximize $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_k + \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}. \quad$ with respect to $\mu_k$ while keeping the responsibilities $\gamma(z_{nk})$ fixed, we obtain the closed form solution given by $\boldsymbol{\mu}_k = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk}) \mathbf{x}_n$.
|
Since $\gamma(z_{nk})$ is fixed, the only dependency of Eq $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_k + \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}. \quad$ on $\mu_k$ occurs in the Gaussian, yielding:
$$\frac{\partial \mathbb{E}_{z}[\ln p]}{\partial \boldsymbol{\mu}_{k}} = \frac{\partial}{\partial \boldsymbol{\mu}_{k}} \left\{ \sum_{n=1}^{N} \gamma(z_{nk}) \ln \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) \right\}$$
$$= \sum_{n=1}^{N} \gamma(z_{nk}) \cdot \frac{\partial \ln \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k})}{\partial \boldsymbol{\mu}_{k}}$$
$$= \sum_{n=1}^{N} \gamma(z_{nk}) \cdot \left[ -\boldsymbol{\Sigma}_{k}^{-1}(\mathbf{x}_{n} - \boldsymbol{\mu}_{k}) \right]$$
Setting the derivative equal to 0, we obtain exactly Eq $0 = -\sum_{n=1}^{N} \frac{\pi_k \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{j} \pi_j \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)} \boldsymbol{\Sigma}_k(\mathbf{x}_n - \boldsymbol{\mu}_k)$, and consequently Eq $\boldsymbol{\mu}_k = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk}) \mathbf{x}_n$ just as required. Note that there is a typo in Eq $0 = -\sum_{n=1}^{N} \frac{\pi_k \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{j} \pi_j \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_j, \boldsymbol{\Sigma}_j)} \boldsymbol{\Sigma}_k(\mathbf{x}_n - \boldsymbol{\mu}_k)$, $\Sigma_k$ should be $\Sigma_k^{-1}$ .
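In code, the resulting update is just a responsibility-weighted average of the data; a minimal sketch, with the responsibilities assumed given:

```python
import numpy as np

def update_means(X, resp):
    """mu_k = (1/N_k) sum_n gamma(z_nk) x_n,  with N_k = sum_n gamma(z_nk)."""
    Nk = resp.sum(axis=0)                  # (K,)
    return (resp.T @ X) / Nk[:, None]      # (K, D)

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
resp = rng.dirichlet(np.ones(4), size=100)   # illustrative responsibilities
print(update_means(X, resp))
```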
| 1,705
|
9
|
9.9
|
easy
|
Show that if we maximize $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_k + \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}. \quad$ with respect to $\Sigma_k$ and $\pi_k$ while keeping the responsibilities $\gamma(z_{nk})$ fixed, we obtain the closed form solutions given by $\Sigma_k = \frac{1}{N_k} \sum_{n=1}^N \gamma(z_{nk}) (\mathbf{x}_n - \boldsymbol{\mu}_k) (\mathbf{x}_n - \boldsymbol{\mu}_k)^{\mathrm{T}}$ and $\pi_k = \frac{N_k}{N}$.
|
We first calculate the derivative of Eq $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_k + \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}. \quad$ with respect to $\Sigma_k$ :
$$\begin{split} \frac{\partial \mathbb{E}_{z}}{\partial \boldsymbol{\Sigma}_{k}} &= \frac{\partial}{\partial \boldsymbol{\Sigma}_{k}} \left\{ \sum_{n=1}^{N} \gamma(z_{nk}) \ln \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k}) \right\} \\ &= \sum_{n=1}^{N} \gamma(z_{nk}) \frac{\partial \ln \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Sigma}_{k})}{\partial \boldsymbol{\Sigma}_{k}} \\ &= \sum_{n=1}^{N} \gamma(z_{nk}) \cdot \left[ -\frac{1}{2} \boldsymbol{\Sigma}_{k}^{-1} + \frac{1}{2} \boldsymbol{\Sigma}_{k}^{-1} \mathbf{S}_{nk} \boldsymbol{\Sigma}_{k}^{-1} \right] \end{split}$$
As in Prob 9.6, we have defined:
$$\mathbf{S}_{nk} = (\mathbf{x}_n - \boldsymbol{\mu}_k)(\mathbf{x}_n - \boldsymbol{\mu}_k)^T$$
Setting the derivative equal to 0 and rearranging it, we obtain:
$$\boldsymbol{\Sigma}_k = \frac{\sum_{n=1}^N \gamma(z_{nk}) \, \mathbf{S}_{nk}}{\sum_{n=1}^N \gamma(z_{nk})} = \frac{\sum_{n=1}^N \gamma(z_{nk}) \, \mathbf{S}_{nk}}{N_k}$$
Where $N_k$ is given by Eq $N_k = \sum_{n=1}^{N} \gamma(z_{nk}).$. So now we have obtained Eq $\Sigma_k = \frac{1}{N_k} \sum_{n=1}^N \gamma(z_{nk}) (\mathbf{x}_n - \boldsymbol{\mu}_k) (\mathbf{x}_n - \boldsymbol{\mu}_k)^{\mathrm{T}}$ just as required. Next, to maximize Eq $\mathbb{E}_{\mathbf{Z}}[\ln p(\mathbf{X}, \mathbf{Z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left\{ \ln \pi_k + \ln \mathcal{N}(\mathbf{x}_n | \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k) \right\}. \quad$ with respect to $\pi_k$ , we still need to introduce a Lagrange multiplier to enforce that the summation of $\pi_k$ over k equals 1, as in Prob 9.7:
$$L = \mathbb{E}_z + \lambda (\sum_{k=1}^K \pi_k - 1)$$
We calculate the derivative of L with respect to $\pi_k$ and set it to 0:
$$\frac{\partial L}{\partial \pi_k} = \sum_{n=1}^{N} \frac{\gamma(z_{nk})}{\pi_k} + \lambda = 0$$
We multiply both sides by $\pi_k$ and sum over k making use of the constraint Eq $\sum_{k=1}^{K} \pi_k = 1$, yielding $\lambda = -N$ (you can see Eq (9.20)- Eq $\pi_k = \frac{N_k}{N}$ for more details). Substituting it back into the expression, we can obtain:
$$\pi_k = \frac{1}{N} \sum_{n=1}^N \gamma(z_{nk}) = \frac{N_k}{N}$$
Just as Eq $\pi_k = \frac{N_k}{N}$.
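The two closed-form updates can likewise be sketched in a few lines of NumPy, with the responsibilities and means assumed to be given:

```python
import numpy as np

def update_covs_and_weights(X, resp, mu):
    """Sigma_k = (1/N_k) sum_n gamma_nk (x_n - mu_k)(x_n - mu_k)^T,  pi_k = N_k / N."""
    N, D = X.shape
    K = mu.shape[0]
    Nk = resp.sum(axis=0)
    Sigma = np.zeros((K, D, D))
    for k in range(K):
        diff = X - mu[k]
        Sigma[k] = (resp[:, k, None] * diff).T @ diff / Nk[k]
    return Sigma, Nk / N

rng = np.random.default_rng(4)
X = rng.normal(size=(150, 2))
resp = rng.dirichlet(np.ones(3), size=150)   # illustrative responsibilities
mu = rng.normal(size=(3, 2))
Sigma, pi = update_covs_and_weights(X, resp, mu)
print(pi, Sigma[0])
```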
| 2,658
|
10
|
10.1
|
easy
|
Verify that the log marginal distribution of the observed data $\ln p(\mathbf{X})$ can be decomposed into two terms in the form $\ln p(\mathbf{X}) = \mathcal{L}(q) + \mathrm{KL}(q||p)$ where $\mathcal{L}(q)$ is given by $\mathcal{L}(q) = \int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z})}{q(\mathbf{Z})} \right\} d\mathbf{Z}$ and $\mathrm{KL}(q||p)$ is given by $KL(q||p) = -\int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{Z}|\mathbf{X})}{q(\mathbf{Z})} \right\} d\mathbf{Z}.$.
|
This problem is very similar to Prob.9.24. We substitute Eq $\mathcal{L}(q) = \int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z})}{q(\mathbf{Z})} \right\} d\mathbf{Z}$ and Eq $KL(q||p) = -\int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{Z}|\mathbf{X})}{q(\mathbf{Z})} \right\} d\mathbf{Z}.$ into Eq (10.2):
$$L(q) + \text{KL}(q||p) = \int_{\mathbf{Z}} q(\mathbf{Z}) \left\{ \ln \frac{p(\mathbf{X}, \mathbf{Z})}{q(\mathbf{Z})} - \ln \frac{p(\mathbf{Z}|\mathbf{X})}{q(\mathbf{Z})} \right\} d\mathbf{Z}$$
$$= \int_{\mathbf{Z}} q(\mathbf{Z}) \left\{ \ln \frac{p(\mathbf{X}, \mathbf{Z})}{p(\mathbf{Z}|\mathbf{X})} \right\} d\mathbf{Z}$$
$$= \int_{\mathbf{Z}} q(\mathbf{Z}) \ln p(\mathbf{X}) d\mathbf{Z}$$
$$= \ln p(\mathbf{X})$$
Note that in the last step, we have used the fact that $\ln p(\mathbf{X})$ doesn't depend on $\mathbf{Z}$ , and that the integration of $q(\mathbf{Z})$ over $\mathbf{Z}$ equals 1 because $q(\mathbf{Z})$ is a PDF.
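The decomposition can be verified numerically for any discrete latent variable. The sketch below builds an arbitrary toy joint $p(X\!=\!x, Z)$ and an arbitrary $q(Z)$ (both made up for illustration) and checks that $\mathcal{L}(q) + \mathrm{KL}(q\|p)$ equals $\ln p(X\!=\!x)$.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy joint slice p(X=x, Z=z) for one fixed observation x and z in {0,1,2,3},
# scaled so that p(X=x) = 0.3 (values are illustrative)
p_xz = 0.3 * rng.dirichlet(np.ones(4))
p_x = p_xz.sum()                 # p(X=x)
p_z_given_x = p_xz / p_x         # posterior p(Z|X=x)

q = rng.dirichlet(np.ones(4))    # an arbitrary variational distribution q(Z)

L  = np.sum(q * np.log(p_xz / q))           # lower bound L(q)
KL = -np.sum(q * np.log(p_z_given_x / q))   # KL(q || p(Z|X))
assert np.isclose(L + KL, np.log(p_x))
print(L + KL, np.log(p_x))
```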
| 977
|
10
|
10.10
|
easy
|
Derive the decomposition given by (10.34) that is used to find approximate posterior distributions over models using variational inference.
|
We substitute $\mathcal{L}_m$ , i.e., Eq $\mathcal{L}_{m} = \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m)q(m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m)q(m)} \right\}.$, back into the right hand side of Eq (10.34), yielding:
(right-hand side of Eq (10.34))
$$= \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m) q(m) \left\{ \ln \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m) q(m)} - \ln \frac{p(\mathbf{Z}, m | \mathbf{X})}{q(\mathbf{Z}|m) q(m)} \right\}$$
$$= \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m) q(m) \left\{ \ln \frac{p(\mathbf{Z}, \mathbf{X}, m)}{p(\mathbf{Z}, m | \mathbf{X})} \right\}$$
$$= \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}, m) \ln p(\mathbf{X})$$
$$= \ln p(\mathbf{X})$$
Just as required.
| 716
|
10
|
10.11
|
medium
|
By using a Lagrange multiplier to enforce the normalization constraint on the distribution q(m), show that the maximum of the lower bound $\mathcal{L}_{m} = \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m)q(m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m)q(m)} \right\}.$ is given by (10.36).
|
We introduce a Lagrange multiplier:
$$\begin{split} L &= \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m) q(m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m) q(m)} \right\} - \lambda \left\{ \sum_{m} q(m) - 1 \right\} \\ &= \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m) q(m) \left[ \ln p(\mathbf{Z}, \mathbf{X}, m) - \ln q(\mathbf{Z}|m) \right] - \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m) q(m) \ln q(m) - \lambda \left\{ \sum_{m} q(m) - 1 \right\} \\ &= \sum_{m} q(m) \cdot C - \sum_{m} q(m) \ln q(m) - \lambda \left\{ \sum_{m} q(m) - 1 \right\} \end{split}$$
Where we have defined (note that C depends on m, and that $\sum_{\mathbf{Z}} q(\mathbf{Z}|m) = 1$ has been used):
$$C = \sum_{\mathbf{Z}} q(\mathbf{Z}|m) \left[ \ln p(\mathbf{Z}, \mathbf{X}, m) - \ln q(\mathbf{Z}|m) \right]$$
According to Calculus of Variations given in Appendix D, and also Prob.1.34, we can obtain the derivative of L with respect to q(m) and set it to 0:
$$\frac{\partial L}{\partial q(m)} = C - \ln q(m) - 1 - \lambda$$
$$= \sum_{\mathbf{Z}} q(\mathbf{Z}|m) \left[ \ln p(\mathbf{Z}, \mathbf{X}, m) - \ln q(\mathbf{Z}|m) - \ln q(m) \right] - 1 - \lambda$$
$$= \sum_{\mathbf{Z}} q(\mathbf{Z}|m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m)q(m)} \right\} - 1 - \lambda = 0$$
We multiply both sides by q(m) and then perform summation over m, yielding:
$$\sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m)q(m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m)q(m)} \right\} - (1 + \lambda) \sum_{m} q(m) = 0$$
Noticing that the first term is actually $\mathcal{L}_m$ defined in Eq $\mathcal{L}_{m} = \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m)q(m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m)q(m)} \right\}.$ and that the summation of q(m) over m equals 1, we can obtain:
$$\lambda = \mathcal{L}_m - 1$$
We substitute $\lambda$ back into the derivative, yielding:
$$\sum_{\mathbf{Z}} q(\mathbf{Z}|m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m)q(m)} \right\} - \mathcal{L}_m = 0$$
(\*)
One important thing must be clarified here: there is a typo in Eq (10.36), where $\mathcal{L}_m$ should be $\mathcal{L}''$, defined as:
$$\mathcal{L}'' = \sum_{\mathbf{Z}} q(\mathbf{Z}|m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}|m)}{q(\mathbf{Z}|m)} \right\}$$
Now with the definition of $\mathcal{L}''$, we expand (\*):
$$(*) = \sum_{\mathbf{Z}} q(\mathbf{Z}|m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m)q(m)} \right\} - \mathcal{L}_{m}$$
$$= \sum_{\mathbf{Z}} q(\mathbf{Z}|m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}|m)p(m)}{q(\mathbf{Z}|m)q(m)} \right\} - \mathcal{L}_{m}$$
$$= \mathcal{L}'' + \sum_{\mathbf{Z}} q(\mathbf{Z}|m) \ln \frac{p(m)}{q(m)} - \mathcal{L}_{m}$$
$$= \mathcal{L}'' + \ln \frac{p(m)}{q(m)} - \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m)q(m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m)q(m)} \right\}$$
$$= \mathcal{L}'' + \ln \frac{p(m)}{q(m)} - \sum_{m} q(m) \left\{ \sum_{\mathbf{Z}} q(\mathbf{Z}|m) \ln \frac{p(\mathbf{Z}, \mathbf{X}|m)p(m)}{q(\mathbf{Z}|m)q(m)} \right\}$$
$$= \mathcal{L}'' + \ln \frac{p(m)}{q(m)} - \sum_{m} q(m) \left\{ \mathcal{L}'' + \sum_{\mathbf{Z}} q(\mathbf{Z}|m) \ln \frac{p(m)}{q(m)} \right\}$$
$$= \mathcal{L}'' + \ln \frac{p(m)}{q(m)} - \sum_{m} q(m) \left\{ \mathcal{L}'' + \ln \frac{p(m)}{q(m)} \right\}$$
$$= \ln \frac{p(m) \exp(\mathcal{L}'')}{q(m)} - \sum_{m} q(m) \ln \frac{p(m) \exp(\mathcal{L}'')}{q(m)} = 0$$
The solution is given by:
$$q(m) = \frac{1}{A} \cdot p(m) \exp(\mathcal{L}^{''})$$
Where $\frac{1}{A}$ is a normalization constant, used to guarantee the summation of q(m) over m equals 1. More specific, it is given by:
$$A = \sum_{m} p(m) \exp(\mathcal{L}'')$$
Therefore, A is a constant that depends neither on $\mathbf{Z}$ nor on any particular m. You can verify the result of q(m) by substituting it back into the last line of (\*), yielding:
$$\ln \frac{p(m)\exp(\mathcal{L}'')}{q(m)} - \sum_{m} q(m) \ln \frac{p(m)\exp(\mathcal{L}'')}{q(m)} = \ln A - \sum_{m} q(m) \cdot \ln A = 0$$
One last thing worth mentioning is that you can directly start from $\mathcal{L}_m$ given in Eq $\mathcal{L}_{m} = \sum_{m} \sum_{\mathbf{Z}} q(\mathbf{Z}|m)q(m) \ln \left\{ \frac{p(\mathbf{Z}, \mathbf{X}, m)}{q(\mathbf{Z}|m)q(m)} \right\}.$, without introducing a Lagrange multiplier, to obtain q(m). In this way, we can actually obtain:
$$\mathcal{L}_m = \sum_{m} q(m) \ln \frac{p(m) \exp(\mathcal{L}^{''})}{q(m)}$$
It is the negative of a KL-divergence-like quantity between q(m) and $p(m)\exp(\mathcal{L}'')$ . Note that $p(m)\exp(\mathcal{L}'')$ is not normalized, so we cannot simply let q(m) equal $p(m)\exp(\mathcal{L}'')$ to drive this KL distance to its minimum of 0, since q(m) is a probability distribution and should sum to 1 over m.
Therefore, we can guess that the optimal q(m) is given by the normalized $p(m) \exp(\mathcal{L}'')$ . In this way, the constraint that q(m) sums to 1 over m is implicitly guaranteed. The more rigorous proof using a Lagrange multiplier has been shown above.
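Computationally, the optimal q(m) is simply a softmax of $\ln p(m) + \mathcal{L}''_m$ over the candidate models. A small sketch, with made-up values for the per-model lower bounds:

```python
import numpy as np

def optimal_q_m(log_prior, lower_bounds):
    """q(m) proportional to p(m) exp(L''_m), normalised over the models m.

    log_prior   : array of ln p(m)
    lower_bounds: array of L''_m
    Computed in log space to avoid overflow when the bounds are large.
    """
    logits = log_prior + lower_bounds
    logits = logits - logits.max()
    q = np.exp(logits)
    return q / q.sum()

log_prior = np.log(np.full(3, 1.0 / 3.0))           # uniform p(m) over 3 models
lower_bounds = np.array([-105.2, -101.7, -103.4])   # hypothetical L'' values
print(optimal_q_m(log_prior, lower_bounds))
```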
| 5,164
|
10
|
10.12
|
medium
|
Starting from the joint distribution $p(\mathbf{X}, \mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) = p(\mathbf{X}|\mathbf{Z}, \boldsymbol{\mu}, \boldsymbol{\Lambda})p(\mathbf{Z}|\boldsymbol{\pi})p(\boldsymbol{\pi})p(\boldsymbol{\mu}|\boldsymbol{\Lambda})p(\boldsymbol{\Lambda})$, and applying the general result $\ln q_j^{*}(\mathbf{Z}_j) = \mathbb{E}_{i \neq j}[\ln p(\mathbf{X}, \mathbf{Z})] + \text{const.}$, show that the optimal variational distribution $q^{*}(\mathbf{Z})$ over the latent variables for the Bayesian mixture of Gaussians is given by $q^{*}(\mathbf{Z}) = \prod_{n=1}^{N} \prod_{k=1}^{K} r_{nk}^{z_{nk}}$ by verifying the steps given in the text.
|
The solution procedure has already been given in Eq $\ln q^{}(\mathbf{Z}) = \mathbb{E}_{\pi,\mu,\Lambda}[\ln p(\mathbf{X}, \mathbf{Z}, \pi, \mu, \Lambda)] + \text{const.}$ - $r_{nk} = \frac{\rho_{nk}}{\sum_{j=1}^{K} \rho_{nj}}.$, so here we explain it in more details, starting from Eq (10.43):
$$\begin{split} & \ln q^{}(\mathbf{Z}) &= \mathbb{E}_{\pi,\mu,\Lambda}[\ln p(\mathbf{X},\mathbf{Z},\pi,\mu,\Lambda)] + \operatorname{const} \\ &= \mathbb{E}_{\pi}[\ln p(\mathbf{Z}|\pi)] + \mathbb{E}_{\mu,\Lambda}[\ln p(\mathbf{X}|\mathbf{Z},\mu,\Lambda)] + \operatorname{const} \\ &= \operatorname{const} + \mathbb{E}_{\pi}[\sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \ln \pi_{k}] \\ &+ \mathbb{E}_{\mu,\Lambda}[\sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \{ \frac{1}{2} \ln |\Lambda_{k}| - \frac{D}{2} \ln 2\pi - \frac{1}{2} (\mathbf{x}_{n} - \boldsymbol{\mu}_{k})^{T} \Lambda_{k} (\mathbf{x}_{n} - \boldsymbol{\mu}_{k}) \}] \\ &= \operatorname{const} + \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \mathbb{E}_{\pi}[\ln \pi_{k}] \\ &+ \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \mathbb{E}_{\mu,\Lambda}[\{ \frac{1}{2} \ln |\Lambda_{k}| - \frac{D}{2} \ln 2\pi - \frac{1}{2} (\mathbf{x}_{n} - \boldsymbol{\mu}_{k})^{T} \Lambda_{k} (\mathbf{x}_{n} - \boldsymbol{\mu}_{k}) \}] \\ &= \sum_{n=1}^{N} \sum_{k=1}^{K} z_{nk} \ln \rho_{nk} + \operatorname{const} \end{split}$$
Where we have substituted Eq $p(\mathbf{Z}|\boldsymbol{\pi}) = \prod_{n=1}^{N} \prod_{k=1}^{K} \pi_k^{z_{nk}}.$ and Eq $p(\mathbf{X}|\mathbf{Z}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) = \prod_{n=1}^{N} \prod_{k=1}^{K} \mathcal{N} \left( \mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Lambda}_{k}^{-1} \right)^{z_{nk}}$, and D is the dimension of $\mathbf{x}_n$ . Here $\ln \rho_{nk}$ is defined as:
$$\begin{split} &\ln \rho_{nk} &= &\mathbb{E}_{\boldsymbol{\pi}}[\ln \pi_k] + \mathbb{E}_{\boldsymbol{\mu},\boldsymbol{\Lambda}}[\{\frac{1}{2}\ln |\boldsymbol{\Lambda}_k| - \frac{D}{2}\ln 2\pi - \frac{1}{2}(\mathbf{x}_n - \boldsymbol{\mu}_k)^T\boldsymbol{\Lambda}_k(\mathbf{x}_n - \boldsymbol{\mu}_k)\}] \\ &= &\mathbb{E}_{\boldsymbol{\pi}}[\ln \pi_k] + \frac{1}{2}\mathbb{E}_{\boldsymbol{\mu},\boldsymbol{\Lambda}}[\ln |\boldsymbol{\Lambda}_k|] - \frac{D}{2}\ln 2\pi - \frac{1}{2}\mathbb{E}_{\boldsymbol{\mu}_k,\boldsymbol{\Lambda}_k}[(\mathbf{x}_n - \boldsymbol{\mu}_k)^T\boldsymbol{\Lambda}_k(\mathbf{x}_n - \boldsymbol{\mu}_k)] \end{split}$$
Taking exponential of both sides, we can obtain:
$$q^{}(\mathbf{Z}) \propto \prod_{n=1}^{N} \prod_{k=1}^{K} \rho_{nk}^{z_{nk}}$$
Because $q^*(\mathbf{Z})$ should be correctly normalized, we are required to find the normalization constant. In this problem, directly calculating the normalization constant by summing $q^*(\mathbf{Z})$ over $\mathbf{Z}$ is nontrivial. Therefore, we will prove that Eq $r_{nk} = \frac{\rho_{nk}}{\sum_{j=1}^{K} \rho_{nj}}.$ gives the correct normalization by mathematical induction. When N=1, $q^*(\mathbf{Z})$ reduces to $\prod_{k=1}^K \rho_{1k}^{z_{1k}}$ , and it is easy to see that the normalization constant is given by:
$$A = \sum_{\mathbf{z}_1} \prod_{k=1}^K \rho_{1k}^{z_{1k}} = \sum_{j=1}^K \rho_{1j}$$
Here we have used 1-of-K coding scheme for $\mathbf{z}_1 = [z_{11}, z_{12}, ..., z_{1K}]^T$ , i.e., only one of $\{z_{11}, z_{12}, ..., z_{1K}\}$ will be 1 and others all 0. Therefore the summation over $\mathbf{z}_1$ is made up of K terms, and the j-th term corresponds to $z_{1j} = 1$ and other $z_{1i}$ equals 0. In this case, we have obtained:
$$q^{*}(\mathbf{Z}) = \frac{1}{A} \prod_{k=1}^{K} \rho_{1k}^{z_{1k}} = \prod_{k=1}^{K} \left( \frac{\rho_{1k}}{\sum_{j=1}^{K} \rho_{1j}} \right)^{z_{1k}}$$
It is exactly the same as Eq $q^{}(\mathbf{Z}) = \prod_{n=1}^{N} \prod_{k=1}^{K} r_{nk}^{z_{nk}}$ and Eq $r_{nk} = \frac{\rho_{nk}}{\sum_{j=1}^{K} \rho_{nj}}.$. Suppose now we have proved that for N-1, the normalized $q^*(\mathbf{Z})$ is given by Eq $q^{}(\mathbf{Z}) = \prod_{n=1}^{N} \prod_{k=1}^{K} r_{nk}^{z_{nk}}$ and Eq $r_{nk} = \frac{\rho_{nk}}{\sum_{j=1}^{K} \rho_{nj}}.$. For N, we have:
$$\begin{split} \sum_{\mathbf{Z}} q^{}(\mathbf{Z}) &= \sum_{\mathbf{z}_{1}, \dots, \mathbf{z}_{N}} \prod_{n=1}^{N} \prod_{k=1}^{K} r_{nk}^{z_{nk}} \\ &= \sum_{\mathbf{z}_{N}} \left\{ \sum_{\mathbf{z}_{1}, \dots, \mathbf{z}_{N-1}} \prod_{n=1}^{N} \prod_{k=1}^{K} r_{nk}^{z_{nk}} \right\} \\ &= \sum_{\mathbf{z}_{N}} \left\{ \sum_{\mathbf{z}_{1}, \dots, \mathbf{z}_{N-1}} \left[ \prod_{k=1}^{K} r_{Nk}^{z_{Nk}} \right] \cdot \left[ \prod_{n=1}^{N-1} \prod_{k=1}^{K} r_{nk}^{z_{nk}} \right] \right\} \\ &= \sum_{\mathbf{z}_{N}} \left\{ \left[ \prod_{k=1}^{K} r_{Nk}^{z_{Nk}} \right] \cdot \sum_{\mathbf{z}_{1}, \dots, \mathbf{z}_{N-1}} \prod_{n=1}^{N-1} \prod_{k=1}^{K} r_{nk}^{z_{nk}} \right\} \\ &= \sum_{\mathbf{z}_{N}} \left\{ \left[ \prod_{k=1}^{K} r_{Nk}^{z_{Nk}} \right] \cdot 1 \right\} \\ &= \sum_{\mathbf{z}_{N}} \prod_{k=1}^{K} r_{Nk}^{z_{Nk}} = \sum_{k=1}^{K} r_{Nk} = 1 \end{split}$$
The proof of the final step is exactly the same as that for N=1. So now, with the assumption Eq $q^{}(\mathbf{Z}) = \prod_{n=1}^{N} \prod_{k=1}^{K} r_{nk}^{z_{nk}}$ and Eq (10.49) are right for N-1, we have shown that they are also correct for N. The proof is complete.
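In an implementation, the normalisation $r_{nk} = \rho_{nk} / \sum_j \rho_{nj}$ is applied row by row and is usually carried out in log space. A minimal sketch, assuming the matrix of $\ln\rho_{nk}$ values has already been computed:

```python
import numpy as np

def responsibilities(log_rho):
    """r_nk = rho_nk / sum_j rho_nj, computed stably from ln rho_nk of shape (N, K)."""
    log_rho = log_rho - log_rho.max(axis=1, keepdims=True)   # per-row shift for stability
    rho = np.exp(log_rho)
    return rho / rho.sum(axis=1, keepdims=True)

log_rho = np.random.default_rng(6).normal(size=(5, 3))   # made-up ln rho_nk values
r = responsibilities(log_rho)
print(r.sum(axis=1))   # every row sums to 1
```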
| 5,396
|
10
|
10.13
|
medium
|
Starting from $+\sum_{k=1}^{K}\sum_{n=1}^{N}\mathbb{E}[z_{nk}]\ln\mathcal{N}\left(\mathbf{x}_{n}|\boldsymbol{\mu}_{k},\boldsymbol{\Lambda}_{k}^{-1}\right)+\text{const.}$, derive the result $q^{}(\boldsymbol{\mu}_{k}, \boldsymbol{\Lambda}_{k}) = \mathcal{N}\left(\boldsymbol{\mu}_{k} | \mathbf{m}_{k}, (\beta_{k} \boldsymbol{\Lambda}_{k})^{-1}\right) \, \mathcal{W}(\boldsymbol{\Lambda}_{k} | \mathbf{W}_{k}, \nu_{k})$ for the optimum variational posterior distribution over $\mu_k$ and $\Lambda_k$ in the Bayesian mixture of Gaussians, and hence verify the expressions for the parameters of this distribution given by (10.60)–(10.63).
|
Let's start from Eq $+\sum_{k=1}^{K}\sum_{n=1}^{N}\mathbb{E}[z_{nk}]\ln\mathcal{N}\left(\mathbf{x}_{n}|\boldsymbol{\mu}_{k},\boldsymbol{\Lambda}_{k}^{-1}\right)+\text{const.}$.
$$\begin{split} \ln q^{}(\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{\Lambda}) &\propto & \ln p(\boldsymbol{\pi}) + \sum_{k=1}^{K} \ln p(\boldsymbol{\mu}_{k},\boldsymbol{\Lambda}_{k}) + \mathbb{E}[\ln p(\mathbf{Z}|\boldsymbol{\pi})] + \sum_{k=1}^{K} \sum_{n=1}^{N} \mathbb{E}[\boldsymbol{z}_{nk}] \ln \mathcal{N}(\mathbf{x}_{n}|\boldsymbol{\mu}_{k},\boldsymbol{\Lambda}_{k}^{-1}) \\ &= & \ln C(\boldsymbol{\alpha}_{0}) + \sum_{k=1}^{K} (\alpha_{0} - 1) \ln \pi_{k} \\ &+ \sum_{k=1}^{K} \ln \mathcal{N}(\boldsymbol{\mu}_{k}|\mathbf{m}_{0}, (\beta_{0}\boldsymbol{\Lambda}_{k})^{-1}) + \sum_{k=1}^{K} \ln \mathcal{W}(\boldsymbol{\Lambda}_{k}|\mathbf{W}_{0}, \boldsymbol{v}_{0}) \\ &+ \sum_{n=1}^{N} \sum_{k=1}^{K} \ln \pi_{k} \mathbb{E}[\boldsymbol{z}_{nk}] + \sum_{k=1}^{K} \sum_{n=1}^{N} \mathbb{E}[\boldsymbol{z}_{nk}] \ln \mathcal{N}(\mathbf{x}_{n}|\boldsymbol{\mu}_{k},\boldsymbol{\Lambda}_{k}^{-1}) \end{split}$$
It is easy to observe that the equation above can be decomposed into a sum of terms involving only $\pi$ together with terms involving only $\mu$ and $\Lambda$ . In other words, $q(\pi, \mu, \Lambda)$ can be factorized into the product of $q(\pi)$ and $q(\mu, \Lambda)$ . We first extract the terms that depend on $\pi$ .
$$\ln q^{}(\pi) \propto (\alpha_0 - 1) \sum_{k=1}^{K} \ln \pi_k + \sum_{k=1}^{K} \sum_{n=1}^{N} \ln \pi_k \mathbb{E}[z_{nk}]$$
$$= (\alpha_0 - 1) \sum_{k=1}^{K} \ln \pi_k + \sum_{k=1}^{K} \sum_{n=1}^{N} r_{nk} \ln \pi_k$$
$$= \sum_{k=1}^{K} \ln \pi_k \cdot \left[\alpha_0 - 1 + \sum_{n=1}^{N} r_{nk}\right]$$
Comparing it to the standard form of a Dirichlet distribution, we can conclude that $q^*(\pi) = \text{Dir}(\pi | \alpha)$ , where the k-th entry of $\alpha$ , i.e., $\alpha_k$ is given by:
$$\alpha_k = \alpha_0 + \sum_{n=1}^N r_{nk} = \alpha_0 + N_k$$
Next we gather all the terms dependent on $\mu = \{\mu_k\}$ and $\Lambda = \{\Lambda_k\}$ :
$$\begin{aligned} \ln q^{}(\boldsymbol{\mu}, \boldsymbol{\Lambda}) &= \sum_{k=1}^{K} \ln \mathcal{N}(\boldsymbol{\mu}_{k} | \mathbf{m}_{0}, (\beta_{0} \boldsymbol{\Lambda}_{k})^{-1}) + \sum_{k=1}^{K} \ln \mathcal{W}(\boldsymbol{\Lambda}_{k} | \mathbf{W}_{0}, v_{0}) + \sum_{k=1}^{K} \sum_{n=1}^{N} \mathbb{E}[\boldsymbol{z}_{nk}] \ln \mathcal{N}(\mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Lambda}_{k}^{-1}) \\ &\propto \sum_{k=1}^{K} \left\{ \frac{1}{2} \ln |\beta_{0} \boldsymbol{\Lambda}_{k}| - \frac{1}{2} (\boldsymbol{\mu}_{k} - \mathbf{m}_{0})^{T} \beta_{0} \boldsymbol{\Lambda}_{k} (\boldsymbol{\mu}_{k} - \mathbf{m}_{0}) \right\} \\ &+ \sum_{k=1}^{K} \left\{ \frac{v_{0} - D - 1}{2} \ln |\boldsymbol{\Lambda}_{k}| - \frac{1}{2} \mathrm{Tr}(\mathbf{W}_{0}^{-1} \boldsymbol{\Lambda}_{k}) \right\} \\ &+ \sum_{k=1}^{K} \sum_{n=1}^{N} r_{nk} \left\{ \frac{1}{2} \ln |\boldsymbol{\Lambda}_{k}| - \frac{1}{2} (\mathbf{x}_{n} - \boldsymbol{\mu}_{k})^{T} \boldsymbol{\Lambda}_{k} (\mathbf{x}_{n} - \boldsymbol{\mu}_{k}) \right\} \end{aligned}$$
With the knowledge that the optimal $q^*(\mu, \Lambda)$ can be written as:
$$q^{}(\boldsymbol{\mu}, \boldsymbol{\Lambda}) = \prod_{k=1}^{K} q^{}(\boldsymbol{\mu}_{k} | \boldsymbol{\Lambda}_{k}) q^{}(\boldsymbol{\Lambda}_{k}) = \prod_{k=1}^{K} \mathcal{N}(\boldsymbol{\mu}_{k} | \mathbf{m}_{k}, (\beta_{k} \boldsymbol{\Lambda}_{k})^{-1}) \mathcal{W}(\boldsymbol{\Lambda}_{k} | \mathbf{W}_{k}, v_{k}) \quad (*)$$
We first complete the square with respect to $\boldsymbol{\mu}_k$ . The quadratic term is given by:
$$-\frac{1}{2}\boldsymbol{\mu}_k^T(\beta_0\boldsymbol{\Lambda}_k)\boldsymbol{\mu}_k - \frac{1}{2}\sum_{n=1}^N r_{nk}\, \boldsymbol{\mu}_k^T \boldsymbol{\Lambda}_k \boldsymbol{\mu}_k = -\frac{1}{2}\boldsymbol{\mu}_k^T (\beta_0\boldsymbol{\Lambda}_k + N_k\boldsymbol{\Lambda}_k)\boldsymbol{\mu}_k$$
Therefore, comparing with (\*), we can obtain:
$$\beta_k = \beta_0 + N_k$$
Next, we write down the linear term with respect to $\mu_k$ :
$$\mu_k^T(\beta_0 \mathbf{\Lambda}_k \mathbf{m}_0) + \sum_{n=1}^N r_{nk} \cdot \boldsymbol{\mu}_k^T(\mathbf{\Lambda}_k \mathbf{x}_n) = \boldsymbol{\mu}_k^T(\beta_0 \mathbf{\Lambda}_k \mathbf{m}_0 + \sum_{n=1}^N r_{nk} \mathbf{\Lambda}_k \mathbf{x}_n)$$
$$= \boldsymbol{\mu}_k^T \mathbf{\Lambda}_k(\beta_0 \mathbf{m}_0 + \sum_{n=1}^N r_{nk} \mathbf{x}_n)$$
$$= \boldsymbol{\mu}_k^T \mathbf{\Lambda}_k(\beta_0 \mathbf{m}_0 + N_k \bar{\mathbf{x}}_k)$$
Where we have defined:
$$\bar{\mathbf{x}}_k = \frac{1}{\sum_{n=1}^{N} r_{nk}} \sum_{n=1}^{N} r_{nk} \mathbf{x}_n = \frac{1}{N_k} \sum_{n=1}^{N} r_{nk} \mathbf{x}_n$$
Comparing to the standard form, we can obtain:
$$\mathbf{m}_k = \frac{1}{\beta_k}(\beta_0 \mathbf{m}_0 + N_k \bar{\mathbf{x}}_k)$$
Now we have obtained $q^*(\mu_k|\Lambda_k)=\mathcal{N}(\mu_k|\mathbf{m}_k,(\beta_k\Lambda_k)^{-1})$ , using the relation:
$$\ln q^{}(\boldsymbol{\Lambda}_k) = \ln q^{}(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k) - \ln q^{}(\boldsymbol{\mu}_k | \boldsymbol{\Lambda}_k)$$
And focusing only on the terms dependent on $\Lambda_k$ , we can obtain:
$$\begin{split} \ln q^{*}(\boldsymbol{\Lambda}_{k}) &\propto \left\{\frac{1}{2}\ln|\beta_{0}\boldsymbol{\Lambda}_{k}| - \frac{1}{2}(\boldsymbol{\mu}_{k} - \mathbf{m}_{0})^{T}\beta_{0}\boldsymbol{\Lambda}_{k}(\boldsymbol{\mu}_{k} - \mathbf{m}_{0})\right\} \\ &\quad + \left\{\frac{v_{0} - D - 1}{2}\ln|\boldsymbol{\Lambda}_{k}| - \frac{1}{2}\mathrm{Tr}(\mathbf{W}_{0}^{-1}\boldsymbol{\Lambda}_{k})\right\} \\ &\quad + \sum_{n=1}^{N}r_{nk}\left\{\frac{1}{2}\ln|\boldsymbol{\Lambda}_{k}| - \frac{1}{2}(\mathbf{x}_{n} - \boldsymbol{\mu}_{k})^{T}\boldsymbol{\Lambda}_{k}(\mathbf{x}_{n} - \boldsymbol{\mu}_{k})\right\} \\ &\quad - \left\{\frac{1}{2}\ln|\beta_{k}\boldsymbol{\Lambda}_{k}| - \frac{1}{2}(\boldsymbol{\mu}_{k} - \mathbf{m}_{k})^{T}\beta_{k}\boldsymbol{\Lambda}_{k}(\boldsymbol{\mu}_{k} - \mathbf{m}_{k})\right\} \\ &\propto \left\{\frac{1}{2}\ln|\boldsymbol{\Lambda}_{k}| - \frac{1}{2}\mathrm{Tr}\left[\beta_{0}(\boldsymbol{\mu}_{k} - \mathbf{m}_{0})(\boldsymbol{\mu}_{k} - \mathbf{m}_{0})^{T}\boldsymbol{\Lambda}_{k}\right]\right\} \\ &\quad + \left\{\frac{v_{0} - D - 1}{2}\ln|\boldsymbol{\Lambda}_{k}| - \frac{1}{2}\mathrm{Tr}\left[\mathbf{W}_{0}^{-1}\boldsymbol{\Lambda}_{k}\right]\right\} \\ &\quad + \left\{\frac{N_{k}}{2}\ln|\boldsymbol{\Lambda}_{k}| - \frac{1}{2}\mathrm{Tr}\left[\sum_{n=1}^{N}r_{nk}(\mathbf{x}_{n} - \boldsymbol{\mu}_{k})(\mathbf{x}_{n} - \boldsymbol{\mu}_{k})^{T}\boldsymbol{\Lambda}_{k}\right]\right\} \\ &\quad - \left\{\frac{1}{2}\ln|\boldsymbol{\Lambda}_{k}| - \frac{1}{2}\mathrm{Tr}\left[\beta_{k}(\boldsymbol{\mu}_{k} - \mathbf{m}_{k})(\boldsymbol{\mu}_{k} - \mathbf{m}_{k})^{T}\boldsymbol{\Lambda}_{k}\right]\right\} \\ &= \frac{v_{0} - D - 1 + N_{k}}{2}\ln|\boldsymbol{\Lambda}_{k}| - \frac{1}{2}\mathrm{Tr}[\mathbf{T}\boldsymbol{\Lambda}_{k}] \end{split}$$
Where we have defined:
$$\mathbf{T} = \beta_0(\boldsymbol{\mu}_k - \mathbf{m}_0)(\boldsymbol{\mu}_k - \mathbf{m}_0)^T + \mathbf{W}_0^{-1} + \sum_{n=1}^N r_{nk}(\mathbf{x}_n - \boldsymbol{\mu}_k)(\mathbf{x}_n - \boldsymbol{\mu}_k)^T - \beta_k(\boldsymbol{\mu}_k - \mathbf{m}_k)(\boldsymbol{\mu}_k - \mathbf{m}_k)^T$$
By matching the coefficient ahead of $\ln |\Lambda_k|$ , we can obtain:
$$v_k = v_0 + N_k$$
Next, by matching the coefficient in the Trace, we see that:
$$\mathbf{W}_{k}^{-1} = \mathbf{T}$$
Let's further simplify T, beginning by introducing a useful equation, which will be used here and later in Prob.10.16:
$$\begin{split} \sum_{n=1}^{N} r_{nk} \mathbf{x}_{n} \mathbf{x}_{n}^{T} &= \sum_{n=1}^{N} r_{nk} (\mathbf{x}_{n} - \bar{\mathbf{x}}_{k} + \bar{\mathbf{x}}_{k}) (\mathbf{x}_{n} - \bar{\mathbf{x}}_{k} + \bar{\mathbf{x}}_{k})^{T} \\ &= \sum_{n=1}^{N} r_{nk} \left[ (\mathbf{x}_{n} - \bar{\mathbf{x}}_{k}) (\mathbf{x}_{n} - \bar{\mathbf{x}}_{k})^{T} + \bar{\mathbf{x}}_{k} \bar{\mathbf{x}}_{k}^{T} + 2(\mathbf{x}_{n} - \bar{\mathbf{x}}_{k}) \bar{\mathbf{x}}_{k}^{T} \right] \\ &= \sum_{n=1}^{N} r_{nk} \left[ (\mathbf{x}_{n} - \bar{\mathbf{x}}_{k}) (\mathbf{x}_{n} - \bar{\mathbf{x}}_{k})^{T} \right] + \sum_{n=1}^{N} r_{nk} \left[ \bar{\mathbf{x}}_{k} \bar{\mathbf{x}}_{k}^{T} \right] + \sum_{n=1}^{N} r_{nk} \left[ 2(\mathbf{x}_{n} - \bar{\mathbf{x}}_{k}) \bar{\mathbf{x}}_{k}^{T} \right] \\ &= N_{k} \mathbf{S}_{k} + N_{k} \bar{\mathbf{x}}_{k} \bar{\mathbf{x}}_{k}^{T} + 2 \left[ (N_{k} \bar{\mathbf{x}}_{k} - N_{k} \bar{\mathbf{x}}_{k}) \bar{\mathbf{x}}_{k}^{T} \right] \\ &= N_{k} \mathbf{S}_{k} + N_{k} \bar{\mathbf{x}}_{k} \bar{\mathbf{x}}_{k}^{T} \end{split}$$
Where in the last step we have used Eq (10.51). Now we are ready to prove that **T** is exactly given by Eq $\mathbf{W}_{k}^{-1} = \mathbf{W}_{0}^{-1} + N_{k}\mathbf{S}_{k} + \frac{\beta_{0}N_{k}}{\beta_{0} + N_{k}}(\overline{\mathbf{x}}_{k} - \mathbf{m}_{0})(\overline{\mathbf{x}}_{k} - \mathbf{m}_{0})^{\mathrm{T}}$. Let's first consider the coefficients ahead of the quadratic term with respect to $\mu_k$ :
(quad) =
$$\beta_0 \mu_k \mu_k^T + \sum_{n=1}^N r_{nk} \mu_k \mu_k^T - \beta_k \mu_k \mu_k^T = (\beta_0 + \sum_{n=1}^N r_{nk} - \beta_k) \mu_k \mu_k^T = 0$$
Where the summation is actually equal to $N_k$ and we have also used Eq $\beta_k = \beta_0 + N_k$. Next we focus on the linear term:
(linear) =
$$-2\beta_0 \mathbf{m}_0 \boldsymbol{\mu}_k^T - \sum_{n=1}^N 2r_{nk} \mathbf{x}_n \boldsymbol{\mu}_k^T + 2\beta_k \mathbf{m}_k \boldsymbol{\mu}_k^T$$
= $2\Big(-\beta_0 \mathbf{m}_0 - \sum_{n=1}^N r_{nk} \mathbf{x}_n + \beta_k \mathbf{m}_k\Big) \boldsymbol{\mu}_k^T = 0$, which vanishes because $\beta_k\mathbf{m}_k = \beta_0\mathbf{m}_0 + N_k\bar{\mathbf{x}}_k$.
Finally we deal with the constant term:
$$\begin{aligned} &(\text{const}) &= & \mathbf{W}_0^{-1} + \beta_0 \mathbf{m}_0 \mathbf{m}_0^T + \sum_{n=1}^N r_{nk} \mathbf{x}_n \mathbf{x}_n^T - \beta_k \mathbf{m}_k \mathbf{m}_k^T \\ &= & \mathbf{W}_0^{-1} + \beta_0 \mathbf{m}_0 \mathbf{m}_0^T + N_k \mathbf{S}_k + N_k \bar{\mathbf{x}}_k \bar{\mathbf{x}}_k^T - \beta_k \mathbf{m}_k \mathbf{m}_k^T \\ &= & \mathbf{W}_0^{-1} + N_k \mathbf{S}_k + \beta_0 \mathbf{m}_0 \mathbf{m}_0^T + N_k \bar{\mathbf{x}}_k \bar{\mathbf{x}}_k^T - \frac{1}{\beta_k} \beta_k^2 \mathbf{m}_k \mathbf{m}_k^T \\ &= & \mathbf{W}_0^{-1} + N_k \mathbf{S}_k + \beta_0 \mathbf{m}_0 \mathbf{m}_0^T + N_k \bar{\mathbf{x}}_k \bar{\mathbf{x}}_k^T - \frac{1}{\beta_k} (\beta_0 \mathbf{m}_0 + N_K \bar{\mathbf{x}}_k) (\beta_0 \mathbf{m}_0 + N_K \bar{\mathbf{x}}_k)^T \\ &= & \mathbf{W}_0^{-1} + N_k \mathbf{S}_k + (\beta_0 - \frac{\beta_0^2}{\beta_k}) \mathbf{m}_0 \mathbf{m}_0^T + (N_k - \frac{N_k^2}{\beta_k}) \bar{\mathbf{x}}_k \bar{\mathbf{x}}_k^T - \frac{1}{\beta_k} 2(\beta_0 \mathbf{m}_0) \cdot (N_K \bar{\mathbf{x}}_k)^T \\ &= & \mathbf{W}_0^{-1} + N_k \mathbf{S}_k + \frac{\beta_0 N_k}{\beta_k} \mathbf{m}_0 \mathbf{m}_0^T + \frac{\beta_0 N_k}{\beta_k} \bar{\mathbf{x}}_k \bar{\mathbf{x}}_k^T - \frac{\beta_0 N_K}{\beta_k} 2(\mathbf{m}_0) \cdot (\bar{\mathbf{x}}_k)^T \\ &= & \mathbf{W}_0^{-1} + N_k \mathbf{S}_k + \frac{\beta_0 N_k}{\beta_k} (\mathbf{m}_0 - \bar{\mathbf{x}}_k) (\mathbf{m}_0 - \bar{\mathbf{x}}_k)^T \end{aligned}$$
Just as required.
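Collecting the results, the update of the Gaussian-Wishart factor for a single component takes only a few lines. The sketch below assumes that $N_k$, $\bar{\mathbf{x}}_k$ and $\mathbf{S}_k$ have already been computed from the responsibilities; the numerical values in the example call are made up.

```python
import numpy as np

def update_gauss_wishart(Nk, xbar_k, S_k, beta0, m0, W0, nu0):
    """Variational updates for one component k.

    Nk: scalar, xbar_k: (D,), S_k: (D, D); beta0, m0, W0, nu0 are the prior
    hyperparameters of the Gaussian-Wishart factor.
    """
    beta_k = beta0 + Nk
    m_k = (beta0 * m0 + Nk * xbar_k) / beta_k
    nu_k = nu0 + Nk
    diff = (xbar_k - m0)[:, None]
    W_k_inv = np.linalg.inv(W0) + Nk * S_k + (beta0 * Nk / beta_k) * (diff @ diff.T)
    return beta_k, m_k, np.linalg.inv(W_k_inv), nu_k

# Illustrative call with made-up sufficient statistics and priors
D = 2
print(update_gauss_wishart(Nk=25.0, xbar_k=np.array([0.5, -0.2]), S_k=np.eye(D),
                           beta0=1.0, m0=np.zeros(D), W0=np.eye(D), nu0=float(D)))
```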
| 11,294
|
10
|
10.14
|
medium
|
Using the distribution $q^{}(\boldsymbol{\mu}_{k}, \boldsymbol{\Lambda}_{k}) = \mathcal{N}\left(\boldsymbol{\mu}_{k} | \mathbf{m}_{k}, (\beta_{k} \boldsymbol{\Lambda}_{k})^{-1}\right) \, \mathcal{W}(\boldsymbol{\Lambda}_{k} | \mathbf{W}_{k}, \nu_{k})$, verify the result $= D\beta_{k}^{-1} + \nu_{k}(\mathbf{x}_{n}-\mathbf{m}_{k})^{\mathrm{T}}\mathbf{W}_{k}(\mathbf{x}_{n}-\mathbf{m}_{k}) \quad$.
|
Let's begin by definition.
$$\begin{split} \mathbb{E}_{\boldsymbol{\mu}_{k},\boldsymbol{\Lambda}_{k}}[(\mathbf{x}_{n}-\boldsymbol{\mu}_{k})^{T}\boldsymbol{\Lambda}_{k}(\mathbf{x}_{n}-\boldsymbol{\mu}_{k})] &= \int \int (\mathbf{x}_{n}-\boldsymbol{\mu}_{k})^{T}\boldsymbol{\Lambda}_{k}(\mathbf{x}_{n}-\boldsymbol{\mu}_{k})\, q^{*}(\boldsymbol{\mu}_{k},\boldsymbol{\Lambda}_{k})\,d\boldsymbol{\mu}_{k}\,d\boldsymbol{\Lambda}_{k} \\ &= \int \left\{ \int (\mathbf{x}_{n}-\boldsymbol{\mu}_{k})^{T}\boldsymbol{\Lambda}_{k}(\mathbf{x}_{n}-\boldsymbol{\mu}_{k})\, q^{*}(\boldsymbol{\mu}_{k}|\boldsymbol{\Lambda}_{k})\,d\boldsymbol{\mu}_{k} \right\} q^{*}(\boldsymbol{\Lambda}_{k})\,d\boldsymbol{\Lambda}_{k} \\ &= \int \mathbb{E}_{\boldsymbol{\mu}_{k}}[(\boldsymbol{\mu}_{k}-\mathbf{x}_{n})^{T}\boldsymbol{\Lambda}_{k}(\boldsymbol{\mu}_{k}-\mathbf{x}_{n})]\cdot q^{*}(\boldsymbol{\Lambda}_{k})\,d\boldsymbol{\Lambda}_{k} \end{split}$$
The inner expectation is with respect to $\mu_k$ , which satisfies a Gaussian distribution. We use Eq (380) in 'MatrixCookbook': if $\mathbf{x} \sim \mathcal{N}(\mathbf{m}, \Sigma)$ , we have:
$$\mathbb{E}[(\mathbf{x} - \mathbf{m}')^{T} \mathbf{A} (\mathbf{x} - \mathbf{m}')] = (\mathbf{m} - \mathbf{m}')^{T} \mathbf{A} (\mathbf{m} - \mathbf{m}') + \text{Tr}(\mathbf{A} \mathbf{\Sigma})$$
Therefore, here we can obtain:
$$\mathbb{E}_{\boldsymbol{\mu}_k}[(\boldsymbol{\mu}_k - \mathbf{x}_n)^T \boldsymbol{\Lambda}_k (\boldsymbol{\mu}_k - \mathbf{x}_n)] = (\mathbf{m}_k - \mathbf{x}_n)^T \boldsymbol{\Lambda}_k (\mathbf{m}_k - \mathbf{x}_n) + \mathrm{Tr} \left[ \boldsymbol{\Lambda}_k \cdot (\beta_k \boldsymbol{\Lambda}_k)^{-1} \right]$$
Substituting it back into the integration, we can obtain:
$$\mathbb{E}_{\boldsymbol{\mu}_{k},\boldsymbol{\Lambda}_{k}}[(\mathbf{x}_{n}-\boldsymbol{\mu}_{k})^{T}\boldsymbol{\Lambda}_{k}(\mathbf{x}_{n}-\boldsymbol{\mu}_{k})] = \int \left[ (\mathbf{m}_{k}-\mathbf{x}_{n})^{T}\boldsymbol{\Lambda}_{k}(\mathbf{m}_{k}-\mathbf{x}_{n}) + D\boldsymbol{\beta}_{k}^{-1} \right] \cdot q^{}(\boldsymbol{\Lambda}_{k}) d\boldsymbol{\Lambda}_{k}$$
$$= D\boldsymbol{\beta}_{k}^{-1} + \mathbb{E}_{\boldsymbol{\Lambda}_{k}} \left[ (\mathbf{m}_{k}-\mathbf{x}_{n})^{T}\boldsymbol{\Lambda}_{k}(\mathbf{m}_{k}-\mathbf{x}_{n}) \right]$$
$$= D\boldsymbol{\beta}_{k}^{-1} + \mathbb{E}_{\boldsymbol{\Lambda}_{k}} \left\{ \operatorname{Tr}[\boldsymbol{\Lambda}_{k} \cdot (\mathbf{m}_{k}-\mathbf{x}_{n})(\mathbf{m}_{k}-\mathbf{x}_{n})^{T}] \right\}$$
$$= D\boldsymbol{\beta}_{k}^{-1} + \operatorname{Tr} \left\{ \mathbb{E}_{\boldsymbol{\Lambda}_{k}}[\boldsymbol{\Lambda}_{k}] \cdot (\mathbf{m}_{k}-\mathbf{x}_{n})(\mathbf{m}_{k}-\mathbf{x}_{n})^{T} \right\}$$
$$= D\boldsymbol{\beta}_{k}^{-1} + \operatorname{Tr} \left\{ v_{k} \mathbf{W}_{k} \cdot (\mathbf{m}_{k}-\mathbf{x}_{n})(\mathbf{m}_{k}-\mathbf{x}_{n})^{T} \right\}$$
$$= D\boldsymbol{\beta}_{k}^{-1} + v_{k} (\mathbf{m}_{k}-\mathbf{x}_{n})^{T} \mathbf{W}_{k} (\mathbf{m}_{k}-\mathbf{x}_{n})$$
Just as required.
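The identity can also be checked by Monte Carlo: draw $\boldsymbol{\Lambda}_k$ from the Wishart factor, draw $\boldsymbol{\mu}_k$ from $\mathcal{N}(\mathbf{m}_k, (\beta_k\boldsymbol{\Lambda}_k)^{-1})$, average the quadratic form, and compare with $D\beta_k^{-1} + \nu_k(\mathbf{x}_n-\mathbf{m}_k)^T\mathbf{W}_k(\mathbf{x}_n-\mathbf{m}_k)$. A sketch using SciPy's Wishart sampler, with made-up parameter values:

```python
import numpy as np
from scipy.stats import wishart

np.random.seed(0)
D, beta_k, nu_k = 2, 2.0, 5.0
m_k = np.array([0.3, -0.1])
W_k = np.array([[0.4, 0.1], [0.1, 0.3]])
x_n = np.array([1.0, 0.5])

vals = []
for _ in range(20000):
    Lam = wishart.rvs(df=nu_k, scale=W_k)                                 # Lambda_k ~ W(W_k, nu_k)
    mu = np.random.multivariate_normal(m_k, np.linalg.inv(beta_k * Lam))  # mu_k | Lambda_k
    d = x_n - mu
    vals.append(d @ Lam @ d)

mc = np.mean(vals)
closed = D / beta_k + nu_k * (x_n - m_k) @ W_k @ (x_n - m_k)
print(mc, closed)   # the two values agree up to Monte Carlo error
```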
| 3,025
|
10
|
10.15
|
easy
|
Using the result (B.17), show that the expected value of the mixing coefficients in the variational mixture of Gaussians is given by (10.69).
|
There is a typo in Eq (10.69). The numerator should be $\alpha_0 + N_k$ . Let's substitute Eq $\alpha_k = \alpha_0 + N_k.$ into (B.17):
$$\mathbb{E}[\pi_k] = \frac{\alpha_k}{\sum_k \alpha_k} = \frac{\alpha_0 + N_k}{K\alpha_0 + \sum_k N_k} = \frac{\alpha_0 + N_k}{K\alpha_0 + N}$$
| 290
|
10
|
10.16
|
medium
|
Verify the results $\mathbb{E}[\ln p(\mathbf{X}|\mathbf{Z}, \boldsymbol{\mu}, \boldsymbol{\Lambda})] = \frac{1}{2} \sum_{k=1}^{K} N_k \left\{ \ln \widetilde{\Lambda}_k - D\beta_k^{-1} - \nu_k \text{Tr}(\mathbf{S}_k \mathbf{W}_k) - \nu_k (\overline{\mathbf{x}}_k - \mathbf{m}_k)^{\mathrm{T}} \mathbf{W}_k (\overline{\mathbf{x}}_k - \mathbf{m}_k) - D \ln(2\pi) \right\}$ and $\mathbb{E}[\ln p(\mathbf{Z}|\boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \ln \widetilde{\pi}_k$ for the first two terms in the lower bound for the variational Gaussian mixture model given by $-\mathbb{E}[\ln q(\mathbf{Z})] - \mathbb{E}[\ln q(\boldsymbol{\pi})] - \mathbb{E}[\ln q(\boldsymbol{\mu}, \boldsymbol{\Lambda})]$.
|
According to Eq $p(\mathbf{X}|\mathbf{Z}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) = \prod_{n=1}^{N} \prod_{k=1}^{K} \mathcal{N} \left( \mathbf{x}_{n} | \boldsymbol{\mu}_{k}, \boldsymbol{\Lambda}_{k}^{-1} \right)^{z_{nk}}$, we can obtain:
$$\mathbb{E}[\ln p(\mathbf{X}|\mathbf{Z}, \boldsymbol{\mu}, \boldsymbol{\Lambda})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \mathbb{E}[z_{nk} \ln \mathcal{N}(\mathbf{x}_{n}|\boldsymbol{\mu}_{k}, \boldsymbol{\Lambda}_{k}^{-1})]$$
$$= \sum_{n=1}^{N} \sum_{k=1}^{K} \mathbb{E}[z_{nk}] \cdot \mathbb{E}[-\frac{D}{2} \ln 2\pi + \frac{1}{2} \ln |\boldsymbol{\Lambda}_{k}| - \frac{1}{2} (\mathbf{x}_{n} - \boldsymbol{\mu}_{k})^{T} \boldsymbol{\Lambda}_{k} (\mathbf{x}_{n} - \boldsymbol{\mu}_{k})]$$
$$= \frac{1}{2} \sum_{n=1}^{N} \sum_{k=1}^{K} \mathbb{E}[z_{nk}] \cdot \left\{ -D \ln 2\pi + \mathbb{E}[\ln |\boldsymbol{\Lambda}_{k}|] - \mathbb{E}[(\mathbf{x}_{n} - \boldsymbol{\mu}_{k})^{T} \boldsymbol{\Lambda}_{k} (\mathbf{x}_{n} - \boldsymbol{\mu}_{k})] \right\}$$
$$= \frac{1}{2} \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \cdot \left\{ -D \ln 2\pi + \ln \tilde{\boldsymbol{\Lambda}}_{k} - D \boldsymbol{\beta}_{k}^{-1} - v_{k} (\mathbf{x}_{n} - \mathbf{m}_{k})^{T} \mathbf{W}_{k} (\mathbf{x}_{n} - \mathbf{m}_{k}) \right\}$$
Where we have used Eq $\mathbb{E}[z_{nk}] = r_{nk}$, Eq $= D\beta_{k}^{-1} + \nu_{k}(\mathbf{x}_{n}-\mathbf{m}_{k})^{\mathrm{T}}\mathbf{W}_{k}(\mathbf{x}_{n}-\mathbf{m}_{k}) \quad$ and Eq $\ln \widetilde{\Lambda}_k \equiv \mathbb{E}\left[\ln |\mathbf{\Lambda}_k|\right] = \sum_{i=1}^D \psi\left(\frac{\nu_k + 1 - i}{2}\right) + D\ln 2 + \ln |\mathbf{W}_k| \quad$. Then we first deal with the first three terms inside the bracket, i.e.,
$$\begin{split} \frac{1}{2} \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \cdot \left\{ -D \ln 2\pi + \ln \widetilde{\Lambda}_{k} - D\beta_{k}^{-1} \right\} &= \frac{1}{2} \sum_{k=1}^{K} \sum_{n=1}^{N} r_{nk} \cdot \left\{ -D \ln 2\pi + \ln \widetilde{\Lambda}_{k} - D\beta_{k}^{-1} \right\} \\ &= \frac{1}{2} \sum_{k=1}^{K} \left[ \sum_{n=1}^{N} r_{nk} \right] \cdot \left[ -D \ln 2\pi + \ln \widetilde{\Lambda}_{k} - D\beta_{k}^{-1} \right] \\ &= \frac{1}{2} \sum_{k=1}^{K} N_{k} \cdot \left[ -D \ln 2\pi + \ln \widetilde{\Lambda}_{k} - D\beta_{k}^{-1} \right] \end{split}$$
Where we have used the definition of $N_k$ . Next we deal with the last term inside the bracket, i.e.,
$$\begin{split} \frac{1}{2} \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \left\{ -v_k (\mathbf{x}_n - \mathbf{m}_k)^T \mathbf{W}_k (\mathbf{x}_n - \mathbf{m}_k) \right\} &= -\frac{1}{2} \sum_{n=1}^{N} \sum_{k=1}^{K} \operatorname{Tr}[r_{nk} v_k (\mathbf{x}_n - \mathbf{m}_k) (\mathbf{x}_n - \mathbf{m}_k)^T \mathbf{W}_k] \\ &= -\frac{1}{2} \sum_{k=1}^{K} \operatorname{Tr}\Big[\sum_{n=1}^{N} r_{nk} v_k (\mathbf{x}_n - \mathbf{m}_k) (\mathbf{x}_n - \mathbf{m}_k)^T \mathbf{W}_k\Big] \end{split}$$
Since we have:
$$\begin{split} \sum_{n=1}^{N} r_{nk} v_k (\mathbf{x}_n - \mathbf{m}_k)(\mathbf{x}_n - \mathbf{m}_k)^T &= v_k \sum_{n=1}^{N} r_{nk} (\bar{\mathbf{x}}_k - \mathbf{m}_k + \mathbf{x}_n - \bar{\mathbf{x}}_k)(\bar{\mathbf{x}}_k - \mathbf{m}_k + \mathbf{x}_n - \bar{\mathbf{x}}_k)^T \\ &= v_k \sum_{n=1}^{N} r_{nk} (\bar{\mathbf{x}}_k - \mathbf{m}_k)(\bar{\mathbf{x}}_k - \mathbf{m}_k)^T + v_k \sum_{n=1}^{N} r_{nk} (\mathbf{x}_n - \bar{\mathbf{x}}_k)(\mathbf{x}_n - \bar{\mathbf{x}}_k)^T \\ &\quad + 2 v_k (\bar{\mathbf{x}}_k - \mathbf{m}_k)\Big(\sum_{n=1}^{N} r_{nk}\mathbf{x}_n - \sum_{n=1}^{N} r_{nk}\bar{\mathbf{x}}_k\Big)^T \\ &= v_k N_k (\bar{\mathbf{x}}_k - \mathbf{m}_k)(\bar{\mathbf{x}}_k - \mathbf{m}_k)^T + v_k N_k \mathbf{S}_k + 2 v_k (\bar{\mathbf{x}}_k - \mathbf{m}_k)(N_k\bar{\mathbf{x}}_k - N_k\bar{\mathbf{x}}_k)^T \\ &= v_k N_k (\bar{\mathbf{x}}_k - \mathbf{m}_k)(\bar{\mathbf{x}}_k - \mathbf{m}_k)^T + v_k N_k \mathbf{S}_k \end{split}$$
Therefore, the last term can be reduced to:
$$\begin{split} \frac{1}{2} \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \left\{ -v_{k} (\mathbf{x}_{n} - \mathbf{m}_{k})^{T} \mathbf{W}_{k} (\mathbf{x}_{n} - \mathbf{m}_{k}) \right\} &= -\frac{1}{2} \sum_{k=1}^{K} \mathrm{Tr}[v_{k} N_{k} (\bar{\mathbf{x}}_{k} - \mathbf{m}_{k})(\bar{\mathbf{x}}_{k} - \mathbf{m}_{k})^{T} \mathbf{W}_{k}] - \frac{1}{2} \sum_{k=1}^{K} \mathrm{Tr}[v_{k} N_{k} \mathbf{S}_{k} \mathbf{W}_{k}] \\ &= -\frac{1}{2} \sum_{k=1}^{K} N_{k} v_{k} (\bar{\mathbf{x}}_{k} - \mathbf{m}_{k})^{T} \mathbf{W}_{k} (\bar{\mathbf{x}}_{k} - \mathbf{m}_{k}) - \frac{1}{2} \sum_{k=1}^{K} N_{k} v_{k} \mathrm{Tr}[\mathbf{S}_{k} \mathbf{W}_{k}] \end{split}$$
If we combine the first three and the last term, we just obtain Eq $\mathbb{E}[\ln p(\mathbf{X}|\mathbf{Z}, \boldsymbol{\mu}, \boldsymbol{\Lambda})] = \frac{1}{2} \sum_{k=1}^{K} N_k \left\{ \ln \widetilde{\Lambda}_k - D\beta_k^{-1} - \nu_k \text{Tr}(\mathbf{S}_k \mathbf{W}_k) - \nu_k (\overline{\mathbf{x}}_k - \mathbf{m}_k)^{\mathrm{T}} \mathbf{W}_k (\overline{\mathbf{x}}_k - \mathbf{m}_k) - D \ln(2\pi) \right\}$. Next we prove Eq $\mathbb{E}[\ln p(\mathbf{Z}|\boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \ln \widetilde{\pi}_k$. According to Eq $p(\mathbf{Z}|\boldsymbol{\pi}) = \prod_{n=1}^{N} \prod_{k=1}^{K} \pi_k^{z_{nk}}.$, we have:
$$\mathbb{E}[\ln p(\mathbf{Z}|\boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} \mathbb{E}[z_{nk} \ln \pi_k] = \sum_{n=1}^{N} \sum_{k=1}^{K} \mathbb{E}[z_{nk}]\, \mathbb{E}[\ln \pi_k] = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \ln \widetilde{\pi}_k$$
Where the factorization of the variational posterior allows the expectations over $\mathbf{Z}$ and $\boldsymbol{\pi}$ to be taken separately.
Just as required.
| 5,950
|
10
|
10.17
|
hard
|
Verify the results (10.73)–(10.77) for the remaining terms in the lower bound for the variational Gaussian mixture model given by $-\mathbb{E}[\ln q(\mathbf{Z})] - \mathbb{E}[\ln q(\boldsymbol{\pi})] - \mathbb{E}[\ln q(\boldsymbol{\mu}, \boldsymbol{\Lambda})]$.
|
According to Eq $p(\boldsymbol{\pi}) = \operatorname{Dir}(\boldsymbol{\pi}|\boldsymbol{\alpha}_0) = C(\boldsymbol{\alpha}_0) \prod_{k=1}^K \pi_k^{\alpha_0 - 1}$, we have:
$$\mathbb{E}[\ln p(\pi)] = \ln C(\boldsymbol{\alpha}_0) + (\alpha_0 - 1) \sum_{k=1}^K \mathbb{E}[\ln \pi_k]$$
$$= \ln C(\boldsymbol{\alpha}_0) + (\alpha_0 - 1) \sum_{k=1}^K \ln \widetilde{\pi}_k$$
According to Eq $= \prod_{k=1}^{K} \mathcal{N}\left(\boldsymbol{\mu}_{k}|\mathbf{m}_{0}, (\beta_{0}\boldsymbol{\Lambda}_{k})^{-1}\right) \mathcal{W}(\boldsymbol{\Lambda}_{k}|\mathbf{W}_{0}, \nu_{0}) \qquad$, we have:
$$\begin{split} \mathbb{E}[\ln p(\pmb{\mu}, \pmb{\Lambda})] &= \sum_{k=1}^K \mathbb{E}[\ln \mathcal{N}(\pmb{\mu}_k | \mathbf{m}_0, (\beta_0 \mathbf{\Lambda}_k)^{-1})] + \sum_{k=1}^K \mathbb{E}[\ln \mathcal{W}(\mathbf{\Lambda}_k | \mathbf{W}_0, v_0)] \\ &= \sum_{k=1}^K \mathbb{E}\Big\{-\frac{D}{2}\ln 2\pi + \frac{1}{2}\ln |\beta_0 \mathbf{\Lambda}_k| - \frac{1}{2}(\pmb{\mu}_k - \mathbf{m}_0)^T (\beta_0 \mathbf{\Lambda}_k) (\pmb{\mu}_k - \mathbf{m}_0)\Big\} \\ &+ \sum_{k=1}^K \mathbb{E}\Big\{\ln B(\mathbf{W}_0, v_0) + \frac{v_0 - D - 1}{2}\ln |\mathbf{\Lambda}_k| - \frac{1}{2}\mathrm{Tr}[\mathbf{W}_0^{-1}\mathbf{\Lambda}_k]\Big\} \\ &= \sum_{k=1}^K \mathbb{E}\Big\{-\frac{D}{2}\ln 2\pi + \frac{D}{2}\ln \beta_0 + \frac{1}{2}\ln |\mathbf{\Lambda}_k| - \frac{1}{2}(\pmb{\mu}_k - \mathbf{m}_0)^T (\beta_0 \mathbf{\Lambda}_k) (\pmb{\mu}_k - \mathbf{m}_0)\Big\} \\ &+ \sum_{k=1}^K \mathbb{E}\Big\{\ln B(\mathbf{W}_0, v_0) + \frac{v_0 - D - 1}{2}\ln |\mathbf{\Lambda}_k| - \frac{1}{2}\mathrm{Tr}[\mathbf{W}_0^{-1}\mathbf{\Lambda}_k]\Big\} \\ &= \frac{K \cdot D}{2}\ln \frac{\beta_0}{2\pi} + \frac{1}{2}\sum_{k=1}^K \ln \widetilde{\mathbf{\Lambda}_k} - \frac{1}{2}\sum_{k=1}^K \mathbb{E}\Big\{(\pmb{\mu}_k - \mathbf{m}_0)^T (\beta_0 \mathbf{\Lambda}_k) (\pmb{\mu}_k - \mathbf{m}_0)\Big\} \\ &K \cdot \ln B(\mathbf{W}_0, v_0) + \frac{v_0 - D - 1}{2}\sum_{k=1}^K \ln \widetilde{\mathbf{\Lambda}_k} - \frac{1}{2}\sum_{k=1}^K \mathbb{E}\Big\{\mathrm{Tr}[\mathbf{W}_0^{-1}\mathbf{\Lambda}_k]\Big\} \end{split}$$
So now we need to calculate these two expectations. Using (B.80), we can obtain:
$$\sum_{k=1}^K \mathbb{E} \Big\{ \mathrm{Tr}[\mathbf{W}_0^{-1} \mathbf{\Lambda}_k] \Big\} = \sum_{k=1}^K \mathrm{Tr} \Big\{ \mathbf{W}_0^{-1} \cdot \mathbb{E}[\mathbf{\Lambda}_k] \Big\} = \sum_{k=1}^K v_k \cdot \mathrm{Tr} \Big\{ \mathbf{W}_0^{-1} \mathbf{W}_k \Big\}$$
To calculate the other expectation, first we write down two properties of the Gaussian distribution, i.e.,
$$\mathbb{E}[\boldsymbol{\mu}_k] = \mathbf{m}_k , \quad \mathbb{E}[\boldsymbol{\mu}_k \boldsymbol{\mu}_k^T] = \mathbf{m}_k \mathbf{m}_k^T + \boldsymbol{\beta}_k^{-1} \boldsymbol{\Lambda}_k^{-1}$$
Therefore, we can obtain:
$$\begin{split} \sum_{k=1}^K \mathbb{E}\Big\{ (\boldsymbol{\mu}_k - \mathbf{m}_0)^T (\beta_0 \boldsymbol{\Lambda}_k) (\boldsymbol{\mu}_k - \mathbf{m}_0) \Big\} &= \beta_0 \sum_{k=1}^K \mathbb{E}\Big\{ \mathrm{Tr}[\boldsymbol{\Lambda}_k \cdot (\boldsymbol{\mu}_k - \mathbf{m}_0) (\boldsymbol{\mu}_k - \mathbf{m}_0)^T] \Big\} \\ &= \beta_0 \sum_{k=1}^K \mathbb{E}_{\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k} \Big\{ \mathrm{Tr}\big[\boldsymbol{\Lambda}_k \cdot (\boldsymbol{\mu}_k \boldsymbol{\mu}_k^T - 2\boldsymbol{\mu}_k \mathbf{m}_0^T + \mathbf{m}_0 \mathbf{m}_0^T)] \Big\} \\ &= \beta_0 \sum_{k=1}^K \mathbb{E}_{\boldsymbol{\Lambda}_k} \Big\{ \mathrm{Tr}\big[\boldsymbol{\Lambda}_k \cdot (\mathbf{m}_k \mathbf{m}_k^T + \boldsymbol{\beta}_k^{-1} \boldsymbol{\Lambda}_k^{-1} - 2\mathbf{m}_k \mathbf{m}_0^T + \mathbf{m}_0 \mathbf{m}_0^T)] \Big\} \\ &= \beta_0 \sum_{k=1}^K \mathbb{E}_{\boldsymbol{\Lambda}_k} \Big\{ \mathrm{Tr}\big[\boldsymbol{\beta}_k^{-1} \mathbf{I} + \boldsymbol{\Lambda}_k \cdot (\mathbf{m}_k \mathbf{m}_k^T - 2\mathbf{m}_k \mathbf{m}_0^T + \mathbf{m}_0 \mathbf{m}_0^T)] \Big\} \\ &= \beta_0 \sum_{k=1}^K \mathbb{E}_{\boldsymbol{\Lambda}_k} \Big\{ \boldsymbol{D} \cdot \boldsymbol{\beta}_k^{-1} + \mathrm{Tr}\big[\boldsymbol{\Lambda}_k \cdot (\mathbf{m}_k - \mathbf{m}_0) (\mathbf{m}_k - \mathbf{m}_0)^T \big] \Big\} \\ &= \frac{KD\beta_0}{\beta_k} + \beta_0 \sum_{k=1}^K \mathbb{E}_{\boldsymbol{\Lambda}_k} \Big\{ (\mathbf{m}_k - \mathbf{m}_0) \boldsymbol{\Lambda}_k (\mathbf{m}_k - \mathbf{m}_0)^T \Big\} \\ &= \frac{KD\beta_0}{\beta_k} + \beta_0 \sum_{k=1}^K (\mathbf{m}_k - \mathbf{m}_0) \cdot \mathbb{E}_{\boldsymbol{\Lambda}_k} [\boldsymbol{\Lambda}_k] \cdot (\mathbf{m}_k - \mathbf{m}_0)^T \\ &= \frac{KD\beta_0}{\beta_k} + \beta_0 \sum_{k=1}^K (\mathbf{m}_k - \mathbf{m}_0) \cdot (v_k \mathbf{W}_k) \cdot (\mathbf{m}_k - \mathbf{m}_0)^T \end{split}$$
Substituting these two expectations back, we obtain Eq $\mathbb{E}[\ln p(\boldsymbol{\mu}, \boldsymbol{\Lambda})] = \frac{1}{2} \sum_{k=1}^{K} \left\{ D \ln(\beta_0/2\pi) + \ln \widetilde{\Lambda}_k - \frac{D\beta_0}{\beta_k} - \beta_0 \nu_k (\mathbf{m}_k - \mathbf{m}_0)^{\mathrm{T}} \mathbf{W}_k (\mathbf{m}_k - \mathbf{m}_0) \right\} + K \ln B(\mathbf{W}_0, \nu_0) + \frac{(\nu_0 - D - 1)}{2} \sum_{k=1}^{K} \ln \widetilde{\Lambda}_k - \frac{1}{2} \sum_{k=1}^{K} \nu_k \mathrm{Tr}(\mathbf{W}_0^{-1} \mathbf{W}_k)$ just as required. According to Eq $q^{}(\mathbf{Z}) = \prod_{n=1}^{N} \prod_{k=1}^{K} r_{nk}^{z_{nk}}$, we have:
$$\mathbb{E}[\ln q(\mathbf{Z})] = \sum_{n,k=1}^{N,K} \mathbb{E}[z_{nk}] \cdot \ln r_{nk} = \sum_{n,k=1}^{N,K} r_{nk} \cdot \ln r_{nk}$$
According to Eq $q^{}(\boldsymbol{\pi}) = \operatorname{Dir}(\boldsymbol{\pi}|\boldsymbol{\alpha})$, we have:
$$\mathbb{E}[\ln q(\boldsymbol{\pi})] = \ln C(\boldsymbol{\alpha}) + \sum_{k=1}^K (\alpha_k - 1) \mathbb{E}[\ln \pi_k] = \ln C(\boldsymbol{\alpha}) + \sum_{k=1}^K (\alpha_k - 1) \ln \widetilde{\pi}_k$$
To derive Eq (10.77), we follow the same procedure as for Eq (10.74):
$$\begin{split} \mathbb{E}[\ln q(\boldsymbol{\mu},\boldsymbol{\Lambda})] &= \sum_{k=1}^K \mathbb{E}[\ln \mathcal{N}(\boldsymbol{\mu}_k|\mathbf{m}_k,(\beta_k\boldsymbol{\Lambda}_k)^{-1})] + \sum_{k=1}^K \mathbb{E}[\ln \mathcal{W}(\boldsymbol{\Lambda}_k|\mathbf{W}_k,v_k)] \\ &= \sum_{k=1}^K \mathbb{E}\Big\{-\frac{D}{2}\ln 2\pi + \frac{D}{2}\ln \beta_k + \frac{1}{2}\ln |\boldsymbol{\Lambda}_k| - \frac{1}{2}(\boldsymbol{\mu}_k - \mathbf{m}_k)^T(\beta_k\boldsymbol{\Lambda}_k)(\boldsymbol{\mu}_k - \mathbf{m}_k)\Big\} \\ &\quad + \sum_{k=1}^K \mathbb{E}\Big\{\ln B(\mathbf{W}_k,v_k) + \frac{v_k - D - 1}{2}\ln |\boldsymbol{\Lambda}_k| - \frac{1}{2}\mathrm{Tr}[\mathbf{W}_k^{-1}\boldsymbol{\Lambda}_k]\Big\} \\ &= \sum_{k=1}^K \Big\{\frac{D}{2}\ln \frac{\beta_k}{2\pi} + \frac{1}{2}\ln \widetilde{\Lambda}_k - \frac{1}{2}\mathbb{E}\big[(\boldsymbol{\mu}_k - \mathbf{m}_k)^T(\beta_k\boldsymbol{\Lambda}_k)(\boldsymbol{\mu}_k - \mathbf{m}_k)\big]\Big\} \\ &\quad + \sum_{k=1}^K \Big\{\ln B(\mathbf{W}_k,v_k) + \frac{v_k - D - 1}{2}\ln \widetilde{\Lambda}_k - \frac{1}{2}\mathbb{E}\big[\mathrm{Tr}[\mathbf{W}_k^{-1}\boldsymbol{\Lambda}_k]\big]\Big\} \\ &= \sum_{k=1}^K \Big\{\frac{D}{2}\ln \frac{\beta_k}{2\pi} + \frac{1}{2}\ln \widetilde{\Lambda}_k - \frac{D}{2}\Big\} + \sum_{k=1}^K \Big\{\ln B(\mathbf{W}_k,v_k) + \frac{v_k - D - 1}{2}\ln \widetilde{\Lambda}_k - \frac{v_k D}{2}\Big\} \end{split}$$

where we have used $\mathbb{E}\big[(\boldsymbol{\mu}_k - \mathbf{m}_k)^T(\beta_k\boldsymbol{\Lambda}_k)(\boldsymbol{\mu}_k - \mathbf{m}_k)\big] = \mathrm{Tr}[\mathbf{I}] = D$ and $\mathbb{E}\big[\mathrm{Tr}[\mathbf{W}_k^{-1}\boldsymbol{\Lambda}_k]\big] = v_k \mathrm{Tr}[\mathbf{W}_k^{-1}\mathbf{W}_k] = v_k D$.
It is identical to Eq $\mathbb{E}[\ln q(\boldsymbol{\mu}, \boldsymbol{\Lambda})] = \sum_{k=1}^{K} \left\{ \frac{1}{2} \ln \widetilde{\Lambda}_k + \frac{D}{2} \ln \left( \frac{\beta_k}{2\pi} \right) - \frac{D}{2} - \operatorname{H}\left[q(\boldsymbol{\Lambda}_k)\right] \right\}$.
| 7,815
|
10
|
10.18
|
hard
|
In this exercise, we shall derive the variational re-estimation equations for the Gaussian mixture model by direct differentiation of the lower bound. To do this we assume that the variational distribution has the factorization defined by $q(\mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) = q(\mathbf{Z})q(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}).$ and $q(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) = q(\boldsymbol{\pi}) \prod_{k=1}^{K} q(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k).$ with factors given by $q^{}(\mathbf{Z}) = \prod_{n=1}^{N} \prod_{k=1}^{K} r_{nk}^{z_{nk}}$, $q^{}(\boldsymbol{\pi}) = \operatorname{Dir}(\boldsymbol{\pi}|\boldsymbol{\alpha})$, and $q^{}(\boldsymbol{\mu}_{k}, \boldsymbol{\Lambda}_{k}) = \mathcal{N}\left(\boldsymbol{\mu}_{k} | \mathbf{m}_{k}, (\beta_{k} \boldsymbol{\Lambda}_{k})^{-1}\right) \, \mathcal{W}(\boldsymbol{\Lambda}_{k} | \mathbf{W}_{k}, \nu_{k})$. Substitute these into $-\mathbb{E}[\ln q(\mathbf{Z})] - \mathbb{E}[\ln q(\boldsymbol{\pi})] - \mathbb{E}[\ln q(\boldsymbol{\mu}, \boldsymbol{\Lambda})]$ and hence obtain the lower bound as a function of the parameters of the variational distribution. Then, by maximizing the bound with respect to these parameters, derive the re-estimation equations for the factors in the variational distribution, and show that these are the same as those obtained in Section 10.2.1.
|
This problem is very complicated, so let's explain it in detail. In Section 10.2.1, we obtained the update formulae for all the coefficients using the general framework of variational inference. For more details, see Prob. 10.12 and Prob. 10.13.
Moreover, in the previous problem, we have shown that $\mathcal{L}$ is given by Eq (10.70)-Eq $\mathbb{E}[\ln q(\boldsymbol{\mu}, \boldsymbol{\Lambda})] = \sum_{k=1}^{K} \left\{ \frac{1}{2} \ln \widetilde{\Lambda}_k + \frac{D}{2} \ln \left( \frac{\beta_k}{2\pi} \right) - \frac{D}{2} - \operatorname{H}\left[q(\boldsymbol{\Lambda}_k)\right] \right\}$, if we have assumed the form of q, i.e., Eq $q(\mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) = q(\mathbf{Z})q(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}).$, Eq $q^{}(\mathbf{Z}) = \prod_{n=1}^{N} \prod_{k=1}^{K} r_{nk}^{z_{nk}}$,Eq $q(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) = q(\boldsymbol{\pi}) \prod_{k=1}^{K} q(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k).$, Eq $q^{}(\boldsymbol{\pi}) = \operatorname{Dir}(\boldsymbol{\pi}|\boldsymbol{\alpha})$ and Eq $q^{}(\boldsymbol{\mu}_{k}, \boldsymbol{\Lambda}_{k}) = \mathcal{N}\left(\boldsymbol{\mu}_{k} | \mathbf{m}_{k}, (\beta_{k} \boldsymbol{\Lambda}_{k})^{-1}\right) \, \mathcal{W}(\boldsymbol{\Lambda}_{k} | \mathbf{W}_{k}, \nu_{k})$. Note that here we do not know the specific value of those coefficients, e.g., Eq (10.60)-Eq (10.63). In this problem, we will show that by maximizing $\mathcal{L}$ with respect to those coefficients, we will obtain those formula just as in section 10.2.1.
To summarize, here we write down all the coefficients required to estimate: $\{\beta_k, \mathbf{m}_k, v_k, \mathbf{W}_k, \alpha_k, r_{nk}\}$ . We begin by considering $\beta_k$ . Note that only Eq $\mathbb{E}[\ln p(\mathbf{X}|\mathbf{Z}, \boldsymbol{\mu}, \boldsymbol{\Lambda})] = \frac{1}{2} \sum_{k=1}^{K} N_k \left\{ \ln \widetilde{\Lambda}_k - D\beta_k^{-1} - \nu_k \text{Tr}(\mathbf{S}_k \mathbf{W}_k) - \nu_k (\overline{\mathbf{x}}_k - \mathbf{m}_k)^{\mathrm{T}} \mathbf{W}_k (\overline{\mathbf{x}}_k - \mathbf{m}_k) - D \ln(2\pi) \right\}$, $\mathbb{E}[\ln p(\boldsymbol{\mu}, \boldsymbol{\Lambda})] = \frac{1}{2} \sum_{k=1}^{K} \left\{ D \ln(\beta_0/2\pi) + \ln \widetilde{\Lambda}_k - \frac{D\beta_0}{\beta_k} - \beta_0 \nu_k (\mathbf{m}_k - \mathbf{m}_0)^{\mathrm{T}} \mathbf{W}_k (\mathbf{m}_k - \mathbf{m}_0) \right\} + K \ln B(\mathbf{W}_0, \nu_0) + \frac{(\nu_0 - D - 1)}{2} \sum_{k=1}^{K} \ln \widetilde{\Lambda}_k - \frac{1}{2} \sum_{k=1}^{K} \nu_k \mathrm{Tr}(\mathbf{W}_0^{-1} \mathbf{W}_k)$ and $\mathbb{E}[\ln q(\boldsymbol{\mu}, \boldsymbol{\Lambda})] = \sum_{k=1}^{K} \left\{ \frac{1}{2} \ln \widetilde{\Lambda}_k + \frac{D}{2} \ln \left( \frac{\beta_k}{2\pi} \right) - \frac{D}{2} - \operatorname{H}\left[q(\boldsymbol{\Lambda}_k)\right] \right\}$ contain $\beta_k$ , we calculate the derivative of $\mathcal{L}$ with
respect to $\beta_k$ and set it to zero:
$$\begin{array}{lcl} \frac{\partial \mathcal{L}}{\partial \beta_k} & = & (\frac{1}{2}N_k D\beta_k^{-2}) + (\frac{1}{2}D\beta_0\beta_k^{-2}) - (\frac{D}{2}\frac{1}{2\pi}\frac{2\pi}{\beta_k}) \\ & = & \frac{1}{2}\beta_k^{-2} \cdot (N_k D + D\beta_0 - D\beta_k) = 0 \end{array}$$
The three brackets in the first line correspond to the derivative with respect to Eq (10.71), (10.74) and (10.77). Rearranging it, we obtain Eq (10.60). Next we consider $\mathbf{m}_k$ , which only occurs in the quadratic terms in Eq (10.71) and (10.74).
$$\frac{\partial \mathcal{L}}{\partial \mathbf{m}_{k}} = \left[ \frac{1}{2} N_{k} v_{k} \cdot 2 \mathbf{W}_{k} (\bar{\mathbf{x}}_{k} - \mathbf{m}_{k}) \right] + \left[ -\frac{1}{2} \beta_{0} v_{k} \cdot 2 \mathbf{W}_{k} (\mathbf{m}_{k} - \mathbf{m}_{0}) \right]
= v_{k} \mathbf{W}_{k} \left[ N_{k} \cdot (\bar{\mathbf{x}}_{k} - \mathbf{m}_{k}) - \beta_{0} (\mathbf{m}_{k} - \mathbf{m}_{0}) \right] = 0$$
Similarly, the two brackets in the first line correspond to the derivative with respect to Eq (10.71) and (10.74). Rearranging it, we obtain Eq (10.61). Next noticing that $v_k$ and $\mathbf{W}_k$ are always coupled in $\mathcal{L}$ , e.g., $v_k$ occurs ahead of quadratic terms in Eq (10.71). We will deal with $v_k$ and $\mathbf{W}_k$ simultaneously. Let's first make this more clear by writing down those terms depend on $v_k$ and $\mathbf{W}_k$ in $\mathcal{L}$ :
$$(10.77) \propto \sum_{k=1}^{K} \left\{ \frac{1}{2} \ln \widetilde{\Lambda}_k - \text{H}[q(\boldsymbol{\Lambda}_k)] \right\}$$
$$(10.71) \propto \frac{1}{2} \sum_{k=1}^{K} N_k \left\{ \ln \widetilde{\Lambda}_k - v_k \cdot \text{Tr}[(\mathbf{S}_k + \mathbf{A}_k) \mathbf{W}_k] \right\}$$
$$(10.74) \propto \frac{1}{2} \sum_{k=1}^{K} \left\{ \ln \widetilde{\Lambda}_{k} - \beta_{0} v_{k} \cdot \text{Tr}[\mathbf{B}_{k} \mathbf{W}_{k}] \right\} + \frac{v_{0} - D - 1}{2} \sum_{k=1}^{K} \ln \widetilde{\Lambda}_{k} - \frac{1}{2} \sum_{k=1}^{K} v_{k} \text{Tr}[\mathbf{W}_{0}^{-1} \mathbf{W}_{k}]$$
$$= \frac{v_{0} - D}{2} \sum_{k=1}^{K} \ln \widetilde{\Lambda}_{k} - \frac{1}{2} \sum_{k=1}^{K} v_{k} \text{Tr}[(\beta_{0} \mathbf{B}_{k} + \mathbf{W}_{0}^{-1}) \mathbf{W}_{k}]$$
Where $\ln \widetilde{\Lambda}_k$ is given by Eq (10.65) and $\mathbf{A}_k$ and $\mathbf{B}_k$ are given by:
$$\mathbf{A}_k = (\bar{\mathbf{x}_k} - \mathbf{m}_k)(\bar{\mathbf{x}_k} - \mathbf{m}_k)^T, \quad \mathbf{B}_k = (\mathbf{m}_k - \mathbf{m}_0)(\mathbf{m}_k - \mathbf{m}_0)^T$$
Moreover, $H[q(\Lambda_k)]$ is given by (B.82):
$$H[q(\mathbf{\Lambda}_k)] = -\ln B(\mathbf{W}_k, v_k) - \frac{v_k - D - 1}{2} \ln \widetilde{\Lambda}_k + \frac{v_k D}{2}$$
Where $\ln B(\mathbf{W}_k, v_k)$ can be calculated based on (B.79). Note here we only focus on those terms dependent on $v_k$ and $\mathbf{W}_k$ :
$$\ln B(\mathbf{W}_k, v_k) \propto -\frac{v_k}{2} \ln |\mathbf{W}_k| - \frac{v_k D}{2} \ln 2 - \sum_{i=1}^{D} \ln \Gamma\Big(\frac{v_k + 1 - i}{2}\Big)$$
To further simplify the derivative, we now write down those terms in $\mathcal{L}$ which only depends on $v_k$ and $\mathbf{W}_k$ with a given specific index k:
$$\begin{split} \mathcal{L} & \propto & -\left\{\frac{1}{2}\ln\widetilde{\Lambda}_k - \mathrm{H}[q(\boldsymbol{\Lambda}_k)]\right\} + \frac{1}{2}N_k\left\{\ln\widetilde{\Lambda}_k - v_k \cdot \mathrm{Tr}[(\mathbf{S}_k + \mathbf{A}_k)\mathbf{W}_k]\right\} \\ & + \frac{v_0 - D}{2}\ln\widetilde{\Lambda}_k - \frac{1}{2}v_k\mathrm{Tr}[(\beta_0\mathbf{B}_k + \mathbf{W}_0^{-1})\mathbf{W}_k] \\ & = & \frac{1}{2}(-1 + N_k + v_0 - D)\ln\widetilde{\Lambda}_k + \mathrm{H}[q(\boldsymbol{\Lambda}_k)] - \frac{1}{2}v_k \cdot \mathrm{Tr}[(N_k\mathbf{S}_k + N_k\mathbf{A}_k + \beta_0\mathbf{B}_k + \mathbf{W}_0^{-1})\mathbf{W}_k] \\ & = & \frac{1}{2}(-1 + N_k + v_0 - D)\ln\widetilde{\Lambda}_k - \frac{1}{2}v_k \cdot \mathrm{Tr}[(N_k\mathbf{S}_k + N_k\mathbf{A}_k + \beta_0\mathbf{B}_k + \mathbf{W}_0^{-1})\mathbf{W}_k] \\ & - \ln B(\mathbf{W}_k, v_k) - \frac{v_k - D - 1}{2}\ln\widetilde{\Lambda}_k + \frac{v_kD}{2} \\ & = & \frac{1}{2}(N_k + v_0 - v_k)\ln\widetilde{\Lambda}_k - \frac{1}{2}v_k \cdot \mathrm{Tr}[\mathbf{F}_k\mathbf{W}_k] + \frac{v_kD}{2} - \ln B(\mathbf{W}_k, v_k) \end{split}$$
Where we have defined:
$$\mathbf{F}_k = N_k \mathbf{S}_k + N_k \mathbf{A}_k + \beta_0 \mathbf{B}_k + \mathbf{W}_0^{-1}$$
Note that Eq (10.77) enters $\mathcal{L}$ with a minus sign, so the negative of its contribution (the first proportionality above) has been used in the first line. We first calculate the derivative of $\mathcal{L}$ with respect to $v_k$ and set it to zero:
$$\begin{split} \frac{\partial \mathcal{L}}{\partial v_k} &= \frac{1}{2} (N_k + v_0 - v_k) \frac{d \ln \widetilde{\Lambda}_k}{d v_k} - \frac{\ln \widetilde{\Lambda}_k}{2} - \frac{1}{2} \mathrm{Tr}[\mathbf{F}_k \mathbf{W}_k] + \frac{D}{2} \\ &\quad + \frac{\ln |\mathbf{W}_k|}{2} + \frac{D \ln 2}{2} + \frac{1}{2} \sum_{i=1}^{D} \psi\Big(\frac{v_k + 1 - i}{2}\Big) \\ &= \frac{1}{2} \Big[ (N_k + v_0 - v_k) \frac{d \ln \widetilde{\Lambda}_k}{d v_k} - \mathrm{Tr}[\mathbf{F}_k \mathbf{W}_k] + D \Big] = 0 \end{split}$$
Where in the last step, we have used the definition of $\ln \widetilde{\Lambda}_k$ , i.e., Eq (10.65). Then we calculate the derivative of $\mathscr L$ with respect to $\mathbf W_k$ and set it to zero:
$$\frac{\partial \mathcal{L}}{\partial \mathbf{W}_k} = \frac{1}{2} (N_k + v_0 - v_k) \mathbf{W}_k^{-1} - \frac{v_k}{2} \mathbf{F}_k + \frac{v_k}{2} \mathbf{W}_k^{-1}
= \frac{1}{2} (N_k + v_0 - v_k) \mathbf{W}_k^{-1} - \frac{v_k}{2} (\mathbf{F}_k - \mathbf{W}_k^{-1}) = 0$$
Inspecting these two derivatives, we find that if the following two conditions
$$N_k + v_0 - v_k = 0 \quad \text{and} \quad \mathbf{F}_k = \mathbf{W}_k^{-1}$$
are satisfied, the derivatives of $\mathcal{L}$ with respect to $v_k$ and $\mathbf{W}_k$ will all be zero. Rearranging the first condition, we obtain Eq (10.63). Next we prove that the second condition is exactly Eq (10.62), by simplifying $\mathbf{F}_k$ .
$$\mathbf{F}_k = N_k \mathbf{S}_k + N_k \mathbf{A}_k + \beta_0 \mathbf{B}_k + \mathbf{W}_0^{-1}$$
$$= \mathbf{W}_0^{-1} + N_k \mathbf{S}_k + N_k \cdot (\bar{\mathbf{x}}_k - \mathbf{m}_k) (\bar{\mathbf{x}}_k - \mathbf{m}_k)^T + \beta_0 \cdot (\mathbf{m}_k - \mathbf{m}_0) (\mathbf{m}_k - \mathbf{m}_0)^T$$
Comparing this with Eq (10.62), we only need to prove:
$$N_k \cdot (\bar{\mathbf{x}_k} - \mathbf{m}_k)(\bar{\mathbf{x}_k} - \mathbf{m}_k)^T + \beta_0 \cdot (\mathbf{m}_k - \mathbf{m}_0)(\mathbf{m}_k - \mathbf{m}_0)^T = \frac{\beta_0 N_k}{\beta_0 + N_k}(\bar{\mathbf{x}_k} - \mathbf{m}_0)(\bar{\mathbf{x}_k} - \mathbf{m}_0)^T$$
Let's start from the left hand side.
$$(\text{left}) = N_k \bar{\mathbf{x}}_k \bar{\mathbf{x}}_k^T - 2N_k \bar{\mathbf{x}}_k \mathbf{m}_k^T + N_k \mathbf{m}_k \mathbf{m}_k^T + \beta_0 \mathbf{m}_k \mathbf{m}_k^T - 2\beta_0 \mathbf{m}_k \mathbf{m}_0^T + \beta_0 \mathbf{m}_0 \mathbf{m}_0^T$$
$$= N_k \bar{\mathbf{x}}_k \bar{\mathbf{x}}_k^T - 2N_k \bar{\mathbf{x}}_k (\frac{\beta_0 \mathbf{m}_0 + N_k \bar{\mathbf{x}}_k}{\beta_0 + N_k})^T + (N_k + \beta_0) (\frac{\beta_0 \mathbf{m}_0 + N_k \bar{\mathbf{x}}_k}{\beta_0 + N_k}) (\frac{\beta_0 \mathbf{m}_0 + N_k \bar{\mathbf{x}}_k}{\beta_0 + N_k})^T$$
$$-2\beta_0 (\frac{\beta_0 \mathbf{m}_0 + N_k \bar{\mathbf{x}}_k}{\beta_0 + N_k}) \mathbf{m}_0^T + \beta_0 \mathbf{m}_0 \mathbf{m}_0^T$$
Then we complete the square with respect to $\bar{\mathbf{x}_k}$ , and we will see the coefficients match with the right hand side. Here as an example, we calculate the coefficients ahead of the quadratic term $\bar{\mathbf{x}_k}\bar{\mathbf{x}_k}^T$ :
$$\begin{array}{ll} ({\rm quad}) & = & N_k - 2N_k \frac{N_k}{\beta_0 + N_k} + (\beta_0 + N_k)(\frac{N_k}{\beta_0 + N_k})^2 \\ & = & \frac{N_k(\beta_0 + N_k) - 2N_k^2 + N_k^2}{\beta_0 + N_k} \\ & = & \frac{\beta_0 N_k}{\beta_0 + N_k} \end{array}$$
It is similar for the linear and the constant terms, and we omit the details here. The update formulae for $\alpha_k$ and $r_{nk}$ still remain to be obtained. Noticing that only Eq (10.72), (10.73) and (10.76) depend on $\alpha_k$ , we now calculate the derivative of $\mathcal{L}$ with respect to $\alpha_k$ :
$$\begin{split} \frac{\partial \mathcal{L}}{\partial \alpha_k} &= \sum_{n=1}^N r_{nk} \frac{d \ln \widetilde{\pi}_k}{d \alpha_k} + (\alpha_0 - 1) \frac{d \ln \widetilde{\pi}_k}{d \alpha_k} - \left[ (\alpha_k - 1) \frac{d \ln \widetilde{\pi}_k}{d \alpha_k} + \ln \widetilde{\pi}_k + \frac{d \ln C(\alpha)}{d \alpha_k} \right] \\ &= (N_k + \alpha_0 - \alpha_k) \frac{d \ln \widetilde{\pi}_k}{d \alpha_k} - \ln \widetilde{\pi}_k - \frac{d \ln C(\alpha)}{d \alpha_k} \\ &= (N_k + \alpha_0 - \alpha_k) \left[ \phi^{'}(\alpha_k) - \phi^{'}(\widehat{\alpha}) \right] - \left[ \phi(\alpha_k) - \phi(\widehat{\alpha}) \right] - \frac{d \left[ \ln \Gamma(\widehat{\alpha}) - \ln \Gamma(\alpha_k) \right]}{d \alpha_k} \\ &= (N_k + \alpha_0 - \alpha_k) \left[ \phi^{'}(\alpha_k) - \phi^{'}(\widehat{\alpha}) \right] - \left[ \phi(\alpha_k) - \phi(\widehat{\alpha}) \right] - \left[ \phi(\widehat{\alpha}) - \phi(\alpha_k) \right] \\ &= (N_k + \alpha_0 - \alpha_k) \left[ \phi^{'}(\alpha_k) - \phi^{'}(\widehat{\alpha}) \right] = 0 \end{split}$$
Where we have used (B.25) and Eq (10.66). Therefore, we obtain Eq (10.58). Finally, we are required to derive an update formula for $r_{nk}$ . Note that $\bar{\mathbf{x}}_k$ , $\mathbf{S}_k$ and $N_k$ also contain $r_{nk}$ , so we conclude that Eq (10.71), (10.72) and (10.75) depend on $r_{nk}$ . Using the definition of $N_k$ , i.e., Eq (10.51), we can obtain:
$$\mathcal{L} \propto \frac{1}{2} \sum_{k,n} r_{nk} \left\{ \ln \widetilde{\Lambda}_k - D \beta_k^{-1} \right\} - \frac{1}{2} \sum_k N_k v_k \text{Tr}[(\mathbf{S}_k + \mathbf{A}_k) \mathbf{W}_k]$$
$$+ \sum_{k,n} r_{nk} \ln \widetilde{\pi}_k - \sum_{k,n} r_{nk} \ln r_{nk}$$
Note that the constraint $\sum_k r_{nk} = 1$ exists for $r_{nk}$ , so we cannot simply set the unconstrained derivative to zero; we must introduce Lagrange multipliers. Before doing so, let's simplify $\mathbf{S}_k + \mathbf{A}_k$ :
$$\begin{split} \mathbf{S}_{k} + \mathbf{A}_{k} &= \frac{1}{N_{k}} \sum_{n=1}^{N} r_{nk} (\mathbf{x}_{n} - \bar{\mathbf{x}}_{k}) (\mathbf{x}_{n} - \bar{\mathbf{x}}_{k})^{T} + (\bar{\mathbf{x}}_{k} - \mathbf{m}_{k}) (\bar{\mathbf{x}}_{k} - \mathbf{m}_{k})^{T} \\ &= \frac{1}{N_{k}} \sum_{n=1}^{N} \left[ r_{nk} \mathbf{x}_{n} \mathbf{x}_{n}^{T} - 2 r_{nk} \mathbf{x}_{n} \bar{\mathbf{x}}_{k}^{T} + r_{nk} \bar{\mathbf{x}}_{k} \bar{\mathbf{x}}_{k}^{T} \right] + \bar{\mathbf{x}}_{k} \bar{\mathbf{x}}_{k}^{T} - 2 \bar{\mathbf{x}}_{k} \mathbf{m}_{k}^{T} + \mathbf{m}_{k} \mathbf{m}_{k}^{T} \\ &= \frac{1}{N_{k}} \sum_{n=1}^{N} r_{nk} \mathbf{x}_{n} \mathbf{x}_{n}^{T} - 2 \bar{\mathbf{x}}_{k} \bar{\mathbf{x}}_{k}^{T} + \bar{\mathbf{x}}_{k} \bar{\mathbf{x}}_{k}^{T} + \bar{\mathbf{x}}_{k} \bar{\mathbf{x}}_{k}^{T} - 2 \bar{\mathbf{x}}_{k} \mathbf{m}_{k}^{T} + \mathbf{m}_{k} \mathbf{m}_{k}^{T} \\ &= \frac{1}{N_{k}} \sum_{n=1}^{N} r_{nk} \mathbf{x}_{n} \mathbf{x}_{n}^{T} - 2 \bar{\mathbf{x}}_{k} \mathbf{m}_{k}^{T} + \mathbf{m}_{k} \mathbf{m}_{k}^{T} \\ &= \frac{1}{N_{k}} \sum_{n=1}^{N} r_{nk} \left( \mathbf{x}_{n} \mathbf{x}_{n}^{T} - 2 \mathbf{x}_{n} \mathbf{m}_{k}^{T} + \mathbf{m}_{k} \mathbf{m}_{k}^{T} \right) \\ &= \frac{1}{N_{k}} \sum_{n=1}^{N} r_{nk} (\mathbf{x}_{n} - \mathbf{m}_{k}) (\mathbf{x}_{n} - \mathbf{m}_{k})^{T} \end{split}$$

where the second-to-last step uses $\sum_{n} r_{nk} \mathbf{x}_n = N_k \bar{\mathbf{x}}_k$ and $\sum_{n} r_{nk} = N_k$ .
Therefore, we obtain:
$$\begin{split} \mathcal{L} & \propto & \frac{1}{2} \sum_{k,n} r_{nk} \left\{ \ln \widetilde{\Lambda}_k - D \beta_k^{-1} \right\} + \sum_{k,n} r_{nk} \ln \widetilde{\pi}_k - \sum_{k,n} r_{nk} \ln r_{nk} \\ & - \frac{1}{2} \sum_{k} N_k v_k \mathrm{Tr}[(\mathbf{S}_k + \mathbf{A}_k) \mathbf{W}_k] \\ & = & \frac{1}{2} \sum_{k,n} r_{nk} \left\{ \ln \widetilde{\Lambda}_k - D \beta_k^{-1} \right\} + \sum_{k,n} r_{nk} \ln \widetilde{\pi}_k - \sum_{k,n} r_{nk} \ln r_{nk} \\ & - \frac{1}{2} \sum_{k=1}^K \sum_{n=1}^N v_k r_{nk} (\mathbf{x}_n - \mathbf{m}_k)^T \mathbf{W}_k (\mathbf{x}_n - \mathbf{m}_k) \end{split}$$
Introducing Lagrange multipliers $\lambda_n$ , we obtain:
$$\text{(Lagrange)} = \frac{1}{2} \sum_{k,n} r_{nk} \left\{ \ln \widetilde{\Lambda}_k - D \beta_k^{-1} \right\} + \sum_{k,n} r_{nk} \ln \widetilde{\pi}_k - \sum_{k,n} r_{nk} \ln r_{nk}$$
$$- \frac{1}{2} \sum_{k=1}^K \sum_{n=1}^N v_k r_{nk} (\mathbf{x}_n - \mathbf{m}_k)^T \mathbf{W}_k (\mathbf{x}_n - \mathbf{m}_k) + \sum_{n=1}^N \lambda_n (1 - \sum_k r_{nk})$$
Calculating the derivative with respect to $r_{nk}$ and setting it to zero, we can obtain:
$$\begin{split} \frac{\partial (\text{Lagrange})}{\partial r_{nk}} &= \frac{1}{2} \{\ln \widetilde{\Lambda}_k - D \beta_k^{-1}\} + \ln \widetilde{\pi}_k - [\ln r_{nk} + 1] \\ &\quad - \frac{1}{2} v_k (\mathbf{x}_n - \mathbf{m}_k)^T \mathbf{W}_k (\mathbf{x}_n - \mathbf{m}_k) - \lambda_n = 0 \end{split}$$
Moving $\ln r_{nk}$ to the right side and then exponentiating both sides, we obtain Eq (10.67), and the normalized $r_{nk}$ is given by Eq (10.49), (10.46), and (10.64)-(10.66).
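For reference, here is a minimal NumPy sketch of the re-estimation equations just derived (the variational M-step quantities of Eq (10.58)-(10.63) and the responsibilities of Eq (10.46), (10.49) and (10.64)-(10.67)); the function and variable names, the small numerical safeguard, and the data layout are my own illustrative choices, not part of the text.

```python
import numpy as np
from scipy.special import digamma

def m_step(X, r, alpha0, beta0, m0, W0_inv, nu0):
    """Update the variational factors given responsibilities r of shape (N, K)."""
    N, D = X.shape
    Nk = r.sum(axis=0) + 1e-10                           # Eq (10.51)
    xbar = (r.T @ X) / Nk[:, None]                       # weighted component means
    K = len(Nk)
    Sk = np.zeros((K, D, D))
    for k in range(K):
        diff = X - xbar[k]
        Sk[k] = (r[:, k, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(0) / Nk[k]
    alpha = alpha0 + Nk                                  # Eq (10.58)
    beta = beta0 + Nk                                    # Eq (10.60)
    m = (beta0 * m0 + Nk[:, None] * xbar) / beta[:, None]  # Eq (10.61)
    nu = nu0 + Nk                                        # Eq (10.63)
    W = np.zeros_like(Sk)
    for k in range(K):
        d = xbar[k] - m0
        Wk_inv = W0_inv + Nk[k] * Sk[k] \
                 + (beta0 * Nk[k] / (beta0 + Nk[k])) * np.outer(d, d)   # Eq (10.62)
        W[k] = np.linalg.inv(Wk_inv)
    return alpha, beta, m, W, nu

def e_step(X, alpha, beta, m, W, nu):
    """Recompute responsibilities r_{nk} from Eq (10.46), (10.49), (10.64)-(10.67)."""
    N, D = X.shape
    K = len(alpha)
    ln_pi = digamma(alpha) - digamma(alpha.sum())        # Eq (10.66)
    ln_Lam = np.array([digamma((nu[k] + 1 - np.arange(1, D + 1)) / 2).sum()
                       + D * np.log(2) + np.linalg.slogdet(W[k])[1]
                       for k in range(K)])               # Eq (10.65)
    ln_rho = np.zeros((N, K))
    for k in range(K):
        diff = X - m[k]
        quad = nu[k] * np.einsum('nd,de,ne->n', diff, W[k], diff)       # Eq (10.64)
        ln_rho[:, k] = ln_pi[k] + 0.5 * ln_Lam[k] - 0.5 * D / beta[k] \
                       - 0.5 * D * np.log(2 * np.pi) - 0.5 * quad       # Eq (10.46)
    ln_rho -= ln_rho.max(axis=1, keepdims=True)          # for numerical stability
    r = np.exp(ln_rho)
    return r / r.sum(axis=1, keepdims=True)              # Eq (10.49)
```

Alternating `e_step` and `m_step` from a random responsibility initialization reproduces the coordinate-ascent scheme of Section 10.2.1, which is exactly what the direct differentiation of $\mathcal{L}$ above recovers.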
| 16,606
|
10
|
10.19
|
medium
|
Derive the result $p(\widehat{\mathbf{x}}|\mathbf{X}) = \frac{1}{\widehat{\alpha}} \sum_{k=1}^{K} \alpha_k \operatorname{St}(\widehat{\mathbf{x}}|\mathbf{m}_k, \mathbf{L}_k, \nu_k + 1 - D)$ for the predictive distribution in the variational treatment of the Bayesian mixture of Gaussians model.
|
Let's start from the definition, i.e., Eq $p(\widehat{\mathbf{x}}|\mathbf{X}) = \sum_{\widehat{\mathbf{z}}} \iiint p(\widehat{\mathbf{x}}|\widehat{\mathbf{z}}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) p(\widehat{\mathbf{z}}|\boldsymbol{\pi}) p(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}|\mathbf{X}) \, \mathrm{d}\boldsymbol{\pi} \, \mathrm{d}\boldsymbol{\mu} \, \mathrm{d}\boldsymbol{\Lambda}$.
$$\begin{split} p(\widehat{\mathbf{x}}|\mathbf{X}) &= \sum_{\widehat{\mathbf{z}}} \int \int \int p(\widehat{\mathbf{x}}|\widehat{\mathbf{z}}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) p(\widehat{\mathbf{z}}|\boldsymbol{\pi}) p(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}|\mathbf{X}) d\boldsymbol{\pi} d\boldsymbol{\mu} d\boldsymbol{\Lambda} \\ &= \sum_{\widehat{\mathbf{z}}} \int \int \int \prod_{k=1}^K \mathcal{N}(\widehat{\mathbf{x}}|\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1})^{\widehat{\boldsymbol{z}}_k} \cdot \prod_{k=1}^K \pi_k^{\widehat{\boldsymbol{z}}_k} \cdot p(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}|\mathbf{X}) d\boldsymbol{\pi} d\boldsymbol{\mu} d\boldsymbol{\Lambda} \\ &\approx \sum_{\widehat{\mathbf{z}}} \int \int \int \prod_{k=1}^K \left[ \mathcal{N}(\widehat{\mathbf{x}}|\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1}) \cdot \boldsymbol{\pi}_k \right]^{\widehat{\boldsymbol{z}}_k} \cdot q(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) d\boldsymbol{\pi} d\boldsymbol{\mu} d\boldsymbol{\Lambda} \\ &= \sum_{k=1}^K \int \int \int \left[ \mathcal{N}(\widehat{\mathbf{x}}|\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1}) \cdot \boldsymbol{\pi}_k \right] \cdot q(\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) d\boldsymbol{\pi} d\boldsymbol{\mu} d\boldsymbol{\Lambda} \\ &= \sum_{k=1}^K \int \int \int \mathcal{N}(\widehat{\mathbf{x}}|\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1}) \cdot \boldsymbol{\pi}_k \cdot \left[ q(\boldsymbol{\pi}) \cdot \prod_{j=1}^K q(\boldsymbol{\mu}_j, \boldsymbol{\Lambda}_j) \right] d\boldsymbol{\pi} d\boldsymbol{\mu} d\boldsymbol{\Lambda} \end{split}$$
Where we have used the fact that $\widehat{\mathbf{z}}$ uses a 1-of-$K$ coding scheme. Recall that $\boldsymbol{\mu} = \{\boldsymbol{\mu}_k\}$ and $\boldsymbol{\Lambda} = \{\boldsymbol{\Lambda}_k\}$ ; the term inside the summation can be further simplified. Namely, for those indices $j \neq k$ , the integration with respect to $\boldsymbol{\mu}_j$ and $\boldsymbol{\Lambda}_j$ will equal 1, i.e.,
$$\begin{split} p(\widehat{\mathbf{x}}|\mathbf{X}) &= \sum_{k=1}^K \int \int \int \mathcal{N}(\widehat{\mathbf{x}}|\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1}) \cdot \boldsymbol{\pi}_k \cdot \left[ q(\boldsymbol{\pi}) \cdot \prod_{j=1}^K q(\boldsymbol{\mu}_j, \boldsymbol{\Lambda}_j) \right] d\boldsymbol{\pi} d\boldsymbol{\mu} d\boldsymbol{\Lambda} \\ &= \sum_{k=1}^K \int \int \int \mathcal{N}(\widehat{\mathbf{x}}|\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1}) \cdot \boldsymbol{\pi}_k \cdot q(\boldsymbol{\pi}) \cdot q(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k) d\boldsymbol{\pi} d\boldsymbol{\mu}_k d\boldsymbol{\Lambda}_k \\ &= \sum_{k=1}^K \int \int \int \mathcal{N}(\widehat{\mathbf{x}}|\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1}) \cdot \boldsymbol{\pi}_k \cdot \mathrm{Dir}(\boldsymbol{\pi}|\boldsymbol{\alpha}) \cdot \mathcal{N}(\boldsymbol{\mu}_k|\mathbf{m}_k, (\boldsymbol{\beta}_k \boldsymbol{\Lambda}_k)^{-1}) \mathcal{W}(\boldsymbol{\Lambda}_k|\mathbf{W}_k, \boldsymbol{v}_k) d\boldsymbol{\pi} d\boldsymbol{\mu}_k d\boldsymbol{\Lambda}_k \end{split}$$
We notice that in the expression above, only $\pi_k \cdot \text{Dir}(\boldsymbol{\pi}|\boldsymbol{\alpha})$ contains $\pi_k$ , and we know that the expectation of $\pi_k$ with respect to $\text{Dir}(\boldsymbol{\pi}|\boldsymbol{\alpha})$ is $\alpha_k/\widehat{\alpha}$ .
Therefore, we can obtain:
$$\begin{split} p(\widehat{\mathbf{x}}|\mathbf{X}) &= \sum_{k=1}^K \int \int \frac{\alpha_k}{\widehat{\alpha}} \, \mathcal{N}(\widehat{\mathbf{x}}|\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1}) \cdot \mathcal{N}(\boldsymbol{\mu}_k|\mathbf{m}_k, (\beta_k \boldsymbol{\Lambda}_k)^{-1}) \cdot \mathcal{W}(\boldsymbol{\Lambda}_k|\mathbf{W}_k, v_k) \, d\,\boldsymbol{\mu}_k \, d\,\boldsymbol{\Lambda}_k \\ &= \sum_{k=1}^K \left\{ \int \left[ \int \mathcal{N}(\widehat{\mathbf{x}}|\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1}) \cdot \mathcal{N}(\boldsymbol{\mu}_k|\mathbf{m}_k, (\beta_k \boldsymbol{\Lambda}_k)^{-1}) \, d\,\boldsymbol{\mu}_k \right] \cdot \frac{\alpha_k}{\widehat{\alpha}} \cdot \mathcal{W}(\boldsymbol{\Lambda}_k|\mathbf{W}_k, v_k) \, d\,\boldsymbol{\Lambda}_k \right\} \\ &= \sum_{k=1}^K \left\{ \int \mathcal{N}(\widehat{\mathbf{x}}|\mathbf{m}_k, (1+\beta_k^{-1})\boldsymbol{\Lambda}_k^{-1}) \cdot \frac{\alpha_k}{\widehat{\alpha}} \cdot \mathcal{W}(\boldsymbol{\Lambda}_k|\mathbf{W}_k, v_k) \, d\,\boldsymbol{\Lambda}_k \right\} \\ &= \sum_{k=1}^K \frac{\alpha_k}{\widehat{\alpha}} \int \mathcal{N}(\widehat{\mathbf{x}}|\mathbf{m}_k, (1+\beta_k^{-1})\boldsymbol{\Lambda}_k^{-1}) \cdot \mathcal{W}(\boldsymbol{\Lambda}_k|\mathbf{W}_k, v_k) \, d\,\boldsymbol{\Lambda}_k \end{split}$$
Notice that the Wishart distribution is a conjugate prior for the Gaussian distribution with known mean and unknown precision. We conclude that the product $\mathcal{N}(\widehat{\mathbf{x}}|\mathbf{m}_k, (1+\beta_k^{-1})\Lambda_k^{-1}) \cdot \mathcal{W}(\Lambda_k|\mathbf{W}_k, v_k)$ is, up to normalization, again a Wishart distribution in $\Lambda_k$ , which can be verified by focusing on the dependence on $\Lambda_k$ :
$$\begin{array}{ll} \text{(product)} & \propto & |\boldsymbol{\Lambda}_k|^{1/2 + (v_k - D - 1)/2} \cdot \exp \left\{ -\frac{\text{Tr}[\boldsymbol{\Lambda}_k \cdot (\widehat{\mathbf{x}} - \mathbf{m}_k)(\widehat{\mathbf{x}} - \mathbf{m}_k)^T]}{2(1 + \boldsymbol{\beta}_k^{-1})} - \frac{1}{2} \text{Tr}[\boldsymbol{\Lambda}_k \mathbf{W}_k^{-1}] \right\} \\ & \propto & \mathcal{W}(\boldsymbol{\Lambda}_k | \mathbf{W}^{'}, \boldsymbol{v}^{'}) \end{array}$$
Where we have defined:
$$v^{'}=v_k+1$$
and
$$[\mathbf{W}']^{-1} = \frac{(\widehat{\mathbf{x}} - \mathbf{m}_k)(\widehat{\mathbf{x}} - \mathbf{m}_k)^T}{1 + \beta_k^{-1}} + \mathbf{W}_k^{-1}$$
Using the normalization constant of Wishart distribution, i.e., (B.79), we can obtain:
$$\begin{split} p(\widehat{\mathbf{x}}|\mathbf{X}) &= \sum_{k=1}^K \frac{\alpha_k}{\widehat{\alpha}} \int \mathcal{N}(\widehat{\mathbf{x}}|\mathbf{m}_k, (1+\beta_k^{-1})\boldsymbol{\Lambda}_k^{-1}) \cdot \mathcal{W}(\boldsymbol{\Lambda}_k|\mathbf{W}_k, v_k) d\boldsymbol{\Lambda}_k \\ &\propto \sum_{k=1}^K \frac{\alpha_k}{\widehat{\alpha}} \cdot \frac{1}{B(\mathbf{W}^{'}, v^{'})} \\ &\propto \left| \frac{(\widehat{\mathbf{x}} - \mathbf{m}_k)(\widehat{\mathbf{x}} - \mathbf{m}_k)^T}{1+\beta_k^{-1}} + \mathbf{W}_k^{-1} \right|^{-(v_k+1)/2} \\ &\propto \left| \frac{1}{1+\beta_k^{-1}} \mathbf{W}_k(\widehat{\mathbf{x}} - \mathbf{m}_k)(\widehat{\mathbf{x}} - \mathbf{m}_k)^T + \mathbf{I} \right|^{-(v_k+1)/2} \end{split}$$
Here the last two lines display only the $\widehat{\mathbf{x}}$-dependence of the $k$-th term, where we have used $1/B(\mathbf{W}^{'}, v^{'}) \propto |\mathbf{W}^{'}|^{v^{'}/2} = |[\mathbf{W}^{'}]^{-1}|^{-(v_k+1)/2}$ from (B.79). Next, we use the identity:
$$|\mathbf{I} + \mathbf{a}\mathbf{b}^T| = 1 + \mathbf{a}^T\mathbf{b}$$
The expression above can be further simplified to:
$$\begin{split} p(\widehat{\mathbf{x}}|\mathbf{X}) & \propto & \left| \frac{1}{1 + \beta_k^{-1}} \mathbf{W}_k (\widehat{\mathbf{x}} - \mathbf{m}_k) (\widehat{\mathbf{x}} - \mathbf{m}_k)^T + \mathbf{I} \right|^{-(v_k + 1)/2} \\ & = & \left[ 1 + \frac{1}{1 + \beta_k^{-1}} (\widehat{\mathbf{x}} - \mathbf{m}_k)^T \mathbf{W}_k (\widehat{\mathbf{x}} - \mathbf{m}_k) \right]^{-(v_k + 1)/2} \end{split}$$
By comparing it with (B.68), we notice that it is a Student's t distribution, whose parameters are defined by Eq (10.81)-(10.82).
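As a quick numerical sanity check of the determinant identity and of the final step above, the following sketch (purely illustrative; the matrices and parameter values are invented) confirms that the two forms of the $\widehat{\mathbf{x}}$-dependent factor coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4

# |I + a b^T| = 1 + a^T b
a, b = rng.normal(size=D), rng.normal(size=D)
print(np.linalg.det(np.eye(D) + np.outer(a, b)), 1.0 + a @ b)

# the same identity applied to the predictive-density exponent,
# with hypothetical W_k, m_k, beta_k, nu_k and a test point xhat
A = rng.normal(size=(D, D))
Wk = A @ A.T + D * np.eye(D)
mk, betak, nuk, xhat = rng.normal(size=D), 3.0, 9, rng.normal(size=D)
d = xhat - mk
form1 = np.linalg.det(np.outer(d, d) / (1 + 1 / betak)
                      + np.linalg.inv(Wk)) ** (-(nuk + 1) / 2)
form2 = (np.linalg.det(np.linalg.inv(Wk))
         * (1 + d @ Wk @ d / (1 + 1 / betak))) ** (-(nuk + 1) / 2)
print(form1, form2)   # the two values agree up to floating-point error
```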
| 7,684
|
10
|
10.2
|
easy
|
Use the properties $\mathbb{E}[z_1] = m_1$ and $\mathbb{E}[z_2] = m_2$ to solve the simultaneous equations $m_1 = \mu_1 - \Lambda_{11}^{-1} \Lambda_{12} \left( \mathbb{E}[z_2] - \mu_2 \right).$ and $m_2 = \mu_2 - \Lambda_{22}^{-1} \Lambda_{21} \left( \mathbb{E}[z_1] - \mu_1 \right).$, and hence show that, provided the original distribution $p(\mathbf{z})$ is nonsingular, the unique solution for the means of the factors in the approximation distribution is given by $\mathbb{E}[z_1] = \mu_1$ and $\mathbb{E}[z_2] = \mu_2$ .
|
To be more clear, we are required to solve:
$$\begin{cases} m_1 = \mu_1 - \Lambda_{11}^{-1} \Lambda_{12} (m_2 - \mu_2) \\ m_2 = \mu_2 - \Lambda_{22}^{-1} \Lambda_{21} (m_1 - \mu_1) \end{cases}$$
To obtain the equation above, we need to substitute $\mathbb{E}[z_i] = m_i$ , where i = 1, 2, into Eq $m_1 = \mu_1 - \Lambda_{11}^{-1} \Lambda_{12} \left( \mathbb{E}[z_2] - \mu_2 \right).$ and Eq $m_2 = \mu_2 - \Lambda_{22}^{-1} \Lambda_{21} \left( \mathbb{E}[z_1] - \mu_1 \right).$ Here the unknown parameters are $m_1$ and $m_2$ . It is trivial to notice that $m_i = \mu_i$ is a solution of the equations above.
Let's solve this system from another perspective. Firstly, if either (or both) of $\Lambda_{11}^{-1}\Lambda_{12}$ and $\Lambda_{22}^{-1}\Lambda_{21}$ equals 0, we can obtain $m_i = \mu_i$ directly from the two equations above. When neither of them equals 0, we substitute $m_1$ , i.e., the first
line, into the second line:
$$\begin{split} m_2 &= \mu_2 - \Lambda_{22}^{-1} \Lambda_{21} \left( m_1 - \mu_1 \right) \\ &= \mu_2 - \Lambda_{22}^{-1} \Lambda_{21} \left[ \mu_1 - \Lambda_{11}^{-1} \Lambda_{12} \left( m_2 - \mu_2 \right) - \mu_1 \right] \\ &= \mu_2 - \Lambda_{22}^{-1} \Lambda_{21} \mu_1 + \Lambda_{22}^{-1} \Lambda_{21} \Lambda_{11}^{-1} \Lambda_{12} \left( m_2 - \mu_2 \right) + \Lambda_{22}^{-1} \Lambda_{21} \mu_1 \\ &= \left( 1 - \Lambda_{22}^{-1} \Lambda_{21} \Lambda_{11}^{-1} \Lambda_{12} \right) \mu_2 + \Lambda_{22}^{-1} \Lambda_{21} \Lambda_{11}^{-1} \Lambda_{12} \ m_2 \end{split}$$
We rearrange the expression above, yielding:
$$(1 - \Lambda_{22}^{-1} \Lambda_{21} \Lambda_{11}^{-1} \Lambda_{12}) (m_2 - \mu_2) = 0$$
The first factor on the left-hand side will equal 0 only when the distribution is singular, i.e., when the determinant of the precision matrix $\Lambda$ (i.e., $\Lambda_{11}\Lambda_{22} - \Lambda_{12}\Lambda_{21}$ ) is 0. Therefore, if the distribution is nonsingular, we must have $m_2 = \mu_2$ . Substituting it back into the first line, we obtain $m_1 = \mu_1$ .
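A tiny numerical illustration of this fixed point (the precision matrix and means below are arbitrary choices of mine): iterating the two coupled update equations converges to $(m_1, m_2) = (\mu_1, \mu_2)$, as expected for a nonsingular distribution.

```python
import numpy as np

# hypothetical nonsingular 2x2 precision matrix and mean components
Lam = np.array([[2.0, 0.8],
                [0.8, 1.5]])     # Lam11*Lam22 - Lam12*Lam21 != 0
mu1, mu2 = 1.0, -2.0

m1, m2 = 10.0, 10.0              # arbitrary initialization
for _ in range(100):
    m1 = mu1 - Lam[0, 1] / Lam[0, 0] * (m2 - mu2)   # first update equation
    m2 = mu2 - Lam[1, 0] / Lam[1, 1] * (m1 - mu1)   # second update equation

print(m1, m2)   # converges to 1.0, -2.0, i.e. the unique solution m_i = mu_i
```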
| 2,036
|
10
|
10.20
|
medium
|
- 10.20 (\*\*) This exercise explores the variational Bayes solution for the mixture of Gaussians model when the size N of the data set is large and shows that it reduces (as we would expect) to the maximum likelihood solution based on EM derived in Chapter 9. Note that results from Appendix B may be used to help answer this exercise. First show that the posterior distribution $q^*(\Lambda_k)$ of the precisions becomes sharply peaked around the maximum likelihood solution. Do the same for the posterior distribution of the means $q^*(\mu_k|\Lambda_k)$ . Next consider the posterior distribution $q^*(\pi)$ for the mixing coefficients and show that this too becomes sharply peaked around the maximum likelihood solution. Similarly, show that the responsibilities become equal to the corresponding maximum likelihood values for large N, by making use of the following asymptotic result for the digamma function for large x
$$\psi(x) = \ln x + O(1/x). \tag{10.241}$$
Finally, by making use of $p(\widehat{\mathbf{x}}|\mathbf{X}) = \sum_{k=1}^{K} \iiint \pi_k \mathcal{N}\left(\widehat{\mathbf{x}}|\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1}\right) q(\boldsymbol{\pi}) q(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k) \, \mathrm{d}\boldsymbol{\pi} \, \mathrm{d}\boldsymbol{\mu}_k \, \mathrm{d}\boldsymbol{\Lambda}_k \quad$, show that for large N, the predictive distribution becomes a mixture of Gaussians.
|
Let's begin by dealing with $q^*(\Lambda_k)$ . When $N \to +\infty$ , we know that $N_k$ also approaches $+\infty$ based on Eq (10.51). Therefore, we know that $[\mathbf{W}_k]^{-1} \to N_k \mathbf{S}_k$ and $v_k \to N_k$ . Using (B.80), we conclude that $\mathbb{E}[\Lambda_k] = v_k \mathbf{W}_k \to \mathbf{S}_k^{-1}$ . If we can now prove that the entropy $H[\Lambda_k]$ is zero, we can conclude that the distribution collapses to a Dirac function, i.e., the distribution is sharply peaked around $\mathbf{S}_k^{-1}$ , which is identical to the EM solution for the Gaussian mixture given by Eq $\Sigma_k^{\text{new}} = \frac{1}{N_k} \sum_{n=1}^N \gamma(z_{nk}) \left( \mathbf{x}_n - \boldsymbol{\mu}_k^{\text{new}} \right) \left( \mathbf{x}_n - \boldsymbol{\mu}_k^{\text{new}} \right)^{\text{T}}$. Therefore, let's now start from $\ln B(\mathbf{W}_k, v_k)$ , i.e., (B.79).
$$\begin{split} \ln B(\mathbf{W}_k, v_k) &= -\frac{v_k}{2} \ln |\mathbf{W}_k| - \frac{v_k D}{2} \ln 2 - \frac{D(D-1)}{4} \ln \pi - \sum_{i=1}^D \ln \Gamma(\frac{v_k + 1 - i}{2}) \\ &\rightarrow \frac{N_k}{2} \ln |N_k \mathbf{S}_k| - \frac{N_k D}{2} \ln 2 - \sum_{i=1}^D \ln \Gamma(\frac{N_k + 1 - i}{2}) \\ &= \frac{N_k}{2} (D \ln N_k + \ln |\mathbf{S}_k| - D \ln 2) - \sum_{i=1}^D \ln \Gamma(\frac{N_k - 1 - i}{2} + 1) \\ &\approx \frac{N_k}{2} (D \ln \frac{N_k}{2} + \ln |\mathbf{S}_k|) - \sum_{i=1}^D \left[ \frac{1}{2} \ln 2\pi - \frac{N_k - 1 - i}{2} + (\frac{N_k - 1 - i}{2} + \frac{1}{2}) \ln \frac{N_k - 1 - i}{2} \right] \\ &\approx \frac{N_k}{2} (D \ln \frac{N_k}{2} + \ln |\mathbf{S}_k|) - \sum_{i=1}^D \left[ -\frac{N_k}{2} + \frac{N_k}{2} \ln \frac{N_k}{2} \right] \\ &= \frac{N_k}{2} (D \ln \frac{N_k}{2} + \ln |\mathbf{S}_k|) + \frac{N_k D}{2} - \frac{N_k D}{2} \ln \frac{N_k}{2} \\ &= \frac{N_k}{2} (D + \ln |\mathbf{S}_k|) \end{split}$$
Where we have used Eq $\Gamma(x+1) \simeq (2\pi)^{1/2} e^{-x} x^{x+1/2}$ to approximate the logarithm of Gamma
function. Next we deal with $\mathbb{E}[\ln \Lambda_k]$ based on (B.81):
$$\begin{split} \mathbb{E}[\ln \Lambda_k] &= \sum_{i=1}^D \phi(\frac{v_k + 1 - i}{2}) + D \ln 2 + \ln |\mathbf{W}_k| \\ &\rightarrow \sum_{i=1}^D \ln(\frac{N_k + 1 - i}{2}) + D \ln 2 - \ln |N_k \mathbf{S}_k| \\ &\approx \sum_{i=1}^D \ln \frac{N_k}{2} + D \ln 2 - D \ln N_k - \ln |\mathbf{S}_k| \\ &= D \ln \frac{N_k}{2} + D \ln 2 - D \ln N_k - \ln |\mathbf{S}_k| \\ &= -\ln |\mathbf{S}_k| \end{split}$$
Where we have used Eq $\psi(x) = \ln x + O(1/x).$ to approximate the $\phi(\frac{v_k+1-i}{2})$ . Now we are ready to deal with the entropy $H[q(\Lambda_k)]$ :
$$\begin{split} \mathbf{H}[q(\mathbf{\Lambda}_k)] &= -\ln B(\mathbf{W}_k, v_k) - \frac{v_k - D - 1}{2} \mathbb{E}[\ln \Lambda_k] + \frac{v_k D}{2} \\ &\rightarrow -\frac{N_k}{2} (D + \ln |\mathbf{S}_k|) + \frac{N_k}{2} \ln |\mathbf{S}_k| + \frac{N_k D}{2} = 0 \end{split}$$
Therefore, we can conclude that the distribution $q^*(\Lambda_k)$ collapses to a Dirac function at $\mathbf{S}_k^{-1}$ . In other words, when $N \to +\infty$ , $\Lambda_k$ takes only the single value $\mathbf{S}_k^{-1}$ .
Next, we deal with $q^*(\boldsymbol{\mu}_k|\boldsymbol{\Lambda}_k)$ . According to Eq $\beta_k = \beta_0 + N_k$, when $N\to +\infty$ , we conclude that $\beta_k\to N_k$ , and thus, $\mathbf{m}_k\to \bar{\mathbf{x}}_k$ based on Eq $\mathbf{m}_k = \frac{1}{\beta_k} \left( \beta_0 \mathbf{m}_0 + N_k \overline{\mathbf{x}}_k \right)$. Since we know $q^*(\boldsymbol{\mu}_k|\boldsymbol{\Lambda}_k)=\mathcal{N}(\boldsymbol{\mu}_k|\mathbf{m}_k,(\beta_k\boldsymbol{\Lambda}_k)^{-1})$ and $\beta_k\boldsymbol{\Lambda}_k\to N_k\mathbf{S}_k^{-1}$ is large, we conclude that when $N\to\infty$ , $\boldsymbol{\mu}_k$ also achieves only one value $\bar{\mathbf{x}}_k$ , which is identical to the EM of Gaussian Mixture, i.e., Eq $\boldsymbol{\mu}_{k}^{\text{new}} = \frac{1}{N_{k}} \sum_{n=1}^{N} \gamma(z_{nk}) \mathbf{x}_{n}$.
Finally, we consider $q^*(\boldsymbol{\pi})$ given by Eq $q^{}(\boldsymbol{\pi}) = \operatorname{Dir}(\boldsymbol{\pi}|\boldsymbol{\alpha})$. Since we know $\alpha_k \to N_k$ based on Eq $\alpha_k = \alpha_0 + N_k.$, we see that $\mathbb{E}[\pi_k] = \alpha_k/\widehat{\alpha} \to \frac{N_k}{N}$ and
$$\operatorname{var}[\pi_k] = \frac{\alpha_k(\widehat{\alpha} - \alpha_k)}{\widehat{\alpha}^2(\widehat{\alpha} + 1)} \le \frac{\widehat{\alpha} \cdot \widehat{\alpha}}{\widehat{\alpha}^3} = \frac{1}{\widehat{\alpha}} \to 0$$
We can also conclude that $\pi_k$ takes only the single value $\frac{N_k}{N}$ , which is identical to the EM solution for the Gaussian mixture, i.e., Eq $\pi_k^{\text{new}} = \frac{N_k}{N}$. Now it is trivial to see that the predictive distribution will reduce to a mixture of Gaussians using Eq (10.80). Because $\boldsymbol{\pi}$ , $\boldsymbol{\mu}_k$ and $\boldsymbol{\Lambda}_k$ all reduce to Dirac functions, the integration is easy to perform.
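The digamma asymptotic $\psi(x) = \ln x + O(1/x)$ used above is easy to check numerically (illustrative snippet):

```python
import numpy as np
from scipy.special import digamma

for x in [10.0, 100.0, 1000.0, 10000.0]:
    # the gap shrinks like O(1/x) (approximately -1/(2x)),
    # consistent with psi(x) = ln x + O(1/x)
    print(x, digamma(x), np.log(x), digamma(x) - np.log(x))
```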
| 4,988
|
10
|
10.21
|
easy
|
Show that the number of equivalent parameter settings due to interchange symmetries in a mixture model with K components is K!.
|
This can be verified directly. The total number of labellings equals the number of ways to assign K labels to K objects. For the first label we have K choices, K-1 choices for the second label, and so on. Therefore, the total number is K!.
| 222
|
10
|
10.22
|
medium
|
- 10.22 (\*\*) We have seen that each mode of the posterior distribution in a Gaussian mixture model is a member of a family of K! equivalent modes. Suppose that the result of running the variational inference algorithm is an approximate posterior distribution q that is localized in the neighbourhood of one of the modes. We can then approximate the full posterior distribution as a mixture of K! such q distributions, once centred on each mode and having equal mixing coefficients. Show that if we assume negligible overlap between the components of the q mixture, the resulting lower bound differs from that for a single component q distribution through the addition of an extra term $\ln K!$ .
|
Let's explain this problem in detail. Suppose that we have a multimodal posterior distribution $p(\mathbf{Z}|\mathbf{X})$ which we are required to approximate; it has K components, and each of the modes is denoted as $\{\mu_1, \mu_2, ..., \mu_K\}$ . We use variational inference, i.e., Eq $\mathcal{L}(q) = \int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z})}{q(\mathbf{Z})} \right\} d\mathbf{Z}$, to minimize the KL divergence $\mathrm{KL}(q||p)$ , and obtain an approximate distribution $q_s(\mathbf{Z})$ and a corresponding lower bound $L(q_s)$ .
According to the problem description, this approximate distribution $q_s(\mathbf{Z})$ will be a single-mode Gaussian located at one of the modes of $p(\mathbf{Z}|\mathbf{X})$ , i.e., $q_s(\mathbf{Z}) = \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_s, \boldsymbol{\Sigma}_s)$ , where $s \in \{1, 2, ..., K\}$ . Now, we replicate this $q_s$ K! times in total, and each copy is centred on one of the modes.
Now we can write down the mixing distribution made up of K! Gaussian distribution:
$$q_m(\mathbf{Z}) = \frac{1}{K!} \sum_{m=1}^{K!} \mathcal{N}(\mathbf{Z} | \boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s)$$
Where C(m) represents the mode of the m-th component. $C(m) \in \{1, 2, ..., K\}$ . What the problem wants us to prove is:
$$L(q_s) + \ln K! \approx L(q_m)$$
In other words, the lower bound using $q_m$ to approximate, i.e., $L(q_m)$ , is $\ln K!$ larger than using $q_s$ , i.e., $L(q_s)$ . Based on Eq $\mathcal{L}(q) = \int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z})}{q(\mathbf{Z})} \right\} d\mathbf{Z}$, let's equivalently deal with the KL divergence. According to Eq $KL(q||p) = -\int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{Z}|\mathbf{X})}{q(\mathbf{Z})} \right\} d\mathbf{Z}.$, we can obtain:
$$\begin{aligned} \operatorname{KL}(q_{m}||p) &= -\int q_{m}(\mathbf{Z}) \ln \frac{p(\mathbf{Z}|\mathbf{X})}{q_{m}(\mathbf{Z})} d\mathbf{Z} \\ &= -\int q_{m}(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} + \int q_{m}(\mathbf{Z}) \ln q_{m}(\mathbf{Z}) d\mathbf{Z} \\ &= -\int q_{m}(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} + \int q_{m}(\mathbf{Z}) \ln \left\{ \frac{1}{K!} \sum_{m=1}^{K!} \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_{s}) \right\} d\mathbf{Z} \\ &= -\int q_{m}(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} + \int q_{m}(\mathbf{Z}) \ln \left\{ \sum_{m=1}^{K!} \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_{s}) \right\} d\mathbf{Z} + \ln \frac{1}{K!} \\ &= -\ln K! - \int q_{m}(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} \\ &+ \frac{1}{K!} \int \sum_{m=1}^{K!} \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_{s}) \ln \left\{ \sum_{m=1}^{K!} \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_{s}) \right\} d\mathbf{Z} \end{aligned}$$
In order to further simplify the KL divergence, here we write down two useful equations. First, we use the "negligible overlap" property. To be more specific, according to the assumption that the overlap between the components is negligible, we can obtain:
$$\int \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s) \ln \left\{ \sum_{m=1}^{K!} \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s) \right\} d\mathbf{Z} \approx \int \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s) \ln \left\{ \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s) \right\} d\mathbf{Z}$$
The second equation is that for any $m_1, m_2 \in \{1, 2, ..., K\}$ , we have:
$$\int q_s \ln q_s d\mathbf{Z} = \int \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m_1)}, \boldsymbol{\Sigma}_s) \ln \left\{ \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m_1)}, \boldsymbol{\Sigma}_s) \right\} d\mathbf{Z}$$
$$= \int \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m_2)}, \boldsymbol{\Sigma}_s) \ln \left\{ \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m_2)}, \boldsymbol{\Sigma}_s) \right\} d\mathbf{Z}$$
Therefore, now we can obtain:
$$\begin{split} \operatorname{KL}(q_m||p) &= -\ln K! - \int q_m(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} \\ &+ \frac{1}{K!} \int \sum_{m=1}^{K!} \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s) \ln \left\{ \sum_{m=1}^{K!} \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s) \right\} d\mathbf{Z} \\ &\approx -\ln K! - \int q_m(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} \\ &+ \frac{1}{K!} \int \sum_{m=1}^{K!} \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s) \ln \left\{ \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s) \right\} d\mathbf{Z} \\ &= -\ln K! - \int q_m(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} \\ &+ \int \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s) \ln \left\{ \mathcal{N}(\mathbf{Z}|\boldsymbol{\mu}_{C(m)}, \boldsymbol{\Sigma}_s) \right\} d\mathbf{Z} \quad (\forall m \in \{1, 2, ..., K\}) \\ &= -\ln K! - \int q_m(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} + \int q_s(\mathbf{Z}) \ln q_s(\mathbf{Z}) d\mathbf{Z} \\ &= -\ln K! - \int q_s(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} + \int q_s(\mathbf{Z}) \ln q_s(\mathbf{Z}) d\mathbf{Z} \\ &\approx -\ln K! - \int q_s(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} + \int q_s(\mathbf{Z}) \ln q_s(\mathbf{Z}) d\mathbf{Z} \\ &= -\ln K! - \int q_s(\mathbf{Z}) \ln \frac{p(\mathbf{Z}|\mathbf{X})}{q_s(\mathbf{Z})} d\mathbf{Z} = -\ln K! + \operatorname{KL}(q_s||p) \end{split}$$
To obtain the desired result, we have used the approximation $\int q_m(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z} \approx \int q_s(\mathbf{Z}) \ln p(\mathbf{Z}|\mathbf{X}) d\mathbf{Z}$, which relies on the symmetry of the equivalent modes; note that this approximation is rough. Finally, since $\ln p(\mathbf{X}) = \mathcal{L}(q) + \mathrm{KL}(q||p)$, the relation $\mathrm{KL}(q_m||p) \approx \mathrm{KL}(q_s||p) - \ln K!$ is equivalent to $L(q_m) \approx L(q_s) + \ln K!$ , as required.
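The entropy part of this argument can also be illustrated numerically: for an equally weighted mixture of $K!$ well-separated copies of the same Gaussian, $-\int q_m \ln q_m \, d\mathbf{Z}$ exceeds $-\int q_s \ln q_s \, d\mathbf{Z}$ by approximately $\ln K!$. The sketch below uses made-up means and covariance and a Monte Carlo estimate of the mixture entropy.

```python
import numpy as np
from math import factorial
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
K = 3
M = factorial(K)                       # K! components
D = 2
cov = 0.05 * np.eye(D)
# widely separated centres so the components have negligible overlap
centres = 50.0 * np.arange(M)[:, None] * np.ones(D)

# sample from the equal-weight mixture q_m
n_samp = 50_000
comp = rng.integers(M, size=n_samp)
Z = centres[comp] + rng.multivariate_normal(np.zeros(D), cov, size=n_samp)

# Monte Carlo entropy of q_m: -E[ln q_m(Z)] with Z ~ q_m
log_comp = np.stack([multivariate_normal(centres[m], cov).logpdf(Z) for m in range(M)])
H_mix = -np.mean(logsumexp(log_comp, axis=0) - np.log(M))

H_single = multivariate_normal(np.zeros(D), cov).entropy()   # entropy of one copy
print(H_mix - H_single, np.log(M))    # the two values should be close
```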
| 5,702
|
10
|
10.23
|
medium
|
- 10.23 (\*\*) Consider a variational Gaussian mixture model in which there is no prior distribution over mixing coefficients $\{\pi_k\}$ . Instead, the mixing coefficients are treated as parameters, whose values are to be found by maximizing the variational lower bound on the log marginal likelihood. Show that maximizing this lower bound with respect to the mixing coefficients, using a Lagrange multiplier to enforce the constraint that the mixing coefficients sum to one, leads to the re-estimation result $\pi_k = \frac{1}{N} \sum_{n=1}^{N} r_{nk}$. Note that there is no need to consider all of the terms in the lower bound but only the dependence of the bound on the $\{\pi_k\}$ .
|
Let's go back to Eq $-\mathbb{E}[\ln q(\mathbf{Z})] - \mathbb{E}[\ln q(\boldsymbol{\pi})] - \mathbb{E}[\ln q(\boldsymbol{\mu}, \boldsymbol{\Lambda})]$. If we now treat $\pi_k$ as a parameter without a prior distribution, $\pi_k$ will only occur in the second term of the lower bound, i.e., $\mathbb{E}[\ln p(\mathbf{Z}|\boldsymbol{\pi})]$ . Therefore, we can obtain:
$$\mathcal{L} \propto \mathbb{E}[\ln p(\mathbf{Z}|\boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \ln \pi_k$$
Where we have used Eq $\mathbb{E}[\ln p(\mathbf{Z}|\boldsymbol{\pi})] = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \ln \widetilde{\pi}_k$, and here since $\pi_k$ is a point estimate, the expectation $\mathbb{E}[\ln \pi_k]$ will reduce to $\ln \pi_k$ . Now we introduce a Lagrange Multiplier.
$$\text{Lag} = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \ln \pi_k + \lambda \cdot \Big(\sum_{k=1}^{K} \pi_k - 1\Big)$$
Calculating the derivative of the expression above with respect to $\pi_k$ and setting it to zero, we obtain:
$$\frac{\sum_{n=1}^{N} r_{nk}}{\pi_k} + \lambda = \frac{N_k}{\pi_k} + \lambda = 0 \tag{*}$$
Multiplying both sides by $\pi_k$ and then summing both sides over k, we obtain
$$\sum_{k=1}^K N_k + \lambda \sum_{k=1}^K \pi_k = 0$$
Since we know the summation of $N_k$ with respect to k equals N, and the summation of $\pi_k$ with respect to k equals 1, we rearrange the equation above, yielding:
$$\lambda = -N$$
Substituting it back into (\*), we can obtain:
$$\pi_k = \frac{N_k}{N} = \frac{1}{N} \sum_{n=1}^{N} r_{nk}$$
Just as required.
| 1,718
|
10
|
10.24
|
medium
|
We have seen in Section 10.2 that the singularities arising in the maximum likelihood treatment of Gaussian mixture models do not arise in a Bayesian treatment. Discuss whether such singularities would arise if the Bayesian model were solved using maximum posterior (MAP) estimation.
|
Recall that the singularity in the maximum likelihood estimation of the Gaussian mixture is caused by the determinant of the covariance matrix $\Sigma_k$ approaching 0, so that the value of $\mathcal{N}(\mathbf{x}_n|\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)$ approaches $+\infty$ . For more details, you can read Section 9.2, especially page 434.
In this problem, the intuition is that since we have introduced a prior distribution for $\Lambda_k$ , this singularity should not arise when adopting MAP. Let's verify this statement, beginning by writing down the posterior.
$$p(\mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}|\mathbf{X}) \propto p(\mathbf{X}|\mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) \cdot p(\mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda})$$
$$= p(\mathbf{X}|\mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) \cdot p(\mathbf{Z}|\boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) \cdot p(\boldsymbol{\pi}|\boldsymbol{\mu}, \boldsymbol{\Lambda}) \cdot p(\boldsymbol{\mu}, \boldsymbol{\Lambda})$$
$$= p(\mathbf{X}|\mathbf{Z}, \boldsymbol{\mu}, \boldsymbol{\Lambda}) \cdot p(\mathbf{Z}|\boldsymbol{\pi}) \cdot p(\boldsymbol{\pi}) \cdot p(\boldsymbol{\mu}, \boldsymbol{\Lambda})$$
Note that in the first step we have used Bayes' theorem, that in the second step we have used the fact that $p(a,b) = p(a|b) \cdot p(b)$ , and that in the last step we have omitted the extra dependence based on definition, i.e., Eq (10.37)-(10.40). Now let's calculate the MAP solution for $\Lambda_k$ .
$$\begin{split} \ln p(\mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}|\mathbf{X}) \propto\ & \frac{1}{2} \sum_{n=1}^{N} z_{nk} \Big\{ \ln |\boldsymbol{\Lambda}_{k}| - (\mathbf{x}_{n} - \boldsymbol{\mu}_{k})^{T} \boldsymbol{\Lambda}_{k} (\mathbf{x}_{n} - \boldsymbol{\mu}_{k}) \Big\} \\ & + \frac{1}{2} \Big\{ \ln |\boldsymbol{\Lambda}_{k}| - \beta_{0} (\boldsymbol{\mu}_{k} - \mathbf{m}_{0})^{T} \boldsymbol{\Lambda}_{k} (\boldsymbol{\mu}_{k} - \mathbf{m}_{0}) \Big\} \\ & + \frac{1}{2} \Big\{ (v_{0} - D - 1) \ln |\boldsymbol{\Lambda}_{k}| - \mathrm{Tr}[\mathbf{W}_{0}^{-1} \boldsymbol{\Lambda}_{k}] \Big\} + \mathrm{const} \\ =\ & c \cdot \ln |\boldsymbol{\Lambda}_{k}| - \mathrm{Tr}[\mathbf{B} \boldsymbol{\Lambda}_{k}] + \mathrm{const} \end{split}$$
Where const is the term independent of $\Lambda_k$ , and we have defined:
$$c = \frac{1}{2}(v_0 - D + \sum_{n=1}^{N} z_{nk})$$
and
$$\mathbf{B} = \frac{1}{2} \left\{ \sum_{n=1}^{N} z_{nk} (\mathbf{x}_n - \boldsymbol{\mu}_k) (\mathbf{x}_n - \boldsymbol{\mu}_k)^T + \beta_0 (\boldsymbol{\mu}_k - \mathbf{m}_0) (\boldsymbol{\mu}_k - \mathbf{m}_0)^T + \mathbf{W}_0^{-1} \right\}$$
Next we calculate the derivative of $\ln p(\mathbf{Z}, \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Lambda}|\mathbf{X})$ with respect to $\boldsymbol{\Lambda}_k$ and set it to 0, yielding:
$$c \cdot \mathbf{\Lambda}_k^{-1} - \mathbf{B} = 0$$
therefore, we obtain:
$$\mathbf{\Lambda}_k^{-1} = \frac{1}{c} \mathbf{B}$$
Note that in the MAP framework, we need to solve for $z_{nk}$ first, and then substitute them into c and $\mathbf{B}$ in the expression above. Nevertheless, from the expression above we can see that $\boldsymbol{\Lambda}_k^{-1}$ cannot become singular: $\mathbf{B}$ always contains the positive definite term $\frac{1}{2}\mathbf{W}_0^{-1}$ , so $|\boldsymbol{\Lambda}_k|$ remains finite and the likelihood cannot blow up as in the maximum likelihood case.
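To make the last point concrete, here is a small sketch (with invented numbers) that evaluates $\boldsymbol{\Lambda}_k^{-1} = \mathbf{B}/c$ for a component whose responsibility has collapsed onto a single repeated data point; the $\mathbf{W}_0^{-1}$ term in $\mathbf{B}$ keeps $\boldsymbol{\Lambda}_k^{-1}$ positive definite, so no singularity arises.

```python
import numpy as np

D = 2
nu0, beta0 = D + 2.0, 1.0
m0 = np.zeros(D)
W0_inv = np.eye(D)                     # hypothetical prior scale matrix (inverse)

# a pathological component: all of its responsibility sits on one repeated point
x = np.array([1.0, -1.0])
Nk = 10.0                              # sum_n z_nk
mu_k = x                               # component mean sitting exactly on the point

c = 0.5 * (nu0 - D + Nk)
B = 0.5 * (Nk * np.outer(x - mu_k, x - mu_k)        # data term: exactly zero here
           + beta0 * np.outer(mu_k - m0, mu_k - m0)
           + W0_inv)
Lam_inv = B / c
print(np.linalg.eigvalsh(Lam_inv))     # all eigenvalues strictly positive
print(np.linalg.det(Lam_inv))          # determinant bounded away from zero
```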
| 3,331
|
10
|
10.25
|
medium
|
- 10.25 (\*\*) The variational treatment of the Bayesian mixture of Gaussians, discussed in Section 10.2, made use of a factorized approximation $q(\mathbf{Z}) = \prod_{i=1}^{M} q_i(\mathbf{Z}_i).$ to the posterior distribution. As we saw in Figure 10.2, the factorized assumption causes the variance of the posterior distribution to be under-estimated for certain directions in parameter space. Discuss qualitatively the effect this will have on the variational approximation to the model evidence, and how this effect will vary with the number of components in the mixture. Hence explain whether the variational Gaussian mixture will tend to under-estimate or over-estimate the optimal number of components.
|
We qualitatively solve this problem. As the number of mixture components grows, so does the number of variables that may be correlated, but they are treated as independent under a variational approximation if Eq $q(\mathbf{Z}) = \prod_{i=1}^{M} q_i(\mathbf{Z}_i).$ has been used. Therefore, the proportion of probability mass under the true distribution, $p(\mathbf{Z}, \pi, \mu, \Sigma | \mathbf{X})$ , that the variational approximation $q(\mathbf{Z}, \pi, \mu, \Sigma)$ does not capture, will grow. The consequence will be that the second term in $\ln p(\mathbf{X}) = \mathcal{L}(q) + \mathrm{KL}(q||p)$, the KL divergence between $q(\mathbf{Z}, \pi, \mu, \Sigma)$ and $p(\mathbf{Z}, \pi, \mu, \Sigma | \mathbf{X})$ will increase.
To answer the question whether we will under-estimate or over-estimate the number of components, note that the KL penalty described above grows with the number of components, so the lower bound is depressed more strongly for models with more components (see also Fig. 10.3). Consequently, the variational Gaussian mixture will tend to under-estimate the optimal number of components.
| 1,016
|
10
|
10.26
|
hard
|
Extend the variational treatment of Bayesian linear regression to include a gamma hyperprior $\operatorname{Gam}(\beta|c_0,d_0)$ over $\beta$ and solve variationally, by assuming a factorized variational distribution of the form $q(\mathbf{w})q(\alpha)q(\beta)$ . Derive the variational update equations for the three factors in the variational distribution and also obtain an expression for the lower bound and for the predictive distribution.
|
In this problem, we also need to consider the prior $p(\beta) = \text{Gam}(\beta|c_0, d_0)$ . To be more specific, based on the original joint distribution $p(\mathbf{t}, \mathbf{w}, \alpha)$ , i.e., Eq $p(\mathbf{t}, \mathbf{w}, \alpha) = p(\mathbf{t}|\mathbf{w})p(\mathbf{w}|\alpha)p(\alpha).$, the joint distribution $p(\mathbf{t}, \mathbf{w}, \alpha, \beta)$ now should be written as:
$$p(\mathbf{t}, \mathbf{w}, \alpha, \beta) = p(\mathbf{t}|\mathbf{w}, \beta)p(\mathbf{w}|\alpha)p(\alpha)p(\beta)$$
Where the first term on the right hand side is given by Eq $p(\mathbf{t}|\mathbf{w}) = \prod_{n=1}^{N} \mathcal{N}(t_n|\mathbf{w}^{\mathrm{T}}\boldsymbol{\phi}_n, \beta^{-1})$, the second one is given by Eq $p(\mathbf{w}|\alpha) = \mathcal{N}(\mathbf{w}|\mathbf{0}, \alpha^{-1}\mathbf{I})$, the third one is given by Eq $p(\alpha) = \operatorname{Gam}(\alpha|a_0, b_0)$, and the last one is given by $Gam(\beta|c_0,d_0)$ . Using the variational framework, we assume a posterior variational distribution:
$$q(\mathbf{w}, \alpha, \beta) = q(\mathbf{w})q(\alpha)q(\beta)$$
It is trivial to observe that introducing a Gamma prior for $\beta$ doesn't affect $q(\alpha)$ because the expectation of $p(\beta)$ can be absorbed into the 'const' term in Eq $= (a_0 - 1) \ln \alpha - b_0 \alpha + \frac{M}{2} \ln \alpha - \frac{\alpha}{2} \mathbb{E}[\mathbf{w}^{\mathrm{T}} \mathbf{w}] + \text{const.}$. In other words, we still obtain Eq (10.93)-Eq(10.95).
Now we deal with $q(\mathbf{w})$ . By analogy to Eq (10.96)-(10.98), we can obtain:
$$\begin{split} & \ln q^{}(\mathbf{w}) & \propto & \mathbb{E}_{\beta}[\ln p(\mathbf{t}|\mathbf{w},\beta)] + \mathbf{E}_{\alpha}[\ln p(\mathbf{w}|\alpha)] + \text{const} \\ & \propto & -\frac{\mathbb{E}_{\beta}[\beta]}{2} \cdot \sum_{n=1}^{N} (t_{n} - \mathbf{w}^{T} \boldsymbol{\phi}_{n})^{2} - \frac{\mathbb{E}_{\alpha}[\alpha]}{2} \mathbf{w}^{T} \mathbf{w} + \text{const} \\ & = & -\frac{1}{2} \mathbf{w}^{T} \Big\{ \mathbb{E}_{\beta}[\beta] \cdot \boldsymbol{\Phi}^{T} \boldsymbol{\Phi} + \mathbb{E}_{\alpha}[\alpha] \cdot \mathbf{I} \Big\} \mathbf{w} + \mathbb{E}_{\beta}[\beta] \mathbf{w}^{T} \boldsymbol{\Phi}^{T} \mathbf{t} + \text{const} \end{split}$$
Therefore, by analogy to Eq (10.99)-(10.101), we can conclude that $q^*(\mathbf{w})$ is still Gaussian, i.e., $q^*(\mathbf{w}) = \mathcal{N}(\mathbf{w}|\mathbf{m}_N, \mathbf{S}_N)$ , where we have defined:
$$\mathbf{m}_N = \mathbb{E}_{\beta}[\beta] \mathbf{S}_N \mathbf{\Phi}^T \mathbf{t}$$
and
$$\mathbf{S}_{N} = \left\{ \mathbb{E}_{\beta}[\beta] \cdot \mathbf{\Phi}^{T} \mathbf{\Phi} + \mathbb{E}_{\alpha}[\alpha] \cdot \mathbf{I} \right\}^{-1}$$
Next, we deal with $q(\beta)$ . According to definition, we have:
$$\begin{split} \ln q^{}(\beta) & \propto & \mathbb{E}_{\mathbf{w}}[\ln p(\mathbf{t}|\mathbf{w},\beta)] + \ln p(\beta) + \operatorname{const} \\ & \propto & \frac{N}{2} \cdot \ln \beta - \frac{\beta}{2} \cdot \mathbb{E}[\sum_{n=1}^{N} (t_{n} - \mathbf{w}^{T} \boldsymbol{\phi}_{n})^{2}] + (c_{0} - 1) \ln \beta - d_{0}\beta \\ & = & (\frac{N}{2} + c_{0} - 1) \cdot \ln \beta - \frac{\beta}{2} \cdot \mathbb{E}[||\boldsymbol{\Phi}\mathbf{w} - \mathbf{t}||^{2}] - d_{0}\beta \\ & = & (\frac{N}{2} + c_{0} - 1) \cdot \ln \beta - \beta \cdot \left\{ \frac{1}{2} \cdot \mathbb{E}[||\boldsymbol{\Phi}\mathbf{w} - \mathbf{t}||^{2}] + d_{0} \right\} \\ & = & (\frac{N}{2} + c_{0} - 1) \cdot \ln \beta - \beta \cdot \left\{ \frac{1}{2} \cdot \mathbb{E}[\mathbf{w}^{T} \boldsymbol{\Phi}^{T} \boldsymbol{\Phi}\mathbf{w} - 2\mathbf{t}^{T} \boldsymbol{\Phi}\mathbf{w} + \mathbf{t}^{T} \mathbf{t}] + d_{0} \right\} \\ & = & (\frac{N}{2} + c_{0} - 1) \cdot \ln \beta - \beta \cdot \left\{ \frac{1}{2} \cdot \operatorname{Tr}[\boldsymbol{\Phi}^{T} \boldsymbol{\Phi} \mathbb{E}[\mathbf{w} \mathbf{w}^{T}]] - \mathbf{t}^{T} \boldsymbol{\Phi} \mathbb{E}[\mathbf{w}] + \frac{1}{2} \mathbf{t}^{T} \mathbf{t} + d_{0} \right\} \\ & = & (\frac{N}{2} + c_{0} - 1) \cdot \ln \beta - \beta \cdot \left\{ \frac{1}{2} \cdot \operatorname{Tr}[\boldsymbol{\Phi}^{T} \boldsymbol{\Phi}(\mathbf{m}_{N} \mathbf{m}_{N}^{T} + \mathbf{S}_{N})] - \mathbf{t}^{T} \boldsymbol{\Phi} \mathbf{m}_{N} + \frac{1}{2} \mathbf{t}^{T} \mathbf{t} + d_{0} \right\} \\ & = & (\frac{N}{2} + c_{0} - 1) \cdot \ln \beta - \beta \cdot \left\{ \frac{1}{2} \operatorname{Tr}[\boldsymbol{\Phi}^{T} \boldsymbol{\Phi} \mathbf{S}_{N}] + \frac{1}{2} \mathbf{m}_{N}^{T} \boldsymbol{\Phi}^{T} \boldsymbol{\Phi} \mathbf{m}_{N} - \mathbf{t}^{T} \boldsymbol{\Phi} \mathbf{m}_{N} + \frac{1}{2} \mathbf{t}^{T} \mathbf{t} + d_{0} \right\} \\ & = & (\frac{N}{2} + c_{0} - 1) \cdot \ln \beta - \frac{\beta}{2} \cdot \left\{ \operatorname{Tr}[\boldsymbol{\Phi}^{T} \boldsymbol{\Phi} \mathbf{S}_{N}] + ||\boldsymbol{\Phi} \mathbf{m}_{N} - \mathbf{t}||^{2} + 2d_{0} \right\} \end{split}$$
Therefore, we obtain $q^*(\beta) = \text{Gam}(\beta|c_N, d_N)$ , where we have defined:
$$c_N = \frac{N}{2} + c_0$$
and
$$d_N = d_0 + \frac{1}{2} \left\{ \text{Tr}[\boldsymbol{\Phi}^T \boldsymbol{\Phi} \mathbf{S}_N] + ||\boldsymbol{\Phi} \mathbf{m}_N - \mathbf{t}||^2 \right\}$$
Furthermore, notice that from (B.27), the expectations in $\mathbf{m}_N$ and $\mathbf{S}_N$ can be expressed in terms of $a_N$, $b_N$, $c_N$ and $d_N$:
$$\mathbb{E}[\alpha] = \frac{a_N}{b_N}$$
and $\mathbb{E}[\beta] = \frac{c_N}{d_N}$
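As a sanity check, the updates derived above can be iterated to convergence. The following is a minimal numerical sketch (not part of the original solution): the synthetic data, the polynomial basis, and the hyperparameter values $a_0, b_0, c_0, d_0$ are all hypothetical, and the loop simply alternates the updates for $q(\mathbf{w})$, $q(\alpha)$ and $q(\beta)$.

```python
import numpy as np

# Minimal sketch of the variational updates derived above, on synthetic 1-D data
# with a polynomial design matrix; all data and hyperparameter settings are hypothetical.
rng = np.random.default_rng(0)
N, M = 30, 4
x = rng.uniform(-1, 1, N)
t = np.sin(np.pi * x) + 0.1 * rng.standard_normal(N)
Phi = np.vander(x, M, increasing=True)            # N x M design matrix

a0 = b0 = c0 = d0 = 1e-3                          # broad Gamma hyperpriors (assumed)
aN, cN = a0 + M / 2, c0 + N / 2                   # these two stay fixed
bN, dN = b0, d0                                   # initial values

for _ in range(100):
    E_alpha, E_beta = aN / bN, cN / dN            # E[alpha] = a_N/b_N, E[beta] = c_N/d_N
    SN = np.linalg.inv(E_beta * Phi.T @ Phi + E_alpha * np.eye(M))
    mN = E_beta * SN @ Phi.T @ t                  # m_N = E[beta] S_N Phi^T t
    bN = b0 + 0.5 * (mN @ mN + np.trace(SN))      # update for q(alpha)
    dN = d0 + 0.5 * (np.trace(Phi.T @ Phi @ SN)   # update for q(beta) derived above
                     + np.sum((Phi @ mN - t) ** 2))

print("E[alpha] =", aN / bN, " E[beta] =", cN / dN)
```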
We have now obtained all of the update formulae. Next, we calculate the lower bound. By noticing Eq $-\mathbb{E}_{\alpha}[\ln q(\mathbf{w})]_{\mathbf{w}} - \mathbb{E}[\ln q(\alpha)].$, in this case the first term on the right-hand side of Eq $-\mathbb{E}_{\alpha}[\ln q(\mathbf{w})]_{\mathbf{w}} - \mathbb{E}[\ln q(\alpha)].$ will be modified, and two more terms will be added on the right-hand side, i.e., $+\mathbb{E}[\ln p(\beta)]$ and $-\mathbb{E}[\ln q^*(\beta)]$. Let's start by calculating the two additional terms:
$$\begin{split} +\mathbb{E}[\ln p(\beta)] &= (c_0 - 1)\mathbb{E}[\ln \beta] - d_0\mathbb{E}[\beta] + c_0 \ln d_0 - \ln \Gamma(c_0) \\ &= (c_0 - 1) \cdot (\varphi(c_N) - \ln d_N) - d_0 \frac{c_N}{d_N} + c_0 \ln d_0 - \ln \Gamma(c_0) \end{split}$$
where we have used (B.26) and (B.30). Similarly, we have:
$$-\mathbb{E}[\ln q^{*}(\beta)] = \ln \Gamma(c_N) - (c_N - 1) \cdot \varphi(c_N) - \ln d_N + c_N$$
where we have used (B.31). Finally, we deal with the modification of the first term on the right hand side of Eq (10.107):
$$\begin{split} \mathbb{E}_{\beta,\mathbf{w}}[\ln p(\mathbf{t}|\mathbf{w},\beta)] &= \mathbb{E}_{\beta} \left\{ \frac{N}{2} \ln \beta - \frac{N}{2} \ln 2\pi - \frac{\beta}{2} \mathbb{E}_{\mathbf{w}}[||\mathbf{\Phi}\mathbf{w} - \mathbf{t}||^{2}] \right\} \\ &= \frac{N}{2} \mathbb{E}_{\beta}[\ln \beta] - \frac{N}{2} \ln 2\pi - \frac{\mathbb{E}_{\beta}[\beta]}{2} \mathbb{E}_{\mathbf{w}}[||\mathbf{\Phi}\mathbf{w} - \mathbf{t}||^{2}] \\ &= \frac{N}{2} (\varphi(c_{N}) - \ln d_{N} - \ln 2\pi) - \frac{c_{N}}{2d_{N}} \mathbb{E}_{\mathbf{w}}[||\mathbf{\Phi}\mathbf{w} - \mathbf{t}||^{2}] \\ &= \frac{N}{2} (\varphi(c_{N}) - \ln d_{N} - \ln 2\pi) - \frac{c_{N}}{2d_{N}} \left\{ \text{Tr}[\mathbf{\Phi}^{T}\mathbf{\Phi}\mathbf{S}_{N}] + ||\mathbf{\Phi}\mathbf{m}_{N} - \mathbf{t}||^{2} \right\} \end{split}$$
The last question is the predictive distribution. It is not difficult to observe that the predictive distribution is still given by Eq $= \mathcal{N}(t|\mathbf{m}_{N}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}), \sigma^{2}(\mathbf{x})) \qquad$ and Eq $\sigma^{2}(\mathbf{x}) = \frac{1}{\beta} + \phi(\mathbf{x})^{\mathrm{T}} \mathbf{S}_{N} \phi(\mathbf{x}).$, with $1/\beta$ replaced by $1/\mathbb{E}[\beta]$ .
| 7,754
|
10
|
10.27
|
medium
|
By making use of the formulae given in Appendix B show that the variational lower bound for the linear basis function regression model, defined by $-\mathbb{E}_{\alpha}[\ln q(\mathbf{w})]_{\mathbf{w}} - \mathbb{E}[\ln q(\alpha)].$, can be written in the form $-\mathbb{E}_{\alpha}[\ln q(\mathbf{w})]_{\mathbf{w}} - \mathbb{E}[\ln q(\alpha)].$ with the various terms defined by (10.108)–(10.112).
|
Let's deal with the terms in Eq(10.107) one by one. Noticing Eq $p(\mathbf{t}|\mathbf{w}) = \prod_{n=1}^{N} \mathcal{N}(t_n|\mathbf{w}^{\mathrm{T}}\boldsymbol{\phi}_n, \beta^{-1})$, we have:
$$\begin{split} \mathbb{E}[\ln p(\mathbf{t}|\mathbf{w})]_{\mathbf{w}} &= -\frac{N}{2}\ln(2\pi) + \frac{N}{2}\ln\beta - \frac{\beta}{2}\mathbb{E}\Big[\sum_{n=1}^{N}(t_n - \mathbf{w}^T\boldsymbol{\phi}_n)^2\Big] \\ &= -\frac{N}{2}\ln(2\pi) + \frac{N}{2}\ln\beta - \frac{\beta}{2}\mathbb{E}\Big[\sum_{n=1}^{N}t_n^2 - 2\sum_{n=1}^{N}t_n\cdot\mathbf{w}^T\boldsymbol{\phi}_n + \sum_{n=1}^{N}\mathbf{w}^T\boldsymbol{\phi}_n\cdot\boldsymbol{\phi}_n^T\mathbf{w}\Big] \\ &= -\frac{N}{2}\ln(2\pi) + \frac{N}{2}\ln\beta - \frac{\beta}{2}\mathbb{E}\Big[\mathbf{t}^T\mathbf{t} - 2\mathbf{w}^T\mathbf{\Phi}^T\mathbf{t} + \mathbf{w}^T\cdot(\mathbf{\Phi}^T\mathbf{\Phi})\cdot\mathbf{w}\Big] \\ &= -\frac{N}{2}\ln(2\pi) + \frac{N}{2}\ln\beta - \frac{\beta}{2}\mathbf{t}^T\mathbf{t} + \beta\,\mathbb{E}[\mathbf{w}]^T\mathbf{\Phi}^T\mathbf{t} - \frac{\beta}{2}\mathrm{Tr}\Big[\mathbb{E}[\mathbf{w}\mathbf{w}^T]\cdot(\mathbf{\Phi}^T\mathbf{\Phi})\Big] \end{split}$$
Where we have defined $\mathbf{\Phi} = [\boldsymbol{\phi}_1, \, \boldsymbol{\phi}_2, \, ..., \, \boldsymbol{\phi}_N]^T$ , i.e., the *i*-th row of $\mathbf{\Phi}$ is $\boldsymbol{\phi}_i^T$ . Then using Eq(10.99), $\mathbf{m}_N = \beta \mathbf{S}_N \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}$ and $\mathbb{E}[\mathbf{w}\mathbf{w}^{\mathrm{T}}] = \mathbf{m}_{N}\mathbf{m}_{N}^{\mathrm{T}} + \mathbf{S}_{N}.$, it is easy to obtain $- \frac{\beta}{2} \mathrm{Tr} \left[\mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi} (\mathbf{m}_{N} \mathbf{m}_{N}^{\mathrm{T}} + \mathbf{S}_{N})\right] \qquad$. Next, we deal with the second term by noticing Eq (10.88):
$$\mathbb{E}\big[\ln p(\mathbf{w}|\alpha)\big]_{\mathbf{w},\alpha} = -\frac{M}{2}\ln(2\pi) + \frac{M}{2}\mathbb{E}[\ln \alpha]_{\alpha} - \frac{\mathbb{E}[\alpha]_{\alpha}}{2} \cdot \mathbb{E}[\mathbf{w}^{T}\mathbf{w}]_{\mathbf{w}}$$
Then using Eq (10.93)-(10.95), (B.27), (B.30) and Eq $\mathbb{E}[\mathbf{w}\mathbf{w}^{\mathrm{T}}] = \mathbf{m}_{N}\mathbf{m}_{N}^{\mathrm{T}} + \mathbf{S}_{N}.$, we obtain Eq $- \frac{a_{N}}{2b_{N}} \left[\mathbf{m}_{N}^{\mathrm{T}} \mathbf{m}_{N} + \mathrm{Tr}(\mathbf{S}_{N})\right] \qquad$ just as required. Then we deal with the third term in Eq $-\mathbb{E}_{\alpha}[\ln q(\mathbf{w})]_{\mathbf{w}} - \mathbb{E}[\ln q(\alpha)].$ by noticing Eq (10.89):
$$\mathbb{E}[\ln p(\alpha)]_{\alpha} = a_0 \ln b_0 + (a_0 - 1)\mathbb{E}[\ln \alpha] - b_0 \mathbb{E}[\alpha] - \ln \Gamma(a_0)$$
Similarly, using Eq (10.93)-(10.95), (B.27), (B.30), we will obtain Eq $\mathbb{E}[\ln p(\alpha)]_{\alpha} = a_0 \ln b_0 + (a_0 - 1) [\psi(a_N) - \ln b_N] -b_0 \frac{a_N}{b_N} - \ln \Gamma(a_N)$. Notice that there is a typo in Eq $\mathbb{E}[\ln p(\alpha)]_{\alpha} = a_0 \ln b_0 + (a_0 - 1) [\psi(a_N) - \ln b_N] -b_0 \frac{a_N}{b_N} - \ln \Gamma(a_N)$. The last term in Eq $\mathbb{E}[\ln p(\alpha)]_{\alpha} = a_0 \ln b_0 + (a_0 - 1) [\psi(a_N) - \ln b_N] -b_0 \frac{a_N}{b_N} - \ln \Gamma(a_N)$ should be $\ln \Gamma(a_0)$ instead of $\ln \Gamma(a_N)$ .
Finally we deal with the last two terms in Eq (10.107). We notice that these two terms are actually negative entropy of a Gaussian and a Gamma distribution, so that using (B.31) and (B.41), we can obtain:
$$-\mathbb{E}[\ln q(\alpha)]_{\alpha} = \mathbb{H}[\alpha] = \ln \Gamma(a_N) - (a_N - 1) \cdot \varphi(a_N) - \ln b_N + a_N$$
and
$$-\mathbb{E}[\ln q(\mathbf{w})]_{\mathbf{w}} = \mathbb{H}[\mathbf{w}] = \frac{1}{2}\ln |\mathbf{S}_N| + \frac{M}{2}(1 + \ln(2\pi))$$
**Problem 10.28 Solution**
| 3,772
|
10
|
10.29
|
easy
|
Show that the function $f(x) = \ln(x)$ is concave for $0 < x < \infty$ by computing its second derivative. Determine the form of the dual function $g(\lambda)$ defined by $g(\lambda) = \min_{x} \left\{ \lambda x - f(x) \right\}.$, and verify that minimization of $\lambda x - g(\lambda)$ with respect to $\lambda$ according to $f(x) = \min_{\lambda} \{\lambda x - g(\lambda)\}$ indeed recovers the function $\ln(x)$ .
|
The second derivative of f(x) is given by:
$$\frac{d^2}{dx^2}(\ln x) = \frac{d}{dx}(\frac{1}{x}) = -\frac{1}{x^2} < 0$$
Therefore, $f(x) = \ln x$ is concave for $0 < x < \infty$ . Based on definition, i.e., Eq $g(\lambda) = \min_{x} \left\{ \lambda x - f(x) \right\}.$, we can obtain:
$$g(\lambda) = \min_{x} \{\lambda x - \ln x\}$$
We observe that:
$$\frac{d}{dx}(\lambda x - \ln x) = \lambda - \frac{1}{x}$$
In other words, when $\lambda \le 0$ , $\lambda x - \ln x$ will always decrease as $x$ increases. On the other hand, when $\lambda > 0$ , $\lambda x - \ln x$ achieves its minimum when $x = 1/\lambda$ . Therefore, we conclude that:
$$g(\lambda) = \lambda \cdot \frac{1}{\lambda} - \ln \frac{1}{\lambda} = 1 + \ln \lambda$$
Substituting $g(\lambda)$ back into Eq $f(x) = \min_{\lambda} \{\lambda x - g(\lambda)\}$, we obtain:
$$f(x) = \min_{\lambda} \{\lambda x - 1 - \ln \lambda\}$$
We calculate the derivative:
$$\frac{d}{d\lambda}(\lambda x - 1 - \ln \lambda) = x - \frac{1}{\lambda}$$
Therefore, when $\lambda = 1/x$ , $\lambda x - 1 - \ln \lambda$ achieves minimum with respect to $\lambda$ , which yields:
$$f(x) = \frac{1}{x} \cdot x - 1 - \ln \frac{1}{x} = \ln x$$
In other words, we have shown that Eq $f(x) = \min_{\lambda} \{\lambda x - g(\lambda)\}$ indeed recovers $f(x) = \ln x$ .
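A quick numerical check of this dual pair is straightforward; in the sketch below (an illustration only, with an arbitrary grid of $\lambda$ values and arbitrary test points), $\min_\lambda\{\lambda x - g(\lambda)\}$ reproduces $\ln x$ to grid accuracy.

```python
import numpy as np

# Check numerically that min_lambda {lambda*x - g(lambda)}, with g(lambda) = 1 + ln(lambda),
# recovers ln(x); the grid of lambda values and the test points are arbitrary.
lam = np.linspace(1e-3, 50, 200000)
for x in [0.1, 0.5, 1.0, 2.0, 7.3]:
    dual = np.min(lam * x - (1.0 + np.log(lam)))
    print(x, dual, np.log(x))      # dual should match ln(x) closely
```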
| 1,364
|
10
|
10.3
|
medium
|
- 10.3 (\*\*) Consider a factorized variational distribution $q(\mathbf{Z})$ of the form $q(\mathbf{Z}) = \prod_{i=1}^{M} q_i(\mathbf{Z}_i).$. By using the technique of Lagrange multipliers, verify that minimization of the Kullback-Leibler divergence $\mathrm{KL}(p\|q)$ with respect to one of the factors $q_i(\mathbf{Z}_i)$ , keeping all other factors fixed, leads to the solution $q_j^{*}(\mathbf{Z}_j) = \int p(\mathbf{Z}) \prod_{i \neq j} d\mathbf{Z}_i = p(\mathbf{Z}_j).$.
|
Let's start from the definition of the KL divergence, $\mathrm{KL}(p\|q) = -\int p(\mathbf{Z}) \ln \left\{ \frac{q(\mathbf{Z})}{p(\mathbf{Z})} \right\} \mathrm{d}\mathbf{Z}$.
$$\begin{split} KL(p||q) &= -\int p(\mathbf{Z}) \Big[ \sum_{i=1}^{M} \ln q_i(\mathbf{Z}_i) \Big] d\mathbf{Z} + \text{const} \\ &= -\int p(\mathbf{Z}) \Big[ \ln q_j(\mathbf{Z}_j) + \sum_{i \neq j} \ln q_i(\mathbf{Z}_i) \Big] d\mathbf{Z} + \text{const} \\ &= -\int p(\mathbf{Z}) \ln q_j(\mathbf{Z}_j) d\mathbf{Z} + \text{const} \\ &= -\int \Big[ \int p(\mathbf{Z}) \prod_{i \neq j} d\mathbf{Z}_i \Big] \ln q_j(\mathbf{Z}_j) d\mathbf{Z}_j + \text{const} \\ &= -\int P(\mathbf{Z}_j) \ln q_j(\mathbf{Z}_j) d\mathbf{Z}_j + \text{const} \end{split}$$
Note that in the third step, since all the factors $q_i(\mathbf{Z}_i)$ , where $i \neq j$ , are fixed, they can be absorbed into the 'const' term. In the last step, we have denoted the marginal distribution:
$$p(\mathbf{Z}_j) = \int p(\mathbf{Z}) \prod_{i \neq j} d\mathbf{Z}_i$$
We introduce a Lagrange multiplier to enforce that $q_j(\mathbf{Z}_j)$ integrates to 1.
$$L = -\int P(\mathbf{Z}_j) \ln q_j(\mathbf{Z}_j) d\mathbf{Z}_j + \lambda (\int q_j(\mathbf{Z}_j) d\mathbf{Z}_j - 1)$$
Using the functional derivative (for more details, you can refer to Appendix D or Prob.1.34), we calculate the functional derivative of L with respect to $q_j(\mathbf{Z}_j)$ and set it to 0:
$$-\frac{p(\mathbf{Z}_j)}{q_j(\mathbf{Z}_j)} + \lambda = 0$$
Rearranging it, we can obtain:
$$\lambda q_j(\mathbf{Z}_j) = p(\mathbf{Z}_j)$$
Integrating both sides with respect to $\mathbf{Z}_j$ , we see that $\lambda = 1$ . Substituting it back into the derivative, we can obtain the optimal $q_j(\mathbf{Z}_j)$ :
$$q_j^{*}(\mathbf{Z}_j) = p(\mathbf{Z}_j)$$
Notice that, strictly speaking, we should also enforce $q_j(\mathbf{Z}_j) \geq 0$ in the Lagrangian. However, once we enforce only the normalization constraint and obtain the closed-form solution above, $q_j(\mathbf{Z}_j)$ is automatically non-negative for all $\mathbf{Z}_j$ because $p(\mathbf{Z}_j)$ is a PDF. Therefore, there is no need to introduce this inequality constraint explicitly.
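The result is also easy to check numerically for a small discrete distribution. The sketch below (all distributions randomly generated, purely illustrative) confirms that, with the other factor held fixed, no normalized $q_1$ achieves a smaller $\mathrm{KL}(p\|q)$ than the marginal $p(z_1)$.

```python
import numpy as np

# Sketch: for a random discrete p(z1, z2) and an arbitrary fixed factor q2(z2),
# the factor q1 minimizing KL(p || q1 q2) is the marginal p(z1).
rng = np.random.default_rng(0)
p = rng.random((4, 5))
p /= p.sum()                                      # joint p(z1, z2)
q2 = rng.random(5)
q2 /= q2.sum()                                    # fixed second factor

def kl(q1):
    q = np.outer(q1, q2)
    return np.sum(p * (np.log(p) - np.log(q)))

marginal = p.sum(axis=1)                          # p(z1)
random_best = min(kl(q1 / q1.sum()) for q1 in rng.random((20000, 4)))
print(kl(marginal), "<=", random_best)            # the marginal attains the minimum
```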
| 2,359
|
10
|
10.30
|
easy
|
By evaluating the second derivative, show that the log logistic function $f(x) = -\ln(1+e^{-x})$ is concave. Derive the variational upper bound $\sigma(x) \leqslant \exp(\lambda x - g(\lambda))$ directly by making a second order Taylor expansion of the log logistic function around a point $x=\xi$ .
- 10.31 (\*\*) By finding the second derivative with respect to x, show that the function $f(x) = -\ln(e^{x/2} + e^{-x/2})$ is a concave function of x. Now consider the second derivatives with respect to the variable $x^2$ and hence show that it is a convex function of $x^2$ . Plot graphs of f(x) against x and against $x^2$ . Derive the lower bound $\sigma(x) \geqslant \sigma(\xi) \exp\left\{ (x - \xi)/2 - \lambda(\xi)(x^2 - \xi^2) \right\}$ on the logistic sigmoid function directly by making a first order Taylor series expansion of the function f(x) in the variable $x^2$ centred on the value $\xi^2$ .
|
We begin by calculating the first derivative:
$$\frac{df(x)}{dx} = -\frac{-e^{-x}}{1 + e^{-x}} = \sigma(x) \cdot e^{-x}$$
Then we can obtain the second derivative:
$$\frac{d^2f(x)}{dx^2} = \frac{-e^{-x}(1+e^{-x}) - e^{-x}(-e^{-x})}{(1+e^{-x})^2} = -[\sigma(x)]^2 \cdot e^{-x} < 0$$
Therefore, the log logistic function f(x) is concave. Utilizing this concave property, we can obtain:
$$f(x) \le f(\xi) + f'(\xi) \cdot (x - \xi)$$
which gives,
$$\ln \sigma(x) \le \ln \sigma(\xi) + \sigma(\xi) \cdot e^{-\xi} \cdot (x - \xi) \tag{*}$$
Comparing the expression above with Eq $\ln \sigma(x) \leqslant \lambda x - g(\lambda)$, we define $\lambda = \sigma(\xi) \cdot e^{-\xi}$ . Then we can obtain:
$$\lambda = \sigma(\xi) \cdot e^{-\xi} = \frac{e^{-\xi}}{1 + e^{-\xi}} = 1 - \frac{1}{1 + e^{-\xi}} = 1 - \sigma(\xi)$$
In other words, we have obtained $\sigma(\xi) = 1 - \lambda$ . In order to simplify (\*), we need to express $\xi$ using $\lambda$ and x. According to the definition of $\lambda$ , we can obtain:
$$\xi = \ln \sigma(\xi) - \ln \lambda = \ln(1 - \lambda) - \ln \lambda$$
Now (\*) can be simplified as:
$$\ln \sigma(x) \leq \ln(1-\lambda) + \lambda \cdot (x-\xi)$$
$$= \lambda \cdot x + \ln(1-\lambda) - \lambda \cdot \xi$$
$$= \lambda \cdot x + \ln(1-\lambda) - \lambda \cdot \left[\ln(1-\lambda) - \ln\lambda\right]$$
$$= \lambda \cdot x + (1-\lambda)\ln(1-\lambda) + \lambda \ln\lambda$$
$$= \lambda \cdot x - g(\lambda)$$
Just as required.
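The bound can also be verified numerically. The sketch below picks a hypothetical tangent point $\xi$, forms $\lambda = 1-\sigma(\xi)$ and $g(\lambda) = -\lambda\ln\lambda - (1-\lambda)\ln(1-\lambda)$ as derived above, and checks that $\lambda x - g(\lambda)$ upper-bounds $\ln\sigma(x)$ everywhere, with equality at $x = \xi$.

```python
import numpy as np

# Sketch: verify ln(sigma(x)) <= lambda*x - g(lambda) on a grid; the tangent point xi is arbitrary.
def log_sigma(x):
    return -np.log1p(np.exp(-x))

xi = 1.5                                      # hypothetical tangent point
lam = 1.0 - 1.0 / (1.0 + np.exp(-xi))         # lambda = 1 - sigma(xi)
g = -lam * np.log(lam) - (1.0 - lam) * np.log(1.0 - lam)

x = np.linspace(-6.0, 6.0, 25)
bound = lam * x - g
assert np.all(bound + 1e-12 >= log_sigma(x))  # upper bound holds everywhere
print(np.max(log_sigma(x) - bound))           # <= 0, and ~0 at x = xi
```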
| 1,489
|
10
|
10.33
|
easy
|
By differentiating the quantity $Q(\xi, \xi^{\text{old}})$ defined by (10.161) with respect to the variational parameter $\xi_n$ show that the update equation for $\xi_n$ for the Bayesian logistic regression model is given by $(\xi_n^{\text{new}})^2 = \boldsymbol{\phi}_n^{\text{T}} \mathbb{E}[\mathbf{w} \mathbf{w}^{\text{T}}] \boldsymbol{\phi}_n = \boldsymbol{\phi}_n^{\text{T}} \left( \mathbf{S}_N + \mathbf{m}_N \mathbf{m}_N^{\text{T}} \right) \boldsymbol{\phi}_n$.
|
To prove Eq $(\xi_n^{\text{new}})^2 = \boldsymbol{\phi}_n^{\text{T}} \mathbb{E}[\mathbf{w} \mathbf{w}^{\text{T}}] \boldsymbol{\phi}_n = \boldsymbol{\phi}_n^{\text{T}} \left( \mathbf{S}_N + \mathbf{m}_N \mathbf{m}_N^{\text{T}} \right) \boldsymbol{\phi}_n$, we only need to prove Eq $0 = \lambda'(\xi_n)(\boldsymbol{\phi}_n^{\mathrm{T}} \mathbb{E}[\mathbf{w}\mathbf{w}^{\mathrm{T}}] \boldsymbol{\phi}_n - \xi_n^2).$, from which Eq $(\xi_n^{\text{new}})^2 = \boldsymbol{\phi}_n^{\text{T}} \mathbb{E}[\mathbf{w} \mathbf{w}^{\text{T}}] \boldsymbol{\phi}_n = \boldsymbol{\phi}_n^{\text{T}} \left( \mathbf{S}_N + \mathbf{m}_N \mathbf{m}_N^{\text{T}} \right) \boldsymbol{\phi}_n$ can be easily derived according to the text below Eq $f(x) = \min_{\lambda} \{\lambda x - g(\lambda)\}$. Therefore, in what follows, we prove that the derivative of $Q(\xi, \xi^{\text{old}})$ with respect to $\xi_n$ will give Eq $0 = \lambda'(\xi_n)(\boldsymbol{\phi}_n^{\mathrm{T}} \mathbb{E}[\mathbf{w}\mathbf{w}^{\mathrm{T}}] \boldsymbol{\phi}_n - \xi_n^2).$. We start by noticing Eq $\frac{d\sigma}{da} = \sigma(1 - \sigma).$, i.e.,
$$\frac{d\sigma(\xi)}{d\xi} = \sigma(\xi) \cdot (1 - \sigma(\xi))$$
Noticing Eq $\lambda(\xi) = \frac{1}{2\xi} \left[ \sigma(\xi) - \frac{1}{2} \right].$, now we can obtain:
$$\frac{dQ(\boldsymbol{\xi}, \boldsymbol{\xi}^{\text{old}})}{d\xi_{n}} = \frac{\sigma^{'}(\xi_{n})}{\sigma(\xi_{n})} - \frac{1}{2} - \lambda^{'}(\xi_{n}) \cdot (\boldsymbol{\phi}_{n}^{T} \mathbb{E}[\mathbf{w}\mathbf{w}^{T}] \boldsymbol{\phi}_{n} - \xi_{n}^{2}) - \lambda(\xi_{n}) \cdot (-2\xi_{n})$$
$$= 1 - \sigma(\xi_{n}) - \frac{1}{2} + 2\xi_{n} \cdot \lambda(\xi_{n}) - \lambda^{'}(\xi_{n}) \cdot (\boldsymbol{\phi}_{n}^{T} \mathbb{E}[\mathbf{w}\mathbf{w}^{T}] \boldsymbol{\phi}_{n} - \xi_{n}^{2})$$
$$= \frac{1}{2} - \sigma(\xi_{n}) + \sigma(\xi_{n}) - \frac{1}{2} - \lambda^{'}(\xi_{n}) \cdot (\boldsymbol{\phi}_{n}^{T} \mathbb{E}[\mathbf{w}\mathbf{w}^{T}] \boldsymbol{\phi}_{n} - \xi_{n}^{2})$$
$$= -\lambda^{'}(\xi_{n}) \cdot (\boldsymbol{\phi}_{n}^{T} \mathbb{E}[\mathbf{w}\mathbf{w}^{T}] \boldsymbol{\phi}_{n} - \xi_{n}^{2})$$
Setting the derivative equal to zero, we obtain Eq $0 = \lambda'(\xi_n)(\boldsymbol{\phi}_n^{\mathrm{T}} \mathbb{E}[\mathbf{w}\mathbf{w}^{\mathrm{T}}] \boldsymbol{\phi}_n - \xi_n^2).$, from which Eq (10.163) follows.
| 2,426
|
10
|
10.34
|
medium
|
- 10.34 (\*\*) In this exercise we derive re-estimation equations for the variational parameters $\xi$ in the Bayesian logistic regression model of Section 4.5 by direct maximization of the lower bound given by $\mathcal{L}(\boldsymbol{\xi}) = \frac{1}{2} \ln \frac{|\mathbf{S}_{N}|}{|\mathbf{S}_{0}|} - \frac{1}{2} \mathbf{m}_{N}^{\mathrm{T}} \mathbf{S}_{N}^{-1} \mathbf{m}_{N} + \frac{1}{2} \mathbf{m}_{0}^{\mathrm{T}} \mathbf{S}_{0}^{-1} \mathbf{m}_{0} + \sum_{n=1}^{N} \left\{ \ln \sigma(\xi_{n}) - \frac{1}{2} \xi_{n} - \lambda(\xi_{n}) \xi_{n}^{2} \right\}.$. To do this set the derivative of $\mathcal{L}(\xi)$ with respect to $\xi_n$ equal to zero, making use of the result $\frac{d}{d\alpha}\ln|\mathbf{A}| = \operatorname{Tr}\left(\mathbf{A}^{-1}\frac{d}{d\alpha}\mathbf{A}\right).$ for the derivative of the log of a determinant, together with the expressions $\mathbf{m}_N = \mathbf{S}_N \left( \mathbf{S}_0^{-1} \mathbf{m}_0 + \sum_{n=1}^N (t_n - 1/2) \phi_n \right)$ and $\mathbf{S}_{N}^{-1} = \mathbf{S}_{0}^{-1} + 2\sum_{n=1}^{N} \lambda(\xi_{n}) \phi_{n} \phi_{n}^{\mathrm{T}}.$ which define the mean and covariance of the variational posterior distribution $q(\mathbf{w})$ .
|
First, we should clarify one thing: there is a typo in Eq (10.164). It is not difficult to spot this error if we notice that for $q(\mathbf{w}) = \mathcal{N}(\mathbf{w}|\mathbf{m}_N, \mathbf{S}_N)$ , in its logarithm, i.e., $\ln q(\mathbf{w})$ , the term $\frac{1}{2} \ln |\mathbf{S}_N|$ should always have the same sign as $\frac{1}{2}\mathbf{m}_N^T \mathbf{S}_N^{-1}\mathbf{m}_N$ . This is our intuition. However, this is not the case in Eq (10.164). Based on Eq (10.159), Eq (10.153) and the Gaussian prior $p(\mathbf{w}) = \mathcal{N}(\mathbf{w}|\mathbf{m}_0, \mathbf{S}_0)$ , we can analytically obtain the correct lower bound $L(\xi)$ (this will also be proved rigorously in the next problem):
$$L(\xi) = \frac{1}{2} \ln |\mathbf{S}_{N}| - \frac{1}{2} \ln |\mathbf{S}_{0}| + \frac{1}{2} \mathbf{m}_{N}^{T} \mathbf{S}_{N}^{-1} \mathbf{m}_{N} - \frac{1}{2} \mathbf{m}_{0}^{T} \mathbf{S}_{0}^{-1} \mathbf{m}_{0}$$
$$+ \sum_{n=1}^{N} \left\{ \ln \sigma(\xi_{n}) - \frac{1}{2} \xi_{n} + \lambda(\xi_{n}) \xi_{n}^{2} \right\}$$
$$= \frac{1}{2} \ln |\mathbf{S}_{N}| + \frac{1}{2} \mathbf{m}_{N}^{T} \mathbf{S}_{N}^{-1} \mathbf{m}_{N} + \sum_{n=1}^{N} \left\{ \ln \sigma(\xi_{n}) - \frac{1}{2} \xi_{n} + \lambda(\xi_{n}) \xi_{n}^{2} \right\} + \text{const}$$
Where const denotes the terms unrelated to $\xi_n$ , because $\mathbf{m}_0$ and $\mathbf{S}_0$ don't depend on $\xi_n$ . Moreover, since $\mathbf{S}_N^{-1} \mathbf{m}_N$ also doesn't depend on $\xi_n$ according to Eq (10.157), it is convenient to define the variable $\mathbf{z}_N = \mathbf{S}_N^{-1} \mathbf{m}_N$ , and we can easily verify:
$$\mathbf{m}_N^T \mathbf{S}_N^{-1} \mathbf{m}_N = [\mathbf{S}_N \mathbf{S}_N^{-1} \mathbf{m}_N]^T \mathbf{S}_N^{-1} [\mathbf{S}_N \mathbf{S}_N^{-1} \mathbf{m}_N] = [\mathbf{S}_N \mathbf{z}_N]^T \mathbf{S}_N^{-1} [\mathbf{S}_N \mathbf{z}_N] = \mathbf{z}_N^T \mathbf{S}_N \mathbf{z}_N$$
Now, we can obtain:
$$\begin{split} \frac{\partial L(\xi)}{\partial \xi_n} &= \frac{d}{d\xi_n} \left\{ \frac{1}{2} \ln |\mathbf{S}_N| + \frac{1}{2} \mathbf{z}_N^T \mathbf{S}_N \mathbf{z}_N + \sum_{n=1}^N \left\{ \ln \sigma(\xi_n) - \frac{1}{2} \xi_n + \lambda(\xi_n) \xi_n^2 \right\} \right\} \\ &= \frac{1}{2} \mathrm{Tr} \big[ \mathbf{S}_N^{-1} \frac{\partial \mathbf{S}_N}{\partial \xi_n} \big] + \frac{1}{2} \mathrm{Tr} \big[ \mathbf{z}_N \mathbf{z}_N^T \cdot \frac{\partial \mathbf{S}_N}{\partial \xi_n} \big] + \lambda'(\xi_n) \xi_n^2 \end{split}$$
Where we have used Eq(3.117) for the first term, and for the second term we have used:
$$\frac{d}{d\xi_n} \left\{ \frac{1}{2} \mathbf{z}_N^T \mathbf{S}_N \mathbf{z}_N \right\} = \frac{1}{2} \frac{d}{d\xi_n} \left\{ \text{Tr} \left[ \mathbf{z}_N \mathbf{z}_N^T \cdot \mathbf{S}_N \right] \right\} = \frac{1}{2} \text{Tr} \left[ \mathbf{z}_N \mathbf{z}_N^T \cdot \frac{\partial \mathbf{S}_N}{\partial \xi_n} \right]$$
Furthermore, for the last term, we can follow the same procedure as in the previous problem, and now our remaining task is to calculate $\partial \mathbf{S}_N/\partial \xi_n$ . Based on Eq(10.158) and (C.21), we can obtain:
$$\frac{\partial \mathbf{S}_{N}}{\partial \xi_{n}} = -\mathbf{S}_{N} \frac{\partial \mathbf{S}_{N}^{-1}}{\partial \xi_{n}} \mathbf{S}_{N} = -\mathbf{S}_{N} \cdot [2\lambda'(\xi_{n}) \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T}] \cdot \mathbf{S}_{N}$$
Substituting it back into the derivative, we can obtain:
$$\begin{split} \frac{\partial L(\boldsymbol{\xi})}{\partial \boldsymbol{\xi}_{n}} &= \frac{1}{2} \mathrm{Tr} \big[ (\mathbf{S}_{N}^{-1} + \mathbf{z}_{N} \mathbf{z}_{N}^{T}) \frac{\partial \mathbf{S}_{N}}{\partial \boldsymbol{\xi}_{n}} \big] + \boldsymbol{\lambda}'(\boldsymbol{\xi}_{n}) \boldsymbol{\xi}_{n}^{2} \\ &= -\frac{1}{2} \mathrm{Tr} \big[ (\mathbf{S}_{N}^{-1} + \mathbf{z}_{N} \mathbf{z}_{N}^{T}) \mathbf{S}_{N} \cdot [2\boldsymbol{\lambda}'(\boldsymbol{\xi}_{n}) \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T}] \cdot \mathbf{S}_{N} \big] + \boldsymbol{\lambda}'(\boldsymbol{\xi}_{n}) \boldsymbol{\xi}_{n}^{2} \\ &= -\boldsymbol{\lambda}'(\boldsymbol{\xi}_{n}) \cdot \left\{ \mathrm{Tr} \big[ (\mathbf{S}_{N}^{-1} + \mathbf{z}_{N} \mathbf{z}_{N}^{T}) \cdot \mathbf{S}_{N} \cdot \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T} \cdot \mathbf{S}_{N} \big] - \boldsymbol{\xi}_{n}^{2} \right\} = 0 \end{split}$$
Therefore, we can obtain:
$$\begin{split} \boldsymbol{\xi}_n^2 &= \operatorname{Tr} \big[ (\mathbf{S}_N^{-1} + \mathbf{z}_N \mathbf{z}_N^T) \cdot \mathbf{S}_N \cdot \boldsymbol{\phi}_n \boldsymbol{\phi}_n^T \cdot \mathbf{S}_N \big] \\ &= (\mathbf{S}_N \cdot \boldsymbol{\phi}_n)^T \cdot (\mathbf{S}_N^{-1} + \mathbf{z}_N \mathbf{z}_N^T) \cdot (\mathbf{S}_N \cdot \boldsymbol{\phi}_n) \\ &= \boldsymbol{\phi}_n^T \cdot (\mathbf{S}_N + \mathbf{S}_N \mathbf{z}_N \mathbf{z}_N^T \mathbf{S}_N) \cdot \boldsymbol{\phi}_n \\ &= \boldsymbol{\phi}_n^T \cdot (\mathbf{S}_N + \mathbf{m}_N \mathbf{m}_N^T) \cdot \boldsymbol{\phi}_n \end{split}$$
Where we have used the definition of $\mathbf{z}_N$ , i.e., $\mathbf{z}_N = \mathbf{S}_N^{-1} \cdot \mathbf{m}_N$ and also repeatedly used the symmetry property of $\mathbf{S}_N$ .
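For concreteness, the posterior updates for $q(\mathbf{w})$ from Eq (10.157)-(10.158) and the $\xi_n$ re-estimation just derived can be iterated together, as in the following sketch; the synthetic data, the prior $\mathcal{N}(\mathbf{w}|\mathbf{0},\mathbf{I})$ and the initialization of $\boldsymbol{\xi}$ are all hypothetical choices.

```python
import numpy as np

# Sketch of the variational logistic regression updates on synthetic data;
# data, prior and initialization are hypothetical.
rng = np.random.default_rng(1)
N, M = 100, 3
Phi = np.hstack([np.ones((N, 1)), rng.standard_normal((N, M - 1))])
w_true = np.array([0.5, -1.0, 2.0])
t = (rng.random(N) < 1.0 / (1.0 + np.exp(-Phi @ w_true))).astype(float)

m0, S0inv = np.zeros(M), np.eye(M)                # prior N(w | 0, I)

def lam(xi):                                      # lambda(xi) = (sigma(xi) - 1/2) / (2 xi)
    return (1.0 / (1.0 + np.exp(-xi)) - 0.5) / (2.0 * xi)

xi = np.ones(N)
for _ in range(50):
    SN = np.linalg.inv(S0inv + 2.0 * (Phi.T * lam(xi)) @ Phi)
    mN = SN @ (S0inv @ m0 + Phi.T @ (t - 0.5))
    xi = np.sqrt(np.einsum('ni,ij,nj->n', Phi, SN + np.outer(mN, mN), Phi))

print("variational posterior mean:", mN)
```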
| 5,188
|
10
|
10.35
|
medium
|
- 10.35 (\*\*) Derive the result $\mathcal{L}(\boldsymbol{\xi}) = \frac{1}{2} \ln \frac{|\mathbf{S}_{N}|}{|\mathbf{S}_{0}|} - \frac{1}{2} \mathbf{m}_{N}^{\mathrm{T}} \mathbf{S}_{N}^{-1} \mathbf{m}_{N} + \frac{1}{2} \mathbf{m}_{0}^{\mathrm{T}} \mathbf{S}_{0}^{-1} \mathbf{m}_{0} + \sum_{n=1}^{N} \left\{ \ln \sigma(\xi_{n}) - \frac{1}{2} \xi_{n} - \lambda(\xi_{n}) \xi_{n}^{2} \right\}.$ for the lower bound $\mathcal{L}(\xi)$ in the variational logistic regression model. This is most easily done by substituting the expressions for the Gaussian prior $q(\mathbf{w}) = \mathcal{N}(\mathbf{w}|\mathbf{m}_0, \mathbf{S}_0)$ , together with the lower bound $h(\mathbf{w}, \xi)$ on the likelihood function, into the integral $\ln p(\mathbf{t}) = \ln \int p(\mathbf{t}|\mathbf{w})p(\mathbf{w}) \, d\mathbf{w} \geqslant \ln \int h(\mathbf{w}, \boldsymbol{\xi})p(\mathbf{w}) \, d\mathbf{w} = \mathcal{L}(\boldsymbol{\xi}). \quad$ which defines $\mathcal{L}(\xi)$ . Next gather together the terms which depend on $\mathbf{w}$ in the exponential and complete the square to give a Gaussian integral, which can then be evaluated by invoking the standard result for the normalization coefficient of a multivariate Gaussian. Finally take the logarithm to obtain $\mathcal{L}(\boldsymbol{\xi}) = \frac{1}{2} \ln \frac{|\mathbf{S}_{N}|}{|\mathbf{S}_{0}|} - \frac{1}{2} \mathbf{m}_{N}^{\mathrm{T}} \mathbf{S}_{N}^{-1} \mathbf{m}_{N} + \frac{1}{2} \mathbf{m}_{0}^{\mathrm{T}} \mathbf{S}_{0}^{-1} \mathbf{m}_{0} + \sum_{n=1}^{N} \left\{ \ln \sigma(\xi_{n}) - \frac{1}{2} \xi_{n} - \lambda(\xi_{n}) \xi_{n}^{2} \right\}.$.
|
There is a typo in Eq $\mathcal{L}(\boldsymbol{\xi}) = \frac{1}{2} \ln \frac{|\mathbf{S}_{N}|}{|\mathbf{S}_{0}|} - \frac{1}{2} \mathbf{m}_{N}^{\mathrm{T}} \mathbf{S}_{N}^{-1} \mathbf{m}_{N} + \frac{1}{2} \mathbf{m}_{0}^{\mathrm{T}} \mathbf{S}_{0}^{-1} \mathbf{m}_{0} + \sum_{n=1}^{N} \left\{ \ln \sigma(\xi_{n}) - \frac{1}{2} \xi_{n} - \lambda(\xi_{n}) \xi_{n}^{2} \right\}.$; for more details you can refer to the previous problem. Let's calculate $L(\xi)$ based on Eq (10.159), Eq (10.153) and the Gaussian prior $p(\mathbf{w}) = \mathcal{N}(\mathbf{w}|\mathbf{m}_0, \mathbf{S}_0)$ :
$$h(\mathbf{w}, \boldsymbol{\xi}) p(\mathbf{w}) = \mathcal{N}(\mathbf{w} | \mathbf{m}_{0}, \mathbf{S}_{0}) \cdot \prod_{n=1}^{N} \sigma(\xi_{n}) \exp\left\{\mathbf{w}^{T} \boldsymbol{\phi}_{n} \mathbf{t}_{n} - (\mathbf{w}^{T} \boldsymbol{\phi}_{n} + \xi_{n})/2\right.$$
$$\left. - \lambda(\xi_{n}) ([\mathbf{w}^{T} \boldsymbol{\phi}_{n}]^{2} - \xi_{n}^{2})\right\}$$
$$= \left\{ (2\pi)^{-W/2} \cdot |\mathbf{S}_{0}|^{-1/2} \cdot \prod_{n=1}^{N} \sigma(\xi_{n}) \right\} \cdot \exp\left\{ -\frac{1}{2} (\mathbf{w} - \mathbf{m}_{0})^{T} \mathbf{S}_{0}^{-1} (\mathbf{w} - \mathbf{m}_{0}) \right\}$$
$$\cdot \prod_{n=1}^{N} \exp\left\{\mathbf{w}^{T} \boldsymbol{\phi}_{n} \mathbf{t}_{n} - (\mathbf{w}^{T} \boldsymbol{\phi}_{n} + \xi_{n})/2 - \lambda(\xi_{n}) ([\mathbf{w}^{T} \boldsymbol{\phi}_{n}]^{2} - \xi_{n}^{2}) \right\}$$
$$= \left\{ (2\pi)^{-W/2} \cdot |\mathbf{S}_{0}|^{-1/2} \cdot \prod_{n=1}^{N} \sigma(\xi_{n}) \cdot \exp\left( -\frac{1}{2} \mathbf{m}_{0}^{T} \mathbf{S}_{0}^{-1} \mathbf{m}_{0} - \sum_{n=1}^{N} \frac{\xi_{n}}{2} + \sum_{n=1}^{N} \lambda(\xi_{n}) \xi_{n}^{2} \right) \right\}$$
$$\cdot \exp\left\{ -\frac{1}{2} \mathbf{w}^{T} \left( \mathbf{S}_{0}^{-1} + 2 \sum_{n=1}^{N} \lambda(\xi_{n}) \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T} \right) \mathbf{w} + \mathbf{w}^{T} \left( \mathbf{S}_{0}^{-1} \mathbf{m}_{0} + \sum_{n=1}^{N} \boldsymbol{\phi}_{n} (t_{n} - \frac{1}{2}) \right) \right\}$$
Noticing Eq (10.157)-(10.158), we can obtain:
$$\begin{split} h(\mathbf{w}, \boldsymbol{\xi}) \, p(\mathbf{w}) &= \left. \left\{ (2\pi)^{-W/2} \cdot |\mathbf{S}_0|^{-1/2} \cdot \prod_{n=1}^N \sigma(\xi_n) \cdot \exp\left(-\frac{1}{2}\mathbf{m}_0^T \mathbf{S}_0^{-1} \mathbf{m}_0 - \sum_{n=1}^N \frac{\xi_n}{2} + \sum_{n=1}^N \lambda(\xi_n) \xi_n^2 \right) \right\} \\ &\cdot \exp\left\{ -\frac{1}{2}\mathbf{w}^T \mathbf{S}_N^{-1} \mathbf{w} + \mathbf{w}^T \mathbf{S}_N^{-1} \mathbf{m}_N \right\} \\ &= \left. \left\{ (2\pi)^{-W/2} \cdot |\mathbf{S}_0|^{-1/2} \cdot \prod_{n=1}^N \sigma(\xi_n) \right. \\ &\cdot \exp\left( -\frac{1}{2}\mathbf{m}_0^T \mathbf{S}_0^{-1} \mathbf{m}_0 - \sum_{n=1}^N \frac{\xi_n}{2} + \sum_{n=1}^N \lambda(\xi_n) \xi_n^2 + \frac{1}{2}\mathbf{m}_N^T \mathbf{S}_N^{-1} \mathbf{m}_N \right) \right\} \\ &\cdot \exp\left\{ -\frac{1}{2}(\mathbf{w} - \mathbf{m}_N)^T \mathbf{S}_N^{-1} (\mathbf{w} - \mathbf{m}_N) \right\} \end{split}$$
Therefore, utilizing the normalization constant of Gaussian distribution,
now we can obtain:
$$\begin{split} \int h(\mathbf{w}, \boldsymbol{\xi}) p(\mathbf{w}) d\mathbf{w} &= (2\pi)^{W/2} \cdot |\mathbf{S}_N|^{1/2} \cdot \left\{ (2\pi)^{-W/2} \cdot |\mathbf{S}_0|^{-1/2} \cdot \prod_{n=1}^N \sigma(\xi_n) \right. \\ & \cdot \exp\left( -\frac{1}{2} \mathbf{m}_0^T \mathbf{S}_0^{-1} \mathbf{m}_0 - \sum_{n=1}^N \frac{\xi_n}{2} + \sum_{n=1}^N \lambda(\xi_n) \xi_n^2 + \frac{1}{2} \mathbf{m}_N^T \mathbf{S}_N^{-1} \mathbf{m}_N \right) \right\} \\ &= \left. \left\{ (\frac{|\mathbf{S}_N|}{|\mathbf{S}_0|})^{1/2} \cdot \prod_{n=1}^N \sigma(\xi_n) \right. \\ & \cdot \exp\left( -\frac{1}{2} \mathbf{m}_0^T \mathbf{S}_0^{-1} \mathbf{m}_0 - \sum_{n=1}^N \frac{\xi_n}{2} + \sum_{n=1}^N \lambda(\xi_n) \xi_n^2 + \frac{1}{2} \mathbf{m}_N^T \mathbf{S}_N^{-1} \mathbf{m}_N \right) \right\} \end{split}$$
Therefore, $L(\xi)$ can be written as:
$$L(\boldsymbol{\xi}) = \ln \int h(\mathbf{w}, \boldsymbol{\xi}) p(\mathbf{w}) d\mathbf{w}$$
$$= \frac{1}{2} \ln \frac{|\mathbf{S}_N|}{|\mathbf{S}_0|} - \frac{1}{2} \mathbf{m}_0^T \mathbf{S}_0^{-1} \mathbf{m}_0 + \frac{1}{2} \mathbf{m}_N^T \mathbf{S}_N^{-1} \mathbf{m}_N + \sum_{n=1}^N \left\{ \ln \sigma(\xi_n) - \frac{1}{2} \xi_n + \lambda(\xi_n) \xi_n^2 \right\}$$
| 4,295
|
10
|
10.36
|
medium
|
Consider the ADF approximation scheme discussed in Section 10.7, and show that inclusion of the factor $f_j(\theta)$ leads to an update of the model evidence of the form
$$p_j(\mathcal{D}) \simeq p_{j-1}(\mathcal{D})Z_j$$
where $Z_j$ is the normalization constant defined by $Z_j = \int f_j(\boldsymbol{\theta}) q^{\setminus j}(\boldsymbol{\theta}) \, \mathrm{d}\boldsymbol{\theta}.$. By applying this result recursively, and initializing with $p_0(\mathcal{D}) = 1$ , derive the result
$$p(\mathcal{D}) \simeq \prod_{j} Z_{j}.$$
|
Let's clarify this problem. What it asks us to prove is the following: suppose that at the beginning the joint distribution comprises a product of $j-1$ factors, i.e.,
$$p_{j-1}(D,\boldsymbol{\theta}) = \prod_{i=1}^{j-1} f_{i}(\boldsymbol{\theta})$$
and now the joint distribution comprises a product of *j* factors:
$$p_j(D, \boldsymbol{\theta}) = \prod_{i=1}^j f_i(\boldsymbol{\theta}) = p_{j-1}(D, \boldsymbol{\theta}) \cdot f_j(\boldsymbol{\theta})$$
Then we are asked to prove Eq $p_j(\mathcal{D}) \simeq p_{j-1}(\mathcal{D})Z_j$. This situation corresponds to j-1 data points at the beginning and then one more data point is obtained. For more details you can read the text below Eq $p(\mathcal{D}, \boldsymbol{\theta}) = \prod_{i} f_i(\boldsymbol{\theta}).$. Based on definition, we can write down:
$$\begin{split} p_{j}(D) &= \int p_{j}(D,\boldsymbol{\theta})d\boldsymbol{\theta} \\ &= \int p_{j-1}(D,\boldsymbol{\theta})\cdot f_{j}(\boldsymbol{\theta})d\boldsymbol{\theta} \\ &= \int p_{j-1}(D)\cdot p_{j-1}(\boldsymbol{\theta}|D)\cdot f_{j}(\boldsymbol{\theta})d\boldsymbol{\theta} \\ &= p_{j-1}(D)\cdot \int p_{j-1}(\boldsymbol{\theta}|D)\cdot f_{j}(\boldsymbol{\theta})d\boldsymbol{\theta} \\ &\approx p_{j-1}(D)\cdot \int q_{j-1}(\boldsymbol{\theta})\cdot f_{j}(\boldsymbol{\theta})d\boldsymbol{\theta} \\ &= p_{j-1}(D)\cdot Z_{j} \end{split}$$
Where we have sequentially used Bayes' theorem, the approximation $q_{j-1}(\boldsymbol{\theta}) \approx p_{j-1}(\boldsymbol{\theta}|D)$ for the posterior, and Eq $Z_j = \int f_j(\boldsymbol{\theta}) q^{\setminus j}(\boldsymbol{\theta}) \, \mathrm{d}\boldsymbol{\theta}.$. To further prove Eq $p(\mathcal{D}) \simeq \prod_{j} Z_{j}.$, we only need to apply the expression we have just proved recursively, initializing with $p_0(\mathcal{D}) = 1$.
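For a fully conjugate toy model, ADF is exact, so the running product of the $Z_j$ recovers the marginal likelihood exactly. The sketch below (assuming SciPy is available, and with hypothetical data) illustrates this for $\theta \sim \mathcal{N}(0,1)$ and $x_n|\theta \sim \mathcal{N}(\theta,1)$.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Sketch: sequential (ADF-style) evidence accumulation for a conjugate Gaussian model,
# where each Z_j is the one-step predictive density and the product equals p(D).
rng = np.random.default_rng(0)
x = rng.standard_normal(5) + 0.7                    # hypothetical observations
m, v, logZ = 0.0, 1.0, 0.0                          # prior mean/variance, running log evidence
for xn in x:
    Zj = norm.pdf(xn, loc=m, scale=np.sqrt(v + 1.0))   # Z_j = integral of q_{j-1}(theta) * N(x_j | theta, 1)
    logZ += np.log(Zj)
    v_new = 1.0 / (1.0 / v + 1.0)                   # exact Gaussian posterior update
    m = v_new * (m / v + xn)
    v = v_new

exact = multivariate_normal.logpdf(x, mean=np.zeros(5), cov=np.eye(5) + np.ones((5, 5)))
print(logZ, exact)                                  # identical up to rounding
```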
| 1,766
|
10
|
10.37
|
easy
|
Consider the expectation propagation algorithm from Section 10.7, and suppose that one of the factors $f_0(\theta)$ in the definition $p(\mathcal{D}, \boldsymbol{\theta}) = \prod_{i} f_i(\boldsymbol{\theta}).$ has the same exponential family functional form as the approximating distribution $q(\theta)$ . Show that if the factor $\widetilde{f}_0(\theta)$ is initialized to be $f_0(\theta)$ , then an EP update to refine $\widetilde{f}_0(\theta)$ leaves $f_0(\theta)$ unchanged. This situation typically arises when one of the factors is the prior $p(\theta)$ , and so we see that the prior factor can be incorporated once exactly and does not need to be refined.
|
Let's start from the definition. $q(\boldsymbol{\theta})$ will be initialized as
$$q^{\text{init}}(\boldsymbol{\theta}) = \widetilde{f}_0(\boldsymbol{\theta}) \prod_{i \neq 0} \widetilde{f}_i(\boldsymbol{\theta}) = f_0(\boldsymbol{\theta}) \prod_{i \neq 0} \widetilde{f}_i(\boldsymbol{\theta})$$
Where we have used $\tilde{f}_0(\boldsymbol{\theta}) = f_0(\boldsymbol{\theta})$ according to the problem description. Then we can obtain:
$$q^{\setminus 0}(\boldsymbol{\theta}) = \frac{q^{\text{init}}(\boldsymbol{\theta})}{\widetilde{f}_0(\boldsymbol{\theta})} = \prod_{i \neq 0} \widetilde{f}_i(\boldsymbol{\theta})$$
Next, we will obtain $q^{\text{new}}(\boldsymbol{\theta})$ by matching its moments against $q^{\setminus 0}(\boldsymbol{\theta}) f_0(\boldsymbol{\theta})$ , which exactly equals:
$$q^{\setminus 0}(\boldsymbol{\theta})f_0(\boldsymbol{\theta}) = \prod_{i \neq 0} \widetilde{f}_i(\boldsymbol{\theta}) \cdot f_0(\boldsymbol{\theta}) = q^{\text{init}}(\boldsymbol{\theta})$$
In other words, in order to obtain $q^{\text{new}}(\boldsymbol{\theta})$ , we need to match its moments against $q^{\setminus 0}(\boldsymbol{\theta})f_0(\boldsymbol{\theta}) = q^{\text{init}}(\boldsymbol{\theta})$ , and since $q^{\text{new}}$ and $q^{\text{init}}$ both belong to the exponential family, they will be identical if they have the same moments. Moreover, based on Eq $Z_j = \int q^{\setminus j}(\boldsymbol{\theta}) f_j(\boldsymbol{\theta}) d\boldsymbol{\theta}.$, we have:
$$Z_0 = \int q^{\setminus 0}(\boldsymbol{\theta}) f_0(\boldsymbol{\theta}) d\boldsymbol{\theta} = \int q^{\text{init}}(\boldsymbol{\theta}) d\boldsymbol{\theta} = 1$$
Therefore, based on Eq(10.207), we have:
$$\widetilde{f}_0(\boldsymbol{\theta}) = Z_0 \frac{q^{\text{new}}(\boldsymbol{\theta})}{q^{\setminus 0}(\boldsymbol{\theta})} = 1 \cdot \frac{q^{\text{init}}(\boldsymbol{\theta})}{q^{\setminus 0}(\boldsymbol{\theta})} = f_0(\boldsymbol{\theta})$$
| 1,849
|
10
|
10.38
|
hard
|
In this exercise and the next, we shall verify the results (10.214)–(10.224) for the expectation propagation algorithm applied to the clutter problem. Begin by using the division formula $q^{\setminus j}(\boldsymbol{\theta}) = \frac{q(\boldsymbol{\theta})}{\widetilde{f}_j(\boldsymbol{\theta})}.$ to derive the expressions $\mathbf{m}^{\setminus n} = \mathbf{m} + v^{\setminus n} v_n^{-1} (\mathbf{m} - \mathbf{m}_n)$ and $(v^{\setminus n})^{-1} = v^{-1} - v_n^{-1}.$ by completing the square inside the exponential to identify the mean and variance. Also, show that the normalization constant $Z_n$ , defined by $Z_j = \int q^{\setminus j}(\boldsymbol{\theta}) f_j(\boldsymbol{\theta}) d\boldsymbol{\theta}.$, is given for the clutter problem by $Z_n = (1 - w)\mathcal{N}(\mathbf{x}_n | \mathbf{m}^{n}, (v^{n} + 1)\mathbf{I}) + w\mathcal{N}(\mathbf{x}_n | \mathbf{0}, a\mathbf{I}).$. This can be done by making use of the general result $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$.
|
Based on Eq $q^{\setminus j}(\boldsymbol{\theta}) = \frac{q(\boldsymbol{\theta})}{\widetilde{f}_j(\boldsymbol{\theta})}.$, $q(\boldsymbol{\theta}) = \mathcal{N}(\boldsymbol{\theta}|\mathbf{m}, v\mathbf{I}).$ and $\widetilde{f}_n(\boldsymbol{\theta}) = s_n \mathcal{N}(\boldsymbol{\theta}|\mathbf{m}_n, v_n \mathbf{I})$, we can obtain:
$$q^{/n}(\boldsymbol{\theta}) = \frac{q(\boldsymbol{\theta})}{\widetilde{f}_{n}(\boldsymbol{\theta})} = \frac{\mathcal{N}(\boldsymbol{\theta}|\mathbf{m}, v\mathbf{I})}{s_{n}\mathcal{N}(\boldsymbol{\theta}|\mathbf{m}_{n}, v_{n}\mathbf{I})}$$
$$\propto \frac{\exp\left\{-\frac{1}{2}(\boldsymbol{\theta} - \mathbf{m})^{T}(v\mathbf{I})^{-1}(\boldsymbol{\theta} - \mathbf{m})\right\}}{\exp\left\{-\frac{1}{2}(\boldsymbol{\theta} - \mathbf{m}_{n})^{T}(v_{n}\mathbf{I})^{-1}(\boldsymbol{\theta} - \mathbf{m}_{n})\right\}}$$
$$= \exp\left\{-\frac{1}{2}(\boldsymbol{\theta} - \mathbf{m})^{T}(v\mathbf{I})^{-1}(\boldsymbol{\theta} - \mathbf{m}) + \frac{1}{2}(\boldsymbol{\theta} - \mathbf{m}_{n})^{T}(v_{n}\mathbf{I})^{-1}(\boldsymbol{\theta} - \mathbf{m}_{n})\right\}$$
$$= \exp\left\{-\frac{1}{2}(\boldsymbol{\theta}^{T}\mathbf{A}\boldsymbol{\theta} + \boldsymbol{\theta}^{T} \cdot \mathbf{B} + \mathbf{C})\right\}$$
Where we have completed squares over $\theta$ in the last step, and we have defined:
$$\mathbf{A} = (v\mathbf{I})^{-1} - (v_n\mathbf{I})^{-1}$$
and $\mathbf{B} = 2 \cdot \left[ -(v\mathbf{I})^{-1} \cdot \mathbf{m} + (v_n\mathbf{I})^{-1} \cdot \mathbf{m}_n \right]$
Note that in order to match this to a Gaussian, we don't actually need **C**, so we omit it here. Now we match this against a Gaussian, beginning by first considering the quadratic term, we can obtain:
$$[\boldsymbol{\Sigma}^{/n}]^{-1} = (v\mathbf{I})^{-1} - (v_n\mathbf{I})^{-1} = (v^{-1} - v_n^{-1})\mathbf{I} = [v^{/n}]^{-1} \cdot \mathbf{I}$$
It is identical to Eq $(v^{\setminus n})^{-1} = v^{-1} - v_n^{-1}$. By matching the linear term, we can also obtain:
$$-2 \cdot [\mathbf{\Sigma}^{/n}]^{-1} \cdot (\mathbf{m}^{/n}) = \mathbf{B} = 2 \cdot \left[ -(v\mathbf{I})^{-1} \cdot \mathbf{m} + (v_n\mathbf{I})^{-1} \cdot \mathbf{m}_n \right]$$
Rearranging it, we can obtain:
$$(\mathbf{m}^{/n}) = -[\mathbf{\Sigma}^{/n}] \cdot \left[ -(v\mathbf{I})^{-1} \cdot \mathbf{m} + (v_n \mathbf{I})^{-1} \cdot \mathbf{m}_n \right]$$
$$= -[v^{/n}] \cdot \left[ -v^{-1} \cdot \mathbf{m} + v_n^{-1} \cdot \mathbf{m}_n \right]$$
$$= v^{/n} \cdot v^{-1} \cdot \mathbf{m} - \frac{v^{/n}}{v_n} \cdot \mathbf{m}_n$$
$$= v^{/n} ([v^{/n}]^{-1} - v_n^{-1}) \cdot \mathbf{m} - \frac{v^{/n}}{v_n} \cdot \mathbf{m}_n$$
$$= \mathbf{m} + \frac{v^{/n}}{v_n} \cdot (\mathbf{m} - \mathbf{m}_n)$$
Which is identical to Eq $\mathbf{m}^{\setminus n} = \mathbf{m} + v^{\setminus n} v_n^{-1} (\mathbf{m} - \mathbf{m}_n)$. **One important point worth clarifying is that**: for two arbitrary Gaussian random variables, their ratio is not Gaussian. You can find more details by searching for "ratio distribution" on Wikipedia. In particular, the ratio of two independent zero-mean Gaussian random variables follows a Cauchy distribution. Moreover, the product of two Gaussian random variables is not a Gaussian random variable either.
However, the product of two Gaussian PDFs, e.g., $p(\mathbf{x})$ and $p(\mathbf{y})$ , can be a Gaussian PDF: when $\mathbf{x}$ and $\mathbf{y}$ are independent, $p(\mathbf{x},\mathbf{y}) = p(\mathbf{x})p(\mathbf{y})$ is a Gaussian PDF. In the EP framework, according to Eq $q(\boldsymbol{\theta}) \propto \prod_{i} \widetilde{f}_{i}(\boldsymbol{\theta}).$, we have already assumed that $q(\boldsymbol{\theta})$ , i.e., Eq $q(\boldsymbol{\theta}) = \mathcal{N}(\boldsymbol{\theta}|\mathbf{m}, v\mathbf{I}).$, is given by the product of the $\tilde{f}_j(\boldsymbol{\theta})$ , i.e., (10.213). Therefore, dividing by one factor still gives the product of the remaining Gaussian-form factors, which is still Gaussian.
Finally, based on Eq $Z_j = \int q^{\setminus j}(\boldsymbol{\theta}) f_j(\boldsymbol{\theta}) d\boldsymbol{\theta}.$ and $p(\mathbf{x}|\boldsymbol{\theta}) = (1 - w)\mathcal{N}(\mathbf{x}|\boldsymbol{\theta}, \mathbf{I}) + w\mathcal{N}(\mathbf{x}|\mathbf{0}, a\mathbf{I})$, we can obtain:
$$Z_{n} = \int q^{/n}(\boldsymbol{\theta})p(\mathbf{x}_{n}|\boldsymbol{\theta})d\boldsymbol{\theta}$$
$$= \int \mathcal{N}(\boldsymbol{\theta}|\mathbf{m}^{/n}, v^{/n}\mathbf{I}) \cdot \{(1-w)\mathcal{N}(\mathbf{x}_{n}|\boldsymbol{\theta}, \mathbf{I}) + w\mathcal{N}(\mathbf{x}_{n}|\boldsymbol{0}, \alpha\mathbf{I})\} d\boldsymbol{\theta}$$
$$= (1-w)\int \mathcal{N}(\boldsymbol{\theta}|\mathbf{m}^{/n}, v^{/n}\mathbf{I})\mathcal{N}(\mathbf{x}_{n}|\boldsymbol{\theta}, \mathbf{I}) d\boldsymbol{\theta} + w\int \mathcal{N}(\boldsymbol{\theta}|\mathbf{m}^{/n}, v^{/n}\mathbf{I}) \cdot \mathcal{N}(\mathbf{x}_{n}|\boldsymbol{0}, \alpha\mathbf{I}) d\boldsymbol{\theta}$$
$$= (1-w)\mathcal{N}(\mathbf{x}_{n}|\mathbf{m}^{/n}, (v^{/n}+1)\mathbf{I}) + w\mathcal{N}(\mathbf{x}_{n}|\boldsymbol{0}, \alpha\mathbf{I})$$
Where we have used Eq $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$.
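The closed-form $Z_n$ can also be checked by Monte Carlo integration over $\boldsymbol{\theta}$; in the sketch below the values of $D$, $w$, $a$, $\mathbf{m}^{\setminus n}$, $v^{\setminus n}$ and $\mathbf{x}_n$ are all hypothetical.

```python
import numpy as np

# Monte Carlo check (a sketch) of Z_n for the clutter problem with hypothetical parameters.
rng = np.random.default_rng(0)
D, w, a = 2, 0.3, 10.0
m_cav, v_cav = np.array([0.5, -0.2]), 1.5          # cavity mean m^{\n} and variance v^{\n}
x_n = np.array([1.0, 0.7])

def gauss(x, mu, var):                             # isotropic Gaussian density N(x | mu, var*I)
    return np.exp(-np.sum((x - mu) ** 2, axis=-1) / (2 * var)) / (2 * np.pi * var) ** (D / 2)

theta = m_cav + np.sqrt(v_cav) * rng.standard_normal((100000, D))   # samples from q^{\n}
mc = np.mean((1 - w) * gauss(x_n, theta, 1.0) + w * gauss(x_n, np.zeros(D), a))
closed = (1 - w) * gauss(x_n, m_cav, v_cav + 1.0) + w * gauss(x_n, np.zeros(D), a)
print(mc, closed)                                  # agree to Monte Carlo accuracy
```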
| 5,368
|
10
|
10.39
|
hard
|
Show that the mean and variance of $q^{\text{new}}(\theta)$ for EP applied to the clutter problem are given by $\mathbf{m} = \mathbf{m}^{n} + \rho_n \frac{v^{n}}{v^{n} + 1} (\mathbf{x}_n - \mathbf{m}^{n})$ and $v = v^{n} - \rho_n \frac{(v^{n})^2}{v^{n} + 1} + \rho_n (1 - \rho_n) \frac{(v^{n})^2 ||\mathbf{x}_n - \mathbf{m}^{n}||^2}{D(v^{n} + 1)^2}$. To do this, first prove the following results for the expectations of $\theta$ and $\theta\theta^{\mathrm{T}}$ under $q^{\mathrm{new}}(\theta)$
$$\mathbb{E}[\boldsymbol{\theta}] = \mathbf{m}^{n} + v^{n} \nabla_{\mathbf{m}^{n}} \ln Z_{n}$$
$$\mathbb{E}[\boldsymbol{\theta}^{T} \boldsymbol{\theta}] = 2(v^{n})^{2} \nabla_{v^{n}} \ln Z_{n} + 2\mathbb{E}[\boldsymbol{\theta}]^{T} \mathbf{m}^{n} - \|\mathbf{m}^{n}\|^{2}$$
and then make use of the result $Z_n = (1 - w)\mathcal{N}(\mathbf{x}_n | \mathbf{m}^{n}, (v^{n} + 1)\mathbf{I}) + w\mathcal{N}(\mathbf{x}_n | \mathbf{0}, a\mathbf{I}).$ for $Z_n$ . Next, prove the results (10.220)– $s_n = \frac{Z_n}{(2\pi v_n)^{D/2} \mathcal{N}(\mathbf{m}_n | \mathbf{m}^{n}, (v_n + v^{n})\mathbf{I})}.$ by using $\widetilde{f}_{j}(\boldsymbol{\theta}) = Z_{j} \frac{q^{\text{new}}(\boldsymbol{\theta})}{q^{\setminus j}(\boldsymbol{\theta})}.$ and completing the square in the exponential. Finally, use $p(\mathcal{D}) \simeq \int \prod_{i} \widetilde{f}_{i}(\boldsymbol{\theta}) d\boldsymbol{\theta}.$ to derive the result $p(\mathcal{D}) \simeq (2\pi v^{\text{new}})^{D/2} \exp(B/2) \prod_{n=1}^{N} \left\{ s_n (2\pi v_n)^{-D/2} \right\}$.
# Sampling Methods
For most probabilistic models of practical interest, exact inference is intractable, and so we have to resort to some form of approximation. In Chapter 10, we discussed inference algorithms based on deterministic approximations, which include methods such as variational Bayes and expectation propagation. Here we consider approximate inference methods based on numerical sampling, also known as *Monte Carlo* techniques.
Although for some applications the posterior distribution over unobserved variables will be of direct interest in itself, for most situations the posterior distribution is required primarily for the purpose of evaluating expectations, for example in order to make predictions. The fundamental problem that we therefore wish to address in this chapter involves finding the expectation of some function $f(\mathbf{z})$ with respect to a probability distribution $p(\mathbf{z})$ . Here, the components of $\mathbf{z}$ might comprise discrete or continuous variables or some combination of the two. Thus in the case of continuous
Figure 11.1 Schematic illustration of a function f(z) whose expectation is to be evaluated with respect to a distribution p(z).
variables, we wish to evaluate the expectation
$$\mathbb{E}[f] = \int f(\mathbf{z})p(\mathbf{z}) \,\mathrm{d}\mathbf{z} \tag{11.1}$$
where the integral is replaced by summation in the case of discrete variables. This is illustrated schematically for a single continuous variable in Figure 11.1. We shall suppose that such expectations are too complex to be evaluated exactly using analytical techniques.
The general idea behind sampling methods is to obtain a set of samples $\mathbf{z}^{(l)}$ (where $l=1,\ldots,L$ ) drawn independently from the distribution $p(\mathbf{z})$ . This allows the expectation $\mathbb{E}[f] = \int f(\mathbf{z})p(\mathbf{z}) \,\mathrm{d}\mathbf{z}$ to be approximated by a finite sum
$$\widehat{f} = \frac{1}{L} \sum_{l=1}^{L} f(\mathbf{z}^{(l)}). \tag{11.2}$$
As long as the samples $\mathbf{z}^{(l)}$ are drawn from the distribution $p(\mathbf{z})$ , then $\mathbb{E}[\widehat{f}] = \mathbb{E}[f]$ and so the estimator $\widehat{f}$ has the correct mean. The variance of the estimator is given by
$$\operatorname{var}[\widehat{f}] = \frac{1}{L} \mathbb{E}\left[ (f - \mathbb{E}[f])^2 \right] \tag{11.3}$$
where $\mathbb{E}\left[ (f - \mathbb{E}[f])^2 \right]$ is the variance of the function $f(\mathbf{z})$ under the distribution $p(\mathbf{z})$. It is worth emphasizing that the accuracy of the estimator therefore does not depend on the dimensionality of $\mathbf{z}$, and that, in principle, high accuracy may be achievable with a relatively small number of samples $\mathbf{z}^{(l)}$. In practice, ten or twenty independent samples may suffice to estimate an expectation to sufficient accuracy.
The problem, however, is that the samples $\{\mathbf{z}^{(l)}\}$ might not be independent, and so the effective sample size might be much smaller than the apparent sample size. Also, referring back to Figure 11.1, we note that if $f(\mathbf{z})$ is small in regions where $p(\mathbf{z})$ is large, and vice versa, then the expectation may be dominated by regions of small probability, implying that relatively large sample sizes will be required to achieve sufficient accuracy.
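As a concrete illustration of Eq (11.2), the sketch below (a toy example, not from the text) estimates $\mathbb{E}[z^2]$ under $p(z) = \mathcal{N}(0,1)$, whose exact value is 1, from a small number of independent samples.

```python
import numpy as np

# Estimate E[f] for f(z) = z**2 under p(z) = N(0, 1) from L independent samples;
# the estimator is unbiased and its variance decreases as 1/L.
rng = np.random.default_rng(0)
L = 20
z = rng.standard_normal(L)
f_hat = np.mean(z ** 2)
print(f_hat, "exact:", 1.0)
```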
For many models, the joint distribution $p(\mathbf{z})$ is conveniently specified in terms of a graphical model. In the case of a directed graph with no observed variables, it is
straightforward to sample from the joint distribution (assuming that it is possible to sample from the conditional distributions at each node) using the following *ancestral sampling* approach, discussed briefly in Section 8.1.2. The joint distribution is specified by
$$p(\mathbf{z}) = \prod_{i=1}^{M} p(\mathbf{z}_i | \mathbf{pa}_i) \tag{11.4}$$
|
This problem is rather involved, but hints have already been given in Eq $\mathbb{E}[\boldsymbol{\theta}] = \mathbf{m}^{n} + v^{n} \nabla_{\mathbf{m}^{n}} \ln Z_{n}$ and (10.255). Notice that in Eq $\mathbb{E}[\boldsymbol{\theta}] = \mathbf{m}^{n} + v^{n} \nabla_{\mathbf{m}^{n}} \ln Z_{n}$ we have the rather complicated term $\nabla_{\mathbf{m}^{/n}} \ln Z_n$ ; by the chain rule $\nabla_{\mathbf{m}^{/n}} \ln Z_n = (\nabla_{\mathbf{m}^{/n}} Z_n)/Z_n$ , and since the exact form of $Z_n$ has been derived in the previous problem, we can start by dealing with $\nabla_{\mathbf{m}^{/n}} \ln Z_n$ to obtain Eq $\mathbb{E}[\boldsymbol{\theta}] = \mathbf{m}^{n} + v^{n} \nabla_{\mathbf{m}^{n}} \ln Z_{n}$. Before starting, we write down a basic formula: for a Gaussian random variable $\mathbf{x} \sim \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma})$ , we have:
$$\nabla_{\boldsymbol{\mu}} \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) \cdot \boldsymbol{\Sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu})$$
Now we can obtain:
$$\nabla_{\mathbf{m}^{/n}} \ln Z_{n} = \frac{1}{Z_{n}} \cdot \nabla_{\mathbf{m}^{/n}} Z_{n}$$
$$= \frac{1}{Z_{n}} \cdot \nabla_{\mathbf{m}^{/n}} \int q^{/n}(\boldsymbol{\theta}) p(\mathbf{x}_{n} | \boldsymbol{\theta}) d\boldsymbol{\theta}$$
$$= \frac{1}{Z_{n}} \cdot \int \left\{ \nabla_{\mathbf{m}^{/n}} q^{/n}(\boldsymbol{\theta}) \right\} \cdot p(\mathbf{x}_{n} | \boldsymbol{\theta}) d\boldsymbol{\theta}$$
$$= \frac{1}{Z_{n}} \cdot \int \frac{1}{v^{/n}} (\boldsymbol{\theta} - \mathbf{m}^{/n}) \cdot q^{/n}(\boldsymbol{\theta}) \cdot p(\mathbf{x}_{n} | \boldsymbol{\theta}) d\boldsymbol{\theta}$$
$$= \frac{1}{Z_{n}} \cdot \frac{1}{v^{/n}} \cdot \left\{ \int \boldsymbol{\theta} \cdot q^{/n}(\boldsymbol{\theta}) \cdot p(\mathbf{x}_{n} | \boldsymbol{\theta}) d\boldsymbol{\theta} - \int \mathbf{m}^{/n} \cdot q^{/n}(\boldsymbol{\theta}) \cdot p(\mathbf{x}_{n} | \boldsymbol{\theta}) d\boldsymbol{\theta} \right\}$$
$$= \frac{1}{v^{/n}} \cdot \left\{ \mathbb{E}[\boldsymbol{\theta}] - \mathbf{m}^{/n} \right\}$$
Here we have used $q^{/n}(\boldsymbol{\theta}) = \mathcal{N}(\boldsymbol{\theta}|\mathbf{m}^{/n}, v^{/n}\mathbf{I})$ , and $q^{/n}(\boldsymbol{\theta}) \cdot p(\mathbf{x}_n|\boldsymbol{\theta}) = Z_n \cdot q^{\text{new}}(\boldsymbol{\theta})$ . Rearranging the equation above, we obtain Eq $\mathbb{E}[\boldsymbol{\theta}] = \mathbf{m}^{n} + v^{n} \nabla_{\mathbf{m}^{n}} \ln Z_{n}$. Then we use Eq $Z_n = (1 - w)\mathcal{N}(\mathbf{x}_n | \mathbf{m}^{n}, (v^{n} + 1)\mathbf{I}) + w\mathcal{N}(\mathbf{x}_n | \mathbf{0}, a\mathbf{I}).$, yielding:
$$\mathbb{E}[\boldsymbol{\theta}] = \mathbf{m}^{/n} + v^{/n} \cdot \nabla_{\mathbf{m}^{/n}} \ln Z_n$$
$$= \mathbf{m}^{/n} + v^{/n} \cdot \frac{1}{Z_n} (1 - w) \mathcal{N}(\mathbf{x}_n | \mathbf{m}^{/n}, (v^{/n} + 1)\mathbf{I}) \cdot \frac{1}{v^{/n} + 1} (\mathbf{x}_n - \mathbf{m}^{/n})$$
$$= \mathbf{m}^{/n} + v^{/n} \cdot \rho_n \cdot \frac{1}{v^{/n} + 1} (\mathbf{x}_n - \mathbf{m}^{/n})$$
Where we have defined:
$$\rho_n = \frac{1}{Z_n} (1 - w) \mathcal{N}(\mathbf{x}_n | \mathbf{m}^{/n}, (v^{/n} + 1)\mathbf{I})$$
$$= \frac{1}{Z_n} (1 - w) \cdot \frac{Z_n - w \mathcal{N}(\mathbf{x}_n | \mathbf{0}, \alpha \mathbf{I})}{1 - w}$$
$$= 1 - \frac{w}{Z_n} \mathcal{N}(\mathbf{x}_n | \mathbf{0}, \alpha \mathbf{I})$$
Therefore, we have proved the mean $\mathbf{m}$ is given by Eq $\mathbf{m} = \mathbf{m}^{n} + \rho_n \frac{v^{n}}{v^{n} + 1} (\mathbf{x}_n - \mathbf{m}^{n})$, next we prove Eq $v = v^{n} - \rho_n \frac{(v^{n})^2}{v^{n} + 1} + \rho_n (1 - \rho_n) \frac{(v^{n})^2 ||\mathbf{x}_n - \mathbf{m}^{n}||^2}{D(v^{n} + 1)^2}$. Similarly, we can write down:
$$\nabla_{v^{/n}} \ln Z_n = \frac{1}{Z_n} \cdot \nabla_{v^{/n}} Z_n$$
$$= \frac{1}{Z_n} \cdot \nabla_{v^{/n}} \int q^{/n}(\boldsymbol{\theta}) p(\mathbf{x}_n | \boldsymbol{\theta}) d\boldsymbol{\theta}$$
$$= \frac{1}{Z_n} \cdot \int \left\{ \nabla_{v^{/n}} q^{/n}(\boldsymbol{\theta}) \right\} p(\mathbf{x}_n | \boldsymbol{\theta}) d\boldsymbol{\theta}$$
$$= \frac{1}{Z_n} \cdot \int \left\{ \frac{1}{2(v^{/n})^2} ||\mathbf{m}^{/n} - \boldsymbol{\theta}||^2 - \frac{D}{2v^{/n}} \right\} q^{/n}(\boldsymbol{\theta}) \cdot p(\mathbf{x}_n | \boldsymbol{\theta}) d\boldsymbol{\theta}$$
$$= \int q^{\text{new}}(\boldsymbol{\theta}) \cdot \left\{ \frac{1}{2(v^{/n})^2} (\mathbf{m}^{/n} - \boldsymbol{\theta})^T (\mathbf{m}^{/n} - \boldsymbol{\theta}) - \frac{D}{2v^{/n}} \right\} d\boldsymbol{\theta}$$
$$= \frac{1}{2(v^{/n})^2} \left\{ \mathbb{E}[\boldsymbol{\theta}^T\boldsymbol{\theta}] - 2\mathbb{E}[\boldsymbol{\theta}]^T \mathbf{m}^{/n} + ||\mathbf{m}^{/n}||^2 \right\} - \frac{D}{2v^{/n}}$$
Rearranging it, we can obtain:
$$\mathbb{E}[\boldsymbol{\theta}^T\boldsymbol{\theta}] = 2(v^{/n})^2 \cdot \nabla_{v^{/n}} \ln Z_n + 2\mathbb{E}[\boldsymbol{\theta}]^T \mathbf{m}^{/n} - ||\mathbf{m}^{/n}||^2 + D \cdot v^{/n}$$
There is a typo in Eq (10.255), and the intrinsic reason is that when calculating $\nabla_{v^{/n}}q^{/n}(\boldsymbol{\theta})$ , there are two terms in $q^{/n}(\boldsymbol{\theta})$ dependent on $v^{/n}$ : one is inside the exponential, and the other is in the fraction $\frac{1}{|v^{/n}\mathbf{I}|^{1/2}}$ , which is outside the exponential. Now, we still use Eq $Z_n = (1 - w)\mathcal{N}(\mathbf{x}_n | \mathbf{m}^{n}, (v^{n} + 1)\mathbf{I}) + w\mathcal{N}(\mathbf{x}_n | \mathbf{0}, a\mathbf{I}).$, yielding:
$$\nabla_{v^{/n}} \ln Z_n = \frac{1}{Z_n} (1 - w) \mathcal{N}(\mathbf{x}_n | \mathbf{m}^{/n}, (v^{/n} + 1)\mathbf{I}) \cdot \left[ \frac{1}{2(v^{/n} + 1)^2} ||\mathbf{x}_n - \mathbf{m}^{/n}||^2 - \frac{D}{2(v^{/n} + 1)} \right]$$
$$= \rho_n \cdot \left[ \frac{1}{2(v^{/n} + 1)^2} ||\mathbf{x}_n - \mathbf{m}^{/n}||^2 - \frac{D}{2(v^{/n} + 1)} \right]$$
Finally, using the definition of variance, we obtain:
$$v\mathbf{I} = \mathbb{E}[\boldsymbol{\theta}\boldsymbol{\theta}^T] - \mathbb{E}[\boldsymbol{\theta}]\mathbb{E}[\boldsymbol{\theta}^T]$$
Therefore, taking the trace, we obtain:
$$\begin{split} v &= \frac{1}{D} \cdot \left\{ \mathbb{E}[\boldsymbol{\theta}^T \boldsymbol{\theta}] - \mathbb{E}[\boldsymbol{\theta}^T] \mathbb{E}[\boldsymbol{\theta}] \right\} = \frac{1}{D} \cdot \left\{ \mathbb{E}[\boldsymbol{\theta}^T \boldsymbol{\theta}] - ||\mathbb{E}[\boldsymbol{\theta}]||^2 \right\} \\ &= \frac{1}{D} \cdot \left\{ 2(v^{/n})^2 \cdot \nabla_{v^{/n}} \ln Z_n + 2\mathbb{E}[\boldsymbol{\theta}] \mathbf{m}^{/n} - ||\mathbf{m}^{/n}||^2 + D \cdot v^{/n} - ||\mathbb{E}[\boldsymbol{\theta}]||^2 \right\} \\ &= \frac{1}{D} \cdot \left\{ 2(v^{/n})^2 \cdot \nabla_{v^{/n}} \ln Z_n - ||\mathbb{E}[\boldsymbol{\theta}] - \mathbf{m}^{/n}||^2 + D \cdot v^{/n} \right\} \\ &= \frac{1}{D} \cdot \left\{ 2(v^{/n})^2 \cdot \nabla_{v^{/n}} \ln Z_n - ||v^{/n} \cdot \rho_n \cdot \frac{1}{v^{/n} + 1} (\mathbf{x}_n - \mathbf{m}^{/n})||^2 + D \cdot v^{/n} \right\} \end{split}$$
If we substitute $\nabla_{v^{/n}} \ln Z_n$ into the expression above, we will just obtain Eq (10.215) as required.
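Putting the pieces together, a single moment-matching step can be checked against brute-force importance-sampled moments of the tilted distribution $q^{\setminus n}(\boldsymbol{\theta})p(\mathbf{x}_n|\boldsymbol{\theta})/Z_n$; in the sketch below every parameter value is hypothetical.

```python
import numpy as np

# Sketch: one moment-matching (ADF) step for the clutter problem, comparing the
# closed-form mean/variance updates above with Monte Carlo moments of the tilted distribution.
rng = np.random.default_rng(3)
D, w, a = 2, 0.3, 10.0
m_cav, v_cav = np.array([0.5, -0.2]), 1.5
x_n = np.array([1.0, 0.7])

def gauss(x, mu, var):                             # isotropic Gaussian density N(x | mu, var*I)
    return np.exp(-np.sum((x - mu) ** 2, axis=-1) / (2 * var)) / (2 * np.pi * var) ** (D / 2)

Zn = (1 - w) * gauss(x_n, m_cav, v_cav + 1.0) + w * gauss(x_n, np.zeros(D), a)
rho = 1.0 - w * gauss(x_n, np.zeros(D), a) / Zn
m_new = m_cav + rho * v_cav / (v_cav + 1.0) * (x_n - m_cav)
v_new = (v_cav - rho * v_cav ** 2 / (v_cav + 1.0)
         + rho * (1 - rho) * v_cav ** 2 * np.sum((x_n - m_cav) ** 2) / (D * (v_cav + 1.0) ** 2))

theta = m_cav + np.sqrt(v_cav) * rng.standard_normal((400000, D))   # samples from q^{\n}
wts = (1 - w) * gauss(x_n, theta, 1.0) + w * gauss(x_n, np.zeros(D), a)
wts /= wts.sum()                                   # importance weights proportional to p(x_n | theta)
mu_mc = wts @ theta
print(m_new, mu_mc)                                                  # closed-form vs MC mean
print(v_new, np.sum(wts[:, None] * (theta - mu_mc) ** 2) / D)        # and isotropic variance
```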
# 11 Sampling Methods
| 7,260
|
10
|
10.4
|
medium
|
Suppose that $p(\mathbf{x})$ is some fixed distribution and that we wish to approximate it using a Gaussian distribution $q(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma})$ . By writing down the form of the KL divergence $\mathrm{KL}(p\|q)$ for a Gaussian $q(\mathbf{x})$ and then differentiating, show that
minimization of $\mathrm{KL}(p||q)$ with respect to $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ leads to the result that $\boldsymbol{\mu}$ is given by the expectation of $\mathbf{x}$ under $p(\mathbf{x})$ and that $\boldsymbol{\Sigma}$ is given by the covariance.
|
We begin by writing down the KL divergence.
$$\begin{aligned} \mathrm{KL}(p||q) &= -\int p(\mathbf{x}) \ln \left\{ \frac{q(\mathbf{x})}{p(\mathbf{x})} \right\} d\mathbf{x} \\ &= -\int p(\mathbf{x}) \ln q(\mathbf{x}) d\mathbf{x} + \mathrm{const} \\ &= -\int p(\mathbf{x}) \left[ -\frac{D}{2} \ln 2\pi - \frac{1}{2} \ln |\mathbf{\Sigma}| - \frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \mathbf{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right] d\mathbf{x} + \mathrm{const} \\ &= \int p(\mathbf{x}) \left[ \frac{1}{2} \ln |\mathbf{\Sigma}| + \frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \mathbf{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right] d\mathbf{x} + \mathrm{const} \\ &= \frac{1}{2} \ln |\mathbf{\Sigma}| + \int p(\mathbf{x}) \left[ \frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \mathbf{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right] d\mathbf{x} + \mathrm{const} \\ &= \frac{1}{2} \ln |\mathbf{\Sigma}| + \int p(\mathbf{x}) \frac{1}{2} \left[ \mathbf{x}^T \mathbf{\Sigma}^{-1} \mathbf{x} - 2\boldsymbol{\mu}^T \mathbf{\Sigma}^{-1} \mathbf{x} + \boldsymbol{\mu}^T \mathbf{\Sigma}^{-1} \boldsymbol{\mu} \right] d\mathbf{x} + \mathrm{const} \\ &= \frac{1}{2} \ln |\mathbf{\Sigma}| + \frac{1}{2} \int p(\mathbf{x}) \mathrm{Tr}[\mathbf{\Sigma}^{-1} (\mathbf{x} \mathbf{x}^T)] d\mathbf{x} - \boldsymbol{\mu}^T \mathbf{\Sigma}^{-1} \mathbb{E}[\mathbf{x}] + \frac{1}{2} \boldsymbol{\mu}^T \mathbf{\Sigma}^{-1} \boldsymbol{\mu} + \mathrm{const} \\ &= \frac{1}{2} \ln |\mathbf{\Sigma}| + \frac{1}{2} \mathrm{Tr}[\mathbf{\Sigma}^{-1} \mathbb{E}(\mathbf{x} \mathbf{x}^T)] - \boldsymbol{\mu}^T \mathbf{\Sigma}^{-1} \mathbb{E}[\mathbf{x}] + \frac{1}{2} \boldsymbol{\mu}^T \mathbf{\Sigma}^{-1} \boldsymbol{\mu} + \mathrm{const} \end{aligned}$$
Here D is the dimension of $\mathbf{x}$ . We first calculate the derivative of $\mathrm{KL}(p||q)$ with respect to $\boldsymbol{\mu}$ and set it to 0:
$$\frac{\partial \mathrm{KL}}{\partial \boldsymbol{\mu}} = -\boldsymbol{\Sigma}^{-1} \mathbb{E}[x] + \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu} = 0$$
Therefore, we can obtain $\mu = \mathbb{E}[\mathbf{x}]$ . When $\mu = \mathbb{E}[\mathbf{x}]$ is satisfied, KL divergence reduces to:
$$\mathrm{KL}(p||q) = \frac{1}{2}\ln|\mathbf{\Sigma}| + \frac{1}{2}\mathrm{Tr}[\mathbf{\Sigma}^{-1}\mathbb{E}(\mathbf{x}\mathbf{x}^T)] - \frac{1}{2}\boldsymbol{\mu}^T\mathbf{\Sigma}^{-1}\boldsymbol{\mu} + \mathrm{const}$$
Then we calculate the derivative of $\mathrm{KL}(p||q)$ with respect to $\Sigma$ and set it to 0:
$$\frac{\partial \mathrm{KL}}{\partial \boldsymbol{\Sigma}} = \frac{1}{2} \boldsymbol{\Sigma}^{-1} - \frac{1}{2} \boldsymbol{\Sigma}^{-1} \mathbb{E}[\mathbf{x} \mathbf{x}^T] \boldsymbol{\Sigma}^{-1} + \frac{1}{2} \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu} \boldsymbol{\mu}^T \boldsymbol{\Sigma}^{-1} = 0$$
Note that here we have used Eq (61) and Eq (124) in the 'Matrix Cookbook', and that $\Sigma$ and $\mathbb{E}[\mathbf{x}\mathbf{x}^T]$ are both symmetric. We rewrite those equations here for your reference:
$$\frac{\partial \mathbf{a}^T \mathbf{X}^{-1} \mathbf{b}}{\partial \mathbf{X}} = -\mathbf{X}^{-T} \mathbf{a} \mathbf{b}^T \mathbf{X}^{-T} \quad \text{and} \quad \frac{\partial \text{Tr}(\mathbf{A} \mathbf{X}^{-1} \mathbf{B})}{\partial \mathbf{X}} = -\mathbf{X}^{-T} \mathbf{A}^T \mathbf{B}^T \mathbf{X}^{-T}$$
Rearranging the derivative, we can obtain:
$$\mathbf{\Sigma} = \mathbb{E}[\mathbf{x}\mathbf{x}^T] - \boldsymbol{\mu}\boldsymbol{\mu}^T = \mathbb{E}[\mathbf{x}\mathbf{x}^T] - \mathbb{E}[\mathbf{x}]\mathbb{E}[\mathbf{x}]^T = \text{cov}[\mathbf{x}]$$
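The moment-matching result can be illustrated numerically: for samples drawn from a non-Gaussian $p(\mathbf{x})$, the $(\boldsymbol{\mu},\boldsymbol{\Sigma})$-dependent part of $\mathrm{KL}(p\|q)$ is minimized at the sample mean and covariance, and any perturbation increases it. The skewed distribution in the sketch below is an arbitrary choice.

```python
import numpy as np

# Sketch: the (mu, Sigma)-dependent part of KL(p||q), i.e. E_p[-ln q(x)] up to a constant,
# is smallest at the sample mean and covariance of draws from a skewed p(x).
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=1.0, size=(100000, 2)) @ np.array([[1.0, 0.3], [0.0, 1.0]])

def objective(mu, Sigma):
    d = X - mu
    return 0.5 * np.log(np.linalg.det(Sigma)) + 0.5 * np.mean(
        np.einsum('ni,ij,nj->n', d, np.linalg.inv(Sigma), d))

mu_star, Sigma_star = X.mean(axis=0), np.cov(X.T)
print(objective(mu_star, Sigma_star))              # the minimum
print(objective(mu_star + 0.2, 1.3 * Sigma_star))  # any perturbation gives a larger value
```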
| 3,594
|