3
3.11
medium
We have seen that, as the size of a data set increases, the uncertainty associated with the posterior distribution over model parameters decreases. Make use of the matrix identity (Appendix C) $$\left(\mathbf{M} + \mathbf{v}\mathbf{v}^{\mathrm{T}}\right)^{-1} = \mathbf{M}^{-1} - \frac{\left(\mathbf{M}^{-1}\mathbf{v}\right)\left(\mathbf{v}^{\mathrm{T}}\mathbf{M}^{-1}\right)}{1 + \mathbf{v}^{\mathrm{T}}\mathbf{M}^{-1}\mathbf{v}}$$ to show that the uncertainty $\sigma_N^2(\mathbf{x})$ associated with the linear regression function, given by $\sigma_N^2(\mathbf{x}) = \frac{1}{\beta} + \phi(\mathbf{x})^{\mathrm{T}} \mathbf{S}_N \phi(\mathbf{x})$, satisfies $$\sigma_{N+1}^2(\mathbf{x}) \leqslant \sigma_N^2(\mathbf{x}). \tag{3.111}$$
We use the result obtained in Prob.3.8, where we derived a formula for $\mathbf{S}_{N+1}^{-1}$: $$S_{N+1}^{-1} = S_N^{-1} + \beta \, \phi(x_{N+1}) \, \phi(x_{N+1})^T$$ And then using $\left(\mathbf{M} + \mathbf{v}\mathbf{v}^{\mathrm{T}}\right)^{-1} = \mathbf{M}^{-1} - \frac{\left(\mathbf{M}^{-1}\mathbf{v}\right)\left(\mathbf{v}^{\mathrm{T}}\mathbf{M}^{-1}\right)}{1 + \mathbf{v}^{\mathrm{T}}\mathbf{M}^{-1}\mathbf{v}}$ with $\mathbf{M} = S_N^{-1}$ and $\mathbf{v} = \sqrt{\beta}\,\phi(x_{N+1})$, we can obtain: $$S_{N+1} = \left[ S_{N}^{-1} + \beta \phi(x_{N+1}) \phi(x_{N+1})^{T} \right]^{-1}$$ $$= \left[ S_{N}^{-1} + \sqrt{\beta} \phi(x_{N+1}) \sqrt{\beta} \phi(x_{N+1})^{T} \right]^{-1}$$ $$= S_{N} - \frac{S_{N}(\sqrt{\beta} \phi(x_{N+1}))(\sqrt{\beta} \phi(x_{N+1})^{T}) S_{N}}{1 + (\sqrt{\beta} \phi(x_{N+1})^{T}) S_{N}(\sqrt{\beta} \phi(x_{N+1}))}$$ $$= S_{N} - \frac{\beta S_{N} \phi(x_{N+1}) \phi(x_{N+1})^{T} S_{N}}{1 + \beta \phi(x_{N+1})^{T} S_{N} \phi(x_{N+1})}$$ Now we calculate $\sigma_N^2(\mathbf{x}) - \sigma_{N+1}^2(\mathbf{x})$ according to $\sigma_N^2(\mathbf{x}) = \frac{1}{\beta} + \phi(\mathbf{x})^{\mathrm{T}} \mathbf{S}_N \phi(\mathbf{x})$. $$\begin{split} \sigma_N^2(x) - \sigma_{N+1}^2(x) &= \phi(x)^T (S_N - S_{N+1}) \phi(x) \\ &= \phi(x)^T \frac{\beta S_N \phi(x_{N+1}) \phi(x_{N+1})^T S_N}{1 + \beta \phi(x_{N+1})^T S_N \phi(x_{N+1})} \phi(x) \\ &= \frac{\phi(x)^T S_N \phi(x_{N+1}) \phi(x_{N+1})^T S_N \phi(x)}{1/\beta + \phi(x_{N+1})^T S_N \phi(x_{N+1})} \\ &= \frac{\left[\phi(x)^T S_N \phi(x_{N+1})\right]^2}{1/\beta + \phi(x_{N+1})^T S_N \phi(x_{N+1})} \quad (*) \end{split}$$ Since $S_N$ is positive definite, the denominator $1/\beta + \phi(x_{N+1})^T S_N \phi(x_{N+1})$ is positive, while the numerator is a squared scalar and hence non-negative, so $(*) \geqslant 0$. Therefore, we have proved that $\sigma_N^2(\mathbf{x}) - \sigma_{N+1}^2(\mathbf{x}) \geqslant 0$, which is exactly (3.111).
1,746
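As a quick numerical sanity check of the result above, here is a minimal sketch (assuming NumPy is available; the design matrix, basis vectors and hyperparameters are arbitrary synthetic values, not taken from the book) that applies the Sherman-Morrison style update derived in Prob.3.11 and confirms that the predictive variance never increases:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, alpha, beta = 30, 5, 0.5, 2.0

Phi = rng.normal(size=(N, M))          # design matrix for the first N points
SN = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)   # posterior covariance S_N

phi_new = rng.normal(size=M)           # basis vector of the (N+1)-th observation
# rank-one update of S_N derived above
SN1 = SN - (beta * np.outer(SN @ phi_new, phi_new @ SN)) / (1 + beta * phi_new @ SN @ phi_new)

for _ in range(5):                     # predictive variance at random test points
    phi_x = rng.normal(size=M)
    var_N = 1 / beta + phi_x @ SN @ phi_x
    var_N1 = 1 / beta + phi_x @ SN1 @ phi_x
    assert var_N1 <= var_N + 1e-12     # sigma^2_{N+1}(x) <= sigma^2_N(x)
print("variance never increases: OK")
```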
3
3.12
medium
We saw in Section 2.3.6 that the conjugate prior for a Gaussian distribution with unknown mean and unknown precision (inverse variance) is a normal-gamma distribution. This property also holds for the case of the conditional Gaussian distribution $p(t|\mathbf{x}, \mathbf{w}, \beta)$ of the linear regression model. If we consider the likelihood function $p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} \mathcal{N}(t_n | \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n), \beta^{-1})$, then the conjugate prior for $\mathbf{w}$ and $\beta$ is given by $$p(\mathbf{w}, \beta) = \mathcal{N}(\mathbf{w}|\mathbf{m}_0, \beta^{-1}\mathbf{S}_0)\operatorname{Gam}(\beta|a_0, b_0). \tag{3.112}$$ Show that the corresponding posterior distribution takes the same functional form, so that $$p(\mathbf{w}, \beta | \mathbf{t}) = \mathcal{N}(\mathbf{w} | \mathbf{m}_N, \beta^{-1} \mathbf{S}_N) \operatorname{Gam}(\beta | a_N, b_N)$$ and find expressions for the posterior parameters $\mathbf{m}_N$, $\mathbf{S}_N$, $a_N$, and $b_N$.
Let's begin by writing down the prior PDF $p(\mathbf{w}, \beta)$ : $$p(\boldsymbol{w}, \beta) = \mathcal{N}(\boldsymbol{w} | \boldsymbol{m_0}, \beta^{-1} \boldsymbol{S_0}) \operatorname{Gam}(\beta | a_0, b_0) \quad (*)$$ $$\propto \left(\frac{\beta^M}{|\boldsymbol{S_0}|}\right)^{1/2} exp\left(-\frac{1}{2}(\boldsymbol{w} - \boldsymbol{m_0})^T \beta \boldsymbol{S_0}^{-1}(\boldsymbol{w} - \boldsymbol{m_0})\right) b_0^{a_0} \beta^{a_0 - 1} exp(-b_0 \beta)$$ And then we write down the likelihood function $p(\mathbf{t}|\mathbf{X}, \boldsymbol{w}, \beta)$ : $$p(\mathbf{t}|\mathbf{X}, \boldsymbol{w}, \beta) = \prod_{n=1}^{N} \mathcal{N}(t_n | \boldsymbol{w}^T \boldsymbol{\phi}(\boldsymbol{x_n}), \beta^{-1})$$ $$\propto \prod_{n=1}^{N} \beta^{1/2} exp\left[-\frac{\beta}{2}(t_n - \boldsymbol{w}^T \boldsymbol{\phi}(\boldsymbol{x_n}))^2\right] \quad (**)$$ According to Bayesian Inference, we have $p(\boldsymbol{w}, \beta | \mathbf{t}) \propto p(\mathbf{t} | \mathbf{X}, \boldsymbol{w}, \beta) \times p(\boldsymbol{w}, \beta)$ . We first focus on the quadratic term with regard to $\boldsymbol{w}$ in the exponent. quadratic term = $$-\frac{\beta}{2} \boldsymbol{w}^T \boldsymbol{S_0}^{-1} \boldsymbol{w} + \sum_{n=1}^{N} -\frac{\beta}{2} \boldsymbol{w}^T \boldsymbol{\phi}(\boldsymbol{x_n}) \boldsymbol{\phi}(\boldsymbol{x_n})^T \boldsymbol{w}$$ = $-\frac{\beta}{2} \boldsymbol{w}^T [\boldsymbol{S_0}^{-1} + \sum_{n=1}^{N} \boldsymbol{\phi}(\boldsymbol{x_n}) \boldsymbol{\phi}(\boldsymbol{x_n})^T] \boldsymbol{w}$ Where the first term is generated by (\*), and the second by (\*\*). By now, we know that: $$S_N^{-1} = S_0^{-1} + \sum_{n=1}^{N} \phi(x_n) \phi(x_n)^T$$ We then focus on the linear term with regard to $\boldsymbol{w}$ in the exponent. linear term = $$\beta \boldsymbol{m_0}^T \boldsymbol{S_0}^{-1} \boldsymbol{w} + \sum_{n=1}^{N} \beta t_n \boldsymbol{\phi}(\boldsymbol{x_n})^T \boldsymbol{w}$$ = $\beta [\boldsymbol{m_0}^T \boldsymbol{S_0}^{-1} + \sum_{n=1}^{N} t_n \boldsymbol{\phi}(\boldsymbol{x_n})^T] \boldsymbol{w}$ Again, the first term is generated by (\*), and the second by (\*\*). We can also obtain: $$m_N^T S_N^{-1} = m_0^T S_0^{-1} + \sum_{n=1}^N t_n \phi(x_n)^T$$ Which gives: $$\boldsymbol{m_N} = \boldsymbol{S_N} \big[ \boldsymbol{S_0}^{-1} \boldsymbol{m_0} + \sum_{n=1}^{N} t_n \boldsymbol{\phi}(\boldsymbol{x_n}) \big]$$ Then we focus on the constant term with regard to $\boldsymbol{w}$ in the exponent. constant term = $$(-\frac{\beta}{2} \boldsymbol{m_0}^T \boldsymbol{S_0}^{-1} \boldsymbol{m_0} - b_0 \beta) - \frac{\beta}{2} \sum_{n=1}^{N} t_n^2$$ = $-\beta \left[ \frac{1}{2} \boldsymbol{m_0}^T \boldsymbol{S_0}^{-1} \boldsymbol{m_0} + b_0 + \frac{1}{2} \sum_{n=1}^{N} t_n^2 \right]$ Therefore, we can obtain: $$\frac{1}{2} \boldsymbol{m_N}^T \boldsymbol{S_N}^{-1} \boldsymbol{m_N} + b_N = \frac{1}{2} \boldsymbol{m_0}^T \boldsymbol{S_0}^{-1} \boldsymbol{m_0} + b_0 + \frac{1}{2} \sum_{n=1}^{N} t_n^2$$ Which gives: $$b_{N} = \frac{1}{2} \boldsymbol{m_0}^T \boldsymbol{S_0}^{-1} \boldsymbol{m_0} + b_0 + \frac{1}{2} \sum_{n=1}^{N} t_n^2 - \frac{1}{2} \boldsymbol{m_N}^T \boldsymbol{S_N}^{-1} \boldsymbol{m_N}$$ Finally, we focus on the power of $\beta$ multiplying the exponential. The prior contributes $\beta^{M/2}$ from the Gaussian factor and $\beta^{a_0 - 1}$ from the gamma factor, while the likelihood contributes $\beta^{N/2}$, so altogether the exponent of $\beta$ is $$(\frac{M}{2} + a_0 - 1) + \frac{N}{2}$$ Matching this with the exponent $\frac{M}{2} + a_N - 1$ of the claimed posterior gives: $$\frac{M}{2} + a_N - 1 = (\frac{M}{2} + a_0 - 1) + \frac{N}{2}$$ Hence, $$a_N = a_0 + \frac{N}{2}$$
3,411
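The posterior parameters derived above can be checked numerically: the log of (likelihood x prior) minus the log of the claimed normal-gamma posterior should be the same constant (the log evidence) for any $(\boldsymbol{w}, \beta)$. A minimal sketch, assuming NumPy/SciPy and arbitrary toy data and hyperparameters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N, M = 20, 3
Phi, t = rng.normal(size=(N, M)), rng.normal(size=N)
m0, S0, a0, b0 = rng.normal(size=M), 2.0 * np.eye(M), 2.0, 1.5

# posterior parameters derived above (note beta is factored out of S_N)
S0_inv = np.linalg.inv(S0)
SN_inv = S0_inv + Phi.T @ Phi
SN = np.linalg.inv(SN_inv)
mN = SN @ (S0_inv @ m0 + Phi.T @ t)
aN = a0 + N / 2
bN = b0 + 0.5 * (m0 @ S0_inv @ m0 + t @ t - mN @ SN_inv @ mN)

def log_joint(w, beta):        # log likelihood + log prior
    ll = stats.norm.logpdf(t, Phi @ w, np.sqrt(1 / beta)).sum()
    lp = (stats.multivariate_normal.logpdf(w, m0, S0 / beta)
          + stats.gamma.logpdf(beta, a0, scale=1 / b0))
    return ll + lp

def log_posterior(w, beta):    # claimed normal-gamma posterior
    return (stats.multivariate_normal.logpdf(w, mN, SN / beta)
            + stats.gamma.logpdf(beta, aN, scale=1 / bN))

for _ in range(3):             # difference should be the same constant (= log evidence)
    w, beta = rng.normal(size=M), rng.gamma(2.0)
    print(log_joint(w, beta) - log_posterior(w, beta))
```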
3
3.13
medium
Show that the predictive distribution $p(t|\mathbf{x}, \mathbf{t})$ for the model discussed in Exercise 3.12 is given by a Student's t-distribution of the form $$p(t|\mathbf{x}, \mathbf{t}) = \operatorname{St}(t|\mu, \lambda, \nu) \tag{3.114}$$ and obtain expressions for $\mu$ , $\lambda$ and $\nu$ .
Similar to $p(t|\mathbf{t},\alpha,\beta) = \int p(t|\mathbf{w},\beta)p(\mathbf{w}|\mathbf{t},\alpha,\beta) \,d\mathbf{w}$, we write down the expression of the predictive distribution $p(t|\mathbf{X},\mathbf{t})$ : $$p(t|\mathbf{X},\mathbf{t}) = \int \int p(t|\mathbf{w},\beta) \, p(\mathbf{w},\beta|\mathbf{X},\mathbf{t}) \, d\mathbf{w} \, d\beta$$ (\*) We know that: $$p(t|\boldsymbol{w}, \beta) = \mathcal{N}(t|y(\boldsymbol{x}, \boldsymbol{w}), \beta^{-1}) = \mathcal{N}(t|\boldsymbol{\phi}(\boldsymbol{x})^T \boldsymbol{w}, \beta^{-1})$$ and that: $$p(\boldsymbol{w}, \beta | \mathbf{X}, \mathbf{t}) = \mathcal{N}(\boldsymbol{w} | \boldsymbol{m}_{N}, \beta^{-1} \mathbf{S}_{N}) \operatorname{Gam}(\beta | a_{N}, b_{N})$$ We go back to (\*), and first deal with the integral with regard to $\boldsymbol{w}$ : $$p(t|\mathbf{X},\mathbf{t}) = \int \left[ \int \mathcal{N}(t|\boldsymbol{\phi}(\boldsymbol{x})^T \boldsymbol{w}, \beta^{-1}) \mathcal{N}(\boldsymbol{w}|\boldsymbol{m}_N, \beta^{-1} \boldsymbol{S}_N) d\boldsymbol{w} \right] \operatorname{Gam}(\beta|a_N, b_N) d\beta$$ $$= \int \mathcal{N}(t|\boldsymbol{\phi}(\boldsymbol{x})^T \boldsymbol{m}_N, \beta^{-1} + \boldsymbol{\phi}(\boldsymbol{x})^T \beta^{-1} \boldsymbol{S}_N \boldsymbol{\phi}(\boldsymbol{x})) \operatorname{Gam}(\beta|a_N, b_N) d\beta$$ $$= \int \mathcal{N}\left[ t|\boldsymbol{\phi}(\boldsymbol{x})^T \boldsymbol{m}_N, \beta^{-1} (1 + \boldsymbol{\phi}(\boldsymbol{x})^T \boldsymbol{S}_N \boldsymbol{\phi}(\boldsymbol{x})) \right] \operatorname{Gam}(\beta|a_N, b_N) d\beta$$ Where we have used $p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1})$, $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\mathbf{x} + \mathbf{b}, \mathbf{L}^{-1})$ and $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$. Then, following (2.158)-(2.160), we see that $p(t|\mathbf{X}, \mathbf{t}) = \operatorname{St}(t|\mu, \lambda, \nu)$ , where we have defined: $$\mu = \phi(\mathbf{x})^T \mathbf{m}_N, \quad \lambda = \frac{a_N}{b_N} \cdot \left[ 1 + \phi(\mathbf{x})^T \mathbf{S}_N \phi(\mathbf{x}) \right]^{-1}, \quad \nu = 2a_N$$ ## Problem 3.14 Solution Firstly, according to $\mathbf{\Phi} = \begin{pmatrix} \phi_0(\mathbf{x}_1) & \phi_1(\mathbf{x}_1) & \cdots & \phi_{M-1}(\mathbf{x}_1) \\ \phi_0(\mathbf{x}_2) & \phi_1(\mathbf{x}_2) & \cdots & \phi_{M-1}(\mathbf{x}_2) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_0(\mathbf{x}_N) & \phi_1(\mathbf{x}_N) & \cdots & \phi_{M-1}(\mathbf{x}_N) \end{pmatrix}$, if we use the new orthonormal basis set specified in the problem to construct $\Phi$ , we can obtain an important property: $\Phi^T \Phi = \mathbf{I}$ . Hence, if $\alpha = 0$ , together with $\mathbf{S}_{N}^{-1} = \alpha \mathbf{I} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi}$, we know that $\mathbf{S}_N = \beta^{-1}\mathbf{I}$ . Finally, according to $k(\mathbf{x}, \mathbf{x}') = \beta \phi(\mathbf{x})^{\mathrm{T}} \mathbf{S}_N \phi(\mathbf{x}')$, we can obtain: $$k(\mathbf{x}, \mathbf{x}') = \beta \boldsymbol{\psi}(\mathbf{x})^T \mathbf{S}_{\mathbf{N}} \boldsymbol{\psi}(\mathbf{x}') = \boldsymbol{\psi}(\mathbf{x})^T \boldsymbol{\psi}(\mathbf{x}')$$
3,355
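The final $\beta$-integral above can be verified numerically: integrating $\mathcal{N}(t|\mu, \beta^{-1}s^2)\operatorname{Gam}(\beta|a_N,b_N)$ over $\beta$ should reproduce the Student's t density with the stated $\mu$, $\lambda$, $\nu$. A small sketch assuming NumPy/SciPy, with toy values standing in for $\phi(\mathbf{x})^T\mathbf{m}_N$, $1+\phi(\mathbf{x})^T\mathbf{S}_N\phi(\mathbf{x})$, $a_N$ and $b_N$:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu, s2, aN, bN = 0.7, 1.3, 3.0, 2.0   # toy values (not from the book)

lam = (aN / bN) / s2                  # lambda derived above
nu = 2 * aN                           # nu = 2 a_N

def integrand(beta, t_val):           # N(t | mu, beta^{-1} s2) * Gam(beta | aN, bN)
    return (stats.norm.pdf(t_val, mu, np.sqrt(s2 / beta))
            * stats.gamma.pdf(beta, aN, scale=1 / bN))

for t_val in (-1.0, 0.5, 2.0):
    numeric, _ = quad(integrand, 0.0, np.inf, args=(t_val,))
    # Bishop's St(t|mu, lam, nu) corresponds to scipy's t with scale = 1/sqrt(lam)
    analytic = stats.t.pdf(t_val, df=nu, loc=mu, scale=1 / np.sqrt(lam))
    print(numeric, analytic)          # the two columns should agree
```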
3
3.15
easy
Consider a linear basis function model for regression in which the parameters $\alpha$ and $\beta$ are set using the evidence framework. Show that the function $E(\mathbf{m}_N)$ defined by $E(\mathbf{m}_N) = \frac{\beta}{2} \|\mathbf{t} - \mathbf{\Phi} \mathbf{m}_N\|^2 + \frac{\alpha}{2} \mathbf{m}_N^{\mathrm{T}} \mathbf{m}_N$ satisfies the relation $2E(\mathbf{m}_N) = N$ .
It is quite obvious if we substitute $\alpha = \frac{\gamma}{\mathbf{m}_N^{\mathrm{T}} \mathbf{m}_N}$ and $\frac{1}{\beta} = \frac{1}{N - \gamma} \sum_{n=1}^{N} \left\{ t_n - \mathbf{m}_N^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) \right\}^2$ into $E(\mathbf{m}_N) = \frac{\beta}{2} \|\mathbf{t} - \mathbf{\Phi} \mathbf{m}_N\|^2 + \frac{\alpha}{2} \mathbf{m}_N^{\mathrm{T}} \mathbf{m}_N$, which gives, $$E(\boldsymbol{m}_{N}) = \frac{\beta}{2} ||\mathbf{t} - \Phi \boldsymbol{m}_{N}||^{2} + \frac{\alpha}{2} \boldsymbol{m}_{N}^{T} \boldsymbol{m}_{N} = \frac{N - \gamma}{2} + \frac{\gamma}{2} = \frac{N}{2}$$ ## Problem 3.16 Solution We know that $$p(\mathbf{t}|\boldsymbol{w},\beta) = \prod_{n=1}^{N} \mathcal{N}(t_n|\boldsymbol{\phi}(\boldsymbol{x_n})^T \boldsymbol{w}, \beta^{-1}) = \mathcal{N}(\mathbf{t}|\boldsymbol{\Phi}\boldsymbol{w}, \beta^{-1}\mathbf{I})$$ And $$p(\boldsymbol{w}|\alpha) = \mathcal{N}(\boldsymbol{w}|\mathbf{0}, \alpha^{-1}\mathbf{I})$$ Comparing them with $p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1})$, $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\mathbf{x} + \mathbf{b}, \mathbf{L}^{-1})$ and $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$, we can obtain: $$p(\mathbf{t}|\alpha,\beta) = \mathcal{N}(\mathbf{t}|\mathbf{0},\beta^{-1}\mathbf{I} + \alpha^{-1}\mathbf{\Phi}\mathbf{\Phi}^{T})$$
1,514
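The relation $2E(\mathbf{m}_N) = N$ only holds at the fixed point of the evidence framework, so a numerical check has to iterate the two re-estimation equations quoted above until convergence. A minimal sketch, assuming NumPy and synthetic data (and assuming the fixed-point iteration converges for this data, which it does for well-conditioned problems):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 50, 6
Phi = rng.normal(size=(N, M))
t = Phi @ rng.normal(size=M) + 0.3 * rng.normal(size=N)

alpha, beta = 1.0, 1.0
for _ in range(200):                       # iterate the two re-estimation equations
    A = alpha * np.eye(M) + beta * Phi.T @ Phi
    mN = beta * np.linalg.solve(A, Phi.T @ t)
    lam = np.linalg.eigvalsh(beta * Phi.T @ Phi)
    gamma = np.sum(lam / (alpha + lam))
    alpha = gamma / (mN @ mN)
    beta = (N - gamma) / np.sum((t - Phi @ mN) ** 2)

E_mN = 0.5 * beta * np.sum((t - Phi @ mN) ** 2) + 0.5 * alpha * mN @ mN
print(2 * E_mN, N)                         # should agree: 2 E(m_N) = N
```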
3
3.17
easy
Show that the evidence function for the Bayesian linear regression model can be written in the form $p(\mathbf{t}|\alpha,\beta) = \left(\frac{\beta}{2\pi}\right)^{N/2} \left(\frac{\alpha}{2\pi}\right)^{M/2} \int \exp\left\{-E(\mathbf{w})\right\} d\mathbf{w}$ in which $E(\mathbf{w})$ is defined by $E(\mathbf{w}) = \frac{\beta}{2} \|\mathbf{t} - \mathbf{\Phi} \mathbf{w}\|^2 + \frac{\alpha}{2} \mathbf{w}^{\mathrm{T}} \mathbf{w}$.
We know that: $$p(\mathbf{t}|\mathbf{w}, \beta) = \prod_{n=1}^{N} \mathcal{N}(\phi(\mathbf{x}_{n})^{T}\mathbf{w}, \beta^{-1})$$ $$= \prod_{n=1}^{N} \frac{1}{(2\pi\beta^{-1})^{1/2}} exp\{-\frac{1}{2\beta^{-1}} (t_{n} - \phi(\mathbf{x}_{n})^{T}\mathbf{w})^{2}\}$$ $$= (\frac{\beta}{2\pi})^{N/2} exp\{\sum_{n=1}^{N} -\frac{\beta}{2} (t_{n} - \phi(\mathbf{x}_{n})^{T}\mathbf{w})^{2}\}$$ $$= (\frac{\beta}{2\pi})^{N/2} exp\{-\frac{\beta}{2} ||\mathbf{t} - \Phi\mathbf{w}||^{2}\}$$ And that: $$p(\boldsymbol{w}|\alpha) = \mathcal{N}(\boldsymbol{0}, \alpha^{-1}\mathbf{I})$$ $$= \frac{\alpha^{M/2}}{(2\pi)^{M/2}} exp\left\{-\frac{\alpha}{2}||\boldsymbol{w}||^2\right\}$$ If we substitute the expressions above into $p(\mathbf{t}|\alpha,\beta) = \int p(\mathbf{t}|\mathbf{w},\beta)p(\mathbf{w}|\alpha)\,\mathrm{d}\mathbf{w}.$, we can obtain $p(\mathbf{t}|\alpha,\beta) = \left(\frac{\beta}{2\pi}\right)^{N/2} \left(\frac{\alpha}{2\pi}\right)^{M/2} \int \exp\left\{-E(\mathbf{w})\right\} d\mathbf{w}$ just as required.
1,032
3
3.18
medium
By completing the square over $\mathbf{w}$, show that the error function $E(\mathbf{w}) = \frac{\beta}{2} \|\mathbf{t} - \mathbf{\Phi} \mathbf{w}\|^2 + \frac{\alpha}{2} \mathbf{w}^{\mathrm{T}} \mathbf{w}$ in Bayesian linear regression can be written in the form $E(\mathbf{w}) = E(\mathbf{m}_N) + \frac{1}{2}(\mathbf{w} - \mathbf{m}_N)^{\mathrm{T}} \mathbf{A}(\mathbf{w} - \mathbf{m}_N)$.
We expand $E(\mathbf{w}) = \frac{\beta}{2} \|\mathbf{t} - \mathbf{\Phi} \mathbf{w}\|^2 + \frac{\alpha}{2} \mathbf{w}^{\mathrm{T}} \mathbf{w}$ as follows: $$E(\boldsymbol{w}) = \frac{\beta}{2} ||\mathbf{t} - \boldsymbol{\Phi} \boldsymbol{w}||^2 + \frac{\alpha}{2} \boldsymbol{w}^T \boldsymbol{w}$$ $$= \frac{\beta}{2} (\mathbf{t}^T \mathbf{t} - 2\mathbf{t}^T \boldsymbol{\Phi} \boldsymbol{w} + \boldsymbol{w}^T \boldsymbol{\Phi}^T \boldsymbol{\Phi} \boldsymbol{w}) + \frac{\alpha}{2} \boldsymbol{w}^T \boldsymbol{w}$$ $$= \frac{1}{2} [\boldsymbol{w}^T (\beta \boldsymbol{\Phi}^T \boldsymbol{\Phi} + \alpha \mathbf{I}) \boldsymbol{w} - 2\beta \mathbf{t}^T \boldsymbol{\Phi} \boldsymbol{w} + \beta \mathbf{t}^T \mathbf{t}]$$ Observing the equation above, we see that $E({\pmb w})$ contains the following term : $$\frac{1}{2}(\boldsymbol{w} - \boldsymbol{m}_{\boldsymbol{N}})^T \mathbf{A}(\boldsymbol{w} - \boldsymbol{m}_{\boldsymbol{N}}) \tag{*}$$ Now, we need to solve for **A** and $m_N$ . We expand (\*) and obtain: $$(*) = \frac{1}{2} (\boldsymbol{w}^T \mathbf{A} \boldsymbol{w} - 2\boldsymbol{m}_{\boldsymbol{N}}^T \mathbf{A} \boldsymbol{w} + \boldsymbol{m}_{\boldsymbol{N}}^T \mathbf{A} \boldsymbol{m}_{\boldsymbol{N}})$$ We firstly compare the quadratic term, which gives: $$\mathbf{A} = \beta \mathbf{\Phi}^T \mathbf{\Phi} + \alpha \mathbf{I}$$ And then we compare the linear term, which gives: $$\mathbf{m_N}^T \mathbf{A} = \beta \mathbf{t}^T \mathbf{\Phi}$$ Noticing that $\mathbf{A} = \mathbf{A}^T$ , which implies $\mathbf{A}^{-1}$ is also symmetric, we first transpose and then multiply $\mathbf{A}^{-1}$ on both sides, which gives: $$\boldsymbol{m}_{\boldsymbol{N}} = \beta \mathbf{A}^{-1} \mathbf{\Phi}^T \mathbf{t}$$ Now we rewrite $E(\boldsymbol{w})$ : $$E(\boldsymbol{w}) = \frac{1}{2} \left[ \boldsymbol{w}^T (\beta \boldsymbol{\Phi}^T \boldsymbol{\Phi} + \alpha \mathbf{I}) \boldsymbol{w} - 2\beta \mathbf{t}^T \boldsymbol{\Phi} \boldsymbol{w} + \beta \mathbf{t}^T \mathbf{t} \right]$$ $$= \frac{1}{2} \left[ (\boldsymbol{w} - \boldsymbol{m}_N)^T \mathbf{A} (\boldsymbol{w} - \boldsymbol{m}_N) + \beta \mathbf{t}^T \mathbf{t} - \boldsymbol{m}_N^T \mathbf{A} \boldsymbol{m}_N \right]$$ $$= \frac{1}{2} (\boldsymbol{w} - \boldsymbol{m}_N)^T \mathbf{A} (\boldsymbol{w} - \boldsymbol{m}_N) + \frac{1}{2} (\beta \mathbf{t}^T \mathbf{t} - \boldsymbol{m}_N^T \mathbf{A} \boldsymbol{m}_N)$$ $$= \frac{1}{2} (\boldsymbol{w} - \boldsymbol{m}_N)^T \mathbf{A} (\boldsymbol{w} - \boldsymbol{m}_N) + \frac{1}{2} (\beta \mathbf{t}^T \mathbf{t} - 2\boldsymbol{m}_N^T \mathbf{A} \boldsymbol{m}_N + \boldsymbol{m}_N^T \mathbf{A} \boldsymbol{m}_N)$$ $$= \frac{1}{2} (\boldsymbol{w} - \boldsymbol{m}_N)^T \mathbf{A} (\boldsymbol{w} - \boldsymbol{m}_N) + \frac{1}{2} (\beta \mathbf{t}^T \mathbf{t} - 2\boldsymbol{m}_N^T \mathbf{A} \boldsymbol{m}_N + \boldsymbol{m}_N^T (\beta \boldsymbol{\Phi}^T \boldsymbol{\Phi} + \alpha \mathbf{I}) \boldsymbol{m}_N)$$ $$= \frac{1}{2} (\boldsymbol{w} - \boldsymbol{m}_N)^T \mathbf{A} (\boldsymbol{w} - \boldsymbol{m}_N) + \frac{1}{2} [\beta \mathbf{t}^T \mathbf{t} - 2\beta \mathbf{t}^T \boldsymbol{\Phi} \boldsymbol{m}_N + \boldsymbol{m}_N^T (\beta \boldsymbol{\Phi}^T \boldsymbol{\Phi}) \boldsymbol{m}_N] + \frac{\alpha}{2} \boldsymbol{m}_N^T \boldsymbol{m}_N$$ $$= \frac{1}{2} (\boldsymbol{w} - \boldsymbol{m}_N)^T \mathbf{A} (\boldsymbol{w} - \boldsymbol{m}_N) + \frac{\beta}{2} ||\mathbf{t} - \boldsymbol{\Phi} \boldsymbol{m}_N||^2 + \frac{\alpha}{2} \boldsymbol{m}_N^T \boldsymbol{m}_N$$ Just as required.
3,565
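The completed square can be checked numerically: for any $\boldsymbol{w}$, $E(\boldsymbol{w})$ and $E(\boldsymbol{m}_N) + \frac{1}{2}(\boldsymbol{w}-\boldsymbol{m}_N)^T\mathbf{A}(\boldsymbol{w}-\boldsymbol{m}_N)$ should coincide. A minimal sketch assuming NumPy and arbitrary synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, alpha, beta = 25, 4, 0.7, 1.8
Phi, t = rng.normal(size=(N, M)), rng.normal(size=N)

A = alpha * np.eye(M) + beta * Phi.T @ Phi      # A = alpha I + beta Phi^T Phi
mN = beta * np.linalg.solve(A, Phi.T @ t)       # m_N = beta A^{-1} Phi^T t

def E(w):                                       # E(w) = beta/2 ||t - Phi w||^2 + alpha/2 w^T w
    return 0.5 * beta * np.sum((t - Phi @ w) ** 2) + 0.5 * alpha * w @ w

for _ in range(3):
    w = rng.normal(size=M)
    lhs = E(w)
    rhs = E(mN) + 0.5 * (w - mN) @ A @ (w - mN)
    print(lhs, rhs)                             # should match
```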
3
3.19
medium
Show that the integration over $\mathbf{w}$ in the Bayesian linear regression model gives the result $\int \exp\left\{-E(\mathbf{w})\right\} d\mathbf{w} = \exp\left\{-E(\mathbf{m}_N)\right\} (2\pi)^{M/2} |\mathbf{A}|^{-1/2}$. Hence show that the log marginal likelihood is given by $\ln p(\mathbf{t}|\alpha,\beta) = \frac{M}{2} \ln \alpha + \frac{N}{2} \ln \beta - E(\mathbf{m}_N) - \frac{1}{2} \ln |\mathbf{A}| - \frac{N}{2} \ln(2\pi)$.
Based on the standard form of a multivariate normal distribution with precision matrix $\mathbf{A}$, we know that $$\int \frac{|\mathbf{A}|^{1/2}}{(2\pi)^{M/2}} exp\left\{-\frac{1}{2}(\mathbf{w} - \mathbf{m}_{N})^{T} \mathbf{A}(\mathbf{w} - \mathbf{m}_{N})\right\} d\mathbf{w} = 1$$ Hence, $$\int exp\left\{-\frac{1}{2}(\boldsymbol{w}-\boldsymbol{m_N})^T\mathbf{A}(\boldsymbol{w}-\boldsymbol{m_N})\right\}d\boldsymbol{w} = (2\pi)^{M/2}|\mathbf{A}|^{-1/2}$$ And since $E(\mathbf{m}_N)$ doesn't depend on $\mathbf{w}$, the result $\int \exp\left\{-E(\mathbf{w})\right\} d\mathbf{w} = \exp\left\{-E(\mathbf{m}_N)\right\} (2\pi)^{M/2} |\mathbf{A}|^{-1/2}$ follows immediately by writing $E(\mathbf{w}) = E(\mathbf{m}_N) + \frac{1}{2}(\mathbf{w} - \mathbf{m}_N)^{\mathrm{T}} \mathbf{A}(\mathbf{w} - \mathbf{m}_N)$ as in the previous problem. Then we substitute this result into $p(\mathbf{t}|\alpha,\beta) = \left(\frac{\beta}{2\pi}\right)^{N/2} \left(\frac{\alpha}{2\pi}\right)^{M/2} \int \exp\left\{-E(\mathbf{w})\right\} d\mathbf{w}$ and take the logarithm, which immediately gives $\ln p(\mathbf{t}|\alpha,\beta) = \frac{M}{2} \ln \alpha + \frac{N}{2} \ln \beta - E(\mathbf{m}_N) - \frac{1}{2} \ln |\mathbf{A}| - \frac{N}{2} \ln(2\pi)$.
1,067
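As a cross-check of this log marginal likelihood, it can be compared with the direct form $p(\mathbf{t}|\alpha,\beta) = \mathcal{N}(\mathbf{t}|\mathbf{0}, \beta^{-1}\mathbf{I} + \alpha^{-1}\mathbf{\Phi}\mathbf{\Phi}^T)$ obtained in Problem 3.16. A minimal sketch assuming NumPy/SciPy and toy data:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
N, M, alpha, beta = 15, 4, 0.9, 2.5
Phi, t = rng.normal(size=(N, M)), rng.normal(size=N)

A = alpha * np.eye(M) + beta * Phi.T @ Phi
mN = beta * np.linalg.solve(A, Phi.T @ t)
E_mN = 0.5 * beta * np.sum((t - Phi @ mN) ** 2) + 0.5 * alpha * mN @ mN

# log evidence from the formula derived above
log_ev = (M / 2 * np.log(alpha) + N / 2 * np.log(beta) - E_mN
          - 0.5 * np.linalg.slogdet(A)[1] - N / 2 * np.log(2 * np.pi))

# the same quantity computed directly as a Gaussian marginal over t
log_ev_direct = multivariate_normal.logpdf(
    t, np.zeros(N), np.eye(N) / beta + Phi @ Phi.T / alpha)
print(log_ev, log_ev_direct)            # should match
```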
3
3.2
medium
Show that the matrix $$\mathbf{\Phi}(\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi})^{-1}\mathbf{\Phi}^{\mathrm{T}} \tag{3.103}$$ takes any vector $\mathbf{v}$ and projects it onto the space spanned by the columns of $\mathbf{\Phi}$ . Use this result to show that the least-squares solution $\mathbf{w}_{\mathrm{ML}} = \left(\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^{\mathrm{T}}\mathbf{t}$ corresponds to an orthogonal projection of the vector $\mathbf{t}$ onto the manifold $\mathcal{S}$ as shown in Figure 3.2.
To begin with, we denote $\mathbf{v}^* = (\mathbf{\Phi}^T \mathbf{\Phi})^{-1} \mathbf{\Phi}^T \mathbf{v}$. Then we have: $$\mathbf{\Phi}(\mathbf{\Phi}^T\mathbf{\Phi})^{-1}\mathbf{\Phi}^T\mathbf{v} = \mathbf{\Phi}\mathbf{v}^* \tag{1}$$ By definition, $\Phi \mathbf{v}^*$ is in the column space of $\Phi$. In other words, we have proved that $\Phi(\Phi^T\Phi)^{-1}\Phi^T$ projects any vector $\mathbf{v}$ onto the column space of $\Phi$. Next, we are required to prove that the residual of the projection shown in Fig. 3.2 of the main text, i.e. $\mathbf{y} - \mathbf{t}$ with $\mathbf{y} = \mathbf{\Phi}\mathbf{w}_{ML}$, is orthogonal to the column space of $\Phi$. Since we have: $$(\mathbf{y} - \mathbf{t})^{T} \mathbf{\Phi} = (\mathbf{\Phi} \mathbf{w}_{ML} - \mathbf{t})^{T} \mathbf{\Phi}$$ $$= (\mathbf{\Phi} (\mathbf{\Phi}^{T} \mathbf{\Phi})^{-1} \mathbf{\Phi}^{T} \mathbf{t} - \mathbf{t})^{T} \mathbf{\Phi}$$ $$= \mathbf{t}^{T} (\mathbf{\Phi} (\mathbf{\Phi}^{T} \mathbf{\Phi})^{-1} \mathbf{\Phi}^{T} - \mathbf{I})^{T} \mathbf{\Phi}$$ $$= \mathbf{t}^{T} (\mathbf{\Phi} (\mathbf{\Phi}^{T} \mathbf{\Phi})^{-1} \mathbf{\Phi}^{T} - \mathbf{I}) \mathbf{\Phi}$$ $$= \mathbf{t}^{T} (\mathbf{\Phi} - \mathbf{\Phi})$$ $$= \mathbf{0} \tag{2}$$ it follows that $\mathbf{y} - \mathbf{t}$ is in the left null space of $\Phi$, and hence it is orthogonal to the column space of $\Phi$, so that $\mathbf{y} = \mathbf{\Phi}\mathbf{w}_{ML}$ is the orthogonal projection of $\mathbf{t}$ onto $\mathcal{S}$.
1,269
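A quick numerical illustration of the two claims above (assuming NumPy; the design matrix and target vector are arbitrary synthetic values): the matrix is idempotent, i.e. a projection, and the residual is orthogonal to every column of $\mathbf{\Phi}$.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 8, 3
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)

P = Phi @ np.linalg.inv(Phi.T @ Phi) @ Phi.T   # projection onto the column space of Phi
y = P @ t                                      # = Phi w_ML

print(np.allclose(P @ P, P))                   # P is idempotent (a projection)
print(np.allclose(Phi.T @ (y - t), 0))         # residual is orthogonal to the columns of Phi
```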
3
3.20
medium
Starting from the log marginal likelihood $\ln p(\mathbf{t}|\alpha,\beta) = \frac{M}{2} \ln \alpha + \frac{N}{2} \ln \beta - E(\mathbf{m}_N) - \frac{1}{2} \ln |\mathbf{A}| - \frac{N}{2} \ln(2\pi)$, verify all of the steps needed to show that its maximization with respect to $\alpha$ leads to the re-estimation equation $\alpha = \frac{\gamma}{\mathbf{m}_N^{\mathrm{T}} \mathbf{m}_N}$.
One can simply follow the steps in the book, starting from the eigenvector equation $(\beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi}) \mathbf{u}_i = \lambda_i \mathbf{u}_i$ and ending with $\alpha = \frac{\gamma}{\mathbf{m}_N^{\mathrm{T}} \mathbf{m}_N}$; that derivation is already complete.
239
3
3.21
medium
An alternative way to derive the result $\alpha = \frac{\gamma}{\mathbf{m}_N^{\mathrm{T}} \mathbf{m}_N}$ for the optimal value of $\alpha$ in the evidence framework is to make use of the identity $$\frac{d}{d\alpha}\ln|\mathbf{A}| = \operatorname{Tr}\left(\mathbf{A}^{-1}\frac{d}{d\alpha}\mathbf{A}\right). \tag{3.117}$$ Prove this identity by considering the eigenvalue expansion of a real, symmetric matrix $\mathbf{A}$, and making use of the standard results for the determinant and trace of $\mathbf{A}$ expressed in terms of its eigenvalues (Appendix C). Then make use of (3.117) to derive $\alpha = \frac{\gamma}{\mathbf{m}_N^{\mathrm{T}} \mathbf{m}_N}$ starting from the log marginal likelihood $\ln p(\mathbf{t}|\alpha,\beta) = \frac{M}{2} \ln \alpha + \frac{N}{2} \ln \beta - E(\mathbf{m}_N) - \frac{1}{2} \ln |\mathbf{A}| - \frac{N}{2} \ln(2\pi)$.
Let's first prove (3.117). According to (C.47) and (C.48), we know that if $\mathbf{A}$ is an $M \times M$ real symmetric matrix with eigenvalues $\lambda_i$, $i = 1, 2, ..., M$, then $|\mathbf{A}|$ and $\text{Tr}(\mathbf{A})$ can be written as: $$|\mathbf{A}| = \prod_{i=1}^{M} \lambda_i , \quad \operatorname{Tr}(\mathbf{A}) = \sum_{i=1}^{M} \lambda_i$$ Back to this problem: according to Section 3.5.2, if we now let $\lambda_i$, $i = 1, 2, ..., M$, denote the eigenvalues of $\beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi}$, then $\mathbf{A}$ has eigenvalues $\alpha + \lambda_i$. Hence the left-hand side of (3.117) equals: left side = $$\frac{d}{d\alpha}ln\left[\prod_{i=1}^{M}(\alpha+\lambda_i)\right] = \sum_{i=1}^{M}\frac{d}{d\alpha}ln(\alpha+\lambda_i) = \sum_{i=1}^{M}\frac{1}{\alpha+\lambda_i}$$ And according to $\mathbf{A} = \alpha \mathbf{I} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi}$, we can obtain: $$\mathbf{A}^{-1}\frac{d}{d\alpha}\mathbf{A} = \mathbf{A}^{-1}\mathbf{I} = \mathbf{A}^{-1}$$ For the symmetric matrix $\mathbf{A}$, its inverse $\mathbf{A}^{-1}$ has eigenvalues $1/(\alpha + \lambda_i)$, $i = 1, 2, ..., M$. Therefore, $$\operatorname{Tr}(\mathbf{A}^{-1}\frac{d}{d\alpha}\mathbf{A}) = \sum_{i=1}^{M} \frac{1}{\alpha + \lambda_i}$$ Hence the two sides are the same, which proves (3.117). The re-estimation equation $\alpha = \frac{\gamma}{\mathbf{m}_N^{\mathrm{T}} \mathbf{m}_N}$ then follows in the same way as in Problem 3.20, with (3.117) replacing the explicit eigenvalue expansion of $\frac{d}{d\alpha}\ln|\mathbf{A}|$.
1,515
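The identity (3.117) is easy to confirm numerically with a finite-difference check. A minimal sketch assuming NumPy, with an arbitrary synthetic $\beta\mathbf{\Phi}^T\mathbf{\Phi}$:

```python
import numpy as np

rng = np.random.default_rng(6)
M, beta = 5, 1.7
Phi = rng.normal(size=(12, M))
B = beta * Phi.T @ Phi                        # the alpha-independent part of A

def logdetA(alpha):
    return np.linalg.slogdet(alpha * np.eye(M) + B)[1]

alpha, eps = 0.8, 1e-6
numeric = (logdetA(alpha + eps) - logdetA(alpha - eps)) / (2 * eps)
A = alpha * np.eye(M) + B
analytic = np.trace(np.linalg.inv(A))         # Tr(A^{-1} dA/dalpha), since dA/dalpha = I
also = np.sum(1 / (alpha + np.linalg.eigvalsh(B)))   # sum_i 1/(alpha + lambda_i)
print(numeric, analytic, also)                # all three should agree
```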
3
3.22
medium
Starting from the log marginal likelihood $\ln p(\mathbf{t}|\alpha,\beta) = \frac{M}{2} \ln \alpha + \frac{N}{2} \ln \beta - E(\mathbf{m}_N) - \frac{1}{2} \ln |\mathbf{A}| - \frac{N}{2} \ln(2\pi)$, verify all of the steps needed to show that its maximization with respect to $\beta$ leads to the re-estimation equation $\frac{1}{\beta} = \frac{1}{N - \gamma} \sum_{n=1}^{N} \left\{ t_n - \mathbf{m}_N^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) \right\}^2$.
Let's differentiate the log marginal likelihood $\ln p(\mathbf{t}|\alpha,\beta) = \frac{M}{2} \ln \alpha + \frac{N}{2} \ln \beta - E(\mathbf{m}_N) - \frac{1}{2} \ln |\mathbf{A}| - \frac{N}{2} \ln(2\pi)$ with respect to $\beta$. The derivative of the first $\beta$-dependent term is: $$\frac{d}{d\beta}(\frac{N}{2}ln\beta) = \frac{N}{2\beta}$$ The second term gives: $$\frac{d}{d\beta}E(\boldsymbol{m}_{N}) = \frac{1}{2}||\mathbf{t} - \boldsymbol{\Phi}\boldsymbol{m}_{N}||^{2} + \frac{\beta}{2}\frac{d}{d\beta}||\mathbf{t} - \boldsymbol{\Phi}\boldsymbol{m}_{N}||^{2} + \frac{d}{d\beta}\frac{\alpha}{2}\boldsymbol{m}_{N}^{T}\boldsymbol{m}_{N}$$ The last two terms in the equation above can be further written as: $$\frac{\beta}{2} \frac{d}{d\beta} ||\mathbf{t} - \mathbf{\Phi} \mathbf{m}_{N}||^{2} + \frac{d}{d\beta} \frac{\alpha}{2} \mathbf{m}_{N}^{T} \mathbf{m}_{N} = \left\{ \frac{\beta}{2} \frac{d}{d\mathbf{m}_{N}} ||\mathbf{t} - \mathbf{\Phi} \mathbf{m}_{N}||^{2} + \frac{d}{d\mathbf{m}_{N}} \frac{\alpha}{2} \mathbf{m}_{N}^{T} \mathbf{m}_{N} \right\} \cdot \frac{d\mathbf{m}_{N}}{d\beta} \\ = \left\{ \frac{\beta}{2} [-2\mathbf{\Phi}^{T} (\mathbf{t} - \mathbf{\Phi} \mathbf{m}_{N})] + \frac{\alpha}{2} 2\mathbf{m}_{N} \right\} \cdot \frac{d\mathbf{m}_{N}}{d\beta} \\ = \left\{ -\beta \mathbf{\Phi}^{T} (\mathbf{t} - \mathbf{\Phi} \mathbf{m}_{N}) + \alpha \mathbf{m}_{N} \right\} \cdot \frac{d\mathbf{m}_{N}}{d\beta} \\ = \left\{ -\beta \mathbf{\Phi}^{T} \mathbf{t} + (\alpha \mathbf{I} + \beta \mathbf{\Phi}^{T} \mathbf{\Phi}) \mathbf{m}_{N} \right\} \cdot \frac{d\mathbf{m}_{N}}{d\beta} \\ = \left\{ -\beta \mathbf{\Phi}^{T} \mathbf{t} + \mathbf{A} \mathbf{m}_{N} \right\} \cdot \frac{d\mathbf{m}_{N}}{d\beta} \\ = 0$$ Where we have taken advantage of $\mathbf{A} = \nabla \nabla E(\mathbf{w})$ and $\mathbf{m}_N = \beta \mathbf{A}^{-1} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}$. Hence $$\frac{d}{d\beta}E(\boldsymbol{m}_{\boldsymbol{N}}) = \frac{1}{2}||\mathbf{t} - \boldsymbol{\Phi}\boldsymbol{m}_{\boldsymbol{N}}||^2 = \frac{1}{2}\sum_{n=1}^{N}(t_n - \boldsymbol{m}_{\boldsymbol{N}}^T\boldsymbol{\phi}(\boldsymbol{x}_n))^2$$ The derivative of the last $\beta$-dependent term, $\frac{1}{2}\ln|\mathbf{A}|$, is: $$\frac{d}{d\beta}(\frac{1}{2}ln|\mathbf{A}|) = \frac{\gamma}{2\beta}$$ Therefore, combining all those expressions and setting the overall derivative to zero, we obtain $0 = \frac{N}{2\beta} - \frac{1}{2} \sum_{n=1}^{N} \left\{ t_n - \mathbf{m}_N^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) \right\}^2 - \frac{\gamma}{2\beta}$, and rearranging gives $\frac{1}{\beta} = \frac{1}{N - \gamma} \sum_{n=1}^{N} \left\{ t_n - \mathbf{m}_N^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) \right\}^2$.
2,993
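The key step above, $\frac{d}{d\beta}(\frac{1}{2}\ln|\mathbf{A}|) = \frac{\gamma}{2\beta}$, can be confirmed with a finite-difference check. A minimal sketch assuming NumPy and a synthetic design matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
M, alpha = 5, 0.6
Phi = rng.normal(size=(20, M))

def half_logdetA(beta):
    return 0.5 * np.linalg.slogdet(alpha * np.eye(M) + beta * Phi.T @ Phi)[1]

beta, eps = 1.3, 1e-6
numeric = (half_logdetA(beta + eps) - half_logdetA(beta - eps)) / (2 * eps)

lam = np.linalg.eigvalsh(beta * Phi.T @ Phi)   # eigenvalues lambda_i of beta Phi^T Phi
gamma = np.sum(lam / (alpha + lam))
print(numeric, gamma / (2 * beta))             # should agree: d/dbeta (1/2 ln|A|) = gamma/(2 beta)
```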
3
3.23
medium
Show that the marginal probability of the data, in other words the model evidence, for the model described in Exercise 3.12 is given by $$p(\mathbf{t}) = \frac{1}{(2\pi)^{N/2}} \frac{b_0^{a_0}}{b_N^{a_N}} \frac{\Gamma(a_N)}{\Gamma(a_0)} \frac{|\mathbf{S}_N|^{1/2}}{|\mathbf{S}_0|^{1/2}}$$ by first marginalizing with respect to $\mathbf{w}$ and then with respect to $\beta$.
First, according to $p(\mathbf{t}|\mathbf{X}, \mathbf{w}, \beta) = \prod_{n=1}^{N} \mathcal{N}(t_n | \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n), \beta^{-1})$, we know that $p(\mathbf{t}|\mathbf{X}, \boldsymbol{w}, \beta)$ can be further written as $p(\mathbf{t}|\mathbf{X}, \boldsymbol{w}, \beta) = \mathcal{N}(\mathbf{t}|\boldsymbol{\Phi}\boldsymbol{w}, \beta^{-1}\mathbf{I})$, and we are given that $p(\boldsymbol{w}|\beta) = \mathcal{N}(\boldsymbol{m_0}, \beta^{-1}\mathbf{S_0})$ and $p(\beta) = \operatorname{Gam}(\beta|a_0, b_0)$ . Therefore, we just follow the hint in the problem. $$p(\mathbf{t}) = \int \int p(\mathbf{t}|\mathbf{X}, \boldsymbol{w}, \beta) p(\boldsymbol{w}|\beta) d\boldsymbol{w} p(\beta) d\beta$$ $$= \int \int (\frac{\beta}{2\pi})^{N/2} exp\{-\frac{\beta}{2}(\mathbf{t} - \boldsymbol{\Phi} \boldsymbol{w})^T (\mathbf{t} - \boldsymbol{\Phi} \boldsymbol{w})\} \cdot$$ $$(\frac{\beta}{2\pi})^{M/2} |\mathbf{S}_{\mathbf{0}}|^{-1/2} exp\{-\frac{\beta}{2}(\boldsymbol{w} - \boldsymbol{m}_{\mathbf{0}})^T \mathbf{S}_{\mathbf{0}}^{-1} (\boldsymbol{w} - \boldsymbol{m}_{\mathbf{0}})\} d\boldsymbol{w}$$ $$\Gamma(a_0)^{-1} b_0^{a_0} \beta^{a_0 - 1} exp(-b_0 \beta) d\beta$$ $$= \frac{b_0^{a_0}}{(2\pi)^{(M+N)/2} |\mathbf{S}_{\mathbf{0}}|^{1/2} \Gamma(a_0)} \int \int exp\{-\frac{\beta}{2}(\mathbf{t} - \boldsymbol{\Phi} \boldsymbol{w})^T (\mathbf{t} - \boldsymbol{\Phi} \boldsymbol{w})\}$$ $$exp\{-\frac{\beta}{2}(\boldsymbol{w} - \boldsymbol{m}_{\mathbf{0}})^T \mathbf{S}_{\mathbf{0}}^{-1} (\boldsymbol{w} - \boldsymbol{m}_{\mathbf{0}})\} d\boldsymbol{w}$$ $$\beta^{a_0 - 1 + N/2 + M/2} exp(-b_0 \beta) d\beta$$ $$= \frac{b_0^{a_0}}{(2\pi)^{(M+N)/2} |\mathbf{S}_{\mathbf{0}}|^{1/2} \Gamma(a_0)} \int \int exp\{-\frac{\beta}{2}(\boldsymbol{w} - \boldsymbol{m}_{\mathbf{N}})^T \mathbf{S}_{\mathbf{N}}^{-1} (\boldsymbol{w} - \boldsymbol{m}_{\mathbf{N}})\} d\boldsymbol{w}$$ $$exp\{-\frac{\beta}{2}(\mathbf{t}^T \mathbf{t} + \boldsymbol{m}_{\mathbf{0}}^T \mathbf{S}_{\mathbf{0}}^{-1} \boldsymbol{m}_{\mathbf{0}} - \boldsymbol{m}_{\mathbf{N}}^T \mathbf{S}_{\mathbf{N}}^{-1} \boldsymbol{m}_{\mathbf{N}})\}$$ $$\beta^{a_N - 1 + M/2} exp(-b_0 \beta) d\beta$$ Where we have defined $$\begin{aligned} \boldsymbol{m}_{N} &= \mathbf{S}_{N} \left( \mathbf{S}_{0}^{-1} \boldsymbol{m}_{0} + \boldsymbol{\Phi}^{T} \mathbf{t} \right) \\ \boldsymbol{S}_{N}^{-1} &= \mathbf{S}_{0}^{-1} + \boldsymbol{\Phi}^{T} \boldsymbol{\Phi} \\ \boldsymbol{a}_{N} &= \boldsymbol{a}_{0} + \frac{N}{2} \\ b_{N} &= b_{0} + \frac{1}{2} (\boldsymbol{m}_{0}^{T} \mathbf{S}_{0}^{-1} \boldsymbol{m}_{0} - \boldsymbol{m}_{N}^{T} \mathbf{S}_{N}^{-1} \boldsymbol{m}_{N} + \sum_{n=1}^{N} t_{n}^{2}) \end{aligned}$$ Which are exactly the same as those in Prob.3.12. We then evaluate the two integrals, taking advantage of the normalization of the multivariate Gaussian distribution (which, over $\boldsymbol{w}$, contributes a factor $(\frac{2\pi}{\beta})^{M/2} |\mathbf{S_N}|^{1/2}$) and of the gamma distribution: $$p(\mathbf{t}) = \frac{b_0^{a_0}}{(2\pi)^{(M+N)/2} |\mathbf{S_0}|^{1/2} \Gamma(a_0)} |\mathbf{S_N}|^{1/2} \int (\frac{2\pi}{\beta})^{M/2} \beta^{a_N - 1 + M/2} exp(-b_N \beta) d\beta$$ $$= \frac{b_0^{a_0}}{(2\pi)^{(M+N)/2} |\mathbf{S_0}|^{1/2} \Gamma(a_0)} (2\pi)^{M/2} |\mathbf{S_N}|^{1/2} \int \beta^{a_N - 1} exp(-b_N \beta) d\beta$$ $$= \frac{1}{(2\pi)^{N/2}} \frac{|\mathbf{S_N}|^{1/2}}{|\mathbf{S_0}|^{1/2}} \frac{b_0^{a_0}}{b_N^{a_N}} \frac{\Gamma(a_N)}{\Gamma(a_0)}$$ Just as required.
3,362
3
3.24
medium
Repeat the previous exercise but now use Bayes' theorem in the form $$p(\mathbf{t}) = \frac{p(\mathbf{t}|\mathbf{w}, \beta)p(\mathbf{w}, \beta)}{p(\mathbf{w}, \beta|\mathbf{t})}$$ and then substitute for the prior and posterior distributions and the likelihood function in order to derive the result $p(\mathbf{t}) = \frac{1}{(2\pi)^{N/2}} \frac{b_0^{a_0}}{b_N^{a_N}} \frac{\Gamma(a_N)}{\Gamma(a_0)} \frac{|\mathbf{S}_N|^{1/2}}{|\mathbf{S}_0|^{1/2}}$.
Let's just follow the hint and begin by writing down expressions for the likelihood, the prior and the posterior PDF. We know that $p(\mathbf{t}|\boldsymbol{w},\beta) = \mathcal{N}(\mathbf{t}|\boldsymbol{\Phi}\boldsymbol{w},\beta^{-1}\mathbf{I})$ . What's more, the forms of the prior and the posterior are quite similar: $$p(\boldsymbol{w}, \beta) = \mathcal{N}(\boldsymbol{w}|\mathbf{m_0}, \beta^{-1}\mathbf{S_0}) \operatorname{Gam}(\beta|a_0, b_0)$$ And $$p(\boldsymbol{w}, \beta | \mathbf{t}) = \mathcal{N}(\boldsymbol{w} | \mathbf{m_N}, \beta^{-1} \mathbf{S_N}) \operatorname{Gam}(\beta | a_N, b_N)$$ Where the relationships among those parameters are shown in Prob.3.12 and Prob.3.23. Now according to $p(\mathbf{t}) = \frac{p(\mathbf{t}|\mathbf{w}, \beta)p(\mathbf{w}, \beta)}{p(\mathbf{w}, \beta|\mathbf{t})}$, we can write: $$p(\mathbf{t}) = \mathcal{N}(\mathbf{t}|\boldsymbol{\Phi}\boldsymbol{w}, \boldsymbol{\beta}^{-1}\mathbf{I}) \frac{\mathcal{N}(\boldsymbol{w}|\mathbf{m}_{0}, \boldsymbol{\beta}^{-1}\mathbf{S}_{0}) \operatorname{Gam}(\boldsymbol{\beta}|a_{0}, b_{0})}{\mathcal{N}(\boldsymbol{w}|\mathbf{m}_{N}, \boldsymbol{\beta}^{-1}\mathbf{S}_{N}) \operatorname{Gam}(\boldsymbol{\beta}|a_{N}, b_{N})}$$ $$= \mathcal{N}(\mathbf{t}|\boldsymbol{\Phi}\boldsymbol{w}, \boldsymbol{\beta}^{-1}\mathbf{I}) \frac{\mathcal{N}(\boldsymbol{w}|\mathbf{m}_{0}, \boldsymbol{\beta}^{-1}\mathbf{S}_{0})}{\mathcal{N}(\boldsymbol{w}|\mathbf{m}_{N}, \boldsymbol{\beta}^{-1}\mathbf{S}_{N})} \frac{b_{0}^{a_{0}} \boldsymbol{\beta}^{a_{0}-1} exp(-b_{0}\boldsymbol{\beta})/\Gamma(a_{0})}{b_{N}^{a_{N}} \boldsymbol{\beta}^{a_{N}-1} exp(-b_{N}\boldsymbol{\beta})/\Gamma(a_{N})}$$ $$= \mathcal{N}(\mathbf{t}|\boldsymbol{\Phi}\boldsymbol{w}, \boldsymbol{\beta}^{-1}\mathbf{I}) \frac{\mathcal{N}(\boldsymbol{w}|\mathbf{m}_{0}, \boldsymbol{\beta}^{-1}\mathbf{S}_{0})}{\mathcal{N}(\boldsymbol{w}|\mathbf{m}_{N}, \boldsymbol{\beta}^{-1}\mathbf{S}_{N})} \frac{b_{0}^{a_{0}}}{b_{N}^{a_{N}}} \frac{\Gamma(a_{N})}{\Gamma(a_{0})} \boldsymbol{\beta}^{a_{0}-a_{N}} exp\left\{-(b_{0}-b_{N})\boldsymbol{\beta}\right\}$$ $$= \mathcal{N}(\mathbf{t}|\boldsymbol{\Phi}\boldsymbol{w}, \boldsymbol{\beta}^{-1}\mathbf{I}) \frac{\mathcal{N}(\boldsymbol{w}|\mathbf{m}_{0}, \boldsymbol{\beta}^{-1}\mathbf{S}_{0})}{\mathcal{N}(\boldsymbol{w}|\mathbf{m}_{N}, \boldsymbol{\beta}^{-1}\mathbf{S}_{N})} exp\left\{-(b_{0}-b_{N})\boldsymbol{\beta}\right\} \frac{b_{0}^{a_{0}}}{b_{N}^{a_{N}}} \frac{\Gamma(a_{N})}{\Gamma(a_{0})} \boldsymbol{\beta}^{-N/2}$$ Where we have used $a_N = a_0 + \frac{N}{2}$ . 
Now we deal with the terms expressed in the form of Gaussian Distribution: Gaussian terms = $$\mathcal{N}(\mathbf{t}|\mathbf{\Phi}\boldsymbol{w}, \beta^{-1}\mathbf{I}) \frac{\mathcal{N}(\boldsymbol{w}|\mathbf{m}_{0}, \beta^{-1}\mathbf{S}_{0})}{\mathcal{N}(\boldsymbol{w}|\mathbf{m}_{N}, \beta^{-1}\mathbf{S}_{N})}$$ = $(\frac{\beta}{2\pi})^{N/2} exp \left\{ -\frac{\beta}{2} (\mathbf{t} - \mathbf{\Phi}\boldsymbol{w})^{T} (\mathbf{t} - \mathbf{\Phi}\boldsymbol{w}) \right\} \cdot \frac{|\beta^{-1}\mathbf{S}_{N}|^{1/2}}{|\beta^{-1}\mathbf{S}_{0}|^{1/2}} \frac{exp \left\{ -\frac{\beta}{2} (\boldsymbol{w} - \mathbf{m}_{0})^{T}\mathbf{S}_{0}^{-1} (\boldsymbol{w} - \mathbf{m}_{0}) \right\}}{exp \left\{ -\frac{\beta}{2} (\boldsymbol{w} - \mathbf{m}_{N})^{T}\mathbf{S}_{N}^{-1} (\boldsymbol{w} - \mathbf{m}_{N}) \right\}}$ = $(\frac{\beta}{2\pi})^{N/2} \frac{|\mathbf{S}_{N}|^{1/2}}{|\mathbf{S}_{0}|^{1/2}} exp \left\{ -\frac{\beta}{2} (\mathbf{t} - \mathbf{\Phi}\boldsymbol{w})^{T} (\mathbf{t} - \mathbf{\Phi}\boldsymbol{w}) \right\} \cdot \frac{exp \left\{ -\frac{\beta}{2} (\boldsymbol{w} - \mathbf{m}_{0})^{T}\mathbf{S}_{0}^{-1} (\boldsymbol{w} - \mathbf{m}_{0}) \right\}}{exp \left\{ -\frac{\beta}{2} (\boldsymbol{w} - \mathbf{m}_{N})^{T}\mathbf{S}_{N}^{-1} (\boldsymbol{w} - \mathbf{m}_{N}) \right\}}$ Looking back at the previous problem, we notice that in the last step of the derivation of $p(\mathbf{t})$ we completed the square with respect to $\boldsymbol{w}$. Carefully comparing the two sides of that step, we obtain: $$exp\left\{-\frac{\beta}{2}(\mathbf{t} - \mathbf{\Phi} \mathbf{w})^{T}(\mathbf{t} - \mathbf{\Phi} \mathbf{w})\right\} exp\left\{-\frac{\beta}{2}(\mathbf{w} - \mathbf{m_0})^{T} \mathbf{S_0}^{-1}(\mathbf{w} - \mathbf{m_0})\right\}$$ $$= exp\left\{-\frac{\beta}{2}(\mathbf{w} - \mathbf{m_N})^{T} \mathbf{S_N}^{-1}(\mathbf{w} - \mathbf{m_N})\right\} exp\left\{-(b_N - b_0)\beta\right\}$$ Hence, we go back to deal with the Gaussian terms: Gaussian terms = $$(\frac{\beta}{2\pi})^{N/2} \frac{|\mathbf{S_N}|^{1/2}}{|\mathbf{S_0}|^{1/2}} exp\{-(b_N - b_0)\beta\}$$ If we substitute the expressions above into $p(\mathbf{t})$ , we will obtain $p(\mathbf{t}) = \frac{1}{(2\pi)^{N/2}} \frac{b_0^{a_0}}{b_N^{a_N}} \frac{\Gamma(a_N)}{\Gamma(a_0)} \frac{|\mathbf{S}_N|^{1/2}}{|\mathbf{S}_0|^{1/2}}$ immediately.
4,987
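The Bayes' theorem route used in Problem 3.24 also gives an easy numerical check of the evidence formula: the ratio likelihood x prior / posterior evaluated at any single $(\boldsymbol{w}, \beta)$ should equal the closed-form $p(\mathbf{t})$. A minimal sketch assuming NumPy/SciPy and toy data, reusing the posterior parameters from Problems 3.12/3.23:

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

rng = np.random.default_rng(8)
N, M = 12, 3
Phi, t = rng.normal(size=(N, M)), rng.normal(size=N)
m0, S0, a0, b0 = np.zeros(M), np.eye(M), 2.0, 1.0

S0_inv = np.linalg.inv(S0)
SN_inv = S0_inv + Phi.T @ Phi
SN = np.linalg.inv(SN_inv)
mN = SN @ (S0_inv @ m0 + Phi.T @ t)
aN = a0 + N / 2
bN = b0 + 0.5 * (m0 @ S0_inv @ m0 + t @ t - mN @ SN_inv @ mN)

# log of the closed-form evidence derived above
log_pt = (-N / 2 * np.log(2 * np.pi) + a0 * np.log(b0) - aN * np.log(bN)
          + gammaln(aN) - gammaln(a0)
          + 0.5 * np.linalg.slogdet(SN)[1] - 0.5 * np.linalg.slogdet(S0)[1])

# same quantity via Bayes' theorem, evaluated at an arbitrary (w, beta)
w, beta = rng.normal(size=M), 1.7
log_pt_bayes = (stats.norm.logpdf(t, Phi @ w, np.sqrt(1 / beta)).sum()
                + stats.multivariate_normal.logpdf(w, m0, S0 / beta)
                + stats.gamma.logpdf(beta, a0, scale=1 / b0)
                - stats.multivariate_normal.logpdf(w, mN, SN / beta)
                - stats.gamma.logpdf(beta, aN, scale=1 / bN))
print(log_pt, log_pt_bayes)    # should match
```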
3
3.3
easy
Consider a data set in which each data point $t_n$ is associated with a weighting factor $r_n > 0$ , so that the sum-of-squares error function becomes $$E_D(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} r_n \left\{ t_n - \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) \right\}^2.$$ Find an expression for the solution $\mathbf{w}^*$ that minimizes this error function. Give two alternative interpretations of the weighted sum-of-squares error function in terms of (i) data dependent noise variance and (ii) replicated data points.
Let's calculate the derivative of $E_D(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} r_n \left\{ t_n - \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) \right\}^2$ with respect to $\boldsymbol{w}$ . $$\nabla E_D(\boldsymbol{w}) = -\sum_{n=1}^{N} r_n \left\{ t_n - \boldsymbol{w}^T \boldsymbol{\phi}(\boldsymbol{x_n}) \right\} \boldsymbol{\phi}(\boldsymbol{x_n})^T$$ We set the derivative equal to 0. $$0 = \sum_{n=1}^{N} r_n t_n \boldsymbol{\phi}(\mathbf{x}_n)^T - \mathbf{w}^T \left( \sum_{n=1}^{N} r_n \boldsymbol{\phi}(\mathbf{x}_n) \boldsymbol{\phi}(\mathbf{x}_n)^T \right)$$ If we denote $\sqrt{r_n} \phi(x_n) = \phi'(x_n)$ and $\sqrt{r_n} t_n = t'_n$ , we can obtain: $$0 = \sum_{n=1}^{N} t'_n \boldsymbol{\phi}'(\mathbf{x}_n)^T - \mathbf{w}^T \left( \sum_{n=1}^{N} \boldsymbol{\phi}'(\mathbf{x}_n) \boldsymbol{\phi}'(\mathbf{x}_n)^T \right)$$ Following the same steps that lead from the log likelihood $\ln p(\mathbf{t}|\mathbf{w},\beta) = \frac{N}{2} \ln \beta - \frac{N}{2} \ln(2\pi) - \beta E_D(\mathbf{w})$ to the pseudo-inverse $\mathbf{\Phi}^{\dagger} \equiv \left(\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^{\mathrm{T}}$, we can derive a similar result, i.e. $\boldsymbol{w}^* = (\boldsymbol{\Phi}^T \boldsymbol{\Phi})^{-1} \boldsymbol{\Phi}^T \boldsymbol{t}$ . But here, we define $\boldsymbol{t}$ as: $$\boldsymbol{t} = \left[\sqrt{r_1}t_1, \sqrt{r_2}t_2, \dots, \sqrt{r_N}t_N\right]^T$$ We also define $\Phi$ as an $N \times M$ matrix, with element $\Phi(i,j) = \sqrt{r_i} \, \phi_j(\boldsymbol{x_i})$ . The interpretation is twofold: (1) Examining Eq (3.10)-(3.12), we see that if we replace the noise precision $\beta$ by $r_n \beta$ (i.e. give the $n$-th data point its own noise variance $\beta^{-1}/r_n$) in the summation term, the sum-of-squares error $E_D(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \{t_n - \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n)\}^2$ becomes the weighted expression in exercise 3.3. (2) $r_n$ can also be viewed as the effective number of observations of $(\mathbf{x}_n, t_n)$ . In other words, you can treat $(\mathbf{x}_n, t_n)$ as occurring $r_n$ times in the data set.
2,003
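A minimal numerical sketch of the weighted solution (assuming NumPy and synthetic data): the solution of the weighted normal equations coincides with an ordinary least-squares fit on the rescaled data $\phi'_n = \sqrt{r_n}\phi_n$, $t'_n = \sqrt{r_n}t_n$.

```python
import numpy as np

rng = np.random.default_rng(9)
N, M = 30, 4
Phi, t = rng.normal(size=(N, M)), rng.normal(size=N)
r = rng.uniform(0.1, 2.0, size=N)              # positive weights r_n

# solution of the weighted normal equations derived above
R = np.diag(r)
w_star = np.linalg.solve(Phi.T @ R @ Phi, Phi.T @ R @ t)

# equivalent "rescaled data" view
Phi_p, t_p = np.sqrt(r)[:, None] * Phi, np.sqrt(r) * t
w_star2 = np.linalg.lstsq(Phi_p, t_p, rcond=None)[0]
print(np.allclose(w_star, w_star2))            # True
```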
3
3.4
easy
Consider a linear model of the form $$y(x, \mathbf{w}) = w_0 + \sum_{i=1}^{D} w_i x_i$$ together with a sum-of-squares error function of the form $$E_D(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \{y(x_n, \mathbf{w}) - t_n\}^2.$$ Now suppose that Gaussian noise $\epsilon_i$ with zero mean and variance $\sigma^2$ is added independently to each of the input variables $x_i$ . By making use of $\mathbb{E}[\epsilon_i] = 0$ and $\mathbb{E}[\epsilon_i\epsilon_j] = \delta_{ij}\sigma^2$ , show that minimizing $E_D$ averaged over the noise distribution is equivalent to minimizing the sum-of-squares error for noise-free input variables with the addition of a weight-decay regularization term, in which the bias parameter $w_0$ is omitted from the regularizer.
Firstly, we rearrange $E_D(\boldsymbol{w})$, writing $\epsilon_{ni}$ for the noise added to input variable $x_i$ of the $n$-th data point (the noise is drawn independently for every $n$ and $i$). $$\begin{split} E_{D}(\boldsymbol{w}) &= \frac{1}{2} \sum_{n=1}^{N} \left\{ \left[ w_{0} + \sum_{i=1}^{D} w_{i}(x_{ni} + \epsilon_{ni}) \right] - t_{n} \right\}^{2} \\ &= \frac{1}{2} \sum_{n=1}^{N} \left\{ \left( w_{0} + \sum_{i=1}^{D} w_{i}x_{ni} \right) - t_{n} + \sum_{i=1}^{D} w_{i}\epsilon_{ni} \right\}^{2} \\ &= \frac{1}{2} \sum_{n=1}^{N} \left\{ y(x_{n}, \boldsymbol{w}) - t_{n} + \sum_{i=1}^{D} w_{i}\epsilon_{ni} \right\}^{2} \\ &= \frac{1}{2} \sum_{n=1}^{N} \left\{ \left( y(x_{n}, \boldsymbol{w}) - t_{n} \right)^{2} + \left( \sum_{i=1}^{D} w_{i}\epsilon_{ni} \right)^{2} + 2\left( \sum_{i=1}^{D} w_{i}\epsilon_{ni} \right) \left( y(x_{n}, \boldsymbol{w}) - t_{n} \right) \right\} \end{split}$$ Where we have used $y(x_n, \boldsymbol{w})$ to denote the output of the linear model when the input is the noise-free $x_n$. For the second term inside the sum, using $\mathbb{E}[\epsilon_{ni}\epsilon_{nj}] = \delta_{ij}\sigma^2$ we obtain, for every $n$: $$\mathbb{E}_{\epsilon}[(\sum_{i=1}^{D}w_{i}\epsilon_{ni})^{2}] = \mathbb{E}_{\epsilon}[\sum_{i=1}^{D}\sum_{j=1}^{D}w_{i}w_{j}\epsilon_{ni}\epsilon_{nj}] = \sum_{i=1}^{D}\sum_{j=1}^{D}w_{i}w_{j}\mathbb{E}_{\epsilon}[\epsilon_{ni}\epsilon_{nj}] = \sigma^{2}\sum_{i=1}^{D}\sum_{j=1}^{D}w_{i}w_{j}\delta_{ij} = \sigma^{2}\sum_{i=1}^{D} w_i^2$$ For the third term, since $\mathbb{E}[\epsilon_{ni}] = 0$: $$\mathbb{E}_{\epsilon}[2(\sum_{i=1}^{D} w_{i} \epsilon_{ni})(y(x_{n}, \boldsymbol{w}) - t_{n})] = 2(y(x_{n}, \boldsymbol{w}) - t_{n}) \sum_{i=1}^{D} w_i \mathbb{E}_{\epsilon}[\epsilon_{ni}] = 0$$ Therefore, if we calculate the expectation of $E_D(\boldsymbol{w})$ with respect to $\epsilon$, noting that the $\sigma^{2}\sum_{i} w_i^2$ contribution arises once for each of the $N$ data points, we obtain: $$\mathbb{E}_{\epsilon}[E_D(\boldsymbol{w})] = \frac{1}{2} \sum_{n=1}^{N} (y(x_n, \boldsymbol{w}) - t_n)^2 + \frac{N\sigma^2}{2} \sum_{i=1}^{D} w_i^2$$ which is the sum-of-squares error for noise-free inputs plus a weight-decay regularization term in which the bias $w_0$ does not appear.
1,964
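A Monte Carlo sketch of the result above (assuming NumPy; inputs, targets and weights are arbitrary synthetic values): averaging the noisy error over many independent draws of the input noise should approach the noise-free error plus the weight-decay term.

```python
import numpy as np

rng = np.random.default_rng(10)
N, D, sigma = 40, 3, 0.2
X, t = rng.normal(size=(N, D)), rng.normal(size=N)
w0, w = 0.4, rng.normal(size=D)

y = w0 + X @ w
E_D = 0.5 * np.sum((y - t) ** 2)                       # noise-free error

S = 20_000                                             # Monte Carlo samples of the input noise
eps = sigma * rng.normal(size=(S, N, D))               # epsilon_{ni}, fresh for every sample
y_noisy = w0 + (X[None] + eps) @ w                     # shape (S, N)
vals = 0.5 * np.sum((y_noisy - t) ** 2, axis=1)

expected = E_D + 0.5 * N * sigma ** 2 * np.sum(w ** 2)
print(vals.mean(), expected)                           # should agree up to Monte Carlo error
```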
3
3.5
easy
Using the technique of Lagrange multipliers, discussed in Appendix E, show that minimization of the regularized error function $\frac{1}{2} \sum_{n=1}^{N} \{t_n - \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n)\}^2 + \frac{\lambda}{2} \sum_{j=1}^{M} |w_j|^q$ is equivalent to minimizing the unregularized sum-of-squares error $E_D(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \{t_n - \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n)\}^2.$ subject to the constraint $\sum_{j=1}^{M} |w_j|^q \leqslant \eta$. Discuss the relationship between the parameters $\eta$ and $\lambda$ .
We can first rewrite the constraint $\sum_{j=1}^{M} |w_j|^q \leqslant \eta$ as: $$\frac{1}{2} \left( \sum_{j=1}^{M} |w_j|^q - \eta \right) \le 0$$ Where we deliberately introduce the scaling factor 1/2 for convenience. Then it is straightforward to obtain the Lagrange function. $$L(\boldsymbol{w}, \lambda) = \frac{1}{2} \sum_{n=1}^{N} \left\{ t_n - \boldsymbol{w}^T \boldsymbol{\phi}(\boldsymbol{x_n}) \right\}^2 + \frac{\lambda}{2} \left( \sum_{j=1}^{M} |w_j|^q - \eta \right)$$ It is obvious that $L(\boldsymbol{w}, \lambda)$ and $\frac{1}{2} \sum_{n=1}^{N} \{t_n - \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n)\}^2 + \frac{\lambda}{2} \sum_{j=1}^{M} |w_j|^q$ have the same dependence on $\boldsymbol{w}$, since they differ only by the constant $-\frac{\lambda}{2}\eta$, so the two minimizations give the same $\boldsymbol{w}$. Meanwhile, if we denote the optimal $\boldsymbol{w}$ that minimizes $L(\boldsymbol{w}, \lambda)$ as $\boldsymbol{w}^*(\lambda)$, the corresponding constraint level is $$\eta = \sum_{j=1}^{M} |w_j^{*}(\lambda)|^q$$ so that each value of $\lambda$ implicitly determines a corresponding value of $\eta$; larger values of $\lambda$ shrink $\boldsymbol{w}^*(\lambda)$ and therefore generally correspond to smaller (tighter) values of $\eta$.
937
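A small illustration of the $\lambda$-$\eta$ relationship for the quadratic case $q = 2$, where the regularized minimizer has a closed form (assuming NumPy and synthetic data):

```python
import numpy as np

rng = np.random.default_rng(11)
N, M = 30, 5
Phi, t = rng.normal(size=(N, M)), rng.normal(size=N)

# For q = 2 the regularized minimizer is (Phi^T Phi + lambda I)^{-1} Phi^T t,
# and the implied constraint level is eta(lambda) = sum_j w_j(lambda)^2.
for lam in (0.01, 0.1, 1.0, 10.0):
    w_lam = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ t)
    print(lam, np.sum(w_lam ** 2))        # eta shrinks as lambda grows
```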
3
3.6
easy
Consider a linear basis function regression model for a multivariate target variable t having a Gaussian distribution of the form $$p(\mathbf{t}|\mathbf{W}, \mathbf{\Sigma}) = \mathcal{N}(\mathbf{t}|\mathbf{y}(\mathbf{x}, \mathbf{W}), \mathbf{\Sigma})$$ where $$\mathbf{y}(\mathbf{x}, \mathbf{W}) = \mathbf{W}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}) \tag{3.108}$$ together with a training data set comprising input basis vectors $\phi(\mathbf{x}_n)$ and corresponding target vectors $\mathbf{t}_n$ , with $n=1,\ldots,N$ . Show that the maximum likelihood solution $\mathbf{W}_{\mathrm{ML}}$ for the parameter matrix $\mathbf{W}$ has the property that each column is given by an expression of the form $\mathbf{w}_{\mathrm{ML}} = \left(\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^{\mathrm{T}}\mathbf{t}$, which was the solution for an isotropic noise distribution. Note that this is independent of the covariance matrix $\Sigma$ . Show that the maximum likelihood solution for $\Sigma$ is given by $$\Sigma = \frac{1}{N} \sum_{n=1}^{N} \left( \mathbf{t}_{n} - \mathbf{W}_{\mathrm{ML}}^{\mathrm{T}} \phi(\mathbf{x}_{n}) \right) \left( \mathbf{t}_{n} - \mathbf{W}_{\mathrm{ML}}^{\mathrm{T}} \phi(\mathbf{x}_{n}) \right)^{\mathrm{T}}.$$
Firstly, we write down the log likelihood function. $$ln p(\boldsymbol{T}|\boldsymbol{X}, \boldsymbol{W}, \boldsymbol{\Sigma}) = -\frac{N}{2}ln|\boldsymbol{\Sigma}| - \frac{1}{2}\sum_{n=1}^{N} \left[\boldsymbol{t_n} - \boldsymbol{W}^T \boldsymbol{\phi}(\boldsymbol{x_n})\right]^T \boldsymbol{\Sigma}^{-1} \left[\boldsymbol{t_n} - \boldsymbol{W}^T \boldsymbol{\phi}(\boldsymbol{x_n})\right]$$ Where we have already omitted the constant term. We set the derivative of the equation above with respect to $\boldsymbol{W}$ equal to zero. $$\mathbf{0} = -\sum_{n=1}^{N} \mathbf{\Sigma}^{-1} [\boldsymbol{t_n} - \boldsymbol{W}^T \boldsymbol{\phi}(\boldsymbol{x_n})] \boldsymbol{\phi}(\boldsymbol{x_n})^T$$ Since $\boldsymbol{\Sigma}^{-1}$ is non-singular, it can be cancelled from this equation, which no longer involves $\boldsymbol{\Sigma}$. Solving for $\boldsymbol{W}$ therefore gives a result of the same form as $\mathbf{w}_{\mathrm{ML}} = \left(\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^{\mathrm{T}}\mathbf{t}$ for each column of $\boldsymbol{W}$, independent of $\boldsymbol{\Sigma}$. For $\Sigma$ , comparing with the maximum likelihood estimate of the covariance of a Gaussian, obtained from the log likelihood $\ln p(\mathbf{X}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) = -\frac{ND}{2} \ln(2\pi) - \frac{N}{2} \ln |\boldsymbol{\Sigma}| - \frac{1}{2} \sum_{n=1}^{N} (\mathbf{x}_n - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x}_n - \boldsymbol{\mu})$ (for which $\mathbb{E}[\Sigma_{\mathrm{ML}}] = \frac{N-1}{N} \Sigma$), we can easily write down a similar result: $$\boldsymbol{\Sigma} = \frac{1}{N} \sum_{n=1}^{N} [\boldsymbol{t_n} - \boldsymbol{W}_{ML}^T \boldsymbol{\phi}(\boldsymbol{x_n})] [\boldsymbol{t_n} - \boldsymbol{W}_{ML}^T \boldsymbol{\phi}(\boldsymbol{x_n})]^T$$ We can see that the solutions for W and $\Sigma$ are also decoupled.
1,591
3
3.7
easy
By using the technique of completing the square, verify the result (3.49) for the posterior distribution of the parameters w in the linear basis function model in which $\mathbf{m}_N$ and $\mathbf{S}_N$ are defined by $\mathbf{m}_{N} = \mathbf{S}_{N} \left( \mathbf{S}_{0}^{-1} \mathbf{m}_{0} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{t} \right)$ and $\mathbf{S}_N^{-1} = \mathbf{S}_0^{-1} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi}.$ respectively.
Let's begin by writing down the prior distribution p(w) and likelihood function $p(t|X, w, \beta)$ . $$p(\boldsymbol{w}) = \mathcal{N}(\boldsymbol{w}|\boldsymbol{m}_0, \boldsymbol{S}_0) , \quad p(\boldsymbol{t}|\boldsymbol{X}, \boldsymbol{w}, \boldsymbol{\beta}) = \prod_{n=1}^{N} \mathcal{N}(t_n|\boldsymbol{w}^T \boldsymbol{\phi}(\boldsymbol{x}_n), \boldsymbol{\beta}^{-1})$$ Since the posterior PDF is, up to a normalization constant, the product of the prior PDF and the likelihood function, we mainly focus on the exponential term of the product. exponential term $$= -\frac{\beta}{2} \sum_{n=1}^{N} \left\{ t_n - \boldsymbol{w}^T \boldsymbol{\phi}(\boldsymbol{x_n}) \right\}^2 - \frac{1}{2} (\boldsymbol{w} - \boldsymbol{m_0})^T \boldsymbol{S}_0^{-1} (\boldsymbol{w} - \boldsymbol{m_0})$$ $$= -\frac{\beta}{2} \sum_{n=1}^{N} \left\{ t_n^2 - 2t_n \boldsymbol{w}^T \boldsymbol{\phi}(\boldsymbol{x_n}) + \boldsymbol{w}^T \boldsymbol{\phi}(\boldsymbol{x_n}) \boldsymbol{\phi}(\boldsymbol{x_n})^T \boldsymbol{w} \right\} - \frac{1}{2} (\boldsymbol{w} - \boldsymbol{m_0})^T \boldsymbol{S}_0^{-1} (\boldsymbol{w} - \boldsymbol{m_0})$$ $$= -\frac{1}{2} \boldsymbol{w}^T \left[ \sum_{n=1}^{N} \beta \boldsymbol{\phi}(\boldsymbol{x_n}) \boldsymbol{\phi}(\boldsymbol{x_n})^T + \boldsymbol{S}_0^{-1} \right] \boldsymbol{w} -\frac{1}{2} \left[ -2\boldsymbol{m_0}^T \boldsymbol{S}_0^{-1} - \sum_{n=1}^{N} 2\beta t_n \boldsymbol{\phi}(\boldsymbol{x_n})^T \right] \boldsymbol{w}$$ where in the last step we have dropped terms that do not depend on $\boldsymbol{w}$. Hence, by comparing the quadratic term with the exponent of a standard Gaussian distribution, we can obtain: $\mathbf{S}_N^{-1} = \mathbf{S}_0^{-1} + \beta \mathbf{\Phi}^T \mathbf{\Phi}$ . And then comparing the linear term, we can obtain : $$-2m_{N}^{T}S_{N}^{-1} = -2m_{0}^{T}S_{0}^{-1} - \sum_{n=1}^{N} 2\beta t_{n}\phi(x_{n})^{T}$$ If we multiply both sides by $-1/2$, and then transpose both sides, we can easily see that $m_N = S_N(S_0^{-1}m_0 + \beta\Phi^T t)$
1,934
3
3.8
medium
Consider the linear basis function model in Section 3.1, and suppose that we have already observed N data points, so that the posterior distribution over w is given by (3.49). This posterior can be regarded as the prior for the next observation. By considering an additional data point $(\mathbf{x}_{N+1}, t_{N+1})$ , and by completing the square in the exponential, show that the resulting posterior distribution is again given by (3.49) but with $\mathbf{S}_N$ replaced by $\mathbf{S}_{N+1}$ and $\mathbf{m}_N$ replaced by $\mathbf{m}_{N+1}$ .
Firstly, we write down the prior: $$p(\boldsymbol{w}) = \mathcal{N}(\boldsymbol{m}_{N}, \boldsymbol{S}_{N})$$ Where $m_N$ , $S_N$ are given by $\mathbf{m}_{N} = \mathbf{S}_{N} \left( \mathbf{S}_{0}^{-1} \mathbf{m}_{0} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{t} \right)$ and $\mathbf{S}_N^{-1} = \mathbf{S}_0^{-1} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi}.$. And if now we observe another sample $(\boldsymbol{x}_{N+1}, t_{N+1})$ , we can write down the likelihood function: $$p(t_{N+1}|\mathbf{x}_{N+1},\mathbf{w}) = \mathcal{N}(t_{N+1}|y(\mathbf{x}_{N+1},\mathbf{w}),\beta^{-1})$$ Since the posterior is proportional to the product of the likelihood function and the prior, we focus on the exponent, which up to a factor of $-1/2$ is given by: $$\begin{aligned} &(\boldsymbol{w} - \boldsymbol{m}_{N})^{T} \boldsymbol{S}_{N}^{-1} (\boldsymbol{w} - \boldsymbol{m}_{N}) + \beta (t_{N+1} - \boldsymbol{w}^{T} \boldsymbol{\phi}(\boldsymbol{x}_{N+1}))^{2} \\ &= \boldsymbol{w}^{T} [\boldsymbol{S}_{N}^{-1} + \beta \boldsymbol{\phi}(\boldsymbol{x}_{N+1}) \boldsymbol{\phi}(\boldsymbol{x}_{N+1})^{T}] \boldsymbol{w} - 2 \boldsymbol{w}^{T} [\boldsymbol{S}_{N}^{-1} \boldsymbol{m}_{N} + \beta \boldsymbol{\phi}(\boldsymbol{x}_{N+1}) t_{N+1}] + \text{const} \end{aligned}$$ Therefore, after observing $(\boldsymbol{x}_{N+1}, t_{N+1})$ , we have $p(\boldsymbol{w}) = \mathcal{N}(\boldsymbol{m}_{N+1}, \boldsymbol{S}_{N+1})$ , where we have defined: $$S_{N+1}^{-1} = S_N^{-1} + \beta \phi(x_{N+1}) \phi(x_{N+1})^T$$ And $$m_{N+1} = S_{N+1} (S_N^{-1} m_N + \beta \phi(x_{N+1}) t_{N+1})$$
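The sequential view can be verified numerically: processing $N$ points and then one more, using the result as the new prior, must give the same posterior as processing all $N+1$ points at once. A minimal NumPy sketch (synthetic data, assumed hyperparameters), not part of the original solution:

```python
import numpy as np

rng = np.random.default_rng(2)
M, beta = 3, 2.0
S0, m0 = np.eye(M), np.zeros(M)

# N observed points, plus one extra point (x_{N+1}, t_{N+1})
Phi, t = rng.normal(size=(10, M)), rng.normal(size=10)
phi_new, t_new = rng.normal(size=M), rng.normal()

def posterior(Phi, t, m0, S0):
    S_inv = np.linalg.inv(S0) + beta * Phi.T @ Phi
    m = np.linalg.solve(S_inv, np.linalg.inv(S0) @ m0 + beta * Phi.T @ t)
    return m, np.linalg.inv(S_inv)

# Batch: all N+1 points at once, starting from (m0, S0)
m_batch, S_batch = posterior(np.vstack([Phi, phi_new]), np.append(t, t_new), m0, S0)

# Sequential: first N points, then one more update with (m_N, S_N) as the prior
m_N, S_N = posterior(Phi, t, m0, S0)
m_seq, S_seq = posterior(phi_new[None, :], np.array([t_new]), m_N, S_N)

print(np.allclose(m_batch, m_seq), np.allclose(S_batch, S_seq))   # True True
```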
1,513
3
3.9
medium
Repeat the previous exercise but instead of completing the square by hand, make use of the general result for linear-Gaussian models given by $p(\mathbf{x}|\mathbf{y}) = \mathcal{N}(\mathbf{x}|\mathbf{\Sigma}\{\mathbf{A}^{\mathrm{T}}\mathbf{L}(\mathbf{y}-\mathbf{b}) + \mathbf{\Lambda}\boldsymbol{\mu}\}, \mathbf{\Sigma})$.
We know that the prior $p(\mathbf{w})$ can be written as: $$p(\boldsymbol{w}) = \mathcal{N}(\boldsymbol{m}_{N}, \boldsymbol{S}_{N})$$ And the likelihood function $p(t_{N+1}|\boldsymbol{x_{N+1}},\boldsymbol{w})$ can be written as: $$p(t_{N+1}|x_{N+1}, w) = \mathcal{N}(t_{N+1}|y(x_{N+1}, w), \beta^{-1})$$ According to the fact that $y(x_{N+1}, w) = w^T \phi(x_{N+1}) = \phi(x_{N+1})^T w$ , the likelihood can be further written as: $$p(t_{N+1}|\boldsymbol{x_{N+1}},\boldsymbol{w}) = \mathcal{N}(t_{N+1}|\boldsymbol{\phi}(\boldsymbol{x_{N+1}})^T\boldsymbol{w},\beta^{-1})$$ Then we take advantage of $p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1})$, $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\mathbf{x} + \mathbf{b}, \mathbf{L}^{-1})$ and $p(\mathbf{x}|\mathbf{y}) = \mathcal{N}(\mathbf{x}|\mathbf{\Sigma}\{\mathbf{A}^{\mathrm{T}}\mathbf{L}(\mathbf{y}-\mathbf{b}) + \mathbf{\Lambda}\boldsymbol{\mu}\}, \mathbf{\Sigma})$, which gives: $$p(\boldsymbol{w}|\boldsymbol{x}_{N+1},t_{N+1}) = \mathcal{N}(\boldsymbol{\Sigma}\{\boldsymbol{\phi}(\boldsymbol{x}_{N+1})\beta t_{N+1} + \boldsymbol{S}_{N}^{-1}\boldsymbol{m}_{N}\},\boldsymbol{\Sigma})$$ Where $\Sigma = (S_N^{-1} + \phi(x_{N+1})\beta\phi(x_{N+1})^T)^{-1}$ , and we can see that the result is exactly the same as the one we obtained in the previous problem.
1,406
4
4.1
medium
$ Given a set of data points $\{\mathbf{x}_n\}$ , we can define the *convex hull* to be the set of all points $\mathbf{x}$ given by $$\mathbf{x} = \sum_{n} \alpha_n \mathbf{x}_n \tag{4.156}$$ where $\alpha_n \geqslant 0$ and $\sum_n \alpha_n = 1$ . Consider a second set of points $\{\mathbf{y}_n\}$ together with their corresponding convex hull. By definition, the two sets of points will be linearly separable if there exists a vector $\widehat{\mathbf{w}}$ and a scalar $w_0$ such that $\widehat{\mathbf{w}}^T\mathbf{x}_n + w_0 > 0$ for all $\mathbf{x}_n$ , and $\widehat{\mathbf{w}}^T\mathbf{y}_n + w_0 < 0$ for all $\mathbf{y}_n$ . Show that if their convex hulls intersect, the two sets of points cannot be linearly separable, and conversely that if they are linearly separable, their convex hulls do not intersect.
If the convex hulls of $\{\mathbf{x_n}\}$ and $\{\mathbf{y_n}\}$ intersect, we know that there will be a point $\mathbf{z}$ which can be written as $\mathbf{z} = \sum_n \alpha_n \mathbf{x_n}$ and also $\mathbf{z} = \sum_n \beta_n \mathbf{y_n}$ . Hence we can obtain: $$\widehat{\mathbf{w}}^T \mathbf{z} + w_0 = \widehat{\mathbf{w}}^T (\sum_n \alpha_n \mathbf{x_n}) + w_0$$ $$= (\sum_n \alpha_n \widehat{\mathbf{w}}^T \mathbf{x_n}) + (\sum_n \alpha_n) w_0$$ $$= \sum_n \alpha_n (\widehat{\mathbf{w}}^T \mathbf{x_n} + w_0) \quad (*)$$ Where we have used $\sum_n \alpha_n = 1$ . And if $\{\mathbf{x_n}\}$ and $\{\mathbf{y_n}\}$ are linearly separable, we have $\widehat{\mathbf{w}}^T \mathbf{x_n} + w_0 > 0$ and $\widehat{\mathbf{w}}^T \mathbf{y_n} + w_0 < 0$ for all $\mathbf{x_n}$ , $\mathbf{y_n}$ . Together with $\alpha_n \geq 0$ and (\*), we know that $\widehat{\mathbf{w}}^T \mathbf{z} + w_0 > 0$ . And if we calculate $\widehat{\mathbf{w}}^T \mathbf{z} + w_0$ from the perspective of $\{\mathbf{y_n}\}$ following the same procedure, we obtain $\widehat{\mathbf{w}}^T \mathbf{z} + w_0 < 0$ . Hence a contradiction occurs. In other words, the two sets are not linearly separable if their convex hulls intersect. We have thus proved the first statement, i.e., "convex hulls intersect" implies "not linearly separable". The second part asks us to prove that "linearly separable" implies "convex hulls do not intersect", and this follows immediately because it is the contrapositive of the first statement. Note that the true converse of the first statement, namely that if the convex hulls do not intersect then the data sets are linearly separable, also holds; this is exactly what the Hyperplane Separation Theorem shows us.
1,703
4
4.11
medium
Consider a classification problem with K classes for which the feature vector $\phi$ has M components each of which can take L discrete states. Let the values of the components be represented by a 1-of-L binary coding scheme. Further suppose that, conditioned on the class $\mathcal{C}_k$ , the M components of $\phi$ are independent, so that the class-conditional density factorizes with respect to the feature vector components. Show that the quantities $a_k$ given by $a_k = \ln p(\mathbf{x}|\mathcal{C}_k)p(\mathcal{C}_k).$, which appear in the argument to the softmax function describing the posterior class probabilities, are linear functions of the components of $\phi$ . Note that this represents an example of the naive Bayes model which is discussed in Section 8.2.2.
Based on the definition, we can write down $$p(\boldsymbol{\phi}|C_k) = \prod_{m=1}^{M} \prod_{l=1}^{L} \mu_{kml}^{\phi_{ml}}$$ Note that here only one of the values among $\phi_{m1}$ , $\phi_{m2}$ , ... $\phi_{mL}$ is 1 and the others are all 0, because we have used a 1-of-L binary coding scheme; we have also taken advantage of the assumption that the M components of $\phi$ are independent conditioned on the class $C_k$ . We substitute the expression above into $a_k = \ln p(\mathbf{x}|\mathcal{C}_k)p(\mathcal{C}_k).$, which gives: $$a_k = \sum_{m=1}^{M} \sum_{l=1}^{L} \phi_{ml} \cdot \ln \mu_{kml} + \ln p(C_k)$$ Hence it is obvious that $a_k$ is a linear function of the components of $\phi$ .
723
4
4.12
easy
Verify the relation $\frac{d\sigma}{da} = \sigma(1 - \sigma).$ for the derivative of the logistic sigmoid function defined by $\sigma(a) = \frac{1}{1 + \exp(-a)}$.
Based on the definition, i.e., $\sigma(a) = \frac{1}{1 + \exp(-a)}$, we know that the logistic sigmoid has the form: $$\sigma(a) = \frac{1}{1 + exp(-a)}$$ Now we calculate its derivative with respect to $a$. $$\frac{d\sigma(a)}{da} = \frac{exp(-a)}{[1+exp(-a)]^2} = \frac{exp(-a)}{1+exp(-a)} \cdot \frac{1}{1+exp(-a)} = [1-\sigma(a)] \cdot \sigma(a)$$ Just as required.
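A two-line numerical check of this identity (not from the original text), comparing the analytic derivative with central finite differences:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

a = np.linspace(-5, 5, 11)
eps = 1e-6
numeric = (sigmoid(a + eps) - sigmoid(a - eps)) / (2 * eps)   # finite differences
analytic = sigmoid(a) * (1 - sigmoid(a))                      # sigma * (1 - sigma)
print(np.allclose(numeric, analytic, atol=1e-8))              # True
```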
369
4
4.13
easy
By making use of the result $\frac{d\sigma}{da} = \sigma(1 - \sigma).$ for the derivative of the logistic sigmoid, show that the derivative of the error function $E(\mathbf{w}) = -\ln p(\mathbf{t}|\mathbf{w}) = -\sum_{n=1}^{N} \{t_n \ln y_n + (1 - t_n) \ln(1 - y_n)\}$ for the logistic regression model is given by $\nabla E(\mathbf{w}) = \sum_{n=1}^{N} (y_n - t_n) \phi_n$.
Let's follow the hint. $$\nabla E(\mathbf{w}) = -\nabla \sum_{n=1}^{N} \{t_n \ln y_n + (1 - t_n) \ln(1 - y_n)\}$$ $$= -\sum_{n=1}^{N} \nabla \{t_n \ln y_n + (1 - t_n) \ln(1 - y_n)\}$$ $$= -\sum_{n=1}^{N} \frac{d\{t_n \ln y_n + (1 - t_n) \ln(1 - y_n)\}}{dy_n} \frac{dy_n}{da_n} \frac{da_n}{d\mathbf{w}}$$ $$= -\sum_{n=1}^{N} (\frac{t_n}{y_n} - \frac{1 - t_n}{1 - y_n}) \cdot y_n (1 - y_n) \cdot \phi_n$$ $$= -\sum_{n=1}^{N} \frac{t_n - y_n}{y_n (1 - y_n)} \cdot y_n (1 - y_n) \cdot \phi_n$$ $$= -\sum_{n=1}^{N} (t_n - y_n) \phi_n$$ $$= \sum_{n=1}^{N} (y_n - t_n) \phi_n$$ Where we have used $y_n = \sigma(a_n)$ , $a_n = \mathbf{w}^T \boldsymbol{\phi_n}$ , the chain rules and $\frac{d\sigma}{da} = \sigma(1 - \sigma).$.
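The gradient formula can be verified with a finite-difference check. The sketch below (synthetic data, not part of the original solution) compares the analytic gradient $\sum_n (y_n - t_n)\boldsymbol{\phi}_n$ with numerical derivatives of $E(\mathbf{w})$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 30, 4
Phi = rng.normal(size=(N, M))
t = rng.integers(0, 2, size=N).astype(float)
w = rng.normal(size=M)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def E(w):
    y = sigmoid(Phi @ w)
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

# Analytic gradient derived above: sum_n (y_n - t_n) phi_n
grad_analytic = Phi.T @ (sigmoid(Phi @ w) - t)

# Central finite differences
eps = 1e-6
grad_numeric = np.array([(E(w + eps * e) - E(w - eps * e)) / (2 * eps) for e in np.eye(M)])
print(np.allclose(grad_analytic, grad_numeric, atol=1e-5))    # True
```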
736
4
4.14
easy
Show that for a linearly separable data set, the maximum likelihood solution for the logistic regression model is obtained by finding a vector $\mathbf{w}$ whose decision boundary $\mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}) = 0$ separates the classes and then taking the magnitude of $\mathbf{w}$ to infinity.
According to the definition, we know that if a dataset is linearly separable, we can find $\mathbf{w}$ such that for some points $\mathbf{x_n}$ we have $\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x_n}) > 0$ , and for the others $\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x_m}) < 0$ . The boundary is then given by $\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x}) = 0$ . Note that for any point $\mathbf{x_0}$ in the dataset, the value of $\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x_0})$ is either positive or negative, but it cannot equal 0. Therefore, the maximum likelihood solution for logistic regression is trivial. We suppose that for those points $\mathbf{x_n}$ belonging to class $C_1$ we have $\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x_n}) > 0$ , and $\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x_m}) < 0$ for those belonging to class $C_2$ . According to $p(C_1|\phi) = y(\phi) = \sigma\left(\mathbf{w}^{\mathrm{T}}\phi\right)$, if $|\mathbf{w}| \to \infty$ , we have $$p(C_1|\boldsymbol{\phi}(\mathbf{x_n})) = \sigma(\mathbf{w}^T\boldsymbol{\phi}(\mathbf{x_n})) \to 1$$ Where we have used $\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x_n}) \to +\infty$ . And since $\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x_m}) \to -\infty$ , we can also obtain: $$p(C_2|\boldsymbol{\phi}(\mathbf{x_m})) = 1 - p(C_1|\boldsymbol{\phi}(\mathbf{x_m})) = 1 - \sigma(\mathbf{w}^T\boldsymbol{\phi}(\mathbf{x_m})) \to 1$$ In other words, for the likelihood function, i.e., (4.89), if we have $|\mathbf{w}| \to \infty$ , and we also label all the points lying on one side of the boundary as class $C_1$ and those on the other side as class $C_2$ , then every term in $p(\mathbf{t}|\mathbf{w}) = \prod_{n=1}^{N} y_n^{t_n} \left\{ 1 - y_n \right\}^{1 - t_n}$ achieves its maximum value, i.e., 1, finally leading to the maximum of the likelihood. Hence, for a linearly separable dataset, the learning process may prefer to make $|\mathbf{w}| \to \infty$ and use the linear boundary to label the datasets, which can cause a severe over-fitting problem. ## **Problem 4.15 Solution** Since $y_n$ is the output of the logistic sigmoid function, we know that $0 < y_n < 1$ and hence $y_n(1-y_n) > 0$ . Then, using $\mathbf{H} = \nabla \nabla E(\mathbf{w}) = \sum_{n=1}^{N} y_n (1 - y_n) \boldsymbol{\phi}_n \boldsymbol{\phi}_n^{\mathrm{T}} = \boldsymbol{\Phi}^{\mathrm{T}} \mathbf{R} \boldsymbol{\Phi}$, for an arbitrary non-zero real vector $\mathbf{a} \neq \mathbf{0}$ we have: $$\mathbf{a}^{T}\mathbf{H}\mathbf{a} = \mathbf{a}^{T} \left[ \sum_{n=1}^{N} y_{n} (1 - y_{n}) \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T} \right] \mathbf{a} = \sum_{n=1}^{N} y_{n} (1 - y_{n}) (\boldsymbol{\phi}_{n}^{T} \mathbf{a})^{T} (\boldsymbol{\phi}_{n}^{T} \mathbf{a}) = \sum_{n=1}^{N} y_{n} (1 - y_{n}) b_{n}^{2}$$ Where we have denoted $b_n = \phi_n^T \mathbf{a}$ . What's more, at least one of $\{b_1, b_2, ..., b_N\}$ must be non-zero, and then we can see that the expression above is larger than 0 and hence $\mathbf{H}$ is positive definite. Otherwise, if all the $b_n = 0$ , then $\mathbf{a} = [a_1, a_2, ..., a_M]^T$ would lie in the null space of the matrix $\mathbf{\Phi}_{N \times M}$ . However, by the *rank-nullity theorem*, we know that $\operatorname{Rank}(\mathbf{\Phi}) + \operatorname{Nullity}(\mathbf{\Phi}) = M$ , and we have already assumed that those M features are independent, i.e., $\operatorname{Rank}(\mathbf{\Phi}) = M$ , which means the null space contains only $\mathbf{0}$ . Therefore a contradiction occurs.
3,587
4
4.16
easy
Consider a binary classification problem in which each observation $\mathbf{x}_n$ is known to belong to one of two classes, corresponding to t=0 and t=1, and suppose that the procedure for collecting training data is imperfect, so that training points are sometimes mislabelled. For every data point $\mathbf{x}_n$ , instead of having a value $t_n$ for the class label, we have instead a value $\pi_n$ representing the probability that $t_n = 1$. Given a probabilistic model $p(t = 1|\boldsymbol{\phi})$, write down the log likelihood function appropriate to such a data set.
We still denote $y_n = p(t = 1 | \phi_n)$ , and then we can write down the log likelihood by replacing $t_n$ with $\pi_n$ in $p(\mathbf{t}|\mathbf{w}) = \prod_{n=1}^{N} y_n^{t_n} \left\{ 1 - y_n \right\}^{1 - t_n}$ and $E(\mathbf{w}) = -\ln p(\mathbf{t}|\mathbf{w}) = -\sum_{n=1}^{N} \{t_n \ln y_n + (1 - t_n) \ln(1 - y_n)\}$. $$\ln p(\mathbf{t}|\mathbf{w}) = \sum_{n=1}^{N} \{\pi_n \ln y_n + (1 - \pi_n) \ln (1 - y_n)\}$$
445
4
4.17
easy
Show that the derivatives of the softmax activation function $p(\mathcal{C}_k|\phi) = y_k(\phi) = \frac{\exp(a_k)}{\sum_j \exp(a_j)}$, where the $a_k$ are defined by $a_k = \mathbf{w}_k^{\mathrm{T}} \boldsymbol{\phi}.$, are given by $\frac{\partial y_k}{\partial a_j} = y_k (I_{kj} - y_j)$.
We should discuss two situations separately, namely $j = k$ and $j \neq k$ . When $j \neq k$ , we have: $$\frac{\partial y_k}{\partial a_j} = \frac{-exp(a_k) \cdot exp(a_j)}{[\sum_i exp(a_i)]^2} = -y_k \cdot y_j$$ And when $j = k$ , we have: $$\frac{\partial y_k}{\partial a_k} = \frac{exp(a_k)\sum_i exp(a_i) - exp(a_k)exp(a_k)}{[\sum_i exp(a_i)]^2} = y_k - y_k^2 = y_k(1-y_k)$$ Therefore, we can obtain: $$\frac{\partial y_k}{\partial a_j} = y_k (I_{kj} - y_j)$$ Where $I_{kj}$ denotes the elements of the identity matrix.
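A quick finite-difference check of this Jacobian (not part of the original solution):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())          # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(4)
a = rng.normal(size=5)
y = softmax(a)

# Analytic Jacobian: dy_k/da_j = y_k (I_kj - y_j)
J_analytic = np.diag(y) - np.outer(y, y)

# Numerical Jacobian, column j holds dy/da_j
eps = 1e-6
J_numeric = np.column_stack([(softmax(a + eps * e) - softmax(a - eps * e)) / (2 * eps)
                             for e in np.eye(5)])
print(np.allclose(J_analytic, J_numeric, atol=1e-8))   # True
```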
528
4
4.18
easy
Using the result $\nabla E(\mathbf{w}) = \sum_{n=1}^{N} (y_n - t_n) \phi_n$ for the derivatives of the softmax activation function, show that the gradients of the cross-entropy error $E(\mathbf{w}_1, \dots, \mathbf{w}_K) = -\ln p(\mathbf{T}|\mathbf{w}_1, \dots, \mathbf{w}_K) = -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \ln y_{nk}$ are given by $\nabla_{\mathbf{w}_j} E(\mathbf{w}_1, \dots, \mathbf{w}_K) = \sum_{n=1}^N (y_{nj} - t_{nj}) \, \boldsymbol{\phi}_n$.
We derive every term $t_{nk} \ln y_{nk}$ with regard to $a_j$ . $$\begin{array}{lll} \frac{\partial t_{nk} \ln y_{nk}}{\partial \mathbf{w_j}} & = & \frac{\partial t_{nk} \ln y_{nk}}{\partial y_{nk}} \frac{\partial y_{nk}}{\partial a_j} \frac{\partial a_j}{\partial \mathbf{w_j}} \\ & = & t_{nk} \frac{1}{y_{nk}} \cdot y_{nk} (I_{kj} - y_{nj}) \cdot \boldsymbol{\phi_n} \\ & = & t_{nk} (I_{kj} - y_{nj}) \boldsymbol{\phi_n} \end{array}$$ Where we have used $a_k = \mathbf{w}_k^{\mathrm{T}} \boldsymbol{\phi}.$ and $\frac{\partial y_k}{\partial a_j} = y_k (I_{kj} - y_j)$. Next we perform summation over n and k. $$\nabla_{\mathbf{w_j}} E = -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} (I_{kj} - y_{nj}) \boldsymbol{\phi_n}$$ $$= \sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} y_{nj} \boldsymbol{\phi_n} - \sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} I_{kj} \boldsymbol{\phi_n}$$ $$= \sum_{n=1}^{N} \left[ (\sum_{k=1}^{K} t_{nk}) y_{nj} \boldsymbol{\phi_n} \right] - \sum_{n=1}^{N} t_{nj} \boldsymbol{\phi_n}$$ $$= \sum_{n=1}^{N} y_{nj} \boldsymbol{\phi_n} - \sum_{n=1}^{N} t_{nj} \boldsymbol{\phi_n}$$ $$= \sum_{n=1}^{N} (y_{nj} - t_{nj}) \boldsymbol{\phi_n}$$ Where we have used the fact that for arbitrary n, we have $\sum_{k=1}^{K} t_{nk} = 1$ . **Problem 4.19 Solution** We write down the log likelihood. $$\ln p(\mathbf{t}|\mathbf{w}) = \sum_{n=1}^{N} \{t_n \ln y_n + (1 - t_n) \ln(1 - y_n)\}$$ Therefore, we can obtain: $$\nabla_{\mathbf{w}} \ln p = \frac{\partial \ln p}{\partial y_n} \cdot \frac{\partial y_n}{\partial a_n} \cdot \frac{\partial a_n}{\partial \mathbf{w}}$$ $$= \sum_{n=1}^{N} (\frac{t_n}{y_n} - \frac{1 - t_n}{1 - y_n}) \Phi'(a_n) \phi_n$$ $$= \sum_{n=1}^{N} \frac{y_n - t_n}{y_n (1 - y_n)} \Phi'(a_n) \phi_n$$ Where we have used $y = p(t = 1|a) = \Phi(a)$ and $a_n = \mathbf{w}^T \phi_n$ . According to $\Phi(a) = \int_{-\infty}^{a} \mathcal{N}(\theta|0,1) \,\mathrm{d}\theta$, we can obtain: $$\Phi'(a) = \mathcal{N}(\theta|0,1)\big|_{\theta=a} = \frac{1}{\sqrt{2\pi}}exp(-\frac{1}{2}a^2)$$ Hence, we can obtain: $$\nabla_{\mathbf{w}} \ln p = \sum_{n=1}^{N} \frac{y_n - t_n}{y_n (1 - y_n)} \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi}} \boldsymbol{\phi_n}$$ To calculate the Hessian Matrix, we need to first evaluate several derivatives. 
$$\frac{\partial}{\partial \mathbf{w}} \left\{ \frac{y_n - t_n}{y_n (1 - y_n)} \right\} = \frac{\partial}{\partial y_n} \left\{ \frac{y_n - t_n}{y_n (1 - y_n)} \right\} \cdot \frac{\partial y_n}{\partial a_n} \cdot \frac{\partial a_n}{\partial \mathbf{w}}$$ $$= \frac{y_n (1 - y_n) - (y_n - t_n)(1 - 2y_n)}{[y_n (1 - y_n)]^2} \Phi'(a_n) \phi_n$$ $$= \frac{y_n^2 + t_n - 2y_n t_n}{y_n^2 (1 - y_n)^2} \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi}} \phi_n$$ And $$\frac{\partial}{\partial \mathbf{w}} \left\{ \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi}} \right\} = \frac{\partial}{\partial a_n} \left\{ \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi}} \right\} \frac{\partial a_n}{\partial \mathbf{w}}$$ $$= -\frac{a_n}{\sqrt{2\pi}} exp(-\frac{a_n^2}{2}) \phi_n$$ Therefore, using the product rule, we can obtain: $$\frac{\partial}{\partial \mathbf{w}} \left\{ \frac{y_n - t_n}{y_n (1 - y_n)} \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi}} \right\} = \frac{\partial}{\partial \mathbf{w}} \left\{ \frac{y_n - t_n}{y_n (1 - y_n)} \right\} \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi}} + \frac{y_n - t_n}{y_n (1 - y_n)} \frac{\partial}{\partial \mathbf{w}} \left\{ \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi}} \right\} \\ = \left[ \frac{y_n^2 + t_n - 2y_n t_n}{y_n (1 - y_n)} \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi}} - a_n (y_n - t_n) \right] \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi} y_n (1 - y_n)} \phi_n$$ Finally, if we perform summation over $n$ , we can obtain the Hessian Matrix: $$\begin{split} \mathbf{H} &= \nabla \nabla_{\mathbf{w}} \ln p \\ &= \sum_{n=1}^{N} \frac{\partial}{\partial \mathbf{w}} \left\{ \frac{y_n - t_n}{y_n (1 - y_n)} \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi}} \right\} \boldsymbol{\phi_n}^T \\ &= \sum_{n=1}^{N} \left[ \frac{y_n^2 + t_n - 2y_n t_n}{y_n (1 - y_n)} \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi}} - a_n (y_n - t_n) \right] \frac{exp(-\frac{a_n^2}{2})}{\sqrt{2\pi} y_n (1 - y_n)} \boldsymbol{\phi_n} \boldsymbol{\phi_n}^T \end{split}$$ ## **Problem 4.20 Solution** We know that the Hessian Matrix is of size $MK \times MK$ , and the (j,k)th block with size $M \times M$ is given by $\nabla_{\mathbf{w}_k} \nabla_{\mathbf{w}_j} E(\mathbf{w}_1, \dots, \mathbf{w}_K) = -\sum_{n=1}^N y_{nk} (I_{kj} - y_{nj}) \phi_n \phi_n^{\mathrm{T}}.$, where j,k=1,2,...,K. Therefore, we can obtain: $$\mathbf{u}^{\mathbf{T}}\mathbf{H}\mathbf{u} = \sum_{j=1}^{K} \sum_{k=1}^{K} \mathbf{u}_{j}^{\mathbf{T}}\mathbf{H}_{j,k}\mathbf{u}_{k}$$ (\*) Where we use $\mathbf{u_k}$ to denote the kth block vector of $\mathbf{u}$ with size $M \times 1$ , and $\mathbf{H_{j,k}}$ to denote the (j,k)th block matrix of $\mathbf{H}$ with size $M \times M$ . 
Then based on $\nabla_{\mathbf{w}_k} \nabla_{\mathbf{w}_j} E(\mathbf{w}_1, \dots, \mathbf{w}_K) = -\sum_{n=1}^N y_{nk} (I_{kj} - y_{nj}) \phi_n \phi_n^{\mathrm{T}}.$, we further expand (4.110): $$(*) = \sum_{j=1}^{K} \sum_{k=1}^{K} \mathbf{u}_{j}^{\mathbf{T}} \{-\sum_{n=1}^{N} y_{nk} (I_{kj} - y_{nj}) \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T} \} \mathbf{u}_{k}$$ $$= \sum_{j=1}^{K} \sum_{k=1}^{K} \sum_{n=1}^{N} \mathbf{u}_{j}^{\mathbf{T}} \{-y_{nk} (I_{kj} - y_{nj}) \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T} \} \mathbf{u}_{k}$$ $$= \sum_{j=1}^{K} \sum_{k=1}^{K} \sum_{n=1}^{N} \mathbf{u}_{j}^{\mathbf{T}} \{-y_{nk} I_{kj} \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T} \} \mathbf{u}_{k} + \sum_{j=1}^{K} \sum_{k=1}^{K} \sum_{n=1}^{N} \mathbf{u}_{j}^{\mathbf{T}} \{y_{nk} y_{nj} \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T} \} \mathbf{u}_{k}$$ $$= \sum_{k=1}^{K} \sum_{n=1}^{N} \mathbf{u}_{k}^{\mathbf{T}} \{-y_{nk} \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T} \} \mathbf{u}_{k} + \sum_{j=1}^{K} \sum_{k=1}^{K} \sum_{n=1}^{N} y_{nj} \mathbf{u}_{j}^{\mathbf{T}} \{ \boldsymbol{\phi}_{n} \boldsymbol{\phi}_{n}^{T} \} y_{nk} \mathbf{u}_{k}$$
6,125
4
4.19
easy
Write down expressions for the gradient of the log likelihood, as well as the corresponding Hessian matrix, for the probit regression model defined in Section 4.3.5. These are the quantities that would be required to train such a model using IRLS.
Using the cross-entropy error function $E(\mathbf{w}) = -\ln p(\mathbf{t}|\mathbf{w}) = -\sum_{n=1}^{N} \{t_n \ln y_n + (1 - t_n) \ln(1 - y_n)\}$, and following Exercise 4.13, we have $$\frac{\partial E}{\partial y_n} = \frac{y_n - t_n}{y_n (1 - y_n)}. \tag{108}$$ Also $$\nabla a_n = \phi_n. \tag{109}$$ From $\operatorname{erf}(a) = \frac{2}{\sqrt{\pi}} \int_0^a \exp(-\theta^2/2) \, \mathrm{d}\theta$ and $\Phi(a) = \frac{1}{2} \left\{ 1 + \frac{1}{\sqrt{2}} \operatorname{erf}(a) \right\}.$ we have $$\frac{\partial y_n}{\partial a_n} = \frac{\partial \Phi(a_n)}{\partial a_n} = \frac{1}{\sqrt{2\pi}} e^{-a_n^2/2}. \tag{110}$$ Combining (108), (109) and (110), we get $$\nabla E = \sum_{n=1}^{N} \frac{\partial E}{\partial y_n} \frac{\partial y_n}{\partial a_n} \nabla a_n = \sum_{n=1}^{N} \frac{y_n - t_n}{y_n (1 - y_n)} \frac{1}{\sqrt{2\pi}} e^{-a_n^2/2} \phi_n. \tag{111}$$ In order to find the expression for the Hessian, it is convenient to first determine $$\frac{\partial}{\partial y_n} \frac{y_n - t_n}{y_n (1 - y_n)} = \frac{y_n (1 - y_n)}{y_n^2 (1 - y_n)^2} - \frac{(y_n - t_n)(1 - 2y_n)}{y_n^2 (1 - y_n)^2} = \frac{y_n^2 + t_n - 2y_n t_n}{y_n^2 (1 - y_n)^2}. \tag{112}$$ Then using (109)–(112) we have $$\nabla \nabla E = \sum_{n=1}^{N} \left\{ \frac{\partial}{\partial y_n} \left[ \frac{y_n - t_n}{y_n (1 - y_n)} \right] \frac{1}{\sqrt{2\pi}} e^{-a_n^2/2} \phi_n \nabla y_n \right.$$ $$\left. + \frac{y_n - t_n}{y_n (1 - y_n)} \frac{1}{\sqrt{2\pi}} e^{-a_n^2/2} (-a_n) \phi_n \nabla a_n \right\}$$ $$= \sum_{n=1}^{N} \left( \frac{y_n^2 + t_n - 2y_n t_n}{y_n (1 - y_n)} \frac{1}{\sqrt{2\pi}} e^{-a_n^2/2} - a_n (y_n - t_n) \right) \frac{e^{-a_n^2/2} \phi_n \phi_n^{\mathrm{T}}}{\sqrt{2\pi} y_n (1 - y_n)}.$$
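The gradient (111) can be checked numerically. A minimal NumPy/SciPy sketch with synthetic data (not part of the original solution); note that `norm.pdf(a)` is exactly $\mathcal{N}(a|0,1) = (2\pi)^{-1/2}e^{-a^2/2}$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
N, M = 30, 4
Phi = rng.normal(size=(N, M))
t = rng.integers(0, 2, size=N).astype(float)
w = 0.3 * rng.normal(size=M)          # keep activations moderate so y stays away from 0 and 1

def E(w):
    y = norm.cdf(Phi @ w)             # probit model: y_n = Phi(a_n)
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

# Gradient (111): sum_n (y_n - t_n) / (y_n (1 - y_n)) * N(a_n|0,1) * phi_n
a = Phi @ w
y = norm.cdf(a)
grad_analytic = Phi.T @ ((y - t) / (y * (1 - y)) * norm.pdf(a))

eps = 1e-6
grad_numeric = np.array([(E(w + eps * e) - E(w - eps * e)) / (2 * eps) for e in np.eye(M)])
print(np.allclose(grad_analytic, grad_numeric, atol=1e-5))   # True
```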
1,741
4
4.2
medium
$ Consider the minimization of a sum-of-squares error function $E_D(\widetilde{\mathbf{W}}) = \frac{1}{2} \text{Tr} \left\{ (\widetilde{\mathbf{X}} \widetilde{\mathbf{W}} - \mathbf{T})^{\mathrm{T}} (\widetilde{\mathbf{X}} \widetilde{\mathbf{W}} - \mathbf{T}) \right\}.$, and suppose that all of the target vectors in the training set satisfy a linear constraint $$\mathbf{a}^{\mathrm{T}}\mathbf{t}_{n} + b = 0 \tag{4.157}$$ where $\mathbf{t}_n$ corresponds to the $n^{\mathrm{th}}$ row of the matrix $\mathbf{T}$ in $E_D(\widetilde{\mathbf{W}}) = \frac{1}{2} \text{Tr} \left\{ (\widetilde{\mathbf{X}} \widetilde{\mathbf{W}} - \mathbf{T})^{\mathrm{T}} (\widetilde{\mathbf{X}} \widetilde{\mathbf{W}} - \mathbf{T}) \right\}.$. Show that as a consequence of this constraint, the elements of the model prediction $\mathbf{y}(\mathbf{x})$ given by the least-squares solution $\mathbf{y}(\mathbf{x}) = \widetilde{\mathbf{W}}^{\mathrm{T}} \widetilde{\mathbf{x}} = \mathbf{T}^{\mathrm{T}} \left( \widetilde{\mathbf{X}}^{\dagger} \right)^{\mathrm{T}} \widetilde{\mathbf{x}}.$ also satisfy this constraint, so that $$\mathbf{a}^{\mathrm{T}}\mathbf{y}(\mathbf{x}) + b = 0. \tag{4.158}$$ To do so, assume that one of the basis functions $\phi_0(\mathbf{x}) = 1$ so that the corresponding parameter $w_0$ plays the role of a bias.
Let's make the dependency of $E_D(\widetilde{\mathbf{W}})$ on $w_0$ explicit: $$E_D(\widetilde{\mathbf{W}}) = \frac{1}{2} \text{Tr} \{ (\mathbf{X} \mathbf{W} + \mathbf{1} \mathbf{w_0}^T - \mathbf{T})^T (\mathbf{X} \mathbf{W} + \mathbf{1} \mathbf{w_0}^T - \mathbf{T}) \}$$ Then we calculate the derivative of $E_D(\widetilde{\mathbf{W}})$ with respect to $\mathbf{w_0}$ : $$\frac{\partial E_D(\widetilde{\mathbf{W}})}{\partial \mathbf{w_0}} = N\mathbf{w_0} + (\mathbf{X}\mathbf{W} - \mathbf{T})^T \mathbf{1}$$ Where we have used the property: $$\frac{\partial}{\partial \mathbf{X}} \mathrm{Tr} \big[ (\mathbf{A} \mathbf{X} \mathbf{B} + \mathbf{C}) (\mathbf{A} \mathbf{X} \mathbf{B} + \mathbf{C})^T \big] = 2 \mathbf{A}^T (\mathbf{A} \mathbf{X} \mathbf{B} + \mathbf{C}) \mathbf{B}^T$$ (the factor of 2 cancels the factor of $1/2$ in $E_D$ ). We set the derivative equal to 0, which gives: $$\mathbf{w_0} = -\frac{1}{N}(\mathbf{XW} - \mathbf{T})^T \mathbf{1} = \bar{\mathbf{t}} - \mathbf{W}^T \bar{\mathbf{x}}$$ Where we have denoted: $$\bar{\mathbf{t}} = \frac{1}{N} \mathbf{T}^T \mathbf{1} , \quad \bar{\mathbf{x}} = \frac{1}{N} \mathbf{X}^T \mathbf{1}$$ If we substitute the equations above into $E_D(\widetilde{\mathbf{W}})$ , we can obtain: $$E_D(\widetilde{\mathbf{W}}) = \frac{1}{2} \mathrm{Tr} \big\{ (\mathbf{X} \mathbf{W} + \bar{\mathbf{T}} - \bar{\mathbf{X}} \mathbf{W} - \mathbf{T})^T (\mathbf{X} \mathbf{W} + \bar{\mathbf{T}} - \bar{\mathbf{X}} \mathbf{W} - \mathbf{T}) \big\}$$ Where we further denote $$\bar{\mathbf{T}} = \mathbf{1}\bar{\mathbf{t}}^T , \quad \bar{\mathbf{X}} = \mathbf{1}\bar{\mathbf{x}}^T$$ Then we set the derivative of $E_D(\widetilde{\mathbf{W}})$ with respect to $\mathbf{W}$ to 0, which gives: $$\mathbf{W}=\widehat{\mathbf{X}}^{\dagger}\widehat{\mathbf{T}}$$ Where we have defined: $$\hat{\mathbf{X}} = \mathbf{X} - \bar{\mathbf{X}} , \quad \hat{\mathbf{T}} = \mathbf{T} - \bar{\mathbf{T}}$$ Now, considering the prediction for a new given $\mathbf{x}$ , we have: $$\mathbf{y}(\mathbf{x}) = \mathbf{W}^T \mathbf{x} + \mathbf{w_0} = \mathbf{W}^T \mathbf{x} + \bar{\mathbf{t}} - \mathbf{W}^T \bar{\mathbf{x}} = \bar{\mathbf{t}} + \mathbf{W}^T (\mathbf{x} - \bar{\mathbf{x}})$$ If we know that $\mathbf{a}^T \mathbf{t_n} + b = 0$ holds for some $\mathbf{a}$ and $b$ , we can obtain: $$\mathbf{a}^T \mathbf{\bar{t}} = \frac{1}{N} \mathbf{a}^T \mathbf{T}^T \mathbf{1} = \frac{1}{N} \sum_{n=1}^{N} \mathbf{a}^T \mathbf{t_n} = -b$$ Therefore, $$\mathbf{a}^{T}\mathbf{y}(\mathbf{x}) = \mathbf{a}^{T}[\bar{\mathbf{t}} + \mathbf{W}^{T}(\mathbf{x} - \bar{\mathbf{x}})] = \mathbf{a}^{T}\bar{\mathbf{t}} + \mathbf{a}^{T}\mathbf{W}^{T}(\mathbf{x} - \bar{\mathbf{x}}) = -b + \mathbf{a}^{T}\widehat{\mathbf{T}}^{T}(\widehat{\mathbf{X}}^{\dagger})^{T}(\mathbf{x} - \bar{\mathbf{x}}) = -b$$ Where we have used: $$\mathbf{a}^{T} \widehat{\mathbf{T}}^{T} = \mathbf{a}^{T} (\mathbf{T} - \bar{\mathbf{T}})^{T} = \mathbf{a}^{T} (\mathbf{T} - \frac{1}{N} \mathbf{1} \mathbf{1}^{T} \mathbf{T})^{T} = \mathbf{a}^{T} \mathbf{T}^{T} - \frac{1}{N} \mathbf{a}^{T} \mathbf{T}^{T} \mathbf{1} \mathbf{1}^{T} = -b \mathbf{1}^{T} + b \mathbf{1}^{T} = \mathbf{0}^{T}$$ so that $\mathbf{a}^T\mathbf{y}(\mathbf{x}) + b = 0$ , just as required.
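This constraint-preservation property is easy to confirm numerically. The sketch below (synthetic data and an assumed constraint vector, not part of the original solution) fits the least-squares solution with a bias basis function and checks that predictions at new inputs satisfy $\mathbf{a}^T\mathbf{y}(\mathbf{x}) + b = 0$:

```python
import numpy as np

rng = np.random.default_rng(6)
N, D, K = 40, 3, 4
a = np.array([0.5, -1.0, 2.0, 1.0])              # assumed constraint a^T t + b = 0
b = 1.5

# Build targets that satisfy the constraint exactly: pick K-1 components freely,
# then solve for the last one.
T = rng.normal(size=(N, K))
T[:, -1] = -(b + T[:, :-1] @ a[:-1]) / a[-1]
assert np.allclose(T @ a + b, 0.0)

# Least-squares solution with a bias basis function phi_0(x) = 1
X_tilde = np.column_stack([np.ones(N), rng.normal(size=(N, D))])
W_tilde = np.linalg.pinv(X_tilde) @ T            # shape (D+1, K)

# Predictions at new inputs also satisfy a^T y(x) + b = 0
X_new = np.column_stack([np.ones(10), rng.normal(size=(10, D))])
Y_new = X_new @ W_tilde
print(np.allclose(Y_new @ a + b, 0.0, atol=1e-8))   # True
```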
3,167
4
4.21
easy
Show that the probit function $\Phi(a) = \int_{-\infty}^{a} \mathcal{N}(\theta|0,1) \,\mathrm{d}\theta$ and the erf function $\operatorname{erf}(a) = \frac{2}{\sqrt{\pi}} \int_0^a \exp(-\theta^2/2) \, \mathrm{d}\theta$ are related by $\Phi(a) = \frac{1}{2} \left\{ 1 + \frac{1}{\sqrt{2}} \operatorname{erf}(a) \right\}.$.
It is quite obvious. $$\begin{split} \Phi(a) &= \int_{-\infty}^{a} \mathcal{N}(\theta|0,1) d\theta \\ &= \frac{1}{2} + \int_{0}^{a} \mathcal{N}(\theta|0,1) d\theta \\ &= \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \int_{0}^{a} exp(-\theta^{2}/2) d\theta \\ &= \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \frac{\sqrt{\pi}}{2} \int_{0}^{a} \frac{2}{\sqrt{\pi}} exp(-\theta^{2}/2) d\theta \\ &= \frac{1}{2} (1 + \frac{1}{\sqrt{2}} \int_{0}^{a} \frac{2}{\sqrt{\pi}} exp(-\theta^{2}/2) d\theta) \\ &= \frac{1}{2} \{1 + \frac{1}{\sqrt{2}} erf(a) \} \end{split}$$ Where we have used $$\int_{-\infty}^{0} \mathcal{N}(\theta|0,1)d\theta = \frac{1}{2}$$
695
4
4.22
easy
Using the result $= f(\mathbf{z}_0) \frac{(2\pi)^{M/2}}{|\mathbf{A}|^{1/2}}$, derive the expression $\ln p(\mathcal{D}) \simeq \ln p(\mathcal{D}|\boldsymbol{\theta}_{\text{MAP}}) + \underbrace{\ln p(\boldsymbol{\theta}_{\text{MAP}}) + \frac{M}{2}\ln(2\pi) - \frac{1}{2}\ln|\mathbf{A}|}_{\text{Occam factor}}$ for the log model evidence under the Laplace approximation.
If we denote $f(\theta) = p(D|\theta)p(\theta)$ , we can write: $$p(D) = \int p(D|\boldsymbol{\theta})p(\boldsymbol{\theta})d\boldsymbol{\theta} = \int f(\boldsymbol{\theta})d\boldsymbol{\theta}$$ $$= f(\boldsymbol{\theta}_{MAP})\frac{(2\pi)^{M/2}}{|\mathbf{A}|^{1/2}}$$ $$= p(D|\boldsymbol{\theta}_{MAP})p(\boldsymbol{\theta}_{MAP})\frac{(2\pi)^{M/2}}{|\mathbf{A}|^{1/2}}$$ Where $\theta_{MAP}$ is the value of $\theta$ at the mode of $f(\theta)$ , **A** is the Hessian Matrix of $-\ln f(\theta)$ and we have also used $= f(\mathbf{z}_0) \frac{(2\pi)^{M/2}}{|\mathbf{A}|^{1/2}}$. Therefore, $$\ln p(D) = \ln p(D|\boldsymbol{\theta}_{MAP}) + \ln p(\boldsymbol{\theta}_{MAP}) + \frac{M}{2} \ln 2\pi - \frac{1}{2} \ln |\mathbf{A}|$$ Just as required.
769
4
4.23
medium
In this exercise, we derive the BIC result $\ln p(\mathcal{D}) \simeq \ln p(\mathcal{D}|\boldsymbol{\theta}_{\text{MAP}}) - \frac{1}{2}M \ln N$ starting from the Laplace approximation to the model evidence given by $\ln p(\mathcal{D}) \simeq \ln p(\mathcal{D}|\boldsymbol{\theta}_{\text{MAP}}) + \underbrace{\ln p(\boldsymbol{\theta}_{\text{MAP}}) + \frac{M}{2}\ln(2\pi) - \frac{1}{2}\ln|\mathbf{A}|}_{\text{Occam factor}}$. Show that if the prior over parameters is Gaussian of the form $p(\theta) = \mathcal{N}(\theta|\mathbf{m}, \mathbf{V}_0)$ , the log model evidence under the Laplace approximation takes the form $$\ln p(\mathcal{D}) \simeq \ln p(\mathcal{D}|\boldsymbol{\theta}_{\mathrm{MAP}}) - \frac{1}{2}(\boldsymbol{\theta}_{\mathrm{MAP}} - \mathbf{m})^{\mathrm{T}}\mathbf{V}_{0}^{-1}(\boldsymbol{\theta}_{\mathrm{MAP}} - \mathbf{m}) - \frac{1}{2}\ln |\mathbf{H}| + \mathrm{const}$$ where $\mathbf{H}$ is the matrix of second derivatives of the log likelihood $\ln p(\mathcal{D}|\boldsymbol{\theta})$ evaluated at $\boldsymbol{\theta}_{\text{MAP}}$ . Now assume that the prior is broad so that $\mathbf{V}_0^{-1}$ is small and the second term on the right-hand side above can be neglected. Furthermore, consider the case of independent, identically distributed data so that $\mathbf{H}$ is the sum of terms one for each data point. Show that the log model evidence can then be written approximately in the form of the BIC expression $\ln p(\mathcal{D}) \simeq \ln p(\mathcal{D}|\boldsymbol{\theta}_{\text{MAP}}) - \frac{1}{2}M \ln N$.
According to $\ln p(\mathcal{D}) \simeq \ln p(\mathcal{D}|\boldsymbol{\theta}_{\text{MAP}}) + \underbrace{\ln p(\boldsymbol{\theta}_{\text{MAP}}) + \frac{M}{2}\ln(2\pi) - \frac{1}{2}\ln|\mathbf{A}|}_{\text{Occam factor}}$, we can write: $$\begin{aligned} \ln p(D) &= \ln p(D|\boldsymbol{\theta}_{MAP}) + \ln p(\boldsymbol{\theta}_{MAP}) + \frac{M}{2} \ln 2\pi - \frac{1}{2} \ln |\mathbf{A}| \\ &= \ln p(D|\boldsymbol{\theta}_{MAP}) - \frac{M}{2} \ln 2\pi - \frac{1}{2} \ln |\mathbf{V_0}| - \frac{1}{2} (\boldsymbol{\theta}_{MAP} - \mathbf{m})^T \mathbf{V_0}^{-1} (\boldsymbol{\theta}_{MAP} - \mathbf{m}) + \frac{M}{2} \ln 2\pi - \frac{1}{2} \ln |\mathbf{A}| \\ &= \ln p(D|\boldsymbol{\theta}_{MAP}) - \frac{1}{2} \ln |\mathbf{V_0}| - \frac{1}{2} (\boldsymbol{\theta}_{MAP} - \mathbf{m})^T \mathbf{V_0}^{-1} (\boldsymbol{\theta}_{MAP} - \mathbf{m}) - \frac{1}{2} \ln |\mathbf{A}| \end{aligned}$$ Where we have used the definition of the multivariate Gaussian distribution. Then, from $\mathbf{A} = -\nabla\nabla \ln p(\mathcal{D}|\boldsymbol{\theta}_{\text{MAP}})p(\boldsymbol{\theta}_{\text{MAP}}) = -\nabla\nabla \ln p(\boldsymbol{\theta}_{\text{MAP}}|\mathcal{D}).$, we can write: $$\begin{aligned} \mathbf{A} &= -\nabla \nabla \ln p(D|\boldsymbol{\theta_{MAP}}) p(\boldsymbol{\theta_{MAP}}) \\ &= -\nabla \nabla \ln p(D|\boldsymbol{\theta_{MAP}}) - \nabla \nabla \ln p(\boldsymbol{\theta_{MAP}}) \\ &= \mathbf{H} - \nabla \nabla \left\{ -\frac{1}{2} (\boldsymbol{\theta_{MAP}} - \mathbf{m})^T \mathbf{V_0}^{-1} (\boldsymbol{\theta_{MAP}} - \mathbf{m}) \right\} \\ &= \mathbf{H} + \nabla \left\{ \mathbf{V_0}^{-1} (\boldsymbol{\theta_{MAP}} - \mathbf{m}) \right\} \\ &= \mathbf{H} + \mathbf{V_0}^{-1} \end{aligned}$$ Where we have denoted $\mathbf{H} = -\nabla \nabla \ln p(D|\boldsymbol{\theta_{MAP}})$ . Therefore, the equation above becomes: $$\begin{aligned} \ln p(D) &= \ln p(D|\boldsymbol{\theta}_{MAP}) - \frac{1}{2}(\boldsymbol{\theta}_{MAP} - \mathbf{m})^T \mathbf{V_0}^{-1}(\boldsymbol{\theta}_{MAP} - \mathbf{m}) - \frac{1}{2}\ln\left\{|\mathbf{V_0}| \cdot |\mathbf{H} + \mathbf{V_0}^{-1}|\right\} \\ &= \ln p(D|\boldsymbol{\theta}_{MAP}) - \frac{1}{2}(\boldsymbol{\theta}_{MAP} - \mathbf{m})^T \mathbf{V_0}^{-1}(\boldsymbol{\theta}_{MAP} - \mathbf{m}) - \frac{1}{2}\ln\left\{|\mathbf{V_0}\mathbf{H} + \mathbf{I}|\right\} \\ &\approx \ln p(D|\boldsymbol{\theta}_{MAP}) - \frac{1}{2}(\boldsymbol{\theta}_{MAP} - \mathbf{m})^T \mathbf{V_0}^{-1}(\boldsymbol{\theta}_{MAP} - \mathbf{m}) - \frac{1}{2}\ln|\mathbf{V_0}| - \frac{1}{2}\ln|\mathbf{H}| \\ &\approx \ln p(D|\boldsymbol{\theta}_{MAP}) - \frac{1}{2}(\boldsymbol{\theta}_{MAP} - \mathbf{m})^T \mathbf{V_0}^{-1}(\boldsymbol{\theta}_{MAP} - \mathbf{m}) - \frac{1}{2}\ln|\mathbf{H}| + \text{const} \end{aligned}$$ Where we have used the property of determinants $|A| \cdot |B| = |AB|$ , and the fact that the prior is broad, i.e., $\mathbf{I}$ can be neglected in comparison with $\mathbf{V_0}\mathbf{H}$ , so that $|\mathbf{V_0}\mathbf{H} + \mathbf{I}| \approx |\mathbf{V_0}\mathbf{H}| = |\mathbf{V_0}| \cdot |\mathbf{H}|$ . What's more, since the prior is pre-given, we can view $\mathbf{V_0}$ as constant. 
And if the data is large, we can write: $$\mathbf{H} = \sum_{n=1}^{N} \mathbf{H_n} = N\widehat{\mathbf{H}}$$ Where $\hat{\mathbf{H}} = 1/N \sum_{n=1}^{N} \mathbf{H_n}$ , and then $$\begin{split} \ln p(D) &\approx & \ln p(D|\boldsymbol{\theta}_{MAP}) - \frac{1}{2}(\boldsymbol{\theta}_{\boldsymbol{MAP}} - \mathbf{m})^T \mathbf{V_0}^{-1}(\boldsymbol{\theta}_{\boldsymbol{MAP}} - \mathbf{m}) - \frac{1}{2} \ln |\mathbf{H}| + \mathrm{const} \\ &\approx & \ln p(D|\boldsymbol{\theta}_{MAP}) - \frac{1}{2}(\boldsymbol{\theta}_{\boldsymbol{MAP}} - \mathbf{m})^T \mathbf{V_0}^{-1}(\boldsymbol{\theta}_{\boldsymbol{MAP}} - \mathbf{m}) - \frac{1}{2} \ln |N\widehat{\mathbf{H}}| + \mathrm{const} \\ &\approx & \ln p(D|\boldsymbol{\theta}_{MAP}) - \frac{1}{2}(\boldsymbol{\theta}_{\boldsymbol{MAP}} - \mathbf{m})^T \mathbf{V_0}^{-1}(\boldsymbol{\theta}_{\boldsymbol{MAP}} - \mathbf{m}) - \frac{M}{2} \ln N - \frac{1}{2} \ln |\widehat{\mathbf{H}}| + \mathrm{const} \\ &\approx & \ln p(D|\boldsymbol{\theta}_{MAP}) - \frac{M}{2} \ln N \end{split}$$ This is because when N >> 1, other terms can be neglected. ## **Problem 4.24 Solution**(Waiting for updating)
4,100
4
4.25
medium
Suppose we wish to approximate the logistic sigmoid $\sigma(a)$ defined by $\sigma(a) = \frac{1}{1 + \exp(-a)}$ by a scaled probit function $\Phi(\lambda a)$ , where $\Phi(a)$ is defined by $\Phi(a) = \int_{-\infty}^{a} \mathcal{N}(\theta|0,1) \,\mathrm{d}\theta$. Show that if $\lambda$ is chosen so that the derivatives of the two functions are equal at a=0, then $\lambda^2=\pi/8$ .
We first need to obtain the expression for the first derivative of the probit function $\Phi(\lambda a)$ with respect to $a$. According to $\Phi(a) = \int_{-\infty}^{a} \mathcal{N}(\theta|0,1) \,\mathrm{d}\theta$, we can write down: $$\frac{d}{da}\Phi(\lambda a) = \frac{d\Phi(\lambda a)}{d(\lambda a)} \cdot \frac{d\lambda a}{da} = \frac{\lambda}{\sqrt{2\pi}} exp\left\{-\frac{1}{2}(\lambda a)^2\right\}$$ Which further gives: $$\frac{d}{da}\Phi(\lambda a)\Big|_{a=0} = \frac{\lambda}{\sqrt{2\pi}}$$ And for the logistic sigmoid function, according to $\frac{d\sigma}{da} = \sigma(1 - \sigma).$, we have $$\frac{d\sigma}{da}\Big|_{a=0} = \sigma(1 - \sigma)\Big|_{a=0} = 0.5 \times 0.5 = \frac{1}{4}$$ Where we have used $\sigma(0) = 0.5$ . Setting their derivatives at the origin equal, we have: $$\frac{\lambda}{\sqrt{2\pi}} = \frac{1}{4}$$ i.e., $\lambda = \sqrt{2\pi}/4$ . And hence $\lambda^2 = \pi/8$ is obvious.
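A short numerical confirmation (not part of the original solution) that this value of $\lambda$ matches the two slopes at the origin:

```python
import numpy as np
from scipy.stats import norm

lam = np.sqrt(np.pi / 8.0)          # the value derived above

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

eps = 1e-6
# Slopes at the origin, by central finite differences
slope_sigmoid = (sigmoid(eps) - sigmoid(-eps)) / (2 * eps)               # -> 1/4
slope_probit = (norm.cdf(lam * eps) - norm.cdf(-lam * eps)) / (2 * eps)  # -> lambda / sqrt(2*pi)

print(slope_sigmoid, slope_probit)                  # both approximately 0.25
print(np.isclose(lam / np.sqrt(2 * np.pi), 0.25))   # True
```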
913
4
4.26
medium
In this exercise, we prove the relation (4.152) for the convolution of a probit function with a Gaussian distribution. To do this, show that the derivative of the left-hand side with respect to $\mu$ is equal to the derivative of the right-hand side, and then integrate both sides with respect to $\mu$ and then show that the constant of integration vanishes. Note that before differentiating the left-hand side, it is convenient first to introduce a change of variable given by $a = \mu + \sigma z$ so that the integral over $a$ is replaced by an integral over $z$. When we differentiate the left-hand side of the relation (4.152), we will then obtain a Gaussian integral over $z$ that can be evaluated analytically.
We will prove (4.152) in a simpler and more intuitive way. But firstly, we need to prove a trivial yet useful statement: suppose we have a random variable $X$ that follows a normal distribution, denoted as $X \sim \mathcal{N}(X|\mu,\sigma^2)$ ; then the probability of $X \leq x$ is $P(X \leq x) = \Phi(\frac{x-\mu}{\sigma})$ , where $x$ is a given real number. We can see this by writing down the integral: $$P(X \le x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi\sigma^2}} exp\left[-\frac{1}{2\sigma^2} (X - \mu)^2\right] dX = \int_{-\infty}^{\frac{x-\mu}{\sigma}} \frac{1}{\sqrt{2\pi\sigma^2}} exp\left(-\frac{1}{2}\gamma^2\right) \sigma d\gamma = \int_{-\infty}^{\frac{x-\mu}{\sigma}} \frac{1}{\sqrt{2\pi}} exp\left(-\frac{1}{2}\gamma^2\right) d\gamma = \Phi\left(\frac{x-\mu}{\sigma}\right)$$ Where we have changed the variable $X = \mu + \sigma \gamma$ . Now consider two independent random variables $X \sim \mathcal{N}(0, \lambda^{-2})$ and $Y \sim \mathcal{N}(\mu, \sigma^2)$ . We first calculate the conditional probability $P(X \leq Y \mid Y = a)$ : $$P(X \le Y \mid Y = a) = P(X \le a) = \Phi(\frac{a - 0}{\lambda^{-1}}) = \Phi(\lambda a)$$ Where we have used the independence of $X$ and $Y$. Together with the law of total probability, we can obtain: $$P(X \le Y) = \int_{-\infty}^{+\infty} P(X \le Y \mid Y = a)\, pdf(Y = a)\, da = \int_{-\infty}^{+\infty} \Phi(\lambda a) \mathcal{N}(a \mid \mu, \sigma^2)\, da$$ Where $pdf(\cdot)$ denotes the probability density function and we have also used $Y \sim \mathcal{N}(\mu, \sigma^2)$ . What's more, since $X$ and $Y$ are independent, $X - Y$ also follows a normal distribution, with: $$E[X - Y] = E[X] - E[Y] = 0 - \mu = -\mu$$ And $$var[X - Y] = var[X] + var[Y] = \lambda^{-2} + \sigma^{2}$$ Therefore, $X - Y \sim \mathcal{N}(-\mu, \lambda^{-2} + \sigma^2)$ and it follows that: $$P(X - Y \le 0) = \Phi(\frac{0 - (-\mu)}{\sqrt{\lambda^{-2} + \sigma^2}}) = \Phi(\frac{\mu}{\sqrt{\lambda^{-2} + \sigma^2}})$$ Since $P(X \le Y) = P(X - Y \le 0)$ , we obtain the required result.
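The relation itself can also be confirmed by numerical integration. A small SciPy sketch with arbitrary test values (not part of the original solution):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

lam, mu, sigma = 0.7, 0.4, 1.3      # arbitrary test values

# Left-hand side: E_a[ Phi(lambda * a) ] with a ~ N(mu, sigma^2)
lhs, _ = quad(lambda a: norm.cdf(lam * a) * norm.pdf(a, loc=mu, scale=sigma),
              -np.inf, np.inf)

# Right-hand side: Phi( mu / sqrt(lambda^-2 + sigma^2) )
rhs = norm.cdf(mu / np.sqrt(lam ** -2 + sigma ** 2))

print(np.isclose(lhs, rhs))   # True
```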
2,001
4
4.3
medium
Extend the result of Exercise 4.2 to show that if multiple linear constraints are satisfied simultaneously by the target vectors, then the same constraints will also be satisfied by the least-squares prediction of a linear model.
Suppose there are Q constraints in total. We can write $\mathbf{a_q}^T \mathbf{t_n} + b_q = 0$ , q = 1, 2, ..., Q for all the target vectors $\mathbf{t_n}$ , n = 1, 2, ..., N. Or alternatively, we can group them together: $$\mathbf{A}^T \mathbf{t_n} + \mathbf{b} = \mathbf{0}$$ Where $\mathbf{A}$ is the matrix whose qth column is $\mathbf{a_q}$ , and $\mathbf{b}$ is a $Q \times 1$ column vector whose qth element is $b_q$ . For every pair $\{\mathbf{a_q}, b_q\}$ we can follow the same procedure as in the previous problem to show that $\mathbf{a_q}^T\mathbf{y}(\mathbf{x}) + b_q = 0$ . In other words, the proofs do not affect each other. Therefore, it is obvious that: $$\mathbf{A}^T \mathbf{y}(\mathbf{x}) + \mathbf{b} = \mathbf{0}$$
759
4
4.4
easy
Show that maximization of the class separation criterion given by $m_k = \mathbf{w}^{\mathrm{T}} \mathbf{m}_k$ with respect to $\mathbf{w}$ , using a Lagrange multiplier to enforce the constraint $\mathbf{w}^T\mathbf{w}=1$ , leads to the result that $\mathbf{w} \propto (\mathbf{m}_2 - \mathbf{m}_1)$ .
We use a Lagrange multiplier to enforce the constraint $\mathbf{w}^T\mathbf{w} = 1$ . We now need to maximize: $$L(\lambda, \mathbf{w}) = \mathbf{w}^T (\mathbf{m}_2 - \mathbf{m}_1) + \lambda (\mathbf{w}^T \mathbf{w} - 1)$$ We calculate the derivatives: $$\frac{\partial L(\lambda, \mathbf{w})}{\partial \lambda} = \mathbf{w}^T \mathbf{w} - 1$$ And $$\frac{\partial L(\lambda, \mathbf{w})}{\partial \mathbf{w}} = \mathbf{m_2} - \mathbf{m_1} + 2\lambda \mathbf{w}$$ We set the derivatives above equal to 0, which gives: $$\mathbf{w} = -\frac{1}{2\lambda}(\mathbf{m_2} - \mathbf{m_1}) \propto (\mathbf{m_2} - \mathbf{m_1})$$
628
4
4.5
easy
By making use of $y = \mathbf{w}^{\mathrm{T}} \mathbf{x}.$, $m_k = \mathbf{w}^{\mathrm{T}} \mathbf{m}_k$, and $s_k^2 = \sum_{n \in \mathcal{C}_k} (y_n - m_k)^2$, show that the Fisher criterion $J(\mathbf{w}) = \frac{(m_2 - m_1)^2}{s_1^2 + s_2^2}.$ can be written in the form $J(\mathbf{w}) = \frac{\mathbf{w}^{\mathrm{T}} \mathbf{S}_{\mathrm{B}} \mathbf{w}}{\mathbf{w}^{\mathrm{T}} \mathbf{S}_{\mathrm{W}} \mathbf{w}}$.
We expand $J(\mathbf{w}) = \frac{(m_2 - m_1)^2}{s_1^2 + s_2^2}.$ using $m_2 - m_1 = \mathbf{w}^{\mathrm{T}}(\mathbf{m}_2 - \mathbf{m}_1)$, $m_k = \mathbf{w}^{\mathrm{T}} \mathbf{m}_k$ and $s_k^2 = \sum_{n \in \mathcal{C}_k} (y_n - m_k)^2$. $$J(\mathbf{w}) = \frac{(m_2 - m_1)^2}{s_1^2 + s_2^2} = \frac{||\mathbf{w}^T (\mathbf{m_2} - \mathbf{m_1})||^2}{\sum_{n \in C_1} (\mathbf{w}^T \mathbf{x_n} - m_1)^2 + \sum_{n \in C_2} (\mathbf{w}^T \mathbf{x_n} - m_2)^2}$$ The numerator can be further written as: numerator = $$[\mathbf{w}^T (\mathbf{m_2} - \mathbf{m_1})] [\mathbf{w}^T (\mathbf{m_2} - \mathbf{m_1})]^T = \mathbf{w}^T \mathbf{S_B} \mathbf{w}$$ Where we have defined: $$\mathbf{S}_{\mathbf{B}} = (\mathbf{m}_2 - \mathbf{m}_1)(\mathbf{m}_2 - \mathbf{m}_1)^T$$ And it is the same for the denominator: denominator = $$\sum_{n \in C_1} [\mathbf{w}^T (\mathbf{x_n} - \mathbf{m_1})]^2 + \sum_{n \in C_2} [\mathbf{w}^T (\mathbf{x_n} - \mathbf{m_2})]^2 = \mathbf{w}^T \mathbf{S_{w1}} \mathbf{w} + \mathbf{w}^T \mathbf{S_{w2}} \mathbf{w} = \mathbf{w}^T \mathbf{S_{w}} \mathbf{w}$$ Where we have defined: $$\mathbf{S_w} = \sum_{n \in C_1} (\mathbf{x_n} - \mathbf{m_1})(\mathbf{x_n} - \mathbf{m_1})^T + \sum_{n \in C_2} (\mathbf{x_n} - \mathbf{m_2})(\mathbf{x_n} - \mathbf{m_2})^T$$ Just as required.
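The equality of the two forms is easy to confirm on random data (a sketch, not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(7)
X1 = rng.normal(loc=0.0, size=(30, 2))      # class C1
X2 = rng.normal(loc=2.0, size=(40, 2))      # class C2
w = rng.normal(size=2)

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Form 1: Fisher criterion from the scalar projections y_n = w^T x_n
y1, y2 = X1 @ w, X2 @ w
J1 = (y2.mean() - y1.mean()) ** 2 / (((y1 - y1.mean()) ** 2).sum()
                                     + ((y2 - y2.mean()) ** 2).sum())

# Form 2: w^T S_B w / w^T S_W w
S_B = np.outer(m2 - m1, m2 - m1)
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
J2 = (w @ S_B @ w) / (w @ S_W @ w)

print(np.isclose(J1, J2))   # True
```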
1,354
4
4.6
easy
Using the definitions of the between-class and within-class covariance matrices given by $\mathbf{S}_{\mathrm{B}} = (\mathbf{m}_2 - \mathbf{m}_1)(\mathbf{m}_2 - \mathbf{m}_1)^{\mathrm{T}}$ and $\mathbf{S}_{W} = \sum_{n \in \mathcal{C}_{1}} (\mathbf{x}_{n} - \mathbf{m}_{1})(\mathbf{x}_{n} - \mathbf{m}_{1})^{\mathrm{T}} + \sum_{n \in \mathcal{C}_{2}} (\mathbf{x}_{n} - \mathbf{m}_{2})(\mathbf{x}_{n} - \mathbf{m}_{2})^{\mathrm{T}}.$, respectively, together with $w_0 = -\mathbf{w}^{\mathrm{T}}\mathbf{m}$ and $\mathbf{m} = \frac{1}{N} \sum_{n=1}^{N} \mathbf{x}_n = \frac{1}{N} (N_1 \mathbf{m}_1 + N_2 \mathbf{m}_2).$ and the choice of target values described in Section 4.1.5, show that the expression $\sum_{n=1}^{N} \left( \mathbf{w}^{\mathrm{T}} \mathbf{x}_n + w_0 - t_n \right) \mathbf{x}_n = 0.$ that minimizes the sum-of-squares error function can be written in the form $\left(\mathbf{S}_{\mathrm{W}} + \frac{N_1 N_2}{N} \mathbf{S}_{\mathrm{B}}\right) \mathbf{w} = N(\mathbf{m}_1 - \mathbf{m}_2)$.
Let's follow the hint, beginning by expanding $\sum_{n=1}^{N} \left( \mathbf{w}^{\mathrm{T}} \mathbf{x}_n + w_0 - t_n \right) \mathbf{x}_n = 0.$ $$\begin{aligned} (4.33) &= \sum_{n=1}^{N} (\mathbf{w}^{T} \mathbf{x}_{n}) \mathbf{x}_{n} + w_{0} \sum_{n=1}^{N} \mathbf{x}_{n} - \sum_{n=1}^{N} t_{n} \mathbf{x}_{n} \\ &= \sum_{n=1}^{N} \mathbf{x}_{n} \mathbf{x}_{n}^{T} \mathbf{w} - \mathbf{w}^{T} \mathbf{m} \sum_{n=1}^{N} \mathbf{x}_{n} - \Big(\sum_{n \in C_{1}} t_{n} \mathbf{x}_{n} + \sum_{n \in C_{2}} t_{n} \mathbf{x}_{n}\Big) \\ &= \sum_{n=1}^{N} \mathbf{x}_{n} \mathbf{x}_{n}^{T} \mathbf{w} - \mathbf{w}^{T} \mathbf{m} \cdot (N \mathbf{m}) - \Big(\sum_{n \in C_{1}} \frac{N}{N_{1}} \mathbf{x}_{n} + \sum_{n \in C_{2}} \frac{-N}{N_{2}} \mathbf{x}_{n}\Big) \\ &= \sum_{n=1}^{N} \mathbf{x}_{n} \mathbf{x}_{n}^{T} \mathbf{w} - N \mathbf{m} \mathbf{m}^{T} \mathbf{w} - N\Big(\sum_{n \in C_{1}} \frac{1}{N_{1}} \mathbf{x}_{n} - \sum_{n \in C_{2}} \frac{1}{N_{2}} \mathbf{x}_{n}\Big) \\ &= \sum_{n=1}^{N} \mathbf{x}_{n} \mathbf{x}_{n}^{T} \mathbf{w} - N \mathbf{m} \mathbf{m}^{T} \mathbf{w} - N(\mathbf{m}_{1} - \mathbf{m}_{2}) \\ &= \Big[\sum_{n=1}^{N} (\mathbf{x}_{n} \mathbf{x}_{n}^{T}) - N \mathbf{m} \mathbf{m}^{T}\Big] \mathbf{w} - N(\mathbf{m}_{1} - \mathbf{m}_{2}) \end{aligned}$$ Setting this expression equal to zero, we see that: $$\left[\sum_{n=1}^{N} (\mathbf{x_n} \mathbf{x_n}^T) - N \mathbf{m} \mathbf{m}^T\right] \mathbf{w} = N(\mathbf{m_1} - \mathbf{m_2})$$ Therefore, it remains to prove: $$\sum_{n=1}^{N} (\mathbf{x_n} \mathbf{x_n}^T) - N \mathbf{m} \mathbf{m}^T = \mathbf{S_w} + \frac{N_1 N_2}{N} \mathbf{S_B}$$ Let's expand the left side of the equation above, writing $\mathbf{m} = \frac{N_1}{N}\mathbf{m_1} + \frac{N_2}{N}\mathbf{m_2}$ and using $N_1 + N_2 = N$ : $$\begin{aligned} \text{left} &= \sum_{n=1}^{N} \mathbf{x_n}\mathbf{x_n}^T - N\Big(\frac{N_1}{N}\mathbf{m_1} + \frac{N_2}{N}\mathbf{m_2}\Big)\Big(\frac{N_1}{N}\mathbf{m_1} + \frac{N_2}{N}\mathbf{m_2}\Big)^T \\ &= \sum_{n=1}^{N} \mathbf{x_n}\mathbf{x_n}^T - \frac{N_1^2}{N}\mathbf{m_1}\mathbf{m_1}^T - \frac{N_2^2}{N}\mathbf{m_2}\mathbf{m_2}^T - \frac{N_1 N_2}{N}\big(\mathbf{m_1}\mathbf{m_2}^T + \mathbf{m_2}\mathbf{m_1}^T\big) \\ &= \sum_{n=1}^{N} \mathbf{x_n}\mathbf{x_n}^T + \Big(N_1 + \frac{N_1 N_2}{N} - 2N_1\Big)\mathbf{m_1}\mathbf{m_1}^T + \Big(N_2 + \frac{N_1 N_2}{N} - 2N_2\Big)\mathbf{m_2}\mathbf{m_2}^T - \frac{N_1 N_2}{N}\big(\mathbf{m_1}\mathbf{m_2}^T + \mathbf{m_2}\mathbf{m_1}^T\big) \\ &= \sum_{n=1}^{N} \mathbf{x_n}\mathbf{x_n}^T + (N_1 - 2N_1)\mathbf{m_1}\mathbf{m_1}^T + (N_2 - 2N_2)\mathbf{m_2}\mathbf{m_2}^T + \frac{N_1 N_2}{N}(\mathbf{m_1} - \mathbf{m_2})(\mathbf{m_1} - \mathbf{m_2})^T \\ &= \sum_{n=1}^{N} \mathbf{x_n}\mathbf{x_n}^T + N_1\mathbf{m_1}\mathbf{m_1}^T - 2N_1\mathbf{m_1}\mathbf{m_1}^T + N_2\mathbf{m_2}\mathbf{m_2}^T - 2N_2\mathbf{m_2}\mathbf{m_2}^T + \frac{N_1 N_2}{N}\mathbf{S_B} \\ &= \sum_{n \in C_1}\big(\mathbf{x_n}\mathbf{x_n}^T + \mathbf{m_1}\mathbf{m_1}^T - 2\mathbf{m_1}\mathbf{x_n}^T\big) + \sum_{n \in C_2}\big(\mathbf{x_n}\mathbf{x_n}^T + \mathbf{m_2}\mathbf{m_2}^T - 2\mathbf{m_2}\mathbf{x_n}^T\big) + \frac{N_1 N_2}{N}\mathbf{S_B} \\ &= \sum_{n \in C_1}(\mathbf{x_n} - \mathbf{m_1})(\mathbf{x_n} - \mathbf{m_1})^T + \sum_{n \in C_2}(\mathbf{x_n} - \mathbf{m_2})(\mathbf{x_n} - \mathbf{m_2})^T + \frac{N_1 N_2}{N}\mathbf{S_B} \\ &= \mathbf{S_w} + \frac{N_1 N_2}{N}\mathbf{S_B} \end{aligned}$$ Where in the second-to-last step we have used $N_1\mathbf{m_1}^T = \sum_{n \in C_1}\mathbf{x_n}^T$ (and similarly for $C_2$ ), so that after summation over each class $\sum_{n \in C_1} 2\mathbf{m_1}\mathbf{x_n}^T = \sum_{n \in C_1}(\mathbf{m_1}\mathbf{x_n}^T + \mathbf{x_n}\mathbf{m_1}^T)$ , giving the complete square. Just as required.
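The key matrix identity used above can be checked numerically (a sketch with synthetic two-class data, not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(8)
X1 = rng.normal(size=(25, 3))                # class C1
X2 = rng.normal(loc=1.0, size=(35, 3))       # class C2
X = np.vstack([X1, X2])
N1, N2, N = len(X1), len(X2), len(X)

m1, m2, m = X1.mean(axis=0), X2.mean(axis=0), X.mean(axis=0)
S_B = np.outer(m2 - m1, m2 - m1)
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

lhs = X.T @ X - N * np.outer(m, m)           # sum_n x_n x_n^T - N m m^T
rhs = S_W + (N1 * N2 / N) * S_B
print(np.allclose(lhs, rhs))                 # True
```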
3,750
4
4.7
easy
Show that the logistic sigmoid function $\sigma(a) = \frac{1}{1 + \exp(-a)}$ satisfies the property $\sigma(-a) = 1 - \sigma(a)$ and that its inverse is given by $\sigma^{-1}(y) = \ln \{y/(1-y)\}$ .
This problem is quite simple. We can solve it by definition. We know that logistic sigmoid function has the form: $$\sigma(a) = \frac{1}{1 + exp(-a)}$$ Therefore, we can obtain: $$\sigma(a) + \sigma(-a) = \frac{1}{1 + exp(-a)} + \frac{1}{1 + exp(a)}$$ $$= \frac{2 + exp(a) + exp(-a)}{[1 + exp(-a)][1 + exp(a)]}$$ $$= \frac{2 + exp(a) + exp(-a)}{2 + exp(a) + exp(-a)} = 1$$ Next we exchange the dependent and independent variables to obtain its inverse. $$a = \frac{1}{1 + exp(-y)}$$ We first rearrange the equation above, which gives: $$exp(-y) = \frac{1-a}{a}$$ Then we calculate the logarithm for both sides, which gives: $$y = \ln(\frac{a}{1-a})$$ Just as required.
680
4
4.8
easy
Using $= \frac{1}{1 + \exp(-a)} = \sigma(a)$ and $a = \ln \frac{p(\mathbf{x}|\mathcal{C}_1)p(\mathcal{C}_1)}{p(\mathbf{x}|\mathcal{C}_2)p(\mathcal{C}_2)}$, derive the result $p(\mathcal{C}_1|\mathbf{x}) = \sigma(\mathbf{w}^{\mathrm{T}}\mathbf{x} + w_0)$ for the posterior class probability in the two-class generative model with Gaussian densities, and verify the results $\mathbf{w} = \mathbf{\Sigma}^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)$ and $w_0 = -\frac{1}{2} \boldsymbol{\mu}_1^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu}_1 + \frac{1}{2} \boldsymbol{\mu}_2^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu}_2 + \ln \frac{p(\mathcal{C}_1)}{p(\mathcal{C}_2)}.$ for the parameters $\mathbf{w}$ and $w_0$ .
According to $a = \ln \frac{p(\mathbf{x}|\mathcal{C}_1)p(\mathcal{C}_1)}{p(\mathbf{x}|\mathcal{C}_2)p(\mathcal{C}_2)}$ and $p(\mathbf{x}|\mathcal{C}_k) = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf{\Sigma}|^{1/2}} \exp\left\{-\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_k)^{\mathrm{T}} \mathbf{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}_k)\right\}.$, we can write: $$a = \ln \frac{p(\mathbf{x}|C_1)p(C_1)}{p(\mathbf{x}|C_2)p(C_2)}$$ $$= \ln p(\mathbf{x}|C_1) - \ln p(\mathbf{x}|C_2) + \ln \frac{p(C_1)}{p(C_2)}$$ $$= -\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_1)^T \boldsymbol{\Sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu}_1) + \frac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_2)^T \boldsymbol{\Sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu}_2) + \ln \frac{p(C_1)}{p(C_2)}$$ $$= (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)^T\boldsymbol{\Sigma}^{-1}\mathbf{x} - \frac{1}{2}\boldsymbol{\mu}_1^T \boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_1 + \frac{1}{2}\boldsymbol{\mu}_2^T \boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}_2 + \ln \frac{p(C_1)}{p(C_2)}$$ $$= \mathbf{w}^T \mathbf{x} + w_0$$ Where in the second-to-last step we expand the quadratic forms and note that the quadratic terms in $\mathbf{x}$ cancel because the two classes share the same covariance $\boldsymbol{\Sigma}$ , leaving only terms linear and constant in $\mathbf{x}$ . We have also defined: $$\mathbf{w} = \mathbf{\Sigma}^{-1}(\boldsymbol{\mu_1} - \boldsymbol{\mu_2})$$ And $$w_0 = -\frac{1}{2} \boldsymbol{\mu_1}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu_1} + \frac{1}{2} \boldsymbol{\mu_2}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu_2} + \ln \frac{p(C_1)}{p(C_2)}$$ Finally, since $p(C_1|\mathbf{x}) = \sigma(a)$ as stated in $= \frac{1}{1 + \exp(-a)} = \sigma(a)$, we have $p(C_1|\mathbf{x}) = \sigma(\mathbf{w}^T\mathbf{x} + w_0)$ just as required.
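As a numerical confirmation (not part of the original solution), the sketch below compares the posterior computed directly from Bayes' theorem with the sigmoid of the linear function defined by $\mathbf{w}$ and $w_0$, for made-up Gaussian class-conditional densities sharing one covariance:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(9)
D = 3
mu1, mu2 = rng.normal(size=D), rng.normal(size=D)
A = rng.normal(size=(D, D))
Sigma = A @ A.T + D * np.eye(D)        # shared, positive-definite covariance
p1, p2 = 0.3, 0.7                      # class priors

Sigma_inv = np.linalg.inv(Sigma)
w = Sigma_inv @ (mu1 - mu2)
w0 = -0.5 * mu1 @ Sigma_inv @ mu1 + 0.5 * mu2 @ Sigma_inv @ mu2 + np.log(p1 / p2)

x = rng.normal(size=D)

# Posterior via Bayes' theorem
num = multivariate_normal.pdf(x, mean=mu1, cov=Sigma) * p1
den = num + multivariate_normal.pdf(x, mean=mu2, cov=Sigma) * p2
posterior_bayes = num / den

# Posterior via the sigmoid of a linear function of x
posterior_sigmoid = 1.0 / (1.0 + np.exp(-(w @ x + w0)))
print(np.isclose(posterior_bayes, posterior_sigmoid))   # True
```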
1,705
4
4.9
easy
Consider a generative classification model for K classes defined by prior class probabilities $p(\mathcal{C}_k) = \pi_k$ and general class-conditional densities $p(\phi|\mathcal{C}_k)$ where $\phi$ is the input feature vector. Suppose we are given a training data set $\{\phi_n, \mathbf{t}_n\}$ where $n=1,\ldots,N$ , and $\mathbf{t}_n$ is a binary target vector of length K that uses the 1-of-K coding scheme, so that it has components $t_{nj} = I_{jk}$ if pattern n is from class $\mathcal{C}_k$ . Assuming that the data points are drawn independently from this model, show that the maximum-likelihood solution for the prior probabilities is given by $$\pi_k = \frac{N_k}{N} \tag{4.159}$$ where $N_k$ is the number of data points assigned to class $C_k$ .
We begin by writing down the likelihood function. $$p(\{\phi_{\mathbf{n}}, t_n\} | \pi_1, \pi_2, ..., \pi_K) = \prod_{n=1}^{N} \prod_{k=1}^{K} [p(\phi_{\mathbf{n}} | C_k) p(C_k)]^{t_{nk}} = \prod_{n=1}^{N} \prod_{k=1}^{K} [\pi_k p(\phi_{\mathbf{n}} | C_k)]^{t_{nk}}$$ Hence we can obtain the expression for the log likelihood: $$\ln p = \sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \left[ \ln \pi_k + \ln p(\boldsymbol{\phi_n}|C_k) \right] \propto \sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \ln \pi_k$$ Since there is a constraint on $\pi_k$ , we need to add a Lagrange multiplier to the expression, which becomes: $$L = \sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \ln \pi_k + \lambda (\sum_{k=1}^{K} \pi_k - 1)$$ We calculate the derivative of the expression above with respect to $\pi_k$ : $$\frac{\partial L}{\partial \pi_k} = \sum_{n=1}^{N} \frac{t_{nk}}{\pi_k} + \lambda$$ And if we set the derivative equal to 0, we can obtain: $$\pi_k = -\left(\sum_{n=1}^N t_{nk}\right)/\lambda = -\frac{N_k}{\lambda} \tag{*}$$ And if we perform summation on both sides with respect to k, we can see that: $$1 = -(\sum_{k=1}^{K} N_k)/\lambda = -\frac{N}{\lambda}$$ Which gives $\lambda = -N$ , and substituting it into (\*), we obtain $\pi_k = N_k/N$ . ## **Problem 4.10 Solution** This time, we focus on the terms which depend on $\boldsymbol{\mu_k}$ and $\boldsymbol{\Sigma}$ in the log likelihood. $$\ln p = \sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \left[ \ln \pi_k + \ln p(\boldsymbol{\phi_n}|C_k) \right] \propto \sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \ln p(\boldsymbol{\phi_n}|C_k)$$ Provided $p(\phi|C_k) = \mathcal{N}(\phi|\boldsymbol{\mu_k}, \boldsymbol{\Sigma})$ , we can further derive: $$\ln p \propto \sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \left[ -\frac{1}{2} \ln |\mathbf{\Sigma}| - \frac{1}{2} (\boldsymbol{\phi_n} - \boldsymbol{\mu_k})^T \mathbf{\Sigma}^{-1} (\boldsymbol{\phi_n} - \boldsymbol{\mu_k}) \right]$$ We first calculate the derivative of the expression above with respect to $\boldsymbol{\mu_k}$ : $$\frac{\partial \ln p}{\partial \boldsymbol{\mu_k}} = \sum_{n=1}^{N} t_{nk} \boldsymbol{\Sigma}^{-1} (\boldsymbol{\phi_n} - \boldsymbol{\mu_k})$$ We set the derivative equal to 0, which gives: $$\sum_{n=1}^{N} t_{nk} \boldsymbol{\Sigma}^{-1} \boldsymbol{\phi_n} = \sum_{n=1}^{N} t_{nk} \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu_k} = N_k \boldsymbol{\Sigma}^{-1} \boldsymbol{\mu_k}$$ Therefore, if we left-multiply both sides by $\boldsymbol{\Sigma}$ and divide by $N_k$ , we will obtain $\mu_k = \frac{1}{N_k} \sum_{n=1}^{N} t_{nk} \phi_n$. 
Now let's calculate the derivative of $\ln p$ with regard to $\Sigma$ , which gives: $$\frac{\partial \ln p}{\partial \boldsymbol{\Sigma}} = \sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \left( -\frac{1}{2} \boldsymbol{\Sigma}^{-1} \right) - \frac{1}{2} \frac{\partial}{\partial \boldsymbol{\Sigma}} \sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} (\boldsymbol{\phi_n} - \boldsymbol{\mu_k}) \boldsymbol{\Sigma}^{-1} (\boldsymbol{\phi_n} - \boldsymbol{\mu_k})^T$$ $$= \sum_{n=1}^{N} \sum_{k=1}^{K} -\frac{t_{nk}}{2} \boldsymbol{\Sigma}^{-1} - \frac{1}{2} \frac{\partial}{\partial \boldsymbol{\Sigma}} \sum_{k=1}^{K} \sum_{n=1}^{N} t_{nk} (\boldsymbol{\phi_n} - \boldsymbol{\mu_k}) \boldsymbol{\Sigma}^{-1} (\boldsymbol{\phi_n} - \boldsymbol{\mu_k})^T$$ $$= \sum_{n=1}^{N} -\frac{1}{2} \boldsymbol{\Sigma}^{-1} - \frac{1}{2} \frac{\partial}{\partial \boldsymbol{\Sigma}} \sum_{k=1}^{K} N_k \operatorname{Tr}(\boldsymbol{\Sigma}^{-1} \mathbf{S_k})$$ $$= -\frac{N}{2} \boldsymbol{\Sigma}^{-1} + \frac{1}{2} \sum_{k=1}^{K} N_k \boldsymbol{\Sigma}^{-1} \mathbf{S_k} \boldsymbol{\Sigma}^{-1}$$ Where we have denoted $$\mathbf{S_k} = \frac{1}{N_k} \sum_{n=1}^{N} t_{nk} (\boldsymbol{\phi_n} - \boldsymbol{\mu_k}) (\boldsymbol{\phi_n} - \boldsymbol{\mu_k})^T$$ Now we set the derivative equals to 0, and rearrange the equation, which gives: $$\mathbf{\Sigma} = \sum_{k=1}^{K} \frac{N_k}{N} \mathbf{S_k}$$
3,903
5
5.1
medium
Consider a two-layer network function of the form $y_k(\mathbf{x}, \mathbf{w}) = \sigma \left( \sum_{j=1}^M w_{kj}^{(2)} h \left( \sum_{i=1}^D w_{ji}^{(1)} x_i + w_{j0}^{(1)} \right) + w_{k0}^{(2)} \right)$ in which the hidden-unit nonlinear activation functions $g(\cdot)$ are given by logistic sigmoid functions of the form $$\sigma(a) = \{1 + \exp(-a)\}^{-1}.$$ Show that there exists an equivalent network, which computes exactly the same function, but with hidden unit activation functions given by $\tanh(a)$ where the $\tanh$ function is defined by $\tanh(a) = \frac{e^a - e^{-a}}{e^a + e^{-a}}.$. Hint: first find the relation between $\sigma(a)$ and $\tanh(a)$ , and then show that the parameters of the two networks differ by linear transformations.
Based on the definition of $tanh(\cdot)$ , we can obtain: $$tanh(a) = \frac{e^{a} - e^{-a}}{e^{a} + e^{-a}}$$ $$= -1 + \frac{2e^{a}}{e^{a} + e^{-a}}$$ $$= -1 + 2\frac{1}{1 + e^{-2a}}$$ $$= 2\sigma(2a) - 1$$ If we have parameters $w_{ji}^{(1s)}$ , $w_{j0}^{(1s)}$ and $w_{kj}^{(2s)}$ , $w_{k0}^{(2s)}$ for a network whose hidden units use the logistic sigmoid function as activation, and $w_{ji}^{(1t)}$ , $w_{j0}^{(1t)}$ and $w_{kj}^{(2t)}$ , $w_{k0}^{(2t)}$ for another one using $tanh(\cdot)$ , then for the network using $tanh(\cdot)$ as activation we can write down the following expression by using (5.4): $$\begin{split} a_k^{(t)} &= \sum_{j=1}^M w_{kj}^{(2t)} tanh(a_j^{(t)}) + w_{k0}^{(2t)} \\ &= \sum_{j=1}^M w_{kj}^{(2t)} [2\sigma(2a_j^{(t)}) - 1] + w_{k0}^{(2t)} \\ &= \sum_{j=1}^M 2w_{kj}^{(2t)} \sigma(2a_j^{(t)}) + \left[ -\sum_{j=1}^M w_{kj}^{(2t)} + w_{k0}^{(2t)} \right] \end{split}$$ What's more, we also have: $$a_k^{(s)} = \sum_{j=1}^M w_{kj}^{(2s)} \sigma(a_j^{(s)}) + w_{k0}^{(2s)}$$ To make the two networks equivalent, i.e., $a_k^{(s)} = a_k^{(t)}$ , we should make sure: $$\begin{cases} a_{j}^{(s)} = 2a_{j}^{(t)} \\ w_{kj}^{(2s)} = 2w_{kj}^{(2t)} \\ w_{k0}^{(2s)} = -\sum_{j=1}^{M} w_{kj}^{(2t)} + w_{k0}^{(2t)} \end{cases}$$ Note that the first condition can be achieved by simply enforcing: $$w_{ji}^{(1s)} = 2w_{ji}^{(1t)}$$ , and $w_{j0}^{(1s)} = 2w_{j0}^{(1t)}$ . Therefore, these two networks are equivalent under a linear transformation.
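The linear transformation derived above is easy to verify numerically. The sketch below (my own toy network; all sizes and weights are arbitrary) starts from a random network with logistic-sigmoid hidden units, builds the tanh network via the inverse mapping $w_{ji}^{(1t)} = w_{ji}^{(1s)}/2$, $w_{j0}^{(1t)} = w_{j0}^{(1s)}/2$, $w_{kj}^{(2t)} = w_{kj}^{(2s)}/2$, $w_{k0}^{(2t)} = w_{k0}^{(2s)} + \frac{1}{2}\sum_j w_{kj}^{(2s)}$, and confirms that the two networks compute exactly the same function.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def net(x, W1, b1, W2, b2, hidden):
    """Two-layer network y = sigmoid(W2 @ hidden(W1 @ x + b1) + b2)."""
    return sigmoid(W2 @ hidden(W1 @ x + b1) + b2)

rng = np.random.default_rng(1)
D, M, K = 3, 4, 2
W1, b1 = rng.standard_normal((M, D)), rng.standard_normal(M)
W2, b2 = rng.standard_normal((K, M)), rng.standard_normal(K)

# Inverse of the transformation above: from the sigmoid network to the tanh network.
W1t, b1t = W1 / 2, b1 / 2
W2t, b2t = W2 / 2, b2 + W2.sum(axis=1) / 2

for _ in range(5):
    x = rng.standard_normal(D)
    assert np.allclose(net(x, W1, b1, W2, b2, sigmoid), net(x, W1t, b1t, W2t, b2t, np.tanh))
```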
1,484
5
5.10
easy
Consider a Hessian matrix $\mathbf{H}$ with eigenvector equation $\mathbf{H}\mathbf{u}_i = \lambda_i \mathbf{u}_i$. By setting the vector $\mathbf{v}$ in $\mathbf{v}^{\mathrm{T}}\mathbf{H}\mathbf{v} = \sum_{i} c_{i}^{2} \lambda_{i}$ equal to each of the eigenvectors $\mathbf{u}_i$ in turn, show that $\mathbf{H}$ is positive definite if, and only if, all of its eigenvalues are positive.
It is obvious. Suppose **H** is positive definite, i.e., (5.37) holds. We set **v** equal to the eigenvector of **H**, i.e., $\mathbf{v} = \mathbf{u_i}$ , which gives: $$\mathbf{v}^T \mathbf{H} \mathbf{v} = \mathbf{v}^T (\mathbf{H} \mathbf{v}) = \mathbf{u_i}^T \lambda_i \mathbf{u_i} = \lambda_i ||\mathbf{u_i}||^2$$ Therefore, every $\lambda_i$ should be positive. On the other hand, if all the eigenvalues $\lambda_i$ are positive, from $\mathbf{v} = \sum_{i} c_i \mathbf{u}_i.$ and $\mathbf{v}^{\mathrm{T}}\mathbf{H}\mathbf{v} = \sum_{i} c_{i}^{2} \lambda_{i}$, we see that **H** is positive definite.
627
5
5.11
medium
Consider a quadratic error function defined by $E(\mathbf{w}) = E(\mathbf{w}^*) + \frac{1}{2}(\mathbf{w} - \mathbf{w}^*)^{\mathrm{T}}\mathbf{H}(\mathbf{w} - \mathbf{w}^*)$, in which the Hessian matrix **H** has an eigenvalue equation given by $\mathbf{H}\mathbf{u}_i = \lambda_i \mathbf{u}_i$. Show that the contours of constant error are ellipses whose axes are aligned with the eigenvectors $\mathbf{u}_i$ , with lengths that are inversely proportional to the square root of the corresponding eigenvalues $\lambda_i$ .
It is obvious. We follow $\mathbf{w} - \mathbf{w}^* = \sum_i \alpha_i \mathbf{u}_i.$ and then write the error function in the form of $E(\mathbf{w}) = E(\mathbf{w}^*) + \frac{1}{2} \sum_{i} \lambda_i \alpha_i^2.$. To obtain the contour, we set $E(\mathbf{w})$ equal to a constant C. $$E(\mathbf{w}) = E(\mathbf{w}^*) + \frac{1}{2} \sum_{i} \lambda_i \alpha_i^2 = C$$ We rearrange the equation above, and then obtain: $$\sum_{i} \lambda_i \alpha_i^2 = B$$ Where $B = 2C - 2E(\mathbf{w}^*)$ is a constant. Therefore, the contours of constant error are ellipses whose axes are aligned with the eigenvectors $\mathbf{u_i}$ of the Hessian matrix $\mathbf{H}$ . The length of the jth axis is obtained by setting $\alpha_i = 0$ for all $i \neq j$ : $$\alpha_j = \sqrt{\frac{B}{\lambda_j}}$$ In other words, the length is inversely proportional to the square root of the corresponding eigenvalue $\lambda_j$ .
936
5
5.12
medium
By considering the local Taylor expansion $E(\mathbf{w}) = E(\mathbf{w}^*) + \frac{1}{2}(\mathbf{w} - \mathbf{w}^*)^{\mathrm{T}}\mathbf{H}(\mathbf{w} - \mathbf{w}^*)$ of an error function about a stationary point $\mathbf{w}^*$ , show that the necessary and sufficient condition for the stationary point to be a local minimum of the error function is that the Hessian matrix $\mathbf{H}$ , defined by $(\mathbf{H})_{ij} \equiv \left. \frac{\partial E}{\partial w_i \partial w_j} \right|_{\mathbf{w} = \widehat{\mathbf{w}}}.$ with $\hat{\mathbf{w}} = \mathbf{w}^*$ , be positive definite.
If **H** is positive definite, we know the second term on the right side of $E(\mathbf{w}) = E(\mathbf{w}^*) + \frac{1}{2}(\mathbf{w} - \mathbf{w}^*)^{\mathrm{T}}\mathbf{H}(\mathbf{w} - \mathbf{w}^*)$ will be positive for arbitrary $\mathbf{w} \neq \mathbf{w}^*$ . Therefore, $E(\mathbf{w}^*)$ is a local minimum. On the other hand, if $\mathbf{w}^*$ is a local minimum, we have $$E(\mathbf{w}^*) - E(\mathbf{w}) = -\frac{1}{2}(\mathbf{w} - \mathbf{w}^*)^T \mathbf{H}(\mathbf{w} - \mathbf{w}^*) < 0$$ for every $\mathbf{w} \neq \mathbf{w}^*$ in a neighbourhood of $\mathbf{w}^*$ . Since rescaling $\mathbf{w} - \mathbf{w}^*$ only rescales the quadratic form, this means that for arbitrary $\mathbf{w} \neq \mathbf{w}^*$ we have $(\mathbf{w} - \mathbf{w}^*)^T \mathbf{H} (\mathbf{w} - \mathbf{w}^*) > 0$ , and according to the previous problem, this means $\mathbf{H}$ is positive definite.
708
5
5.13
easy
Show that as a consequence of the symmetry of the Hessian matrix $\mathbf{H}$ , the number of independent elements in the quadratic error function $E(\mathbf{w}) \simeq E(\widehat{\mathbf{w}}) + (\mathbf{w} - \widehat{\mathbf{w}})^{\mathrm{T}} \mathbf{b} + \frac{1}{2} (\mathbf{w} - \widehat{\mathbf{w}})^{\mathrm{T}} \mathbf{H} (\mathbf{w} - \widehat{\mathbf{w}})$ is given by W(W+3)/2.
It is obvious. Suppose that there are W adaptive parameters in the network. Therefore, **b** has W independent parameters. Since **H** is symmetric, there should be W(W+1)/2 independent parameters in it. Therefore, there are W + W(W+1)/2 = W(W+3)/2 parameters in total.
269
5
5.14
easy
By making a Taylor expansion, verify that the terms that are $O(\epsilon)$ cancel on the right-hand side of $\frac{\partial E_n}{\partial w_{ji}} = \frac{E_n(w_{ji} + \epsilon) - E_n(w_{ji} - \epsilon)}{2\epsilon} + O(\epsilon^2).$.
It is obvious. Since we have $$E_n(w_{ji} + \epsilon) = E_n(w_{ji}) + \epsilon E'_n(w_{ji}) + \frac{\epsilon^2}{2} E''_n(w_{ji}) + O(\epsilon^3)$$ And $$E_n(w_{ji} - \epsilon) = E_n(w_{ji}) - \epsilon E'_n(w_{ji}) + \frac{\epsilon^2}{2} E''_n(w_{ji}) + O(\epsilon^3)$$ We combine those two equations, which gives, $$E_n(w_{ji} + \epsilon) - E_n(w_{ji} - \epsilon) = 2\epsilon E'_n(w_{ji}) + O(\epsilon^3)$$ Rearranging the equation above, we obtain what has been required.
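The same cancellation can be seen numerically: for the toy error function below (my own choice, not from the text), the central-difference error shrinks roughly like $\epsilon^2$ while the one-sided forward difference only shrinks like $\epsilon$.

```python
import numpy as np

def E(w):                      # toy error function
    return np.sin(w) + 0.1 * w ** 3

def dE(w):                     # its exact derivative
    return np.cos(w) + 0.3 * w ** 2

w = 0.7
for eps in [1e-1, 1e-2, 1e-3]:
    central = (E(w + eps) - E(w - eps)) / (2 * eps)
    forward = (E(w + eps) - E(w)) / eps
    print(f"eps={eps:.0e}  central error={abs(central - dE(w)):.2e}  "
          f"forward error={abs(forward - dE(w)):.2e}")
```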
476
5
5.15
medium
In Section 5.3.4, we derived a procedure for evaluating the Jacobian matrix of a neural network using a backpropagation procedure. Derive an alternative formalism for finding the Jacobian based on *forward propagation* equations.
It is obvious. The back propagation formalism starts by performing the summation near the input, as shown in $= \sum_j w_{ji} \frac{\partial y_k}{\partial a_j}$. By symmetry, the forward propagation formalism should start near the output. $$J_{ki} = \frac{\partial y_k}{\partial x_i} = \frac{\partial h(a_k)}{\partial x_i} = h'(a_k) \frac{\partial a_k}{\partial x_i} \tag{*}$$ Where $h(\cdot)$ is the activation function at the output node $a_k$ . Considering all the units j, which have links to unit k: $$\frac{\partial a_k}{\partial x_i} = \sum_j \frac{\partial a_k}{\partial a_j} \frac{\partial a_j}{\partial x_i} = \sum_j w_{kj} h'(a_j) \frac{\partial a_j}{\partial x_i} \tag{**}$$ Where we have used: $$a_k = \sum_j w_{kj} z_j, \quad z_j = h(a_j)$$ It is similar for $\partial a_j/\partial x_i$ . In this way we have obtained a recursive formula starting from the input nodes: $$\frac{\partial a_l}{\partial x_i} = \begin{cases} w_{li}, & \text{if there is a link from input unit } i \text{ to } l\\ 0, & \text{if there isn't a link from input unit } i \text{ to } l \end{cases}$$ Using the recursive formula (\*\*) and then (\*), we can obtain the Jacobian matrix.
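A minimal sketch of this forward-propagation formalism (my own toy two-layer network with tanh activations at both the hidden and output units) is given below: it propagates $\partial a_j/\partial x_i$ forward using (\*\*) and then applies (\*), and compares the resulting Jacobian with central differences.

```python
import numpy as np

rng = np.random.default_rng(2)
D, M, K = 3, 5, 2
W1, W2 = rng.standard_normal((M, D)), rng.standard_normal((K, M))

def forward(x):
    a = W1 @ x               # hidden pre-activations a_j
    z = np.tanh(a)           # hidden activations z_j = h(a_j)
    y = np.tanh(W2 @ z)      # outputs y_k = h(a_k), with a_k = sum_j w_kj z_j
    return a, z, y

def jacobian_forward(x):
    a, z, y = forward(x)
    da_dx = W1                                        # d a_j / d x_i at the first layer
    dak_dx = W2 @ ((1 - z ** 2)[:, None] * da_dx)     # recursion (**): d a_k / d x_i
    return (1 - y ** 2)[:, None] * dak_dx             # (*): J_ki = h'(a_k) d a_k / d x_i

x = rng.standard_normal(D)
J = jacobian_forward(x)
eps = 1e-6
J_fd = np.column_stack([(forward(x + eps * e)[2] - forward(x - eps * e)[2]) / (2 * eps)
                        for e in np.eye(D)])
assert np.allclose(J, J_fd, atol=1e-6)
```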
1,199
5
5.16
easy
The outer product approximation to the Hessian matrix for a neural network using a sum-of-squares error function is given by $\mathbf{H} \simeq \sum_{n=1}^{N} \mathbf{b}_n \mathbf{b}_n^{\mathrm{T}}$. Extend this result to the case of multiple outputs.
It is obvious. We begin by writing down the error function. $$E = \frac{1}{2} \sum_{n=1}^{N} ||\mathbf{y_n} - \mathbf{t_n}||^2 = \frac{1}{2} \sum_{n=1}^{N} \sum_{m=1}^{M} (y_{n,m} - t_{n,m})^2$$ Where the subscript *m* denotes the *m*th element of the vector. Then we can write down the Hessian matrix as before. $$\mathbf{H} = \nabla \nabla E = \sum_{n=1}^{N} \sum_{m=1}^{M} \nabla y_{n,m} \nabla y_{n,m}^{T} + \sum_{n=1}^{N} \sum_{m=1}^{M} (y_{n,m} - t_{n,m}) \nabla \nabla y_{n,m}$$ Similarly, we now know that the Hessian matrix can be approximated as: $$\mathbf{H} \simeq \sum_{n=1}^{N} \sum_{m=1}^{M} \mathbf{b}_{n,m} \mathbf{b}_{n,m}^{T}$$ Where we have defined: $$\mathbf{b}_{n,m} = \nabla y_{n,m}$$
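For a model that is linear in its parameters the term $\nabla\nabla y_{n,m}$ vanishes, so the multiple-output outer-product expression is exact; this gives an easy numerical check. The linear model $\mathbf{y}_n = \mathbf{W}\mathbf{x}_n$ and all sizes below are my own toy choices.

```python
import numpy as np

rng = np.random.default_rng(8)
N, D, M = 20, 3, 2
X = rng.standard_normal((N, D))
T = rng.standard_normal((N, M))
w = rng.standard_normal(M * D)          # flattened weight matrix W (row-major)

def E(w):
    Y = X @ w.reshape(M, D).T           # y_{n,m} = w_m . x_n
    return 0.5 * np.sum((Y - T) ** 2)

# b_{n,m} = grad_w y_{n,m}: for a linear model this is x_n placed in the m-th block.
H_outer = np.zeros((M * D, M * D))
for n in range(N):
    for m in range(M):
        b = np.zeros(M * D)
        b[m * D:(m + 1) * D] = X[n]
        H_outer += np.outer(b, b)

# Finite-difference Hessian of the sum-of-squares error.
eps, I = 1e-5, np.eye(M * D)
H_fd = np.zeros((M * D, M * D))
for i in range(M * D):
    for j in range(M * D):
        ei, ej = eps * I[i], eps * I[j]
        H_fd[i, j] = (E(w + ei + ej) - E(w + ei - ej)
                      - E(w - ei + ej) + E(w - ei - ej)) / (4 * eps ** 2)
assert np.allclose(H_outer, H_fd, atol=1e-4)
```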
738
5
5.17
easy
Consider a squared loss function of the form $$E = \frac{1}{2} \iint \left\{ y(\mathbf{x}, \mathbf{w}) - t \right\}^2 p(\mathbf{x}, t) \, d\mathbf{x} \, dt$$ $E = \frac{1}{2} \iint \left\{ y(\mathbf{x}, \mathbf{w}) - t \right\}^2 p(\mathbf{x}, t) \, d\mathbf{x} \, dt$ where $y(\mathbf{x}, \mathbf{w})$ is a parametric function such as a neural network. The result $y(\mathbf{x}) = \frac{\int tp(\mathbf{x}, t) dt}{p(\mathbf{x})} = \int tp(t|\mathbf{x}) dt = \mathbb{E}_t[t|\mathbf{x}]$ shows that the function $y(\mathbf{x}, \mathbf{w})$ that minimizes this error is given by the conditional expectation of t given $\mathbf{x}$ . Use this result to show that the second derivative of E with respect to two elements $w_r$ and $w_s$ of the vector $\mathbf{w}$ , is given by $$\frac{\partial^2 E}{\partial w_r \partial w_s} = \int \frac{\partial y}{\partial w_r} \frac{\partial y}{\partial w_s} p(\mathbf{x}) \, d\mathbf{x}.$$ $\frac{\partial^2 E}{\partial w_r \partial w_s} = \int \frac{\partial y}{\partial w_r} \frac{\partial y}{\partial w_s} p(\mathbf{x}) \, d\mathbf{x}.$ Note that, for a finite sample from $p(\mathbf{x})$ , we obtain $\mathbf{H} \simeq \sum_{n=1}^{N} \mathbf{b}_n \mathbf{b}_n^{\mathrm{T}}$.
It is obvious. $$\frac{\partial^{2} E}{\partial w_{r} \partial w_{s}} = \frac{\partial}{\partial w_{r}} \frac{1}{2} \int \int 2(y-t) \frac{\partial y}{\partial w_{s}} p(\mathbf{x}, t) d\mathbf{x} dt = \int \int \left[ (y-t) \frac{\partial^{2} y}{\partial w_{r} \partial w_{s}} + \frac{\partial y}{\partial w_{s}} \frac{\partial y}{\partial w_{r}} \right] p(\mathbf{x}, t) d\mathbf{x} dt$$ Since we know that $$\int \int (y-t) \frac{\partial^{2} y}{\partial w_r \partial w_s} p(\mathbf{x}, t) d\mathbf{x} dt = \int \int (y-t) \frac{\partial^{2} y}{\partial w_r \partial w_s} p(t|\mathbf{x}) p(\mathbf{x}) d\mathbf{x} dt = \int \frac{\partial^{2} y}{\partial w_r \partial w_s} \left\{ \int (y-t) p(t|\mathbf{x}) dt \right\} p(\mathbf{x}) d\mathbf{x} = 0$$ Note that in the last step, we have used $y = \int t p(t|\mathbf{x}) dt$ . Then we substitute it into the second derivative, which gives, $$\frac{\partial^{2} E}{\partial w_{r} \partial w_{s}} = \int \int \frac{\partial y}{\partial w_{s}} \frac{\partial y}{\partial w_{r}} p(\mathbf{x}, t) d\mathbf{x} dt$$ $$= \int \frac{\partial y}{\partial w_{s}} \frac{\partial y}{\partial w_{r}} p(\mathbf{x}) d\mathbf{x}$$
1,169
5
5.18
easy
Consider a two-layer network of the form shown in Figure 5.1 with the addition of extra parameters corresponding to skip-layer connections that go directly from the inputs to the outputs. By extending the discussion of Section 5.3.2, write down the equations for the derivatives of the error function with respect to these additional parameters.
By analogy with section 5.3.2, we denote $w_{ki}^{\rm skip}$ as those parameters corresponding to skip-layer connections, i.e., the one connecting the input unit i with the output unit k. Note that the discussion in section 5.3.2 is still correct and now we only need to obtain the derivative of the error function with respect to the additional parameters $w_{ki}^{\rm skip}$ . $$\frac{\partial E_n}{\partial w_{ki}^{\text{skip}}} = \frac{\partial E_n}{\partial a_k} \frac{\partial a_k}{\partial w_{ki}^{\text{skip}}} = \delta_k x_i$$ Where we have used $a_k = y_k$ due to linear activation at the output unit and: $$y_k = \sum_{j=0}^{M} w_{kj}^{(2)} z_j + \sum_{i} w_{ki}^{\text{skip}} x_i$$ Where the first term on the right side corresponds to the information conveyed from the hidden units to the outputs, and the second term corresponds to the information conveyed directly from the inputs to the outputs.
908
5
5.19
easy
Derive the expression $\mathbf{H} \simeq \sum_{n=1}^{N} y_n (1 - y_n) \mathbf{b}_n \mathbf{b}_n^{\mathrm{T}}.$ for the outer product approximation to the Hessian matrix for a network having a single output with a logistic sigmoid output-unit activation function and a cross-entropy error function, corresponding to the result $\mathbf{H} \simeq \sum_{n=1}^{N} \mathbf{b}_n \mathbf{b}_n^{\mathrm{T}}$ for the sum-of-squares error function.
The error function is given by $E(\mathbf{w}) = -\sum_{n=1}^{N} \left\{ t_n \ln y_n + (1 - t_n) \ln(1 - y_n) \right\}$. Therefore, we can obtain: $$\begin{split} \nabla E(\mathbf{w}) &= \sum_{n=1}^{N} \frac{\partial E}{\partial a_n} \nabla a_n \\ &= -\sum_{n=1}^{N} \frac{\partial}{\partial a_n} \big[ t_n \ln y_n + (1-t_n) \ln(1-y_n) \big] \nabla a_n \\ &= -\sum_{n=1}^{N} \big\{ \frac{\partial (t_n \ln y_n)}{\partial y_n} \frac{\partial y_n}{\partial a_n} + \frac{\partial (1-t_n) \ln(1-y_n)}{\partial y_n} \frac{\partial y_n}{\partial a_n} \big\} \nabla a_n \\ &= -\sum_{n=1}^{N} \big[ \frac{t_n}{y_n} \cdot y_n (1-y_n) + (1-t_n) \frac{-1}{1-y_n} \cdot y_n (1-y_n) \big] \nabla a_n \\ &= -\sum_{n=1}^{N} \big[ t_n (1-y_n) - (1-t_n) y_n \big] \nabla a_n \\ &= \sum_{n=1}^{N} (y_n - t_n) \nabla a_n \end{split}$$ Where we have used the conclusion of problem 5.6. Now we calculate the second derivative. $$\nabla \nabla E(\mathbf{w}) = \sum_{n=1}^{N} \left\{ y_n (1 - y_n) \nabla a_n \nabla a_n + (y_n - t_n) \nabla \nabla a_n \right\}$$ Similarly, we can drop the last term, which gives exactly what has been asked. ## **Problem 5.20 Solution** We begin by writing down the error function. $$E(\mathbf{w}) = -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{nk} \ln y_{nk}$$ Here we assume that the output of the network has K units in total and there are W weights parameters in the network. WE first calculate the first derivative: $$\nabla E = \sum_{n=1}^{N} \frac{dE}{d\mathbf{a}_n} \cdot \nabla \mathbf{a}_n$$ $$= -\sum_{n=1}^{N} \left[ \frac{d}{d\mathbf{a}_n} (\sum_{k=1}^{K} t_{nk} \ln y_{nk}) \right] \cdot \nabla \mathbf{a}_n$$ $$= \sum_{n=1}^{N} \mathbf{c}_n \cdot \nabla \mathbf{a}_n$$ Note that here $\mathbf{c}_n = -dE/d\mathbf{a}_n$ is a vector with size $K \times 1$ , $\nabla \mathbf{a}_n$ is a matrix with size $K \times W$ . Moreover, the operator $\cdot$ means inner product, which gives $\nabla E$ as a vector with size $1 \times W$ . According to $\frac{\partial y_k}{\partial a_j} = y_k (I_{kj} - y_j)$, we can obtain the jth element of $\mathbf{c}_n$ : $$c_{n,j} = -\frac{\partial}{\partial a_j} \left( \sum_{k=1}^K t_{nk} \ln y_{nk} \right)$$ $$= -\sum_{k=1}^K \frac{\partial}{\partial a_j} (t_{nk} \ln y_{nk})$$ $$= -\sum_{k=1}^K \frac{t_{nk}}{y_{nk}} y_{nk} (I_{kj} - y_{nj})$$ $$= -\sum_{k=1}^K t_{nk} I_{kj} + \sum_{k=1}^K t_{nk} y_{nj}$$ $$= -t_{nj} + y_{nj} \left( \sum_{k=1}^K t_{nk} \right)$$ $$= y_{nj} - t_{nj}$$ Now we calculate the second derivative: $$\nabla \nabla E = \sum_{n=1}^{N} \left( \frac{d\mathbf{c}_{n}}{d\mathbf{a}_{n}} \nabla \mathbf{a}_{n} \right) \cdot \nabla \mathbf{a}_{n} + \mathbf{c}_{n} \nabla \nabla \mathbf{a}_{n}$$ Here $d\mathbf{c}_n/d\mathbf{a}_n$ is a matrix with size $K \times K$ . Therefore, the second term can be neglected as before, which gives: $$\mathbf{H} = \sum_{n=1}^{N} (\frac{d\mathbf{c}_n}{d\mathbf{a}_n} \nabla \mathbf{a}_n) \cdot \nabla \mathbf{a_n}$$
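In the special case where $a_n$ is linear in $\mathbf{w}$ (i.e., logistic regression), $\nabla\nabla a_n = 0$ and the approximation of Problem 5.19 becomes exact, which gives a simple numerical check. The toy data below are my own.

```python
import numpy as np

rng = np.random.default_rng(3)
N, D = 40, 3
X = rng.standard_normal((N, D))
t = rng.integers(0, 2, size=N).astype(float)
w = rng.standard_normal(D)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def E(w):
    """Cross-entropy error with a logistic sigmoid output y_n = sigma(w.T x_n)."""
    y = sigmoid(X @ w)
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

# Outer-product form H = sum_n y_n (1 - y_n) b_n b_n^T with b_n = grad a_n = x_n.
y = sigmoid(X @ w)
H_outer = ((y * (1 - y))[:, None, None] * np.einsum('ni,nj->nij', X, X)).sum(axis=0)

eps, I = 1e-5, np.eye(D)
H_fd = np.zeros((D, D))
for i in range(D):
    for j in range(D):
        ei, ej = eps * I[i], eps * I[j]
        H_fd[i, j] = (E(w + ei + ej) - E(w + ei - ej)
                      - E(w - ei + ej) + E(w - ei - ej)) / (4 * eps ** 2)
assert np.allclose(H_outer, H_fd, atol=1e-4)
```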
2,970
5
5.2
easy
Show that maximizing the likelihood function under the conditional distribution $p(\mathbf{t}|\mathbf{x}, \mathbf{w}) = \mathcal{N}\left(\mathbf{t}|\mathbf{y}(\mathbf{x}, \mathbf{w}), \beta^{-1}\mathbf{I}\right).$ for a multioutput neural network is equivalent to minimizing the sum-of-squares error function $E(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \|\mathbf{y}(\mathbf{x}_n, \mathbf{w}) - \mathbf{t}_n\|^2.$.
It is obvious. We write down the likelihood. $$p(\mathbf{T}|\mathbf{X}, \mathbf{w}) = \prod_{n=1}^{N} \mathcal{N}(\mathbf{t_n}|\mathbf{y}(\mathbf{x_n}, \mathbf{w}), \beta^{-1}\mathbf{I})$$ Taking the negative logarithm, we can obtain: $$E(\mathbf{w},\beta) = -\ln p(\mathbf{T}|\mathbf{X},\mathbf{w}) = \frac{\beta}{2} \sum_{n=1}^{N} \left[ \left( \mathbf{y}(\mathbf{x_n},\mathbf{w}) - \mathbf{t_n} \right)^T \left( \mathbf{y}(\mathbf{x_n},\mathbf{w}) - \mathbf{t_n} \right) \right] - \frac{NK}{2} \ln \beta + \mathrm{const}$$ Here we have used const to denote the term independent of both $\mathbf{w}$ and $\beta$ . Note that here we have used the definition of the multivariate Gaussian distribution. What's more, we see that the covariance matrix $\beta^{-1}\mathbf{I}$ and the weight parameter $\mathbf{w}$ have decoupled, which is distinct from the next problem. We can first solve $\mathbf{w}_{\mathbf{ML}}$ by minimizing the first term on the right of the equation above or equivalently $E(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \|\mathbf{y}(\mathbf{x}_n, \mathbf{w}) - \mathbf{t}_n\|^2.$, i.e., treating $\beta$ as fixed. Then, setting the derivative of $E(\mathbf{w},\beta)$ with regard to $\beta$ equal to zero, we can obtain $\frac{1}{\beta_{\text{ML}}} = \frac{1}{NK} \sum_{n=1}^{N} \|\mathbf{y}(\mathbf{x}_n, \mathbf{w}_{\text{ML}}) - \mathbf{t}_n\|^2$ and hence $\beta_{ML}$ .
1,416
5
5.21
hard
$ Extend the expression $\mathbf{H}_N = \sum_{n=1}^N \mathbf{b}_n \mathbf{b}_n^{\mathrm{T}}$ for the outer product approximation of the Hessian matrix to the case of K > 1 output units. Hence, derive a recursive expression analogous to $\mathbf{H}_{L+1} = \mathbf{H}_L + \mathbf{b}_{L+1} \mathbf{b}_{L+1}^{\mathrm{T}}.$ for incrementing the number N of patterns and a similar expression for incrementing the number K of outputs. Use these results, together with the identity $\left(\mathbf{M} + \mathbf{v}\mathbf{v}^{\mathrm{T}}\right)^{-1} = \mathbf{M}^{-1} - \frac{\left(\mathbf{M}^{-1}\mathbf{v}\right)\left(\mathbf{v}^{\mathrm{T}}\mathbf{M}^{-1}\right)}{1 + \mathbf{v}^{\mathrm{T}}\mathbf{M}^{-1}\mathbf{v}}$, to find sequential update expressions analogous to $\mathbf{H}_{L+1}^{-1} = \mathbf{H}_{L}^{-1} - \frac{\mathbf{H}_{L}^{-1} \mathbf{b}_{L+1} \mathbf{b}_{L+1}^{\mathrm{T}} \mathbf{H}_{L}^{-1}}{1 + \mathbf{b}_{L+1}^{\mathrm{T}} \mathbf{H}_{L}^{-1} \mathbf{b}_{L+1}}.$ for finding the inverse of the Hessian by incrementally including both extra patterns and extra outputs.
We first write down the expression of the Hessian matrix in the case of K outputs. $$\mathbf{H}_{N,K} = \sum_{n=1}^{N} \sum_{k=1}^{K} \mathbf{b}_{n,k} \mathbf{b}_{n,k}^{T}$$ Where $\mathbf{b}_{n,k} = \nabla_{\mathbf{w}} a_{n,k}$ . Therefore, we have: $$\mathbf{H}_{N+1,K} = \mathbf{H}_{N,K} + \sum_{k=1}^{K} \mathbf{b}_{N+1,k} \mathbf{b}_{N+1,k}^{T} = \mathbf{H}_{N,K} + \mathbf{B}_{N+1} \mathbf{B}_{N+1}^{T}$$ Where $\mathbf{B}_{N+1} = [\mathbf{b}_{N+1,1}, \mathbf{b}_{N+1,2}, ..., \mathbf{b}_{N+1,K}]$ is a matrix with size $W \times K$ , and here W is the total number of the parameters in the network. Since $\mathbf{B}_{N+1}$ has K columns, we need the matrix (Woodbury) form of the identity, or equivalently we apply (5.88)-(5.89) K times in succession, which gives: $$\mathbf{H}_{N+1,K}^{-1} = \mathbf{H}_{N,K}^{-1} - \mathbf{H}_{N,K}^{-1} \mathbf{B}_{N+1} \left( \mathbf{I} + \mathbf{B}_{N+1}^{T} \mathbf{H}_{N,K}^{-1} \mathbf{B}_{N+1} \right)^{-1} \mathbf{B}_{N+1}^{T} \mathbf{H}_{N,K}^{-1}$$ (\*) Furthermore, similarly, we have: $$\mathbf{H}_{N+1,K+1} = \mathbf{H}_{N+1,K} + \sum_{n=1}^{N+1} \mathbf{b}_{n,K+1} \mathbf{b}_{n,K+1}^T = \mathbf{H}_{N+1,K} + \mathbf{B}_{K+1} \mathbf{B}_{K+1}^T$$ Where $\mathbf{B}_{K+1} = [\mathbf{b}_{1,K+1}, \mathbf{b}_{2,K+1}, ..., \mathbf{b}_{N+1,K+1}]$ is a matrix with size $W \times (N+1)$ . Also, we can obtain: $$\mathbf{H}_{N+1,K+1}^{-1} = \mathbf{H}_{N+1,K}^{-1} - \mathbf{H}_{N+1,K}^{-1} \mathbf{B}_{K+1} \left( \mathbf{I} + \mathbf{B}_{K+1}^T \mathbf{H}_{N+1,K}^{-1} \mathbf{B}_{K+1} \right)^{-1} \mathbf{B}_{K+1}^T \mathbf{H}_{N+1,K}^{-1}$$ Where $\mathbf{H}_{N+1,K}^{-1}$ is defined by (\*). If we substitute (\*) into the expression above, we can obtain the relationship between $\mathbf{H}_{N+1,K+1}^{-1}$ and $\mathbf{H}_{N,K}^{-1}$ .
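The block update (\*) is just the Woodbury matrix identity, which can be checked numerically; the sketch below (random $\mathbf{H}_{N,K}$ and $\mathbf{B}_{N+1}$ of my own choosing) compares it against direct inversion.

```python
import numpy as np

rng = np.random.default_rng(4)
W, K = 6, 3
A = rng.standard_normal((W, W))
H = A @ A.T + np.eye(W)              # H_{N,K}: a positive-definite Hessian
B = rng.standard_normal((W, K))      # B_{N+1} = [b_{N+1,1}, ..., b_{N+1,K}], size W x K

H_new = H + B @ B.T                  # H_{N+1,K} = H_{N,K} + B B^T
H_inv = np.linalg.inv(H)
H_new_inv = H_inv - H_inv @ B @ np.linalg.inv(np.eye(K) + B.T @ H_inv @ B) @ B.T @ H_inv
assert np.allclose(H_new_inv, np.linalg.inv(H_new))
```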
1,657
5
5.22
medium
Derive the results $\frac{\partial^2 E_n}{\partial w_{kj}^{(2)} \partial w_{k'j'}^{(2)}} = z_j z_{j'} M_{kk'}.$, (5.94), and $\frac{\partial^2 E_n}{\partial w_{ji}^{(1)} \partial w_{kj'}^{(2)}} = x_i h'(a_{j'}) \left\{ \delta_k I_{jj'} + z_j \sum_{k'} w_{k'j'}^{(2)} H_{kk'} \right\}.$ for the elements of the Hessian matrix of a two-layer feed-forward network by application of the chain rule of calculus.
We begin by handling the first case. $$\begin{split} \frac{\partial^2 E_n}{\partial w_{kj}^{(2)} \partial w_{k'j'}^{(2)}} &= \frac{\partial}{\partial w_{kj}^{(2)}} (\frac{\partial E_n}{\partial w_{k'j'}^{(2)}}) \\ &= \frac{\partial}{\partial w_{kj}^{(2)}} (\frac{\partial E_n}{\partial a_{k'}} \frac{\partial a_{k'}}{\partial w_{k'j'}^{(2)}}) \\ &= \frac{\partial}{\partial w_{kj}^{(2)}} (\frac{\partial E_n}{\partial a_{k'}} \frac{\partial \sum_{j'} w_{k'j'} z_{j'}}{\partial w_{k'j'}^{(2)}}) \\ &= \frac{\partial}{\partial w_{kj}^{(2)}} (\frac{\partial E_n}{\partial a_{k'}} z_{j'}) \\ &= \frac{\partial}{\partial w_{kj}^{(2)}} (\frac{\partial E_n}{\partial a_{k'}}) z_{j'} + \frac{\partial E_n}{\partial a_{k'}} \frac{\partial z_{j'}}{\partial w_{kj}^{(2)}} \\ &= \frac{\partial}{\partial a_k} (\frac{\partial E_n}{\partial a_{k'}}) \frac{\partial a_k}{\partial w_{kj}^{(2)}} z_{j'} + 0 \\ &= \frac{\partial}{\partial a_k} (\frac{\partial E_n}{\partial a_{k'}}) z_{j} z_{j'} \\ &= z_{j} z_{j'} M_{kk'} \end{split}$$ Then we focus on the second case, and if here $j \neq j'$ $$\frac{\partial^{2}E_{n}}{\partial w_{ji}^{(1)}\partial w_{j'i'}^{(1)}} = \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_{n}}{\partial w_{j'i'}^{(1)}}) = \frac{\partial}{\partial w_{ji}^{(1)}} (\sum_{k'} \frac{\partial E_{n}}{\partial a_{k'}} \frac{\partial a_{k'}}{\partial w_{j'i'}^{(1)}}) = \sum_{k'} \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_{n}}{\partial a_{k'}} w_{k'j'}^{(2)} h'(a_{j'}) x_{i'}) = \sum_{k'} h'(a_{j'}) x_{i'} \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_{n}}{\partial a_{k'}} w_{k'j'}^{(2)}) = \sum_{k'} h'(a_{j'}) x_{i'} \sum_{k} \frac{\partial}{\partial a_{k}} (\frac{\partial E_{n}}{\partial a_{k'}} w_{k'j'}^{(2)}) \frac{\partial a_{k}}{\partial w_{ji}^{(1)}} = \sum_{k'} h'(a_{j'}) x_{i'} \sum_{k} \frac{\partial}{\partial a_{k}} (\frac{\partial E_{n}}{\partial a_{k'}} w_{k'j'}^{(2)}) \cdot (w_{kj}^{(2)} h'(a_{j}) x_{i}) = \sum_{k'} h'(a_{j'}) x_{i'} \sum_{k} M_{kk'} w_{k'j'}^{(2)} \cdot w_{kj}^{(2)} h'(a_{j}) x_{i} = x_{i'} x_{i} h'(a_{j'}) h'(a_{j}) \sum_{k'} \sum_{k} w_{k'j'}^{(2)} \cdot w_{kj}^{(2)} M_{kk'}$$ When j = j', similarly we have: $$\begin{split} \frac{\partial^{2}E_{n}}{\partial w_{ji}^{(1)}\partial w_{ji'}^{(1)}} &= \sum_{k'} \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_{n}}{\partial a_{k'}} w_{k'j}^{(2)} h'(a_{j}) x_{i'}) \\ &= x_{i'} \sum_{k'} \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_{n}}{\partial a_{k'}} w_{k'j}^{(2)}) h'(a_{j}) + x_{i'} \sum_{k'} (\frac{\partial E_{n}}{\partial a_{k'}} w_{k'j}^{(2)}) \frac{\partial h'(a_{j})}{\partial w_{ji}^{(1)}} \\ &= x_{i'} x_{i} h'(a_{j}) h'(a_{j}) \sum_{k'} \sum_{k} w_{k'j}^{(2)} \cdot w_{kj}^{(2)} M_{kk'} + x_{i'} \sum_{k'} (\frac{\partial E_{n}}{\partial a_{k'}} w_{k'j}^{(2)}) \frac{\partial h'(a_{j})}{\partial w_{ji}^{(1)}} \\ &= x_{i'} x_{i} h'(a_{j}) h'(a_{j}) \sum_{k'} \sum_{k} w_{k'j}^{(2)} \cdot w_{kj}^{(2)} M_{kk'} + x_{i'} \sum_{k'} (\frac{\partial E_{n}}{\partial a_{k'}} w_{k'j}^{(2)}) h''(a_{j}) x_{i} \\ &= x_{i'} x_{i} h'(a_{j}) h'(a_{j}) \sum_{k'} \sum_{k} w_{k'j}^{(2)} \cdot w_{kj}^{(2)} M_{kk'} + h''(a_{j}) x_{i} x_{i'} \sum_{k'} \delta_{k'} w_{k'j}^{(2)} \end{split}$$ It seems that what we have obtained is slightly different from (5.94) when j=j'. 
However this is not the case, since the summation over k' in the second term of our formulation and the summation over k in the first term of (5.94) is actually the same (i.e., they both represent the summation over all the output units). Combining the situation when j=j' and $j\neq j'$ , we can obtain (5.94) just as required. Finally, we deal with the third case. Similarly we first focus on $j\neq j'$ : $$\frac{\partial^{2}E_{n}}{\partial w_{ji}^{(1)}\partial w_{kj'}^{(2)}} = \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_{n}}{\partial w_{kj'}^{(2)}})$$ $$= \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_{n}}{\partial a_{k}} \frac{\partial a_{k}}{\partial w_{kj'}^{(2)}})$$ $$= \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_{n}}{\partial a_{k}} \frac{\partial \sum_{j'} w_{kj'} z_{j'}}{\partial w_{kj'}^{(2)}})$$ $$= \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_{n}}{\partial a_{k}} z_{j'})$$ $$= z_{j'} \sum_{k'} \frac{\partial}{\partial a_{k'}} (\frac{\partial E_{n}}{\partial a_{k}}) \frac{\partial a_{k'}}{\partial w_{ji}^{(1)}}$$ $$= z_{j'} \sum_{k'} M_{kk'} w_{k'j}^{(2)} h'(a_{j}) x_{i}$$ $$= x_{i} h'(a_{j}) z_{j'} \sum_{k'} M_{kk'} w_{k'j}^{(2)}$$ Note that in $\frac{\partial^2 E_n}{\partial w_{ji}^{(1)} \partial w_{kj'}^{(2)}} = x_i h'(a_{j'}) \left\{ \delta_k I_{jj'} + z_j \sum_{k'} w_{k'j'}^{(2)} H_{kk'} \right\}.$, there are two typos: (i) $H_{kk'}$ should be $M_{kk'}$ . (ii) j should exchange position with j' in the right side of $\frac{\partial^2 E_n}{\partial w_{ji}^{(1)} \partial w_{kj'}^{(2)}} = x_i h'(a_{j'}) \left\{ \delta_k I_{jj'} + z_j \sum_{k'} w_{k'j'}^{(2)} H_{kk'} \right\}.$. When j = j', we have: $$\begin{split} \frac{\partial^2 E_n}{\partial w_{ji}^{(1)} \partial w_{kj}^{(2)}} &= \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_n}{\partial w_{kj}^{(2)}}) \\ &= \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_n}{\partial a_k} \frac{\partial a_k}{\partial w_{kj}^{(2)}}) \\ &= \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_n}{\partial a_k} \frac{\partial \sum_j w_{kj} z_j}{\partial w_{kj}^{(2)}}) \\ &= \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_n}{\partial a_k} z_j) \\ &= \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_n}{\partial a_k} z_j) \\ &= \frac{\partial}{\partial w_{ji}^{(1)}} (\frac{\partial E_n}{\partial a_k} z_j) \\ &= x_i h'(a_j) z_j \sum_{k'} M_{kk'} w_{k'j}^{(2)} + \frac{\partial E_n}{\partial a_k} \frac{\partial z_j}{w_{ji}^{(1)}} \\ &= x_i h'(a_j) z_j \sum_{k'} M_{kk'} w_{k'j}^{(2)} + \delta_k h'(a_j) x_i \end{split}$$ Combing these two situations, we obtain $\frac{\partial^2 E_n}{\partial w_{ji}^{(1)} \partial w_{kj'}^{(2)}} = x_i h'(a_{j'}) \left\{ \delta_k I_{jj'} + z_j \sum_{k'} w_{k'j'}^{(2)} H_{kk'} \right\}.$ just as required.
6,181
5
5.23
medium
Extend the results of Section 5.4.5 for the exact Hessian of a two-layer network to include skip-layer connections that go directly from inputs to outputs.
It is similar to the previous problem. $$\begin{split} \frac{\partial^2 E_n}{\partial w_{k'i'} \partial w_{kj}} &= \frac{\partial}{\partial w_{k'i'}} (\frac{\partial E_n}{\partial w_{kj}}) \\ &= \frac{\partial}{\partial w_{k'i'}} (\frac{\partial E_n}{\partial a_k} z_j) \\ &= z_j \frac{\partial w_{k'i'}}{\partial a_{k'}} \frac{\partial}{\partial a_{k'}} (\frac{\partial E_n}{\partial a_k}) \\ &= z_j x_{i'} M_{kk'} \end{split}$$ And $$\begin{split} \frac{\partial^2 E_n}{\partial w_{k'i'} \partial w_{ji}} &= \frac{\partial}{\partial w_{k'i'}} (\sum_k \frac{\partial E_n}{\partial a_k} \frac{\partial a_k}{\partial w_{ji}}) \\ &= \frac{\partial}{\partial w_{k'i'}} (\sum_k \frac{\partial E_n}{\partial a_k} w_{kj} h'(a_j) x_i) \\ &= \sum_k h'(a_j) x_i w_{kj} \frac{\partial}{\partial w_{k'i'}} (\frac{\partial E_n}{\partial a_k}) \\ &= \sum_k h'(a_j) x_i w_{kj} \frac{\partial}{\partial a_{k'}} (\frac{\partial E_n}{\partial a_k}) \frac{a_{k'}}{w_{k'i'}} \\ &= \sum_k h'(a_j) x_i w_{kj} M_{kk'} x_{i'} \\ &= x_i x_{i'} h'(a_j) \sum_k w_{kj} M_{kk'} \end{split}$$ Finally, we have $$\begin{array}{ll} \frac{\partial^2 E_n}{\partial w_{k'i'}w_{ki}} & = & \frac{\partial}{\partial w_{k'i'}}(\frac{\partial E_n}{\partial w_{ki}}) \\ & = & \frac{\partial}{\partial w_{k'i'}}(\frac{\partial E_n}{\partial a_k}x_i) \\ & = & x_i\frac{\partial}{\partial a_{k'}}(\frac{\partial E_n}{\partial a_k})\frac{\partial a_{k'}}{\partial w_{k'i'}} \\ & = & x_ix_{i'}M_{kk'} \end{array}$$ # **Problem 5.24 Solution** It is obvious. According to $z_j = h\left(\sum_i w_{ji} x_i + w_{j0}\right)$, we have: $$\begin{split} \widetilde{\alpha}_j &= \sum_i \widetilde{w}_{ji} \widetilde{x}_i + \widetilde{w}_{j0} \\ &= \sum_i \frac{1}{a} w_{ji} \cdot (ax_i + b) + w_{j0} - \frac{b}{a} \sum_i w_{ji} \\ &= \sum_i w_{ji} x_i + w_{j0} = a_j \end{split}$$ Where we have used $x_i \to \widetilde{x}_i = ax_i + b.$, $w_{ji} \to \widetilde{w}_{ji} = \frac{1}{a} w_{ji}$ and $w_{j0} \to \widetilde{w}_{j0} = w_{j0} - \frac{b}{a} \sum_{i} w_{ji}.$. Currently, we have proved that under the transformation the hidden unit $a_j$ is unchanged. If the activation function at the hidden unit is also unchanged, we have $\tilde{z}_j = z_j$ . Now we deal with the output unit $\tilde{y}_k$ : $$\begin{split} \widetilde{y}_k &= \sum_j \widetilde{w}_{kj} \widetilde{z}_j + \widetilde{w}_{k0} \\ &= \sum_j c w_{kj} \cdot z_j + c w_{k0} + d \\ &= c \sum_j \left[ w_{kj} \cdot z_j + w_{k0} \right] + d \\ &= c y_k + d \end{split}$$ Where we have used $y_k = \sum_j w_{kj} z_j + w_{k0}.$, $w_{kj} \to \widetilde{w}_{kj} = cw_{kj}$ and $w_{k0} \to \widetilde{w}_{k0} = cw_{k0} + d.$. To be more specific, here we have proved that the linear transformation between $\tilde{y}_k$ and $y_k$ can be achieved by making transformation $w_{kj} \to \widetilde{w}_{kj} = cw_{kj}$ and $w_{k0} \to \widetilde{w}_{k0} = cw_{k0} + d.$.
2,974
5
5.25
hard
Consider a quadratic error function of the form $$E = E_0 + \frac{1}{2} (\mathbf{w} - \mathbf{w}^*)^{\mathrm{T}} \mathbf{H} (\mathbf{w} - \mathbf{w}^*)$$ where $\mathbf{w}^*$ represents the minimum, and the Hessian matrix $\mathbf{H}$ is positive definite and constant. Suppose the initial weight vector $\mathbf{w}^{(0)}$ is chosen to be at the origin and is updated using simple gradient descent $$\mathbf{w}^{(\tau)} = \mathbf{w}^{(\tau-1)} - \rho \nabla E \tag{5.196}$$ where $\tau$ denotes the step number, and $\rho$ is the learning rate (which is assumed to be small). Show that, after $\tau$ steps, the components of the weight vector parallel to the eigenvectors of $\mathbf{H}$ can be written $$w_j^{(\tau)} = \{1 - (1 - \rho \eta_j)^{\tau}\} w_j^{*}$$ where $w_j = \mathbf{w}^{\mathrm{T}} \mathbf{u}_j$ , and $\mathbf{u}_j$ and $\eta_j$ are the eigenvectors and eigenvalues, respectively, of $\mathbf{H}$ so that $$\mathbf{H}\mathbf{u}_j = \eta_j \mathbf{u}_j. \tag{5.198}$$ Show that as $\tau \to \infty$ , this gives $\mathbf{w}^{(\tau)} \to \mathbf{w}^*$ as expected, provided $|1 - \rho \eta_j| < 1$ . Now suppose that training is halted after a finite number $\tau$ of steps. Show that the components of the weight vector parallel to the eigenvectors of the Hessian satisfy $$w_j^{(\tau)} \simeq w_j^{*} \quad \text{when} \quad \eta_j \gg (\rho \tau)^{-1}$$ $$|w_j^{(\tau)}| \ll |w_j^{*}| \quad \text{when} \quad \eta_j \ll (\rho \tau)^{-1}.$$ Compare this result with the discussion in Section 3.5.3 of regularization with simple weight decay, and hence show that $(\rho\tau)^{-1}$ is analogous to the regularization parameter $\lambda$ . The above results also show that the effective number of parameters in the network, as defined by $\gamma = \sum_{i} \frac{\lambda_i}{\alpha + \lambda_i}$ , grows as the training progresses.
Since we know the gradient of the error function with respect to $\mathbf{w}$ is: $$\nabla E = \mathbf{H}(\mathbf{w} - \mathbf{w}^*)$$ Together with $\mathbf{w}^{(\tau)} = \mathbf{w}^{(\tau-1)} - \rho \nabla E$, we can obtain: $$\mathbf{w}^{(\tau)} = \mathbf{w}^{(\tau-1)} - \rho \nabla E$$ $$= \mathbf{w}^{(\tau-1)} - \rho \mathbf{H} (\mathbf{w}^{(\tau-1)} - \mathbf{w}^*)$$ Multiplying both sides by $\mathbf{u}_j^T$ , using $w_j = \mathbf{w}^T \mathbf{u}_j$ , we can obtain: $$w_{j}^{(\tau)} = \mathbf{u}_{j}^{T} [\mathbf{w}^{(\tau-1)} - \rho \mathbf{H} (\mathbf{w}^{(\tau-1)} - \mathbf{w}^{*})]$$ $$= w_{j}^{(\tau-1)} - \rho \mathbf{u}_{j}^{T} \mathbf{H} (\mathbf{w}^{(\tau-1)} - \mathbf{w}^{*})$$ $$= w_{j}^{(\tau-1)} - \rho \eta_{j} \mathbf{u}_{j}^{T} (\mathbf{w}^{(\tau-1)} - \mathbf{w}^{*})$$ $$= w_{j}^{(\tau-1)} - \rho \eta_{j} (w_{j}^{(\tau-1)} - w_{j}^{*})$$ $$= (1 - \rho \eta_{j}) w_{j}^{(\tau-1)} + \rho \eta_{j} w_{j}^{*}$$ Where we have used $\mathbf{H}\mathbf{u}_j = \eta_j \mathbf{u}_j$ . Then we use mathematical induction to prove $w_j^{(\tau)} = \{1 - (1 - \rho \eta_j)^{\tau}\} w_j^{*}$, beginning by calculating $w_j^{(1)}$ : $$w_{j}^{(1)} = (1 - \rho \eta_{j})w_{j}^{(0)} + \rho \eta_{j}w_{j}^{*}$$ $$= \rho \eta_{j}w_{j}^{*}$$ $$= [1 - (1 - \rho \eta_{j})]w_{j}^{*}$$ Suppose the expression holds for $\tau$ ; we now prove that it also holds for $\tau + 1$ . $$\begin{split} w_{j}^{(\tau+1)} &= (1 - \rho \eta_{j}) w_{j}^{(\tau)} + \rho \eta_{j} w_{j}^{*} \\ &= (1 - \rho \eta_{j}) \big[ 1 - (1 - \rho \eta_{j})^{\tau} \big] w_{j}^{*} + \rho \eta_{j} w_{j}^{*} \\ &= \big\{ (1 - \rho \eta_{j}) \big[ 1 - (1 - \rho \eta_{j})^{\tau} \big] + \rho \eta_{j} \big\} w_{j}^{*} \\ &= \big[ 1 - (1 - \rho \eta_{j})^{\tau+1} \big] w_{j}^{*} \end{split}$$ Hence $w_j^{(\tau)} = \{1 - (1 - \rho \eta_j)^{\tau}\} w_j^{*}$ holds for $\tau=1,2,...$ Provided $|1-\rho\eta_j|<1$ , we have $(1-\rho\eta_j)^{\tau}\to 0$ as $\tau\to\infty$ and thus $\mathbf{w}^{(\tau)} \to \mathbf{w}^*$ . If $\tau$ is finite but $\eta_j \gg (\rho\tau)^{-1}$ , then $\rho\tau\eta_j \gg 1$ , so $(1-\rho\eta_j)^{\tau} \approx 0$ and the above argument still holds, giving $w_j^{(\tau)} \simeq w_j^{*}$ . Conversely, when $\eta_j \ll (\rho\tau)^{-1}$ , we expand the expression above: $$|w_j^{(\tau)}| = |\left[1 - (1 - \rho \eta_j)^\tau\right] w_j^*| \approx |\tau \rho \eta_j w_j^*| \ll |w_j^*|$$ We can see that $(\rho\tau)^{-1}$ works as the regularization parameter $\alpha$ in section 3.5.3.
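The closed-form expression is easy to confirm numerically: the sketch below (my own toy quadratic error) runs plain gradient descent from the origin and checks that the eigen-components of $\mathbf{w}^{(\tau)}$ follow $[1 - (1 - \rho\eta_j)^{\tau}]\,w_j^*$ at every step.

```python
import numpy as np

rng = np.random.default_rng(5)
W = 4
A = rng.standard_normal((W, W))
H = A @ A.T + np.eye(W)                  # positive-definite Hessian
w_star = rng.standard_normal(W)
eta, U = np.linalg.eigh(H)               # eigenvalues eta_j and eigenvectors u_j (columns)
rho = 0.5 / eta.max()                    # small enough that |1 - rho * eta_j| < 1

w = np.zeros(W)                          # w^(0) at the origin
for tau in range(1, 51):
    w = w - rho * H @ (w - w_star)       # gradient descent step (5.196)
    predicted = (1 - (1 - rho * eta) ** tau) * (U.T @ w_star)
    assert np.allclose(U.T @ w, predicted)
```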
2,543
5
5.26
medium
Consider a multilayer perceptron with arbitrary feed-forward topology, which is to be trained by minimizing the *tangent propagation* error function $\widetilde{E} = E + \lambda \Omega$ in which the regularizing function is given by $\Omega = \frac{1}{2} \sum_{n} \sum_{k} \left( \frac{\partial y_{nk}}{\partial \xi} \Big|_{\xi=0} \right)^2 = \frac{1}{2} \sum_{n} \sum_{k} \left( \sum_{i=1}^{D} J_{nki} \tau_{ni} \right)^2$ . Show that the regularization term $\Omega$ can be written as a sum over patterns of terms of the form $$\Omega_n = \frac{1}{2} \sum_k \left( \mathcal{G} y_k \right)^2 \tag{5.201}$$ where $\mathcal{G}$ is a differential operator defined by $$\mathcal{G} \equiv \sum_{i} \tau_{i} \frac{\partial}{\partial x_{i}}.$$ By acting on the forward propagation equations $$z_j = h(a_j), \qquad a_j = \sum_i w_{ji} z_i \tag{5.203}$$ with the operator $\mathcal{G}$ , show that $\Omega_n$ can be evaluated by forward propagation using the following equations: $$\alpha_j = h'(a_j)\beta_j, \qquad \beta_j = \sum_i w_{ji}\alpha_i. \tag{5.204}$$ where we have defined the new variables $$\alpha_j \equiv \mathcal{G}z_j, \qquad \beta_j \equiv \mathcal{G}a_j. \tag{5.205}$$ Now show that the derivatives of $\Omega_n$ with respect to a weight $w_{rs}$ in the network can be written in the form $$\frac{\partial \Omega_n}{\partial w_{rs}} = \sum_k \alpha_k \left\{ \phi_{kr} z_s + \delta_{kr} \alpha_s \right\} \tag{5.206}$$ where we have defined $$\delta_{kr} \equiv \frac{\partial y_k}{\partial a_r}, \qquad \phi_{kr} \equiv \mathcal{G}\delta_{kr}.$$ Write down the backpropagation equations for $\delta_{kr}$ , and hence derive a set of backpropagation equations for the evaluation of the $\phi_{kr}$ .
Based on definition or by analogy with $\Omega = \frac{1}{2} \sum_{n} \sum_{k} \left( \frac{\partial y_{nk}}{\partial \xi} \Big|_{\xi=0} \right)^2 = \frac{1}{2} \sum_{n} \sum_{k} \left( \sum_{i=1}^{D} J_{nki} \tau_{ni} \right)^2.$, we have: $$\Omega_n = \frac{1}{2} \sum_{k} \left( \frac{\partial y_{nk}}{\partial \xi} \Big|_{\xi=0} \right)^2$$ $$= \frac{1}{2} \sum_{k} \left( \sum_{i} \frac{\partial y_{nk}}{\partial x_i} \frac{\partial x_i}{\partial \xi} \Big|_{\xi=0} \right)^2$$ $$= \frac{1}{2} \sum_{k} \left( \sum_{i} \tau_i \frac{\partial}{\partial x_i} y_{nk} \right)^2$$ Where we have denoted $$\tau_i = \frac{\partial x_i}{\partial \xi} \big|_{\xi=0}$$ And this is exactly the form given in $\Omega_n = \frac{1}{2} \sum_k \left( \mathcal{G} y_k \right)^2$ and $\mathcal{G} \equiv \sum_{i} \tau_{i} \frac{\partial}{\partial x_{i}}.$ if the nth observation $y_{nk}$ is denoted as $y_k$ in short. Firstly, we define $\alpha_j$ and $\beta_j$ as $\alpha_j \equiv \mathcal{G}z_j, \qquad \beta_j \equiv \mathcal{G}a_j.$ shows, where $z_j$ and $a_j$ are given by $z_j = h(a_j), a_j = \sum_i w_{ji} z_i$. Then we will prove $\alpha_j = h'(a_j)\beta_j, \qquad \beta_j = \sum_i w_{ji}\alpha_i.$ holds: $$\alpha_{j} = \sum_{i} \tau_{i} \frac{\partial z_{j}}{\partial x_{i}} = \sum_{i} \tau_{i} \frac{\partial h(a_{j})}{\partial x_{i}}$$ $$= \sum_{i} \tau_{i} \frac{\partial h(a_{j})}{\partial a_{j}} \frac{\partial a_{j}}{\partial x_{i}}$$ $$= h'(a_{j}) \sum_{i} \tau_{i} \frac{\partial}{\partial x_{i}} a_{j} = h'(a_{j}) \beta_{j}$$ Moreover, $$\begin{split} \beta_{j} &= \sum_{i} \tau_{i} \frac{\partial \alpha_{j}}{\partial x_{i}} = \sum_{i} \tau_{i} \frac{\partial \sum_{i'} w_{ji'} z_{i'}}{\partial x_{i}} \\ &= \sum_{i} \tau_{i} \sum_{i'} \frac{\partial w_{ji'} z_{i'}}{\partial x_{i}} = \sum_{i} \tau_{i} \sum_{i'} w_{ji'} \frac{\partial z_{i'}}{\partial x_{i}} \\ &= \sum_{i'} w_{ji'} \sum_{i} \tau_{i} \frac{\partial z_{i'}}{\partial x_{i}} = \sum_{i'} w_{ji'} \alpha_{i'} \end{split}$$ So far we have proved that $\alpha_j = h'(a_j)\beta_j, \qquad \beta_j = \sum_i w_{ji}\alpha_i.$ holds and now we aim to find a forward propagation formula to calculate $\Omega_n$ . We firstly begin by evaluating $\{\beta_j\}$ at the input units, and then use the first equation in $\alpha_j = h'(a_j)\beta_j, \qquad \beta_j = \sum_i w_{ji}\alpha_i.$ to obtain $\{\alpha_j\}$ at the input units, and then the second equation to evaluate $\{\beta_j\}$ at the first hidden layer, and again the first equation to evaluate $\{\alpha_j\}$ at the first hidden layer. We repeatedly evaluate $\{\beta_j\}$ and $\{\alpha_j\}$ in this way until reaching the output layer. 
Then we deal with (5.206): $$\begin{split} \frac{\partial \Omega_n}{\partial w_{rs}} &= \frac{\partial}{\partial w_{rs}} \{ \frac{1}{2} \sum_k (\mathcal{G} y_k)^2 \} = \frac{1}{2} \sum_k \frac{\partial (\mathcal{G} y_k)^2}{\partial w_{rs}} \\ &= \frac{1}{2} \sum_k \frac{\partial (\mathcal{G} y_k)^2}{\partial (\mathcal{G} y_k)} \frac{\partial (\mathcal{G} y_k)}{\partial w_{rs}} = \sum_k \mathcal{G} y_k \frac{\partial \mathcal{G} y_k}{\partial w_{rs}} \\ &= \sum_k \mathcal{G} y_k \mathcal{G} \left[ \frac{\partial y_k}{\partial w_{rs}} \right] = \sum_k \alpha_k \mathcal{G} \left[ \frac{\partial y_k}{\partial \alpha_r} \frac{\partial \alpha_r}{\partial w_{rs}} \right] \\ &= \sum_k \alpha_k \mathcal{G} \left[ \delta_{kr} z_s \right] = \sum_k \alpha_k \left\{ \mathcal{G} [\delta_{kr}] z_s + \mathcal{G} [z_s] \delta_{kr} \right\} \\ &= \sum_k \alpha_k \left\{ \phi_{kr} z_s + \alpha_s \delta_{kr} \right\} \end{split}$$ Provided with the idea in section 5.3, the backward propagation formula is easy to derive. We can simply replace $E_n$ with $y_k$ to obtain a backward equation, so we omit it here.
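The forward-propagation scheme of (5.204)-(5.205) can be checked numerically: initializing $\alpha_i = \tau_i$ at the input units and alternating the two equations layer by layer reproduces $\mathcal{G}y_k$, which equals the directional derivative of the outputs along $\boldsymbol{\tau}$. The toy network and tangent vector below are my own.

```python
import numpy as np

rng = np.random.default_rng(7)
D, M, K = 3, 5, 2
W1, W2 = rng.standard_normal((M, D)), rng.standard_normal((K, M))
x, tau = rng.standard_normal(D), rng.standard_normal(D)

def outputs(x):
    z = np.tanh(W1 @ x)                      # hidden units, h = tanh
    return np.tanh(W2 @ z), z                # outputs also use h = tanh

y, z = outputs(x)
# Forward propagation of (5.204)-(5.205), starting from alpha_i = tau_i at the inputs.
beta_hidden = W1 @ tau                       # beta_j = sum_i w_ji alpha_i
alpha_hidden = (1 - z ** 2) * beta_hidden    # alpha_j = h'(a_j) beta_j
beta_out = W2 @ alpha_hidden
alpha_out = (1 - y ** 2) * beta_out          # alpha_k = G y_k
Omega_n = 0.5 * np.sum(alpha_out ** 2)       # the per-pattern regularizer (5.201)

# G y_k is the directional derivative of y_k along tau.
eps = 1e-6
Gy_fd = (outputs(x + eps * tau)[0] - outputs(x - eps * tau)[0]) / (2 * eps)
assert np.allclose(alpha_out, Gy_fd, atol=1e-6)
```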
3,876
5
5.27
medium
Consider the framework for training with transformed data in the special case in which the transformation consists simply of the addition of random noise $x \to x + \xi$ where $\xi$ has a Gaussian distribution with zero mean and unit covariance. By following an argument analogous to that of Section 5.5.5, show that the resulting regularizer reduces to the Tikhonov form $\Omega = \frac{1}{2} \int \|\nabla y(\mathbf{x})\|^2 p(\mathbf{x}) d\mathbf{x}$.
Following the procedure in section 5.5.5, we can obtain: $$\Omega = \frac{1}{2} \int (\boldsymbol{\tau}^T \nabla y(\mathbf{x}))^2 p(\mathbf{x}) d\mathbf{x}$$ Since we have $\tau = \partial \mathbf{s}(\mathbf{x}, \boldsymbol{\xi}) / \partial \boldsymbol{\xi}$ and $\mathbf{s} = \mathbf{x} + \boldsymbol{\xi}$ , we have $\tau = \mathbf{I}$ . Therefore, substituting $\tau$ into the equation above, we can obtain: $$\Omega = \frac{1}{2} \int \|\nabla y(\mathbf{x})\|^2 p(\mathbf{x}) d\mathbf{x}$$ Just as required.
522
5
5.28
easy
- 5.28 (\*) Consider a neural network, such as the convolutional network discussed in Section 5.5.6, in which multiple weights are constrained to have the same value. Discuss how the standard backpropagation algorithm must be modified in order to ensure that such constraints are satisfied when evaluating the derivatives of an error function with respect to the adjustable parameters in the network.
The modifications only affect derivatives with respect to the weights in the convolutional layer. The units within a feature map (indexed m) have different inputs, but all share a common weight vector, $\mathbf{w}^{(m)}$ . Therefore, we can write: $$\frac{\partial E_n}{\partial w_i^{(m)}} = \sum_j \frac{\partial E_n}{\partial a_j^{(m)}} \frac{\partial a_j^{(m)}}{\partial w_i^{(m)}} = \sum_j \delta_j^{(m)} z_{ji}^{(m)}$$ Here $a_j^{(m)}$ denotes the activation of the jth unit in the mth feature map, whereas $w_i^{(m)}$ denotes the ith element of the corresponding feature vector and finally $z_{ji}^{(m)}$ denotes the ith input for the jth unit in the mth feature map. Note that $\delta_j^{(m)}$ can be computed recursively from the units in the following layer.
777
5
5.29
easy
Verify the result $\frac{\partial \widetilde{E}}{\partial w_i} = \frac{\partial E}{\partial w_i} + \lambda \sum_j \gamma_j(w_i) \frac{(w_i - \mu_j)}{\sigma_j^2}.$.
It is obvious. Firstly, we know that: $$\frac{\partial}{\partial w_i} \left\{ \pi_j \mathcal{N}(w_i | \mu_j, \sigma_j^2) \right\} = -\pi_j \frac{w_i - \mu_j}{\sigma_j^2} \mathcal{N}(w_i | \mu_j, \sigma_j^2)$$ We now derive the error function with respect to $w_i$ : $$\begin{split} \frac{\partial \widetilde{E}}{\partial w_{i}} &= \frac{\partial E}{\partial w_{i}} + \frac{\partial \lambda \Omega(\mathbf{w})}{\partial w_{i}} \\ &= \frac{\partial E}{\partial w_{i}} - \lambda \frac{\partial}{\partial w_{i}} \left\{ \sum_{i} \ln \left( \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right) \right\} \\ &= \frac{\partial E}{\partial w_{i}} - \lambda \frac{\partial}{\partial w_{i}} \left\{ \ln \left( \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right) \right\} \\ &= \frac{\partial E}{\partial w_{i}} - \lambda \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \frac{\partial}{\partial w_{i}} \left\{ \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right\} \\ &= \frac{\partial E}{\partial w_{i}} + \lambda \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \left\{ \sum_{j=1}^{M} \pi_{j} \frac{w_{i} - \mu_{j}}{\sigma_{j}^{2}} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right\} \\ &= \frac{\partial E}{\partial w_{i}} + \lambda \frac{\sum_{j=1}^{M} \pi_{j} \frac{w_{i} - \mu_{j}}{\sigma_{j}^{2}} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})}{\sum_{k} \pi_{k} \mathcal{N}(w_{i} | \mu_{k}, \sigma_{k}^{2})} \\ &= \frac{\partial E}{\partial w_{i}} + \lambda \sum_{j=1}^{M} \frac{\pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})}{\sum_{k} \pi_{k} \mathcal{N}(w_{i} | \mu_{k}, \sigma_{k}^{2})} \frac{w_{i} - \mu_{j}}{\sigma_{j}^{2}} \\ &= \frac{\partial E}{\partial w_{i}} + \lambda \sum_{j=1}^{M} \gamma_{j}(w_{i}) \frac{w_{i} - \mu_{j}}{\sigma_{j}^{2}} \end{aligned}$$ Where we have used $\Omega(\mathbf{w}) = -\sum_{i} \ln \left( \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right).$ and defined $\gamma_j(w) = \frac{\pi_j \mathcal{N}(w|\mu_j, \sigma_j^2)}{\sum_k \pi_k \mathcal{N}(w|\mu_k, \sigma_k^2)}.$.
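A finite-difference check of this result for the regularizer alone (i.e., of $\partial\Omega/\partial w_i = \sum_j \gamma_j(w_i)(w_i - \mu_j)/\sigma_j^2$) is straightforward; the mixture parameters and weights below are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
M = 3
pi = rng.dirichlet(np.ones(M))                    # mixing coefficients pi_j
mu = rng.standard_normal(M)                       # means mu_j
sigma2 = rng.uniform(0.5, 2.0, size=M)            # variances sigma_j^2

def responsibilities(w):
    """Return gamma_j(w_i) (rows: weights, columns: components) and the mixture density."""
    dens = pi * np.exp(-0.5 * (w[:, None] - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    return dens / dens.sum(axis=1, keepdims=True), dens.sum(axis=1)

def Omega(w):
    return -np.sum(np.log(responsibilities(w)[1]))

w = rng.standard_normal(5)
gamma, _ = responsibilities(w)
grad = np.sum(gamma * (w[:, None] - mu) / sigma2, axis=1)   # the derived expression

eps = 1e-6
grad_fd = np.array([(Omega(w + eps * e) - Omega(w - eps * e)) / (2 * eps) for e in np.eye(5)])
assert np.allclose(grad, grad_fd, atol=1e-6)
```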
2,181
5
5.3
medium
Consider a regression problem involving multiple target variables in which it is assumed that the distribution of the targets, conditioned on the input vector $\mathbf{x}$ , is a Gaussian of the form $$p(\mathbf{t}|\mathbf{x}, \mathbf{w}) = \mathcal{N}(\mathbf{t}|\mathbf{y}(\mathbf{x}, \mathbf{w}), \mathbf{\Sigma})$$ $p(\mathbf{t}|\mathbf{x}, \mathbf{w}) = \mathcal{N}(\mathbf{t}|\mathbf{y}(\mathbf{x}, \mathbf{w}), \mathbf{\Sigma})$ where $\mathbf{y}(\mathbf{x}, \mathbf{w})$ is the output of a neural network with input vector $\mathbf{x}$ and weight vector $\mathbf{w}$ , and $\mathbf{\Sigma}$ is the covariance of the assumed Gaussian noise on the targets. Given a set of independent observations of $\mathbf{x}$ and $\mathbf{t}$ , write down the error function that must be minimized in order to find the maximum likelihood solution for $\mathbf{w}$ , if we assume that $\mathbf{\Sigma}$ is fixed and known. Now assume that $\mathbf{\Sigma}$ is also to be determined from the data, and write down an expression for the maximum likelihood solution for $\mathbf{\Sigma}$ . Note that the optimizations of $\mathbf{w}$ and $\mathbf{\Sigma}$ are now coupled, in contrast to the case of independent target variables discussed in Section 5.2.
Following the process in the previous question, we first write down the negative logarithm of the likelihood function. $$E(\mathbf{w}, \mathbf{\Sigma}) = \frac{1}{2} \sum_{n=1}^{N} \left\{ [\mathbf{y}(\mathbf{x}_n, \mathbf{w}) - \mathbf{t}_n]^T \mathbf{\Sigma}^{-1} [\mathbf{y}(\mathbf{x}_n, \mathbf{w}) - \mathbf{t}_n] \right\} + \frac{N}{2} \ln|\mathbf{\Sigma}| + \text{const} \quad (*)$$ Note here we have assumed $\Sigma$ is unknown and const denotes the term independent of both w and $\Sigma$ . In the first situation, if $\Sigma$ is fixed and known, the equation above will reduce to: $$E(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \left\{ [\mathbf{y}(\mathbf{x_n}, \mathbf{w}) - \mathbf{t_n}]^T \mathbf{\Sigma}^{-1} [\mathbf{y}(\mathbf{x_n}, \mathbf{w}) - \mathbf{t_n}] \right\} + \text{const}$$ We can simply solve $\mathbf{w}_{ML}$ by minimizing it. If $\Sigma$ is unknown, since $\Sigma$ is in the first term on the right of (\*), solving $\mathbf{w}_{ML}$ will involve $\Sigma$ . Note that in the previous problem, the main reason that they can decouple is due to the independent assumption, i.e., $\Sigma$ reduces to $\beta^{-1}\mathbf{I}$ , so that we can bring $\beta$ to the front and view it as a fixed multiplying factor when solving $\mathbf{w}_{ML}$ .
1,293
5
5.30
easy
Verify the result $\frac{\partial \widetilde{E}}{\partial \mu_j} = \lambda \sum_i \gamma_j(w_i) \frac{(\mu_i - w_j)}{\sigma_j^2}$.
Is is similar to the previous problem. Since we know that: $$\frac{\partial}{\partial \mu_j} \left\{ \pi_j \mathcal{N}(w_i | \mu_j, \sigma_j^2) \right\} = \pi_j \frac{w_i - \mu_j}{\sigma_j^2} \mathcal{N}(w_i | \mu_j, \sigma_j^2)$$ We can derive: $$\begin{split} \frac{\partial \widetilde{E}}{\partial \mu_{j}} &= \frac{\partial \lambda \Omega(\mathbf{w})}{\partial \mu_{j}} \\ &= -\lambda \frac{\partial}{\partial \mu_{j}} \left\{ \sum_{i} \ln \left( \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right) \right\} \\ &= -\lambda \sum_{i} \frac{\partial}{\partial \mu_{j}} \left\{ \ln \left( \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right) \right\} \\ &= -\lambda \sum_{i} \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \frac{\partial}{\partial \mu_{j}} \left\{ \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right\} \\ &= -\lambda \sum_{i} \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \pi_{j} \frac{w_{i} - \mu_{j}}{\sigma_{j}^{2}} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \\ &= \lambda \sum_{i} \frac{\pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})}{\sum_{k=1}^{K} \pi_{k} \mathcal{N}(w_{i} | \mu_{k}, \sigma_{k}^{2})} \frac{\mu_{j} - w_{i}}{\sigma_{j}^{2}} = \lambda \sum_{i} \gamma_{j}(w_{i}) \frac{\mu_{j} - w_{i}}{\sigma_{j}^{2}} \end{split}$$ Note that there is a typo in $\frac{\partial \widetilde{E}}{\partial \mu_j} = \lambda \sum_i \gamma_j(w_i) \frac{(\mu_i - w_j)}{\sigma_j^2}$. The numerator should be $\mu_j - w_i$ instead of $\mu_i - w_j$ . This can be easily seen through the fact that the mean and variance of the Gaussian Distribution should have the same subindex and since $\sigma_j$ is in the denominator, $\mu_j$ should occur in the numerator instead of $\mu_i$ .
1,851
5
5.31
easy
Verify the result $\frac{\partial \widetilde{E}}{\partial \sigma_j} = \lambda \sum_{i} \gamma_j(w_i) \left( \frac{1}{\sigma_j} - \frac{(w_i - \mu_j)^2}{\sigma_j^3} \right)$.
It is similar to the previous problem. Since we know that: $$\frac{\partial}{\partial \sigma_j} \left\{ \pi_j \mathcal{N}(w_i | \mu_j, \sigma_j^2) \right\} = \left( -\frac{1}{\sigma_j} + \frac{(w_i - \mu_j)^2}{\sigma_j^3} \right) \pi_j \mathcal{N}(w_i | \mu_j, \sigma_j^2)$$ We can derive: $$\begin{split} \frac{\partial \widetilde{E}}{\partial \sigma_{j}} &= \frac{\partial \lambda \Omega(\mathbf{w})}{\partial \sigma_{j}} \\ &= -\lambda \frac{\partial}{\partial \sigma_{j}} \left\{ \sum_{i} \ln \left( \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right) \right\} \\ &= -\lambda \sum_{i} \frac{\partial}{\partial \sigma_{j}} \left\{ \ln \left( \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right) \right\} \\ &= -\lambda \sum_{i} \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \frac{\partial}{\partial \sigma_{j}} \left\{ \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right\} \\ &= -\lambda \sum_{i} \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \frac{\partial}{\partial \sigma_{j}} \left\{ \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right\} \\ &= \lambda \sum_{i} \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \left( \frac{1}{\sigma_{j}} - \frac{(w_{i} - \mu_{j})^{2}}{\sigma_{j}^{3}} \right) \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \\ &= \lambda \sum_{i} \frac{\pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})}{\sum_{k=1}^{M} \pi_{k} \mathcal{N}(w_{i} | \mu_{k}, \sigma_{k}^{2})} \left( \frac{1}{\sigma_{j}} - \frac{(w_{i} - \mu_{j})^{2}}{\sigma_{j}^{3}} \right) \\ &= \lambda \sum_{i} \gamma_{j}(w_{i}) \left( \frac{1}{\sigma_{j}} - \frac{(w_{i} - \mu_{j})^{2}}{\sigma_{j}^{3}} \right) \end{split}$$ Just as required.
1,818
5
5.32
medium
Show that the derivatives of the mixing coefficients $\{\pi_k\}$ , defined by $\pi_j = \frac{\exp(\eta_j)}{\sum_{k=1}^{M} \exp(\eta_k)}.$, with respect to the auxiliary parameters $\{\eta_i\}$ are given by $$\frac{\partial \pi_k}{\partial \eta_i} = \delta_{jk} \pi_j - \pi_j \pi_k. \tag{5.208}$$ Hence, by making use of the constraint $\sum_{k} \pi_{k} = 1$ , derive the result $\frac{\partial \widetilde{E}}{\partial \eta_j} = \sum_i \left\{ \pi_j - \gamma_j(w_i) \right\}.$.
It is trivial. We begin by verifying $\frac{\partial \pi_k}{\partial \eta_i} = \delta_{jk} \pi_j - \pi_j \pi_k.$ when $j \neq k$ . $$\begin{array}{ll} \frac{\partial \pi_k}{\partial \eta_j} & = & \frac{\partial}{\partial \eta_j} \left\{ \frac{exp(\eta_k)}{\sum_k exp(\eta_k)} \right\} \\ & = & \frac{-exp(\eta_k) exp(\eta_j)}{\left[\sum_k exp(\eta_k)\right]^2} \\ & = & -\pi_j \pi_k \end{array}$$ And if now we have j = k: $$\begin{array}{lcl} \frac{\partial \pi_k}{\partial \eta_k} & = & \frac{\partial}{\partial \eta_k} \left\{ \frac{exp(\eta_k)}{\sum_k exp(\eta_k)} \right\} \\ & = & \frac{exp(\eta_k) \left[ \sum_k exp(\eta_k) \right] - exp(\eta_k) exp(\eta_k)}{\left[ \sum_k exp(\eta_k) \right]^2} \\ & = & \pi_k - \pi_k \pi_k \end{array}$$ If we combine these two cases, we can easily see that $\frac{\partial \pi_k}{\partial \eta_i} = \delta_{jk} \pi_j - \pi_j \pi_k.$ holds. Now we prove $\frac{\partial \widetilde{E}}{\partial \eta_j} = \sum_i \left\{ \pi_j - \gamma_j(w_i) \right\}.$. $$\begin{split} \frac{\partial \widetilde{E}}{\partial \eta_{j}} &= \lambda \frac{\partial \Omega(\mathbf{w})}{\partial \eta_{j}} \\ &= -\lambda \frac{\partial}{\partial \eta_{j}} \left\{ \sum_{i} \ln \left\{ \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right\} \right\} \\ &= -\lambda \sum_{i} \frac{\partial}{\partial \eta_{j}} \left\{ \ln \left\{ \sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) \right\} \right\} \\ &= -\lambda \sum_{i} \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \frac{\partial}{\partial \eta_{j}} \left\{ \sum_{k=1}^{M} \pi_{k} \mathcal{N}(w_{i} | \mu_{k}, \sigma_{k}^{2}) \right\} \\ &= -\lambda \sum_{i} \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \sum_{k=1}^{M} \frac{\partial}{\partial \eta_{j}} \left\{ \pi_{k} \mathcal{N}(w_{i} | \mu_{k}, \sigma_{k}^{2}) \right\} \\ &= -\lambda \sum_{i} \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \sum_{k=1}^{M} \frac{\partial}{\partial \eta_{k}} \left\{ \pi_{k} \mathcal{N}(w_{i} | \mu_{k}, \sigma_{k}^{2}) \right\} \frac{\partial \pi_{k}}{\partial \eta_{j}} \\ &= -\lambda \sum_{i} \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \sum_{k=1}^{M} \mathcal{N}(w_{i} | \mu_{k}, \sigma_{k}^{2}) (\delta_{jk} \pi_{j} - \pi_{j} \pi_{k}) \\ &= -\lambda \sum_{i} \frac{1}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \left\{ \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2}) - \pi_{j} \sum_{k=1}^{M} \pi_{k} \mathcal{N}(w_{i} | \mu_{k}, \sigma_{k}^{2}) \right\} \\ &= -\lambda \sum_{i} \left\{ \frac{\pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})}{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} - \frac{\pi_{j} \sum_{k=1}^{M} \pi_{k} \mathcal{N}(w_{i} | \mu_{k}, \sigma_{k}^{2}) }{\sum_{j=1}^{M} \pi_{j} \mathcal{N}(w_{i} | \mu_{j}, \sigma_{j}^{2})} \right\} \\ &= -\lambda \sum_{i} \left\{ \gamma_{j}(w_{i}) - \pi_{j} \right\} = \lambda \sum_{i} \left\{ \pi_{j} - \gamma_{j}(w_{i}) \right\} \end{split}$$ Just as required.
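The softmax derivative (5.208) is also easy to verify numerically; the short check below (arbitrary $\{\eta_j\}$ of my own) compares $\delta_{jk}\pi_j - \pi_j\pi_k$ against finite differences of the softmax.

```python
import numpy as np

rng = np.random.default_rng(9)
M = 4
eta = rng.standard_normal(M)

def softmax(eta):
    e = np.exp(eta - eta.max())
    return e / e.sum()

pi = softmax(eta)
J = np.diag(pi) - np.outer(pi, pi)       # J[k, j] = delta_{jk} pi_j - pi_j pi_k
eps = 1e-6
J_fd = np.column_stack([(softmax(eta + eps * e) - softmax(eta - eps * e)) / (2 * eps)
                        for e in np.eye(M)])
assert np.allclose(J, J_fd, atol=1e-8)
```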
3,140
5
5.33
easy
Write down a pair of equations that express the Cartesian coordinates $(x_1, x_2)$ for the robot arm shown in Figure 5.18 in terms of the joint angles $\theta_1$ and $\theta_2$ and the lengths $L_1$ and $L_2$ of the links. Assume the origin of the coordinate system is given by the attachment point of the lower arm. These equations define the 'forward kinematics' of the robot arm.
It is trivial. We set the attachment point of the lower arm with the ground as the origin of the coordinate. We first aim to find the vertical distance from the origin to the target point, and this is also the value of $x_2$ . $$x_2 = L_1 \sin(\pi - \theta_1) + L_2 \sin(\theta_2 - (\pi - \theta_1))$$ = $L_1 \sin \theta_1 - L_2 \sin(\theta_1 + \theta_2)$ Similarly, we calculate the horizontal distance from the origin to the target point. $$x_1 = -L_1 \cos(\pi - \theta_1) + L_2 \cos(\theta_2 - (\pi - \theta_1))$$ = $L_1 \cos \theta_1 - L_2 \cos(\theta_1 + \theta_2)$ From these two equations, we can clearly see the 'forward kinematics' of the robot arm.
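A direct transcription of these forward kinematics into code (the link lengths and joint angles below are arbitrary choices of mine):

```python
import numpy as np

def forward_kinematics(theta1, theta2, L1=1.0, L2=0.6):
    """End-effector position from the joint angles, using the two equations above."""
    x1 = L1 * np.cos(theta1) - L2 * np.cos(theta1 + theta2)
    x2 = L1 * np.sin(theta1) - L2 * np.sin(theta1 + theta2)
    return x1, x2

print(forward_kinematics(np.deg2rad(120), np.deg2rad(50)))
```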
673
5
5.34
easy
Derive the result $\frac{\partial E_n}{\partial a_k^{\pi}} = \pi_k - \gamma_k.$ for the derivative of the error function with respect to the network output activations controlling the mixing coefficients in the mixture density network.
By analogy with $\frac{\partial \pi_k}{\partial \eta_i} = \delta_{jk} \pi_j - \pi_j \pi_k.$, we can write: $$\frac{\partial \pi_k(\mathbf{x})}{\partial \alpha_j^\pi} = \delta_{jk} \pi_j(\mathbf{x}) - \pi_j(\mathbf{x}) \pi_k(\mathbf{x})$$ Using $E(\mathbf{w}) = -\sum_{n=1}^{N} \ln \left\{ \sum_{k=1}^{k} \pi_k(\mathbf{x}_n, \mathbf{w}) \mathcal{N} \left( \mathbf{t}_n | \boldsymbol{\mu}_k(\mathbf{x}_n, \mathbf{w}), \sigma_k^2(\mathbf{x}_n, \mathbf{w}) \right) \right\}$, we can see that: $$E_n = -\ln \left\{ \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\}$$ Therefore, we can derive: $$\begin{split} \frac{\partial E_n}{\partial a_j^{\pi}} &= -\frac{\partial}{\partial a_j^{\pi}} \ln \left\{ \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\} \\ &= -\frac{1}{\sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)} \frac{\partial}{\partial a_j^{\pi}} \left\{ \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\} \\ &= -\frac{1}{\sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)} \sum_{k=1}^K \frac{\partial \pi_k}{\partial a_j^{\pi}} \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \\ &= -\frac{1}{\sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)} \sum_{k=1}^K \left[ \delta_{jk} \pi_j(\mathbf{x}_n) - \pi_j(\mathbf{x}_n) \pi_k(\mathbf{x}_n) \right] \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \\ &= -\frac{1}{\sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)} \left\{ \pi_j(\mathbf{x}_n) \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_j, \sigma_j^2) - \pi_j(\mathbf{x}_n) \sum_{k=1}^K \pi_k(\mathbf{x}_n) \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\} \\ &= \frac{1}{\sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)} \left\{ -\pi_j(\mathbf{x}_n) \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_j, \sigma_j^2) + \pi_j(\mathbf{x}_n) \sum_{k=1}^K \pi_k(\mathbf{x}_n) \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\} \end{split}$$ And if we denoted $\gamma_k(\mathbf{t}|\mathbf{x}) = \frac{\pi_k \mathcal{N}_{nk}}{\sum_{l=1}^K \pi_l \mathcal{N}_{nl}}$, we will have: $$\frac{\partial E_n}{\partial \alpha_j^{\pi}} = -\gamma_j + \pi_j$$ Note that our result is slightly different from $\frac{\partial E_n}{\partial a_k^{\pi}} = \pi_k - \gamma_k.$ by the subindex. But there are actually the same if we substitute index j by index k in the final expression. # **Problem 5.35 Solution** We deal with the derivative of error function with respect to $\mu_k$ instead, which will give a vector as result. Furthermore, the lth element of this vector will be what we have been required. Since we know that: $$\frac{\partial}{\partial \boldsymbol{\mu}_k} \left\{ \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\} = \frac{\mathbf{t}_n - \boldsymbol{\mu}_k}{\sigma_k^2} \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)$$ One thing worthy noticing is that here we focus on the isotropic case as stated in page 273 of the textbook. To be more precise, $\mathcal{N}(\mathbf{t}_n|\boldsymbol{\mu}_k,\sigma_k^2)$ should be $\mathcal{N}(\mathbf{t}_n|\boldsymbol{\mu}_k,\sigma_k^2\mathbf{I})$ . 
Provided with the equation above, we can further obtain: $$\begin{split} \frac{\partial E_n}{\partial \boldsymbol{\mu}_k} &= \frac{\partial}{\partial \boldsymbol{\mu}_k} \left\{ -\ln \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\} \\ &= -\frac{1}{\sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)} \frac{\partial}{\partial \boldsymbol{\mu}_k} \left\{ \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\} \\ &= -\frac{1}{\sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)} \cdot \frac{\mathbf{t}_n - \boldsymbol{\mu}_k}{\sigma_k^2} \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \\ &= -\gamma_k \frac{\mathbf{t}_n - \boldsymbol{\mu}_k}{\sigma_k^2} \end{split}$$ Hence, noticing $\mu_{kj}(\mathbf{x}) = a_{kj}^{\mu}$, the lth element of the result above is exactly the required derivative: $$\frac{\partial E_n}{\partial a_{kl}^{\mu}} = \frac{\partial E_n}{\partial \mu_{kl}} = \gamma_k \frac{\mu_{kl} - t_{nl}}{\sigma_k^2}$$ where $t_{nl}$ denotes the lth component of $\mathbf{t}_n$.
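As a quick numerical sanity check (not part of the original solution), the following Python sketch verifies both gradients above by finite differences on a toy mixture-density-network output layer; all sizes and values are made up.

```python
# Finite-difference check of dE_n/da_j^pi = pi_j - gamma_j and
# dE_n/dmu_kl = gamma_k (mu_kl - t_l) / sigma_k^2 on made-up numbers.
import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 2
a_pi = rng.normal(size=K)                     # activations for mixing coefficients
mu = rng.normal(size=(K, D))                  # component means
sigma = np.abs(rng.normal(size=K)) + 0.5      # component standard deviations
t = rng.normal(size=D)                        # one target vector t_n

def gauss(t, mu_k, s_k):
    d = t - mu_k                              # isotropic N(t | mu_k, s_k^2 I)
    return np.exp(-0.5 * d @ d / s_k**2) / (2 * np.pi * s_k**2) ** (D / 2)

def E_n(a_pi, mu):
    pi = np.exp(a_pi) / np.exp(a_pi).sum()    # softmax
    return -np.log(sum(pi[k] * gauss(t, mu[k], sigma[k]) for k in range(K)))

pi = np.exp(a_pi) / np.exp(a_pi).sum()
dens = np.array([gauss(t, mu[k], sigma[k]) for k in range(K)])
gamma = pi * dens / np.sum(pi * dens)

eps = 1e-6
num_pi = np.array([(E_n(a_pi + eps * np.eye(K)[j], mu)
                    - E_n(a_pi - eps * np.eye(K)[j], mu)) / (2 * eps)
                   for j in range(K)])
print(np.allclose(num_pi, pi - gamma, atol=1e-6))            # expected: True

dmu = np.zeros((K, D)); dmu[1, 0] = eps                      # perturb mu_{k=1, l=0}
num_mu = (E_n(a_pi, mu + dmu) - E_n(a_pi, mu - dmu)) / (2 * eps)
print(np.isclose(num_mu, gamma[1] * (mu[1, 0] - t[0]) / sigma[1]**2, atol=1e-6))
```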
4,455
5
5.36
easy
Derive the result $\frac{\partial E_n}{\partial a_k^{\sigma}} = -\gamma_k \left\{ \frac{\|\mathbf{t} - \boldsymbol{\mu}_k\|^2}{\sigma_k^3} - \frac{1}{\sigma_k} \right\}.$ for the derivative of the error function with respect to the network output activations controlling the component variances in the mixture density network.
Similarly, we know that: $$\frac{\partial}{\partial \sigma_k} \left\{ \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\} = \left\{ -\frac{D}{\sigma_k} + \frac{||\mathbf{t}_n - \boldsymbol{\mu}_k||^2}{\sigma_k^3} \right\} \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)$$ Therefore, we can obtain: $$\begin{split} \frac{\partial E_n}{\partial \sigma_k} &= \frac{\partial}{\partial \sigma_k} \left\{ -\ln \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\} \\ &= -\frac{1}{\sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)} \frac{\partial}{\partial \sigma_k} \left\{ \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \right\} \\ &= -\frac{1}{\sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2)} \cdot \left\{ -\frac{D}{\sigma_k} + \frac{||\mathbf{t}_n - \boldsymbol{\mu}_k||^2}{\sigma_k^3} \right\} \pi_k \mathcal{N}(\mathbf{t}_n | \boldsymbol{\mu}_k, \sigma_k^2) \\ &= -\gamma_k \left\{ -\frac{D}{\sigma_k} + \frac{||\mathbf{t}_n - \boldsymbol{\mu}_k||^2}{\sigma_k^3} \right\} \end{split}$$ Note that there is a typo in $\frac{\partial E_n}{\partial a_k^{\sigma}} = -\gamma_k \left\{ \frac{\|\mathbf{t} - \boldsymbol{\mu}_k\|^2}{\sigma_k^3} - \frac{1}{\sigma_k} \right\}.$: the term $1/\sigma_k$ should read $D/\sigma_k$. The underlying reason is that $|\sigma_k^2 \mathbf{I}_{D \times D}| = (\sigma_k^2)^D$, so differentiating the normalisation constant of the D-dimensional isotropic Gaussian with respect to $\sigma_k$ contributes the factor $-D/\sigma_k$.
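Again as a sketch with made-up numbers (not from the text), a finite-difference check confirms that the derivative with respect to $\sigma_k$ really does contain the factor D discussed above.

```python
# Finite-difference check of dE_n/dsigma_k = -gamma_k (||t - mu_k||^2/sigma_k^3 - D/sigma_k).
import numpy as np

rng = np.random.default_rng(1)
K, D = 3, 4
pi = rng.dirichlet(np.ones(K))            # fixed mixing coefficients
mu = rng.normal(size=(K, D))
sigma = np.abs(rng.normal(size=K)) + 0.5
t = rng.normal(size=D)

def gauss(t, mu_k, s_k):
    d = t - mu_k
    return np.exp(-0.5 * d @ d / s_k**2) / (2 * np.pi * s_k**2) ** (D / 2)

def E_n(sig):
    return -np.log(sum(pi[k] * gauss(t, mu[k], sig[k]) for k in range(K)))

dens = np.array([gauss(t, mu[k], sigma[k]) for k in range(K)])
gamma = pi * dens / np.sum(pi * dens)
analytic = -gamma * (np.sum((t - mu)**2, axis=1) / sigma**3 - D / sigma)

eps = 1e-6
numeric = np.array([(E_n(sigma + eps * np.eye(K)[k]) - E_n(sigma - eps * np.eye(K)[k]))
                    / (2 * eps) for k in range(K)])
print(np.allclose(analytic, numeric, atol=1e-6))   # expected: True
```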
1,433
5
5.37
easy
Verify the results $\mathbb{E}\left[\mathbf{t}|\mathbf{x}\right] = \int \mathbf{t}p(\mathbf{t}|\mathbf{x}) \, d\mathbf{t} = \sum_{k=1}^{K} \pi_k(\mathbf{x}) \boldsymbol{\mu}_k(\mathbf{x})$ and $= \sum_{k=1}^{K} \pi_{k}(\mathbf{x}) \left\{ \sigma_{k}^{2}(\mathbf{x}) + \left\|\boldsymbol{\mu}_{k}(\mathbf{x}) - \sum_{l=1}^{K} \pi_{l}(\mathbf{x})\boldsymbol{\mu}_{l}(\mathbf{x})\right\|^{2} \right\}$ for the conditional mean and variance of the mixture density network model.
First we know two properties for the Gaussian distribution $\mathcal{N}(\mathbf{t}|\boldsymbol{\mu}, \sigma^2 \mathbf{I})$ : $$\mathbb{E}[\mathbf{t}] = \int \mathbf{t} \mathcal{N}(\mathbf{t}|\boldsymbol{\mu}, \sigma^2 \mathbf{I}) d\mathbf{t} = \boldsymbol{\mu}$$ And $$\mathbb{E}[||\mathbf{t}||^2] = \int ||\mathbf{t}||^2 \mathcal{N}(\mathbf{t}|\boldsymbol{\mu}, \sigma^2 \mathbf{I}) d\mathbf{t} = L\sigma^2 + ||\boldsymbol{\mu}||^2$$ Where we have used $\mathbb{E}[\mathbf{t}^T \mathbf{A} \mathbf{t}] = \text{Tr}[\mathbf{A} \sigma^2 \mathbf{I}] + \boldsymbol{\mu}^T \mathbf{A} \boldsymbol{\mu}$ by setting $\mathbf{A} = \mathbf{I}$ . This property can be found in the Matrix Cookbook, Eq. (378). Here L is the dimension of $\mathbf{t}$ . Noticing $p(\mathbf{t}|\mathbf{x}) = \sum_{k=1}^{K} \pi_k(\mathbf{x}) \mathcal{N}\left(\mathbf{t}|\boldsymbol{\mu}_k(\mathbf{x}), \sigma_k^2(\mathbf{x})\right).$, we can write: $$\begin{split} \mathbb{E}[\mathbf{t}|\mathbf{x}] &= \int \mathbf{t} p(\mathbf{t}|\mathbf{x}) d\mathbf{t} \\ &= \int \mathbf{t} \sum_{k=1}^{K} \pi_k \mathcal{N}(\mathbf{t}|\boldsymbol{\mu}_k, \sigma_k^2) d\mathbf{t} \\ &= \sum_{k=1}^{K} \pi_k \int \mathbf{t} \mathcal{N}(\mathbf{t}|\boldsymbol{\mu}_k, \sigma_k^2) d\mathbf{t} \\ &= \sum_{k=1}^{K} \pi_k \boldsymbol{\mu}_k \end{split}$$ Then we prove $= \sum_{k=1}^{K} \pi_{k}(\mathbf{x}) \left\{ \sigma_{k}^{2}(\mathbf{x}) + \left\|\boldsymbol{\mu}_{k}(\mathbf{x}) - \sum_{l=1}^{K} \pi_{l}(\mathbf{x})\boldsymbol{\mu}_{l}(\mathbf{x})\right\|^{2} \right\}$. $$\begin{split} s^2(\mathbf{x}) &= \mathbb{E}\left[\,\|\mathbf{t} - \mathbb{E}[\mathbf{t}|\mathbf{x}]\|^2 \,\big|\, \mathbf{x}\right] = \mathbb{E}\left[\,\|\mathbf{t}\|^2 - 2\mathbf{t}^T\mathbb{E}[\mathbf{t}|\mathbf{x}] + \|\mathbb{E}[\mathbf{t}|\mathbf{x}]\|^2 \,\big|\, \mathbf{x}\right] \\ &= \mathbb{E}[\,\|\mathbf{t}\|^2\,|\,\mathbf{x}] - \|\mathbb{E}[\mathbf{t}|\mathbf{x}]\|^2 \\ &= \int \|\mathbf{t}\|^2 \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{t}|\boldsymbol{\mu}_k, \sigma_k^2)\, d\mathbf{t} - \Big\|\sum_{l=1}^K \pi_l \boldsymbol{\mu}_l\Big\|^2 \\ &= \sum_{k=1}^K \pi_k \left( L\sigma_k^2 + \|\boldsymbol{\mu}_k\|^2 \right) - \Big\|\sum_{l=1}^K \pi_l \boldsymbol{\mu}_l\Big\|^2 \\ &= L\sum_{k=1}^K \pi_k \sigma_k^2 + \sum_{k=1}^K \pi_k \|\boldsymbol{\mu}_k\|^2 - 2\Big\|\sum_{l=1}^K \pi_l \boldsymbol{\mu}_l\Big\|^2 + \Big\|\sum_{l=1}^K \pi_l \boldsymbol{\mu}_l\Big\|^2 \\ &= L\sum_{k=1}^K \pi_k \sigma_k^2 + \sum_{k=1}^K \pi_k \|\boldsymbol{\mu}_k\|^2 - 2\Big(\sum_{k=1}^K \pi_k \boldsymbol{\mu}_k\Big)^T\Big(\sum_{l=1}^K \pi_l \boldsymbol{\mu}_l\Big) + \Big(\sum_{k=1}^K \pi_k\Big)\Big\|\sum_{l=1}^K \pi_l \boldsymbol{\mu}_l\Big\|^2 \\ &= \sum_{k=1}^K \pi_k \left( L\sigma_k^2 + \Big\|\boldsymbol{\mu}_k - \sum_{l=1}^K \pi_l \boldsymbol{\mu}_l\Big\|^2 \right) \end{split}$$ Here we have used $\sum_{k=1}^K \pi_k = 1$ to rewrite $-2\|\sum_l \pi_l\boldsymbol{\mu}_l\|^2 + \|\sum_l \pi_l\boldsymbol{\mu}_l\|^2$ as the cross term and the constant term of the completed square. Note that there is a typo in $= \sum_{k=1}^{K} \pi_{k}(\mathbf{x}) \left\{ \sigma_{k}^{2}(\mathbf{x}) + \left\|\boldsymbol{\mu}_{k}(\mathbf{x}) - \sum_{l=1}^{K} \pi_{l}(\mathbf{x})\boldsymbol{\mu}_{l}(\mathbf{x})\right\|^{2} \right\}$, i.e., the coefficient L in front of $\sigma_k^2$ is missing.
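A short Monte Carlo sketch (toy mixture, arbitrary parameters, not from the text) confirming the conditional mean and the corrected variance formula with the factor L:

```python
# Sample from a K-component isotropic Gaussian mixture in L dimensions and compare
# the empirical mean and total variance with the closed-form expressions above.
import numpy as np

rng = np.random.default_rng(2)
K, L = 3, 2
pi = rng.dirichlet(np.ones(K))
mu = rng.normal(size=(K, L))
sigma = np.abs(rng.normal(size=K)) + 0.5

n = 400_000
comp = rng.choice(K, size=n, p=pi)
samples = mu[comp] + sigma[comp][:, None] * rng.standard_normal((n, L))

mean_formula = pi @ mu                                     # sum_k pi_k mu_k
var_formula = np.sum(pi * (L * sigma**2 +
                           np.sum((mu - mean_formula)**2, axis=1)))

print(np.allclose(samples.mean(axis=0), mean_formula, atol=0.01))
print(np.isclose(np.sum(samples.var(axis=0)), var_formula, rtol=0.01))
```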
4,146
5
5.38
easy
Using the general result $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$, derive the predictive distribution $p(t|\mathbf{x}, \mathcal{D}, \alpha, \beta) = \mathcal{N}\left(t|y(\mathbf{x}, \mathbf{w}_{\text{MAP}}), \sigma^2(\mathbf{x})\right)$ for the Laplace approximation to the Bayesian neural network model.
From $q(\mathbf{w}|\mathcal{D}) = \mathcal{N}(\mathbf{w}|\mathbf{w}_{\text{MAP}}, \mathbf{A}^{-1}).$ and $p(t|\mathbf{x}, \mathbf{w}, \beta) \simeq \mathcal{N}\left(t|y(\mathbf{x}, \mathbf{w}_{\text{MAP}}) + \mathbf{g}^{\mathbf{T}}(\mathbf{w} - \mathbf{w}_{\text{MAP}}), \beta^{-1}\right).$, we can write down the expression for the predictive distribution: $$p(t|\mathbf{x}, D, \alpha, \beta) = \int p(\mathbf{w}|D, \alpha, \beta) p(t|\mathbf{x}, \mathbf{w}, \beta) d\mathbf{w}$$ $$\approx \int q(\mathbf{w}|D) p(t|\mathbf{x}, \mathbf{w}, \beta) d\mathbf{w}$$ $$= \int \mathcal{N}(\mathbf{w}|\mathbf{w}_{MAP}, \mathbf{A}^{-1}) \mathcal{N}(t|\mathbf{g}^T \mathbf{w} - \mathbf{g}^T \mathbf{w}_{MAP} + y(\mathbf{x}, \mathbf{w}_{MAP}), \beta^{-1}) d\mathbf{w}$$ Note here $p(t|\mathbf{x}, \mathbf{w}, \beta)$ is given by $p(t|\mathbf{x}, \mathbf{w}, \beta) \simeq \mathcal{N}\left(t|y(\mathbf{x}, \mathbf{w}_{\text{MAP}}) + \mathbf{g}^{\mathbf{T}}(\mathbf{w} - \mathbf{w}_{\text{MAP}}), \beta^{-1}\right).$ and $q(\mathbf{w}|D)$ is the approximation to the posterior $p(\mathbf{w}|D, \alpha, \beta)$ , which is given by $q(\mathbf{w}|\mathcal{D}) = \mathcal{N}(\mathbf{w}|\mathbf{w}_{\text{MAP}}, \mathbf{A}^{-1}).$. Then by analogy with $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$, we first deal with the mean of the predictive distribution: mean = $$\mathbf{g}^T \mathbf{w} - \mathbf{g}^T \mathbf{w}_{MAP} + y(\mathbf{x}, \mathbf{w}_{MAP})|_{\mathbf{w} = \mathbf{w}_{MAP}}$$ = $y(\mathbf{x}, \mathbf{w}_{MAP})$ Then we deal with the covariance matrix: Covariance matrix = $$\beta^{-1} + \mathbf{g}^T \mathbf{A}^{-1} \mathbf{g}$$ Just as required.
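The marginalisation used above can be checked numerically. The sketch below (synthetic $\mathbf{A}$, $\mathbf{g}$, $\beta$ and $y(\mathbf{x},\mathbf{w}_{\text{MAP}})$, all made up) samples $\mathbf{w}$ from the Gaussian posterior approximation, pushes it through the linearised likelihood, and compares the resulting mean and variance of t with $y(\mathbf{x},\mathbf{w}_{\text{MAP}})$ and $\beta^{-1} + \mathbf{g}^T\mathbf{A}^{-1}\mathbf{g}$.

```python
# Monte Carlo check: integrating N(t | y_MAP + g^T(w - w_MAP), 1/beta) against
# N(w | w_MAP, A^{-1}) gives mean y_MAP and variance 1/beta + g^T A^{-1} g.
import numpy as np

rng = np.random.default_rng(3)
W = 5
w_map = rng.normal(size=W)
g = rng.normal(size=W)
B = rng.normal(size=(W, W)); A = B @ B.T + W * np.eye(W)   # s.p.d. "Hessian"
beta, y_map = 2.0, 0.7

n = 500_000
w = rng.multivariate_normal(w_map, np.linalg.inv(A), size=n)   # w ~ q(w|D)
t = y_map + (w - w_map) @ g + rng.normal(scale=beta**-0.5, size=n)

print(np.isclose(t.mean(), y_map, atol=0.01))
print(np.isclose(t.var(), 1 / beta + g @ np.linalg.inv(A) @ g, rtol=0.02))
```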
1,826
5
5.39
easy
Make use of the Laplace approximation result $= f(\mathbf{z}_0) \frac{(2\pi)^{M/2}}{|\mathbf{A}|^{1/2}}$ to show that the evidence function for the hyperparameters $\alpha$ and $\beta$ in the Bayesian neural network model can be approximated by $\ln p(\mathcal{D}|\alpha,\beta) \simeq -E(\mathbf{w}_{\text{MAP}}) - \frac{1}{2}\ln|\mathbf{A}| + \frac{W}{2}\ln\alpha + \frac{N}{2}\ln\beta - \frac{N}{2}\ln(2\pi) \quad$.
Using the Laplace approximation, we can obtain: $$p(D|\mathbf{w},\beta)p(\mathbf{w}|\alpha) \simeq p(D|\mathbf{w}_{\text{MAP}},\beta)p(\mathbf{w}_{\text{MAP}}|\alpha)\exp\left\{-\frac{1}{2}(\mathbf{w}-\mathbf{w}_{\text{MAP}})^T\mathbf{A}(\mathbf{w}-\mathbf{w}_{\text{MAP}})\right\}$$ Then using $p(\mathcal{D}|\alpha,\beta) = \int p(\mathcal{D}|\mathbf{w},\beta)p(\mathbf{w}|\alpha) \,d\mathbf{w}.$, $p(\mathbf{w}|\alpha) = \mathcal{N}(\mathbf{w}|\mathbf{0}, \alpha^{-1}\mathbf{I}).$ and $p(\mathcal{D}|\mathbf{w},\beta) = \prod_{n=1}^{N} \mathcal{N}(t_n|y(\mathbf{x}_n, \mathbf{w}), \beta^{-1})$, we can obtain: $$p(D|\alpha,\beta) = \int p(D|\mathbf{w},\beta)p(\mathbf{w}|\alpha)d\mathbf{w}$$ $$\simeq \int p(D|\mathbf{w}_{\text{MAP}},\beta)p(\mathbf{w}_{\text{MAP}}|\alpha)\exp\left\{-\frac{1}{2}(\mathbf{w}-\mathbf{w}_{\text{MAP}})^{T}\mathbf{A}(\mathbf{w}-\mathbf{w}_{\text{MAP}})\right\}d\mathbf{w}$$ $$= p(D|\mathbf{w}_{\text{MAP}},\beta)p(\mathbf{w}_{\text{MAP}}|\alpha)\frac{(2\pi)^{W/2}}{|\mathbf{A}|^{1/2}}$$ $$= \prod_{n=1}^{N} \mathcal{N}(t_{n}|y(\mathbf{x}_{n},\mathbf{w}_{\text{MAP}}),\beta^{-1})\mathcal{N}(\mathbf{w}_{\text{MAP}}|\mathbf{0},\alpha^{-1}\mathbf{I})\frac{(2\pi)^{W/2}}{|\mathbf{A}|^{1/2}}$$ If we take the logarithm of both sides, we will obtain $\ln p(\mathcal{D}|\alpha,\beta) \simeq -E(\mathbf{w}_{\text{MAP}}) - \frac{1}{2}\ln|\mathbf{A}| + \frac{W}{2}\ln\alpha + \frac{N}{2}\ln\beta - \frac{N}{2}\ln(2\pi) \quad$ just as required. (Note the factor $\frac{1}{2}$ in the exponent of the Laplace approximation, which is what produces the Gaussian integral $(2\pi)^{W/2}/|\mathbf{A}|^{1/2}$.)
1,469
5
5.4
medium
Consider a binary classification problem in which the target values are $t \in \{0,1\}$ , with a network output $y(\mathbf{x},\mathbf{w})$ that represents $p(t=1|\mathbf{x})$ , and suppose that there is a probability $\epsilon$ that the class label on a training data point has been incorrectly set. Assuming independent and identically distributed data, write down the error function corresponding to the negative log likelihood. Verify that the error function $E(\mathbf{w}) = -\sum_{n=1}^{N} \left\{ t_n \ln y_n + (1 - t_n) \ln(1 - y_n) \right\}$ is obtained when $\epsilon = 0$ . Note that this error function makes the model robust to incorrectly labelled data, in contrast to the usual error function.
Based on $p(t|\mathbf{x}, \mathbf{w}) = y(\mathbf{x}, \mathbf{w})^t \left\{ 1 - y(\mathbf{x}, \mathbf{w}) \right\}^{1-t}.$, the conditional distribution of the observed target, taking possible mislabelling into account, given input $\mathbf{x}$ and weights $\mathbf{w}$ is: $$p(t = 1|\mathbf{x}, \mathbf{w}) = (1 - \epsilon) \cdot p(t_r = 1|\mathbf{x}, \mathbf{w}) + \epsilon \cdot p(t_r = 0|\mathbf{x}, \mathbf{w})$$ Note that here we use t to denote the observed target label and $t_r$ to denote its real label, and that our network aims to predict the real label $t_r$, not t, i.e., $p(t_r = 1|\mathbf{x}, \mathbf{w}) = y(\mathbf{x}, \mathbf{w})$ , hence we see that: $$p(t = 1|\mathbf{x}, \mathbf{w}) = (1 - \epsilon) \cdot y(\mathbf{x}, \mathbf{w}) + \epsilon \cdot [1 - y(\mathbf{x}, \mathbf{w})]$$ (\*) Also, it is the same for $p(t = 0 | \mathbf{x}, \mathbf{w})$ : $$p(t = 0|\mathbf{x}, \mathbf{w}) = (1 - \epsilon) \cdot [1 - y(\mathbf{x}, \mathbf{w})] + \epsilon \cdot y(\mathbf{x}, \mathbf{w})$$ (\*\*) Combining (\*) and (\*\*), we can obtain: $$p(t|\mathbf{x}, \mathbf{w}) = (1 - \epsilon) \cdot y^t (1 - y)^{1 - t} + \epsilon \cdot (1 - y)^t y^{1 - t}$$ Where y is short for $y(\mathbf{x}, \mathbf{w})$ . Therefore, taking the negative logarithm, we can obtain the error function: $$E(\mathbf{w}) = -\sum_{n=1}^{N} \ln \left\{ (1 - \epsilon) \cdot y_n^{t_n} (1 - y_n)^{1 - t_n} + \epsilon \cdot (1 - y_n)^{t_n} y_n^{1 - t_n} \right\}$$ When $\epsilon = 0$ , the equation above reduces to $E(\mathbf{w}) = -\sum_{n=1}^{N} \left\{ t_n \ln y_n + (1 - t_n) \ln(1 - y_n) \right\}$.
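A tiny numerical sketch (random outputs and labels, nothing from the text) confirming that the noisy-label error function reduces to the usual cross-entropy at $\epsilon = 0$:

```python
# Check that E_robust(eps=0) equals the standard binary cross-entropy on made-up data.
import numpy as np

rng = np.random.default_rng(4)
N = 10
y = rng.uniform(0.05, 0.95, size=N)     # network outputs y_n
t = rng.integers(0, 2, size=N)          # observed labels t_n

def E_robust(eps):
    return -np.sum(np.log((1 - eps) * y**t * (1 - y)**(1 - t)
                          + eps * (1 - y)**t * y**(1 - t)))

E_ce = -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))
print(np.isclose(E_robust(0.0), E_ce))   # expected: True
```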
1,624
5
5.40
easy
Outline the modifications needed to the framework for Bayesian neural networks, discussed in Section 5.7.3, to handle multiclass problems using networks having softmax output-unit activation functions.
For a K-class classification problem, we need to use the softmax activation function for the output units, and the error function is now given by $E(\mathbf{w}) = -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{kn} \ln y_k(\mathbf{x}_n, \mathbf{w}).$. Therefore, the Hessian matrix should be derived from this error function, and the cross-entropy term in $E(\mathbf{w}_{\text{MAP}}) = -\sum_{n=1}^{N} \{t_n \ln y_n + (1 - t_n) \ln(1 - y_n)\} + \frac{\alpha}{2} \mathbf{w}_{\text{MAP}}^{\text{T}} \mathbf{w}_{\text{MAP}}$ is likewise replaced by the multiclass cross-entropy $-\sum_{n=1}^{N} \sum_{k=1}^{K} t_{kn} \ln y_k(\mathbf{x}_n, \mathbf{w}_{\text{MAP}})$.
702
5
5.41
medium
By following analogous steps to those given in Section 5.7.1 for regression networks, derive the result $\ln p(\mathcal{D}|\alpha) \simeq -E(\mathbf{w}_{\text{MAP}}) - \frac{1}{2} \ln |\mathbf{A}| + \frac{W}{2} \ln \alpha + \text{const}$ for the marginal likelihood in the case of a network having a cross-entropy error function and logistic-sigmoid output-unit activation function. ![](_page_310_Picture_0.jpeg) In Chapters 3 and 4, we considered linear parametric models for regression and classification in which the form of the mapping $y(\mathbf{x}, \mathbf{w})$ from input $\mathbf{x}$ to output y is governed by a vector $\mathbf{w}$ of adaptive parameters. During the learning phase, a set of training data is used either to obtain a point estimate of the parameter vector or to determine a posterior distribution over this vector. The training data is then discarded, and predictions for new inputs are based purely on the learned parameter vector $\mathbf{w}$ . This approach is also used in nonlinear parametric models such as neural networks. However, there is a class of pattern recognition techniques, in which the training data points, or a subset of them, are kept and used also during the prediction phase. For instance, the Parzen probability density model comprised a linear combination of 'kernel' functions each one centred on one of the training data points. Similarly, in Section 2.5.2 we introduced a simple technique for classification called nearest neighbours, which involved assigning to each new test vector the same label as the Chapter 5 Section 2.5.1 closest example from the training set. These are examples of *memory-based* methods that involve storing the entire training set in order to make predictions for future data points. They typically require a metric to be defined that measures the similarity of any two vectors in input space, and are generally fast to 'train' but slow at making predictions for test data points. Many linear parametric models can be re-cast into an equivalent 'dual representation' in which the predictions are also based on linear combinations of a *kernel function* evaluated at the training data points. As we shall see, for models which are based on a fixed nonlinear *feature space* mapping $\phi(\mathbf{x})$ , the kernel function is given by the relation $$k(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^{\mathrm{T}} \phi(\mathbf{x}'). \tag{6.1}$$ From this definition, we see that the kernel is a symmetric function of its arguments so that $k(\mathbf{x}, \mathbf{x}') = k(\mathbf{x}', \mathbf{x})$ . The kernel concept was introduced into the field of pattern recognition by Aizerman *et al.* (1964) in the context of the method of potential functions, so-called because of an analogy with electrostatics. Although neglected for many years, it was re-introduced into machine learning in the context of large-margin classifiers by Boser *et al.* (1992) giving rise to the technique of *support vector machines*. Since then, there has been considerable interest in this topic, both in terms of theory and applications. One of the most significant developments has been the extension of kernels to handle symbolic objects, thereby greatly expanding the range of problems that can be addressed. 
The simplest example of a kernel function is obtained by considering the identity mapping for the feature space in $k(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^{\mathrm{T}} \phi(\mathbf{x}').$ so that $\phi(\mathbf{x}) = \mathbf{x}$ , in which case $k(\mathbf{x}, \mathbf{x}') = \mathbf{x}^T \mathbf{x}'$ . We shall refer to this as the linear kernel. The concept of a kernel formulated as an inner product in a feature space allows us to build interesting extensions of many well-known algorithms by making use of the *kernel trick*, also known as *kernel substitution*. The general idea is that, if we have an algorithm formulated in such a way that the input vector $\mathbf{x}$ enters only in the form of scalar products, then we can replace that scalar product with some other choice of kernel. For instance, the technique of kernel substitution can be applied to principal component analysis in order to develop a nonlinear variant of PCA (Schölkopf *et al.*, 1998). Other examples of kernel substitution include nearest-neighbour classifiers and the kernel Fisher discriminant (Mika *et al.*, 1999; Roth and Steinhage, 2000; Baudat and Anouar, 2000). There are numerous forms of kernel functions in common use, and we shall encounter several examples in this chapter. Many have the property of being a function only of the difference between the arguments, so that $k(\mathbf{x}, \mathbf{x}') = k(\mathbf{x} - \mathbf{x}')$ , which are known as *stationary* kernels because they are invariant to translations in input space. A further specialization involves *homogeneous* kernels, also known as *radial basis functions*, which depend only on the magnitude of the distance (typically Euclidean) between the arguments so that $k(\mathbf{x}, \mathbf{x}') = k(||\mathbf{x} - \mathbf{x}'||)$ . For recent textbooks on kernel methods, see Schölkopf and Smola (2002), Herbrich (2002), and Shawe-Taylor and Cristianini (2004). Chapter 7 Section 12.3 Section 6.3 ### 6.1. Dual Representations Many linear models for regression and classification can be reformulated in terms of a dual representation in which the kernel function arises naturally. This concept will play an important role when we consider support vector machines in the next chapter. Here we consider a linear regression model whose parameters are determined by minimizing a regularized sum-of-squares error function given by $$J(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \left\{ \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) - t_n \right\}^2 + \frac{\lambda}{2} \mathbf{w}^{\mathrm{T}} \mathbf{w}$$ $J(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \left\{ \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) - t_n \right\}^2 + \frac{\lambda}{2} \mathbf{w}^{\mathrm{T}} \mathbf{w}$ where $\lambda \geqslant 0$ . 
If we set the gradient of $J(\mathbf{w})$ with respect to $\mathbf{w}$ equal to zero, we see that the solution for $\mathbf{w}$ takes the form of a linear combination of the vectors $\phi(\mathbf{x}_n)$ , with coefficients that are functions of $\mathbf{w}$ , of the form $$\mathbf{w} = -\frac{1}{\lambda} \sum_{n=1}^{N} \left\{ \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) - t_n \right\} \boldsymbol{\phi}(\mathbf{x}_n) = \sum_{n=1}^{N} a_n \boldsymbol{\phi}(\mathbf{x}_n) = \boldsymbol{\Phi}^{\mathrm{T}} \mathbf{a}$$ $\mathbf{w} = -\frac{1}{\lambda} \sum_{n=1}^{N} \left\{ \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) - t_n \right\} \boldsymbol{\phi}(\mathbf{x}_n) = \sum_{n=1}^{N} a_n \boldsymbol{\phi}(\mathbf{x}_n) = \boldsymbol{\Phi}^{\mathrm{T}} \mathbf{a}$ where $\Phi$ is the design matrix, whose $n^{\text{th}}$ row is given by $\phi(\mathbf{x}_n)^{\text{T}}$ . Here the vector $\mathbf{a} = (a_1, \dots, a_N)^{\text{T}}$ , and we have defined $$a_n = -\frac{1}{\lambda} \left\{ \mathbf{w}^{\mathrm{T}} \phi(\mathbf{x}_n) - t_n \right\}. \tag{6.4}$$
By analogy to Prob.5.39, we can write: $$p(D|\alpha) = p(D|\mathbf{w}_{\text{MAP}})p(\mathbf{w}_{\text{MAP}}|\alpha) \frac{(2\pi)^{W/2}}{|\mathbf{A}|^{1/2}}$$ Since the prior $p(\mathbf{w}|\alpha)$ follows a Gaussian distribution, i.e., $p(\mathbf{w}|\alpha) = \mathcal{N}(\mathbf{w}|\mathbf{0}, \alpha^{-1}\mathbf{I}).$, as stated in the text, we can obtain: $$\begin{split} \ln p(D|\alpha) &= & \ln p(D|\mathbf{w}_{\text{MAP}}) + \ln p(\mathbf{w}_{\text{MAP}}|\alpha) - \frac{1}{2}\ln |\mathbf{A}| + \text{const} \\ &= & \ln p(D|\mathbf{w}_{\text{MAP}}) - \frac{\alpha}{2}\mathbf{w}_{\text{MAP}}^T\mathbf{w}_{\text{MAP}} + \frac{W}{2}\ln \alpha - \frac{1}{2}\ln |\mathbf{A}| + \text{const} \\ &= & -E(\mathbf{w}_{\text{MAP}}) + \frac{W}{2}\ln \alpha - \frac{1}{2}\ln |\mathbf{A}| + \text{const} \end{split}$$ Just as required.

## 0.6 Kernel Methods
863
5
5.5
easy
Show that maximizing likelihood for a multiclass neural network model in which the network outputs have the interpretation $y_k(\mathbf{x}, \mathbf{w}) = p(t_k = 1|\mathbf{x})$ is equivalent to the minimization of the cross-entropy error function $E(\mathbf{w}) = -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{kn} \ln y_k(\mathbf{x}_n, \mathbf{w}).$.
It is obvious by using $p(\mathbf{t}|\mathbf{x}, \mathbf{w}) = \prod_{k=1}^{K} y_k(\mathbf{x}, \mathbf{w})^{t_k} \left[ 1 - y_k(\mathbf{x}, \mathbf{w}) \right]^{1 - t_k}.$. $$E(\mathbf{w}) = -\ln \prod_{n=1}^{N} p(\mathbf{t}|\mathbf{x_n}, \mathbf{w})$$ $$= -\ln \prod_{n=1}^{N} \prod_{k=1}^{K} y_k(\mathbf{x_n}, \mathbf{w})^{t_{nk}} [1 - y_k(\mathbf{x_n}, \mathbf{w})]^{1 - t_{nk}}$$ $$= -\sum_{n=1}^{N} \sum_{k=1}^{K} \ln \{y_k(\mathbf{x_n}, \mathbf{w})^{t_{nk}} [1 - y_k(\mathbf{x_n}, \mathbf{w})]^{1 - t_{nk}} \}$$ $$= -\sum_{n=1}^{N} \sum_{k=1}^{K} \ln [y_{nk}^{t_{nk}} (1 - y_{nk})^{1 - t_{nk}}]$$ $$= -\sum_{n=1}^{N} \sum_{k=1}^{K} \{t_{nk} \ln y_{nk} + (1 - t_{nk}) \ln (1 - y_{nk}) \}$$ Where we have denoted $$y_{nk} = y_k(\mathbf{x_n}, \mathbf{w})$$
774
5
5.6
easy
Show the derivative of the error function $E(\mathbf{w}) = -\sum_{n=1}^{N} \left\{ t_n \ln y_n + (1 - t_n) \ln(1 - y_n) \right\}$ with respect to the activation $a_k$ for an output unit having a logistic sigmoid activation function satisfies $\frac{\partial E}{\partial a_k} = y_k - t_k$.
We know that $y_k = \sigma(a_k)$ , where $\sigma(\cdot)$ represents the logistic sigmoid function. Moreover, $$\frac{d\sigma}{da} = \sigma(1 - \sigma)$$ $$\frac{dE(\mathbf{w})}{da_k} = -t_k \frac{1}{y_k} [y_k (1 - y_k)] + (1 - t_k) \frac{1}{1 - y_k} [y_k (1 - y_k)]$$ $$= [y_k (1 - y_k)] [\frac{1 - t_k}{1 - y_k} - \frac{t_k}{y_k}]$$ $$= (1 - t_k) y_k - t_k (1 - y_k)$$ $$= y_k - t_k$$ Just as required.
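A one-sample finite-difference sketch (arbitrary $a_k$ and $t_k$, not from the text) of the result $\partial E/\partial a_k = y_k - t_k$:

```python
# Central-difference check of the cross-entropy/sigmoid gradient at a single output.
import numpy as np

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
a, t = 0.3, 1.0
E = lambda a: -(t * np.log(sigmoid(a)) + (1 - t) * np.log(1 - sigmoid(a)))

eps = 1e-6
numeric = (E(a + eps) - E(a - eps)) / (2 * eps)
print(np.isclose(numeric, sigmoid(a) - t))   # expected: True
```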
412
5
5.7
easy
Show the derivative of the error function $E(\mathbf{w}) = -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{kn} \ln y_k(\mathbf{x}_n, \mathbf{w}).$ with respect to the activation $a_k$ for output units having a softmax activation function satisfies $\frac{\partial E}{\partial a_k} = y_k - t_k$.
It is similar to the previous problem. First we denote $y_{kn} = y_k(\mathbf{x_n}, \mathbf{w})$ . If we use the softmax function as the activation for the output units, according to $\frac{\partial y_k}{\partial a_j} = y_k (I_{kj} - y_j)$, we have: $$\frac{dy_{kn}}{da_j} = y_{kn}(I_{kj} - y_{jn})$$ Therefore, $$\frac{dE(\mathbf{w})}{da_{j}} = \frac{d}{da_{j}} \left\{ -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{kn} \ln y_{k}(\mathbf{x_{n}}, \mathbf{w}) \right\} = -\sum_{n=1}^{N} \sum_{k=1}^{K} \frac{d}{da_{j}} \left\{ t_{kn} \ln y_{kn} \right\} = -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{kn} \frac{1}{y_{kn}} \left[ y_{kn} (I_{kj} - y_{jn}) \right] = -\sum_{n=1}^{N} \sum_{k=1}^{K} (t_{kn} I_{kj} - t_{kn} y_{jn}) = -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{kn} I_{kj} + \sum_{n=1}^{N} \sum_{k=1}^{K} t_{kn} y_{jn} = -\sum_{n=1}^{N} t_{jn} + \sum_{n=1}^{N} y_{jn} = \sum_{n=1}^{N} (y_{jn} - t_{jn})$$ Here we have used the fact that $I_{kj} = 1$ only when $k = j$ (and is zero otherwise), together with $\sum_{k=1}^K t_{kn} = 1$.
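The analogous finite-difference sketch for the softmax case (a single sample with a one-hot target, values made up):

```python
# Central-difference check of dE/da_j = y_j - t_j for softmax outputs with
# multiclass cross-entropy, on one sample.
import numpy as np

a = np.array([0.2, -1.0, 0.5])
t = np.array([0.0, 1.0, 0.0])
softmax = lambda a: np.exp(a - a.max()) / np.exp(a - a.max()).sum()
E = lambda a: -np.sum(t * np.log(softmax(a)))

eps = 1e-6
numeric = np.array([(E(a + eps * np.eye(3)[j]) - E(a - eps * np.eye(3)[j])) / (2 * eps)
                    for j in range(3)])
print(np.allclose(numeric, softmax(a) - t))   # expected: True
```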
994
5
5.8
easy
We saw in $\frac{d\sigma}{da} = \sigma(1 - \sigma).$ that the derivative of the logistic sigmoid activation function can be expressed in terms of the function value itself. Derive the corresponding result for the 'tanh' activation function defined by $\tanh(a) = \frac{e^a - e^{-a}}{e^a + e^{-a}}.$.
This follows directly from the definition of 'tanh', i.e., $\tanh(a) = \frac{e^a - e^{-a}}{e^a + e^{-a}}.$. $$\frac{d}{da}\tanh(a) = \frac{(e^a + e^{-a})(e^a + e^{-a}) - (e^a - e^{-a})(e^a - e^{-a})}{(e^a + e^{-a})^2}$$ $$= 1 - \frac{(e^a - e^{-a})^2}{(e^a + e^{-a})^2}$$ $$= 1 - \tanh^2(a)$$
293
5
5.9
easy
The error function $E(\mathbf{w}) = -\sum_{n=1}^{N} \left\{ t_n \ln y_n + (1 - t_n) \ln(1 - y_n) \right\}$ for binary classification problems was derived for a network having a logistic-sigmoid output activation function, so that $0 \le y(\mathbf{x}, \mathbf{w}) \le 1$ , and data having target values $t \in \{0, 1\}$ . Derive the corresponding error function if we consider a network having an output $-1 \le y(\mathbf{x}, \mathbf{w}) \le 1$ and target values t = 1 for class $C_1$ and t = -1 for class $C_2$ . What would be the appropriate choice of output unit activation function?
We know that the logistic sigmoid function $\sigma(a) \in [0,1]$ , therefore if we perform a linear transformation $h(a) = 2\sigma(a) - 1$ , we can find a mapping function h(a) from $(-\infty, +\infty)$ to [-1,1]. In this case, the conditional distribution of targets given inputs can be similarly written as: $$p(t|\mathbf{x}, \mathbf{w}) = \left[\frac{1 + y(\mathbf{x}, \mathbf{w})}{2}\right]^{(1+t)/2} \left[\frac{1 - y(\mathbf{x}, \mathbf{w})}{2}\right]^{(1-t)/2}$$ Where $[1+y(\mathbf{x},\mathbf{w})]/2$ represents the conditional probability $p(C_1|x)$ . Since now $y(\mathbf{x},\mathbf{w}) \in [-1,1]$ , we also need to perform the linear transformation to make it satisfy the constraint for probability. Then we can further obtain: $$E(\mathbf{w}) = -\sum_{n=1}^{N} \left\{ \frac{1+t_n}{2} \ln \frac{1+y_n}{2} + \frac{1-t_n}{2} \ln \frac{1-y_n}{2} \right\}$$ $$= -\frac{1}{2} \sum_{n=1}^{N} \left\{ (1+t_n) \ln(1+y_n) + (1-t_n) \ln(1-y_n) \right\} + N \ln 2$$
978
6
6.1
medium
Consider the dual formulation of the least squares linear regression problem given in Section 6.1. Show that the solution for the components $a_n$ of the vector $\mathbf{a}$ can be expressed as a linear combination of the elements of the vector $\phi(\mathbf{x}_n)$ . Denoting these coefficients by the vector $\mathbf{w}$ , show that the dual of the dual formulation is given by the original representation in terms of the parameter vector $\mathbf{w}$ .
Recall that in section.6.1, $a_n$ can be written as $a_n = -\frac{1}{\lambda} \left\{ \mathbf{w}^{\mathrm{T}} \phi(\mathbf{x}_n) - t_n \right\}.$. We can derive: $$a_n = -\frac{1}{\lambda} \{ \mathbf{w}^T \boldsymbol{\phi}(\mathbf{x}_n) - t_n \}$$ $$= -\frac{1}{\lambda} \{ w_1 \phi_1(\mathbf{x}_n) + w_2 \phi_2(\mathbf{x}_n) + \dots + w_M \phi_M(\mathbf{x}_n) - t_n \}$$ $$= -\frac{w_1}{\lambda} \phi_1(\mathbf{x}_n) - \frac{w_2}{\lambda} \phi_2(\mathbf{x}_n) - \dots - \frac{w_M}{\lambda} \phi_M(\mathbf{x}_n) + \frac{t_n}{\lambda}$$ $$= (c_n - \frac{w_1}{\lambda}) \phi_1(\mathbf{x}_n) + (c_n - \frac{w_2}{\lambda}) \phi_2(\mathbf{x}_n) + \dots + (c_n - \frac{w_M}{\lambda}) \phi_M(\mathbf{x}_n)$$ Here we have defined: $$c_n = \frac{t_n/\lambda}{\phi_1(\mathbf{x}_n) + \phi_2(\mathbf{x}_n) + \dots + \phi_M(\mathbf{x}_n)}$$ From what we have derived above, we can see that $a_n$ is a linear combination of $\phi(\mathbf{x}_n)$ . What's more, we first substitute $\mathbf{K} = \mathbf{\Phi}\mathbf{\Phi}^T$ into $J(\mathbf{a}) = \frac{1}{2} \mathbf{a}^{\mathrm{T}} \mathbf{K} \mathbf{K} \mathbf{a} - \mathbf{a}^{\mathrm{T}} \mathbf{K} \mathbf{t} + \frac{1}{2} \mathbf{t}^{\mathrm{T}} \mathbf{t} + \frac{\lambda}{2} \mathbf{a}^{\mathrm{T}} \mathbf{K} \mathbf{a}.$, and then we will obtain $J(\mathbf{a}) = \frac{1}{2} \mathbf{a}^{\mathrm{T}} \mathbf{\Phi} \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi} \mathbf{\Phi}^{\mathrm{T}} \mathbf{a} - \mathbf{a}^{\mathrm{T}} \mathbf{\Phi} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t} + \frac{1}{2} \mathbf{t}^{\mathrm{T}} \mathbf{t} + \frac{\lambda}{2} \mathbf{a}^{\mathrm{T}} \mathbf{\Phi} \mathbf{\Phi}^{\mathrm{T}} \mathbf{a}$. Next we substitute $\mathbf{w} = -\frac{1}{\lambda} \sum_{n=1}^{N} \left\{ \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) - t_n \right\} \boldsymbol{\phi}(\mathbf{x}_n) = \sum_{n=1}^{N} a_n \boldsymbol{\phi}(\mathbf{x}_n) = \boldsymbol{\Phi}^{\mathrm{T}} \mathbf{a}$ into $J(\mathbf{a}) = \frac{1}{2} \mathbf{a}^{\mathrm{T}} \mathbf{\Phi} \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi} \mathbf{\Phi}^{\mathrm{T}} \mathbf{a} - \mathbf{a}^{\mathrm{T}} \mathbf{\Phi} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t} + \frac{1}{2} \mathbf{t}^{\mathrm{T}} \mathbf{t} + \frac{\lambda}{2} \mathbf{a}^{\mathrm{T}} \mathbf{\Phi} \mathbf{\Phi}^{\mathrm{T}} \mathbf{a}$ we will obtain $J(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \left\{ \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_n) - t_n \right\}^2 + \frac{\lambda}{2} \mathbf{w}^{\mathrm{T}} \mathbf{w}$ just as required.
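A numerical sketch (random toy design matrix, not from the text) that the dual solution reproduces the primal ridge-regression solution, i.e., $\mathbf{w} = \boldsymbol{\Phi}^T\mathbf{a}$ with $\mathbf{a} = (\mathbf{K}+\lambda\mathbf{I}_N)^{-1}\mathbf{t}$:

```python
# Compare primal ridge regression with its dual formulation on made-up data.
import numpy as np

rng = np.random.default_rng(5)
N, M, lam = 8, 3, 0.1
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)

w = np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)   # primal solution
K = Phi @ Phi.T                                                  # Gram matrix
a = np.linalg.solve(K + lam * np.eye(N), t)                      # dual solution
print(np.allclose(w, Phi.T @ a))         # w = Phi^T a
print(np.allclose(Phi @ w, K @ a))       # identical predictions on training inputs
```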
2,582
6
6.10
easy
Show that an excellent choice of kernel for learning a function $f(\mathbf{x})$ is given by $k(\mathbf{x}, \mathbf{x}') = f(\mathbf{x}) f(\mathbf{x}')$ by showing that a linear learning machine based on this kernel will always find a solution proportional to $f(\mathbf{x})$ .
According to $y(\mathbf{x}) = \mathbf{w}^{\mathrm{T}} \phi(\mathbf{x}) = \mathbf{a}^{\mathrm{T}} \Phi \phi(\mathbf{x}) = \mathbf{k}(\mathbf{x})^{\mathrm{T}} (\mathbf{K} + \lambda \mathbf{I}_{N})^{-1} \mathbf{t}$, we have: $$y(\mathbf{x}) = \mathbf{k}(\mathbf{x})^T (\mathbf{K} + \lambda \mathbf{I}_N)^{-1} \mathbf{t} = \mathbf{k}(\mathbf{x})^T \mathbf{a} = \sum_{n=1}^N f(\mathbf{x}_n) \cdot f(\mathbf{x}) \cdot a_n = \left[ \sum_{n=1}^N f(\mathbf{x}_n) \cdot a_n \right] f(\mathbf{x})$$ We see that if we choose $k(\mathbf{x}, \mathbf{x}') = f(\mathbf{x})f(\mathbf{x}')$ we will always find a solution $y(\mathbf{x})$ proportional to $f(\mathbf{x})$ .
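A toy sketch (arbitrary $f$, training points and targets, all made up) showing that the resulting predictions are indeed proportional to $f(\mathbf{x})$:

```python
# With k(x, x') = f(x) f(x'), the kernel ridge-regression predictor is c * f(x).
import numpy as np

rng = np.random.default_rng(6)
f = lambda x: np.sin(3 * x) + 0.5 * x          # an arbitrary function f
X = rng.uniform(-1, 1, size=10)                # training inputs
t = rng.normal(size=10)                        # arbitrary targets
lam = 0.3

K = np.outer(f(X), f(X))
a = np.linalg.solve(K + lam * np.eye(len(X)), t)
c = f(X) @ a                                   # constant of proportionality

x_test = np.linspace(-1, 1, 5)
y_test = (f(X) * f(x_test)[:, None]) @ a       # k(x)^T a at each test point
print(np.allclose(y_test, c * f(x_test)))      # expected: True
```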
666
6
6.11
easy
By making use of the expansion $k(\mathbf{x}, \mathbf{x}') = \exp\left(-\mathbf{x}^{\mathrm{T}}\mathbf{x}/2\sigma^{2}\right) \exp\left(\mathbf{x}^{\mathrm{T}}\mathbf{x}'/\sigma^{2}\right) \exp\left(-(\mathbf{x}')^{\mathrm{T}}\mathbf{x}'/2\sigma^{2}\right)$, and then expanding the middle factor as a power series, show that the Gaussian kernel $k(\mathbf{x}, \mathbf{x}') = \exp\left(-\|\mathbf{x} - \mathbf{x}'\|^2 / 2\sigma^2\right)$ can be expressed as the inner product of an infinite-dimensional feature vector.
We follow the hint. $$k(\mathbf{x}, \mathbf{x}') = \exp(-\mathbf{x}^T \mathbf{x}/2\sigma^2) \cdot \exp(\mathbf{x}^T \mathbf{x}'/\sigma^2) \cdot \exp(-(\mathbf{x}')^T \mathbf{x}'/2\sigma^2)$$ $$= \exp(-\mathbf{x}^T \mathbf{x}/2\sigma^2) \cdot \left(1 + \frac{\mathbf{x}^T \mathbf{x}'}{\sigma^2} + \frac{(\frac{\mathbf{x}^T \mathbf{x}'}{\sigma^2})^2}{2!} + \cdots\right) \cdot \exp(-(\mathbf{x}')^T \mathbf{x}'/2\sigma^2)$$ $$= \phi(\mathbf{x})^T \phi(\mathbf{x}')$$ where $\phi(\mathbf{x})$ is a column vector of infinite dimension. To be more specific, the worked example in the text, $k(\mathbf{x}, \mathbf{z}) = (\mathbf{x}^{\mathrm{T}}\mathbf{z})^2 = \phi(\mathbf{x})^{\mathrm{T}} \phi(\mathbf{z})$, gives a simple example of how to decompose $(\mathbf{x}^T\mathbf{x}')^2$ into an inner product of feature vectors. In our case, we can decompose each power $(\mathbf{x}^T\mathbf{x}')^k$ , $k = 1, 2, \dots$, in a similar way. However, here k ranges over all non-negative integers, so the decomposition contains monomials of arbitrarily high degree. Thus there are infinitely many terms in the expansion, and the feature mapping $\phi(\mathbf{x})$ is infinite-dimensional.
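A small sketch (scalar inputs, made-up numbers) showing the truncated power series converging to the Gaussian kernel value as more terms of the expansion are kept:

```python
# Truncate the expansion exp(x x'/sigma^2) = sum_j (x x'/sigma^2)^j / j! and watch
# the reconstructed kernel approach exp(-(x - x')^2 / (2 sigma^2)).
import numpy as np
from math import factorial

x, xp, sigma2 = 0.8, -0.3, 0.5
exact = np.exp(-(x - xp)**2 / (2 * sigma2))

for terms in (1, 2, 5, 10):
    series = sum((x * xp / sigma2)**j / factorial(j) for j in range(terms))
    approx = np.exp(-x**2 / (2 * sigma2)) * series * np.exp(-xp**2 / (2 * sigma2))
    print(terms, abs(approx - exact))          # error shrinks as terms increase
print(abs(approx - exact) < 1e-8)              # expected: True with 10 terms
```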
1,052
6
6.12
medium
Consider the space of all possible subsets A of a given fixed set D. Show that the kernel function $k(A_1, A_2) = 2^{|A_1 \cap A_2|}$ corresponds to an inner product in a feature space of dimensionality $2^{|D|}$ defined by the mapping $\phi(A)$ where A is a subset of D and the element $\phi_U(A)$ , indexed by the subset U, is given by $$\phi_U(A) = \begin{cases} 1, & \text{if } U \subseteq A; \\ 0, & \text{otherwise.} \end{cases}$$ $\phi_U(A) = \begin{cases} 1, & \text{if } U \subseteq A; \\ 0, & \text{otherwise.} \end{cases}$ Here $U \subseteq A$ denotes that U is either a subset of A or is equal to A.
First, let's explain the problem a little bit. According to $k(A_1, A_2) = 2^{|A_1 \cap A_2|}$, what we need to prove here is: $$k(A_1, A_2) = 2^{|A_1 \cap A_2|} = \phi(A_1)^T \phi(A_2)$$ The biggest difference from the previous problem is that $\phi(A)$ is a $2^{|D|} \times 1$ column vector and, instead of being indexed by $1, 2, ..., 2^{|D|}$, here it is indexed by the subsets $\{U \,|\, U \subseteq D\}$ (note that $\{U|U \subseteq D\}$ is the collection of all possible subsets of D and thus has $2^{|D|}$ elements in total). Therefore, according to $\phi_U(A) = \begin{cases} 1, & \text{if } U \subseteq A; \\ 0, & \text{otherwise.} \end{cases}$, we can obtain: $$\boldsymbol{\phi}(A_1)^T\boldsymbol{\phi}(A_2) = \sum_{U\subseteq D} \phi_U(A_1)\phi_U(A_2)$$ The summation iterates through all possible subsets U of D, and the corresponding term equals 1 if and only if U is a subset of both $A_1$ and $A_2$ simultaneously, i.e., if and only if $U \subseteq A_1 \cap A_2$. Therefore, the sum counts how many subsets of D are contained in $A_1 \cap A_2$ ; since $A_1 \cap A_2$ is itself a subset of D, there are exactly $2^{|A_1 \cap A_2|}$ of them. Hence what we have deduced above can be written as: $$\phi(A_1)^T \phi(A_2) = 2^{|A_1 \cap A_2|}$$ Just as required.

# **Problem 6.13 Solution**

Wait for update
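A brute-force sketch for a small set D (my own toy example, not from the text), enumerating all $2^{|D|}$ feature dimensions and checking the inner product against $2^{|A_1\cap A_2|}$:

```python
# Enumerate every subset U of D, build the 0/1 feature vectors, and compare the
# inner product with 2^{|A1 ∩ A2|}.
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r)
                                                      for r in range(len(s) + 1))]

D = {1, 2, 3, 4}
U_list = subsets(D)                       # 2^|D| feature dimensions

def phi(A):
    return [1 if U <= A else 0 for U in U_list]   # U <= A means U is a subset of A

A1, A2 = frozenset({1, 2, 3}), frozenset({2, 3, 4})
inner = sum(u * v for u, v in zip(phi(A1), phi(A2)))
print(inner == 2 ** len(A1 & A2))         # expected: True
```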
1,313
6
6.14
easy
Write down the form of the Fisher kernel, defined by $k(\mathbf{x}, \mathbf{x}') = \mathbf{g}(\boldsymbol{\theta}, \mathbf{x})^{\mathrm{T}} \mathbf{F}^{-1} \mathbf{g}(\boldsymbol{\theta}, \mathbf{x}').$, for the case of a distribution $p(\mathbf{x}|\boldsymbol{\mu}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \mathbf{S})$ that is Gaussian with mean $\boldsymbol{\mu}$ and fixed covariance $\mathbf{S}$ .
Since the covariance matrix S is fixed, according to $\mathbf{g}(\boldsymbol{\theta}, \mathbf{x}) = \nabla_{\boldsymbol{\theta}} \ln p(\mathbf{x}|\boldsymbol{\theta})$ we can obtain: $$\mathbf{g}(\boldsymbol{\mu}, \mathbf{x}) = \nabla_{\boldsymbol{\mu}} \ln p(\mathbf{x}|\boldsymbol{\mu}) = \frac{\partial}{\partial \boldsymbol{\mu}} \left( -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \mathbf{S}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right) = \mathbf{S}^{-1} (\mathbf{x} - \boldsymbol{\mu})$$ Therefore, according to $\mathbf{F} = \mathbb{E}_{\mathbf{x}} \left[ \mathbf{g}(\boldsymbol{\theta}, \mathbf{x}) \mathbf{g}(\boldsymbol{\theta}, \mathbf{x})^{\mathrm{T}} \right]$, we can obtain: $$\mathbf{F} = \mathbb{E}_{\mathbf{x}} \left[ \mathbf{g}(\boldsymbol{\mu}, \mathbf{x}) \mathbf{g}(\boldsymbol{\mu}, \mathbf{x})^T \right] = \mathbf{S}^{-1} \mathbb{E}_{\mathbf{x}} \left[ (\mathbf{x} - \boldsymbol{\mu}) (\mathbf{x} - \boldsymbol{\mu})^T \right] \mathbf{S}^{-1}$$ Since $\mathbf{x} \sim \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \mathbf{S})$ , we have: $$\mathbb{E}_{\mathbf{x}}\left[(\mathbf{x}-\boldsymbol{\mu})(\mathbf{x}-\boldsymbol{\mu})^{T}\right] = \mathbf{S}$$ So we obtain $\mathbf{F} = \mathbf{S}^{-1}$ and then according to $k(\mathbf{x}, \mathbf{x}') = \mathbf{g}(\boldsymbol{\theta}, \mathbf{x})^{\mathrm{T}} \mathbf{F}^{-1} \mathbf{g}(\boldsymbol{\theta}, \mathbf{x}').$, we have: $$k(\mathbf{x}, \mathbf{x}') = \mathbf{g}(\boldsymbol{\mu}, \mathbf{x})^T \mathbf{F}^{-1} \mathbf{g}(\boldsymbol{\mu}, \mathbf{x}') = (\mathbf{x} - \boldsymbol{\mu})^T \mathbf{S}^{-1} (\mathbf{x}' - \boldsymbol{\mu})$$
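A Monte Carlo sketch (random $\boldsymbol{\mu}$ and $\mathbf{S}$, made up for illustration) checking that the Fisher information is indeed $\mathbf{F} = \mathbf{S}^{-1}$, so that the kernel is $(\mathbf{x}-\boldsymbol{\mu})^T\mathbf{S}^{-1}(\mathbf{x}'-\boldsymbol{\mu})$:

```python
# Estimate F = E[g g^T] with g = S^{-1}(x - mu), x ~ N(mu, S), and compare to S^{-1}.
import numpy as np

rng = np.random.default_rng(7)
d = 3
mu = rng.normal(size=d)
B = rng.normal(size=(d, d)); S = B @ B.T + d * np.eye(d)   # s.p.d. covariance
S_inv = np.linalg.inv(S)

n = 500_000
x = rng.multivariate_normal(mu, S, size=n)
g = (x - mu) @ S_inv                     # each row is g(mu, x)^T = (x - mu)^T S^{-1}
F_mc = g.T @ g / n                       # Monte Carlo estimate of E[g g^T]

print(np.allclose(F_mc, S_inv, atol=0.01))   # expected: True
```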
1,652
6
6.15
easy
By considering the determinant of a $2 \times 2$ Gram matrix, show that a positive-definite kernel function k(x, x') satisfies the Cauchy-Schwartz inequality $$k(x_1, x_2)^2 \le k(x_1, x_1)k(x_2, x_2).$$ $k(x_1, x_2)^2 \le k(x_1, x_1)k(x_2, x_2).$
We rewrite the problem: since $k(x, x')$ is a positive-definite kernel, the $2 \times 2$ Gram matrix $$\mathbf{K} = \left[ \begin{array}{cc} k_{11} & k_{12} \\ k_{21} & k_{22} \end{array} \right],$$ where $k_{ij}$ (i,j = 1,2) is short for $k(x_i,x_j)$ , must be positive semidefinite. A positive semidefinite $2 \times 2$ matrix has a nonnegative determinant, i.e., $$k_{11}k_{22} - k_{12}k_{21} \geq 0.$$ Using the symmetry of the kernel, i.e., $k_{12} = k_{21}$ , this gives $k_{12}^2 \leq k_{11}k_{22}$ , which is exactly the required Cauchy-Schwartz inequality.
495
6
6.16
medium
Consider a parametric model governed by the parameter vector w together with a data set of input values $\mathbf{x}_1, \dots, \mathbf{x}_N$ and a nonlinear feature mapping $\phi(\mathbf{x})$ . Suppose that the dependence of the error function on w takes the form $$J(\mathbf{w}) = f(\mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_1), \dots, \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_N)) + g(\mathbf{w}^{\mathrm{T}} \mathbf{w})$$ $J(\mathbf{w}) = f(\mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_1), \dots, \mathbf{w}^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}_N)) + g(\mathbf{w}^{\mathrm{T}} \mathbf{w})$ where $g(\cdot)$ is a monotonically increasing function. By writing w in the form $$\mathbf{w} = \sum_{n=1}^{N} \alpha_n \boldsymbol{\phi}(\mathbf{x}_n) + \mathbf{w}_{\perp}$$ $\mathbf{w} = \sum_{n=1}^{N} \alpha_n \boldsymbol{\phi}(\mathbf{x}_n) + \mathbf{w}_{\perp}$ show that the value of w that minimizes $J(\mathbf{w})$ takes the form of a linear combination of the basis functions $\phi(\mathbf{x}_n)$ for n = 1, ..., N.
Based on a first-order Taylor expansion (total derivative) of the function f, we have: $$f\left[(\mathbf{w} + \Delta \mathbf{w})^T \boldsymbol{\phi}_1, (\mathbf{w} + \Delta \mathbf{w})^T \boldsymbol{\phi}_2, ..., (\mathbf{w} + \Delta \mathbf{w})^T \boldsymbol{\phi}_N\right] \approx f\left[\mathbf{w}^T \boldsymbol{\phi}_1, ..., \mathbf{w}^T \boldsymbol{\phi}_N\right] + \sum_{n=1}^N \frac{\partial f}{\partial (\mathbf{w}^T \boldsymbol{\phi}_n)} \cdot \Delta \mathbf{w}^T \boldsymbol{\phi}_n$$ The first-order term can be further written as: $$\sum_{n=1}^N \frac{\partial f}{\partial (\mathbf{w}^T \boldsymbol{\phi}_n)} \cdot \Delta \mathbf{w}^T \boldsymbol{\phi}_n = \left[\sum_{n=1}^N \frac{\partial f}{\partial (\mathbf{w}^T \boldsymbol{\phi}_n)} \cdot \boldsymbol{\phi}_n^T\right] \Delta \mathbf{w}$$ Note that here $\phi_n$ is short for $\phi(\mathbf{x}_n)$ . Based on the equation above, we can obtain: $$\nabla_{\mathbf{w}} f = \sum_{n=1}^{N} \frac{\partial f}{\partial (\mathbf{w}^{T} \boldsymbol{\phi}_{n})} \cdot \boldsymbol{\phi}_{n}^{T}$$ Now we focus on the derivative of the function g with respect to $\mathbf{w}$ : $$\nabla_{\mathbf{w}} g = \frac{\partial g}{\partial (\mathbf{w}^T \mathbf{w})} \cdot 2\mathbf{w}^T$$ In order to find the optimal $\mathbf{w}$ , we set the derivative of J with respect to $\mathbf{w}$ equal to $\mathbf{0}$ , yielding: $$\nabla_{\mathbf{w}} J = \nabla_{\mathbf{w}} f + \nabla_{\mathbf{w}} g = \sum_{n=1}^{N} \frac{\partial f}{\partial (\mathbf{w}^{T} \boldsymbol{\phi}_{n})} \cdot \boldsymbol{\phi}_{n}^{T} + \frac{\partial g}{\partial (\mathbf{w}^{T} \mathbf{w})} \cdot 2\mathbf{w}^{T} = \mathbf{0}$$ Rearranging the equation above, we can obtain: $$\mathbf{w} = -\frac{1}{2a} \sum_{n=1}^{N} \frac{\partial f}{\partial (\mathbf{w}^{T} \boldsymbol{\phi}_{n})} \cdot \boldsymbol{\phi}_{n}$$ Where we have defined $a = \frac{\partial g}{\partial (\mathbf{w}^T \mathbf{w})}$; since g is a monotonically increasing function, we have $a \geq 0$ , and $a > 0$ wherever g is strictly increasing, so the division is well defined. Hence the minimizing $\mathbf{w}$ takes the form of a linear combination of the basis functions $\boldsymbol{\phi}(\mathbf{x}_n)$ , just as required.
1,935
6
6.17
medium
Consider the sum-of-squares error function $E = \frac{1}{2} \sum_{n=1}^{N} \int \{y(\mathbf{x}_n + \boldsymbol{\xi}) - t_n\}^2 \nu(\boldsymbol{\xi}) \,d\boldsymbol{\xi}.$ for data having noisy inputs, where $\nu(\xi)$ is the distribution of the noise. Use the calculus of variations to minimize this error function with respect to the function y(x), and hence show that the optimal solution is given by an expansion of the form $y(\mathbf{x}_n) = \sum_{n=1}^{N} t_n h(\mathbf{x} - \mathbf{x}_n)$ in which the basis functions are given by $h(\mathbf{x} - \mathbf{x}_n) = \frac{\nu(\mathbf{x} - \mathbf{x}_n)}{\sum_{n=1}^{N} \nu(\mathbf{x} - \mathbf{x}_n)}.$.
We consider a variation in the function $y(\mathbf{x})$ of the form: $$y(\mathbf{x}) \to y(\mathbf{x}) + \epsilon \eta(\mathbf{x})$$ Substituting it into $E = \frac{1}{2} \sum_{n=1}^{N} \int \{y(\mathbf{x}_n + \boldsymbol{\xi}) - t_n\}^2 \nu(\boldsymbol{\xi}) \,d\boldsymbol{\xi}.$ yields: $$\begin{split} E[y+\epsilon\eta] &= \frac{1}{2} \sum_{n=1}^{N} \int \left\{ y+\epsilon\eta - t_n \right\}^2 v(\boldsymbol{\xi}) d\boldsymbol{\xi} \\ &= \frac{1}{2} \sum_{n=1}^{N} \int \left\{ (y-t_n)^2 + 2 \cdot (\epsilon\eta) \cdot (y-t_n) + (\epsilon\eta)^2 \right\} v(\boldsymbol{\xi}) d\boldsymbol{\xi} \\ &= E[y] + \epsilon \sum_{n=1}^{N} \int \left\{ y-t_n \right\} \eta v d\boldsymbol{\xi} + O(\epsilon^2) \end{split}$$ Note that here y is short for $y(\mathbf{x}_n + \boldsymbol{\xi})$ , $\eta$ is short for $\eta(\mathbf{x}_n + \boldsymbol{\xi})$ and v is short for $v(\boldsymbol{\xi})$ respectively. Several clarifications must be made here. What we have done is to vary the function y by a little bit (i.e., $\epsilon\eta$ ) and then expand the corresponding error with respect to the small variation $\epsilon$ . The coefficient in front of $\epsilon$ is the first derivative of the error $E[y + \epsilon\eta]$ with respect to $\epsilon$ at $\epsilon = 0$ . Since y is the optimal function that minimizes E, this first derivative must equal zero at $\epsilon = 0$ , which gives: $$\sum_{n=1}^{N} \int \{y(\mathbf{x}_n + \boldsymbol{\xi}) - t_n\} \eta(\mathbf{x}_n + \boldsymbol{\xi}) v(\boldsymbol{\xi}) d\boldsymbol{\xi} = 0$$ Now we are required to find a function y that satisfies the equation above no matter what $\eta$ is. We choose: $$\eta(\mathbf{x}) = \delta(\mathbf{x} - \mathbf{z})$$ This allows us to evaluate the integral: $$\sum_{n=1}^{N} \int \left\{ y(\mathbf{x}_n + \boldsymbol{\xi}) - t_n \right\} \eta(\mathbf{x}_n + \boldsymbol{\xi}) v(\boldsymbol{\xi}) d\boldsymbol{\xi} = \sum_{n=1}^{N} \left\{ y(\mathbf{z}) - t_n \right\} v(\mathbf{z} - \mathbf{x}_n)$$ Setting this to zero and rearranging gives: $$y(\mathbf{z}) = \frac{\sum_{n=1}^{N} t_n v(\mathbf{z} - \mathbf{x}_n)}{\sum_{m=1}^{N} v(\mathbf{z} - \mathbf{x}_m)} = \sum_{n=1}^{N} t_n h(\mathbf{z} - \mathbf{x}_n)$$ which is exactly $y(\mathbf{x}_n) = \sum_{n=1}^{N} t_n h(\mathbf{x} - \mathbf{x}_n)$ with the basis functions $h$ given by $h(\mathbf{x} - \mathbf{x}_n) = \frac{\nu(\mathbf{x} - \mathbf{x}_n)}{\sum_{n=1}^{N} \nu(\mathbf{x} - \mathbf{x}_n)}.$, just as required.
2,249
6
6.18
easy
Consider a Nadaraya-Watson model with one input variable x and one target variable t having Gaussian components with isotropic covariances, so that the covariance matrix is given by $\sigma^2 \mathbf{I}$ where $\mathbf{I}$ is the unit matrix. Write down expressions for the conditional density p(t|x) and for the conditional mean $\mathbb{E}[t|x]$ and variance var[t|x], in terms of the kernel function $k(x, x_n)$ .
According to the main text below Eq $p(t|\mathbf{x}) = \frac{p(t,\mathbf{x})}{\int p(t,\mathbf{x}) dt} = \frac{\sum_{n} f(\mathbf{x} - \mathbf{x}_{n}, t - t_{n})}{\sum_{m} \int f(\mathbf{x} - \mathbf{x}_{m}, t - t_{m}) dt}$, we know that f(x,t), i.e., $f(\mathbf{z})$ , follows a zero-mean isotropic Gaussian: $$f(\mathbf{z}) = \mathcal{N}(\mathbf{z}|\mathbf{0}, \sigma^2\mathbf{I})$$ Then $f(x-x_m,t-t_m)$ , i.e., $f(\mathbf{z}-\mathbf{z}_m)$ should also satisfy a Gaussian distribution: $$f(\mathbf{z} - \mathbf{z}_m) = \mathcal{N}(\mathbf{z}|\mathbf{z}_m, \sigma^2 \mathbf{I})$$ Where we have defined: $$\mathbf{z}_m = (x_m, t_m)$$ The integral $\int f(\mathbf{z} - \mathbf{z}_m) dt$ corresponds to the marginal distribution with respect to the remaining variable x and, thus, we obtain: $$\int f(\mathbf{z} - \mathbf{z}_m) dt = \mathcal{N}(x|x_m, \sigma^2)$$ We substitute all the expressions into Eq $p(t|\mathbf{x}) = \frac{p(t,\mathbf{x})}{\int p(t,\mathbf{x}) dt} = \frac{\sum_{n} f(\mathbf{x} - \mathbf{x}_{n}, t - t_{n})}{\sum_{m} \int f(\mathbf{x} - \mathbf{x}_{m}, t - t_{m}) dt}$, which gives: $$\begin{split} p(t|x) &= \frac{p(t,x)}{\int p(t,x)dt} = \frac{\sum_{n} \mathcal{N}(\mathbf{z}|\mathbf{z}_{m},\sigma^{2}\mathbf{I})}{\sum_{m} \mathcal{N}(x|x_{m},\sigma^{2})} \\ &= \frac{\sum_{n} \frac{1}{2\pi\sigma^{2}} exp\left(-\frac{1}{2}(\mathbf{z}-\mathbf{z}_{n})^{T}(\sigma^{2}\mathbf{I})^{-1}(\mathbf{z}-\mathbf{z}_{n})\right)}{\sum_{m} \frac{1}{(2\pi\sigma^{2})^{1/2}} exp\left(-\frac{1}{2\sigma^{2}}(x-x_{m})^{2}\right)} \\ &= \frac{\sum_{n} \frac{1}{2\pi\sigma^{2}} exp\left(-\frac{1}{2\sigma^{2}}(x-x_{n})^{2}\right) exp\left(-\frac{1}{2\sigma^{2}}(t-t_{n})^{2}\right)}{\sum_{m} \frac{1}{(2\pi\sigma^{2})^{1/2}} exp\left(-\frac{1}{2\sigma^{2}}(x-x_{m})^{2}\right)} \\ &= \sum_{n} \frac{\frac{1}{\sqrt{2\pi\sigma^{2}}} exp\left(-\frac{1}{2\sigma^{2}}(x-x_{n})^{2}\right)}{\sum_{m} \frac{1}{(2\pi\sigma^{2})^{1/2}} exp\left(-\frac{1}{2\sigma^{2}}(x-x_{m})^{2}\right)} \cdot \frac{1}{\sqrt{2\pi\sigma^{2}}} exp\left(-\frac{1}{2\sigma^{2}}(t-t_{n})^{2}\right) \\ &= \sum_{n} \pi_{n} \cdot \mathcal{N}(t|t_{n},\sigma^{2}) \end{split}$$ Where we have defined: $$\pi_n = \frac{exp\left(-\frac{1}{2\sigma^2}(x - x_n)^2\right)}{\sum_m exp\left(-\frac{1}{2\sigma^2}(x - x_m)^2\right)}$$ We also observe that: $$\sum_{n} \pi_n = 1$$ Therefore, the conditional distribution p(t|x) is given by a Gaussian Mixture. Similarly, we attempt to find a specific form for Eq (6.46): $$k(x,x_n) = \frac{\int f(x-x_n,t) dt}{\sum_m \int f(x-x_m,t) dt}$$ $$= \frac{\mathcal{N}(x|x_n,\sigma^2)}{\sum_m \mathcal{N}(x|x_m,\sigma^2)}$$ $$= \pi_n$$ In other words, the conditional distribution can be more precisely written as: $$p(t|x) = \sum_{n} k(x, x_n) \cdot \mathcal{N}(t|t_n, \sigma^2)$$ Thus its mean is given by: $$\mathbb{E}[t|x] = \sum_{n} k(x, x_n) \cdot t_n$$ Its variance is given by: $$\begin{aligned} \operatorname{var}[t|x] &= & \mathbb{E}[(t|x)^2] - \mathbb{E}[t|x]^2 \\ &= & \sum_n k(x, x_n) \cdot (t_n^2 + \sigma^2) - \left(\sum_n k(x, x_n) \cdot t_n\right)^2 \end{aligned}$$
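A toy numerical sketch (synthetic training pairs, one query point, everything made up) checking the kernel-weighted expressions for the conditional mean and variance against direct numerical integration of the mixture $p(t|x)$ derived above:

```python
# Nadaraya-Watson with isotropic Gaussian components: compare the closed-form
# conditional mean/variance with numerical integration over a fine t-grid.
import numpy as np

rng = np.random.default_rng(8)
x_n = rng.uniform(-1, 1, size=6)                        # training inputs
t_n = np.sin(2 * x_n) + 0.1 * rng.standard_normal(6)    # training targets
sigma2, x = 0.05, 0.2                                   # component variance, query point

w = np.exp(-(x - x_n)**2 / (2 * sigma2))
k = w / w.sum()                                         # kernel weights k(x, x_n)
mean_k = k @ t_n
var_k = k @ (t_n**2 + sigma2) - mean_k**2

t_grid = np.linspace(-5, 5, 20001)
dt = t_grid[1] - t_grid[0]
p = sum(kn * np.exp(-(t_grid - tn)**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
        for kn, tn in zip(k, t_n))                      # p(t|x) = sum_n k_n N(t|t_n, sigma2)
mean_int = np.sum(t_grid * p) * dt
var_int = np.sum(t_grid**2 * p) * dt - mean_int**2
print(np.isclose(mean_k, mean_int, atol=1e-6), np.isclose(var_k, var_int, atol=1e-6))
```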
3,127
6
6.19
medium
- 6.19 (\*\*) Another viewpoint on kernel regression comes from a consideration of regression problems in which the input variables as well as the target variables are corrupted with additive noise. Suppose each target value t<sub>n</sub> is generated as usual by taking a function y(z<sub>n</sub>) evaluated at a point z<sub>n</sub>, and adding Gaussian noise. The value of z<sub>n</sub> is not directly observed, however, but only a noise corrupted version x<sub>n</sub> = z<sub>n</sub> + ξ<sub>n</sub> where the random variable ξ is governed by some distribution g(ξ). Consider a set of observations {x<sub>n</sub>, t<sub>n</sub>}, where n = 1,..., N, together with a corresponding sum-of-squares error function defined by averaging over the distribution of input noise to give $$E = \frac{1}{2} \sum_{n=1}^{N} \int \{y(\mathbf{x}_n - \boldsymbol{\xi}_n) - t_n\}^2 g(\boldsymbol{\xi}_n) \, d\boldsymbol{\xi}_n.$$ $E = \frac{1}{2} \sum_{n=1}^{N} \int \{y(\mathbf{x}_n - \boldsymbol{\xi}_n) - t_n\}^2 g(\boldsymbol{\xi}_n) \, d\boldsymbol{\xi}_n.$ By minimizing E with respect to the function $y(\mathbf{z})$ using the calculus of variations (Appendix D), show that optimal solution for $y(\mathbf{x})$ is given by a Nadaraya-Watson kernel regression solution of the form $= \sum_{n} k(\mathbf{x}, \mathbf{x}_{n})t_{n}$ with a kernel of the form $k(\mathbf{x}, \mathbf{x}_n) = \frac{g(\mathbf{x} - \mathbf{x}_n)}{\sum_{m} g(\mathbf{x} - \mathbf{x}_m)}$.
Similar to Prob.6.17, it is straightforward to show that: $$y(\mathbf{x}) = \sum_{n} t_n \, k(\mathbf{x}, \mathbf{x}_n)$$ Where we have defined: $$k(\mathbf{x}, \mathbf{x}_n) = \frac{g(\mathbf{x}_n - \mathbf{x})}{\sum_n g(\mathbf{x}_n - \mathbf{x})}$$
254
6
6.2
medium
In this exercise, we develop a dual formulation of the perceptron learning algorithm. Using the perceptron learning rule $\mathbf{w}^{(\tau+1)} = \mathbf{w}^{(\tau)} - \eta \nabla E_{P}(\mathbf{w}) = \mathbf{w}^{(\tau)} + \eta \phi_n t_n$, show that the learned weight vector $\mathbf{w}$ can be written as a linear combination of the vectors $t_n \phi(\mathbf{x}_n)$ where $t_n \in \{-1, +1\}$ . Denote the coefficients of this linear combination by $\alpha_n$ and derive a formulation of the perceptron learning algorithm, and the predictive function for the perceptron, in terms of the $\alpha_n$ . Show that the feature vector $\phi(\mathbf{x})$ enters only in the form of the kernel function $k(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^T \phi(\mathbf{x}')$ .
If we set $\mathbf{w}^{(0)} = \mathbf{0}$ in $\mathbf{w}^{(\tau+1)} = \mathbf{w}^{(\tau)} - \eta \nabla E_{P}(\mathbf{w}) = \mathbf{w}^{(\tau)} + \eta \phi_n t_n$, we can obtain: $$\mathbf{w}^{(\tau+1)} = \sum_{n=1}^{N} \eta c_n t_n \boldsymbol{\phi}_n$$ where N is the total number of samples and $c_n$ is the number of times that $t_n \boldsymbol{\phi}_n$ has been added between step 0 and step $\tau + 1$ . Therefore, it is obvious that we have: $$\mathbf{w} = \sum_{n=1}^{N} \alpha_n t_n \boldsymbol{\phi}_n$$ We further substitute the expression above into $\mathbf{w}^{(\tau+1)} = \mathbf{w}^{(\tau)} - \eta \nabla E_{P}(\mathbf{w}) = \mathbf{w}^{(\tau)} + \eta \phi_n t_n$, which gives: $$\sum_{n=1}^{N} \alpha_n^{(\tau+1)} t_n \boldsymbol{\phi}_n = \sum_{n=1}^{N} \alpha_n^{(\tau)} t_n \boldsymbol{\phi}_n + \eta t_n \boldsymbol{\phi}_n$$ where the final term corresponds to the currently misclassified pattern $\mathbf{x}_n$ . In other words, the update simply adds the learning rate $\eta$ to the coefficient $\alpha_n$ of the misclassified pattern $\mathbf{x}_n$ , i.e., $$\alpha_n^{(\tau+1)} = \alpha_n^{(\tau)} + \eta$$ Now we similarly substitute it into (4.52): $$y(\mathbf{x}) = f(\mathbf{w}^T \boldsymbol{\phi}(\mathbf{x}))$$ $$= f(\sum_{n=1}^N \alpha_n t_n \boldsymbol{\phi}_n^T \boldsymbol{\phi}(\mathbf{x}))$$ $$= f(\sum_{n=1}^N \alpha_n t_n k(\mathbf{x}_n, \mathbf{x}))$$ so the feature vector $\boldsymbol{\phi}(\mathbf{x})$ enters only through the kernel function $k(\mathbf{x}_n, \mathbf{x}) = \boldsymbol{\phi}(\mathbf{x}_n)^T \boldsymbol{\phi}(\mathbf{x})$ .
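A sketch of the resulting dual (kernelised) perceptron on toy separable data with a linear kernel (data, margin threshold and number of passes are all made up); the implicit weight vector $\sum_n \alpha_n t_n \mathbf{x}_n$ is tracked alongside an explicitly updated primal perceptron to confirm that they coincide:

```python
# Dual perceptron: predictions use only the Gram matrix; alpha_n counts updates.
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(size=(60, 2))
w_true = np.array([1.0, 0.5])
X = X[np.abs(X @ w_true) > 0.3]          # keep points with some margin
t = np.sign(X @ w_true)                  # labels in {-1, +1}
N = len(X)
K = X @ X.T                              # linear-kernel Gram matrix

alpha = np.zeros(N)
w = np.zeros(2)                          # explicitly updated primal weight vector
eta = 1.0
for _ in range(500):                     # passes over the data
    for n in range(N):
        if t[n] * np.sum(alpha * t * K[:, n]) <= 0:   # misclassified (dual form)
            alpha[n] += eta                           # dual update
            w += eta * t[n] * X[n]                    # matching primal update

print(np.allclose(w, (alpha * t) @ X))   # w = sum_n alpha_n t_n x_n
print(bool(np.all(t * (X @ w) > 0)))     # all patterns correctly classified
```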
1,331
6
6.20
medium
Verify the results $m(\mathbf{x}_{N+1}) = \mathbf{k}^{\mathrm{T}} \mathbf{C}_{N}^{-1} \mathbf{t}$ and $\sigma^{2}(\mathbf{x}_{N+1}) = c - \mathbf{k}^{\mathrm{T}} \mathbf{C}_{N}^{-1} \mathbf{k}$ for the mean and variance of the Gaussian process predictive distribution.
Since we know that $\mathbf{t}_{N+1} = (t_1, t_2, ..., t_N, t_{N+1})^T$ follows a Gaussian distribution, i.e., $\mathbf{t}_{N+1} \sim \mathcal{N}(\mathbf{t}_{N+1}|\mathbf{0}, \mathbf{C}_{N+1})$ given in Eq (6.64), if we rearrange its order by putting the last element (i.e., $t_{N+1}$ ) to the first position, denoted as $\bar{\mathbf{t}}_{N+1}$ , it should also satisfy a Gaussian distribution: $$\bar{\mathbf{t}}_{N+1} = (t_{N+1}, t_1, ..., t_2, t_N)^T \sim \mathcal{N}(\bar{\mathbf{t}}_{N+1} | \mathbf{0}, \bar{\mathbf{C}}_{N+1})$$ Where we have defined: $$\bar{\mathbf{C}}_{N+1} = \left( \begin{array}{cc} c & \mathbf{k}^T \\ \mathbf{k} & \mathbf{C}_N \end{array} \right)$$ Where **k** and *c* have been given in the main text below Eq $\mathbf{C}_{N+1} = \begin{pmatrix} \mathbf{C}_N & \mathbf{k} \\ \mathbf{k}^{\mathrm{T}} & c \end{pmatrix}$. The conditional distribution $p(t_{N+1}|\mathbf{t}_N)$ should also be a Gaussian. By analogy to Eq (2.94)-(2.98), we can simply treat $t_{N+1}$ as $\mathbf{x}_a$ , $\mathbf{t}_N$ as $\mathbf{x}_b$ , c as $\mathbf{\Sigma}_{aa}$ , **k** as $\mathbf{\Sigma}_{ba}$ , $\mathbf{k}^T$ as $\mathbf{\Sigma}_{ab}$ and $\mathbf{C}_N$ as $\mathbf{\Sigma}_{bb}$ . Substituting them into Eq $\Lambda_{aa} = (\Sigma_{aa} - \Sigma_{ab} \Sigma_{bb}^{-1} \Sigma_{ba})^{-1}$ and Eq $\Lambda_{ab} = -(\Sigma_{aa} - \Sigma_{ab}\Sigma_{bb}^{-1}\Sigma_{ba})^{-1}\Sigma_{ab}\Sigma_{bb}^{-1}.$ yields: $$\boldsymbol{\Lambda}_{aa} = (c - \mathbf{k}^T \mathbf{C}_N^{-1} \mathbf{k})^{-1}$$ And: $$\mathbf{\Lambda}_{ab} = -(c - \mathbf{k}^T \mathbf{C}_N^{-1} \mathbf{k})^{-1} \mathbf{k}^T \mathbf{C}_N^{-1}$$ Then we substitute them into Eq $p(\mathbf{x}_a|\mathbf{x}_b) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}_{a|b}, \boldsymbol{\Lambda}_{aa}^{-1})$ and $\boldsymbol{\mu}_{a|b} = \boldsymbol{\mu}_a - \boldsymbol{\Lambda}_{aa}^{-1} \boldsymbol{\Lambda}_{ab} (\mathbf{x}_b - \boldsymbol{\mu}_b).$, yields: $$p(t_{N+1}|\mathbf{t}_N) = \mathcal{N}(\boldsymbol{\mu}_{a|b}, \boldsymbol{\Lambda}_{aa}^{-1})$$ For its mean $\mu_{a|b}$ , we have: $$\mu_{a|b} = 0 - \left(c - \mathbf{k}^T \mathbf{C}_N^{-1} \mathbf{k}\right) \cdot \left[-(c - \mathbf{k}^T \mathbf{C}_N^{-1} \mathbf{k})^{-1} \mathbf{k}^T \mathbf{C}_N^{-1}\right] \cdot (\mathbf{t}_N - \mathbf{0})$$ $$= \mathbf{k}^T \mathbf{C}_N^{-1} \mathbf{t}_N = m(\mathbf{x}_{N+1})$$ Similarly, for its variance $\Lambda_{aa}^{-1}$ (Note that here since $t_{N+1}$ is a scalar, the mean and the covariance matrix actually degenerate to one dimension case), we have: $$\boldsymbol{\Lambda}_{aa}^{-1} = c - \mathbf{k}^T \mathbf{C}_N^{-1} \mathbf{k} = \sigma^2(\mathbf{x}_{N+1})$$
2,726
6
6.21
medium
- 6.21 (\*\*) Consider a Gaussian process regression model in which the kernel function is defined in terms of a fixed set of nonlinear basis functions. Show that the predictive distribution is identical to the result $p(t|\mathbf{x}, \mathbf{t}, \alpha, \beta) = \mathcal{N}(t|\mathbf{m}_N^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}), \sigma_N^2(\mathbf{x}))$ obtained in Section 3.3.2 for the Bayesian linear regression model. To do this, note that both models have Gaussian predictive distributions, and so it is only necessary to show that the conditional mean and variance are the same. For the mean, make use of the matrix identity (C.6), and for the variance, make use of the matrix identity (C.7).
We follow the hint, beginning by verifying the mean. We write Eq $C(\mathbf{x}_n, \mathbf{x}_m) = k(\mathbf{x}_n, \mathbf{x}_m) + \beta^{-1} \delta_{nm}.$ in matrix form: $$\mathbf{C}_N = \frac{1}{\alpha} \mathbf{\Phi} \mathbf{\Phi}^T + \beta^{-1} \mathbf{I}_N$$ Where we have used Eq $K_{nm} = k(\mathbf{x}_n, \mathbf{x}_m) = \frac{1}{\alpha} \phi(\mathbf{x}_n)^{\mathrm{T}} \phi(\mathbf{x}_m)$. Here $\Phi$ is the design matrix defined below Eq $\mathbf{V} = \mathbf{\Phi}\mathbf{w}$ and $\mathbf{I}_N$ is an identity matrix. Before we can use the predictive mean and variance equations, we need to obtain $\mathbf{k}$ : $$\mathbf{k} = [k(\mathbf{x}_1, \mathbf{x}_{N+1}), k(\mathbf{x}_2, \mathbf{x}_{N+1}), ..., k(\mathbf{x}_N, \mathbf{x}_{N+1})]^T$$ $$= \frac{1}{\alpha} [\boldsymbol{\phi}(\mathbf{x}_1)^T \boldsymbol{\phi}(\mathbf{x}_{N+1}), \boldsymbol{\phi}(\mathbf{x}_2)^T \boldsymbol{\phi}(\mathbf{x}_{N+1}), ..., \boldsymbol{\phi}(\mathbf{x}_N)^T \boldsymbol{\phi}(\mathbf{x}_{N+1})]^T$$ $$= \frac{1}{\alpha} \boldsymbol{\Phi} \boldsymbol{\phi}(\mathbf{x}_{N+1})$$ Now we substitute all the expressions into the predictive mean $m(\mathbf{x}_{N+1}) = \mathbf{k}^{\mathrm{T}} \mathbf{C}_N^{-1} \mathbf{t}$, yielding: $$m(\mathbf{x}_{N+1}) = \alpha^{-1} \boldsymbol{\phi}(\mathbf{x}_{N+1})^T \mathbf{\Phi}^T \left[ \alpha^{-1} \mathbf{\Phi} \mathbf{\Phi}^T + \beta^{-1} \mathbf{I}_N \right]^{-1} \mathbf{t}$$ Next using matrix identity (C.6), we obtain: $$\mathbf{\Phi}^T \left[ \alpha^{-1} \mathbf{\Phi} \mathbf{\Phi}^T + \beta^{-1} \mathbf{I}_N \right]^{-1} = \alpha \beta \left[ \beta \mathbf{\Phi}^T \mathbf{\Phi} + \alpha \mathbf{I}_M \right]^{-1} \mathbf{\Phi}^T = \alpha \beta \mathbf{S}_N \mathbf{\Phi}^T$$ Where we have used Eq $\mathbf{S}_{N}^{-1} = \alpha \mathbf{I} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi}.$. Substituting it into $m(\mathbf{x}_{N+1})$ , we obtain: $$m(\mathbf{x}_{N+1}) = \beta \phi(\mathbf{x}_{N+1})^T \mathbf{S}_N \mathbf{\Phi}^T \mathbf{t} = \langle \phi(\mathbf{x}_{N+1}), \beta \mathbf{S}_N \mathbf{\Phi}^T \mathbf{t} \rangle$$ Where $\langle \cdot, \cdot \rangle$ represents the inner product. Comparing the result above with Eq $p(t|\mathbf{x}, \mathbf{t}, \alpha, \beta) = \mathcal{N}(t|\mathbf{m}_N^{\mathrm{T}} \boldsymbol{\phi}(\mathbf{x}), \sigma_N^2(\mathbf{x}))$, $\mathbf{S}_{N}^{-1} = \alpha \mathbf{I} + \beta \mathbf{\Phi}^{\mathrm{T}} \mathbf{\Phi}.$ and $\mathbf{m}_{N} = \beta \mathbf{S}_{N} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}$, we conclude that the means are equal. It is similar for the variance. We substitute c, $\mathbf{k}$ and $\mathbf{C}_N$ into Eq $\sigma^{2}(\mathbf{x}_{N+1}) = c - \mathbf{k}^{\mathrm{T}} \mathbf{C}_{N}^{-1} \mathbf{k}.$. Then we simplify the expression using matrix identity (C.7). Finally, we will observe that it is equal to Eq (3.59).
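A numerical sketch (random basis functions and data, all made up) of the equivalence shown above: the GP predictive mean and variance with kernel $k(\mathbf{x},\mathbf{x}') = \alpha^{-1}\boldsymbol{\phi}(\mathbf{x})^T\boldsymbol{\phi}(\mathbf{x}')$ match the Bayesian linear-regression results.

```python
# Compare Bayesian linear regression (3.53)-(3.59) with the equivalent GP predictor.
import numpy as np

rng = np.random.default_rng(10)
N, M, alpha, beta = 12, 4, 0.7, 3.0
Phi = rng.normal(size=(N, M))          # design matrix, rows phi(x_n)^T
t = rng.normal(size=N)
phi_star = rng.normal(size=M)          # phi(x) at a test point

# Bayesian linear regression
S_N = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)
m_N = beta * S_N @ Phi.T @ t
mean_blr = m_N @ phi_star
var_blr = 1 / beta + phi_star @ S_N @ phi_star

# Gaussian process with k(x, x') = (1/alpha) phi(x)^T phi(x')
C_N = Phi @ Phi.T / alpha + np.eye(N) / beta
k = Phi @ phi_star / alpha
c = phi_star @ phi_star / alpha + 1 / beta
mean_gp = k @ np.linalg.solve(C_N, t)
var_gp = c - k @ np.linalg.solve(C_N, k)

print(np.allclose([mean_blr, var_blr], [mean_gp, var_gp]))   # expected: True
```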
3,001
6
6.22
medium
Consider a regression problem with $N$ training set input vectors $\mathbf{x}_1, \dots, \mathbf{x}_N$ and $L$ test set input vectors $\mathbf{x}_{N+1}, \dots, \mathbf{x}_{N+L}$, and suppose we define a Gaussian process prior over functions $t(\mathbf{x})$. Derive an expression for the joint predictive distribution for $t(\mathbf{x}_{N+1}), \dots, t(\mathbf{x}_{N+L})$, given the values of $t(\mathbf{x}_1), \dots, t(\mathbf{x}_N)$. Show that the marginal of this distribution for one of the test observations $t_j$, where $N+1 \leq j \leq N+L$, is given by the usual Gaussian process regression result, with mean $m(\mathbf{x}_j) = \mathbf{k}^{\mathrm{T}} \mathbf{C}_{N}^{-1} \mathbf{t}$ and variance $\sigma^{2}(\mathbf{x}_j) = c - \mathbf{k}^{\mathrm{T}} \mathbf{C}_{N}^{-1} \mathbf{k}$.
Based on Eq (6.64) and $\mathbf{C}_{N+1} = \begin{pmatrix} \mathbf{C}_N & \mathbf{k} \\ \mathbf{k}^{\mathrm{T}} & c \end{pmatrix}$, we first write down the joint distribution for $\mathbf{t}_{N+L} = [t(\mathbf{x}_1), t(\mathbf{x}_2), \dots, t(\mathbf{x}_{N+L})]^{\mathrm{T}}$: $$p(\mathbf{t}_{N+L}) = \mathcal{N}(\mathbf{t}_{N+L}|\mathbf{0}, \mathbf{C}_{N+L})$$ Where $\mathbf{C}_{N+L}$ is similarly given by: $$\mathbf{C}_{N+L} = \left( \begin{array}{cc} \mathbf{C}_{1,N} & \mathbf{K} \\ \mathbf{K}^{\mathrm{T}} & \mathbf{C}_{N+1,N+L} \end{array} \right)$$ Here $\mathbf{C}_{1,N}$ is the $N \times N$ covariance matrix of the training points, $\mathbf{C}_{N+1,N+L}$ is the $L \times L$ covariance matrix of the test points, and $\mathbf{K}$ is the $N \times L$ cross-covariance matrix. The expression above already implicitly divides the vector $\mathbf{t}_{N+L}$ into two parts. Similar to Prob. 6.20, for later simplicity we rearrange the order of $\mathbf{t}_{N+L}$, denoted $\bar{\mathbf{t}}_{N+L} = [t_{N+1},...,t_{N+L},t_1,...,t_N]^{\mathrm{T}}$. Moreover, $\bar{\mathbf{t}}_{N+L}$ also follows a Gaussian distribution: $$p(\bar{\mathbf{t}}_{N+L}) = \mathcal{N}(\bar{\mathbf{t}}_{N+L}|\mathbf{0},\bar{\mathbf{C}}_{N+L})$$ Where we have defined: $$\bar{\mathbf{C}}_{N+L} = \left( \begin{array}{cc} \mathbf{C}_{N+1,N+L} & \mathbf{K}^{\mathrm{T}} \\ \mathbf{K} & \mathbf{C}_{1,N} \end{array} \right)$$ Now we use Eq (2.94)-(2.98) and Eq (2.79)-(2.80) to derive the conditional distribution, beginning by calculating $\boldsymbol{\Lambda}_{aa}$: $$\boldsymbol{\Lambda}_{aa} = (\mathbf{C}_{N+1,N+L} - \mathbf{K}^{\mathrm{T}} \mathbf{C}_{1,N}^{-1} \mathbf{K})^{-1}$$ and $\boldsymbol{\Lambda}_{ab}$: $$\boldsymbol{\Lambda}_{ab} = -(\mathbf{C}_{N+1,N+L} - \mathbf{K}^{\mathrm{T}} \mathbf{C}_{1,N}^{-1} \mathbf{K})^{-1} \mathbf{K}^{\mathrm{T}} \mathbf{C}_{1,N}^{-1}$$ Now we can obtain the joint predictive distribution: $$p(t_{N+1},...,t_{N+L}|\mathbf{t}_N) = \mathcal{N}(\boldsymbol{\mu}_{a|b},\boldsymbol{\Lambda}_{aa}^{-1})$$ Where we have defined: $$\boldsymbol{\mu}_{a|b} = \mathbf{0} + \mathbf{K}^{\mathrm{T}} \mathbf{C}_{1,N}^{-1} \mathbf{t}_N = \mathbf{K}^{\mathrm{T}} \mathbf{C}_{1,N}^{-1} \mathbf{t}_N$$ If we now want the marginal distribution $p(t_j|\mathbf{t}_N)$, where $N+1 \le j \le N+L$, we only need the corresponding entry of the mean (i.e., the $(j-N)$-th entry) and of the covariance matrix (i.e., the $(j-N)$-th diagonal entry) of $p(t_{N+1},...,t_{N+L}|\mathbf{t}_N)$. In this case it reduces to the mean $m(\mathbf{x}_j) = \mathbf{k}^{\mathrm{T}} \mathbf{C}_{N}^{-1} \mathbf{t}$ and variance $\sigma^{2}(\mathbf{x}_j) = c - \mathbf{k}^{\mathrm{T}} \mathbf{C}_{N}^{-1} \mathbf{k}$, just as required.
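To illustrate the result, a small numerical sketch (with a hypothetical squared-exponential kernel and random data, purely for illustration) builds the joint predictive distribution and confirms that its marginal for one test point matches the single-test-point mean and variance quoted above:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 10.0
X_train = rng.uniform(-2, 2, size=6)
t_train = np.sin(X_train) + rng.normal(0, beta**-0.5, size=6)
X_test = np.array([-0.5, 0.2, 1.3])            # L = 3 test inputs

def kern(a, b):
    # squared-exponential kernel, an arbitrary choice for the illustration
    return np.exp(-0.5 * (a - b) ** 2)

def gram(A, B):
    return kern(A[:, None], B[None, :])

C_1N = gram(X_train, X_train) + np.eye(len(X_train)) / beta   # N x N training block
K = gram(X_train, X_test)                                     # N x L cross-covariance
C_test = gram(X_test, X_test) + np.eye(len(X_test)) / beta    # L x L test block

# joint predictive p(t_{N+1},...,t_{N+L} | t_N): conditional Gaussian formulas
mu_joint = K.T @ np.linalg.solve(C_1N, t_train)
cov_joint = C_test - K.T @ np.linalg.solve(C_1N, K)

# marginal for one test point versus the single-test-point GP regression result
j = 1
k_vec = gram(X_train, X_test[j:j + 1]).ravel()
c = kern(X_test[j], X_test[j]) + 1.0 / beta
mean_single = k_vec @ np.linalg.solve(C_1N, t_train)
var_single = c - k_vec @ np.linalg.solve(C_1N, k_vec)

print(np.isclose(mu_joint[j], mean_single), np.isclose(cov_joint[j, j], var_single))
```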
2,471
6
6.24
easy
Show that a diagonal matrix W whose elements satisfy $0 < W_{ii} < 1$ is positive definite. Show that the sum of two positive definite matrices is itself positive definite.
By definition, we only need to prove that $\mathbf{x}^{\mathrm{T}} \mathbf{W} \mathbf{x} > 0$ for an arbitrary vector $\mathbf{x} \neq \mathbf{0}$. Suppose that $\mathbf{W}$ is an $M \times M$ matrix. Expanding the quadratic form gives: $$\mathbf{x}^{\mathrm{T}} \mathbf{W} \mathbf{x} = \sum_{i=1}^M \sum_{j=1}^M W_{ij} \, x_i \, x_j = \sum_{i=1}^M W_{ii} \, x_i^2$$ where we have used the fact that $\mathbf{W}$ is a diagonal matrix. Since $W_{ii} > 0$ and at least one $x_i$ is nonzero, we obtain $\mathbf{x}^{\mathrm{T}} \mathbf{W} \mathbf{x} > 0$, just as required. Next, suppose we have two positive definite matrices, denoted $\mathbf{A}_1$ and $\mathbf{A}_2$, i.e., for any vector $\mathbf{x} \neq \mathbf{0}$ we have $\mathbf{x}^{\mathrm{T}} \mathbf{A}_1 \mathbf{x} > 0$ and $\mathbf{x}^{\mathrm{T}} \mathbf{A}_2 \mathbf{x} > 0$. Therefore: $$\mathbf{x}^{\mathrm{T}}(\mathbf{A}_1 + \mathbf{A}_2)\mathbf{x} = \mathbf{x}^{\mathrm{T}}\mathbf{A}_1\mathbf{x} + \mathbf{x}^{\mathrm{T}}\mathbf{A}_2\mathbf{x} > 0$$ Just as required.
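Both statements can be illustrated numerically; the matrices below are arbitrary examples chosen only for the check:

```python
import numpy as np

W = np.diag([0.2, 0.5, 0.9])                  # diagonal with 0 < W_ii < 1
A1 = np.array([[2.0, -1.0], [-1.0, 2.0]])     # positive definite (eigenvalues 1, 3)
A2 = np.array([[1.0, 0.5], [0.5, 1.0]])       # positive definite (eigenvalues 0.5, 1.5)

print(np.all(np.linalg.eigvalsh(W) > 0))        # True: W is positive definite
print(np.all(np.linalg.eigvalsh(A1 + A2) > 0))  # True: the sum is positive definite
```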
943
6
6.25
easy
Using the Newton-Raphson formula (4.92), derive the iterative update formula $\mathbf{a}_N^{\text{new}} = \mathbf{C}_N (\mathbf{I} + \mathbf{W}_N \mathbf{C}_N)^{-1} \left\{ \mathbf{t}_N - \boldsymbol{\sigma}_N + \mathbf{W}_N \mathbf{a}_N \right\}$ for finding the mode $\mathbf{a}_N^{\star}$ of the posterior distribution in the Gaussian process classification model.
Based on the Newton-Raphson formula, Eq (6.81) and Eq (6.82), we have: $$\begin{aligned} \mathbf{a}_{N}^{\text{new}} &= \mathbf{a}_{N} - (-\mathbf{W}_{N} - \mathbf{C}_{N}^{-1})^{-1} (\mathbf{t}_{N} - \boldsymbol{\sigma}_{N} - \mathbf{C}_{N}^{-1} \mathbf{a}_{N}) \\ &= \mathbf{a}_{N} + (\mathbf{W}_{N} + \mathbf{C}_{N}^{-1})^{-1} (\mathbf{t}_{N} - \boldsymbol{\sigma}_{N} - \mathbf{C}_{N}^{-1} \mathbf{a}_{N}) \\ &= (\mathbf{W}_{N} + \mathbf{C}_{N}^{-1})^{-1} \left[(\mathbf{W}_{N} + \mathbf{C}_{N}^{-1}) \mathbf{a}_{N} + \mathbf{t}_{N} - \boldsymbol{\sigma}_{N} - \mathbf{C}_{N}^{-1} \mathbf{a}_{N}\right] \\ &= (\mathbf{W}_{N} + \mathbf{C}_{N}^{-1})^{-1} (\mathbf{t}_{N} - \boldsymbol{\sigma}_{N} + \mathbf{W}_{N} \mathbf{a}_{N}) \\ &= \mathbf{C}_{N} \left[(\mathbf{W}_{N} + \mathbf{C}_{N}^{-1}) \mathbf{C}_{N}\right]^{-1} (\mathbf{t}_{N} - \boldsymbol{\sigma}_{N} + \mathbf{W}_{N} \mathbf{a}_{N}) \\ &= \mathbf{C}_{N} (\mathbf{I} + \mathbf{W}_{N} \mathbf{C}_{N})^{-1} (\mathbf{t}_{N} - \boldsymbol{\sigma}_{N} + \mathbf{W}_{N} \mathbf{a}_{N}) \end{aligned}$$ Just as required. # **Problem 6.26 Solution** Using Eq (6.77), $p(a_{N+1}|\mathbf{a}_N) = \mathcal{N}(a_{N+1}|\mathbf{k}^{\mathrm{T}}\mathbf{C}_N^{-1}\mathbf{a}_N, c - \mathbf{k}^{\mathrm{T}}\mathbf{C}_N^{-1}\mathbf{k})$ and $q(\mathbf{a}_N) = \mathcal{N}(\mathbf{a}_N | \mathbf{a}_N^{\star}, \mathbf{H}^{-1})$, we can obtain: $$p(a_{N+1}|\mathbf{t}_N) = \int p(a_{N+1}|\mathbf{a}_N)p(\mathbf{a}_N|\mathbf{t}_N)\,d\mathbf{a}_N = \int \mathcal{N}(a_{N+1}|\mathbf{k}^{\mathrm{T}}\mathbf{C}_N^{-1}\mathbf{a}_N, c - \mathbf{k}^{\mathrm{T}}\mathbf{C}_N^{-1}\mathbf{k}) \cdot \mathcal{N}(\mathbf{a}_N|\mathbf{a}_N^{\star}, \mathbf{H}^{-1})\,d\mathbf{a}_N$$ By analogy to Eq $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$, i.e., $$p(\mathbf{y}) = \int p(\mathbf{y}|\mathbf{x})p(\mathbf{x})\,d\mathbf{x}$$ we can obtain: $$p(a_{N+1}|\mathbf{t}_N) = \mathcal{N}(\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}}) \quad (*)$$ Where we have defined: $$\mathbf{A} = \mathbf{k}^{\mathrm{T}} \mathbf{C}_N^{-1}, \quad \mathbf{b} = \mathbf{0}, \quad \mathbf{L}^{-1} = c - \mathbf{k}^{\mathrm{T}} \mathbf{C}_N^{-1} \mathbf{k}$$ and $$\boldsymbol{\mu} = \mathbf{a}_N^{\star}, \quad \boldsymbol{\Lambda} = \mathbf{H}$$ Therefore, the mean is given by: $$\mathbf{A}\boldsymbol{\mu} + \mathbf{b} = \mathbf{k}^{\mathrm{T}} \mathbf{C}_N^{-1} \mathbf{a}_N^{\star} = \mathbf{k}^{\mathrm{T}} \mathbf{C}_N^{-1} \mathbf{C}_N (\mathbf{t}_N - \boldsymbol{\sigma}_N) = \mathbf{k}^{\mathrm{T}} (\mathbf{t}_N - \boldsymbol{\sigma}_N)$$ Where we have used Eq $\mathbf{a}_N^{\star} = \mathbf{C}_N(\mathbf{t}_N - \boldsymbol{\sigma}_N)$. The covariance is given by: $$\begin{split} \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}} &= c - \mathbf{k}^{\mathrm{T}} \mathbf{C}_N^{-1}\mathbf{k} + \mathbf{k}^{\mathrm{T}} \mathbf{C}_N^{-1}\mathbf{H}^{-1}(\mathbf{k}^{\mathrm{T}} \mathbf{C}_N^{-1})^{\mathrm{T}} \\ &= c - \mathbf{k}^{\mathrm{T}} (\mathbf{C}_N^{-1} - \mathbf{C}_N^{-1}\mathbf{H}^{-1}\mathbf{C}_N^{-1})\mathbf{k} \\ &= c - \mathbf{k}^{\mathrm{T}} \Big(\mathbf{C}_N^{-1} - \mathbf{C}_N^{-1}(\mathbf{W}_N + \mathbf{C}_N^{-1})^{-1}\mathbf{C}_N^{-1}\Big)\mathbf{k} \\ &= c - \mathbf{k}^{\mathrm{T}} \Big(\mathbf{C}_N^{-1} - (\mathbf{C}_N \mathbf{W}_N \mathbf{C}_N + \mathbf{C}_N)^{-1}\Big)\mathbf{k} \end{split}$$ Where we have used Eq $\mathbf{H} = -\nabla \nabla \Psi(\mathbf{a}_N) = \mathbf{W}_N + \mathbf{C}_N^{-1}$ and the fact that $\mathbf{C}_N$ is symmetric. Then we use matrix identity (C.7) to further reduce the expression, which finally gives Eq $\operatorname{var}[a_{N+1}|\mathbf{t}_N] = c - \mathbf{k}^{\mathrm{T}}(\mathbf{W}_N^{-1} + \mathbf{C}_N)^{-1}\mathbf{k}$. **Problem 6.27 Solution** (awaiting update) This problem is considerably more involved; moreover, Eq (6.91) itself appears to contain an error.
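A minimal sketch of the resulting Newton iteration for Problem 6.25, assuming (as in the text) that $\mathbf{W}_N$ is the diagonal matrix with elements $\sigma_n(1-\sigma_n)$ and that $\mathbf{C}_N$ includes a small jitter term; the kernel, toy data, and convergence tolerance are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=10)
t = (X > 0).astype(float)                     # toy binary targets t_n in {0, 1}
nu = 1e-4                                     # small jitter added to the covariance

# C_N: squared-exponential covariance plus jitter (arbitrary illustrative choice)
C_N = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2) + nu * np.eye(len(X))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

a = np.zeros(len(X))                          # initialise a_N at zero
I = np.eye(len(X))
for _ in range(100):
    sigma = sigmoid(a)
    W = np.diag(sigma * (1.0 - sigma))        # W_N, assumed diagonal with sigma_n(1 - sigma_n)
    # the update derived above: a_new = C_N (I + W_N C_N)^{-1} (t_N - sigma_N + W_N a_N)
    a_new = C_N @ np.linalg.solve(I + W @ C_N, t - sigma + W @ a)
    if np.max(np.abs(a_new - a)) < 1e-10:
        a = a_new
        break
    a = a_new

# at the mode the stationarity condition a_N = C_N (t_N - sigma_N) holds
print(np.allclose(a, C_N @ (t - sigmoid(a))))   # True
```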
3,626
6
6.3
easy
The nearest-neighbour classifier (Section 2.5.2) assigns a new input vector $\mathbf{x}$ to the same class as that of the nearest input vector $\mathbf{x}_n$ from the training set, where in the simplest case, the distance is defined by the Euclidean metric $\|\mathbf{x} - \mathbf{x}_n\|^2$. By expressing this rule in terms of scalar products and then making use of kernel substitution, formulate the nearest-neighbour classifier for a general nonlinear kernel.
We begin by expanding the Euclidean metric: $$\begin{split} ||\mathbf{x} - \mathbf{x}_n||^2 &= (\mathbf{x} - \mathbf{x}_n)^{\mathrm{T}} (\mathbf{x} - \mathbf{x}_n) \\ &= (\mathbf{x}^{\mathrm{T}} - \mathbf{x}_n^{\mathrm{T}})(\mathbf{x} - \mathbf{x}_n) \\ &= \mathbf{x}^{\mathrm{T}} \mathbf{x} - 2\mathbf{x}_n^{\mathrm{T}} \mathbf{x} + \mathbf{x}_n^{\mathrm{T}} \mathbf{x}_n \end{split}$$ Similar to (6.24)-(6.26), we apply kernel substitution, replacing each inner product with a nonlinear kernel $k(\cdot,\cdot)$. This gives a general nonlinear nearest-neighbour classifier, which assigns $\mathbf{x}$ to the class of the training point $\mathbf{x}_n$ that minimises the kernel-induced squared distance: $$k(\mathbf{x}, \mathbf{x}) + k(\mathbf{x}_n, \mathbf{x}_n) - 2k(\mathbf{x}_n, \mathbf{x})$$
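A sketch of the resulting classifier; the function name, the Gaussian kernel, and the toy data are hypothetical choices for illustration:

```python
import numpy as np

def kernel_nearest_neighbour(x, X_train, y_train, k):
    # assign x to the class of the training point minimising
    # k(x, x) + k(x_n, x_n) - 2 k(x_n, x)
    d2 = np.array([k(x, x) + k(xn, xn) - 2.0 * k(xn, x) for xn in X_train])
    return y_train[np.argmin(d2)]

# example with a Gaussian kernel (an arbitrary choice of nonlinear kernel)
gauss = lambda a, b: np.exp(-0.5 * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y_train = np.array([0, 1, 1])
print(kernel_nearest_neighbour(np.array([0.9, 1.1]), X_train, y_train, gauss))   # 1
```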
608
6
6.4
easy
In Appendix C, we give an example of a matrix that has positive elements but that has a negative eigenvalue and hence that is not positive definite. Find an example of the converse property, namely a 2 × 2 matrix with positive eigenvalues yet that has at least one negative element.
To construct such a matrix, let us suppose the two eigenvalues are 1 and 2, and that the matrix has the form: $$\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]$$ Setting $\det(\mathbf{A} - \lambda\mathbf{I}) = 0$ for $\lambda = 2$ and $\lambda = 1$ gives two equations: $$\begin{cases} (a-2)(d-2) = bc & (1) \\ (a-1)(d-1) = bc & (2) \end{cases}$$ Subtracting (1) from (2) yields: $$a + d = 3$$ Therefore, we set $a = 4$ and $d = -1$. Substituting these into (1), we see that: $$bc = -6$$ Finally, we choose $b = 3$ and $c = -2$. The constructed matrix is: $$\left[\begin{array}{cc} 4 & 3 \\ -2 & -1 \end{array}\right]$$
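A one-line numerical check of the constructed matrix:

```python
import numpy as np

A = np.array([[4.0, 3.0], [-2.0, -1.0]])
# both eigenvalues (1 and 2) are positive, yet A contains negative elements
print(np.linalg.eigvals(A))
```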
573
6
6.5
easy
Verify the results $k(\mathbf{x}, \mathbf{x}') = ck_1(\mathbf{x}, \mathbf{x}')$ and $k(\mathbf{x}, \mathbf{x}') = f(\mathbf{x})k_1(\mathbf{x}, \mathbf{x}')f(\mathbf{x}')$ for constructing valid kernels.
Since $k_1(\mathbf{x}, \mathbf{x}')$ is a valid kernel, it can be written as: $$k_1(\mathbf{x}, \mathbf{x}') = \phi(\mathbf{x})^{\mathrm{T}} \phi(\mathbf{x}')$$ Recalling that $c > 0$, we can therefore write: $$k(\mathbf{x}, \mathbf{x}') = c k_1(\mathbf{x}, \mathbf{x}') = \left[\sqrt{c}\phi(\mathbf{x})\right]^{\mathrm{T}} \left[\sqrt{c}\phi(\mathbf{x}')\right]$$ Therefore, $k(\mathbf{x}, \mathbf{x}') = ck_1(\mathbf{x}, \mathbf{x}')$ is a valid kernel. It is similar for (6.14): $$k(\mathbf{x}, \mathbf{x}') = f(\mathbf{x})k_1(\mathbf{x}, \mathbf{x}')f(\mathbf{x}') = [f(\mathbf{x})\phi(\mathbf{x})]^{\mathrm{T}} [f(\mathbf{x}')\phi(\mathbf{x}')]$$ Just as required.
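These two constructions can also be checked empirically by verifying that the corresponding Gram matrices remain positive semidefinite; the base kernel, constant $c$, and function $f$ below are arbitrary choices for the check:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 2))
K1 = X @ X.T                                  # Gram matrix of a valid (linear) kernel
c = 3.7                                       # arbitrary positive constant
f = np.exp(X[:, 0])                           # an arbitrary function f(x) evaluated on the data

K_scaled = c * K1                             # c k1(x, x')
K_wrapped = np.outer(f, f) * K1               # f(x) k1(x, x') f(x')

for K in (K_scaled, K_wrapped):
    print(np.all(np.linalg.eigvalsh(K) >= -1e-10))   # True: still positive semidefinite
```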
619
6
6.6
easy
Verify the results $k(\mathbf{x}, \mathbf{x}') = q(k_1(\mathbf{x}, \mathbf{x}'))$ and $k(\mathbf{x}, \mathbf{x}') = \exp(k_1(\mathbf{x}, \mathbf{x}'))$ for constructing valid kernels.
We suppose $q(x)$ can be written as: $$q(x) = a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0$$ where the coefficients $a_i \geq 0$, since the construction requires a polynomial with nonnegative coefficients. We now obtain: $$k(\mathbf{x}, \mathbf{x}') = a_n k_1(\mathbf{x}, \mathbf{x}')^n + a_{n-1} k_1(\mathbf{x}, \mathbf{x}')^{n-1} + \dots + a_1 k_1(\mathbf{x}, \mathbf{x}') + a_0$$ By repeatedly using $k(\mathbf{x}, \mathbf{x}') = ck_1(\mathbf{x}, \mathbf{x}')$, $k(\mathbf{x}, \mathbf{x}') = k_1(\mathbf{x}, \mathbf{x}') + k_2(\mathbf{x}, \mathbf{x}')$ and $k(\mathbf{x}, \mathbf{x}') = k_1(\mathbf{x}, \mathbf{x}')k_2(\mathbf{x}, \mathbf{x}')$, we can verify that each term, and hence the whole sum, is a valid kernel. For $k(\mathbf{x}, \mathbf{x}') = \exp(k_1(\mathbf{x}, \mathbf{x}'))$, we can use the Taylor expansion of the exponential; since all of its coefficients are nonnegative, the same argument applies (the exponential being a limit of such polynomials), which proves its validity.
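The same empirical Gram-matrix check applies here; the polynomial coefficients below are chosen nonnegative, as the argument requires, and the base kernel and data are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(scale=0.5, size=(15, 3))
K1 = X @ X.T                                  # Gram matrix of a valid (linear) kernel

K_poly = 2.0 * K1**3 + 0.5 * K1 + 1.0         # polynomial of k1 with nonnegative coefficients
K_exp = np.exp(K1)                            # element-wise exponential of the Gram matrix

for K in (K_poly, K_exp):
    print(np.all(np.linalg.eigvalsh(K) >= -1e-8))   # True: still positive semidefinite
```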
845