Dataset Viewer

| chapter (int64) | question_number (string) | difficulty (string) | question_text (string) | answer (string) | answer_length (int64) |
|---|---|---|---|---|---|
1
|
1.1
|
easy
|
Consider the sum-of-squares error function given by $E(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \{y(x_n, \mathbf{w}) - t_n\}^2$ in which the function $y(x, \mathbf{w})$ is given by the polynomial $y(x, \mathbf{w}) = w_0 + w_1 x + w_2 x^2 + \ldots + w_M x^M = \sum_{j=0}^{M} w_j x^j$. Show that the coefficients $\mathbf{w} = \{w_i\}$ that minimize this error function are given by the solution to the following set of linear equations
$$\sum_{j=0}^{M} A_{ij} w_j = T_i \tag{1.122}$$
where
$$A_{ij} = \sum_{n=1}^{N} (x_n)^{i+j}, \quad T_i = \sum_{n=1}^{N} (x_n)^i t_n. \tag{1.123}$$
Here a suffix i or j denotes the index of a component, whereas $(x)^i$ denotes x raised to the power of i.
|
We set the derivative of the *error function E* with respect to the vector $\mathbf{w}$ equal to $\mathbf{0}$ (i.e. $\frac{\partial E}{\partial \mathbf{w}} = 0$); the solution of this equation gives the $\mathbf{w} = \{w_i\}$ that minimizes the *error function E*. To solve this problem, we calculate the derivative of E with respect to every $w_i$ and set each of them to 0 instead. Based on $y(x, \mathbf{w}) = w_0 + w_1 x + w_2 x^2 + \ldots + w_M x^M = \sum_{j=0}^{M} w_j x^j$ and $E(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \{y(x_n, \mathbf{w}) - t_n\}^2$ we can obtain:
$$\frac{\partial E}{\partial w_{i}} = \sum_{n=1}^{N} \{y(x_{n}, \mathbf{w}) - t_{n}\} x_{n}^{i} = 0$$
$$= > \sum_{n=1}^{N} y(x_{n}, \mathbf{w}) x_{n}^{i} = \sum_{n=1}^{N} x_{n}^{i} t_{n}$$
$$= > \sum_{n=1}^{N} (\sum_{j=0}^{M} w_{j} x_{n}^{j}) x_{n}^{i} = \sum_{n=1}^{N} x_{n}^{i} t_{n}$$
$$= > \sum_{n=1}^{N} \sum_{j=0}^{M} w_{j} x_{n}^{(j+i)} = \sum_{n=1}^{N} x_{n}^{i} t_{n}$$
$$= > \sum_{j=0}^{M} \sum_{n=1}^{N} x_{n}^{(j+i)} w_{j} = \sum_{n=1}^{N} x_{n}^{i} t_{n}$$
If we denote $A_{ij} = \sum_{n=1}^{N} x_n^{i+j}$ and $T_i = \sum_{n=1}^{N} x_n^i t_n$ , the equation above can be written exactly as (1.122). Therefore the problem is solved.
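A quick numerical check of this result (not part of the original solution; a minimal sketch assuming NumPy is available): build $A$ and $T$ from synthetic data and verify that solving (1.122) reproduces an ordinary least-squares polynomial fit.

```python
import numpy as np

# Synthetic data: noisy samples of a cubic polynomial (hypothetical example).
rng = np.random.default_rng(0)
M, N = 3, 50
x = rng.uniform(-1.0, 1.0, size=N)
true_w = np.array([0.5, -1.0, 2.0, 0.3])
t = sum(true_w[j] * x**j for j in range(M + 1)) + 0.01 * rng.normal(size=N)

# Build A_ij = sum_n x_n^(i+j) and T_i = sum_n x_n^i t_n as in (1.123).
idx = np.arange(M + 1)
A = np.array([[np.sum(x**(i + j)) for j in idx] for i in idx])
T = np.array([np.sum(x**i * t) for i in idx])

w = np.linalg.solve(A, T)                             # solve (1.122)
w_lstsq = np.polynomial.polynomial.polyfit(x, t, M)   # reference least-squares fit
print(np.allclose(w, w_lstsq))                        # True: same minimizer
```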
| 1,240
|
1
|
1.10
|
easy
|
Suppose that the two variables x and z are statistically independent. Show that the mean and variance of their sum satisfies
$$\mathbb{E}[x+z] = \mathbb{E}[x] + \mathbb{E}[z] \tag{1.128}$$
$$var[x+z] = var[x] + var[z]. \tag{1.129}$$
|
We will solve this problem based on the definitions of expectation, variance and independence.
$$\mathbb{E}[x+z] = \int \int (x+z)p(x,z)dxdz$$
$$= \int \int (x+z)p(x)p(z)dxdz$$
$$= \int \int xp(x)p(z)dxdz + \int \int zp(x)p(z)dxdz$$
$$= \int (\int p(z)dz)xp(x)dx + \int (\int p(x)dx)zp(z)dz$$
$$= \int xp(x)dx + \int zp(z)dz$$
$$= \mathbb{E}[x] + \mathbb{E}[z]$$
$$var[x+z] = \int \int (x+z-\mathbb{E}[x+z])^2 p(x,z) dx dz$$
$$= \int \int \{(x+z)^2 - 2(x+z)\mathbb{E}[x+z] + \mathbb{E}^2[x+z]\} p(x,z) dx dz$$
$$= \int \int (x+z)^2 p(x,z) dx dz - 2\mathbb{E}[x+z] \int \int (x+z) p(x,z) dx dz + \mathbb{E}^2[x+z]$$
$$= \int \int (x+z)^2 p(x,z) dx dz - \mathbb{E}^2[x+z]$$
$$= \int \int (x^2 + 2xz + z^2) p(x) p(z) dx dz - \mathbb{E}^2[x+z]$$
$$= \int (\int p(z) dz) x^2 p(x) dx + \int \int 2xz p(x) p(z) dx dz + \int (\int p(x) dx) z^2 p(z) dz - \mathbb{E}^2[x+z]$$
$$= \mathbb{E}[x^2] + \mathbb{E}[z^2] - \mathbb{E}^2[x+z] + \int \int 2xz p(x) p(z) dx dz$$
$$= \mathbb{E}[x^2] + \mathbb{E}[z^2] - (\mathbb{E}[x] + \mathbb{E}[z])^2 + \int \int 2xz p(x) p(z) dx dz$$
$$= \mathbb{E}[x^2] - \mathbb{E}^2[x] + \mathbb{E}[z^2] - \mathbb{E}^2[z] - 2\mathbb{E}[x] \mathbb{E}[z] + 2 \int \int xz p(x) p(z) dx dz$$
$$= var[x] + var[z] - 2\mathbb{E}[x] \mathbb{E}[z] + 2(\int xp(x) dx) (\int zp(z) dz)$$
$$= var[x] + var[z]$$
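A Monte Carlo check of these two identities (not part of the original solution; assumes NumPy is available):

```python
import numpy as np

# For independent x and z, the mean and variance of x + z should be additive.
rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.5, size=1_000_000)     # arbitrary example distributions
z = rng.exponential(3.0, size=1_000_000)
s = x + z
print(s.mean(), x.mean() + z.mean())         # both ~5.0
print(s.var(), x.var() + z.var())            # both ~11.25
```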
# **Problem 1.11 Solution**
Based on the prior knowledge that $\mu_{ML}$ and $\sigma_{ML}^2$ will decouple, we will first calculate $\mu_{ML}$ :
$$\frac{d(\ln p(\mathbf{x} \mid \mu, \sigma^2))}{d\mu} = \frac{1}{\sigma^2} \sum_{n=1}^{N} (x_n - \mu)$$
We let:
$$\frac{d(\ln p(\mathbf{x} \, \big| \, \mu, \sigma^2))}{d\mu} = 0$$
Therefore:
$$\mu_{ML} = \frac{1}{N} \sum_{n=1}^{N} x_n$$
And because:
$$\frac{d(\ln p(\mathbf{x}\,\big|\,\mu,\sigma^2))}{d\sigma^2} = \frac{1}{2\sigma^4}(\sum_{n=1}^N(x_n-\mu)^2 - N\sigma^2)$$
We let:
$$\frac{d(\ln p(\mathbf{x} \mid \mu, \sigma^2))}{d\sigma^2} = 0$$
Therefore:
$$\sigma_{ML}^2 = \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu_{ML})^2$$
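A minimal sketch of these two estimators (assuming NumPy; not part of the original solution):

```python
import numpy as np

# mu_ML is the sample mean; sigma2_ML is the biased sample variance (1/N convention).
rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=2.0, size=10_000)
mu_ml = x.sum() / x.size
sigma2_ml = ((x - mu_ml) ** 2).sum() / x.size
print(mu_ml, sigma2_ml)                 # close to mu = 1.0 and sigma^2 = 4.0
print(np.isclose(sigma2_ml, x.var()))   # np.var also uses the 1/N convention -> True
```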
| 2,015
|
1
|
1.12
|
medium
|
Using the results $\mathbb{E}[x] = \int_{-\infty}^{\infty} \mathcal{N}(x|\mu, \sigma^2) x \, \mathrm{d}x = \mu.$ and $\mathbb{E}[x^2] = \int_{-\infty}^{\infty} \mathcal{N}\left(x|\mu, \sigma^2\right) x^2 \, \mathrm{d}x = \mu^2 + \sigma^2.$, show that
$$\mathbb{E}[x_n x_m] = \mu^2 + I_{nm} \sigma^2 \tag{1.130}$$
where $x_n$ and $x_m$ denote data points sampled from a Gaussian distribution with mean $\mu$ and variance $\sigma^2$ , and $I_{nm}$ satisfies $I_{nm}=1$ if n=m and $I_{nm}=0$ otherwise. Hence prove the results (1.57) and $\mathbb{E}[\sigma_{\mathrm{ML}}^2] = \left(\frac{N-1}{N}\right)\sigma^2$.
|
It is quite straightforward to evaluate $\mathbb{E}[\mu_{ML}]$, using the prior knowledge that the $x_n$ are i.i.d. and obey the Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$.
$$\mathbb{E}[\mu_{ML}] = \mathbb{E}[\frac{1}{N}\sum_{n=1}^N x_n] = \frac{1}{N}\mathbb{E}[\sum_{n=1}^N x_n] = \mathbb{E}[x_n] = \mu$$
For $\mathbb{E}[\sigma_{ML}^2]$ , we need to take advantage of $\sigma_{\rm ML}^2 = \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu_{\rm ML})^2$ and what has been given in the problem :
$$\mathbb{E}[\sigma_{ML}^{2}] = \mathbb{E}\left[\frac{1}{N}\sum_{n=1}^{N}(x_{n} - \mu_{ML})^{2}\right]$$
$$= \frac{1}{N}\mathbb{E}\left[\sum_{n=1}^{N}(x_{n} - \mu_{ML})^{2}\right]$$
$$= \frac{1}{N}\mathbb{E}\left[\sum_{n=1}^{N}(x_{n}^{2} - 2x_{n}\mu_{ML} + \mu_{ML}^{2})\right]$$
$$= \frac{1}{N}\mathbb{E}\left[\sum_{n=1}^{N}x_{n}^{2}\right] - \frac{1}{N}\mathbb{E}\left[\sum_{n=1}^{N}2x_{n}\mu_{ML}\right] + \frac{1}{N}\mathbb{E}\left[\sum_{n=1}^{N}\mu_{ML}^{2}\right]$$
$$= \mu^{2} + \sigma^{2} - \frac{2}{N}\mathbb{E}\left[\sum_{n=1}^{N}x_{n}\left(\frac{1}{N}\sum_{n=1}^{N}x_{n}\right)\right] + \mathbb{E}\left[\mu_{ML}^{2}\right]$$
$$= \mu^{2} + \sigma^{2} - \frac{2}{N^{2}}\mathbb{E}\left[\sum_{n=1}^{N}x_{n}\left(\sum_{n=1}^{N}x_{n}\right)\right] + \mathbb{E}\left[\left(\frac{1}{N}\sum_{n=1}^{N}x_{n}\right)^{2}\right]$$
$$= \mu^{2} + \sigma^{2} - \frac{1}{N^{2}}\mathbb{E}\left[\left(\sum_{n=1}^{N}x_{n}\right)^{2}\right]$$
$$= \mu^{2} + \sigma^{2} - \frac{1}{N^{2}}[N(N\mu^{2} + \sigma^{2})]$$
Therefore we have:
$$\mathbb{E}[\sigma_{ML}^2] = (\frac{N-1}{N})\sigma^2$$
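A Monte Carlo check of this bias result (assuming NumPy; not part of the original solution):

```python
import numpy as np

# The ML (1/N) variance estimator has expectation (N-1)/N * sigma^2.
rng = np.random.default_rng(3)
N, sigma2, trials = 5, 4.0, 200_000
samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
sigma2_ml = samples.var(axis=1, ddof=0)        # mu replaced by the sample mean mu_ML
print(sigma2_ml.mean(), (N - 1) / N * sigma2)  # both ~3.2
```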
| 1,585
|
1
|
1.13
|
easy
|
Suppose that the variance of a Gaussian is estimated using the result $\sigma_{\rm ML}^2 = \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu_{\rm ML})^2$ but with the maximum likelihood estimate $\mu_{\rm ML}$ replaced with the true value $\mu$ of the mean. Show that this estimator has the property that its expectation is given by the true variance $\sigma^2$ .
|
This problem can be solved with the same method used in Prob.1.12:
$$\begin{split} \mathbb{E}[\sigma_{ML}^2] &= \mathbb{E}[\frac{1}{N} \sum_{n=1}^{N} (x_n - \mu)^2] \quad \text{(Because here we use } \mu \text{ to replace } \mu_{ML}) \\ &= \frac{1}{N} \mathbb{E}[\sum_{n=1}^{N} (x_n - \mu)^2] \\ &= \frac{1}{N} \mathbb{E}[\sum_{n=1}^{N} (x_n^2 - 2x_n \mu + \mu^2)] \\ &= \frac{1}{N} \mathbb{E}[\sum_{n=1}^{N} x_n^2] - \frac{1}{N} \mathbb{E}[\sum_{n=1}^{N} 2x_n \mu] + \frac{1}{N} \mathbb{E}[\sum_{n=1}^{N} \mu^2] \\ &= \mu^2 + \sigma^2 - \frac{2\mu}{N} \mathbb{E}[\sum_{n=1}^{N} x_n] + \mu^2 \\ &= \mu^2 + \sigma^2 - 2\mu^2 + \mu^2 \\ &= \sigma^2 \end{split}$$
Note: The biggest difference between Prob.1.12 and Prob.1.13 is whether the mean of the Gaussian distribution is known beforehand (in Prob.1.13) or not (in Prob.1.12). In other words, the difference can be shown by the following equations:
$$\begin{split} \mathbb{E}[\mu^2] &= \mu^2 \quad (\mu \text{ is determined, i.e. its expectation is itself; the same holds for } \mu^2) \\ \mathbb{E}[\mu^2_{ML}] &= \mathbb{E}[(\frac{1}{N}\sum_{n=1}^N x_n)^2] = \frac{1}{N^2}\mathbb{E}[(\sum_{n=1}^N x_n)^2] = \frac{1}{N^2}N(N\mu^2 + \sigma^2) = \mu^2 + \frac{\sigma^2}{N} \end{split}$$
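A quick Monte Carlo check (assuming NumPy; not part of the original solution): with the true mean in place of $\mu_{ML}$, the $1/N$ estimator is unbiased.

```python
import numpy as np

# Variance estimator that uses the true mean mu instead of the sample mean.
rng = np.random.default_rng(4)
N, mu, sigma2, trials = 5, 1.0, 4.0, 200_000
samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, N))
est = ((samples - mu) ** 2).mean(axis=1)
print(est.mean())    # ~4.0 = sigma^2 (unbiased)
```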
| 1,234
|
1
|
1.14
|
medium
|
Show that an arbitrary square matrix with elements $w_{ij}$ can be written in the form $w_{ij} = w_{ij}^{\rm S} + w_{ij}^{\rm A}$ where $w_{ij}^{\rm S}$ and $w_{ij}^{\rm A}$ are symmetric and anti-symmetric matrices, respectively, satisfying $w_{ij}^{\rm S} = w_{ji}^{\rm S}$ and $w_{ij}^{\rm A} = -w_{ji}^{\rm A}$ for all i and j. Now consider the second order term in a higher order polynomial in D dimensions, given by
$$\sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij} x_i x_j. \tag{1.131}$$
Show that
$$\sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij} x_i x_j = \sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij}^{S} x_i x_j$$
$\sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij} x_i x_j = \sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij}^{S} x_i x_j$
so that the contribution from the anti-symmetric matrix vanishes. We therefore see that, without loss of generality, the matrix of coefficients $w_{ij}$ can be chosen to be symmetric, and so not all of the $D^2$ elements of this matrix can be chosen independently. Show that the number of independent parameters in the matrix $w_{ij}^{\rm S}$ is given by D(D+1)/2.
|
This problem is quite similar to the fact that any function f(x) can be written as the sum of an odd function and an even function. If we let:
$$w_{ij}^S = \frac{w_{ij} + w_{ji}}{2}, \qquad w_{ij}^A = \frac{w_{ij} - w_{ji}}{2}$$
It is obvious that they satisfy the constraints described in the problem, which are:
$$w_{ij} = w_{ij}^S + w_{ij}^A, \qquad w_{ij}^S = w_{ji}^S, \qquad w_{ij}^A = -w_{ji}^A$$
To prove $\sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij} x_i x_j = \sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij}^{S} x_i x_j$, we only need to simplify it:
$$\sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij} x_i x_j = \sum_{i=1}^{D} \sum_{j=1}^{D} (w_{ij}^S + w_{ij}^A) x_i x_j$$
$$= \sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij}^S x_i x_j + \sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij}^A x_i x_j$$
Therefore, we only need to prove that the second term equals 0, and here we use a simple trick: we will prove instead that twice the second term equals 0.
$$2\sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij}^{A} x_{i} x_{j} = \sum_{i=1}^{D} \sum_{j=1}^{D} (w_{ij}^{A} + w_{ij}^{A}) x_{i} x_{j}$$
$$= \sum_{i=1}^{D} \sum_{j=1}^{D} (w_{ij}^{A} - w_{ji}^{A}) x_{i} x_{j}$$
$$= \sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij}^{A} x_{i} x_{j} - \sum_{i=1}^{D} \sum_{j=1}^{D} w_{ji}^{A} x_{i} x_{j}$$
$$= \sum_{i=1}^{D} \sum_{j=1}^{D} w_{ij}^{A} x_{i} x_{j} - \sum_{j=1}^{D} \sum_{i=1}^{D} w_{ji}^{A} x_{j} x_{i}$$
$$= 0$$
Therefore, we choose the coefficient matrix to be symmetric as described in the problem. Considering the symmetry, we can see that the whole matrix is determined if and only if $w_{ij}$ is given for $i = 1, 2, ..., D$ and $i \leq j$ . Hence, the number of independent parameters is given by:
$$D + (D - 1) + \dots + 1 = \frac{D(D+1)}{2}$$
Note: You can view this intuitively by observing that once the upper triangular part of a symmetric matrix is given, the whole matrix is determined.
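A small numerical check (assuming NumPy; not part of the original solution): the anti-symmetric part contributes nothing to the quadratic form, and a symmetric $D \times D$ matrix has $D(D+1)/2$ independent entries.

```python
import numpy as np

rng = np.random.default_rng(5)
D = 6
W = rng.normal(size=(D, D))
W_sym = (W + W.T) / 2        # w^S
W_anti = (W - W.T) / 2       # w^A
x = rng.normal(size=D)
print(np.isclose(x @ W @ x, x @ W_sym @ x))            # True
print(np.isclose(x @ W_anti @ x, 0.0))                 # True
print(np.triu_indices(D)[0].size, D * (D + 1) // 2)    # 21 21
```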
| 1,868
|
1
|
1.15
|
hard
|
In this exercise and the next, we explore how the number of independent parameters in a polynomial grows with the order M of the polynomial and with the dimensionality D of the input space. We start by writing down the $M^{\rm th}$ order term for a polynomial in D dimensions in the form
$$\sum_{i_1=1}^{D} \sum_{i_2=1}^{D} \cdots \sum_{i_M=1}^{D} w_{i_1 i_2 \cdots i_M} x_{i_1} x_{i_2} \cdots x_{i_M}.$$
The coefficients $w_{i_1i_2\cdots i_M}$ comprise $D^M$ elements, but the number of independent parameters is significantly fewer due to the many interchange symmetries of the factor $x_{i_1}x_{i_2}\cdots x_{i_M}$ . Begin by showing that the redundancy in the coefficients can be removed by rewriting this $M^{\text{th}}$ order term in the form
$$\sum_{i_1=1}^{D} \sum_{i_2=1}^{i_1} \cdots \sum_{i_M=1}^{i_{M-1}} \widetilde{w}_{i_1 i_2 \cdots i_M} x_{i_1} x_{i_2} \cdots x_{i_M}.$$
Note that the precise relationship between the $\widetilde{w}$ coefficients and w coefficients need not be made explicit. Use this result to show that the number of *independent* parameters n(D,M), which appear at order M, satisfies the following recursion relation
$$n(D,M) = \sum_{i=1}^{D} n(i, M-1).$$
Next use proof by induction to show that the following result holds
$$\sum_{i=1}^{D} \frac{(i+M-2)!}{(i-1)!(M-1)!} = \frac{(D+M-1)!}{(D-1)!M!}$$
which can be done by first proving the result for D=1 and arbitrary M by making use of the result 0!=1, then assuming it is correct for dimension D and verifying that it is correct for dimension D+1. Finally, use the two previous results, together with proof by induction, to show
$$n(D,M) = \frac{(D+M-1)!}{(D-1)!M!}.$$
To do this, first show that the result is true for M=2, and any value of $D\geqslant 1$ , by comparison with the result of Exercise 1.14. Then make use of $n(D,M) = \sum_{i=1}^{D} n(i, M-1).$, together with $\sum_{i=1}^{D} \frac{(i+M-2)!}{(i-1)!(M-1)!} = \frac{(D+M-1)!}{(D-1)!M!}$, to show that, if the result holds at order M-1, then it will also hold at order M
|
This problem is a more general form of Prob.1.14, so the method can also be used here: we will find a way to use $w_{i_1i_2...i_M}$ to represent $\widetilde{w}_{i_1i_2...i_M}$ .
We begin by introducing a mapping function:
$$F(x_{i_1}x_{i_2}...x_{i_M}) = x_{j_1}x_{j_2}...x_{j_M}$$
$$s.t. \;\; \{i_1, i_2, ..., i_M\} = \{j_1, j_2, ..., j_M\} \text{ as multisets, and } j_1 \ge j_2 \ge ... \ge j_M$$
It is cumbersome to write F in a fully explicit mathematical form. Actually this function does a simple job: it rearranges the factors in decreasing order of their subindices. Several examples are given below, when D = 5, M = 4:
$$F(x_5x_2x_3x_2) = x_5x_3x_2x_2$$
$$F(x_1x_3x_3x_2) = x_3x_3x_2x_1$$
$$F(x_1x_4x_2x_3) = x_4x_3x_2x_1$$
$$F(x_1x_1x_5x_2) = x_5x_2x_1x_1$$
After introducing F, the solution will be very simple, based on the fact that F will not change the value of the term, but only rearrange it.
$$\sum_{i_1=1}^D \sum_{i_2=1}^D \dots \sum_{i_M=1}^D w_{i_1 i_2 \dots i_M} x_{i1} x_{i2} \dots x_{iM} = \sum_{j_1=1}^D \sum_{j_2=1}^{j_1} \dots \sum_{j_M=1}^{j_{M-1}} \widetilde{w}_{j_1 j_2 \dots j_M} x_{j1} x_{j2} \dots x_{jM}$$
where
$$\begin{split} \widetilde{w}_{j_1 j_2 \dots j_M} &= \sum_{w \in \Omega} w \\ \Omega &= \{ w_{i_1 i_2 \dots i_M} \mid F(x_{i1} x_{i2} \dots x_{iM}) = x_{j1} x_{j2} \dots x_{jM}, \ \forall x_{i1} x_{i2} \dots x_{iM} \ \} \end{split}$$
So far, we have already proven $\sum_{i_1=1}^{D} \sum_{i_2=1}^{i_1} \cdots \sum_{i_M=1}^{i_{M-1}} \widetilde{w}_{i_1 i_2 \cdots i_M} x_{i_1} x_{i_2} \cdots x_{i_M}.$ *Mathematical induction* will be used to prove $n(D,M) = \sum_{i=1}^{D} n(i, M-1).$ and we will begin by proving the case D=1, i.e. n(1,M)=n(1,M-1). When D=1, $\sum_{i_1=1}^{D} \sum_{i_2=1}^{i_1} \cdots \sum_{i_M=1}^{i_{M-1}} \widetilde{w}_{i_1 i_2 \cdots i_M} x_{i_1} x_{i_2} \cdots x_{i_M}.$ will degenerate into $\widetilde{w}x_1^M$ , i.e., it only has one term, whose coefficient is governed by $\widetilde{w}$ regardless of the value of M.
Therefore, we have proven when D = 1, n(D,M) = 1. Suppose $n(D,M) = \sum_{i=1}^{D} n(i, M-1).$ holds for D, let's prove it will also hold for D + 1, and then $n(D,M) = \sum_{i=1}^{D} n(i, M-1).$ will be proved based on *Mathematical induction*.
Let's begin based on (1.134):
$$\sum_{i_1=1}^{D+1} \sum_{i_2=1}^{i_1} \dots \sum_{i_M=1}^{i_{M-1}} \widetilde{w}_{i_1 i_2 \dots i_M} x_{i_1} x_{i_2} \dots x_{i_M}$$
(\*)
We divide (\*) into two parts based on the first summation: the first part is made up of $i_1 = 1, 2, ..., D$ and the second part of $i_1 = D + 1$ . After division, the first part corresponds to n(D, M), and the second part corresponds to n(D + 1, M - 1). Therefore we obtain:
$$n(D+1,M) = n(D,M) + n(D+1,M-1) \tag{**}$$
And given the fact that $n(D,M) = \sum_{i=1}^{D} n(i, M-1).$ holds for D:
$$n(D, M) = \sum_{i=1}^{D} n(i, M-1)$$
Therefore, we substitute it into (\*\*)
$$n(D+1,M) = \sum_{i=1}^{D} n(i,M-1) + n(D+1,M-1) = \sum_{i=1}^{D+1} n(i,M-1)$$
We will prove $\sum_{i=1}^{D} \frac{(i+M-2)!}{(i-1)!(M-1)!} = \frac{(D+M-1)!}{(D-1)!M!}$ in a different but simpler way. We rewrite $\sum_{i=1}^{D} \frac{(i+M-2)!}{(i-1)!(M-1)!} = \frac{(D+M-1)!}{(D-1)!M!}$ from a *permutations and combinations* point of view:
$$\sum_{i=1}^{D} C_{i+M-2}^{M-1} = C_{D+M-1}^{M}$$
Firstly, We expand the summation.
$$C_{M-1}^{M-1} + C_{M}^{M-1} + \dots + C_{D+M-2}^{M-1} = C_{D+M-1}^{M}$$
Secondly, we rewrite the first term on the left side to $C_M^M$ , because $C_{M-1}^{M-1}=C_M^M=1$ . In other words, we only need to prove:
$$C_M^M + C_M^{M-1} + \dots + C_{D+M-2}^{M-1} = C_{D+M-1}^M$$
Thirdly, we take advantage of the property $C_N^r = C_{N-1}^r + C_{N-1}^{r-1}$ . So we can recursively combine the first term and the second term on the left side, and the result will ultimately equal the right side.
$n(D,M) = \frac{(D+M-1)!}{(D-1)!M!}.$ gives the mathematical form of n(D, M), and we need all the conclusions above to prove it.
Let's give some intuitive concepts by illustrating M=0,1,2. When M=0, $\sum_{i_1=1}^{D} \sum_{i_2=1}^{i_1} \cdots \sum_{i_M=1}^{i_{M-1}} \widetilde{w}_{i_1 i_2 \cdots i_M} x_{i_1} x_{i_2} \cdots x_{i_M}.$ will consist of only a constant term, which means n(D,0)=1. When M=1, it is obvious n(D,1)=D, because in this case (1.134) will only have D terms if we expand it. When M=2, it degenerates to Prob.1.14, so $n(D,2)=\frac{D(D+1)}{2}$ is also obvious. Suppose (1.137) holds for M-1, let's prove it will also hold for M.
$$\begin{split} n(D,M) &= \sum_{i=1}^{D} n(i,M-1) \quad (\text{ based on } (1.135)) \\ &= \sum_{i=1}^{D} C_{i+M-2}^{M-1} \quad (\text{ based on } (1.137) \text{ holds for } M-1) \\ &= C_{M-1}^{M-1} + C_{M}^{M-1} + C_{M+1}^{M-1} \dots + C_{D+M-2}^{M-1} \\ &= (C_{M}^{M} + C_{M}^{M-1}) + C_{M+1}^{M-1} \dots + C_{D+M-2}^{M-1} \\ &= (C_{M+1}^{M} + C_{M+1}^{M-1}) \dots + C_{D+M-2}^{M-1} \\ &= C_{M+2}^{M} \dots + C_{D+M-2}^{M-1} \\ &\dots \\ &= C_{D+M-1}^{M} \end{split}$$
With this, everything has been proven.
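A short check of the recursion against the closed form (assuming plain Python; not part of the original solution):

```python
from math import comb
from functools import lru_cache

# n(D, M) via the recursion n(D, M) = sum_{i=1}^{D} n(i, M-1), with n(D, 0) = 1,
# compared against the closed form C(D + M - 1, M).
@lru_cache(maxsize=None)
def n(D, M):
    if M == 0:
        return 1
    return sum(n(i, M - 1) for i in range(1, D + 1))

print(all(n(D, M) == comb(D + M - 1, M)
          for D in range(1, 8) for M in range(0, 6)))   # True
```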
| 5,028
|
1
|
1.16
|
hard
|
In Exercise 1.15, we proved the result $n(D,M) = \sum_{i=1}^{D} n(i, M-1).$ for the number of independent parameters in the $M^{\rm th}$ order term of a D-dimensional polynomial. We now find an expression for the total number N(D,M) of independent parameters in all of the terms up to and including the $M^{\rm th}$ order. First show that N(D,M) satisfies
$$N(D,M) = \sum_{m=0}^{M} n(D,m)$$
where n(D, m) is the number of independent parameters in the term of order m. Now make use of the result $n(D,M) = \frac{(D+M-1)!}{(D-1)!M!}.$, together with proof by induction, to show that
$$N(D, M) = \frac{(D+M)!}{D! M!}.$$
This can be done by first proving that the result holds for M=0 and arbitrary $D \geqslant 1$ , then assuming that it holds at order M, and hence showing that it holds at order M+1. Finally, make use of Stirling's approximation in the form
$$n! \simeq n^n e^{-n} \tag{1.140}$$
for large n to show that, for $D\gg M$ , the quantity N(D,M) grows like $D^M$ , and for $M\gg D$ it grows like $M^D$ . Consider a cubic (M=3) polynomial in D dimensions, and evaluate numerically the total number of independent parameters for (i) D=10 and (ii) D=100, which correspond to typical small-scale and medium-scale machine learning applications.
|
This problem can be solved in the same way as the one in Prob.1.15. Firstly, we should write the expression consisting of all the independent terms up to the Mth order, corresponding to N(D,M). By adding a summation over m on the left side of $\sum_{i_1=1}^{D} \sum_{i_2=1}^{i_1} \cdots \sum_{i_M=1}^{i_{M-1}} \widetilde{w}_{i_1 i_2 \cdots i_M} x_{i_1} x_{i_2} \cdots x_{i_M}.$, we obtain:
$$\sum_{m=0}^{M} \sum_{i_1=1}^{D} \sum_{i_2=1}^{i_1} \dots \sum_{i_m=1}^{i_{m-1}} \widetilde{w}_{i_1 i_2 \dots i_m} x_{i_1} x_{i_2} \dots x_{i_m}$$
(\*)
$N(D,M) = \sum_{m=0}^{M} n(D,m)$ is quite obvious if we view m as a looping variable, iterating through all possible orders less than or equal to M; for every possible order m, the number of independent parameters is given by n(D,m).
Let's prove $N(D,M) = \sum_{m=0}^{M} n(D,m)$ in a formal way by using *Mathematical Induction*. When M = 1,(\*) will degenerate to two terms: m = 0, corresponding to n(D,0) and m = 1, corresponding to n(D,1). Therefore N(D,1) = n(D,0) + n(D,1). Suppose $N(D,M) = \sum_{m=0}^{M} n(D,m)$ holds for M, we will see that it will also hold for M+1. Let's begin by writing all the independent terms based on (\*):
$$\sum_{m=0}^{M+1} \sum_{i_1=1}^{D} \sum_{i_2=1}^{i_1} \dots \sum_{i_m=1}^{i_{m-1}} \widetilde{w}_{i_1 i_2 \dots i_m} x_{i_1} x_{i_2} \dots x_{i_m}$$
(\*\*)
Using the same technique as in Prob.1.15, we divide (\*\*) into two parts based on the summation over m: the first part consists of m = 0,1,...,M and the second part of m = M+1. Hence, the first part will correspond to N(D,M) and the second part will correspond to n(D,M+1). So we obtain:
$$N(D, M+1) = N(D, M) + n(D, M+1)$$
Then we substitute $N(D,M) = \sum_{m=0}^{M} n(D,m)$ into the equation above:
$$N(D, M+1) = \sum_{m=0}^{M} n(D, m) + n(D, M+1) = \sum_{m=0}^{M+1} n(D, m)$$
To prove $N(D, M) = \frac{(D+M)!}{D! M!}$, we will also use the same technique as in Prob.1.15 instead of *Mathematical Induction*. We begin from the already proved (1.138):
$$N(D,M) = \sum_{m=0}^{M} n(D,m)$$
We then take advantage of (1.137):
$$\begin{split} N(D,M) &= \sum_{m=0}^{M} C_{D+m-1}^{m} \\ &= C_{D-1}^{0} + C_{D}^{1} + C_{D+1}^{2} + \ldots + C_{D+M-1}^{M} \\ &= (C_{D}^{0} + C_{D}^{1}) + C_{D+1}^{2} + \ldots + C_{D+M-1}^{M} \\ &= (C_{D+1}^{1} + C_{D+1}^{2}) + \ldots + C_{D+M-1}^{M} \\ &= \ldots \\ &= C_{D+M}^{M} \end{split}$$
Here, as asked by the problem, we examine the growth rate of N(D,M). We should see that in N(D,M), D and M are symmetric, meaning that we only need to prove that when $D \gg M$ it grows like $D^M$ ; the situation of $M \gg D$ is then handled by symmetry.
$$N(D,M) = \frac{(D+M)!}{D!M!} \approx \frac{(D+M)^{D+M}}{D^D M^M}$$
$$= \frac{1}{M^M} (\frac{D+M}{D})^D (D+M)^M$$
$$= \frac{1}{M^M} [(1+\frac{M}{D})^{\frac{D}{M}}]^M (D+M)^M$$
$$\approx (\frac{e}{M})^M (D+M)^M$$
$$= \frac{e^M}{M^M} (1+\frac{M}{D})^M D^M$$
$$= \frac{e^M}{M^M} [(1+\frac{M}{D})^{\frac{D}{M}}]^{\frac{M^2}{D}} D^M$$
$$\approx \frac{e^{M+\frac{M^2}{D}}}{M^M} D^M \approx \frac{e^M}{M^M} D^M$$
Where we use Stirling's approximation, $\lim_{n\to +\infty}(1+\frac{1}{n})^n=e$ and $e^{\frac{M^2}{D}}\approx e^0=1$ . According to the description in the problem, when $D\gg M$ we can actually view $\frac{e^M}{M^M}$ as a constant, so N(D,M) will grow like $D^M$ in this case. And by symmetry, N(D,M) will grow like $M^D$ when $M\gg D$ .
Finally, we are asked to calculate N(10,3) and N(100,3):
$$N(10,3) = C_{13}^3 = 286, \qquad N(100,3) = C_{103}^3 = 176851$$
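These two values can be reproduced directly from the closed form (a minimal sketch in plain Python; not part of the original solution):

```python
from math import comb

# Total number of independent parameters up to order M: N(D, M) = C(D + M, M).
def N(D, M):
    return comb(D + M, M)

print(N(10, 3), N(100, 3))   # 286 176851
```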
| 3,596
|
1
|
1.17
|
medium
|
The gamma function is defined by
$$\Gamma(x) \equiv \int_0^\infty u^{x-1} e^{-u} \, \mathrm{d}u. \tag{1.141}$$
Using integration by parts, prove the relation $\Gamma(x+1) = x\Gamma(x)$ . Show also that $\Gamma(1) = 1$ and hence that $\Gamma(x+1) = x!$ when x is an integer.
|
$$\Gamma(x+1) = \int_0^{+\infty} u^x e^{-u} du$$
$$= \int_0^{+\infty} -u^x de^{-u}$$
$$= -u^x e^{-u} \Big|_0^{+\infty} - \int_0^{+\infty} e^{-u} d(-u^x)$$
$$= -u^x e^{-u} \Big|_0^{+\infty} + x \int_0^{+\infty} e^{-u} u^{x-1} du$$
$$= -u^x e^{-u} \Big|_0^{+\infty} + x \Gamma(x)$$
Where we have taken advantage of *Integration by parts*. According to the equation above, we only need to prove that the first term equals 0. By *L'Hospital's Rule*:
$$\lim_{u \to +\infty} -\frac{u^x}{e^u} = \lim_{u \to +\infty} -\frac{x!}{e^u} = 0$$
And also when $u = 0$, $-u^x e^{-u} = 0$ , so we have proved $\Gamma(x+1) = x\Gamma(x)$ . Based on the definition of $\Gamma(x)$ , we can write:
$$\Gamma(1) = \int_0^{+\infty} e^{-u} du = -e^{-u} \Big|_0^{+\infty} = -(0-1) = 1$$
Therefore, when x is an integer:
$$\Gamma(x+1) = x\Gamma(x) = x(x-1)\Gamma(x-1) = \dots = x!\,\Gamma(1) = x!$$
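A quick numerical check of the recurrence and the factorial identity (assuming Python's standard math module; not part of the original solution):

```python
from math import gamma, factorial, isclose

# Gamma(x+1) = x * Gamma(x) for real x > 0, and Gamma(n+1) = n! for integer n.
print(all(isclose(gamma(x + 1), x * gamma(x)) for x in (0.5, 1.3, 2.7, 5.0)))  # True
print(all(isclose(gamma(n + 1), factorial(n)) for n in range(1, 10)))          # True
```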
| 887
|
1
|
1.18
|
medium
|
We can use the result $I = (2\pi\sigma^2)^{1/2}.$ to derive an expression for the surface area $S_D$ , and the volume $V_D$ , of a sphere of unit radius in D dimensions. To do this, consider the following result, which is obtained by transforming from Cartesian to polar coordinates
$$\prod_{i=1}^{D} \int_{-\infty}^{\infty} e^{-x_i^2} dx_i = S_D \int_{0}^{\infty} e^{-r^2} r^{D-1} dr.$$
Using the definition $\Gamma(x) \equiv \int_0^\infty u^{x-1} e^{-u} \, \mathrm{d}u.$ of the Gamma function, together with $I = (2\pi\sigma^2)^{1/2}.$, evaluate both sides of this equation, and hence show that
$$S_D = \frac{2\pi^{D/2}}{\Gamma(D/2)}. \tag{1.143}$$
Next, by integrating with respect to radius from 0 to 1, show that the volume of the unit sphere in D dimensions is given by
$$V_D = \frac{S_D}{D}. \tag{1.144}$$
Finally, use the results $\Gamma(1)=1$ and $\Gamma(3/2)=\sqrt{\pi}/2$ to show that $S_D = \frac{2\pi^{D/2}}{\Gamma(D/2)}.$ and (1.144) reduce to the usual expressions for D=2 and D=3.
|
Based on $I = \int_{-\infty}^{\infty} \exp\left(-\frac{1}{2\sigma^2}x^2\right) dx$ and $I = (2\pi\sigma^2)^{1/2}$, and by substituting $x = \sqrt{2}\sigma y$, it is straightforward to obtain:
$$\int_{-\infty}^{+\infty} e^{-x_i^2} dx_i = \sqrt{\pi}$$
Therefore, the left side of $\prod_{i=1}^{D} \int_{-\infty}^{\infty} e^{-x_i^2} dx_i = S_D \int_{0}^{\infty} e^{-r^2} r^{D-1} dr$ equals $\pi^{\frac{D}{2}}$ . For the right side of this equation:
$$\begin{split} S_D \int_0^{+\infty} e^{-r^2} r^{D-1} dr &= S_D \int_0^{+\infty} e^{-u} u^{\frac{D-1}{2}} d\sqrt{u} \quad (u = r^2) \\ &= \frac{S_D}{2} \int_0^{+\infty} e^{-u} u^{\frac{D}{2} - 1} du \\ &= \frac{S_D}{2} \Gamma(\frac{D}{2}) \end{split}$$
Hence, we obtain:
$$\pi^{\frac{D}{2}} = \frac{S_D}{2} \Gamma(\frac{D}{2}) \quad \Longrightarrow \quad S_D = \frac{2\pi^{\frac{D}{2}}}{\Gamma(\frac{D}{2})}$$
$S_D$ gives the surface area of a sphere with radius 1 in dimension D. We can extend this conclusion: the surface area of a sphere with radius r in dimension D equals $S_D \cdot r^{D-1}$ , which reduces to $S_D$ when r=1. This follows once you notice that the surface area of a sphere in dimension D is proportional to the (D-1)th power of its radius, i.e. $r^{D-1}$ . Considering the relationship between V and S of a sphere with arbitrary radius in dimension D, $\frac{dV}{dr} = S$ , we can obtain:
$$V = \int S dr = \int S_D r^{D-1} dr = \frac{S_D}{D} r^D$$
The equation above gives the expression of the volume of a sphere with radius r in dimension D, so we let r=1:
$$V_D = \frac{S_D}{D}$$
For D = 2 and D = 3:
$$V_2 = \frac{S_2}{2} = \frac{1}{2} \cdot \frac{2\pi}{\Gamma(1)} = \pi$$
$$V_3 = \frac{S_3}{3} = \frac{1}{3} \cdot \frac{2\pi^{\frac{3}{2}}}{\Gamma(\frac{3}{2})} = \frac{1}{3} \cdot \frac{2\pi^{\frac{3}{2}}}{\frac{\sqrt{\pi}}{2}} = \frac{4}{3}\pi$$
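A quick numerical check of $S_D$ and $V_D$ (assuming Python's standard math module; not part of the original solution):

```python
from math import pi, gamma, isclose

# S_D = 2 * pi^(D/2) / Gamma(D/2), V_D = S_D / D for the unit sphere.
def S(D):
    return 2 * pi ** (D / 2) / gamma(D / 2)

def V(D):
    return S(D) / D

print(isclose(S(2), 2 * pi), isclose(V(2), pi))           # circle: circumference, area
print(isclose(S(3), 4 * pi), isclose(V(3), 4 / 3 * pi))   # sphere: surface area, volume
```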
| 1,933
|
1
|
1.19
|
medium
|
Consider a sphere of radius a in D-dimensions together with the concentric hypercube of side 2a, so that the sphere touches the hypercube at the centres of each of its sides. By using the results of Exercise 1.18, show that the ratio of the volume of the sphere to the volume of the cube is given by
$$\frac{\text{volume of sphere}}{\text{volume of cube}} = \frac{\pi^{D/2}}{D2^{D-1}\Gamma(D/2)}.$$
Now make use of Stirling's formula in the form
$$\Gamma(x+1) \simeq (2\pi)^{1/2} e^{-x} x^{x+1/2}$$
which is valid for $x\gg 1$ , to show that, as $D\to\infty$ , the ratio $\frac{\text{volume of sphere}}{\text{volume of cube}} = \frac{\pi^{D/2}}{D2^{D-1}\Gamma(D/2)}.$ goes to zero. Show also that the ratio of the distance from the centre of the hypercube to one of the corners, divided by the perpendicular distance to one of the sides, is $\sqrt{D}$ , which therefore goes to $\infty$ as $D\to\infty$ . From these results we see that, in a space of high dimensionality, most of the volume of a cube is concentrated in the large number of corners, which themselves become very long 'spikes'!
|
We have already given a hint in the solution of Prob.1.18, and here we will make it more explicit: the volume of a sphere with radius r is $V_D \cdot r^D$ . This is quite similar to the conclusion we obtained in Prob.1.18 about the surface area, except that the volume is proportional to the Dth power of the radius, i.e. $r^D$ rather than $r^{D-1}$ .
$$\frac{\text{volume of sphere}}{\text{volume of cube}} = \frac{V_D a^D}{(2a)^D} = \frac{S_D}{2^D D} = \frac{\pi^{\frac{D}{2}}}{2^{D-1} D \Gamma(\frac{D}{2})} \tag{*}$$
Where we have used the result of $S_D = \frac{2\pi^{D/2}}{\Gamma(D/2)}.$. And when $D \to +\infty$ , we will use a simple method to show that (\*) will converge to 0. We rewrite it :
$$(*) = \frac{2}{D} \cdot (\frac{\pi}{4})^{\frac{D}{2}} \cdot \frac{1}{\Gamma(\frac{D}{2})}$$
Hence, it is now quite obvious that all three factors converge to 0 when $D \to +\infty$ , and therefore so does their product. The last part of the problem is quite simple:
$$\frac{\text{center to one corner}}{\text{center to one side}} = \frac{\sqrt{a^2 \cdot D}}{a} = \sqrt{D} \quad \text{and} \quad \lim_{D \to +\infty} \sqrt{D} = +\infty$$
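A small numerical illustration of how quickly the ratio (\*) decays (assuming Python's standard math module; not part of the original solution):

```python
from math import pi, gamma

# Sphere-to-cube volume ratio: pi^(D/2) / (D * 2^(D-1) * Gamma(D/2)).
def ratio(D):
    return pi ** (D / 2) / (D * 2 ** (D - 1) * gamma(D / 2))

for D in (1, 2, 3, 5, 10, 20):
    print(D, ratio(D))   # 1.0, ~0.785, ~0.524, ~0.164, ~0.0025, ~2.5e-8
```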
| 1,143
|
1
|
1.2
|
easy
|
Write down the set of coupled linear equations, analogous to (1.122), satisfied by the coefficients $w_i$ which minimize the regularized sum-of-squares error function given by $\widetilde{E}(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \{y(x_n, \mathbf{w}) - t_n\}^2 + \frac{\lambda}{2} ||\mathbf{w}||^2$.
|
This problem is similar to Prob.1.1, and the only difference is the last term on the right side of $\widetilde{E}(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \{y(x_n, \mathbf{w}) - t_n\}^2 + \frac{\lambda}{2} ||\mathbf{w}||^2$, the penalty term. So we will do the same thing as in Prob.1.1:
$$\frac{\partial E}{\partial w_{i}} = \sum_{n=1}^{N} \{y(x_{n}, \mathbf{w}) - t_{n}\} x_{n}^{i} + \lambda w_{i} = 0$$
$$= > \sum_{j=0}^{M} \sum_{n=1}^{N} x_{n}^{(j+i)} w_{j} + \lambda w_{i} = \sum_{n=1}^{N} x_{n}^{i} t_{n}$$
$$= > \sum_{j=0}^{M} \{\sum_{n=1}^{N} x_{n}^{(j+i)} + \delta_{ji} \lambda\} w_{j} = \sum_{n=1}^{N} x_{n}^{i} t_{n}$$
where
$$\delta_{ji} = \begin{cases} 0 & j \neq i \\ 1 & j = i \end{cases}$$
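A minimal numerical sketch of the regularized normal equations derived above (assuming NumPy; the data and $\lambda$ are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
M, N, lam = 3, 40, 0.1
x = rng.uniform(-1.0, 1.0, size=N)
t = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=N)

Phi = np.vander(x, M + 1, increasing=True)        # Phi[n, j] = x_n^j
A = Phi.T @ Phi                                   # A_ij = sum_n x_n^(i+j)
T = Phi.T @ t                                     # T_i = sum_n x_n^i t_n
w = np.linalg.solve(A + lam * np.eye(M + 1), T)   # (A + lambda I) w = T

# Same minimizer via an augmented ordinary least-squares problem.
w_direct = np.linalg.lstsq(
    np.vstack([Phi, np.sqrt(lam) * np.eye(M + 1)]),
    np.concatenate([t, np.zeros(M + 1)]), rcond=None)[0]
print(np.allclose(w, w_direct))                   # True
```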
| 715
|
1
|
1.20
|
medium
|
In this exercise, we explore the behaviour of the Gaussian distribution in high-dimensional spaces. Consider a Gaussian distribution in D dimensions given by
$$p(\mathbf{x}) = \frac{1}{(2\pi\sigma^2)^{D/2}} \exp\left(-\frac{\|\mathbf{x}\|^2}{2\sigma^2}\right).$$
We wish to find the density with respect to radius in polar coordinates in which the direction variables have been integrated out. To do this, show that the integral of the probability density over a thin shell of radius r and thickness $\epsilon$ , where $\epsilon \ll 1$ , is given by $p(r)\epsilon$ where
$$p(r) = \frac{S_D r^{D-1}}{(2\pi\sigma^2)^{D/2}} \exp\left(-\frac{r^2}{2\sigma^2}\right)$$
where $S_D$ is the surface area of a unit sphere in D dimensions. Show that the function p(r) has a single stationary point located, for large D, at $\hat{r} \simeq \sqrt{D}\sigma$ . By considering $p(\hat{r} + \epsilon)$ where $\epsilon \ll \hat{r}$ , show that for large D,
$$p(\hat{r} + \epsilon) = p(\hat{r}) \exp\left(-\frac{3\epsilon^2}{2\sigma^2}\right)$$
which shows that $\widehat{r}$ is a maximum of the radial probability density and also that p(r) decays exponentially away from its maximum at $\widehat{r}$ with length scale $\sigma$ . We have already seen that $\sigma \ll \widehat{r}$ for large D, and so we see that most of the probability mass is concentrated in a thin shell at large radius. Finally, show that the probability density $p(\mathbf{x})$ is larger at the origin than at the radius $\widehat{r}$ by a factor of $\exp(D/2)$ . We therefore see that most of the probability mass in a high-dimensional Gaussian distribution is located at a different radius from the region of high probability density. This property of distributions in spaces of high dimensionality will have important consequences when we consider Bayesian inference of model parameters in later chapters.
|
The probability density inside a thin shell with radius r and thickness $\epsilon$ can be viewed as a constant, and a sphere in dimension D with radius r has surface area $S_D r^{D-1}$ , as already discussed in Prob.1.19. Therefore:
$$\int_{shell} p(\mathbf{x}) d\mathbf{x} = p(\mathbf{x}) \int_{shell} d\mathbf{x} = \frac{exp(-\frac{r^2}{2\sigma^2})}{(2\pi\sigma^2)^{\frac{D}{2}}} \cdot V(\text{shell}) = \frac{exp(-\frac{r^2}{2\sigma^2})}{(2\pi\sigma^2)^{\frac{D}{2}}} S_D r^{D-1} \epsilon$$
Thus we denote:
$$p(r) = \frac{S_D r^{D-1}}{(2\pi\sigma^2)^{\frac{D}{2}}} exp(-\frac{r^2}{2\sigma^2})$$
We calculate the derivative of $p(r) = \frac{S_D r^{D-1}}{(2\pi\sigma^2)^{D/2}} \exp\left(-\frac{r^2}{2\sigma^2}\right)$ with respect to r:
$$\frac{dp(r)}{dr} = \frac{S_D}{(2\pi\sigma^2)^{\frac{D}{2}}} r^{D-2} exp(-\frac{r^2}{2\sigma^2})(D-1-\frac{r^2}{\sigma^2}) \tag{*}$$
Setting the derivative equal to 0, we obtain its unique root (stationary point) $\hat{r} = \sqrt{D-1}\sigma$ , because $r \in [0,+\infty)$ . When $r < \hat{r}$ the derivative is greater than 0, so p(r) increases as $r \uparrow$ ; when $r > \hat{r}$ the derivative is less than 0, so p(r) decreases as $r \uparrow$ . Therefore $\hat{r}$ is the only maximum point, and it is obvious that when $D \gg 1$ , $\hat{r} \approx \sqrt{D}\sigma$ .
$$\frac{p(\hat{r}+\epsilon)}{p(\hat{r})} = \frac{(\hat{r}+\epsilon)^{D-1} exp(-\frac{(\hat{r}+\epsilon)^2}{2\sigma^2})}{\hat{r}^{D-1} exp(-\frac{\hat{r}^2}{2\sigma^2})}$$
$$= (1+\frac{\epsilon}{\hat{r}})^{D-1} exp(-\frac{2\epsilon\,\hat{r}+\epsilon^2}{2\sigma^2})$$
$$= exp(-\frac{2\epsilon\,\hat{r}+\epsilon^2}{2\sigma^2} + (D-1)ln(1+\frac{\epsilon}{\hat{r}}))$$
We process for the exponential term by using *Taylor Theorems*.
$$-\frac{2\epsilon \,\hat{r} + \epsilon^2}{2\sigma^2} + (D-1)ln(1 + \frac{\epsilon}{\hat{r}}) \approx -\frac{2\epsilon \,\hat{r} + \epsilon^2}{2\sigma^2} + (D-1)(\frac{\epsilon}{\hat{r}} - \frac{\epsilon^2}{2\hat{r}^2})$$
$$= -\frac{2\epsilon \,\hat{r} + \epsilon^2}{2\sigma^2} + \frac{2\hat{r}\epsilon - \epsilon^2}{2\sigma^2}$$
$$= -\frac{\epsilon^2}{\sigma^2}$$
Therefore, $p(\hat{r}+\epsilon)=p(\hat{r})exp(-\frac{\epsilon^2}{\sigma^2})$ . Note: Here I draw a different conclusion compared with $p(\hat{r} + \epsilon) = p(\hat{r}) \exp\left(-\frac{3\epsilon^2}{2\sigma^2}\right)$, but I do not think there is any mistake in my deduction.
Finally, we see from (1.147):
$$p(\mathbf{x})\Big|_{\mathbf{x}=0} = \frac{1}{(2\pi\sigma^2)^{\frac{D}{2}}}$$
$$p(\mathbf{x})\Big|_{||\mathbf{x}||^2 = \hat{r}^2} = \frac{1}{(2\pi\sigma^2)^{\frac{D}{2}}} exp(-\frac{\hat{r}^2}{2\sigma^2}) \approx \frac{1}{(2\pi\sigma^2)^{\frac{D}{2}}} exp(-\frac{D}{2})$$
Hence the density at the origin is larger than the density at radius $\hat{r}$ by a factor of $exp(\frac{D}{2})$ .
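A Monte Carlo illustration of the concentration of the radius around $\sqrt{D}\sigma$ (assuming NumPy; not part of the original solution):

```python
import numpy as np

# Samples from a D-dimensional isotropic Gaussian: the radius ||x|| concentrates near
# sqrt(D)*sigma with a width of order sigma (here ~sigma/sqrt(2)), as derived above.
rng = np.random.default_rng(7)
D, sigma = 100, 2.0
x = rng.normal(0.0, sigma, size=(100_000, D))
r = np.linalg.norm(x, axis=1)
print(r.mean(), np.sqrt(D) * sigma)    # ~20.0 vs 20.0
print(r.std(), sigma / np.sqrt(2))     # ~1.41 vs ~1.41
```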
| 2,757
|
1
|
1.21
|
medium
|
Consider two nonnegative numbers a and b, and show that, if $a \leq b$ , then $a \leq (ab)^{1/2}$ . Use this result to show that, if the decision regions of a two-class classification problem are chosen to minimize the probability of misclassification, this probability will satisfy
$$p(\text{mistake}) \leqslant \int \left\{ p(\mathbf{x}, C_1) p(\mathbf{x}, C_2) \right\}^{1/2} d\mathbf{x}.$$
|
The first question is rather simple:
$$(ab)^{\frac{1}{2}} - a = a^{\frac{1}{2}}(b^{\frac{1}{2}} - a^{\frac{1}{2}}) \ge 0$$
Where we have taken advantage of $b \ge a \ge 0$ . And based on (1.78):
$$\begin{split} p(\text{mistake}) &= p(\mathbf{x} \in R_1, C_2) + p(\mathbf{x} \in R_2, C_1) \\ &= \int_{R_1} p(\mathbf{x}, C_2) dx + \int_{R_2} p(\mathbf{x}, C_1) dx \end{split}$$
Recall that the decision rule which can minimize misclassification is that if $p(\mathbf{x}, C_1) > p(\mathbf{x}, C_2)$ , for a given value of $\mathbf{x}$ , we will assign that $\mathbf{x}$ to class $C_1$ . We can see that in decision area $R_1$ , it should satisfy $p(\mathbf{x}, C_1) > p(\mathbf{x}, C_2)$ . Therefore, using what we have proved, we can obtain:
$$\int_{R_1} p(\mathbf{x}, C_2) dx \le \int_{R_1} \{ p(\mathbf{x}, C_1) p(\mathbf{x}, C_2) \}^{\frac{1}{2}} dx$$
It is the same for decision area $R_2$ . Therefore we can obtain:
$$p(\text{mistake}) \le \int \{p(\mathbf{x}, C_1) p(\mathbf{x}, C_2)\}^{\frac{1}{2}} dx$$
# **Problem 1.22 Solution**
We need to deeply understand $\sum_{k} L_{kj} p(\mathcal{C}_k | \mathbf{x})$. When $L_{kj} = 1 - I_{kj}$ :
$$\sum_{k} L_{kj} p(C_k | \mathbf{x}) = \sum_{k} p(C_k | \mathbf{x}) - p(C_j | \mathbf{x})$$
Given a specific $\mathbf{x}$ , the first term on the right side is a constant equal to 1, no matter which class $C_j$ we assign $\mathbf{x}$ to. Therefore if we want to minimize the loss, we should maximize $p(C_j|\mathbf{x})$ . Hence, we will assign $\mathbf{x}$ to the class $C_j$ that gives the largest posterior probability $p(C_j|\mathbf{x})$ .
The explanation of the loss matrix is quite simple. If we label correctly, there is no loss. Otherwise, we incur a loss of the same magnitude whichever class we assign it to. The loss matrix is given below to give you an intuitive view:
$$\begin{bmatrix} 0 & 1 & 1 & \dots & 1 \\ 1 & 0 & 1 & \dots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 & \dots & 0 \end{bmatrix}$$
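A tiny numerical sketch of this decision rule (assuming NumPy; the posterior values are hypothetical): with the 0/1 loss matrix $L_{kj} = 1 - I_{kj}$, minimizing the expected loss is the same as picking the class with the largest posterior.

```python
import numpy as np

posterior = np.array([0.2, 0.5, 0.3])        # hypothetical p(C_k | x)
L = 1.0 - np.eye(3)                          # loss matrix L_kj = 1 - I_kj
expected_loss = L.T @ posterior              # sum_k L_kj p(C_k | x) for each class j
print(expected_loss)                         # [0.8 0.5 0.7]
print(np.argmin(expected_loss) == np.argmax(posterior))   # True
```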
| 2,027
|
1
|
1.23
|
easy
|
Derive the criterion for minimizing the expected loss when there is a general loss matrix and general prior probabilities for the classes.
|
$$\mathbb{E}[L] = \sum_{k} \sum_{j} \int_{R_{j}} L_{kj} p(\mathbf{x}, C_{k}) d\mathbf{x} = \sum_{k} \sum_{j} \int_{R_{j}} L_{kj} p(C_{k}) p(\mathbf{x} | C_{k}) d\mathbf{x}$$
If we denote a new loss matrix by $L'_{kj} = L_{kj}\,p(C_k)$ , we can obtain a new equation:
$$\mathbb{E}[L] = \sum_{k} \sum_{j} \int_{R_{j}} L'_{kj}\, p(\mathbf{x} | C_{k}) d\mathbf{x}$$
Hence, the criterion for minimizing the expected loss is to assign each $\mathbf{x}$ to the class j that minimizes $\sum_{k} L_{kj}\, p(C_k)\, p(\mathbf{x}|C_k)$ , which is equivalent to minimizing $\sum_{k} L_{kj}\, p(C_k|\mathbf{x})$ .
| 374
|
1
|
1.24
|
medium
|
- 1.24 (\*\*) Consider a classification problem in which the loss incurred when an input vector from class $C_k$ is classified as belonging to class $C_j$ is given by the loss matrix $L_{kj}$ , and for which the loss incurred in selecting the reject option is $\lambda$ . Find the decision criterion that will give the minimum expected loss. Verify that this reduces to the reject criterion discussed in Section 1.5.3 when the loss matrix is given by $L_{kj} = 1 - I_{kj}$ . What is the relationship between $\lambda$ and the rejection threshold $\theta$ ?
|
This description of the problem is a little confusing, and what it really means is that $\lambda$ is the parameter governing the loss, just like $\theta$ governs the posterior probability $p(C_k|\mathbf{x})$ when we introduce the reject option. Therefore the reject option can be written in a new way when we view it in terms of $\lambda$ and the loss:
$$\text{choice} \begin{cases} \text{class } C_j & \min_{l} \sum_{k} L_{kl} p(C_k|x) < \lambda \\ \text{reject} & \text{else} \end{cases}$$
Where $C_j$ is the class that attains the minimum. If $L_{kj} = 1 - I_{kj}$ , according to what we have proved in Prob.1.22:
$$\sum_{k} L_{kj} p(C_k | \mathbf{x}) = \sum_{k} p(C_k | \mathbf{x}) - p(C_j | \mathbf{x}) = 1 - p(C_j | \mathbf{x})$$
Therefore, the reject criterion expressed in terms of $\lambda$ above is actually equivalent to requiring that the largest posterior probability be larger than $1 - \lambda$ :
$$\min_{l} \sum_{k} L_{kl} p(C_k|x) < \lambda \quad <=> \quad \max_{l} p(C_l|x) > 1 - \lambda$$
And from the viewpoint of $\theta$ and the posterior probability, we assign a class to $\mathbf{x}$ (i.e. we do not reject) when the following constraint holds:
$$\max_{l} p(C_l|x) > \theta$$
Hence from the two different views, we can see that $\lambda$ and $\theta$ are correlated with:
$$\lambda + \theta = 1$$
| 1,313
|
1
|
1.25
|
easy
|
Consider the generalization of the squared loss function $\mathbb{E}[L] = \iint \{y(\mathbf{x}) - t\}^2 p(\mathbf{x}, t) \, d\mathbf{x} \, dt.$ for a single target variable t to the case of multiple target variables described by the vector t given by
$$\mathbb{E}[L(\mathbf{t}, \mathbf{y}(\mathbf{x}))] = \iint \|\mathbf{y}(\mathbf{x}) - \mathbf{t}\|^2 p(\mathbf{x}, \mathbf{t}) \, d\mathbf{x} \, d\mathbf{t}. \tag{1.151}$$
Using the calculus of variations, show that the function $\mathbf{y}(\mathbf{x})$ for which this expected loss is minimized is given by $\mathbf{y}(\mathbf{x}) = \mathbb{E}_{\mathbf{t}}[\mathbf{t}|\mathbf{x}]$ . Show that this result reduces to $y(\mathbf{x}) = \frac{\int tp(\mathbf{x}, t) dt}{p(\mathbf{x})} = \int tp(t|\mathbf{x}) dt = \mathbb{E}_t[t|\mathbf{x}]$ for the case of a single target variable t.
|
We can prove this informally by dealing with one dimension at a time, following the same process as in $\mathbb{E}[L] = \iint \{y(\mathbf{x}) - t\}^2 p(\mathbf{x}, t) \, d\mathbf{x} \, dt.$ - $y(\mathbf{x}) = \frac{\int tp(\mathbf{x}, t) dt}{p(\mathbf{x})} = \int tp(t|\mathbf{x}) dt = \mathbb{E}_t[t|\mathbf{x}]$ until all dimensions have been handled, because the total loss E can be decomposed into the sum of the losses on every dimension, and these contributions are independent. Here, we will use a more informal way to prove this. In this case, the expected loss can be written:
$$\mathbb{E}[L] = \int \int \|\mathbf{y}(\mathbf{x}) - \mathbf{t}\|^2 p(\mathbf{x}, \mathbf{t}) d\mathbf{t} d\mathbf{x}$$
Therefore, just as the same process in $\mathbb{E}[L] = \iint \{y(\mathbf{x}) - t\}^2 p(\mathbf{x}, t) \, d\mathbf{x} \, dt.$ - (1.89):
$$\frac{\partial \mathbb{E}[L]}{\partial y(\mathbf{x})} = 2 \int \{\mathbf{y}(\mathbf{x}) - \mathbf{t}\} p(\mathbf{x}, \mathbf{t}) d\mathbf{t} = \mathbf{0}$$
$$=> \mathbf{y}(\mathbf{x}) = \frac{\int \mathbf{t} p(\mathbf{x}, \mathbf{t}) d\mathbf{t}}{p(\mathbf{x})} = \mathbb{E}_{\mathbf{t}}[\mathbf{t}|\mathbf{x}]$$
| 1,173
|
1
|
1.26
|
easy
|
- 1.26 (\*) By expansion of the square in $\mathbb{E}[L(\mathbf{t}, \mathbf{y}(\mathbf{x}))] = \iint \|\mathbf{y}(\mathbf{x}) - \mathbf{t}\|^2 p(\mathbf{x}, \mathbf{t}) \, d\mathbf{x} \, d\mathbf{t}.$, derive a result analogous to $\mathbb{E}[L] = \int \{y(\mathbf{x}) - \mathbb{E}[t|\mathbf{x}]\}^2 p(\mathbf{x}) d\mathbf{x} + \int \{\mathbb{E}[t|\mathbf{x}] - t\}^2 p(\mathbf{x}) d\mathbf{x}.$ and hence show that the function y(x) that minimizes the expected squared loss for the case of a vector t of target variables is again given by the conditional expectation of t.
|
The process is identical to the deduction we conducted for $\mathbb{E}[L] = \int \{y(\mathbf{x}) - \mathbb{E}[t|\mathbf{x}]\}^2 p(\mathbf{x}) d\mathbf{x} + \int \{\mathbb{E}[t|\mathbf{x}] - t\}^2 p(\mathbf{x}) d\mathbf{x}.$, so we will not repeat it here. What we should emphasize is that $\mathbb{E}[\mathbf{t}|\mathbf{x}]$ is a function of $\mathbf{x}$ , not $\mathbf{t}$ . Thus the cross term in the expansion can be eliminated by first integrating over $\mathbf{t}$ , and that is how we obtain $\mathbb{E}[L] = \int \{y(\mathbf{x}) - \mathbb{E}[t|\mathbf{x}]\}^2 p(\mathbf{x}) d\mathbf{x} + \int \{\mathbb{E}[t|\mathbf{x}] - t\}^2 p(\mathbf{x}) d\mathbf{x}.$.
**Note**: There is a mistake in $\mathbb{E}[L] = \int \{y(\mathbf{x}) - \mathbb{E}[t|\mathbf{x}]\}^2 p(\mathbf{x}) d\mathbf{x} + \int \{\mathbb{E}[t|\mathbf{x}] - t\}^2 p(\mathbf{x}) d\mathbf{x}.$, i.e. the second term on the right side is wrong. You can view $\mathbb{E}[L] = \int \left\{ y(\mathbf{x}) - h(\mathbf{x}) \right\}^2 p(\mathbf{x}) \, d\mathbf{x} + \int \left\{ h(\mathbf{x}) - t \right\}^2 p(\mathbf{x}, t) \, d\mathbf{x} \, dt.$ on P148 for reference. It should be:
$$\mathbb{E}[L] = \int \{y(\mathbf{x}) - \mathbb{E}[t|\mathbf{x}]\}^2 p(\mathbf{x}) d\mathbf{x} + \int \{\mathbb{E}[t|\mathbf{x}] - t\}^2 p(\mathbf{x}, t) d\mathbf{x} dt$$
Moreover, this mistake has already been revised in the errata.
# **Problem 1.27 Solution**
We deal with this problem based on *Calculus of Variations*.
$$\frac{\partial \mathbb{E}[L_q]}{\partial y(\mathbf{x})} = q \int |y(\mathbf{x}) - t|^{q-1} sign(y(\mathbf{x}) - t) p(\mathbf{x}, t) dt = 0$$
$$= > \int_{-\infty}^{y(\mathbf{x})} |y(\mathbf{x}) - t|^{q-1} p(\mathbf{x}, t) dt = \int_{y(\mathbf{x})}^{+\infty} |y(\mathbf{x}) - t|^{q-1} p(\mathbf{x}, t) dt$$
$$= > \int_{-\infty}^{y(\mathbf{x})} |y(\mathbf{x}) - t|^{q-1} p(t|\mathbf{x}) dt = \int_{y(\mathbf{x})}^{+\infty} |y(\mathbf{x}) - t|^{q-1} p(t|\mathbf{x}) dt$$
Where we take advantage of $p(\mathbf{x},t) = p(t|\mathbf{x})p(\mathbf{x})$ and the property of sign function. Hence, when q=1, the equation above will reduce to :
$$\int_{-\infty}^{y(\mathbf{x})} p(t|\mathbf{x}) dt = \int_{y(\mathbf{x})}^{+\infty} p(t|\mathbf{x}) dt$$
In other words, when q = 1, the optimal $y(\mathbf{x})$ is given by the conditional median. When $q \to 0$ , it is non-trivial. We need to rewrite (1.91):
$$\mathbb{E}[L_q] = \int \left\{ \int |y(\mathbf{x}) - t|^q p(t|\mathbf{x}) p(\mathbf{x}) dt \right\} d\mathbf{x}$$
$$= \int \left\{ p(\mathbf{x}) \int |y(\mathbf{x}) - t|^q p(t|\mathbf{x}) dt \right\} d\mathbf{x} \quad (*)$$
If we want to minimize $\mathbb{E}[L_q]$ , we only need to minimize the integrand of (\*):
$$\int |y(\mathbf{x}) - t|^q p(t|\mathbf{x}) dt \tag{**}$$
When q = 0, $|y(\mathbf{x}) - t|^q$ is close to 1 everywhere except in the neighborhood around $t = y(\mathbf{x})$ (This can be seen from Fig1.29). Therefore:
$$(**) \approx \int_{\mathcal{U}} p(t|\mathbf{x}) dt - \int_{\varepsilon} (1 - |y(\mathbf{x}) - t|^q) p(t|\mathbf{x}) dt \approx \int_{\mathcal{U}} p(t|\mathbf{x}) dt - \int_{\varepsilon} p(t|\mathbf{x}) dt$$
Where $\epsilon$ denotes the small neighborhood and $\mathscr U$ denotes the whole space that t lies in. Note that $y(\mathbf x)$ does not affect the first term, but it does affect the second term (because the choice of $y(\mathbf x)$ determines the location of $\epsilon$ ). Hence we will place $\epsilon$ at the location where $p(t|\mathbf x)$ achieves its largest value, i.e. the mode, because in this way we obtain the largest reduction. Therefore, it is natural to choose $y(\mathbf x)$ equal to the t that maximizes $p(t|\mathbf x)$ for every $\mathbf x$ .
| 3,743
|
1
|
1.27
|
medium
|
Consider the expected loss for regression problems under the $L_q$ loss function given by $\mathbb{E}[L_q] = \iint |y(\mathbf{x}) - t|^q p(\mathbf{x}, t) \, d\mathbf{x} \, dt$. Write down the condition that $y(\mathbf{x})$ must satisfy in order to minimize $\mathbb{E}[L_q]$ . Show that, for q=1, this solution represents the conditional median, i.e., the function $y(\mathbf{x})$ such that the probability mass for $t < y(\mathbf{x})$ is the same as for $t \geqslant y(\mathbf{x})$ . Also show that the minimum expected $L_q$ loss for $q \to 0$ is given by the conditional mode, i.e., by the function $y(\mathbf{x})$ equal to the value of t that maximizes $p(t|\mathbf{x})$ for each $\mathbf{x}$ .
|
Since we can choose $y(\mathbf{x})$ independently for each value of $\mathbf{x}$ , the minimum of the expected $L_q$ loss can be found by minimizing the integrand given by
$$\int |y(\mathbf{x}) - t|^q p(t|\mathbf{x}) \, \mathrm{d}t \tag{42}$$
for each value of $\mathbf{x}$ . Setting the derivative of (42) with respect to $y(\mathbf{x})$ to zero gives the stationarity condition
$$\int q|y(\mathbf{x}) - t|^{q-1} \operatorname{sign}(y(\mathbf{x}) - t)p(t|\mathbf{x}) dt$$
$$= q \int_{-\infty}^{y(\mathbf{x})} |y(\mathbf{x}) - t|^{q-1} p(t|\mathbf{x}) dt - q \int_{y(\mathbf{x})}^{\infty} |y(\mathbf{x}) - t|^{q-1} p(t|\mathbf{x}) dt = 0$$
which can also be obtained directly by setting the functional derivative of $\mathbb{E}[L_q] = \iint |y(\mathbf{x}) - t|^q p(\mathbf{x}, t) \, d\mathbf{x} \, dt$ with respect to $y(\mathbf{x})$ equal to zero. It follows that $y(\mathbf{x})$ must satisfy
$$\int_{-\infty}^{y(\mathbf{x})} |y(\mathbf{x}) - t|^{q-1} p(t|\mathbf{x}) \, \mathrm{d}t = \int_{y(\mathbf{x})}^{\infty} |y(\mathbf{x}) - t|^{q-1} p(t|\mathbf{x}) \, \mathrm{d}t. \tag{43}$$
For the case of q = 1 this reduces to
$$\int_{-\infty}^{y(\mathbf{x})} p(t|\mathbf{x}) \, \mathrm{d}t = \int_{y(\mathbf{x})}^{\infty} p(t|\mathbf{x}) \, \mathrm{d}t. \tag{44}$$
which says that $y(\mathbf{x})$ must be the conditional median of t.
For $q \to 0$ we note that, as a function of t, the quantity $|y(\mathbf{x}) - t|^q$ is close to 1 everywhere except in a small neighbourhood around $t = y(\mathbf{x})$ where it falls to zero. The value of (42) will therefore be close to 1, since the density p(t) is normalized, but reduced slightly by the 'notch' close to $t = y(\mathbf{x})$ . We obtain the biggest reduction in (42) by choosing the location of the notch to coincide with the largest value of p(t), i.e. with the (conditional) mode.
| 1,871
|
1
|
1.28
|
easy
|
In Section 1.6, we introduced the idea of entropy h(x) as the information gained on observing the value of a random variable x having distribution p(x). We saw that, for independent variables x and y for which p(x,y) = p(x)p(y), the entropy functions are additive, so that h(x,y) = h(x) + h(y). In this exercise, we derive the relation between h and p in the form of a function h(p). First show that $h(p^2) = 2h(p)$ , and hence by induction that $h(p^n) = nh(p)$ where n is a positive integer. Hence show that $h(p^{n/m}) = (n/m)h(p)$ where m is also a positive integer. This implies that $h(p^x) = xh(p)$ where x is a positive rational number, and hence by continuity when it is a positive real number. Finally, show that this implies h(p) must take the form $h(p) \propto \ln p$ .
|
Basically this problem is focused on the definition of *Information Content*, i.e. h(x). We will rewrite the problem more precisely. In *Information Theory*, $h(\cdot)$ is also called *Information Content* and denoted as $I(\cdot)$ . Here we will still use $h(\cdot)$ for consistency. The whole problem is about the properties of h(x). Based on our knowledge that $h(\cdot)$ is a monotonic function of the probability p(x), we can obtain:
$$h(x) = f(p(x))$$
The equation above means that the *Information* we obtain for a specific value of a random variable x is determined by its occurring probability p(x), and the relationship is given by a mapping function $f(\cdot)$ . Suppose C is the intersection of two independent events A and B; then the information of event C occurring is the compound message of both independent events A and B occurring:
$$h(C) = h(A \cap B) = h(A) + h(B) \tag{*}$$
Because *A* and *B* is independent:
$$P(C) = P(A) \cdot P(B)$$
We apply function $f(\cdot)$ to both side:
$$f(P(C)) = f(P(A) \cdot P(B)) \tag{**}$$
Moreover, the left side of (\*) and (\*\*) are equivalent by definition, so we can obtain:
$$h(A) + h(B) = f(P(A) \cdot P(B))$$
$$= f(p(A)) + f(p(B)) = f(P(A) \cdot P(B))$$
We obtain an important property of the function $f(\cdot)$ : $f(x \cdot y) = f(x) + f(y)$ . Note: what the problem really wants us to prove concerns the form and property of the function f in our formulation, because there is one sentence in the description of the problem: "In this exercise, we derive the relation between h and p in the form of a function h(p)" (i.e. $f(\cdot)$ in our formulation is equivalent to h(p) in the description).
At present, what we know is the property of function $f(\cdot)$ :
$$f(xy) = f(x) + f(y) \tag{*}$$
Firstly, we choose x = y, and then it is obvious : $f(x^2) = 2f(x)$ . Secondly, it is obvious $f(x^n) = nf(x)$ , $n \in \mathbb{N}$ is true for n = 1, n = 2. Suppose it is also true for n, we will prove it is true for n + 1:
$$f(x^{n+1}) = f(x^n) + f(x) = nf(x) + f(x) = (n+1)f(x)$$
Therefore, $f(x^n) = nf(x)$ , $n \in \mathbb{N}$ has been proved. For an integer m, we rewrite $x^n$ as $(x^{\frac{n}{m}})^m$ , and taking advantage of what we have proved, we obtain:
$$f(x^n) = f((x^{\frac{n}{m}})^m) = m f(x^{\frac{n}{m}})$$
Because $f(x^n)$ also equals to nf(x), therefore $nf(x) = mf(x^{\frac{n}{m}})$ . We simplify the equation and obtain:
$$f(x^{\frac{n}{m}}) = \frac{n}{m}f(x)$$
For an arbitrary positive x, $x \in \mathbb{R}^+$ , we can find two sequences of positive rationals $\{y_n\}$ and $\{z_n\}$ which satisfy:
$$y_1 < y_2 < \dots < y_N < x, \quad \lim_{N \to +\infty} y_N = x$$
$$z_1 > z_2 > \dots > z_N > x, \quad \lim_{N \to +\infty} z_N = x$$
We take advantage of the fact that $f(\cdot)$ is monotonic:
$$y_N f(p) = f(p^{y_N}) \le f(p^x) \le f(p^{z_N}) = z_N f(p)$$
And when $N \to +\infty$ , we obtain $f(p^x) = xf(p)$ , $x \in \mathbb{R}^+$ . Letting p = e, this can be rewritten as $f(e^x) = xf(e)$ . Finally, we denote $y = e^x$ :
$$f(y) = ln(y)f(e)$$
Where f(e) is a constant once function $f(\cdot)$ is decided. Therefore $f(x) \propto ln(x)$ .
| 3,235
|
1
|
1.29
|
easy
|
Consider an M-state discrete random variable x, and use Jensen's inequality in the form $f\left(\sum_{i=1}^{M} \lambda_i x_i\right) \leqslant \sum_{i=1}^{M} \lambda_i f(x_i)$ to show that the entropy of its distribution p(x) satisfies $H[x] \leq \ln M$ .
|
This problem is a little bit tricky. The entropy of an M-state discrete random variable x can be written as:
$$H[x] = -\sum_{i=1}^{M} \lambda_{i} ln(\lambda_{i})$$
Where $\lambda_i$ is the probability that x takes state i. Here we choose the concave function $f(\cdot) = ln(\cdot)$ and rewrite *Jensen's inequality*, i.e. (1.115):
$$ln(\sum_{i=1}^{M} \lambda_i x_i) \ge \sum_{i=1}^{M} \lambda_i ln(x_i)$$
Choosing $x_i = \frac{1}{\lambda_i}$ and simplifying the inequality above, we obtain:
$$lnM \geq -\sum_{i=1}^{M} \lambda_i ln(\lambda_i) = H[x]$$
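A quick numerical check of this bound (assuming NumPy; not part of the original solution): any M-state distribution has entropy at most ln M, with equality for the uniform distribution.

```python
import numpy as np

rng = np.random.default_rng(8)
M = 6
p = rng.dirichlet(np.ones(M))            # a random distribution over M states
H = -np.sum(p * np.log(p))
print(H <= np.log(M))                    # True
p_uniform = np.full(M, 1.0 / M)
print(np.isclose(-np.sum(p_uniform * np.log(p_uniform)), np.log(M)))   # True
```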
| 560
|
1
|
1.3
|
medium
|
Suppose that we have three coloured boxes r (red), b (blue), and g (green). Box r contains 3 apples, 4 oranges, and 3 limes, box b contains 1 apple, 1 orange, and 0 limes, and box g contains 3 apples, 3 oranges, and 4 limes. If a box is chosen at random with probabilities p(r) = 0.2, p(b) = 0.2, p(g) = 0.6, and a piece of fruit is removed from the box (with equal probability of selecting any of the items in the box), then what is the probability of selecting an apple? If we observe that the selected fruit is in fact an orange, what is the probability that it came from the green box?
|
This problem can be solved by *Bayes' theorem*. The probability of selecting an apple P(a):
$$P(a) = P(a|r)P(r) + P(a|b)P(b) + P(a|g)P(g) = \frac{3}{10} \times 0.2 + \frac{1}{2} \times 0.2 + \frac{3}{10} \times 0.6 = 0.34$$
Based on *Bayes' theorem*, the probability that a selected orange came from the green box, P(g|o), is:
$$P(g|o) = \frac{P(o|g)P(g)}{P(o)}$$
We calculate the probability of selecting an orange P(o) first :
$$P(o) = P(o|r)P(r) + P(o|b)P(b) + P(o|g)P(g) = \frac{4}{10} \times 0.2 + \frac{1}{2} \times 0.2 + \frac{3}{10} \times 0.6 = 0.36$$
Therefore we can get:
$$P(g|o) = \frac{P(o|g)P(g)}{P(o)} = \frac{\frac{3}{10} \times 0.6}{0.36} = 0.5$$
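The same calculation in a few lines of plain Python (not part of the original solution):

```python
# Counts of (apples, oranges, limes) in each box, and the box-selection probabilities.
fruit = {'r': (3, 4, 3), 'b': (1, 1, 0), 'g': (3, 3, 4)}
p_box = {'r': 0.2, 'b': 0.2, 'g': 0.6}

p_apple = sum(p_box[k] * fruit[k][0] / sum(fruit[k]) for k in p_box)
p_orange = sum(p_box[k] * fruit[k][1] / sum(fruit[k]) for k in p_box)
p_g_given_orange = p_box['g'] * fruit['g'][1] / sum(fruit['g']) / p_orange
print(p_apple, p_orange, p_g_given_orange)   # ~0.34 ~0.36 0.5
```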
| 667
|
1
|
1.30
|
medium
|
Evaluate the Kullback-Leibler divergence $\mathrm{KL}(p\|q) = -\int p(\mathbf{x}) \ln \left\{\frac{q(\mathbf{x})}{p(\mathbf{x})}\right\} d\mathbf{x}$ between two Gaussians $p(x) = \mathcal{N}(x|\mu, \sigma^2)$ and $q(x) = \mathcal{N}(x|m, s^2)$ .
**Table 1.3** The joint distribution p(x, y) for two binary variables x and y used in Exercise 1.39.
$$\begin{array}{c|cc}
p(x,y) & y=0 & y=1 \\
\hline
x=0 & 1/3 & 1/3 \\
x=1 & 0 & 1/3
\end{array}$$
|
Based on definition:
$$ln\{\frac{p(x)}{q(x)}\} = ln(\frac{s}{\sigma}) - \left[\frac{1}{2\sigma^2}(x-\mu)^2 - \frac{1}{2s^2}(x-m)^2\right]$$
$$= ln(\frac{s}{\sigma}) - \left[\left(\frac{1}{2\sigma^2} - \frac{1}{2s^2}\right)x^2 - \left(\frac{\mu}{\sigma^2} - \frac{m}{s^2}\right)x + \left(\frac{\mu^2}{2\sigma^2} - \frac{m^2}{2s^2}\right)\right]$$
We will take advantage of the following equations to solve this problem.
$$\mathbb{E}[x^2] = \int x^2 \mathcal{N}(x|\mu, \sigma^2) dx = \mu^2 + \sigma^2$$
$$\mathbb{E}[x] = \int x \mathcal{N}(x|\mu, \sigma^2) dx = \mu$$
$$\int \mathcal{N}(x|\mu, \sigma^2) dx = 1$$
Given the equations above, it is easy to see:
$$\begin{split} KL(p||q) &= -\int p(x)ln\{\frac{q(x)}{p(x)}\}dx \\ &= \int \mathcal{N}(x|\mu,\sigma)ln\{\frac{p(x)}{q(x)}\}dx \\ &= ln(\frac{s}{\sigma}) - (\frac{1}{2\sigma^2} - \frac{1}{2s^2})(\mu^2 + \sigma^2) + (\frac{\mu}{\sigma^2} - \frac{m}{s^2})\mu - (\frac{\mu^2}{2\sigma^2} - \frac{m^2}{2s^2}) \\ &= ln(\frac{s}{\sigma}) + \frac{\sigma^2 + (\mu - m)^2}{2s^2} - \frac{1}{2} \end{split}$$
We will discuss this result in more detail. Firstly, if the KL divergence is defined with the base-2 logarithm (i.e. measured in bits, as is common in *Information Theory*), the whole result is simply divided by ln2, so the first term becomes $log_2(\frac{s}{\sigma})$ instead of $ln(\frac{s}{\sigma})$. Secondly, if we denote $x = \frac{s}{\sigma}$, the KL divergence can be rewritten as:
$$KL(p||q) = ln(x) + \frac{1}{2x^2} - \frac{1}{2} + a$$
, where $a = \frac{(\mu - m)^2}{2s^2}$
We calculate the derivative of KL with respect to x, and let it equal to 0:
$$\frac{d(KL)}{dx} = \frac{1}{x} - x^{-3} = 0 \quad => \quad x = 1 \ (\because s, \, \sigma > 0)$$
When x < 1 the derivative is negative, and when x > 1 it is positive, so x = 1 is the global minimum, at which KL(p||q) = a. Moreover, a attains its minimum 0 when $\mu = m$. In this way, we have shown that the KL divergence between two Gaussian distributions is never negative, and it equals 0 only when the two Gaussians are identical, i.e. have the same mean and variance.
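The closed-form result can also be checked against a brute-force numerical integral (a minimal Python sketch, assuming NumPy; the parameter values are arbitrary):

```python
import numpy as np

def gauss(x, mu, var):
    return np.exp(-(x - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

mu, sigma2 = 1.0, 2.0    # p(x) = N(mu, sigma^2)
m,  s2     = 0.5, 3.0    # q(x) = N(m, s^2)

# Closed form derived above: ln(s/sigma) + (sigma^2 + (mu-m)^2)/(2 s^2) - 1/2
kl_closed = 0.5 * np.log(s2 / sigma2) + (sigma2 + (mu - m)**2) / (2 * s2) - 0.5

# Brute-force Riemann sum of p(x) ln(p(x)/q(x)) on a wide grid
x = np.linspace(-30, 30, 200001)
p, q = gauss(x, mu, sigma2), gauss(x, m, s2)
kl_numeric = np.sum(p * np.log(p / q)) * (x[1] - x[0])

print(kl_closed, kl_numeric)   # the two values agree closely
```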
| 2,057
|
1
|
1.31
|
medium
|
Consider two variables x and y having joint distribution p(x, y). Show that the differential entropy of this pair of variables satisfies
$$H[\mathbf{x}, \mathbf{y}] \leqslant H[\mathbf{x}] + H[\mathbf{y}] \tag{1.152}$$
with equality if, and only if, x and y are statistically independent.
|
We evaluate $H[\mathbf{x}] + H[\mathbf{y}] - H[\mathbf{x}, \mathbf{y}]$ by definition. Firstly, let's calculate $H[\mathbf{x}, \mathbf{y}]$ :
$$H[\mathbf{x}, \mathbf{y}] = -\int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{x}, \mathbf{y}) d\mathbf{x} d\mathbf{y}$$
$$= -\int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{x}) d\mathbf{x} d\mathbf{y} - \int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{y}|\mathbf{x}) d\mathbf{x} d\mathbf{y}$$
$$= -\int p(\mathbf{x}) lnp(\mathbf{x}) d\mathbf{x} - \int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{y}|\mathbf{x}) d\mathbf{x} d\mathbf{y}$$
$$= H[\mathbf{x}] + H[\mathbf{y}|\mathbf{x}]$$
Where we take advantage of $p(\mathbf{x}, \mathbf{y}) = p(\mathbf{x})p(\mathbf{y}|\mathbf{x})$ , $\int p(\mathbf{x}, \mathbf{y})d\mathbf{y} = p(\mathbf{x})$ and $H[\mathbf{y}|\mathbf{x}] = -\iint p(\mathbf{y}, \mathbf{x}) \ln p(\mathbf{y}|\mathbf{x}) \, d\mathbf{y} \, d\mathbf{x}$. Therefore, we have actually solved Prob.1.37 here. We will continue our proof for this problem, based on what we have proved:
$$H[\mathbf{x}] + H[\mathbf{y}] - H[\mathbf{x}, \mathbf{y}] = H[\mathbf{y}] - H[\mathbf{y}|\mathbf{x}]$$
$$= -\int p(\mathbf{y})lnp(\mathbf{y})d\mathbf{y} + \int \int p(\mathbf{x}, \mathbf{y})lnp(\mathbf{y}|\mathbf{x})d\mathbf{x}d\mathbf{y}$$
$$= -\int \int p(\mathbf{x}, \mathbf{y})lnp(\mathbf{y})d\mathbf{x}d\mathbf{y} + \int \int p(\mathbf{x}, \mathbf{y})lnp(\mathbf{y}|\mathbf{x})d\mathbf{x}d\mathbf{y}$$
$$= -\int \int p(\mathbf{x}, \mathbf{y})ln(\frac{p(\mathbf{x})p(\mathbf{y})}{p(\mathbf{x}, \mathbf{y})})d\mathbf{x}d\mathbf{y}$$
$$= KL(p(\mathbf{x}, \mathbf{y})||p(\mathbf{x})p(\mathbf{y})) = I(\mathbf{x}, \mathbf{y}) \ge 0$$
Where we take advantage of the following properties:
$$p(\mathbf{y}) = \int p(\mathbf{x}, \mathbf{y}) d\mathbf{x}$$
$$\frac{p(\mathbf{y})}{p(\mathbf{y}|\mathbf{x})} = \frac{p(\mathbf{x})p(\mathbf{y})}{p(\mathbf{x},\mathbf{y})}$$
Moreover, due to the property of the *KL distance*, the equality holds if and only if $\mathbf{x}$ and $\mathbf{y}$ are statistically independent. You can also see the equality case directly: when $\mathbf{x}$ and $\mathbf{y}$ are independent, so that $p(\mathbf{x}, \mathbf{y}) = p(\mathbf{x})p(\mathbf{y})$, we have:
$$H[\mathbf{x}, \mathbf{y}] = -\int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{x}, \mathbf{y}) d\mathbf{x} d\mathbf{y}$$
$$= -\int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{x}) d\mathbf{x} d\mathbf{y} - \int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{y}) d\mathbf{x} d\mathbf{y}$$
$$= -\int p(\mathbf{x}) lnp(\mathbf{x}) d\mathbf{x} - \int p(\mathbf{y}) lnp(\mathbf{y}) d\mathbf{y}$$
$$= H[\mathbf{x}] + H[\mathbf{y}]$$
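The inequality, and the fact that the gap is exactly the mutual information, can be checked numerically with a random discrete joint distribution (a minimal Python sketch, assuming NumPy; the table size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# Random joint distribution p(x, y) over a 4 x 5 grid of states.
p_xy = rng.random((4, 5))
p_xy /= p_xy.sum()

p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# H[x, y] <= H[x] + H[y]; the gap is the mutual information I(x, y) >= 0.
print(H(p_xy.ravel()), H(p_x) + H(p_y))
print(H(p_x) + H(p_y) - H(p_xy.ravel()))
```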
| 2,566
|
1
|
1.32
|
easy
|
Consider a vector x of continuous variables with distribution p(x) and corresponding entropy H[x]. Suppose that we make a nonsingular linear transformation of x to obtain a new variable y = Ax. Show that the corresponding entropy is given by $H[y] = H[x] + \ln |A|$ where |A| denotes the determinant of A.
|
It is straightforward based on the definition; note that when we change the variable inside the integral, we must include an extra factor, the *Jacobian determinant*.
$$H[\mathbf{y}] = -\int p(\mathbf{y}) ln p(\mathbf{y}) d\mathbf{y}$$
$$= -\int \frac{p(\mathbf{x})}{|\mathbf{A}|} ln \frac{p(\mathbf{x})}{|\mathbf{A}|} |\frac{\partial \mathbf{y}}{\partial \mathbf{x}}| d\mathbf{x}$$
$$= -\int p(\mathbf{x}) ln \frac{p(\mathbf{x})}{|\mathbf{A}|} d\mathbf{x}$$
$$= -\int p(\mathbf{x}) ln p(\mathbf{x}) d\mathbf{x} - \int p(\mathbf{x}) ln \frac{1}{|\mathbf{A}|} d\mathbf{x}$$
$$= H[\mathbf{x}] + ln |\mathbf{A}|$$
Where we have taken advantage of the following equations:
$$\frac{\partial \mathbf{y}}{\partial \mathbf{x}} = \mathbf{A} \quad \text{and} \quad p(\mathbf{x}) = p(\mathbf{y}) |\frac{\partial \mathbf{y}}{\partial \mathbf{x}}| = p(\mathbf{y}) |\mathbf{A}|$$
$$\int p(\mathbf{x}) d\mathbf{x} = 1$$
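For a Gaussian the entropy is available in closed form (see Prob. 2.15 later in this collection), so the relation H[**y**] = H[**x**] + ln|**A**| can be checked directly. A minimal Python sketch, assuming NumPy; the matrix A and covariance below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 3
A = rng.random((D, D)) + 3 * np.eye(D)   # diagonally dominant, hence nonsingular
Sigma = np.eye(D) * 0.5 + 0.1            # covariance of x (symmetric positive definite)

def gaussian_entropy(S):
    # H = 0.5 * ln|S| + D/2 * (1 + ln(2*pi)), see Prob. 2.15
    d = S.shape[0]
    return 0.5 * np.linalg.slogdet(S)[1] + 0.5 * d * (1 + np.log(2 * np.pi))

H_x = gaussian_entropy(Sigma)
H_y = gaussian_entropy(A @ Sigma @ A.T)      # y = A x has covariance A Sigma A^T
print(H_y - H_x, np.linalg.slogdet(A)[1])    # both equal ln|det A|
```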
| 912
|
1
|
1.33
|
medium
|
Suppose that the conditional entropy H[y|x] between two discrete random variables x and y is zero. Show that, for all values of x such that p(x) > 0, the variable y must be a function of x, in other words for each x there is only one value of y such that $p(y|x) \neq 0$ .
|
Based on the definition of *Entropy*, we write:
$$H[y|x] = -\sum_{x_i} \sum_{y_j} p(x_i, y_j) ln p(y_j|x_i)$$
Considering the property of *probability*, we can obtain that $0 \le p(y_j|x_i) \le 1$ and $0 \le p(x_i, y_j) \le 1$. Therefore, we can see that $-p(x_i, y_j) \ln p(y_j|x_i) \ge 0$ when $0 < p(y_j|x_i) \le 1$. And when $p(y_j|x_i) = 0$, using the fact that $\lim_{p \to 0} p \ln p = 0$, we see that $-p(x_i, y_j) ln p(y_j|x_i) = -p(x_i) p(y_j|x_i) ln p(y_j|x_i) = 0$ (here we view $p(x_i)$ as a constant). Hence every term in the sum above is non-negative, and therefore H[y|x] = 0 if and only if every term equals 0.
Therefore, for each possible value of random variable x, denoted as $x_i$ :
$$-\sum_{y_j} p(x_i, y_j) \ln p(y_j | x_i) = 0 \tag{*}$$
If, for some $x = x_i$, there were more than one value of y, denoted $y_j$, with $p(y_j|x_i) \neq 0$ (and since $x_i, y_j$ are both "possible", $p(x_i, y_j)$ is also nonzero), then the constraints $0 \leq p(y_j|x_i) \leq 1$ and $\sum_j p(y_j|x_i) = 1$ would force at least two values of y to satisfy $0 < p(y_j|x_i) < 1$, which would make (\*) strictly positive.
Therefore, for each possible value of x, there will only be one y such that $p(y|x) \neq 0$ . In other words, y is determined by x. Note: This result is quite straightforward. If y is a function of x, we can obtain the value of y as soon as observing a x. Therefore we will obtain no additional information when observing a $y_j$ given an already observed x.
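The conclusion can also be illustrated numerically. Below is a minimal Python sketch (assuming NumPy; the two joint tables are arbitrary examples): the conditional entropy vanishes for a deterministic mapping and is strictly positive for a noisy one.

```python
import numpy as np

def cond_entropy(p_xy):
    """H[y|x] = -sum_{i,j} p(x_i, y_j) ln p(y_j | x_i)."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    H = 0.0
    for i in range(p_xy.shape[0]):
        for j in range(p_xy.shape[1]):
            if p_xy[i, j] > 0:
                H -= p_xy[i, j] * np.log(p_xy[i, j] / p_x[i, 0])
    return H

# y is a deterministic function of x: each row has a single nonzero entry.
p_det = np.array([[0.3, 0.0],
                  [0.0, 0.7]])
# y is noisy given x: the first row has two nonzero entries.
p_noisy = np.array([[0.15, 0.15],
                    [0.0,  0.7]])

print(cond_entropy(p_det))    # 0.0
print(cond_entropy(p_noisy))  # strictly positive
```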
| 1,638
|
1
|
1.34
|
medium
|
Use the calculus of variations to show that the stationary point of the entropy functional, with Lagrange-multiplier terms enforcing the three constraints below, is given by $p(x) = \exp\left\{-1 + \lambda_1 + \lambda_2 x + \lambda_3 (x - \mu)^2\right\}$. Then use the constraints $\int_{-\infty}^{\infty} p(x) \, \mathrm{d}x = 1$, $\int_{-\infty}^{\infty} x p(x) \, \mathrm{d}x = \mu$, and $\int_{-\infty}^{\infty} (x - \mu)^2 p(x) \, \mathrm{d}x = \sigma^2$ to eliminate the Lagrange multipliers and hence show that the maximum entropy solution is given by the Gaussian $p(x) = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}$.
|
This problem is complicated, so we will explain it in detail. According to Appendix D, we have the relation (D.3):
$$F[y(x) + \epsilon \eta(x)] = F[y(x)] + \int \frac{\partial F}{\partial y} \epsilon \eta(x) dx \qquad (*)$$
Here y(x) can be viewed as an operator that, for any input x, gives an output value y; equivalently, F[y(x)] can be viewed as a functional operator that, for any input function y(x), gives an output value F[y(x)]. Then we consider a functional operator:
$$I[p(x)] = \int p(x)f(x) dx$$
Under a small variation $p(x) \rightarrow p(x) + \epsilon \eta(x)$ , we will obtain :
$$I[p(x) + \epsilon \eta(x)] = \int p(x)f(x)dx + \int \epsilon \eta(x)f(x)dx$$
Comparing the equation above and (\*), we can draw a conclusion :
$$\frac{\partial I}{\partial p(x)} = f(x)$$
Similarly, let's consider another functional operator:
$$J[p(x)] = \int p(x)lnp(x)dx$$
Then under a small variation $p(x) \rightarrow p(x) + \epsilon \eta(x)$ :
$$J[p(x) + \epsilon \eta(x)] = \int (p(x) + \epsilon \eta(x)) \ln(p(x) + \epsilon \eta(x)) dx$$
$$= \int p(x) \ln(p(x) + \epsilon \eta(x)) dx + \int \epsilon \eta(x) \ln(p(x) + \epsilon \eta(x)) dx$$
Note that $\epsilon \eta(x)$ is much smaller than p(x), so we expand the logarithm in a Taylor series about the point p(x):
$$ln(p(x) + \epsilon \eta(x)) = lnp(x) + \frac{\epsilon \eta(x)}{p(x)} + O(\epsilon \eta(x)^{2})$$
Therefore, we substitute the equation above into $J[p(x) + \epsilon \eta(x)]$ :
$$J[p(x) + \epsilon \eta(x)] = \int p(x) \ln p(x) dx + \epsilon \int \eta(x) (\ln p(x) + 1) dx + O(\epsilon^{2})$$
Therefore, we also obtain:
$$\frac{\partial J}{\partial p(x)} = lnp(x) + 1$$
Now we can go back to the original problem. Based on $\frac{\partial J}{\partial p(x)}$ and $\frac{\partial I}{\partial p(x)}$, we take the functional derivative of the constrained entropy functional (the entropy plus the three Lagrange-multiplier terms) and set it equal to 0:
$$-ln p(x) - 1 + \lambda_1 + \lambda_2 x + \lambda_3 (x - \mu)^2 = 0$$
Hence we rearrange it and obtain $p(x) = \exp\left\{-1 + \lambda_1 + \lambda_2 x + \lambda_3 (x - \mu)^2\right\}$, from which we can see that p(x) should take the form of a Gaussian distribution. So we rewrite it into Gaussian form and compare it with a Gaussian distribution of mean $\mu$ and variance $\sigma^2$, which gives:
$$exp(-1+\lambda_1) = \frac{1}{(2\pi\sigma^2)^{\frac{1}{2}}} \quad , \quad exp(\lambda_2 x + \lambda_3 (x-\mu)^2) = exp\{-\frac{(x-\mu)^2}{2\sigma^2}\}$$
Finally, we obtain:
$$\lambda_1 = 1 - \frac{1}{2}\ln(2\pi\sigma^2)$$
$$\lambda_2 = 0$$
$$\lambda_3 = -\frac{1}{2\sigma^2}$$
Note that there is a typo about $\lambda_3$ in the official solution manual. Moreover, in the following part we will substitute p(x) back into the three constraints and analytically verify that p(x) is the Gaussian above; you can skip this part. (The writer would especially like to thank Dr. Spyridon Chavlis from IMBB, FORTH for this analysis.)
We already know:
$$p(x) = exp(-1 + \lambda_1 + \lambda_2 x + \lambda_3 (x - \mu)^2)$$
Where the exponent is equal to:
$$-1 + \lambda_1 + \lambda_2 x + \lambda_3 (x - \mu)^2 = \lambda_3 x^2 + (\lambda_2 - 2\lambda_3 \mu) x + (\lambda_3 \mu^2 + \lambda_1 - 1)$$
Completing the square, we can obtain that:
$$ax^{2} + bx + c = a(x - d)^{2} + f, d = -\frac{b}{2a}, f = c - \frac{b^{2}}{4a}$$
Using this quadratic form, the constraints can be written as
1.
$$\int_{-\infty}^{\infty} p(x)dx = \int_{-\infty}^{\infty} e^{[a(x-d)^2+f]} dx = 1$$
2.
$$\int_{-\infty}^{\infty} x p(x) dx = \int_{-\infty}^{\infty} x e^{[a(x-d)^2+f]} dx = \mu$$
3.
$$\int_{-\infty}^{\infty} (x-\mu)^2 p(x) dx = \int_{-\infty}^{\infty} (x-\mu)^2 e^{[a(x-d)^2+f]} dx = \sigma^2$$
The first constraint can be written as:
$$I_1 = \int_{-\infty}^{\infty} e^{[a(x-d)^2 + f]} dx = e^f \int_{-\infty}^{\infty} e^{a(x-d)^2} dx$$
Let u = x - d, which gives du = dx, and thus:
$$I_1 = e^f \int_{-\infty}^{\infty} e^{au^2} du$$
Let $-w^2 = au^2 \Rightarrow w = \sqrt{-a}u \Rightarrow dw = \sqrt{-a}du$ , and thus:
$$I_1 = \frac{e^f}{\sqrt{-a}} \int_{-\infty}^{\infty} e^{-w^2} dw$$
As $e^{-x^2}$ is an even function the integral is written as:
$$I_1 = \frac{2e^f}{\sqrt{-a}} \int_0^\infty e^{-w^2} dw$$
Let $w^2 = t \Rightarrow w = \sqrt{t} \Rightarrow dw = \frac{1}{2\sqrt{t}}dt$ , and thus:
$$I_1 = \frac{2e^f}{\sqrt{-a}} \int_0^\infty t^{-\frac{1}{2}} e^{-t} dt = \frac{2e^f}{\sqrt{-a}} \int_0^\infty \frac{1}{2} t^{\frac{1}{2} - 1} e^{-t} dt = \frac{e^f}{\sqrt{-a}} \Gamma(\frac{1}{2}) = e^f \sqrt{\frac{\pi}{-a}}$$
Here the Gamma function is used. Gamma function is defined as
$$\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} dt$$
where for non-negative integer values of n, we have:
$$\Gamma(\frac{1}{2}+n) = \frac{(2n)!}{4^n n!} \sqrt{\pi}$$
Thus, the first constraint can be rewritten as:
$$e^f \sqrt{\frac{\pi}{-a}} = 1 \tag{*}$$
The second constraint can be written as:
$$I_2 = \int_{-\infty}^{\infty} x e^{[a(x-d)^2 + f]} dx = e^f \int_{-\infty}^{\infty} x e^{a(x-d)^2} dx$$
Let $u = x - d \Rightarrow x = u + d \Rightarrow du = dx$ , and thus:
$$I_2 = e^f \int_{-\infty}^{\infty} (u+d)e^{au^2} du$$
Using integral additivity, we have:
$$I_2 = e^f \int_{-\infty}^{\infty} u e^{au^2} du + e^f \int_{-\infty}^{\infty} de^{au^2} du$$
We first deal with the first term on the right hand side. Here we denote it as $I_{21}$ :
$$I_{21}=e^f\int_{-\infty}^{\infty}ue^{au^2}du=e^f\left(\int_{-\infty}^{0}ue^{au^2}du+\int_{0}^{\infty}ue^{au^2}du\right)$$
Swapping the integration limits, we obtain:
$$\begin{split} I_{21} &= e^f \left( -\int_0^{-\infty} u e^{au^2} du + \int_0^{\infty} u e^{au^2} du \right) \\ &= e^f \left( \int_0^{-\infty} (-u) e^{a(-u)^2} du + \int_0^{\infty} u e^{au^2} du \right) \\ &= e^f \left( -\int_0^{\infty} (-u) e^{a(-u)^2} (-du) + \int_0^{\infty} u e^{au^2} du \right) = 0 \end{split}$$
Then we deal with the second term $I_{22}$ :
$$I_{22} = e^f \int_{-\infty}^{\infty} de^{au^2} du$$
Let $-w^2 = au^2 \Rightarrow w = \sqrt{-a}u \Rightarrow dw = \sqrt{-a}du$ , and thus:
$$I_{22} = \frac{e^f d}{\sqrt{-a}} \int_{-\infty}^{\infty} e^{-w^2} dw$$
As $e^{-x^2}$ is an even function the integral is written as:
$$I_{22} = \frac{2e^f d}{\sqrt{-a}} \int_0^\infty e^{-w^2} dw$$
Let $w^2 = t \Rightarrow w = \sqrt{t} \Rightarrow dw = \frac{1}{2\sqrt{t}}dt$ , and thus:
$$I_{22} = \frac{2e^f d}{\sqrt{-a}} \int_0^\infty t^{-\frac{1}{2}} e^{-t} dt = \frac{2e^f d}{\sqrt{-a}} \int_0^\infty \frac{1}{2} t^{\frac{1}{2}-1} e^{-t} dt = \frac{e^f d}{\sqrt{-a}} \Gamma(\frac{1}{2}) = e^f d\sqrt{\frac{\pi}{-a}}$$
Thus, the second constraint can be rewritten
$$e^f d\sqrt{\frac{\pi}{-a}} = \mu \tag{**}$$
Combining (\*) and (\*\*), we can obtain that $d = \mu$ . Recall that:
$$d = -\frac{b}{2a} = -\frac{\lambda_2 - 2\lambda_3\mu}{2\lambda_3} = \mu \Rightarrow \lambda_2 - 2\lambda_3\mu = -2\lambda_3\mu \Rightarrow \lambda_2 = 0$$
So far, we have:
$$b = -2\lambda_3\mu$$
And
$$f = c - \frac{b^2}{4a} = \lambda_3 \mu^2 + \lambda_1 - 1 - \frac{4\lambda_3^2 \mu^2}{4\lambda_3} = \lambda_1 - 1$$
Finally, we deal with the third also the last constraint. Substituting $\lambda_2 = 0$ into the last constraint we have:
$$I_{3} = \int_{-\infty}^{\infty} (x - \mu)^{2} e^{[\lambda_{3}(x - \mu)^{2} + \lambda_{1} - 1]} dx = e^{\lambda_{1} - 1} \int_{-\infty}^{\infty} (x - \mu)^{2} e^{\lambda_{3}(x - \mu)^{2}} dx$$
Let $u = x - \mu \Rightarrow du = dx$ , and thus:
$$I_3 = e^{\lambda_1 - 1} \int_{-\infty}^{\infty} u^2 e^{\lambda_3 u^2} du$$
Let $-w^2 = \lambda_3 u^2 \Rightarrow w = \sqrt{-\lambda_3} u \Rightarrow dw = \sqrt{-\lambda_3} du$ , and thus:
$$I_{3} = e^{\lambda_{1} - 1} \int_{-\infty}^{\infty} -\frac{1}{\lambda_{3}} w^{2} e^{-w^{2}} \frac{dw}{\sqrt{-\lambda_{3}}} = \frac{e^{\lambda_{1} - 1}}{(-\lambda_{3})^{\frac{3}{2}}} \int_{-\infty}^{\infty} w^{2} e^{-w^{2}} dw$$
Because it is an even function, we can further obtain:
$$I_3 = 2\frac{e^{\lambda_1 - 1}}{(-\lambda_3)^{\frac{3}{2}}} \int_0^\infty w^2 e^{-w^2} dw$$
Let $w^2 = t \Rightarrow w = \sqrt{t} \Rightarrow dw = \frac{1}{2\sqrt{t}}dt$ , and thus:
$$\begin{split} I_3 &= 2\frac{e^{\lambda_1 - 1}}{(-\lambda_3)^{\frac{3}{2}}} \int_0^\infty t e^{-t} \frac{1}{2\sqrt{t}} dt = \frac{e^{\lambda_1 - 1}}{(-\lambda_3)^{\frac{3}{2}}} \int_0^\infty t^{1 - \frac{1}{2}} e^{-t} dt \\ &= \frac{e^{\lambda_1 - 1}}{(-\lambda_3)^{\frac{3}{2}}} \int_0^\infty t^{\frac{3}{2} - 1} e^{-t} dt \\ &= \frac{e^{\lambda_1 - 1}}{(-\lambda_3)^{\frac{3}{2}}} \Gamma(\frac{3}{2}) = \frac{e^{\lambda_1 - 1}}{(-\lambda_3)^{\frac{3}{2}}} \frac{\sqrt{\pi}}{2} \end{split}$$
Thus, the third constraint can be rewritten
$$\frac{e^{\lambda_1 - 1}}{(-\lambda_3)^{\frac{3}{2}}} \frac{\sqrt{\pi}}{2} = \sigma^2 \tag{***}$$
Rewriting (\*) with $f = \lambda_1 - 1, d = \mu$ and $a = \lambda_3$ , we obtain the following equation
$$e^{\lambda_1 - 1} \sqrt{\frac{\pi}{-\lambda_3}} = 1 \tag{****}$$
Substituting the equation above back into (\*\*\*), we obtain
$$\sqrt{\frac{-\lambda_3}{\pi}} \frac{1}{(-\lambda_3)^{\frac{3}{2}}} \frac{\sqrt{\pi}}{2} = \sigma^2 \Leftrightarrow -\frac{1}{\lambda_3} = 2\sigma^2 \Leftrightarrow \lambda_3 = -\frac{1}{2\sigma^2}$$
Substituting $\lambda_3$ back into (\* \* \* \*), we obtain:
$$e^{\lambda_1-1}\sqrt{\frac{\pi}{-\lambda_3}}=1\Leftrightarrow e^{\lambda_1-1}\sqrt{\frac{\pi}{\frac{1}{2\sigma^2}}}=1\Leftrightarrow e^{\lambda_1-1}=\frac{1}{\sqrt{2\pi\sigma^2}}\Leftrightarrow \lambda_1-1=\ln(\frac{1}{\sqrt{2\pi\sigma^2}})$$
Thus, we obtain:
$$\lambda_1 = 1 - \frac{1}{2} \ln(2\pi\sigma^2)$$
So far, we have obtained $\lambda_i$, where i = 1,2,3. We substitute them back into p(x), yielding:
$$p(x) = \exp\left(-1 + 1 - \frac{1}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}(x - \mu)^2\right)$$
$$= \exp\left(-\frac{1}{2}\ln(2\pi\sigma^2)\right)\exp\left(-\frac{1}{2\sigma^2}(x - \mu)^2\right)$$
$$= \exp\left(\ln\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)\right)\exp\left(-\frac{1}{2\sigma^2}(x - \mu)^2\right)$$
Thus,
$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{1}{2\sigma^2}(x-\mu)^2\right)$$
Just as required.
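As an additional numerical sanity check (a small Python sketch, assuming NumPy; the values μ = 0.7 and σ² = 1.8 are arbitrary), we can plug the three multipliers back into p(x) and confirm the three constraints by brute-force integration on a grid:

```python
import numpy as np

mu, sigma2 = 0.7, 1.8
lam1 = 1 - 0.5 * np.log(2 * np.pi * sigma2)
lam2 = 0.0
lam3 = -1 / (2 * sigma2)

x = np.linspace(mu - 40, mu + 40, 400001)
dx = x[1] - x[0]
p = np.exp(-1 + lam1 + lam2 * x + lam3 * (x - mu)**2)

# The three constraints: normalization, mean, variance.
print(np.sum(p) * dx)                 # ~ 1
print(np.sum(x * p) * dx)             # ~ mu
print(np.sum((x - mu)**2 * p) * dx)   # ~ sigma2
```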
| 10,306
|
1
|
1.35
|
easy
|
Use the results $\int_{-\infty}^{\infty} x p(x) \, \mathrm{d}x = \mu$ and $\int_{-\infty}^{\infty} (x - \mu)^2 p(x) \, \mathrm{d}x = \sigma^2.$ to show that the entropy of the univariate Gaussian $p(x) = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}$ is given by $H[x] = \frac{1}{2} \left\{ 1 + \ln(2\pi\sigma^2) \right\}.$.
|
If $p(x) = \mathcal{N}(\mu, \sigma^2)$ , we write its entropy:
$$\begin{split} H[x] &= -\int p(x) ln p(x) dx \\ &= -\int p(x) ln \{ \frac{1}{\sqrt{2\pi\sigma^2}} \} dx - \int p(x) \{ -\frac{(x-\mu)^2}{2\sigma^2} \} dx \\ &= -ln \{ \frac{1}{\sqrt{2\pi\sigma^2}} \} + \frac{\sigma^2}{2\sigma^2} \\ &= \frac{1}{2} \{ 1 + ln(2\pi\sigma^2) \} \end{split}$$
Where we have taken advantage of the following properties of a Gaussian distribution:
$\int p(x)dx = 1 \text{ and } \int (x-\mu)^2 p(x)dx = \sigma^2$
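A quick Monte Carlo sanity check of this formula (a Python sketch, assuming NumPy; μ, σ² and the sample size are arbitrary): the entropy is −E[ln p(x)], which we can estimate from samples.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma2 = 0.0, 2.5
sigma = np.sqrt(sigma2)

# Closed form: H = 0.5 * (1 + ln(2*pi*sigma^2))
H_closed = 0.5 * (1 + np.log(2 * np.pi * sigma2))

# Monte Carlo estimate: H = -E[ln p(x)] with x ~ N(mu, sigma^2)
x = rng.normal(mu, sigma, size=2_000_000)
log_p = -0.5 * np.log(2 * np.pi * sigma2) - (x - mu)**2 / (2 * sigma2)
print(H_closed, -log_p.mean())
```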
| 492
|
1
|
1.36
|
easy
|
A strictly convex function is defined as one for which every chord lies above the function. Show that this is equivalent to the condition that the second derivative of the function be positive.
|
Here we should make it clear that if the second derivative is strictly positive, the function must be strictly convex. However, the converse may not be true. For example, $f(x) = x^4$, $x \in \mathcal{R}$, is strictly convex by definition, but its second derivative at x = 0 is 0 (see the keyword convex function on Wikipedia, or Page 71 of the book Convex Optimization by Boyd and Vandenberghe, for more details). Hence, more precisely, we will prove that a function is convex if and only if its second derivative is non-negative, by first considering *Taylor's theorem*:
$$f(x+\epsilon) = f(x) + \frac{f'(x)}{1!}\epsilon + \frac{f''(x)}{2!}\epsilon^2 + \frac{f'''(x)}{3!}\epsilon^3 + \dots$$
$$f(x-\epsilon) = f(x) - \frac{f'(x)}{1!}\epsilon + \frac{f''(x)}{2!}\epsilon^2 - \frac{f'''(x)}{3!}\epsilon^3 + \dots$$
Then we can obtain the expression of f''(x):
$$f''(x) = \lim_{\epsilon \to 0} \frac{f(x+\epsilon) + f(x-\epsilon) - 2f(x)}{\epsilon^2}$$
Where $O(\epsilon^4)$ is neglected and if f(x) is convex, we can obtain:
$$f(x) = f(\frac{1}{2}(x+\epsilon) + \frac{1}{2}(x-\epsilon)) \le \frac{1}{2}f(x+\epsilon) + \frac{1}{2}f(x-\epsilon)$$
Hence $f''(x) \ge 0$ . The converse direction is a little more complex; we use the *Lagrange form of Taylor's theorem* to rewrite the Taylor series expansion above:
$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x^*)}{2}(x - x_0)^2$$
Where $x^*$ lies between x and $x_0$ . By hypothesis, $f''(x) \ge 0$ , the last term is non-negative for all x. We let $x_0 = \lambda x_1 + (1 - \lambda)x_2$ , and $x = x_1$ :
$$f(x_1) \ge f(x_0) + (1 - \lambda)(x_1 - x_2)f'(x_0) \tag{*}$$
And then, we let $x = x_2$ :
$$f(x_2) \ge f(x_0) + \lambda(x_2 - x_1)f'(x_0) \tag{**}$$
We multiply (\*) by $\lambda$ , (\*\*) by $1-\lambda$ and then add them together, we will see :
$$\lambda f(x_1) + (1 - \lambda)f(x_2) \ge f(\lambda x_1 + (1 - \lambda)x_2)$$
| 1,946
|
1
|
1.37
|
easy
|
Using the definition $H[\mathbf{y}|\mathbf{x}] = -\iint p(\mathbf{y}, \mathbf{x}) \ln p(\mathbf{y}|\mathbf{x}) \, d\mathbf{y} \, d\mathbf{x}$ together with the product rule of probability, prove the result $H[\mathbf{x}, \mathbf{y}] = H[\mathbf{y}|\mathbf{x}] + H[\mathbf{x}]$.
|
We evaluate $H[\mathbf{x}] + H[\mathbf{y}] - H[\mathbf{x}, \mathbf{y}]$ by definition. Firstly, let's calculate $H[\mathbf{x}, \mathbf{y}]$ :
$$H[\mathbf{x}, \mathbf{y}] = -\int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{x}, \mathbf{y}) d\mathbf{x} d\mathbf{y}$$
$$= -\int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{x}) d\mathbf{x} d\mathbf{y} - \int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{y}|\mathbf{x}) d\mathbf{x} d\mathbf{y}$$
$$= -\int p(\mathbf{x}) lnp(\mathbf{x}) d\mathbf{x} - \int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{y}|\mathbf{x}) d\mathbf{x} d\mathbf{y}$$
$$= H[\mathbf{x}] + H[\mathbf{y}|\mathbf{x}]$$
Where we take advantage of $p(\mathbf{x}, \mathbf{y}) = p(\mathbf{x})p(\mathbf{y}|\mathbf{x})$ , $\int p(\mathbf{x}, \mathbf{y})d\mathbf{y} = p(\mathbf{x})$ and $H[\mathbf{y}|\mathbf{x}] = -\iint p(\mathbf{y}, \mathbf{x}) \ln p(\mathbf{y}|\mathbf{x}) \, d\mathbf{y} \, d\mathbf{x}$. Therefore, we have actually solved Prob.1.37 here. We will continue our proof for this problem, based on what we have proved:
$$H[\mathbf{x}] + H[\mathbf{y}] - H[\mathbf{x}, \mathbf{y}] = H[\mathbf{y}] - H[\mathbf{y}|\mathbf{x}]$$
$$= -\int p(\mathbf{y})lnp(\mathbf{y})d\mathbf{y} + \int \int p(\mathbf{x}, \mathbf{y})lnp(\mathbf{y}|\mathbf{x})d\mathbf{x}d\mathbf{y}$$
$$= -\int \int p(\mathbf{x}, \mathbf{y})lnp(\mathbf{y})d\mathbf{x}d\mathbf{y} + \int \int p(\mathbf{x}, \mathbf{y})lnp(\mathbf{y}|\mathbf{x})d\mathbf{x}d\mathbf{y}$$
$$= -\int \int p(\mathbf{x}, \mathbf{y})ln(\frac{p(\mathbf{x})p(\mathbf{y})}{p(\mathbf{x}, \mathbf{y})})d\mathbf{x}d\mathbf{y}$$
$$= KL(p(\mathbf{x}, \mathbf{y})||p(\mathbf{x})p(\mathbf{y})) = I(\mathbf{x}, \mathbf{y}) \ge 0$$
Where we take advantage of the following properties:
$$p(\mathbf{y}) = \int p(\mathbf{x}, \mathbf{y}) d\mathbf{x}$$
$$\frac{p(\mathbf{y})}{p(\mathbf{y}|\mathbf{x})} = \frac{p(\mathbf{x})p(\mathbf{y})}{p(\mathbf{x},\mathbf{y})}$$
Moreover, due to the property of the *KL distance*, the equality holds if and only if $\mathbf{x}$ and $\mathbf{y}$ are statistically independent. You can also see the equality case directly: when $\mathbf{x}$ and $\mathbf{y}$ are independent, so that $p(\mathbf{x}, \mathbf{y}) = p(\mathbf{x})p(\mathbf{y})$, we have:
$$H[\mathbf{x}, \mathbf{y}] = -\int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{x}, \mathbf{y}) d\mathbf{x} d\mathbf{y}$$
$$= -\int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{x}) d\mathbf{x} d\mathbf{y} - \int \int p(\mathbf{x}, \mathbf{y}) lnp(\mathbf{y}) d\mathbf{x} d\mathbf{y}$$
$$= -\int p(\mathbf{x}) lnp(\mathbf{x}) d\mathbf{x} - \int \int p(\mathbf{y}) lnp(\mathbf{y}) d\mathbf{y}$$
$$= H[\mathbf{x}] + H[\mathbf{y}]$$
| 14
|
1
|
1.38
|
medium
|
Using proof by induction, show that the inequality $f(\lambda a + (1 - \lambda)b) \leqslant \lambda f(a) + (1 - \lambda)f(b).$ for convex functions implies the result $f\left(\sum_{i=1}^{M} \lambda_i x_i\right) \leqslant \sum_{i=1}^{M} \lambda_i f(x_i)$.
|
When M = 2, $f\left(\sum_{i=1}^{M} \lambda_i x_i\right) \leqslant \sum_{i=1}^{M} \lambda_i f(x_i)$ reduces to $f(\lambda a + (1 - \lambda)b) \leqslant \lambda f(a) + (1 - \lambda)f(b)$. We suppose that $f\left(\sum_{i=1}^{M} \lambda_i x_i\right) \leqslant \sum_{i=1}^{M} \lambda_i f(x_i)$ holds for M, and we will prove that it also holds for M + 1:
$$\begin{split} f(\sum_{m=1}^{M+1} \lambda_m x_m) &= f(\lambda_{M+1} x_{M+1} + (1 - \lambda_{M+1}) \sum_{m=1}^{M} \frac{\lambda_m}{1 - \lambda_{M+1}} x_m) \\ &\leq \lambda_{M+1} f(x_{M+1}) + (1 - \lambda_{M+1}) f(\sum_{m=1}^{M} \frac{\lambda_m}{1 - \lambda_{M+1}} x_m) \\ &\leq \lambda_{M+1} f(x_{M+1}) + (1 - \lambda_{M+1}) \sum_{m=1}^{M} \frac{\lambda_m}{1 - \lambda_{M+1}} f(x_m) \\ &= \sum_{m=1}^{M+1} \lambda_m f(x_m) \end{split}$$
Hence, Jensen's Inequality, i.e. $f\left(\sum_{i=1}^{M} \lambda_i x_i\right) \leqslant \sum_{i=1}^{M} \lambda_i f(x_i)$, has been proved.
| 963
|
1
|
1.39
|
hard
|
Consider two binary variables x and y having the joint distribution given in Table 1.3.
Evaluate the following quantities
(a) H[x] (b) H[y] (c) H[y|x] (d) H[x|y] (e) H[x,y] (f) I[x,y].
Draw a diagram to show the relationship between these various quantities.
|
It is quite straightforward based on definition.
$$H[x] = -\sum_{i} p(x_{i}) ln p(x_{i}) = -\frac{2}{3} ln \frac{2}{3} - \frac{1}{3} ln \frac{1}{3} = 0.6365$$
$$H[y] = -\sum_{i} p(y_{i}) ln p(y_{i}) = -\frac{2}{3} ln \frac{2}{3} - \frac{1}{3} ln \frac{1}{3} = 0.6365$$
$$H[x, y] = -\sum_{i,j} p(x_{i}, y_{j}) ln p(x_{i}, y_{j}) = -3 \cdot \frac{1}{3} ln \frac{1}{3} - 0 = 1.0986$$
$$H[x|y] = -\sum_{i,j} p(x_{i}, y_{j}) ln p(x_{i}|y_{j}) = -\frac{1}{3} ln 1 - \frac{1}{3} ln \frac{1}{2} - \frac{1}{3} ln \frac{1}{2} = 0.4621$$
$$H[y|x] = -\sum_{i,j} p(x_{i}, y_{j}) ln p(y_{j}|x_{i}) = -\frac{1}{3} ln \frac{1}{2} - \frac{1}{3} ln \frac{1}{2} - \frac{1}{3} ln 1 = 0.4621$$
$$I[x, y] = -\sum_{i,j} p(x_{i}, y_{j}) ln \frac{p(x_{i}) p(y_{j})}{p(x_{i}, y_{j})}$$
$$= -\frac{1}{3} ln \frac{\frac{2}{3} \cdot \frac{1}{3}}{1/3} - \frac{1}{3} ln \frac{\frac{2}{3} \cdot \frac{2}{3}}{1/3} - \frac{1}{3} ln \frac{\frac{1}{3} \cdot \frac{2}{3}}{1/3} = 0.1744$$
Their relations are given below, diagrams omitted.
$$I[x, y] = H[x] - H[x|y] = H[y] - H[y|x]$$
$$H[x, y] = H[y|x] + H[x] = H[x|y] + H[y]$$
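All of these numbers can be reproduced with a few lines of code. A minimal Python sketch, assuming NumPy, using the joint distribution of Table 1.3 and the identities listed above:

```python
import numpy as np

# Joint distribution from Table 1.3: rows index x, columns index y.
p = np.array([[1/3, 1/3],
              [0.0, 1/3]])

px, py = p.sum(axis=1), p.sum(axis=0)

def H(q):
    q = q[q > 0]
    return -np.sum(q * np.log(q))

H_x, H_y, H_xy = H(px), H(py), H(p.ravel())
H_x_given_y = H_xy - H_y          # H[x|y] = H[x,y] - H[y]
H_y_given_x = H_xy - H_x          # H[y|x] = H[x,y] - H[x]
I_xy = H_x + H_y - H_xy           # mutual information

print(H_x, H_y, H_xy)             # 0.6365, 0.6365, 1.0986
print(H_x_given_y, H_y_given_x)   # 0.4621, 0.4621
print(I_xy)                       # 0.1744
```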
| 1,100
|
1
|
1.4
|
medium
|
Consider a probability density $p_x(x)$ defined over a continuous variable x, and suppose that we make a nonlinear change of variable using x = g(y), so that the density transforms according to $p_y(y) = p_{x}(g(y)) |g'(y)|$. By differentiating this relation, show that the location $\widehat{y}$ of the maximum of the density in y is not in general related to the location $\widehat{x}$ of the maximum of the density over x by the simple functional relation $\widehat{x} = g(\widehat{y})$ as a consequence of the Jacobian factor. This shows that the maximum of a probability density (in contrast to a simple function) is dependent on the choice of variable. Verify that, in the case of a linear transformation, the location of the maximum transforms in the same way as the variable itself.
|
This problem requires some knowledge of *calculus*, especially the *chain rule*. We calculate the derivative of $p_y(y)$ with respect to y, according to (1.27):
$$\frac{dp_{y}(y)}{dy} = \frac{d(p_{x}(g(y))|g'(y)|)}{dy} = \frac{dp_{x}(g(y))}{dy}|g'(y)| + p_{x}(g(y))\frac{d|g'(y)|}{dy} \qquad (*)$$
The first term in the above equation can be further simplified:
$$\frac{dp_x(g(y))}{dy}|g'(y)| = \frac{dp_x(g(y))}{dg(y)}\frac{dg(y)}{dy}|g'(y)|$$
(\*\*)
If $\hat{x}$ is the maximum of density over x, we can obtain :
$$\frac{dp_x(x)}{dx}\big|_{\hat{x}}=0$$
Therefore, when $y = \hat{y}$ such that $\hat{x} = g(\hat{y})$, the first term on the right side of (\*\*) is 0, so the first term in (\*) equals 0. However, because of the second term in (\*), the derivative need not equal 0 there. When the transformation is linear (e.g. x = ay + b), the second term in (\*) vanishes, so the location of the maximum does transform as $\hat{x} = g(\hat{y})$. A simple nonlinear counterexample is given by:
$$p_x(x) = 2x, \quad x \in [0,1] \quad => \quad \hat{x} = 1$$
And given that:
$$x = sin(y)$$
Therefore, $p_y(y) = 2\sin(y)|\cos(y)|, y \in [0, \frac{\pi}{2}],$ which can be simplified :
$$p_{y}(y) = \sin(2y), \quad y \in [0, \frac{\pi}{2}] \quad => \quad \hat{y} = \frac{\pi}{4}$$
However, it is quite obvious:
$$\hat{x} \neq \sin(\hat{y})$$
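The counterexample above is easy to confirm numerically (a small Python sketch, assuming NumPy):

```python
import numpy as np

# p_x(x) = 2x on [0, 1] has its mode at x_hat = 1.
# Under x = sin(y), p_y(y) = sin(2y) on [0, pi/2].
y = np.linspace(0.0, np.pi / 2, 100001)
p_y = np.sin(2 * y)

y_hat = y[np.argmax(p_y)]
print(y_hat, np.pi / 4)    # mode of p_y is at pi/4
print(np.sin(y_hat))       # ~0.707, not the mode x_hat = 1 of p_x
```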
| 1,288
|
1
|
1.40
|
easy
|
By applying Jensen's inequality $f\left(\sum_{i=1}^{M} \lambda_i x_i\right) \leqslant \sum_{i=1}^{M} \lambda_i f(x_i)$ with $f(x) = \ln x$ , show that the arithmetic mean of a set of real numbers is never less than their geometrical mean.
|
f(x) = lnx is a strictly concave function, therefore *Jensen's Inequality* holds with the direction reversed:
$$f(\sum_{m=1}^{M} \lambda_m x_m) \ge \sum_{m=1}^{M} \lambda_m f(x_m)$$
We let $\lambda_m = \frac{1}{M}, m = 1, 2, ..., M$ . Hence we will obtain:
$$ln(\frac{x_1 + x_2 + \dots + x_m}{M}) \ge \frac{1}{M}[ln(x_1) + ln(x_2) + \dots + ln(x_M)] = \frac{1}{M}ln(x_1x_2...x_M)$$
We take advantage of the fact that f(x) = lnx is strictly increasing and then obtain:
$$\frac{x_1 + x_2 + ... + x_m}{M} \ge \sqrt[M]{x_1 x_2 ... x_M}$$
| 543
|
1
|
1.41
|
easy
|
Using the sum and product rules of probability, show that the mutual information $I(\mathbf{x}, \mathbf{y})$ satisfies the relation $I[\mathbf{x}, \mathbf{y}] = H[\mathbf{x}] - H[\mathbf{x}|\mathbf{y}] = H[\mathbf{y}] - H[\mathbf{y}|\mathbf{x}].$.
|
Based on definition of $I[\mathbf{x}, \mathbf{y}]$ , i.e.(1.120), we obtain:
$$I[\mathbf{x}, \mathbf{y}] = -\int \int p(\mathbf{x}, \mathbf{y}) ln \frac{p(\mathbf{x})p(\mathbf{y})}{p(\mathbf{x}, \mathbf{y})} d\mathbf{x} d\mathbf{y}$$
$$= -\int \int p(\mathbf{x}, \mathbf{y}) ln \frac{p(\mathbf{x})}{p(\mathbf{x}|\mathbf{y})} d\mathbf{x} d\mathbf{y}$$
$$= -\int \int p(\mathbf{x}, \mathbf{y}) ln p(\mathbf{x}) d\mathbf{x} d\mathbf{y} + \int \int p(\mathbf{x}, \mathbf{y}) ln p(\mathbf{x}|\mathbf{y}) d\mathbf{x} d\mathbf{y}$$
$$= -\int p(\mathbf{x}) ln p(\mathbf{x}) d\mathbf{x} + \int \int p(\mathbf{x}, \mathbf{y}) ln p(\mathbf{x}|\mathbf{y}) d\mathbf{x} d\mathbf{y}$$
$$= H[\mathbf{x}] - H[\mathbf{x}|\mathbf{y}]$$
Where we have taken advantage of the fact: $p(\mathbf{x}, \mathbf{y}) = p(\mathbf{y})p(\mathbf{x}|\mathbf{y})$ , and $\int p(\mathbf{x}, \mathbf{y}) d\mathbf{y} = p(\mathbf{x})$ . The same process can be used for proving $I[\mathbf{x}, \mathbf{y}] = H[\mathbf{y}] - H[\mathbf{y}|\mathbf{x}]$ , if we substitute $p(\mathbf{x}, \mathbf{y})$ with $p(\mathbf{x})p(\mathbf{y}|\mathbf{x})$ in the second step.
| 1,171
|
1
|
1.5
|
easy
|
Using the definition $var[f] = \mathbb{E}\left[ \left( f(x) - \mathbb{E}[f(x)] \right)^2 \right]$ show that var[f(x)] satisfies $var[f] = \mathbb{E}[f(x)^2] - \mathbb{E}[f(x)]^2.$.
|
This problem takes advantage of the property of expectation:
$$\begin{aligned} var[f] &= & \mathbb{E}[(f(x) - \mathbb{E}[f(x)])^2] \\ &= & \mathbb{E}[f(x)^2 - 2f(x)\mathbb{E}[f(x)] + \mathbb{E}[f(x)]^2] \\ &= & \mathbb{E}[f(x)^2] - 2\mathbb{E}[f(x)]^2 + \mathbb{E}[f(x)]^2 \\ => & var[f] &= & \mathbb{E}[f(x)^2] - \mathbb{E}[f(x)]^2 \end{aligned}$$
| 349
|
1
|
1.6
|
easy
|
Show that if two variables x and y are independent, then their covariance is zero.
|
Based on (1.41), we only need to prove that when x and y are independent, $\mathbb{E}_{x,y}[xy] = \mathbb{E}[x]\mathbb{E}[y]$ . Because x and y are independent, we have :
$$p(x, y) = p_x(x) p_y(y)$$
Therefore:
$$\iint xyp(x,y)dxdy = \iint xyp_x(x)p_y(y)dxdy$$
$$= (\int xp_x(x)dx)(\int yp_y(y)dy)$$
$$=> \mathbb{E}_{x,y}[xy] = \mathbb{E}[x]\mathbb{E}[y]$$
| 354
|
1
|
1.7
|
medium
|
In this exercise, we prove the normalization condition $\int_{-\infty}^{\infty} \mathcal{N}\left(x|\mu,\sigma^2\right) \, \mathrm{d}x = 1.$ for the univariate Gaussian. To do this consider, the integral
$$I = \int_{-\infty}^{\infty} \exp\left(-\frac{1}{2\sigma^2}x^2\right) dx \tag{1.124}$$
which we can evaluate by first writing its square in the form
$$I^{2} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \exp\left(-\frac{1}{2\sigma^{2}}x^{2} - \frac{1}{2\sigma^{2}}y^{2}\right) dx dy.$$
Now make the transformation from Cartesian coordinates (x, y) to polar coordinates $(r, \theta)$ and then substitute $u = r^2$ . Show that, by performing the integrals over $\theta$ and u, and then taking the square root of both sides, we obtain
$$I = (2\pi\sigma^2)^{1/2}. (1.126)$$
Finally, use this result to show that the Gaussian distribution $\mathcal{N}(x|\mu,\sigma^2)$ is normalized.
|
We need Integration by substitution.
$$\begin{split} I^2 &= \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} exp(-\frac{1}{2\sigma^2}x^2 - \frac{1}{2\sigma^2}y^2) dx dy \\ &= \int_{0}^{2\pi} \int_{0}^{+\infty} exp(-\frac{1}{2\sigma^2}r^2) r dr d\theta \end{split}$$
Here we have changed to polar coordinates:
$$x = r\cos\theta, \quad y = r\sin\theta, \quad dxdy = r\,drd\theta$$
Based on the fact:
$$\int_{0}^{+\infty} exp(-\frac{r^{2}}{2\sigma^{2}})r\,dr = -\sigma^{2}exp(-\frac{r^{2}}{2\sigma^{2}})\big|_{0}^{+\infty} = -\sigma^{2}(0-1) = \sigma^{2}$$
Therefore, I can be solved:
$$I^{2} = \int_{0}^{2\pi} \sigma^{2} d\theta = 2\pi\sigma^{2} \quad => \quad I = \sqrt{2\pi}\sigma$$
And next,we will show that Gaussian distribution $\mathcal{N}(x|\mu,\sigma^2)$ is normalized, (i.e. $\int_{-\infty}^{+\infty} \mathcal{N}(x|\mu,\sigma^2) dx = 1$ ):
$$\begin{split} \int_{-\infty}^{+\infty} \mathcal{N}(x \big| \mu, \sigma^2) \, dx &= \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi\sigma^2}} exp\{ -\frac{1}{2\sigma^2} (x - \mu)^2 \} \, dx \\ &= \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi\sigma^2}} exp\{ -\frac{1}{2\sigma^2} y^2 \} \, dy \quad (y = x - \mu) \\ &= \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{+\infty} exp\{ -\frac{1}{2\sigma^2} y^2 \} \, dy \\ &= 1 \end{split}$$
| 1,228
|
1
|
1.8
|
medium
|
By using a change of variables, verify that the univariate Gaussian distribution given by $\mathcal{N}(x|\mu,\sigma^2) = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left\{-\frac{1}{2\sigma^2}(x-\mu)^2\right\}$ satisfies $\mathbb{E}[x] = \int_{-\infty}^{\infty} \mathcal{N}(x|\mu, \sigma^2) x \, \mathrm{d}x = \mu.$. Next, by differentiating both sides of the normalization condition
$$\int_{-\infty}^{\infty} \mathcal{N}\left(x|\mu,\sigma^2\right) \, \mathrm{d}x = 1 \tag{1.127}$$
with respect to $\sigma^2$ , verify that the Gaussian satisfies $\mathbb{E}[x^2] = \int_{-\infty}^{\infty} \mathcal{N}\left(x|\mu, \sigma^2\right) x^2 \, \mathrm{d}x = \mu^2 + \sigma^2.$. Finally, show that $var[x] = \mathbb{E}[x^2] - \mathbb{E}[x]^2 = \sigma^2$ holds.
|
The first question will need the result of Prob.1.7:
$$\begin{split} \int_{-\infty}^{+\infty} \mathcal{N}(x|\mu,\sigma^2) x \, dx &= \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi\sigma^2}} exp\{-\frac{1}{2\sigma^2} (x-\mu)^2\} x \, dx \\ &= \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi\sigma^2}} exp\{-\frac{1}{2\sigma^2} y^2\} (y+\mu) \, dy \quad (y=x-\mu) \\ &= \mu \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi\sigma^2}} exp\{-\frac{1}{2\sigma^2} y^2\} \, dy + \int_{-\infty}^{+\infty} \frac{1}{\sqrt{2\pi\sigma^2}} exp\{-\frac{1}{2\sigma^2} y^2\} y \, dy \\ &= \mu + 0 = \mu \end{split}$$
The second problem has already be given hint in the description. Given that:
$$\frac{d(fg)}{dx} = f\frac{dg}{dx} + g\frac{df}{dx}$$
We differentiate both side of $\int_{-\infty}^{\infty} \mathcal{N}\left(x|\mu,\sigma^2\right) \, \mathrm{d}x = 1$ with respect to $\sigma^2$ , we will obtain :
$$\int_{-\infty}^{+\infty} (-\frac{1}{2\sigma^2} + \frac{(x-\mu)^2}{2\sigma^4}) \mathcal{N}(x|\mu,\sigma^2) dx = 0$$
Provided the fact that $\sigma \neq 0$ , we can get:
$$\int_{-\infty}^{+\infty} (x-\mu)^2 \mathcal{N}(x\big|\mu,\sigma^2) dx = \int_{-\infty}^{+\infty} \sigma^2 \mathcal{N}(x\big|\mu,\sigma^2) dx = \sigma^2$$
So the equation above has actually proven $var[x] = \mathbb{E}[x^2] - \mathbb{E}[x]^2 = \sigma^2$, according to the definition:
$$var[x] = \int_{-\infty}^{+\infty} (x - \mathbb{E}[x])^2 \mathcal{N}(x|\mu, \sigma^2) dx$$
Where $\mathbb{E}[x] = \mu$ has already been proved. Therefore :
$$var[x] = \sigma^2$$
Finally,
$$\mathbb{E}[x^2] = var[x] + \mathbb{E}[x]^2 = \sigma^2 + \mu^2$$
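These moment identities can be confirmed by brute-force integration on a grid (a minimal Python sketch, assuming NumPy; μ = 1.5 and σ² = 0.8 are arbitrary):

```python
import numpy as np

mu, sigma2 = 1.5, 0.8
x = np.linspace(mu - 30, mu + 30, 200001)
dx = x[1] - x[0]
N = np.exp(-(x - mu)**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

E_x  = np.sum(x * N) * dx          # ~ mu
E_x2 = np.sum(x**2 * N) * dx       # ~ mu^2 + sigma2
print(E_x, E_x2, E_x2 - E_x**2)    # last value ~ sigma2
```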
| 1,622
|
1
|
1.9
|
easy
|
Show that the mode (i.e. the maximum) of the Gaussian distribution $\mathcal{N}(x|\mu,\sigma^2) = \frac{1}{(2\pi\sigma^2)^{1/2}} \exp\left\{-\frac{1}{2\sigma^2}(x-\mu)^2\right\}$ is given by $\mu$ . Similarly, show that the mode of the multivariate Gaussian $\mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\boldsymbol{\Sigma}|^{1/2}} \exp\left\{-\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})\right\}$ is given by $\mu$ .
|
Here we only focus on the multivariate Gaussian $\mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\boldsymbol{\Sigma}|^{1/2}} \exp\left\{-\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})\right\}$, because it is the general form of the univariate Gaussian. The mode is, by definition, the maximum of the distribution, so we differentiate the density with respect to $\mathbf{x}$:
$$\frac{\partial \mathcal{N}(\mathbf{x} | \boldsymbol{\mu}, \boldsymbol{\Sigma})}{\partial \mathbf{x}} = -\frac{1}{2} [\boldsymbol{\Sigma}^{-1} + (\boldsymbol{\Sigma}^{-1})^T] (\mathbf{x} - \boldsymbol{\mu}) \mathcal{N}(\mathbf{x} | \boldsymbol{\mu}, \boldsymbol{\Sigma})$$
$$= -\boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \mathcal{N}(\mathbf{x} | \boldsymbol{\mu}, \boldsymbol{\Sigma})$$
Where we take advantage of:
$$\frac{\partial \mathbf{x}^T \mathbf{A} \mathbf{x}}{\partial \mathbf{x}} = (\mathbf{A} + \mathbf{A}^T) \mathbf{x} \quad \text{and} \quad (\mathbf{\Sigma}^{-1})^T = \mathbf{\Sigma}^{-1}$$
Therefore, $\frac{\partial \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma})}{\partial \mathbf{x}} = 0$ only when $\mathbf{x} = \boldsymbol{\mu}$.
Note: strictly speaking, one should also check the *Hessian Matrix* to confirm that this stationary point is a maximum. However, the first derivative has only one root, and since the density is continuous, positive, and vanishes at infinity, this unique stationary point must be the maximum.
| 2,113
|
2
|
2.1
|
easy
|
Verify that the Bernoulli distribution $(x|\mu) = \mu^x (1-\mu)^{1-x}$ satisfies the following properties
$$\sum_{x=0}^{1} p(x|\mu) = 1 (2.257)$$
$$\mathbb{E}[x] = \mu \tag{2.258}$$
$$var[x] = \mu(1-\mu).$$
Show that the entropy $\mathrm{H}[x]$ of a Bernoulli distributed random binary variable x is given by
$$H[x] = -\mu \ln \mu - (1 - \mu) \ln(1 - \mu). \tag{2.260}$$
|
Based on definition, we can obtain:
$$\sum_{x_i=0,1} p(x_i) = \mu + (1-\mu) = 1$$
$$\mathbb{E}[x] = \sum_{x_i=0,1} x_i \, p(x_i) = 0 \cdot (1-\mu) + 1 \cdot \mu = \mu$$
$$var[x] = \sum_{x_i=0,1} (x_i - \mathbb{E}[x])^2 p(x_i)$$
$$= (0 - \mu)^2 (1 - \mu) + (1 - \mu)^2 \cdot \mu$$
$$= \mu(1 - \mu)$$
$$H[x] = -\sum_{x_i=0,1} p(x_i) \ln p(x_i) = -\mu \ln \mu - (1-\mu) \ln (1-\mu)$$
| 384
|
2
|
2.10
|
medium
|
Using the property $\Gamma(x+1) = x\Gamma(x)$ of the gamma function, derive the following results for the mean, variance, and covariance of the Dirichlet distribution given by $Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) = \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_K)} \prod_{k=1}^K \mu_k^{\alpha_k - 1}$
$$\mathbb{E}[\mu_j] = \frac{\alpha_j}{\alpha_0} \tag{2.273}$$
$$\operatorname{var}[\mu_j] = \frac{\alpha_j(\alpha_0 - \alpha_j)}{\alpha_0^2(\alpha_0 + 1)}$$
$$\operatorname{cov}[\mu_j \mu_l] = -\frac{\alpha_j \alpha_l}{\alpha_0^2 (\alpha_0 + 1)}, \qquad j \neq l$$
where $\alpha_0$ is defined by $\alpha_0 = \sum_{k=1}^K \alpha_k.$.
|
Based on definition of *Expectation* and $Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) = \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_K)} \prod_{k=1}^K \mu_k^{\alpha_k - 1}$, we can write:
$$\begin{split} \mathbb{E}[\mu_j] &= \int \mu_j Dir(\pmb{\mu}|\pmb{\alpha}) d\pmb{\mu} \\ &= \int \mu_j \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\Gamma(\alpha_2)...\Gamma(\alpha_K)} \prod_{k=1}^K \mu_k^{\alpha_k-1} d\pmb{\mu} \\ &= \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\Gamma(\alpha_2)...\Gamma(\alpha_K)} \int \mu_j \prod_{k=1}^K \mu_k^{\alpha_k-1} d\pmb{\mu} \\ &= \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\Gamma(\alpha_2)...\Gamma(\alpha_K)} \frac{\Gamma(\alpha_1)\Gamma(\alpha_2)...\Gamma(\alpha_{j-1})\Gamma(\alpha_j+1)\Gamma(\alpha_{j+1})...\Gamma(\alpha_K)}{\Gamma(\alpha_0+1)} \\ &= \frac{\Gamma(\alpha_0)\Gamma(\alpha_j+1)}{\Gamma(\alpha_j)\Gamma(\alpha_0+1)} = \frac{\alpha_j}{\alpha_0} \end{split}$$
It is quite the same for variance, let's begin by calculating $\mathbb{E}[\mu_i^2]$ .
$$\begin{split} \mathbb{E}[\mu_j^2] &= \int \mu_j^2 Dir(\pmb{\mu}|\pmb{\alpha}) d\pmb{\mu} \\ &= \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\Gamma(\alpha_2)...\Gamma(\alpha_K)} \int \mu_j^2 \prod_{k=1}^K \mu_k^{\alpha_k-1} d\pmb{\mu} \\ &= \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\Gamma(\alpha_2)...\Gamma(\alpha_K)} \frac{\Gamma(\alpha_1)\Gamma(\alpha_2)...\Gamma(\alpha_{j-1})\Gamma(\alpha_j+2)\Gamma(\alpha_{j+1})...\Gamma(\alpha_K)}{\Gamma(\alpha_0+2)} \\ &= \frac{\Gamma(\alpha_0)\Gamma(\alpha_j+2)}{\Gamma(\alpha_j)\Gamma(\alpha_0+2)} = \frac{\alpha_j(\alpha_j+1)}{\alpha_0(\alpha_0+1)} \end{split}$$
Hence, we obtain:
$$var[\mu_j] = \mathbb{E}[\mu_j^2] - \mathbb{E}[\mu_j]^2 = \frac{\alpha_j(\alpha_j + 1)}{\alpha_0(\alpha_0 + 1)} - (\frac{\alpha_j}{\alpha_0})^2 = \frac{\alpha_j(\alpha_0 - \alpha_j)}{\alpha_0^2(\alpha_0 + 1)}$$
It is the same for covariance.
$$\begin{split} cov[\mu_{j}\mu_{l}] &= \int (\mu_{j} - \mathbb{E}[\mu_{j}])(\mu_{l} - \mathbb{E}[\mu_{l}])Dir(\pmb{\mu}|\pmb{\alpha})d\pmb{\mu} \\ &= \int (\mu_{j}\mu_{l} - \mathbb{E}[\mu_{j}]\mu_{l} - \mathbb{E}[\mu_{l}]\mu_{j} + \mathbb{E}[\mu_{j}]\mathbb{E}[\mu_{l}])Dir(\pmb{\mu}|\pmb{\alpha})d\pmb{\mu} \\ &= \frac{\Gamma(\alpha_{0})\Gamma(\alpha_{j} + 1)\Gamma(\alpha_{l} + 1)}{\Gamma(\alpha_{j})\Gamma(\alpha_{l})\Gamma(\alpha_{0} + 2)} - 2\mathbb{E}[\mu_{j}]\mathbb{E}[\mu_{l}] + \mathbb{E}[\mu_{j}]\mathbb{E}[\mu_{l}] \\ &= \frac{\alpha_{j}\alpha_{l}}{\alpha_{0}(\alpha_{0} + 1)} - \mathbb{E}[\mu_{j}]\mathbb{E}[\mu_{l}] \\ &= \frac{\alpha_{j}\alpha_{l}}{\alpha_{0}(\alpha_{0} + 1)} - \frac{\alpha_{j}\alpha_{l}}{\alpha_{0}^{2}} \\ &= -\frac{\alpha_{j}\alpha_{l}}{\alpha_{0}^{2}(\alpha_{0} + 1)} \quad (j \neq l) \end{split}$$
Note: when j = l, $cov[\mu_j\mu_l]$ reduces to $var[\mu_j]$; however, we cannot simply replace l with j in the expression for $cov[\mu_j\mu_l]$ to get the right result, because $\int \mu_j \mu_l Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) d\boldsymbol{\mu}$ reduces to $\int \mu_j^2 Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) d\boldsymbol{\mu}$ in this case.
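The three results (2.273)-(2.275) are easy to check by Monte Carlo (a Python sketch, assuming NumPy; the concentration parameters α and the sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = np.array([2.0, 3.0, 5.0])
a0 = alpha.sum()

samples = rng.dirichlet(alpha, size=2_000_000)

# Compare Monte Carlo estimates against the closed-form mean, variance, covariance.
print(samples.mean(axis=0), alpha / a0)
print(samples.var(axis=0), alpha * (a0 - alpha) / (a0**2 * (a0 + 1)))
cov_01 = np.cov(samples[:, 0], samples[:, 1])[0, 1]
print(cov_01, -alpha[0] * alpha[1] / (a0**2 * (a0 + 1)))
```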
| 3,094
|
2
|
2.11
|
easy
|
By expressing the expectation of $\ln \mu_j$ under the Dirichlet distribution $Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) = \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_K)} \prod_{k=1}^K \mu_k^{\alpha_k - 1}$ as a derivative with respect to $\alpha_j$ , show that
$$\mathbb{E}[\ln \mu_j] = \psi(\alpha_j) - \psi(\alpha_0) \tag{2.276}$$
where $\alpha_0$ is given by $\alpha_0 = \sum_{k=1}^K \alpha_k.$ and
$$\psi(a) \equiv \frac{d}{da} \ln \Gamma(a) \tag{2.277}$$
is the digamma function.
|
Based on definition of *Expectation* and $Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) = \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_K)} \prod_{k=1}^K \mu_k^{\alpha_k - 1}$, we first denote:
$$\frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\Gamma(\alpha_2)...\Gamma(\alpha_K)} = K(\pmb{\alpha})$$
Then we can write:
$$\begin{split} \frac{\partial Dir(\pmb{\mu}|\pmb{\alpha})}{\partial \alpha_j} &= \partial (K(\pmb{\alpha}) \prod_{i=1}^K \mu_i^{\alpha_i-1})/\partial \alpha_j \\ &= \frac{\partial K(\pmb{\alpha})}{\partial \alpha_j} \prod_{i=1}^K \mu_i^{\alpha_i-1} + K(\pmb{\alpha}) \frac{\partial \prod_{i=1}^K \mu_i^{\alpha_i-1}}{\partial \alpha_j} \\ &= \frac{\partial K(\pmb{\alpha})}{\partial \alpha_j} \prod_{i=1}^K \mu_i^{\alpha_i-1} + ln\mu_j \cdot Dir(\pmb{\mu}|\pmb{\alpha}) \end{split}$$
Then let us perform integral to both sides:
$$\int \frac{\partial Dir(\boldsymbol{\mu}|\boldsymbol{\alpha})}{\partial \alpha_{j}} d\boldsymbol{\mu} = \int \frac{\partial K(\boldsymbol{\alpha})}{\partial \alpha_{j}} \prod_{i=1}^{K} \mu_{i}^{\alpha_{i}-1} d\boldsymbol{\mu} + \int ln \mu_{j} \cdot Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) d\boldsymbol{\mu}$$
The left side can be further simplified as:
$$\text{left side} = \frac{\partial \int Dir(\boldsymbol{\mu}|\boldsymbol{\alpha}) d\boldsymbol{\mu}}{\partial \alpha_j} = \frac{\partial 1}{\partial \alpha_j} = 0$$
The right side can be further simplified as:
$$\begin{array}{ll} \text{right side} & = & \displaystyle \frac{\partial K(\pmb{\alpha})}{\partial \alpha_j} \int \prod_{i=1}^K \mu_i^{\alpha_i-1} d \, \pmb{\mu} + \mathbb{E}[ln\mu_j] \\ \\ & = & \displaystyle \frac{\partial K(\pmb{\alpha})}{\partial \alpha_j} \, \frac{1}{K(\pmb{\alpha})} + \mathbb{E}[ln\mu_j] \\ \\ & = & \displaystyle \frac{\partial lnK(\pmb{\alpha})}{\partial \alpha_j} + \mathbb{E}[ln\mu_j] \end{array}$$
Therefore, we obtain:
$$\mathbb{E}[\ln \mu_{j}] = -\frac{\partial \ln K(\alpha)}{\partial \alpha_{j}}$$
$$= -\frac{\partial \left\{ \ln \Gamma(\alpha_{0}) - \sum_{i=1}^{K} \ln \Gamma(\alpha_{i}) \right\}}{\partial \alpha_{j}}$$
$$= \frac{\partial \ln \Gamma(\alpha_{j})}{\partial \alpha_{j}} - \frac{\partial \ln \Gamma(\alpha_{0})}{\partial \alpha_{j}}$$
$$= \frac{\partial \ln \Gamma(\alpha_{j})}{\partial \alpha_{j}} - \frac{\partial \ln \Gamma(\alpha_{0})}{\partial \alpha_{0}} \frac{\partial \alpha_{0}}{\partial \alpha_{j}}$$
$$= \frac{\partial \ln \Gamma(\alpha_{j})}{\partial \alpha_{j}} - \frac{\partial \ln \Gamma(\alpha_{0})}{\partial \alpha_{0}}$$
$$= \psi(\alpha_{j}) - \psi(\alpha_{0})$$
Therefore, the problem has been solved.
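Equation (2.276) can likewise be checked by Monte Carlo (a Python sketch, assuming NumPy and SciPy for the digamma function; α and the sample size are arbitrary):

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(4)
alpha = np.array([2.0, 3.0, 5.0])
samples = rng.dirichlet(alpha, size=2_000_000)

# E[ln mu_j] = psi(alpha_j) - psi(alpha_0)
print(np.log(samples).mean(axis=0))
print(digamma(alpha) - digamma(alpha.sum()))
```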
| 2,606
|
2
|
2.12
|
easy
|
The uniform distribution for a continuous variable x is defined by
$$U(x|a,b) = \frac{1}{b-a}, \qquad a \leqslant x \leqslant b.$$
Verify that this distribution is normalized, and find expressions for its mean and variance.
|
Since we have:
$$\int_{a}^{b} \frac{1}{b-a} dx = 1$$
It is straightforward that it is normalized. Then we calculate its mean:
$$\mathbb{E}[x] = \int_{a}^{b} x \frac{1}{b-a} dx = \frac{x^{2}}{2(b-a)} \Big|_{a}^{b} = \frac{a+b}{2}$$
Then we calculate its variance.
$$var[x] = \mathbb{E}[x^2] - \mathbb{E}[x]^2 = \int_a^b \frac{x^2}{b-a} dx - (\frac{a+b}{2})^2 = \frac{x^3}{3(b-a)} \Big|_a^b - (\frac{a+b}{2})^2$$
Hence we obtain:
$$var[x] = \frac{(b-a)^2}{12}$$
| 466
|
2
|
2.13
|
medium
|
Evaluate the Kullback-Leibler divergence $\mathrm{KL}(p\|q) = -\int p(\mathbf{x}) \ln \left\{\frac{q(\mathbf{x})}{p(\mathbf{x})}\right\} d\mathbf{x}$ between two Gaussians $p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma})$ and $q(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\mathbf{m}, \mathbf{L})$ .
|
Let's begin by calculating $\ln \frac{p(x)}{q(x)}$ :
$$ln(\frac{p(\mathbf{x})}{q(\mathbf{x})}) = \frac{1}{2}ln(\frac{|\mathbf{L}|}{|\mathbf{\Sigma}|}) + \frac{1}{2}(\mathbf{x} - \mathbf{m})^T \mathbf{L}^{-1}(\mathbf{x} - \mathbf{m}) - \frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^T \mathbf{\Sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu})$$
If $x \sim p(x) = \mathcal{N}(\mu|\Sigma)$ , we then take advantage of the following properties.
$$\int p(\mathbf{x})d\mathbf{x} = 1$$
$$\mathbb{E}[\mathbf{x}] = \int \mathbf{x} p(\mathbf{x}) d\mathbf{x} = \mu$$
$$\mathbb{E}[(\mathbf{x} - \mathbf{a})^T \mathbf{A} (\mathbf{x} - \mathbf{a})] = \operatorname{tr}(\mathbf{A} \mathbf{\Sigma}) + (\mathbf{\mu} - \mathbf{a})^T \mathbf{A} (\mathbf{\mu} - \mathbf{a})$$
We obtain:
$$KL = \int \left\{ \frac{1}{2} ln \frac{|\boldsymbol{L}|}{|\boldsymbol{\Sigma}|} - \frac{1}{2} (\boldsymbol{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\boldsymbol{x} - \boldsymbol{\mu}) + \frac{1}{2} (\boldsymbol{x} - \boldsymbol{m})^T \boldsymbol{L}^{-1} (\boldsymbol{x} - \boldsymbol{m}) \right\} p(\boldsymbol{x}) d\boldsymbol{x}$$
$$= \frac{1}{2} ln \frac{|\boldsymbol{L}|}{|\boldsymbol{\Sigma}|} - \frac{1}{2} E[(\boldsymbol{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\boldsymbol{x} - \boldsymbol{\mu})] + \frac{1}{2} E[(\boldsymbol{x} - \boldsymbol{m})^T \boldsymbol{L}^{-1} (\boldsymbol{x} - \boldsymbol{m})]$$
$$= \frac{1}{2} ln \frac{|\boldsymbol{L}|}{|\boldsymbol{\Sigma}|} - \frac{1}{2} tr \{\boldsymbol{I}_D\} + \frac{1}{2} (\boldsymbol{\mu} - \boldsymbol{m})^T \boldsymbol{L}^{-1} (\boldsymbol{\mu} - \boldsymbol{m}) + \frac{1}{2} tr \{\boldsymbol{L}^{-1} \boldsymbol{\Sigma}\}$$
$$= \frac{1}{2} [ ln \frac{|\boldsymbol{L}|}{|\boldsymbol{\Sigma}|} - D + tr \{\boldsymbol{L}^{-1} \boldsymbol{\Sigma}\} + (\boldsymbol{m} - \boldsymbol{\mu})^T \boldsymbol{L}^{-1} (\boldsymbol{m} - \boldsymbol{\mu})]$$
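The closed form can be checked against a Monte Carlo estimate of $\mathbb{E}_p[\ln p(\mathbf{x}) - \ln q(\mathbf{x})]$ (a Python sketch, assuming NumPy and SciPy; the means and covariances below are arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
D = 3
mu, m = np.zeros(D), np.array([0.5, -0.3, 1.0])
A = rng.random((D, D))
Sigma = A @ A.T + np.eye(D)   # covariance of p
B = rng.random((D, D))
L = B @ B.T + np.eye(D)       # covariance of q

# Closed form derived above.
Linv = np.linalg.inv(L)
kl_closed = 0.5 * (np.linalg.slogdet(L)[1] - np.linalg.slogdet(Sigma)[1] - D
                   + np.trace(Linv @ Sigma)
                   + (m - mu) @ Linv @ (m - mu))

# Monte Carlo estimate of E_p[ln p(x) - ln q(x)]
x = rng.multivariate_normal(mu, Sigma, size=500_000)
kl_mc = np.mean(multivariate_normal(mu, Sigma).logpdf(x)
                - multivariate_normal(m, L).logpdf(x))
print(kl_closed, kl_mc)
```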
| 1,987
|
2
|
2.14
|
medium
|
This exercise demonstrates that the multivariate distribution with maximum entropy, for a given covariance, is a Gaussian. The entropy of a distribution $p(\mathbf{x})$ is given by
$$H[\mathbf{x}] = -\int p(\mathbf{x}) \ln p(\mathbf{x}) \, d\mathbf{x}. \tag{2.279}$$
We wish to maximize H[x] over all distributions p(x) subject to the constraints that p(x) be normalized and that it have a specific mean and covariance, so that
$$\int p(\mathbf{x}) \, \mathrm{d}\mathbf{x} = 1 \tag{2.280}$$
$$\int p(\mathbf{x})\mathbf{x} \, \mathrm{d}\mathbf{x} = \boldsymbol{\mu} \tag{2.281}$$
$$\int p(\mathbf{x})(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} d\mathbf{x} = \boldsymbol{\Sigma}.$$
By performing a variational maximization of $H[\mathbf{x}] = -\int p(\mathbf{x}) \ln p(\mathbf{x}) \, d\mathbf{x}.$ and using Lagrange multipliers to enforce the constraints $\int p(\mathbf{x}) \, \mathrm{d}\mathbf{x} = 1$, $\int p(\mathbf{x})\mathbf{x} \, \mathrm{d}\mathbf{x} = \boldsymbol{\mu}$, and $\int p(\mathbf{x})(\mathbf{x} - \boldsymbol{\mu})(\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} d\mathbf{x} = \boldsymbol{\Sigma}.$, show that the maximum entropy distribution is given by the Gaussian $\mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\boldsymbol{\Sigma}|^{1/2}} \exp\left\{-\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})\right\}$.
|
The hint given in the problem is straightforward, but the calculation is rather involved, so here we use a simpler method, taking advantage of the property of the *Kullback-Leibler distance*. Let g(x) be a Gaussian PDF with mean $\mu$ and covariance $\Sigma$, and f(x) an arbitrary PDF with the same mean and covariance.
$$0 \le KL(f||g) = -\int f(\mathbf{x})ln\left\{\frac{g(\mathbf{x})}{f(\mathbf{x})}\right\}d\mathbf{x} = -H(f) - \int f(\mathbf{x})lng(\mathbf{x})d\mathbf{x} \qquad (*)$$
Let's calculate the second term of the equation above.
$$\int f(\mathbf{x}) lng(\mathbf{x}) d\mathbf{x} = \int f(\mathbf{x}) ln \left\{ \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf{\Sigma}|^{1/2}} exp \left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right] \right\} d\mathbf{x}$$
$$= \int f(\mathbf{x}) ln \left\{ \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf{\Sigma}|^{1/2}} \right\} d\mathbf{x} + \int f(\mathbf{x}) \left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right] d\mathbf{x}$$
$$= ln \left\{ \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf{\Sigma}|^{1/2}} \right\} - \frac{1}{2} \mathbb{E} \left[ (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right]$$
$$= ln \left\{ \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf{\Sigma}|^{1/2}} \right\} - \frac{1}{2} \text{tr} \{I_D\}$$
$$= -\left\{ \frac{1}{2} ln |\mathbf{\Sigma}| + \frac{D}{2} (1 + ln(2\pi)) \right\}$$
$$= -H(g)$$
We have taken advantage of two properties of the PDF f(x), with mean $\mu$ and covariance $\Sigma$, as listed below. What's more, we also use the result of Prob.2.15, which we will prove later.
$$\int f(\boldsymbol{x})d\boldsymbol{x} = 1$$
$$\mathbb{E}[(\boldsymbol{x} - \boldsymbol{a})^T \boldsymbol{A} (\boldsymbol{x} - \boldsymbol{a})] = \operatorname{tr}(\boldsymbol{A}\boldsymbol{\Sigma}) + (\boldsymbol{\mu} - \boldsymbol{a})^T \boldsymbol{A} (\boldsymbol{\mu} - \boldsymbol{a})$$
Now we can further simplify (\*) to obtain:
$$H(g) \ge H(f)$$
In other words, we have proved that for an arbitrary PDF f(x) with the same mean and covariance as a Gaussian PDF g(x), its entropy cannot be greater than that of the Gaussian.
| 2,251
|
2
|
2.15
|
medium
|
Show that the entropy of the multivariate Gaussian $\mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is given by
$$H[\mathbf{x}] = \frac{1}{2} \ln |\mathbf{\Sigma}| + \frac{D}{2} (1 + \ln(2\pi))$$
where D is the dimensionality of $\mathbf{x}$ .
|
We have already used the result of this problem to solve Prob.2.14, and now we will prove it. Suppose $x \sim p(x) = \mathcal{N}(\mu|\Sigma)$ :
$$H[\mathbf{x}] = -\int p(\mathbf{x})lnp(\mathbf{x})d\mathbf{x}$$
$$= -\int p(\mathbf{x})ln \left\{ \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf{\Sigma}|^{1/2}} exp \left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right] \right\} d\mathbf{x}$$
$$= -\int p(\mathbf{x})ln \left\{ \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf{\Sigma}|^{1/2}} \right\} d\mathbf{x} - \int f(\mathbf{x}) \left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right] d\mathbf{x}$$
$$= -ln \left\{ \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf{\Sigma}|^{1/2}} \right\} + \frac{1}{2} \mathbb{E} \left[ (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right]$$
$$= -ln \left\{ \frac{1}{(2\pi)^{D/2}} \frac{1}{|\mathbf{\Sigma}|^{1/2}} \right\} + \frac{1}{2} \text{tr} \{ I_D \}$$
$$= \frac{1}{2} ln |\mathbf{\Sigma}| + \frac{D}{2} (1 + ln(2\pi))$$
Where we have taken advantage of:
$$\int p(\boldsymbol{x})d\boldsymbol{x} = 1$$
$$\mathbb{E}[(\boldsymbol{x} - \boldsymbol{a})^T \boldsymbol{A} (\boldsymbol{x} - \boldsymbol{a})] = \operatorname{tr}(\boldsymbol{A}\boldsymbol{\Sigma}) + (\boldsymbol{\mu} - \boldsymbol{a})^T \boldsymbol{A} (\boldsymbol{\mu} - \boldsymbol{a})$$
Note: Actually in Prob.2.14, we have already solved this problem, you can intuitively view it by replacing the integrand f(x)lng(x) with g(x)lng(x), and the same procedure in Prob.2.14 still holds to calculate $\int g(x)lng(x)dx$ .
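This closed form is also easy to verify by Monte Carlo, since $H[\mathbf{x}] = -\mathbb{E}[\ln p(\mathbf{x})]$ (a Python sketch, assuming NumPy and SciPy; the covariance below is arbitrary):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
D = 4
A = rng.random((D, D))
Sigma = A @ A.T + np.eye(D)
mu = np.zeros(D)

# Closed form: H = 0.5 * ln|Sigma| + D/2 * (1 + ln(2*pi))
H_closed = 0.5 * np.linalg.slogdet(Sigma)[1] + 0.5 * D * (1 + np.log(2 * np.pi))

# Monte Carlo: H = -E[ln p(x)] under x ~ p
x = rng.multivariate_normal(mu, Sigma, size=500_000)
H_mc = -multivariate_normal(mu, Sigma).logpdf(x).mean()
print(H_closed, H_mc)
```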
| 1,646
|
2
|
2.16
|
hard
|
Consider two random variables $x_1$ and $x_2$ having Gaussian distributions with means $\mu_1, \mu_2$ and precisions $\tau_1, \tau_2$ respectively. Derive an expression for the differential entropy of the variable $x = x_1 + x_2$ . To do this, first find the distribution of x by using the relation
$$p(x) = \int_{-\infty}^{\infty} p(x|x_2)p(x_2) dx_2$$
and completing the square in the exponent. Then observe that this represents the convolution of two Gaussian distributions, which itself will be Gaussian, and finally make use of the result $H[x] = \frac{1}{2} \left\{ 1 + \ln(2\pi\sigma^2) \right\}.$ for the entropy of the univariate Gaussian.
|
Let us consider a more general conclusion about the *Probability Density Function* (PDF) of the summation of two independent random variables. We denote two random variables X and Y. Their summation Z = X + Y, is still a random variable. We also denote $f(\cdot)$ as PDF, and $F(\cdot)$ as *Cumulative Distribution Function* (CDF). We can obtain :
$$F_Z(z) = P(Z < z) = \iint_{x+y \le z} f_{X,Y}(x,y) dx dy$$
Where z represents an arbitrary real number. We rewrite the *double integral* into *iterated integral*:
$$F_Z(z) = \int_{-\infty}^{+\infty} \left[ \int_{-\infty}^{z-y} f_{X,Y}(x,y) dx \right] dy$$
We fix *z* and *y*, and then make the change of variable x = u - y in the inner integral.
$$F_Z(z) = \int_{-\infty}^{+\infty} \left[ \int_{-\infty}^{z-y} f_{X,Y}(x,y) dx \right] dy = \int_{-\infty}^{+\infty} \left[ \int_{-\infty}^{z} f_{X,Y}(u-y,y) du \right] dy$$
Note: $f_{X,Y}(\cdot)$ is the joint PDF of X and Y, and then we rearrange the order, we will obtain :
$$F_Z(z) = \int_{-\infty}^{z} \left[ \int_{-\infty}^{+\infty} f_{X,Y}(u - y, y) dy \right] du$$
Comparing the equation above with the definition of the CDF:
$$F_Z(z) = \int_{-\infty}^z f_Z(u) \, du$$
We can obtain :
$$f_Z(u) = \int_{-\infty}^{+\infty} f_{X,Y}(u - y, y) dy$$
And if X and Y are independent, which means $f_{X,Y}(x,y) = f_X(x)f_Y(y)$ , we can simplify $f_Z(z)$ :
$$f_Z(u) = \int_{-\infty}^{+\infty} f_X(u - y) f_Y(y) dy$$
i.e. $f_Z = f_X * f_Y$
We have thus proved that the PDF of the sum of two independent random variables is the convolution of their PDFs. Hence it is straightforward to see that in this problem, where the random variable x is the sum of the random variables $x_1$ and $x_2$ , the PDF of x is the convolution of the PDFs of $x_1$ and $x_2$ . To find the entropy of x, we will use a simpler method, taking advantage of (2.113)-(2.117). With the knowledge :
$$p(x_2)=\mathcal{N}(\mu_2,\tau_2^{-1})$$
$$p(x|x_2) = \mathcal{N}(\mu_1 + x_2, \tau_1^{-1})$$
We make analogies: $x_2$ in this problem to $\boldsymbol{x}$ in $p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1})$, x in this problem to $\boldsymbol{y}$ in $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\mathbf{x} + \mathbf{b}, \mathbf{L}^{-1})$. Hence by using $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$, we can obtain p(x) is still a normal distribution, and since the entropy of a Gaussian is fully decided by its variance, there is no need to calculate the mean. Still by using $p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\boldsymbol{\mu} + \mathbf{b}, \mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}})$, the variance of x is $\tau_1^{-1} + \tau_2^{-1}$ , which finally gives its entropy:
$$H[x] = \frac{1}{2} \left[ 1 + ln2\pi (\tau_1^{-1} + \tau_2^{-1}) \right]$$
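As a small numerical check (an addition, assuming NumPy), the sketch below draws samples of $x_1 + x_2$ and compares the sample variance with $\tau_1^{-1} + \tau_2^{-1}$ , from which the entropy expression above follows; the means and precisions are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2 = 0.3, -1.2
tau1, tau2 = 2.0, 0.5          # example precisions

x1 = rng.normal(mu1, 1.0 / np.sqrt(tau1), size=500_000)
x2 = rng.normal(mu2, 1.0 / np.sqrt(tau2), size=500_000)
x = x1 + x2

var_sum = 1.0 / tau1 + 1.0 / tau2                     # predicted variance of x
H_closed = 0.5 * (1 + np.log(2 * np.pi * var_sum))    # entropy of the resulting Gaussian

print(x.var(), var_sum)   # sample variance vs tau1^-1 + tau2^-1
print(H_closed)
```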
| 3,009
|
2
|
2.17
|
easy
|
Consider the multivariate Gaussian distribution given by $\mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\boldsymbol{\Sigma}|^{1/2}} \exp\left\{-\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})\right\}$. By writing the precision matrix (inverse covariance matrix) $\Sigma^{-1}$ as the sum of a symmetric and an anti-symmetric matrix, show that the anti-symmetric term does not appear in the exponent of the Gaussian, and hence that the precision matrix may be taken to be symmetric without loss of generality. Because the inverse of a symmetric matrix is also symmetric (see Exercise 2.22), it follows that the covariance matrix may also be chosen to be symmetric without loss of generality.
|
This is an extension of Prob.1.14. The same procedure can be used here. We suppose an arbitrary precision matrix $\Lambda$ can be written as $\Lambda^S + \Lambda^A$ , where they satisfy:
$$\Lambda^S_{ij} = \frac{\Lambda_{ij} + \Lambda_{ji}}{2} \quad , \quad \Lambda^A_{ij} = \frac{\Lambda_{ij} - \Lambda_{ji}}{2}$$
Hence it is straightforward that $\Lambda^S_{ij} = \Lambda^S_{ji}$ , and $\Lambda^A_{ij} = -\Lambda^A_{ji}$ . If we expand the quadratic form of exponent, we will obtain:
$$(\boldsymbol{x} - \boldsymbol{\mu})^T \boldsymbol{\Lambda} (\boldsymbol{x} - \boldsymbol{\mu}) = \sum_{i=1}^D \sum_{j=1}^D (x_i - \mu_i) \Lambda_{ij} (x_j - \mu_j)$$
(\*)
It is straightforward then:
$$(*) = \sum_{i=1}^{D} \sum_{j=1}^{D} (x_i - \mu_i) \Lambda_{ij}^S(x_j - \mu_j) + \sum_{i=1}^{D} \sum_{j=1}^{D} (x_i - \mu_i) \Lambda_{ij}^A(x_j - \mu_j)$$
$$= \sum_{i=1}^{D} \sum_{j=1}^{D} (x_i - \mu_i) \Lambda_{ij}^S(x_j - \mu_j)$$
The anti-symmetric contribution vanishes because exchanging the summation indices i and j flips the sign of $\Lambda^A_{ij}$ while leaving the factors $(x_i - \mu_i)(x_j - \mu_j)$ unchanged, so the double sum cancels term by term. Therefore, the precision matrix can be assumed symmetric without loss of generality, and so can the covariance matrix.
| 1,017
|
2
|
2.18
|
hard
|
Consider a real, symmetric matrix $\Sigma$ whose eigenvalue equation is given by $\mathbf{\Sigma}\mathbf{u}_i = \lambda_i \mathbf{u}_i$. By taking the complex conjugate of this equation and subtracting the original equation, and then forming the inner product with eigenvector $\mathbf{u}_i$ , show that the eigenvalues $\lambda_i$ are real. Similarly, use the symmetry property of $\Sigma$ to show that two eigenvectors $\mathbf{u}_i$ and $\mathbf{u}_j$ will be orthogonal provided $\lambda_j \neq \lambda_i$ . Finally, show that without loss of generality, the set of eigenvectors can be chosen to be orthonormal, so that they satisfy $\mathbf{u}_i^{\mathrm{T}} \mathbf{u}_j = I_{ij}$, even if some of the eigenvalues are zero.
|
We will just follow the hint given in the problem. Firstly, we take complex conjugate on both sides of (2.45):
$$\overline{\boldsymbol{\Sigma}\boldsymbol{u_i}} = \overline{\lambda_i \boldsymbol{u_i}} \quad \Rightarrow \quad \boldsymbol{\Sigma}\,\overline{\boldsymbol{u_i}} = \overline{\lambda_i}\,\overline{\boldsymbol{u_i}}$$
Where we have taken advantage of the fact that $\Sigma$ is a real matrix, i.e., $\overline{\Sigma} = \Sigma$ . Then, using that $\Sigma$ is symmetric, i.e., $\Sigma^T = \Sigma$ :
$$\overline{\boldsymbol{u_i}}^T\boldsymbol{\Sigma}\boldsymbol{u_i} = \overline{\boldsymbol{u_i}}^T(\boldsymbol{\Sigma}\boldsymbol{u_i}) = \overline{\boldsymbol{u_i}}^T(\lambda_i\boldsymbol{u_i}) = \lambda_i\overline{\boldsymbol{u_i}}^T\boldsymbol{u_i}$$
$$\overline{\boldsymbol{u}_{i}}^{T} \boldsymbol{\Sigma} \boldsymbol{u}_{i} = (\boldsymbol{\Sigma} \overline{\boldsymbol{u}_{i}})^{T} \boldsymbol{u}_{i} = (\overline{\lambda_{i}} \overline{\boldsymbol{u}_{i}})^{T} \boldsymbol{u}_{i} = \overline{\lambda_{i}}^{T} \overline{\boldsymbol{u}_{i}}^{T} \boldsymbol{u}_{i}$$
Since $\boldsymbol{u_i} \neq 0$ , we have $\overline{\boldsymbol{u_i}}^T \boldsymbol{u_i} \neq 0$ . Comparing the two expressions above gives $\lambda_i = \overline{\lambda_i}$ , which means $\lambda_i$ is real. Next we will prove that two eigenvectors corresponding to different eigenvalues are orthogonal.
$$\lambda_i \langle \boldsymbol{u_i}, \boldsymbol{u_j} \rangle = \langle \lambda_i \boldsymbol{u_i}, \boldsymbol{u_j} \rangle = \langle \boldsymbol{\Sigma} \boldsymbol{u_i}, \boldsymbol{u_j} \rangle = \langle \boldsymbol{u_i}, \boldsymbol{\Sigma}^T \boldsymbol{u_j} \rangle = \langle \boldsymbol{u_i}, \boldsymbol{\Sigma} \boldsymbol{u_j} \rangle = \lambda_j \langle \boldsymbol{u_i}, \boldsymbol{u_j} \rangle$$
Where we have taken advantage of $\Sigma^T = \Sigma$ and of the fact that for an arbitrary real matrix A and vectors x, y, we have :
$$< Ax, y> = < x, A^Ty>$$
Provided $\lambda_i \neq \lambda_j$ , we must have $\langle \boldsymbol{u_i}, \boldsymbol{u_j} \rangle = 0$ , i.e., $\boldsymbol{u_i}$ and $\boldsymbol{u_j}$ are orthogonal. For a repeated eigenvalue (including zero), the eigenvectors spanning the corresponding eigenspace can always be chosen to be mutually orthogonal, e.g., by the Gram-Schmidt procedure. Finally, we normalize every eigenvector so that its *Euclidean norm* equals 1; note that multiplying an eigenvector by a nonzero scalar a leaves its eigenvalue unchanged, since $\boldsymbol{\Sigma}(a\boldsymbol{u_i}) = \lambda_i (a\boldsymbol{u_i})$ . Hence the set of eigenvectors can be chosen to satisfy $\mathbf{u}_i^{\mathrm{T}} \mathbf{u}_j = I_{ij}$ .
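The following short Python check (an addition, assuming NumPy) illustrates both claims numerically for a random real symmetric matrix: the computed eigenvalues are real and the eigenvectors are orthonormal.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 5
B = rng.standard_normal((D, D))
Sigma = 0.5 * (B + B.T)                 # a random real symmetric matrix

lam, U = np.linalg.eigh(Sigma)          # eigh exploits symmetry; columns of U are eigenvectors
print(np.iscomplexobj(lam))             # False: the eigenvalues are real
print(np.allclose(U.T @ U, np.eye(D)))  # True: the eigenvectors are orthonormal
```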
| 2,072
|
2
|
2.19
|
medium
|
Show that a real, symmetric matrix $\Sigma$ having the eigenvector equation $\mathbf{\Sigma}\mathbf{u}_i = \lambda_i \mathbf{u}_i$ can be expressed as an expansion in the eigenvectors, with coefficients given by the eigenvalues, of the form $\Sigma = \sum_{i=1}^{D} \lambda_i \mathbf{u}_i \mathbf{u}_i^{\mathrm{T}}$. Similarly, show that the inverse matrix $\Sigma^{-1}$ has a representation of the form $\Sigma^{-1} = \sum_{i=1}^{D} \frac{1}{\lambda_i} \mathbf{u}_i \mathbf{u}_i^{\mathrm{T}}.$.
|
For every $D \times D$ real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen such that they are orthogonal to each other. Thus a real symmetric matrix $\Sigma$ can be decomposed as $\Sigma = U \Lambda U^T$ , where U is an orthogonal matrix whose columns are the eigenvectors $\boldsymbol{u}_i$ , and $\Lambda$ is a diagonal matrix whose entries are the eigenvalues of $\Sigma$ . Hence for an arbitrary vector $\mathbf{x}$ , we have:
$$\Sigma x = U \Lambda U^T x = U \Lambda \begin{bmatrix} u_1^T x \\ \vdots \\ u_D^T x \end{bmatrix} = U \begin{bmatrix} \lambda_1 u_1^T x \\ \vdots \\ \lambda_D u_D^T x \end{bmatrix} = (\sum_{k=1}^D \lambda_k u_k u_k^T) x$$
And since $\Sigma^{-1} = U\Lambda^{-1}U^T$ , the same procedure can be used to prove $\Sigma^{-1} = \sum_{i=1}^{D} \frac{1}{\lambda_i} \mathbf{u}_i \mathbf{u}_i^{\mathrm{T}}.$.
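A minimal numerical check of both expansions (an addition, assuming NumPy); Sigma below is a random symmetric positive-definite matrix used purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4
B = rng.standard_normal((D, D))
Sigma = B @ B.T + D * np.eye(D)          # symmetric positive definite

lam, U = np.linalg.eigh(Sigma)
Sigma_rebuilt = sum(lam[i] * np.outer(U[:, i], U[:, i]) for i in range(D))
Sigma_inv_rebuilt = sum((1.0 / lam[i]) * np.outer(U[:, i], U[:, i]) for i in range(D))

print(np.allclose(Sigma_rebuilt, Sigma))                      # True
print(np.allclose(Sigma_inv_rebuilt, np.linalg.inv(Sigma)))   # True
```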
| 828
|
2
|
2.2
|
medium
|
The form of the Bernoulli distribution given by $p(x|\mu) = \mu^x (1-\mu)^{1-x}$ is not symmetric between the two values of x. In some situations, it will be more convenient to use an equivalent formulation for which $x \in \{-1, 1\}$ , in which case the distribution can be written
$$p(x|\mu) = \left(\frac{1-\mu}{2}\right)^{(1-x)/2} \left(\frac{1+\mu}{2}\right)^{(1+x)/2}$$
where $\mu \in [-1, 1]$ . Show that the distribution $p(x|\mu) = \left(\frac{1-\mu}{2}\right)^{(1-x)/2} \left(\frac{1+\mu}{2}\right)^{(1+x)/2}$ is normalized, and evaluate its mean, variance, and entropy.
|
The proof in Prob.2.1. can also be used here.
$$\sum_{x_i = -1, 1} p(x_i) = \frac{1 - \mu}{2} + \frac{1 + \mu}{2} = 1$$
$$\mathbb{E}[x] = \sum_{x_i = -1, 1} x_i \cdot p(x_i) = -1 \cdot \frac{1 - \mu}{2} + 1 \cdot \frac{1 + \mu}{2} = \mu$$
$$\operatorname{var}[x] = \sum_{x_i = -1, 1} (x_i - \mathbb{E}[x])^2 \cdot p(x_i)$$
$$= (-1 - \mu)^2 \cdot \frac{1 - \mu}{2} + (1 - \mu)^2 \cdot \frac{1 + \mu}{2}$$
$$= 1 - \mu^2$$
$$H[x] = -\sum_{x_i = -1, 1} p(x_i) \cdot \ln p(x_i) = -\frac{1 - \mu}{2} \cdot \ln \frac{1 - \mu}{2} - \frac{1 + \mu}{2} \cdot \ln \frac{1 + \mu}{2}$$
| 577
|
2
|
2.20
|
medium
|
A positive definite matrix $\Sigma$ can be defined as one for which the quadratic form
$$\mathbf{a}^{\mathrm{T}}\mathbf{\Sigma}\mathbf{a}\tag{2.285}$$
is positive for any real value of the vector $\mathbf{a}$ . Show that a necessary and sufficient condition for $\Sigma$ to be positive definite is that all of the eigenvalues $\lambda_i$ of $\Sigma$ , defined by $\mathbf{\Sigma}\mathbf{u}_i = \lambda_i \mathbf{u}_i$, are positive.
|
Since $u_1, u_2, ..., u_D$ can constitute a basis for $\mathbb{R}^D$ , we can make projection for a:
$$\boldsymbol{a} = a_1 \boldsymbol{u_1} + a_2 \boldsymbol{u_2} + \dots + a_D \boldsymbol{u_D}$$
We substitute the expression above into $\boldsymbol{a}^T \boldsymbol{\Sigma} \boldsymbol{a}$ , taking advantage of the property that $\boldsymbol{u}_i^T \boldsymbol{u}_j = 1$ if i = j and 0 otherwise, and we will obtain:
$$\mathbf{a}^{T} \mathbf{\Sigma} \mathbf{a} = (a_{1} \mathbf{u}_{1} + a_{2} \mathbf{u}_{2} + \dots + a_{D} \mathbf{u}_{D})^{T} \mathbf{\Sigma} (a_{1} \mathbf{u}_{1} + a_{2} \mathbf{u}_{2} + \dots + a_{D} \mathbf{u}_{D})$$
$$= (a_{1} \mathbf{u}_{1}^{T} + a_{2} \mathbf{u}_{2}^{T} + \dots + a_{D} \mathbf{u}_{D}^{T}) \mathbf{\Sigma} (a_{1} \mathbf{u}_{1} + a_{2} \mathbf{u}_{2} + \dots + a_{D} \mathbf{u}_{D})$$
$$= (a_{1} \mathbf{u}_{1}^{T} + a_{2} \mathbf{u}_{2}^{T} + \dots + a_{D} \mathbf{u}_{D}^{T}) (a_{1} \lambda_{1} \mathbf{u}_{1} + a_{2} \lambda_{2} \mathbf{u}_{2} + \dots + a_{D} \lambda_{D} \mathbf{u}_{D})$$
$$= \lambda_{1} a_{1}^{2} + \lambda_{2} a_{2}^{2} + \dots + \lambda_{D} a_{D}^{2}$$
Since $\boldsymbol{a}$ is real, the expression above will be strictly positive for any non-zero $\boldsymbol{a}$ if all eigenvalues are strictly positive. It is also clear that if an eigenvalue, $\lambda_i$ , is zero or negative, there will exist a vector $\boldsymbol{a}$ (e.g. $\boldsymbol{a} = \boldsymbol{u_i}$ ) for which this expression will be no greater than 0. Thus, having all eigenvalues strictly positive is a necessary and sufficient condition for a real symmetric matrix to be positive definite.
| 1,668
|
2
|
2.21
|
easy
|
Show that a real, symmetric matrix of size $D \times D$ has D(D+1)/2 independent parameters.
|
It is straightforward. For a symmetric matrix $\Lambda$ of size $D \times D$ , once the lower triangular part (including the diagonal) is decided, the whole matrix is decided due to symmetry. Hence the number of independent parameters is D + (D-1) + ... + 1, which equals D(D+1)/2.
| 268
|
2
|
2.22
|
easy
|
Show that the inverse of a symmetric matrix is itself symmetric.
|
Suppose A is a symmetric matrix, and we need to prove that $A^{-1}$ is also symmetric, i.e., $A^{-1} = (A^{-1})^T$ . Since identity matrix I is also symmetric, we have:
$$AA^{-1} = (AA^{-1})^T$$
And since $(\boldsymbol{A}\boldsymbol{B})^T = \boldsymbol{B}^T\boldsymbol{A}^T$ holds for arbitrary matrices A and B, we will obtain:
$$\boldsymbol{A}\boldsymbol{A}^{-1} = (\boldsymbol{A}^{-1})^T \boldsymbol{A}^T$$
Since $\mathbf{A} = \mathbf{A}^T$ , we substitute the right side:
$$\boldsymbol{A}\boldsymbol{A}^{-1} = (\boldsymbol{A}^{-1})^T \boldsymbol{A}$$
And note that $\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$ , we rearrange the order of the left side :
$$\boldsymbol{A}^{-1}\boldsymbol{A} = (\boldsymbol{A}^{-1})^T \boldsymbol{A}$$
Finally, by multiplying $A^{-1}$ to both sides, we can obtain:
$$A^{-1}AA^{-1} = (A^{-1})^T AA^{-1}$$
Using $AA^{-1} = I$ , we will get what we are asked:
$$\boldsymbol{A}^{-1} = \left(\boldsymbol{A}^{-1}\right)^T$$
| 941
|
2
|
2.23
|
medium
|
By diagonalizing the coordinate system using the eigenvector expansion $\mathbf{\Sigma}\mathbf{u}_i = \lambda_i \mathbf{u}_i$, show that the volume contained within the hyperellipsoid corresponding to a constant
Mahalanobis distance $\Delta$ is given by
$$V_D |\mathbf{\Sigma}|^{1/2} \Delta^D \tag{2.286}$$
where $V_D$ is the volume of the unit sphere in D dimensions, and the Mahalanobis distance is defined by $\Delta^{2} = (\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})$.
|
Let's reformulate the problem. What the problem wants us to prove is that if $(\mathbf{x} - \boldsymbol{\mu})^T \mathbf{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) = r^2$ , where $r^2$ is a constant, then the volume of the hyperellipsoid determined by this equation equals $V_D |\mathbf{\Sigma}|^{1/2} r^D$ . Note that the center of this hyperellipsoid is located at $\boldsymbol{\mu}$ , and a translation does not change its volume; thus we only need to prove that the volume of the hyperellipsoid $\mathbf{x}^T \mathbf{\Sigma}^{-1} \mathbf{x} = r^2$ , centered at $\mathbf{0}$ , equals $V_D |\mathbf{\Sigma}|^{1/2} r^D$ .
This problem can be viewed as two parts. Firstly, let's discuss $V_D$ , the volume of a unit sphere in dimension D. The expression of $V_D$ has already been given in the solution procedure of Prob.1.18, i.e., (1.144):
$$V_D = \frac{S_D}{D} = \frac{2\pi^{D/2}}{\Gamma(\frac{D}{2} + 1)}$$
And also in that procedure, we showed that a D dimensional sphere with radius r, i.e., $\mathbf{y}^T\mathbf{y} = r^2$ , has volume $V(r) = V_D r^D$ . We now move a step forward: we make the change of variables $\mathbf{y} = \mathbf{\Sigma}^{-1/2}\mathbf{x}$ , i.e., $\mathbf{x} = \mathbf{\Sigma}^{1/2}\mathbf{y}$ , so that the sphere $\mathbf{y}^T\mathbf{y} = r^2$ is mapped onto the hyperellipsoid $\mathbf{x}^T\mathbf{\Sigma}^{-1}\mathbf{x} = r^2$ centered at $\mathbf{0}$ . The volume of the image is obtained by multiplying V(r) by the determinant of the transformation matrix $\mathbf{\Sigma}^{1/2}$ , which gives $|\mathbf{\Sigma}|^{1/2}V_D r^D$ , just as required.
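As a sanity check (an addition, assuming NumPy), the following 2-D Monte Carlo sketch estimates the area of the ellipse $\boldsymbol{x}^T\boldsymbol{\Sigma}^{-1}\boldsymbol{x} \le r^2$ and compares it with $V_2 |\boldsymbol{\Sigma}|^{1/2} r^2$ , where $V_2 = \pi$ ; the covariance and radius are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(4)
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
r = 1.5

# The ellipse x^T Sigma^{-1} x <= r^2 lies inside the box |x_j| <= r * sqrt(Sigma_jj)
half_widths = r * np.sqrt(np.diag(Sigma))
box_area = np.prod(2 * half_widths)

n = 2_000_000
pts = rng.uniform(-half_widths, half_widths, size=(n, 2))
inside = np.einsum('ni,ij,nj->n', pts, Sigma_inv, pts) <= r ** 2

area_mc = box_area * inside.mean()
area_closed = np.pi * np.sqrt(np.linalg.det(Sigma)) * r ** 2   # V_2 |Sigma|^{1/2} r^2
print(area_mc, area_closed)   # should agree to a few decimal places
```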
| 1,555
|
2
|
2.24
|
medium
|
Prove the identity $\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{M} & -\mathbf{M}\mathbf{B}\mathbf{D}^{-1} \\ -\mathbf{D}^{-1}\mathbf{C}\mathbf{M} & \mathbf{D}^{-1} + \mathbf{D}^{-1}\mathbf{C}\mathbf{M}\mathbf{B}\mathbf{D}^{-1} \end{pmatrix}$ by multiplying both sides by the matrix
$$\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix} \tag{2.287}$$
and making use of the definition $\mathbf{M} = (\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}.$.
|
We just follow the hint. First, let's calculate:
$$\left[\begin{array}{cc} A & B \\ C & D \end{array}\right] \times \left[\begin{array}{cc} M & -MBD^{-1} \\ -D^{-1}CM & D^{-1} + D^{-1}CMBD^{-1} \end{array}\right]$$
The result can also be partitioned into four blocks. The block located at left top equals to :
$$AM - BD^{-1}CM = (A - BD^{-1}C)(A - BD^{-1}C)^{-1} = I$$
Where we have taken advantage of $\mathbf{M} = (\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}.$. And the right top equals to:
$$-AMBD^{-1} + BD^{-1} + BD^{-1}CMBD^{-1} = (I - AM + BD^{-1}CM)BD^{-1} = 0$$
Where we have used the result of the left top block. And the left bottom equals to :
$$CM - DD^{-1}CM = 0$$
And the right bottom equals to:
$$-CMBD^{-1} + DD^{-1} + DD^{-1}CMBD^{-1} = I$$
Hence we have proved the identity. Note: to be fully rigorous, one should also multiply the block matrix on the right side of the identity by the partitioned matrix in the other order and verify that the product is again an identity matrix. The same procedure applies there, so we omit it; moreover, if two square matrices X and Y satisfy XY = I, it can be shown that YX = I also holds.
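A direct numerical check of this identity (an addition, assuming NumPy); the block sizes and the diagonal shifts that keep D and M invertible are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
p, q = 3, 2
A = rng.standard_normal((p, p)) + p * np.eye(p)
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + q * np.eye(q)

Dinv = np.linalg.inv(D)
M = np.linalg.inv(A - B @ Dinv @ C)      # M = (A - B D^{-1} C)^{-1}

block = np.block([[A, B], [C, D]])
claimed_inv = np.block([[M, -M @ B @ Dinv],
                        [-Dinv @ C @ M, Dinv + Dinv @ C @ M @ B @ Dinv]])

print(np.allclose(np.linalg.inv(block), claimed_inv))   # True
```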
| 1,473
|
2
|
2.25
|
medium
|
In Sections 2.3.1 and 2.3.2, we considered the conditional and marginal distributions for a multivariate Gaussian. More generally, we can consider a partitioning of the components of $\mathbf{x}$ into three groups $\mathbf{x}_a$ , $\mathbf{x}_b$ , and $\mathbf{x}_c$ , with a corresponding partitioning of the mean vector $\boldsymbol{\mu}$ and of the covariance matrix $\boldsymbol{\Sigma}$ in the form
$$\mu = \begin{pmatrix} \mu_a \\ \mu_b \\ \mu_c \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} \Sigma_{aa} & \Sigma_{ab} & \Sigma_{ac} \\ \Sigma_{ba} & \Sigma_{bb} & \Sigma_{bc} \\ \Sigma_{ca} & \Sigma_{cb} & \Sigma_{cc} \end{pmatrix}. \tag{2.288}$$
By making use of the results of Section 2.3, find an expression for the conditional distribution $p(\mathbf{x}_a|\mathbf{x}_b)$ in which $\mathbf{x}_c$ has been marginalized out.
|
We will take advantage of the result of (2.94)-(2.98). Let's first begin by grouping $x_a$ and $x_b$ together, and then we rewrite what has been given as:
$$\boldsymbol{x} = \begin{pmatrix} \boldsymbol{x}_{a,b} \\ \boldsymbol{x}_c \end{pmatrix} \quad \boldsymbol{\mu} = \begin{pmatrix} \boldsymbol{\mu}_{a,b} \\ \boldsymbol{\mu}_c \end{pmatrix} \quad \boldsymbol{\Sigma} = \begin{bmatrix} \boldsymbol{\Sigma}_{(a,b)(a,b)} & \boldsymbol{\Sigma}_{(a,b)c} \\ \boldsymbol{\Sigma}_{c(a,b)} & \boldsymbol{\Sigma}_{cc} \end{bmatrix}$$
Then we take advantage of $p(\mathbf{x}_a) = \mathcal{N}(\mathbf{x}_a | \boldsymbol{\mu}_a, \boldsymbol{\Sigma}_{aa}).$, we can obtain:
$$p(\boldsymbol{x}_{a,b}) = \mathcal{N}(\boldsymbol{x}_{a,b}|\boldsymbol{\mu}_{a,b},\boldsymbol{\Sigma}_{(a,b)(a,b)})$$
Where we have defined:
$$\boldsymbol{\mu}_{a,b} = \left( \begin{array}{c} \boldsymbol{\mu}_a \\ \boldsymbol{\mu}_b \end{array} \right) \quad \boldsymbol{\Sigma}_{(a,b)(a,b)} = \left[ \begin{array}{cc} \boldsymbol{\Sigma}_{aa} & \boldsymbol{\Sigma}_{ab} \\ \boldsymbol{\Sigma}_{ba} & \boldsymbol{\Sigma}_{bb} \end{array} \right]$$
Now that we have obtained the joint distribution of $x_a$ and $x_b$ , we will take advantage of $p(\mathbf{x}_a|\mathbf{x}_b) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}_{a|b}, \boldsymbol{\Lambda}_{aa}^{-1})$ and $\boldsymbol{\mu}_{a|b} = \boldsymbol{\mu}_a - \boldsymbol{\Lambda}_{aa}^{-1} \boldsymbol{\Lambda}_{ab} (\mathbf{x}_b - \boldsymbol{\mu}_b)$ to obtain the conditional distribution, which gives:
$$p(\mathbf{x}_a|\mathbf{x}_b) = \mathcal{N}(\mathbf{x}|\mathbf{\mu}_{a|b}, \mathbf{\Lambda}_{aa}^{-1})$$
Where we have defined
$$\boldsymbol{\mu}_{a|b} = \boldsymbol{\mu}_a - \boldsymbol{\Lambda}_{aa}^{-1} \boldsymbol{\Lambda}_{ab} (\boldsymbol{x}_b - \boldsymbol{\mu}_b)$$
And the expression of $\Lambda_{aa}^{-1}$ and $\Lambda_{ab}$ can be given by using $\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{M} & -\mathbf{M}\mathbf{B}\mathbf{D}^{-1} \\ -\mathbf{D}^{-1}\mathbf{C}\mathbf{M} & \mathbf{D}^{-1} + \mathbf{D}^{-1}\mathbf{C}\mathbf{M}\mathbf{B}\mathbf{D}^{-1} \end{pmatrix}$ and $\mathbf{M} = (\mathbf{A} - \mathbf{B}\mathbf{D}^{-1}\mathbf{C})^{-1}.$ once we notice that the following relation exits:
$$\left[\begin{array}{cc} \boldsymbol{\Lambda}_{aa} & \boldsymbol{\Lambda}_{ab} \\ \boldsymbol{\Lambda}_{ba} & \boldsymbol{\Lambda}_{bb} \end{array}\right] = \left[\begin{array}{cc} \boldsymbol{\Sigma}_{aa} & \boldsymbol{\Sigma}_{ab} \\ \boldsymbol{\Sigma}_{ba} & \boldsymbol{\Sigma}_{bb} \end{array}\right]^{-1}$$
| 2,588
|
2
|
2.26
|
medium
|
A very useful result from linear algebra is the *Woodbury* matrix inversion formula given by
$$(\mathbf{A} + \mathbf{BCD})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}(\mathbf{C}^{-1} + \mathbf{D}\mathbf{A}^{-1}\mathbf{B})^{-1}\mathbf{D}\mathbf{A}^{-1}.$$
By multiplying both sides by (A + BCD) prove the correctness of this result.
|
This problem is quite straightforward, if we just follow the hint.
$$(A + BCD) (A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1})$$
$$= AA^{-1} - AA^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} + BCDA^{-1} - BCDA^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$$
$$= I - B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} + BCDA^{-1} + B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} - BCDA^{-1}$$
$$= I$$
Where we have taken advantage of
$$-BCDA^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$$
$$= -BC(-C^{-1} + C^{-1} + DA^{-1}B)(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$$
$$= (-BC)(-C^{-1})(C^{-1} + DA^{-1}B)^{-1}DA^{-1} + (-BC)(C^{-1} + DA^{-1}B)(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$$
$$= B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} - BCDA^{-1}$$
Here we will also directly calculate the inverse matrix instead to give another solution. Let's first begin by introducing two useful formulas.
$$(I + P)^{-1} = (I + P)^{-1}(I + P - P)$$
= $I - (I + P)^{-1}P$
And since
$$P + PQP = P(I + QP) = (I + PQ)P$$
The second formula is:
$$(\boldsymbol{I} + \boldsymbol{P}\boldsymbol{Q})^{-1}\boldsymbol{P} = \boldsymbol{P}(\boldsymbol{I} + \boldsymbol{Q}\boldsymbol{P})^{-1}$$
And now let's directly calculate $(A + BCD)^{-1}$ :
$$(A + BCD)^{-1} = [A(I + A^{-1}BCD)]^{-1}$$
$$= (I + A^{-1}BCD)^{-1}A^{-1}$$
$$= [I - (I + A^{-1}BCD)^{-1}A^{-1}BCD]A^{-1}$$
$$= A^{-1} - (I + A^{-1}BCD)^{-1}A^{-1}BCDA^{-1}$$
Where we have assumed that $\boldsymbol{A}$ is invertible and also used the first formula we introduced. Then we also assume that $\boldsymbol{C}$ is invertible and recursively use the second formula:
$$(A + BCD)^{-1} = A^{-1} - (I + A^{-1}BCD)^{-1}A^{-1}BCDA^{-1}$$
$$= A^{-1} - A^{-1}(I + BCDA^{-1})^{-1}BCDA^{-1}$$
$$= A^{-1} - A^{-1}B(I + CDA^{-1}B)^{-1}CDA^{-1}$$
$$= A^{-1} - A^{-1}B[C(C^{-1} + DA^{-1}B)]^{-1}CDA^{-1}$$
$$= A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}C^{-1}CDA^{-1}$$
$$= A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$$
Just as required.
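A quick numerical verification of the Woodbury identity (an addition, assuming NumPy); the matrix sizes and the diagonal shifts ensuring invertibility are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, k))
C = rng.standard_normal((k, k)) + k * np.eye(k)
D = rng.standard_normal((k, n))

Ainv, Cinv = np.linalg.inv(A), np.linalg.inv(C)
lhs = np.linalg.inv(A + B @ C @ D)
rhs = Ainv - Ainv @ B @ np.linalg.inv(Cinv + D @ Ainv @ B) @ D @ Ainv

print(np.allclose(lhs, rhs))   # True
```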
| 1,908
|
2
|
2.27
|
easy
|
Let $\mathbf{x}$ and $\mathbf{z}$ be two independent random vectors, so that $p(\mathbf{x}, \mathbf{z}) = p(\mathbf{x})p(\mathbf{z})$ . Show that the mean of their sum $\mathbf{y} = \mathbf{x} + \mathbf{z}$ is given by the sum of the means of each of the variable separately. Similarly, show that the covariance matrix of $\mathbf{y}$ is given by the sum of the covariance matrices of $\mathbf{x}$ and $\mathbf{z}$ . Confirm that this result agrees with that of Exercise 1.10.
|
The same procedure used in Prob.1.10 can be used here similarly.
$$\mathbb{E}[\mathbf{x}+\mathbf{z}] = \int \int (\mathbf{x}+\mathbf{z})p(\mathbf{x},\mathbf{z})d\mathbf{x}d\mathbf{z}$$
$$= \int \int (\mathbf{x}+\mathbf{z})p(\mathbf{x})p(\mathbf{z})d\mathbf{x}d\mathbf{z}$$
$$= \int \int \mathbf{x}p(\mathbf{x})p(\mathbf{z})d\mathbf{x}d\mathbf{z} + \int \int \mathbf{z}p(\mathbf{x})p(\mathbf{z})d\mathbf{x}d\mathbf{z}$$
$$= \int (\int p(\mathbf{z})d\mathbf{z})\mathbf{x}p(\mathbf{x})d\mathbf{x} + \int (\int p(\mathbf{x})d\mathbf{x})\mathbf{z}p(\mathbf{z})d\mathbf{z}$$
$$= \int \mathbf{x}p(\mathbf{x})d\mathbf{x} + \int \mathbf{z}p(\mathbf{z})d\mathbf{z}$$
$$= \mathbb{E}[\mathbf{x}] + \mathbb{E}[\mathbf{z}]$$
And for covariance matrix, we will use matrix integral:
$$cov[x+z] = \int \int (x+z-\mathbb{E}[x+z])(x+z-\mathbb{E}[x+z])^T p(x,z) dx dz$$
Also the same procedure can be used here. We omit the proof for simplicity.
| 934
|
2
|
2.28
|
hard
|
Consider a joint distribution over the variable
$$\mathbf{z} = \begin{pmatrix} \mathbf{x} \\ \mathbf{y} \end{pmatrix} \tag{2.290}$$
whose mean and covariance are given by $\mathbb{E}[\mathbf{z}] = \begin{pmatrix} \boldsymbol{\mu} \\ \mathbf{A}\boldsymbol{\mu} + \mathbf{b} \end{pmatrix}.$ and $cov[\mathbf{z}] = \mathbf{R}^{-1} = \begin{pmatrix} \mathbf{\Lambda}^{-1} & \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \\ \mathbf{A} \mathbf{\Lambda}^{-1} & \mathbf{L}^{-1} + \mathbf{A} \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \end{pmatrix}.$ respectively. By making use of the results $\mathbb{E}[\mathbf{x}_a] = \boldsymbol{\mu}_a$ and $cov[\mathbf{x}_a] = \mathbf{\Sigma}_{aa}.$ show that the marginal distribution $p(\mathbf{x})$ is given $p(\mathbf{x}) = \mathcal{N}\left(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1}\right)$. Similarly, by making use of the results $\boldsymbol{\mu}_{a|b} = \boldsymbol{\mu}_a + \boldsymbol{\Sigma}_{ab} \boldsymbol{\Sigma}_{bb}^{-1} (\mathbf{x}_b - \boldsymbol{\mu}_b)$ and $\Sigma_{a|b} = \Sigma_{aa} - \Sigma_{ab} \Sigma_{bb}^{-1} \Sigma_{ba}.$ show that the conditional distribution $p(\mathbf{y}|\mathbf{x})$ is given by $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\mathbf{x} + \mathbf{b}, \mathbf{L}^{-1})$.
|
It is quite straightforward when we compare the problem with (2.94)-(2.98). We treat $\boldsymbol{x}$ in $\mathbf{x} = \begin{pmatrix} \mathbf{x}_a \\ \mathbf{x}_b \end{pmatrix}, \quad \boldsymbol{\mu} = \begin{pmatrix} \boldsymbol{\mu}_a \\ \boldsymbol{\mu}_b \end{pmatrix}$ as $\boldsymbol{z}$ in this problem, $\boldsymbol{x}_a$ in $\mathbf{x} = \begin{pmatrix} \mathbf{x}_a \\ \mathbf{x}_b \end{pmatrix}, \quad \boldsymbol{\mu} = \begin{pmatrix} \boldsymbol{\mu}_a \\ \boldsymbol{\mu}_b \end{pmatrix}$ as $\boldsymbol{x}$ in this problem, $\boldsymbol{x}_b$ in $\mathbf{x} = \begin{pmatrix} \mathbf{x}_a \\ \mathbf{x}_b \end{pmatrix}, \quad \boldsymbol{\mu} = \begin{pmatrix} \boldsymbol{\mu}_a \\ \boldsymbol{\mu}_b \end{pmatrix}$ as $\boldsymbol{y}$ in this problem. In other words, we rewrite the problem in the form of (2.94)-(2.98), which gives:
$$\boldsymbol{z} = \begin{pmatrix} \boldsymbol{x} \\ \boldsymbol{y} \end{pmatrix} \quad \mathbb{E}(\boldsymbol{z}) = \begin{pmatrix} \boldsymbol{\mu} \\ \boldsymbol{A}\boldsymbol{\mu} + \boldsymbol{b} \end{pmatrix} \quad cov(\boldsymbol{z}) = \begin{bmatrix} \boldsymbol{\Lambda}^{-1} & \boldsymbol{\Lambda}^{-1}\boldsymbol{A}^T \\ \boldsymbol{A}\boldsymbol{\Lambda}^{-1} & \boldsymbol{L}^{-1} + \boldsymbol{A}\boldsymbol{\Lambda}^{-1}\boldsymbol{A}^T \end{bmatrix}$$
By using $p(\mathbf{x}_a) = \mathcal{N}(\mathbf{x}_a | \boldsymbol{\mu}_a, \boldsymbol{\Sigma}_{aa}).$, we can obtain:
$$p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1})$$
And by using $p(\mathbf{x}_a|\mathbf{x}_b) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}_{a|b}, \boldsymbol{\Lambda}_{aa}^{-1})$ and $\boldsymbol{\mu}_{a|b} = \boldsymbol{\mu}_a - \boldsymbol{\Lambda}_{aa}^{-1} \boldsymbol{\Lambda}_{ab} (\mathbf{x}_b - \boldsymbol{\mu}_b).$, we can obtain:
$$p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\boldsymbol{\mu}_{\mathbf{y}|\mathbf{x}}, \boldsymbol{\Lambda}_{\mathbf{y}\mathbf{y}}^{-1})$$
Where $\Lambda_{yy}$ can be obtained by the right bottom part of $\mathbf{R} = \begin{pmatrix} \mathbf{\Lambda} + \mathbf{A}^{\mathrm{T}} \mathbf{L} \mathbf{A} & -\mathbf{A}^{\mathrm{T}} \mathbf{L} \\ -\mathbf{L} \mathbf{A} & \mathbf{L} \end{pmatrix}.$, which gives $\Lambda_{yy} = \mathbf{L}$ , i.e., the conditional covariance is $\Lambda_{yy}^{-1} = \mathbf{L}^{-1}$ , and you can also calculate it using $cov[\mathbf{z}] = \mathbf{R}^{-1} = \begin{pmatrix} \mathbf{\Lambda}^{-1} & \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \\ \mathbf{A} \mathbf{\Lambda}^{-1} & \mathbf{L}^{-1} + \mathbf{A} \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \end{pmatrix}.$ combined with $\begin{pmatrix} \mathbf{\Sigma}_{aa} & \mathbf{\Sigma}_{ab} \\ \mathbf{\Sigma}_{ba} & \mathbf{\Sigma}_{bb} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{\Lambda}_{aa} & \mathbf{\Lambda}_{ab} \\ \mathbf{\Lambda}_{ba} & \mathbf{\Lambda}_{bb} \end{pmatrix}$ and $\Lambda_{aa} = (\Sigma_{aa} - \Sigma_{ab} \Sigma_{bb}^{-1} \Sigma_{ba})^{-1}$. Finally the conditional mean is given by (2.97):
$$\boldsymbol{\mu_{y|x}} = \boldsymbol{A\mu} + \boldsymbol{b} - \boldsymbol{L}^{-1}(-\boldsymbol{L}\boldsymbol{A})(\boldsymbol{x} - \boldsymbol{\mu}) = \boldsymbol{A}\boldsymbol{x} + \boldsymbol{b}$$
| 3,105
|
2
|
2.29
|
medium
|
Using the partitioned matrix inversion formula $\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{pmatrix}^{-1} = \begin{pmatrix} \mathbf{M} & -\mathbf{M}\mathbf{B}\mathbf{D}^{-1} \\ -\mathbf{D}^{-1}\mathbf{C}\mathbf{M} & \mathbf{D}^{-1} + \mathbf{D}^{-1}\mathbf{C}\mathbf{M}\mathbf{B}\mathbf{D}^{-1} \end{pmatrix}$, show that the inverse of the precision matrix $\mathbf{R} = \begin{pmatrix} \mathbf{\Lambda} + \mathbf{A}^{\mathrm{T}} \mathbf{L} \mathbf{A} & -\mathbf{A}^{\mathrm{T}} \mathbf{L} \\ -\mathbf{L} \mathbf{A} & \mathbf{L} \end{pmatrix}.$ is given by the covariance matrix $cov[\mathbf{z}] = \mathbf{R}^{-1} = \begin{pmatrix} \mathbf{\Lambda}^{-1} & \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \\ \mathbf{A} \mathbf{\Lambda}^{-1} & \mathbf{L}^{-1} + \mathbf{A} \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \end{pmatrix}.$.
|
It is straightforward. Firstly, we calculate the left top block:
left top =
$$\left[ (\boldsymbol{\Lambda} + \boldsymbol{A}^T \boldsymbol{L} \boldsymbol{A}) - (-\boldsymbol{A}^T \boldsymbol{L})(\boldsymbol{L}^{-1})(-\boldsymbol{L} \boldsymbol{A}) \right]^{-1} = \boldsymbol{\Lambda}^{-1}$$
And then the right top block:
right top =
$$-\boldsymbol{\Lambda}^{-1}(-\boldsymbol{A}^T\boldsymbol{L})\boldsymbol{L}^{-1} = \boldsymbol{\Lambda}^{-1}\boldsymbol{A}^T$$
And then the left bottom block:
left bottom =
$$-\boldsymbol{L}^{-1}(-\boldsymbol{L}\boldsymbol{A})\boldsymbol{\Lambda}^{-1} = \boldsymbol{A}\boldsymbol{\Lambda}^{-1}$$
Finally the right bottom block:
right bottom =
$$L^{-1} + L^{-1}(-LA)\Lambda^{-1}(-A^TL)L^{-1} = L^{-1} + A\Lambda^{-1}A^T$$
| 763
|
2
|
2.3
|
medium
|
In this exercise, we prove that the binomial distribution $(m|N,\mu) = \binom{N}{m} \mu^m (1-\mu)^{N-m}$ is normalized. First use the definition $\binom{N}{m} \equiv \frac{N!}{(N-m)!m!}$ of the number of combinations of m identical objects chosen from a total of N to show that
$$\binom{N}{m} + \binom{N}{m-1} = \binom{N+1}{m}.$$
Use this result to prove by induction the following result
$$(1+x)^N = \sum_{m=0}^N \binom{N}{m} x^m$$
which is known as the *binomial theorem*, and which is valid for all real values of x. Finally, show that the binomial distribution is normalized, so that
$$\sum_{m=0}^{N} \binom{N}{m} \mu^m (1-\mu)^{N-m} = 1$$
which can be done by first pulling out a factor $(1 - \mu)^N$ out of the summation and then making use of the binomial theorem.
|
$\binom{N}{m} + \binom{N}{m-1} = \binom{N+1}{m}$ is an important property of combinations, which we have used before, for example in Prob.1.15. We will use the traditional notation $C_N^m$ to denote the number of ways of choosing m objects from a total of N. With the prior knowledge:
$$C_N^m = \frac{N!}{m!(N-m)!}$$
We evaluate the left side of (2.262):
$$C_N^m + C_N^{m-1} = \frac{N!}{m!(N-m)!} + \frac{N!}{(m-1)!(N-(m-1))!}$$
$$= \frac{N!}{(m-1)!(N-m)!} (\frac{1}{m} + \frac{1}{N-m+1})$$
$$= \frac{(N+1)!}{m!(N+1-m)!} = C_{N+1}^m$$
To prove $(1+x)^N = \sum_{m=0}^N \binom{N}{m} x^m$, here we will prove a more general form:
$$(x+y)^{N} = \sum_{m=0}^{N} C_{N}^{m} x^{m} y^{N-m}$$
(\*)
If we let y = 1, (\*) reduces to $(1+x)^N = \sum_{m=0}^N \binom{N}{m} x^m$. We will prove it by induction. First, (\*) obviously holds when N = 1. We assume that it holds for N, and we will prove that it also holds for N + 1.
$$(x+y)^{N+1} = (x+y) \sum_{m=0}^{N} C_N^m x^m y^{N-m}$$
$$= x \sum_{m=0}^{N} C_N^m x^m y^{N-m} + y \sum_{m=0}^{N} C_N^m x^m y^{N-m}$$
$$= \sum_{m=0}^{N} C_N^m x^{m+1} y^{N-m} + \sum_{m=0}^{N} C_N^m x^m y^{N+1-m}$$
$$= \sum_{m=1}^{N+1} C_N^{m-1} x^m y^{N+1-m} + \sum_{m=0}^{N} C_N^m x^m y^{N+1-m}$$
$$= \sum_{m=1}^{N} (C_N^{m-1} + C_N^m) x^m y^{N+1-m} + x^{N+1} + y^{N+1}$$
$$= \sum_{m=1}^{N} C_{N+1}^m x^m y^{N+1-m} + x^{N+1} + y^{N+1}$$
$$= \sum_{m=0}^{N+1} C_{N+1}^m x^m y^{N+1-m}$$
We have thus proved (\*). Therefore, if we let y = 1 in (\*), $(1+x)^N = \sum_{m=0}^N \binom{N}{m} x^m$ is proved; and if we let $x = \mu$ and $y = 1 - \mu$ , $\sum_{m=0}^{N} \binom{N}{m} \mu^m (1-\mu)^{N-m} = 1$ is proved.
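As a trivial numerical illustration of the normalization (an addition, using only the Python standard library), with arbitrary example values of N and $\mu$ :

```python
from math import comb

N, mu = 10, 0.3
total = sum(comb(N, m) * mu**m * (1 - mu)**(N - m) for m in range(N + 1))
print(total)   # 1.0 up to floating-point rounding
```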
| 1,687
|
2
|
2.30
|
easy
|
By starting from $\mathbb{E}[\mathbf{z}] = \mathbf{R}^{-1} \begin{pmatrix} \mathbf{\Lambda} \boldsymbol{\mu} - \mathbf{A}^{\mathrm{T}} \mathbf{L} \mathbf{b} \\ \mathbf{L} \mathbf{b} \end{pmatrix}.$ and making use of the result $cov[\mathbf{z}] = \mathbf{R}^{-1} = \begin{pmatrix} \mathbf{\Lambda}^{-1} & \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \\ \mathbf{A} \mathbf{\Lambda}^{-1} & \mathbf{L}^{-1} + \mathbf{A} \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \end{pmatrix}.$, verify the result $\mathbb{E}[\mathbf{z}] = \begin{pmatrix} \boldsymbol{\mu} \\ \mathbf{A}\boldsymbol{\mu} + \mathbf{b} \end{pmatrix}.$.
|
It is straightforward by multiplying $cov[\mathbf{z}] = \mathbf{R}^{-1} = \begin{pmatrix} \mathbf{\Lambda}^{-1} & \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \\ \mathbf{A} \mathbf{\Lambda}^{-1} & \mathbf{L}^{-1} + \mathbf{A} \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \end{pmatrix}.$ and $\mathbb{E}[\mathbf{z}] = \mathbf{R}^{-1} \begin{pmatrix} \mathbf{\Lambda} \boldsymbol{\mu} - \mathbf{A}^{\mathrm{T}} \mathbf{L} \mathbf{b} \\ \mathbf{L} \mathbf{b} \end{pmatrix}.$, which gives:
$$\begin{pmatrix} \mathbf{\Lambda}^{-1} & \mathbf{\Lambda}^{-1} \mathbf{A}^{T} \\ \mathbf{A}\mathbf{\Lambda}^{-1} & \mathbf{L}^{-1} + \mathbf{A}\mathbf{\Lambda}^{-1} \mathbf{A}^{T} \end{pmatrix} \begin{pmatrix} \mathbf{\Lambda}\boldsymbol{\mu} - \mathbf{A}^{T} \mathbf{L} \boldsymbol{b} \\ \mathbf{L} \boldsymbol{b} \end{pmatrix} = \begin{pmatrix} \boldsymbol{\mu} \\ \mathbf{A}\boldsymbol{\mu} + \boldsymbol{b} \end{pmatrix}$$
Just as required in the problem.
| 968
|
2
|
2.31
|
medium
|
Consider two multidimensional random vectors $\mathbf{x}$ and $\mathbf{z}$ having Gaussian distributions $p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}_{\mathbf{x}}, \boldsymbol{\Sigma}_{\mathbf{x}})$ and $p(\mathbf{z}) = \mathcal{N}(\mathbf{z}|\boldsymbol{\mu}_{\mathbf{z}}, \boldsymbol{\Sigma}_{\mathbf{z}})$ respectively, together with their sum $\mathbf{y} = \mathbf{x} + \mathbf{z}$ . Use the results $\mathbb{E}[\mathbf{y}] = \mathbf{A}\boldsymbol{\mu} + \mathbf{b}$ and $\operatorname{cov}[\mathbf{y}] = \mathbf{L}^{-1} + \mathbf{A}\mathbf{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}}.$ to find an expression for the marginal distribution $p(\mathbf{y})$ by considering the linear-Gaussian model comprising the product of the marginal distribution $p(\mathbf{x})$ and the conditional distribution $p(\mathbf{y}|\mathbf{x})$ .
|
According to the problem, we can write two expressions:
$$p(\mathbf{x}) = \mathcal{N}(\mathbf{x}|\boldsymbol{\mu}_{\mathbf{x}}, \boldsymbol{\Sigma}_{\mathbf{x}}), \quad p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\boldsymbol{\mu}_{\mathbf{z}} + \mathbf{x}, \boldsymbol{\Sigma}_{\mathbf{z}})$$
By comparing the expression above and (2.113)-(2.117), we can write the expression of p(y):
$$p(\mathbf{y}) = \mathcal{N}(\mathbf{y}|\boldsymbol{\mu}_{x} + \boldsymbol{\mu}_{z}, \boldsymbol{\Sigma}_{x} + \boldsymbol{\Sigma}_{z})$$
| 532
|
2
|
2.32
|
hard
|
This exercise and the next provide practice at manipulating the quadratic forms that arise in linear-Gaussian models, as well as giving an independent check of results derived in the main text. Consider a joint distribution $p(\mathbf{x}, \mathbf{y})$ defined by the marginal and conditional distributions given by $p(\mathbf{x}) = \mathcal{N}\left(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1}\right)$ and $p(\mathbf{y}|\mathbf{x}) = \mathcal{N}(\mathbf{y}|\mathbf{A}\mathbf{x} + \mathbf{b}, \mathbf{L}^{-1})$. By examining the quadratic form in the exponent of the joint distribution, and using the technique of 'completing the square' discussed in Section 2.3, find expressions for the mean and covariance of the marginal distribution $p(\mathbf{y})$ in which the variable $\mathbf{x}$ has been integrated out. To do this, make use of the Woodbury matrix inversion formula $(\mathbf{A} + \mathbf{BCD})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}(\mathbf{C}^{-1} + \mathbf{D}\mathbf{A}^{-1}\mathbf{B})^{-1}\mathbf{D}\mathbf{A}^{-1}.$. Verify that these results agree with $\mathbb{E}[\mathbf{y}] = \mathbf{A}\boldsymbol{\mu} + \mathbf{b}$ and $\operatorname{cov}[\mathbf{y}] = \mathbf{L}^{-1} + \mathbf{A}\mathbf{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}}.$ obtained using the results of Chapter 2.
|
Let's make this problem clearer. The derivation in the main text, i.e., (2.101)-(2.110), first defines a new random variable z corresponding to the joint distribution; then, by completing the square with respect to z, i.e., (2.103), it obtains the precision matrix R by comparing $= -\frac{1}{2}\begin{pmatrix} \mathbf{x} \\ \mathbf{y} \end{pmatrix}^{\mathrm{T}}\begin{pmatrix} \mathbf{\Lambda} + \mathbf{A}^{\mathrm{T}}\mathbf{L}\mathbf{A} & -\mathbf{A}^{\mathrm{T}}\mathbf{L} \\ -\mathbf{L}\mathbf{A} & \mathbf{L} \end{pmatrix}\begin{pmatrix} \mathbf{x} \\ \mathbf{y} \end{pmatrix} = -\frac{1}{2}\mathbf{z}^{\mathrm{T}}\mathbf{R}\mathbf{z} \quad$ with the exponent of a multivariate Gaussian PDF; it then inverts the precision matrix to obtain the covariance matrix, and finally uses the linear term, i.e., $\mathbf{x}^{\mathrm{T}} \mathbf{\Lambda} \boldsymbol{\mu} - \mathbf{x}^{\mathrm{T}} \mathbf{A}^{\mathrm{T}} \mathbf{L} \mathbf{b} + \mathbf{y}^{\mathrm{T}} \mathbf{L} \mathbf{b} = \begin{pmatrix} \mathbf{x} \\ \mathbf{y} \end{pmatrix}^{\mathrm{T}} \begin{pmatrix} \mathbf{\Lambda} \boldsymbol{\mu} - \mathbf{A}^{\mathrm{T}} \mathbf{L} \mathbf{b} \\ \mathbf{L} \mathbf{b} \end{pmatrix}.$ to calculate the mean.
In this problem, we are asked to solve the problem from another perspective: we need to write the joint distribution p(x, y) and then perform integration over x to obtain marginal distribution p(y). Let's begin by write the quadratic form in the exponential of p(x, y):
$$-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^T \boldsymbol{\Lambda}(\boldsymbol{x}-\boldsymbol{\mu}) - \frac{1}{2}(\boldsymbol{y}-\boldsymbol{A}\boldsymbol{x}-\boldsymbol{b})^T \boldsymbol{L}(\boldsymbol{y}-\boldsymbol{A}\boldsymbol{x}-\boldsymbol{b})$$
We extract those terms involving x:
$$= -\frac{1}{2} \mathbf{x}^{T} (\mathbf{\Lambda} + \mathbf{A}^{T} \mathbf{L} \mathbf{A}) \mathbf{x} + \mathbf{x}^{T} [\mathbf{\Lambda} \boldsymbol{\mu} + \mathbf{A}^{T} \mathbf{L} (\mathbf{y} - \mathbf{b})] + const$$
$$= -\frac{1}{2} (\mathbf{x} - \mathbf{m})^{T} (\mathbf{\Lambda} + \mathbf{A}^{T} \mathbf{L} \mathbf{A}) (\mathbf{x} - \mathbf{m}) + \frac{1}{2} \mathbf{m}^{T} (\mathbf{\Lambda} + \mathbf{A}^{T} \mathbf{L} \mathbf{A}) \mathbf{m} + const$$
Where we have defined:
$$\boldsymbol{m} = (\boldsymbol{\Lambda} + \boldsymbol{A}^T \boldsymbol{L} \boldsymbol{A})^{-1} [\boldsymbol{\Lambda} \boldsymbol{\mu} + \boldsymbol{A}^T \boldsymbol{L} (\boldsymbol{y} - \boldsymbol{b})]$$
Now if we perform integration over x, we will see that the first term vanish to a constant, and we extract the terms including y from the remaining parts, we can obtain :
$$= -\frac{1}{2} \mathbf{y}^{T} \left[ \mathbf{L} - \mathbf{L} \mathbf{A} (\mathbf{\Lambda} + \mathbf{A}^{T} \mathbf{L} \mathbf{A})^{-1} \mathbf{A}^{T} \mathbf{L} \right] \mathbf{y}$$
$$+ \mathbf{y}^{T} \left\{ \left[ \mathbf{L} - \mathbf{L} \mathbf{A} (\mathbf{\Lambda} + \mathbf{A}^{T} \mathbf{L} \mathbf{A})^{-1} \mathbf{A}^{T} \mathbf{L} \right] \mathbf{b}$$
$$+ \mathbf{L} \mathbf{A} (\mathbf{\Lambda} + \mathbf{A}^{T} \mathbf{L} \mathbf{A})^{-1} \mathbf{\Lambda} \boldsymbol{\mu} \right\}$$
We firstly view the quadratic term to obtain the precision matrix, and then we take advantage of $(\mathbf{A} + \mathbf{BCD})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}(\mathbf{C}^{-1} + \mathbf{D}\mathbf{A}^{-1}\mathbf{B})^{-1}\mathbf{D}\mathbf{A}^{-1}.$, we will obtain $\operatorname{cov}[\mathbf{y}] = \mathbf{L}^{-1} + \mathbf{A}\mathbf{\Lambda}^{-1}\mathbf{A}^{\mathrm{T}}.$. Finally, using the linear term combined with the already known covariance matrix, we can obtain $\mathbb{E}[\mathbf{y}] = \mathbf{A}\boldsymbol{\mu} + \mathbf{b}$.
| 3,754
|
2
|
2.33
|
hard
|
Consider the same joint distribution as in Exercise 2.32, but now use the technique of completing the square to find expressions for the mean and covariance of the conditional distribution $p(\mathbf{x}|\mathbf{y})$ . Again, verify that these agree with the corresponding expressions $\mathbb{E}[\mathbf{x}|\mathbf{y}] = (\mathbf{\Lambda} + \mathbf{A}^{\mathrm{T}}\mathbf{L}\mathbf{A})^{-1} \left\{ \mathbf{A}^{\mathrm{T}}\mathbf{L}(\mathbf{y} - \mathbf{b}) + \mathbf{\Lambda}\boldsymbol{\mu} \right\}$ and $cov[\mathbf{x}|\mathbf{y}] = (\mathbf{\Lambda} + \mathbf{A}^{\mathrm{T}}\mathbf{L}\mathbf{A})^{-1}.$.
|
According to Bayesian Formula, we can write $p(\mathbf{x}|\mathbf{y}) = \frac{p(\mathbf{x},\mathbf{y})}{p(\mathbf{y})}$ , where we have already known the joint distribution $p(\mathbf{x},\mathbf{y})$ in $cov[\mathbf{z}] = \mathbf{R}^{-1} = \begin{pmatrix} \mathbf{\Lambda}^{-1} & \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \\ \mathbf{A} \mathbf{\Lambda}^{-1} & \mathbf{L}^{-1} + \mathbf{A} \mathbf{\Lambda}^{-1} \mathbf{A}^{\mathrm{T}} \end{pmatrix}.$ and $\mathbb{E}[\mathbf{z}] = \begin{pmatrix} \boldsymbol{\mu} \\ \mathbf{A}\boldsymbol{\mu} + \mathbf{b} \end{pmatrix}.$, and the marginal distribution $p(\mathbf{y})$ in Prob.2.32., we can follow the same procedure in Prob.2.32., i.e. firstly obtain the covariance matrix from the quadratic term and then obtain the mean from the linear term. The details are omitted here.
| 852
|
2
|
2.34
|
medium
|
To find the maximum likelihood solution for the covariance matrix of a multivariate Gaussian, we need to maximize the log likelihood function $\ln p(\mathbf{X}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) = -\frac{ND}{2} \ln(2\pi) - \frac{N}{2} \ln |\boldsymbol{\Sigma}| - \frac{1}{2} \sum_{n=1}^{N} (\mathbf{x}_n - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x}_n - \boldsymbol{\mu}). \quad$ with respect to Σ, noting that the covariance matrix must be symmetric and positive definite. Here we proceed by ignoring these constraints and doing a straightforward maximization. Using the results (C.21), (C.26), and (C.28) from Appendix C, show that the covariance matrix Σ that maximizes the log likelihood function $\ln p(\mathbf{X}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) = -\frac{ND}{2} \ln(2\pi) - \frac{N}{2} \ln |\boldsymbol{\Sigma}| - \frac{1}{2} \sum_{n=1}^{N} (\mathbf{x}_n - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x}_n - \boldsymbol{\mu}). \quad$ is given by the sample covariance $\Sigma_{\mathrm{ML}} = \frac{1}{N} \sum_{n=1}^{N} (\mathbf{x}_{n} - \boldsymbol{\mu}_{\mathrm{ML}}) (\mathbf{x}_{n} - \boldsymbol{\mu}_{\mathrm{ML}})^{\mathrm{T}}$. We note that the final result is necessarily symmetric and positive definite (provided the sample covariance is nonsingular).
|
Let's follow the hint by firstly calculating the derivative of $\ln p(\mathbf{X}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) = -\frac{ND}{2} \ln(2\pi) - \frac{N}{2} \ln |\boldsymbol{\Sigma}| - \frac{1}{2} \sum_{n=1}^{N} (\mathbf{x}_n - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x}_n - \boldsymbol{\mu}). \quad$ with respect to $\Sigma$ and let it equal to 0:
$$-\frac{N}{2}\frac{\partial}{\partial \Sigma}ln|\Sigma| - \frac{1}{2}\frac{\partial}{\partial \Sigma}\sum_{n=1}^{N}(\boldsymbol{x_n} - \boldsymbol{\mu})^T \Sigma^{-1}(\boldsymbol{x_n} - \boldsymbol{\mu}) = 0$$
By using (C.28), the first term can be reduced to:
$$-\frac{N}{2}\frac{\partial}{\partial \boldsymbol{\Sigma}}ln|\boldsymbol{\Sigma}| = -\frac{N}{2}(\boldsymbol{\Sigma}^{-1})^T = -\frac{N}{2}\boldsymbol{\Sigma}^{-1}$$
Anticipating the result that the optimal covariance matrix is the sample covariance, we denote the sample covariance matrix S as :
$$S = \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu)(x_n - \mu)^T$$
We rewrite the second term:
second term =
$$-\frac{1}{2} \frac{\partial}{\partial \Sigma} \sum_{n=1}^{N} (\mathbf{x_n} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x_n} - \boldsymbol{\mu})$$
= $-\frac{N}{2} \frac{\partial}{\partial \Sigma} Tr[\Sigma^{-1} S]$
= $\frac{N}{2} \Sigma^{-1} S \Sigma^{-1}$
Where we have taken advantage of the following property, combined with the fact that S and $\Sigma$ are symmetric. (Note: this property can be found in *The Matrix Cookbook*.)
$$\frac{\partial}{\partial \boldsymbol{X}}Tr(\boldsymbol{A}\boldsymbol{X}^{-1}\boldsymbol{B}) = -(\boldsymbol{X}^{-1}\boldsymbol{B}\boldsymbol{A}\boldsymbol{X}^{-1})^T = -(\boldsymbol{X}^{-1})^T\boldsymbol{A}^T\boldsymbol{B}^T(\boldsymbol{X}^{-1})^T$$
Thus we obtain:
$$-\frac{N}{2}\boldsymbol{\Sigma}^{-1} + \frac{N}{2}\boldsymbol{\Sigma}^{-1}\mathbf{S}\boldsymbol{\Sigma}^{-1} = 0$$
Obviously, we obtain $\Sigma = S$ , just as required.
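As a numerical illustration (an addition, assuming NumPy), the sketch below evaluates the Gaussian log likelihood at the sample covariance S and at nearby perturbed covariances, confirming that S attains the larger value; the data-generating parameters and the perturbation size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)
N, D = 500, 2
X = rng.multivariate_normal([0.0, 0.0], [[2.0, 0.5], [0.5, 1.0]], size=N)

mu = X.mean(axis=0)
S = (X - mu).T @ (X - mu) / N            # sample covariance (the claimed maximizer)

def log_lik(Sigma):
    diff = X - mu
    maha = np.einsum('ni,ij,nj->', diff, np.linalg.inv(Sigma), diff)
    return -0.5 * (N * D * np.log(2 * np.pi) + N * np.linalg.slogdet(Sigma)[1] + maha)

for eps in (0.05, -0.05):
    print(log_lik(S) > log_lik(S + eps * np.eye(D)))   # True, True
```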
| 1,931
|
2
|
2.35
|
medium
|
Use the result $\mathbb{E}[\mathbf{x}] = \boldsymbol{\mu}$ to prove $\mathbb{E}[\mathbf{x}\mathbf{x}^{\mathrm{T}}] = \boldsymbol{\mu}\boldsymbol{\mu}^{\mathrm{T}} + \boldsymbol{\Sigma}.$. Now, using the results $\mathbb{E}[\mathbf{x}] = \boldsymbol{\mu}$, and $\mathbb{E}[\mathbf{x}\mathbf{x}^{\mathrm{T}}] = \boldsymbol{\mu}\boldsymbol{\mu}^{\mathrm{T}} + \boldsymbol{\Sigma}.$, show that
$$\mathbb{E}[\mathbf{x}_n \mathbf{x}_m] = \boldsymbol{\mu} \boldsymbol{\mu}^{\mathrm{T}} + I_{nm} \boldsymbol{\Sigma}$$
where $\mathbf{x}_n$ denotes a data point sampled from a Gaussian distribution with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$ , and $I_{nm}$ denotes the (n,m) element of the identity matrix. Hence prove the result $\mathbb{E}[\Sigma_{\mathrm{ML}}] = \frac{N-1}{N} \Sigma.$.
|
The proof of $\mathbb{E}[\mathbf{x}\mathbf{x}^{\mathrm{T}}] = \boldsymbol{\mu}\boldsymbol{\mu}^{\mathrm{T}} + \boldsymbol{\Sigma}.$ is quite clear in the main text, i.e., from page 82 to page 83 and hence we won't repeat it here. Let's prove $\mathbb{E}[\Sigma_{\mathrm{ML}}] = \frac{N-1}{N} \Sigma.$. We first begin by proving (2.123):
$$\mathbb{E}[\boldsymbol{\mu_{ML}}] = \frac{1}{N} \mathbb{E}[\sum_{n=1}^{N} \boldsymbol{x_n}] = \frac{1}{N} \cdot N\boldsymbol{\mu} = \boldsymbol{\mu}$$
Where we have taken advantage of the fact that $x_n$ is independently and identically distributed (i.i.d).
Then we use the expression in (2.122):
$$\mathbb{E}[\mathbf{\Sigma}_{ML}] = \frac{1}{N} \mathbb{E}[\sum_{n=1}^{N} (\mathbf{x}_{n} - \boldsymbol{\mu}_{ML})(\mathbf{x}_{n} - \boldsymbol{\mu}_{ML})^{T}]$$
$$= \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}[(\mathbf{x}_{n} - \boldsymbol{\mu}_{ML})(\mathbf{x}_{n} - \boldsymbol{\mu}_{ML})^{T}]$$
$$= \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}[\mathbf{x}_{n} \mathbf{x}_{n}^{T} - 2\boldsymbol{\mu}_{ML} \mathbf{x}_{n}^{T} + \boldsymbol{\mu}_{ML} \boldsymbol{\mu}_{ML}^{T}]$$
$$= \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}[\mathbf{x}_{n} \mathbf{x}_{n}^{T}] - 2\frac{1}{N} \sum_{n=1}^{N} \mathbb{E}[\boldsymbol{\mu}_{ML} \mathbf{x}_{n}^{T}] + \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}[\boldsymbol{\mu}_{ML} \boldsymbol{\mu}_{ML}^{T}]$$
By using $\mathbb{E}[\mathbf{x}_n \mathbf{x}_m] = \boldsymbol{\mu} \boldsymbol{\mu}^{\mathrm{T}} + I_{nm} \boldsymbol{\Sigma}$, the first term will equal to:
first term =
$$\frac{1}{N} \cdot N(\mu \mu^T + \Sigma) = \mu \mu^T + \Sigma$$
The second term will equal to:
second term =
$$-2\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}[\boldsymbol{\mu_{ML}x_n}^T]$$
= $-2\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}[\frac{1}{N}(\sum_{m=1}^{N}\boldsymbol{x_m})\boldsymbol{x_n}^T]$
= $-2\frac{1}{N^2}\sum_{n=1}^{N}\sum_{m=1}^{N}\mathbb{E}[\boldsymbol{x_mx_n}^T]$
= $-2\frac{1}{N^2}\sum_{n=1}^{N}\sum_{m=1}^{N}(\boldsymbol{\mu\boldsymbol{\mu}^T} + \boldsymbol{I_{nm}\boldsymbol{\Sigma}})$
= $-2\frac{1}{N^2}(N^2\boldsymbol{\mu\boldsymbol{\mu}^T} + N\boldsymbol{\Sigma})$
= $-2(\boldsymbol{\mu\boldsymbol{\mu}^T} + \frac{1}{N}\boldsymbol{\Sigma})$
Similarly, the third term will equal to:
third term
$$= \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}[\boldsymbol{\mu_{ML}} \boldsymbol{\mu_{ML}}^T]$$
$$= \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}[(\frac{1}{N} \sum_{j=1}^{N} \boldsymbol{x_j}) \cdot (\frac{1}{N} \sum_{i=1}^{N} \boldsymbol{x_i})^T]$$
$$= \frac{1}{N^3} \sum_{n=1}^{N} \mathbb{E}[(\sum_{j=1}^{N} \boldsymbol{x_j}) \cdot (\sum_{i=1}^{N} \boldsymbol{x_i})^T]$$
$$= \frac{1}{N^3} \sum_{n=1}^{N} (N^2 \boldsymbol{\mu} \boldsymbol{\mu}^T + N\boldsymbol{\Sigma})$$
$$= \boldsymbol{\mu} \boldsymbol{\mu}^T + \frac{1}{N} \boldsymbol{\Sigma}$$
Finally, we combine those three terms, which gives:
$$\mathbb{E}[\mathbf{\Sigma}_{\boldsymbol{ML}}] = \frac{N-1}{N} \mathbf{\Sigma}$$
Note: the same procedure from $\mathbb{E}[\mathbf{x}] = \boldsymbol{\mu}$ to $\mathbb{E}[\mathbf{x}\mathbf{x}^{\mathrm{T}}] = \boldsymbol{\mu}\boldsymbol{\mu}^{\mathrm{T}} + \boldsymbol{\Sigma}.$ can be carried out to prove $\mathbb{E}[\mathbf{x}_n \mathbf{x}_m] = \boldsymbol{\mu} \boldsymbol{\mu}^{\mathrm{T}} + I_{nm} \boldsymbol{\Sigma}$ , and the only difference is that we need to introduce indices m and n to label the samples. $\mathbb{E}[\mathbf{x}_n \mathbf{x}_m] = \boldsymbol{\mu} \boldsymbol{\mu}^{\mathrm{T}} + I_{nm} \boldsymbol{\Sigma}$ is quite straightforward if we see it in this way: if m = n, which means $x_n$ and $x_m$ are actually the same sample, $\mathbb{E}[\mathbf{x}_n \mathbf{x}_m] = \boldsymbol{\mu} \boldsymbol{\mu}^{\mathrm{T}} + I_{nm} \boldsymbol{\Sigma}$ reduces to $\mathbb{E}[\mathbf{x}\mathbf{x}^{\mathrm{T}}] = \boldsymbol{\mu}\boldsymbol{\mu}^{\mathrm{T}} + \boldsymbol{\Sigma}$ (i.e. the correlation between different dimensions exists), and if $m \neq n$ , which means $x_n$ and $x_m$ are different, i.i.d samples, then no correlation should exist, and we can expect $\mathbb{E}[x_n x_m^T] = \mu \mu^T$ in this case.
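A Monte Carlo illustration of this bias (an addition, assuming NumPy); the true covariance, the sample size N and the number of repetitions are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(8)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
mu = np.zeros(2)
N, repeats = 5, 50_000

acc = np.zeros_like(Sigma)
for _ in range(repeats):
    X = rng.multivariate_normal(mu, Sigma, size=N)
    m = X.mean(axis=0)
    acc += (X - m).T @ (X - m) / N       # Sigma_ML for this data set

print(acc / repeats)                     # close to (N-1)/N * Sigma
print((N - 1) / N * Sigma)
```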
| 4,249
|
2
|
2.36
|
medium
|
Using an analogous procedure to that used to obtain $= \mu_{\text{ML}}^{(N-1)} + \frac{1}{N} (\mathbf{x}_{N} - \mu_{\text{ML}}^{(N-1)}).$, derive an expression for the sequential estimation of the variance of a univariate Gaussian
distribution, by starting with the maximum likelihood expression
$$\sigma_{\rm ML}^2 = \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu)^2.$$
$\sigma_{\rm ML}^2 = \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu)^2.$
Verify that substituting the expression for a Gaussian distribution into the Robbins-Monro sequential estimation formula $\theta^{(N)} = \theta^{(N-1)} + a_{N-1} \frac{\partial}{\partial \theta^{(N-1)}} \ln p(x_N | \theta^{(N-1)}).$ gives a result of the same form, and hence obtain an expression for the corresponding coefficients $a_N$ .
|
Let's follow the hint. However, we will first derive the sequential expression directly from the definition, which will make it easier to identify the coefficient $a_{N-1}$ later. Suppose we have N observations in total; then we can write:
$$\begin{split} \sigma_{ML}^{2(N)} &= \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu_{ML}^{(N)})^2 \\ &= \frac{1}{N} \left[ \sum_{n=1}^{N-1} (x_n - \mu_{ML}^{(N)})^2 + (x_N - \mu_{ML}^{(N)})^2 \right] \\ &= \frac{N-1}{N} \frac{1}{N-1} \sum_{n=1}^{N-1} (x_n - \mu_{ML}^{(N)})^2 + \frac{1}{N} (x_N - \mu_{ML}^{(N)})^2 \\ &= \frac{N-1}{N} \sigma_{ML}^{2(N-1)} + \frac{1}{N} (x_N - \mu_{ML}^{(N)})^2 \\ &= \sigma_{ML}^{2(N-1)} + \frac{1}{N} \left[ (x_N - \mu_{ML}^{(N)})^2 - \sigma_{ML}^{2(N-1)} \right] \end{split}$$
And then let us write down the stationarity condition defining $\sigma_{ML}^2$ :
$$\frac{\partial}{\partial \sigma^2} \left\{ \frac{1}{N} \sum_{n=1}^{N} ln p(x_n | \mu, \sigma) \right\} \bigg|_{\sigma_{ML}} = 0$$
By exchanging the summation and the derivative, and letting $N \to +\infty$ , we can obtain :
$$\lim_{N \to +\infty} \frac{1}{N} \sum_{n=1}^{N} \frac{\partial}{\partial \sigma^2} ln p(x_n | \mu, \sigma) = \mathbb{E}_x \left[ \frac{\partial}{\partial \sigma^2} ln p(x_n | \mu, \sigma) \right]$$
Comparing it with $f(\theta) \equiv \mathbb{E}[z|\theta] = \int zp(z|\theta) dz$, we can obtain the sequential formula to estimate $\sigma_{ML}$ :
$$\begin{split} \sigma_{ML}^{2(N)} &= \sigma_{ML}^{2(N-1)} + a_{N-1} \frac{\partial}{\partial \sigma_{ML}^{2(N-1)}} lnp(x_N | \mu_{ML}^{(N)}, \sigma_{ML}^{(N-1)}) & (*) \\ &= \sigma_{ML}^{2(N-1)} + a_{N-1} \left[ -\frac{1}{2\sigma_{ML}^{2(N-1)}} + \frac{(x_N - \mu_{ML}^{(N)})^2}{2\sigma_{ML}^{4(N-1)}} \right] \end{split}$$
Where we use $\sigma_{ML}^{2(N)}$ to represent the Nth estimation of $\sigma_{ML}^2$ , i.e., the estimation of $\sigma_{ML}^2$ after the Nth observation. What's more, if we choose :
$$a_{N-1} = \frac{2\sigma_{ML}^{4(N-1)}}{N}$$
Then we will obtain:
$$\sigma_{ML}^{2(N)} = \sigma_{ML}^{2(N-1)} + \frac{1}{N} \left[ -\sigma_{ML}^{2(N-1)} + (x_N - \mu_{ML}^{(N)})^2 \right]$$
We can see that the results are the same. An important thing should be noticed: in maximum likelihood, when estimating the variance $\sigma_{ML}^{2(N)}$ , we first estimate the mean $\mu_{ML}^{(N)}$ , and then we calculate the variance $\sigma_{ML}^{2(N)}$ .
In other words, they are decoupled. It is the same in sequential method. For instance, if we want to estimate both mean and variance sequentially, after observing the Nth sample (i.e., $x_N$ ), firstly we can use $\mu_{ML}^{(N-1)}$ together with $= \mu_{\text{ML}}^{(N-1)} + \frac{1}{N} (\mathbf{x}_{N} - \mu_{\text{ML}}^{(N-1)}).$ to estimate $\mu_{ML}^{(N)}$ and then use the conclusion in this problem to obtain $\sigma_{ML}^{(N)}$ . That is why in (\*) we write $lnp(x_N|\mu_{ML}^{(N)},\sigma_{ML}^{(N-1)})$ instead of $lnp(x_N|\mu_{ML}^{(N-1)},\sigma_{ML}^{(N-1)})$ .
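The following small Python sketch (an addition, assuming NumPy) runs the sequential updates for the mean and the variance on a simulated data stream and compares the final value with the batch ML variance; agreement is approximate, since each update uses the mean estimate available at that step.

```python
import numpy as np

rng = np.random.default_rng(9)
data = rng.normal(1.0, 2.0, size=1000)

mu, var = 0.0, 0.0
for n, x in enumerate(data, start=1):
    mu = mu + (x - mu) / n                    # sequential update of mu_ML^(n)
    var = var + ((x - mu) ** 2 - var) / n     # sequential update of sigma_ML^2(n), using the updated mean

print(var, data.var())   # np.var uses 1/N by default, i.e. the batch ML variance
```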
| 2,963
|
2
|
2.37
|
medium
|
Using an analogous procedure to that used to obtain $= \mu_{\text{ML}}^{(N-1)} + \frac{1}{N} (\mathbf{x}_{N} - \mu_{\text{ML}}^{(N-1)}).$, derive an expression for the sequential estimation of the covariance of a multivariate Gaussian distribution, by starting with the maximum likelihood expression $\Sigma_{\mathrm{ML}} = \frac{1}{N} \sum_{n=1}^{N} (\mathbf{x}_{n} - \boldsymbol{\mu}_{\mathrm{ML}}) (\mathbf{x}_{n} - \boldsymbol{\mu}_{\mathrm{ML}})^{\mathrm{T}}$. Verify that substituting the expression for a Gaussian distribution into the Robbins-Monro sequential estimation formula $\theta^{(N)} = \theta^{(N-1)} + a_{N-1} \frac{\partial}{\partial \theta^{(N-1)}} \ln p(x_N | \theta^{(N-1)}).$ gives a result of the same form, and hence obtain an expression for the corresponding coefficients $a_N$ .
|
We follow the same procedure in Prob.2.36 to solve this problem. Firstly,
we can obtain the sequential formula based on definition.
$$\begin{split} \boldsymbol{\Sigma}_{ML}^{(N)} &= \frac{1}{N} \sum_{n=1}^{N} (\boldsymbol{x}_{n} - \boldsymbol{\mu}_{ML}^{(N)}) (\boldsymbol{x}_{n} - \boldsymbol{\mu}_{ML}^{(N)})^{T} \\ &= \frac{1}{N} \left[ \sum_{n=1}^{N-1} (\boldsymbol{x}_{n} - \boldsymbol{\mu}_{ML}^{(N)}) (\boldsymbol{x}_{n} - \boldsymbol{\mu}_{ML}^{(N)})^{T} + (\boldsymbol{x}_{N} - \boldsymbol{\mu}_{ML}^{(N)}) (\boldsymbol{x}_{N} - \boldsymbol{\mu}_{ML}^{(N)})^{T} \right] \\ &= \frac{N-1}{N} \boldsymbol{\Sigma}_{ML}^{(N-1)} + \frac{1}{N} (\boldsymbol{x}_{N} - \boldsymbol{\mu}_{ML}^{(N)}) (\boldsymbol{x}_{N} - \boldsymbol{\mu}_{ML}^{(N)})^{T} \\ &= \boldsymbol{\Sigma}_{ML}^{(N-1)} + \frac{1}{N} \left[ (\boldsymbol{x}_{N} - \boldsymbol{\mu}_{ML}^{(N)}) (\boldsymbol{x}_{N} - \boldsymbol{\mu}_{ML}^{(N)})^{T} - \boldsymbol{\Sigma}_{ML}^{(N-1)} \right] \end{split}$$
If we use *Robbins-Monro sequential estimation formula*, i.e., $\theta^{(N)} = \theta^{(N-1)} + a_{N-1} \frac{\partial}{\partial \theta^{(N-1)}} \ln p(x_N | \theta^{(N-1)}).$, we can obtain :
$$\begin{split} \boldsymbol{\Sigma}_{ML}^{(N)} &= \boldsymbol{\Sigma}_{ML}^{(N-1)} + \boldsymbol{a}_{N-1} \frac{\partial}{\partial \boldsymbol{\Sigma}_{ML}^{(N-1)}} lnp(\boldsymbol{x}_{N} | \boldsymbol{\mu}_{ML}^{(N)}, \boldsymbol{\Sigma}_{ML}^{(N-1)}) \\ &= \boldsymbol{\Sigma}_{ML}^{(N-1)} + \boldsymbol{a}_{N-1} \left[ -\frac{1}{2} [\boldsymbol{\Sigma}_{ML}^{(N-1)}]^{-1} + \frac{1}{2} [\boldsymbol{\Sigma}_{ML}^{(N-1)}]^{-1} (\boldsymbol{x}_{N} - \boldsymbol{\mu}_{ML}^{(N)}) (\boldsymbol{x}_{N} - \boldsymbol{\mu}_{ML}^{(N)})^{T} [\boldsymbol{\Sigma}_{ML}^{(N-1)}]^{-1} \right] \end{split}$$
where we have used the procedure carried out in Prob. 2.34 to calculate the derivative. If we choose:
$$\boldsymbol{a}_{N-1} = \frac{2}{N} \boldsymbol{\Sigma}_{\boldsymbol{ML}}^{2(N-1)}$$
we can see that the equation above becomes identical to our earlier result obtained from the definition.
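As a sanity check (not part of the original solution), the sketch below runs the definition-based sequential covariance update in Python with NumPy, assuming for simplicity that the mean $\boldsymbol{\mu}$ is known and held fixed, so that the recursion reproduces the batch estimate exactly; with a running mean estimate the agreement would only be approximate. The dimensionality, parameter values and seed are arbitrary choices for the example.

```python
# Minimal sketch: definition-based sequential update of the ML covariance for a
# multivariate Gaussian, with the mean mu assumed known and fixed, compared
# against the batch ML estimate.  All numbers below are illustrative.
import numpy as np

rng = np.random.default_rng(1)
D, N = 3, 500
mu = np.array([1.0, -2.0, 0.5])                 # fixed (known) mean
true_cov = np.array([[2.0, 0.3, 0.0],
                     [0.3, 1.0, 0.2],
                     [0.0, 0.2, 0.5]])
X = rng.multivariate_normal(mu, true_cov, size=N)

Sigma = np.zeros((D, D))
for n, x_n in enumerate(X, start=1):
    d = (x_n - mu)[:, None]                     # column vector, shape (D, 1)
    Sigma = Sigma + (d @ d.T - Sigma) / n       # sequential update from above

Sigma_batch = (X - mu).T @ (X - mu) / N         # batch ML covariance (fixed mean)
print(np.allclose(Sigma, Sigma_batch))          # True
```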
| 2,304
|
2
|
2.38
|
easy
|
Use the technique of completing the square for the quadratic form in the exponent to derive the results $\mu_N = \frac{\sigma^2}{N\sigma_0^2 + \sigma^2} \mu_0 + \frac{N\sigma_0^2}{N\sigma_0^2 + \sigma^2} \mu_{ML}$ and $\frac{1}{\sigma_N^2} = \frac{1}{\sigma_0^2} + \frac{N}{\sigma^2}$.
|
It is straightforward. Based on $p(\mathbf{X}|\mu) = \prod_{n=1}^{N} p(x_n|\mu) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left\{-\frac{1}{2\sigma^2} \sum_{n=1}^{N} (x_n - \mu)^2\right\}$, $p(\mu) = \mathcal{N}\left(\mu|\mu_0, \sigma_0^2\right)$ and $p(\mu|\mathbf{X}) \propto p(\mathbf{X}|\mu)p(\mu)$, we focus on the exponential term of the posterior distribution $p(\mu|\mathbf{X})$, which gives:
$$-\frac{1}{2\sigma^2} \sum_{n=1}^{N} (x_n - \mu)^2 - \frac{1}{2\sigma_0^2} (\mu - \mu_0)^2 = -\frac{1}{2\sigma_N^2} (\mu - \mu_N)^2$$
We rewrite the left-hand side as a polynomial in $\mu$. The quadratic term is
$$-(\frac{N}{2\sigma^2} + \frac{1}{2\sigma_0^2})\mu^2$$
and the linear term is
$$(\frac{\sum_{n=1}^{N} x_n}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}) \mu$$
We also rewrite the right-hand side in terms of $\mu$, and by matching coefficients we obtain:
$$-(\frac{N}{2\sigma^2} + \frac{1}{2\sigma_0^2})\mu^2 = -\frac{1}{2\sigma_N^2}\mu^2, \ (\frac{\sum_{n=1}^N x_n}{\sigma^2} + \frac{\mu_0}{\sigma_0^2})\mu = \frac{\mu_N}{\sigma_N^2}\mu$$
Then we will obtain:
$$\frac{1}{\sigma_N^2} = \frac{1}{\sigma_0^2} + \frac{N}{\sigma^2}$$
Using $\sum_{n=1}^{N} x_n = N \cdot \mu_{ML}$, we can then write:
$$\mu_{N} = \sigma_{N}^{2} \cdot \left(\frac{\sum_{n=1}^{N} x_{n}}{\sigma^{2}} + \frac{\mu_{0}}{\sigma_{0}^{2}}\right)$$
$$= \left(\frac{1}{\sigma_{0}^{2}} + \frac{N}{\sigma^{2}}\right)^{-1} \cdot \left(\frac{N\mu_{ML}}{\sigma^{2}} + \frac{\mu_{0}}{\sigma_{0}^{2}}\right)$$
$$= \frac{\sigma_{0}^{2}\sigma^{2}}{\sigma^{2} + N\sigma_{0}^{2}} \cdot \frac{N\mu_{ML}\sigma_{0}^{2} + \mu_{0}\sigma^{2}}{\sigma^{2}\sigma_{0}^{2}}$$
$$= \frac{\sigma^{2}}{N\sigma_{0}^{2} + \sigma^{2}} \mu_{0} + \frac{N\sigma_{0}^{2}}{N\sigma_{0}^{2} + \sigma^{2}} \mu_{ML}$$
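The closed-form posterior parameters can be verified numerically. The sketch below (not part of the original solution) compares $\mu_N$ and $\sigma_N^2$ computed from the expressions above against a brute-force evaluation of the unnormalised posterior $p(\mathbf{X}|\mu)p(\mu)$ on a dense grid of $\mu$ values; the data-generating parameters, sample size and seed are arbitrary choices for the example.

```python
# Minimal sketch: check the closed-form posterior parameters mu_N and sigma_N^2
# against a brute-force grid evaluation of prior * likelihood.
import numpy as np

rng = np.random.default_rng(2)
sigma2, mu0, sigma02 = 1.5, 0.0, 2.0      # likelihood variance, prior mean/variance
N = 20
x = rng.normal(loc=0.8, scale=np.sqrt(sigma2), size=N)
mu_ml = x.mean()

# Closed-form result derived above.
sigma_N2 = 1.0 / (1.0 / sigma02 + N / sigma2)
mu_N = sigma_N2 * (mu0 / sigma02 + N * mu_ml / sigma2)

# Brute force: unnormalised log posterior on a grid, normalised numerically.
grid = np.linspace(mu_N - 5.0, mu_N + 5.0, 100001)
log_post = (-0.5 * (grid - mu0) ** 2 / sigma02
            - 0.5 * ((x[:, None] - grid[None, :]) ** 2).sum(axis=0) / sigma2)
post = np.exp(log_post - log_post.max())
post /= post.sum()                         # discrete normalisation on the grid

mean_num = (grid * post).sum()
var_num = ((grid - mean_num) ** 2 * post).sum()
print(np.isclose(mean_num, mu_N, atol=1e-4))     # True
print(np.isclose(var_num, sigma_N2, atol=1e-4))  # True
```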
| 1,777
|
2
|
2.39
|
medium
|
Starting from the results $\mu_N = \frac{\sigma^2}{N\sigma_0^2 + \sigma^2} \mu_0 + \frac{N\sigma_0^2}{N\sigma_0^2 + \sigma^2} \mu_{ML}$ and $\frac{1}{\sigma_N^2} = \frac{1}{\sigma_0^2} + \frac{N}{\sigma^2}$ for the posterior distribution of the mean of a Gaussian random variable, dissect out the contributions from the first N-1 data points and hence obtain expressions for the sequential update of $\mu_N$ and $\sigma_N^2$ . Now derive the same results starting from the posterior distribution $p(\mu|x_1,\ldots,x_{N-1}) = \mathcal{N}(\mu|\mu_{N-1},\sigma_{N-1}^2)$ and multiplying by the likelihood function $p(x_N|\mu) = \mathcal{N}(x_N|\mu,\sigma^2)$ and then completing the square and normalizing to obtain the posterior distribution after N observations.
|
Let's follow the hint.
$$\frac{1}{\sigma_N^2} = \frac{1}{\sigma_0^2} + \frac{N}{\sigma^2} = \frac{1}{\sigma_0^2} + \frac{N-1}{\sigma^2} + \frac{1}{\sigma^2} = \frac{1}{\sigma_{N-1}^2} + \frac{1}{\sigma^2}$$
However, it is not straightforward to derive a sequential formula for $\mu_N$ directly. Based on $\frac{1}{\sigma_N^2} = \frac{1}{\sigma_0^2} + \frac{N}{\sigma^2}$, we see that the denominator in $\mu_N = \frac{\sigma^2}{N\sigma_0^2 + \sigma^2} \mu_0 + \frac{N\sigma_0^2}{N\sigma_0^2 + \sigma^2} \mu_{ML}$ can be eliminated if we multiply both sides of that expression by $1/\sigma_N^2$. Therefore we derive a sequential formula for $\mu_N/\sigma_N^2$ instead.
$$\begin{split} \frac{\mu_{N}}{\sigma_{N}^{2}} &= \frac{\sigma^{2} + N\sigma_{0}^{2}}{\sigma_{0}^{2}\sigma^{2}} \left(\frac{\sigma^{2}}{N\sigma_{0}^{2} + \sigma^{2}} \mu_{0} + \frac{N\sigma_{0}^{2}}{N\sigma_{0}^{2} + \sigma^{2}} \mu_{ML}^{(N)}\right) \\ &= \frac{\mu_{0}}{\sigma_{0}^{2}} + \frac{N\mu_{ML}^{(N)}}{\sigma^{2}} = \frac{\mu_{0}}{\sigma_{0}^{2}} + \frac{\sum_{n=1}^{N} x_{n}}{\sigma^{2}} \\ &= \frac{\mu_{0}}{\sigma_{0}^{2}} + \frac{\sum_{n=1}^{N-1} x_{n}}{\sigma^{2}} + \frac{x_{N}}{\sigma^{2}} \\ &= \frac{\mu_{N-1}}{\sigma_{N-1}^{2}} + \frac{x_{N}}{\sigma^{2}} \end{split}$$
The problem statement also suggests another route: completing the square.
$$-\frac{1}{2\sigma^2}(x_N-\mu)^2-\frac{1}{2\sigma_{N-1}^2}(\mu-\mu_{N-1})^2=-\frac{1}{2\sigma_N^2}(\mu-\mu_N)^2$$
By comparing the quadratic and linear terms in $\mu$, we obtain:
$\frac{1}{\sigma_N^2} = \frac{1}{\sigma^2} + \frac{1}{\sigma_{N-1}^2}$
And:
$$\frac{\mu_N}{\sigma_N^2} = \frac{x_N}{\sigma^2} + \frac{\mu_{N-1}}{\sigma_{N-1}^2}$$
This is the same as the previous result. Note that after obtaining the $N$th observation we first use the sequential formula to calculate $\sigma_N^2$, and only then $\mu_N$, because the sequential formula for $\mu_N$ depends on $\sigma_N^2$.
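As a final numerical check (not part of the original solution), the sketch below applies the sequential update one observation at a time, computing $\sigma_N^2$ before $\mu_N$ at each step as noted above, and confirms that it reproduces the batch posterior parameters; the parameter values, sample size and seed are arbitrary choices for the example.

```python
# Minimal sketch: sequential Bayesian update of the Gaussian mean posterior,
# one observation at a time, compared with the batch closed-form result.
import numpy as np

rng = np.random.default_rng(3)
sigma2, mu0, sigma02 = 1.5, 0.0, 2.0      # likelihood variance, prior mean/variance
N = 50
x = rng.normal(loc=0.8, scale=np.sqrt(sigma2), size=N)

# Sequential update: after each x_n, update sigma_n^2 first, then mu_n.
mu_n, sigma_n2 = mu0, sigma02
for x_n in x:
    sigma_new2 = 1.0 / (1.0 / sigma_n2 + 1.0 / sigma2)
    mu_n = sigma_new2 * (mu_n / sigma_n2 + x_n / sigma2)
    sigma_n2 = sigma_new2

# Batch posterior from the closed-form expressions.
sigma_N2 = 1.0 / (1.0 / sigma02 + N / sigma2)
mu_N = sigma_N2 * (mu0 / sigma02 + N * x.mean() / sigma2)

print(np.isclose(mu_n, mu_N), np.isclose(sigma_n2, sigma_N2))   # True True
```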
| 2,284