From Entropy Search to Predictive Entropy Search
In Bayesian optimization, a popular acquisition function is predictive entropy search, which is a clever reframing of another acquisition function, entropy search. I rederive the connection and explain why this reframing is useful.
In Bayesian optimization, we want to maximize a black-box function $f$,

$$
\mathbf{x}^{\star} = \operatorname*{arg\,max}_{\mathbf{x} \in \mathcal{X}} f(\mathbf{x}),
$$

where $f$ is nonlinear in general and typically expensive to compute. The main idea is to place a Gaussian process (GP) prior on this function and to use the inferred GP posterior to compute an acquisition function, which decides where to evaluate next. See (Snoek et al., 2012) for a more thorough explanation. A popular acquisition function is predictive entropy search (Hernández-Lobato et al., 2014), and the goal of this post is to carefully derive the key trick used in that paper.
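To make the setup concrete, here is a minimal sketch of the Bayesian optimization loop, assuming a one-dimensional problem, a small RBF-kernel GP, and a simple upper-confidence-bound acquisition as a stand-in for the entropy-based acquisitions discussed below. The helper names (`rbf_kernel`, `gp_posterior`, `expensive_black_box`) are hypothetical, not from any particular library.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-4):
    """Mean and variance of the GP posterior predictive at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    var = np.diag(K_ss - K_s.T @ v) + noise
    return mean, var

def expensive_black_box(x):
    """Stand-in for the true objective f; assumed expensive in practice."""
    return np.sin(3 * x) + 0.5 * np.cos(5 * x)

# Bayesian optimization loop: fit the GP, maximize the acquisition on a grid,
# evaluate the black box at the chosen point, repeat.
rng = np.random.default_rng(0)
X_grid = np.linspace(0, 2, 200)
X_train = rng.uniform(0, 2, size=3)
y_train = expensive_black_box(X_train)

for _ in range(10):
    mean, var = gp_posterior(X_train, y_train, X_grid)
    acq = mean + 2.0 * np.sqrt(var)      # UCB acquisition (stand-in)
    x_next = X_grid[np.argmax(acq)]      # where to evaluate next
    X_train = np.append(X_train, x_next)
    y_train = np.append(y_train, expensive_black_box(x_next))

print("best x found:", X_train[np.argmax(y_train)])
```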
Entropy search
In a classic paper, (MacKay, 1992) proposed an information-theoretic approach to active data selection. His idea was to use an approximation of the model’s posterior entropy to decide which data to select next. In words, select new data points that minimize our uncertainty about the model’s parameters. Building on this idea, (Hennig & Schuler, 2012) proposed a method called entropy search (ES), an acquisition function that attempts to maximize information about the location of the function’s maximum, $\mathbf{x}^{\star}$, given new data:

$$
\alpha_{\text{ES}}(\mathbf{x}) = H\big[p(\mathbf{x}^{\star} \mid \mathcal{D}_n)\big] - \mathbb{E}_{p(y \mid \mathcal{D}_n, \mathbf{x})}\Big[ H\big[p(\mathbf{x}^{\star} \mid \mathcal{D}_n \cup \{(\mathbf{x}, y)\})\big] \Big]. \tag{1}
$$
In words, the left term is the entropy of the posterior of the maximizer $\mathbf{x}^{\star}$ given our current data $\mathcal{D}_n = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$. For any new data point $\mathbf{x}$, this term is constant. The right term is the same posterior entropy, except under an expectation over the posterior predictive $p(y \mid \mathcal{D}_n, \mathbf{x})$. This expectation is necessary because we want to consider all possible outcomes $y$. So maximizing $\alpha_{\text{ES}}(\mathbf{x})$ means minimizing the expected posterior entropy of $\mathbf{x}^{\star}$ after a new observation, $(\mathbf{x}, y)$.
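To build intuition for the left term, here is a self-contained Monte Carlo sketch of the entropy of $p(\mathbf{x}^{\star} \mid \mathcal{D}_n)$ on a discretized domain: draw functions from the GP posterior, record where each draw is maximized, and compute the entropy of the resulting histogram. The kernel, lengthscale, and toy data are made up for illustration, and this naive histogram approach is only a stand-in for the approximations actually used by (Hennig & Schuler, 2012) and (Hernández-Lobato et al., 2014).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observations and a discretized domain.
X_train = np.array([0.2, 0.9, 1.6])
y_train = np.sin(3 * X_train)
X_grid = np.linspace(0, 2, 100)

def rbf(A, B, ls=0.3):
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)

# GP posterior over f on the grid (mean vector and full covariance).
noise = 1e-4
K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
K_s = rbf(X_train, X_grid)
mean = K_s.T @ np.linalg.solve(K, y_train)
cov = rbf(X_grid, X_grid) - K_s.T @ np.linalg.solve(K, K_s)

# Sample posterior functions, record the argmax of each sample, and
# estimate the entropy of the induced distribution over x*.
samples = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(len(X_grid)), size=5000)
argmax_idx = samples.argmax(axis=1)
p_xstar = np.bincount(argmax_idx, minlength=len(X_grid)) / len(argmax_idx)
entropy = -np.sum(p_xstar[p_xstar > 0] * np.log(p_xstar[p_xstar > 0]))
print("Monte Carlo estimate of H[p(x* | D_n)]:", entropy)
```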
Predictive entropy search
However, Eq. 1 is hard to compute. First, we must evaluate $p(\mathbf{x}^{\star} \mid \mathcal{D}_n \cup \{(\mathbf{x}, y)\})$ for all possible values of $y$ and for many values of $\mathbf{x}$. Furthermore, the entropy terms themselves have no closed-form solutions; we have no guarantee that the posterior over the maximizer, $p(\mathbf{x}^{\star} \mid \mathcal{D}_n)$, has an easy analytical expression. To address these computational difficulties, (Hernández-Lobato et al., 2014) make the following observation:

$$
\begin{aligned}
\alpha_{\text{ES}}(\mathbf{x}) &= H\big[p(\mathbf{x}^{\star} \mid \mathcal{D}_n)\big] - \mathbb{E}_{p(y \mid \mathcal{D}_n, \mathbf{x})}\Big[ H\big[p(\mathbf{x}^{\star} \mid \mathcal{D}_n \cup \{(\mathbf{x}, y)\})\big] \Big] \\
&= H\big[p(y \mid \mathcal{D}_n, \mathbf{x})\big] - \mathbb{E}_{p(\mathbf{x}^{\star} \mid \mathcal{D}_n)}\Big[ H\big[p(y \mid \mathcal{D}_n, \mathbf{x}, \mathbf{x}^{\star})\big] \Big] \\
&= \alpha_{\text{PES}}(\mathbf{x}).
\end{aligned} \tag{2}
$$
They rewrite the ES acquisition function, $\alpha_{\text{ES}}(\mathbf{x})$, using the symmetry of mutual information, into the form of their so-called predictive entropy search (PES) acquisition function, $\alpha_{\text{PES}}(\mathbf{x})$. Before understanding why, let’s rederive the trick. First, consider the mutual information of a random variable $A$ and a random variable $B$. We can write this as

$$
\begin{aligned}
I(A; B) &= H[A] - H[A \mid B] \\
&= H\big[p(a)\big] - \mathbb{E}_{p(b)}\Big[ H\big[p(a \mid b)\big] \Big].
\end{aligned} \tag{3}
$$
The first step is just the definition of mutual information, and the second step is the definition of conditional entropy. The last line of Eq. 3 is essentially the first line of Eq. 2. If the distribution on $B$ is our posterior predictive, $p(y \mid \mathcal{D}_n, \mathbf{x})$, and the distribution on $A$ is $p(\mathbf{x}^{\star} \mid \mathcal{D}_n)$, then we can rewrite the last line as

$$
H\big[p(\mathbf{x}^{\star} \mid \mathcal{D}_n)\big] - \mathbb{E}_{p(y \mid \mathcal{D}_n, \mathbf{x})}\Big[ H\big[p(\mathbf{x}^{\star} \mid \mathcal{D}_n \cup \{(\mathbf{x}, y)\})\big] \Big]. \tag{4}
$$
However, mutual information is symmetric, and we can rewrite Eq. 3 as

$$
\begin{aligned}
I(A; B) = I(B; A) &= H[B] - H[B \mid A] \\
&= H\big[p(b)\big] - \mathbb{E}_{p(a)}\Big[ H\big[p(b \mid a)\big] \Big]. 
\end{aligned} \tag{5}
$$
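As a quick sanity check on this symmetry, the following sketch computes both decompositions of the mutual information for an arbitrary discrete joint distribution; the joint table is made up purely for illustration.

```python
import numpy as np

# An arbitrary discrete joint distribution p(a, b); rows index a, columns index b.
p_ab = np.array([[0.10, 0.25, 0.05],
                 [0.30, 0.10, 0.20]])
p_a = p_ab.sum(axis=1)  # marginal p(a)
p_b = p_ab.sum(axis=0)  # marginal p(b)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# H[A] - E_{p(b)}[ H[p(a | b)] ]
p_a_given_b = p_ab / p_b        # columns are p(a | b)
mi_1 = entropy(p_a) - np.sum(
    p_b * np.array([entropy(p_a_given_b[:, j]) for j in range(len(p_b))]))

# H[B] - E_{p(a)}[ H[p(b | a)] ]
p_b_given_a = (p_ab.T / p_a).T  # rows are p(b | a)
mi_2 = entropy(p_b) - np.sum(
    p_a * np.array([entropy(p_b_given_a[i, :]) for i in range(len(p_a))]))

print(mi_1, mi_2)               # both decompositions give the same value
assert np.isclose(mi_1, mi_2)
```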
And this is essentially the middle line of Eq. 2. Again, the distribution on $A$ is $p(\mathbf{x}^{\star} \mid \mathcal{D}_n)$, and the distribution on $B$ is the predictive, $p(y \mid \mathcal{D}_n, \mathbf{x})$. This gives us

$$
\alpha_{\text{PES}}(\mathbf{x}) = H\big[p(y \mid \mathcal{D}_n, \mathbf{x})\big] - \mathbb{E}_{p(\mathbf{x}^{\star} \mid \mathcal{D}_n)}\Big[ H\big[p(y \mid \mathcal{D}_n, \mathbf{x}, \mathbf{x}^{\star})\big] \Big]. \tag{6}
$$
So why is this a nice idea? Notice that the left entropy term is now with respect to the posterior predictive. By placing a Gaussian process prior on $f$, we induce a Gaussian assumption on $y$. This means the posterior predictive $p(y \mid \mathcal{D}_n, \mathbf{x})$ has a closed-form solution. The right term must still be approximated, but this is nicer than approximating both terms.
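Concretely, assuming Gaussian observation noise with variance $\sigma^2$ (my notation, with $\mu_n$ and $\sigma_n^2$ denoting the GP posterior mean and variance), the predictive at $\mathbf{x}$ is $p(y \mid \mathcal{D}_n, \mathbf{x}) = \mathcal{N}\big(\mu_n(\mathbf{x}), \sigma_n^2(\mathbf{x}) + \sigma^2\big)$, and the entropy of a univariate Gaussian depends only on its variance, so the left term of Eq. 6 can be written down directly:

$$
H\big[p(y \mid \mathcal{D}_n, \mathbf{x})\big] = \tfrac{1}{2} \log\!\big( 2\pi e \, (\sigma_n^2(\mathbf{x}) + \sigma^2) \big).
$$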
Intuitively, we’ve flipped the problem. Rather than thinking about the information gain with respect to a challenging distribution over the maximizer $\mathbf{x}^{\star}$, where we must consider all possible outcomes $y$, we think about the information gain with respect to our data, considering all possible values of $\mathbf{x}^{\star}$. The latter may be easier because the predictive distributions are sometimes nicer to work with.
Conclusion
Predictive entropy search is a simple and nice idea, rewriting quantities that are hard to compute as quantities that are easier to compute. I think the method is so-named because it frames the problem in terms of predictive distributions. The trick used in this paper was initially proposed in (Houlsby et al., 2012) and occurs elsewhere, such as in (Hernández-Lobato et al., 2015).
- Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. Advances in Neural Information Processing Systems, 2951–2959.
- Hernández-Lobato, J. M., Hoffman, M. W., & Ghahramani, Z. (2014). Predictive entropy search for efficient global optimization of black-box functions. Advances in Neural Information Processing Systems, 918–926.
- MacKay, D. J. C. (1992). Information-based objective functions for active data selection. Neural Computation, 4(4), 590–604.
- Hennig, P., & Schuler, C. J. (2012). Entropy search for information-efficient global optimization. The Journal of Machine Learning Research, 13(1), 1809–1837.
- Houlsby, N., Huszar, F., Ghahramani, Z., & Hernández-Lobato, J. M. (2012). Collaborative Gaussian processes for preference learning. Advances in Neural Information Processing Systems, 2096–2104.
- Hernández-Lobato, J. M., Gelbart, M., Hoffman, M., Adams, R., & Ghahramani, Z. (2015). Predictive entropy search for Bayesian optimization with unknown constraints. International Conference on Machine Learning, 1699–1707.