This blog post is a continuation of the previous discussion on MCMC, What are Monte Carlo Markov Chains (MCMC)? I introduce a well-known MCMC algorithm, Metropolis-Hastings (MH). I describe how MH behaves and prove that it satisfies detailed balance. Then I show an example of a bad proposal, and move on to what people generally choose as good proposals. I close with a brief comparison of MCMC and (Self-Normalized) Importance Sampling.
Before we start, I’d like to thank Jan-Willem van de Meent for his lectures in his Advanced Machine Learning class for PhD students at Northeastern University. The images shown are from his lectures. In addition, I’d like to note that more details on MCMC algorithms can be found in An Introduction to Probabilistic Programming, available on [arXiv]. This book is intended as a graduate-level introduction to probabilistic programming languages and methods for inference in probabilistic programs.
It turns out that it’s not so easy to design a transition operator that is guaranteed to satisfy detailed balance. But what if we could use something that turns any proposal operator (which is something we can come up with) into an operator that satisfies detailed balance? This, folks, is what Metropolis-Hastings does for us!
What is Metropolis-Hastings?
The intuition behind Metropolis-Hastings is that
- We know we need to have an operator that satisfies detailed balance
- We can propose something (some sample)
- We can check whether the proposal is good or not
- And sometimes we are going to keep that proposal, and sometimes we are not… in such a way that keeping it vs. not keeping it satisfies detailed balance.
Now I’ll introduce mathematically how MH works and then discuss why it works.
We begin with a current sample, $x$, which we will use to get a new sample, $x'$, using a proposal distribution, $q$. The notation looks as follows: $x' \sim q(x' \mid x)$.
Then we decide to keep the new sample, $x'$, with probability:

$$\alpha(x' \mid x) = \min\left(1, \frac{\pi(x')\, q(x \mid x')}{\pi(x)\, q(x' \mid x)}\right)$$
To discuss what this probability actually means, let’s remind ourselves what the detailed balance relationship is. It is: $\pi(x)\, \tau(x' \mid x) = \pi(x')\, \tau(x \mid x')$. But since we are now working with proposals, let’s switch notation from $\tau(x' \mid x)$ to $q(x' \mid x)$.
Side Note: Sometimes transition probabilities such as $q(x' \mid x)$ are referred to as a kernel. Instead of being specific about what is making the transition, we generalize by saying we are using a transition kernel. Common notation is $\kappa(x' \mid x)$. Both represent getting one sample dependent on another sample.
Let’s suppose that $q$ satisfies detailed balance; then the ratio here between $\pi(x')\, q(x \mid x')$ and $\pi(x)\, q(x' \mid x)$ is 1. So now let’s refer back to the probability:

$$\alpha(x' \mid x) = \min\left(1, \frac{\pi(x')\, q(x \mid x')}{\pi(x)\, q(x' \mid x)}\right)$$

Here we are saying that with probability $\alpha(x' \mid x)$ we are going to set the proposal as our new sample, $x'$, and with probability $1 - \alpha(x' \mid x)$ we are going to reject the proposal. When we reject the proposal, our next sample is set to the same sample as before, $x$.
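The accept/reject step above can be sketched in a few lines of Python. This is a minimal illustration of my own (the function and argument names are assumptions, not from the lectures), written for a generic target and proposal in log space for numerical stability:

```python
import numpy as np

def mh_step(x, log_pi, sample_q, log_q, rng):
    """One Metropolis-Hastings transition from the current sample x.

    log_pi   -- log target density (may be unnormalized)
    sample_q -- draws a proposal x' ~ q(x' | x)
    log_q    -- evaluates log q(x_to | x_from)
    """
    x_prop = sample_q(x, rng)
    # log of the MH ratio: pi(x') q(x | x') / (pi(x) q(x' | x))
    log_ratio = (log_pi(x_prop) + log_q(x, x_prop)
                 - log_pi(x) - log_q(x_prop, x))
    # accept with probability min(1, ratio); comparing log(u) against
    # log_ratio implements the min without computing it explicitly
    if np.log(rng.uniform()) < log_ratio:
        return x_prop  # accept: the next sample is the proposal
    return x           # reject: the next sample repeats the current one

# Example: standard Normal target with a Gaussian random-walk proposal.
rng = np.random.default_rng(0)
x = 0.0
for _ in range(1000):
    x = mh_step(x,
                log_pi=lambda z: -0.5 * z ** 2,
                sample_q=lambda z, r: z + r.normal(),
                log_q=lambda z_to, z_from: -0.5 * (z_to - z_from) ** 2,
                rng=rng)
```

Chaining `mh_step` calls like this is the whole algorithm; everything that follows is about why this step preserves detailed balance and how to pick `sample_q`.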
We see that if $\alpha(x' \mid x) = 1$, we always accept the proposal. Basically, if our transition kernel always satisfies detailed balance, we can always accept it!
However, if it does not satisfy detailed balance, then this means that somehow there’s an imbalance in the kinds of proposals we are getting. There are some kinds of samples that we are proposing too often and other kinds that we are not proposing often enough.
Question: if the ratio $\frac{\pi(x')\, q(x \mid x')}{\pi(x)\, q(x' \mid x)}$ is larger than 1, does this mean we are proposing some kinds of samples too often or not often enough?
Let’s think about it! If we are not proposing the kind of samples we want more of, then we should probably accept them with higher probability. However, if we are seeing a sample too often, then maybe we should only accept it sometimes. Therefore, if $\frac{\pi(x')\, q(x \mid x')}{\pi(x)\, q(x' \mid x)} > 1$, it’s because we are seeing a rare sample we want more of (we are proposing it not often enough). However, basic probability tells us that probabilities max out at 1. This is why we add the $\min(1, \cdot)$ to get the probability $\alpha(x' \mid x)$.
So what does Metropolis-Hastings do for us? It’s an algorithm that basically corrects our proposals in order to preserve detailed-balance! Neat, huh?
Proving Detailed Balance for Metropolis-Hastings
The idea behind Metropolis-Hastings is that we can get a new (transitioned/dependent) sample when we sample from a proposal, $x' \sim q(x' \mid x)$, and keep our next sample, $x'$, with probability $\alpha(x' \mid x)$; otherwise we keep our previous sample, $x$.
When we have this particular sampling scheme, we have an implied transition kernel $\kappa(x' \mid x)$ that can tell us (when we have $x$ and $x'$) the probability that we are going to see a transition from $x$ to $x'$. In order to reason about this, we need to consider two cases. It could be that $x'$ is not the same value as $x$, but they could in fact be the same value, where $x' = x$. We formulate this by working backwards.
In the figure above, we show that in order to get a value $x' \neq x$, we first need to sample an $x'$ from $q(x' \mid x)$ and then we need to decide to keep the sample with probability $\alpha(x' \mid x)$. Therefore, for the case where $x' \neq x$, we have

$$\kappa(x' \mid x) = q(x' \mid x)\, \alpha(x' \mid x)$$
Note that there are two cases where $x' = x$. The first is if somehow we were able to sample the exact same sample again (this can only happen in some cases; for example, it can happen if we are sampling from a Bernoulli distribution, but if we were sampling from a Normal it would happen with probability zero). This gives us $q(x \mid x)\, \alpha(x \mid x)$. The second possibility of getting the same sample again is by rejecting. Then we calculate the probability that any rejected sample will leave us with the same sample as before. We integrate over the possible values we could have rejected, $x''$, and write the probability of rejection, $1 - \alpha(x'' \mid x)$:

$$\kappa(x \mid x) = q(x \mid x)\, \alpha(x \mid x) + \int \big(1 - \alpha(x'' \mid x)\big)\, q(x'' \mid x)\, dx''$$
We can prove detailed balance once we write out the kernel like this quite easily because it splits the problem into two small proofs, one for the case where and another for .
In the case where $x' = x$, we can just say:

$$\pi(x)\, \kappa(x \mid x) = \pi(x)\, \kappa(x \mid x)$$

which holds trivially, since both sides are the same expression.
Really, in MH the point is not to describe how often we stay at the same sample (there are other properties that should keep this from happening too often), but rather how often we transition from sample to sample. Therefore we care more about the next case.
In the case where $x' \neq x$, we can say:

$$\pi(x)\, \kappa(x' \mid x) = \pi(x)\, q(x' \mid x)\, \min\left(1, \frac{\pi(x')\, q(x \mid x')}{\pi(x)\, q(x' \mid x)}\right) = \min\big(\pi(x)\, q(x' \mid x),\; \pi(x')\, q(x \mid x')\big)$$

The right-hand side is symmetric in $x$ and $x'$, so by the same calculation it also equals $\pi(x')\, q(x \mid x')\, \alpha(x \mid x') = \pi(x')\, \kappa(x \mid x')$.
Now we know that Metropolis-Hastings preserves detailed balance. It is also the case that, once you have detailed balance, you can say that if you have a normalized density, $\pi(x) = \gamma(x) / Z$, then

$$\frac{\pi(x')\, q(x \mid x')}{\pi(x)\, q(x' \mid x)} = \frac{\gamma(x')\, q(x \mid x')}{\gamma(x)\, q(x' \mid x)}$$

because the normalizing constant $Z$ cancels in the ratio.
In practice, when doing MH, this is a nice property because we don’t always have the normalized density: we can calculate the acceptance probability using only the unnormalized density, $\gamma(x)$.
It’s important to note that MH is very general. We can design any transition proposal, and if that proposal violates detailed balance, then we can use the Metropolis-Hastings acceptance ratio to clean up the mis-designed proposal. We have shown that we can always evaluate the ratio whenever we have an unnormalized density and don’t have the normalized density (which is always the case).
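To make the detailed-balance argument concrete, we can check it numerically on a small discrete state space, where the implied kernel is just a matrix. The target $\pi$ and proposal $q$ below are arbitrary choices of my own, picked purely for illustration:

```python
import numpy as np

# An arbitrary 3-state target and an arbitrary row-stochastic proposal.
pi = np.array([0.2, 0.5, 0.3])
q = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

# Acceptance probabilities: alpha(j | i) = min(1, pi_j q_ji / (pi_i q_ij)).
n = len(pi)
alpha = np.ones((n, n))
for i in range(n):
    for j in range(n):
        alpha[i, j] = min(1.0, (pi[j] * q[j, i]) / (pi[i] * q[i, j]))

# Implied MH kernel: off-diagonal kappa_ij = q_ij * alpha_ij; the diagonal
# absorbs the "stay put" mass (same-state proposals plus all rejections),
# so each row sums to 1.
kappa = q * alpha
np.fill_diagonal(kappa, 0.0)
np.fill_diagonal(kappa, 1.0 - kappa.sum(axis=1))

# Detailed balance: pi_i kappa_ij == pi_j kappa_ji for every pair (i, j).
flow = pi[:, None] * kappa
print(np.allclose(flow, flow.T))  # True
```

The proposal `q` deliberately does not satisfy detailed balance on its own; the acceptance step is what makes the flow matrix symmetric.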
Example of a Bad Proposal
We can do something like likelihood weighting. Suppose that we have to sample from a posterior $p(x \mid y)$. We can sample from the prior, $q(x' \mid x) = p(x')$, which means that we can ignore what $x$ was.
This is what we call Independent Metropolis-Hastings, because new samples are independent of the previous sample (which is valid). Now we can see what our new acceptance ratio is:

$$\alpha(x' \mid x) = \min\left(1, \frac{p(x')\, p(y \mid x')\, p(x)}{p(x)\, p(y \mid x)\, p(x')}\right) = \min\left(1, \frac{p(y \mid x')}{p(y \mid x)}\right)$$
Because our prior probabilities cancel, we are left with the ratio of the likelihoods. But we have the same issue as in likelihood weighting: low acceptance probabilities. With the exception that this sampler won’t let go of a good $x$, this is no better than doing likelihood weighting, because we still have to wait until we get a good sample again.
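Here is a small sketch of Independent MH on a made-up conjugate Gaussian model (the model and numbers are mine, chosen for illustration, not from the lectures). Because the prior proposal cancels, the acceptance test uses only the likelihood ratio, and with a sharp likelihood the acceptance rate comes out low:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (made up for illustration): x ~ N(0, 1), y | x ~ N(x, 0.1^2),
# with one observation y = 2.0, so the posterior concentrates near x = 2.
y_obs = 2.0
log_lik = lambda x: -0.5 * ((y_obs - x) / 0.1) ** 2

x = rng.normal()           # initialize from the prior
n_accept = 0
for _ in range(5000):
    x_prop = rng.normal()  # independent proposal: ignore x, draw from prior
    # prior terms cancel, so the MH ratio is just the likelihood ratio
    if np.log(rng.uniform()) < log_lik(x_prop) - log_lik(x):
        x, n_accept = x_prop, n_accept + 1

print(n_accept / 5000)  # typically a few percent: most proposals miss x ≈ 2
```

Once the chain lands near $x \approx 2$ it rarely lets go, but it still spends most of its time rejecting prior draws that land far from the posterior.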
More Common Examples of Choosing Proposals
It’s much more common to set up a proposal where we sample a new sample, $x'$, from a Normal distribution centered on the previous sample, $x$, with some standard deviation, $\sigma$: $q(x' \mid x) = \mathcal{N}(x' \mid x, \sigma^2)$. It’s important to note that this proposal is symmetric. This means that the probability of going from $x$ to $x'$ is the same as the probability of going from $x'$ to $x$, or $q(x' \mid x) = q(x \mid x')$.
This makes our acceptance ratio a bit easier, since the proposal terms cancel:

$$\alpha(x' \mid x) = \min\left(1, \frac{\gamma(x')\, q(x \mid x')}{\gamma(x)\, q(x' \mid x)}\right) = \min\left(1, \frac{\gamma(x')}{\gamma(x)}\right)$$
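A random-walk Metropolis sampler with this symmetric Gaussian proposal can be sketched in a few lines (the function name and interface are my own, a minimal illustration rather than a reference implementation):

```python
import numpy as np

def random_walk_metropolis(log_gamma, x0, sigma, n_steps, rng):
    """Metropolis sampler with the symmetric proposal N(x' | x, sigma^2).

    Because q(x' | x) = q(x | x'), the proposal terms cancel and the
    acceptance probability reduces to min(1, gamma(x') / gamma(x)),
    where gamma may be unnormalized.
    """
    x, samples = x0, []
    for _ in range(n_steps):
        x_prop = x + sigma * rng.normal()
        # accept with probability min(1, gamma(x') / gamma(x)),
        # implemented in log space as log(u) < log_gamma(x') - log_gamma(x)
        if np.log(rng.uniform()) < log_gamma(x_prop) - log_gamma(x):
            x = x_prop
        samples.append(x)
    return np.array(samples)
```

For example, `random_walk_metropolis(lambda z: -abs(z), 0.0, 1.0, 10000, np.random.default_rng(0))` draws from an unnormalized Laplace density without ever computing its normalizing constant.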
What are the options for our step size, $\sigma$? (1) We can take really small steps, which means we will need a lot of steps to get anywhere. (2) We can take really large steps; however, we will probably jump around too much and end up with samples we more often reject.
So the question then is: how should we set our step size?
Based on analyses people have done, we should tune our acceptance rate. We keep adjusting the step size until we hit a magic number, which is an acceptance rate of 0.234. This is optimal under a particular set of assumptions. In practice, we want anything between 0.1 and 0.7, depending on what we are doing.
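One simple way to do this tuning during burn-in is a multiplicative update on $\sigma$ toward the commonly cited target acceptance rate of 0.234. This is a crude heuristic of my own for illustration, not a method prescribed by the lectures; note that adaptation should stop before collecting the samples you actually use, so that the final chain is plain MH:

```python
import numpy as np

def tune_step_size(log_gamma, x0, rng, target_rate=0.234,
                   n_rounds=50, steps_per_round=100):
    """Burn-in heuristic: grow sigma when we accept too often,
    shrink it when we accept too rarely."""
    x, sigma = x0, 1.0
    for _ in range(n_rounds):
        accepted = 0
        for _ in range(steps_per_round):
            x_prop = x + sigma * rng.normal()
            if np.log(rng.uniform()) < log_gamma(x_prop) - log_gamma(x):
                x, accepted = x_prop, accepted + 1
        rate = accepted / steps_per_round
        # multiplicative update toward the target acceptance rate
        sigma *= np.exp(rate - target_rate)
    return sigma
```

After the loop, `sigma` should give an acceptance rate near the target on the tuning distribution; freeze it and then run the real chain.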
Summary – MCMC vs. Importance Sampling
We have looked at two MC samplers, MCMC and Importance Sampling.
For Importance Sampling we say: we design some proposal, generate some samples, and assign each sample a weight; we can then take the weighted average over the samples. In particular, we know that the average of the weights gives us an estimate of the marginal likelihood.
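As a small sketch of this, here is self-normalized importance sampling on a made-up conjugate Gaussian model (my own toy example) where the true marginal likelihood is available in closed form, so we can check the estimate:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy conjugate model: x ~ N(0, 1), y | x ~ N(x, 1), observed y = 1.
# The true marginal likelihood is p(y) = N(y; 0, 2), computable exactly.
y = 1.0
n = 100_000
xs = rng.normal(size=n)                                 # proposal = prior
w = np.exp(-0.5 * (y - xs) ** 2) / np.sqrt(2 * np.pi)   # weights = likelihood

# The average weight estimates the marginal likelihood p(y) ...
Z_hat = w.mean()
Z_true = np.exp(-y ** 2 / 4) / np.sqrt(4 * np.pi)

# ... and the self-normalized weighted average estimates posterior
# expectations (the true posterior mean here is y / 2 = 0.5).
post_mean_hat = (w * xs).sum() / w.sum()

print(Z_hat, Z_true)
```

Nothing in the MH chain produces an analogue of `Z_hat`, which is exactly the difference discussed below.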
For Metropolis-Hastings we do something different. We say: suppose we have a sample $x$. We propose a new sample, $x' \sim q(x' \mid x)$, and pick a uniform random number between 0 and 1. We calculate the acceptance probability, $\alpha(x' \mid x)$, and if our number is larger than the acceptance probability we keep the previous sample; otherwise we keep the new sample.
MCMC has a transition probability/kernel and satisfies detailed balance, $\pi(x)\, \kappa(x' \mid x) = \pi(x')\, \kappa(x \mid x')$.
The difference is that IS gives us an estimate of our marginal, while MH allows us to do hill climbing but doesn’t give us an estimate of the marginal.
We’ll discuss why having an estimate for the marginal is important when we discuss Annealed Importance Sampling!
Thanks for reading!