Statistical Computing: Lecture 15
Example:
Suppose that we have data X.
Model: X | p, λ ~ Gamma(p, λ)
Prior: p ~ Expo(1) and λ ~ Expo(3), independently.
Here the joint density of (p, λ, X) is

f(p, λ, x) = [λ^p / Γ(p)] x^(p-1) e^(-λx) · e^(-p) · 3 e^(-3λ).

Collecting terms involving λ, we get the full conditional density of λ as

f(λ | p, x) ∝ λ^p e^(-(x+3)λ).

By inspection we recognise the full conditional as Gamma(p+1, X+3). Similarly, collecting the terms involving p, the full conditional density of p is

f(p | λ, x) ∝ (λx)^p e^(-p) / Γ(p),

which is not a standard distribution. It is not obvious how we may simulate from this full conditional.
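The λ-conditional, by contrast, is easy to sample directly. A minimal sketch in Matlab (assuming the Statistics Toolbox's gamrnd, which takes shape and scale, so the rate X+3 enters as the scale 1/(X+3)):

lambda = gamrnd(p + 1, 1/(X + 3));   % draw from Gamma(p+1, X+3) in shape-rate form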
Rejection sampling:
Suppose that we wish to simulate from a density f, and that we know another density g with

f(x) ≤ B g(x) for all x,

where B is some known constant. We shall call Bg an envelope of f. Sometimes we shall abuse notation and call g alone an envelope without explicitly mentioning the constant B. Assume that we can simulate from this density, g. Then rejection sampling is a method to modify iid samples from g to get iid samples from f, as follows. First generate X ~ g. Compute the ratio

R = f(X)/(B g(X)).

Note that since X is generated from g, the chance that the denominator vanishes is 0. Hence R is well-defined with probability 1. Toss a coin with probability of head equal to R. If you get a head, return X; else repeat the process. Here is the algorithm.
Rejection sampling
LOOP: Generate X ~ g.
Let R = f(X)/(B g(X)).
Toss a coin with P(head)=R
If head
Output X.
Else
Go to LOOP.
End if
Exercise: Can you prove that this algorithm really results in a rv with the correct density? [Hint: Define a 0-1 rv Y which is 1 iff X is accepted. You know that X ~ g. Find Y | X. Then compute the joint distribution of (X,Y). You are to show that X | Y=1 ~ f. You need to use nothing but elementary probability.]
Exercise: Show that this algorithm terminates with probability 1.
Here is a Matlab implementation of rejection sampling, with g the Unif[0,1] density:
function X = rej(f,B)
% Rejection sampling with g = Unif[0,1] as the envelope density.
% f is the target density on [0,1] (handle or name); B bounds f/g.
while(1)
    X = unifrnd(0,1);          % generate X ~ g, the Unif[0,1] density
    numerator = feval(f,X);    % f(X)
    denominator = B;           % B*g(X) = B, since g(X) = 1 on [0,1]
    R = numerator/denominator; % acceptance probability
    u = unifrnd(0,1);          % toss a coin with P(head) = R
    if(u < R)
        return;                % head: accept and output X
    end
end                            % tail: go back and try again
And here is an example target density f, the tent density on [0,1]:
function d = tent(x)
% The tent density on [0,1]: peaks at x = 0.5 with maximum value 2.
if x < 0.5
    d = 4*x;
else
    d = 4 - 4*x;
end
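A quick usage sketch (our own illustration, not from the notes): the tent density has maximum 2 and g ≡ 1 on [0,1], so B = 2 is a valid envelope constant:

X = rej(@tent, 2);   % one draw from the tent density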
Example:
Here is the Gibbs sampler to simulate from the joint distribution of (X,Y):

Initialise Y(0).
LOOP over t = 1, 2, 3, ...
Generate X(t) ~ X | Y = Y(t-1).
Generate Y(t) ~ Y | X = X(t).

Clearly, the full conditionals are changing from step to step. For instance, X(1) is generated from X | Y = Y(0) while X(2) is generated from X | Y = Y(1). These two conditional distributions need not be the same.
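As a concrete illustration (our own, not from the notes): for a standard bivariate normal with correlation rho, both full conditionals are normal, X | Y=y ~ N(rho*y, 1-rho^2) and symmetrically for Y, so the two Gibbs steps are one line each. A minimal sketch:

function [X, Y] = gibbsnormal(T, rho)
% Gibbs sampler for a standard bivariate normal with correlation rho.
% Full conditionals: X | Y=y ~ N(rho*y, 1-rho^2), and symmetrically for Y.
X = zeros(T,1); Y = zeros(T,1);
s = sqrt(1 - rho^2);       % conditional standard deviation
y = 0;                     % initialise Y(0)
for t = 1:T
    x = rho*y + s*randn;   % draw X(t) from X | Y = Y(t-1)
    y = rho*x + s*randn;   % draw Y(t) from Y | X = X(t)
    X(t) = x; Y(t) = y;
end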
Adaptive Rejection Sampling (basic structure)
LOOP: Generate X ~ g.
Let R = f(X)/(B g(X)).
Toss a coin with P(head)=R
If head
Output X.
Else
Update the envelope using X.
Go to LOOP.
End if
Assume that log(f) is concave. Choose a set

S = {x1 < ... < xk}

of k points (k > 2) on the horizontal axis, such that the line joining (x1, log(f(x1))) and (x2, log(f(x2))) has a positive slope and the line joining the last two points has a negative slope. Given any such set S we shall construct an envelope GS for log(f) as follows. (Then gS := exp(GS) will be an envelope for f.) Draw vertical lines through each xi to intersect the log(f)-curve at Pi.
[Figure: Graph of log(f)]
[Figure: Join with lines]
[Figure: Zigzag crown shown with thick line]
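In symbols, if the figures follow the usual derivative-free construction (the notation L(i,j) is ours): let L(i,j) denote the line through Pi and Pj, extended in both directions. On an interior interval [xi, xi+1] the crown is the lower of the two extended neighbouring chords,

GS(x) = min( L(i-1,i)(x), L(i+1,i+2)(x) ) for x in [xi, xi+1],

on the first and last intervals only the one available extended chord is used, and beyond x1 and xk the crown is the extension of L(1,2) and L(k-1,k) respectively. Concavity of log(f) is what guarantees GS ≥ log(f) everywhere.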
Usually at most 5 or 6 iterations are required before the algorithm outputs a number.
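Why so few? A standard computation (not spelled out in the notes) gives the acceptance probability of each trial:

P(accept) = ∫ [f(x)/(B g(x))] g(x) dx = 1/B,

so the number of trials until acceptance is Geometric(1/B), with mean B. Each rejection tightens the envelope, driving BS towards 1 and making further rejections rarer.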
Adaptive Rejection Sampling (complete)
Initialise S.
LOOP: Generate X ~ gS.
Let R = f(X)/(BS gS(X)).
Toss a coin with P(head)=R
If head
Output X.
Else
Add X to S.
Go to LOOP.
End if
A natural initial choice is

S = {x1, x2, x3},

where the line joining (x1, log(f(x1))) and (x2, log(f(x2))) has a positive slope and the line joining (x2, log(f(x2))) and (x3, log(f(x3))) has a negative slope. During Gibbs sampling the full conditionals change only slightly from one iteration to the next. So the S from the end of one iteration provides a good starting S for the next iteration.
Exercise:
Write a Matlab function or SAS module to simulate from the density proportional to exp(ax + b) over (l,r), where l may be -∞ or r may be ∞. In these unbounded cases, a must have the appropriate sign (negative if r = ∞, positive if l = -∞) for the density to be integrable.
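One possible approach, sketched below under our own naming (texp): b only shifts the log-density by a constant, so it cancels on normalisation, and in each case the CDF inverts in closed form. This building block is exactly what simulating from the piecewise-exponential gS = exp(GS) requires.

function x = texp(a, l, r)
% One draw from the density proportional to exp(a*x) on (l,r),
% by inverting the CDF. (The constant b drops out after normalisation.)
u = unifrnd(0,1);
if abs(a) < 1e-12              % a ~ 0: the density is flat on a bounded (l,r)
    x = l + u*(r - l);
elseif isinf(r)                % (l, Inf): requires a < 0
    x = l + log(1 - u)/a;
elseif isinf(l)                % (-Inf, r): requires a > 0
    x = r + log(u)/a;
else                           % bounded interval: invert F directly
    x = log(exp(a*l) + u*(exp(a*r) - exp(a*l)))/a;
end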
Exercise: Given an S, write down a ready-to-implement algorithm to simulate from gS.