gemlib.mcmc#
Markov chain Monte Carlo inference
Markov-chain Monte Carlo (MCMC) is an algorithm for drawing samples from a random variable whose probability density function is known only up to a normalising constant. This makes MCMC appropriate for sampling from complex Bayesian posteriors.
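To make this concrete, here is a minimal random-walk Metropolis sampler in plain Python. This is an illustrative sketch, not part of gemlib: it targets a standard normal using only an unnormalised log-density, which suffices because the acceptance ratio cancels the normalising constant.

```python
import math
import random

def unnormalised_log_prob(x):
    # Standard normal density without its normalising constant;
    # the sampler only ever uses density ratios, so the constant cancels.
    return -0.5 * x * x

def metropolis(log_prob, x0, num_samples, scale=1.0, seed=42):
    rng = random.Random(seed)
    samples, x = [], x0
    for _ in range(num_samples):
        proposal = x + rng.gauss(0.0, scale)
        # Accept with probability min(1, p(proposal) / p(x)),
        # computed on the log scale for numerical stability.
        if math.log(rng.random()) < log_prob(proposal) - log_prob(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(unnormalised_log_prob, x0=0.0, num_samples=5000)
```

With enough iterations the empirical mean and variance of `samples` approach the target's 0 and 1, even though the density was never normalised.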
gemlib.mcmc provides a suite of composable MCMC kernels for use in both general Bayesian hierarchical probability models and different types of state transition model. In particular, it provides a framework for composing Metropolis-within-Gibbs algorithms, which are especially useful for semi-continuous probability spaces such as those found in Bayesian hierarchical state transition models.
MCMC drivers#
Kernel drivers are used to guide the overall MCMC scheme. We provide functionality for iterating the MCMC (mcmc()), for restricting a kernel to operate on a subset of the parameter space as part of a Metropolis-within-Gibbs scheme (MwgStep), and for a special “meta”-kernel (multi_scan()) that can invoke a kernel multiple times within an overall Metropolis-within-Gibbs scheme.
- MwgStep: a Metropolis-within-Gibbs step.
- SamplingAlgorithm: represents a sampling algorithm.
- mcmc(): runs an MCMC using a sampling algorithm.
- multi_scan(): performs multiple applications of a kernel.
Atomic kernels#
Atomic kernels encapsuate algorithms that propagate the Markov chain, i.e. an MCMC
“step”. Each function returns an instance of a SamplingAlgorithm
,
which is a tuple of init()
and
step()
functions with the special property of being
composable.
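The init()/step() pairing can be sketched in plain Python. The SamplingAlgorithm tuple, compose function, and toy kernels below are hypothetical illustrations of the pattern, not gemlib's actual implementation:

```python
from collections import namedtuple

# Hypothetical mirror of the init/step pattern: a kernel is just a
# pair of functions, which is what makes sequential composition easy.
SamplingAlgorithm = namedtuple("SamplingAlgorithm", ["init", "step"])

def compose(first, second):
    """Chain two kernels: apply first's step, then second's."""
    def init(position):
        # Each sub-kernel keeps its own auxiliary state.
        return (first.init(position), second.init(position))

    def step(position, states):
        position, s1 = first.step(position, states[0])
        position, s2 = second.step(position, states[1])
        return position, (s1, s2)

    return SamplingAlgorithm(init, step)

# Two toy "kernels" that deterministically transform the position.
inc = SamplingAlgorithm(lambda p: None, lambda p, s: (p + 1, s))
dbl = SamplingAlgorithm(lambda p: None, lambda p, s: (p * 2, s))

kernel = compose(inc, dbl)
state = kernel.init(0)
position, state = kernel.step(0, state)  # (0 + 1) * 2 == 2
```

Because the composite is itself an (init, step) pair, composition nests arbitrarily, which is the property the real SamplingAlgorithm exploits.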
Here’s an example of creating a single Hamiltonian Monte Carlo sampler:
import scipy as sp

from gemlib.mcmc import hmc, mcmc

def log_prob_fn(x):
    # Standard normal target density
    return sp.stats.norm.logpdf(x, loc=0.0, scale=1.0)

kernel = hmc(step_size=0.1, num_leapfrog_steps=16)

# Draw samples
samples, results = mcmc(
    num_samples=1000,
    sampling_algorithm=kernel,
    target_density_fn=log_prob_fn,
    initial_position=0.1,
    seed=[0, 0],
)
The major novelty in Gemlib is that kernels may be composed, such that appropriate methods can be used to propagate different parts of the parameter set as part of a Metropolis-within-Gibbs algorithm. For example:
from collections import namedtuple

import scipy as sp

from gemlib.mcmc import MwgStep, hmc, mcmc, rwmh

def log_prob_fn(x, y):
    x_logpdf = sp.stats.norm.logpdf(x, loc=0.0, scale=1.0)
    y_logpdf = sp.stats.norm.logpdf(y, loc=1.0, scale=2.0)
    return x_logpdf + y_logpdf

Position = namedtuple("Position", ["x", "y"])

kernel1 = MwgStep(
    hmc(step_size=0.1, num_leapfrog_steps=16),
    target_names=["x"],
)
kernel2 = MwgStep(
    rwmh(scale=1.0),
    target_names=["y"],
)

# Compose the kernels
kernel = kernel1 >> kernel2

# Draw samples
samples, results = mcmc(
    num_samples=1000,
    sampling_algorithm=kernel,
    target_density_fn=log_prob_fn,
    initial_position=Position(0.1, 0.2),
    seed=[0, 0],
)
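For intuition, the same Metropolis-within-Gibbs scheme can be sketched in plain Python without gemlib, alternating single-coordinate random-walk updates on a two-component target matching the example above (an illustration only; mwg_update and its signature are invented here):

```python
import math
import random

def log_prob(x, y):
    # Unnormalised log-density: x ~ N(0, 1), y ~ N(1, 2)
    return -0.5 * x * x - 0.5 * ((y - 1.0) / 2.0) ** 2

rng = random.Random(1)

def mwg_update(x, y, coord, scale):
    """Random-walk Metropolis update of one coordinate, holding the
    other fixed: a single Metropolis-within-Gibbs step."""
    if coord == "x":
        prop = (x + rng.gauss(0.0, scale), y)
    else:
        prop = (x, y + rng.gauss(0.0, scale))
    if math.log(rng.random()) < log_prob(*prop) - log_prob(x, y):
        return prop
    return x, y

x, y = 0.1, 0.2
xs, ys = [], []
for _ in range(4000):
    x, y = mwg_update(x, y, "x", scale=1.0)  # update x given y
    x, y = mwg_update(x, y, "y", scale=2.0)  # update y given x
    xs.append(x)
    ys.append(y)
```

Each sweep leaves the joint target invariant because each coordinate update is itself a valid Metropolis step on the corresponding conditional; gemlib's MwgStep plays the role of restricting a kernel to one coordinate block.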
- An adaptive Random Walk Metropolis Hastings algorithm.
- hmc(): Hamiltonian Monte Carlo.
- rwmh(): random walk Metropolis Hastings MCMC kernel.
- Transform a sampling algorithm.
Discrete-time state transition model kernels#
These kernels operate on censored event times generated from the DiscreteTimeStateTransitionModel class.
- Update initial conditions and events for DiscreteTimeStateTransitionModel.
- Move partially-censored transition events.
- Update right-censored events for DiscreteTimeStateTransitionModel.