gemlib.mcmc#

Markov chain Monte Carlo inference

Markov chain Monte Carlo (MCMC) is a family of algorithms for drawing samples from a random variable whose probability density function is known only up to a normalising constant. This makes MCMC well suited to sampling from complex Bayesian posterior distributions.

gemlib.mcmc provides a suite of composable MCMC kernels for use both in general Bayesian hierarchical probability models and in state transition models. In particular, it provides a framework for building Metropolis-within-Gibbs algorithms, which are especially useful for semi-continuous probability spaces such as those arising in Bayesian hierarchical state transition models.

MCMC drivers#

Kernel drivers guide the overall MCMC scheme. We provide functionality for iterating the MCMC (mcmc()), restricting a kernel to operate on a subset of the parameter space as part of a Metropolis-within-Gibbs scheme (MwgStep), and a special “meta”-kernel (multi_scan()) that invokes a kernel multiple times within each sweep of the overall scheme.

MwgStep(sampling_algorithm, target_names, ...)

A Metropolis-within-Gibbs step.

SamplingAlgorithm(init_fn, step_fn)

Represent a sampling algorithm

mcmc(num_samples, sampling_algorithm, ...[, ...])

Runs an MCMC using sampling_algorithm

multi_scan(num_updates, sampling_algorithm)

Performs multiple applications of a kernel

Atomic kernels#

Atomic kernels encapsulate algorithms that propagate the Markov chain, i.e. an MCMC “step”. Each function returns an instance of a SamplingAlgorithm, which is a tuple of init() and step() functions with the special property of being composable.

Here’s an example of creating and running a single Hamiltonian Monte Carlo sampler:

import scipy as sp
import scipy.stats  # make sp.stats available

from gemlib.mcmc import hmc, mcmc

def log_prob_fn(x):
  # Standard normal target density
  return sp.stats.norm.logpdf(x, loc=0.0, scale=1.0)

kernel = hmc(step_size=0.1, num_leapfrog_steps=16)

# Draw samples
samples, results = mcmc(
    num_samples=1000,
    sampling_algorithm=kernel,
    target_density_fn=log_prob_fn,
    initial_position=0.1,
    seed=[0,0],
)

The major novelty in gemlib is that kernels may be composed, so that an appropriate method can be used to propagate each part of the parameter space as part of a Metropolis-within-Gibbs algorithm. For example:

from collections import namedtuple

import scipy as sp
import scipy.stats  # make sp.stats available

from gemlib.mcmc import MwgStep, hmc, mcmc, rwmh

def log_prob_fn(x, y):
  # Independent normal targets for each parameter
  x_lp = sp.stats.norm.logpdf(x, loc=0.0, scale=1.0)
  y_lp = sp.stats.norm.logpdf(y, loc=1.0, scale=2.0)
  return x_lp + y_lp

Position = namedtuple("Position", ["x", "y"])

kernel1 = MwgStep(
    hmc(step_size=0.1, num_leapfrog_steps=16),
    target_names=["x"],
)

kernel2 = MwgStep(
    rwmh(scale=1.0),
    target_names=["y"],
)

# Compose the kernels
kernel = kernel1 >> kernel2

# Draw samples
samples, results = mcmc(
    num_samples=1000,
    sampling_algorithm=kernel,
    target_density_fn=log_prob_fn,
    initial_position=Position(0.1, 0.2),
    seed=[0,0],
)
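
The multi_scan() driver listed above can wrap any step in such a composition to apply it several times per sweep. Here is a minimal sketch based on the signature given in the drivers listing; the choice of five updates of y per update of x is purely illustrative:

from gemlib.mcmc import MwgStep, hmc, multi_scan, rwmh

# Update x once per sweep with HMC...
x_kernel = MwgStep(
    hmc(step_size=0.1, num_leapfrog_steps=16),
    target_names=["x"],
)

# ...and update y five times per sweep with random walk Metropolis Hastings
y_kernel = multi_scan(
    num_updates=5,
    sampling_algorithm=MwgStep(rwmh(scale=1.0), target_names=["y"]),
)

# Compose and pass to mcmc() exactly as before
kernel = x_kernel >> y_kernel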

adaptive_rwmh([initial_scale, ...])

An Adaptive Random Walk Metropolis Hastings algorithm

hmc([step_size, num_leapfrog_steps, mass_matrix])

Hamiltonian Monte Carlo

rwmh([scale])

Random walk Metropolis Hastings MCMC kernel

transform_sampling_algorithm(bijectors, ...)

Transform a sampling algorithm.
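
As a hedged sketch of transform_sampling_algorithm, the following assumes it accepts a (structure of) TensorFlow Probability bijectors followed by the sampling algorithm to wrap, so that proposals are made in unconstrained space while the target is evaluated on the constrained scale; check the function's API reference for the exact argument names:

import tensorflow_probability as tfp

from gemlib.mcmc import hmc, transform_sampling_algorithm

# Assumed usage: sample a positive-valued parameter by running HMC on the
# unconstrained (log) scale and mapping back through an Exp bijector.
kernel = transform_sampling_algorithm(
    tfp.bijectors.Exp(),
    hmc(step_size=0.1, num_leapfrog_steps=16),
)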

Discrete-time state transition model kernels#

These kernels operate on censored event times generated from the DiscreteTimeStateTransitionModel class.

left_censored_events_mh(incidence_matrix, ...)

Update initial conditions and events for DiscreteTimeStateTransitionModel

move_events(incidence_matrix, ...[, name])

Move partially-censored transition events

right_censored_events_mh(incidence_matrix, ...)

Update right-censored events for DiscreteTimeStateTransitionModel