Using PyMC

PyMC is a powerful Python library for probabilistic programming and Bayesian analysis. Here, we show that PyMC can be used to perform the same likelihood sampling for which we previously wrote our own algorithm.

Below, we read in the data and build the model.

import pandas as pd 
import numpy as np
from scipy.stats import norm

# Read in the time-series data (columns: t, At, At_err).
data = pd.read_csv('../data/first-order.csv')

# Each measured data point described as a normal distribution (mean At, width At_err).
D = [norm(data['At'][i], data['At_err'][i]) for i in range(len(data))]

def first_order(t, k, A0):
    """
    A first order rate equation.
    
    :param t: The time to evaluate the rate equation at.
    :param k: The rate constant.
    :param A0: The initial concentration of A.
    
    :return: The concentration of A at time t.
    """
    return A0 * np.exp(-k * t)
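
As a quick visual check (not part of the original analysis), we can overlay the model on the data using guessed parameter values; the values \(k = 0.1\) and \([A]_0 = 7.5\) below are purely illustrative.

import matplotlib.pyplot as plt

# Illustrative guesses only; the sampler below estimates these properly.
t_smooth = np.linspace(data['t'].min(), data['t'].max(), 100)

plt.errorbar(data['t'], data['At'], yerr=data['At_err'], fmt='o', label='data')
plt.plot(t_smooth, first_order(t_smooth, 0.1, 7.5), label='model (guessed parameters)')
plt.xlabel('t')
plt.ylabel('[A]')
plt.legend()
plt.show()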

The next step is to construct the PyMC sampler. The format that PyMC expects can be a bit unfamiliar.

First, we create objects for the two parameters; these are bounded so that \(0 \leq k < 1\) and \(0 \leq [A]_0 < 10\). Strictly, these are prior probabilities, which we will look at next, but using uniform distributions makes this mathematically equivalent to likelihood sampling. Next, we create a normally distributed likelihood function to compare the data and the model. Finally, we sample for 1000 steps, with 10 chains. The tune parameter is the number of steps used to tune the Markov chain step sizes.
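
To make this equivalence explicit: Bayes' theorem gives the posterior as proportional to the likelihood multiplied by the priors, and when the priors are constant (uniform) over the sampled region, the posterior is proportional to the likelihood alone,

\[ P(k, [A]_0 \,|\, D) \propto P(D \,|\, k, [A]_0)\,P(k)\,P([A]_0) \propto P(D \,|\, k, [A]_0). \]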

import pymc as pm

with pm.Model() as model:
    # Uniform priors on the rate constant and the initial concentration.
    k = pm.Uniform('k', 0, 1)
    A0 = pm.Uniform('A0', 0, 10)
    
    # Normally distributed likelihood comparing the model to the observed data.
    At = pm.Normal('At', 
                   mu=first_order(data['t'], k, A0), 
                   sigma=data['At_err'], 
                   observed=data['At'])
    
    # 1000 draws per chain, after 1000 tuning steps, over 10 chains.
    trace = pm.sample(1000, tune=1000, chains=10, progressbar=False)
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (10 chains in 2 jobs)
NUTS: [k, A0]
Sampling 10 chains for 1_000 tune and 1_000 draw iterations (10_000 + 10_000 draws total) took 5 seconds.

Unlike the code that we created previously, PyMC defaults to the NUTS (No-U-Turn Sampler) algorithm [7]. This sampler enables the automatic step size tuning that we have taken advantage of here.
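
If we wanted something closer to our hand-written random-walk approach, the step method can also be chosen explicitly. The following is a minimal sketch (not run as part of this analysis) using PyMC's Metropolis step method; the variable name metropolis_trace is ours.

with model:
    metropolis_trace = pm.sample(1000, tune=1000, chains=10,
                                 step=pm.Metropolis(), progressbar=False)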

The NUTS sampling run results in an object assigned to the variable trace.

trace
arviz.InferenceData
    • posterior
      <xarray.Dataset> Size: 168kB
      Dimensions:  (chain: 10, draw: 1000)
      Coordinates:
        * chain    (chain) int64 80B 0 1 2 3 4 5 6 7 8 9
        * draw     (draw) int64 8kB 0 1 2 3 4 5 6 7 ... 993 994 995 996 997 998 999
      Data variables:
          k        (chain, draw) float64 80kB 0.09838 0.0997 0.1063 ... 0.1197 0.09881
          A0       (chain, draw) float64 80kB 7.332 7.07 7.475 ... 7.88 7.88 6.942
      Attributes:
          created_at:                 2025-08-21T09:19:10.700373+00:00
          arviz_version:              0.22.0
          inference_library:          pymc
          inference_library_version:  5.20.0
          sampling_time:              5.05695366859436
          tuning_steps:               1000

    • sample_stats
      <xarray.Dataset> Size: 1MB
      Dimensions:                (chain: 10, draw: 1000)
      Coordinates:
        * chain                  (chain) int64 80B 0 1 2 3 4 5 6 7 8 9
        * draw                   (draw) int64 8kB 0 1 2 3 4 5 ... 995 996 997 998 999
      Data variables: (12/17)
          largest_eigval         (chain, draw) float64 80kB nan nan nan ... nan nan
          step_size              (chain, draw) float64 80kB 0.8339 0.8339 ... 0.5714
          perf_counter_start     (chain, draw) float64 80kB 1.101e+03 ... 1.105e+03
          energy_error           (chain, draw) float64 80kB 0.03355 ... 0.04462
          smallest_eigval        (chain, draw) float64 80kB nan nan nan ... nan nan
          max_energy_error       (chain, draw) float64 80kB 0.09537 0.2051 ... -0.4004
          ...                     ...
          diverging              (chain, draw) bool 10kB False False ... False False
          n_steps                (chain, draw) float64 80kB 3.0 3.0 5.0 ... 1.0 5.0
          energy                 (chain, draw) float64 80kB 3.633 4.11 ... 5.347 4.948
          process_time_diff      (chain, draw) float64 80kB 0.0003405 ... 0.0003039
          acceptance_rate        (chain, draw) float64 80kB 0.9373 0.8784 ... 0.956
          index_in_trajectory    (chain, draw) int64 80kB 2 2 1 2 -3 2 ... 2 -1 -1 0 3
      Attributes:
          created_at:                 2025-08-21T09:19:10.721943+00:00
          arviz_version:              0.22.0
          inference_library:          pymc
          inference_library_version:  5.20.0
          sampling_time:              5.05695366859436
          tuning_steps:               1000

    • observed_data
      <xarray.Dataset> Size: 80B
      Dimensions:   (At_dim_0: 5)
      Coordinates:
        * At_dim_0  (At_dim_0) int64 40B 0 1 2 3 4
      Data variables:
          At        (At_dim_0) float64 40B 6.23 3.76 2.6 1.85 1.27
      Attributes:
          created_at:                 2025-08-21T09:19:10.726370+00:00
          arviz_version:              0.22.0
          inference_library:          pymc
          inference_library_version:  5.20.0

This contains the chain information, among other things. Instead of probing into the trace object directly, we can take advantage of functionality from the arviz library to produce some informative plots.

import matplotlib.pyplot as plt
import arviz as az

az.plot_trace(trace, var_names=["k", "A0"])
plt.tight_layout()
plt.show()
[Figure: trace plots and marginal posterior densities for k and A0, produced by az.plot_trace.]

Above, we can see the trace of each of the different chains, and the chains appear to have converged to the same distribution. We can flatten the samples from all chains into a single array as follows.

flat_chain = np.vstack([trace.posterior['k'].values.flatten(), trace.posterior['A0'].values.flatten()]).T

import seaborn as sns

chains_df = pd.DataFrame(flat_chain, columns=['k', 'A0'])
sns.jointplot(data=chains_df, x='k', y='A0', kind='kde')
plt.show()
[Figure: joint kernel density estimate of the sampled k and A0 values.]

It is clear that, using PyMC, we achieve much better sampling of the distributions than with our earlier hand-written algorithm. This makes summary statistics, such as the mean and standard deviation, much more reliable.

az.summary(trace, var_names=["k", "A0"])
      mean    sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
k    0.106  0.01   0.088    0.124      0.000    0.000    2737.0    3531.0    1.0
A0   7.558  0.44   6.771    8.419      0.009    0.005    2678.0    3220.0    1.0
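
As a cross-check, the same means and standard deviations can be computed directly from the flattened chain array constructed earlier; this is a simple numpy calculation rather than part of the arviz output.

# Column order follows flat_chain: k, then A0.
print(np.mean(flat_chain, axis=0))
print(np.std(flat_chain, axis=0))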