Using PyMC

PyMC is a powerful Python library for probabilistic programming and Bayesian analysis. Here, we show that PyMC can perform the same likelihood sampling for which we previously wrote our own algorithm.

Below, we read in the data and build the model.

import pandas as pd
import numpy as np
from scipy.stats import norm

# Read in the measured concentration data.
data = pd.read_csv('../data/first-order.csv')

# Normal distributions describing each measurement and its uncertainty.
D = [norm(data['At'][i], data['At_err'][i]) for i in range(len(data))]

def first_order(t, k, A0):
    """
    A first order rate equation.
    
    :param t: The time to evaluate the rate equation at.
    :param k: The rate constant.
    :param A0: The initial concentration of A.
    
    :return: The concentration of A at time t.
    """
    return A0 * np.exp(-k * t)
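
As a quick sanity check, we can evaluate the model at the measured times and compare with the data. The parameter values used here are illustrative guesses, not fitted results.

# Illustrative guesses for k and A0 -- not fitted values.
print(first_order(data['t'], 0.1, 7.5))
print(data['At'].values)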

The next step is to construct the PyMC sampler. The format that PyMC expects can be a bit unfamiliar.

First, we create objects for the two parameters; these are bounded so that \(0 \leq k < 1\) and \(0 \leq [A]_0 < 10\). Strictly, these are prior probability distributions, which we will look at next, but using uniform distributions makes this mathematically equivalent to likelihood sampling. Next, we create a normally distributed likelihood function to compare the data with the model. Finally, we sample for 1000 steps with 10 chains; the tune parameter sets the number of steps used to tune the Markov chain step sizes.
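
Written explicitly, with these uniform priors the distribution that PyMC samples is proportional to the Gaussian likelihood we have been working with (here \(\sigma_i\) denotes the At_err uncertainty on the \(i\)-th measurement \([A]_{t,i}\)),

\[
L(k, [A]_0) = \prod_{i} \frac{1}{\sqrt{2\pi \sigma_i^2}} \exp\left(-\frac{\left([A]_{t,i} - [A]_0 e^{-k t_i}\right)^2}{2 \sigma_i^2}\right),
\]

within the bounds of the priors, and zero outside them.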

import pymc as pm

with pm.Model() as model:
    k = pm.Uniform('k', 0, 1)
    A0 = pm.Uniform('A0', 0, 10)
    
    At = pm.Normal('At', 
                   mu=first_order(data['t'], k, A0), 
                   sigma=data['At_err'], 
                   observed=data['At'])
    
    trace = pm.sample(1000, tune=1000, chains=10, progressbar=False)
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (10 chains in 2 jobs)
NUTS: [k, A0]
Sampling 10 chains for 1_000 tune and 1_000 draw iterations (10_000 + 10_000 draws total) took 5 seconds.

Unlike the code that we created previously, PyMC defaults to the No-U-Turn Sampler (NUTS) [7]. This sampler enables the automatic step size tuning that we have taken advantage of here.
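
PyMC also exposes simpler step methods. As a sketch (assuming we wanted to mimic the Metropolis-style sampler written by hand earlier), we could request one explicitly; this is not needed for the rest of this section, and mh_trace is just an illustrative name.

with model:
    # Explicitly request a Metropolis step method instead of the default NUTS.
    mh_trace = pm.sample(1000, tune=1000, chains=10,
                         step=pm.Metropolis(), progressbar=False)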

The call to pm.sample above returns an object, assigned here to the variable trace.

trace
arviz.InferenceData
    • <xarray.Dataset> Size: 168kB
      Dimensions:  (chain: 10, draw: 1000)
      Coordinates:
        * chain    (chain) int64 80B 0 1 2 3 4 5 6 7 8 9
        * draw     (draw) int64 8kB 0 1 2 3 4 5 6 7 ... 993 994 995 996 997 998 999
      Data variables:
          k        (chain, draw) float64 80kB 0.09638 0.096 0.09886 ... 0.1226 0.1099
          A0       (chain, draw) float64 80kB 7.426 7.504 7.467 ... 8.139 8.119 7.67
      Attributes:
          created_at:                 2025-05-29T16:31:12.657606+00:00
          arviz_version:              0.21.0
          inference_library:          pymc
          inference_library_version:  5.20.0
          sampling_time:              5.1065380573272705
          tuning_steps:               1000

    • <xarray.Dataset> Size: 1MB
      Dimensions:                (chain: 10, draw: 1000)
      Coordinates:
        * chain                  (chain) int64 80B 0 1 2 3 4 5 6 7 8 9
        * draw                   (draw) int64 8kB 0 1 2 3 4 5 ... 995 996 997 998 999
      Data variables: (12/17)
          largest_eigval         (chain, draw) float64 80kB nan nan nan ... nan nan
          acceptance_rate        (chain, draw) float64 80kB 1.0 0.8076 ... 0.8979
          n_steps                (chain, draw) float64 80kB 3.0 1.0 1.0 ... 3.0 5.0
          index_in_trajectory    (chain, draw) int64 80kB 2 1 -1 1 -3 ... -3 -1 -1 2 2
          max_energy_error       (chain, draw) float64 80kB -0.09478 0.2136 ... 0.2029
          energy                 (chain, draw) float64 80kB 4.285 4.702 ... 6.788 5.41
          ...                     ...
          energy_error           (chain, draw) float64 80kB -0.07081 ... -0.1449
          tree_depth             (chain, draw) int64 80kB 2 1 1 2 3 3 ... 2 3 3 2 2 3
          process_time_diff      (chain, draw) float64 80kB 0.000366 ... 0.0003037
          step_size_bar          (chain, draw) float64 80kB 0.6361 0.6361 ... 0.6515
          reached_max_treedepth  (chain, draw) bool 10kB False False ... False False
          lp                     (chain, draw) float64 80kB -3.986 -4.368 ... -3.37
      Attributes:
          created_at:                 2025-05-29T16:31:12.679232+00:00
          arviz_version:              0.21.0
          inference_library:          pymc
          inference_library_version:  5.20.0
          sampling_time:              5.1065380573272705
          tuning_steps:               1000

    • <xarray.Dataset> Size: 80B
      Dimensions:   (At_dim_0: 5)
      Coordinates:
        * At_dim_0  (At_dim_0) int64 40B 0 1 2 3 4
      Data variables:
          At        (At_dim_0) float64 40B 6.23 3.76 2.6 1.85 1.27
      Attributes:
          created_at:                 2025-05-29T16:31:12.683717+00:00
          arviz_version:              0.21.0
          inference_library:          pymc
          inference_library_version:  5.20.0

This contains the chain information among other things. Instead of probing into the trace object directly, we can take advantage of functionality from the arviz library to produce some informative plots.

import matplotlib.pyplot as plt
import arviz as az

az.plot_trace(trace, var_names=["k", "A0"])
plt.tight_layout()
plt.show()
[Figure: az.plot_trace output showing the sampled distributions and chain traces for k and A0]

Above, we can see the trace of each of the different chains, which appear to have converged to the same distribution. We can flatten the samples from all of the chains into a single array with the following code.

flat_chain = np.vstack([trace.posterior['k'].values.flatten(), trace.posterior['A0'].values.flatten()]).T
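
As a quick check, the flattened array should contain 10 000 samples (10 chains of 1000 draws) for each of the two parameters.

print(flat_chain.shape)  # expected: (10000, 2)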

import seaborn as sns

chains_df = pd.DataFrame(flat_chain, columns=['k', 'A0'])
sns.jointplot(data=chains_df, x='k', y='A0', kind='kde')
plt.show()
[Figure: seaborn jointplot showing the joint and marginal kernel density estimates of k and A0]

It is clear that, using PyMC, we obtain much better sampling of the distributions than with our hand-written algorithm. This makes summary statistics, such as the mean and standard deviation, much more reliable.

az.summary(trace, var_names=["k", "A0"])
     mean    sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
k   0.106  0.01   0.088    0.125      0.000    0.000    2886.0    3675.0    1.0
A0  7.568  0.44   6.748    8.398      0.009    0.005    2645.0    3424.0    1.0
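
As a cross-check (a minimal sketch using the flattened chains from above), the same mean and standard deviation can be computed directly with NumPy; the values should agree with the az.summary table to within sampling noise.

# Mean and standard deviation over all chains and draws for each parameter.
means = flat_chain.mean(axis=0)
stds = flat_chain.std(axis=0)
print(f'k  = {means[0]:.3f} +/- {stds[0]:.3f}')
print(f'A0 = {means[1]:.3f} +/- {stds[1]:.3f}')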