TONIC

Author

Sarah Urbut

Published

June 14, 2024

TONIC Reimagined

TONIC1 enrolled n=739 patients awaiting coronary interventional procedures and compared non-fasting with standard fasting approaches. Key subgroups were analyzed: n=177 females, n=74 with insulin-dependent diabetes, n=233 inpatients, n=218 percutaneous coronary interventions, n=147 with moderate sedation, and n=42 semi-urgent procedures. Supplementary Table S4 reports no significant heterogeneity across subgroups for non-inferiority, and the trial concludes that non-fasting is generally non-inferior to fasting.
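Operationally, the frequentist reading of these subgroup results is simple: non-inferiority holds when the upper confidence limit for the difference in event rates stays below the 4% margin. A minimal sketch, using the subgroup upper limits from Supplementary Table S4 that are reused in the code later in this post:

Code
# Illustration of the frequentist noninferiority rule:
# non-inferiority holds when the upper confidence limit is below the 4% margin
upper_limits <- c("No diabetes" = 1.5, "Diabetes" = 10.0, "Male" = 2.1, "Female" = 5.1)
upper_limits < 4  # TRUE = margin not crossed; FALSE for the diabetes and female subgroups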

A Bayesian analysis of the TONIC trial data provides a more nuanced interpretation. Unlike frequentist approaches that rely on p-values and confidence intervals, a Bayesian analysis offers:

Posterior Distributions: The posterior distribution represents updated beliefs about the treatment effect after observing the data and shows the range of plausible values for each subgroup’s effect. For example, in the diabetes and female subgroups, the posterior distributions assign a non-trivial probability to non-fasting being inferior, with credible intervals crossing the noninferiority margin of 4%.

Credible Intervals: A credible interval makes a direct probability statement about the parameter itself (a 95% credible interval contains the true effect with 95% posterior probability), unlike a confidence interval. In the diabetes subgroup, the credible interval is wide and crosses the noninferiority margin, indicating that non-inferiority cannot be conclusively determined (Figure 1).

Probability of Non-Inferiority: Bayesian methods provide a direct probability assessment. In the diabetes subgroup, the posterior probability that non-fasting is inferior is roughly 22%, suggesting non-fasting could lead to worse outcomes in this group. This probability is calculated from repeated draws from the posterior distribution (a worked sketch of the calculation follows this list).

Subgroup Variability: Credible intervals highlight variability across subgroups more transparently than p-values. Because Bayesian analysis yields the full probability distribution of each effect, it helps identify which subgroups might warrant individualized fasting protocols.
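Because the model fit later in this post is a normal likelihood with a normal prior, each subgroup’s posterior is available in closed form, which makes the origin of these probabilities easy to see. A minimal sketch for the diabetes subgroup, assuming the same normal(0, 10) prior used in the Stan model below and the effect estimate and upper confidence limit from Supplementary Table S4:

Code
# Closed-form normal-normal update for the diabetes subgroup (illustration only)
y_diab   <- -0.4                       # reported effect estimate
se_diab  <- (10.0 - (-0.4)) / 1.645    # SE backed out of the one-sided 95% upper limit
prior_sd <- 10                         # matches the normal(0, 10) prior in the Stan model

post_var  <- 1 / (1 / prior_sd^2 + 1 / se_diab^2)
post_mean <- post_var * (y_diab / se_diab^2)

# Posterior probability that the true difference exceeds the 4% margin
1 - pnorm(4, mean = post_mean, sd = sqrt(post_var))  # about 0.21, close to the MCMC estimate below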

In summary, while the TONIC trial presents intriguing findings, Bayesian analysis offers deeper insight into the data. It surfaces subtleties in the potential implications of non-fasting before coronary procedures and provides a probabilistic understanding of the risks and benefits, so that clinical decisions are informed by a more complete picture of the data. This supports more personalized patient care, particularly when choosing fasting protocols.

Code
# Install and load the necessary packages
if (!require("rstan")) {
  install.packages("rstan")
  library(rstan)
}
Loading required package: rstan
Loading required package: StanHeaders

rstan version 2.32.6 (Stan version 2.32.2)
For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores()).
To avoid recompilation of unchanged Stan programs, we recommend calling
rstan_options(auto_write = TRUE)
For within-chain threading using `reduce_sum()` or `map_rect()` Stan functions,
change `threads_per_chain` option:
rstan_options(threads_per_chain = 1)
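Following rstan's startup advice above (optional, and assuming a local multicore machine), one might set:

Code
# Optional: parallel chains and cached compilation, as suggested by rstan's startup message
options(mc.cores = parallel::detectCores())
rstan_options(auto_write = TRUE)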
Code
if (!require("bayesplot")) {
  install.packages("bayesplot")
  library(bayesplot)
}
Loading required package: bayesplot
This is bayesplot version 1.11.1
- Online documentation and vignettes at mc-stan.org/bayesplot
- bayesplot theme set to bayesplot::theme_default()
   * Does _not_ affect other ggplot2 plots
   * See ?bayesplot_theme_set for details on theme setting
Code
if (!require("ggplot2")) {
  install.packages("ggplot2")
  library(ggplot2)
}
Loading required package: ggplot2
Code
# Subgroup data from Supplementary Table S4
subgroup_names <- c("No diabetes", "Diabetes", "Male", "Female")
effect_estimates <- c(-2.2, -0.4, -2.0, -2.4)
upper_limits <- c(1.5, 10.0, 2.1, 5.1)

# Back out standard errors from the reported upper confidence limits
# (z = 1.645, i.e., a one-sided 95% / two-sided 90% interval)
standard_errors <- (upper_limits - effect_estimates) / 1.645

# Create a list with the data
data <- list(
  N = length(effect_estimates),
  y = effect_estimates,
  sigma = standard_errors
)
Code
# Stan model code (stored in bayesian_model.stan)
stan_code <- "
data {
  int<lower=0> N;              // number of subgroups
  vector[N] y;                 // observed effect estimates
  vector<lower=0>[N] sigma;    // standard errors of the estimates
}
parameters {
  vector[N] theta;             // true subgroup effects
}
model {
  theta ~ normal(0, 10);       // weakly informative prior
  y ~ normal(theta, sigma);    // likelihood with known standard errors
}
"
# Write the model to a file
write(stan_code, file = "bayesian_model.stan")

# Compile and fit the model
compiled_model <- stan_model("bayesian_model.stan")
fit <- sampling(compiled_model, data = data, iter = 2000, warmup = 1000, chains = 4, seed = 123)
saveRDS(fit, file = "~/Dropbox/tonicfit.rds")
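Before using the draws, it is worth a quick convergence check; a small optional sketch using rstan's built-in summaries (not part of the original workflow):

Code
# Inspect Rhat and effective sample sizes for the subgroup effects,
# and run rstan's HMC diagnostic checks
print(fit, pars = "theta")
check_hmc_diagnostics(fit)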
Code
# Load the model fit
fit <- readRDS("~/Dropbox/tonicfit.rds")

# Extract posterior samples using as.array
posterior_samples <- as.array(fit)

# Subgroup data from Supplementary Table S4
subgroup_names <- c("No diabetes", "Diabetes", "Male", "Female")
effect_estimates <- c(-2.2, -0.4, -2.0, -2.4)
upper_limits <- c(1.5, 10.0, 2.1, 5.1)

# Back out standard errors from the reported upper confidence limits
# (z = 1.645, i.e., a one-sided 95% / two-sided 90% interval)
standard_errors <- (upper_limits - effect_estimates) / 1.645

# Create a list with the data
data <- list(
  N = length(effect_estimates),
  y = effect_estimates,
  sigma = standard_errors
)

# Function to extract and summarize posterior samples
extract_posterior_samples <- function(posterior_samples, parameter_index) {
  samples <- posterior_samples[, , parameter_index]
  samples <- as.vector(samples)  # Flatten to a single vector
  list(
    mean = mean(samples),
    median = median(samples),
    lower = quantile(samples, probs = 0.025),
    upper = quantile(samples, probs = 0.975)
  )
}

# Extract and summarize posterior samples for each parameter
posterior_summary <- lapply(1:data$N, function(i) extract_posterior_samples(posterior_samples, i))

# Prepare data for plotting
bayesian_df <- data.frame(
  Subgroup = factor(1:data$N, labels = subgroup_names),
  Median = sapply(posterior_summary, function(x) x$median),
  Lower = sapply(posterior_summary, function(x) x$lower),
  Upper = sapply(posterior_summary, function(x) x$upper)
)

# Data for the frequentist analysis: reconstruct the lower bound as
# estimate - 1.645 * SE, symmetric with the reported upper limit
frequentist_data <- data.frame(
  Subgroup = subgroup_names,
  Estimate = effect_estimates,
  CI_Lower = effect_estimates - 1.645 * standard_errors,
  CI_Upper = upper_limits
)

# Plotting both frequentist and Bayesian results
plot_combined <- function(frequentist_data, bayesian_df, ni_margin = 4) {
  ggplot() +
    geom_point(data = bayesian_df, aes(y = Subgroup, x = Median, color = "Bayesian"), size = 3) +
    geom_errorbar(data = bayesian_df, aes(y = Subgroup, xmin = Lower, xmax = Upper, color = "Bayesian"), width = 0.2, linewidth = 1) +
    geom_point(data = frequentist_data, aes(y = Subgroup, x = Estimate, color = "Frequentist"), size = 3) +
    geom_errorbar(data = frequentist_data, aes(y = Subgroup, xmin = CI_Lower, xmax = CI_Upper, color = "Frequentist"), width = 0.2, linewidth = 1) +
    geom_vline(aes(xintercept = ni_margin, linetype = "Noninferiority margin"), color = "black", linewidth = 1) +
    scale_color_manual(name = "Interval Type", values = c("Bayesian" = "blue", "Frequentist" = "red")) +
    scale_linetype_manual(name = "Margin", values = c("Noninferiority margin" = "dashed")) +
    labs(title = "Frequentist vs Bayesian Credible Intervals",
         y = "Subgroup",
         x = "Effect Estimate") +
    theme_minimal(base_size = 14) +
    theme(axis.text.y = element_text(angle = 0, hjust = 1, size = 12),
          axis.title.y = element_text(size = 14),
          axis.title.x = element_text(size = 14),
          plot.title = element_text(size = 16, face = "bold"),
          plot.margin = margin(20, 20, 20, 20),
          legend.position = "bottom")  # Legend at the bottom for better spacing
}

# Create the combined plot
combined_plot <- plot_combined(frequentist_data, bayesian_df, ni_margin = 4)
Code
# Save or display the final plot
ggsave("combined_plot.pdf", combined_plot, width = 12, height = 8)
combined_plot
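Since bayesplot is loaded above but not otherwise used, a quick optional sketch of the full posterior distributions (with the 4% margin overlaid) can complement the interval plot; parameter names follow the Stan model, so theta[1] through theta[4] correspond to the subgroup order above:

Code
# Optional: density plots of the posterior draws for each subgroup effect
mcmc_areas(posterior_samples,
           pars = c("theta[1]", "theta[2]", "theta[3]", "theta[4]"),
           prob = 0.95) +
  geom_vline(xintercept = 4, linetype = "dashed")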

While a confidence interval describes the long-run frequency with which a given sampling scheme would produce intervals containing the parameter of interest, a credible interval is a direct statement about the uncertainty surrounding the parameter itself. In the diabetes subgroup there is substantial uncertainty, with roughly a 22% posterior probability that the true difference exceeds the 4% noninferiority margin (i.e., that non-fasting is inferior).

Code
# Posterior probability of non-inferiority (effect below the 4% margin)
# and of inferiority (effect above the margin) for each subgroup
noninferiority_margin <- 4
prob_noninferior <- sapply(1:data$N, function(i) mean(as.vector(posterior_samples[, , i]) < noninferiority_margin))
prob_inferior <- sapply(1:data$N, function(i) mean(as.vector(posterior_samples[, , i]) > noninferiority_margin))

# Print the probabilities
probabilities <- data.frame(
  Subgroup = subgroup_names,
  Prob_Noninferior = round(prob_noninferior,2),
  Prob_Inferior = round(prob_inferior,2)
)
probabilities
     Subgroup Prob_Noninferior Prob_Inferior
1 No diabetes             1.00          0.00
2    Diabetes             0.78          0.22
3        Male             0.99          0.01
4      Female             0.93          0.07