Chapter 20 Posterior Predictive Checks

Cobras are known for hypnotizing their prey. Like cobras, Bayesian posteriors can lull you into submission - into thinking you have a good model with small uncertainty. The seductiveness of getting results needs to be counterbalanced with a good measure of skepticism. For us, that skepticism manifests as a posterior predictive check - a method of ensuring the posterior distribution can simulate data similar to the data we observed. We want to ensure our BAW leads to actionable insights, not intoxicating and venomous results.

When modelling real-world data, your generative DAG never captures the true generating process - the real world is too messy. However, if your generative DAG can approximate reality, then your model might be useful. Modelling with generative DAGs provides a good starting place from which to confirm, deny, or refine business narratives. After all, data-supported business narratives motivate action within a firm; fancy algorithms with intimidating names are not sufficient on their own. Whether your generative DAG proves successful or not, the modelling process itself puts you on a good path towards learning more from both the domain expertise of business stakeholders and the observed data.

20.1 Revisiting The Tickets Example

Last chapter, we modelled the daily number of tickets issued in New York City on Wednesdays. We made a dataframe, wedTicketsDF, containing our observed data using the code shown here:

library(tidyverse)
library(causact)
library(greta)
library(lubridate)
nycTicketsDF = ticketsDF

## summarize tickets for Wednesday
wedTicketsDF = nycTicketsDF %>% 
  mutate(dayOfWeek = wday(date, label = TRUE)) %>%
  filter(dayOfWeek == "Wed") %>% 
  group_by(date) %>% 
  summarize(numTickets = sum(daily_tickets))

The generative DAG of Figure 19.13, which we thought was successful, yielded a posterior distribution with a huge reduction in uncertainty. As we will see, this DAG will turn out to be the hypnotizing cobra I was warning you about. Let’s learn a way of detecting models that are inconsistent with observation.

graph = dag_create() %>%
  dag_node("Daily # of Tickets","k",
           rhs = poisson(lambda),
           data = wedTicketsDF$numTickets) %>%
  dag_node("Avg # Daily Tickets","lambda",
           rhs = uniform(3000,7000),
           child = "k")

graph %>% dag_render()

Figure 19.13: A generative DAG for modelling the daily issuing of tickets on Wednesdays in New York City.

Using the generative DAG in Figure 19.13, we call greta to get our posterior distribution:

drawsDF = graph %>% dag_greta()
drawsDF %>% dagp_plot()

Figure 19.14: Posterior density for uncertainty in rate parameter lambda.

Figure 19.14 suggests reduced uncertainty in the average number of tickets issued, \(\lambda\), when moving from prior to posterior. Prior uncertainty gave equal plausibility to any number between 3,000 and 7,000. The plausible range for the posterior spans a drastically smaller range, from about 5,005 to 5,030. While this might lead us to think we have a good model, do not be hypnotized into believing it just yet.
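
If you prefer a numeric summary to reading the range off Figure 19.14, a short pipeline like the one below (a sketch I am adding, not from the original text) pulls a central 90% interval for lambda out of drawsDF:

# quick numeric check of the posterior's plausible range for lambda
drawsDF %>%
  summarize(q05 = quantile(lambda, 0.05),
            q95 = quantile(lambda, 0.95))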

20.1.1 Posterior Predictive Check

Here we use just one draw from the posterior to demonstrate a posterior predictive check. It is actually more appropriate to use dozens of draws to get a feel for the variability implied by the full posterior representative sample.

A posterior predictive check compares data simulated using a draw from your posterior distribution to the observed data you are modelling - usually represented by the data node at the bottom of your generative DAG. In reference to Figure 19.13, this means we simulate 105 observations of tickets issued, \(K\), and compare the simulated data to the 105 real-world observations (two years' worth of Wednesday tickets).
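
As a quick sanity check that the observed data really does contain 105 Wednesdays, you can count the rows of wedTicketsDF (a one-line check I am adding here, not from the original text):

# count the observed Wednesdays; per the text this should be 105
nrow(wedTicketsDF)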

Future versions of greta will incorporate functionality to make these checks easier. Unfortunately, given current functionality, this is a tedious, albeit instructive, process.

Simulating 105 observations requires us to convert the DAG's joint distribution recipe into computer code - we do this by going from top to bottom of the graph. At the top of the DAG is lambda, so we get a single random draw from the posterior:

lambdaPost = drawsDF %>%  # posterior dist.
  sample_n(1) %>%  # get a random row
  pull(lambda)  # convert from tibble to single value
lambdaPost # print value
## [1] 5030.67

Note: Due to some inherent randomness, you will not get an identical lambdaPost value, but your value will be close; use your value to follow along with the process.
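
If you would like the row selection itself to be repeatable for a given drawsDF, you can set R's random seed before sampling. This is an optional step not in the original text, the seed value below is arbitrary, and the MCMC draws in drawsDF will still differ from run to run:

# optional: make the single-row sample repeatable for a given drawsDF
set.seed(111)  # arbitrary seed value (an assumption, not from the text)
lambdaPost = drawsDF %>%
  sample_n(1) %>%
  pull(lambda)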

Continuing the recipe conversion by moving from parent to child in Figure 19.13, we simulate 105 realizations of \(K\) using the appropriate rfoo function (greta does not support posterior predictive checks yet, so we must use R's built-in random variable samplers, namely rpois for a Poisson random variable):

simData = rpois(n = 105, lambda = lambdaPost)

And then, we can compare the distributions of the simulated and observed data using density plots:

library(tidyverse)
# make data frame for ggplot
plotDF = tibble(k_observed = wedTicketsDF$numTickets,
                k_simulated = simData)

# make df in tidy format (use tidyr::pivot_longer)
# so fill can be mapped to observed vs simulated data
plotDF = plotDF %>%
  pivot_longer(cols = c(k_observed,k_simulated),
               names_to = "dataType",
               values_to = "ticketCount")  

colors = c("k_observed" = "navyblue", 
           "k_simulated" = "cadetblue")

ggplot(plotDF, aes(x = ticketCount)) + 
  geom_density(aes(fill = dataType),
               alpha = 0.5) +
  scale_fill_manual(values = colors)

Figure 20.1: A comparison of the simulated and observed data densities. They do not look alike at all.

Figure 20.1 shows two very different distributions of data. The observed data range from near 0 to roughly 8,000 tickets, while the simulated data never stray far from 5,000. The real-world dispersion is not being captured by our generative DAG. Why not?
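
A quick way to quantify the mismatch (a check I am adding, not from the original text) is to compare standard deviations. A Poisson random variable's standard deviation equals the square root of its mean, so the simulated counts should vary by only about sqrt(5030), roughly 71 tickets, while the observed counts vary far more widely:

# compare the spread of simulated versus observed ticket counts
sd(simData)                  # near sqrt(lambdaPost), roughly 71 tickets
sd(wedTicketsDF$numTickets)  # far larger - dispersion the Poisson model cannot reproduce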

Our generative DAG wrongly assumes that every Wednesday has the exact same conditions for tickets being issued. In research based on the same data, Auerbach (2017, “Are New York City Drivers More Likely to Get a Ticket at the End of the Month?”, Significance 14(4): 20–25) considers holidays and ticket quotas as just some of the other factors driving the variation in tickets issued. To do better, we would need to account for this variation.
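
To make that concrete, the sketch below (purely illustrative, not the refinement pursued in this book) lets each Wednesday's rate itself be a random draw from a gamma distribution before generating ticket counts. The shape and rate values are assumptions chosen only to give a mean near 5,000 and a much wider spread:

# illustrative gamma-Poisson mixture: the ticketing rate varies day to day
lambdaVaried = rgamma(n = 105, shape = 25, rate = 25 / 5000)  # mean ~5,000, sd ~1,000
simDataVaried = rpois(n = 105, lambda = lambdaVaried)
sd(simDataVaried)  # much closer to the observed spread than the fixed-lambda simulation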

20.2 A Generative DAG That Works

There is some discretion in choosing priors, and advice on how to choose them is evolving. Structuring generative DAGs is tricky, but rely on your domain knowledge to help you do this. One of the thought leaders in this space is Andrew Gelman; you can see a recent thought process regarding prior selection here: https://andrewgelman.com/2018/04/03/justify-my-love/. In general, his blog is an excellent resource for informative discussions on prior setting.

Let’s now look at how a good posterior predictive check might work. Consider the following graphical model from the previous chapter, which modelled cherry tree heights:

library(greta)
library(causact)

graph = dag_create() %>%
  dag_node("Tree Height","x",
           rhs = student(nu,mu,sigma),
           data = trees$Height) %>%
  dag_node("Degrees Of Freedom","nu",
           rhs = gamma(2,0.1),
           child = "x") %>%
  dag_node("Avg Cherry Tree Height","mu",
           rhs = normal(50,24.5),
           child = "x") %>%
  dag_node("StdDev of Observed Height","sigma",
           rhs = uniform(0,50),
           child = "x") %>%
  dag_plate("Observation","i",
            nodeLabels = "x")

graph %>% dag_render()

Figure 20.2: Generative DAG for observed cherry tree height.

We get the posterior as usual:

drawsDF = graph %>% dag_greta()

And then, we compare data simulated from a random posterior draw to the observed data. Actually, we will compare simulated data from several draws, say 20, to get a fuller picture of what the posterior implies for observed data. By creating multiple simulated datasets, we can see how much the data distributions vary. Observed data is subject to lots of randomness, so we just want to ensure that the observed randomness falls within the realm of our plausible narratives.

We get twenty random draws, which are technically a random sample of the posterior's representative sample, and place them in paramsDF:

paramsDF = drawsDF %>%
  sample_n(20) ## get 20 random draws of posterior
paramsDF %>% head(5) ## see first five rows
## # A tibble: 5 x 3
##      nu    mu sigma
##   <dbl> <dbl> <dbl>
## 1  11.8  75.2  5.75
## 2  12.7  73.0  7.12
## 3  50.2  74.2  6.40
## 4  50.2  77.7  6.72
## 5  15.1  76.6  8.22

Then, for each row of paramsDF, we will simulate 31 observations. Since we are going to do the same simulation 20 times, once for each row of parameters, I write a function which returns a vector of simulated tree heights. Again, we convert the generative DAG recipe (Figure 20.2) to code that enables our posterior predictive check:

Typing ?greta::distributions into the console will open a help screen regarding greta distributions. For every distribution we use in greta there is a corresponding distribution in R that has the dfoo, pfoo, and rfoo functions, which yield density, cumulative distribution, and random sampling functions, respectively. Digging into this help file reveals that the parameterization of the Student-t distribution in greta is based on a function in the extraDistr package. Digging further, the rfoo function for the Student-t distribution is rlst(). We will call this function using extraDistr::rlst() instead of running library(extraDistr) and then using rlst(). More generally, if you want a specific function from a specific package without loading the package, use the convention packageName::functionName.
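
As a small illustration of both the packageName::functionName convention and the dfoo/rfoo pairing (an example I am adding, not from the original text), the line below evaluates the Student-t density at a single height value without loading extraDistr:

# density of a location-scale Student-t value, no library(extraDistr) call needed
extraDistr::dlst(x = 76, df = 12, mu = 75, sigma = 6)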

## function to get simulated observations when given 
## a draw of nu, mu, and sigma

# uncomment below if you see error that rlst() not found
# install.packages("extraDistr")
# the install might require you to restart R session
# in this case, you may have to rerun lines from above

simObs = function(nu,mu,sigma){
  # use n = 31 because 31 observed heights in the data
  simVector = extraDistr::rlst(n = 31,
                               df = nu, 
                               mu = mu, 
                               sigma = sigma)
  return(simVector)
}

The code below tests that the function is working as expected (always do this after writing a function):

simObs(nu = 1,mu = 500,sigma = 10)
##  [1]  495.8880  492.4844  509.7498  612.4705  121.9394  522.6108  489.0402
##  [8]  486.1612  475.7929  500.0397  462.4765  505.5010  555.5501  498.5290
## [15]  501.0454  506.5500  513.4674  482.9909  512.7661 3805.1542  500.0676
## [22]  501.6410  489.2743  504.0521  472.5673  500.5036  466.7390  494.0917
## [29]  505.2346  490.2250  505.3359

We now get 31 observations for each row in paramsDF using some clever coding tricks that you should review slowly; unfortunately, they cannot be avoided:

See https://jennybc.github.io/purrr-tutorial/ls03_map-function-syntax.html for a nice tutorial on how the pmap() function (parallel map) works.
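
Before applying pmap() to paramsDF, here is a tiny illustration of how it works (my own example, not from the text): pmap() matches each dataframe column name to the like-named argument of the supplied function and calls that function once per row. It assumes the tidyverse (which provides purrr and tibble) is already loaded, as it is earlier in this chapter:

# tiny pmap() illustration: columns nu, mu, and sigma map to simObs arguments
testDF = tibble(nu = c(10, 20), mu = c(70, 80), sigma = c(5, 6))
pmap(testDF, simObs)  # returns a list with one simulated vector per row of testDF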

library(tidyverse)
simsList = pmap(paramsDF,simObs)

## give unique names to list elements
## required for conversion to dataframe
names(simsList) = paste0("sim",1:length(simsList))

## create dataframe of list
## each column corresponds to one of the 20
## randomly chosen posterior draws.  For each draw,
## 31 simulated points were created
simsDF = as_tibble(simsList)

simsDF ## see what got created
## # A tibble: 31 x 20
##     sim1  sim2  sim3  sim4  sim5  sim6  sim7  sim8  sim9 sim10 sim11 sim12 sim13
##    <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
##  1  86.6  61.7  76.9  80.6  59.6  84.5  73.5  81.9  70.7  77.2  77.8  78.4  83.7
##  2  77.5  85.1  73.9  68.7  81.0  78.2  78.7  74.0  69.9  64.6  77.0  77.1  68.9
##  3  76.9  70.9  66.9  76.2  63.5  71.7  79.2  75.5  77.1  71.4  72.4  73.9  80.2
##  4  68.5  82.2  74.8  81.0  68.6  85.7  66.0  80.5  70.6  73.0  83.3  70.2  79.7
##  5  72.1  71.0  89.3  70.7  82.5  64.8  78.5  72.7  75.6  77.0  76.7  77.5  76.7
##  6  72.0  78.1  71.8  84.5  87.4  82.0  73.4  64.0  80.8  78.6  88.2  81.0  80.3
##  7  83.4  58.8  74.9  78.5  82.2  66.8  78.8  82.7  90.4  80.6  82.3  73.2  74.9
##  8  74.3  65.6  86.4  79.5  80.9  73.4  83.0  81.9  79.2  80.6  93.0  79.2  69.5
##  9  69.3  71.2  75.2  79.7  77.8  78.0  85.1  81.7  71.1  59.6  86.0  80.4  77.2
## 10  68.2  79.4  73.9  77.6  72.3  70.9  80.1  85.5  79.1  66.0  77.1  78.7  77.8
## # ... with 21 more rows, and 7 more variables: sim14 <dbl>, sim15 <dbl>,
## #   sim16 <dbl>, sim17 <dbl>, sim18 <dbl>, sim19 <dbl>, sim20 <dbl>
## create tidy version for plotting
plotDF = simsDF %>%
  pivot_longer(cols = everything()) 

## see random sample of plotDF rows
plotDF %>% sample_n(10)  
## # A tibble: 10 x 2
##    name  value
##    <chr> <dbl>
##  1 sim20  79.8
##  2 sim10  74.3
##  3 sim1   81.2
##  4 sim6   71.0
##  5 sim14  69.6
##  6 sim10  80.6
##  7 sim12  76.5
##  8 sim20  70.5
##  9 sim1   78.5
## 10 sim5   81.0
# tidy data allows mapping of aesthetics to name column

And with a lot of help from googling tidyverse plotting issues, one can figure out how to consolidate the 20 simulated densities and the 1 observed density into a single plot (see Figure 20.3). The goal is to see whether the narrative modelled by the generative DAG (e.g. Figure 20.2) can feasibly produce data like the data that has been observed. The following code creates the visual we are looking for:

obsDF = tibble(obs = trees$Height)

colors = c("simulated" = "cadetblue", 
           "observed" = "navyblue")

ggplot(plotDF) +
  stat_density(aes(x=value, 
                   group = name,
                   color = "simulated"),
               geom = "line",  # for legend
               position = "identity") + #1 sim = 1 line
  stat_density(data = obsDF, aes(x=obs, 
                   color = "observed"),
               geom = "line",  
               position = "identity",
               lwd = 2) +
  scale_color_manual(values = colors) +
  labs(x = "Cherry Tree Height (ft)",
       y = "Density Estimated from Data",
       color = "Data Type")

Figure 20.3: A posterior predictive check for our model of observed cherry tree heights.

Figure 20.3, a type of spaghetti plot (so-called for obvious reasons), shows 21 different density lines. The twenty light-colored lines each represent a density derived from the 31 simulated points of a single posterior draw. The thicker dark line is the density of the 31 actual observations. As can be seen, despite the variation across all the lines, the posterior does seem capable of generating data like that which we observed. While this is not a definitive validation of the generative DAG, it is a very good sign that the generative DAG is on the right track.

Despite any posterior predictive success, remain vigilant for factors not included in your generative DAG. Investigating these can lead to substantially more refined narratives of how your data gets generated. For example, in the credit card example of the causact chapter, our initial model ignored the car model data. Even with this omission, the model would pass a posterior predictive check quite easily. Only by including car model data, as might be suggested by working with business stakeholders, did we achieve a much more accurate business narrative. Do not be a business analyst who only looks at data; get out and talk to domain experts! See this Twitter thread for a real-world example of why this matters: https://twitter.com/oziadias/status/1221531710820454400. It shows how data generated in the real world of emergency medicine can only be modelled properly when real-world considerations are factored in.