
Cobra snakes are known for hypnotizing their prey. Like cobras, Bayesian posteriors can fool you into submission - thinking you have a good model with small uncertainty. The seductiveness of getting results needs to be counter-balanced with a good measure of skepticism. For us, that skepticism manifests as a posterior predictive check - a method of ensuring the posterior distribution can simulate data that is similar to the observed data. We want to ensure our BAW leads to actionable insights, not intoxicating and venomous results.

When modelling real-world data, your generative DAG **never** captures the *true* generating process - the real-world is too messy. However, if your generative DAG can approximate reality, then your model might be useful. Modelling with generative DAGs provides a good starting place from which to confirm, deny, or refine business narratives. After all, data-supported business narratives motivate action within a firm; fancy algorithms with intimidating names are not sufficient on their own. Whether your generative DAG proves successful or not, the modelling process by itself puts you on a good path towards learning more from both the domain expertise of business stakeholders and observed data.

Last chapter, we were modelling the daily number of tickets issued in New York City on Wednesdays. We made a dataframe, `wedTicketsDF`, that had our observed data using the code shown here:

```
library(tidyverse)
library(causact)
library(lubridate)
nycTicketsDF = ticketsDF
## summarize tickets for Wednesday
wedTicketsDF = nycTicketsDF %>%
  mutate(dayOfWeek = wday(date, label = TRUE)) %>%
  filter(dayOfWeek == "Wed") %>%
  group_by(date) %>%
  summarize(numTickets = sum(daily_tickets))
```

The generative DAG of Figure 18.13, which we thought was successful, yielded a posterior distribution with a huge reduction in uncertainty. As we will see, this DAG will turn out to be the hypnotizing cobra I was warning you about. Let’s learn a way of detecting models that are inconsistent with observation.

```
graph = dag_create() %>%
  dag_node("Daily # of Tickets","k",
           rhs = poisson(rate),
           data = wedTicketsDF$numTickets) %>%
  dag_node("Avg # Daily Tickets","rate",
           rhs = uniform(3000,7000),
           child = "k")
graph %>% dag_render()
```

Using the generative DAG in Figure 18.13, we call `dag_numpyro()` to get our posterior distribution:
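The code chunk itself was not reproduced above; a minimal sketch of the standard `causact` workflow call, which returns the data frame of posterior draws named `drawsDF` that the code below relies on, would be:

```
drawsDF = graph %>% dag_numpyro()  ## sample the posterior
drawsDF %>% dagp_plot()            ## visualize prior-to-posterior shift
```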

Figure 18.14 suggests reduced uncertainty in the average number of tickets issued, \(\lambda\), when moving from prior to posterior. Prior uncertainty gave equal plausibility to any number between 3,000 and 7,000; the posterior spans a drastically smaller range, about 5,005 to 5,030. So while this might lead us to think we have a good model, do not be hypnotized into believing it just yet.

Here we use just one draw from the posterior for demonstrating a posterior predictive check. It is actually more appropriate to use dozens of draws to get a feel for the variability within the entire sample of feasible posterior distributions.

A *posterior predictive check* compares data simulated using a draw of your posterior distribution to the observed data you are modelling - usually represented by the data node at the bottom of your generative DAG. In reference to Figure 18.13, this means we simulate 105 observations of tickets issued, \(K\), and compare the simulated data to the 105 real-world observations (two years' worth of Wednesday tickets).

Future versions of `causact` will incorporate functionality to make these checks easier. Unfortunately, given current functionality, this is a tedious, albeit instructive, process.

Simulating 105 observations requires us to convert the DAG's joint distribution recipe into computer code - we do this going from top to bottom of the graph. At the top of the DAG is `lambda`, so we get a single random draw from the posterior:

```
lambdaPost = drawsDF %>%   # posterior dist.
  slice_sample(n=1) %>%    # get a random row
  pull(rate)               # convert from tibble to single value
lambdaPost                 # print value
```

`## [1] 5014.07`

*Note: Due to some inherent randomness, you will not get an identical `lambdaPost` value, but your value will be close and you should use it to follow along with the process.*

Continuing the recipe conversion by moving from parent to child in Figure 18.13, we simulate 105 realizations of \(K\) using the appropriate `rfoo` functions (`causact` does not support posterior predictive checks yet, so we must use R's built-in random variable samplers, namely `rpois` for a Poisson random variable):
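The simulation chunk itself was not reproduced above; converting this step to code is a one-liner with `rpois()`. The name `simData` matches what the plotting code below expects (your simulated values will differ since `lambdaPost` is a random draw):

```
## simulate 105 Wednesdays of ticket counts from the
## Poisson likelihood using the posterior draw of lambda
simData = rpois(n = 105, lambda = lambdaPost)
head(simData)   ## peek at a few simulated daily counts
```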

And then, we can compare the densities of the simulated data and the observed data:

```
library(tidyverse)
# make data frame for ggplot
plotDF = tibble(k_observed = wedTicketsDF$numTickets,
                k_simulated = simData)
# make df in tidy format (use tidyr::pivot_longer)
# so fill can be mapped to observed vs simulated data
plotDF = plotDF %>%
  pivot_longer(cols = c(k_observed,k_simulated),
               names_to = "dataType",
               values_to = "ticketCount")
colors = c("k_observed" = "navyblue",
           "k_simulated" = "cadetblue")
ggplot(plotDF, aes(x = ticketCount)) +
  geom_density(aes(fill = dataType),
               alpha = 0.5) +
  scale_fill_manual(values = colors)
```

Figure 19.1 shows two very different distributions of data. The observed data can seemingly vary from 0 to 8,000, while the simulated data never strays too far from 5,000. Our generative DAG is not capturing the real-world dispersion. Why not?
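The narrowness of the simulated data has a simple mathematical explanation: a Poisson random variable's variance equals its mean, so its standard deviation is the square root of the mean. A quick back-of-the-envelope calculation (using a lambda value near the posterior draw from above) shows how tightly simulated counts must cluster:

```
lambda = 5014                       ## approximately our posterior draw
sqrt(lambda)                        ## std. dev. of simulated counts, about 71 tickets
lambda + c(-3, 3) * sqrt(lambda)    ## nearly all simulated counts fall in ~(4802, 5226)
```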

Our generative DAG wrongly assumes that every Wednesday has the exact same conditions for tickets being issued. In research based on the same data, Auerbach (2017, "Are New York City Drivers More Likely to Get a Ticket at the End of the Month?", *Significance* 14 (4): 20–25) considers holidays and ticket quotas as just some of the other factors driving the variation in tickets issued. To do better, we would need to account for this variation.

There is some discretion in choosing priors and advice is evolving. Structuring generative DAGs is tricky, but rely on your domain knowledge to help you do this. One of the thought leaders in this space is Andrew Gelman. You can see a recent thought process regarding prior selection here: https://andrewgelman.com/2018/04/03/justify-my-love/. In general, his blog is an excellent resource for informative discussions on prior setting.

Let’s now look at how a good posterior predictive check might work. Consider the following graphical model from the previous chapter which modelled cherry tree heights:

```
library(causact)
graph = dag_create() %>%
  dag_node("Tree Height","x",
           rhs = student(nu,mu,sigma),
           data = trees$Height) %>%
  dag_node("Degrees Of Freedom","nu",
           rhs = gamma(2,0.1),
           child = "x") %>%
  dag_node("Avg Cherry Tree Height","mu",
           rhs = normal(50,24.5),
           child = "x") %>%
  dag_node("StdDev of Observed Height","sigma",
           rhs = uniform(0,50),
           child = "x") %>%
  dag_plate("Observation","i",
            nodeLabels = "x")
graph %>% dag_render()
```

We get the posterior as usual:
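The chunk is not reproduced above; a sketch of the usual posterior-sampling call would be:

```
drawsDF = graph %>% dag_numpyro()  ## posterior draws for nu, mu, and sigma
```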

And then compare data simulated from a random posterior draw to the observed data. *Actually, we will compare simulated data from several draws, say 20, to get a fuller picture of what the posterior implies for observed data.* By creating multiple simulated datasets, we can see how much the data distributions vary. Observed data is subject to lots of randomness, so we just want to ensure that the observed randomness falls within the realm of our plausible narratives.

Getting the twenty random draws, which is technically a random sample of the posterior representative sample, we place them in `paramsDF`:

```
paramsDF = drawsDF %>%
  slice_sample(n=20)   ## get 20 random draws of posterior
paramsDF %>% head(5)   ## see first five rows
```

```
## # A tibble: 5 × 3
## mu nu sigma
## <dbl> <dbl> <dbl>
## 1 75.7 15.5 6.31
## 2 75.4 5.29 6.50
## 3 76.4 41.3 6.04
## 4 77.3 13.2 4.87
## 5 74.9 16.3 6.92
```

Then, for each row of `paramsDF`, we will simulate 31 observations. Since we are going to do the same simulation 20 times, once for each row of parameters, I write a function which returns a vector of simulated tree heights. Again, we convert the generative DAG recipe (Figure 19.2) to code that enables our posterior predictive check:

Typing `?causact::distributions` into the console will open a help screen regarding `causact` distributions. For every distribution we use in `causact`, there is a corresponding distribution in `R` that has `dfoo`, `pfoo`, and `rfoo` functions yielding density, cumulative distribution, and random sampling functions, respectively. Digging into this help file reveals that the parameterization of the Student-t distribution in `causact` is based on a function in the `extraDistr` package. Digging further, the `rfoo` function for the Student-t distribution is `rlst()`. We will call this function using `extraDistr::rlst()` instead of running `library(extraDistr)` and then using `rlst()`. More generally, if you want a specific function from a specific package without loading the package using `library()`, use the convention `packageName::functionName`.
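As a quick illustration of the `dfoo`/`pfoo`/`rfoo` convention for this Student-t parameterization (the argument values here are arbitrary, chosen only for demonstration):

```
## density, CDF, and random draws for the Student-t as
## parameterized in extraDistr (and hence in causact)
extraDistr::dlst(x = 76, df = 15, mu = 76, sigma = 6)  ## density at the center
extraDistr::plst(q = 76, df = 15, mu = 76, sigma = 6)  ## P(X <= 76) = 0.5 by symmetry
extraDistr::rlst(n = 3, df = 15, mu = 76, sigma = 6)   ## three random draws
```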

```
## function to get simulated observations when given
## a draw of nu, mu, and sigma
# uncomment below if you see error that rlst() not found
# install.packages("extraDistr")
# the install might require you to restart R session
# in this case, you may have to rerun lines from above
simObs = function(nu,mu,sigma){
  # use n = 31 because 31 observed heights in the data
  simVector = extraDistr::rlst(n = 31,
                               df = nu,
                               mu = mu,
                               sigma = sigma)
  return(simVector)
}
```

The code below tests that the function is working as expected (always do this after writing functions):
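The original test call is not shown here. Judging from the output below - values clustered near 500 with a couple of extreme draws like 937 and 126, a hallmark of a heavy-tailed, low-degrees-of-freedom Student-t - a hypothetical re-creation might look like:

```
## hypothetical test call -- the exact arguments used for the
## original output are not shown; these values are illustrative guesses
simObs(nu = 1, mu = 500, sigma = 20)
```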

```
## [1] 499.4432 490.0600 490.2535 504.2504 532.6770 481.3870 518.0526 500.4955
## [9] 479.5958 503.6482 458.4636 506.3394 493.7585 508.0890 500.1752 487.5619
## [17] 937.7306 547.8439 502.0153 496.3097 510.8093 462.0082 503.1166 526.0855
## [25] 126.7345 476.2569 543.0752 492.4564 505.9244 491.8591 490.7712
```

We now get 31 observations for each row in `paramsDF` using some clever coding tricks that you should review slowly and that, unfortunately, cannot be avoided:

See https://jennybc.github.io/purrr-tutorial/ls03_map-function-syntax.html for a nice tutorial on how the `pmap()` function (parallel map) works.

```
library(tidyverse)
simsList = pmap(paramsDF,simObs)
## give unique names to list elements
## required for conversion to dataframe
names(simsList) = paste0("sim",1:length(simsList))
## create dataframe of list
## each column corresponds to one of the 20
## randomly chosen posterior draws. For each draw,
## 31 simulated points were created
simsDF = as_tibble(simsList)
simsDF ## see what got created
```

```
## # A tibble: 31 × 20
## sim1 sim2 sim3 sim4 sim5 sim6 sim7 sim8 sim9 sim10 sim11 sim12 sim13
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 87.1 73.8 71.4 68.3 71.5 81.9 79.3 86.1 68.6 77.6 70.0 76.9 73.7
## 2 71.6 85.0 73.9 79.7 68.5 85.6 91.7 73.4 71.8 81.7 71.4 74.4 74.2
## 3 75.7 81.8 68.2 77.5 80.9 77.0 77.4 74.7 74.9 78.2 74.2 74.2 85.2
## 4 77.6 80.6 73.2 80.9 71.5 74.3 77.5 84.6 69.3 72.3 83.5 83.5 79.9
## 5 72.8 67.9 83.7 81.1 80.0 74.5 65.7 82.8 72.8 73.6 80.4 70.6 76.6
## 6 76.7 84.2 75.3 76.9 79.5 79.5 77.9 79.4 63.4 80.5 62.8 77.3 73.5
## 7 78.1 74.0 75.3 82.1 75.2 87.5 83.9 68.2 70.3 75.8 74.7 72.7 74.8
## 8 70.4 66.0 80.4 77.6 77.7 79.8 78.4 80.3 73.9 75.4 67.9 69.2 79.4
## 9 76.1 77.0 80.1 76.9 69.0 79.2 60.2 67.2 72.9 70.9 75.9 75.1 86.6
## 10 74.3 52.1 72.7 76.3 72.9 76.0 77.3 83.5 76.1 78.2 77.6 89.2 61.9
## # ℹ 21 more rows
## # ℹ 7 more variables: sim14 <dbl>, sim15 <dbl>, sim16 <dbl>, sim17 <dbl>,
## # sim18 <dbl>, sim19 <dbl>, sim20 <dbl>
```

```
## create tidy version for plotting
plotDF = simsDF %>%
  pivot_longer(cols = everything())
## see random sample of plotDF rows
plotDF %>% slice_sample(n=10)
```

```
## # A tibble: 10 × 2
## name value
## <chr> <dbl>
## 1 sim17 66.6
## 2 sim13 81.9
## 3 sim4 76.3
## 4 sim8 74.7
## 5 sim2 69.9
## 6 sim18 84.3
## 7 sim14 91.2
## 8 sim3 71.1
## 9 sim16 88.4
## 10 sim14 81.1
```

And with a lot of help from googling tidyverse plotting issues, one can figure out how to consolidate the 20 simulated densities with the 1 observed density into a single plot (see Figure 19.3). The goal is to see if the narrative modelled by the generative DAG (e.g. Figure 19.2) can feasibly produce data that has been observed. The following code creates the visual we are looking for:

```
obsDF = tibble(obs = trees$Height)
colors = c("simulated" = "cadetblue",
           "observed" = "navyblue")
ggplot(plotDF) +
  stat_density(aes(x = value,
                   group = name,
                   color = "simulated"),
               geom = "line",           # for legend
               position = "identity") + # 1 sim = 1 line
  stat_density(data = obsDF,
               aes(x = obs,
                   color = "observed"),
               geom = "line",
               position = "identity",
               lwd = 2) +
  scale_color_manual(values = colors) +
  labs(x = "Cherry Tree Height (ft)",
       y = "Density Estimated from Data",
       color = "Data Type")
```

Figure 19.3, a type of spaghetti plot (so-called for obvious reasons), shows 21 different density lines. The twenty light-colored lines each represent a density estimated from the 31 points simulated with a single posterior draw. The thicker dark line is the density of the actual 31 observations. As can be seen, despite the variation across all the lines, the posterior does seem capable of generating data like that which we observed. While this is not a definitive validation, it is a very good sign that your generative DAG is on the right track.

Despite any posterior predictive success, remain vigilant for factors not included in your generative DAG. Investigating these can lead to substantially more refined narratives of how your data gets generated. For example, in the credit card example of the *causact* chapter, our initial model ignored the *car model* data. Even with this omission, the model would pass a posterior predictive check quite easily. Only by including *car model* data, as might be suggested by working with business stakeholders, did we achieve a much more accurate business narrative. *Do not be a business analyst who only looks at data*; get out and talk to domain experts! See this Twitter thread for a real-world example of why this matters: https://twitter.com/oziadias/status/1221531710820454400. It shows how data generated in the real world of emergency medicine can only be modelled properly when on-the-ground considerations are factored in.

**Exercise 19.1** The following code will extract and plot the number of runs scored per game at the Colorado Rockies baseball field in the 2010-2014 baseball seasons.

Runs scored per game is a good example of count data. And often, people will use a Poisson distribution to model this type of outcome in sports.

```
library(dplyr)
library(causact)
graph = dag_create() %>%
  dag_node("Avg. Runs Scored","rate",
           rhs = uniform(0,50)) %>%
  dag_node("Runs Scored","k",
           rhs = poisson(rate),
           data = baseDF$runsInGame) %>%
  dag_edge("rate","k") %>%
  dag_plate("Observation","i",
            nodeLabels = "k")
graph %>% dag_render()
```

Using the generative DAG of Figure 19.5, write the code to **plot a posterior predictive check** for this generative DAG. You will notice that the simulated data does not seem consistent with the observed data (i.e. this model is deficient). Comment on what intervals of values seem to be either over-estimated or under-estimated.


YouTube playlist link for videos that accompany each chapter: https://youtube.com/playlist?list=PLassxuIVwGLPy-mtohX-NXrjD8fc9FBOc

Buy a beautifully printed full-color version of "A Business Analyst's Guide to Business Analytics" on Amazon: http://www.amazon.com/dp/B0CFZMKRGX