A Business Analyst’s Introduction to Business Analytics

Cobras are known for hypnotizing their prey. Like cobras, Bayesian posteriors can fool you into submission - making you think you have a good model with small uncertainty. The seductiveness of getting results needs to be counterbalanced with a good measure of skepticism. For us, that skepticism manifests as a posterior predictive check - a method of ensuring the posterior distribution can simulate data that is similar to the data observed. We want to ensure our BAW leads to actionable insights, not intoxicating and venomous results.

When modelling real-world data, your generative DAG **never** captures the *true* generating process - the real-world is too messy. However, if your generative DAG can approximate reality, then your model might be useful. Modelling with generative DAGs provides a good starting place from which to confirm, deny, or refine business narratives. After all, data-supported business narratives motivate action within a firm; fancy algorithms with intimidating names are not sufficient on their own. Whether your generative DAG proves successful or not, the modelling process by itself puts you on a good path towards learning more from both the domain expertise of business stakeholders and observed data.

In the last chapter, we modelled the daily number of tickets issued in New York City on Wednesdays. We made a data frame, `wedTicketsDF`, containing our observed data using the code shown here:

```
library(tidyverse)
library(causact)
library(greta)
library(lubridate)

nycTicketsDF = ticketsDF

## summarize tickets for Wednesdays
wedTicketsDF = nycTicketsDF %>%
  mutate(dayOfWeek = wday(date, label = TRUE)) %>%
  filter(dayOfWeek == "Wed") %>%
  group_by(date) %>%
  summarize(numTickets = sum(daily_tickets))
```

The generative DAG of Figure 19.13, which we thought was successful, yielded a posterior distribution with a huge reduction in uncertainty. As we will see, this DAG will turn out to be the hypnotizing cobra I was warning you about. Let’s learn a way of detecting models that are inconsistent with observation.

```
graph = dag_create() %>%
  dag_node("Daily # of Tickets", "k",
           rhs = poisson(lambda),
           data = wedTicketsDF$numTickets) %>%
  dag_node("Avg # Daily Tickets", "lambda",
           rhs = uniform(3000, 7000),
           child = "k")
graph %>% dag_render()
```

Using the generative DAG in Figure 19.13, we call `greta` to get our posterior distribution:
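The code for this step is not shown above; a minimal sketch, assuming the `causact` workflow used throughout the book (`dag_greta()` to sample and `dagp_plot()` to visualize), would be:

```
## sample from the posterior; dag_greta() runs the model
## through greta's MCMC sampler and returns a data frame
## of posterior draws (one column per parameter)
drawsDF = graph %>% dag_greta()
## visualize the prior-to-posterior reduction in uncertainty
drawsDF %>% dagp_plot()
```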

Figure 19.14 suggests reduced uncertainty in the average number of tickets issued, \(\lambda\), when moving from prior to posterior. Prior uncertainty gave equal plausibility to any number between 3,000 and 7,000. The plausible range for the posterior spans a drastically smaller range, from about 5,005 to 5,030. So while this might lead us to think we have a good model, do not be hypnotized into believing it just yet.

Here we use just one draw from the posterior for demonstrating a posterior predictive check. It is actually more appropriate to use dozens of draws to get a feel for the variability within the entire sample of feasible posterior distributions.

A *posterior predictive check* compares simulated data using a draw of your posterior distribution to the observed data you are modelling - usually represented by the data node at the bottom of your generative DAG. In reference to Figure 19.13, this means we simulate 105 observations of tickets issued, \(K\), and compare the simulated data to the 105 real-world observations (two years worth of Wednesday tickets).

Future versions of `greta` will incorporate functionality to make these checks easier. Unfortunately, given current functionality, this is a tedious, albeit instructive, process.

Simulating 105 observations requires us to convert the DAG's joint distribution recipe into computer code - we do this going from top to bottom of the graph. At the top of the DAG is `lambda`, so we get a single random draw from the posterior:

```
lambdaPost = drawsDF %>%  # posterior dist.
  sample_n(1) %>%         # get a random row
  pull(lambda)            # convert from tibble to single value
lambdaPost                # print value
```

`## [1] 5020.122`

*Note: Due to some inherent randomness, you will not get an identical `lambdaPost` value, but your value will be close and you should use it to follow along with the process.*

Continuing the recipe conversion by moving from parent to child in Figure 19.13, we simulate 105 realizations of \(K\) using the appropriate `rfoo` function (`greta` does not support posterior predictive checks yet, so we must use R’s built-in random variable samplers, namely `rpois` for a Poisson random variable):
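The simulation itself is one line of base R; the result is stored as `simData`, the name expected by the plotting code that follows:

```
## simulate 105 Wednesdays of ticket counts using the
## single posterior draw of lambda obtained above
simData = rpois(n = 105, lambda = lambdaPost)
```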

And then, we can compare the histograms of the simulated data and the observed data:

```
library(tidyverse)

# make data frame for ggplot
plotDF = tibble(k_observed = wedTicketsDF$numTickets,
                k_simulated = simData)

# pivot data frame to tidy format so fill
# can be mapped to observed vs. simulated data
plotDF = plotDF %>%
  pivot_longer(cols = c(k_observed, k_simulated),
               names_to = "dataType",
               values_to = "ticketCount")  # from tidyr package

colors = c("k_observed" = "navyblue", "k_simulated" = "cadetblue")

ggplot(plotDF, aes(x = ticketCount)) +
  geom_density(aes(fill = dataType), alpha = 0.5) +
  scale_fill_manual(values = colors)
```

Figure 20.1 shows two very different distributions of data. The observed data seemingly can vary from 0 to 8,000 while the simulated data never strays too far from 5,000. The real-world dispersion is not being captured by our generative DAG. Why not?

Our generative DAG wrongly assumes that every Wednesday has the exact same conditions for tickets being issued. In research based on the same data, Auerbach (2017. “Are New York City Drivers More Likely to Get a Ticket at the End of the Month?” *Significance* 14 (4): 20–25) considers holidays and ticket quotas as just some of the other factors driving the variation in tickets issued. To do better, we would need to account for this variation.

There is some discretion in choosing priors, and advice on doing so is evolving. Structuring generative DAGs is tricky, but rely on your domain knowledge to help you do this. One of the thought leaders in this space is Andrew Gelman. You can see a recent thought process regarding prior selection here: https://andrewgelman.com/2018/04/03/justify-my-love/. In general, his blog is an excellent resource for informative discussions on prior setting.

Let’s now look at how a good posterior predictive check might work. Consider the following graphical model from the previous chapter which modelled cherry tree heights:

```
library(greta)
library(causact)

graph = dag_create() %>%
  dag_node("Tree Height", "x",
           rhs = student(nu, mu, sigma),
           data = trees$Height) %>%
  dag_node("Degrees Of Freedom", "nu",
           rhs = gamma(2, 0.1),
           child = "x") %>%
  dag_node("Avg Cherry Tree Height", "mu",
           rhs = normal(50, 24.5),
           child = "x") %>%
  dag_node("StdDev of Observed Height", "sigma",
           rhs = uniform(0, 50),
           child = "x") %>%
  dag_plate("Observation", "i",
            nodeLabels = "x")
graph %>% dag_render()
```

We get the posterior as usual:

And then compare data simulated from a random posterior draw to the observed data. *Actually, we will compare simulated data from several draws, say 20, to get a fuller picture of what the posterior implies for observed data.* By creating multiple simulated datasets, we can see how much the data distributions vary. Observed data is subject to lots of randomness, so we just want to ensure that the observed randomness falls within the realm of our plausible narratives.

Getting the twenty random draws, which is technically a random sample of the posterior's representative sample, we place them in `paramsDF`:

```
paramsDF = drawsDF %>%
  sample_n(20)        ## get 20 random draws of posterior
paramsDF %>% head(5)  ## see first five rows
```

```
## # A tibble: 5 x 3
## nu mu sigma
## <dbl> <dbl> <dbl>
## 1 10.4 76.6 8.18
## 2 29.2 78.5 6.50
## 3 9.90 75.5 5.81
## 4 11.8 76.9 7.25
## 5 8.11 76.8 6.53
```

Then, for each row of `paramsDF`, we will simulate 31 observations. Since we are going to do the same simulation 20 times, once for each row of parameters, I write a function that returns a vector of simulated tree heights. Again, we convert the generative DAG recipe (Figure 20.2) into code that enables our posterior predictive check:

Typing `?greta::distributions` into the console will open a help screen regarding `greta` distributions. For every distribution we use in `greta`, there is a corresponding distribution in `R` with `dfoo`, `pfoo`, and `rfoo` functions that yield density, cumulative distribution, and random sampling capabilities, respectively. Digging into this help file reveals that the parameterization of the Student-t distribution in `greta` is based on a function in the `extraDistr` package. Digging further, the `rfoo` function for the Student-t distribution is `rlst()`. We will call this function using `extraDistr::rlst()` instead of running `library(extraDistr)` and then using `rlst()`. More generally, if you want a specific function from a specific package without loading the package, use the convention `packageName::functionName`.

```
## function to get simulated observations when given
## a draw of nu, mu, and sigma
# uncomment the below line if you get an error that rlst() is not found
# install.packages("extraDistr")
# the install might require you to restart your R session;
# in that case, you may have to rerun lines from above
simObs = function(nu, mu, sigma) {
  # use n = 31 because there are 31 observed
  # tree heights in the data
  simVector = extraDistr::rlst(n = 31, df = nu, mu = mu, sigma = sigma)
  return(simVector)
}
```

The code below tests that the function works as expected (always do this after writing functions):
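The call that produced the output below is not shown in the text; a plausible test, using illustrative argument values (these specific numbers are an assumption, not necessarily the ones used in the book), simply invokes `simObs()` as defined above:

```
## quick sanity check: values should cluster around mu = 500
## with spread comparable to sigma = 20 (arguments here are
## illustrative - chosen only to exercise the function)
simObs(nu = 3, mu = 500, sigma = 20)
```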

```
## [1] 501.1007 461.1472 466.8654 495.2515 490.2192 489.3973 482.3341 501.0150
## [9] 516.5468 551.6474 499.9827 536.2224 665.0259 494.3385 470.7698 510.4961
## [17] 498.6055 440.4372 503.7029 513.8430 498.1281 511.8068 505.9243 515.9161
## [25] 490.0101 501.7462 462.7930 481.1312 516.1769 488.1636 505.7931
```

We now get 31 observations for each row in `paramsDF` using some clever coding tricks that you should review slowly; unfortunately, they cannot be avoided:

See https://jennybc.github.io/purrr-tutorial/ls03_map-function-syntax.html for a nice tutorial on how the `pmap()` function (parallel map) works.

```
library(tidyverse)

simsList = pmap(paramsDF, simObs)

## give unique names to list elements
## (required for conversion to a data frame)
names(simsList) = paste0("sim", 1:length(simsList))

## create a data frame from the list;
## each column corresponds to one of the 20
## randomly chosen posterior draws. For each draw,
## 31 simulated points were created
simsDF = as_tibble(simsList)
simsDF  ## see what got created
```

```
## # A tibble: 31 x 20
## sim1 sim2 sim3 sim4 sim5 sim6 sim7 sim8 sim9 sim10 sim11 sim12 sim13
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 80.6 82.4 78.5 72.8 80.4 71.5 72.1 69.8 73.7 84.3 74.6 76.4 74.1
## 2 80.5 74.7 73.8 87.1 87.7 81.9 80.4 68.1 71.6 78.2 86.1 78.2 86.2
## 3 62.8 90.6 76.3 75.3 86.8 80.9 80.3 70.6 89.7 60.9 77.7 74.8 82.2
## 4 89.3 87.9 71.1 78.4 93.9 78.3 86.6 84.2 79.4 58.4 82.3 78.4 82.1
## 5 78.2 74.4 80.6 78.3 54.0 68.1 82.3 78.7 83.0 69.9 69.8 67.3 72.5
## 6 71.4 87.1 55.3 80.9 78.7 67.0 74.1 76.2 71.6 73.7 72.4 77.1 73.2
## 7 74.3 88.7 65.5 71.0 78.6 76.7 79.4 79.0 78.0 74.9 79.8 69.3 69.4
## 8 71.4 93.0 83.5 80.9 79.7 76.5 79.6 75.8 77.8 73.3 73.6 64.3 81.8
## 9 68.5 76.1 76.7 65.4 70.1 73.0 70.7 82.0 78.2 74.7 78.9 73.5 63.0
## 10 99.3 75.8 77.8 63.2 77.5 80.1 63.1 80.7 80.5 72.6 73.7 78.7 79.0
## # ... with 21 more rows, and 7 more variables: sim14 <dbl>, sim15 <dbl>,
## # sim16 <dbl>, sim17 <dbl>, sim18 <dbl>, sim19 <dbl>, sim20 <dbl>
```

```
## create tidy version for plotting
plotDF = simsDF %>%
  pivot_longer(cols = everything())

## see a random sample of plotDF rows
plotDF %>% sample_n(10)
```

```
## # A tibble: 10 x 2
## name value
## <chr> <dbl>
## 1 sim19 72.2
## 2 sim9 77.6
## 3 sim2 75.5
## 4 sim3 62.9
## 5 sim12 77.1
## 6 sim20 87.0
## 7 sim15 77.2
## 8 sim4 77.2
## 9 sim9 80.5
## 10 sim13 79.0
```

And with a lot of help from googling tidyverse plotting issues, one can figure out how to consolidate the 20 simulated densities with the 1 observed density into a single plot (see Figure 20.3). The goal is to see if the business narrative (modelled as the generative DAG in Figure 20.2) can feasibly produce data that has been observed. The following code creates the visual we are looking for:

```
obsDF = tibble(obs = trees$Height)
colors = c("simulated" = "cadetblue", "observed" = "navyblue")

ggplot(plotDF) +
  stat_density(aes(x = value,
                   group = name,
                   color = "simulated"),
               geom = "line",           ## makes legend look right
               position = "identity") + ## keeps sims separated
  stat_density(data = obsDF,
               aes(x = obs, color = "observed"),
               geom = "line",
               position = "identity",
               lwd = 2) +
  scale_color_manual(values = colors) +
  labs(x = "Cherry Tree Height (ft)",
       y = "Density Estimated from Data",
       color = "Data Type")
```

Figure 20.3, a type of spaghetti plot (so-called for obvious reasons), shows 21 different density lines. The twenty light-colored lines each represent a density derived from a single posterior draw. The thicker dark line is the observed density based on the actual 31 observations. As can be seen, despite the variation across all the lines, the posterior does seem capable of generating data like that which we observed. While this is not a definitive validation of the generative DAG, it is a very good sign that your business narrative is on the right track.

Despite any posterior predictive success, remain vigilant for factors not included in your generative DAG. Investigating these can lead to substantially more refined narratives of how your data gets generated. For example, in the credit card example of the *causact* chapter, our initial model ignored the *car model* data. Even with this omission, the model would pass a posterior predictive check quite easily. Only by including *car model* data, as suggested by working with business stakeholders, did we achieve a much more accurate business narrative. *Do not be a business analyst who only looks at data*; get out and talk to domain experts! See this Twitter thread for a real-world example of why this matters: https://twitter.com/oziadias/status/1221531710820454400. It shows how data generated in the real world of emergency physicians can only be modelled properly when real-world considerations are factored in.