Notice that if you call `maxent` several times in the same cell, as we just did, all the distributions will be drawn in the same figure. This can be very useful for visually comparing several alternatives.
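The overlay behavior described here comes from successive plotting calls sharing the current Axes. As a minimal sketch of the same effect with plain matplotlib and NumPy (the distributions and parameters below are hypothetical stand-ins, not the chapter's actual priors):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs in scripts
import matplotlib.pyplot as plt
import numpy as np

def normal_pdf(x, mu, sigma):
    """Plain-NumPy normal density, standing in for a fitted distribution."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-5, 10, 200)
ax = plt.gca()
# Two calls in the same "cell": each curve lands on the same Axes,
# mirroring how repeated `maxent` calls overlay their distributions.
for mu, sigma in [(0, 1), (3, 2)]:
    ax.plot(x, normal_pdf(x, mu, sigma), label=f"N({mu}, {sigma})")
ax.legend()

print(len(ax.lines))  # two curves share one figure
```

Because both curves live on one Axes, their shapes, overlap, and tail behavior can be compared at a glance.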
Chapters/Sensitivity_checks.qmd
4 additions & 5 deletions
@@ -317,11 +317,10 @@ Let's illustrate this with an example. We will use the same model we used in the
We are going to start by computing predictions at `new_data`; you may already know how to do this from other examples. To do that, we need to create a new DataFrame with the values of the predictors we want to use for prediction. In this case, we will use the median, min, and max values of each covariate. For your particular use case, you may want to use different values, like quantiles or any other specific values of interest.
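Building such a summary DataFrame is a one-liner with pandas. A minimal sketch, using hypothetical covariate names in place of the chapter's actual predictors:

```python
import pandas as pd

# Toy covariates standing in for the model's predictors (hypothetical names).
df = pd.DataFrame({
    "age": [23, 35, 47, 51, 62],
    "income": [21.0, 35.5, 40.2, 55.1, 80.3],
})

# One row per summary statistic: median, min, and max of each covariate.
new_data = df.agg(["median", "min", "max"])
print(new_data)
```

Swapping in quantiles is just as easy, e.g. `df.quantile([0.1, 0.5, 0.9])`.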
-We can ask ArviZ to compute the Bayesian R² for us, as long as we provide a DataTree with both observed and predicted data.
+We can ask ArviZ to compute the Bayesian R² for us; we need to specify which variables represent the posterior mean (or location parameter) and standard deviation (or variance).
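The quantity being computed can be sketched directly in NumPy, following the Bayesian R² definition of Gelman et al. (2019): for each posterior draw, the variance of the predictions divided by that variance plus the residual variance. The data below are synthetic stand-ins, not the chapter's model output:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical shapes: 1000 posterior draws of predictions for 50 observations.
y_obs = rng.normal(0, 1, size=50)
y_pred = y_obs + rng.normal(0, 0.5, size=(1000, 50))

# Bayesian R² per draw: explained variance over total variance.
var_fit = np.var(y_pred, axis=1)
var_res = np.var(y_obs - y_pred, axis=1)
r2_draws = var_fit / (var_fit + var_res)

# A full posterior distribution of R², commonly summarized by mean and std.
print(r2_draws.mean(), r2_draws.std())
```

Unlike the classical R², this yields a distribution (one value per draw) rather than a single number, which is what ArviZ summarizes for us.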
To compute the joint log-likelihood we just sum the point-wise log-likelihood evaluations along the observations. In other words, we compute one log-likelihood value per MCMC step.
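This reduction can be sketched in NumPy. Assuming a point-wise log-likelihood array with the usual `(chain, draw, observation)` layout (the shapes below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point-wise log-likelihood: 4 chains, 500 draws, 100 observations.
log_lik = rng.normal(-1.0, 0.3, size=(4, 500, 100))

# Summing along the observation axis gives one joint log-likelihood
# value per MCMC step (i.e., per chain/draw pair).
joint_log_lik = log_lik.sum(axis=-1)

print(joint_log_lik.shape)  # (4, 500)
```

The result has one entry per posterior sample, so we can study its distribution across steps just like any other derived quantity.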
We see no signs of data conflict or likelihood noninformativity. We can visually check this as we did before, but given these values we should expect the distributions of the derived quantities to be very similar across the different priors.