Commit d190572: update versions and syntax (#184)

1 parent 3d3c9fd · 3 files changed · 9 additions & 10 deletions

Chapters/Prior_elicitation.qmd

Lines changed: 1 addition & 1 deletion
````diff
@@ -127,7 +127,7 @@ dist_mean = pz.Gamma(mu=2)
 pz.maxent(dist_mean, 0, 3, 0.8)
 
 dist_mode = pz.Gamma()
-pz.maxent(dist_mode, 0, 3, 0.8, mode=2);
+pz.maxent(dist_mode, 0, 3, 0.8, fixed_stat=("mode",2));
 ```
 
 Notice that if you call `maxent` several times in the same cell, as we just did, all the distributions are drawn in the same figure. This can be very useful for visually comparing several alternatives.
````

Chapters/Sensitivity_checks.qmd

Lines changed: 4 additions & 5 deletions
````diff
@@ -317,11 +317,10 @@ Let's illustrate this with an example. We will use the same model we used in the
 
 We are going to start by computing predictions at `new_data`; you may already know how to do this from other examples. To do that we need to create a new DataFrame with the values of the predictors we want to use for prediction. In this case, we will use the median, min, and max values of each covariate. For your particular use case, you may want to use different values, like quantiles or any other specific values of interest.
 
-We can ask ArviZ, to compute the Bayesian R² for us, as long as we provide a DataTree with both observed and predicted data.
-
+We can ask ArviZ to compute the Bayesian R² for us; we need to specify which variables represent the posterior mean (or location parameter) and standard deviation (or variance).
 ```{python}
-r2_da = azp.ndarray_to_dataarray(azp.r2_score(dt_bf_01, summary=False).reshape(4, 2000), var_name="r2")
-dt_bf_01.posterior["r2_score"] = r2_da
+r2_da = azp.ndarray_to_dataarray(azp.bayesian_r2(dt_bf_01, pred_mean="μ", scale="σ", summary=False).reshape(4, 2000), var_name="r2")
+dt_bf_01.posterior["r2"] = r2_da
 ```
 
 To compute the joint log-likelihood we just sum the point-wise log-likelihood evaluations along the observations. In other words, we compute one log-likelihood value per MCMC step.
@@ -333,7 +332,7 @@ dt_bf_01.posterior["log_score"] = dt_bf_01.log_likelihood.sum("y_dim_0")["y"]
 Once we have added the derived quantities we just need to call `psense_summary` (or the other `psense_*` functions) as usual:
 
 ```{python}
-azp.psense_summary(dt_bf_01, var_names=["r2_score", "log_score"])
+azp.psense_summary(dt_bf_01, var_names=["r2", "log_score"])
 ```
 
 We see no signs of data conflict or likelihood noninformativity. We can visually check this as we did before, but given these values we should expect the distributions of the derived quantities to be very similar across the different priors.
````
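The joint log-likelihood step in this file is just a reduction over the observation dimension: one value per MCMC step (per chain and draw). A minimal NumPy sketch of that shape logic, using synthetic values rather than the chapter's fitted model (the shapes 4 chains × 2000 draws × 50 observations are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic point-wise log-likelihood: (chain, draw, observation)
pointwise = rng.normal(loc=-1.0, scale=0.3, size=(4, 2000, 50))

# Summing along the observation axis yields one joint log-likelihood
# per MCMC step, mirroring dt_bf_01.log_likelihood.sum("y_dim_0")
log_score = pointwise.sum(axis=-1)
```

The resulting `(4, 2000)` array is exactly the shape the diff reshapes the R² samples into before attaching both derived quantities to the posterior group.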

requirements.txt

Lines changed: 4 additions & 4 deletions
```diff
@@ -1,11 +1,11 @@
-arviz==0.21.0
+arviz==0.22.0
 arviz-base @ git+https://github.com/arviz-devs/arviz-base
 arviz-stats @ git+https://github.com/arviz-devs/arviz-stats
 arviz-plots @ git+https://github.com/arviz-devs/arviz-plots
 bambi>=0.16.0
-kulprit @ git+https://github.com/bambinos/kulprit
-preliz==0.21.0
+kulprit ==0.5.0
+preliz==0.23.0
 pymc>=5.22.0
-pymc-bart @ git+https://github.com/pymc-devs/pymc-bart
+pymc-bart ==0.11.0
 pymc-extras>=0.5.0
 scipy>=1.16.2
```
