ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.1 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
| Property | Value |
|---|---|
| URL | https://www.bayesianspectacles.org/book-review-of-bayesian-statistics-the-fun-way/ |
| Last Crawled | 2026-04-13 11:18:45 (1 day ago) |
| First Indexed | 2020-05-22 05:34:50 (5 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Book Review of “Bayesian Statistics the Fun Way” – Bayesian Spectacles |
| Meta Description | null |
| Meta Canonical | null |
| Boilerpipe Text | The subtitle says it all: “Understanding statistics and probability with Star Wars, Lego, and rubber ducks”. And the author, Will Kurt, does not disappoint: the writing is no-nonsense, the content is understandable, the examples are engaging, and the Bayesian concepts are explained clearly. Here are some of the book’s features that I particularly enjoyed:
Attention to the fundamentals. Without getting bogged down in axioms and theorems, Kurt stresses that “Probability is a measurement of how strongly we believe things about the world” (p. 14; note that this interpretation also holds for the likelihood function). Kurt outlines the laws of probability theory and discusses how Bayesian reasoning is an extension of pure logic to continuous-valued degrees of conviction.
A focus on the simplest model. If you are looking for a Bayesian generalized linear mixed model, you won’t find it here. Throughout, Kurt sticks mostly to the binomial distribution and conjugate beta priors. This is a great choice, as the purpose of this book is to get across the key Bayesian concepts.
Discussion of both parameter estimation and hypothesis testing. There are precious few introductory books on Bayesian inference (few that are really introductory anyway), but those that exist usually shy away from hypothesis testing. I have always found this strange, because, as Kurt demonstrates, both hypothesis testing and parameter estimation follow from exactly the same updating mechanism, namely Bayes’ rule (see also this post and also Gronau & Wagenmakers, 2019).
R code, but sparingly. Throughout the book, Kurt uses snippets of R code to make certain concepts more concrete. The best thing about this is that he does not overdo it. An appendix provides a quick introduction to R.
In my opinion, there are also some opportunities for further improvement:
The combinatorics of the binomial coefficient are not given an intuitive explanation. Yet, once you know the intuition, it is easy to reconstruct the coefficient on the fly instead of having to memorize it.
When Bayes’ rule is introduced, I prefer its predictive form, as shown here:
[Figure: the predictive form of Bayes’ rule]
The predictive form clarifies that new knowledge (the posterior, on the left-hand side) arises from updating old knowledge (the prior, first factor on the right-hand side) with the evidence that is coming from the data, quantified as relative predictive performance (see also Rouder & Morey, 2019).
The first chapter in the part “Hypothesis testing: The heart of statistics” (bonus points for the title!) deals with a Bayesian A/B test. Kurt explains:
“In this chapter, we’re going to build our first hypothesis test, an A/B test. Companies often use A/B tests to try out product web pages, emails, and other marketing materials to determine which will work best for customers. In this chapter, we’ll test our belief that removing an image from an email will increase the click-through rate against the belief that removing it will hurt the click-through rate.”
But this is a question of estimation, not of hypothesis testing. As conceptualized by Harold Jeffreys, a problem of hypothesis testing involves the tenability of a single specific parameter value. In most A/B tests, the question of interest is not whether a change will help or hurt, but whether it will help or be ineffective. The hypothesis that the change is ineffective is instantiated by a prior spike at zero. Note that a Bayesian A/B hypothesis test was recently added to JASP (https://jasp-stats.org/2020/04/28/bayesian-reanalyses-of-clinical-a-b-trials-with-jasp-the-heatmap-robustness-check/; see also Gronau, Raj, & Wagenmakers, 2019).
A minor quibble is the interpretation of the Bayes factor in chapter 16:
“The Bayes factor is a formula that tests the plausibility of one hypothesis by comparing it to another. The result tells us how many times more likely one hypothesis is than another.”
What is described here is the posterior odds (i.e., belief), not the Bayes factor (i.e., evidence; for details see this post). This is just a slip of the pen, however, since the subsequent text demonstrates that Kurt knows what he’s talking about.
Wrapping Up
This book radiates enthusiasm. This is another sense in which the author successfully presents an ultralite version of Jaynes’ work “Probability theory: The logic of science”. The best way to convey the book’s contents and the author’s enthusiasm is to present the final paragraph, “wrapping up”:
“Now that you’ve finished your journey into Bayesian statistics, you can appreciate the true beauty of what you’ve been learning. From the basic rules of probability, we can derive Bayes’ theorem, which lets us convert evidence into a statement expressing the strength of our beliefs. From Bayes’ theorem, we can derive the Bayes factor, a tool for comparing how well two hypotheses explain the data we’ve observed. By iterating through possible hypotheses and normalizing the results, we can use the Bayes factor to create a parameter estimate for an unknown value. This, in turn, allows us to perform countless other hypothesis tests by comparing our estimates. And all we need to do to unlock all this power is use the basic rules of probability to define our likelihood, P(D|H)!”
Conclusion
As a first introduction to Bayesian inference, this book is hard to beat. It nails the key concepts in a compelling and instructive fashion. I give it full marks: five out of five stars. Perhaps a future edition will make use of a new JASP module that we currently have under development (no spoilers!).
Want to Know More?
An interview with Will Kurt is here.
Another review of “Bayesian statistics the fun way” is here.
Will Kurt’s blog, “Count Bayesie”, is here.
References
Gronau, Q. F., Raj, A., & Wagenmakers, E.-J. (2019). Informed Bayesian inference for the A/B test. Manuscript submitted for publication.
Gronau, Q. F., & Wagenmakers, E.-J. (2019). Rejoinder: More limitations of Bayesian leave-one-out cross-validation. Computational Brain & Behavior, 2, 35-47.
Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge: Cambridge University Press.
Kurt, W. (2019). Bayesian statistics the fun way. San Francisco: No Starch Press.
Perezgonzalez, J. D. (2020). Book review: Bayesian statistics the fun way: Understanding statistics and probability with Star Wars, Lego, and rubber ducks. Frontiers in Psychology, 10:3021.
Rouder, J. N., & Morey, R. D. (2019). Teaching Bayes’ theorem: Strength of evidence as predictive accuracy. The American Statistician, 73, 186-190.
Wagenmakers, E.-J., Morey, R. D., & Lee, M. D. (2016). Bayesian benefits for the pragmatic researcher. Current Directions in Psychological Science, 25, 169-176.
About The Author
Eric-Jan Wagenmakers
Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam. |
| Markdown |
# Book Review of “Bayesian Statistics the Fun Way”
May 21 - 2020
The subtitle says it all: “Understanding statistics and probability with Star Wars, Lego, and rubber ducks”. And the author, Will Kurt, does not disappoint: the writing is no-nonsense, the content is understandable, the examples are engaging, and the Bayesian concepts are explained clearly. Here are some of the book’s features that I particularly enjoyed:
1. **Attention to the fundamentals**. Without getting bogged down in axioms and theorems, Kurt stresses that “Probability is a measurement of how strongly we believe things about the world” (p. 14; note that this interpretation also holds for the likelihood function). Kurt outlines the laws of probability theory and discusses how Bayesian reasoning is an extension of pure logic to continuous-valued degrees of conviction.
2. **A focus on the simplest model**. If you are looking for a Bayesian generalized linear mixed model, you won’t find it here. Throughout, Kurt sticks mostly to the binomial distribution and conjugate beta priors. This is a great choice, as the purpose of this book is to get across the key Bayesian concepts.
3. **Discussion of both parameter estimation and hypothesis testing**. There are precious few introductory books on Bayesian inference (few that are really introductory anyway), but those that exist usually shy away from hypothesis testing. I have always found this strange, because, as Kurt demonstrates, both hypothesis testing and parameter estimation follow from exactly the same updating mechanism, namely Bayes’ rule (see also [this post](https://www.bayesianspectacles.org/bayes-factors-for-those-who-hate-bayes-factors/#more-880) and also [Gronau & Wagenmakers, 2019](https://link.springer.com/content/pdf/10.1007%2Fs42113-018-0022-4.pdf)).
4. **R code, but sparingly**. Throughout the book, Kurt uses snippets of R code to make certain concepts more concrete. The best thing about this is that he does not overdo it. An appendix provides a quick introduction to R.
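The beta-binomial machinery the book leans on can be sketched in a few lines (the book uses R; this is an equivalent Python illustration of my own, with made-up data): with a Beta(a, b) prior on a rate and a binomial observation, the posterior is again a beta distribution.

```python
def update_beta(a, b, successes, failures):
    """Conjugate update: a Beta(a, b) prior plus binomial data
    yields a Beta(a + successes, b + failures) posterior."""
    return a + successes, b + failures

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Flat Beta(1, 1) prior, then observe 30 clicks out of 100 emails (made-up data).
a, b = update_beta(1, 1, successes=30, failures=70)
print(a, b)                        # Beta(31, 71) posterior
print(round(beta_mean(a, b), 3))   # posterior mean 0.304
```

No simulation or special library is needed: conjugacy reduces the whole update to adding counts to the prior's pseudo-counts.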
In my opinion, there are also some opportunities for further improvement:
1. The combinatorics of the binomial coefficient are not given an intuitive explanation. Yet, once you know the intuition, it is easy to *reconstruct* the coefficient on the fly instead of having to memorize it.
2. When Bayes’ rule is introduced, I prefer its [predictive form](https://osf.io/3tdh9/), as shown here:
![The predictive form of Bayes’ rule](https://www.bayesianspectacles.org/wp-content/uploads/2020/05/BS-Blogpost-book-review2.jpg)
The predictive form clarifies that new knowledge (the posterior, on the left-hand side) arises from updating old knowledge (the prior, first factor on the right-hand side) with the evidence that is coming from the data, quantified as relative predictive performance (see also Rouder & Morey, 2019).
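In symbols (my notation; the figure may label the terms differently), the predictive form reads:

```latex
\underbrace{P(H \mid D)}_{\text{posterior}}
\;=\;
\underbrace{P(H)}_{\text{prior}}
\times
\underbrace{\frac{P(D \mid H)}{P(D)}}_{\text{relative predictive performance}}
```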
3. The first chapter in the part “Hypothesis testing: The heart of statistics” (bonus points for the title!) deals with a Bayesian A/B test. Kurt explains:
> “In this chapter, we’re going to build our first hypothesis test, an *A/B* test. Companies often use A/B tests to try out product web pages, emails, and other marketing materials to determine which will work best for customers. In this chapter, we’ll test our belief that removing an image from an email will increase the *click-through rate* against the belief that removing it will hurt the click-through rate.”
But this is a question of estimation, not of hypothesis testing. As conceptualized by Harold Jeffreys, a problem of hypothesis testing involves the tenability of a single specific parameter value. In most A/B tests, the question of interest is not whether a change will help or hurt, but whether it will help or be ineffective. The hypothesis that the change is ineffective is instantiated by a prior spike at zero. Note that a Bayesian A/B hypothesis test was recently added to JASP (<https://jasp-stats.org/2020/04/28/bayesian-reanalyses-of-clinical-a-b-trials-with-jasp-the-heatmap-robustness-check/>; see also [Gronau, Raj, & Wagenmakers, 2019](https://arxiv.org/abs/1905.02068)).
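A point-null A/B comparison of this kind can be sketched with conjugate marginal likelihoods (made-up counts of my own; a shared-rate null plays the role of the "no effect" spike, and the flat Beta(1, 1) priors are an arbitrary choice, not the informed priors of Gronau et al.):

```python
from math import lgamma, exp

def lbeta(a, b):
    """Log of the beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marglik(k, n, a=1.0, b=1.0):
    """Log marginal likelihood of k successes in n trials under a Beta(a, b)
    prior on the rate (binomial coefficients omitted: they cancel below)."""
    return lbeta(a + k, b + n - k) - lbeta(a, b)

# Made-up A/B data: clicks out of emails sent for each variant.
k_a, n_a = 20, 200   # with image
k_b, n_b = 60, 200   # image removed

# H0 ("the change is ineffective"): one shared click-through rate.
log_m0 = log_marglik(k_a + k_b, n_a + n_b)
# H1 ("the change matters"): each variant gets its own rate.
log_m1 = log_marglik(k_a, n_a) + log_marglik(k_b, n_b)

bf10 = exp(log_m1 - log_m0)   # Bayes factor for H1 over H0
print(bf10)                   # large here: strong evidence for a difference
```

The contrast with a pure estimation analysis is that H0 gets its own prior mass, so the data can also accumulate evidence *for* "no difference" when the two rates are similar.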
4. A minor quibble is the interpretation of the Bayes factor in chapter 16:
> “The Bayes factor is a formula that tests the plausibility of one hypothesis by comparing it to another. The result tells us how many times more likely one hypothesis is than another.”
What is described here is the posterior odds (i.e., belief), not the Bayes factor (i.e., evidence; for details see this [post](https://www.bayesianspectacles.org/the-single-most-prevalent-misinterpretation-of-bayes-rule/)). This is just a slip of the pen, however, since the subsequent text demonstrates that Kurt knows what he’s talking about.
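The distinction is easy to make concrete (toy numbers of my own): the Bayes factor is the ratio of how well the two hypotheses predicted the data, and it equals the posterior odds only when the prior odds are 1.

```python
# Toy numbers: how well each hypothesis predicted the observed data.
p_data_h1 = 0.08
p_data_h2 = 0.02
bayes_factor = p_data_h1 / p_data_h2   # evidence: H1 predicted the data
                                       # four times better than H2

# Posterior odds additionally fold in the prior odds (Bayes' rule in odds form).
prior_odds = 0.2 / 0.8                 # H1 deemed unlikely a priori
posterior_odds = bayes_factor * prior_odds

print(round(bayes_factor, 6))    # 4.0 -- the evidence
print(round(posterior_odds, 6))  # 1.0 -- the belief: even odds,
                                 # despite evidence favoring H1
```

"How many times more likely one hypothesis is than another" describes the second quantity; the Bayes factor is the first.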
### Wrapping Up
This book radiates enthusiasm. This is another sense in which the author successfully presents an ultralite version of Jaynes’ work “Probability theory: The logic of science”. The best way to convey the book’s contents and the author’s enthusiasm is to present the final paragraph, “wrapping up”:
> “Now that you’ve finished your journey into Bayesian statistics, you can appreciate the true beauty of what you’ve been learning. From the basic rules of probability, we can derive Bayes’ theorem, which lets us convert evidence into a statement expressing the strength of our beliefs. From Bayes’ theorem, we can derive the Bayes factor, a tool for comparing how well two hypotheses explain the data we’ve observed. By iterating through possible hypotheses and normalizing the results, we can use the Bayes factor to create a parameter estimate for an unknown value. This, in turn, allows us to perform countless other hypothesis tests by comparing our estimates. And all we need to do to unlock all this power is use the basic rules of probability to define our likelihood, P(D\|H)!”
### Conclusion
As a first introduction to Bayesian inference, this book is hard to beat. It nails the key concepts in a compelling and instructive fashion. I give it full marks: five out of five stars. Perhaps a future edition will make use of a new JASP module that we currently have under development (no spoilers!).
### Want to Know More?
An interview with Will Kurt is [here](https://notamonadtutorial.com/interview-with-will-kurt-on-his-latest-book-bayesian-statistics-the-fun-way-63ce8aee32ed).
Another review of “Bayesian statistics the fun way” is [here](https://www.frontiersin.org/articles/10.3389/fpsyg.2019.03021/full).
Will Kurt’s blog, “Count Bayesie”, is [here](https://www.countbayesie.com/).
### References
Gronau, Q. F., Raj, A., & Wagenmakers, E.-J. (2019). [Informed Bayesian inference for the A/B test](https://arxiv.org/abs/1905.02068). Manuscript submitted for publication.
Gronau, Q. F., & Wagenmakers, E.-J. (2019). [Rejoinder: More limitations of Bayesian leave-one-out cross-validation.](https://link.springer.com/content/pdf/10.1007%2Fs42113-018-0022-4.pdf) *Computational Brain & Behavior, 2*, 35-47.
Jaynes, E. T. (2003). Probability theory: The logic of science. Cambridge: Cambridge University Press.
Kurt, W. (2019). Bayesian statistics the fun way. San Francisco: No Starch Press.
Perezgonzalez, J. D. (2020). [Book review: Bayesian statistics the fun way: Understanding statistics and probability with Star Wars, Lego, and rubber ducks](https://www.frontiersin.org/articles/10.3389/fpsyg.2019.03021/full). *Frontiers in Psychology, 10*:3021.
Rouder, J. N., & Morey, R. D. (2019). Teaching Bayes’ theorem: Strength of evidence as predictive accuracy. *The American Statistician, 73*, 186-190.
Wagenmakers, E.-J., Morey, R. D., & Lee, M. D. (2016). Bayesian benefits for the pragmatic researcher. *Current Directions in Psychological Science, 25*, 169-176.
#### About The Author

### Eric-Jan Wagenmakers
Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam.
© 2026 Bayesian Spectacles. All rights reserved. |
| Shard | 28 (laksa) |
| Root Hash | 13274024404467094228 |
| Unparsed URL | org,bayesianspectacles!www,/book-review-of-bayesian-statistics-the-fun-way/ s443 |