JuliaCon 2018
Jeff Mills



BayesTesting.jl: Bayesian Hypothesis Testing without Tears

BayesTesting.jl implements a fresh approach to hypothesis testing in Julia, in which the Jeffreys-Lindley-Bartlett paradox does not occur. Any prior can be employed, including uninformative and reference priors, so the same prior used for inference can also be used for testing, and objective Bayesian posteriors can serve as the basis for tests. In standard problems, when the posterior distribution matches (numerically) the frequentist sampling distribution or likelihood, there is a one-to-one correspondence with the frequentist test. The resulting posterior odds against the null hypothesis are easy to interpret (unlike p-values), do not violate the likelihood principle, and result from minimizing a linear combination of type I and type II errors rather than fixing the type I error before testing. The testing procedure satisfies the Neyman-Pearson lemma, so tests are uniformly most powerful, and it satisfies the most general Bayesian robustness theorem. BayesTesting.jl provides functions for a variety of standard testing situations, along with more generic modular functions that allow the testing procedure to be applied easily in novel situations. For example, given any Monte Carlo or MCMC posterior sample for an unknown quantity θ, the generic BayesTesting.jl function mcodds can be used to test hypotheses concerning θ, often with one or two lines of code. The talk will demonstrate the new approach in several standard situations, including testing a sample mean, comparing two means, and testing regression parameters. A brief presentation of the methodology will be followed by examples illustrating our experience implementing the new testing procedure in Julia over the past year.
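As a rough illustration of the workflow described above, the sketch below simulates a stand-in posterior sample for θ and passes it to mcodds. The function name and package come from the abstract, but the exact call signature (here a keyword argument h for the hypothesized value) is an assumption and should be checked against the BayesTesting.jl documentation; this is a minimal sketch, not the package's documented example.

    # Minimal usage sketch under assumptions, not the package's documented example.
    # Assumed: BayesTesting.jl exports mcodds and it accepts a vector of posterior
    # draws plus a hypothesized value (here via a keyword `h`); the real signature
    # may differ, so consult the package documentation.
    using BayesTesting
    using Random

    Random.seed!(2018)

    # Stand-in for "any Monte Carlo or MCMC posterior sample" for θ; in practice
    # these draws would come from your own sampler or model.
    theta_draws = 0.25 .+ 0.1 .* randn(10_000)

    # Posterior odds against the point null H0: θ = 0, computed from the draws.
    odds_against_null = mcodds(theta_draws, h = 0.0)   # keyword `h` is an assumption
    println("Posterior odds against H0: θ = 0: ", odds_against_null)

In practice the only problem-specific step is producing theta_draws; once a posterior sample is available, the test itself is the single mcodds call, which is the "one or two lines of code" claim in the abstract.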
