2018
Hong Ge

University of Cambridge
The Turing language for probabilistic programming

Probabilistic programming is becoming an attractive approach to probabilistic machine learning. By relieving researchers of the tedious burden of hand-deriving inference algorithms, it not only enables the development of more accurate and interpretable models but also encourages reproducible research. However, successful probabilistic programming systems require flexible, generic and efficient inference engines. In this work, we present Turing, a system for flexible, composable probabilistic programming inference. Turing has an intuitive modelling syntax and supports a wide range of sampling-based inference algorithms. Most importantly, inference in Turing is composable: it combines Markov chain sampling operations on subsets of model variables, e.g. using a combination of a Hamiltonian Monte Carlo (HMC) engine and a particle Gibbs (PG) engine. This composable inference engine allows the user to switch easily between black-box inference methods such as HMC and customized inference methods. Our aim is to present Turing and its composable inference engines to the community and to encourage other researchers to build on this system to help advance the field of probabilistic machine learning.
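To make the composability described above concrete, the sketch below shows what a Turing model and a composed sampler might look like. This is a minimal illustration, not code from the talk: the model `gdemo`, its data, and the specific sampler settings (step sizes, particle counts) are assumptions chosen for the example; the `@model` macro, `Gibbs`, `HMC`, and `PG` are Turing's own constructs.

```julia
using Turing

# A hypothetical Gaussian model with unknown mean m and variance s.
# The names and priors here are illustrative, not from the talk.
@model function gdemo(x)
    s ~ InverseGamma(2, 3)        # prior on the variance
    m ~ Normal(0, sqrt(s))        # prior on the mean, given s
    for i in eachindex(x)
        x[i] ~ Normal(m, sqrt(s)) # likelihood of each observation
    end
end

# Composable inference: update s with particle Gibbs and m with HMC,
# alternating the two engines within one Markov chain.
chain = sample(gdemo([1.5, 2.0]), Gibbs(PG(20, :s), HMC(0.1, 5, :m)), 1000)
```

Swapping `Gibbs(PG(20, :s), HMC(0.1, 5, :m))` for a single engine such as `HMC(0.1, 5)` is the kind of switch between black-box and customized inference the abstract refers to.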

Speaker's bio