Zenna Tavares



Omega: Fast, Causal Inference from Simple Parts

Abstract

Omega is a library for probabilistic and causal inference. It started with the question "How can we say what we want to say in probabilistic languages?". More precisely, we wondered why frameworks for deep learning and probabilistic inference provide so little support for adding declarative knowledge to our models. For instance, we should be able to simply assert that our classifiers are robust to adversarial attacks; that they are algorithmically fair; that physical objects persist continuously through time, in order to make object tracking more robust; or that human and mouse models are similar, so that measurements from mice help us make better inferences about humans, where data is expensive. Omega is the culmination of a theoretical and engineering effort to address this challenge. In short, an Omega program is a normal Julia function augmented with uncertainty, and inference is execution of that function under constraints.
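
As a rough illustration of this idea, here is a minimal sketch in plain Julia (not Omega's actual API; the names `model`, `rejection_sample`, and the toy weight example are purely hypothetical) where conditioning is read as "run the function, keep only the executions that satisfy the constraint":

```julia
using Random

# A "model" is just a Julia function that draws its uncertainty from an
# explicit source of randomness (here the rng argument).
function model(rng)
    weight   = 70 + 2 * randn(rng)   # prior: weight ~ Normal(70, 2)
    measured = weight + randn(rng)   # noisy measurement of the weight
    (weight = weight, measured = measured)
end

# Inference as "execution under constraints": repeatedly run the function
# and keep only executions whose trace satisfies the predicate.
function rejection_sample(model, pred; n = 1_000, rng = Random.default_rng())
    samples = []
    while length(samples) < n
        trace = model(rng)
        pred(trace) && push!(samples, trace)
    end
    samples
end

# Condition on a declarative constraint: the measurement came out above 72.
posterior   = rejection_sample(model, t -> t.measured > 72)
mean_weight = sum(t.weight for t in posterior) / length(posterior)
```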

The salient features that distinguish Omega from other approaches are:

  1. Declarative knowledge: Omega allows you to condition on any Julia predicate, providing a mechanism to encode declarative knowledge about a domain.
  2. Causal inference: Omega allows you to imagine counterfactually what would happen under different scenarios.
  3. Higher-order: Omega allows you to condition on distributional properties such as expectation, variance, and divergences. This allows us to encode properties such as algorithmic fairness and robustness directly.

In this talk I will outline the principles of Omega through several examples in probabilistic and causal inference. I will also dive into some implementation details, such as how, for inference, we hijack the random number generator and automatically turn Boolean functions into "soft" Boolean functions, as sketched below.
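
To give a flavour of what "softening" a Boolean predicate means, the sketch below shows one common relaxation (stated under the assumption of a simple squared-exponential kernel; `soft_eq` and `soft_gt` are illustrative names, not Omega's implementation):

```julia
# Hard predicate: either satisfied or not, which makes conditioning on
# rare events (or on equalities between continuous values) hard to sample.
hard_eq(x, y) = x == y

# Relaxation: map the size of the violation to a "soft truth value" in (0, 1].
# soft_eq(x, y) == 1 exactly when the hard predicate holds and decays smoothly
# as x moves away from y, so approximate inference gets a usable signal.
soft_eq(x, y; scale = 0.1) = exp(-abs2(x - y) / scale^2)

# The same idea for an inequality: fully true when satisfied, soft otherwise.
soft_gt(x, y; scale = 0.1) = x > y ? 1.0 : exp(-abs2(x - y) / scale^2)

# Example: the hard constraint `measured == 72.0` holds with probability zero
# for a continuous measurement, but its soft version still ranks executions
# by how close they come to satisfying it.
soft_eq(71.9, 72.0)   # ≈ 0.37
soft_eq(72.0, 72.0)   # == 1.0
```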
