Abstract

Omega is a library for probabilistic and causal inference. It started with the question “How can we say what we want to say in probabilistic languages?”. More precisely, we wondered why frameworks for deep learning and probabilistic inference provide so little support for adding declarative knowledge to our models. For instance, we should be able to simply assert that our classifiers are robust to adversarial attacks; that they are algorithmically fair; that physical objects persist continuously through time, in order to make object tracking more robust; or that human and mouse models are similar, so that measurements from mice help us make better inferences about humans, for whom data is expensive. Omega is the culmination of a theoretical and engineering effort to address this challenge. In short, an Omega program is a normal Julia function augmented with uncertainty, and inference is execution of that function under constraints.
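To make that last sentence concrete, here is a minimal plain-Julia sketch of the idea, deliberately not using Omega’s actual API: a model is an ordinary function with internal randomness, declarative knowledge is a predicate over its execution, and inference is implemented here as simple rejection sampling. The names `model`, `constraint`, and `infer` are hypothetical and chosen only for illustration.

```julia
using Statistics

# A model is an ordinary Julia function with internal randomness.
function model()
    x = randn()            # latent variable: x ~ Normal(0, 1)
    y = x + 0.1 * randn()  # noisy observation of x
    (x = x, y = y)
end

# Declarative knowledge: a predicate over the model's execution trace.
constraint(trace) = trace.y > 1.0

# Inference as constrained execution: run the model repeatedly and
# keep only the executions that satisfy the constraint.
function infer(model, constraint; nsamples = 1000)
    samples = []
    while length(samples) < nsamples
        trace = model()
        constraint(trace) && push!(samples, trace)
    end
    samples
end

posterior = infer(model, constraint)
println(mean(t.x for t in posterior))  # posterior mean of x given the constraint
```

Rejection sampling is of course only the simplest way to realize “execution under constraints”; it stands in here for whatever inference machinery a real system uses. The salient features that distinguish Omega from other approaches are: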