Jesse Bettencourt

University of Toronto

Self-tuning Gradient Estimators through Higher-order Automatic Differentiation in Julia

Gradient-based optimization is the main trick of deep learning and deep reinforcement learning. However, it is hard to estimate gradients in the most interesting settings: when the mechanism being optimized is unknown (as in reinforcement learning) or involves discrete operations (as in optimizing programs). I'll give a quick overview of the tricks of the trade.
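
To make the setting concrete, here is a minimal, illustrative sketch (not taken from the talk) of one of those standard tricks: the score-function (REINFORCE) estimator, which estimates a gradient even though the sampled variable is discrete and gradients cannot flow through the sampling step. The objective f, the Bernoulli parameterization, and all names are invented for illustration.

```julia
using Random, Statistics

# Toy objective on a discrete sample b ∈ {0, 1} (hypothetical example)
f(b) = (b - 0.45)^2

# Log-density of a Bernoulli(θ) sample and its derivative with respect to θ
logp(b, θ)  = b * log(θ) + (1 - b) * log(1 - θ)
dlogp(b, θ) = b / θ - (1 - b) / (1 - θ)

# Score-function (REINFORCE) estimator:
#   ∇θ E_{b ~ Bernoulli(θ)}[f(b)] = E[f(b) * ∇θ log p(b; θ)],
# estimated here by plain Monte Carlo sampling.
function score_function_grad(θ; nsamples = 10_000)
    grads = map(1:nsamples) do _
        b = rand() < θ ? 1.0 : 0.0      # discrete sample; no gradient flows through this
        f(b) * dlogp(b, θ)
    end
    mean(grads), std(grads) / sqrt(nsamples)   # estimate and its standard error
end

θ = 0.3
ĝ, se = score_function_grad(θ)
exact = f(1.0) - f(0.0)                 # d/dθ [θ*f(1) + (1-θ)*f(0)] for this toy problem
println("estimate ≈ $ĝ ± $se, exact = $exact")
```

Estimators of this kind are unbiased but often high-variance, which is one reason tunable, more sophisticated gradient estimators are of interest.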

Speaker's bio

Jesse Bettencourt is a graduate student in the Machine Learning group at the University of Toronto and the Vector Institute. He is supervised by David Duvenaud and Roger Grosse and teaches the undergraduate/graduate course on probabilistic models and machine learning. He is very excited to use Julia in his ML research and possibly in future course offerings.