
Accepted Talks & Workshops



An Invitation to Julia: Toward Version 1.0

David P. Sanders, Department of Physics, Faculty of Sciences, National University of Mexico

This is an introductory tutorial on Julia as it is today, aimed at people with experience in another language who want to get up to speed quickly as Julia heads towards its first stable version.

About David P. Sanders

David P. Sanders is associate professor of computational physics in the Department of Physics of the Faculty of Sciences at the National University of Mexico in Mexico City. His previous Julia tutorials on YouTube have over 75,000 views. He is a principal author of the ValidatedNumerics.jl package for interval arithmetic, and IntervalConstraintProgramming.jl for constraint propagation.


Deep Learning with Julia

Mike Innes, Jonathan Malmaud, Pontus Stenetorp, Julia Computing, Massachusetts Institute of Technology, University College London

Over the last few years we have seen Deep Learning rise to prominence not just in academia, with state-of-the-art results for well-established tasks, but also in industry, where it leverages the ever-increasing amount of data becoming available. Due to the computationally heavy nature of Deep Learning approaches, Julia is in a unique position to serve as the language of choice for developing and deploying deep machine learning models. In this workshop we will introduce Deep Learning for a general audience – assuming only high-school-level mathematics – to give a practical understanding of the topics covered. We will first introduce the history and theoretical underpinnings of Deep Learning. After this we will introduce the lay of the land in terms of libraries and frameworks in Julia – demonstrating how one can implement state-of-the-art Deep Learning models for various forms of data. After attending the workshop the audience will have an understanding of how they can use Julia for Deep Learning and adapt these approaches to their own data. The organisers of the workshop have between them many years of experience in teaching, research, and working with and implementing Deep Learning frameworks in Julia and other programming languages.

About Mike Innes, Jonathan Malmaud, Pontus Stenetorp

Mike Innes is a software engineer at Julia Computing, where he works on the Juno IDE and the machine learning ecosystem. Jon Malmaud is a PhD candidate at MIT’s Brain and Cognitive Science Department, where he works on AI and Deep Learning. He’s also a core contributor to the Julia language, and created Julia’s TensorFlow bindings. Pontus Stenetorp is a research associate at University College London who spends most of his research time on Natural Language Processing and Machine Learning – with a particular focus on Deep Learning. He has been using Julia since 2014, due to a need for rapid prototyping and computational performance. As for Julia contributions, he tends to contribute occasional small patches to the standard library.


From One to Many: Writing Julia Code to Simulate Big Quantum Systems on Big Computers

Katharine Hyatt, UC Santa Barbara

Start using Julia to do simulations of quantum systems with many interacting particles! We will write a single-core exact diagonalization code which can handle a variety of models from quantum physics, using Julia to make it readable and performant. We’ll tour the Julia package ecosystem for useful packages that will help us store our results to share with others and get to the interesting physics. Then, we’ll use some of Julia’s parallel features to scale up our code to do many-body simulation on many-core and many-node systems in a convenient, reproducible, and fast way. You need not ever have written any Julia before. We’ll use physics examples as natural motivation to explore Julia’s capabilities, but no background in quantum mechanics is required. We will introduce the models as we go.
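To give a flavor of the kind of single-core exact diagonalization code the workshop builds up to, here is a minimal sketch in plain Julia (the model, parameter names, and scale are my own illustration, not workshop material): a dense transverse-field Ising Hamiltonian assembled with bit manipulation and diagonalized with the standard library.

```julia
using LinearAlgebra

# Dense Hamiltonian of the transverse-field Ising chain on L sites (toy scale):
# basis states are the integers 0:2^L-1, with bit s of the integer = spin s.
function ising_hamiltonian(L; J = 1.0, h = 0.5)
    dim = 2^L
    H = zeros(dim, dim)
    for state in 0:dim-1
        for s in 0:L-2                         # nearest-neighbour σᶻσᶻ coupling
            aligned = ((state >> s) & 1) == ((state >> (s + 1)) & 1)
            H[state + 1, state + 1] += aligned ? -J : J
        end
        for s in 0:L-1                         # transverse field flips spin s
            flipped = xor(state, 1 << s)
            H[state + 1, flipped + 1] -= h
        end
    end
    return Symmetric(H)
end

E0 = eigmin(ising_hamiltonian(8))   # ground-state energy of an 8-site chain
```

Real many-body codes use sparse matrices and symmetries to go far beyond this toy scale; the parallel features mentioned above then distribute such builds and solves across cores and nodes.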

About Katharine Hyatt

5th year physics graduate student, sometimes Julia contributor


GPU Programming with Julia

Tim Besard, Simon Danisch, Valentin Churavy, Various

This interactive workshop will introduce several tools and packages for GPU programming in Julia: how to set up a working environment, basic usage, and optimization. Participants will be able to follow along using their own system, or on a cloud-based JuliaBox instance.

Proposed sessions:

  1. Introduction to the JuliaGPU ecosystem
  2. CUDAnative.jl
  3. GPUArrays.jl

About Tim Besard, Simon Danisch, Valentin Churavy

Contributors to the JuliaGPU ecosystem


Integrating Julia in Real-world, Distributed Pipelines

Daniel Whitenack, Pachyderm

After attending this workshop, you will have the skills needed to integrate Julia in real-world environments. Not only that, you will understand at least one strategy for distributing Julia data analysis at production scale on large data sets and streaming data. The roadmap of the workshop will include:

  1. Intro - This section will explore the barriers to pushing Julia into production: what to do in real-world environments, and what the challenges of integrating Julia at production scale are.
  2. Making your Julia analysis portable - Here, we will learn how to containerize Julia analyses, which goes a long way toward making them deployable within organizations. We will also explore the trade-offs of containerization and common gotchas. In this case, we will use Docker to containerize an example data analysis written in Julia.
  3. Distributing your Julia analysis at scale - Finally, we will learn how to take our Docker-ized Julia analysis and distribute it at scale. That is, we will learn how to orchestrate the distribution of that analysis across a cluster and how to distribute data between instances of Julia. To do this, we will employ Kubernetes and Pachyderm. The workshop will be completely example/demo based and will include individual exercises for the students.

About Daniel Whitenack

Daniel (@dwhitena) is a Ph.D. trained data scientist working with Pachyderm (@pachydermIO). Daniel develops innovative, distributed data pipelines which include predictive models, data visualizations, statistical analyses, and more. He has spoken at conferences around the world (Datapalooza, DevFest Siberia, GopherCon, and more), teaches data science/engineering with Ardan Labs (@ardanlabs), maintains the Go kernel for Jupyter, and is actively helping to organize contributions to various open source data science projects.


NLOptControl.jl a Tool for Solving Nonlinear Optimal Control Problems

Huckleberry Febbo, University of Michigan

I am the developer of NLOptControl.jl, a JuliaOpt tool that is an extension of JuMP.jl. NLOptControl.jl is used for formulating and solving nonlinear optimal control problems. A current limitation of optimization modeling software such as JuMP is that it does not make it easy to add integral constraint equations. NLOptControl.jl also provides an implementation of the pseudo-spectral method written in Julia, which is extremely fast. While I have not yet benchmarked it against GPOPSii (a commercial software that also uses this method to solve optimal control problems), I hope to have made some comparisons to help motivate my users by JuliaCon 2017. NLOptControl.jl is an extension of JuMP.jl, and with that comes a tremendous amount of power. For instance, have you ever struggled with calculating Hessians and Jacobians? Those days are over, because NLOptControl.jl takes care of that for you, simply by utilizing JuMP and the automatic differentiation capabilities of ReverseDiffSparse.jl. Workshop details: The workshop will give people interested in nonlinear optimal control guidance and hands-on experience using a very high-level tool that is fast, concise, and powerful. The workshop will be organized into two parts: background information and hands-on experience. The background information section will explain the basics of nonlinear optimal control problems, why I got started with Julia, and then show some examples, including the autonomous vehicle control problems that I am solving. During the hands-on part, users will solve optimal control problems from start to finish and the results will be automatically plotted.

  1. Background Information:
    • What is nonlinear optimal control?
      • basic problem setup
    • Why did I get started with Julia?
      • Autonomous Vehicle Controls (not running fast enough in MATLAB)
      • Examples of how I use the software
    • Benchmark
      • Compare to other similar tools (GPOPSii)
  2. Hands on:
    • Guide users through several simple examples
      • Discuss syntax etc.
    • Guide users to solve other more advanced problems
      • Perhaps a simple version of their own
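The point above about never hand-writing Hessians and Jacobians can be illustrated with a plain JuMP model (this is generic JuMP usage with the Ipopt solver, not NLOptControl.jl's own syntax, and the toy problem is my own): JuMP's automatic differentiation supplies all derivatives behind the scenes.

```julia
using JuMP, Ipopt

# Toy nonlinear program: JuMP derives gradients, Jacobians, and Hessians
# automatically, so none of them are written by hand.
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, x >= 0, start = 0.5)
@variable(model, y >= 0, start = 0.5)
@NLobjective(model, Min, (1 - x)^2 + 100 * (y - x^2)^2)   # Rosenbrock
@NLconstraint(model, x + y <= 3)
optimize!(model)
value(x), value(y)   # both ≈ 1.0: the constraint is inactive at the optimum
```

NLOptControl.jl builds on exactly this machinery, adding the optimal-control-specific structure (states, controls, collocation) on top.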

About Huckleberry Febbo

Mechanical Engineering Ph.D. 4th year student

Optimization and Solving Systems of Equations in Julia

Patrick Kofod Mogensen, University of Copenhagen

In this workshop we will introduce the two main packages organized under the JuliaNLSolvers umbrella: Optim.jl for optimization and NLsolve.jl for solving systems of equations. We will look at the types of problems the packages solve, what the interfaces are like, and work on practical examples. A strong mathematical background is not needed, but some understanding of calculus is required to follow the discussion of the different methods.
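As a taste of the two interfaces (a minimal sketch assuming the packages' current APIs; the test problems are standard toys, not workshop material):

```julia
using Optim, NLsolve

# Optim.jl: minimize the Rosenbrock function with BFGS
# (gradients are obtained by finite differences when none are supplied)
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(rosenbrock, zeros(2), BFGS())
Optim.minimizer(res)   # ≈ [1.0, 1.0]

# NLsolve.jl: solve the 2×2 system F(x) = 0 with an in-place residual
function f!(F, x)
    F[1] = x[1]^2 + x[2]^2 - 1.0   # on the unit circle
    F[2] = x[1] - x[2]             # on the diagonal
end
sol = nlsolve(f!, [0.5, 0.2])
sol.zero   # ≈ [√2/2, √2/2]
```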

About Patrick Kofod Mogensen

Ph.D. student in economics, JuliaNLSolvers owner and developer, Julia nerd.

The Unique Features and Performance of DifferentialEquations.jl

Chris Rackauckas, University of California, Irvine

DifferentialEquations.jl is a highly extendable high-performance library for solving a vast array of differential equations in Julia. The purpose of this workshop is to introduce the participants to DifferentialEquations.jl, focusing on the new types of problems that can be explored through this software and how Julia has made this possible. We will start with a tutorial of the ordinary differential equation solvers. Users will be shown how to use the common solver interface to solve and analyze equations using the solvers from OrdinaryDiffEq.jl, Sundials.jl, ODE.jl, LSODA.jl, and ODEInterface.jl. Next, the capabilities will be explored in further depth, and users will walk through solving hybrid differential equations (continuous + discrete components), using arbitrary precision and unitful arithmetic, and solving equations with discontinuous events. After that, the tutorial will show users how to branch out to other forms of differential equations, showing how the same interface allows them to use the unique high-order adaptive Runge-Kutta methods for stochastic differential equations and the fast high-order methods for delay differential equations. Lastly, participants will be walked through the analysis add-on tools, using Optim.jl to perform parameter estimation of ordinary differential equation models, identify sensitive parameters, and quantify numerical uncertainties of solutions. Users will leave the workshop with an expanded view of what kinds of problems can be solved with DifferentialEquations.jl and with the knowledge of how to solve them.
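A minimal sketch of the common solver interface described above (written against the current problem-definition signature, which may differ in detail from what the workshop presents):

```julia
using DifferentialEquations

# Exponential growth u' = 1.01u on t ∈ [0, 1], solved via the common interface:
# define a problem, then hand it to solve() with a solver choice.
f(u, p, t) = 1.01 * u
prob = ODEProblem(f, 0.5, (0.0, 1.0))      # (rhs, initial value, time span)
sol = solve(prob, Tsit5(); reltol = 1e-8)  # Tsit5: a solver from OrdinaryDiffEq
sol(0.5)                                   # dense-output interpolation at t = 0.5
```

Swapping `Tsit5()` for a solver from Sundials.jl, LSODA.jl, etc. leaves the rest of the code unchanged; that uniformity is the point of the common interface.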

About Chris Rackauckas

Chris Rackauckas is a 4th year Ph.D. student in Mathematics at the University of California, Irvine. He is the principal author of many Julia packages, including the JuliaDiffEq packages (DifferentialEquations.jl) and ParallelDataTransfer.jl, and has contributed to numerous other packages related to scientific computing. Chris is also actively engaged in the Julia community as the author of the StochasticLifestyle blog and the tutorial “A Deep Introduction to Julia for Data Science and Scientific Computing”.



AoT or JIT : How Does Julia Work?

Jameson Nash, Julia Computing, Inc.

Julia uses a unique mix of techniques adopted from conventional static and dynamic languages to provide a special blend of high-performance and flexible compute kernels. This allows it to simultaneously have a fully ahead-of-time-compiled code model – while permitting (even encouraging) code updates at runtime – and a fully runtime-interpreted interface – while permitting extensive compile-time optimization. In this talk, I will examine some of the trade-offs and limitations this requires of user code, especially on common first-class code evaluation features – such as eval and incremental pre-compilation – as well as advanced features – such as @generated functions and @pure. We will also try to take a look at the internal layout and implementation of some of these data structures, and how the compiler works to maintain their correctness over time, despite other changes to the system.
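As one concrete example of the compile-time/runtime mix discussed here, a `@generated` function runs ordinary Julia code at compile time, on the argument *types*, to produce the expression that then gets compiled for those types (a standard illustration, not taken from the talk itself):

```julia
# The generator body only sees the type NTuple{N, Int}; it returns an
# expression, which the compiler specializes and compiles once per N.
@generated function unrolled_sum(t::NTuple{N, Int}) where {N}
    ex = :(0)
    for i in 1:N
        ex = :($ex + t[$i])   # build 0 + t[1] + t[2] + ... + t[N]
    end
    return ex
end

unrolled_sum((1, 2, 3, 4))    # 10, with the loop fully unrolled at compile time
```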

About Jameson Nash

I’ve been a Julia contributor since before it was cool. Now, I’m working for Julia Computing, as the static compilation champion, compiler correctness fiend, and performance cliff jumper.


Building End to End Data Science Solutions in the Azure Cloud with Julia

Udayan Kumar / Paul Shealy, Microsoft

Increasingly, organizations are using cloud platforms to store their data and perform analytics, driven by cost, scale, and manageability considerations. Business applications are being retooled to leverage the vast enterprise / public data, artificial intelligence (AI), and machine learning (ML) algorithms. To build and deploy large scale intelligent applications, data scientists and analysts today need to be able to combine their knowledge of analytical languages and platforms like Julia with that of the cloud. In this talk, data scientists and analysts will learn how to build end-to-end analytical solutions using Julia on scalable cloud infrastructure. Developing such solutions usually requires one to understand how to seamlessly integrate Julia with various cloud technologies. After attending the talk, the attendees should have a good understanding of all the major aspects needed to start building intelligent applications on the cloud using Julia, leveraging appropriate cloud services and tool-kits. We will also briefly introduce the Azure Data Science Virtual Machine (DSVM), which provides a comprehensive development/experimentation environment with several pre-configured tools to make it easy to work with different cloud services (SQL Data Warehouse, Spark, Blobs etc.) from Julia and other popular data analytics languages. Join this demo-heavy session where we cover the end-to-end data science life cycle and show how you can access storage and compute services on the Azure cloud using Julia from the DSVM. A self-guided tutorial building upon the examples in the demo will be published online for attendees to continue their learning offline.

About Udayan Kumar / Paul Shealy

Udayan is a Software Engineer with the Algorithms and Data Science group at Microsoft. Before coming to Microsoft, he was designing predictive algorithms to detect threats and malignant apps at a mobile security startup in Chicago. He has an MS and a Ph.D. in Computer Engineering from the University of Florida, Gainesville, FL. His research was focused on trust, privacy, and behavior mining in mobile networks. Paul is a senior software engineer in Microsoft’s Algorithms and Data Science group, where he is the lead engineer for the Data Science Virtual Machine and works on a variety of solutions for easier machine learning and data science. He was previously the project lead for the Planner service in Office 365. While on Planner he also worked on disaster recovery, topology, storage, and several other core service components. He holds computer science degrees from Clemson and Duke.

COBRA.jl: Accelerating Systems Biology

Laurent Heirendt, Luxembourg Centre for Systems Biomedicine

Laurent Heirendt, Sylvain Arreckx, Ines Thiele, Ronan M.T. Fleming

Systems Biologists in the COnstraint-Based Reconstruction and Analysis (COBRA) [7] community are gearing up to develop computational models of large and huge-scale biochemical networks with more than one million biochemical reactions. The growing model size puts a strain on efficient simulation and network exploration times, to the point that accelerating existing COBRA methods has become a priority. Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high-dimensional networks has long been hampered by performance limitations of current implementations in Matlab/C (The COBRA Toolbox [8] and fastFVA [3]) or Python (cobrapy [2]). Julia [1] is the language that fills the gap between complexity, performance, and development time. DistributedFBA.jl [4], part of the novel COBRA.jl package, is a high-level, high-performance, open-source Julia implementation of flux balance analysis, which is a linear optimization problem. It is tailored to solve multiple flux balance analyses on a subset of, or all, the reactions of large and huge-scale networks, on any number of threads or nodes, using optimization solver interfaces implemented in MathProgBase.jl [5]. Julia’s parallelization capabilities led to a speedup in latency that follows Amdahl’s law. For the first time, a flux variability analysis (two flux balance analyses on each biochemical reaction) has been performed on a model with more than 200k biochemical reactions [6]. With Julia and COBRA.jl, the reconstruction and analysis capabilities of large and huge-scale models in the COBRA community are lifted to another level. Code and benchmark data are freely available at github.com/opencobra/COBRA.jl. References:

  • [1] Bezanson, Jeff and Edelman, Alan and Karpinski, Stefan and Shah, Viral B., “Julia: A Fresh Approach to Numerical Computing”, arXiv:1411.1607 [cs] (2014). arXiv: 1411.1607
  • [2] Ebrahim, Ali and Lerman, Joshua A. and Palsson, Bernhard O. and Hyduke, Daniel R., “COBRApy: COnstraints-Based Reconstruction and Analysis for Python”, BMC Systems Biology 7 (2013), pp. 74.
  • [3] Gudmundsson, Steinn and Thiele, Ines, “Computationally efficient flux variability analysis”, BMC Bioinformatics 11, 1 (2010), pp. 489.
  • [4] Heirendt, Laurent and Thiele, Ines and Fleming, Ronan M. T., “DistributedFBA.jl: high-level, high-performance flux balance analysis in Julia”, Bioinformatics btw838 (2017).
  • [5] Lubin, Miles and Dunning, Iain, “Computing in Operations Research using Julia”, INFORMS Journal on Computing 27, 2 (2015), pp. 238–248. arXiv: 1312.1431
  • [6] Magnúsdóttir, Stefanía and Heinken, Almut and Kutt, Laura and Ravcheev, Dmitry A. and Bauer, Eugen and Noronha, Alb…, “Generation of genome-scale metabolic reconstructions for 773 members of the human gut microbiota”, Nat Biotech 35, 1 (2017), pp. 81–89.
  • [7] Palsson, Bernhard Ø, Systems Biology: Constraint-based Reconstruction and Analysis (Cambridge, England: Cambridge University Press, 2015).
  • [8] Schellenberger, Jan and Que, Richard and Fleming, Ronan M. T. and Thiele, Ines and Orth, Jeffrey D. and Feist, Adam M. and Ziel…, “Quantitative prediction of cellular metabolism with constraint-based models: the COBRA Toolbox v2.0”, Nat. Protocols 6, 9 (2011), pp. 1290–1307. 00182
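Flux balance analysis itself is a linear program: maximize a target flux subject to the steady-state condition Sv = 0 and flux bounds. A toy version (an invented three-reaction network, written with generic JuMP and the HiGHS solver rather than COBRA.jl's own interface) looks like:

```julia
using JuMP, HiGHS

# Toy flux balance analysis: maximize the "biomass" flux v[3] subject to the
# steady-state condition S*v == 0 and flux bounds. Network and bounds invented.
S = [1.0 -1.0  0.0;    # metabolite A: produced by v[1], consumed by v[2]
     0.0  1.0 -1.0]    # metabolite B: produced by v[2], consumed by v[3]
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, 0.0 <= v[1:3] <= 10.0)
@constraint(model, S * v .== 0)
@objective(model, Max, v[3])
optimize!(model)
value.(v)   # the whole linear pathway runs at the upper bound: [10, 10, 10]
```

A flux variability analysis repeats a minimization and maximization like this for every reaction, which is why the problem is embarrassingly parallel and benefits so much from DistributedFBA.jl's distribution across threads and nodes.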

About Laurent Heirendt

Laurent Heirendt was born in 1987 in Luxembourg City, Luxembourg (Europe). He received his BSc in Mechanical Engineering from the Ecole Polytechnique Fédérale de Lausanne, Switzerland in 2009. A year later, he received his MSc in Advanced Mechanical Engineering from Imperial College London in the UK, where his research and thesis focused on developing a general dynamic model for shimmy analysis of aircraft landing gear that is still in use today. He received his Ph.D. in 2014 in Aerospace Science from the University of Toronto, Canada. He developed a thermo-tribomechanical model of an aircraft landing gear, which led to a patent-pending design of a critical aircraft landing gear component. He then worked in industry and oversaw the structural analysis of large aircraft docking structures. Recently, Laurent started as a Research Associate at the Luxembourg Centre for Systems Biomedicine, where he works on the numerical optimization of large biochemical networks using Julia. Besides his mother tongue Luxembourgish, he is fluent in English, French and German, and he is currently learning Brazilian Portuguese.


Equations, inequalities and global optimisation: guaranteed solutions using interval methods and constraint propagation

David P. Sanders, Department of Physics, Faculty of Sciences, National University of Mexico

How can we find all solutions of a system of nonlinear equations, the “feasible set” satisfied by a collection of inequalities, or the global optimum of a complicated function? These are all known to be hard problems in numerical analysis. In this talk, we will show how to solve all of these problems, in a guaranteed way, using a collection of related methods based on interval arithmetic, provided by the IntervalArithmetic.jl package. The starting point is a simple dimension-independent bisection code, which can be enhanced in a variety of ways. These methods are rigorous: they are guaranteed to find all roots, or the global minimum, respectively. One key idea is the use of continuous constraint propagation, which allows us to remove large portions of the search space that are infeasible. We will explain the basics of this method, in particular the “forward-backward contractor”, and describe the implementation in the IntervalConstraintProgramming.jl package. This package generates forward and backward code automatically from a Julia expression, using metaprogramming techniques. These are combined into “contractors”, i.e. operators that contract a box without removing any portion of the set of interest. These, in turn, give a rigorous answer to the question of whether a given box lies inside the feasible set or not. In this way, a paving (collection of boxes) is built up that approximates the set.
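The bisection idea can be sketched in a few lines of plain Julia (a one-dimensional toy with a hand-written interval extension; the real packages compute rigorous enclosures with outward rounding, which this sketch does not):

```julia
# Naive interval bisection: return subintervals of width ≤ tol that may contain
# a root, given an interval extension F that bounds f over [a, b].
function bisect_roots(F, a, b; tol = 1e-6)
    lo, hi = F(a, b)
    (lo > 0 || hi < 0) && return Tuple{Float64, Float64}[]   # 0 ∉ F([a,b]): discard
    b - a <= tol && return [(a, b)]                          # small enough: keep
    m = (a + b) / 2
    return vcat(bisect_roots(F, a, m; tol = tol),
                bisect_roots(F, m, b; tol = tol))
end

# Crude hand-written interval extension of f(x) = x^2 - 2: lower bound is 0 - 2
# whenever the interval straddles zero, else min of the endpoint values.
F(a, b) = (min(a^2, b^2) * (a * b > 0) - 2, max(a^2, b^2) - 2)

bisect_roots(F, 0.0, 3.0)   # a tiny interval enclosing √2
```

The talk's methods replace the hand-written `F` with automatic, rigorously rounded interval evaluation, and add contractors so that far less bisection is needed.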

About David P. Sanders

David P. Sanders is associate professor of computational physics in the Department of Physics of the Faculty of Sciences at the National University of Mexico in Mexico City. His video tutorials on Julia have a total of 75,000 views on YouTube. He is a principal author of the ValidatedNumerics.jl package for interval arithmetic, and IntervalConstraintProgramming.jl for constraint propagation.


Event-based Simulation of Spiking Neural Networks in Julia

Rainer Engelken, Max Planck Institute for Dynamics and Self-Organization

Information in the brain is processed by the coordinated activity of large neural circuits. Neural network models help to understand, for example, how biophysical features of single neurons and the network topology shape the collective circuit dynamics. This requires solving large systems of coupled differential equations, which is numerically challenging. Here, we introduce a novel, efficient method for numerically exact simulations of sparse neural networks that brings to bear Julia’s data structures and high performance. The new algorithm reduces the computational cost from O(N) to O(log(N)) operations per network spike. This is achieved by mapping the neural dynamics to pulse-coupled phase oscillators and using mutable binary heaps for efficient state updates. Thereby, numerically exact simulations of large spiking networks and the characterization of their chaotic phase space structure become possible. For example, calculating the largest Lyapunov exponent of a spiking neural network with one million neurons is sped up by more than four orders of magnitude compared to previous implementations in other programming languages (C++, Python, Matlab).
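The heap-based event queue at the core of this approach can be sketched with DataStructures.jl (a deliberately simplified toy with non-interacting periodic neurons — the real algorithm also updates coupled neighbours' phases at each spike; I also assume here that `MutableBinaryMinHeap` handles coincide with insertion order):

```julia
using DataStructures   # provides MutableBinaryMinHeap with O(log N) updates

# Event-based loop: repeatedly pop the neuron with the earliest next spike and
# reschedule it, instead of time-stepping every neuron at every step.
function spike_times(periods, nspikes)
    heap = MutableBinaryMinHeap{Float64}()
    for p in periods
        push!(heap, p)                     # first spike time of each neuron
    end
    times = Float64[]
    for _ in 1:nspikes
        t, h = top_with_handle(heap)       # earliest event: O(1) peek
        push!(times, t)
        update!(heap, h, t + periods[h])   # reschedule this neuron: O(log N)
    end
    return times
end

spike_times([1.0, 0.4], 5)   # spike order: 0.4, 0.8, 1.0, ≈1.2, ≈1.6
```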

About Rainer Engelken

Rainer just finished his Ph.D. at the Max Planck Institute for Dynamics and Self-Organization (Göttingen) on ‘Chaotic neural circuit dynamics’ after studying physics at various places. He has been using Julia since 2014, as it minimizes both programming time and CPU time and allows easy debugging, profiling and visualization under one roof.


Fast Multidimensional Signal Processing with Shearlab.jl

Héctor Andrade Loarca, Technical University of Berlin (TUB)

The Shearlet Transform was proposed in 2005 by Professor Gitta Kutyniok (http://www3.math.tu-berlin.de/numerik/mt/mt/www.shearlet.org/papers/SMRuADaSO.pdf) and her colleagues as a multidimensional generalization of the Wavelet Transform, and it has since been adopted by many companies and institutes for its stable and optimal representation of multidimensional signals. Shearlab.jl is an already-registered Julia package (https://github.com/arsenal9971/Shearlab.jl) based on the most widely used implementation of the Shearlet Transform, programmed in Matlab by the research group of Prof. Kutyniok (http://www.shearlab.org/software); it improves on that implementation by at least a factor of two in speed across different experiments. Examples of applications of the Shearlet Transform include image denoising, image inpainting, and video compression; for instance, I have used it mainly to reconstruct the light field of a 3D scene from sparse photographic samples of different perspectives, for stereo vision purposes. Many research institutes and companies have already adopted the Shearlet Transform in their work (e.g. the Fraunhofer Institute in Berlin, the Charité hospital in Berlin, and the Mathematical Institute of TU Berlin) for its directional sensitivity, reconstruction stability, and sparse representation.

About Héctor Andrade Loarca

Ph.D. student in Mathematics at the Technical University of Berlin (TUB) with Professor Gitta Kutyniok as advisor; majored in Mathematics and Physics at the National University of México (UNAM); former data scientist at a Mexican open-governance startup (OPI); experienced in Data Mining, Machine Learning, Computational Harmonic Analysis, and Computer Vision. Currently developing Light Field Reconstruction algorithms using digital signal processing tools for 3D imaging and stereo vision. He is known among his colleagues for using Julia for everything. He was introduced to Julia by Professor David Philip Sanders, and afterwards the two gave a course on Computational Statistical Physics using Julia at the National University of México (UNAM), which convinced him to adopt Julia as his main programming language.

Flux: Machine Learning with Julia

Mike Innes, Julia Computing, Inc.

Flux.jl is a new Julia package for machine learning. It aims to provide strong tooling and support for debugging, high-level features for working with very complex networks, and state of the art performance via backends like TensorFlow or MXNet, while also providing a very high level of interoperability so that approaches can easily be mixed and matched. This talk will introduce Flux from the ground up and demonstrate some of its more advanced features.
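For a feel of the high-level style (a generic sketch of the layer-chaining API as in current Flux; exact layer names and signatures have evolved across versions, so treat the details as illustrative):

```julia
using Flux

# A small classifier as a composition of layers. Because each layer is just a
# callable Julia object, plain functions can be mixed into the chain freely.
model = Chain(
    Dense(10 => 5, relu),   # 10 inputs → 5 hidden units with ReLU
    Dense(5 => 2),          # 5 hidden units → 2 outputs
    softmax)                # normalize outputs to class probabilities

y = model(rand(Float32, 10))
sum(y)   # ≈ 1: a probability distribution over the two classes
```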

About Mike Innes

I work with Julia Computing on Julia’s IDE, Juno, as well as various projects within the machine learning ecosystem.

Full Stack Web Development with Genie.jl

Adrian Salceanu, None

The web is eating the world, but building modern web applications can be an intimidating task. Successful online products must be fast, beautiful and usable. Responsive, maintainable and extendable. Provide simple and flexible web APIs. Be secure. Reach virtually 100% uptime while being easy to debug, extend and update, requiring powerful logging, intelligent caching and rapid scaling strategies. Julia as a language has an enormous potential in the web space thanks to its concise and friendly syntax, the powerful REPL, Unicode support, cross-platform availability, the efficiently compiled code and its parallel and distributed computing capabilities. And Julia’s ecosystem already provides low-level libraries like HttpServer and WebSockets. But they leave the developers having to spend large amounts of time writing glue and boilerplate code: a tedious, expensive and error-prone task. Genie is a new web framework that leverages Julia’s unique combination of features and its extensive collection of packages to empower developers to create high-performance web apps in less time and with less code. It glues low-level libraries and contributes its own middlewares to expose a coherent and efficient workflow and a rich API for building web applications. This talk will give you the guided tour of Genie, introducing the MVC stack and its main components and showing you how to quickly bootstrap a new Genie app and how to easily implement CRUD operations to expose resources over the internet, in an efficient and secure manner. You will see how easy it is to use Genie’s API in tandem with Julia’s modules system to hook up your code – allowing you to focus on your software’s value proposition instead of wasting precious time dealing with the low-level details of transporting bytes over the wire.

About Adrian Salceanu

Web developer since 2000. Architecting and building multi-tier, performance critical web apps handling large amounts of real time data since 2008. PHP, Ruby, JavaScript, F#, Elixir. Now using Julia and Genie to tackle web development’s own two-language problem (productive-slow-interpreted vs unproductive-fast-compiled). CTO at OLBG. Startup devotee and serial tech founder. IronHack mentor, organizer of Barcelona Julia and Barcelona on Rails. Creator of Genie.jl.


GLVisualize 1.0

Simon Danisch, JuliaLang

GLVisualize is a visualization framework written purely in Julia + OpenGL. There are a lot of new changes that I want to talk about:

  • New trait system for more modularity and code clarity
  • Different backends for GLVisualize - conquering the Web & PDFs!
  • A new API for simpler drawing
  • Tight integration with GPUArrays, pre-processing on the GPU
  • Higher level plotting interface

About Simon Danisch

Developer of GLVisualize & GPUArrays

GraphGLRM: Making Sense of Big Messy Data

Mihir Paradkar, Cornell University

Many projects in research and development require analysis of tabular data. For example, medical records can be viewed as a collection of variables like height, weight, and age for different patients. The values may be boolean (yes or no), numerical (100.3), categorical (A, B, O), or ordinal (early, middle, late). Some values may also be missing. However, analysis and feature extraction is made easier by knowing relationships between variables, for example, that weight increases with height. GraphGLRM is a framework that leverages structure in data to de-noise, compress, and estimate missing values. Using Julia’s flexibility and speed, we developed this package quickly and with sufficient performance for real-world data processing needs. GraphGLRMs are now robust and versatile enough to work with sparse, heterogeneous data. We will also discuss updates to Julia data structures and tooling that would ease package development and further empower the GraphGLRM framework. More about GraphGLRMs: https://github.com/mihirparadkar/GraphGLRM.jl More about LowRankModels: https://github.com/madeleineudell/LowRankModels.jl
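The "estimate missing values" idea can be sketched with the simplest possible low-rank model: iterative SVD imputation in plain Julia (a generic textbook scheme, not the GraphGLRM.jl API; the data and mask are invented):

```julia
using LinearAlgebra

# Iterative SVD imputation: alternate between a rank-k approximation and
# overwriting only the missing entries with it (observed entries stay fixed).
function impute(A, mask; k = 1, iters = 200)
    X = copy(A)
    X[.!mask] .= 0.0                        # initialize missing entries at zero
    for _ in 1:iters
        F = svd(X)
        Xk = F.U[:, 1:k] * Diagonal(F.S[1:k]) * F.Vt[1:k, :]
        X[.!mask] .= Xk[.!mask]             # update only the missing entries
    end
    return X
end

A = [1.0 2.0; 2.0 4.0; 3.0 6.0]             # a rank-1 matrix
mask = [true true; true false; true true]   # pretend A[2, 2] is unobserved
impute(A, mask)[2, 2]                       # recovers ≈ 4.0
```

GLRM-style frameworks generalize this by swapping the squared loss for losses suited to boolean, categorical, and ordinal columns, and by adding regularizers — in GraphGLRM's case, graph-structured ones encoding known relationships between variables.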

About Mihir Paradkar

Mihir Paradkar recently graduated from Cornell University in Biological Engineering. He has been a user of Julia since v0.3.5 and is a developer of GraphGLRM.jl and LowRankModels.jl. He will be starting as a software engineer in data mining at Yelp late this summer.


HiFrames: High Performance Distributed Data Frames in Julia

Ehsan Totoni, Intel Labs

Data frames are essential tools for data scientists, but existing data frames packages in Julia (and other languages) are sequential and do not scale to large data sets. Alternatively, data frames in distributed frameworks such as Spark are slow and not integrated with other computations flexibly. We propose a novel compiler-based approach where we integrate data frames into the High Performance Analytics Toolkit (HPAT) to build HiFrames. It automatically parallelizes and compiles relational operations along with other array computations in end-to-end data analytics programs, and generates efficient MPI/C++ code. We demonstrate that HiFrames is significantly faster than alternatives such as Spark on clusters, without forcing the programmer to switch to embedded SQL for part of the program. HiFrames is 3.6x to 70x faster than Spark SQL for basic relational operations, and can be up to 20,000x faster for advanced analytics operations, such as weighted moving averages (WMA), that the map-reduce paradigm cannot handle effectively. We will discuss how Julia’s powerful macro and compilation system facilitates developing HiFrames.

About Ehsan Totoni

Ehsan Totoni is a Research Scientist at Intel Labs. He develops programming systems for large-scale HPC and big data analytics applications with a focus on productivity and performance. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2014.

Image Quilting: Building 3D Geological Models One Tile at a Time

Júlio Hoffimann, Stanford University

ImageQuilting.jl is a high-performance implementation of texture synthesis and transfer for 3D images that is capable of matching pre-existing data in the canvas where the image is to be synthesized. It can optionally make use of GPUs through the OpenCL standard and is currently being used by industry for fast generation of 3D geological models. In this talk, I will demonstrate some of the applications of this package in energy resources engineering and hydrogeology, and will highlight the qualities of the Julia programming language that enabled an unprecedented speed in this famous computer vision algorithm.

About Júlio Hoffimann

I am a Ph.D. candidate in the Department of Energy Resources Engineering at Stanford University. In my research, I study the links between surface processes (i.e., flow and sediment transport) at the surface of the Earth and the resulting geostatistical properties of its subsurface. Part of this research consists of developing efficient algorithms for stochastic/physical simulation of 3D Earth models. For more information, please visit: https://juliohm.github.io

Julia for Fully Homomorphic Encryption: Current Progress and Challenges

José Manuel Calderón Trilla, Galois, Inc.

Fully homomorphic encryption (FHE) is a cryptographic technique allowing a user to run arbitrary computations over encrypted data. This is particularly useful for computing statistical analytics over sensitive data. In this work, we introduce a Julia module, Fhe.jl, which supports running Julia functions over an FHE-encrypted data set. We do so by using symbolic execution to convert a Julia function into its circuit representation, which we then evaluate over the encrypted data. In this talk, we will discuss the progress we have made so far, some of the challenges we have run into, and how we hope to work with the Julia community to continue our efforts.

About José Manuel Calderón Trilla

José Manuel Calderón Trilla is a Research Scientist at Galois, Inc. working on Compilers, Static Analysis, and Formal Methods. He received his Ph.D. from the University of York in the UK for his work on Implicit Parallelism in lazy functional languages.

Julia for Infrastructure: Experiences in Developing a Distributed Storage Service

Ajay Mendez, Founder, Kinant.com

Julia is a language designed for numerical computing, and it does that job pretty well. However, the emphasis on numerical computing and data science tends to overshadow the language’s other use cases. In this talk we share our experiences using Julia to build a distributed data fabric using commodity hardware. A data fabric is a distributed storage system that abstracts away the physical infrastructure and makes data available to applications using well known protocols such as NFS or S3. Our talk focuses on specific examples of how we use Julia to implement a data fabric. We will discuss some of the shortcomings and how we circumvented them. Finally, we close with a cost-benefit analysis of developing in Julia and how it can be a critical advantage in bringing products to market.

About Ajay Mendez

Ajay works on systems and infrastructure software for fun and profit. He has dabbled in operating systems, memory allocators, file systems and distributed systems. He founded Kinant.com in 2017 to simplify the deployment and usage of storage infrastructure.


Julia: The Type of Language for Mathematical Programming

Madeleine Udell, Cornell University

Julia was designed to be the right language for programming mathematics. In this talk, I’ll argue that its sophisticated type system allows mathematicians to program in the same way they write mathematics. This simplicity has two consequences. First, it has made Julia an attractive ecosystem in which to write mathematical packages: Julia is now the language with the most comprehensive, robust, and user-friendly ecosystem of packages for mathematical programming (or optimization, in modern lingo). Second, it has made Julia the right language in which to express many mathematical problems. The lightweight type system makes it easy to write code that is clearer than pseudocode. This talk will present three case studies in optimization. We hope the audience will leave the talk with a new appreciation of Julia’s type system, as well as a new toolkit of packages to use for data fitting and optimization.

  1. Convex is a widely used library for convex optimization in Julia. In that package, the type system is used to create and recursively analyze the abstract syntax tree representing an optimization problem. Notions such as the sign of a real number, or the convexity or concavity of a function, are represented as types; and the convexity of an expression can be analyzed using a simple recursion over the tree of types.
  2. LowRankModels is a statistical package for imputing missing entries in large, heterogeneous tabular data sets. LowRankModels uses type information about a DataFrame to automatically select the appropriate optimization problem to solve in order to find the best completion for the data table. These optimization problems are parametrized by a set of loss functions and regularizers. Using the type system, we are able to write algorithms that work seamlessly for any loss function or regularizer a user may dream up.
  3. Sketched approximations are a class of fast algorithms for producing a low rank approximation to a matrix - like an eigenvalue decomposition, but faster. We’ll show how to use parametric types to write all the special cases of the algorithm without introducing redundant code. Notably, these parametric types make it easier to understand the flow of the algorithm, and have essentially no analogue in “pseudocode” notation. Together with Julia’s simple mathematical syntax and support for Unicode (e.g., Greek) letters, we’ll see that the Julia code functions not only as an implementation of the method, but as a better version of pseudocode.
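
To illustrate the first case study, here is a deliberately simplified sketch (these type and function names are hypothetical, not Convex.jl's actual internals) of how curvature can be encoded as types and combined by multiple dispatch:

```julia
# Hypothetical sketch: represent curvature ("vexity") as types and encode
# the disciplined-convex-programming combination rules via dispatch.
abstract type Vexity end
struct AffineVexity  <: Vexity end
struct ConvexVexity  <: Vexity end
struct ConcaveVexity <: Vexity end

# Addition rules: affine is neutral; like curvatures are preserved.
add_vexity(::AffineVexity, ::AffineVexity)   = AffineVexity()
add_vexity(::AffineVexity, v::Vexity)        = v
add_vexity(v::Vexity, ::AffineVexity)        = v
add_vexity(::ConvexVexity, ::ConvexVexity)   = ConvexVexity()
add_vexity(::ConcaveVexity, ::ConcaveVexity) = ConcaveVexity()

# Negation flips curvature.
neg_vexity(::AffineVexity)  = AffineVexity()
neg_vexity(::ConvexVexity)  = ConcaveVexity()
neg_vexity(::ConcaveVexity) = ConvexVexity()
```

Analyzing an expression tree then reduces to a recursion that combines the vexities of subexpressions with rules like these, flagging any combination that has no method (e.g. convex + concave) as non-DCP.
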

About Madeleine Udell

Madeleine Udell is Assistant Professor of Operations Research and Information Engineering and Richard and Sybil Smith Sesquicentennial Fellow at Cornell University. She studies optimization and machine learning for large scale data analysis and control, with applications in marketing, demographic modeling, medical informatics, and engineering system design. Her recent work on generalized low rank models (GLRMs) extends principal components analysis (PCA) to embed tabular data sets with heterogeneous (numerical, Boolean, categorical, and ordinal) types into a low dimensional space, providing a coherent framework for compressing, denoising, and imputing missing entries. She has developed a number of open source libraries for modeling and solving optimization problems, including Convex.jl, one of the top ten tools in the new Julia language for technical computing, and is a member of the JuliaOpt organization, which curates high quality optimization software. Madeleine completed her Ph.D. at Stanford University in Computational & Mathematical Engineering in 2015 under the supervision of Stephen Boyd, and a one year postdoctoral fellowship at Caltech in the Center for the Mathematics of Information hosted by Professor Joel Tropp. At Stanford, she was awarded a NSF Graduate Fellowship, a Gabilan Graduate Fellowship, and a Gerald J. Lieberman Fellowship, and was selected as the doctoral student member of Stanford’s School of Engineering Future Committee to develop a road-map for the future of engineering at Stanford over the next 10–20 years. She received a B.S. degree in Mathematics and Physics, summa cum laude, with honors in mathematics and in physics, from Yale University.

Knet.jl: Beginning Deep Learning with 100 Lines of Julia

Deniz Yuret, Koç University, Istanbul

Knet (pronounced “kay-net”) is the Koç University deep learning framework implemented in Julia by Deniz Yuret and collaborators. Knet uses dynamic computational graphs generated at runtime for automatic differentiation of (almost) any Julia code. This allows machine learning models to be implemented by describing only the forward calculation (i.e., the computation from parameters and data to loss) using the full power and expressivity of Julia. The implementation can use helper functions, loops, conditionals, recursion, closures, tuples and dictionaries, array indexing, concatenation and other high-level language features, some of which are often missing in the restricted modeling languages of static computational graph systems like Theano, Torch, Caffe and TensorFlow. GPU operation is supported by simply using the KnetArray type instead of the regular Array type for parameters and data. High performance is achieved using custom memory management and efficient GPU kernels.
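
For a flavor of this style, here is a minimal sketch along the lines of the linear-regression example in Knet's documentation (the synthetic data and training hyperparameters are illustrative choices, not from the talk):

```julia
using Knet  # assumes Knet.jl is installed; `grad` comes from AutoGrad via Knet

# Describe only the forward calculation and the loss; Knet differentiates it.
predict(w, x) = w[1] * x .+ w[2]
loss(w, x, y) = sum(abs2, predict(w, x) .- y) / length(y)

lossgrad = grad(loss)   # returns a function computing the gradient w.r.t. w

x = randn(1, 100)
y = 3 .* x .+ 1                   # synthetic data: slope 3, intercept 1
w = Any[0.1 * randn(1, 1), 0.0]   # parameters: weight matrix and bias
for epoch in 1:200
    dw = lossgrad(w, x, y)        # dw mirrors the structure of w
    for i in eachindex(w)
        w[i] -= 0.1 * dw[i]       # plain gradient-descent step
    end
end
```

Swapping `Array` for `KnetArray` in the parameter and data definitions would move the same computation to the GPU.
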

About Deniz Yuret

Deniz Yuret received his BS, MS, and Ph.D. at MIT working at the AI Lab on machine learning and natural language processing during 1988-1999. He co-founded Inquira, Inc., a startup commercializing question answering technology which was later acquired by Oracle. He is currently an associate professor of Computer Engineering at Koç University, Istanbul and founder of its Artificial Intelligence Laboratory. In his spare time he develops Knet.jl, a Julia deep learning framework that uses dynamic computational graphs generated at runtime for automatic differentiation of (almost) any Julia code.


LightGraphs: Our Network, Our Story

James Fairbanks & Seth Bromberger, Georgia Tech Research Institute & Lawrence Livermore National Laboratory

Our talk discusses the origin and development of LightGraphs, its current features, and future directions. We introduce the package’s major design choices in historical context, as a compromise among the three core LightGraphs goals of simplicity, performance, and flexibility. We highlight several areas where specific features of Julia have led to flexible and efficient implementations of graph algorithms, focusing on centrality measures, graph traversals, and spectral graph algorithms as examples where Julia’s performance and design decisions have allowed LightGraphs to provide best-in-class implementations. We also discuss integration with other organizations – JuliaOpt for matching and flow problems, and the Julia data visualization ecosystem – and highlight LightGraphs’ potential to provide leadership on performant graph visualization. Finally, we speculate on the influence of Julia’s focus on elegant parallel processing on future development of the package.
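
As a small taste of the package (a minimal sketch; the function names follow the LightGraphs API of the time and may differ in later versions):

```julia
using LightGraphs  # assumes the LightGraphs.jl package is installed

g = Graph(5)            # an undirected graph with 5 vertices and no edges
add_edge!(g, 1, 2)
add_edge!(g, 2, 3)
add_edge!(g, 3, 4)

# one of the centrality measures discussed in the talk
bc = betweenness_centrality(g)
```
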

About James Fairbanks & Seth Bromberger

Dr. James Fairbanks is a Research Engineer at the Georgia Tech Research Institute where he studies problems in complex networks, data analysis, and high performance computing with applications to healthcare and social phenomena. Seth Bromberger, a Research Scientist at Lawrence Livermore National Laboratory (https://people.llnl.gov/seth), is currently exploring the application of graph theory and machine learning to cybersecurity problems in critical infrastructure.


Miletus: A Financial Modelling Suite in Julia

Simon Byrne, Ranjan Anantharaman, Julia Computing, Inc.

Miletus is a financial software suite in Julia, with a financial contract specification language and extensive modelling features. In this talk, we’ll discuss the design principles involved in how to model a contract from primitive components, and how Julia’s language features lend themselves intuitively to this task. We’ll then talk about the various features of the software suite such as closed form models, binomial trees and computation of price sensitivities (aka “the Greeks”), providing several examples and code snippets, along with comparisons with other popular frameworks in this space.

About Simon Byrne, Ranjan Anantharaman

Dr Simon Byrne is a quantitative software developer at Julia Computing, where he implements cutting edge numerical routines for statistical and financial models. Simon has a Ph.D. in statistics from the University of Cambridge, and has extensive experience in computational statistics and machine learning in both academia and industry. He has been contributing to the Julia project since 2012. Ranjan Anantharaman is a data scientist at Julia Computing where he works on numerical software in a variety of domains. His interests include scientific computing and machine learning. He has been contributing to the Julia project and ecosystem since 2015.

Mixed-Mode Automatic Differentiation in Julia

Jarrett Revels, MIT

Julia’s unique execution model, metaprogramming facilities, and type system make it an ideal candidate language for native automatic differentiation (AD). In this talk, we’ll discuss a variety of Julia-specific tricks employed by ForwardDiff and ReverseDiff to differentiate user-provided Julia functions. Topics covered include the implementation of a native Julia execution tracer via operator overloading, functor-based directives for specialized instruction taping, SIMD vectorization and instruction elision for inlined dual number operations, and vectorized differentiation of linear algebraic expressions. I’ll close the talk with a glimpse into the future of AD in Julia and JuMP, highlighting the effect new features may have on other downstream projects like Celeste, Optim and RigidBodyDynamics.

About Jarrett Revels

I like to make Julia code differentiate itself.


Modern Machine Learning in Julia with TensorFlow.jl

Jonathan Malmaud, MIT

By many measures, TensorFlow has grown over the last year to become the most popular library for training machine-learning models. TensorFlow.jl provides Julia with a simple yet feature-rich interface to TensorFlow that takes advantage of Julia’s multiple dispatch, just-in-time compilation, and metaprogramming capabilities to provide functionality exceeding TensorFlow’s own native Python API. This talk will demonstrate TensorFlow.jl by guiding listeners through training a realistic image-captioning model, showing how to 1) construct the model with native Julia control flow and indexing, 2) visualize the model structure and parameters in a web browser during training, and 3) seamlessly save and share the trained model with Python. No prior experience with TensorFlow is assumed.

About Jonathan Malmaud

Ph.D. candidate at MIT studying artificial intelligence


Modia: A Domain Specific Extension of Julia for Modeling and Simulation

Hilding Elmqvist, Mogram AB, Lund, Sweden

Modia is a Julia package to model and simulate physical systems (electrical, mechanical, thermodynamic, etc.) described by differential and algebraic equations. A user defines a model on a high level with model components (such as a mechanical body, an electrical resistance, or a pipe) that are physically connected together. A model component is constructed from “expression = expression” equations. The defined model is symbolically processed, JIT compiled, and simulated with the Sundials IDA solver and the KLU sparse matrix package. With this approach it is possible and convenient to build models with hundreds of thousands of equations describing the dynamics of a car, an airplane, a power plant, etc., and simulate them. The authors used previous experience from the design of the modeling language Modelica (www.Modelica.org) to develop Modia. In the presentation it is shown how a user can build models and simulate physical systems, including mechanical systems and electrical circuits. Furthermore, the design of Modia is sketched: the Modia language is a domain-specific extension of Julia using macros. With graph-theoretical algorithms, some of them recently developed by the authors, equations are pre-processed (including analytic differentiation if necessary) and transformed into a special form that can be simulated by IDA. This keeps intact both the sparsity structure of the original (Modia) equations and the nature of array equations.

About Hilding Elmqvist

Hilding Elmqvist attained his Ph.D. at the Department of Automatic Control, Lund Institute of Technology in 1978. His Ph.D. thesis contains the design of a novel object-oriented modeling language called Dymola and algorithms for symbolic model manipulation. It introduced a new modeling methodology based on connecting submodels according to the corresponding physical connections instead of signal flows. Submodels were described declaratively by equations instead of assignment statements. Elmqvist spent the year 1978-1979 at the Computer Science Department at Stanford University, California. In 1992, Elmqvist founded Dynasim AB in Lund, Sweden. The primary product is Dymola for object-oriented modeling, allowing graphical composition of models and 3D visualization of model dynamics. In 1996, Elmqvist took the initiative to organize an international effort to design the next-generation object-oriented language for physical modeling: Modelica. In April 2006, Dynasim AB was acquired by Dassault Systemes. In January 2016, Elmqvist founded Mogram AB. Current activities include designing and implementing an experimental modeling language called Modia.

OhMyREPL.jl: This Is My REPL; There Are Many Like It, But This One Is Mine

Kristoffer Carlsson, Chalmers University of Technology

By default, Julia comes with a powerful REPL that is itself written entirely in Julia. It has, among other things, tab completion, customizable keybindings and different prompt modes to use the shell or access the help system. However, with regard to visual customization there are not that many options for a user to tweak. To that end, I created the package OhMyREPL.jl. Upon loading, it hooks into the REPL and adds features such as syntax highlighting, matching-bracket highlighting, functionality to modify input and output prompts, and a new way of printing stacktraces and error messages. It also contains some non-visual features, like allowing text that has been copied from a REPL session to be pasted directly back into a REPL, and quickly opening the location of stack frames from a stacktrace in an editor. The talk will give an overview of the different features, discuss which features managed to get upstreamed to Julia v0.6 and, if time allows, outline the internals of the package.

About Kristoffer Carlsson

Ph.D. student in computational mechanics at Chalmers University of Technology. Using Julia both for studies and as a hobby.


Pkg3: Julia's New Package Manager

Stefan Karpinski, Julia Computing, Inc. / NYU

This talk covers the design and implementation of Pkg3, the third (and hopefully final!) major iteration of Julia’s built-in package manager. We’ll begin with some history: what worked and didn’t work in the two previous iterations of the package manager. Pkg3 tries to marry the better parts of systems like Python’s virtualenv and Rust’s cargo, while supporting federated and layered package registries, and supporting interactive usage as well as reproducible environments and reliable deployment of code in production. We’ll nerd out a bit with some graph theory and how difficult it is to select compatible sets of package versions, and how much harder still it is to make version resolution understandable and predictable. But it won’t be all theory – we’ll also cover eminently practical subjects like “how do I install packages?”

About Stefan Karpinski

co-creator of Julia, co-founder of Julia Computing

Programming NVIDIA GPUs in Julia with CUDAnative.jl

Tim Besard, Ghent University

GPUs have typically been programmed using low-level languages like CUDA and OpenCL, providing full control over the hardware at the expense of developer efficiency. CUDAnative.jl makes it possible to program GPUs directly from Julia when you need the flexibility to write your own kernel functions, without having to fall back to CUDA C or binary libraries. In this talk, I will give an overview of CUDAnative.jl with its features and restrictions, explain the technology behind it, and sketch our future plans.

About Tim Besard

Ph.D. student at Ghent University


QML.jl: Cross-platform GUIs for Julia

Bart Janssens, Royal Military Academy

The QML.jl (https://github.com/barche/QML.jl) package enables using the QML markup language from the Qt library to build graphical user interfaces for Julia programs. The package follows the recommended Qt practices and promotes separation between the GUI code and application logic. After a short introduction of these principles, the first topic of this talk will be the basic communication between QML and Julia, which happens through Julia functions and data (including composite types) stored in context properties. Using just a few basic building blocks, this makes all of the QML widgets available for interaction with Julia. The next part of the talk deals with Julia-specific extensions, such as the Julia ListModel, interfacing with the display system, and GLVisualize and GR.jl support. These features will be illustrated using live demos based on the examples in the QML.jl repository. Finally, some ideas for extending and improving the package will be listed, hopefully soliciting many contributions. The target audience for this talk is anyone interested in developing GUIs for their Julia application with a consistent look on OS X, Linux and Windows. All user-facing code is pure Julia and QML; no C++ knowledge is required to use the package.

About Bart Janssens

I am an associate professor at the mechanics department of the Royal Military Academy. For my Ph.D., I worked on Coolfluid, a C++ framework for computational fluid dynamics with a domain specific language. My interest in Julia is sparked by its powerful metaprogramming functionality coupled with C++-like performance, together with much better accessibility for students. To ease the transition to Julia, we are working on making some C++ libraries available in Julia. The QML.jl package is part of this effort. We also use Julia in our daily teaching activities, to provide students with interactive solutions to exercises.


Query.jl: Query Almost Anything in Julia

David Anthoff, UC Berkeley

Query is a package for querying Julia data sources. Its role is similar to LINQ in C# and dplyr in R. It can filter, project, join and group data from any iterable data source. It has enhanced support for querying arrays, DataFrames, DataTables, TypedTables, IndexedTables and any DataStream source (e.g. CSV, Feather, SQLite etc.). The package also defines an interface for tabular data that allows a) dispatch on any tabular data source and b) simple conversions of tabular data representations. The talk will first introduce Query from a user perspective and highlight different examples of queries that the package makes feasible. The second half of the talk will dive deep into the internals of the package and explain the various extension points that the package provides.
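
For a taste of the query syntax, here is a minimal sketch adapted from the package's documentation (assumes Query.jl and DataFrames.jl are installed; the example data is made up):

```julia
using Query, DataFrames

df = DataFrame(name = ["John", "Sally", "Kirk"], age = [23, 42, 59])

# filter and project the rows, then collect the result into a new DataFrame
result = @from i in df begin
    @where i.age > 30
    @select {i.name, i.age}
    @collect DataFrame
end
```

The same `@from` block works unchanged over arrays, DataTables, or any other iterable source, which is the point of the common tabular interface described above.
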

About David Anthoff

David Anthoff is an environmental economist who studies climate change and environmental policy. He co-develops the integrated assessment model FUND that is used widely in academic research and in policy analysis. His research has appeared in Science, the Journal of Environmental Economics and Management, Environmental and Resource Economics, the Oxford Review of Economic Policy and other academic journals. He contributed a background research paper to the Stern Review and has advised numerous organizations (including US EPA and the Canadian National Round Table on the Environment and the Economy) on the economics of climate change. He is an assistant professor in the Energy and Resources Group at the University of California, Berkeley. Previously he was an assistant professor in the School of Natural Resources and the Environment of the University of Michigan, a postdoc at the University of California, Berkeley and a postdoc at the Economic and Social Research Institute in Ireland. He also was a visiting research fellow at the Smith School of Enterprise and the Environment, University of Oxford. He holds a Ph.D. (Dr. rer. pol.) in economics from the University of Hamburg (Germany) and the International Max Planck Research School on Earth System Modelling, an MSc in Environmental Change and Management from the University of Oxford (UK) and an M.Phil. in philosophy, logic and philosophy of science from Ludwig-Maximilians-Universität München (Munich, Germany).


Stochastic Optimization Models on Power Systems

Camila Metello & Joaquim Garcia, PSR Inc.

We will present three tools for decision making under uncertainty in the power systems area: SDDP, a tool for optimal hourly operation of complex power systems; OptGen, a computational tool for determining the least-cost expansion of a multi-regional hydrothermal system; and OptFlow, a mathematical model to optimize the operation of a generation/transmission system with AC electrical network constraints. These models have been used by system operators, regulators and investors in more than seventy countries in the Americas, Asia-Pacific, Europe and Africa, including some of the largest hydro-based systems in the world, such as the Nordic pool, Canada, the US Pacific Northwest and Brazil. SDDP is also the model used by the World Bank staff in their planning studies of countries in Asia, Africa and Latin America. OptGen has had some interesting applications in regional studies, such as the interconnection of Central America, the Balkan region, the interconnection of nine South American countries, Africa (Egypt-Sudan-Ethiopia and Morocco-Spain) and Central Asia. The original version of all three models was written in FORTRAN with the aid of some modelling tool or higher-level API: AMPL for OptFlow, Mosel for OptGen and the COIN-OR API for SDDP. As with any software, maintaining the code and adding new features became increasingly complex because they had to be built upon older data structures and program architectures. These concerns motivated PSR to develop an updated version of these programs written entirely in Julia (with JuMP and MathProgBase) for three basic reasons: (i) the code is concise and very readable; (ii) the availability of an advanced optimization “ecosystem”; and (iii) excellent resources for distributed processing (CPUs and GPUs). We retained the use of Xpress by developing the Xpress.jl library. We also use MPI.jl for distributed processing (including multiple servers in AWS).
The computational performance of the new code matches that of the current code, which is very encouraging given that the current FORTRAN code has been optimized over several years on the basis of thousands of studies. Also, the Julia code incorporates several new modeling features that were easy to implement in all three models, including SDP and SOCP relaxations for OPF and the SDDiP method for stochastic integer optimization, confirming our expectation of faster model development. The new models were incorporated into an integrated planning system for Peru being developed by PSR, which will be delivered in August 2017. They are also being internally tested as a “shadow” of the current versions for studies in several countries and have been delivered for beta testing to some PSR clients. The official release is scheduled for the end of 2017.

About Camila Metello & Joaquim Garcia

Camila graduated as an industrial engineer and holds an MSc in Decision Analysis from PUC-Rio. She attended UC Berkeley for a semester during her undergraduate studies. She joined PSR in 2013, where she currently works on the development of models for the optimization of hydrothermal dispatch under uncertainty with network constraints (SDDP model) and electric system expansion planning (OPTGEN model).

Joaquim has a BSc degree in electrical engineering and a BSc degree in mathematics, both from PUC-Rio, and is currently working towards a PhD in electrical engineering with emphasis on decision support, also at PUC-Rio. During his undergraduate studies, he spent a year at UC Santa Barbara. He joined PSR in 2015 and has been working on the development of optimization models for hydrothermal dispatch under uncertainty with transmission constraints, reliability analysis, electrical system expansion planning and nonlinear optimal power flow. Before joining PSR, Joaquim worked on decision support at LAMPS (Laboratory of Applied Mathematical Programming and Statistics, PUC-Rio) and on OTDR and signal processing at LabOpt (Optoelectronics Laboratory, PUC-Rio).

Taking Vector Transposes Seriously

Jiahao Chen, Capital One

from @jiahao: We have thought really carefully about what the transpose of a vector should mean in a programming language. The pre-0.6 behaviors, in which vector’vector yields a vector, vector’ yields a matrix, and vector’’ yields a matrix, are all bad mathematics and produced no shortage of confusion among end users. I present a summary of our research at the MIT Julia Labs into issue #4774, as a language design question that is informed by a comprehensive understanding of user expectations. Our main result is a short proof that it is impossible to simultaneously avoid new types, “ugly mathematics” (violation of Householder notation), and type instability. A single Array type is incompatible with Householder notation that produces the expected types from typical linear algebraic expressions. Furthermore, Householder notation intrinsically requires a conflation of 1x1 matrices and true scalars. I also provide historical evidence that the notion of “ugly mathematics” is neither static nor objective. In reality, linear algebra has changed greatly over the past centuries, demonstrating the impermanence of even elementary concepts of what matrices and vectors are and how they have been influenced by notation - a discussion forced into consciousness through the lens of programming language design, types, and formal program semantics. I review the resolution of #19670 in the context of designs in other programming languages, showing that all these designs turn out to be locally optimal in conflating as much of Householder notation and array semantics as possible. This is joint work with Alan Edelman, Andy Ferris, and a few other people.
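
For reference, the behavior adopted with #19670 (as found in Julia 0.6 and later) can be seen in a few lines; `v'` becomes a lazy row-vector wrapper rather than a `Matrix`, so the Householder identities come out right:

```julia
v = [1.0, 2.0, 3.0]

rv = v'       # a row-vector wrapper around v, not a 3x1 or 1x3 Matrix
s  = rv * v   # v'v is a true scalar, as Householder notation demands
w  = (v')'    # transposing twice recovers the original vector
```
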

About Jiahao Chen

Data Scientist at Capital One, formerly Research Scientist at MIT


TaylorIntegration.jl: Taylor's Integration Method in Julia

Jorge Perez and Luis Benet, UNAM (Mexico)

In this talk we shall present TaylorIntegration.jl, an ODE integration package using Taylor’s method in Julia. The main idea of Taylor’s method is to approximate the solution locally by means of a high-order Taylor expansion, whose coefficients are computed recursively using automatic differentiation techniques. One of the principal advantages of Taylor’s method is that, whenever high accuracy is required, the order of the method can be increased, which is computationally more efficient than taking smaller time steps. The accuracy of Taylor’s method permits errors per integration step comparable to round-off. Traditionally, it has been difficult to write a generic Taylor integration package, but Julia permits this beautifully. We shall present some examples of the application of this method to ODE integration, including the full computation of the Lyapunov spectrum, the use of jet transport techniques, and parameter sensitivity. Open issues related to improving performance will be described.
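
Concretely, for an ODE ẋ = f(x), the solution over one step is represented by its Taylor polynomial, whose coefficients follow a simple recursion:

```latex
x(t) = \sum_{k=0}^{K} x_{[k]}\,(t - t_0)^k, \qquad
x_{[k+1]} = \frac{f(x)_{[k]}}{k+1}, \quad k = 0, 1, \ldots, K-1,
```

where f(x)_{[k]} denotes the k-th Taylor coefficient of f evaluated on the expansion of x, computed via automatic differentiation; increasing the order K is what makes higher accuracy cheaper than shrinking the time step.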

About Jorge Perez and Luis Benet

Jorge Perez is a Physics Ph.D. student at UNAM, Mexico, under the supervision of Luis Benet and David P. Sanders, authors of TaylorSeries.jl and ValidatedNumerics.jl. His Ph.D. research project is related to understanding the dynamics of minor Solar System objects: comets, asteroids, etc. He is a coauthor of TaylorIntegration.jl and a contributor to TaylorSeries.jl. Luis Benet is Associate Professor at the Instituto de Ciencias Físicas of the National University of Mexico (UNAM). He is mainly interested in classical and quantum chaos, including the dynamics of Solar System objects. He is a coauthor of ValidatedNumerics.jl, TaylorSeries.jl and TaylorIntegration.jl, and has contributed to other Julia packages.


The Dolo Modeling Framework

Spencer Lyon, NYU Stern

We present a family of three Julia packages that together constitute a complete framework for describing and solving rational expectations models in economics. Dolang.jl is an equation parser and compiler that understands how to compile LaTeX-like strings describing systems of equations into efficient Julia functions for evaluating the levels or derivatives of the equations. Dolo.jl leverages Dolang and implements a variety of frontier algorithms for solving a wide class of discrete-time, continuous-control rational expectations models. Finally, Dyno.jl builds upon Dolang to implement a Julia prototype of the MATLAB-based Dynare software library, used extensively throughout academia and the public sector to approximate the solution to, and estimate, rational expectations models.

About Spencer Lyon

Economics Ph.D. student at NYU Stern and an active member of the Julia community since version 0.2.

The Present and Future of Robotics in Julia

Robin Deits and Twan Koolen, MIT CSAIL

We (Twan and Robin) are graduate students in the Robot Locomotion Group at MIT. Our research focuses on modeling and optimization for the simulation and control of walking (and sometimes flying) robots. We’ve been using Julia in our research over the past year, and we’re excited to share what we’ve learned, what we’ve built, and what we’re hoping to see in the future of Julia. Specifically, we’d like to share some of our work on:

  • Robot dynamics and simulation in Julia: https://github.com/tkoolen/RigidBodyDynamics.jl
  • 3D visualization and manipulation of robot models from Julia: https://github.com/rdeits/RigidBodyTreeInspector.jl https://github.com/rdeits/DrakeVisualizer.jl
  • Optimization in Julia: https://github.com/rdeits/NNLS.jl
  • Collision algorithms in Julia: https://github.com/rdeits/EnhancedGJK.jl https://github.com/rdeits/AdaptiveDistanceFields.jl

We would also like to talk about how some of the best parts of the Julia ecosystem have made our work possible, like JuMP.jl, ForwardDiff.jl, and StaticArrays.jl. And, finally, we plan to discuss what we hope to see in Julia’s future, including what the role of Julia can be inside a real-time robot controller.

About Robin Deits and Twan Koolen

We’re graduate students in the Robot Locomotion Group at MIT, where we work on simulation, planning, and control of walking and flying robots.


The State of the Type System

Jeff Bezanson, Julia Computing, Inc.

Julia 0.6 includes a long-needed overhaul of the type system. While the effects of this change are not always visible, the new system eliminates classes of bugs and increases the expressiveness of types and method signatures. I plan to briefly explain how the new system works and what you can do with it. But more importantly, I want to ask: where do we go from here? Will we ever need another overhaul? I’ll present some possible future features and other related speculations. Topics may include record types, more powerful tuple types, protocols, ugly corner cases, and method specificity and ambiguity.
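As a taste of what the 0.6 overhaul makes expressible, UnionAll types are now first-class, so "vector whose element type is some subtype of Real" can appear directly in signatures (a small illustration, not from the talk itself):

```julia
# Julia 0.6's type-system overhaul made UnionAll types first-class:
const RealVec = Vector{<:Real}      # sugar for Vector{T} where T<:Real

sumabs(v::RealVec) = sum(abs, v)    # one method covers Vector{Int}, Vector{Float64}, ...

ok1 = Vector{Int} <: RealVec        # true: covariance where you opt into it
ok2 = Vector{Int} <: Vector{Real}   # false: type parameters stay invariant
```

The distinction between `Vector{<:Real}` and `Vector{Real}` is precisely the kind of method-signature expressiveness the new system cleans up.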

About Jeff Bezanson

Jeff is one of the creators of Julia, co-founding the project at MIT in 2009 and eventually receiving a Ph.D. related to the language in 2015. He continues to work on the compiler and system internals, while also working to expand Julia’s commercial reach as a co-founder of Julia Computing, Inc.


Turing: a Fresh Approach to Probabilistic Programming

Hong Ge, Zoubin Ghahramani, Kai Xu, University of Cambridge

Turing is a new probabilistic programming language (PPL) based on Julia: a framework that allows users to define probabilistic models and perform inference automatically. Thanks to Julia’s meta-programming support, Turing has a very friendly front-end modelling interface. Meanwhile, coroutines are used in Turing’s inference engine to achieve state-of-the-art sampling performance. We have also recently introduced a new Gibbs interface, which allows users to compose different samplers and run them at the same time. In this talk, we will discuss our motivation for developing Turing in Julia, introduce the design and architecture of Turing, and present some practical examples of how probabilistic modelling is performed in Turing.
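The coroutine mechanism Turing builds its samplers on is part of base Julia. A minimal, self-contained illustration of the mechanism itself (a Task feeding a Channel, suspending between values) — this is not Turing's actual internals:

```julia
# A producer task that suspends after each put! until a consumer takes the
# value -- the suspend/resume pattern a coroutine-based sampler relies on.
draws = Channel() do ch
    for _ in 1:3
        put!(ch, randn())   # "emit a sample", then suspend until consumed
    end
end

samples = collect(draws)    # resumes the task until the channel closes
```

In a PPL, each suspension point corresponds to a random choice in the model execution, which the inference engine can intercept and replay.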

About Hong Ge, Zoubin Ghahramani, Kai Xu

Developers of the Turing project from the Cambridge Machine Learning Group


Using Parallel Computing for Macroeconomic Forecasting at the Federal Reserve Bank of New York

Pearl Li, Federal Reserve Bank of New York

This talk will give an overview of how researchers at the Federal Reserve Bank of New York have implemented economic forecasting and other post-estimation analyses of dynamic stochastic general equilibrium (DSGE) models using Julia’s parallel computing framework. This is part of the most recent release of our DSGE.jl package, following our ports of the DSGE model solution and estimation steps from MATLAB that were presented at JuliaCon in 2016. I will discuss the technical challenges and constraints we faced in our production environment and how we used Julia’s parallel computing tools to substantially reduce both the time and memory usage required to forecast our models. I will present our experiences with the different means of parallel computing offered in Julia - including an extended attempt at using DistributedArrays.jl - and discuss what we have learned about parallelization, both in Julia and in general. In addition, I will provide some of our new perspectives on using Julia in a production setting at an academic and policy institution. DSGE models are sometimes called the workhorses of modern macroeconomics, applying insights from microeconomics to inform our understanding of the economy as a whole. They are used to forecast economic variables, investigate counterfactual scenarios, and understand the impact of monetary policy. The New York Fed’s DSGE model is a large-scale model of the U.S. economy, which incorporates the zero lower bound, price/wage stickiness, financial frictions, and other realistic features of the economy. Solving, estimating, and forecasting it presents a series of high-dimensional problems which are well suited for implementation in Julia.
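Post-estimation forecasting is embarrassingly parallel across posterior draws: each parameter draw is propagated through the model independently. A toy sketch of the parallel-map pattern (the real DSGE.jl pipeline is far more involved; `forecast_one` here is a hypothetical placeholder):

```julia
using Distributed

# Stand-in for "solve the model at this draw and simulate forward";
# the function name and body are illustrative only.
forecast_one(draw) = sum(draw) / length(draw)

draws = [rand(3) for _ in 1:100]       # e.g. 100 posterior parameter draws
forecasts = pmap(forecast_one, draws)  # spreads work over available workers
```

With `addprocs(n)` beforehand, `pmap` distributes the draws across worker processes; with no workers it degrades gracefully to a serial map.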

Disclaimer: This talk reflects the experience of the author and does not represent an endorsement by the Federal Reserve Bank of New York or the Federal Reserve System of any particular product or service. The views expressed in this talk are those of the authors and do not necessarily reflect the position of the Federal Reserve Bank of New York or the Federal Reserve System. Any errors or omissions are the responsibility of the authors.

About Pearl Li

I’m a Research Analyst at the New York Fed using Julia to estimate and forecast macroeconomic models. I’m interested in applying the frontier of scientific computing to economic research, so that we can solve more realistic and complex models.


Lightning Talks

Applications of Convex.jl in Optimization Involving Complex Numbers

Ayush Pandey, Indian Institute of Technology Kharagpur

Convex optimization problems require rigorous mathematical understanding to solve. Convex.jl allows users to solve such problems easily by providing a simple, intuitive interface for expressing the objective function and constraints. As the package became popular, we saw increased demand to support optimization over complex numbers from users working in diverse scientific fields including power grid optimization, quantum information theory, wireless communication, and signal processing. Previously, these users relied on various tools such as MATLAB’s CVX and the open-source Python package PICOS to tackle different problems depending upon their domain of work. Convex.jl’s new support for complex numbers allows users to approach each of these problems in Julia. In this talk, I will show how the new functionality in Convex.jl provides a single integrated solution for many types of Disciplined Convex Programming problems, and how to solve complex-number problems using Convex.jl in very few lines of code, taking examples from the scientific domains mentioned above. I will also present benchmarks comparing Convex.jl with competing open-source tools.

About Ayush Pandey

Ayush Pandey is a final-year graduate student at IIT Kharagpur studying Mathematics & Computing Sciences with a micro-specialization in Optimization Theory and Applications. He is also a Google Summer of Code 2016 fellow under the Julia Language.

Automatically Deriving Test Data for Julia Functions

Simon Poulding, Blekinge Institute of Technology, Sweden

The use of multiple dispatch in Julia’s standard library and user-written functions presents a challenge for automated techniques of generating test data. In order to exercise all the methods that implement a function, the generation technique must generate test data with diverse data types, but traditional techniques typically focus solely on diverse data values and implicitly assume a constant data type. In this talk, I will demonstrate our solution to this challenge, which automatically learns an effective probability distribution over types and over methods that create instances of those types. I will explain how we used this approach to fuzz-test some common arithmetic and string functions in the Julia standard library, in the process identifying three faults.
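A naive sketch of the underlying idea — drawing argument *types* as well as values so that different methods of a generic function get exercised. The talk's technique learns the distribution over types; here it is uniform and hand-rolled, and every name below is illustrative:

```julia
# One value generator per candidate type (a fixed, uniform stand-in for
# the learned distribution described in the talk).
const GENERATORS = Function[
    () -> rand(-10:10),             # Int
    () -> 10 * randn(),             # Float64
    () -> String(rand('a':'z', 5)), # String
]

function fuzz(f, nargs, trials)
    failures = 0
    for _ in 1:trials
        args = Any[rand(GENERATORS)() for _ in 1:nargs]
        try
            f(args...)
        catch
            failures += 1   # MethodError, DomainError, or a genuine fault
        end
    end
    return failures
end

nfail = fuzz(*, 2, 200)   # `*` multiplies numbers and concatenates strings
```

Mixed-type argument lists like `(Int, String)` trigger `MethodError`s here, which is exactly why a useful generator must learn which type combinations are worth sampling.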

About Simon Poulding

I am an assistant professor in software engineering. The primary objective of my research is to improve the cost-effectiveness of software testing through the application of machine learning and statistical methods. I have been a user of Julia for three years, and am co-developer of the DataGenerators package which facilitates the generation of complex test data.


BioSimulator.jl: Stochastic Simulation in Julia

Alfonso Landeros, University of California, Los Angeles

Complex systems in biology are often difficult to treat analytically using mathematics and expensive to investigate with empirical methods. Moreover, deterministic approaches are misleading in systems that exhibit noise (e.g. rare events akin to mutation and extinction). Stochastic simulation provides investigators with the ability to simulate complex systems by integrating mathematical rigor and biological insight. However, simulations are slow, computationally expensive, and difficult to implement in software. My goal in developing BioSimulator.jl is to provide investigators with a tool that enables (1) quick and intuitive model prototyping, (2) efficient simulation, (3) visualization of simulation output, and (4) implementation of new stochastic simulation algorithms. Using the Julia language allowed us to meet all four criteria with relative ease and to extend the package to parallelized simulations. My talk will describe the theory underlying BioSimulator.jl, highlight aspects of our implementation, and present a few numerical examples.
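The classical workhorse behind such tools is Gillespie's direct method. A minimal sketch for a birth–death process (this illustrates the theory only, not BioSimulator.jl's interface):

```julia
# Gillespie direct method for a birth-death process:
#   X -> X + 1 at rate b,  X -> X - 1 at rate d * X.
function gillespie(x0, b, d, tmax)
    t, x = 0.0, x0
    while t < tmax
        rates = (b, d * x)
        total = sum(rates)
        total == 0 && break                # absorbing state reached
        t += -log(rand()) / total          # exponential waiting time
        x += rand() * total < rates[1] ? 1 : -1
    end
    return x
end

x_end = gillespie(10, 1.0, 0.1, 100.0)   # fluctuates around b/d = 10
```

Each iteration draws a waiting time from the total propensity and then picks which reaction fired, which is exactly the noise-faithful behavior deterministic ODEs miss.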

About Alfonso Landeros

I am a first-year student in biomathematics. My studies are focused on stochastic processes, scientific computing, and optimization.

Circuitscape: A Tool to Measure Landscape Connectivity

Ranjan Anantharaman, Julia Computing, Inc.

Circuitscape is one of the most popular tools for measuring landscape connectivity, using concepts from electrical circuit theory. Ecologists can model landscapes as large resistance maps and then compute current maps and voltage potentials at various parts of the landscape. Computationally, this involves constructing a large graph and using a sparse solver. The tool was originally written in Python; this talk will cover porting it to Julia as well as improving the solver in the package. The talk will also cover performance comparisons between the Julia and Python versions.
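The circuit-theoretic computation reduces to solving Laplacian linear systems G·v = i: build the graph Laplacian from the conductances, inject current, ground a node, and read off voltages. A tiny self-contained sketch on a 4-node path graph with unit conductances (not Circuitscape's actual code):

```julia
using SparseArrays

n = 4
edges = [(1, 2), (2, 3), (3, 4)]       # a path graph: 1 - 2 - 3 - 4
Is, Js, Vs = Int[], Int[], Float64[]
for (a, b) in edges
    append!(Is, (a, b, a, b))
    append!(Js, (b, a, a, b))
    append!(Vs, (-1.0, -1.0, 1.0, 1.0))  # off-diagonals and degree terms
end
G = sparse(Is, Js, Vs, n, n)           # duplicate (i, j) entries are summed

# Inject 1 A at node 1 and ground node 4 (drop its row and column):
keep = 1:n-1
volts = G[keep, keep] \ [1.0, 0.0, 0.0]  # volts[1] = effective resistance 1→4
```

For three unit resistors in series the effective resistance from node 1 to node 4 is 3, and the interior voltages fall off linearly — a useful sanity check for any solver swap.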

About Ranjan Anantharaman

Ranjan Anantharaman is a data scientist at Julia Computing. His interests span applied mathematics and numerical computing, and he enjoys working with computation across a variety of fields and domains.

Continuous-Time Point-Process Factor Analysis in Julia

Gergo Bohner, Gatsby Computational Neuroscience Unit, UCL

Neurons throughout the brain, and particularly in the cerebral cortex, represent many quantities of interest using population codes. Latent variable models of neural population activity may be seen as attempting to identify the value, time-evolution and encoding of such internal variables from neural data alone. They do so by seeking a small set of underlying processes that can account for the coordinated activity of the population. We introduce a novel estimation method [1] for latent factor models for point processes that operates on continuous spike times. Our method is based on score matching for point process regressions [2], adapted to population recordings with latent processes formed by mixing basis functions. The basis functions are represented as either Fourier modes or functions living in a Reproducing Kernel Hilbert Space, parametrised using MLKernels. The method requires the kernel matrix as well as its first and second derivatives, which we can compute efficiently via the Calculus package, making use of anonymous functions. Parameter estimation is then closed-form and thus lightning fast up to normalisation; afterwards we need to estimate the total intensity in the observation period, where the approximation of the time integral relies on Cubature.jl. Due to its speed, this method enables neuroscientists to visualise latent processes in real time during experimental recordings and immediately compare them to their expectations, greatly shortening the planning-design-analysis loop.

  1. https://github.com/gbohner/PoissonProcessEstimation.jl
  2. Sahani, M; Bohner, G and Meyer A, 2016 - Score-matching estimators for continuous-time point-process regression models. MLSP2016

About Gergo Bohner

Gergo focused on math and physics in high school, but completed an engineering degree in Molecular Bionics as an undergrad in his home city, Budapest. After being the image processing guy in a cancer research lab in London as well as learning about AI in Leuven, Belgium, Gergo settled as a Ph.D. student in the Gatsby Computational Neuroscience Unit, working on developing machine learning algorithms to process and understand various types of neural data.

Cows, Lakes, and a JuMP Extension for Multi-stage Stochastic Optimization

Oscar Dowson, University of Auckland

Stochastic Dual Dynamic Programming (SDDP) is an optimization algorithm for solving large, multi-stage stochastic programming problems. It is well known in the electricity community, but has received little attention in other application areas. The algorithm is computationally demanding, as it typically involves iteratively solving hundreds of thousands of linear programs. In the past, implementations have been coded in slow but expressive mathematical optimization languages such as AMPL, or in fast but low-level languages such as C++. In this talk, we detail a JuMP extension we have developed to solve problems using SDDP. We also present benchmarks showing that our Julia implementation has run-times similar to a previous version developed in C++, while being more flexible and expressive. This speed and flexibility has allowed us to revisit assumptions made in previous work, as well as apply the SDDP algorithm to problems as diverse as agriculture, energy, and finance.

About Oscar Dowson

Oscar Dowson (@odow) is a Ph.D. candidate in Engineering Science at the University of Auckland. He works on applying stochastic optimization to the New Zealand dairy industry.

DataStreams: Roadmap for Data I/O in Julia

Jacob Quinn, Domo

The DataStreams package defines a powerful and performant interface for getting data in and out of Julia. Come learn about exciting advances in features and performance as we approach Julia 1.0.

About Jacob Quinn

Jacob holds a master’s degree in data science from Carnegie Mellon and has been an active Julia contributor for four years.


Diversity and Inclusion at JuliaCon and in the Scientific Computing Community

Erica Moszkowski, Federal Reserve Bank of New York

This talk will address efforts to promote diversity and inclusion at JuliaCon this year, with the goals of a) increasing awareness of JuliaCon’s initiatives among conference participants and the Julia community at large and b) starting a public conversation about diversity and inclusion with other open-source conferences. It will place JuliaCon’s initiatives in the context of the broader scientific computing community.

About Erica Moszkowski

Erica is a research analyst in the Macroeconomics function at the Federal Reserve Bank of New York and the Diversity Chair for JuliaCon 2017. She is a 2015 graduate of Williams College and plans to begin her Ph.D. in Economics in the fall.


Exploring Evolutionary Dynamics of Communications in Bacteria Using Julia

Yifei Wang, Georgia Institute of Technology

Many species of bacteria are able to collectively sense and respond to their environment. This form of communication, known as quorum sensing (QS), can be achieved through the production of small molecules that are able to freely diffuse across cell membranes. These molecules (autoinducers) can subsequently be detected by other individuals in the population, and once a threshold is reached, this may cause a change in gene expression that allows bacteria to coordinate activities such as biofilm formation, virulence and antibiotic resistance. Despite the widespread interest in QS, from molecular mechanisms to social evolution and pathogen control, there is still controversy over the basic evolutionary function of QS. Using Julia as the agent-based modeling platform, we have been able to investigate the rewards and risks of coordination and cooperation in QS. In this talk, I will briefly introduce the research background and share some of our results obtained from in silico evolution using Julia. This work is important as it sheds light on how simple signal-mediated behavioral rules can shape complex collective behaviors in bacteria. Julia greatly helped simplify the modeling process and speed up simulations.

About Yifei Wang

Yifei Wang is currently a postdoctoral research fellow with the School of Biological Sciences at Georgia Institute of Technology. His research focuses on collective intelligence, evolutionary dynamics and high-performance computing. Dr. Wang received a degree of B.Eng. in computer science & technology in 2009, a degree of M.Eng. in astronautics engineering in 2012, and a degree of Ph.D. in computing in 2016. He was an awardee of Richard E. Merwin Student Scholarship from the IEEE Computer Society in 2011, and received a three-year Overseas University Research Studentship from the University of Bath (UK) in 2012. Dr. Wang was the Student Activities Chair of IEEE UK & Ireland Section from 2013 to 2015. He along with his team successfully organized the 3rd IEEE UK & Ireland Student Branch Congress in 2013.

GR Framework: Present and Future

Josef Heinen, Forschungszentrum Jülich

GR is a plotting package for the creation of two- and three-dimensional graphics in Julia, offering basic MATLAB-like plotting functions to visualize static or dynamic data with minimal overhead. In addition, GR can be used as a backend for Plots, a popular visualization interface and toolset for Julia. Using quick practical examples, this talk is going to present the special features and capabilities provided by the GR framework for high-performance graphics, in particular when used in interactive notebooks (Jupyter), development environments (Atom), desktop applications (nteract) or terminal programs (iTerm2). The presentation also introduces how to embed GR in interactive GUI applications based on QML.jl, a new plotting interface to Qt5 QML. Moreover, some experimental features and elements will be introduced, among them a meta layer providing an interactive interface to new backends based on Qt5 or JavaScript.

About Josef Heinen

Josef Heinen is the head of the group “Scientific IT–Systems” at the Peter Grünberg Institute / Jülich Centre for Neutron Science, both institutes at Forschungszentrum Jülich, a leading research centre in Germany. The design and development of visualization systems have been an essential part of his activities over the last twenty years. Most recently, his team has been engaged in the further development of a universal framework for cross-platform visualization applications (the GR Framework).


Heavy-duty pricing of Fixed Income financial contracts with Julia

Felipe Noronha, Brazilian Development Bank

The pricing of bonds or a credit portfolio usually involves simple mathematics. However, when faced with a big portfolio, careful design is crucial for fast execution. I’ll show how I designed a solution that prices a database of about 2.4 million contracts with 78 million cashflows in up to 3.5 minutes on an 8-core machine. The solution uses plain Julia code, some parallel computation and buffering strategies, Julia’s native serialization for fast loading of data from disk, and a handful of packages. The BusinessDays.jl and InterestRates.jl packages will be featured.
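The fast-loading trick rests on Julia's built-in Serialization stdlib: dump the parsed contract data once in binary form, then reload it directly on later runs instead of re-parsing. A minimal sketch (the struct and field names are illustrative, not the actual schema):

```julia
using Serialization

# Hypothetical cashflow record; isbits, so it serializes compactly.
struct CashFlow
    date::Int        # e.g. days since some epoch
    amount::Float64
end

flows = [CashFlow(i, 100.0 + i) for i in 1:1000]

path = tempname()
open(io -> serialize(io, flows), path, "w")   # fast binary dump
reloaded = open(deserialize, path)            # fast reload on the next run
```

Note the trade-off the abstract implies: the native format is fast but tied to the Julia version, which is acceptable for a regenerable cache.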

About Felipe Noronha

Bachelor in Computer Engineering, M.Sc. in Economics, and Market Risk Manager at BNDES (Brazilian Development Bank).

Improving Biological Network Inference with Julia

Thalia Chan, Imperial College, London

In the multi-disciplinary field of systems biology, we welcome the opportunity that Julia brings for writing fast software with simple syntax. Speed is important in an age when biological datasets are increasing in size and analyses are becoming computationally more expensive. One example is the problem of determining how genes within a cell interact with one another. In the inference of gene regulatory networks (GRNs) we seek to detect relationships between genes through statistical dependencies in biological data, and as datasets grow, so does computation time. Some algorithms use measures from information theory, which are suitable for detecting nonlinear biological relationships, but incur a high computational cost. We developed InformationMeasures.jl, a package for calculating information-theoretic measures. The improved performance of our Julia package compared to widely used packages in other languages enables us to develop new algorithms with higher complexity, examining triples, rather than pairs, of genes. We can show these are more successful than pairwise methods (on simulated data where the underlying GRNs are known), and they scale well to the size of the largest currently available biological datasets.
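The primitives involved are discrete entropy and mutual information over binned expression data. A self-contained sketch of the computation from a joint probability table (this is not InformationMeasures.jl's actual API):

```julia
# Shannon entropy in bits of a discrete distribution.
entropy(p) = -sum(x -> x > 0 ? x * log2(x) : 0.0, p)

# Mutual information I(X; Y) = H(X) + H(Y) - H(X, Y).
function mutual_information(joint::AbstractMatrix)
    px = vec(sum(joint, dims=2))   # marginal of X
    py = vec(sum(joint, dims=1))   # marginal of Y
    return entropy(px) + entropy(py) - entropy(vec(joint))
end

mi_indep   = mutual_information([0.25 0.25; 0.25 0.25])  # independent: 0 bits
mi_coupled = mutual_information([0.5 0.0; 0.0 0.5])      # coupled: 1 bit
```

Nonlinear dependence between genes shows up in such measures even when correlation-based methods miss it, which is the motivation given in the abstract.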

About Thalia Chan

Thalia is a Ph.D. student in theoretical systems biology at Imperial College, London. Her research focuses on algorithm development for biological network inference, in particular using information theory. Outside of her studies she contributes to various open source software projects.

Interfacing with LLVM Using LLVM.jl

Tim Besard, Ghent University

LLVM.jl provides a high-level Julia interface to the LLVM compiler framework. In this talk, I’ll explain how to use LLVM.jl for basic code generation and execution, as well as how to integrate it with the rest of the Julia compiler.

About Tim Besard

Ph.D. student at Ghent University


JLD2: High-performance Serialization of Julia Data Structures in an HDF5-compatible Format

Simon Kornblith, MIT

At present, two options exist for saving Julia data structures to disk: Julia’s built-in serializer and the JLD (Julia data) package. The built-in serializer achieves reasonable performance, but uses a non-standardized format that differs by Julia version and processor architecture. JLD saves data structures in a standardized format (HDF5), but has substantial overhead when saving large numbers of mutable objects. In this talk, I describe the design of JLD2, a re-implementation of JLD. By replacing JLD’s dependency on the HDF5 library with a pure Julia implementation of a subset of HDF5, JLD2 achieves performance comparable to Julia’s built-in serializer, while writing files readable by standard HDF5 implementations. Additionally, JLD2 resolves numerous issues with the previous JLD format and implementation.

About Simon Kornblith

I am currently a Ph.D. student in neuroscience at MIT, but my affiliation will probably change before JuliaCon.

JSeqArray: Data Manipulation of Whole-genome Sequencing Variants in Julia

Xiuwen Zheng, University of Washington

Whole-genome sequencing (WGS) data is being generated at an unprecedented rate. Analysis of WGS data requires a flexible data format to store the different types of DNA variation. A new WGS variant data format, “SeqArray”, was proposed recently (Zheng X et al., Bioinformatics 2017), which outperforms the text-based variant call format (VCF) in terms of access efficiency and file size. Here I introduce a new Julia package, “JSeqArray”, for manipulating genotypes and annotations in an array-oriented manner (https://github.com/CoreArray/JSeqArray). It enables users to write portable and immediately usable code in the wider scientific ecosystem. When used in conjunction with the built-in multiprocessing and job-oriented functions for parallel execution, the JSeqArray package provides users a flexible and high-performance programming environment for analysis of WGS variant data. In the presentation, examples of calculating allele frequencies, principal component analysis and linear-regression association tests will be given.
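Array-oriented genotype manipulation means computations like allele frequencies become one-line reductions over a matrix. A sketch of that computation on a toy dosage matrix (illustrative only, not the JSeqArray API):

```julia
# Genotype dosage matrix: rows = variants, columns = samples, entries count
# copies of the alternate allele (0, 1, or 2 for a diploid genome).
function allele_freq(geno::AbstractMatrix{<:Integer})
    nsamples = size(geno, 2)
    return vec(sum(geno, dims=2)) ./ (2 * nsamples)
end

geno = [0 1 2 1;
        2 2 2 2;
        0 0 0 1]
freqs = allele_freq(geno)   # one frequency per variant
```

The same reduction pattern extends naturally to blocked, out-of-core and parallel execution over the stored arrays.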

About Xiuwen Zheng

Ph.D. in Biostatistics (6/13), Dept. of Biostatistics, University of Washington (UW), Seattle, WA. Postdoctoral Fellow (7/13 – 8/15), Dept. of Biostatistics, UW. Senior Fellow (9/15 – present), Dept. of Biostatistics, UW. Develops and applies statistical and computational methods for the interpretation of large-scale genetic data.

Julia Roadmap

Stefan Karpinski, Julia Computing, Inc. / NYU


About Stefan Karpinski

co-creator of Julia, co-founder of Julia Computing


Julia for Seismic Data Processing and Imaging (Seismic.jl)

Wenlei Gao, University of Alberta

Seismic.jl is a Julia package that provides a framework for seismic wave modeling, data processing and imaging. The current version includes support for reading/writing seismic data, reconstruction and denoising of multi-dimensional (5D) seismic data via parallel and distributed tensor completion, GPU-accelerated finite-difference solvers for seismic wave simulations, and seismic imaging including passive-seismic source location. In this lightning talk, I will briefly describe our area of research and then show how Seismic.jl has been used as the main framework for our research in applied seismology.

About Wenlei Gao

Wenlei Gao received his B.Sc in 2010 and M.Sc in 2013 in Geophysics from China University of Petroleum, Beijing, China. From 2013 to 2014 he worked for the Research Institute of China National Offshore Oil Company. He is currently enrolled in the Ph.D. program in Geophysics in the University of Alberta. His research is mainly focused on multicomponent seismic data registration and joint deconvolution.

Julia on the Raspberry Pi

Avik Sengupta, Julia Computing, Inc.

A quick update on the state of Julia on the Raspberry Pi. We will see how to get Julia and GPIO-related packages working on the Pi, and explore some working examples of applications running on the Pi and utilising its power to interact with the physical world.

About Avik Sengupta

Avik is the author of Julia’s integration with Java and various other packages. One of his hobbies is to make Julia a first class language on the Raspberry Pi.

Julia: a Major Scripting Language in Economic Research?

Anna Ciesielski, ifo Institute and Ludwig-Maximilians University in Munich (Germany)

Julia has the potential to become a major programming language in economics. In this presentation I will suggest a new way to calibrate models of economic growth. For that purpose I use a Markov chain Monte Carlo algorithm (via the Klara package) and repeatedly solve for the roots of a big system of nonlinear equations using the JuMP and Ipopt packages. With this approach I am able to estimate the distributions of parameter values which drive long-run economic growth and to project confidence intervals of macroeconomic variables into the future. For this purpose Julia is the best programming language that I know of, because it combines a great range of functionality with very high speed. To conclude, I will reflect on some challenges that came up during the project.
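For readers unfamiliar with the MCMC side, a bare-bones random-walk Metropolis sampler captures the kind of step that packages like Klara automate (illustrative only; in the talk's setting each draw additionally requires solving a nonlinear system):

```julia
# Random-walk Metropolis targeting a density known up to a constant via its
# log, `logpost`. All names here are illustrative.
function metropolis(logpost, x0, n; step=0.5)
    chain = [x0]
    x, lp = x0, logpost(x0)
    for _ in 2:n
        y = x + step * randn()             # propose a nearby point
        lpy = logpost(y)
        if log(rand()) < lpy - lp          # accept with prob min(1, ratio)
            x, lp = y, lpy
        end
        push!(chain, x)                    # rejected proposals repeat x
    end
    return chain
end

chain = metropolis(x -> -x^2 / 2, 0.0, 10_000)   # samples approx. N(0, 1)
```

The resulting chain gives exactly the parameter distributions and confidence bands the abstract describes projecting forward.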

About Anna Ciesielski

I am a Ph.D. student in the economics department at the Ludwig-Maximilians University in Munich (Germany).

JuliaBox on Various Cloud Platforms and Current Development Goals

Nishanth H. Kottary, Julia Computing Inc.

A quick presentation on our experience of running JuliaBox on various cloud platforms, namely Amazon AWS, Google Cloud Platform, and Microsoft Azure. We will also present current development plans to make JuliaBox faster and to support a host of new features.

About Nishanth H. Kottary

Software Engineer at Julia Computing Inc.


JuliaDB.jl: An End-to-End Data Analysis Platform

Jeff Bezanson, Julia Computing, Inc.

JuliaDB.jl is an end-to-end all-Julia data analysis platform incorporating storage, parallelism and compute into a single model. One can load a pile of CSV files into JuliaDB as a distributed table. JuliaDB will index the files and save the index for efficient lookup of subsets of the data later. You can also convert the data from the CSV files into an efficient memory-mappable binary format (“ingest”). This talk will be a brief introduction to the basic primitives of JuliaDB and how to use them.

About Jeff Bezanson

Jeff is one of the creators of Julia, co-founding the project at MIT in 2009 and eventually receiving a Ph.D. related to the language in 2015. He continues to work on the compiler and system internals, while also working to expand Julia’s commercial reach as a co-founder of Julia Computing, Inc.

JuliaRun: A Simple & Scalable Julia Deployment Platform

Tanmay Mohapatra & Pradeep Mudlapur, Julia Computing, Inc.

JuliaRun is a Julia Computing product under development, with a few early users. It is adaptable to a variety of private and public clouds and makes it easy to deploy Julia applications, both batch and online. We will present a brief overview of the architecture and how it can help deploy scalable end-to-end applications.

About Tanmay Mohapatra & Pradeep Mudlapur

Tanmay and Pradeep have contributed to Julia packages in JuliaWeb and JuliaCloud.

Junet: Towards Better Network Analysis in Julia

Igor Zakhlebin, Northwestern University

I will present Junet — a new package for network analysis that seeks to be a fast and hackable alternative to mainstream network analysis libraries like NetworkX, igraph, and graph-tool. Unlike other Julia packages, it allows users to quickly traverse and modify graphs as well as to associate attributes with their nodes and edges. I will discuss the data structures implemented in Junet and showcase how specific Julia features make them efficient. For example, thanks to parametric types it is possible to shrink the memory consumed by Junet to a fraction of what other libraries require. The combination of multiple dispatch with just-in-time compilation makes it possible to optimize some methods based on the specific types they operate on, sometimes eliminating the computation altogether. The talk will also cover experimental features that don’t yet work so well, like creating zero-cost iterators and parallelizing loops. Finally, I will present benchmarks comparing Junet with state-of-the-art libraries for network analysis.
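The parametric-types memory trick is easy to see in isolation: let the integer width of node indices be a type parameter, and small graphs pay for small indices. A sketch of the idea (the type here is hypothetical, not Junet's actual representation):

```julia
# An edge parameterized by its index type: the storage cost per edge is
# determined by T, so a graph with < 2^32 nodes can use UInt32 indices.
struct Edge{T<:Integer}
    src::T
    dst::T
end

small = Edge{UInt32}(1, 2)   # 8 bytes per edge
big   = Edge{Int64}(1, 2)    # 16 bytes per edge
```

Because `Edge{UInt32}` and `Edge{Int64}` are distinct concrete types, multiple dispatch plus JIT compilation specializes every method for the chosen width at no runtime cost — the second optimization the abstract mentions.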

About Igor Zakhlebin

Graduate student


L1-penalized Matrix Linear Models for High Throughput Data

Jane Liang, University of Tennessee Health Science Center

Analysis of high-throughput data can be improved by taking advantage of known relationships between observations. Matrix linear models provide a simple framework for encoding such relationships to enhance detection of associations. Estimation of these models is challenging when the datasets are large and when penalized regression is used. This talk will discuss implementing fast estimation algorithms for L1-penalized matrix linear models as a first-time Julia user and fluent R user. We will share our experiences using Julia as our platform for prototyping, numerical linear algebra, parallel computing, and sharing our method.
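The coordinate-wise workhorse of L1-penalized estimation is the soft-thresholding operator, applied inside each update of the fitting loop. A sketch of the standard operator (not the talk's full matrix-linear-model algorithm):

```julia
# Soft-thresholding: the proximal operator of the L1 penalty with
# parameter λ. Coefficients with |z| ≤ λ are set exactly to zero,
# which is what produces sparse estimates.
soft_threshold(z, λ) = sign(z) * max(abs(z) - λ, 0.0)

soft_threshold(3.0, 1.0)    # shrinks toward zero
soft_threshold(0.5, 1.0)    # small coefficients vanish entirely
```

Inside a coordinate-descent loop, each coefficient update is a least-squares step followed by this operator, which is why fast elementwise kernels matter at high-throughput scale.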

About Jane Liang

Jane Liang recently obtained a bachelor’s degree in statistics from UC Berkeley and plans to enter a doctoral program later this year. Currently, she is a scientific programmer working with Dr. Saunak Sen at the University of Tennessee Health Science Center, Department of Preventive Medicine, Division of Biostatistics.

MultipleTesting.jl: Simultaneous Statistical Inference in Julia

Nikolaos Ignatiadis, Stanford University

The parallel application of multiple statistical hypothesis tests is one of the fundamental patterns of exploratory data analysis for big datasets. This becomes essential in various fields of scientific research, such as in high-throughput biology, medicine and imaging where one is routinely faced with millions of tests. The goal is to protect against spurious discoveries with rigorous statistical error control guarantees, while simultaneously providing enough power to detect needles in a haystack. Here, we present MultipleTesting.jl, a package that provides a unified interface for classical and modern multiple testing methods. We give a quick introduction to the underlying statistical concepts and show how Julia is ideally suited for such an endeavour: First, most multiple testing procedures consist of a standard set of primitives, such as p-values, adjusted p-values and hypothesis weights. Second, elaborate (multiple testing) algorithms often consist of simpler components in a plug-and-play fashion; these include estimators of the proportion of true null hypotheses, parametric as well as non-parametric distribution estimators, and statistical machine learning techniques. All of these ideas can be abstracted away by Julia’s type system and multiple dispatch. Third, Julia provides the computational performance which is necessary when analyzing millions of hypotheses. We believe MultipleTesting.jl complements the growing number of high quality statistics packages in Julia’s ecosystem.
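
As a self-contained illustration of one classic primitive behind this unified interface, here is Benjamini–Hochberg p-value adjustment written from scratch (a sketch of the concept, not the package's internals; in MultipleTesting.jl itself this is reached through `adjust` with a method type such as `BenjaminiHochberg()`).

```julia
# Benjamini–Hochberg adjusted p-values: sort the p-values, scale the i-th
# smallest by m/i, then enforce monotonicity from the largest downwards.
function benjamini_hochberg(p::Vector{Float64})
    m = length(p)
    order = sortperm(p)
    adj = similar(p)
    running_min = 1.0
    for i in m:-1:1
        k = order[i]
        running_min = min(running_min, p[k] * m / i)
        adj[k] = running_min
    end
    return adj
end

pvals = [0.001, 0.008, 0.039, 0.041, 0.09, 0.7]
benjamini_hochberg(pvals)
# [0.006, 0.024, 0.0615, 0.0615, 0.108, 0.7]
```

In the package's design, procedures like this one are values of an abstract method type, so swapping Bonferroni for Benjamini–Hochberg is a one-argument change dispatched at compile time.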

About Nikolaos Ignatiadis

Nikos Ignatiadis is a first year Ph.D. student at Stanford’s Statistics department. He is interested in the development of interpretable methods for multiple testing and high dimensional inference.

Nulls.jl: Missingness for Data in Julia

Jacob Quinn, Domo

Nullability is a complex issue for any programming language or domain; Nulls.jl puts forth the data-friendly approach to missing values that Julia has wanted and deserves, backed by core language support.
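
The design proposed here (a dedicated sentinel value whose operations propagate rather than error) is essentially what later landed in Base Julia as `missing`. A minimal illustration of the propagation semantics, written in that later syntax:

```julia
# Three-valued logic: operations involving a missing value yield missing.
x = [1, 2, missing, 4]

a = 1 + missing          # missing
b = missing == missing   # missing, not true: unknown == unknown is unknown

# skipmissing lets reductions ignore the missing entries explicitly:
s = sum(skipmissing(x))  # 7
```

The key data-friendliness point is that `missing` participates in ordinary typed arrays (`Vector{Union{Int, Missing}}`) rather than requiring a separate wrapped container type.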

About Jacob Quinn

Jacob attended Carnegie Mellon for a master's degree in data science and has been an active Julia contributor for four years.


Solving Geophysical Inverse Problems with the jInv.jl Framework: Seeing Underground with Julia

Patrick Belliveau, University of British Columbia

Geophysical inversion is the mathematical and computational process of estimating the spatial distribution of physical properties of the earth’s subsurface from remote measurements. It’s a key tool in applied geophysics, which is generally concerned with determining the structure and composition of the earth’s interior without direct sampling. At JuliaCon 2017 I would like to discuss our group’s efforts to develop a modular, scalable, and extensible framework for solving geophysical inverse problems and other partial differential equation (PDE) constrained parameter estimation problems in Julia. To solve PDE constrained parameter estimation problems we need advanced algorithms for optimization, for the solution of PDEs, and the ability to efficiently share information between these domains. Our framework, called jInv—short for JuliaInversion—provides modular building block routines for these tasks that allow users to easily write their own software to solve new problems. The framework heavily uses Julia’s multiple dispatch to allow for extensibility and generic programming. It is also critical that software implementations of these algorithms can scale to large distributed computing systems. jInv allows users to exploit the parallelism in geophysical inverse problems without detailed knowledge of Julia’s parallel computing constructs. The first main goal of my talk is to discuss our approach to exploiting parallelism in geophysical inverse problems and how it has been implemented in jInv. The second goal is to illustrate, through examples of developing jInv modules for new geophysical problems, how we’ve moved jInv from a research project for the benefit of our own group to a tool that can be of use to the wider community.
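
The parallelism pattern described above can be sketched in a few lines. This is a hedged stand-in, not jInv's actual API: the forward PDE problem decouples across sources (e.g. transmitter locations), so each solve can be farmed out to a different worker.

```julia
using Distributed

# Stand-in for an expensive forward PDE solve for one source term.
forward_solve(source) = sum(abs2, source)

sources = [rand(10) for _ in 1:8]

# pmap distributes each forward solve across the available workers;
# with no extra workers it simply runs serially, so user code is the
# same whether or not a cluster is attached.
data = pmap(forward_solve, sources)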

About Patrick Belliveau

Hi! I’m a Ph.D. student in the department of Earth, Ocean and Atmospheric Sciences at the University of British Columbia in Vancouver Canada. Academically I’m interested in developing new computational methods for solving geophysical imaging problems. Since coming to UBC Julia has become my language of choice and I now consider myself a reformed Fortran programmer.

SparseRegression.jl: Statistical Learning in Pure Julia

Josh Day, NC State University

SparseRegression implements a variety of offline and online algorithms for statistical models that are linear in the parameters (generalized linear models, quantile regression, SVMs, etc.). This talk will discuss my experience using primitives defined in the JuliaML ecosystem (LossFunctions and PenaltyFunctions) to implement a fast and flexible SparseReg type for fitting a wide variety of models.
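
The plug-and-play idea behind those primitives can be sketched as follows. This is a hedged illustration modeled loosely on the LossFunctions/PenaltyFunctions pattern, not their actual APIs: losses and penalties are types, methods are added by dispatch, and one generic objective then works for any combination.

```julia
# Losses and penalties as types; behavior attached via multiple dispatch.
abstract type Loss end
abstract type Penalty end

struct L2Loss <: Loss end
struct L1Penalty <: Penalty end

value(::L2Loss, y, ŷ) = abs2(y - ŷ) / 2
value(::L1Penalty, β) = sum(abs, β)

# Generic penalized objective: works for any Loss/Penalty subtypes.
function objective(loss::Loss, pen::Penalty, λ, y, X, β)
    ŷ = X * β
    sum(value.(Ref(loss), y, ŷ)) + λ * value(pen, β)
end

X = [1.0 0.0; 0.0 1.0]
y = [1.0, 2.0]
β = [1.0, 2.0]
objective(L2Loss(), L1Penalty(), 0.1, y, X, β)  # 0.0 + 0.1 * 3 = 0.3
```

Because dispatch resolves the `value` methods at compile time, the generic objective compiles to the same code as a hand-written one for each loss/penalty pair — the flexibility is free.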

About Josh Day

Josh is a statistics Ph.D. student at NC State University, where he researches on-line optimization algorithms for performing statistical analysis on big and streaming data.

Statically Sized and Typed Data in Julia

Andy Ferris, Fugro Roames

I will describe my experience working with and developing highly efficient data structures in Julia which leverage statically known information - such as the type(s) contained in a collection, or the predetermined size of an array. Julia’s combination of efficient code generation and metaprogramming capability make it an ideal language to implement data structures which are both convenient to program with and lightning fast in execution. I plan to describe the various metaprogramming approaches which are useful for implementing containers of known size or having inhomogeneous elements - by using traits, pure functions, generated functions, macros and recursion. I will touch upon the successes and failures of packages like StaticArrays.jl, Switches.jl and TypedTables.jl, and hope to preview work on a more flexible yet strongly-typed tabular data structure than currently provided by TypedTables.
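
One of the techniques mentioned — generated functions over statically sized data — can be shown in miniature. This is an illustration of the mechanism, not StaticArrays' actual code: because the tuple length `N` is part of the type, the loop can be unrolled at compile time into straight-line additions.

```julia
# N is known to the compiler, so the expression below is built once per
# concrete tuple type and compiled to N-1 unrolled additions.
@generated function unrolled_sum(v::NTuple{N,T}) where {N,T}
    ex = :(v[1])
    for i in 2:N
        ex = :($ex + v[$i])
    end
    return ex
end

unrolled_sum((1, 2, 3, 4))   # 10
unrolled_sum((1.5, 2.5))     # 4.0
```

The same pattern — size in the type, code generation per size — is what lets statically sized arrays beat generic loops for small fixed dimensions.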

About Andy Ferris

I currently work at Fugro Roames on the intersection of machine learning, geodesy and big data. Beginning with detailed, large-scale scans of the physical world, we deduce intelligence for our clients that would be expensive to acquire directly. Previously, I worked in academia as a numerical quantum physicist, where I was attracted to Julia for its unique combination of user productivity and speed.


Sustainable Machine Learning Workflows at Production Scale with Julia

Daniel Whitenack, Pachyderm

The recent advances in machine learning and artificial intelligence are amazing, and Julia seems poised to play a significant role in these fields. Yet, in order to have real value within a company, data scientists must be able to get their models off of their laptops and deployed within a company’s distributed data pipelines and production infrastructure. In this talk, we will implement an ML model locally and talk about the trouble we can get into taking this to production. Then, against all odds, we will actually deploy the model in a scalable manner to a production cluster. Now that’s a pretty good 10 minutes!

About Daniel Whitenack

Daniel (@dwhitena) is a Ph.D. trained data scientist working with Pachyderm (@pachydermIO). Daniel develops innovative, distributed data pipelines which include predictive models, data visualizations, statistical analyses, and more. He has spoken at conferences around the world (Datapalooza, DevFest Siberia, GopherCon, and more), teaches data science/engineering with Ardan Labs (@ardanlabs), maintains the Go kernel for Jupyter, and is actively helping to organize contributions to various open source data science projects.

Teaching Through Code

Christina Lee, Okinawa Institute of Science and Technology

Standards already exist to improve software readability, but code understandable by a colleague differs from the best code to present to a student. As a scientist, I have often had to jump from mathematics or pseudo-code to a fully fledged implementation, with no chance to gain purchase on an intermediate ground. In the last year, I have worked on a Julia blog in computational physics and numerics and have striven to write code comprehensible to someone unfamiliar with the fundamental principles of the algorithm. In this talk, I will display both good and bad examples of documentation and tutorials, as well as guidelines for improvement.

About Christina Lee

Theoretical Physics Graduate Student


The Julia VS Code Extension

Zac Nugent

This talk will give an overview of the Julia extension for VS Code. The extension currently provides syntax highlighting, an integrated REPL, code completion, hover help, an integrated linter, code navigation, integration with the Julia testing infrastructure and integrated support for Weave documents (Julia's knitr equivalent). A 30-minute version of this talk would cover the internals of the extension. We would describe the Julia language server (our implementation of the Microsoft Language Server Protocol) that provides the integration with the VS Code UI. Other topics we would cover are our approach to building a robust and reliable software delivery mechanism that does not depend on the shared Julia package directory, our custom parser that is used in the language server, and the developments currently being made to provide actionable parse-time formatting and linting hints, as well as any other features we might add between now and JuliaCon. Links: https://github.com/JuliaEditorSupport/LanguageServer.jl https://github.com/JuliaEditorSupport/julia-vscode https://github.com/ZacLN/Parser.jl

About Zac Nugent

London based Economist

TheoSea: Theory Marching to Light

Mark Stalzer, Caltech

TheoSea (for THEOry SEArch) is a Julia meta-program that discovers compact theories from data, if they exist. It writes candidate theories in Julia and then validates them, tossing the bad theories and keeping the good ones. Compactness is measured by a metric, such as the number of space-time derivatives. A theory can consist of more than one well-formed formula over a mathematical language. The underlying algorithm is optimal in terms of compactness, although it may be combinatorially explosive for non-compact theories. TheoSea is currently working on re-discovering the source-free Maxwell equations and the wave equation of light. There are many applications.
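
The search idea can be shown with a toy sketch. This is a hedged illustration only — TheoSea's real search is over space-time derivative terms, and all names here are invented: candidates are enumerated in order of compactness and the first one that validates against the data is kept, which makes the result optimal in compactness by construction.

```julia
# Toy candidate theories: an expression, a callable form, and a cost.
candidates = [
    (expr = "du = 0",     f = (u, x) -> 0.0,   cost = 1),
    (expr = "du = u",     f = (u, x) -> u,     cost = 1),
    (expr = "du = x + u", f = (u, x) -> x + u, cost = 2),
]

# Synthetic "measurements" generated by the true law du = u.
data = [(u = u, x = x, du = u) for u in 0.0:0.5:2.0, x in 0.0:1.0:2.0]

# A candidate validates if it reproduces every measurement.
validates(f) = all(abs(f(d.u, d.x) - d.du) < 1e-8 for d in data)

# March in order of increasing cost; keep the first theory that fits.
found = first(c for c in sort(candidates; by = c -> c.cost) if validates(c.f))
found.expr  # "du = u"
```

The combinatorial-explosion caveat is visible even here: the candidate list grows rapidly with the size of the term language, so only compact true theories are found quickly.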

About Mark Stalzer


Using Julia to Inform Qb@ll Development

Jane E. Herriman, Caltech/Lawrence Livermore National Lab

Qb@ll is an open source density functional theory package being developed at Lawrence Livermore National Lab. Present work focuses on efficient time integration methods, with the aim of substantially increasing the timescales accessible in simulations of electron dynamics. Qb@ll is several hundred thousand lines of C++, Fortran, and Perl. Exploring new methods directly in the code base is an extremely inefficient use of developer time. Simultaneously, rigorous exploration of relevant methods is highly computationally intensive, precluding the use of traditional high-productivity languages. Screening new methods in Julia has proven highly effective, even accounting for the time needed to learn the language and implement a small code to explore electron wave function integration.
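
The kind of screening described is cheap to express in Julia. As a hedged toy example (a two-level model, not Qb@ll's methods or code): compare two integrators for the Schrödinger-type equation i dψ/dt = Hψ and check norm conservation, a few lines instead of an intervention in a large C++ code base.

```julia
using LinearAlgebra

H = Hermitian([1.0 0.3; 0.3 2.0])   # stand-in Hamiltonian
ψ0 = normalize(ComplexF64[1, 1])    # normalized initial wave function
dt, steps = 0.01, 1000

euler(ψ) = ψ - im * dt * (H * ψ)    # explicit Euler: not norm-conserving

A = I - im * (dt / 2) * H           # Crank–Nicolson (Cayley transform):
B = I + im * (dt / 2) * H           # (I + i dt/2 H) ψ_next = (I - i dt/2 H) ψ
crank_nicolson(ψ) = B \ (A * ψ)     # unitary step for Hermitian H

function propagate(stepfun, ψ, n)
    for _ in 1:n
        ψ = stepfun(ψ)
    end
    return ψ
end

ψe = propagate(euler, ψ0, steps)
ψc = propagate(crank_nicolson, ψ0, steps)

norm(ψe)   # drifts above 1: the explicit scheme pumps in norm
norm(ψc)   # stays ≈ 1 to machine precision
```

A sweep over step sizes and schemes like this is exactly the sort of rigorous-but-exploratory numerical experiment that is painful in a production C++/Fortran code and immediate in Julia.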

About Jane E. Herriman

Jane is a graduate student in computational materials physics enrolled at Caltech. She is interning at Lawrence Livermore National Lab, where she is working with Xavier Andrade on methods for and applications of density functional theory.

Using Return Type Annotations Effectively

Eric Davies, Invenia Technical Computing

Function return type annotations were added over a year ago and have seen some usage in Base but little in user-land. This talk will describe how they are implemented and discuss how ResultTypes.jl uses them to great effect.
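
For readers unfamiliar with the feature: a return type annotation on the function signature passes every returned value through `convert` (with a type check), on every return path. The example below is illustrative; `mean_ratio` is an invented name, not from the talk.

```julia
# The ::Float64 annotation converts each return value to Float64,
# so both return paths below yield a Float64.
function mean_ratio(a::Vector{Int}, b::Vector{Int})::Float64
    isempty(b) && return 0        # the Int 0 is converted to 0.0
    return sum(a) / sum(b)
end

mean_ratio([1, 2, 3], [2, 2, 2])  # 1.0
mean_ratio(Int[], Int[])          # 0.0
```

ResultTypes.jl builds on this by annotating functions to return a `Result` wrapper, so success values and errors are converted into one concretely typed return value — the effect the talk describes as using the annotations "to great effect".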

About Eric Davies

Eric is co-leading Invenia’s transition to Julia and designing the building blocks for Invenia’s Energy Intelligence System.


Web Scraping with Julia

Avik Sengupta, Julia Computing, Inc.

A large part of data science is the gathering of data, and since Julia solves the two-language problem, it should be no surprise that it is great for that part of the workflow too. In this talk, we will discuss how to combine a set of packages (HTTP.jl, Gumbo.jl, Cascadia.jl) to easily develop and deploy a web scraping strategy. We will see how Julia's high-level language features make it easy to develop such projects interactively, while at the same time allowing deployment into a distributed cluster for scraping at scale.
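
As a self-contained stand-in for the extraction step (the talk's stack fetches with HTTP.jl, parses with Gumbo.jl, and selects with Cascadia.jl CSS selectors; the regex below is only for a dependency-free illustration and is fragile on real HTML):

```julia
# A local HTML fragment standing in for a fetched page.
html = """
<ul>
  <li class="talk">Web Scraping with Julia</li>
  <li class="talk">WebIO.jl</li>
</ul>
"""

# Extract the text of each talk item; a real scraper would instead do
# something like eachmatch(Selector("li.talk"), parsehtml(html).root).
titles = [m.captures[1] for m in eachmatch(r"<li class=\"talk\">(.*?)</li>", html)]
# ["Web Scraping with Julia", "WebIO.jl"]
```

The interactive appeal is that each stage — fetch, parse, select — returns an ordinary Julia value that can be inspected at the REPL before the pipeline is wrapped up for cluster deployment.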

About Avik Sengupta

Avik is the author of Julia’s integration with Java and various other packages. One of his hobbies is to make Julia a first class language on the Raspberry Pi.


WebIO.jl: a Thin Abstraction Layer for Web Based Widgets

Shashi Gowda, Julia Computing, Inc.

WebIO acts as a small Julian bridge between browser-based UIs for Julia (such as IJulia, Atom, Blink and Mux) and packages that wish to output rich, interactive widgets. This means graphics packages don't have to depend on IJulia, Atom, Blink, etc. to create widgets. Instead they depend only on WebIO and use its abstractions. Widgets written with WebIO once will work on all of the above interfaces. Some features are:

  • A DSL for creating HTML elements
  • A Julia-to-JavaScript transpiler
  • transparent and easy communication with observable refs
  • Ability to reliably load arbitrary JS libraries from the web or serve them from disk, with correct ordering of code execution (this has plagued many a package so far)
  • Flexible: not tied to any JavaScript framework and unopinionated; allows you to execute arbitrary JS on your widgets
  • Allows mixing and mashing widgets and concepts from different packages seamlessly, resulting in arbitrarily granular separation of concerns. Enables an ecosystem of UI packages, as opposed to Escher’s monolithic codebase.
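
What an HTML-element DSL boils down to can be sketched in a few lines. This is a hedged, self-contained illustration, not WebIO's actual implementation: elements are a tree of tag, attributes and children that renders to HTML.

```julia
# A minimal HTML node tree with a string renderer.
struct Node
    tag::Symbol
    attrs::Dict{Symbol,String}
    children::Vector{Any}   # child Nodes or plain text
end

node(tag, children...; attrs...) =
    Node(tag, Dict(k => string(v) for (k, v) in attrs), collect(children))

function render(n::Node)
    a = join((" $k=\"$v\"" for (k, v) in n.attrs))
    inner = join(render.(n.children))
    "<$(n.tag)$a>$inner</$(n.tag)>"
end
render(s::AbstractString) = s

widget = node(:div, node(:button, "Click me"; class = "btn"))
render(widget)  # "<div><button class=\"btn\">Click me</button></div>"
```

WebIO layers communication on top of such a tree: widget state lives in observable refs, and the frontends (IJulia, Atom, Blink, Mux) only need to know how to mount the rendered node and relay ref updates.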

About Shashi Gowda

I work on various Julia projects. My interests are mainly in interactive UIs.

© 2014-2020 JuliaCon.org All rights reserved.