Abstracts

Kathleen Fisher, Tufts University

'PL and ML: Two Great Tastes that Taste Great Together'

 We see daily evidence of the astounding ways that machine learning has been changing the world: everything from self-driving cars to computers that can win at Jeopardy! to search engines that can find pictures of cats.  Unfortunately, building such applications currently requires Herculean effort and extensive expertise in disparate areas, including knowledge of the domain in question, mastery of machine learning algorithms, and the ability to code those algorithms to run extremely efficiently on a variety of platforms.  DARPA’s PPAML program was created partly to test the hypothesis that programming language technology can improve this situation by raising the level of abstraction and letting a compiler and run-time system do some of the work currently done by machine learning experts.  In this talk, I will summarize the programming language world-view and describe various techniques from that world that might be useful in a machine-learning context.  I will also touch on various ways in which machine learning has been positively impacting programming language research. 

David Poole, University of British Columbia and Leverhulme Trust Visiting Professor at the University of Oxford

'Interoperability of probabilistic programs and data (probabilistic programs as the lingua franca of science)'

A probabilistic program is a program that makes (probabilistic) predictions on data. The evidence-based hypotheses that we would like our programs to embody are often much narrower than the programs we need for applications. For example, consider predicting the outcome of a treatment for a patient, conditioned on the patient's electronic health record. To make such a prediction, we should use all of the evidence available, which includes information about all other patients, clinical trials, biology, etc. Such evidence is currently buried in research papers, and while many of these papers are relevant to a particular case, none of them is directly applicable to the patient at hand. Imagine if these research papers were instead published as machine-interpretable probabilistic programs. To make a prediction for a new patient, we would need to combine many such programs, and they would all need to interoperate with data. This talk will outline the vision, some results that have been obtained, and applications in geology.

Jeffrey Mark Siskind, Purdue University

'What every machine-learning researcher should know about AD'

Most modern machine-learning methods, including those employed in probabilistic programming, would benefit from being able to take gradients of complex models expressed as computer programs. Automatic Differentiation (AD) is a collection of techniques for automatically transforming a program that computes a differentiable function to one that computes its derivative. While these techniques have been known, in one form or another, for many decades, they have not received wide use. This is, in part, because there is a fundamental tradeoff: the highly efficient tools are inconvenient to use and the ones that are convenient to use are highly inefficient. For the past decade, we have been engaged in research that attempts to bridge the gap, developing fundamental new programming language and compiler technology that seeks to make AD both convenient and efficient. In this talk, I will give an overview of the technical fundamentals, particularly those of transformation-based reverse mode, along with our novel efforts to expose those to the programmer through a convenient reflective API yet migrate the run-time reflection to compile time to achieve efficiency.
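To make the program-transformation view of AD concrete, here is a minimal tape-based reverse-mode sketch in Python. It is purely illustrative: it uses operator overloading and a run-time tape rather than the compile-time transformation and reflective API discussed in the talk, and all names in it are my own.

```python
# Minimal reverse-mode AD sketch (illustrative only).  Each Var records how
# it was computed; backward() walks the computation in reverse, accumulating
# adjoints via the chain rule.

import math

class Var:
    """A scalar that records its computation for reverse-mode AD."""
    def __init__(self, value, parents=()):
        self.value = value          # primal value
        self.parents = parents      # tuples of (parent Var, d self / d parent)
        self.adjoint = 0.0          # d output / d self, filled in by backward()

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def sin(x):
    return Var(math.sin(x.value), ((x, math.cos(x.value)),))

def backward(output):
    """Propagate adjoints from the output back to every input."""
    output.adjoint = 1.0
    order, seen = [], set()
    def visit(v):                   # topological order of the recorded graph
        if id(v) not in seen:
            seen.add(id(v))
            for parent, _ in v.parents:
                visit(parent)
            order.append(v)
    visit(output)
    for v in reversed(order):       # reverse sweep
        for parent, local_grad in v.parents:
            parent.adjoint += v.adjoint * local_grad

# Example: f(x, y) = x * y + sin(x); df/dx = y + cos(x), df/dy = x.
x, y = Var(2.0), Var(3.0)
f = x * y + sin(x)
backward(f)
print(f.value, x.adjoint, y.adjoint)   # 6.909..., 3 + cos(2), 2.0
```

The talk's central point is the gap between such convenient run-time taping and the efficiency of ahead-of-time transformation; the sketch shows only the convenient half.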

https://engineering.purdue.edu/~qobi/papers.html

Joint work with Barak A. Pearlmutter.

Luc De Raedt

'Probabilistic logic programming concepts'

A multitude of different probabilistic programming languages exists today, all extending a traditional programming language with primitives to support modeling of complex, structured probability distributions.

In this talk I shall focus on probabilistic extensions of logic programming languages such as Prolog, which have been under development for more than 20 years. One advantage of such languages is that they naturally fit both the statistical relational learning / artificial intelligence perspective, since they define probability distributions over logical interpretations, and the perspective of probabilistic programming languages.

During this talk I shall give an overview of the underlying concepts and semantics of these languages as well as sketch their current inference and learning mechanisms. 
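To give a flavour of the distribution semantics underlying such languages: probabilistic facts behave like independent coin flips, deterministic rules decide what holds in each resulting possible world, and the probability of a query is the total weight of the worlds in which it holds. The toy sketch below is my own Python illustration of that idea, not material from the talk and not ProbLog's actual syntax.

```python
# Toy illustration of the distribution semantics behind probabilistic logic
# programs.  Probabilistic facts are independent coin flips; a deterministic
# rule decides what holds in each possible world; P(query) sums the weights
# of the worlds in which the query holds.

from itertools import product

# Probabilistic facts: fact -> probability of being true.
facts = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    """Deterministic rules: alarm :- burglary.  alarm :- earthquake."""
    return world["burglary"] or world["earthquake"]

def query_probability(query):
    total = 0.0
    names = list(facts)
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name, value in world.items():
            weight *= facts[name] if value else 1.0 - facts[name]
        if query(world):
            total += weight
    return total

print(query_probability(alarm))   # 1 - 0.9 * 0.8 = 0.28
```

Real systems avoid this exponential enumeration of worlds; efficient inference and learning for such programs is exactly what the talk surveys.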

This talk will be largely based on joint tutorials with Angelika Kimmig and on the forthcoming survey paper: Luc De Raedt and Angelika Kimmig, Probabilistic (Logic) Programming Concepts, Machine Learning, in press.

An early version of this paper can be found at http://arxiv.org/abs/1312.4328

Hongseok Yang, Oxford University

'Program transformation for probabilistic programs'

One of the main attractions of probabilistic programming languages is that they allow data scientists to express complex models compactly, without worrying about the issue of inference on these models. That issue is instead handled by generic inference algorithms that come with these languages. Although there has been noticeable progress towards constructing efficient inference algorithms for probabilistic programs, these algorithms still lag behind those designed specifically for a particular class of models. In this talk, I will explain my ongoing work with colleagues in Oxford, which aims to address this efficiency issue of inference. Our work is based on two simple observations:

(1) some known efficient inference algorithms can be understood in terms of transforming an original probabilistic model into a new one (for instance, by marginalising random variables or computing a posterior distribution) and then running a standard inference algorithm on the transformed model; and (2) in the context of probabilistic programming, these model transformations correspond to program transformations. We have been developing techniques for automatically transforming probabilistic programs and thereby optimising the models expressed in these programs.

During the talk, I will explain these techniques, and also point out challenges that we have encountered.
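As a miniature of observation (1) (my own toy example, not the authors' system): in the original model below, a discrete latent variable accompanies each observation; the transformed model marginalises those variables out analytically, and a standard, generic inference algorithm (here, self-normalised importance sampling) is then run on the transformed model.

```python
# "Transform the model, then run a generic inference algorithm on the result."
# Original model:    theta ~ Uniform(0, 1); z_i ~ Bernoulli(theta);
#                    x_i ~ Normal(z_i, 1)   with the z_i latent.
# Transformed model: each discrete z_i is summed out analytically, leaving a
#                    mixture likelihood over theta alone.

import math, random

data = [0.9, 1.2, -0.3, 1.1, 0.2]

def normal_pdf(x, mean, std=1.0):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def marginal_likelihood(theta, xs):
    """Likelihood of the transformed model: the z_i have been marginalised."""
    like = 1.0
    for x in xs:
        like *= theta * normal_pdf(x, 1.0) + (1.0 - theta) * normal_pdf(x, 0.0)
    return like

# Generic inference (self-normalised importance sampling with the prior as
# proposal), applied to the transformed model.
random.seed(0)
samples = [random.random() for _ in range(20000)]        # theta ~ Uniform(0, 1)
weights = [marginal_likelihood(t, data) for t in samples]
posterior_mean = sum(t * w for t, w in zip(samples, weights)) / sum(weights)
print("posterior mean of theta ~", round(posterior_mean, 3))
```

The same sampler run on the original, unmarginalised model would also have to guess every z_i, which is exactly the kind of inefficiency that an automatic program transformation can remove.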

This talk is based on ongoing joint work with David Tolpin, Jan-Willem van de Meent and Frank Wood.

Vikash K. Mansinghka 

'A Survey of Probabilistic Programming'

Probabilistic inference is a widely used, rigorous approach for processing ambiguous information based on models that are uncertain or incomplete. However, models and inference algorithms can be difficult to specify and implement, let alone design, validate, or optimize. Additionally, inference often appears to be intractable. Probabilistic programming is an emerging field that aims to address these challenges by formalizing modelling and inference using key ideas from probability theory, programming languages, and Turing-universal computation.

This talk will use real-world applications of three probabilistic programming systems to illustrate the principles of probabilistic programming:
- BayesDB, a Bayesian database that enables users to directly query the probable implications of data tables without training in statistics. Short queries in BQL, an SQL-like language, have been used to discover validated findings from a broad class of databases, including Earth satellites, country-level measures of economic development, and US hospitals.
- Picture, an imperative probabilistic language for 3D scene perception. Picture uses deep neural networks and statistical learning to invert generative models based on computer graphics. 50-line Picture programs can infer 3D models of human poses, faces, and other object classes from single images.
- Venture, a general-purpose probabilistic programming platform with programmable inference. Venture aims to be sufficiently extensible, expressive, and efficient for general-purpose use, and has been successfully applied in fields such as robotics and statistics.
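As a miniature of what "programmable inference" means (my own sketch in Python; the model, variable names, and moves are assumptions made for illustration, not Venture's actual API): the model is an ordinary log-density over named latent variables, and the user writes the inference program by choosing which variables to update, with which moves, in which order.

```python
# Programmable inference in miniature: the model is a log-joint density over
# named latent variables, and the "inference program" is user code composing
# single-site Metropolis-Hastings moves.

import math, random

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.1, 1.9, 3.2]

def log_joint(state):
    """log p(slope, intercept, data) for a simple Bayesian linear regression."""
    lp = -0.5 * state["slope"] ** 2 - 0.5 * state["intercept"] ** 2   # N(0, 1) priors
    for x, y in zip(xs, ys):
        mean = state["slope"] * x + state["intercept"]
        lp += -0.5 * ((y - mean) / 0.5) ** 2                          # Normal(mean, 0.5) likelihood
    return lp

def mh_move(state, name, scale=0.2):
    """Single-site random-walk Metropolis-Hastings move on one named variable."""
    proposal = dict(state)
    proposal[name] = state[name] + random.gauss(0.0, scale)
    if math.log(random.random()) < log_joint(proposal) - log_joint(state):
        return proposal
    return state

# The inference program: which variables to update, how often, in what order.
random.seed(0)
state = {"slope": 0.0, "intercept": 0.0}
for _ in range(5000):
    state = mh_move(state, "slope")
    state = mh_move(state, "intercept")
print(state)   # roughly slope ~ 1, intercept ~ 0 for this data
```

The point of the sketch is only the division of labour: the model is declared once, while the inference strategy is itself a small program that the user can change without touching the model.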