AIM
Day 2 (Wednesday 3 April) @ 11:15–12:45
Stephanie van der Pas (Amsterdam UMC)
Causal conclusions from a cut-off: Bayesian regression discontinuity designs
An opportunity for causal inference presents itself when an intervention is assigned based on a cut-off, as is very common in medical decision-making. Suppose for example that patients aged 65 or younger receive treatment A and patients older than 65 receive treatment B. On average, patients aged 64 will be similar to patients aged 66 in all potentially confounding aspects like BMI or smoking status. So if the outcomes for patients aged 64 are much better than those of patients aged 66, we may reasonably ascribe this difference to the intervention, and claim a causal effect. This is the core concept behind the regression discontinuity design (RDD). In RDD, the causal effect is estimated only locally at the cut-off point. Here we focus on the situation where the cut-off is unknown. We introduce a Bayesian approach in which we incorporate prior knowledge about the cut-off location, suitable for the hitherto somewhat neglected fuzzy version of the RDD, where compliance may be imperfect. We compare the new method to the most popular frequentist methods in simulations and on medical data sets.
Joint work with Julia Kowalska and Mark van de Wiel
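The core RDD idea in the abstract (compare outcomes just below and just above the cut-off) can be illustrated with a minimal sharp-RDD sketch. This is our own toy example with a known cut-off and simulated data, not the Bayesian method of the talk; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sharp RDD: running variable is age, cut-off at 65,
# treatment A at or below the cut-off, treatment B above it.
n = 2000
age = rng.uniform(50, 80, n)
treated = (age <= 65).astype(float)
true_effect = 2.0  # simulated causal effect at the cut-off
outcome = 0.1 * age + true_effect * treated + rng.normal(0, 1, n)

def local_linear_jump(x, y, cutoff, bandwidth):
    """Estimate the discontinuity at the cut-off by fitting a separate
    linear regression on each side, using only points within the bandwidth."""
    fits = []
    for side in (x <= cutoff, x > cutoff):
        mask = side & (np.abs(x - cutoff) <= bandwidth)
        X = np.column_stack([np.ones(mask.sum()), x[mask] - cutoff])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        fits.append(beta[0])  # fitted outcome at the cut-off
    return fits[0] - fits[1]

print(local_linear_jump(age, outcome, cutoff=65, bandwidth=5))  # close to 2.0
```

The fuzzy RDD of the talk replaces the deterministic `treated` indicator with imperfect compliance, and the Bayesian approach additionally places a prior on the (here assumed known) cut-off location.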

Alexander Taveira Blomenhofer (CWI)
Tensor decomposition models in Machine Learning
Many problems in Machine Learning ask for structured representations of high-dimensional tensors. Examples include parameter estimation for Gaussian mixture models, as well as polynomial neural networks. We will discuss some tensor decomposition models and present a theorem to quantify their expressiveness in the presence of a group action. We also discuss when overdetermined models have a unique solution.
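As a concrete instance of the structured representations mentioned above, a CP (canonical polyadic) decomposition writes a tensor as a sum of rank-one terms, so a few factor vectors parametrize exponentially many entries. This sketch is our own illustration, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-r CP model: T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r].
d, r = 10, 3
A, B, C = (rng.standard_normal((d, r)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

print(T.shape)    # (10, 10, 10): d**3 = 1000 tensor entries
print(3 * d * r)  # 90 parameters in the structured representation
```

Uniqueness of such decompositions (up to permutation and scaling of the rank-one terms) is exactly the kind of identifiability question the talk addresses for overdetermined models.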

Dirk van der Hoeven (Leiden University)
Advances in bandit convex optimization
In this talk I will present some recent advances in bandit convex optimization, in which the goal is to optimize a sequence of convex functions. The main challenges in bandit convex optimization are a) estimating a convex function based solely on one evaluation and b) developing a suitable optimization algorithm to be run on the estimated function. I will discuss a recently proposed approach to tackle a) that builds an estimator of the convex function which is globally very poor. Crucially, close to the minimizer of the convex function this estimator can be shown to be reasonable; combined with a suitable algorithm for b) that never strays too far from that minimizer, this leads to good guarantees for the bandit convex optimization setting.
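To make challenge a) concrete, the classical single-evaluation device in this area is the one-point gradient estimator of Flaxman et al.: one noisy function value at a randomly perturbed point yields an unbiased estimate of the gradient of a smoothed surrogate. The sketch below is our own illustration of that estimator, not the specific construction of the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

def one_point_gradient(f, x, delta):
    """One-evaluation gradient estimate: sample a uniform unit vector u and
    return (d / delta) * f(x + delta * u) * u, whose expectation is the
    gradient of a delta-smoothed version of f."""
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)
    return (x.size / delta) * f(x + delta * u) * u

# Sanity check: for f(x) = ||x||^2 the smoothed gradient equals 2x,
# so averaging many single-evaluation estimates should approach it.
f = lambda x: float(x @ x)
x0 = np.array([1.0, -2.0, 0.5])
avg = np.mean([one_point_gradient(f, x0, delta=0.5) for _ in range(100000)],
              axis=0)
print(avg)  # close to [2., -4., 1.]
```

The high variance of a single estimate is exactly why the paired algorithm in b) must stay close to the minimizer, where the estimates are most informative.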

Silke Glas (University of Twente)
Model reduction on manifolds: a differential geometric framework
Using nonlinear projections and preserving structure in model order reduction (MOR) are currently active research fields. In this work, we provide a novel differential geometric framework for model reduction on smooth manifolds, which emphasizes the geometric nature of the objects involved. The crucial ingredient is the construction of an embedding for the low-dimensional submanifold and a compatible reduction map, for which we discuss several options. Our general framework allows capturing and generalizing several existing MOR techniques, such as structure preservation for Lagrangian or Hamiltonian dynamics, and using nonlinear projections that are, for instance, relevant in transport-dominated problems. The joint abstraction can be used to derive shared theoretical properties for different methods, such as an exact reproduction result. To connect our framework to existing work in the field, we demonstrate that various techniques for data-driven construction of nonlinear projections can be included in our framework.
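The simplest instance of an embedding with a compatible reduction map is the linear case, e.g. proper orthogonal decomposition (POD): the embedding lifts reduced coordinates via an orthonormal basis V, and the reduction map is the orthogonal projection onto its span. The sketch below is our own linear illustration of these two ingredients and of an exact-reproduction property, not the manifold framework of the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshots that lie (almost) exactly on a k-dimensional subspace of R^n.
n, k, m = 200, 5, 50
basis_true = np.linalg.qr(rng.standard_normal((n, k)))[0]
snapshots = basis_true @ rng.standard_normal((k, m)) \
            + 1e-6 * rng.standard_normal((n, m))

# POD: reduced basis from the SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :k]

lift = lambda z: V @ z          # embedding of the reduced coordinates
reduce_map = lambda x: V.T @ x  # compatible (orthogonal) reduction map

# States in the reduced subspace are reproduced (up to the tiny noise).
x = snapshots[:, 0]
err = np.linalg.norm(x - lift(reduce_map(x)))
print(err)  # tiny residual
```

In the framework of the talk, the subspace spanned by V is replaced by a smooth submanifold, and the pair (embedding, reduction map) may be nonlinear, which is what enables structure preservation and transport-dominated applications.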
