Day 1 (Tuesday 19 April) @ 13:30–15:00
Joanna Ellis-Monaghan (University of Amsterdam)
The Tutte Polynomial: From Codes to DNA
William Tutte was one of the most influential mathematicians in combinatorics. He was a code-breaker at Bletchley Park, but his work remained classified long after Turing’s story was told. One of the testimonies to his genius is that so many of his ideas proved to be catalysts, triggering further investigations and opening new vistas of mathematics. The Tutte polynomial is one of many such examples in his legacy. Here we will explore the Tutte polynomial, including its universality and multiple definitions, as well as some of its combinatorial and algebraic properties. We will showcase information encoded in the Tutte polynomial as evaluations and specializations, as these inform nearly every aspect of combinatorics and beyond, including codes, DNA, and statistical mechanics models in physics and biology.
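As a small illustration of one of the definitions the abstract alludes to, the sketch below computes the Tutte polynomial of a tiny multigraph by deletion-contraction. This is exponential-time and only meant for toy graphs; the function names are ours, not from the talk.

```python
def shift(p, di, dj):
    """Multiply a polynomial {(i, j): c} (meaning sum c*x^i*y^j) by x^di * y^dj."""
    return {(i + di, j + dj): c for (i, j), c in p.items()}

def add(p, q):
    """Add two polynomials in the same dict representation."""
    r = dict(p)
    for k, c in q.items():
        r[k] = r.get(k, 0) + c
    return r

def contract(edges, u, v):
    """Contract by merging vertex v into u (may create loops and parallel edges)."""
    f = lambda w: u if w == v else w
    return [(f(a), f(b)) for a, b in edges]

def is_bridge(edges, e):
    """True if e's endpoints become disconnected once one copy of e is removed."""
    rest = list(edges)
    rest.remove(e)
    seen, stack = {e[0]}, [e[0]]
    while stack:
        w = stack.pop()
        for a, b in rest:
            for s, t in ((a, b), (b, a)):
                if s == w and t not in seen:
                    seen.add(t)
                    stack.append(t)
    return e[1] not in seen

def tutte(edges):
    """Tutte polynomial of a multigraph given as a list of (u, v) edges.

    Returns {(i, j): c} representing sum c * x^i * y^j.  Exponential time:
    each ordinary edge spawns two subproblems."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                        # loop: T(G) = y * T(G - e)
        return shift(tutte(rest), 0, 1)
    if is_bridge(edges, edges[0]):    # bridge: T(G) = x * T(G / e)
        return shift(tutte(contract(rest, u, v)), 1, 0)
    # ordinary edge: T(G) = T(G - e) + T(G / e)
    return add(tutte(rest), tutte(contract(rest, u, v)))

# Triangle K3: T(K3; x, y) = x^2 + x + y
print(tutte([(1, 2), (2, 3), (1, 3)]))
```

Evaluations then recover the counts mentioned in the abstract: for the triangle, T(1,1) = 3 is the number of spanning trees and T(2,1) = 7 the number of spanning forests.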
Photo credits: Dirk Gillissen
Léo Ducas (Centrum Wiskunde & Informatica)
Arakelov Random Walks, and Application in Cryptography and in Algorithmic Number Theory
During this presentation, I will give an overview of a recent line of work carried out jointly with Koen de Boer, Alice Pellet–Mary and Benjamin Wesolowski. The main object of our study is the space of ideal lattices of a given number field, a space known as the Arakelov class group (up to isometries).
Our central result is a fast-mixing theorem for random walks over this group, assuming an Extended Riemann Hypothesis. A first application is a worst-case to average-case reduction for the short-vector problem over ideal lattices (ideal-SVP), a problem of cryptographic interest. But this theorem is also useful for algorithms in number theory. For example, it allows one to prove formally that power residue symbols can be computed in polynomial time, a result that was previously only heuristic. It should also be useful in eliminating heuristics from sub-exponential algorithms for computing S-unit groups.
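For readers unfamiliar with power residue symbols: the quadratic case is the classical Jacobi symbol, which is already computable in polynomial time by the well-known reciprocity algorithm sketched below. (This is only the degree-2 special case, not the algorithm of the talk, which concerns higher-degree symbols over number fields.)

```python
def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, via quadratic reciprocity.

    Runs in O(log a * log n) bit operations, much like binary gcd."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:            # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):      # (2|n) = -1 iff n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                  # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # gcd(a, n) > 1 gives symbol 0
```

For example, jacobi(5, 21) = 1 and jacobi(7, 15) = -1, matching the products of the corresponding Legendre symbols.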
Karen Aardal (TU Delft)
Topics on the interface between discrete optimization and machine learning
We treat some topics relevant to machine learning and discrete optimization. The most widely used algorithm for solving discrete optimization problems is the tree-search algorithm branch-and-bound (b&b). If a node in the search tree cannot be pruned, one needs to branch. Selecting the variable on which to branch is a crucial part of b&b, as it affects the size of the search tree. We discuss how learning can be used to make selections that are competitive with state-of-the-art methods.
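The b&b loop and the role of the branching variable can be made concrete on a toy 0/1 knapsack. The sketch below (our illustration, not from the talk) bounds each node with the greedy fractional relaxation and branches on the single item that relaxation leaves fractional; learned branching rules would replace exactly that selection step.

```python
def knapsack_bb(values, weights, cap):
    """0/1 knapsack solved by branch-and-bound.

    The bound at each node is the greedy fractional (LP) relaxation; the
    single item it leaves fractional is the natural variable to branch on."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(fixed):
        # Returns (upper bound, fractional item or None) for this node.
        rem = cap - sum(weights[i] for i, x in fixed.items() if x == 1)
        if rem < 0:
            return -1, None                      # infeasible node
        val = sum(values[i] for i, x in fixed.items() if x == 1)
        for i in order:
            if i in fixed:
                continue
            if weights[i] <= rem:
                rem -= weights[i]
                val += values[i]
            elif rem == 0:
                return val, None                 # relaxation already integral
            else:
                return val + values[i] * rem / weights[i], i
        return val, None

    best, stack = 0, [{}]
    while stack:
        fixed = stack.pop()
        ub, frac = bound(fixed)
        if ub <= best:
            continue                             # prune: bound cannot beat incumbent
        if frac is None:
            best = ub                            # integral and feasible: new incumbent
            continue
        # branch on the fractional variable: fix it to 0 and to 1
        stack.append({**fixed, frac: 0})
        stack.append({**fixed, frac: 1})
    return best
```

On the classic instance values (60, 100, 120), weights (10, 20, 30), capacity 50, this returns the optimum 220 after pruning most of the tree.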
We also discuss how optimization can be used in so-called adversarial learning, and some results on the characterization of functions that are representable by neural networks with certain activation functions.
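The optimization problem underlying adversarial examples has a closed form in the simplest case: for a linear classifier, the minimum-norm perturbation that flips the prediction is a step of length |w·x + b| / ||w|| across the decision boundary. The helper below is our toy illustration of that fact, not a method from the talk.

```python
def min_adversarial_perturbation(w, b, x, overshoot=1e-6):
    """Smallest L2 perturbation flipping a linear classifier sign(w.x + b).

    Closed form: move against the margin a distance of |w.x + b| / ||w||,
    plus a tiny overshoot to land strictly on the other side."""
    margin = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm_sq = sum(wi * wi for wi in w)
    scale = -(1 + overshoot) * margin / norm_sq
    return [scale * wi for wi in w]
```

For nonlinear models no such closed form exists, which is what turns the search for adversarial examples into a genuine optimization problem.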
Photo credits: Dori Steeneken