STAR
Day 2 (Wednesday 23 April) @ 13:30–15:00
Moritz Otto (Leiden University)
Central limit theorems for linear eigenvalue statistics of random spatial networks
In this talk, I will discuss central limit theorems for linear eigenvalue statistics of adjacency and Laplacian matrices for different models of random geometric networks built on an underlying Poisson point process. The first model comprises a broad family of stabilizing networks that are of interest in computational geometry, such as Delaunay triangulations and Gabriel graphs. The second model is the random connection model, where edges are placed independently between points with a distance-dependent probability. In the first part, I will consider polynomial test functions. If time permits, I will then explain what additional difficulties arise when general test functions are considered.
The talk is based on joint work with Christian Hirsch (Aarhus) and Kyeongsik Nam (Seoul).
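The two ingredients of the second model can be made concrete in a few lines: a Poisson point process on the unit square, pairwise edges drawn independently with a distance-dependent probability, and a linear eigenvalue statistic for a polynomial test function. The Gaussian connection profile, the intensity, and the scale `r0` below are assumed for illustration, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous Poisson point process of intensity lam on the unit square.
lam = 50.0
n = rng.poisson(lam)
points = rng.uniform(0.0, 1.0, size=(n, 2))

# Random connection model: each pair {x, y} is joined independently with
# probability phi(|x - y|); the Gaussian profile below is an assumed choice.
r0 = 0.15
edges = []
for i in range(n):
    for j in range(i + 1, n):
        r = np.linalg.norm(points[i] - points[j])
        if rng.random() < np.exp(-(r / r0) ** 2):
            edges.append((i, j))

# Adjacency matrix and the linear eigenvalue statistic sum_k f(lambda_k)
# for the polynomial test function f(x) = x^2, which equals tr(A^2) = 2|E|.
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
stat = np.sum(np.linalg.eigvalsh(A) ** 2)
```

For f(x) = x^2 the statistic is just twice the edge count, which is why polynomial test functions reduce to counting local configurations and are the natural first case.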

Bart van Parys (CWI)
Robust Mean Estimation for Optimization: The Impact of Heavy Tails
Most data-driven decision formulations in the literature explicitly assume bounded or light-tailed distributions. However, many real-world phenomena exhibit heavy-tailed distributions, characterized by rare but extreme events that may have a significant impact. In this work, we investigate the performance of sample average approximation and Wasserstein DRO and show that neither offers adequate protection when the associated losses are bounded from the right but regularly varying heavy-tailed from the left. Surprisingly, if the data has finite variance, classical variance regularization does offer such protection, but we show that it is generally conservative. Finally, we show that a judiciously scaled Kullback-Leibler DRO is statistically efficient. We do so by developing an upper bound on the probability that the KL DRO decision disappoints out-of-sample (of independent interest) and show that it asymptotically matches a statistical lower bound obtained through a change-of-measure argument.
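To fix ideas, the KL DRO value admits a standard convex dual representation, inf_{a>0} a·log E_P[exp(loss/a)] + a·radius, which can be evaluated directly from a sample. The sketch below uses a crude grid search over the dual variable for illustration only; the Pareto loss distribution and the radius are assumptions, not choices from the talk.

```python
import numpy as np

def kl_dro_value(losses, radius):
    """Worst-case expected loss over all Q with KL(Q || empirical) <= radius,
    via the standard dual:  inf_{a > 0}  a * log mean(exp(losses / a)) + a * radius.
    A crude grid search over the dual variable a (illustration only)."""
    losses = np.asarray(losses, dtype=float)
    m = losses.max()
    best = np.inf
    for a in np.geomspace(1e-2, 1e3, 400):
        # Shift by the max for a numerically stable log-sum-exp.
        val = m + a * np.log(np.mean(np.exp((losses - m) / a))) + a * radius
        best = min(best, val)
    return best

# Heavy-tailed (Pareto/Lomax) losses with finite variance, assumed for the demo.
samples = np.random.default_rng(1).pareto(3.0, size=2000)
saa = samples.mean()                         # sample average approximation
dro = kl_dro_value(samples, radius=0.05)     # sits above saa: a safety margin
```

For small radii the DRO value behaves like the sample mean plus a term of order sqrt(2·radius·variance), which is one way to see the connection to variance regularization mentioned above.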

Janusz Meylahn (University of Twente)
Can pricing algorithms learn to collude?
Pricing algorithms may be able to learn to work together to raise prices in ways that are legal under current antitrust law. This phenomenon is known as algorithmic collusion. Many simulation studies have shown that reinforcement learning algorithms may be capable of this feat, but the interpretation of these results is debated among economists, legal scholars, computer scientists and mathematicians. A central issue obscuring the debate is the lack of a common definition of what it means for algorithms to learn to collude. In this talk, I will propose such a definition and present results on the collusive capabilities of various algorithms in light of this definition.
Based on joint work with Arnoud den Boer, Maarten Pieter Schinkel, Ibrahim Abada, Joe Harrington and Xavier Lambin.
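The simulation studies referred to above typically pit two independent Q-learners against each other in a repeated pricing game. The toy below is a generic sketch of that setup, not any algorithm studied in the talk: the two-price prisoner's-dilemma payoffs, the learning rates, and the state encoding (last joint action) are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stylized repeated duopoly: each firm picks a LOW (0) or HIGH (1) price.
# payoff[my_action, opponent_action]; undercutting a HIGH rival pays most
# once, but mutual HIGH (the "collusive" outcome) beats mutual LOW.
payoff = np.array([[1.0, 3.0],   # my price LOW
                   [0.0, 2.0]])  # my price HIGH

eps, alpha, gamma = 0.1, 0.1, 0.95
# State = last joint action pair; one Q-table per firm: state x own action.
Q = [np.zeros((2, 2, 2)) for _ in range(2)]
state = (0, 0)
for t in range(20000):
    acts = []
    for i in range(2):                       # epsilon-greedy action choice
        if rng.random() < eps:
            acts.append(int(rng.integers(2)))
        else:
            acts.append(int(np.argmax(Q[i][state])))
    nxt = (acts[0], acts[1])
    for i in range(2):                       # simultaneous Q-updates
        r = payoff[acts[i], acts[1 - i]]
        Q[i][state][acts[i]] += alpha * (r + gamma * Q[i][nxt].max()
                                         - Q[i][state][acts[i]])
    state = nxt
```

Whether runs of this kind that settle on the HIGH/HIGH cell count as "learning to collude" is exactly the definitional question the talk addresses.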
