Conference Agenda
Overview and details of the sessions of this conference.
Agenda Overview

Session: Statistical Inverse Problems

Presentations
Linear methods for non-linear inverse problems
Delft University of Technology; Bocconi University, Italy

We propose a novel Bayesian linearization approach for non-linear PDE-constrained inverse problems. We split the non-linear inverse problem into a linear statistical component and a non-linear analytic component. We derive optimal posterior contraction rates, reliable uncertainty quantification, data-driven tuning, and scalable approximations. The general approach is applied to specific examples, including Darcy flow and the heat equation with an absorption term.

Learning with Heavy Tails
TU Braunschweig, Germany

We examine the performance of ridge regression in reproducing kernel Hilbert spaces in the presence of noise that exhibits only a finite number of higher moments. We establish excess risk bounds consisting of subgaussian and polynomial terms, based on the well-known integral operator framework. The dominant subgaussian component allows us to achieve convergence rates that have previously only been derived under subexponential noise, a prevalent assumption in related work from the last two decades. These rates are optimal under standard eigenvalue decay conditions, demonstrating the asymptotic robustness of regularized least squares against heavy-tailed noise. Our derivations are based on a Fuk-Nagaev inequality for Hilbert space-valued random variables.

Comparing regularisation paths of (conjugate) gradient estimators in ridge regression
Humboldt-Universität zu Berlin, Germany; Aarhus Universitet, Denmark

We consider standard gradient descent, gradient flow and conjugate gradients as iterative algorithms for minimising a penalised ridge criterion in linear regression. While it is well known that conjugate gradients exhibit fast numerical convergence, the statistical properties of their iterates are more difficult to assess due to inherent non-linearities and dependencies. Standard gradient flow, on the other hand, is a linear method with well-known regularising properties when stopped early. By an explicit non-standard error decomposition we are able to bound the prediction error of conjugate gradient iterates by the prediction error of gradient flow at transformed iteration indices. In this way, the risk along the entire regularisation path of conjugate gradient iterations can be compared to the risk along the regularisation paths of standard linear methods such as gradient flow and ridge regression. In particular, the oracle conjugate gradient iterate shares the optimality properties of the gradient flow and ridge regression oracles up to a constant factor. Numerical examples show the similarity of the regularisation paths in practice.
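As an illustrative aside to the first abstract ("Linear methods for non-linear inverse problems"): once the non-linear forward map has been linearized, the remaining statistical task is a linear-Gaussian inverse problem whose posterior is available in closed form. The following minimal sketch is hypothetical, with a made-up forward matrix, noise level and prior scale; it shows only the standard conjugate-Gaussian computation behind such linearizations, not the method of the talk.

    import numpy as np

    # Toy linear-Gaussian inverse problem: y = A u + eps, eps ~ N(0, sigma^2 I).
    # A, sigma and tau are made-up illustration parameters, not from the talk;
    # A stands in for a linearized PDE forward operator.
    rng = np.random.default_rng(0)
    n, d = 50, 10
    A = rng.standard_normal((n, d))          # assumed linearized forward operator
    u_true = rng.standard_normal(d)
    sigma, tau = 0.1, 1.0                    # noise level and prior scale (assumed)
    y = A @ u_true + sigma * rng.standard_normal(n)

    # Conjugate Gaussian posterior with prior u ~ N(0, tau^2 I):
    # Cov = (A^T A / sigma^2 + I / tau^2)^{-1},  mean = Cov A^T y / sigma^2.
    precision = A.T @ A / sigma**2 + np.eye(d) / tau**2
    cov = np.linalg.inv(precision)
    mean = cov @ (A.T @ y) / sigma**2

    # Pointwise credible intervals from the posterior covariance give the
    # kind of uncertainty quantification the abstract refers to.
    std = np.sqrt(np.diag(cov))
    print("posterior mean (first 3):", mean[:3])
    print("95% half-widths (first 3):", 1.96 * std[:3])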

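For the second abstract ("Learning with Heavy Tails"), kernel ridge regression in an RKHS reduces to solving a regularised linear system in the kernel matrix. A minimal sketch, assuming a Gaussian kernel and Student-t noise with three degrees of freedom, so that the variance is finite but the third moment is not; kernel, bandwidth and penalty are illustrative choices, not from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x = np.sort(rng.uniform(-1, 1, n))
    f = np.sin(np.pi * x)                        # assumed target function
    # Student-t noise with df = 3: only moments of order < 3 are finite,
    # a simple instance of the heavy-tailed regime studied in the abstract.
    y = f + 0.3 * rng.standard_t(df=3, size=n)

    def gauss_kernel(a, b, h=0.2):
        # Gaussian kernel matrix between 1-d point sets a and b (h assumed).
        return np.exp(-(a[:, None] - b[None, :])**2 / (2 * h**2))

    lam = 1e-2                                   # ridge penalty (assumed)
    K = gauss_kernel(x, x)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)   # KRR coefficients

    x_test = np.linspace(-1, 1, 5)
    f_hat = gauss_kernel(x_test, x) @ alpha
    print("fit:   ", np.round(f_hat, 3))
    print("truth: ", np.round(np.sin(np.pi * x_test), 3))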

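For the third abstract, the regularisation-path comparison can be mimicked numerically: conjugate gradient iterates for the normal equations act as early-stopped estimators whose prediction errors can be placed next to a ridge path. The sketch below uses a made-up Gaussian design, iteration budget and penalty grid; it only illustrates the comparison, not the paper's error decomposition.

    import numpy as np

    rng = np.random.default_rng(2)
    n, d = 100, 60
    X = rng.standard_normal((n, d))              # assumed design
    b_true = np.zeros(d); b_true[:5] = 1.0
    y = X @ b_true + 0.5 * rng.standard_normal(n)

    G, c = X.T @ X, X.T @ y                      # normal equations G b = c

    def pred_err(b):
        # Empirical prediction error ||X(b - b_true)||^2 / n.
        return np.linalg.norm(X @ (b - b_true))**2 / n

    # Conjugate gradients started at zero; each iterate b_k plays the role
    # of an implicitly regularised (early-stopped) estimator.
    b = np.zeros(d); r = c.copy(); p = r.copy()
    cg_errs = []
    for _ in range(15):
        Gp = G @ p
        a = (r @ r) / (p @ Gp)
        b = b + a * p
        r_new = r - a * Gp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        cg_errs.append(pred_err(b))

    # Ridge path over a grid of penalties, for comparison along the path.
    ridge_errs = [pred_err(np.linalg.solve(G + n * lam * np.eye(d), c))
                  for lam in np.logspace(1, -4, 15)]

    print("best CG prediction error:   ", min(cg_errs))
    print("best ridge prediction error:", min(ridge_errs))

In this toy setting the two oracle errors typically come out close, in line with the abstract's observation that the conjugate gradient oracle matches the gradient flow and ridge oracles up to a constant factor.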