The accurate and efficient numerical solution of partial differential equations (PDEs) remains a cornerstone of computational science and engineering. Classical methods, such as finite element approaches supported by rigorous a priori and a posteriori error estimates, robust tools from numerical linear algebra, and mature software packages, provide a well-established foundation for scientific computing. At the same time, the rise of data-driven techniques, particularly modern machine learning-based approaches such as physics-informed neural networks, deep Ritz methods, neural Galerkin schemes, and neural network enhanced reduced-basis methods, has opened new possibilities for solving PDEs in settings where traditional assumptions fail to hold, the curse of dimensionality conspires against established algorithms, or the sheer computational demand prohibits time-sensitive applications.
The goal of this workshop is to bridge these research directions, uniting the theoretical rigor of numerical mathematics, the stochastic analysis of data inaccuracies, and the flexibility and adaptability of machine learning. By exploring how error estimation, adaptivity, and convergence analysis can inform and enhance machine learning models for PDEs, we hope to foster discussions toward new hybrid computational methods that combine the advantages of each community, i.e., algorithms that are provably reliable, data-aware, and numerically robust and efficient.
Invited speakers
- Randolf Altmeyer
- Claire Boyer
- Martin Burger
- Felix Dietrich
- Andrew B. Duncan
- Melina Freitag
- Silke Glas
- Hanno Gottschalk
- Siddhartha Mishra
- Olga Mula
- Richard Nickl
- Benjamin Peherstorfer
- Philipp Petersen
- Carsten Rockstuhl
- Laura M. Sangalli
- Claudia Schillings