Mar 2 – 4, 2026
Karlsruhe Institute of Technology
Europe/Berlin timezone

Contribution List

20 contributions
  1. Siddhartha Mishra (ETH Zurich)
    3/2/26, 1:15 PM
    Invited talk
  2. Felix Dietrich
    3/2/26, 2:00 PM
    Invited talk

    We discuss a sampling scheme for a data-dependent probability distribution of the parameters of neural networks. Such sampled networks are provably dense in the continuous functions, and have a convergence rate in the number of neurons that is independent of the input dimension. Using sampled neurons as basis functions in an ansatz allows us to use separation-of-variables schemes and effectively...

  3. Hanno Gottschalk (TU Berlin)
    3/2/26, 3:15 PM
    Invited talk

    In this work, we explore the use of compact latent representations with learned time dynamics ('World Models') to simulate physical systems. Drawing on concepts from control theory, we propose a theoretical framework that explains why projecting time slices into a low-dimensional space and then concatenating to form a history ('Tokenization') is so effective at learning physics datasets, and...

  4. Prof. Claire Boyer (Université Paris-Saclay)
    3/2/26, 4:00 PM
    Invited talk

    We will begin by discussing the limitations inherent in the training of Physics-Informed Neural Networks (PINNs), which, despite their conceptual appeal, often face practical challenges (such as convergence issues, sensitivity to hyperparameters, and the need for large data volumes). In a second step, we will recast and characterize the problem of physics-informed learning as a kernel method....

  5. Laurent Navoret (University of Strasbourg, Inria)
    3/2/26, 6:15 PM

    We are interested in numerically solving high-dimensional advection-diffusion equations, such as kinetic equations or parametric problems. Traditional numerical methods suffer from the curse of dimensionality, as the number of degrees of freedom grows exponentially with dimension. Recently, methods based on neural networks have proven effective in reducing the number of degrees of freedom by...

  6. Birgit Hillebrecht (KIT)
    3/2/26, 6:18 PM

    We present two residual-based a posteriori error estimators for physics-informed neural networks (PINNs) that are applicable to the approximation of solutions of partial differential equations (PDEs) on complex geometries. Building on the semigroup-based framework introduced previously, we incorporate the concept of input-to-state stability (ISS), or suitable modifications thereof, to quantify...

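The flavour of such residual-based estimates can be shown on a finite-dimensional stand-in: for a stable linear system $u' = Au$, the error of any surrogate $v$ is bounded by an exponentially weighted integral of its residual $r = v' - Av$, with decay rate given by the logarithmic norm of $A$. In the sketch below, the "PINN" is replaced by a hand-made perturbation of the exact solution; the matrix, perturbation, and grid are invented:

```python
import numpy as np

# Stable linear test system u' = A u, standing in for a PDE semigroup
A = np.array([[-1.0, 1.0], [0.0, -1.5]])
u0 = np.array([1.0, 1.0])
t = np.linspace(0.0, 2.0, 401)
h = t[1] - t[0]

# Exact solution via the eigendecomposition of A (real, distinct eigenvalues)
evals, V = np.linalg.eig(A)
c = np.linalg.solve(V, u0)
U = (V @ (np.exp(np.outer(evals, t)) * c[:, None])).T      # shape (nt, 2)

# Surrogate = exact solution + small smooth perturbation (stands in for a
# trained network); its residual r = v' - A v is available in closed form
pert = 0.01 * np.sin(5.0 * t)[:, None] * np.ones(2)
v = U + pert
r = 0.05 * np.cos(5.0 * t)[:, None] * np.ones(2) - pert @ A.T

# Logarithmic norm of A: largest eigenvalue of its symmetric part
mu = np.linalg.eigvalsh((A + A.T) / 2.0).max()

# A posteriori bound ||e(t)|| <= exp(mu t)||e(0)|| + int_0^t exp(mu(t-s))||r(s)|| ds
rnorm = np.linalg.norm(r, axis=1)
bound = np.zeros_like(t)
for i in range(1, len(t)):
    kern = np.exp(mu * (t[i] - t[: i + 1])) * rnorm[: i + 1]
    bound[i] = h * (kern.sum() - 0.5 * (kern[0] + kern[-1]))  # trapezoid rule

true_err = np.linalg.norm(v - U, axis=1)
```

Since the perturbation vanishes at t = 0, the bound reduces to the residual integral, and the true error stays below it at every grid point.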
  7. Maximilian Siebel (Heidelberg University)
    3/2/26, 6:21 PM

    In many practical and numerical inverse problems, the exact data log-likelihood is not fully accessible, motivating the use of surrogate likelihoods. We study heteroscedastic statistical nonparametric nonlinear inverse problems and establish posterior contraction results when inference is based on a surrogate log-likelihood constructed from proxy error variances and an approximate forward map....

  8. Johannes Müller
    3/2/26, 6:24 PM

    We discuss an “optimize-then-project” approach for applications in scientific machine learning. The key idea is to design algorithms at the infinite-dimensional level and subsequently discretize them in the tangent space of the neural network ansatz, in the spirit of natural-gradient methods. We illustrate this approach in the context of the variational Monte Carlo method for quantum...

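In a much-simplified form, the idea can be demonstrated on a least-squares objective: compute the gradient on function values, then project the update onto the tangent space of a parametric ansatz via a small least-squares solve (Gauss-Newton flavour). The two-parameter ansatz and target below are invented for illustration:

```python
import numpy as np

def model(theta, x):
    """Toy parametric ansatz u_theta(x) = theta[0]*sin(x) + theta[1]*x."""
    return theta[0] * np.sin(x) + theta[1] * x

def jacobian(theta, x, eps=1e-6):
    """Finite-difference Jacobian of the ansatz outputs w.r.t. theta."""
    base = model(theta, x)
    cols = []
    for i in range(len(theta)):
        t = theta.copy()
        t[i] += eps
        cols.append((model(t, x) - base) / eps)
    return np.stack(cols, axis=1)

x = np.linspace(0.0, np.pi, 50)
theta = np.array([0.5, 0.5])

# Function-space gradient of 0.5*||u - u*||^2 is simply the residual u_theta - u*
u_star = np.sin(x)                       # realised exactly by theta = (1, 0)
g = model(theta, x) - u_star

# Project the function-space gradient onto the tangent space of the ansatz:
# solve min_dtheta ||J dtheta - g||^2, then update the parameters
J = jacobian(theta, x)
dtheta, *_ = np.linalg.lstsq(J, g, rcond=None)
theta_new = theta - dtheta
```

Because this toy ansatz is linear in theta, a single projected step lands on the target; for genuine neural network ansätze the same projection is repeated along the optimization path.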
  9. Thomas Schillinger (University of Mannheim)
    3/2/26, 6:27 PM

    We consider hyperbolic partial differential equations (PDEs) with a space-dependent flux function to describe traffic flow dynamics. The PDE is coupled with a stochastic process modeling traffic accidents, thereby capturing the interplay between traffic dynamics and accident occurrence. This framework enables the analysis of accident risk and its consequences in road networks.
    A key aspect of...

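For context, the deterministic backbone of such traffic models is the LWR conservation law $\rho_t + f(\rho)_x = 0$. A Godunov-scheme sketch with the Greenshields flux on a periodic road (the space-dependent flux and the stochastic accident process of the talk are omitted; all numbers are illustrative):

```python
import numpy as np

def flux(rho):
    """Greenshields flux f(rho) = rho * (1 - rho), in normalised units."""
    return rho * (1.0 - rho)

def godunov_flux(rl, rr):
    """Godunov interface flux for the concave Greenshields flux."""
    if rl <= rr:                       # minimise f over [rl, rr]
        return min(flux(rl), flux(rr))
    if rr <= 0.5 <= rl:                # maximise f over [rr, rl]
        return flux(0.5)
    return max(flux(rl), flux(rr))

# Riemann data: free flow upstream of a congested stretch, periodic road
n = 200
dx, dt = 1.0 / n, 0.5 / n              # CFL: dt * max|f'| <= dx
rho = np.where(np.arange(n) < n // 2, 0.2, 0.8)
mass0 = rho.sum() * dx

for _ in range(100):
    # F[i] is the flux through the interface between cells i and i+1
    F = np.array([godunov_flux(rho[i], rho[(i + 1) % n]) for i in range(n)])
    rho = rho - dt / dx * (F - np.roll(F, 1))
```

The conservative update keeps the total number of vehicles fixed, and the monotone Godunov flux keeps densities between the initial extremes.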
  10. Stephan Simonis (ETH Zurich)
    3/2/26, 6:30 PM

    Global well-posedness for three-dimensional fluid flow equations remains a profound open problem. Recent efforts have shifted toward statistical solutions as a robust framework for describing turbulence, yet efficient computational tools to explore these solutions in three dimensions are scarce.
    We develop novel stochastic numerical schemes to compute and analyze statistical solutions for...

  11. Silke Glas (University of Twente)
    3/3/26, 8:30 AM
    Invited talk

    Capturing and preserving physical properties, e.g., system energy, stability, and passivity, using data-driven methods is currently a highly researched topic in surrogate modeling. To ensure that the desired physical properties are retained, structure-preserving projection techniques are used in model order reduction (MOR).
    In this talk, we present structure-preserving MOR with nonlinear...

  12. Benjamin Peherstorfer (Courant Institute of Mathematical Sciences, New York University)
    3/3/26, 9:15 AM
    Invited talk

    Learning models of time-dependent processes that generalize across initial conditions and parameter regimes is a key challenge in machine learning and the computational sciences. For chaotic, turbulent, and stochastic systems, modeling the dynamics of individual trajectories can be exceedingly challenging because trajectories can be erratic and irregular, and in stochastic settings may even be...

  13. Philipp Petersen (Universität Wien)
    3/3/26, 10:30 AM
    Invited talk

    Deep learning methods are increasingly deployed using low-precision arithmetic, primarily driven by memory, energy, and throughput constraints. At the same time, deep neural networks are highly compositional systems, a structure that naturally raises concerns about the amplification and accumulation of numerical errors across layers and operations. Nonetheless, such models are being applied at...

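The error-accumulation concern can be made concrete with a tiny experiment: push the same vector through a stack of orthogonal linear layers in float64 and float16 and track the relative drift between the two precisions, which grows with depth. Layer count, width, and the orthogonal layers are all invented for the demo, and no nonlinearity is used:

```python
import numpy as np

rng = np.random.default_rng(4)

# Random orthogonal layers are norm-preserving, so any drift between the
# two precisions is pure rounding error, accumulated across the composition
dim, depth = 64, 50
layers = [np.linalg.qr(rng.normal(size=(dim, dim)))[0] for _ in range(depth)]

x = rng.normal(size=dim)
h64 = x.copy()
h16 = x.astype(np.float16)

drift = []
for q in layers:
    h64 = q @ h64
    h16 = (q.astype(np.float16) @ h16).astype(np.float16)
    drift.append(np.linalg.norm(h16.astype(np.float64) - h64) / np.linalg.norm(h64))
```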
  14. Andrew Duncan (Imperial College London)
    3/3/26, 11:15 AM
    Invited talk

    In PDE-based inverse problems, only a limited number of sensors can be deployed, so choosing measurement locations is crucial, but the resulting design problem is highly nonconvex. This talk explores how we can lift sensor placement from selecting $B$ points to optimising over probability measures on the design domain, giving a tractable relaxation with a Bayesian interpretation. We then solve...

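Lifting sensor selection to optimisation over measures has a classical finite-dimensional analogue: relax the 0/1 selection of candidate locations to nonnegative weights with a fixed total mass and minimise an A-optimal design criterion. A toy sketch for a linear Gaussian inverse problem (candidate features, budget, noise level, and step size are all invented, and this is not the talk's algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

# Candidate sensors, each giving a linear measurement g_i . theta + noise
n_candidates, dim = 50, 4
G = rng.normal(size=(n_candidates, dim))
budget = 5.0                    # total measurement budget
noise_var = 0.1
prior_prec = np.eye(dim)

def a_optimal(w):
    """A-optimal criterion: trace of the posterior covariance under design w."""
    M = prior_prec + (G.T * (w / noise_var)) @ G
    return np.trace(np.linalg.inv(M))

# Relaxed design: nonnegative weights with fixed total mass, improved by
# projected gradient descent starting from the uniform design
w = np.full(n_candidates, budget / n_candidates)
obj_init = a_optimal(w)
eta = 0.05
for _ in range(200):
    M_inv = np.linalg.inv(prior_prec + (G.T * (w / noise_var)) @ G)
    # d/dw_i trace(M^{-1}) = -g_i^T M^{-2} g_i / noise_var
    grad = -np.einsum("ij,jk,ik->i", G @ M_inv, M_inv, G) / noise_var
    w = np.maximum(w - eta * grad, 0.0)
    w *= budget / w.sum()       # renormalise onto the budget simplex
obj_final = a_optimal(w)
```

The relaxed weights can be read as a (scaled) probability measure over candidate locations, which is the finite-dimensional shadow of the lift described in the abstract.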
  15. Claudia Schillings (FU Berlin)
    3/3/26, 2:00 PM
    Invited talk
  16. Dimitri Konen (University of Cambridge)
    3/3/26, 2:45 PM
    Invited talk

    We consider a Bayesian update procedure to predict future states of infinite-dimensional non-linear dynamical systems. We focus on dissipative systems, in which information is lost exponentially fast over time. While, from an inverse problem perspective, this is expected to make inference difficult, it turns out to be extremely useful from a statistical perspective. When a Gaussian process...

  17. Carsten Rockstuhl
    3/3/26, 4:00 PM
    Invited talk

    Understanding and predicting solutions to Maxwell’s equations lies at the heart of research in optics and photonics. Traditionally, mostly physics-based approaches were used for that purpose, i.e., analytical and, very often, numerical methods. However, over time, we have been accumulating plenty of data on structure-property relations, i.e., we know how a given optical structure responds to...

  18. Claudia Strauch
    3/3/26, 4:45 PM
    Invited talk

    Denoising diffusion models can be interpreted through stochastic dynamics closely related to time-dependent PDEs, yet their practical implementations often rely on truncation heuristics that lack theoretical justification. We study denoising diffusion models driven by reflected diffusions, which naturally confine the dynamics to bounded domains and remove this mismatch between theory and...

  19. Aurélien Castre (University of Cambridge, DPMMS)
    3/4/26, 9:45 AM
    Invited talk

    We consider general parameter to solution maps $\theta \mapsto \mathcal G(\theta)$ of non-linear partial differential equations and describe an approach based on a Banach space version of the implicit function theorem to verify the gradient stability condition of Nickl & Wang (JEMS 2024) for the underlying non-linear inverse problem, providing also injectivity estimates and corresponding...

  20. Melina Freitag (University of Potsdam)
    Invited talk

    We propose a non-intrusive model order reduction technique for stochastic differential equations with additive Gaussian noise. The method extends the operator inference framework and focuses on inferring reduced-order drift and diffusion coefficients by formulating and solving suitable least-squares problems based on observational data. Various subspace constructions based on the available...

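The least-squares idea behind operator inference can be shown on a full-order toy system (no dimension reduction): fit a linear drift matrix to difference quotients of a simulated path and read the diffusion level off the residuals' quadratic variation. The system matrix, noise level, and step size below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground truth: linear SDE dX = A X dt + sigma dW with additive noise
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
sigma = 0.1
dt, n_steps = 1e-3, 100_000

# Euler–Maruyama simulation of one long trajectory
X = np.empty((n_steps + 1, 2))
X[0] = [1.0, -1.0]
for k in range(n_steps):
    X[k + 1] = X[k] + dt * (A @ X[k]) + sigma * rng.normal(0.0, np.sqrt(dt), size=2)

# Drift inference by least squares: (X_{k+1} - X_k)/dt ≈ A X_k
dXdt = (X[1:] - X[:-1]) / dt
A_hat = np.linalg.lstsq(X[:-1], dXdt, rcond=None)[0].T

# Diffusion level from the quadratic variation of the residuals
resid = dXdt - X[:-1] @ A_hat.T
sigma_hat = np.sqrt(dt * resid.var())
```

In the reduced-order setting of the abstract, the same least-squares problems are posed for the projected states rather than the full trajectory.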