We will begin by discussing the limitations inherent in the training of Physics-Informed Neural Networks (PINNs), which, despite their conceptual appeal, often face practical challenges (such as convergence issues, sensitivity to hyperparameters, and the need for large data volumes). We will then recast and characterize the problem of physics-informed learning as a kernel method....
In this work, we explore the use of compact latent representations with learned time dynamics ('World Models') to simulate physical systems. Drawing on concepts from control theory, we propose a theoretical framework that explains why projecting time slices into a low-dimensional space and then concatenating to form a history ('Tokenization') is so effective at learning physics datasets, and...
We are interested in numerically solving high-dimensional advection-diffusion equations, such as kinetic equations or parametric problems. Traditional numerical methods suffer from the curse of dimensionality, as the number of degrees of freedom grows exponentially with dimension. Recently, methods based on neural networks have proven effective in reducing the number of degrees of freedom by...
We present two residual‑based a posteriori error estimators for physics‑informed neural networks (PINNs) that are applicable to the approximation of solutions of partial differential equations (PDEs) on complex geometries. Building on the semigroup‑based framework introduced previously, we incorporate the concept of input‑to‑state stability (ISS), or suitable modifications thereof, to quantify...
In many practical and numerical inverse problems, the exact data log-likelihood is not fully accessible, motivating the use of surrogate likelihoods. We study heteroscedastic statistical nonparametric nonlinear inverse problems and establish posterior contraction results when inference is based on a surrogate log-likelihood constructed from proxy error variances and an approximate forward map....
We discuss an “optimize-then-project” approach for applications in scientific machine learning. The key idea is to design algorithms at the infinite-dimensional level and subsequently discretize them in the tangent space of the neural network ansatz, in the spirit of natural gradient methods. We illustrate this approach in the context of the variational Monte Carlo method for quantum...
We consider hyperbolic partial differential equations (PDEs) with a space-dependent flux function to describe traffic flow dynamics. The PDE is coupled with a stochastic process modeling traffic accidents, thereby capturing the interplay between traffic dynamics and accident occurrence. This framework enables the analysis of accident risk and its consequences in road networks.
A key aspect of...
Global well-posedness for three-dimensional fluid flow equations remains a profound open problem. Recent efforts have shifted toward statistical solutions as a robust framework for describing turbulence, yet efficient computational tools to explore these solutions in three dimensions are scarce.
We develop novel stochastic numerical schemes to compute and analyze statistical solutions for...
Deep learning methods are increasingly deployed using low-precision arithmetic, primarily driven by memory, energy, and throughput constraints. At the same time, deep neural networks are highly compositional systems, a structure that naturally raises concerns about the amplification and accumulation of numerical errors across layers and operations. Nonetheless, such models are being applied at...
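The compounding of rounding error across many operations mentioned above can be made concrete with a minimal experiment (an illustrative sketch, not taken from the talk): accumulating many small values in half precision, where every partial sum is rounded, eventually stalls entirely once the running sum's floating-point spacing exceeds the size of the increments.

```python
import numpy as np

# Illustrative sketch: repeated rounding in float16 accumulation.
# Once the running sum grows large enough, the half-precision spacing
# exceeds the typical increment and further additions are lost.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1e-3, size=100_000)   # many small positive values

sum16 = np.float16(0.0)
for v in x.astype(np.float16):
    sum16 = np.float16(sum16 + v)          # every partial sum rounded to float16

sum64 = float(np.sum(x.astype(np.float64)))
rel_err = abs(float(sum16) - sum64) / sum64
print(f"float64 sum: {sum64:.4f}, float16 sum: {float(sum16):.4f}, "
      f"relative error: {rel_err:.1%}")
```

In a deep network the analogous effect plays out layer by layer, which is why the compositional structure raises the accumulation concerns described above.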
In PDE-based inverse problems, only a limited number of sensors can be deployed, so choosing measurement locations is crucial, but the resulting design problem is highly nonconvex. This talk explores how we can lift sensor placement from selecting B points to optimising over probability measures on the design domain, giving a tractable relaxation with a Bayesian interpretation. We then solve...
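One way to make the lifted formulation concrete (an illustrative sketch under simplifying assumptions, not the speaker's algorithm: finitely many candidate locations, a linearized forward map, and a classical D-optimality criterion) is to replace the combinatorial choice of sensor locations with a probability vector over candidates and optimize it with the standard multiplicative update for D-optimal design.

```python
import numpy as np

# Illustrative sketch: relax "pick k of n candidate sensor locations"
# to a design measure w on the candidates, and maximize the D-optimality
# criterion log det(G^T diag(w) G) via the classical multiplicative update.
rng = np.random.default_rng(1)
n, p = 50, 3                        # candidate locations, unknown parameters
G = rng.standard_normal((n, p))     # sensitivity of each measurement to the parameters

w = np.full(n, 1.0 / n)             # uniform initial design measure
for _ in range(200):
    M = G.T @ (w[:, None] * G)                            # information matrix M(w)
    s = np.einsum("ij,jk,ik->i", G, np.linalg.inv(M), G)  # leverages g_i^T M(w)^{-1} g_i
    w *= s / p                                            # multiplicative update
    w /= w.sum()                                          # renormalize onto the simplex

# the largest weights indicate promising physical sensor locations
top = np.argsort(w)[::-1][:5]
print("top candidates:", top, "weights:", np.round(w[top], 3))
```

The relaxation is tractable because the objective is concave in the design measure, and the resulting weights admit the Bayesian reading mentioned above: the design is a probability distribution over where to measure.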
Capturing and preserving physical properties, e.g., system energy, stability, and passivity, using data-driven methods is currently a highly researched topic in surrogate modeling. To ensure that the desired physical properties are retained, structure-preserving projection techniques are used in model order reduction (MOR).
In this talk, we present structure-preserving MOR with nonlinear...
Learning models of time-dependent processes that generalize across initial conditions and parameter regimes is a key challenge in machine learning and the computational sciences. For chaotic, turbulent, and stochastic systems, modeling the dynamics of individual trajectories can be exceedingly difficult because trajectories can be erratic and irregular, and in stochastic settings may even be...
We consider a Bayesian update procedure to predict future states of infinite-dimensional non-linear dynamical systems. We focus on dissipative systems, in which information is lost exponentially fast over time. While this is expected to make inference difficult from an inverse-problem perspective, it turns out to be extremely useful statistically. When a Gaussian process...
We propose a non-intrusive model order reduction technique for stochastic differential equations with additive Gaussian noise. The method extends the operator inference framework and focuses on inferring reduced-order drift and diffusion coefficients by formulating and solving suitable least-squares problems based on observational data. Various subspace constructions based on the available...
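The least-squares inference of drift and diffusion coefficients from observational data can be sketched in a toy linear setting (an illustrative example under simplifying assumptions, not the speakers' method verbatim): given a trajectory of a reduced state obeying dx = Ax dt + σ dW, regress the Euler–Maruyama increments on the state to recover A, and read off σ from the residual variance.

```python
import numpy as np

# Illustrative sketch: recover drift A and scalar diffusion sigma of a
# linear SDE  dx = A x dt + sigma dW  from trajectory data by least squares.
rng = np.random.default_rng(2)
A_true = np.array([[-1.0, 0.5], [-0.5, -2.0]])
sigma_true, dt, n_steps = 0.1, 1e-3, 200_000

x = np.zeros((n_steps + 1, 2))
x[0] = [1.0, -1.0]
for k in range(n_steps):  # simulate the "observational data"
    x[k + 1] = x[k] + dt * x[k] @ A_true.T \
             + sigma_true * np.sqrt(dt) * rng.standard_normal(2)

dX = np.diff(x, axis=0)                          # increments x_{k+1} - x_k
# least-squares drift: solve (dt * x_k) B = dX, so B estimates A^T
A_hat = np.linalg.lstsq(dt * x[:-1], dX, rcond=None)[0].T
resid = dX - dt * x[:-1] @ A_hat.T
sigma_hat = np.sqrt(resid.var() / dt)            # diffusion from residual variance
print("A_hat:\n", np.round(A_hat, 2), "\nsigma_hat:", round(float(sigma_hat), 3))
```

In the operator inference setting the same regression is posed for the reduced-order coefficients after projecting the snapshots onto a suitable subspace; the sketch above only illustrates the least-squares structure of that step.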
Denoising diffusion models can be interpreted through stochastic dynamics closely related to time-dependent PDEs, yet their practical implementations often rely on truncation heuristics that lack theoretical justification. We study denoising diffusion models driven by reflected diffusions, which naturally confine the dynamics to bounded domains and remove this mismatch between theory and...
We consider general parameter to solution maps $\theta \mapsto \mathcal G(\theta)$ of non-linear partial differential equations and describe an approach based on a Banach space version of the implicit function theorem to verify the gradient stability condition of Nickl & Wang (JEMS 2024) for the underlying non-linear inverse problem, providing also injectivity estimates and corresponding...
Understanding and predicting solutions to Maxwell’s equations lies at the heart of research in optics and photonics. Traditionally, this was done mostly with physics-based approaches, i.e., analytical and, very often, numerical methods. However, over time, we have been accumulating plenty of data on structure-property relations, i.e., we know how a given optical structure responds to...