Description
We discuss an “optimize-then-project” approach for applications in scientific machine learning. The key idea is to design algorithms at the infinite-dimensional level and subsequently discretize them in the tangent space of the neural network ansatz, in the spirit of natural gradient methods. We illustrate this approach in the context of the variational Monte Carlo method for quantum many-body problems, where neural quantum states have recently emerged as powerful representations of high-dimensional wavefunctions. In this setting, we recover the celebrated stochastic reconfiguration algorithm, interpreting it as a projected Riemannian $L^2$ gradient descent method. We further explore extensions to Riemannian Newton methods, and conclude with considerations on the scalability of these schemes.
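For context, the stochastic reconfiguration update admits the following standard formulation (notation is generic, not taken from the talk): given a parametric wavefunction $\psi_\theta$ sampled from $|\psi_\theta|^2$, with log-derivatives $O_i(x) = \partial_{\theta_i} \log \psi_\theta(x)$ and local energy $E_{\mathrm{loc}}(x) = (H\psi_\theta)(x)/\psi_\theta(x)$, one forms

$$
S_{ij} = \mathbb{E}\!\left[O_i^{*} O_j\right] - \mathbb{E}\!\left[O_i^{*}\right]\mathbb{E}\!\left[O_j\right], \qquad
F_i = \mathbb{E}\!\left[O_i^{*} E_{\mathrm{loc}}\right] - \mathbb{E}\!\left[O_i^{*}\right]\mathbb{E}\!\left[E_{\mathrm{loc}}\right],
$$

and updates $\theta \leftarrow \theta - \eta\, S^{-1} F$ with step size $\eta$. Here $S$ is the Gram (overlap) matrix of the tangent vectors $\partial_{\theta_i}\psi_\theta$, so applying $S^{-1}$ projects the $L^2$ gradient onto the tangent space of the ansatz, which is the sense in which the method is a projected Riemannian $L^2$ gradient descent.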