Speaker
Felix Dietrich
Description
We discuss a sampling scheme for a data-dependent probability distribution of the parameters of neural networks. Such sampled networks are provably dense in the space of continuous functions, and have a convergence rate in the number of neurons that is independent of the input dimension. Using sampled neurons as basis functions in an ansatz allows us to apply separation-of-variables schemes and effectively solve time-dependent partial differential equations. In computational experiments comparing training speed and accuracy, the sampling scheme outperforms iterative, gradient-based optimization by several orders of magnitude.
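The following is a minimal illustrative sketch of the general idea, not necessarily the exact scheme presented in the talk: hidden-layer weights are sampled from a data-dependent distribution (here, constructed from pairs of training inputs), so no gradient-based training of the hidden layer is needed, and only the linear output layer is fitted, via a single least-squares solve. All function and variable names are chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_network(x, y, n_neurons):
    """Build a one-hidden-layer tanh network with sampled parameters.

    Hidden weights point from one training input to another, giving a
    data-dependent parameter distribution; only the output layer is
    optimized, by a least-squares solve instead of gradient descent.
    """
    n, d = x.shape
    idx1 = rng.integers(0, n, n_neurons)
    idx2 = rng.integers(0, n, n_neurons)
    diff = x[idx2] - x[idx1]                        # directions between data pairs
    norm2 = np.maximum(np.sum(diff**2, axis=1), 1e-12)
    w = diff / norm2[:, None]                       # weights scaled by pair distance
    b = -np.sum(w * x[idx1], axis=1)                # center each neuron on the pair
    h = np.tanh(x @ w.T + b)                        # hidden-layer features
    coeffs, *_ = np.linalg.lstsq(h, y, rcond=None)  # fit linear output layer
    return w, b, coeffs

def predict(x, w, b, coeffs):
    return np.tanh(x @ w.T + b) @ coeffs

# Usage: approximate a 1-D function with 200 sampled neurons.
x = np.linspace(-3, 3, 400)[:, None]
y = np.sin(2 * x[:, 0]) + 0.5 * x[:, 0]
w, b, c = sample_network(x, y, 200)
print("max abs error:", np.max(np.abs(predict(x, w, b, c) - y)))
```

Because the sampled hidden neurons are fixed, the whole "training" step reduces to one linear solve; this is what makes the comparison against iterative, gradient-based optimization a speed contest the sampling scheme can win by orders of magnitude.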