Mathematical Finance Seminar
Date:
Time: 17:15
Location: TUB, MA 043
Mathieu Laurière (NYU Shanghai)

An Efficient On-Policy Deep Learning Framework for Stochastic Optimal Control

We present a novel on-policy algorithm for solving stochastic optimal control (SOC) problems. By leveraging the Girsanov theorem, our method directly computes on-policy gradients of the SOC objective without expensive backpropagation through stochastic differential equations or adjoint problem solutions. This approach significantly accelerates the optimization of neural network control policies while scaling efficiently to high-dimensional problems and long time horizons. We evaluate our method on classical SOC benchmarks as well as applications to sampling from unnormalized distributions via Schrödinger-Föllmer processes and fine-tuning pre-trained diffusion models. Experimental results demonstrate substantial improvements in both computational speed and memory efficiency compared to existing approaches. Joint work with Mengjian Hua and Eric Vanden-Eijnden.
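
The abstract describes the gradient computation only at a high level. Below is a minimal, self-contained sketch, not the speakers' implementation, of the general idea it alludes to: for a control-affine SDE with quadratic control cost, trajectories are simulated under the current policy without gradient tracking, and the parameter gradient is recovered from a Girsanov likelihood-ratio surrogate, so no backpropagation through the SDE solver is required. The dynamics, costs, network architecture, and hyperparameters below are illustrative assumptions only.

# Sketch of a Girsanov-based on-policy gradient for
#   dX_t = (b(X_t) + sigma * u_theta(X_t, t)) dt + sigma dB_t,
#   J(theta) = E[ int_0^T ( f(X_t) + 0.5 |u_theta(X_t, t)|^2 ) dt + g(X_T) ].
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, T, n_steps, batch, sigma = 2, 1.0, 50, 256, 1.0
dt = T / n_steps

b = lambda x: -x                      # reference drift (toy assumption)
f = lambda x: 0.5 * (x ** 2).sum(-1)  # running state cost (toy assumption)
g = lambda x: (x ** 2).sum(-1)        # terminal cost (toy assumption)

policy = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(), nn.Linear(64, dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def u(x, t):
    # Feedback control u_theta(x, t) from a small neural network.
    tt = torch.full_like(x[..., :1], t)
    return policy(torch.cat([x, tt], dim=-1))

for it in range(200):
    # 1) Simulate on-policy trajectories with gradients detached
    #    (no computation graph through the SDE integration).
    with torch.no_grad():
        xs, dBs = [], []
        x = torch.zeros(batch, dim)
        for k in range(n_steps):
            dB = torch.randn(batch, dim) * dt ** 0.5
            xs.append(x)
            dBs.append(dB)
            x = x + (b(x) + sigma * u(x, k * dt)) * dt + sigma * dB
        x_T = x

    # 2) Re-evaluate the control on the frozen paths and build the
    #    likelihood-ratio (Girsanov) surrogate.
    run_cost = torch.zeros(batch)   # int f dt + 0.5 int |u|^2 dt
    girsanov = torch.zeros(batch)   # int u_theta(X_t, t) . dB_t
    for k in range(n_steps):
        uk = u(xs[k], k * dt)
        run_cost = run_cost + (f(xs[k]) + 0.5 * (uk ** 2).sum(-1)) * dt
        girsanov = girsanov + (uk * dBs[k]).sum(-1)
    total_cost = run_cost + g(x_T)

    # The theta-gradient of this surrogate is an unbiased estimate of grad J:
    # grad [ 0.5 int |u|^2 dt ] + stop_grad(total cost) * grad [ int u . dB ].
    surrogate = (run_cost + total_cost.detach() * girsanov).mean()

    opt.zero_grad()
    surrogate.backward()
    opt.step()

The surrogate follows from rewriting the controlled path measure against an uncontrolled reference measure via Girsanov's theorem, so that only the Radon-Nikodym factor and the quadratic control cost depend on the policy parameters; whether this matches the estimator presented in the talk is not established by the abstract.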