Mathematical Finance Seminar

Date:
Time: 16:00
Location: HUB, RUD 25, 1.115
Speaker: Yilie Huang (Columbia U)
Title: Mean-Variance Portfolio Selection by Continuous-Time Reinforcement Learning: Algorithms, Regret Analysis, and Empirical Study

Abstract: We study continuous-time mean-variance portfolio selection in markets where stock prices are diffusion processes driven by observable factors that are themselves diffusion processes, while the coefficients of all these processes are unknown. Building on the recently developed reinforcement learning (RL) theory for diffusion processes, we present a general data-driven RL algorithm that learns the pre-committed investment strategy directly, without attempting to learn or estimate the market coefficients. For multi-stock Black-Scholes markets without factors, we further devise a baseline algorithm and prove a performance guarantee by deriving a sublinear regret bound in terms of the Sharpe ratio. For performance enhancement and practical implementation, we modify the baseline algorithm and carry out an extensive empirical study to compare its performance, across a host of common metrics, with a large number of widely used portfolio allocation strategies on S&P 500 constituents. The results demonstrate that the proposed continuous-time RL strategy is consistently among the best, especially in a volatile bear market, and decisively outperforms its model-based continuous-time counterparts by significant margins.
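The abstract centers on a model-free idea: learn the mean-variance strategy directly from market data, with no estimation of drift or volatility. As a rough illustration of that idea, and emphatically not the speaker's algorithm (which is developed for continuous-time diffusion processes with regret guarantees), the following minimal Python sketch trains a Gaussian exploratory policy by a REINFORCE-style score-function update on a simulated Black-Scholes market; every name and parameter in it is an assumption of this sketch.

```python
# Illustrative toy only -- NOT the algorithm presented in the talk. It shows,
# under heavy simplifications, what "learning the strategy without estimating
# the market coefficients" can look like: a REINFORCE-style learner trades a
# simulated Black-Scholes stock whose drift/volatility it never observes, and
# tunes a Gaussian (exploratory) dollar-allocation policy toward the
# pre-committed mean-variance surrogate min E[(X_T - w)^2], with the scalar w
# updated by a dual step to enforce the target mean E[X_T] = z.
import numpy as np

rng = np.random.default_rng(0)

# Hidden market: the learner sees only simulated wealth paths, never mu/sigma.
mu, sigma = 0.08, 0.20        # unknown drift and volatility of the single stock
T, n_steps = 1.0, 50          # investment horizon and rebalancing grid
dt = T / n_steps

def rollout(m, log_s):
    """One episode under a state-independent Gaussian policy N(m, exp(log_s)^2).

    Returns terminal wealth plus accumulated score-function gradients, which is
    all REINFORCE needs. (A state-independent policy is a simplification: the
    true pre-committed mean-variance control is wealth-dependent.)
    """
    x, s = 1.0, np.exp(log_s)
    g_m = g_ls = 0.0
    for _ in range(n_steps):
        u = rng.normal(m, s)                  # sampled dollar amount in the stock
        g_m += (u - m) / s**2                 # d/dm        log N(u; m, s)
        g_ls += (u - m)**2 / s**2 - 1.0       # d/d(log s)  log N(u; m, s)
        x += u * (mu * dt + sigma * rng.normal(0.0, np.sqrt(dt)))  # wealth update
    return x, g_m, g_ls

z = 1.05                     # target expected terminal wealth
w = z                        # dual variable of the mean constraint
m, log_s = 0.0, np.log(0.5)  # policy parameters: mean and log-std of allocation
lr, lr_w, batch = 1e-3, 0.05, 64

for it in range(3000):
    rolls = [rollout(m, log_s) for _ in range(batch)]
    xs = np.array([r[0] for r in rolls])
    rew = -(xs - w) ** 2                      # episodic mean-variance surrogate reward
    adv = rew - rew.mean()                    # baseline-subtracted advantage
    m += lr * np.mean([r[1] for r in rolls] * adv)
    log_s += lr * np.mean([r[2] for r in rolls] * adv)
    w += lr_w * (z - xs.mean())               # dual ascent toward E[X_T] = z

print(f"learned allocation ~ N({m:.3f}, {np.exp(log_s):.3f}^2), "
      f"mean terminal wealth {xs.mean():.3f} (target {z})")
```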
Mathematical Finance Seminar

Date:
Time: 17:00
Location: HUB, RUD 25, 1.115
Speaker: Sam Cohen (Oxford)