Learn Continuously, Act Discretely: Hybrid Action-Space Reinforcement Learning For Optimal Execution

Feiyang Pan, Tongzhe Zhang, Ling Luo, Jia He and Shuoling Liu.
In IJCAI 2022.

Abstract:

Optimal execution is a sequential decision-making problem for cost-saving in algorithmic trading. Studies have found that reinforcement learning (RL) can help decide the order-splitting sizes. However, one problem remains unsolved: how to place limit orders at appropriate limit prices?

The key challenge lies in the "continuous-discrete duality" of the action space. On the one hand, to generalize well, a continuous action space using percentage changes in prices is preferred. On the other hand, the trader eventually needs to choose limit prices discretely due to the existence of the tick size, which requires specialization for every single stock with different characteristics (e.g., liquidity and price range). So we need continuous control for generalization and discrete control for specialization. To this end, we propose a hybrid RL method that combines the advantages of both. We first use a continuous control agent to scope an action subset, then deploy a fine-grained agent to choose a specific limit price. Extensive experiments show that our method has higher sample efficiency and better training stability than existing RL algorithms, and significantly outperforms previous learning-based methods for order execution.
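The two-stage action selection described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the policy functions (`continuous_policy`, `discrete_q`) are hypothetical placeholders, and the candidate-set construction (snapping a proposed percentage offset to the tick grid and scoring a few neighbouring ticks) is an assumption about how the continuous proposal and the discrete choice could be combined.

```python
import numpy as np

def continuous_policy(state):
    # Placeholder for the continuous agent: outputs a percentage price
    # offset relative to the mid price (here a fixed -0.2% for illustration).
    return -0.002

def discrete_q(state, candidate_prices):
    # Placeholder for the fine-grained discrete agent: scores each
    # candidate limit price (here it simply prefers prices near the mid).
    mid = state["mid_price"]
    return -np.abs(np.array(candidate_prices) - mid)

def choose_limit_price(state, tick_size=0.01, n_ticks=2):
    """Hybrid action selection sketch:
    1) the continuous agent proposes a relative price change,
    2) the proposal is snapped to the tick grid, and the neighbouring
       ticks form a small discrete action subset,
    3) the discrete agent picks one specific limit price from the subset."""
    mid = state["mid_price"]
    target = mid * (1.0 + continuous_policy(state))
    center = round(target / tick_size) * tick_size
    candidates = [round(center + k * tick_size, 10)
                  for k in range(-n_ticks, n_ticks + 1)]
    scores = discrete_q(state, candidates)
    return candidates[int(np.argmax(scores))]

price = choose_limit_price({"mid_price": 10.00})
print(price)
```

The continuous stage keeps the policy transferable across stocks (it reasons in percentage terms), while the discrete stage specializes to each stock's tick grid, matching the generalization/specialization split motivated in the abstract.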

Download: [PDF]