How Monte Carlo Tree Search works, part 3 (Machine Learning) | by Monodeep Mukherjee | Apr 2024


1. Verified Multi-Step Synthesis using Large Language Models and Monte Carlo Tree Search (arXiv)

Authors: David Brandfonbrener, Sibi Raja, Tarun Prasad, Chloe Loughridge, Jianang Yang, Simon Henniger, William E. Byrd, Robert Zinkov, Nada Amin

Abstract: We present an approach using Monte Carlo Tree Search (MCTS) to guide Large Language Models (LLMs) to generate verified programs in Dafny, Lean, and Coq. Our method, which we call VMCTS, leverages the verifier inside the search algorithm by checking partial programs at each step. In combination with the LLM prior, the verifier feedback raises the synthesis capabilities of open-source models. On a set of five verified programming problems, we find that in four problems where the base model cannot solve the question even when re-sampling solutions for one hour, VMCTS can solve the problems within 6 minutes. The base model with VMCTS is even competitive with ChatGPT4 augmented with plugins and multiple re-tries on these problems. Our code and benchmarks are available at https://github.com/namin/llm-verified-with-monte-carlo-tree-search.
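
The verifier-in-the-loop idea can be sketched in a few lines. Below is a minimal, hypothetical sketch, not the authors' VMCTS implementation: `llm_propose` and `verifier_check` are assumed placeholders for an LLM continuation sampler and a Dafny/Lean/Coq checker, and the search is a plain UCT loop in which the verifier's score on a partial program replaces the usual random rollout.

```python
import math

def llm_propose(partial_program):
    """Hypothetical placeholder: sample candidate continuations from an LLM prior."""
    raise NotImplementedError

def verifier_check(partial_program):
    """Hypothetical placeholder: run the verifier on a partial program.
    Return +1 if it verifies so far, -1 if it can no longer verify, 0 if inconclusive."""
    raise NotImplementedError

class Node:
    def __init__(self, program, parent=None):
        self.program = program      # partial program text
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

    def ucb(self, c=1.4):
        # Standard UCT score; unvisited children are explored first.
        if self.visits == 0:
            return float("inf")
        exploit = self.value / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def vmcts_step(root_program, iterations=100):
    root = Node(root_program)
    for _ in range(iterations):
        # Selection: walk down by UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion: the LLM prior proposes continuations of the partial program.
        for continuation in llm_propose(node.program):
            node.children.append(Node(node.program + continuation, parent=node))
        # Evaluation: the verifier scores each partial program (no random rollout).
        for child in node.children:
            reward = verifier_check(child.program)
            # Backpropagation: push the verifier signal up to the root.
            n = child
            while n is not None:
                n.visits += 1
                n.value += reward
                n = n.parent
    # Commit to the most-visited continuation as the next verified step.
    return max(root.children, key=lambda c: c.visits).program
```

In this sketch the verifier acts as the value function: partial programs that already fail verification are penalized immediately, so the search concentrates its LLM sampling budget on branches that can still verify.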

2. RiskMiner: Discovering Formulaic Alphas via Risk Seeking Monte Carlo Tree Search (arXiv)

Authors: Tao Ren, Ruihan Zhou, Jinyang Jiang, Jiafeng Liang, Qinghao Wang, Yijie Peng

Abstract: Formulaic alphas are mathematical formulas that transform raw stock data into indicative signals. In industry, a collection of formulaic alphas is combined to enhance modeling accuracy. Existing alpha mining approaches employ only a neural-network agent and cannot utilize the structural information of the solution space. Moreover, they do not consider the correlation between alphas in the collection, which limits synergistic performance. To address these problems, we propose a novel alpha mining framework that formulates alpha mining as a reward-dense Markov Decision Process (MDP) and solves the MDP with risk-seeking Monte Carlo Tree Search (MCTS). The MCTS-based agent fully exploits the structural information of the discrete solution space, and the risk-seeking policy explicitly optimizes best-case performance rather than average outcomes. Comprehensive experiments are conducted to demonstrate the efficiency of our framework. Our method outperforms all state-of-the-art benchmarks on two real-world stock sets under various metrics. Backtest experiments show that our alphas achieve the most profitable results under a realistic trading setting.
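
To make the risk-seeking idea concrete, here is a minimal sketch, not the paper's code, of how MCTS node statistics can be changed so that selection favors best-case rather than average outcomes. `evaluate_alpha` is an assumed placeholder for backtesting a candidate formulaic alpha, and the node value is an upper quantile of observed rewards instead of their mean.

```python
import math
import numpy as np

def evaluate_alpha(expression_tokens):
    """Hypothetical placeholder: backtest the alpha expression on stock data
    and return a score such as its information coefficient."""
    raise NotImplementedError

class RiskSeekingNode:
    def __init__(self, tokens, parent=None):
        self.tokens = tokens        # partial alpha expression (the MDP state)
        self.parent = parent
        self.children = []
        self.returns = []           # every reward observed in this subtree

    def risk_seeking_value(self, quantile=0.85):
        # A mean-based MCTS would use np.mean(self.returns); a risk-seeking
        # policy instead scores a node by an upper quantile of its returns,
        # so the search chases best-case alphas rather than average ones.
        return float(np.quantile(self.returns, quantile)) if self.returns else 0.0

    def uct(self, c=1.4):
        if not self.returns:
            return float("inf")
        explore = c * math.sqrt(math.log(len(self.parent.returns)) / len(self.returns))
        return self.risk_seeking_value() + explore

def backup(node, reward):
    # Record the reward along the whole path so quantile estimates stay current.
    while node is not None:
        node.returns.append(reward)
        node = node.parent
```

Selection would then pick children by `uct`, and a completed expression would be scored once with `evaluate_alpha` and propagated upward with `backup`; the only change from vanilla MCTS is which statistic of the stored returns the agent maximizes.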
