Comment by mlmonkey
3 days ago
As a "tradional" ML guy who missed out on learning about RL in school, I'm confused about how to use RL in "traditional" problems.
Take, for example, a typical binary classifier with a BCE loss. Suppose I wanted to shoehorn RL onto this: how would I do that?
Or, for example, the House Value problem (given a set of features about a house for sale, predict its expected sale value). How would I slap RL onto that?
I guess my confusion comes from how the losses are hooked up. I know how traditional losses (BCE, RMSE, etc.) work, but how do you bring an RL loss into these problems?
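To make the question concrete, here's the kind of thing I imagine (my own sketch, possibly misguided): treat the binary classifier as a one-step "contextual bandit," sample a label from the model instead of computing BCE, hand out a reward of 1 for a correct guess, and update with REINFORCE. The data and learning rate are made up for illustration.

```python
# Toy sketch: binary classification shoehorned into RL. The "policy"
# is a logistic model; we sample an action (label), reward correct
# guesses, and apply the REINFORCE update
#   grad = (reward - baseline) * grad log p(action | x)
import numpy as np

rng = np.random.default_rng(0)

# synthetic, linearly separable data (made up for illustration)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

w = np.zeros(2)
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    p = sigmoid(X @ w + b)                      # P(action = 1 | x)
    a = (rng.random(len(p)) < p).astype(int)    # sample actions from the policy
    r = (a == y).astype(float)                  # reward: 1 if correct, else 0
    adv = r - r.mean()                          # mean-reward baseline cuts variance
    # for a Bernoulli policy, d/dz log p(a|x) = a - p, with z = w.x + b
    w += lr * ((adv * (a - p)) @ X) / len(X)
    b += lr * (adv * (a - p)).mean()

acc = ((sigmoid(X @ w + b) > 0.5).astype(int) == y).mean()
```

This does learn the classifier, but notice it only ever sees a 0/1 reward where BCE would see the full probability, which is exactly the "less informative signal" trade-off I'm asking about.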
RL is a technique for finding an optimal policy for a Markov decision process. If you can define state and action spaces for a sequential decision problem with uncertain outcomes, then reinforcement learning is typically a pretty good way of finding a function mapping states to actions, assuming the problem isn't small enough that an exact solution exists.
I don't really see why you would want to use it for binary classification or continuous predictive modeling. RL excels in game play and operational control precisely because those settings force you to make decisions now that constrain the decisions available later, while the outcome stays unknown until that future arrives, and even then you can't cleanly attribute causality to it. This isn't "hot dog/not a hot dog," where each classification has an unambiguously correct answer and is directly either right or wrong. In RL, a decision made early in a game probably contributes causally to an outcome somewhere down the line, but the exact extent of any single action's contribution is unknown and in many cases probably unknowable.
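To make the credit-assignment point concrete, here's a tiny tabular Q-learning toy (my own sketch, not anything canonical): a five-state chain where the only reward arrives at the far end, so earlier actions get credit only indirectly, through bootstrapped value estimates flowing backward.

```python
# Tabular Q-learning on a 5-state chain. Reward appears only on
# reaching the goal state; Q-values propagate that delayed credit
# back to earlier actions via the bootstrapped update.
import numpy as np

n_states = 5                        # states 0..4; state 4 is the goal
Q = np.ones((n_states, 2))          # optimistic init encourages exploration
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(1)

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action choice: 0 = left, 1 = right
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0    # reward only at the goal
        target = r if s2 == n_states - 1 else r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)           # learned action in each state
```

After training, the policy is "go right" everywhere, even in states that never saw a nonzero reward directly; that indirect propagation is what RL buys you, and it's exactly what a per-example supervised loss doesn't need.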
Three considerations come into play when deciding whether to use RL: 1) how informative is the loss on each example, 2) can you see how to adjust the model based on the loss signal, and 3) how complex is the feature space?
For the house value problem, you can quantify how far the prediction is from the true value, there are lots of regression models with proven methods of adjusting the model parameters (e.g. gradient descent), and the feature space comprises mostly monotone, weakly interacting features like quality of neighborhood schools and square footage. It's a "traditional" problem and can be solved as well as possible by the traditional methods we know and love. RL is unnecessary, might require more data than you have, and might produce an inferior result.
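For flavor, here's the traditional route in a few lines (synthetic data, with feature names and coefficients made up purely for illustration): a quantitative per-example loss, and a proven update rule for it.

```python
# Plain linear regression for the house-value problem, fit by
# gradient descent on mean squared error. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500
sqft = rng.uniform(800, 3500, n)                 # square footage
school = rng.uniform(1, 10, n)                   # school-quality score
price = 100 * sqft + 20_000 * school + rng.normal(0, 10_000, n)

X = np.column_stack([sqft, school])
X = (X - X.mean(0)) / X.std(0)                   # standardize features
y = (price - price.mean()) / price.std()         # standardize target

w = np.zeros(2)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / n             # gradient of MSE
    w -= lr * grad

rmse = np.sqrt(np.mean((X @ w - y) ** 2))        # residual error in std units
```

Every example tells the model exactly how far off it was and in which direction; there's nothing for RL to add here.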
In contrast, for a sequential decision problem like playing go, the binary won/lost signal doesn't tell us much about how well or poorly the game was played, it's not clear how to improve the strategy from it, and there are a large number of candidate moves at each turn with no evident ranking. In this setting RL is a difficult but workable approach.
I just wouldn't.
RL is nice in that it handles messy cases where you don't have per-example labels.
How do you build a learned chess playing bot? Essentially the state of the art is to find a clever way of turning the problem of playing chess into a sequence of supervised learning problems.
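To make that concrete, here's a toy version of the idea using Nim instead of chess (my own sketch, far simpler than anything state of the art): every finished self-play game labels each position it visited with the final outcome, turning play into a supervised dataset, and the "model" here is just a table of averaged outcomes.

```python
# Self-play on Nim: take 1-3 stones per turn; whoever takes the last
# stone wins. Finished games become supervised (position -> outcome)
# examples; the value table is fit by running averages.
import random
from collections import defaultdict

random.seed(0)
value = defaultdict(float)    # stones -> est. win prob for player to move
counts = defaultdict(int)

def best_move(stones, eps=0.2):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < eps:
        return random.choice(moves)          # exploration
    # leave the opponent the worst-looking position
    return min(moves, key=lambda m: value[stones - m])

for _ in range(5000):
    stones, player = 10, 0
    visited = []                             # (stones, player-to-move)
    while stones > 0:
        visited.append((stones, player))
        stones -= best_move(stones)
        player ^= 1
    winner = player ^ 1                      # last mover took the last stone
    for s, p in visited:                     # supervised targets from outcome
        counts[s] += 1
        target = 1.0 if p == winner else 0.0
        value[s] += (target - value[s]) / counts[s]
```

The table ends up matching Nim theory: positions like 3 stones (take them all and win) get values near 1, while multiples of 4 like 4 stones (every move hands the opponent a win) get values near 0. Swap the table for a neural network and Nim for chess and you have the rough shape of the modern approach.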
So IIUC RL is applicable only when the outcome is not immediately available.
Let's say I do have a problem in that setting; say chess, where I have the board position plus features like turn number, my color, time left on the clock, etc.
Would I train a DNN with these features? Are there some libraries where I can try out some toy problems?
I guess coming from a classical ML background I am quite clueless about RL but want to learn more. I tried reading the Sutton and Barto book, but got lost in the terminology. I'm a more hands-on person.
OpenAI has an excellent interactive course on Deep RL: https://spinningup.openai.com/en/latest/
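If you want to poke at a toy problem before installing anything, the loop those libraries expect looks like this. The environment below is a made-up five-cell chain, but the method names and return values mirror the Gymnasium API (`reset` returns `(obs, info)`; `step` returns `(obs, reward, terminated, truncated, info)`), so swapping in a real environment later is mechanical.

```python
# A hand-rolled environment with a Gymnasium-style interface,
# plus the standard rollout loop you'd write with any RL library.
import random

random.seed(0)

class ChainEnv:
    """Walk along a 5-cell chain; reaching the last cell pays +1."""
    def __init__(self, length=5):
        self.length = length

    def reset(self, seed=None):
        self.pos = 0
        return self.pos, {}                  # observation, info

    def step(self, action):                  # action: 0 = left, 1 = right
        if action == 1:
            self.pos = min(self.pos + 1, self.length - 1)
        else:
            self.pos = max(self.pos - 1, 0)
        terminated = self.pos == self.length - 1
        reward = 1.0 if terminated else 0.0
        return self.pos, reward, terminated, False, {}

env = ChainEnv()
obs, info = env.reset(seed=0)
total, terminated = 0.0, False
while not terminated:
    action = random.choice([0, 1])           # a real agent goes here
    obs, reward, terminated, truncated, info = env.step(action)
    total += reward
```

Replace the random choice with something that learns from `(obs, action, reward)` tuples and you're doing RL; Spinning Up walks through exactly which algorithms to plug in there.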
The AlphaGo paper might be what you need. It requires some work to understand, but is clearly written. I read it when it came out and was confident enough to give a talk on it. (I don't have the slides any more; I did this when I was at a FAANG and left them behind.)