The Deep Q-Network (DQN) algorithm, introduced by DeepMind in a NIPS 2013 workshop paper and later published in Nature in 2015, can be credited with revolutionizing reinforcement learning. In this post, I would like to give a guide to a subset of the DQN algorithm. This is a continuation of an earlier reinforcement learning article about linear function approximators. My contribution here is orthogonal to my previous post about the preprocessing steps for game frames.
Before I proceed, I should mention upfront that there are already a lot of great blog posts and guides that people have written about DQN. Here are the prominent ones I was able to find and read:
- Denny Britz’s notes and implementation of DQN and Double-DQN. I have not read the code yet, but I imagine this will be my second-favorite DQN implementation aside from spragnur’s version.
- Nervana’s “Demystifying Deep Reinforcement Learning,” by Tambet Matiisen. This has a useful figure showing why we want the network to take only the state as input (not the action), and an intuitive argument for why we don’t want pooling layers with Atari games (as opposed to object recognition tasks).
- Ben Lau’s post on using DQN to play FlappyBird. The part on DQN might be useful. I won’t need to use his code.
- A post from Machine Learning for Artists (huh, interesting) with some source code and corresponding descriptions.
- A long post from Ruben Fiszel, which also covers some of the major extensions of DQN. I will try to write more details on those papers in future blog posts, particularly A3C.
- A post from Arthur Juliani, who also mentions the target network and Double-DQN.
I will not try to repeat what these great folks have already done. Here, I focus specifically on the aspect of DQN that was the most challenging for me to understand,¹ namely how the loss function works. I draw upon the posts above to aid me in this process.
In general, to optimize any function that is complicated, with high-dimensional parameters and high-dimensional data points, one needs an algorithm such as stochastic gradient descent, which relies on sampling to approximate the gradient step. The key to understanding DQN is to understand how we characterize the loss function.
Recall from basic reinforcement learning that the optimal Q-value is defined from the Bellman optimality conditions:

$$Q^*(s,a) = \mathbb{E}_{s'}\left[r + \gamma \max_{a'} Q^*(s',a') \;\Big|\; s,a\right]$$
But is this the definition of a Q-value (i.e. a $Q(s,a)$ value)?
This is something I had to think carefully about before it finally hit home to me, and I think it’s crucial to understanding Q-values. When we refer to $Q(s,a)$, all we are referring to is some arbitrary value. We might pull that value out of a table (i.e. tabular Q-Learning), or it might be determined based on a linear or neural network function approximator. In the latter case, it’s better to write it as $Q(s,a;\theta)$. (I actually prefer writing it as $Q_\theta(s,a)$ but the DeepMind papers use the other notation, so for the rest of this post, I will use their notation.)
Our GOAL is to get it to satisfy the Bellman optimality criteria, which I’ve written above. If we have “adjusted” our function value so that for all $(s,a)$ input pairings, it satisfies the Bellman requirements, then we rewrite the function as $Q^*(s,a)$ with the asterisk.
Let’s now define a loss function. We need to do this so that we can perform stochastic gradient descent, which will perform the desired “adjustment” to $Q(s,a)$. What do we want? Given a state-action pair $(s,a)$, the target will be the Bellman optimality condition. We will use the standard squared error loss. Thus, we write the loss $L$ as:

$$L = \left(\underbrace{r + \gamma \max_{a'} Q(s',a')}_{\text{target}} - Q(s,a)\right)^2$$
But we’re not quite done, right? This is for a specific $(s,a)$ pair. We want the $Q(s,a)$ values to be close to the “Bellman” value across all such pairs. Therefore, we take expectations. Here’s the tricky part: we need to take expectations with respect to samples $(s,a,r,s')$, so we have to consider a “four-tuple”, instead of a tuple, because the target (Bellman) depends on $r$ and $s'$. The loss function becomes:

$$L(\theta) = \mathbb{E}_{(s,a,r,s')}\left[\left(r + \gamma \max_{a'} Q(s',a';\theta) - Q(s,a;\theta)\right)^2\right]$$
Note that I have now added the parameter $\theta$. However, this actually includes the tabular case. Why? Suppose we have two states and three actions. Thus, the total table size is six, with elements indexed by pairs $(s_i,a_j)$ for $i \in \{1,2\}$ and $j \in \{1,2,3\}$. Now let’s define a six-dimensional vector $\theta \in \mathbb{R}^6$. We will decide to encode each of the six $Q(s_i,a_j)$ values into one component of the vector. Thus, $Q(s_i,a_j;\theta) = \theta_{ij}$. In other words, we parameterize the arbitrary function by $\theta$, and we directly look at $\theta$ to get the Q-values! Think about how this differs from the linear approximation case I discussed in my last post. Instead of $Q(s,a;\theta)$ corresponding to one element in the parameter vector $\theta$, it turns out to be a linear combination of the full $\theta$ along with a state-action dependent vector $\phi(s,a)$.
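To make the tabular-as-parameter-vector view concrete, here is a minimal Python sketch (the two-state, three-action sizes come from the example above; the flat index layout is my own choice):

```python
import numpy as np

n_states, n_actions = 2, 3

# theta is a six-dimensional vector: one component per (state, action) pair.
theta = np.zeros(n_states * n_actions)

def q_value(theta, s, a):
    # In the tabular case, Q(s, a; theta) is literally one component of theta.
    return theta[s * n_actions + a]

# Writing to one component changes exactly one Q-value and nothing else.
theta[0 * n_actions + 2] = 1.5
print(q_value(theta, 0, 2))  # 1.5
print(q_value(theta, 1, 2))  # 0.0
```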
With the above intuition in mind, we can perform stochastic gradient descent to minimize the loss function, which is an expectation over all samples. Intuitively, we can think of it as a function of an infinite number of such samples. In practice, though, to approximate the expectation, we take a finite number of samples and use them to form the gradient update.
We can continue with this logic until we get to the smallest possible update case, involving just one sample. What does the update look like? With one sample, we have approximated the loss as:

$$L(\theta) = \left(r + \gamma \max_{a'} Q(s',a') - \theta_{sa}\right)^2$$
I have substituted $\theta_{sa}$, a single element in $\theta$, since we assume the tabular case for now. I didn’t do that for the target, since for the purpose of the Bellman optimality condition, we assume it is fixed for now. Since there’s only one component of $\theta$ “visible” in the loss function, the gradient for all components² other than $\theta_{sa}$ is zero. Hence, the gradient update is:

$$\theta_{sa} \leftarrow \theta_{sa} + \alpha\left(r + \gamma \max_{a'} Q(s',a') - \theta_{sa}\right)$$
Guess what? This is exactly the standard tabular Q-Learning update taught in reinforcement learning courses! I wrote the same exact thing in my MDP/RL review last year. Here it is, reproduced below:

$$Q(s,a) \leftarrow Q(s,a) + \alpha\left(r + \gamma \max_{a'} Q(s',a') - Q(s,a)\right)$$
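In code, this update is just a couple of lines; here is a minimal sketch (the transition values are made up for illustration):

```python
import numpy as np

gamma, alpha = 0.99, 0.5
Q = np.zeros((2, 3))  # tabular Q: 2 states, 3 actions

def q_learning_update(Q, s, a, r, s_next):
    # The target is drawn from the current table once, then treated as a
    # constant during the update -- exactly as in the derivation above.
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0, 1])  # 0.5
```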
Let’s switch gears to the DQN case, so we express $Q(s,a;\theta)$ with the implicit assumption that $\theta$ represents neural network weights. In the Nature paper, they express the loss function as:

$$L_i(\theta_i) = \mathbb{E}_{(s,a,r,s') \sim U(D)}\left[\left(r + \gamma \max_{a'} Q(s',a';\theta_i^-) - Q(s,a;\theta_i)\right)^2\right]$$
(I have lightly adjusted their notation so it lines up with the loss function I wrote earlier; note that their $\theta_i^-$ denotes the older, fixed weights used for the targets.)
This looks very similar to the loss function I had before. Let’s deconstruct the differences:
The samples come from $U(D)$, a uniform distribution over $D$, which is an experience replay history. This helps to break correlations among the samples. Remark: it’s important to know the purpose of experience replay, but it is arguably less central nowadays, since the A3C algorithm runs learners in parallel and thus avoids sample correlation across threads.
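For concreteness, a uniform replay buffer is only a few lines; this sketch is my own (the names are not from any of the implementations discussed here):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores (s, a, r, s') tuples and samples them uniformly,
    which is what U(D) denotes in the Nature loss function."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old tuples fall off the end

    def add(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        # Uniform sampling without replacement from the stored history.
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(10):
    buf.add(t, 0, 1.0, t + 1)
batch = buf.sample(4)
print(len(batch))  # 4
```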
Aside from the fact that we’re using neural networks instead of a tabular parameterization, the weights used for the target versus the current (presumably non-optimal) Q-value are different. At iteration $i$, we assume we have some giant weight vector $\theta_i$ representing all of the neural network weights. We do not, however, use that same vector to parameterize the targets. We fix the targets with the previous weight vector $\theta_i^-$.
These two differences are mentioned in the Nature paper in the following text on the first page:
We address these instabilities with a novel variant of Q-learning, which uses two key ideas. […]
I want to elaborate on the second point in more detail, as I was confused by it for a while. Think back to my tabular Q-Learning example in this post. The target was parameterized using the current table $\theta$. When I performed an update using SGD, I updated $\theta_{sa}$. If this turns out to be the same component as $\theta_{s'a'}$, then this will automatically update the target. (Think of the successor state $s'$ as being equal to the current state $s$.) And, again, during the gradient update, the target was assumed fixed, which is why I did not re-write $Q(s',a')$ into a component of $\theta$; it’s as if we are “drawing” the value from the table once for the purposes of computing the target, so that value is ignored when we do the gradient computation. After the gradient update, the value may be different.
DeepMind decided to do it a different way. Instead, we fix the weights used for the targets for some number of iterations while the samples accumulate. The argument for why this works is a bit hand-wavy and I’m not sure if there exists any rigorous mathematical justification. The DeepMind paper says that this reduces correlations with the target. This is definitely different from the tabular case I just described, where one update immediately modifies targets. If I wanted to make my tabular case the same as DeepMind’s scenario, I would update the weights $\theta$ the normal way, but I would also fix a vector $\theta^-$, so that when computing targets only, I would draw values from $\theta^-$ instead of the current $\theta$.
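Here is a sketch of that hypothetical tabular variant, with `theta_minus` as my name for the frozen copy used only for targets:

```python
import numpy as np

gamma, alpha = 0.99, 0.5
theta = np.zeros((2, 3))    # current Q-table (the "online" weights)
theta_minus = theta.copy()  # frozen copy, used only when computing targets

def update(theta, theta_minus, s, a, r, s_next):
    # The target is computed from the frozen weights, not the current ones.
    target = r + gamma * np.max(theta_minus[s_next])
    theta[s, a] += alpha * (target - theta[s, a])

# Repeated updates leave theta_minus untouched, so the target stays fixed...
update(theta, theta_minus, s=0, a=1, r=1.0, s_next=0)
update(theta, theta_minus, s=0, a=1, r=1.0, s_next=0)
print(theta[0, 1])  # 0.75: both updates used the same fixed target of 1.0

# ...until we periodically sync, analogous to DeepMind's update every C steps.
theta_minus = theta.copy()
```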
Others have written explanations of DeepMind’s use of the target network. Denny Britz argues that it leads to more stable training:
Trick 2 - Target Network: Use a separate network to estimate the TD target. This target network has the same architecture as the function approximator but with frozen parameters. Every T steps (a hyperparameter) the parameters from the Q network are copied to the target network. This leads to more stable training because it keeps the target function fixed (for a while).
Arthur Juliani uses a similar line of reasoning:
Why not just use one network for both estimations? The issue is that at every step of training, the Q-network’s values shift, and if we are using a constantly shifting set of values to adjust our network values, then the value estimations can easily spiral out of control. The network can become destabilized by falling into feedback loops between the target and estimated Q-values. In order to mitigate that risk, the target network’s weights are fixed, and only periodically or slowly updated to the primary Q-network’s values. In this way training can proceed in a more stable manner.
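The periodic copy they describe takes only a few lines in any framework. Here is a hedged sketch using a linear approximator on made-up data (a single-action chain, so the max over actions is trivial; none of this is from the implementations discussed here):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, gamma, lr, T = 4, 0.99, 0.1, 5

w = rng.normal(size=dim)  # online weights, theta_i
w_target = w.copy()       # target-network weights, theta_i^-

for step in range(20):
    # Fake transition: feature vectors for s and s', plus a reward.
    phi, phi_next, r = rng.normal(size=dim), rng.normal(size=dim), 1.0
    target = r + gamma * (w_target @ phi_next)  # TD target from frozen weights
    td_error = target - w @ phi
    w += lr * td_error * phi                    # SGD step on the online weights
    if (step + 1) % T == 0:
        w_target = w.copy()                     # every T steps, sync the target

print(np.allclose(w, w_target))  # True: step 20 is a sync step
```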
I want to wrap up by doing some slight investigation of spragnur’s DQN code as it relates to the points above about the target network. (The neural network portion in his code is written using the Theano and Lasagne libraries, which by themselves have some learning curve; I will not elaborate on these as that is beyond the scope of this blog post.) Here are three critical parts of the code to understand:
The `q_network.py` script contains the code for managing the networks. Professor Sprague uses a shared Theano variable to manage the input to the network, called `self.imgs_shared`. What’s key is that he has to make its per-sample frame dimension $4+1$, where $4$ is the number of frames per state (just like in my last post). So why the $+1$? This is needed so that he can produce the Q-values for the $Q(s',a')$ term in the loss function, which uses the successor state rather than the current state (not to mention the previous weights as well!). The Nervana blog post I listed made this distinction, saying that a separate forward pass is needed to compute $Q$-values for the successor states. By wrapping everything together into one shared variable, spragnur’s code efficiently utilizes memory.
The corresponding code segment in `q_network.py` initializes this shared variable.
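To illustrate the idea without Theano, here is a NumPy sketch of the shapes involved (the $84\times 84$ frames and frame history of 4 come from the papers; spragnur’s actual code stores this in a Theano shared variable, so details may differ):

```python
import numpy as np

batch_size, num_frames, h, w = 32, 4, 84, 84

# One array holds num_frames + 1 frames per sample...
imgs = np.zeros((batch_size, num_frames + 1, h, w), dtype=np.float32)

# ...so states and successor states are two overlapping views of it:
states = imgs[:, :-1]       # frames 0..3, the input for Q(s, a)
next_states = imgs[:, 1:]   # frames 1..4, the input for Q(s', a')

print(states.shape)       # (32, 4, 84, 84)
print(next_states.shape)  # (32, 4, 84, 84)
```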
On a related note, the “variable” `q_vals` contains the Q-values for the current minibatch, while `next_q_vals` contains the Q-values for the successor states. Since the default minibatch size is 32, both `q_vals` and `next_q_vals` can be thought of as arrays with 32 values each. In both cases, they are computed by calling `lasagne.layers.get_output`, which has an intuitive name.
The code separates the `next_q_vals` cases based on a `self.freeze_interval` parameter, which is -1 and 10000 for the NIPS and Nature versions, respectively. For the NIPS version, spragnur uses the `disconnected_grad` function, meaning that the expression is treated as a constant in the gradient computation. I believe this is similar to what I did before with the tabular $Q$-Learning case, when I didn’t “convert” $Q(s',a')$ to $\theta_{s'a'}$.
The Nature version is different. It creates a second network called `self.next_l_out`, which is independent of the first network `self.l_out`. During the training process, the code periodically calls `self.reset_q_hat()` to update the weights of `self.next_l_out` by copying from the current weights of `self.l_out`.
Both of these should accomplish the same goal of having a separate network whose weights are updated periodically. The Nature version is certainly easier to understand.
I briefly want to mention a bit about the evaluation metrics used. The code, following the NIPS and Nature papers, reports the average reward per episode, which is intuitive and what we ultimately want to optimize. The NIPS paper argued that the average reward metric is extremely noisy, so they also reported the average action value encountered during testing. They collected a random set of states and, for each state $s$, tracked the $Q$-value obtained from that state, and found that this average action value kept smoothly increasing. Professor Sprague’s code computes this value in the `ale_agent.py` file in the `finish_testing` method. He seems to do it differently from the way the NIPS paper reports it, though, by subsampling a random set of states after each epoch instead of having that random set fixed for all 100 epochs (I don’t see `self.holdout_data` assigned anywhere). But, meh, it does basically the same thing.
I know I said this before, but I really like spragnur’s code.
That’s all I have to say for now regarding DQNs. I hope this post fills a useful niche in the DQN blog-o-sphere by focusing specifically on how the loss function is derived. In subsequent posts, I hope to touch upon the major variants of DQN in more detail, particularly A3C. Stay tuned!
Taking the gradient of a function from a vector to a scalar, such as $L(\theta)$, involves taking partial derivatives for each component. All components other than $\theta_{sa}$ in this example will have derivatives of 0. ↩