Going Deeper Into Reinforcement Learning: Fundamentals of Policy Gradients
As I stated in my last blog post, I am feverishly trying to read more research papers. One category of papers that keeps coming up recently is policy gradients, a popular class of reinforcement learning algorithms that directly estimate the gradient of expected reward with respect to the parameters of a policy (a function approximator). Thus, the purpose of this blog post is to explicitly write out the mathematical foundations of policy gradients so that I can better understand them. In turn, I hope some of my explanations will be useful to a broader audience of AI students.
Assumptions and Problem Statement
In any type of research domain, we always have to make some set of assumptions. (By “we”, I refer to the researchers who write papers on this.) With reinforcement learning and policy gradients, the usual assumption is the episodic setting, where an agent engages in multiple trajectories in its environment. As an example, an agent could be playing a game of Pong, so one episode or trajectory consists of a full start-to-finish game.
We define a trajectory $\tau$ of length $T$ as

$$\tau = (s_0, a_0, s_1, a_1, \ldots, s_{T-1}, a_{T-1}, s_T)$$

where $s_0$ comes from the starting distribution of states $\mu(s_0)$, actions are sampled from the policy $a_t \sim \pi_\theta(a_t \mid s_t)$, and successor states come from the dynamics model $s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)$ (i.e. how the environment changes). We actually ignore the dynamics when optimizing, since all we care about is getting a good gradient signal for $\pi_\theta$ to make it better. If this isn’t clear now, it will be clear soon. Also, the reward $r_t$ can be computed from the states and actions, since it’s usually a function of $(s_t, a_t, s_{t+1})$, so it’s not technically needed in the trajectory.
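To make the episodic setting concrete, here is a minimal Python sketch (my addition, not from the original post) of rolling out one trajectory. The `reset_fn`, `step_fn`, and `policy_fn` callables are hypothetical stand-ins for the start distribution $\mu$, the dynamics $P$, and the policy $\pi_\theta$, not any particular environment's API.

```python
import numpy as np

def sample_trajectory(reset_fn, step_fn, policy_fn, T, rng):
    """Roll out one trajectory tau = (s_0, a_0, s_1, ..., s_T) of length T.

    reset_fn samples s_0 from the start distribution, step_fn(s, a) samples
    (s', r) from the dynamics, and policy_fn(s) samples a ~ pi_theta(. | s).
    All three are hypothetical stand-ins, not a specific environment API.
    """
    s = reset_fn(rng)
    states, actions, rewards = [s], [], []
    for _ in range(T):
        a = policy_fn(s, rng)
        s, r = step_fn(s, a, rng)
        states.append(s)
        actions.append(a)
        rewards.append(r)
    return states, actions, rewards

# Toy usage: a 1-D random walk whose reward is the new position.
rng = np.random.default_rng(0)
states, actions, rewards = sample_trajectory(
    reset_fn=lambda rng: 0.0,
    step_fn=lambda s, a, rng: (s + a, s + a),
    policy_fn=lambda s, rng: rng.choice([-1.0, 1.0]),
    T=5,
    rng=rng,
)
print(states, rewards)
```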
What’s our goal here with policy gradients? Unlike algorithms such as DQN, which strive to find an excellent policy indirectly through Q-values, policy gradients perform a direct gradient update on a policy to change its parameters, which is what makes them so appealing. Formally, we have:

$$\underset{\theta}{\text{maximize}} \;\; \mathbb{E}_{\pi_\theta}\left[ \sum_{t=0}^{T-1} \gamma^t r_t \right]$$

Note I: I put $\pi_\theta$ under the expectation. This means the rewards are computed from a trajectory which was generated under the policy $\pi_\theta$. We have to find “optimal” settings of $\theta$ to make this work.

Note II: we don’t need to optimize the expected sum of discounted rewards, though it’s the formulation I’m most used to. Alternatives include ignoring $\gamma$ by setting it to one, extending $T$ to infinity if the episodes are infinite-horizon, and so on.
The above raises the all-important question: how do we find the best $\theta$? If you’ve taken optimization classes before, you should know the answer already: perform gradient ascent on $\theta$, so the update is $\theta \leftarrow \theta + \alpha \nabla_\theta J(\theta)$, where $J(\theta)$ is the function being optimized. Here, that’s the expected value of whatever sum-of-rewards formula we’re using.
Two Steps: Log-Derivative Trick and Determining Log Probability
Before getting to the computation of the gradient, let’s first review two mathematical facts which will be used later, and which are also of independent interest. The first is the “log-derivative” trick, which tells us how to insert a log into an expectation when starting from $\nabla_\theta \mathbb{E}_x[f(x)]$. Specifically, we have:

$$
\begin{aligned}
\nabla_\theta \mathbb{E}_x[f(x)] &= \nabla_\theta \int p_\theta(x)\, f(x)\, dx \\
&= \int \nabla_\theta p_\theta(x)\, f(x)\, dx \\
&= \int p_\theta(x)\, \frac{\nabla_\theta p_\theta(x)}{p_\theta(x)}\, f(x)\, dx \\
&= \mathbb{E}_x\big[ f(x)\, \nabla_\theta \log p_\theta(x) \big]
\end{aligned}
$$

where $p_\theta$ is the density of $x$. Most of these steps should be straightforward. The main technical detail to worry about is exchanging the gradient with the integral. I have never been comfortable in knowing when we are allowed to do this or not, but since everyone else does this, I will follow them.
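As a sanity check (my addition, not in the original post), here is a small numpy experiment verifying the log-derivative trick for a toy case where $x \sim \mathcal{N}(\theta, 1)$ and $f(x) = x^2$; the analytic gradient is $2\theta$, and the score-function estimate should agree up to Monte Carlo error.

```python
import numpy as np

# Check the identity grad_theta E[f(x)] = E[f(x) * grad_theta log p(x | theta)]
# for x ~ N(theta, 1) and f(x) = x^2.  Analytically E[f(x)] = theta^2 + 1,
# so the true gradient is 2 * theta.  For a unit-variance Gaussian,
# grad_theta log p(x | theta) = (x - theta).
rng = np.random.default_rng(0)
theta = 1.5
x = rng.normal(loc=theta, scale=1.0, size=1_000_000)

score = x - theta                      # grad_theta log p(x | theta)
estimate = np.mean((x ** 2) * score)   # Monte Carlo estimate of the gradient
print(estimate, 2 * theta)             # both should be close to 3.0
```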
Another technical detail we will need is the gradient of the log probability of a trajectory, since we will later replace the $x$ above with a trajectory $\tau$. The computation of $\nabla_\theta \log p(\tau \mid \theta)$ proceeds as follows:

$$
\begin{aligned}
\nabla_\theta \log p(\tau \mid \theta)
&= \nabla_\theta \log \left( \mu(s_0) \prod_{t=0}^{T-1} \pi_\theta(a_t \mid s_t)\, P(s_{t+1} \mid s_t, a_t) \right) \\
&= \nabla_\theta \left[ \log \mu(s_0) + \sum_{t=0}^{T-1} \Big( \log \pi_\theta(a_t \mid s_t) + \log P(s_{t+1} \mid s_t, a_t) \Big) \right] \\
&= \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)
\end{aligned}
$$

The probability of $\tau$ decomposes into a chain of probabilities by the Markov Decision Process assumption, whereby the next action only depends on the current state, and the next state only depends on the current state and action. To be explicit, we use the functions that we already defined: $\pi_\theta(a_t \mid s_t)$ and $P(s_{t+1} \mid s_t, a_t)$ for the policy and dynamics, respectively. (Here, $\mu(s_0)$ represents the starting state distribution.) We also observe that when taking gradients, the dynamics disappear!
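In code, the trajectory-level gradient is just the sum of the per-step policy terms, since the start-state and dynamics terms carry no $\theta$ dependence. A minimal sketch (my addition), assuming the per-step quantities $\nabla_\theta \log \pi_\theta(a_t \mid s_t)$ have already been computed, e.g. by an autodiff library:

```python
import numpy as np

def grad_log_prob_trajectory(grad_log_pi_per_step):
    """Gradient of log p(tau | theta) for one trajectory.

    Only the policy terms log pi_theta(a_t | s_t) depend on theta; log mu(s_0)
    and the log dynamics terms vanish under the gradient.  The input is a
    (T, d) array whose row t holds grad_theta log pi_theta(a_t | s_t).
    """
    return np.sum(grad_log_pi_per_step, axis=0)

example = np.array([[ 0.1, -0.2],
                    [ 0.0,  0.3],
                    [-0.5,  0.1]])          # T = 3 steps, d = 2 parameters
print(grad_log_prob_trajectory(example))    # [-0.4  0.2]
```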
Computing the Raw Gradient
Using the two tools above, we can now get back to our original goal, which was to compute the gradient of the expected sum of (discounted) rewards. Formally, let $R(\tau)$ be the reward function we want to optimize (i.e. maximize). Using the above two tricks, we obtain:

$$
\nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]
= \mathbb{E}_{\tau \sim \pi_\theta}\big[ R(\tau)\, \nabla_\theta \log p(\tau \mid \theta) \big]
= \mathbb{E}_{\tau \sim \pi_\theta}\left[ R(\tau) \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right]
$$

In the above, the expectation is with respect to the policy function, so think of it as $\mathbb{E}_{\tau \sim \pi_\theta}$. In practice, we need sampled trajectories to form an empirical expectation, which estimates this actual expectation.
So that’s the gradient! Unfortunately, we’re not quite done yet. The naive way is to run the agent on a batch of episodes, get a set of trajectories, and update $\theta$ using the empirical expectation of the gradient above, but this will be too slow and unreliable due to high variance of the gradient estimates. After one batch, the agent may exhibit a wide range of results: much better performance, equal performance, or worse performance. The high variance of these gradient estimates is precisely why there has been so much effort devoted to variance reduction techniques. (I should also add from personal research experience that variance reduction is certainly not limited to reinforcement learning; it also appears in many statistical projects which concern a bias-variance tradeoff.)
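For illustration, here is a minimal sketch of that naive estimator. The per-trajectory dictionary layout (`'rewards'` and `'grad_log_pi'`) is a hypothetical convention of mine, assuming the per-step grad-log-probabilities were recorded during the rollout.

```python
import numpy as np

def naive_policy_gradient(trajectories):
    """Monte Carlo estimate of the raw policy gradient.

    Each trajectory contributes R(tau) * sum_t grad_theta log pi_theta(a_t | s_t),
    and we average over the batch.  This is the high-variance estimator that the
    baseline machinery below is designed to improve.
    """
    per_traj = [
        np.sum(traj["rewards"]) * np.sum(traj["grad_log_pi"], axis=0)
        for traj in trajectories
    ]
    return np.mean(per_traj, axis=0)

# Hypothetical usage with two recorded trajectories of length 2, d = 2 parameters.
batch = [
    {"rewards": np.array([1.0, 0.0]), "grad_log_pi": np.array([[0.1, 0.2], [0.0, -0.1]])},
    {"rewards": np.array([0.0, 2.0]), "grad_log_pi": np.array([[0.3, 0.0], [-0.2, 0.1]])},
]
print(naive_policy_gradient(batch))
```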
How to Introduce a Baseline
The standard way to reduce the variance of the above gradient estimates is to insert a baseline function $b(s_t)$ inside the expectation.
For concreteness, assume $\gamma = 1$, so we have no discounted rewards. We can express the policy gradient in three equivalent, but perhaps non-intuitive, ways:

$$
\begin{aligned}
\nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]
&\overset{(i)}{=} \mathbb{E}_{\tau}\left[ \left( \sum_{t=0}^{T-1} r_t \right) \left( \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right) \right] \\
&\overset{(ii)}{=} \mathbb{E}_{\tau}\left[ \sum_{t'=0}^{T-1} r_{t'} \sum_{t=0}^{t'} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right] \\
&\overset{(iii)}{=} \mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{t'=t}^{T-1} r_{t'} \right]
\end{aligned}
$$
Comments:

Step (i) follows from plugging in our chosen $R(\tau) = \sum_{t=0}^{T-1} r_t$ into the policy gradient we previously derived.

Step (ii) follows from first noting that $\nabla_\theta \mathbb{E}_\tau[r_{t'}] = \mathbb{E}_\tau\big[ r_{t'} \sum_{t=0}^{t'} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \big]$. The reason why this is true can be somewhat tricky to identify. I find it easy to think of just redefining $R(\tau)$ as $r_{t'}$ for some fixed timestep $t'$. Then, we do the exact same computation above to get the final result, as shown in the equation of the “Computing the Raw Gradient” section. The main difference now is that since we’re considering the reward at time $t'$, our trajectory under expectation stops at that time. More concretely, $\nabla_\theta \mathbb{E}_\tau[r_{t'}] = \nabla_\theta \mathbb{E}_{\tau^{(t')}}[r_{t'}]$, where $\tau^{(t')} = (s_0, a_0, \ldots, s_{t'}, a_{t'})$. This is like “throwing away variables” when taking expectations due to “pushing values” through sums and summing over densities (which cancel out); I have another example later in this post which makes this explicit.
Next, we sum both sides over $t' = 0, 1, \ldots, T-1$. Assuming we can exchange the sum with the gradient, we get

$$
\nabla_\theta \mathbb{E}_\tau[R(\tau)]
= \sum_{t'=0}^{T-1} \nabla_\theta \mathbb{E}_{\tau^{(t')}}[r_{t'}]
= \mathbb{E}_\tau\left[ \sum_{t'=0}^{T-1} r_{t'} \sum_{t=0}^{t'} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right]
$$

where $\tau^{(t')}$ indicates the trajectory up to time $t'$. (Full disclaimer: I’m not sure if this formalism with $\tau^{(t')}$ is needed, and I think most people would do this computation without worrying about the precise expectation details.)

Step (iii) follows from a nifty algebra trick. To simplify the subsequent notation, let $g_t := \nabla_\theta \log \pi_\theta(a_t \mid s_t)$. In addition, ignore the expectation; we’ll only rearrange the inside here. With this substitution and setup, the sum inside the expectation from Step (ii) turns out to be

$$
\sum_{t'=0}^{T-1} r_{t'} \sum_{t=0}^{t'} g_t
= r_0\, g_0 + r_1 (g_0 + g_1) + r_2 (g_0 + g_1 + g_2) + \cdots + r_{T-1} (g_0 + g_1 + \cdots + g_{T-1})
$$

In other words, each $r_{t'}$ has its own row of $g_t$ values over which it gets distributed. Next, switch to the column view: instead of summing row-wise, sum column-wise. The first column is $g_0 (r_0 + r_1 + \cdots + r_{T-1})$. The second is $g_1 (r_1 + \cdots + r_{T-1})$. And so on. Doing this means we get the desired formula after replacing $g_t$ with its real meaning and hitting the expression with an expectation.
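Here is a quick numerical sanity check (my addition) that the row view and the column view really are the same sum, using random scalars as stand-ins for the rewards and the grad-log terms:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5
r = rng.normal(size=T)   # stand-ins for the rewards r_t
g = rng.normal(size=T)   # scalar stand-ins for g_t = grad_theta log pi_theta(a_t | s_t)

# Row view: each r_{t'} is distributed over the grad-log terms up to t'.
row_view = sum(r[tp] * g[: tp + 1].sum() for tp in range(T))
# Column view: each g_t collects the "reward to go" from t onward.
col_view = sum(g[t] * r[t:].sum() for t in range(T))

print(np.isclose(row_view, col_view))  # True: the two orderings agree
```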
Note: it is very easy to make a typo with these. I checked my math carefully and cross-referenced it with references online (which themselves have typos). If any readers find a typo, please let me know.
Using the above formulation, we finally introduce our baseline $b$, which is a function of $s_t$ (and not $a_t$, I believe). We “insert” it inside the term in parentheses:

$$
\nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]
= \mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \left( \sum_{t'=t}^{T-1} r_{t'} - b(s_t) \right) \right]
$$
At first glance, it doesn’t seem like this will be helpful, and one might wonder if this would cause the gradient estimate to become biased. Fortunately, it turns out that this is not a problem. This was surprising to me, because all we know is that $b$ is a function of $s_t$. However, this is a bit misleading because usually we want $b(s_t)$ to be the expected return starting at time $t$, which means it really “depends” on the subsequent time steps. For now, though, just think of it as a function of $s_t$.
Understanding the Baseline
In this section, I first go over why inserting above doesn’t make our gradient estimate biased. Next, I will go over why the baseline reduces variance of the gradient estimate. These two capture the best of both worlds: staying unbiased and reducing variance. In general, any time you have an unbiased estimate and it remains so after applying a variance reduction technique, then apply that variance reduction!
First, let’s show that the gradient estimate is unbiased. We see that with the baseline, we can distribute and rearrange to get:

$$
\mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \left( \sum_{t'=t}^{T-1} r_{t'} - b(s_t) \right) \right]
= \mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{t'=t}^{T-1} r_{t'} \right]
- \mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b(s_t) \right]
$$

Due to linearity of expectation, all we need to show is that for any single time $t$, the expectation of $\nabla_\theta \log \pi_\theta(a_t \mid s_t)$ multiplied with $b(s_t)$ is zero. This is true because

$$
\begin{aligned}
\mathbb{E}_{\tau}\big[ \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b(s_t) \big]
&= \mathbb{E}_{s_{0:t},\, a_{0:t-1}}\Big[ \mathbb{E}_{s_{t+1:T},\, a_{t:T-1}}\big[ \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b(s_t) \big] \Big] \\
&= \mathbb{E}_{s_{0:t},\, a_{0:t-1}}\Big[ b(s_t)\, \mathbb{E}_{a_t}\big[ \nabla_\theta \log \pi_\theta(a_t \mid s_t) \big] \Big] \\
&= \mathbb{E}_{s_{0:t},\, a_{0:t-1}}\big[ b(s_t) \cdot 0 \big] = 0
\end{aligned}
$$
Here are my usual overly-detailed comments (apologies in advance):

Note I: this notation is similar to what I had before. The trajectory is now split into the blocks $s_{0:t}, a_{0:t-1}$ and $s_{t+1:T}, a_{t:T-1}$. In addition, the expectation is split up, which is allowed. If this is confusing, think of the definition of the expectation with respect to at least two variables: we can write brackets in any appropriately enclosed location. Furthermore, we can “omit” the unnecessary variables in going from the expectation over the full trajectory $\tau$ to the expectation over only $s_{0:t}, a_{0:t}$ (see the expression above). Concretely, assuming we’re in discrete-land with actions in $\mathcal{A}$ and states in $\mathcal{S}$, this is because $\mathbb{E}_\tau\big[ \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b(s_t) \big]$ evaluates to:

$$
\sum_{s_0 \in \mathcal{S}} \sum_{a_0 \in \mathcal{A}} \cdots \sum_{s_T \in \mathcal{S}} p(\tau \mid \theta)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b(s_t)
$$

This is true because of the definition of expectation, whereby we get the joint density over the entire trajectory, and then we can split it up like we did earlier with the gradient of the log probability computation. We can pull $\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b(s_t)$ back through the sums over the later variables, to (but not beyond) the sum over $a_t$. Pushing sums “further back” results in a bunch of sums over densities, each of which sums to one. The astute reader will notice that this is precisely what happens with variable elimination for graphical models. (The more technical reason why “pushing values back through sums” is allowed has to do with abstract algebra properties of the sum function, which is beyond the scope of this post.)

Note II: this proof also works with an infinite-time horizon. In Appendix B of the Generalized Advantage Estimation paper (arXiv link), the authors do so with a proof exactly matching the above, except that the upper limits $T-1$ and $T$ are now infinity.

Note III: about the expectation going to zero, that’s due to a well-known fact about score functions, which are precisely the gradients of log probabilities. We went over this in my STAT 210A class last fall. It’s again the log-derivative trick. Observe that:

$$
\mathbb{E}_{a_t \sim \pi_\theta(\cdot \mid s_t)}\big[ \nabla_\theta \log \pi_\theta(a_t \mid s_t) \big]
= \int \pi_\theta(a_t \mid s_t)\, \frac{\nabla_\theta \pi_\theta(a_t \mid s_t)}{\pi_\theta(a_t \mid s_t)}\, da_t
= \nabla_\theta \int \pi_\theta(a_t \mid s_t)\, da_t
= \nabla_\theta 1 = 0
$$

where the penultimate step follows from how $\pi_\theta(\cdot \mid s_t)$ is a density. Since the gradient of the log gets distributed across each $t$, this argument applies at every time step. I switched to the continuous-land version for this, but it also applies with sums, as I just used in Note I.
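For a concrete check (my addition, not in the post), here is the same fact for a discrete softmax policy over three actions, where the expectation is a finite sum and can be computed exactly; the gradient of the log-softmax with respect to the logits is the standard $e_a - \pi$ expression.

```python
import numpy as np

# Verify E_{a ~ pi_theta}[ grad_theta log pi_theta(a) ] = 0 for a softmax policy
# with logits theta over 3 actions.  For a softmax, the gradient of
# log pi_theta(a) w.r.t. the logits is (one_hot(a) - pi).
theta = np.array([0.2, -1.0, 0.7])
pi = np.exp(theta) / np.sum(np.exp(theta))

expected_score = np.zeros_like(theta)
for a in range(3):
    grad_log_pi = np.eye(3)[a] - pi        # grad of log softmax w.r.t. logits
    expected_score += pi[a] * grad_log_pi  # weight by the probability of action a

print(expected_score)  # approximately [0, 0, 0]
```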
The above shows that introducing $b(s_t)$ doesn’t cause bias.
The last thing to cover is why its introduction reduces variance. I provide an approximate argument. To simplify notation, set $R_t := \sum_{t'=t}^{T-1} r_{t'}$. We focus on the inside of the expectation (of the gradient estimate) to analyze the variance. The technical reason for this is that expectations are technically constant (and thus have variance zero), but in practice we have to approximate the expectations with sampled trajectories, and that has high variance.
The variance is approximated as:

$$
\mathrm{Var}\left( \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \big( R_t - b(s_t) \big) \right)
\overset{(i)}{\approx} \sum_{t=0}^{T-1} \mathbb{E}\Big[ \Big( \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \big( R_t - b(s_t) \big) \Big)^2 \Big]
\overset{(ii)}{\approx} \sum_{t=0}^{T-1} \mathbb{E}\Big[ \big( \nabla_\theta \log \pi_\theta(a_t \mid s_t) \big)^2 \Big]\; \mathbb{E}\Big[ \big( R_t - b(s_t) \big)^2 \Big]
$$

Approximation (i) is because we are approximating the variance of a sum by the sum of the variances. This is not true in general, but if we can assume it, then by the definition of the variance $\mathrm{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2$, we are left with the $\mathbb{E}[X^2]$ term, since we already showed that introducing the baseline doesn’t cause bias. Approximation (ii) is because we assume independence among the values involved in the expectation, and thus we can factor the expectation.
Finally, we are left with the term $\mathbb{E}\big[ (R_t - b(s_t))^2 \big]$. If we are able to optimize our choice of $b(s_t)$, then this is a least squares problem, and it is well known that the optimal choice of $b(s_t)$ is the expected value of $R_t$ (conditioned on $s_t$). In fact, that’s why policy gradient researchers usually want $b(s_t)$ to approximate the expected return starting at time $t$, and that’s why in the vanilla policy gradient algorithm we have to refit the baseline estimate each iteration so that it stays close to the expected return $\mathbb{E}[R_t \mid s_t]$. At last, I understand.
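As a sketch of what “refit the baseline” can look like in practice (my own illustration, assuming a simple linear baseline on hand-chosen state features rather than the neural network a modern implementation would use):

```python
import numpy as np

def refit_linear_baseline(state_features, rewards_to_go):
    """Refit a linear baseline b(s) = [phi(s), 1] . w by least squares.

    Regressing onto the observed rewards-to-go makes b(s_t) approximate
    E[R_t | s_t], the variance-minimizing choice argued above.
    state_features is an (N, d) array, rewards_to_go an (N,) array, both
    gathered from the sampled trajectories of the current batch.
    """
    phi = np.hstack([state_features, np.ones((state_features.shape[0], 1))])
    w, *_ = np.linalg.lstsq(phi, rewards_to_go, rcond=None)
    return w

# Hypothetical usage with random data standing in for collected rollouts.
rng = np.random.default_rng(0)
w = refit_linear_baseline(rng.normal(size=(100, 3)), rng.normal(size=100))
print(w.shape)  # (4,): three feature weights plus a bias term
```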
How accurate are these approximations in practice? My intuition is that they are actually fine, because recent advances in reinforcement learning algorithms, such as A3C, focus on the problem of breaking correlation among samples. If the correlation among samples is broken, then Approximation (i) becomes better, because I think the samples are no longer generated from the same trajectory.
Well, that’s my intuition. If anyone else has a better way of describing it, feel free to let me know in the comments or by email.
Discount Factors
So far, we have assumed we wanted to optimize the expected return, or the expected sum of rewards. However, if you’ve studied value iteration and policy iteration, you’ll remember that we usually use discount factors $\gamma \in (0, 1]$. These empirically work well because the effect of an action many time steps later is likely to be negligible compared to that of more recent actions, so it may not make sense to try to include raw distant rewards in our optimization problem. Thus, we often impose a discount as follows:

$$
\begin{aligned}
\nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]
&= \mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \left( \sum_{t'=t}^{T-1} r_{t'} - b(s_t) \right) \right] \\
&\approx \mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \left( \sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'} - b(s_t) \right) \right]
\end{aligned}
$$

where the $\gamma^{t'-t}$ serves as the discount, starting from 1 and getting smaller as time passes. (The first line above is a repeat of the policy gradient formula that I described earlier.) As this is not exactly the “desired” gradient, this is an approximation, but it’s a reasonable one. This time, we now want our baseline to satisfy $b(s_t) \approx \mathbb{E}\big[ \sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'} \big]$.
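The discounted reward-to-go term above is easy to compute with a single backward pass over a trajectory’s rewards. A minimal sketch (my addition):

```python
import numpy as np

def discounted_rewards_to_go(rewards, gamma=0.99):
    """Compute sum_{t'=t}^{T-1} gamma^(t'-t) * r_{t'} for every t in O(T)."""
    out = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running  # r_t + gamma * (value from t+1)
        out[t] = running
    return out

print(discounted_rewards_to_go(np.array([1.0, 1.0, 1.0]), gamma=0.5))
# [1.75, 1.5, 1.0]
```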
Advantage Functions
In this final section, we rewrite the policy gradient formula using the following value functions:

$$
V^\pi(s) = \mathbb{E}_\pi\left[ \sum_{t=0}^{T-1} r_t \,\Big|\, s_0 = s \right]
\qquad
Q^\pi(s, a) = \mathbb{E}_\pi\left[ \sum_{t=0}^{T-1} r_t \,\Big|\, s_0 = s,\, a_0 = a \right]
$$

Both of these should be familiar from basic AI; see the CS 188 notes from Berkeley if this is unclear. There are also discounted versions, which we can denote as $V^{\pi,\gamma}(s)$ and $Q^{\pi,\gamma}(s, a)$. In addition, we can also consider starting at any given time step, as in $Q^{\pi,\gamma}(s_t, a_t) = \mathbb{E}_\pi\big[ \sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'} \mid s_t, a_t \big]$, which provides the expected (discounted) return assuming that at time $t$, our state-action pair is $(s_t, a_t)$.
What might be new is the advantage function. For the undiscounted version, it is defined simply as

$$A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$$

with a similar definition for the discounted version. Intuitively, the advantage tells us how much better action $a$ is compared to the return we would get from an “average” action under the policy.
The above definitions look very close to what we have in our policy gradient formula. In fact, we can claim the following:

$$
\begin{aligned}
\nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]
&\approx \mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \left( \sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'} - b(s_t) \right) \right] \\
&\overset{(i)}{=} \mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \Big( Q^{\pi,\gamma}(s_t, a_t) - b(s_t) \Big) \right] \\
&\overset{(ii)}{=} \mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \Big( Q^{\pi,\gamma}(s_t, a_t) - V^{\pi,\gamma}(s_t) \Big) \right] \\
&\overset{(iii)}{=} \mathbb{E}_{\tau}\left[ \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, A^{\pi,\gamma}(s_t, a_t) \right]
\end{aligned}
$$
In (i), we replace the reward-to-go terms with their expectations, i.e. with $Q^{\pi,\gamma}(s_t, a_t)$. This is not generally valid to do, but it should work in this case. My guess is that if you start from the second line above (after the “(i)”) and plug in the definition of the expectation inside and rearrange terms, you can get the first line. However, I have not had the time to check this in detail, and it takes a lot of space to write out the expectation fully. The conditioning with the value functions makes it a bit messy, and thus the law of iterated expectation may be needed.
Also from line (i), we notice that the value function is a valid baseline, and hence we can use it there without changing the unbiasedness of the expectation. Then lines (ii) and (iii) just rewrite the result in terms of the advantage function. The implication of this formula is that the problem of policy gradients, in some sense, reduces to finding good estimates of the advantage function $A^{\pi,\gamma}(s_t, a_t)$. That is precisely the topic of the paper Generalized Advantage Estimation.
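To connect this back to something computable: here is one simple way (my sketch, not the GAE estimator itself) to form per-step advantage estimates from a learned value function, using the one-step temporal-difference form $\hat{A}_t = r_t + \gamma V(s_{t+1}) - V(s_t)$. Generalized Advantage Estimation interpolates between this one-step estimate and the full Monte Carlo return.

```python
import numpy as np

def td_advantage_estimates(rewards, values, gamma=0.99):
    """One-step advantage estimates hat{A}_t = r_t + gamma * V(s_{t+1}) - V(s_t).

    rewards has length T; values has length T + 1, holding the learned
    V(s_0), ..., V(s_T) along the trajectory (use V(s_T) = 0 if s_T is terminal).
    The learned V serves as both the baseline and the bootstrap target.
    """
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    return rewards + gamma * values[1:] - values[:-1]

# Hypothetical usage on a length-3 trajectory.
print(td_advantage_estimates([1.0, 0.0, 1.0], [0.5, 0.4, 0.3, 0.0], gamma=0.9))
```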
Concluding Remarks
Hopefully, this is a helpful, self-contained, bare-minimum introduction to policy gradients. I am trying to learn more about these algorithms, and going through the math details is helpful. This will also make it easier for me to understand the increasing number of research papers that use this notation.
I also have to mention: I remember a few years ago, during the first iteration of CS 294-112, that I had no idea how policy gradients worked. Now, I think I have become slightly more enlightened.
Acknowledgements: I thank John Schulman for making his notes publicly available.
Update April 19, 2017: I have code for vanilla policy gradients in my reinforcement learning GitHub repository.