Asynchronous advantage actor-critic (A3C) Algorithm
The asynchronous advantage actor-critic (A3C) algorithm is one of the more recent algorithms developed by DeepMind, Google's artificial intelligence division, for deep reinforcement learning. A3C was introduced in a 2016 research paper titled "Asynchronous Methods for Deep Reinforcement Learning". Before moving to the details of the algorithm, let us first decode what its name means.
The three key terms in the algorithm's name
Asynchronous: Unlike many other deep reinforcement learning algorithms, A3C works with multiple learning agents, each with its own copy of the environment. The agents encounter different situations in their respective environments and gain experience with each interaction; as the number of interactions grows, so does their knowledge. A global network coordinates all the agents, so the experience each one gathers contributes to shared, global knowledge. Because the agents run and push their updates independently of one another, the process is asynchronous, hence the name. The structure resembles a human community: the knowledge of each individual helps the whole community (the global network) grow.
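The asynchronous structure described above can be sketched in a few lines of plain Python. This is our own toy construction, not code from the A3C paper: several worker threads stand in for the learning agents, a shared list stands in for the global network's parameters, and a fixed increment stands in for each worker's locally computed gradient. (A lock keeps the sketch deterministic; the original algorithm applies its updates lock-free.)

```python
import threading

# Shared "global network" weights that every worker updates.
global_params = [0.0, 0.0]
lock = threading.Lock()  # protects the shared parameters in this sketch

def worker(worker_id, n_steps):
    for _ in range(n_steps):
        # Each worker would compute a local update from its own
        # environment; a fixed increment is a stand-in for a gradient.
        local_update = [0.01, -0.01]
        with lock:  # push the local update to the global network
            for i, g in enumerate(local_update):
                global_params[i] += g

# Four "agents" learning in parallel, each in its own environment.
threads = [threading.Thread(target=worker, args=(i, 100)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(global_params)  # every worker's experience accumulated in one place
```

The key point the sketch illustrates is that no worker waits for another: each one contributes its updates to the global parameters as soon as they are ready.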
Actor-Critic: In contrast with the simpler techniques that preceded it, A3C combines the strengths of both value-iteration and policy-gradient methods. In simpler terms, the algorithm predicts both the value function V(s) and the optimal policy function. Each learning agent uses the output of the value function (the critic) to update the policy-gradient function (the actor).
Note that this means the learning agent computes a conditional probability: the parameterized probability of choosing action 'a' when it is in state 's'.
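To make the conditional probability concrete, here is a toy parameterized policy in plain Python. The three-action setup and the parameter values are our own illustration: theta holds one preference score per action for a given state, and a softmax turns those scores into the probabilities of choosing each action.

```python
import math

# Hypothetical actor parameters for one state: one preference per action.
theta = [2.0, 1.0, 0.1]

def policy(prefs):
    """Softmax over action preferences -> a conditional distribution
    pi(a | s; theta) over the available actions."""
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

probs = policy(theta)
print(probs)       # probability of choosing each action in this state
print(sum(probs))  # a valid probability distribution sums to 1
```

Higher preference scores yield higher selection probabilities, and updating theta (the actor's parameters) shifts this distribution.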
Advantage: While implementing the policy gradient, some of the actions performed by the learning agents are rewarded and some are penalized. To let the agent evaluate the result of each step, discounted returns (gamma R) are commonly used. By using the advantage instead, the agent also learns how much better the reward was than it had expected. This extra insight into how good an action really was is what gives the algorithm the Advantage tag in its name.
The advantage metric is calculated using this expression:
Advantage factor: A = Q(s, a) – V(s)
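The expression above can be computed directly. In the sketch below (our own illustrative numbers), Q(s, a) is estimated by a bootstrapped discounted return, i.e. the collected rewards discounted by gamma plus the critic's value estimate at the final state, and V(s) is the critic's estimate at the starting state.

```python
# Advantage A = Q(s, a) - V(s), with Q estimated from discounted returns.
gamma = 0.99                 # discount factor
rewards = [1.0, 0.0, 2.0]    # rewards observed after taking action a
v_last = 0.5                 # critic's value estimate at the final state
v_s = 1.8                    # critic's value estimate V(s) at the start

# n-step discounted return:
# R = r1 + gamma*r2 + gamma^2*r3 + gamma^3 * V(s_final)
q_estimate = v_last
for r in reversed(rewards):
    q_estimate = r + gamma * q_estimate

advantage = q_estimate - v_s
print(advantage)  # positive: the action went better than the critic expected
```

A positive advantage means the action outperformed the critic's expectation and should be reinforced; a negative one means the opposite.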
These are what each part of the name of this algorithm means.
Explanation
A3C is a conceptually simple and lightweight framework that uses asynchronous gradient descent to optimize deep neural network controllers. A typical application of A3C is navigating 3D environments from visual input. Technically speaking, A3C is a policy-gradient algorithm that maintains a policy \( \pi\left(a_{t}\mid s_{t}; \theta\right) \), while the critic learns the value function. Multiple actor-learners are trained in parallel and periodically synchronized with the global parameters. The policy gradients are accumulated over several steps before being applied, which improves training stability, much as in parallel stochastic gradient descent.
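The interaction between actor and critic described above can be shown as a single update step. This is a minimal one-step sketch in plain Python under our own simplifying assumptions, not the full A3C worker loop: the actor is a softmax policy over two actions, the critic is a single scalar value estimate for the current state, and the advantage (return minus the critic's baseline) scales the policy-gradient step.

```python
import math

theta = [0.0, 0.0]   # actor parameters: one preference per action
v = 0.0              # critic's value estimate V(s) for this state
alpha = 0.1          # learning rate

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

# Suppose action 0 was taken and the observed return was R.
action, R = 0, 1.0
probs = softmax(theta)
advantage = R - v    # the critic supplies the baseline

# Actor update: for a softmax policy, grad log pi(a|s) is
# (1 - pi(a)) for the taken action and -pi(b) for every other action.
for a in range(len(theta)):
    grad_log = (1.0 if a == action else 0.0) - probs[a]
    theta[a] += alpha * advantage * grad_log

# Critic update: move V(s) toward the observed return.
v += alpha * (R - v)

print(theta, v)  # the taken action's preference rises; V(s) moves toward R
```

Because the advantage was positive, the preference for the action actually taken increases while the other decreases; in the full algorithm each worker accumulates such gradients over several steps before pushing them to the global network.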
There are various advantages of the A3C algorithm:
- Faster
- More Robust
- Uses diversification of knowledge
- Performs better than most standard deep reinforcement learning algorithms.
- Can work with both discrete and continuous action spaces.