RL 1: MRP and MDP

October 30, 2017

Based on David Silver's reinforcement learning lecture 1 and lecture 2.

Major components of RL agent

Policy

In reinforcement learning, the policy determines which action to take given a state, the broad goal of an RL algorithm. A policy can be stochastic or deterministic:

\begin{aligned} a &= \pi(s) && \text{deterministic policy} \\ \pi(a|s) &= P[A=a|S=s] && \text{stochastic policy} \end{aligned}
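The two policy types can be sketched in code. This is a minimal illustration with a made-up toy problem (3 states, 2 actions); the tables and numbers are assumptions, not anything from a real environment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic policy: a direct mapping a = pi(s).
det_policy = {0: 1, 1: 0, 2: 1}  # state -> action

# Stochastic policy: pi(a|s) = P[A=a | S=s], one row of
# action probabilities per state (each row sums to 1).
stoch_policy = np.array([[0.9, 0.1],
                         [0.5, 0.5],
                         [0.2, 0.8]])

def act(state, stochastic=True):
    """Pick an action in the given state under either policy."""
    if stochastic:
        return rng.choice(2, p=stoch_policy[state])
    return det_policy[state]

print(act(0, stochastic=False))  # deterministic: always action 1
```

The deterministic policy always returns the same action for a state, while the stochastic one samples from the conditional distribution $\pi(a|s)$.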

Value function

The state-value function evaluates the "worth" of a state, based on the present value of future rewards. We use the policy to get an expectation of rewards, which gives us the value of state $s$ based on policy $\pi$. Different policies would give us different state values.

$$v_{\pi}(s) = E_{\pi} [R_t + \gamma R_{t+1} + \gamma^2 R_{t+2} + \cdots \mid S_t = s]$$

If we know the expected return under a policy at each state, then we know which action to take at every step, and therefore the trajectory that follows.
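The expectation above can be approximated by averaging discounted returns over sampled episodes (a simple Monte Carlo evaluation). This is a hedged sketch; the reward sequences below are invented for illustration:

```python
import numpy as np

gamma = 0.9

# Rewards observed on three sample episodes, each starting from
# the same state s (illustrative numbers only).
episodes = [[1.0, 0.0, 2.0],
            [1.0, 1.0, 0.0],
            [0.0, 2.0, 1.0]]

def discounted_return(rewards, gamma):
    """Compute R_t + gamma*R_{t+1} + gamma^2*R_{t+2} + ..."""
    return sum(gamma**k * r for k, r in enumerate(rewards))

# Monte Carlo estimate of v_pi(s): the mean discounted return.
v_s = np.mean([discounted_return(ep, gamma) for ep in episodes])
print(v_s)
```

With more sampled episodes, this average converges to the true state value under the policy that generated them.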

Model

The model is the agent's representation of the environment. Knowing the model allows us to...

Deep Reinforcement Learning with Q-Networks

June 19, 2017

On Human-level control through deep reinforcement learning by Mnih et al., 2015.

Although a few years old, this is a seminal paper on deep reinforcement learning, and I encourage you to read the original paper, in which the authors developed the well-known deep Q-network (DQN) artificial agent.

DQN is an end-to-end method for training an agent, avoiding handcrafted rules for a specific domain. In a nutshell, the agent sees what you see, has access to actions you have access to (joystick movements) and is told to optimize a specific score. Amazingly, the trained algorithm achieves human level performance on 49 Atari games using not only the same architecture, but the same parameters!

Model

Reinforcement learning is concerned with an agent interacting with the world to maximize a reward. To maximize this reward it makes a series of observations and actions. Here the game is observed via pixels (like humans), so unsurprisingly it uses a convolutional neural network to extract interesting...

Gumbel Softmax

April 14, 2017

On Categorical Reparameterization with Gumbel-Softmax by Jang et al., 2017.

The reparameterization trick is a way to formulate a distribution so as to efficiently sample. You may be aware of the trick for the Gaussian case. When sampling is written as $z \sim \mathcal{N}(\mu, \sigma^2)$ the whole sampling process is random. By rewriting the sampling to:

$$z = \mu + \sigma \epsilon \text{, where } \epsilon \sim \mathcal{N}(0,1)$$

we have shifted the randomness entirely into $\epsilon$, which follows the same distribution regardless of the values of $\mu$ and $\sigma$. Now the statistics can be learned in a neural network through backpropagation. In the Variational AutoEncoder case, the latent parameters $\mu$ and $\sigma$ are functions of the input $x$, modeled as hidden layers in a neural network.
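The Gaussian trick is easy to demonstrate numerically. This is a standalone sketch with illustrative values of $\mu$ and $\sigma$ (in a VAE they would be outputs of the encoder network):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5  # illustrative values, not learned

# Reparameterized sampling: draw the noise from a fixed N(0, 1),
# then enter mu and sigma through a deterministic, differentiable
# expression. Gradients can now flow to mu and sigma.
eps = rng.standard_normal(100_000)
z = mu + sigma * eps

# The samples follow N(mu, sigma^2), as with direct sampling.
print(z.mean(), z.std())  # close to mu = 2.0 and sigma = 0.5
```

Direct sampling `z ~ N(mu, sigma^2)` produces the same distribution, but the sampling node itself is then random in the parameters, so backpropagation cannot pass through it.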

Exploring The Limits of Language Modeling

April 3, 2017

A review of the paper Exploring the Limits of Language Modeling, 2016, by Jozefowicz et al. of Google Brain. The architecture and weights have been released by the authors and made available on the official TensorFlow git repo.

TL;DR for large-scale modeling: character CNN inputs, LSTMs with huge hidden states, use importance sampling, ensemble your best models.

Language Modeling

If you've done any neural network work in Natural Language Processing (NLP) you've probably hit a few walls. For example, you may have built a generative model where you had to deal with a massive output layer, where your model predicts the probability of tens of thousands of words. A smallish penultimate layer of 128 hidden units predicting a vocabulary of 40k words costs you a staggering 5.12 million parameters. Just one layer. This paper explores recent advances in Recurrent Neural Networks (RNNs) for large-scale tasks and how to best deal with such issues.
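The parameter count quoted above is just the weight matrix of a dense softmax layer, which is easy to verify:

```python
# Cost of a dense output layer from 128 hidden units to a
# 40k-word vocabulary (weights only, ignoring the bias vector).
hidden_units = 128
vocab_size = 40_000

params = hidden_units * vocab_size
print(params)  # 5,120,000 parameters for this single layer
```

This linear growth in vocabulary size is exactly why the paper turns to tricks like importance sampling for the softmax.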

In the good old days of Statistical...

Tutorial: Setting up a machine for deep learning

February 15, 2017

Welcome to this complete guide on setting up your machine for deep learning. Specifically, we will install Ubuntu, TensorFlow, Theano and Torch. These require CUDA for GPU computation. I have found that installing CUDA through apt-get was always problematic, so this guide goes through a manual installation.