We have released example code for DQN, one of the most famous deep reinforcement learning algorithms.
Let’s just quickly play Atari!
Though there are many example DQN codes on the web, few of them provide downloadable trained models. In our example, a single command automatically downloads the trained model (in nnp format) for the specified Atari game and uses it to play and render the game.
For example, if you want DQN to play a game called Phoenix, just execute the command below. The trained model will be downloaded into the assets folder as PhoenixNoFrameskip-v4/qnet.nnp, and the game will be played and rendered. (We recommend using the cudnn extension to utilize the GPU, since it provides a better user experience.)
$ python play_atari.py --extension cudnn --gym-env PhoenixNoFrameskip-v4
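Under the hood, playing simply means repeatedly feeding the current observation to the Q-network and picking the action with the highest predicted Q-value (occasionally a random one). Here is a minimal numpy sketch of such an epsilon-greedy action selector; the function name and shapes are illustrative, not taken from the actual example code:

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
q = np.array([0.1, 0.5, 0.2])  # toy Q-values for a 3-action game
# With epsilon=0, the greedy action (index 1) is always chosen.
action = epsilon_greedy(q, 0.0, rng)
```

A small nonzero epsilon is typically kept even at play time, so the agent does not get stuck in deterministic loops.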
When you train Phoenix with DQN, you will notice that training has succeeded when the agent defeats the last boss. The movie shows one interesting example, in which the agent dies immediately after defeating the last boss.
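For reference, DQN learns by regressing the Q-network toward the Bellman target y = r + γ·max_a' Q(s', a'), with the bootstrap term zeroed out for terminal transitions. A minimal numpy sketch of the target computation, assuming illustrative names and shapes (batch of 2 transitions, 2 actions):

```python
import numpy as np

def dqn_targets(rewards, next_q_values, terminal, gamma=0.99):
    """Compute y = r + gamma * max_a' Q(s', a'), zeroed on terminal steps."""
    max_next = next_q_values.max(axis=1)
    return rewards + gamma * max_next * (1.0 - terminal)

r = np.array([1.0, 0.0])
q_next = np.array([[0.5, 2.0],   # non-terminal: bootstraps from max = 2.0
                   [1.0, 3.0]])  # terminal: bootstrap is masked out
done = np.array([0.0, 1.0])
y = dqn_targets(r, q_next, done)  # [1 + 0.99*2, 0] = [2.98, 0.0]
```

In the actual algorithm, `next_q_values` comes from a separate target network that is updated only periodically, which stabilizes training.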
Let’s evaluate like the original paper
Our example training code evaluates the parameters during training in the same way as the original DQN paper, so that you can reproduce its results.
By executing the command below, you can train and evaluate like in the original paper.
$ python train_atari.py --extension cudnn --gym-env PhoenixNoFrameskip-v4
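The paper-style evaluation amounts to pausing training at a fixed interval, running a few evaluation episodes, and recording the average score. A hypothetical skeleton of that loop; the function names, interval, and episode count are illustrative, not the actual training script:

```python
def train_with_eval(total_steps, eval_interval, run_episode, n_eval_episodes=5):
    """Interleave training steps with periodic evaluation, DQN-paper style."""
    scores = []
    for step in range(1, total_steps + 1):
        # ... one environment/training step would go here ...
        if step % eval_interval == 0:
            # Average return over a handful of evaluation episodes.
            returns = [run_episode() for _ in range(n_eval_episodes)]
            scores.append((step, sum(returns) / len(returns)))
    return scores

# Toy stand-in episode that always scores 10, to show the bookkeeping.
scores = train_with_eval(total_steps=100, eval_interval=50,
                         run_episode=lambda: 10.0)
```

The recorded (step, average score) pairs are what produce the learning curves reported in the paper.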
Evaluation results for 59 Atari games are shown here. Note that although we trained for only 10M steps, fewer than the original paper's 50M steps, the evaluation results are mostly better, thanks to the good hyperparameters we used, which were originally provided by OpenAI Baselines.