Deep Reinforcement Learning With Python | Part 3 | Using Tensorboard to Analyse Trained Models

Mohammed AL-Ma'amari
Towards Data Science
3 min readJul 14, 2020


In The Previous Parts:

  • The first part explained and built the game environment.
  • The second part walked through training the DQN, explained how DQNs work, and gave the reasons for choosing a DQN over tabular Q-Learning.

In This Part:

We are going to:

  • Use TensorBoard to visualise how well the trained models perform.
  • Explain how to load and test a trained model.
  • Load the best model and let it play the game.

Using TensorBoard:

1- Use a modified TensorBoard that logs the data from all episodes of the training process in a single log file, instead of creating a new log file every time we fit the model.

The code for ModifiedTensorBoard below is from this blog by sentdex; I only changed it slightly to make it run on TensorFlow 2.0:
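Since the original gist is not embedded here, a minimal sketch in the same spirit follows. It overrides the Keras callback hooks so that a single writer is reused across every `fit()` call; the exact hook set is an assumption based on recent TF 2.x behaviour, not sentdex's code verbatim:

```python
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard

class ModifiedTensorBoard(TensorBoard):
    """TensorBoard callback that keeps one log file for the whole
    training run instead of creating a new one on every model.fit()."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.step = 1  # our own counter, advanced once per episode
        self.writer = tf.summary.create_file_writer(self.log_dir)

    # Keep a reference to the model but skip the default writer setup
    def set_model(self, model):
        self.model = model

    # Neutralise the hooks that would otherwise create per-fit writers
    def on_train_begin(self, logs=None): pass
    def on_train_end(self, logs=None): pass
    def on_epoch_begin(self, epoch, logs=None): pass
    def on_train_batch_begin(self, batch, logs=None): pass
    def on_train_batch_end(self, batch, logs=None): pass

    # Log the metrics reported by fit() against our episode counter
    def on_epoch_end(self, epoch, logs=None):
        self.update_stats(**(logs or {}))

    # Write arbitrary named scalars at the current step
    def update_stats(self, **stats):
        with self.writer.as_default():
            for name, value in stats.items():
                tf.summary.scalar(name, value, step=self.step)
            self.writer.flush()
```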

2- Create an instance of the modified TensorBoard in the __init__ of the agent class:

PATH and the name are used to build the full path where the log file will be saved.

3- Pass the modified tensorboard as a callback when fitting the model:

4- At the start of each episode, set the modified TensorBoard's step counter to the current episode:

5- To update the logs with the episode's statistics, call the update_stats method:
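Because the original gists are not reproduced here, the sketch below shows how steps 2–5 fit together. DQNAgent, PATH, MODEL_NAME, and the toy network are placeholder assumptions rather than the series' real code, and a condensed copy of the ModifiedTensorBoard class is included so the snippet is self-contained:

```python
import time
import numpy as np
import tensorflow as tf
from tensorflow.keras import Input
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Placeholder names -- adapt to your own project
PATH = "logs"
MODEL_NAME = "DQN-Agent"

class ModifiedTensorBoard(TensorBoard):
    # Condensed ModifiedTensorBoard (see above), repeated so this runs standalone
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.step = 1
        self.writer = tf.summary.create_file_writer(self.log_dir)

    def set_model(self, model): self.model = model
    def on_train_begin(self, logs=None): pass
    def on_train_end(self, logs=None): pass
    def on_epoch_begin(self, epoch, logs=None): pass
    def on_epoch_end(self, epoch, logs=None): self.update_stats(**(logs or {}))
    def on_train_batch_begin(self, batch, logs=None): pass
    def on_train_batch_end(self, batch, logs=None): pass

    def update_stats(self, **stats):
        with self.writer.as_default():
            for name, value in stats.items():
                tf.summary.scalar(name, value, step=self.step)
            self.writer.flush()

class DQNAgent:
    def __init__(self):
        # A toy network standing in for the real DQN from Part 2
        self.model = Sequential([Input(shape=(4,)),
                                 Dense(16, activation="relu"),
                                 Dense(2, activation="linear")])
        self.model.compile(optimizer="adam", loss="mse")
        # Step 2: one ModifiedTensorBoard instance for the whole run;
        # PATH and the name give the full path the log file is saved in
        self.tensorboard = ModifiedTensorBoard(
            log_dir=f"{PATH}/{MODEL_NAME}-{int(time.time())}")

    def train(self, states, targets):
        # Step 3: pass the modified TensorBoard as a callback to fit()
        self.model.fit(states, targets, verbose=0, shuffle=False,
                       callbacks=[self.tensorboard])

def training_loop(agent, episodes):
    for episode in range(1, episodes + 1):
        # Step 4: at the start of each episode, advance the step counter
        agent.tensorboard.step = episode
        # ... play the episode and build a replay batch (random data here) ...
        agent.train(np.random.rand(8, 4), np.random.rand(8, 2))
        # Step 5: update the logs with this episode's statistics
        agent.tensorboard.update_stats(reward_max=1.0, epsilon=0.99)
```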

Visualising the Logs:

Open a terminal in the directory that contains the “logs” folder and run:

tensorboard --logdir="logs/"

TensorBoard will start serving on a local address (http://localhost:6006 by default); open it in a browser:

You can try the other options by yourself.

These visualisations let us see the relationships between the logged variables, such as epsilon and max_reward.

Loading and Using Trained Models:
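The loading code is not embedded here either; a hedged sketch of loading a saved checkpoint and letting it play greedily follows. The checkpoint filename is a placeholder, and the environment is assumed to expose the Part 1 interface where reset() returns a state and step() returns (new_state, reward, done):

```python
import numpy as np
from tensorflow.keras.models import load_model

def play_episode(model, env, max_steps=200):
    """Let a trained model play one episode greedily (no exploration)."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        # Choose the action with the highest predicted Q-value
        qs = model.predict(np.array([state]), verbose=0)[0]
        action = int(np.argmax(qs))
        state, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

# Hypothetical checkpoint name -- substitute your best model's filename
# best_model = load_model("models/best_model.h5")
# print(play_episode(best_model, env))
```

Because epsilon is not used here, the agent always exploits what it learned, which is exactly what we want when showing off the best model.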

Some Shots of Agents Playing The Game:

You can follow me on:
