Deep Learning: Solving Problems With TensorFlow

Learn how to Solve Optimization Problems and Train your First Neural Network with the MNIST Dataset!


Introduction

The goal of this article is to define and solve practical use cases with TensorFlow. To do so, we will solve:

  • A linear regression problem, where we will fit a regression line to a dataset
  • And we will finish by solving the “Hello World” of Deep Learning classification projects with the MNIST dataset.

Optimization Problem

Netflix has decided to place one of its famous posters on a building. The marketing team requires the printed poster to cover an area of 600 square meters, with margins of 2 meters above and below and 4 meters left and right. We want to find the dimensions that minimize the total area occupied on the facade.
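Before handing the problem to TensorFlow, it is worth seeing where the loss function below comes from. The following plain-Python sketch (variable names are our own) derives the objective from the problem statement and computes the analytic minimum, which the gradient descent run should reproduce:

```python
import math

# Printed area: x * y = 600. The margins add 8 m to the width (4 left + 4 right)
# and 4 m to the height (2 above + 2 below), so the total facade area is
#   s(y) = (x + 8) * (y + 4) = 632 + 8*y + 2400/y   after substituting x = 600/y.
# Setting ds/dy = 8 - 2400/y**2 = 0 gives the analytic minimum.
y_opt = math.sqrt(2400 / 8)             # ~17.32 m
x_opt = 600 / y_opt                     # ~34.64 m
s_opt = 632 + 8 * y_opt + 2400 / y_opt  # total facade area at the optimum
print(y_opt, x_opt, s_opt)
```

This closed-form answer is a useful sanity check on the iterative solution that follows.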

import numpy as np
import tensorflow as tf

x = tf.Variable(initial_value=tf.random_uniform([1], 34., 35.), name='x')
y = tf.Variable(initial_value=tf.random_uniform([1], 0., 50.), name='y')

# Loss function: total facade area as a function of y
s = tf.add(tf.add(632.0, tf.multiply(8.0, y)), tf.divide(2400.0, y), 's')

opt = tf.train.GradientDescentOptimizer(0.05)
train = opt.minimize(s)

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

old_solution = 0
tolerance = 1e-4
for step in range(500):
    sess.run(train)
    solution = sess.run(y)
    if np.abs(solution - old_solution) < tolerance:
        print("The solution is y = {}".format(old_solution))
        break
    old_solution = solution
    if step % 10 == 0:
        print(step, "y = " + str(old_solution), "s = " + str(sess.run(s)))

x = 600 / old_solution[0]
print(x)
import matplotlib.pyplot as plt

y = np.linspace(0.1, 400., 500)  # start above 0 to avoid dividing by zero
s = 632.0 + 8 * y + 2400 / y
plt.plot(y, s)
min_s = np.min(s)
print("The function minimum is in {}".format(min_s))
s_min_idx = np.argmin(s)
y_min = y[s_min_idx]
print("The y value that reaches the minimum is {}".format(y_min))

Let’s See Another Example

In this case, we want to find the minimum of the y = log(x)² function.
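The update that TensorFlow applies under the hood is plain gradient descent; a minimal sketch in pure Python, using the same starting point and learning rate as the code below:

```python
import math

# f(x) = log(x)^2, so f'(x) = 2*log(x)/x.
# Gradient descent step: x <- x - learning_rate * f'(x)
x = 15.0
learning_rate = 0.5
for _ in range(100):
    x -= learning_rate * 2 * math.log(x) / x
print(x)  # converges towards x = 1, where log(x)^2 reaches its minimum of 0
```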

x = tf.Variable(15, name='x', dtype=tf.float32)
log_x = tf.log(x)
log_x_squared = tf.square(log_x)

optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(log_x_squared)
init = tf.global_variables_initializer()

def optimize():
    with tf.Session() as session:
        session.run(init)
        print("starting at", "x:", session.run(x), "log(x)^2:", session.run(log_x_squared))
        for step in range(100):
            session.run(train)
            print("step", step, "x:", session.run(x), "log(x)^2:", session.run(log_x_squared))

optimize()
x_values = np.linspace(0.1, 10, 100)  # start above 0 to avoid log(0)
fx = np.log(x_values)**2
plt.plot(x_values, fx)
min_fx = np.min(fx)
print("The function minimum is in {}".format(min_fx))
fx_min_idx = np.argmin(fx)
x_min_value = x_values[fx_min_idx]
print("The x value that reaches the minimum is {}".format(x_min_value))

Let’s Solve a Linear Regression Problem

Let’s see how to fit a straight line to a dataset that represents the intelligence of every character in the Simpsons show, from Ralph Wiggum to Professor Frink.

n_observations = 50
_, ax = plt.subplots(1, 1)
xs = np.linspace(0., 1., n_observations)
ys = 100 * np.sin(xs) + np.random.uniform(0., 50., n_observations)
ax.scatter(xs, ys)
plt.draw()

X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
Y_pred = tf.add(tf.multiply(X, W), b)
loss = tf.reduce_mean(tf.pow(Y_pred - Y, 2))
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# Definition of the number of iterations and start of the initialization using the GPU
n_epochs = 1000
with tf.Session() as sess:
    with tf.device("/GPU:0"):
        # We initialize now all the defined variables
        sess.run(tf.global_variables_initializer())
        # Start the fit
        prev_training_loss = 0.0
        for epoch_i in range(n_epochs):
            for (x, y) in zip(xs, ys):
                sess.run(optimizer, feed_dict={X: x, Y: y})
            W_, b_, training_loss = sess.run([W, b, loss], feed_dict={X: xs, Y: ys})
            # We print the loss every 20 epochs
            if epoch_i % 20 == 0:
                print(training_loss)
            # Stopping condition
            if np.abs(prev_training_loss - training_loss) < 0.000001:
                print(W_, b_)
                break
            prev_training_loss = training_loss
        # Plot of the result
        plt.scatter(xs, ys)
        plt.plot(xs, Y_pred.eval(feed_dict={X: xs}, session=sess))
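Since a straight-line fit has a closed-form least-squares solution, we can sanity-check the gradient descent result with NumPy. This is a sketch on the same kind of synthetic data; the fixed seed is our own assumption, added for reproducibility:

```python
import numpy as np

# Closed-form least-squares line for data generated like the example above
rng = np.random.RandomState(0)  # assumed seed, for reproducibility
n_observations = 50
xs = np.linspace(0., 1., n_observations)
ys = 100 * np.sin(xs) + rng.uniform(0., 50., n_observations)
W_closed, b_closed = np.polyfit(xs, ys, 1)  # slope and intercept
print("W = {:.2f}, b = {:.2f}".format(W_closed, b_closed))
```

The SGD fit should land close to these values (up to the noise draw).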

MNIST Dataset

Let’s now see how to classify digit images with a logistic regression. We will use the “Hello World” of Deep Learning datasets.

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
print("Train examples: {}".format(mnist.train.num_examples))
print("Test examples: {}".format(mnist.test.num_examples))
print("Validation examples: {}".format(mnist.validation.num_examples))

# Images are stored in a 2D tensor: images_number x image_pixels_vector
# Labels are stored in a 2D tensor: images_number x classes_number (one-hot)
print("Images size train: {}".format(mnist.train.images.shape))
print("Labels size train: {}".format(mnist.train.labels.shape))

# To see the range of the image values
print("Min value: {}".format(np.min(mnist.train.images)))
print("Max value: {}".format(np.max(mnist.train.images)))

# To see some images we will access a vector of the dataset and reshape it to 28x28
plt.subplot(131)
plt.imshow(np.reshape(mnist.train.images[0, :], (28, 28)), cmap='gray')
plt.subplot(132)
plt.imshow(np.reshape(mnist.train.images[27500, :], (28, 28)), cmap='gray')
plt.subplot(133)
plt.imshow(np.reshape(mnist.train.images[54999, :], (28, 28)), cmap='gray')

n_input = 784   # Number of data features: number of pixels of the image
n_output = 10   # Number of classes: from 0 to 9
net_input = tf.placeholder(tf.float32, [None, n_input])  # We create the placeholder
W = tf.Variable(tf.zeros([n_input, n_output]))
b = tf.Variable(tf.zeros([n_output]))
net_output = tf.nn.softmax(tf.matmul(net_input, W) + b)

SoftMax Function
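The softmax applied above turns the ten raw scores (logits) into a probability distribution over the classes. A minimal NumPy sketch; the max-subtraction is the usual numerical-stability trick, which TensorFlow handles internally:

```python
import numpy as np

def softmax(z):
    # Subtracting the max does not change the result but avoids overflow in exp
    e = np.exp(z - np.max(z))
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())  # non-negative values that sum to 1
```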

# We also need a placeholder for the image label, with which we will compare our prediction.
# Finally, we define our loss function: the cross entropy.
y_true = tf.placeholder(tf.float32, [None, n_output])
cross_entropy = -tf.reduce_sum(y_true * tf.log(net_output))

# We check if our prediction matches the label
idx_prediction = tf.argmax(net_output, 1)
idx_label = tf.argmax(y_true, 1)
correct_prediction = tf.equal(idx_prediction, idx_label)

# We define our measure of accuracy as the number of hits relative to the number of predicted samples
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

# We minimize our loss function (the cross entropy) with gradient descent and a learning rate of 0.01
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Let's train the regressor
    batch_size = 10
    for sample_i in range(mnist.train.num_examples):
        sample_x, sample_y = mnist.train.next_batch(batch_size)
        sess.run(optimizer, feed_dict={net_input: sample_x, y_true: sample_y})
        # Let's check how the regressor is performing
        if sample_i < 50 or sample_i % 200 == 0:
            val_acc = sess.run(accuracy, feed_dict={net_input: mnist.validation.images, y_true: mnist.validation.labels})
            print("({}/{}) Acc: {}".format(sample_i, mnist.train.num_examples, val_acc))
    # Let's show the final accuracy
    print('Test accuracy: ', sess.run(accuracy, feed_dict={net_input: mnist.test.images, y_true: mnist.test.labels}))
To summarize, we have trained a logistic regression with:

  • 1 epoch
  • gradient descent as optimizer
  • and softmax as activation function.
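The cross-entropy loss used above has a simple interpretation: with one-hot labels, it reduces to minus the log of the probability the model assigns to the correct class. A small NumPy illustration (the example label and prediction are made up):

```python
import numpy as np

# With a one-hot label, -sum(y_true * log(y_pred)) reduces to -log of the
# probability the model assigns to the correct class.
y_true = np.array([0., 0., 1.])      # one-hot label: the digit is class 2
y_pred = np.array([0.1, 0.2, 0.7])   # example softmax output
loss = -np.sum(y_true * np.log(y_pred))
print(loss)  # equals -log(0.7)
```

The loss is small when the model is confident and right, and grows without bound as the probability of the true class approaches zero.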

Final Words

As always, I hope you enjoyed the post, that you have learned how to use TensorFlow to solve linear problems, and that you have successfully trained your first Neural Network!

Towards Data Science

A Medium publication sharing concepts, ideas, and codes.

Victor Roman

Written by

Industrial Engineer and passionate about Industry 4.0. My goal is to encourage people to learn and explore its technologies and their infinite possibilities.
