Natural Scene Recognition Using Deep Learning

Scene recognition is one of the most challenging research fields in computer vision.

Shubham Gupta
Towards Data Science


Recognizing an environment at a glance is one of the human brain’s most remarkable feats. While the tremendous recent progress in object recognition owes much to the availability of large datasets such as COCO and to the rise of Convolutional Neural Networks (CNNs) that learn high-level features, scene recognition has not achieved the same level of success.

In this blog post, we will see how classification models perform at classifying images of scenes. For this task, we train on the Places365-Standard dataset, which has 1,803,460 training images across 365 classes, with 3,068 to 5,000 images per class and an image size of 256×256.

Images from the dataset
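
Once the dataset is extracted (see the next section), these statistics can be verified with a quick count. A minimal sketch, assuming a class-per-folder layout under train/:

import os

# Count images per class, assuming train/<class_name>/<image>.jpg
counts = {}
for label in sorted(os.listdir("train/")):
    class_dir = os.path.join("train/", label)
    if os.path.isdir(class_dir):
        counts[label] = len(os.listdir(class_dir))

print("classes:", len(counts))
print("total images:", sum(counts.values()))
print("min/max per class:", min(counts.values()), max(counts.values()))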

Installing and Downloading the Data

Let’s start by setting up Monk and its dependencies:

!git clone https://github.com/Tessellate-Imaging/monk_v1.git
!cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt

After installing the dependencies, I downloaded the Places365-Standard dataset, which is available from the Places2 project website (http://places2.csail.mit.edu/).
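
For reference, here is a minimal download-and-extract sketch; the archive names below are taken from the Places2 download page and may change, so treat them as assumptions:

# Download the 256x256 training and test archives (large files!)
!wget http://data.csail.mit.edu/places/places365/train_256_places365standard.tar
!wget http://data.csail.mit.edu/places/places365/test_256.tar

# Extract them into the working directory
!tar -xf train_256_places365standard.tar
!tar -xf test_256.tar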

Create an Experiment

I created an experiment, and for this task I used the MXNet Gluon back-end.

import os
import sys
sys.path.append("monk_v1/monk/");
from gluon_prototype import prototype

# Create the "Places_365" project with an experiment named "Experiment"
gtf = prototype(verbose=1);
gtf.Prototype("Places_365", "Experiment");

Model Selection and Training

I experimented with various models such as ResNet, DenseNet, Inception, and VGG16; among these, VGG16 gave higher validation accuracy than any other model.
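
That comparison can be scripted by creating one Monk experiment per architecture. This is a minimal sketch: the experiment names and the shortened epoch count are my own choices, and the model names assume the Gluon model-zoo identifiers that Monk's Gluon back-end uses:

# One experiment per candidate backbone (names assume Gluon model-zoo ids)
for model in ["resnet50_v1", "densenet121", "vgg16"]:
    gtf = prototype(verbose=0);
    gtf.Prototype("Places_365", "Compare_" + model);
    gtf.Default(dataset_path="train/",
                path_to_csv="labels.csv",
                model_name=model,
                freeze_base_network=False,
                num_epochs=5);   # short runs just to rank the models
    gtf.Train();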

# labels.csv maps each training image to its class label
gtf.Default(dataset_path="train/",
            path_to_csv="labels.csv",
            model_name="vgg16",
            freeze_base_network=False,   # fine-tune the whole network
            num_epochs=20);
gtf.Train();

After training for 20 epochs, I got a training accuracy of 65% and a validation accuracy of 53%.
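
To get that validation number reproducibly, Monk can evaluate a trained experiment on a held-out split. A sketch assuming a val/ folder and a val_labels.csv file (both names are mine), using Monk's evaluation workflow:

# Reload the trained experiment in evaluation mode
gtf = prototype(verbose=1);
gtf.Prototype("Places_365", "Experiment", eval_infer=True);

# Point it at the held-out split and evaluate
gtf.Dataset_Params(dataset_path="val/", path_to_csv="val_labels.csv");
gtf.Dataset();
accuracy, class_based_accuracy = gtf.Evaluate();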

Prediction

# Reload the experiment in inference mode
gtf = prototype(verbose=1);
gtf.Prototype("Places_365", "Experiment", eval_infer=True);

# Run inference on a test image and display it
from IPython.display import Image
img_name = "test_256/Places365_test_00208427.jpg"
predictions = gtf.Infer(img_name=img_name);
Image(filename=img_name)

Prediction on a test image

# A second test image
img_name = "test_256/Places365_test_00151496.jpg"
predictions = gtf.Infer(img_name=img_name);
Image(filename=img_name)

Prediction on a test image
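
Infer also returns the prediction as a Python dictionary, so the result can be inspected programmatically; the key names below are assumptions based on Monk's examples, so print the whole dictionary first if your version differs:

# Inspect the returned prediction (key names are assumptions)
print(predictions)
print(predictions["predicted_class"], predictions["score"])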

After this, I tried to find out why the accuracy did not improve beyond this point. Some of the possible reasons are:

Incorrect Labels: While inspecting the training folder, I found images with incorrect labels; for example, the baseball_field class contains images that clearly do not belong to it. There are many more such mislabeled images.

import matplotlib.image as mpimg
import matplotlib.pyplot as plt

# Display one of the mislabeled training images
img = mpimg.imread("images/train/baseball_field2469.jpg")
imgplot = plt.imshow(img)

Wrong image in baseball_field

Unclear Scenes: Because several classes share similar objects, such as dining_room and dining_hall, or forest_road and field_road, there are ambiguous images that are very hard to classify.

Label: field_road
Label: forest_road

As we can see, it is very hard to tell these two images apart.
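
To eyeball such ambiguous pairs side by side, here is a minimal matplotlib sketch; the file paths are placeholders for real examples from the two classes:

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Show two easily confused classes next to each other
# (paths are placeholders; substitute real training images)
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
paths = ["images/train/field_road123.jpg", "images/train/forest_road456.jpg"]
labels = ["field_road", "forest_road"]
for ax, path, label in zip(axes, paths, labels):
    ax.imshow(mpimg.imread(path))
    ax.set_title(label)
    ax.axis("off")
plt.show()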

Multiple Scene Parts: Images that consist of multiple scene parts, such as buildings near the ocean, cannot be classified into a single category. These scenes are hard to classify and would require multiple ground-truth labels to describe the environment.

To summarize, this blog post has shown how we can use deep learning networks to perform natural scene classification, and why scene recognition has not achieved the same level of success as object recognition.

References

B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba, “Places: A 10 Million Image Database for Scene Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. http://places2.csail.mit.edu/PAMI_places.pdf
