
Pillar Based 3-D Point Cloud Object Detection Implementation on Waymo Open Dataset

Tutorial to implement a Pillar Based Object Detection deep neural network on Amazon SageMaker. This can be generalized to any cloud instance…

Introduction

This post is about implementing and running a 3D point cloud object detection deep neural network on AWS. LiDAR (3D point cloud) object detection is a crucial area of research in autonomous driving: since self-driving cars use LiDARs to perceive objects on the road, detecting these objects and predicting their motion is essential for making sensible driving decisions.

There are many open datasets for object detection and tracking on roads for you to learn from; a few popular ones are the Waymo Open Dataset, KITTI, and nuScenes.

Once you know which dataset to work with, the next step is choosing among the well-studied neural networks that perform well on such data. Depending on the application (object detection, segmentation, object tracking, etc.), different networks are appropriate. Please refer to this post to read more about 3D point cloud data and its applications, as well as which networks work best for each application. For the purpose of this post, we focus on object detection and a recently published pillar-based method [1].

Prerequisites:

Waymo Dataset Repo on Google Cloud

Implementation Steps

Start the cloud instance

Since I’m using Amazon SageMaker, we start a notebook instance with around 2 TB of storage and a GPU.

Creating a notebook instance on Amazon SageMaker.
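If you prefer the AWS CLI over the console, a roughly equivalent call is sketched below. This is my own addition rather than part of the original walkthrough: the instance name and type (ml.p3.2xlarge) are arbitrary choices, and the role ARN is a placeholder you must replace with your own SageMaker execution role.

# Hypothetical CLI equivalent of the console steps above (adjust name, type, and role ARN)
aws sagemaker create-notebook-instance \
    --notebook-instance-name pillar-od-notebook \
    --instance-type ml.p3.2xlarge \
    --volume-size-in-gb 2048 \
    --role-arn arn:aws:iam::<account-id>:role/<sagemaker-execution-role>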

Then we start our cloud instance and open JupyterLab, followed by a new terminal on our instance.

Open a terminal for your cloud instance.

Get the data

Use the following commands to install the Google Cloud SDK, which we’ll use to get the data from the Google Cloud bucket (remember to add gcloud to the PATH):

curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init

If gcloud is still not recognized, view the contents of the .bashrc file with cat /home/ec2-user/.bashrc and verify the lines for gcloud, which are probably the last two lines of the file. Use the commands below to manually add them to your PATH. This should get the gcloud commands working in your terminal.

source /home/ec2-user/google-cloud-sdk/path.bash.inc
source /home/ec2-user/google-cloud-sdk/completion.bash.inc
gcloud init
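As a quick, optional check (not part of the original steps), you can confirm the SDK tools are now on your PATH before moving on:

# Optional: confirm the Cloud SDK tools are on PATH
which gcloud gsutil
gcloud --version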

Now, finally, get the training and validation data and store them in train_data and validation_data directories, respectively. The command to copy data from the Google Cloud bucket to your instance is gsutil -m cp -r gs://waymo_open_dataset_v_xxx /path/to/my/data. To find the exact name of the Waymo bucket, open the Google Cloud Storage directory for the Waymo Open Dataset and copy the bucket’s name. If you downloaded a .tar file, extract it with tar -xvf training_xxxx.tar. This will produce the data files for the various segments inside a scene.
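As a rough sketch of these steps, the commands below keep the placeholder bucket name from above; the training/validation subdirectory names inside the bucket are an assumption and vary by dataset release, so adjust them to what you see in the Google Cloud Storage browser.

# Placeholders: bucket name/version and subdirectory layout depend on the dataset release
mkdir -p train_data validation_data
gsutil -m cp -r gs://waymo_open_dataset_v_xxx/training ./train_data
gsutil -m cp -r gs://waymo_open_dataset_v_xxx/validation ./validation_data
# If the files arrive as .tar archives, extract them in place
tar -xvf train_data/training_xxxx.tar -C train_data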

Setup code

  • Clone the GitHub repo: git clone https://github.com/tyagi-iiitv/pillar-od.git
  • Create a virtual environment to work in: conda create -p ./env anaconda python=3.7. This installs an initial, complete Anaconda environment with the necessary packages.
  • Activate the conda environment: source activate ./env
  • Install additional libraries: conda install absl-py tensorflow-gpu tensorflow-datasets
  • Install the Waymo Open Dataset wrapper library:
rm -rf waymo-od > /dev/null
git clone https://github.com/waymo-research/waymo-open-dataset.git waymo-od
cd waymo-od && git branch -a
git checkout remotes/origin/master
pip install --upgrade pip
pip install waymo-open-dataset-tf-2-1-0==1.2.0
  • Prepare the dataset for the model. Change the source and target directories inside pillar-od/data/generate_waymo_dataset.sh, then run the script to read frames from the downloaded data. This will take a while, depending on how much data you’ve downloaded. A couple of optional sanity checks are sketched after these commands.
cd pillar-od/data
chmod +x generate_waymo_dataset.sh
./generate_waymo_dataset.sh
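As the optional sanity checks mentioned above (these are my additions, and the processed-data paths are placeholders for whatever target directories you set in generate_waymo_dataset.sh), verify that the wrapper library imports cleanly and that the script actually wrote output files:

# Optional sanity checks; replace the paths with your own target directories
python -c "from waymo_open_dataset import dataset_pb2; print('waymo_open_dataset import OK')"
ls /path/to/processed/train | head
ls /path/to/processed/valid | wc -l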

Model Training

Before we can run the train.py file inside the pillar-od directory, make sure to change the paths to the dataset and the other configuration parameters inside the config.py file. Once that’s done, let’s install a few libraries to get started with training.

pip install lingvo tensorflow-addons

and finally, the model is ready to train for cars (class_id=1) or pedestrians (class_id=2):

python train.py --class_id=1 --nms_iou_threshold=0.7 --pillar_map_size=256
python train.py --class_id=2 --nms_iou_threshold=0.7 --pillar_map_size=512
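Training on the full dataset takes a long time, so one option (not part of the original repo instructions) is to launch it in the background and keep a log you can tail from the terminal, shown here for the car class:

# Optional: run the car-class training in the background and keep a log
nohup python train.py --class_id=1 --nms_iou_threshold=0.7 --pillar_map_size=256 > train_car.log 2>&1 &
tail -f train_car.log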

Model Evaluation

The procedure for model evaluation is the same; use the commands below:

python eval.py --class_id=1 --nms_iou_threshold=0.7 --pillar_map_size=256 --ckpt_path=/path/to/checkpoints --data_path=/path/to/data --model_dir=/path/to/results
python eval.py --class_id=2 --nms_iou_threshold=0.2 --pillar_map_size=512 --ckpt_path=/path/to/checkpoints --data_path=/path/to/data --model_dir=/path/to/results

References

[1] Pillar-based Object Detection for Autonomous Driving, Wang, Y. et al., ECCV 2020

