How to Win a Hackathon — Real-time Mobile Wildfire Detection in Tensorflow Lite.

Deep learning for good.

Adrian Yijie Xu
Towards Data Science


Introduction

Since 2010, hackathons have gained increasing visibility as a cultural phenomenon within the coding community. Beyond student-led events, industry leaders such as Facebook, Google, and Microsoft have all recognized the utility of such events, sponsoring or running hackathons of their own. Combining an intense coding and presentation project with aspects of networking over a bed of deodorant, snacks, and sleeping bags, hackathons present great opportunities for students to evaluate their solution-building capabilities under highly stressful conditions.

You may even learn of new ways to deal with stress.

Essentially, a hackathon is a microcosm of the pipeline for developing a startup: you do your own research, build an MVP, and deliver a pitch to an audience, over the course of up to 48 hours. You’ll go through wild mood swings and very little sleep, but you’ll learn a lot about your ability to work under pressure, handle adversity, and deliver on ideas. In particular, the team-based orientation of such challenges is an effective test of one’s teamwork capabilities: indeed, while professional hackathon “crews” are common, learning to work with new teammates is a much more realistic simulation of a corporate engineering environment.

But given the large number of teams at a typical hackathon, how do we stand out and build an award-winning entry? Naturally, there’s always an element of luck involved, but we do have control over a select number of key aspects:

  • Novelty — the level of originality of your solution. Has it been done a thousand times before? Note that novelty applies equally to application and method — a tried-and-true approach on a new application domain can still be an original solution.
  • Specificity — how well does your solution address the challenge statement? A water-spraying AI-powered robot may not be the best solution to an investment bank’s problems, for example.
  • Execution — how well have you delivered your solution? This includes the demo as well as the presentation and pitching aspects, with the latter two often being more significant in shorter competitions. Remember, a well-executed bad idea always trumps a badly-executed good idea.

To illustrate these points in practice, we’ll go over our winning entry to the environmental theme of the 48-hour Singapore 2018 NASA SpaceApps Challenge — Sammuta, a multimodal early detection solution for wildfire management.

Note that, as we’ve previously covered pitching and presentations in a separate article on GradientCrescent, we won’t spend too much time on them here. We’ll also stay away from discussing team-building and other interpersonal skills, and focus this article on the solution itself.

Deep Learning For Early Wildfire Detection

Over the past year, wildfires in Australia captured the attention of the world. With over 15 million acres scorched and over 100 species now feared extinct, the crisis has become a tragic reminder of the effects of man-made climate change.

While the exact causes of this season’s fires remain disputed, climate change has accelerated the rate of wildfire incidence, with rising temperatures and large amounts of dry tinder forming a powder keg for ignition. California, a known hub for wildfires, has seen “fire season” grow into an all-year phenomenon, and previously safe areas such as Sweden are now experiencing wildfires of their own. From 2017 to 2018, over $233 billion was spent on wildfire-related activities. These are all facts you should utilize in your presentation, as framing the problem with an emotional context is key to getting your message across to audiences. This is discussed in detail in our previous article on pitching.

Sammuta’s opening slide

Our solution to this problem consisted of a multimodal early detection system for wildfires, featuring three key components:

  • A cheap sensor grid with transmission capabilities to act as a coarse heat detection map.
  • A programmable aerial drone with a minimum of 2 kg of lift capacity and a range adequate to reach any point of the grid.
  • A responsive, vision-based wildfire detection system.

This is presented visually below:

All of these components can be acquired commercially off the shelf, resulting in a cost-effective, rapidly implementable solution. By using a disposable, rugged, cheap detection map for each community, we can ensure effective coverage while remaining economical and energy efficient. Programming an aerial observation drone to fly to the coordinates of a tripped sensor is facile, and has previously been demonstrated for search and rescue services.

However, it was important to narrow down to an effective demonstration component for the purposes of the competition. We decided that a vision-based mobile detection system would be the most feasible and visually striking option. Given the temporal and resource limitations of the hackathon, any solution would have to fulfill the following criteria:

  • Be computationally lightweight and responsive
  • Achieve high accuracy (>80%) over a range of simulated wildfires
  • Be easy to implement, preferably within 12 hours
  • Be easy to retrain for maintenance and update purposes
  • Offer potential online-learning compatibility for future updates

To fulfill these requirements, we decided to utilize transfer learning and TensorFlow Lite to train a real-time mobile wildfire classifier for Android devices.

Implementation

Our final TensorFlow Lite implementation for Android devices, together with the Python scripts needed for training, can be found in the GradientCrescent repository. You’ll need to install TensorFlow (1.9 or above), along with the TOCO converter, in your workspace to attempt the retraining process. You’ll also need Android Studio to compile and build the final .apk files yourself, although we do provide a copy in the repository.
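If you are starting from a clean environment, the training side needs little beyond the TensorFlow package itself (the TOCO converter ships with recent releases), so a single pip command along these lines should be sufficient:

pip install tensorflow==1.9.0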

TensorFlow Lite is a framework designed to convert standard TensorFlow graph models into lightweight, highly responsive packages suitable for fast performance on a small footprint, such as the lower-end mobile devices appropriate for semi-disposable applications. It allows for the use of quantization techniques, where a model’s parameters are converted into an 8-bit format, which has been shown to reduce model size while improving latency. At inference time, the quantized values are mapped back to approximate their original range, with a minimal effect on classification accuracy.
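To give a flavour of what the 8-bit conversion does, here is a simplified illustration of affine quantization (not the exact scheme TOCO applies): a tensor of floating-point weights is mapped onto 256 integer levels using a scale and zero point, and approximately recovered at inference time.

import numpy as np

def quantize(weights, num_bits=8):
    # Map float values onto 2**num_bits integer levels (affine quantization).
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (weights.max() - weights.min()) / (qmax - qmin)
    zero_point = int(round(qmin - weights.min() / scale))
    q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover an approximation of the original floats at inference time.
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.randn(1000).astype(np.float32)
q, scale, zero_point = quantize(weights)
print(np.abs(weights - dequantize(q, scale, zero_point)).max())  # small reconstruction error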

We’ve covered transfer learning in a previous article. Essentially, transfer learning refers to the fine-tuning of a pre-trained neural network on a new dataset. The principle is that the features learned on the original classes are largely shared with the new data, and can hence be reused to target the new classes. Retraining a pre-trained MobileNetV1 network is a facile process with the TensorFlow library, and we can perform transfer learning directly from the terminal through scripting.
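For readers more comfortable working in Python than with the terminal workflow below, the same idea can be sketched in a few lines of tf.keras. This is a minimal illustration under our three-class setup, not the retrain.py script we actually used, and the hyperparameters are placeholders:

import tensorflow as tf

# Pre-trained MobileNetV1 backbone with the ImageNet classification head removed.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the shared features; only the new head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # wildfire / forest / sandwich
])
model.compile(optimizer=tf.keras.optimizers.Adam(5e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # supply your own image data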

First, we defined some classes for the retraining process. For the purpose of our demonstration, we did a Google Image Search to obtain around 250 images of wildfires, forests, and sandwiches (humor never hurt any submission, after all). Note that retrain.py expects the image directory to contain one subfolder per class, with the folder names serving as the labels. Our retraining script can be found in “retrain.py”, modified from the official TensorFlow repository. This script can be invoked from a terminal, making it exceptionally beginner-friendly, as no additional scripting is required.

Our final model utilized a pre-trained MobileNetV1 architecture for 224 x 224 input images, with a 0.75 width multiplier and quantization-instrumented ops. This choice followed extensive testing of various architectures in order to balance millisecond-level response time against classification accuracy. Generally, the more complex the architecture and the less aggressive the quantization, the slower your model will perform.
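The width multiplier and input resolution are encoded directly in the TF Hub module path, so trying a lighter or heavier variant during testing only requires changing the --tfhub_module argument. At the time of writing, the quantization-instrumented MobileNetV1 feature vectors followed this naming pattern:

https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/quantops/feature_vector/1 (full width, most accurate, slowest)
https://tfhub.dev/google/imagenet/mobilenet_v1_075_224/quantops/feature_vector/1 (our choice)
https://tfhub.dev/google/imagenet/mobilenet_v1_050_224/quantops/feature_vector/1 (half width, fastest, least accurate)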

We then invoke our retraining script with the terminal command, specifying the training time, the specific pre-trained model architecture, preprocessing parameters, as well as our input and output directories:

python retrain.py --image_dir=C:\tensorflow_work\Firepics --output_graph=retrained_graph.pb --output_labels=retrained_labels.txt --bottleneck_dir=bottlenecks --learning_rate=0.0005 --testing_percentage=10 --validation_percentage=10 --train_batch_size=32 --validation_batch_size=-1 --flip_left_right True --random_scale=30 --random_brightness=30 --eval_step_interval=100 --how_many_training_steps=4000 --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v1_075_224/quantops/feature_vector/1

Our retrained model here is the output graph; however, this is not yet an Android-compatible TensorFlow Lite (.tflite) file. To facilitate the conversion, we must first freeze our graph using the “freeze_graph.py” script, and then utilize the TOCO converter via another terminal command:

python freeze_graph.py --input_graph=retrained_graph.pb --input_checkpoint=mobilenet_v1_0.5_224.ckpt.data-00000-of-00001 --input_binary=true --output_graph=/tmp/frozen_retrained_graph.pb --output_node_names=MobileNetV1/Predictions/Reshape_1

tflite_convert --output_file=retrainedlite.tflite --graph_def_file=retrained_graph.pb --inference_type=QUANTIZED_UINT8 --input_arrays=input --output_arrays=MobilenetV1/Predictions/Reshape_1 --mean_values=224 --std_dev_values=127
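Before moving over to Android, it is worth sanity-checking the converted file with the TensorFlow Lite interpreter in Python (exposed as tf.lite.Interpreter in newer releases, and under tf.contrib.lite in the 1.x line). The shape and dtype below assume the quantized 224 x 224 MobileNet used here:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="retrainedlite.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A quantized 224 x 224 MobileNet expects uint8 input of shape [1, 224, 224, 3].
dummy_frame = np.random.randint(0, 256, size=(1, 224, 224, 3), dtype=np.uint8)
interpreter.set_tensor(input_details[0]["index"], dummy_frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print(scores)  # one score per line of retrained_labels.txt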

Once conversion is complete, we transferred our model to the Android side of the project in Android Studio. In the interests of time, we built our solution on top of a demonstration app available in the TensorFlow repository. Note that an explanation of all of the elements of the app, Android Studio, or Java is beyond the scope of this tutorial — we’ll focus on building our demo functionality here instead.

In a straightforward process, we move our TensorFlow Lite solution, consisting of the labels and the .tflite model file, into the “assets” resource directory within our Android Studio project.

The final model, located in the assets resource folder together with the labels and an older model.
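In a default Android Studio project layout, these files end up under the app module’s assets directory (the module name may differ in your own project):

app/src/main/assets/retrained_graph.tflite
app/src/main/assets/retrained_labels.txt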

We then specify the names of our model and labels within the ImageClassifierQuantizedMobileNet Java class.

@Override
protected String getModelPath() {
  // You can download the original demo model from
  // https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_224_android_quant_2017_11_08.zip
  return "retrained_graph.tflite";
}

@Override
protected String getLabelPath() {
  return "retrained_labels.txt";
}

We then built a new cosmetic layout for our app by modifying the camerafragment.xml layout file:

Finally, we can create a threshold for our classifier, which we use here to launch visual toasts, but which could drive communication features in a final product.

public void splitCheck(String entry) {
  // Split the classifier output of the form "label: probability".
  String[] components = entry.split(":");
  String mostlikely = components[0];
  String mostlikelyprob = components[1];
  resultString = mostlikely;
  // Keep two digits of the probability (characters 3-4 of the string) to cast as an int.
  resultProb = mostlikelyprob.substring(3, 5);
  // Flag a detection only if the classifier is at least 65% confident.
  if (Integer.parseInt(resultProb) >= 65) {
    match = mostlikely + " detected";
    itemForCount = mostlikely;
  } else {
    match = "No match";
  }
}

We can then compile our solution, build our .apk file, and transfer it onto our Android device.
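With USB debugging enabled on the phone, the APK can also be pushed straight from the command line; the path below is the Android Studio default for a debug build and may differ for your project:

adb install -r app/build/outputs/apk/debug/app-debug.apk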

Here’s a live demonstration of our final solution.

With our demo and slides, we were able to achieve a first-place victory at the NASA SpaceApps Challenge with a team of complete strangers from diverse backgrounds, over roughly 12 hours of work. You can view our presentation video for the Global Finals below:

That wraps up this little aside into TensorFlow Lite. In our next article, we’ll return to exploring reinforcement learning by demonstrating its utility in a Doom gym environment.

We hope you enjoyed this article, and that you check out the many other articles covering applied and theoretical aspects of AI. To stay up to date with the latest updates on GradientCrescent, please consider following the publication and our GitHub repository.
