TensorFlow on Mobile: TensorFlow Lite

What do we get with it?

SAGAR SHARMA
Towards Data Science

--

TensorFlow Lite is a lightweight solution and the next step after TensorFlow Mobile. You can do almost everything you can do with TensorFlow Mobile, but much faster.

Just like TensorFlow Mobile, it is aimed mainly at mobile and embedded device developers, so that they can build next-level apps on systems like Android, iOS, Raspberry Pi, etc.

How is it faster? 🚀

It supports a set of core operators, both quantized and float, which have been tuned for mobile platforms.

The core operations incorporate pre-fused activations and biases to further enhance performance and quantized accuracy.

TensorFlow Lite also supports custom operations in models, for the cases where an operator you need is not in the core set.
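As a rough sketch of how you opt into those quantized kernels and custom ops, here is the conversion step using today's tf.lite Python API (this article predates it; back then the converter lived under tf.contrib.lite). The tiny Keras model is just a stand-in for whatever model you have trained:

```python
import tensorflow as tf

# Stand-in model; substitute your own trained tf.keras model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Post-training quantization: lets the interpreter pick the tuned quantized
# kernels where possible instead of the float ones.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Keeps the converter from rejecting ops outside the core set; you register
# implementations for those ops at runtime.
converter.allow_custom_ops = True

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```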

How does it work?

Let’s look at the architecture to see how it works.

It starts with a trained model, generated from training data and tested afterwards. The converter turns that model into the .tflite file format, which is based on FlatBuffers, a serialization library similar to protocol buffers but with a much smaller code footprint.

Then, depending on the OS, you use the mobile-optimized interpreter 😎, which keeps apps smaller and faster.

The interpreter ensures minimal load, initialization, and execution latency by using static graph ordering and a custom memory allocator.
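As a minimal sketch, this is what driving that interpreter looks like from Python, assuming the model.tflite file written above (on a phone you would use the Java or Objective-C bindings instead):

```python
import numpy as np
import tensorflow as tf

# Loading the FlatBuffer and planning all tensor allocations up front is
# where the static graph ordering and custom allocator pay off.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one input of the expected shape and dtype, then run the graph.
x = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```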

Special on Android: TensorFlow Lite can call into the Android Neural Networks API to leverage on-device hardware acceleration.

What do we get with it?

  1. Android Neural Networks API and C++ API
  2. New model file format: .tflite 👏
  3. Speed: the new Neural Networks API makes computation much faster. 🏃
  4. Privacy: the data never leaves the device. 🔐
  5. Availability: it works offline. ✈️
  6. No computation cost: all the computation is performed on your device, therefore no server is needed. 💵
  7. Pre-tested models: these models work out of the box (see the sketch after this list). 📦

a) Inception V3

b) MobileNets

c) Smart Replies on Android Wear, available to all first- and third-party apps 😎
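For example, here is a sketch of producing one of these models in .tflite form yourself; tf.keras.applications.MobileNetV2 is used as a convenient stand-in for the pre-tested .tflite checkpoints that Google hosts ready-made:

```python
import tensorflow as tf

# The stock Keras MobileNet (ImageNet weights download on first use) stands
# in here for the officially hosted, pre-tested .tflite files.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("mobilenet_v2.tflite", "wb") as f:
    f.write(tflite_model)
print(f"wrote {len(tflite_model) / 1e6:.1f} MB")
```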

Trade-Offs 😐

  1. System utilization: Evaluating neural networks involves a lot of computation, which could increase battery power usage. You should consider monitoring battery 🔋 health if this is a concern for your app, especially for long-running computations.
  2. Application size: Pay attention to the size of your models. Models may take up multiple megabytes of space. If bundling large models in your APK would unduly impact your users, you may want to consider downloading the models after app installation, using smaller models, or running your computations in the cloud. NNAPI does not provide functionality for running models in the cloud. ☁️
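On the application-size point, a quick check like the sketch below (reusing the hypothetical .tflite files from the earlier examples) shows how many megabytes each bundled model would add to your APK:

```python
import os

# Rough pre-ship check: how much does each bundled model add to the APK?
for name in ["model.tflite", "mobilenet_v2.tflite"]:
    if os.path.exists(name):
        size_mb = os.path.getsize(name) / 1e6
        print(f"{name}: {size_mb:.1f} MB")
```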

I have tried to keep the article as exact and easy to understand as possible. Any comments, suggestions, or questions? Write them in the comments.

For further tutorials on how to use TensorFlow in mobile apps, follow me on Medium, Facebook, Twitter, LinkedIn, Google+, or Quora to see similar posts.

Clap it! Share it! Follow Me!

Happy to be helpful. Kudos…
