Artificial Intelligence is on everyone’s lips. You have seen it on mobile phones when asking Siri what the weather is like today, on Google Maps when you ask for the optimal route to your destination, or on Amazon when you scroll through the products suggested for you.

However, Artificial Intelligence does not exist on its own: underlying technologies enable it as we commonly use it. Right now, the paradigm is undergoing an evolution that will bring Artificial Intelligence to a safer and cheaper place. Let’s see what it is about.
Not all that glitters is gold
As Artificial Intelligence gains momentum, Cloud Computing has become a central part of its evolution. This approach is challenging, however, because Cloud-based AI systems run their core AI algorithms in a data center. The continuous transfer of information between devices and data centers is a method that works, but it inherently involves some problems for building a robust Artificial Intelligence system.

The first problem is latency. Products that use a remote data center must send the data and wait for it to be processed before receiving the results. Since an Internet connection is never instantaneous, there will always be some delay, and this latency varies with traffic. Furthermore, as the number of users on the network increases, the latency of the system increases as well. This problem can cause products not to respond as they should.
The centrality of Cloud Computing comes with another issue: privacy. Sensitive information is sent to an unknown location that could potentially be accessible to unauthorized persons. Amazon’s Alexa, for instance, could record conversations and archive them without the customer’s knowledge or consent, thus making them accessible to the many Amazon employees who have access to data or AI systems.
I Wanna Do It on the Edge!
Edge Computing is the champion of justice, the knight in shining armor. With its superpowers, Edge resolves all the problems generated by Artificial Intelligence on the Cloud.
Edge Computing moves the execution of AI from the data center to the device. While training is not typically performed locally (it is a complex and expensive task that still needs Cloud resources), the inference of the model can be performed locally.
This immediately resolves the latency issue. AI models process data as it becomes available, which can significantly decrease execution time. Instead of having to wait for the device to establish an Internet connection, send the data, wait for it to be processed by the data center, and return the results to the device for a response, Edge Computing performs the entire process locally, reducing the need for an Internet connection and, consequently, the latency.
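As a rough sketch of the difference between the two paths (pure Python, with a made-up three-weight linear "model" and a simulated network delay, both of which are illustrative assumptions, not real products):

```python
import time

def edge_infer(sample):
    # Hypothetical tiny model: a fixed linear classifier, cheap enough
    # to run directly on the device.
    weights = [0.4, -0.2, 0.7]
    score = sum(w * x for w, x in zip(weights, sample))
    return 1 if score > 0 else 0

def cloud_infer(sample, network_delay_s=0.05):
    # The exact same model, but every request pays a (simulated)
    # network round trip to the data center before the answer returns.
    time.sleep(network_delay_s)
    return edge_infer(sample)

sample = [1.0, 2.0, 3.0]

t0 = time.perf_counter()
edge_result = edge_infer(sample)
edge_t = time.perf_counter() - t0

t0 = time.perf_counter()
cloud_result = cloud_infer(sample)
cloud_t = time.perf_counter() - t0
# Same prediction either way; only the cloud path carries the round-trip delay.
```

The model is identical in both cases; what Edge Computing removes is the round trip, which here dominates the total response time.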
The use of Edge Computing also keeps potentially sensitive information on the local device, since no data center is involved in processing it.

However, all superheroes have weak spots. Kryptonite weakens Superman and Supergirl; for Edge devices, the kryptonite is their limited computation capacity.
Fortunately, big players have already done a great job of building Edge devices with impressive computation capacity, and this is just the start!
Single but Never Alone

Let me wrap up this discussion by mentioning some little computers with great potential for Artificial Intelligence. The NVIDIA Jetson Nano and the Raspberry Pi 4 are two Single-Board Computers (SBCs) with extremely similar features. Both are built on ARM processors, a Quad-core ARM Cortex-A72 64-bit @ 1.5 GHz on the Raspberry Pi 4 and a Quad-core ARM Cortex-A57 64-bit @ 1.42 GHz on the Jetson Nano, but the biggest difference between the two boards is that the NVIDIA Jetson Nano includes a far more capable GPU: a Maxwell-microarchitecture GPU with 128 CUDA cores @ 921 MHz.

CUDA (an acronym for Compute Unified Device Architecture) is a parallel computing platform created by NVIDIA, on which software developers can write applications that perform parallel computations on the GPUs of NVIDIA video cards. This turns out to be a great advantage for Artificial Intelligence workloads, and especially for inference.
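To give an idea of the data-parallel style CUDA is built around, consider SAXPY ("single-precision a*x + y"), the classic introductory CUDA kernel: every output element is independent, so each one could be assigned to its own CUDA core. This is a plain-Python sketch of that computation, not actual GPU code:

```python
def saxpy(a, x, y):
    # SAXPY: on a GPU, CUDA would launch one lightweight thread per
    # output element; here each list position plays that role, since
    # every element is computed independently of the others.
    return [a * xi + yi for xi, yi in zip(x, y)]

result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
# result == [12.0, 24.0, 36.0]
```

With 128 CUDA cores, the Jetson Nano can run many such independent element-wise operations at once, which is exactly the shape of the matrix arithmetic that neural-network inference is made of.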
A Bright Future
Edge Computing is quickly becoming disruptive for Artificial Intelligence, almost a de facto standard. As a consequence, new Single-Board Computers appear every day with amazing features, thanks also to the boost from wearable computing. Where does such a change take us? TinyML is right around the corner, and it won’t be long before it becomes pervasive.