The world’s leading publication for data science, AI, and ML professionals.

August Edition: Deep Learning

8 Must-Read Articles

Artificial intelligence (AI) is gradually embedding itself into every aspect of our lives. Interestingly, as it does so it is becoming less noticeable and more human. This is due, in part, to the way the machines are learning. There are many different methods used to train and develop AI models; some focus on solving specific problems and tasks, whilst others use ‘feature learning’ to analyse data representations, spotting patterns and learning from them. Deep learning architectures fall into the latter category, usually built with ‘networks’ not dissimilar to the neural structures of our own brains. In fact, there are now situations where AI is so effective that it outpaces our own human experts.

Look at the latest computational juggernaut of chess: AlphaZero. Developed by the DeepMind team at Google, it combines deep neural networks (for the kind of pattern recognition the greatest human chess players rely on) with a general reinforcement learning algorithm. This means that when it started, it had no data beyond the rules of the game, and learnt simply by playing against itself. Within four hours it had progressed enough to beat Stockfish 8, the 2016 Chess Engine World Champion, in a 100-game match. What’s more, Stockfish 8 analyses 70 million positions per second, compared to the 80 thousand that AlphaZero examines in the same time frame. AlphaZero’s approach, learning by reinforcement through recognising and improving on patterns without relying on data any more than a human might, is much less like the machine intelligence you’d expect.

Looking through this month’s picks on Deep Learning, you can take a crash course in the fundamentals and begin to better understand some of the current applications of this technology. As you become more familiar with the science behind AI, you might start to realise you have more in common with it than you think… Good luck!

Joshua Fleming – Editor


How I implemented iPhone X’s FaceID using Deep Learning in Python

By Norman Di Palo – 8 min read

One of the most discussed features of the new iPhone X is its new unlocking method, the successor of TouchID: FaceID. Having created a bezel-less phone, Apple had to develop a new method to unlock the phone in an easy and fast way.


Using Deep Learning to improve FIFA 18 graphics

By Chintan Trivedi – 6 min read

Game studios spend millions of dollars and thousands of development hours designing game graphics, trying to make them look as close to reality as possible. While graphics have looked amazingly realistic in the last few years, it is still easy to distinguish them from the real world.


Intuitively Understanding Convolutions for Deep Learning

By Irhum Shafkat – 15 min read

The advent of powerful and versatile deep learning frameworks in recent years has made implementing convolution layers in a deep learning model an extremely simple task, often achievable in a single line of code.
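To build intuition for what that one line of framework code is doing under the hood, here is a minimal sketch of a "valid" 2D convolution (strictly, cross-correlation, as deep learning frameworks use it) in plain NumPy — the function name and the edge-detector example are my own illustration, not from the article:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution: slide the kernel over the image
    and take the elementwise-product sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic vertical-edge-detecting kernel applied to a tiny image
# with a sharp dark-to-bright edge between columns 1 and 2.
image = np.array([[0, 0, 1, 1]] * 4, dtype=float)
kernel = np.array([[-1, 0, 1]] * 3, dtype=float)
print(conv2d(image, kernel))  # every window straddles the edge: [[3. 3.] [3. 3.]]
```

A 3×3 kernel over a 4×4 image yields a 2×2 output (4 − 3 + 1 per side) — the same shape arithmetic a framework's convolution layer performs without padding.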


Stochastic Weight Averaging – a New Way to Get State of the Art Results in Deep Learning

By Max Pechyonkin – 8 min read

In this article, I will discuss two interesting recent papers that provide an easy way to improve the performance of any given neural network by using a smart ensembling technique.
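The core mechanical step of stochastic weight averaging — averaging the weights of several checkpoints collected along the training trajectory, rather than averaging their predictions — can be sketched in a few lines. This toy version (function name and toy checkpoints are mine, not from the article) shows only the averaging step, not the learning-rate schedule used to collect the checkpoints:

```python
import numpy as np

def swa_average(checkpoints):
    """Average corresponding weight arrays across checkpoints.
    Each checkpoint is a list of weight arrays (one per layer)."""
    n = len(checkpoints)
    return [sum(layer_weights) / n for layer_weights in zip(*checkpoints)]

# Three toy "checkpoints", each with two layers of weights
ckpts = [
    [np.array([1.0, 2.0]), np.array([[0.0]])],
    [np.array([3.0, 4.0]), np.array([[3.0]])],
    [np.array([5.0, 6.0]), np.array([[6.0]])],
]
avg = swa_average(ckpts)
print(avg[0])  # [3. 4.]
print(avg[1])  # [[3.]]
```

The appeal is that, unlike a traditional ensemble of several models, the averaged network costs no more at inference time than a single model.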


Must-know Information Theory concepts in Deep Learning (AI)

By Abhishek Parbhakar – 6 min read

Information theory is an important field that has made significant contributions to deep learning and AI, yet it remains unknown to many. Information theory can be seen as a sophisticated amalgamation of the basic building blocks of deep learning: calculus, probability and statistics.
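Two of the concepts such an article typically covers — entropy and cross-entropy (the quantity behind the standard classification loss) — can be computed directly from their definitions. A minimal sketch, with function names of my own choosing:

```python
import math

def entropy(p):
    """Shannon entropy in bits: H(p) = -sum_i p_i * log2(p_i)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """Cross-entropy in bits: H(p, q) = -sum_i p_i * log2(q_i),
    the expected code length when events from p are coded using q."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

fair = [0.5, 0.5]
biased = [0.9, 0.1]
print(entropy(fair))    # 1.0 — a fair coin carries a full bit of uncertainty
print(entropy(biased))  # ~0.469 — a biased coin is more predictable
# Modelling a fair coin with the biased distribution costs extra bits:
print(cross_entropy(fair, biased))  # ~1.737, always >= entropy(fair)
```

The gap between cross-entropy and entropy is the KL divergence — minimising cross-entropy loss against one-hot labels is exactly this idea applied per training example.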


A "weird" introduction to Deep Learning

By Favio Vázquez – 14 min read

There are amazing introductions, courses and blog posts on Deep Learning. I will name some of them in the resources section, but this is a different kind of introduction.


Deep Learning meets Physics: Restricted Boltzmann Machines

By Artem Oppermann – 8 min read

This tutorial is part one of a two-part series on Restricted Boltzmann Machines, a powerful deep learning architecture for collaborative filtering. In this part, I introduce the theory behind Restricted Boltzmann Machines.


Deep learning in your browser: A brisk guide

By Mike Shi – 7 min read

We’ll cover pulling the original Tiny YOLO Darknet model, converting it into Keras, converting that into Tensorflow.js, doing some predictions, gotchas while writing in Tensorflow.js, and using webcam/images easily for predictions.
