Dear readers and contributors,
Please find here our selection of the week:
By Eugenio Culurciello — 7 min read.
These new kinds of neural networks are an evolution of the initial feed-forward model of LeNet5 / AlexNet and their derivatives, and include more sophisticated bypass schemes than ResNet / Inception. These feed-forward neural networks are also called encoders, as they compress and encode images into smaller representation vectors.
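The "compress an image into a smaller representation vector" idea can be sketched in a few lines. This is a hypothetical toy, not the article's architecture: real encoders such as LeNet5 or ResNet stack many convolutional layers, but the input and output shapes illustrate the same compression.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, dim=16):
    """Map a 2-D image to a `dim`-dimensional representation vector."""
    x = image.reshape(-1)                   # flatten the H*W pixels
    W = rng.standard_normal((dim, x.size))  # one random dense "layer" (illustrative)
    return np.maximum(W @ x, 0.0)           # ReLU non-linearity

image = rng.random((28, 28))  # a fake 28x28 grayscale image
vector = encode(image)
print(vector.shape)           # (16,) -- far smaller than the 784 input pixels
```

A trained encoder would learn `W` from data instead of drawing it at random; the point here is only the shape of the mapping.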
By Martin Schmitz, PhD — 3 min read.
People often think a given model can be deployed once and left running forever. In fact, the opposite is true: you need to maintain your models just as you maintain a machine. Machine learning models can degrade or break over time. Does that sound odd, given that they have no moving parts? Then you might want to take a close look at concept change and drift.
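A minimal sketch of what "watching for drift" can mean in practice, assuming a simple check of one feature's mean in live data against the training data (the threshold and the statistics are illustrative choices, not from the article):

```python
import numpy as np

def drifted(train, live, threshold=0.5):
    """Flag drift when the live mean shifts by more than
    `threshold` training standard deviations."""
    shift = abs(np.mean(live) - np.mean(train)) / (np.std(train) + 1e-9)
    return shift > threshold

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 1000)    # data the model was trained on
stable = rng.normal(0.05, 1.0, 1000)  # production data, same distribution
moved = rng.normal(2.0, 1.0, 1000)    # production data after the world changed

print(drifted(train, stable))  # False -- model likely still fine
print(drifted(train, moved))   # True  -- time to investigate and retrain
```

Real monitoring would track many features and use proper statistical tests, but the retrain-when-the-data-moves loop is the core of model maintenance.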
By juan.mateos-garcia — 17 min read.
Dig below the surface of some of today’s biggest tech controversies and you are likely to find an algorithm misfiring:
By Benjamin Cooley — 5 min read.
Every week I clip, save and bookmark tons of cool things I find on the web that relate to data. So here’s what caught my eye the week of May 1. In typical newsletter fashion, I’ll include a bunch of links for you to click on, save for later and then never return to again (it’s ok, we all do it). To catch next week’s post, follow me here on Medium for an update. I’m also on Twitter.
By Karlijn Willems — 25 min read.
By now, you might already know about machine learning, a branch of computer science that studies the design of algorithms that can learn. Today, you’re going to focus on deep learning, a subfield of machine learning comprising a set of algorithms inspired by the structure and function of the brain.
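The "layers of neurons" idea behind deep learning fits in a few lines of NumPy. This is a hedged sketch, not the tutorial's code: a tiny two-layer network trained on XOR, with illustrative layer sizes and training settings.

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)  # hidden layer, 4 neurons
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)  # output neuron
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network prediction
    # backpropagate the squared-error gradient through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # ideally close to [0, 1, 1, 0]
```

Frameworks like Keras or PyTorch hide the backpropagation loop behind an optimizer, but this is essentially what runs underneath.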
By Dave Currie — 11 min read.
In this article, I am going to walk you through most of my project, in which I created a one-to-sequence model that can generate tweets similar to Trump’s. The actual model is very similar to the one I built in my “How to Build Your First Chatbot” article.