Why Neural Nets Can Approximate Any Function

Basic Overview of the Universal Approximation Theorem with PyTorch Code and Visuals

Thomas Hikaru Clark
Towards Data Science
8 min read · Jul 11, 2020


In this article, I will explain the Universal Approximation Theorem and showcase two quick examples with PyTorch code to demonstrate neural networks learning to approximate functions. Feel free to skip straight to the code and visualizations if you already know the basics of how a neural network works!

When a lot of people hear the word function, they think of high school algebra and relations like f(x) = x². Although I have nothing against high school algebra (I taught it for two years!), it’s important to keep in mind that a function is just a mapping from inputs to outputs, and both the inputs and the outputs can take many forms.
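To see how general that idea is, here is a tiny illustration of my own (not from the article): both of the following are functions in the mapping sense, even though only the first one looks like high school algebra.

```python
def f(x):
    return x ** 2  # numbers in, numbers out: the textbook kind of function

def label_length(text):
    # strings in, labels out: still just a mapping from inputs to outputs
    return "short" if len(text) < 20 else "long"

print(f(3))                   # 9
print(label_length("hello"))  # short
```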

Photo by Kitti Incédi on Unsplash

Let’s say you want to train a machine learning model that predicts a person’s clothing size. (I recently used such an algorithm to estimate my size for a jacket.) The inputs are the person’s height, weight, and age. The output is the size. In other words, we want to produce a function that converts a person’s height/weight/age combination (a triple of numbers) into a size (perhaps a continuous scalar value, or a class like XS, S, M, L, or XL).
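To make that concrete, here is a minimal PyTorch sketch (my own illustration, not the article’s code) of what such a model could look like: a small feed-forward network that takes the (height, weight, age) triple and outputs a single continuous size estimate. The class name SizePredictor and the layer widths are arbitrary choices; a classifier version would simply end in five outputs, one per size class.

```python
import torch
import torch.nn as nn

class SizePredictor(nn.Module):
    """Maps a (height, weight, age) triple to a single size estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 16),   # 3 inputs: height, weight, age
            nn.ReLU(),
            nn.Linear(16, 16),
            nn.ReLU(),
            nn.Linear(16, 1),   # 1 output: a continuous size estimate
        )

    def forward(self, x):
        return self.net(x)

model = SizePredictor()
person = torch.tensor([[170.0, 65.0, 30.0]])  # height (cm), weight (kg), age (years)
print(model(person))  # untrained, so this output is meaningless until we fit the model to data
```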

In machine learning terms, we can do this with the following steps:
