What is a perceptron?

Gerry Saporito
Towards Data Science
2 min read · Sep 17, 2019

A neural network is an interconnected system of perceptrons, so it is safe to say that perceptrons are the foundation of any neural network. A perceptron can be viewed as a building block within a single layer of a neural network, and it is made up of four parts:

  1. Input Values or One Input Layer
  2. Weights and Bias
  3. Net Sum
  4. Activation Function
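These four parts combine into a single formula. As a brief sketch in standard notation (the symbols are an assumption here; the article itself does not introduce any):

$$\text{output} = f\Big(\sum_{i=1}^{n} w_i x_i + b\Big)$$

where $x_1, \dots, x_n$ are the input values, $w_1, \dots, w_n$ are the weights, $b$ is the bias, and $f$ is the activation function.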

A neural network can be perceived as a complex logical statement (the network) built from very simple logical statements (the perceptrons), such as “AND” and “OR” statements. A statement can only be true or false, never both at the same time. The goal of a perceptron is to determine, from its inputs, whether the feature it is recognizing is present, in other words, whether the output is going to be a 0 or a 1. A complex statement is still a statement, and its output can likewise only be a 0 or a 1.
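To make this concrete, here is a minimal Python sketch of a single perceptron acting as an “AND” statement. The weights, bias, and step activation are hand-picked assumptions for illustration, not values the article specifies:

```python
def step(net_sum):
    # Activation function: standardizes the net sum to a 0 or 1
    return 1 if net_sum >= 0 else 0

def and_perceptron(x1, x2):
    # Hand-picked weights and bias: the net sum is non-negative
    # only when both inputs are 1 (1.0 + 1.0 - 1.5 = 0.5)
    w1, w2, bias = 1.0, 1.0, -1.5
    return step(w1 * x1 + w2 * x2 + bias)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", and_perceptron(x1, x2))  # only (1, 1) -> 1
```

Changing the bias to -0.5 turns the same structure into an “OR” statement, which fires when at least one input is 1.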

Following how a perceptron functions is not very difficult: summing the weighted inputs (each input from the previous layer multiplied by its weight) and adding a bias (a constant value attached to the perceptron) produces the net sum. The inputs can come either from the input layer or from perceptrons in a previous layer. An activation function is then applied to the net sum, standardizing the value into an output of 0 or 1. This decision made by the perceptron is then passed on to the next layer for the next perceptrons to use in their decisions.
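The same flow can be written out for any number of inputs. Below is a minimal sketch of that forward pass, again assuming a simple step function as the activation; the function name and example weights are illustrative, not from the article:

```python
def perceptron(inputs, weights, bias):
    # Net sum: each input multiplied by its weight, plus the bias
    net_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation function (a step function here) standardizes
    # the net sum into a 0-or-1 decision
    return 1 if net_sum >= 0 else 0

# The inputs can come from the input layer or from a previous layer
previous_layer_outputs = [1, 0, 1]
decision = perceptron(previous_layer_outputs, weights=[0.5, -0.2, 0.8], bias=-1.0)
print(decision)  # 1; this decision is passed on to the next layer
```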

Together, these pieces make up a single perceptron in a layer of a neural network. These perceptrons work together to classify or predict inputs successfully, each passing on whether the feature it sees is present (1) or not (0). The perceptrons are essentially messengers, collectively reporting what fraction of the features that correlate with a classification are actually present in the input. For example, if 90% of those features are present, the input probably belongs to that classification, whereas another input showing only 20% of the features probably does not. It’s just as Helen Keller once said: “Alone we can do so little; together we can do so much.” And this is very true for perceptrons all around.
