
Why Linear Regression is All You Need

Understand it, and getting into deep learning will become less of a struggle (still a struggle, but less)

This short post is for beginners who are starting with linear regression and climbing their way toward the deep learning field. I won’t address these topics in depth here.

My intent is only to give an idea of why linear regression can ease the path into deeper waters. I’ve written about linear regression before; you can read [here](https://medium.com/geekculture/machine-learning-algorithm-from-scratch-4a1a48a9a355) and here.

"Oh, is linear regression really machine learning?!"

That was my thought, too.

I hear people joke (or half-joke) that linear regression is not machine learning. I get it! Linear regression can be disappointing if you got into the field to build a self-driving car or a robot that cleans your house, as in the movies.

But believe me, linear regression is essential for building the right intuition for more complex topics. Understanding logistic regression is even better. Don’t be put off by its (relative) simplicity. Keep studying.

Weights, bias, optimiser – it’s everywhere!

The formula of the line as well: ŷ = weight * x + bias
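That formula is short enough to write out as code. A minimal sketch (the function name and numbers are mine, just for illustration):

```python
# A single prediction from the line formula: y_hat = weight * x + bias
def predict(x, weight, bias):
    return weight * x + bias

print(predict(3.0, weight=2.0, bias=1.0))  # 2*3 + 1 = 7.0
```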

I moved on to neural networks and deep learning, and I’m thankful I learned linear regression first. It truly helps to get the basics down:

  • Weights (the slope of the line)
  • Bias (the y-intercept of the line)
  • An optimiser: the model predicts a value, compares it with the actual data (the ground truth), and calculates the error (the distance between the prediction and the actual data). The algorithm then adjusts the weights and bias to minimise that error.
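That predict–compare–adjust cycle can be sketched in a few lines of plain Python. This is a toy example of my own (the data, learning rate, and iteration count are made up), fitting the line y = 2x + 1 with gradient descent on the mean squared error:

```python
# Toy ground truth: points on the line y = 2x + 1
data = [(x, 2 * x + 1) for x in range(10)]
weight, bias, lr = 0.0, 0.0, 0.01  # start from zero; lr is the learning rate

for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (weight * x + bias) - y      # prediction minus ground truth
        grad_w += 2 * error * x / len(data)  # d(MSE)/d(weight)
        grad_b += 2 * error / len(data)      # d(MSE)/d(bias)
    weight -= lr * grad_w                    # step against the gradient
    bias -= lr * grad_b

print(weight, bias)  # close to 2 and 1
```

The same loop, with more parameters and fancier update rules, is what an optimiser in a deep learning framework is doing under the hood.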
Figure 2. Various lines’ rotation representation. Image by the author.

ALL these concepts are part of deep learning as well. The same stuff happens in a neural network.

In a nutshell, a problem requires deep learning when a straight line can’t cover, or better said, represent the data well. A straight line can’t fit the example on the right below.

Figure 3. Linear and non-linear representation. Image by the author.

But the core logic is the same. In a neural network, putting it in simple terms, each node contributes a piece of a line, and those pieces are combined to form the shape required to represent the data points.
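A hedged sketch of that idea: with a ReLU activation, each hidden node is still just a line (weight, bias), but it stays flat until its "bend", and summing the nodes gives a piecewise-linear curve. All the numbers below are made up for illustration:

```python
def relu(z):
    return max(0.0, z)

# Two hidden nodes, each a line passed through ReLU; the output sums them.
def tiny_net(x):
    h1 = relu(1.0 * x - 0.0)    # contributes nothing until x = 0
    h2 = relu(1.0 * x - 2.0)    # contributes nothing until x = 2
    return 0.5 * h1 + 1.5 * h2  # piecewise-linear combination

# The slope changes at each bend: 0, then 0.5, then 2.0
print([tiny_net(x) for x in (-1, 1, 3)])
```

With enough nodes (and layers), these bends can approximate the curved shape in the figure below.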

Figure 4. From the ground truth to prediction using deep learning. Image by the author.

Final Thoughts

I had read about nodes, hidden layers, weights, and biases, but I didn’t have a clear picture of what all those things were or did. Thanks to learning linear regression first, I now have a much clearer understanding.

There is so much more to neural networks, and their inner workings can get quite complex.

Have you tried to follow the implementation of a language translator (encoder-decoder/seq2seq) or the word2vec algorithm? Give it a try, and you’ll see.

My goal is to cover more of the deep learning topic: the different activation functions; the evolution from RNNs to LSTMs and GRUs, to seq2seq, to seq2seq with attention, to Transformers; and convolutional networks for image recognition, among others.

For now, that’s it. Thanks for reading.

