This short post is for beginners who are starting with linear regression and climbing their way into the Deep Learning field. I'm not addressing these topics in depth in this article.
My intent is only to give an idea of why linear regression can smooth the path into deeper waters. I've written about linear regression before. You can read [here](https://medium.com/geekculture/machine-learning-algorithm-from-scratch-4a1a48a9a355) and here.
"Oh, is linear regression really machine learning?!.."
That was my own thought once.
I hear people joke (or half-joke) that linear regression is not Machine Learning. I get it! Linear regression can feel disappointing if you got into the field to build a self-driving car or a robot that cleans your house, like in the movies.
But believe me, linear regression is essential for building the right intuition about other, more complex topics. Understanding logistic regression is even better. Don't be demotivated by its (relatively speaking) simplicity. Keep studying.
Weights, bias, optimiser – they're everywhere!
So is the formula of the line: y-hat = weight * x + bias
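In code, the line formula is just as simple. A minimal sketch (the function and parameter names here are my own, for illustration):

```python
# The line formula: y_hat = weight * x + bias
def predict(x, weight, bias):
    return weight * x + bias

# With weight=2 and bias=1, the line crosses the y-axis at 1 and rises 2 per unit of x
print(predict(3, weight=2, bias=1))  # 2*3 + 1 = 7
```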
When I started with neural networks and deep learning, I was thankful I had learned linear regression first. It truly helps to get the basics down.
- Weights (slope of the line)
- Bias (the y-intercept of the line)
- An optimiser: the model predicts a value, compares it with the actual data (the ground truth) and calculates the error (the distance between the prediction and the actual data). The algorithm then adjusts the weights and bias to minimise that error.

ALL of these concepts are part of deep learning as well. The same things happen inside a neural network.
In a nutshell, a problem requires deep learning when a straight line can't cover, or better said, represent the data well. A straight line can't fit the example on the right below.

But the core logic is the same. In a neural network, putting it in simple terms, each node produces a piece of a line, and those pieces are combined to form the shape needed to represent the data points.
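To make the "pieces of a line" idea concrete, here is a toy construction of my own: two ReLU nodes, each a plain `weight * x + bias` line clipped at zero, summed to form a V-shape that no single straight line can fit:

```python
# Each "node" is a line (weight * x + bias) passed through ReLU,
# which keeps only the positive part of that line.
def relu(z):
    return max(0.0, z)

def tiny_network(x):
    node1 = relu(1.0 * x + 0.0)       # active for x > 0
    node2 = relu(-1.0 * x + 0.0)      # active for x < 0
    return 1.0 * node1 + 1.0 * node2  # output layer combines the pieces: y = |x|

print([tiny_network(x) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)])  # [2.0, 1.0, 0.0, 1.0, 2.0]
```

With more nodes (and more layers), the network can stitch together many such pieces and approximate far more complicated shapes.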

Final Thoughts
I had read about nodes, hidden layers, weights and biases, but I hadn't gotten a clear picture of what all these things were or did. Thanks to learning linear regression first, I have a much clearer understanding.
There is so much more to neural networks, and their inner workings can get quite complex.
Have you tried to follow the implementation of a language translator (encoder-decoder/seq2seq) or the word2vec algorithm? Give it a try, and you’ll see.
My goal is to cover more of the deep learning topic: the different activation functions, the evolution from RNNs to LSTMs and GRUs, to seq2seq, to seq2seq with attention, to Transformers. Convolutional networks for image recognition as well, and more.
For now, that’s it. Thanks for reading.