Ten minutes to learn Linear regression for dummies!!!

Linear Regression is a supervised machine learning algorithm where the predicted output is continuous and has a constant slope. It is used to predict values within a continuous range.

Bala Venkatesh
5 min read · Feb 21, 2020

Linear regression is the first step in learning machine learning. When you decide to learn machine learning, the first thought is usually that you need a confident base in mathematics and basic equations. Do not worry: I will guide you through the linear regression algorithm step by step, at a very basic level. Let's start learning. Since this is a beginner-level article, we will not dive deep into the mathematical derivations of linear regression.


Types of Linear Regression

1. Simple regression

If a single independent variable is used to predict the value of a numerical dependent variable, then such a Linear Regression algorithm is called Simple Linear Regression.

2. Multiple regression

If more than one independent variable is used to predict the value of a numerical dependent variable, then such a Linear Regression algorithm is called Multiple Linear Regression (for example, Y = m1*X1 + m2*X2 + b with two independent variables).

Let’s see the Linear Regression equation

Y = m*X + b

Y is the dependent variable

X is the independent variable

b is the intercept (mnemonic: ‘b’ means where the line begins)

m is the slope (mnemonic: ‘m’ means ‘move’)
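
For example, here is a minimal Python sketch that applies this equation to predict a weight (Y) from an age (X). The values of m and b below are made up for illustration:

    # A minimal sketch: the values of m and b below are made up.
    m = 1.0   # slope: how much Y changes for each one-unit change in X
    b = 22.0  # intercept: the value of Y when X is 0

    def predict(x):
        # Apply the linear regression equation Y = m*X + b
        return m * x + b

    print(predict(20))  # predicted weight for age 20 -> 42.0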

What steps should we follow to solve the regression model? The two steps below are what we use to fit a linear regression model:

  1. Cost Function (Mean Squared Error)
  2. Gradient Descent

(Image: Linear Regression: red data points with a line drawn through their center)

Observe the above image (Linear Regression) and ask some questions about it.

Question 1: What are the red dots?

Ans: The red dots are your data; each point has two values, age and weight. Age is the X variable (independent variable) and weight is the Y variable (dependent variable). Each pair of values is plotted as a red dot on the graph.

Question 2: What is the line running through the center of the red dots?

Ans: That is the best fit line.

Question 3: How to draw the best fit line?

Ans: We can start by drawing a line based on our own guess (a predicted line), like the one in the image below. Now the equation comes into play to find the best fit line for our dataset. The best fit line is the one with the least error.

(Image: Predicted Line: a guessed line drawn through the data points)

We use the cost function, also known as the Mean Squared Error (MSE) function, together with gradient descent to get the best fit line.

Step 1: What is the Cost Function (MSE) and how does it work?

The cost function helps us figure out the best possible values for m and b, the ones that give the best fit line for the data points. Since we want the best values for m and b, we convert this search problem into a minimization problem: minimize the error between the predicted values and the actual values.

MSE = (1/n) * Σ (yi - ŷi)²

where n is the total number of observations (data points), yi is the actual value, and ŷi is the predicted value (m*xi + b).

We square each error difference, sum over all data points, and divide by the total number of data points. This gives the average squared error over all the data points.
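
As an illustration, here is a small Python sketch of this MSE computation (the data values below are made up):

    # Mean Squared Error: the average of the squared differences
    # between actual and predicted values (made-up data).
    def mse(y_actual, y_predicted):
        n = len(y_actual)
        return sum((ya - yp) ** 2 for ya, yp in zip(y_actual, y_predicted)) / n

    ages = [10, 20, 30, 40]                   # X: independent variable
    weights = [32, 42, 51, 63]                # Y: actual values
    predicted = [1.0 * x + 22 for x in ages]  # predictions from Y = m*X + b
    print(mse(weights, predicted))            # average squared error -> 0.5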

Step 2: What is Gradient Descent and how does it work?

The cost function gives us the error value for a given m and b. The next important concept needed to understand linear regression is gradient descent. Gradient descent is a method of updating m and b to reduce the cost function (MSE). The idea is that we start with some values for m and b and then change these values iteratively to reduce the cost. Gradient descent tells us how to change the values.

Applying Gradient Descent

You may wonder how to use gradient descent to update m and b. To update m and b, we take the gradients of the cost function, which we find by taking partial derivatives with respect to m and b. The partial derivatives are shown below. Finding them requires some calculus, but if you do not know calculus, that is alright; you can take them as given.

∂MSE/∂m = (-2/n) * Σ xi * (yi - (m*xi + b))

∂MSE/∂b = (-2/n) * Σ (yi - (m*xi + b))

m = m - α * (∂MSE/∂m)

b = b - α * (∂MSE/∂b)

These partial derivatives are the gradients, and they are used to update the values of m and b. Alpha (α) is the learning rate, a hyperparameter that you must specify. A smaller learning rate gets you closer to the minimum but takes more time to reach it; a larger learning rate converges sooner, but there is a chance that you overshoot the minimum.
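
To make these updates concrete, here is a small Python sketch of the gradient descent loop (the data, learning rate, and iteration count below are made up):

    # Gradient descent sketch: iteratively update m and b to reduce MSE.
    xs = [10, 20, 30, 40]   # ages (made-up data)
    ys = [32, 42, 51, 63]   # weights

    m, b = 0.0, 0.0         # start with some values for m and b
    alpha = 0.001           # learning rate (hyperparameter)
    n = len(xs)

    for _ in range(10000):
        # Partial derivatives of MSE with respect to m and b
        dm = (-2 / n) * sum(x * (y - (m * x + b)) for x, y in zip(xs, ys))
        db = (-2 / n) * sum(y - (m * x + b) for x, y in zip(xs, ys))
        # Step m and b in the direction that reduces the error
        m -= alpha * dm
        b -= alpha * db

    print(m, b)  # slope and intercept of the (approximate) best fit line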

Finally, we get the best fit line using the above two steps, and we can use that line to predict new values.

Conclusion

Linear regression is an algorithm that every machine learning enthusiast must know, and it is also the right place to start for people who want to learn machine learning. It is a simple and useful algorithm. I hope this article is useful to you!!!

Let’s start writing code to build a Linear regression model. We can use the Scikit-learn library, because it provides predefined methods for building machine learning models.
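
Here is a minimal sketch using scikit-learn's LinearRegression, with the same made-up age/weight data as above:

    # A minimal scikit-learn sketch (made-up age/weight data).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[10], [20], [30], [40]])   # ages (2-D: samples x features)
    y = np.array([32, 42, 51, 63])           # weights

    model = LinearRegression()
    model.fit(X, y)                          # finds the best fit line

    print(model.coef_[0], model.intercept_)  # learned slope m and intercept b
    print(model.predict([[25]]))             # predicted weight for age 25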

