Making Predictions with Linear Regression

Linear Regression

Linear Regression stands as one of the most fundamental algorithms in machine learning and data science. It's not only a staple in any data analyst's toolbox but also a gateway to understanding the predictive power behind modern AI. In this post, I'll take you through a project where I applied linear regression to a real-world problem: predicting housing prices.

Project Overview

My journey began with a dataset reflective of the housing market, brimming with various features: from crime rates to the concentrations of nitric oxides, average numbers of rooms, accessibility to highways, and more. My objective was clear: predict the price of a house based on these features.

Data Loading and Understanding

Like any good story, mine started with an introduction to the characters: the variables. By loading my dataset and meticulously reviewing each variable's role, I set the stage for my analysis. Understanding the data is paramount; it lays the groundwork for all subsequent steps and ensures my models are built on a solid foundation.
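As a sketch of this step, here is how a dataset like this might be loaded and inspected with pandas. The inline CSV and column names below are illustrative stand-ins, not the project's actual file:

```python
from io import StringIO
import pandas as pd

# Hypothetical sample standing in for the real housing CSV.
csv_data = StringIO(
    "CRIM,NOX,RM,RAD,PRICE\n"
    "0.006,0.54,6.6,1,24.0\n"
    "0.027,0.47,6.4,2,21.6\n"
    "0.027,0.47,7.2,2,34.7\n"
)
df = pd.read_csv(csv_data)  # in practice: pd.read_csv("housing.csv")

print(df.describe())  # summary statistics for each variable
print(df.dtypes)      # confirm each column was parsed with the right type
```

Skimming `describe()` and `dtypes` early catches surprises (missing values, strings where numbers were expected) before they reach the model.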

Preparing the Data

Next, I divided my dataset into inputs and outputs, the predictors and the predicted. This step is akin to casting roles in a play, deciding which actors take the stage (input features) and what story they’ll tell (output predictions).
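In pandas, that casting step is a one-liner each way; the column names here are hypothetical:

```python
import pandas as pd

# Illustrative data; the real dataset has many more features and rows.
df = pd.DataFrame({
    "CRIM": [0.006, 0.027, 0.032],
    "RM": [6.6, 6.4, 7.2],
    "PRICE": [24.0, 21.6, 34.7],
})

X = df.drop(columns=["PRICE"])  # input features: the predictors
y = df["PRICE"]                 # output target: the value to predict
```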

Training and Test Split

In machine learning, it's essential to leave out a part of the data as a final exam for the model: the test set. I split my data, reserving 30% of it for testing, ensuring that my model would be judged on unseen data, a true test of its predictive abilities.
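A sketch of that split using scikit-learn's `train_test_split`, shown here on synthetic data so the example runs on its own:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 100 samples, 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)

# Reserve 30% of the data for testing; fix random_state for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)
```

Fixing `random_state` means the same rows land in the test set every run, so metric changes reflect the model, not the shuffle.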

Visualizing Relationships

Before I let my model learn from the data, I took a visual approach to understand the relationships between my features and the target variable. Through pair plots, I gained insights into which variables showed promise as predictors and which might require a second look.
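One lightweight way to produce such a pair plot is pandas' `scatter_matrix` (seaborn's `pairplot` gives a similar grid); the data below is synthetic for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this also runs in scripts
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix

# Synthetic data where PRICE depends roughly linearly on RM (rooms).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "RM": rng.normal(6.3, 0.7, 50),
    "NOX": rng.uniform(0.4, 0.9, 50),
})
df["PRICE"] = 9.0 * df["RM"] - 30.0 + rng.normal(0, 2, 50)

# One scatter panel per feature pair, histograms on the diagonal.
axes = scatter_matrix(df, figsize=(6, 6), diagonal="hist")
```

In a plot like this, the RM-versus-PRICE panel would show a clear upward trend, flagging RM as a promising predictor.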

The Linear Regression Model

With my stage set, I introduced the star of the show — my linear regression model. This model learns from the training data, adjusting its parameters to fit the patterns it observes. I trained it, I tested it, and then I visualized its predictions against actual values.
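The train-then-predict step might be sketched as follows, again on synthetic data so the example is self-contained:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data with a known linear relationship plus a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 5.0 + rng.normal(0, 0.1, 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = LinearRegression()
model.fit(X_train, y_train)     # learn coefficients from the training data
y_pred = model.predict(X_test)  # predict on unseen data
```

After fitting, `model.coef_` and `model.intercept_` hold the learned parameters, which here land close to the true values (3, -2, and 5) used to generate the data.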

Performance Evaluation

No performance is complete without a review. I turned to two key metrics to evaluate my model: the Root Mean Square Error (RMSE) and the R-squared value. The RMSE told me how far my predictions were from reality, on average, in the same units as the target. The R-squared value gave me the fraction of variance in the target my model could explain.
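Both metrics are available in scikit-learn. A minimal sketch, using made-up numbers rather than the project's actual predictions:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Illustrative actual vs. predicted values (not the project's real outputs).
y_true = np.array([24.0, 21.6, 34.7, 33.4, 36.2])
y_pred = np.array([25.1, 20.9, 33.0, 34.2, 35.5])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # typical error, in price units
r2 = r2_score(y_true, y_pred)                       # fraction of variance explained
```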

Results and Reflections

My model performed admirably, with an R-squared value of 0.7495, suggesting that it could explain about 75% of the variance in housing prices. However, the RMSE of 4.6650 indicated room for improvement.

Conclusion

Linear regression provided me with a powerful baseline model for predicting housing prices. My analysis demonstrated a structured approach to predictive modeling using Python, which is both accessible and potent. Yet, like any first draft, my model can be revised and improved. The path forward may involve feature engineering, model tuning, or even exploring more sophisticated algorithms.