# Multiple Linear Regression

Multiple Linear Regression is a predictive modeling technique that assumes a linear relationship between multiple independent variables and a dependent variable.

## Formula
f(x) = w1*x1 + w2*x2 + ... + wn*xn + b

Where:
- x1, x2, ..., xn are the features (independent variables)
- w1, w2, ..., wn are the weights (coefficients)
- b is the bias term
- f(x) is the predicted output
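
As a quick worked example (numbers chosen here purely for illustration, echoing the coefficients used in step 2 below): with weights w = [3, 5], bias b = 50, and input x = [2, 4], the prediction is f(x) = 3*2 + 5*4 + 50 = 76.

```python
# Hypothetical numbers for illustration only.
x = [2, 4]   # two feature values
w = [3, 5]   # corresponding weights
b = 50       # bias term
f_x = sum(wi * xi for wi, xi in zip(w, x)) + b
print(f_x)   # 76
```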

## Implementation Steps

### 1. Importing Necessary Libraries
```python
import numpy as np
import matplotlib.pyplot as plt
```
### 2. Generating Synthetic Training Data
```python
# Reproducible synthetic data: 100 examples, 4 features in [0, 100).
np.random.seed(42)
m = 100
X_train = np.random.rand(m, 4) * 100
# Targets built from known coefficients plus Gaussian noise (std 10).
y_train = (50 + 3 * X_train[:, 0] + 5 * X_train[:, 1]
           + 2 * X_train[:, 2] - 1.5 * X_train[:, 3]
           + np.random.randn(m) * 10)
```
### 3. Prediction Function
```python
def predict(X, w, b):
    # Vectorized linear model: X @ w + b.
    return np.dot(X, w) + b
```
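
As a quick sanity check (added here, not part of the original document), predicting with the parameters that generated the synthetic data in step 2 should track `y_train` up to the injected noise:

```python
# Parameters that generated the synthetic data in step 2.
w_true = np.array([3.0, 5.0, 2.0, -1.5])
b_true = 50.0
print(predict(X_train, w_true, b_true)[:3])  # close to...
print(y_train[:3])                           # ...these targets
```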
### 4. Cost Function (Mean Squared Error - MSE)
```python
def compute_cost(X, y, w, b):
    m = len(y)
    predictions = predict(X, w, b)
    # Mean squared error with the conventional 1/2 factor.
    cost = (1 / (2 * m)) * np.sum((predictions - y) ** 2)
    return cost
```
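
Reusing `w_true` and `b_true` from the check above: with the true parameters the residuals are just the injected noise (standard deviation 10), so the cost should come out near 10² / 2 = 50. A rough sketch:

```python
# Expect roughly (10 ** 2) / 2 = 50, since residuals equal the noise.
print(compute_cost(X_train, y_train, w_true, b_true))
```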
### 5. Gradient Computation
```python
def compute_gradient(X, y, w, b):
    m = len(y)
    predictions = predict(X, w, b)
    error = predictions - y

    # Partial derivatives of the cost with respect to w and b.
    dw = (1 / m) * np.dot(X.T, error)
    db = (1 / m) * np.sum(error)

    return dw, db
```
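
A standard way to validate the analytic gradient (a sketch added here, not from the original) is a central finite-difference check: nudge one weight by a small epsilon and compare the resulting slope of the cost against the corresponding entry of `dw`.

```python
# Finite-difference check on the first weight (illustrative values).
eps = 1e-4
w_chk, b_chk = np.ones(4), 1.0
dw, db = compute_gradient(X_train, y_train, w_chk, b_chk)

w_plus, w_minus = w_chk.copy(), w_chk.copy()
w_plus[0] += eps
w_minus[0] -= eps
numeric = (compute_cost(X_train, y_train, w_plus, b_chk)
           - compute_cost(X_train, y_train, w_minus, b_chk)) / (2 * eps)
print(dw[0], numeric)  # the two numbers should agree closely
```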
### 6. Gradient Descent Algorithm
```python
def gradient_descent(X, y, w, b, alpha, epochs):
    cost_history = []

    for i in range(epochs):
        dw, db = compute_gradient(X, y, w, b)
        # Out-of-place update avoids mutating the caller's w array.
        w = w - alpha * dw
        b = b - alpha * db
        cost = compute_cost(X, y, w, b)
        cost_history.append(cost)

        # Log progress every 100 iterations.
        if i % 100 == 0:
            print(f"Iteration {i}: Cost = {cost}")

    return w, b, cost_history
```
### 7. Running the Model
```python
alpha = 0.0001
epochs = 1000

w_init = np.zeros(4)
b_init = 0

w_final, b_final, cost_history = gradient_descent(
    X_train, y_train, w_init, b_init, alpha, epochs
)
```
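
Because the targets were generated from known coefficients (weights 3, 5, 2, -1.5 and bias 50), it is worth comparing the fitted parameters against them; note that with this small learning rate, the bias in particular may still be far from 50 after only 1000 epochs. A quick check (added here):

```python
print("w_final:", w_final)  # generating weights were [3, 5, 2, -1.5]
print("b_final:", b_final)  # generating bias was 50
```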
### 8. Plotting Cost Function
```python
plt.plot(range(epochs), cost_history)
plt.xlabel("Iterations")
plt.ylabel("Cost")
plt.title("Cost Function Over Iterations")
plt.show()
```
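
Reading the plot: a curve that decreases smoothly and flattens out suggests the learning rate alpha is appropriate; a curve that oscillates or grows indicates alpha is too large, while one that is nearly flat from the start suggests it is too small.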
