Perceptron Model Implementation Flow

1. Initialization

First, we set up the Perceptron. Think of this as giving the model a "blank brain" with some random starting guesses.

  • Initialize Weights (w): A set of small random numbers between -1 and 1. We need one weight for every input feature.
    self.weights = 2 * np.random.rand(input_size) - 1
  • Initialize Bias (b): A single random number that helps shift the output.
    self.bias = np.random.rand(1)
  • Set Learning Rate (α): A small number that controls how big each update step is, i.e., how *fast* the model learns.
    self.learning_rate = 0.1
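
The initialization above can be sketched as a minimal class. This is an illustrative sketch assuming NumPy; the class name `Perceptron` and the `sigmoid` helper are assumptions consistent with the snippets in this document, not a definitive implementation.

```python
import numpy as np

class Perceptron:
    def __init__(self, input_size, learning_rate=0.1):
        # One random weight per input feature, scaled into [-1, 1)
        self.weights = 2 * np.random.rand(input_size) - 1
        # A single random bias term that shifts the output
        self.bias = np.random.rand(1)
        # Step size for each weight update
        self.learning_rate = learning_rate

    def sigmoid(self, x):
        # Squashes any real number into the range (0, 1)
        return 1 / (1 + np.exp(-x))
```

With `input_size=3`, for example, the model starts with three random weights and one random bias, ready for training.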
2. The Training Loop

This is where the learning happens! We show the model our data (inputs and correct answers) over and over, letting it adjust itself each time.

For each data point, the model runs three steps:
Forward Pass: Make a Guess (Predict)

Feed an input into the Perceptron to see what it predicts.

prediction = self.sigmoid(np.dot(inputs, self.weights) + self.bias)
Check the Guess (Calculate Error)

Compare the prediction to the *actual* target answer.

error = target_output - prediction
Update: Learn from the Mistake

Slightly adjust the weights and bias in proportion to the error. (For a single perceptron this is the delta rule; "backpropagation" properly refers to the multi-layer generalization of this step. A full gradient step through the sigmoid would also multiply by the sigmoid's derivative; this simplified rule omits it but still works for simple problems.)

self.weights += self.learning_rate * error * inputs
self.bias += self.learning_rate * error

We repeat these three steps (predict, check, update) for every data point. One full pass through the *entire* dataset is called an Epoch. We run many epochs until the model's average error is very low.
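
Putting the three steps together, a full training run might look like the following sketch. The OR-gate dataset, the seed, and the epoch count are illustrative assumptions, not from the text; the update is the simplified delta rule shown above.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Illustrative toy data: the OR gate (an assumption for this example)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

rng = np.random.default_rng(42)
weights = 2 * rng.random(2) - 1   # random weights in [-1, 1)
bias = rng.random(1)              # single random bias
learning_rate = 0.1

for epoch in range(1000):         # one epoch = one full pass over the data
    for inputs, target in zip(X, y):
        # Forward pass: make a guess
        prediction = sigmoid(np.dot(inputs, weights) + bias)
        # Check the guess
        error = target - prediction
        # Update: nudge weights and bias toward the target
        weights += learning_rate * error * inputs
        bias += learning_rate * error
```

After enough epochs, the predictions land close to the targets, so thresholding them at 0.5 reproduces the OR truth table.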

3. Evaluation & Prediction

Once training is complete, the model is ready. We use its (now "smart") weights and bias to make predictions on new, unseen data.

  • Make Predictions: Feed new data to the predict() method. It will output a probability (e.g., 0.9).
  • Apply a Threshold: Convert the probability into a clear 0 or 1. (e.g., if 0.9 > 0.5, classify as 1).
  • Measure Success: We check how many predictions the model got right using a metric like **Accuracy**.
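
The three evaluation steps above can be sketched as follows. The "trained" weight and bias values here are illustrative placeholders (not real trained parameters), and the `predict` function is an assumption consistent with the document's snippets.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Placeholder "trained" parameters (illustrative, not from the text)
weights = np.array([4.0, 4.0])
bias = np.array([-2.0])

def predict(inputs):
    # Returns a probability in (0, 1)
    return sigmoid(np.dot(inputs, weights) + bias)

# New, unseen data with known targets
X_new = np.array([[0, 0], [1, 1]])
y_true = np.array([0, 1])

# 1. Make predictions (probabilities)
probs = predict(X_new)
# 2. Apply a 0.5 threshold to get hard 0/1 labels
labels = (probs > 0.5).astype(int)
# 3. Measure success: fraction of labels that match the targets
accuracy = np.mean(labels == y_true)
```

Accuracy is simply the count of correct predictions divided by the total number of predictions.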