First, we set up the Perceptron. Think of this as giving the model a "blank brain" with some random starting guesses.
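A minimal sketch of this setup step might look like the following (the attribute names `weights`, `bias`, `learning_rate`, and the `sigmoid` helper are inferred from the training fragments below; the input size and learning rate are placeholder assumptions):

```python
import numpy as np

class Perceptron:
    def __init__(self, num_inputs, learning_rate=0.1):
        # "Blank brain": small random starting weights and a zero bias
        self.weights = np.random.randn(num_inputs) * 0.01
        self.bias = 0.0
        self.learning_rate = learning_rate

    def sigmoid(self, x):
        # Squashes any real number into (0, 1), so outputs read like probabilities
        return 1.0 / (1.0 + np.exp(-x))
```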
This is where the learning happens! We show the model our data (inputs and correct answers) over and over, letting it adjust itself each time.
A. Feed an input into the Perceptron to see what it predicts.

```python
prediction = self.sigmoid(np.dot(inputs, self.weights) + self.bias)
```

B. Compare the prediction to the *actual* target answer.

```python
error = target_output - prediction
```

C. Slightly adjust the weights and bias based on the error.

```python
self.weights += self.learning_rate * error * inputs
self.bias += self.learning_rate * error
```

We repeat this A-B-C process for every data point. One full pass through the *entire* dataset is called an epoch. We run many epochs until the model's average error is very low.
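Wrapped in an epoch loop, the A-B-C process can be sketched as a standalone script (free functions instead of class methods here; the two-example toy dataset, epoch count, and learning rate are placeholder assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dataset: two examples, two inputs each (placeholder values)
training_inputs = np.array([[0.0, 1.0], [1.0, 0.0]])
target_outputs = np.array([1.0, 0.0])

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(1000):  # one epoch = one full pass through the dataset
    for inputs, target_output in zip(training_inputs, target_outputs):
        prediction = sigmoid(np.dot(inputs, weights) + bias)  # A. predict
        error = target_output - prediction                    # B. compare
        weights += learning_rate * error * inputs             # C. adjust
        bias += learning_rate * error
```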
Once training is complete, the model is ready. We use its (now "smart") weights and bias to make predictions on new, unseen data.
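Putting the pieces together, an end-to-end sketch might look like this (the class shape mirrors the fragments above, but the method names `train` and `predict`, the AND-gate dataset, and the hyperparameters are illustrative assumptions, not the assignment's exact code):

```python
import numpy as np

class Perceptron:
    def __init__(self, num_inputs, learning_rate=0.1):
        rng = np.random.default_rng(0)  # fixed seed so the run is reproducible
        self.weights = rng.standard_normal(num_inputs) * 0.01
        self.bias = 0.0
        self.learning_rate = learning_rate

    def sigmoid(self, x):
        return 1.0 / (1.0 + np.exp(-x))

    def train(self, training_inputs, target_outputs, epochs=5000):
        for _ in range(epochs):
            for inputs, target_output in zip(training_inputs, target_outputs):
                prediction = self.sigmoid(np.dot(inputs, self.weights) + self.bias)
                error = target_output - prediction
                self.weights += self.learning_rate * error * inputs
                self.bias += self.learning_rate * error

    def predict(self, inputs):
        # Use the trained weights and bias on new, unseen input
        return self.sigmoid(np.dot(inputs, self.weights) + self.bias)

# Train on the AND gate (a linearly separable toy problem)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
p = Perceptron(num_inputs=2)
p.train(X, y)
```

After training, `p.predict([1, 1])` should be close to 1 and `p.predict([0, 0])` close to 0.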
All assignment code is stored here:
View Source Code