Perceptron Demo


Marco Giordano (Author)

Tags

artificial intelligence 

"The perceptron is a legacy system on which many modern AI models are based."

Tagged by Marco Giordano 6 months ago

Model was written in NetLogo 6.4.0



WHAT IS IT?

This model demonstrates how a simple Perceptron learns to classify linearly separable data points in a 2D space.

The Perceptron is one of the earliest models of an artificial neuron, introduced by Frank Rosenblatt in 1958 while working at the Cornell Aeronautical Laboratory. It was inspired by biological neurons and designed to mimic how the human brain processes information. The original perceptron was implemented as an actual physical machine, using motorized potentiometers to adjust its weights.

Rosenblatt’s work sparked significant interest in artificial intelligence (AI) and machine learning, as the Perceptron demonstrated that machines could learn from data through trial and error. However, in 1969, Marvin Minsky and Seymour Papert published a famous critique, Perceptrons, which proved that a single-layer perceptron cannot solve problems that require nonlinear decision boundaries (such as the XOR problem). This led to a temporary decline in research on neural networks.
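The XOR limitation is easy to reproduce outside NetLogo. The short Python sketch below (illustrative only, not part of this model; the `train` helper and its parameters are made up for this example) runs the standard perceptron learning rule and shows that it converges on the linearly separable AND problem but never reaches zero error on XOR:

```python
# Illustrative sketch (not part of the NetLogo model): a single
# step-activation perceptron converges on AND but not on XOR.
def step(s):
    return 1 if s > 0 else 0

def train(data, epochs=100, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y, label in data:
            err = label - step(w1 * x + w2 * y + b)  # err is -1, 0, or 1
            w1 += lr * err * x                       # perceptron learning rule
            w2 += lr * err * y
            b += lr * err
            errors += abs(err)
        if errors == 0:
            return True   # an epoch with no mistakes: training has converged
    return False          # never classified every point correctly

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]  # linearly separable
XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # not linearly separable

print(train(AND))  # True
print(train(XOR))  # False
```

No learning rate or epoch budget rescues the XOR case: no single line can put (0,1) and (1,0) on one side and (0,0) and (1,1) on the other.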

Despite its limitations, the Perceptron remains an important foundation for modern deep learning models, which overcome its weaknesses using multi-layered networks (e.g., multi-layer perceptrons, convolutional networks, transformers, etc.).

This NetLogo model illustrates the basic Perceptron algorithm by training it to find a linear decision boundary that separates two groups of points in the 2D plane. Users can interactively train the perceptron, observe how the error decreases over time, and test the trained model by adding new points.

HOW IT WORKS

  • The Perceptron starts with random weights and a bias.
  • It is trained on a set of points divided into two groups:
    -- Red points (Class 1) in the upper-right half-plane.
    -- Green points (Class 0) in the lower-left half-plane.
  • Training follows these steps:
  • Calculate the output of the perceptron for each training point using a weighted sum.
  • Compare the output to the correct class label.
  • If the output is incorrect, adjust the weights based on the error.
  • Repeat for multiple epochs until the perceptron correctly classifies all points.
  • A decision boundary (line) is updated dynamically during training.
  • The user can click to test new points, and the trained Perceptron classifies them.
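The update rule in the steps above can be sketched with one worked example (a Python illustration with made-up numbers; the NetLogo model applies the same rule with its slider-controlled learning rate):

```python
# One perceptron update on a misclassified point -- a numeric sketch of
# the rule described above, with made-up example values.
lr = 0.5                              # learning rate (a slider in the model)
w, b = [0.2, -0.4], 0.1               # current weights and bias
x, target = [3.0, 2.0], 1             # a point that should be Class 1

s = w[0] * x[0] + w[1] * x[1] + b     # weighted sum: 0.6 - 0.8 + 0.1 = -0.1
output = 1 if s > 0 else 0            # step activation -> 0, so misclassified
err = target - output                 # 1 - 0 = 1

w = [wi + lr * err * xi for wi, xi in zip(w, x)]   # w <- w + lr * err * x
b = b + lr * err                                   # b <- b + lr * err
print(w, b)                           # w is now about [1.7, 0.6], b about 0.6
```

After the update the weighted sum for this point is positive, so the boundary has moved toward classifying it correctly.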

HOW TO USE IT

Sliders

  • N_data_points – Determines the number of training points in each class.
  • learning-rate – Controls how quickly the Perceptron adjusts its weights during training.
  • training-delay – Introduces a delay after each data point is processed, so that it is possible to follow the training on each point and the consequent update of the separation line.

Buttons

  • Setup – Initializes the Perceptron, generates the training dataset, and plots the initial error.
  • Train – Runs one epoch of training, updating the weights and the decision boundary.
  • Test Mode – When in test mode, clicking on the world will classify a new test point.

Plots and Monitors

  • Two monitors display the current values of the weights and the bias.
  • Training Error – Shows the initial error and how the error decreases over training epochs.

THINGS TO NOTICE

  • Observe how the decision boundary updates as training progresses.
  • Watch how the training error evolves in the plot: it should decrease as the perceptron learns.
  • When the Perceptron correctly classifies all points, the error should stabilize at zero.

THINGS TO TRY

  • Change N_data_points and observe how more or fewer points affect learning speed.
  • Adjust the learning-rate to see how it impacts training:
    -- A higher learning rate speeds up training but might overshoot.
    -- A lower learning rate makes each update smaller, so training is more stable but converges more slowly.
  • Click to classify new points in Test Mode and observe whether the trained Perceptron generalizes well.

EXTENDING THE MODEL

  • Introduce non-linearly separable data and modify the perceptron to handle it (e.g., using multiple layers).
  • Add a second perceptron and train both together to separate more complex data distributions.
  • Implement a different activation function (such as a sigmoid) to allow continuous output values.
  • Add a reset function to clear test points while keeping the trained perceptron.
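For the sigmoid extension suggested above, here is a minimal sketch (Python, with made-up helper names) of what swapping the step function for a sigmoid would look like, giving a continuous output in (0, 1) instead of a hard 0/1 label:

```python
# Hedged sketch of the sigmoid-activation extension: same weighted sum
# as the model's calculate-output, but with a continuous activation.
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def perceptron_output(w, b, x, y):
    # Same linear combination as the step perceptron, sigmoid-activated
    return sigmoid(w[0] * x + w[1] * y + b)

print(perceptron_output([1.0, -1.0], 0.0, 0.0, 0.0))   # 0.5: on the boundary
print(perceptron_output([1.0, -1.0], 0.0, 10.0, 0.0))  # close to 1: deep in Class 1
```

The output can then be read as a confidence: values near 0.5 lie close to the decision boundary, values near 0 or 1 lie far from it.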

NETLOGO FEATURES

  • Uses breeds to manage different classes of points (red-points, green-points, test-points).
  • Uses patch colors to dynamically visualize the decision boundary.
  • Handles user interaction with mouse-down? to allow real-time testing.

RELATED MODELS

Artificial Neural Net - Perceptron
Artificial Neural Net - Multilayer

CREDITS AND REFERENCES


; NetLogo script to illustrate a simple Perceptron

globals [
  weights  ; List of weights [w1 w2] for inputs x and y
  bias     ; Bias term
  epoch-error ; list of errors for each epoch
  initial-error ; cumulative initial-error
  ; Learning rate for weight updates is an interface control
]

breed [red-points red-point] ; Breed for red points (class 1)
breed [green-points green-point] ; Breed for green points (class 0)
breed [test-points test-point] ; Breed for test points

patches-own [ original-color ] ; To store original patch color for visualization

to setup
  clear-all
  clear-plot
  set epoch-error []
  set initial-error 0
  ask patches [ set pcolor white ] ; Set background color to white
  initialize-weights  ; Initialize perceptron weights and bias
  create-training-data ; Generate training dataset

  ; Compute initial error and highlight misclassified points
  ask red-points [
    let err_value compute-error self "red"
    set initial-error initial-error + abs err_value
    if abs err_value > 0 [ set shape "triangle" ]  ;; Highlight misclassified red points
  ]
  ask green-points [
    let err_value compute-error self "green"
    set initial-error initial-error + abs err_value
    if abs err_value > 0 [ set shape "triangle" ]  ;; Highlight misclassified green points
  ]

  ; Initialize epoch-error with the first error value
  set epoch-error lput initial-error epoch-error
  print (word "Initial Error: " (initial-error))

  reset-ticks ; Reset the tick counter
  update-display ; Update the visualization
  plot-error  ; Plot initial error
end 

to go ; Training procedure to be called repeatedly
  train ; Perform one training epoch
  update-display ; Update the display after training
  plot-error ; Update the error plot
end 

to start-test-mode
  ;; Enable test mode interaction
  create-test-point
end 

to initialize-weights
  ; Initialize weights and bias to small random values
  set weights (list random-float 1 random-float 1)
  set weights replace-item 1 weights (-1 * item 1 weights) ; make w2 negative so the initial separation line has positive slope, crossing the first and third quadrants
  set bias random-float 1
end 

to create-training-data
  let MARGIN 3  ;; Separation margin from y = -x

  ; Red points: Upper regions, ensuring a margin above y = -x
  create-red-points N_data_points [
    let attempts 0
    let max-attempts 100  ;; Avoid infinite loops
    let x random 40 - 20
    let y random 40 - 20
    while [(y < (- x + MARGIN)) and (attempts < max-attempts)] [
      set x random 40 - 20
      set y random 40 - 20
      set attempts attempts + 1
    ]
    ifelse attempts < max-attempts [  ;; Place only if a valid position was found
      setxy x y
      set color red
      set shape "dot"
      set original-color red
    ] [
      die  ;; Discard the point rather than leave an unplaced turtle at the origin
    ]
  ]

  ; Green points: Lower regions, ensuring a margin below y = -x
  create-green-points N_data_points [
    let attempts 0
    let max-attempts 100  ;; Avoid infinite loops
    let x random 40 - 20
    let y random 40 - 20
    while [(y > (- x - MARGIN)) and (attempts < max-attempts)] [
      set x random 40 - 20
      set y random 40 - 20
      set attempts attempts + 1
    ]
    ifelse attempts < max-attempts [  ;; Place only if a valid position was found
      setxy x y
      set color green
      set shape "dot"
      set original-color green
    ] [
      die  ;; Discard the point rather than leave an unplaced turtle at the origin
    ]
  ]
end 

to train
  let total-error 0
  print "---- STARTING TRAINING EPOCH ----"  ;; Debug message

  ;; Train on all red points one at a time
  foreach sort red-points [
    point ->
    ask point [ set shape "star"
    set size 3]
    wait training-delay / 2
    ask point [set size 1]
    let err train-perceptron point "red"
    set total-error total-error + abs err
    update-display  ;; Now called in observer context
    ;plot-error
    wait training-delay / 2  ;; Pause to visualize the change
  ]

  ;; Train on all green points one at a time
  foreach sort green-points [
    point ->
    ask point [ set shape "star"
    set size 3]
    wait training-delay / 2
    ask point [set size 1]
    let err train-perceptron point "green"
    set total-error total-error + abs err
    update-display  ;; Now called in observer context
    ;plot-error
    wait training-delay / 2  ;; Pause to visualize the change
  ]

  ;; Store error and update epoch
  set epoch-error lput total-error epoch-error
  print (word "Epoch: " (length epoch-error - 1) " | Error: " total-error)
end 

to-report compute-error [point expected-label]
  let input-x [xcor] of point
  let input-y [ycor] of point
  let output calculate-output input-x input-y
  let ground-truth ifelse-value (expected-label = "red") [1] [0]
  let err ground-truth - output

  report err
end 

to-report train-perceptron [point expected-label]
  let old-w1 item 0 weights
  let old-w2 item 1 weights
  let old-bias bias
  ; Perceptron learning rule

  let err compute-error point expected-label ; compute error
  print (word "point (" ([xcor] of point) "," ([ycor] of point) ") Err:" err)

  ; Update weights and bias if there is an error
  if err != 0 [
    ask point [set shape "triangle"]
    let new-w1 item 0 weights + learning-rate * err * [xcor] of point
    let new-w2 item 1 weights + learning-rate * err * [ycor] of point
    set weights (list new-w1 new-w2)
    set bias bias + learning-rate * err
    ;print (word "🔄 Updating Weights: " old-w1 ", " old-w2 " → " new-w1 ", " new-w2 " | Bias: " old-bias " → " bias)
  ]
  if err = 0 [
    ask point [ set shape "dot" ]  ;; Restore normal shape
  ]
  report abs err
end 

to-report calculate-output [input-x input-y]
  ; Perceptron output calculation
  let s (item 0 weights) * input-x + (item 1 weights) * input-y + bias ; Linear combination
  ifelse s > 0
    [ report 1 ] ; Activation function (step function): 1 if sum > 0, else 0
    [ report 0 ]
end 

to-report classify [x y] ; Classify a point (x, y)
  let output calculate-output x y ; Get perceptron output
  ifelse (output = 1)
    [ report "red" ] ; If output is 1, classify as "red"
    [report "green"] ; Otherwise, classify as "green"
end 

to create-test-point
    if mouse-down? [
      let x mouse-xcor
      let y mouse-ycor
      ;; Check if the click is within world bounds
      if (x >= min-pxcor and x <= max-pxcor and y >= min-pycor and y <= max-pycor) [
        create-test-points 1 [
          setxy x y
          set color black  ;; Default test point color
          set shape "square"
          update-test-point-color self
        ]
      ]
    ]
end 

to update-display
  ; Update the display: clear drawing and redraw separator line
  clear-drawing
  draw-separator-line ; Draw the line representing the perceptron decision boundary
  ask test-points [ update-test-point-color self ] ; Update color of test points based on classification
end 

to draw-separator-line
  ; Ensure the entire world is updated
  ask patches [ set pcolor white ]  ;; Reset all patches to white

  ask patches
  [
    let s (item 0 weights) * pxcor + (item 1 weights) * pycor + bias ; Linear combination
    ifelse (s > 0)
      [ set pcolor rgb 80 80 80 ]
      [ set pcolor white ]
  ]
end 

to update-test-point-color [testPoint]
  ; Update the color of a test point based on perceptron classification
  ask testPoint [
    let classification classify xcor ycor ; Classify the test point
    if classification = "red" [ set color red ] ; Set color to red if classified as red
    if classification = "green" [ set color green ] ; Set color to green if classified as green
  ]
end 

to plot-error
  set-current-plot "Training Error"

  ; Plot initial error in a different color
  set-current-plot-pen "Initial Error"
  plotxy 0 initial-error


  ;; Plot the line graph (default pen behavior)
  set-current-plot-pen "Error"
  plot last epoch-error  ;; Standard plot (connects points with a line)
end 

There are 2 versions of this model.

Uploaded by      When           Description
Marco Giordano   6 months ago   several bug fixes
Marco Giordano   6 months ago   Initial upload
