Artificial Neural Net - Perceptron
WHAT IS IT?
Artificial Neural Networks (ANNs) are computational parallels of biological neurons. The "perceptron" was the first attempt at this particular type of machine learning. It attempts to classify input signals and output a result. It does this by being given many examples and attempting to classify them, with a supervisor reporting whether each classification was right or wrong. Based on this feedback, the perceptron updates its weights until it classifies all inputs correctly.
For a while it was thought that perceptrons might make good general pattern-recognition units. However, it was later discovered that a single perceptron cannot learn some basic tasks, such as 'xor', because they are not linearly separable. This model illustrates that case.
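For reference, here are the 'or' and 'xor' target outputs under this model's 1/-1 encoding (1 = true, -1 = false):

  INPUT-1   INPUT-2   'or'   'xor'
    -1        -1       -1     -1
    -1         1        1      1
     1        -1        1      1
     1         1        1     -1

Plotted in the input plane, the three 'or' positives can be split from the single negative by a straight line, but no straight line can separate the two 'xor' positives from the two 'xor' negatives.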
HOW IT WORKS
The nodes on the left are the input nodes; they can take a value of 1 or -1, and they are how input is presented to the perceptron. The node in the middle is the bias node. Its value is held constant at 1, which allows the perceptron to use a constant term in its calculation. The single output node is on the right. The nodes are connected by links, and each link has a weight.
To determine its value, the output node computes the weighted sum of its input nodes: the value of each input node is multiplied by the weight of the link connecting it to the output node, and these weighted values are then added up. If the result is above a threshold value, the output node's value is 1; otherwise it is -1. The threshold value for the output node in this model is 0.
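As a concrete illustration, here is a minimal standalone sketch of that calculation (the reporter name and the weight values are hypothetical, made up for illustration; they are not part of the model):

  ;; sketch: report the output for inputs x1 and x2 (each 1 or -1),
  ;; using hypothetical weights w0 (bias), w1, and w2
  to-report example-output [ x1 x2 ]
    let w0 0.1    ;; hypothetical bias weight
    let w1 0.4    ;; hypothetical weight on input 1
    let w2 -0.2   ;; hypothetical weight on input 2
    ;; weighted sum: the bias node always contributes w0 * 1
    let weighted-sum (w0 * 1) + (w1 * x1) + (w2 * x2)
    ;; above the threshold of 0 reports 1, otherwise -1
    report ifelse-value (weighted-sum > 0) [ 1 ] [ -1 ]
  end

For example, with these weights, example-output 1 -1 reports 1, since 0.1 + (0.4 * 1) + (-0.2 * -1) = 0.7, which is above the threshold of 0.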
While the network is training, inputs are presented to the perceptron. The output node's value is compared to the expected value, and the weights of the links are updated so as to classify the inputs correctly, as sketched below.
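Concretely, each link's weight is adjusted by learning-rate * (target - output) * input, where target and output are the expected and actual answers (each 1 or -1) and input is the activation of the node feeding the link; this is the update the model's code applies. A minimal standalone sketch (the reporter name is hypothetical):

  ;; sketch: one application of the perceptron training rule.
  ;; target and output are each 1 or -1, so (target - output) is
  ;; 0 when the answer is correct and +/-2 when it is wrong
  to-report updated-weight [ w rate target output input-activation ]
    report w + rate * (target - output) * input-activation
  end

Because target and output are both 1 or -1, the adjustment is zero whenever the classification is correct; on a mistake, each weight is nudged in the direction that moves the weighted sum toward the target.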
HOW TO USE IT
SETUP will initialize the model and reset each weight to a small random number (between -0.05 and 0.05).
Press TRAIN ONCE to run one epoch of training. The number of examples presented to the network during this epoch is controlled by the EXAMPLES-PER-EPOCH slider.
Press TRAIN to continually train the network.
Moving the LEARNING-RATE slider changes the maximum amount by which any single example can move a particular weight.
Pressing TEST will input the values of INPUT-1 and INPUT-2 to the perceptron and compute the output.
In the view, the thicker a link, the greater the magnitude of its weight. If a link is red, its weight is positive; if a link is blue, its weight is negative.
If SHOW-WEIGHTS? is on then the links will be labelled with their weights.
The TARGET-FUNCTION chooser allows you to decide which function the perceptron is trying to learn.
THINGS TO NOTICE
The perceptron will quickly learn the 'or' function. However, it will never learn the 'xor' function. Moreover, when trying to learn 'xor' it never settles down to a particular set of weights; as a result, it is completely useless as a pattern classifier for functions that are not linearly separable. This problem with perceptrons can be solved by combining several of them together, as is done in multi-layer networks. For an example of that, please examine the Artificial Neural Net model.
The RULE LEARNED graph visually demonstrates the line of separation that the perceptron has learned, and presents the current inputs and their classifications. Green dots represent points that should be classified positively; red dots represent points that should be classified negatively. The line shown is what the perceptron has learned: everything on one side of it is classified positively, and everything on the other side negatively. (Writing w0 for the bias weight and w1, w2 for the input weights, the perceptron outputs 1 exactly when w0 + w1 * x1 + w2 * x2 > 0, so the boundary it draws is the straight line x2 = (-w0 - w1 * x1) / w2.) As should be obvious from watching this graph, it is impossible to draw a straight line that separates the red and the green dots of the 'xor' function. This is what is meant by saying that the 'xor' function is not linearly separable.
The ERROR VS. EPOCHS graph displays how the error changes over the course of training: after each epoch, it plots half the mean squared difference between the target outputs and the perceptron's outputs over that epoch's examples.
THINGS TO TRY
Try different learning rates and see how this affects how the line in the RULE LEARNED graph moves.
Try training the perceptron several times using the 'or' rule with SHOW-WEIGHTS? turned on. Does the final set of weights ever change?
How does modifying the number of EXAMPLES-PER-EPOCH affect the ERROR graph?
EXTENDING THE MODEL
Can you come up with a new learning rule to update the edge weights that will always converge even if the function is not linearly separable?
Can you modify the RULE LEARNED graph so it is obvious which side of the line is positive and which side is negative?
NETLOGO FEATURES
This model makes use of some of NetLogo's link features. It also treats each node and link as an individual agent; this is distinct from many other languages, where the whole perceptron would be treated as a single object.
RELATED MODELS
Artificial Neural Net shows how arranging perceptrons in multiple layers can overcome some of the limitations of this model (such as the inability to learn 'xor').
CREDITS AND REFERENCES
Several of the equations in this model are derived from Tom Mitchell's book "Machine Learning" (1997).
Perceptrons were initially proposed in the late 1950s by Frank Rosenblatt.
A standard work on perceptrons is the book Perceptrons by Marvin Minsky and Seymour Papert (1969). The book includes the result that single-layer perceptrons cannot learn XOR. The discovery that multi-layer perceptrons can learn it came later, in the 1980s.
Thanks to Craig Brozefsky for his work in improving this model.
HOW TO CITE
If you mention this model in a publication, we ask that you include these citations for the model itself and for the NetLogo software:
- Rand, W. and Wilensky, U. (2006). NetLogo Artificial Neural Net - Perceptron model. http://ccl.northwestern.edu/netlogo/models/ArtificialNeuralNet-Perceptron. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.
- Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.
COPYRIGHT AND LICENSE
Copyright 2006 Uri Wilensky.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at uri@northwestern.edu.
globals [
  epoch-error   ;; average error in this epoch
  perceptron    ;; a single output-node
  input-node-1  ;; keep the input nodes in globals so we can refer
  input-node-2  ;; to them directly and distinctly
]

;; A perceptron is modeled by input-node and bias-node agents
;; connected to an output-node agent.

;; Connections from input nodes to output nodes
;; in a perceptron.
links-own [ weight ]

;; all nodes have an activation
;; input nodes have a value of 1 or -1
;; bias-nodes are always 1
turtles-own [ activation ]

breed [ input-nodes input-node ]

;; bias nodes are input-nodes whose activation
;; is always 1.
breed [ bias-nodes bias-node ]

;; output nodes compute the weighted sum of their
;; inputs and then set their activation to 1 if
;; the sum is greater than their threshold. An
;; output node can also be the input-node for another
;; perceptron.
breed [ output-nodes output-node ]
output-nodes-own [ threshold ]

;;
;; Setup Procedures
;;

to setup
  clear-all
  ;; set our background to something more viewable than black
  ask patches [ set pcolor grey ]
  set-default-shape input-nodes "circle"
  set-default-shape bias-nodes "bias-node"
  set-default-shape output-nodes "output-node"
  create-output-nodes 1 [
    set activation random-activation
    set xcor 6
    set size 2
    set threshold 0
    set perceptron self
  ]
  create-bias-nodes 1 [
    set activation 1
    setxy 3 7
    set size 1.5
    my-create-link-to perceptron
  ]
  create-input-nodes 1 [
    setup-input-node
    setxy -6 5
    set input-node-1 self
  ]
  create-input-nodes 1 [
    setup-input-node
    setxy -6 0
    set input-node-2 self
  ]
  ask perceptron [ compute-activation ]
  reset-ticks
end

to setup-input-node
  set activation random-activation
  set size 1.5
  my-create-link-to perceptron
end

;; links an input or bias node to an output node
to my-create-link-to [ anode ]  ;; input or bias node procedure
  create-link-to anode [
    set color red + 1
    ;; links start with a random weight
    set weight random-float 0.1 - 0.05
    set shape "small-arrow-shape"
  ]
end

;;
;; Runtime Procedures
;;

;; train sets the input nodes to a random input,
;; then computes the output, determines the correct answer,
;; and propagates the weight changes back to the links
to train  ;; observer procedure
  set epoch-error 0
  repeat examples-per-epoch [
    ;; set the input nodes randomly
    ask input-nodes [ set activation random-activation ]
    ;; distribute error
    ask perceptron [
      compute-activation
      update-weights target-answer
      recolor
    ]
  ]
  ;; plot stats
  set epoch-error epoch-error / examples-per-epoch
  set epoch-error epoch-error * 0.5
  tick
  plot-error
  plot-learned-line
end

;; compute activation by summing the inputs * weights
;; and running the result through the sign function, which
;; determines whether the computed value is above or below
;; the threshold
to compute-activation  ;; output-node procedure
  set activation sign sum [ [activation] of end1 * weight ] of my-in-links
  recolor
end

to update-weights [ answer ]  ;; output-node procedure
  let output-answer activation
  ;; calculate error for output nodes
  let output-error answer - output-answer
  ;; update the epoch-error
  set epoch-error epoch-error + (answer - sign output-answer) ^ 2
  ;; examine the input-to-output edges and set their new weights,
  ;; increasing or decreasing them by a value determined by the learning-rate
  ask my-in-links [
    set weight weight + learning-rate * output-error * [activation] of end1
  ]
end

;; computes the sign function given an input value
to-report sign [ input ]  ;; output-node procedure
  ifelse input > threshold
    [ report 1 ]
    [ report -1 ]
end

to-report random-activation  ;; observer procedure
  ifelse random 2 = 0
    [ report 1 ]
    [ report -1 ]
end

to-report target-answer  ;; observer procedure
  let a [activation] of input-node-1 = 1
  let b [activation] of input-node-2 = 1
  report ifelse-value (run-result (word "my-" target-function " a b")) [ 1 ] [ -1 ]
end

to-report my-or [ a b ]
  report (a or b)
end

to-report my-xor [ a b ]
  report (a xor b)
end

to-report my-and [ a b ]
  report (a and b)
end

to-report my-nor [ a b ]
  report not (a or b)
end

to-report my-nand [ a b ]
  report not (a and b)
end

;; test runs one instance and computes the output
to test  ;; observer procedure
  ask input-node-1 [ set activation input-1 ]
  ask input-node-2 [ set activation input-2 ]
  ;; compute the correct answer
  let correct-answer target-answer
  ;; color the nodes
  ask perceptron [ compute-activation ]
  ;; compute the answer
  let output-answer [activation] of perceptron
  ;; output the result
  ifelse output-answer = correct-answer
    [ user-message (word "Output: " output-answer "\nTarget: " correct-answer "\nCorrect Answer!") ]
    [ user-message (word "Output: " output-answer "\nTarget: " correct-answer "\nIncorrect Answer!") ]
end

;; sets the color of the perceptron's nodes appropriately
;; based on activation
to recolor  ;; output, input, or bias node procedure
  ifelse activation = 1
    [ set color white ]
    [ set color black ]
  ask in-link-neighbors [ recolor ]
  resize-recolor-links
end

;; resize and recolor the edges:
;; resize to indicate weight,
;; recolor to indicate positive or negative
to resize-recolor-links
  ask links [
    ifelse show-weights?
      [ set label precision weight 4 ]
      [ set label "" ]
    set thickness 0.1 + 20 * abs weight
    ifelse weight > 0
      [ set color [ 255 0 0 196 ] ]  ;; transparent red
      [ set color [ 0 0 255 196 ] ]  ;; transparent blue
  ]
end

;;
;; Plotting Procedures
;;

;; plot the error from the training
to plot-error  ;; observer procedure
  set-current-plot "Error vs. Epochs"
  plotxy ticks epoch-error
end

;; plot the decision line learned
to plot-learned-line  ;; observer procedure
  set-current-plot "Rule Learned"
  clear-plot
  run word "plot-" target-function
  ;; cycle through the x1-values and plot the corresponding x2-values
  let edge1 [out-link-to perceptron] of input-node-1
  let edge2 [out-link-to perceptron] of input-node-2
  foreach n-values 5 [? - 2] [
    ;; calculate w0 (the bias weight)
    let w0 sum [[weight] of out-link-to perceptron] of bias-nodes
    ;; put it all together
    let x2 ( (- w0 - [weight] of edge1 * ?) / [weight] of edge2 )
    ;; plot x1, x2
    set-current-plot-pen "rule"
    plotxy ? x2
  ]
end

to plot-or
  set-current-plot-pen "positives"
  plotxy -1 1
  plotxy 1 1
  plotxy 1 -1
  set-current-plot-pen "negatives"
  plotxy -1 -1
end

to plot-xor
  set-current-plot-pen "positives"
  plotxy -1 1
  plotxy 1 -1
  set-current-plot-pen "negatives"
  plotxy 1 1
  plotxy -1 -1
end

to plot-and
  set-current-plot-pen "positives"
  plotxy 1 1
  set-current-plot-pen "negatives"
  plotxy 1 -1
  plotxy -1 1
  plotxy -1 -1
end

to plot-nor
  set-current-plot-pen "positives"
  plotxy -1 -1
  set-current-plot-pen "negatives"
  plotxy 1 1
  plotxy 1 -1
  plotxy -1 1
end

to plot-nand
  set-current-plot-pen "positives"
  plotxy -1 -1
  plotxy 1 -1
  plotxy -1 1
  set-current-plot-pen "negatives"
  plotxy 1 1
end

; Copyright 2006 Uri Wilensky.
; See Info tab for full copyright and license.