Delta and Perceptron Training Rules for Neuron Training
This Demonstration shows how a single neuron can be trained, using either the "delta rule" or the "perceptron training rule", to compute simple linearly separable logic functions (AND, OR, X1, X2), and why it fails on the nonlinear XOR function.
Select the logic function to be trained on the perceptron. As you vary the training set, the plot and table are updated to show the current weights, the decision line, and how the function would be evaluated according to the perceptron's current state. You can adjust the learning rate with the parameter α. The "Random" button randomizes the weights so that the perceptron can learn from scratch.
The inputs can be switched on and off with the checkboxes. The dot representing the input coordinates is green when the function evaluates to true and red when it evaluates to false.
The diagram on the right shows the connections between the inputs (x₁ and x₂), weights (w₁ and w₂), and threshold (θ).
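The neuron in the diagram can be sketched in a few lines of code: it fires (outputs true) exactly when the weighted sum of its inputs reaches the threshold. The weight and threshold values below are illustrative choices that realize AND, not values taken from the Demonstration.

```python
def neuron(x1, x2, w1, w2, theta):
    """Threshold unit: fires when w1*x1 + w2*x2 >= theta."""
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

# With w1 = w2 = 1 and theta = 1.5, both inputs must be on
# to reach the threshold, so the neuron computes AND.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, neuron(x1, x2, w1=1.0, w2=1.0, theta=1.5))
```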
The current logic table is shown below the graph, with inputs x₁ and x₂ and output y.
The pattern space is the region over which the neuron is defined; it represents the different possibilities that can occur for different inputs. The decision line, the boundary on either side of which the function evaluates to true or false, is the line w₁x₁ + w₂x₂ = θ, drawn from its axis intercepts (0, θ/w₂) and (θ/w₁, 0).
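Computing those intercepts is straightforward; the sketch below assumes both weights are nonzero (when a weight is zero, the line is parallel to that axis and the corresponding intercept does not exist). The sample values are illustrative.

```python
def intercepts(w1, w2, theta):
    """Axis intercepts of the decision line w1*x1 + w2*x2 = theta.

    Returns the x2-axis intercept (0, theta/w2) and the
    x1-axis intercept (theta/w1, 0). Assumes w1 != 0 and w2 != 0.
    """
    return (0.0, theta / w2), (theta / w1, 0.0)

p, q = intercepts(w1=1.0, w2=1.0, theta=1.5)
print(p, q)  # (0.0, 1.5) (1.5, 0.0)
```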
As more training examples are passed through the perceptron (as you move the slider to the right), it learns the behavior expected of it. If the perceptron does not converge to a desired solution, reset the weights and try again. The exception is the XOR function, which will never converge because it is not linearly separable.
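The contrast between the separable functions and XOR can be reproduced with a minimal sketch of the perceptron training rule: after each example, every weight is nudged by α·(target − output)·input, with the threshold folded in as a bias weight on a constant input of 1. The epoch limit and learning rate below are illustrative assumptions, not values from the Demonstration.

```python
def train(samples, alpha=0.1, epochs=100):
    """Perceptron training rule; returns True if an epoch with zero errors is reached."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        errors = 0
        for x1, x2, t in samples:
            o = 1 if w1 * x1 + w2 * x2 + b >= 0 else 0
            if o != t:
                errors += 1
                # Perceptron training rule: w <- w + alpha*(t - o)*x
                w1 += alpha * (t - o) * x1
                w2 += alpha * (t - o) * x2
                b += alpha * (t - o)
        if errors == 0:  # converged: every example classified correctly
            return True
    return False

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(train(AND))  # True  - linearly separable, so the rule converges
print(train(XOR))  # False - no decision line exists, weights cycle forever
```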