Supervised Learning Network
Dr. K. Ganesan, Director, TIFAC-CORE in Automotive Infotronics, VIT University, Vellore – 632 014


Perceptron Network
Perceptron networks are single-layer feed-forward networks, also called simple perceptrons. The key concepts to be considered here are:
- The perceptron network consists of 3 units, namely the sensory unit (input unit), the associator unit (hidden unit), and the response unit (output unit).
- The sensory units are connected to the associator units with fixed weights taking values 1, 0, or -1, assigned at random.
- The binary activation function is used in the sensory unit and the associator unit.
- The response unit has an activation of 1, 0, or -1.
- The binary step with fixed threshold θ is used as the activation for the associator unit.
- The output signals sent from the associator unit to the response unit are binary only.
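The fixed sensory-to-associator connections described above can be sketched as follows. This is a minimal illustration, not the author's code; the array names and sizes are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# fixed sensory-to-associator weights, drawn at random from {1, 0, -1}
# (4 sensory inputs feeding 3 associator units, sizes chosen arbitrarily)
W_sa = rng.choice([-1, 0, 1], size=(4, 3))

def binary_step(z, theta=0.0):
    """Binary step activation used in the sensory and associator units."""
    return (z > theta).astype(int)

x = np.array([1, 0, 1, 1])      # binary sensory input
a = binary_step(x @ W_sa)       # binary vector passed on to the response unit
```

Because the step function emits only 0s and 1s, the signal reaching the response unit is binary, as the notes state.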
A perceptron network with its 3 units is shown in Figure below.
[Figure: perceptron network with its sensory, associator, and response units]
Perceptron Learning Rule
In the perceptron learning rule, the learning signal is the difference between the desired and the actual response of a neuron. Consider a finite number N of input training vectors x(n) with their associated target (desired) values t(n), where n ranges from 1 to N. Each target is either +1 or -1. The output y is obtained by calculating the net input y_in and applying the activation function to it:

y = f(y_in) =  1   if y_in > θ
               0   if -θ ≤ y_in ≤ θ
              -1   if y_in < -θ

The weight update in perceptron learning is as shown: if y ≠ t, then w(new) = w(old) + α·t·x; otherwise, w(new) = w(old).
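The activation and update rule above can be sketched in a few lines. This is a hedged illustration of the rule as stated, with function and variable names chosen for the example:

```python
import numpy as np

def activation(y_in, theta):
    """Bipolar step with a dead zone: +1 above theta, -1 below -theta, else 0."""
    if y_in > theta:
        return 1
    elif y_in < -theta:
        return -1
    return 0

def update(w, b, x, t, y, alpha):
    """Perceptron rule: adjust weights and bias only when output differs from target."""
    if y != t:
        w = w + alpha * t * x
        b = b + alpha * t
    return w, b
```

For example, if the target is t = 1 but the network outputs y = -1, the weights move by α·t·x toward the misclassified pattern; when y = t, nothing changes.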
Perceptron Learning Rule
In the original perceptron network, the output obtained from the associator unit is a binary vector, so that output can be taken as the input signal to the response unit, on which classification is performed. Only the weights between the associator unit and the output unit are adjusted; the weights between the sensory and associator units remain fixed. As a result, the discussion of the network is limited to this single trainable portion, and the associator unit effectively behaves as the input unit.
Perceptron training algorithm for single output classes
The perceptron algorithm can be used for either binary or bipolar input vectors with bipolar targets, a fixed threshold, and a variable bias. The algorithm is not sensitive to the initial values of the weights or to the value of the learning rate. In the algorithm discussed below, the inputs are first presented and the net input is calculated. The output of the network is obtained by applying the activation function to the calculated net input. The calculated output is then compared with the desired output, and the weight update is carried out where they differ. The network is trained until the stopping criterion is met, typically when no weights change during a full pass over the training set.
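The training loop described above can be sketched as follows. This is a minimal sketch under the stated assumptions (bipolar targets, fixed threshold θ, variable bias); the function name and defaults are chosen for the example, not taken from the source:

```python
import numpy as np

def train_perceptron(X, T, alpha=1.0, theta=0.0, max_epochs=100):
    """Train a single-output perceptron on inputs X with bipolar targets T.

    Stops when a full epoch produces no weight change (the usual
    stopping criterion), or after max_epochs passes.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        changed = False
        for x, t in zip(X, T):
            y_in = b + np.dot(x, w)
            # bipolar step activation with fixed threshold theta
            if y_in > theta:
                y = 1
            elif y_in < -theta:
                y = -1
            else:
                y = 0
            if y != t:
                # perceptron rule: w(new) = w(old) + alpha * t * x
                w = w + alpha * t * x
                b = b + alpha * t
                changed = True
        if not changed:
            break
    return w, b

# Example: bipolar AND function
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
T = np.array([1, -1, -1, -1])
w, b = train_perceptron(X, T)   # converges to w = [1, 1], b = -1
```

With α = 1 and θ = 0, the loop settles after two epochs on a separating line for the AND patterns, illustrating the claimed insensitivity to the initial weight values (here, zeros).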
