a cluster center. Before the learning phase of the network, the two-dimensional structure of the output units is fixed and the weights are initialized randomly. During learning, the sample vectors (representing the documents) are repeatedly propagated through the network. The weights w_s of the most similar prototype (the winner neuron) are modified such that the prototype moves toward the input vector w_i, which is defined by the currently considered document d, i.e. w_i := t_d (competitive learning). The Euclidean distance is usually used as the similarity measure; for text documents, however, the scalar product (see Eq. 3) can also be applied. The weights w_s of the winner neuron are modified according to the following equation:

w_s := w_s + σ · (w_i − w_s),

where σ is a learning rate. To preserve the neighborhood relations, prototypes that are close to the winner neuron in the two-dimensional structure are also moved in the same direction. The weight change decreases with the distance from the winner neuron. Therefore, the adaptation rule is extended by a neighborhood function v (see also Fig. 3):

w_s := w_s + v(i, s) · σ · (w_i − w_s).

By this learning procedure, the...
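The update rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not the text's implementation: the grid shape, the learning rate, and the Gaussian form of the neighborhood function v (with an assumed width parameter `radius`) are illustrative choices; the winner is selected by Euclidean distance as in the text.

```python
import numpy as np

def som_step(W, x, sigma=0.1, radius=1.0):
    """One SOM training step on a 2-D grid of prototypes.

    W : array of shape (rows, cols, dim) -- prototype vectors w_i
    x : array of shape (dim,)            -- input vector (e.g. w_i := t_d)
    sigma : learning rate σ
    radius : width of the (assumed) Gaussian neighborhood function v
    """
    rows, cols, _ = W.shape
    # Winner neuron s: prototype with the smallest Euclidean distance to x.
    dists = np.linalg.norm(W - x, axis=2)
    s = np.unravel_index(np.argmin(dists), dists.shape)
    # Neighborhood function v(i, s) over grid coordinates; v = 1 at the
    # winner and decays with grid distance, so nearby prototypes move too.
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid_dist2 = (ii - s[0]) ** 2 + (jj - s[1]) ** 2
    v = np.exp(-grid_dist2 / (2.0 * radius ** 2))
    # Update: w := w + v(i, s) * σ * (x - w), i.e. move toward the input.
    W += v[:, :, None] * sigma * (x - W)
    return W, s
```

Repeatedly calling `som_step` with the document vectors (in random order, typically while shrinking `sigma` and `radius`) yields the usual SOM training loop.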

This note was uploaded on 06/19/2011 for the course IT 2258 taught by Professor Aymenali during the Summer '11 term at Abu Dhabi University.
