a cluster center. Before the learning phase of the network,
the two-dimensional structure of the output units is ﬁxed and the weights
are initialized randomly. During learning, the sample vectors (deﬁning the
documents) are repeatedly propagated through the network. The weights of the
most similar prototype ws (winner neuron) are modiﬁed such that the prototype
moves toward the input vector wi , which is deﬁned by the currently considered
document d, i.e. wi := td (competitive learning). As similarity measure, the
Euclidean distance is usually used; however, for text documents the scalar
product (see Eq. 3) can also be applied. The weights ws of the winner neuron are modified
according to the following equation:
ws := ws + σ · (wi − ws),
where σ is a learning rate.
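The winner selection and the update rule above can be sketched as follows. This is a minimal illustration, not the text's own implementation; the function name and the toy data are assumptions, and Euclidean distance is used as the similarity measure, as the text suggests.

```python
import numpy as np

def winner_update(prototypes, x, sigma=0.1):
    """One competitive-learning step: find the winner prototype ws
    (smallest Euclidean distance to the input x) and move it toward
    x by the learning rate sigma."""
    # Euclidean distance from the input x to every prototype row
    dists = np.linalg.norm(prototypes - x, axis=1)
    s = int(np.argmin(dists))  # index of the winner neuron
    # ws := ws + sigma * (x - ws)  -- moves ws toward the input
    prototypes[s] += sigma * (x - prototypes[s])
    return s

# toy example: two prototypes, one input vector
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
x = np.array([0.2, 0.0])
winner = winner_update(protos, x, sigma=0.5)
```

Here only the winner moves; the non-winning prototype is left unchanged.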
To preserve the neighborhood relations, prototypes that are close to the
winner neuron in the two-dimensional structure are also moved in the same
direction. The weight change decreases with the distance from the winner
neuron. Therefore, the adaptation method is extended by a neighborhood function
v (see also Fig. 3):
ws := ws + v(i, s) · σ · (wi − ws),
where v(i, s) is the neighborhood function and σ is the learning rate. By this learning procedure, the...
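A full training step on the two-dimensional map, including the neighborhood function, can be sketched as below. The Gaussian shape chosen for v(i, s) is one common option and an assumption here, as are the function name and the toy grid; the text only requires that the weight change decrease with grid distance from the winner.

```python
import numpy as np

def som_step(prototypes, grid, x, sigma=0.1, radius=1.0):
    """One SOM training step. `prototypes` holds one weight row per
    map unit; `grid` holds each unit's fixed 2-D map coordinates.
    All units are moved toward the input x, scaled by a Gaussian
    neighborhood v(i, s) centered on the winner s (an assumed,
    common choice for v)."""
    # winner: unit whose weights are closest to the input
    s = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    # grid distance of every unit i to the winner s
    d = np.linalg.norm(grid - grid[s], axis=1)
    # v(i, s): 1 at the winner, decaying with grid distance
    v = np.exp(-(d ** 2) / (2 * radius ** 2))
    # wi := wi + v(i, s) * sigma * (x - wi) for every unit i
    prototypes += (v * sigma)[:, None] * (x - prototypes)
    return s

# toy 2x2 map: four units with fixed 2-D grid coordinates
protos = np.zeros((4, 2))
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
s = som_step(protos, grid, np.array([1.0, 0.0]))
```

After the step, the winner has moved furthest toward the input, and units farther away on the grid have moved proportionally less, which is what preserves the neighborhood relations.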
This note was uploaded on 06/19/2011 for the course IT 2258 taught by Professor Aymenali during the Summer '11 term at Abu Dhabi University.