13 The Hopfield Model
One of the milestones for the current renaissance in the field of neural networks
was the associative model proposed by Hopfield at the beginning of the 1980s.
Hopfield’s approach illustrates the way theoretical physicists like to think
about ensembles of computing units. No synchronization is required, each
unit behaving as a kind of elementary system in complex interaction with the
rest of the ensemble. An energy function must be introduced to harness the
theoretical complexities posed by such an approach. The next two sections
deal with the structure of Hopfield networks. We then proceed to show that
the model converges to a stable state and that two kinds of learning rules can
be used to find appropriate network weights.
13.1 Synchronous and asynchronous networks
A relevant issue for the correct design of recurrent neural networks is the adequate synchronization of the computing elements. In the case of McCulloch-Pitts networks we solved this difficulty by assuming that the activation of each
computing element consumes a unit of time. The network is built taking this
delay into account and by arranging the elements and their connections in the
necessary pattern. When the arrangement becomes too contrived, additional
units can be included which serve as delay elements. What happens when
this assumption is lifted, that is, when the synchronization of the computing
elements is eliminated?
13.1.1 Recursive networks with stochastic dynamics
We discussed the design and operation of associative networks in the previous
chapter. The synchronization of the output was achieved by requiring that all
computing elements evaluate their inputs and compute their output simultaneously. Under this assumption the operation of the associative memory can
R. Rojas: Neural Networks, Springer-Verlag, Berlin, 1996
be described with simple linear algebraic methods. The excitation of the output units is computed using vector-matrix multiplication and evaluating the sign function at each node.
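The synchronous recall step just described can be sketched as follows. This is only an illustration, not code from the text: the stored pattern, the Hebbian outer-product weights, and all variable names are our own assumptions.

```python
import numpy as np

# Illustrative associative memory with one stored bipolar pattern.
# The weights use the Hebbian outer-product rule (an assumption here);
# the diagonal is zeroed so that no unit excites itself.
stored = np.array([1, -1, 1, -1])
W = np.outer(stored, stored)
np.fill_diagonal(W, 0)

# A noisy probe: one component of the stored pattern is flipped.
probe = np.array([1, -1, 1, 1])

# Synchronous update: every unit evaluates its excitation at once
# (one vector-matrix multiplication), then applies the sign function.
recalled = np.sign(W @ probe)  # recovers the stored pattern
```

Because all units fire simultaneously, each step is a single matrix-vector product; this is exactly the global synchronization that the rest of the chapter sets out to remove.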
The methods we have used before to avoid dealing explicitly with the
synchronization problem have the disadvantage, from the point of view of both
biology and physics, that global information is needed, namely a global time.
Whereas in conventional computers synchronization of the digital building
blocks is achieved using a clock signal, there is no such global clock in biological
systems. In a more biologically oriented simulation, global synchronization
should thus be avoided. In this chapter we deal with the problem of identifying
the properties of neural networks lacking global synchronization.
Networks in which the computing units are activated at different times, and which provide a computation after a variable amount of time, are stochastic automata. Networks built from units of this kind behave like stochastic dynamical systems.
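The asynchronous dynamics can be sketched as follows: instead of a global clock, a single unit chosen at random evaluates its input and updates its state. The pattern, the Hebbian weights, and the number of sweeps are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Same illustrative Hebbian memory as above, with one stored pattern.
stored = np.array([1, -1, 1, -1])
W = np.outer(stored, stored)
np.fill_diagonal(W, 0)

rng = np.random.default_rng(0)
state = np.array([1, -1, -1, -1])           # corrupted version of the pattern

# Asynchronous dynamics: units fire one at a time, in random order,
# with no global synchronization signal.
for _ in range(5):                          # a few random sweeps
    for i in rng.permutation(len(state)):   # visit units in random order
        h = W[i] @ state                    # excitation of unit i alone
        state[i] = 1 if h >= 0 else -1      # only this unit updates
```

Each update uses only information locally available to the chosen unit (its own excitation), which is the biologically motivated property discussed above.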