BIOSYSTEMS II: NEUROSCIENCES
2008 Spring Semester, Lecture 27
Kechen Zhang, 4/9/2008

Commonly Used Single-Neuron Models
- Compartment models
- Integrate-and-fire models
- Stochastic models
- Firing rate models

[Figures: compartment models of various complexity; details of a 3-compartment model.]

Integrate-and-Fire Model
    C dV/dt = -(V - V0)/R + I
Generate a spike whenever V reaches the threshold V_thres, then immediately reset it to V_reset.
    V: voltage, I: input current, C: capacitance, R: resistance, V0: resting potential

Input-output relation (gain function) of an integrate-and-fire neuron

Examples of gain functions
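The integrate-and-fire dynamics above can be simulated with a simple Euler step. This is a minimal sketch; the function name and all parameter values are illustrative assumptions, not taken from the lecture:

```python
# Euler-method simulation of a leaky integrate-and-fire neuron:
#   C dV/dt = -(V - V0)/R + I
# Parameter values below are illustrative, not from the lecture.

def simulate_lif(I, T=0.5, dt=1e-4, C=1e-9, R=1e7,
                 V0=-0.070, V_thres=-0.054, V_reset=-0.070):
    """Return the number of spikes fired during T seconds of constant input I."""
    V = V0
    spikes = 0
    for _ in range(int(T / dt)):
        V += (-(V - V0) / R + I) * dt / C   # one Euler step of the dynamics
        if V >= V_thres:                    # threshold crossed: spike and reset
            spikes += 1
            V = V_reset
    return spikes

# Stronger input current drives faster firing -- this spike count versus
# I is exactly the gain function of the integrate-and-fire neuron.
low  = simulate_lif(I=2.0e-9)
high = simulate_lif(I=4.0e-9)
print(low, high)
```

Note that for small currents (where the steady-state voltage V0 + IR stays below V_thres) the neuron never fires, so the gain function has a hard threshold.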
Logistic or sigmoid function:
    y = 1 / (1 + exp(-x))
More general form:
    y = 1 / (1 + exp(-k (x - c)))
Sign function:
    y = sign(x)

Firing Rate Model of a Neuron
Input firing rates from other neurons: x1, x2, x3, ...
Synaptic weights: w1, w2, w3, ...
Nonlinear gain function (input-output relation): g
Threshold parameter: θ
Output firing rate:
    y = g(Σ_i w_i x_i - θ)
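The firing rate model can be written out directly. A minimal sketch, with the general logistic form as the gain function (function names and example numbers are this sketch's own):

```python
import math

def sigmoid(x, k=1.0, c=0.0):
    # General logistic gain function: 1 / (1 + exp(-k (x - c)))
    return 1.0 / (1.0 + math.exp(-k * (x - c)))

def firing_rate_neuron(x, w, theta, g=sigmoid):
    # Output rate y = g(sum_i w_i x_i - theta)
    return g(sum(wi * xi for wi, xi in zip(w, x)) - theta)

# Example: three input rates, three synaptic weights, one threshold.
y = firing_rate_neuron(x=[1.0, 0.5, 0.2], w=[0.8, -0.4, 1.0], theta=0.3)
print(y)
```

With a sigmoid gain the output is always between 0 and 1, i.e. a normalized firing rate rather than a binary spike.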
Feedforward vs. Recurrent Networks

An Example of a Biological Neural Network: C. elegans
All synaptic connections among its 302 neurons are known and are consistent from animal to animal. A feedforward network can describe only part of a larger recurrent network.

An Example of a Biological Neural Network: Mammalian Neocortex
Neocortex of a kitten (Cajal); local architecture. Over 80% of brain volume in humans is occupied by neocortex, and over 98% of axons in the white matter interconnect different areas of the neocortex itself rather than connecting with other parts of the brain. So the neocortical system is a huge recurrent network.

Example: A feedforward network in the visual pathway
The oriented receptive field of a simple cell in visual cortex can be derived by linearly combining inputs from many smaller circular receptive fields in the lateral geniculate body.
(Hubel)

McCulloch and Pitts Model (1943)
- Binary neurons with thresholds
- Synchronized updates
- Memory as reverberant activity in a loop
- Such a network is powerful enough to do arbitrary logical calculations
- But the brain is not a digital computer

Perceptron
- Invented by Rosenblatt (1950s)
- Output is a weighted linear combination of the inputs
- Supervised learning: minimize the error for each example by updating the weights
- Convergence to a solution in finitely many steps is guaranteed if the classification problem is linearly separable

Perceptron
Output:
    y = Σ_i w_i x_i - θ
where
    input pattern: x1, x2, x3, ...
    weights: w1, w2, w3, ...
    threshold: θ
Each input pattern is classified into one of two classes depending on whether y > 0 or y < 0.

Learning rule:
    Δw_i = η (Y - y) x_i
where
    desired output: Y
    learning rate: η > 0

Example of Linear Separability: Volleyball Team
Name   Height (input x1)   Weight (input x2)   Gender (desired output Y)
John   5'10"               140                 1
Mary   5'7"                110                 0
Tom    6'2"                190                 1

    y = w1 x1 + w2 x2 - θ
where θ = 0.5 is the threshold.

[Figure: Height-Weight plane showing a separable case, where a line divides the y > 0 and y < 0 regions, and an inseparable case, where no line does.]
Perceptron learning rule as gradient descent
Consider the squared error for the desired output Y and the actual output y:
    E = (1/2) (y - Y)^2
To minimize the error, we change the weights slightly along the direction of steepest descent (gradient descent). This gives:
    Δw_i = -η ∂E/∂w_i = -η (y - Y) ∂y/∂w_i = -η (y - Y) x_i = η (Y - y) x_i
which is exactly the perceptron learning rule.

Exclusive-OR (XOR) Problem
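The learning rule can be run on the volleyball data above. This is a sketch, not the lecture's own code: heights are converted to inches and both features are rescaled by 1/100, which is a preprocessing choice of this example:

```python
# Perceptron trained with the rule delta w_i = eta (Y - y) x_i, using a
# binary thresholded output. Feature scaling is this sketch's assumption.

def train_perceptron(data, eta=0.5, max_epochs=1000):
    w = [0.0, 0.0]
    theta = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, Y in data:
            y = 1 if w[0] * x[0] + w[1] * x[1] - theta > 0 else 0
            if y != Y:
                errors += 1
                w[0] += eta * (Y - y) * x[0]
                w[1] += eta * (Y - y) * x[1]
                theta -= eta * (Y - y)  # threshold learns like a weight on -1
        if errors == 0:                 # converged: every example classified
            break
    return w, theta

# (height/100 in inches, weight/100 in lb) -> desired output Y
data = [([0.70, 1.40], 1),   # John  5'10", 140
        ([0.67, 1.10], 0),   # Mary  5'7",  110
        ([0.74, 1.90], 1)]   # Tom   6'2",  190
w, theta = train_perceptron(data)
print(w, theta)
```

Because this data set is linearly separable, the perceptron convergence theorem guarantees the loop terminates with all three examples correctly classified.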
Input x1   Input x2   Output y
0          0          0
0          1          1
1          0          1
1          1          0

XOR: either A or B but not both. This problem is linearly inseparable and cannot be learned by a perceptron. In general, the decision boundary of a perceptron is given by
    Σ_i w_i x_i = const
which corresponds to a hyperplane in the input space.

A Solution to the XOR Problem by Multilayer Perceptron
Multiple solutions can be found by training a network. Here is one solution. A number inside a circle is a threshold and a number by an arrow is a synaptic weight.
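One such multilayer solution can be written out with binary threshold units. The particular weights and thresholds below are hand-picked for this sketch and may differ from the numbers in the lecture's figure:

```python
# A two-layer network of binary threshold units that computes XOR.
# Weights and thresholds are one of many possible solutions.

def step(s):
    return 1 if s > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)        # hidden unit computing OR
    h_and = step(x1 + x2 - 1.5)        # hidden unit computing AND
    return step(h_or - h_and - 0.5)    # "OR but not AND" = XOR

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor_net(x1, x2))
```

The hidden layer remaps the four input patterns so that the output unit's single hyperplane can separate them, which no hyperplane in the original input space could do.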
Linear perceptron with multiple inputs and outputs
Linear mapping:
    y_i = Σ_j W_ij x_j
Vector-matrix form: y = Wx

Optimal linear mapping
Suppose multiple inputs are mapped linearly to multiple outputs: y = Wx, where W is an m x n weight matrix. Collect the example inputs and the corresponding desired outputs as the columns of two matrices:
    X = [x^(1) x^(2) ...],   Y = [y^(1) y^(2) ...]
and find W such that Y = WX. The optimal weight matrix W that minimizes the squared error between the actual outputs WX and the desired outputs Y is
    W = Y X^+
where X^+ is the pseudoinverse of X.

Example of linear mapping: autoassociative memory
Image data: X
Desired autoassociation: X = WX
Solution: W = X X^+
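The pseudoinverse solution W = Y X^+, and its autoassociative special case W = X X^+, can be sketched with NumPy. The dimensions and random data below are toy choices for illustration:

```python
# Optimal linear mapping W = Y X^+ via the pseudoinverse, with each
# example input/output pair stored as one column of X and Y.
# Toy sizes: n = 3 inputs, m = 2 outputs, 4 example pairs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))        # each column is one input pattern
W_true = rng.standard_normal((2, 3))   # ground-truth mapping for the demo
Y = W_true @ X                         # desired outputs

W = Y @ np.linalg.pinv(X)              # least-squares optimal weight matrix
print(np.allclose(W @ X, Y))

# Autoassociative special case: the desired output is the input itself.
Wa = X @ np.linalg.pinv(X)
print(np.allclose(Wa @ X, X))
```

When the desired outputs are exactly realizable by some linear map (as in this demo), the squared error reaches zero; otherwise W = Y X^+ gives the least-squares best fit.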
[Figure: autoassociative recall — original image, partial input (key), and recalled output.]

Multilayer Perceptron
Output units, hidden units, input units

Example: Learning in a multilayer perceptron

Multilayer Perceptron Learning
A multilayer feedforward network learns to approximate an unknown input-output relationship from given examples of input-output pairs. All the weights in the network can be learned by minimizing the squared error between the actual outputs of the network and the desired outputs in the examples, just as in a single-layer perceptron. There are several algorithms for minimizing the error. The first one, the backpropagation algorithm, was discovered by Rumelhart, Hinton, and Williams in the mid-1980s. It is essentially a gradient descent method, like the learning rule of the single-layer perceptron. Unlike in a single-layer perceptron, however, the final form of the learning rule here is not local, in the sense that the modification of a synaptic weight depends not only on the activities of the pre- and postsynaptic neurons but also on the activities of all other neurons in the entire network. ...
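The chain-rule computation at the heart of backpropagation can be illustrated on a tiny 2-2-1 sigmoid network. This is a sketch of one weight's gradient only; the network, its parameter values, and the finite-difference check are this example's own assumptions, not material from the lecture:

```python
# Backpropagation gradient for one output weight of a 2-2-1 sigmoid
# network, checked against a numerical finite difference.
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(params, x):
    w11, w12, w21, w22, t1, t2, v1, v2, t3 = params
    h1 = sigmoid(w11 * x[0] + w12 * x[1] - t1)   # hidden unit 1
    h2 = sigmoid(w21 * x[0] + w22 * x[1] - t2)   # hidden unit 2
    y = sigmoid(v1 * h1 + v2 * h2 - t3)          # output unit
    return h1, h2, y

def loss(params, x, Y):
    _, _, y = forward(params, x)
    return 0.5 * (y - Y) ** 2

def grad_v1(params, x, Y):
    # Chain rule for the weight v1: dE/dv1 = (y - Y) * y * (1 - y) * h1
    h1, h2, y = forward(params, x)
    return (y - Y) * y * (1.0 - y) * h1

params = [0.3, -0.2, 0.5, 0.1, 0.0, 0.2, 0.7, -0.4, 0.1]
x, Y = [1.0, 0.0], 1.0

# Central finite difference on v1 (index 6) as an independent check.
eps = 1e-6
p_plus = params.copy();  p_plus[6] += eps
p_minus = params.copy(); p_minus[6] -= eps
numeric = (loss(p_plus, x, Y) - loss(p_minus, x, Y)) / (2 * eps)
print(abs(grad_v1(params, x, Y) - numeric) < 1e-8)
```

Gradients for the hidden-layer weights are obtained the same way, but they pick up an extra factor from the output unit, which is exactly the non-locality noted above: the update of a deep weight depends on activity elsewhere in the network.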