Outline of the Lecture
- Nonlinear Recurrent Network Models
- Rectification, Winner-Take-All, Gain Modulation
- Rectification Induces Higher Amplification and Selection, and Tuning and Gain Controls

A recurrent network is a feedforward network augmented with a recurrent synaptic weight matrix M. Assume first that the activation function F is linear, that is, F(x) = x, and denote the effective feedforward input as h = W u. The firing-rate dynamics are then

\tau_r \frac{dv}{dt} = -v + h + M v

\tau_r \frac{dv}{dt} = -I v + h + M v

\tau_r \frac{dv}{dt} = (M - I) v + h

which is the linear special case of the general nonlinear dynamics

\tau_r \frac{dv}{dt} = -v + F(W u + M v)

We now consider the consequences of the assumption that the activation function is a rectification with threshold \gamma:

F(h + M v) = [\, h + M v - \gamma \,]_+ , \qquad \text{where } ([x]_+)_i = \begin{cases} x_i & x_i \ge 0 \\ 0 & x_i < 0 \end{cases}

We also consider the continuous approximation for recurrent networks (for the particular example of orientation selectivity):...
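The rectified dynamics above can be simulated directly by Euler integration. The following is a minimal sketch, not part of the lecture: the network size, time constant, threshold, and random weights are all illustrative choices, and `rect` is a hypothetical helper implementing the componentwise [x]_+ nonlinearity.

```python
import numpy as np

def rect(x):
    # [x]_+ applied componentwise: x_i if x_i >= 0, else 0
    return np.maximum(x, 0.0)

tau_r = 10.0   # firing-rate time constant (illustrative, in ms)
dt = 0.1       # Euler step (ms)
gamma = 0.5    # rectification threshold (illustrative)

rng = np.random.default_rng(0)
N = 5
M = 0.1 * rng.standard_normal((N, N))  # weak recurrent weights (assumed stable)
h = rng.uniform(0.5, 1.5, N)           # effective feedforward input h = W u

# Integrate tau_r dv/dt = -v + [h + M v - gamma]_+ for ~50 time constants
v = np.zeros(N)
for _ in range(5000):
    v = v + (dt / tau_r) * (-v + rect(h + M @ v - gamma))

# At a fixed point the rates satisfy v = [h + M v - gamma]_+
residual = np.max(np.abs(v - rect(h + M @ v - gamma)))
```

With weak recurrent weights the dynamics are a contraction, so `v` settles to the rectified fixed point and `residual` is near machine precision; rates stay nonnegative because the rectification clips the steady-state drive at zero.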
Spring '09, Grzywacz
Keywords: Amplifier, Continuous function, Artificial neural network, Rectifier, recurrent neural network
