A recurrent network is a feedforward network augmented with a recurrent synaptic weight matrix. The firing-rate dynamics are

    τ_r dv/dt = −v + F(W·u + M·v)

Assuming that the activation function F is linear, that is, F(x) = x, and denoting the feedforward input as h = W·u:

    τ_r dv/dt = −v + h + M·v
    τ_r dv/dt = −I·v + h + M·v
    τ_r dv/dt = (M − I)·v + h

We now consider the consequences of the assumption that the activation function is a rectification with threshold γ:

    F(h + M·v) = [h + M·v − γ]_+,   where [x]_+ = x if x ≥ 0, and 0 if x < 0.

We also consider the continuous approximation for recurrent networks, for the particular example of orientation selectivity. The main property of neurons in the primary visual cortex is orientation selectivity, which arises from both feedforward and recurrent synapses. In the continuous approximation:

    τ_r dv/dt = −...
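The dynamics above can be sketched numerically. The following is a minimal simulation of τ_r dv/dt = −v + F(h + M·v) by Euler integration; all sizes, weight scales, and the time step are illustrative assumptions, with the recurrent matrix M kept weak so the dynamics are stable. In the linear case the steady state solves (I − M)·v = h, which follows from setting dv/dt = 0 in τ_r dv/dt = (M − I)·v + h; in the rectified case the fixed point satisfies v = [h + M·v − γ]_+.

```python
import numpy as np

# Sketch of the recurrent firing-rate dynamics described above:
#   tau_r * dv/dt = -v + F(h + M @ v),   with h = W @ u.
# Population sizes, weight scales, and the time step are assumptions.

rng = np.random.default_rng(0)

N_u, N_v = 4, 3       # input and output population sizes (assumed)
tau_r = 10.0          # rate time constant (ms, assumed)
dt = 0.1              # Euler time step (ms)
gamma = 0.2           # rectification threshold

W = rng.normal(0.0, 0.5, size=(N_v, N_u))        # feedforward weights
M = rng.normal(0.0, 0.05, size=(N_v, N_v))       # weak recurrent weights (stable)
u = rng.uniform(0.5, 1.0, size=N_u)              # static input
h = W @ u                                        # feedforward drive h = W.u

def rectify(x, gamma=0.0):
    """Threshold-linear activation [x - gamma]_+."""
    return np.maximum(x - gamma, 0.0)

def simulate(F, steps=20000):
    """Euler-integrate tau_r dv/dt = -v + F(h + M v) to steady state."""
    v = np.zeros(N_v)
    for _ in range(steps):
        v = v + (dt / tau_r) * (-v + F(h + M @ v))
    return v

# Linear case F(x) = x: steady state solves (I - M) v = h.
v_lin = simulate(lambda x: x)
v_lin_exact = np.linalg.solve(np.eye(N_v) - M, h)
print(np.allclose(v_lin, v_lin_exact, atol=1e-6))

# Rectified case: fixed point satisfies v = [h + M v - gamma]_+.
v_rect = simulate(lambda x: rectify(x, gamma))
print(np.allclose(v_rect, rectify(h + M @ v_rect, gamma), atol=1e-6))
```

Keeping the recurrent weights small guarantees that the eigenvalues of M − I have negative real part, so the Euler iteration converges to the unique fixed point in both cases.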
This note was uploaded on 06/08/2009 for the course BME 575L taught by Professor Grzywacz during the Spring '09 term at USC.