Ocular-dominance columns develop slowly after birth, as these autoradiographs from the developing kitten cortex show.
This model develops quasi-periodic columns, with variations due to random initial conditions.
[Figure: optical imaging, 9 × 12 mm field]
Outline of the Lecture
Kalman Filtering
Changing Environments, Contrast Adaptation
Kalman Filtering Models Optimal Neural Adaptation over Time
Optimal Probabilistic Adaptation
When flying from the outside of a canyon into the corridors of the canyon, the statistics of the environment change.
Kalman-filtering Adaptation
The goal is to update the estimate as the statistics of the environment change.
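A minimal sketch of such updating, as a one-dimensional Kalman filter with a random-walk state model (the noise variances and measurements below are illustrative, not from the lecture):

```python
# One-dimensional Kalman filter sketch: the estimate is updated by
# blending the prediction with each new noisy measurement, weighted
# by their respective uncertainties. q and r are assumed values.

def kalman_step(x_est, p_est, z, q=0.1, r=1.0):
    """One predict-update cycle for a random-walk state model.

    x_est, p_est : prior state estimate and its variance
    z            : new noisy measurement
    q, r         : process- and measurement-noise variances (assumed)
    """
    # Predict: a random walk leaves the mean unchanged, but the
    # uncertainty grows by the process noise q.
    p_pred = p_est + q
    # Update: the Kalman gain sets how far the measurement moves the estimate.
    k = p_pred / (p_pred + r)
    x_new = x_est + k * (z - x_est)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Feed noisy measurements of a roughly constant quantity: the estimate
# moves toward the measurements while the posterior variance shrinks.
x, p = 0.0, 1.0
for z in [1.2, 0.8, 1.1, 0.9, 1.0]:
    x, p = kalman_step(x, p, z)
```

The gain k automatically trades off prior confidence against measurement reliability, which is the sense in which the update is optimal.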
Outline of the Lecture
Bayesian Inference
Device Optimization, Bayesian Decision, ROC Analysis
Bayesian Decision Theory Helps in Understanding Inference in the Brain
Computations performed by neural networks can be expressed as energy minimization.
A link exists between energy minimization and Bayesian processes (and therefore, between these things and neural networks). If a system can be in a finite number of states, an energy assigned to each state can be converted into a probability through a Boltzmann distribution.
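One concrete version of this link can be sketched with made-up energies: each state gets a probability proportional to exp(-E/T), so lower-energy states are more probable.

```python
import math

# Sketch of the energy-probability link: for a finite set of states
# with energies E_s, a Boltzmann distribution assigns each state the
# probability exp(-E_s / T) / Z. The energies below are illustrative.
def boltzmann(energies, T=1.0):
    weights = [math.exp(-e / T) for e in energies]
    z = sum(weights)                 # partition function
    return [w / z for w in weights]

probs = boltzmann([0.0, 1.0, 2.0])
# Lower-energy states receive higher probability, and the
# probabilities sum to one.
```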
Outline of the Lecture
Energy Minimization
Energy, Motion (Aperture Problem), Regularization
One Can Understand Some Network Computations as Energy Minimization
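A minimal sketch of this idea, using a tiny Hopfield-style network with symmetric weights (the weights and initial state are illustrative): each asynchronous unit update can only lower, or leave unchanged, the energy E = -1/2 Σᵢⱼ wᵢⱼ sᵢ sⱼ, so the network's dynamics descend an energy landscape.

```python
# Three binary units with symmetric weights and zero diagonal.
# Asynchronous threshold updates never increase the network energy.
W = [[0, 1, -1],
     [1, 0, 1],
     [-1, 1, 0]]

def energy(s):
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(3) for j in range(3))

s = [1, -1, 1]
energies = [energy(s)]
for _ in range(5):                       # a few sweeps
    for i in range(3):                   # asynchronous unit updates
        h = sum(W[i][j] * s[j] for j in range(3))
        s[i] = 1 if h >= 0 else -1
        energies.append(energy(s))
# The recorded energies form a non-increasing sequence.
```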
Swimming uses half centers, with cross inhibition ending mainly by local inhibition.
The bifurcation diagram shows fixed points and limit cycles of varying frequencies as one modulates the tonic excitation to the C neurons.
David Marr's three levels of analysis: computational, algorithmic, and implementational.
Outline of the Lecture
Information Theory
Entropy, Noise Entropy, Mutual Information
Mutual Information Measures How Much Responses Tell about Stimuli
In a black-box model, we try to describe a system well enough to predict its responses without knowing what is inside the system.
If the firing is different when one presents the same stimulus twice, then how does the brain know what is in the stimulus?
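Mutual information quantifies exactly this: how much the (noisy) responses tell about the stimuli, I(S;R) = H(R) - H(R|S). A sketch for a binary stimulus and binary response, with illustrative joint probabilities:

```python
import math

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

# p[s][r]: joint probability of stimulus s and response r (made up)
p = [[0.4, 0.1],
     [0.1, 0.4]]

# Total response entropy H(R), from the marginal over responses
p_r = [p[0][0] + p[1][0], p[0][1] + p[1][1]]
h_r = entropy(p_r)

# Noise entropy H(R|S): conditional entropy of R, averaged over stimuli
h_noise = sum((p[s][0] + p[s][1]) *
              entropy([p[s][r] / (p[s][0] + p[s][1]) for r in (0, 1)])
              for s in (0, 1))

mi = h_r - h_noise          # bits the response carries about the stimulus
```

With these numbers the response is informative but noisy, so I(S;R) is positive yet well below the 1 bit a noiseless binary channel would carry.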
Outline of the Lecture
Population Decoding
Population Code, Population Vector, Bayesian Decoding
Different Decoding Schemes Lead to Different Accuracies of Measurement
In a black-box model, we try to describe a system well enough to predict its responses without knowing what is inside the system.
In this example, the stimulus was a motion of varying speed (A), responses were spikes (B), and experimenters estimated the stimulus from the recorded spikes.
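A minimal sketch of one decoding scheme, the population vector: each neuron votes for its preferred direction, weighted by its firing rate. The cosine tuning, preferred directions, and rates below are all assumptions for illustration.

```python
import math

# Four neurons with preferred directions a quarter turn apart
prefs = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]

def rates(theta, r_max=10.0):
    # Assumed rectified-cosine tuning curves
    return [max(0.0, r_max * math.cos(theta - p)) for p in prefs]

def decode(r):
    # Sum of preferred-direction unit vectors, weighted by rates
    x = sum(ri * math.cos(p) for ri, p in zip(r, prefs))
    y = sum(ri * math.sin(p) for ri, p in zip(r, prefs))
    return math.atan2(y, x)

true_dir = 0.7                       # stimulus direction, radians
est_dir = decode(rates(true_dir))    # population-vector estimate
```

For this symmetric, noise-free arrangement the population vector recovers the direction essentially exactly; with noisy rates its accuracy would fall below that of Bayesian decoding, which is the point of comparing schemes.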
Outline of the Lecture
Nonlinear System Identification
Volterra Series, Nonlinear Kernels, Wiener Series
Volterra and Wiener Kernels Characterize Linear and Nonlinear Systems
In a black-box model, we try to describe a system well enough to predict its responses without knowing what is inside the system.
If the black box is linear, then we can describe the system fully with the impulse response, as any stimulus is a sum of impulses.
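In discrete time this means the response to any stimulus is the convolution of the stimulus with the impulse response; a sketch with an illustrative three-sample kernel:

```python
# Discrete-time linear system: the output is the convolution of the
# stimulus with the impulse response. Kernel values are made up.
kernel = [1.0, 0.5, 0.25]            # impulse response h[k]

def respond(stimulus):
    out = []
    for t in range(len(stimulus)):
        out.append(sum(kernel[k] * stimulus[t - k]
                       for k in range(len(kernel)) if t - k >= 0))
    return out

# An impulse input reproduces the kernel itself, and superposition
# holds exactly: scaling or summing inputs scales or sums outputs.
r_impulse = respond([1.0, 0.0, 0.0])
```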
Outline of the Lecture
Linear System Identification
Black-box Models, Impulse Response, Reverse Correlation
Reverse Correlation Can Determine a Neural Linear System's Impulse Response
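A minimal sketch of reverse correlation, assuming a known linear-threshold model "neuron" (the kernel, threshold, and white-noise stimulus are all illustrative): averaging the stimulus segments that precede each spike (the spike-triggered average) recovers the shape of the underlying kernel.

```python
import random

random.seed(0)
true_kernel = [0.0, 1.0, 0.5]        # model neuron's kernel, newest sample last
T = 20000
stim = [random.gauss(0.0, 1.0) for _ in range(T)]   # white-noise stimulus

sta = [0.0] * len(true_kernel)       # spike-triggered average accumulator
n_spikes = 0
for t in range(len(true_kernel), T):
    window = stim[t - len(true_kernel):t]
    drive = sum(k * s for k, s in zip(true_kernel, window))
    if drive > 1.0:                  # spike whenever the drive crosses threshold
        n_spikes += 1
        for i in range(len(true_kernel)):
            sta[i] += window[i]
sta = [x / n_spikes for x in sta]
# sta is approximately proportional to true_kernel: near zero where the
# kernel weight is zero, largest where the weight is largest.
```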
A recurrent network is a feedforward network with a recurrent synaptic weight matrix.
Some neuronal tissues are so massive and complex that network analysis is not very useful.
Perception is a constructive process that depends on both the stimulus and prior knowledge.
Outline of the Lecture
Minimal-wiring Hypothesis
Elastic Nets
Dimensionality Reduction, Development
The Function of Cortical Maps May Be to Minimize Wiring
Unsupervised learning is self-organization to maximize extraction of information from input.
One way is by having a supervisor tell the network whether its performance is good.
Classical conditioning is an implicit-memory form of learning dependent on stimulus pairing.
Outline of the Lecture
Hebbian Supervised Learning
Without Error Correction, With Error Correction
Hebbian Supervised Learning Benefits from External Information about the Truth
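One concrete instance of Hebbian learning with error correction is a perceptron-style delta rule: the weight change is Hebb-like (proportional to input activity) but gated by the difference between the desired and actual output. The data set and learning rate below are made up for illustration.

```python
# Linearly separable toy data: (input vector, desired output)
data = [([1.0, 1.0], 1), ([1.0, -1.0], 1),
        ([-1.0, 1.0], -1), ([-1.0, -1.0], -1)]

w = [0.0, 0.0]
eta = 0.5
for _ in range(10):                                  # a few epochs
    for u, target in data:
        v = 1 if sum(wi * ui for wi, ui in zip(w, u)) >= 0 else -1
        err = target - v                             # external "truth" signal
        for i in range(2):
            w[i] += eta * err * u[i]                 # Hebb-like, gated by error

# After training, the network classifies every pattern correctly.
correct = sum(1 for u, t in data
              if (1 if sum(wi * ui for wi, ui in zip(w, u)) >= 0 else -1) == t)
```

Without the error term the weights would grow for every active input-output pair; the external truth signal makes updates stop once performance is correct.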
This model develops quasi-periodic columns, with variations due to random initial conditions.
Unsupervised learning is self-organization to maximize extraction of information from input.
How can we modify networks like these to learn how to perform computations?
Outline of the Lecture
Hebbian Unsupervised Learning
Development, Orientation Selectivity, Ocular Dominance
Hebbian Unsupervised Learning Leads to Self Organization of Neural Circuits
Long-term potentiation and depression in the hippocampus are examples of the Hebb-Stent rule.
A full feedforward network has vector inputs and outputs connected by a weight matrix.
The simplest rule following Hebb's conjecture is:

\tau_w \frac{dw}{dt} = v\,u
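Discretizing the basic Hebb rule dw ∝ v·u with a linear neuron v = w·u shows the characteristic self-organizing behavior: the weight vector grows fastest along the correlated direction of the inputs (the principal eigenvector of the input correlation matrix). The inputs, learning rate, and number of steps below are illustrative.

```python
import random

random.seed(1)
w = [0.1, 0.0]            # initial weights, slightly asymmetric
eta = 0.01
for _ in range(200):
    c = random.gauss(0.0, 1.0)                       # shared source
    # Two inputs driven by the same source -> correlated along (1, 1)
    u = [c + random.gauss(0.0, 0.2), c + random.gauss(0.0, 0.2)]
    v = w[0] * u[0] + w[1] * u[1]                    # linear response
    w = [w[i] + eta * v * u[i] for i in range(2)]    # Hebbian update

# The weights grow and become nearly equal: the weight vector aligns
# with the correlated (1, 1) direction of the inputs.
ratio = w[1] / w[0]
```

Note that the plain rule is unstable (the weights grow without bound), which is why saturation or normalization terms are usually added in practice.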
Outline of the Lecture
Models of Motor-pattern Generation
A Simple Model, Computer Simulations
Models of Spinal Motor-Pattern Generation Depend Strongly on Parameters
If the system is rectifying, and Re(\lambda) > 0 and Im(\lambda) \neq 0, trajectories converge to limit cycles.
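A sketch of this mechanism with a half-center-style excitatory-inhibitory pair, integrated by Euler's method. The weights, thresholds, and time constants are illustrative; they are chosen so that the interior fixed point is an unstable spiral (Re(λ) > 0, Im(λ) ≠ 0), and rectification then traps the trajectory on a limit cycle.

```python
def relu(x):
    return x if x > 0.0 else 0.0

tau_e, tau_i, dt = 10.0, 50.0, 0.1       # ms; slow inhibition aids oscillation
E, I = 30.0, 15.0                        # start off the fixed point
trace = []
for _ in range(int(2000 / dt)):          # 2 seconds of simulated time
    dE = (-E + relu(1.25 * E - I + 10.0)) / tau_e   # self-excitation
    dI = (-I + relu(E - 10.0)) / tau_i              # delayed inhibition
    E += dt * dE
    I += dt * dI
    trace.append(E)

late = trace[len(trace) // 2:]           # discard transients
spread = max(late) - min(late)           # stays large: sustained oscillation
```

With these parameters the linearization around the fixed point has a positive real part and a nonzero imaginary part, so small deviations spiral outward until the rectification clips the trajectory into a stable oscillation.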
Reduced preparations show that the motor-pattern-generation circuitry is in the spinal cord.
One of the best-known central-pattern generators is that of the …
Outline of the Lecture
Excitatory-inhibitory Network Models
Non-symmetric Matrices, Olfactory Bulb, Phase Plane
Recurrent Networks with Non-symmetric Matrices May Exhibit Oscillations
A recurrent network is a feedforward network with a recurrent synaptic weight matrix.
For symmetric M, the eigenvectors are orthonormal, i.e., e_\mu \cdot e_\nu = \delta_{\mu\nu}, and general solutions have time constants \tau_r / (1 - \lambda_\mu):

\tau_r \frac{dv}{dt} = (M - I)\,v + h

v(t) = \sum_\mu c_\mu(t)\, e_\mu
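A minimal sketch of this mode picture for a two-unit network with a symmetric M (illustrative weights): input along the eigenvector with eigenvalue λ close to 1 is selectively amplified, reaching a steady state v = h / (1 − λ).

```python
# Two-unit linear recurrent network, tau_r dv/dt = -v + h + M v,
# integrated by Euler's method. Weights are made up for illustration.
tau_r, dt = 10.0, 0.1
M = [[0.0, 0.9],
     [0.9, 0.0]]                 # symmetric: eigenvalues +0.9 and -0.9
h = [1.0, 1.0]                   # input along the lambda = +0.9 eigenvector

v = [0.0, 0.0]
for _ in range(int(500 / dt)):   # run long enough to reach steady state
    dv = [(-v[i] + h[i] + sum(M[i][j] * v[j] for j in range(2))) / tau_r
          for i in range(2)]
    v = [v[i] + dt * dv[i] for i in range(2)]

# Steady state: v = h / (1 - 0.9) = 10 h, a tenfold amplification of
# this mode; the lambda = -0.9 mode would instead be attenuated.
gain = v[0] / h[0]
```

The same decomposition gives the mode time constants: this mode relaxes with τ_r / (1 − 0.9) = 10 τ_r, ten times slower than an isolated unit.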
Outline of the Lecture
Nonlinear Recurrent Network Models
Rectification, Gain Modulation, Winner-take-all
Rectification Induces Higher Amplification, Selection, Tuning, and Gain Control
A recurrent network is a feedforward network with a recurrent synaptic weight matrix.
Assuming that the activation function F is linear, that is, F(x) = x, and denoting the input as h = W u:

\tau_r \frac{dv}{dt} = -v + F(W u + M v)

\tau_r \frac{dv}{dt} = -v + h + M v

\tau_r \frac{dv}{dt} = (M - I)\,v + h
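If F is instead a rectification, the same network can select among its inputs. A sketch of winner-take-all behavior with self-excitation and mutual inhibition (all weights, inputs, and time constants are illustrative): the unit with the slightly stronger input suppresses the other completely.

```python
def relu(x):
    return x if x > 0.0 else 0.0

M = [[0.5, -1.0],
     [-1.0, 0.5]]                # self-excitation, cross-inhibition
h = [1.1, 1.0]                   # unit 0 receives slightly stronger input
v = [0.0, 0.0]
dt, tau = 0.1, 1.0
for _ in range(2000):            # tau dv/dt = -v + relu(h + M v)
    dv = [(-v[i] + relu(h[i] + sum(M[i][j] * v[j] for j in range(2)))) / tau
          for i in range(2)]
    v = [v[i] + dt * dv[i] for i in range(2)]

# The winner settles at v0 = h0 / (1 - 0.5) = 2.2 while the loser's
# drive is pushed below threshold, so its rate is clamped at zero.
```

In the purely linear version both units would remain active; the rectification is what turns graded competition into outright selection.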
Outline of the Lecture
Linear Recurrent Network Models
Recurrent-Matrix Properties: Eigenvalues, Eigenvectors
Synaptic-Matrix Eigenvector Properties Determine Responses of Linear Recurrent Networks
A full feedforward network has vector inputs and outputs connected by a weight matrix.
A recurrent network is a feedforward network with a recurrent synaptic weight matrix.
For a feedforward network:
\tau_r \frac{dv}{dt} = -v + F(W u)

or, component by component, for a = 1, \dots, N_a:

\tau_r \frac{dv_a}{dt} = -v_a + F\!\left(\sum_b W_{ab} u_b\right)
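At steady state this gives v = F(W u), the output rates as a pointwise nonlinearity applied to a weighted sum of the inputs. A minimal sketch, with an assumed sigmoidal F and made-up weights:

```python
import math

def F(x):
    # Assumed sigmoidal activation function, saturating between 0 and 1
    return 1.0 / (1.0 + math.exp(-x))

W = [[1.0, -1.0],       # illustrative synaptic weight matrix
     [0.5, 0.5]]
u = [2.0, 1.0]          # illustrative input rates

# Steady-state output: v_a = F(sum_b W_ab u_b)
v = [F(sum(W[a][b] * u[b] for b in range(2))) for a in range(2)]
```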
Outline of the Lecture
Feedforward Network Models
Feedforward Networks, Example: Reaching Dynamics
Feedforward Networks Are the Simplest Kind of Brain Circuits
The simplest neural-network model for brain computations is feedforward with one output