
# Chapter 12 Solutions to Review Questions


## 12.1 Show that the normalised version of Hebbian learning leads to Oja's rule

A single linear neuron with a Hebbian-type adaptation rule for its synaptic weights can evolve into a filter for the first principal component of the input distribution. The model is linear in the sense that its output is a linear combination of its inputs: the neuron receives a set of $p$ input signals $x_0, x_1, \ldots, x_{p-1}$ through a corresponding set of $p$ synapses with weights $w_0, w_1, \ldots, w_{p-1}$, and its output is defined as

$$ y = \sum_{i=0}^{p-1} w_i x_i. $$

In accordance with Hebb's postulate of learning we have

$$ w_i(n+1) = w_i(n) + \eta\, y(n)\, x_i(n), \qquad i = 0, 1, \ldots, p-1. $$

The normalised version of this rule divides the updated weight vector by its Euclidean norm:

$$ w_i(n+1) = \frac{w_i(n) + \eta\, y(n)\, x_i(n)}{\left\{ \sum_{j=0}^{p-1} \left[ w_j(n) + \eta\, y(n)\, x_j(n) \right]^2 \right\}^{1/2}}. $$

Expanding the sum in the denominator:

$$ \sum_{j=0}^{p-1} \left[ w_j(n) + \eta\, y(n)\, x_j(n) \right]^2 = \sum_{j} w_j^2(n) + 2\eta\, y(n) \sum_{j} w_j(n)\, x_j(n) + \eta^2 y^2(n) \sum_{j} x_j^2(n). $$

Since $\sum_j w_j(n)\, x_j(n) = y(n)$, the middle term equals $2\eta\, y^2(n)$. If the weight vector is already normalised, $\sum_j w_j^2(n) = 1$, so to first order in $\eta$ the denominator becomes

$$ \left[ 1 + 2\eta\, y^2(n) + O(\eta^2) \right]^{1/2} \approx 1 + \eta\, y^2(n). $$

Hence, again retaining only first-order terms in $\eta$,

$$ w_i(n+1) \approx \left[ w_i(n) + \eta\, y(n)\, x_i(n) \right]\left[ 1 - \eta\, y^2(n) \right] \approx w_i(n) + \eta\, y(n)\left[ x_i(n) - y(n)\, w_i(n) \right]. $$

Writing $x_i'(n) = x_i(n) - y(n)\, w_i(n)$ for the effective input, the update takes the Hebbian form

$$ w_i(n+1) = w_i(n) + \eta\, y(n)\, x_i'(n), $$

which is **Oja's rule**.

## 12.2 First limit theorem

The averaged weight dynamics are

$$ \dot{W} = \alpha\, X\, E[s(W)] - \gamma\, W, $$

with $E[s(W)] = X^{T} W$, so that

$$ \frac{d}{dt}\left\{ \tfrac{1}{2}\, W^{T} W \right\} = W^{T} \dot{W} = \alpha\, (X^{T} W)^2 - \gamma\, \|W\|^2. $$

Let $\theta$ be the angle between the weight vector $W$ and the input vector $X$, so that $\cos\theta = \dfrac{X^{T} W}{\|X\|\, \|W\|}$.
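The result of 12.1 can be sanity-checked numerically: running Oja's rule on samples from a correlated Gaussian should drive the weight vector toward the unit eigenvector of the input covariance with the largest eigenvalue, while keeping its norm near 1. A minimal sketch (the dimension, covariance, and learning rate below are illustrative assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean 2-D inputs with an anisotropic covariance (illustrative choice).
cov = np.array([[3.0, 1.0],
                [1.0, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=20000)

eta = 0.01                       # learning rate
w = rng.normal(size=2)
w /= np.linalg.norm(w)           # start from a unit-norm weight vector

for x in X:
    y = w @ x                    # linear neuron output y = w^T x
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term with x' = x - y w

# Compare against the first principal component of the sample covariance.
evals, evecs = np.linalg.eigh(np.cov(X.T))
pc1 = evecs[:, np.argmax(evals)]

print(np.linalg.norm(w))         # stays close to 1 (the rule is self-normalising)
print(abs(w @ pc1))              # close to 1: w aligned with PC1 (up to sign)
```

Note that no explicit renormalisation step appears in the loop; the $-\eta\, y^2 w$ term does that work implicitly, which is exactly what the first-order expansion above shows.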
Differentiating $\cos\theta$ with respect to time,

$$ E\!\left[\frac{d}{dt}\cos\theta\right] = E\!\left[\frac{d}{dt}\frac{X^{T} W}{\|X\|\,\|W\|}\right] = E\!\left[\frac{X^{T} \dot{W}}{\|X\|\,\|W\|} - \frac{(X^{T} W)\,(W^{T} \dot{W})}{\|X\|\,\|W\|^3}\right]. $$

Substituting $\dot{W} = \alpha\, X (X^{T} W) - \gamma\, W$ gives $X^{T}\dot{W} = \alpha\, \|X\|^2 (X^{T}W) - \gamma\, (X^{T}W)$ and $W^{T}\dot{W} = \alpha\, (X^{T}W)^2 - \gamma\, \|W\|^2$. The $\gamma$ terms cancel, since

$$ -\frac{\gamma\, (X^{T}W)}{\|X\|\,\|W\|} + \frac{\gamma\, (X^{T}W)\,\|W\|^2}{\|X\|\,\|W\|^3} = 0, $$

leaving

$$ E\!\left[\frac{d}{dt}\cos\theta\right] = \frac{\alpha\, (X^{T} W)\left[\, \|X\|^2 \|W\|^2 - (X^{T} W)^2 \,\right]}{\|X\|\,\|W\|^3}. $$

With $\alpha > 0$, $X \neq 0$, $W \neq 0$, the Cauchy–Schwarz inequality gives $\|X\|^2 \|W\|^2 - (X^{T} W)^2 \ge 0$, and hence (for $X^{T} W > 0$)

$$ E\!\left[\frac{d}{dt}\cos\theta\right] \ge 0. $$

Thus $\cos\theta$ is a non-decreasing function of time, so $\theta \to 0$: the weight vector $W$ asymptotically aligns with the direction of $X$.
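The monotone growth of $\cos\theta$ can be observed directly by integrating the averaged dynamics $\dot{W} = \alpha X (X^{T}W) - \gamma W$ with a forward-Euler step. A small sketch, where $X$, $W(0)$, $\alpha$, $\gamma$, and the step size are illustrative values chosen so that $X^{T}W(0) > 0$:

```python
import numpy as np

# Fixed input direction and averaged dynamics  dW/dt = alpha X (X^T W) - gamma W.
# All numerical values here are illustrative assumptions, not from the text.
X = np.array([2.0, 1.0, 0.5])
W = np.array([1.0, 0.0, 1.0])    # initial weights with X^T W > 0
alpha, gamma = 1.0, 0.5          # alpha > 0; the gamma term cancels in d(cos)/dt
dt, steps = 1e-3, 5000

cos_hist = []
for _ in range(steps):
    W = W + dt * (alpha * X * (X @ W) - gamma * W)   # forward-Euler step
    cos_hist.append((X @ W) / (np.linalg.norm(X) * np.linalg.norm(W)))

cos_hist = np.array(cos_hist)
print(cos_hist[0], cos_hist[-1])  # cos(theta) rises monotonically toward 1
```

The component of $W$ along $X$ grows at rate $\alpha\|X\|^2 - \gamma$ while the orthogonal component decays at rate $\gamma$, so the trajectory of $\cos\theta$ never decreases, matching the Cauchy–Schwarz argument above.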
