# L1.4 - Perception (Jul-15) - Interactive Activation


In the full network, there are feature and letter layers for each of four letter positions; the letter layer at each position connects to a single shared word layer. Treat the network shown here as the one for the third position only.
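The layered structure just described can be sketched as plain data. This is a minimal illustration only: the feature names, letter set, and word list below are placeholders, not the model's full inventory.

```python
# A toy sketch of the interactive activation network's layers.
# Feature names, letters, and words are illustrative placeholders.
N_POSITIONS = 4

network = {
    # one feature layer and one letter layer per letter position
    "feature_layers": [
        {"upper_horizontal": 0.0, "diagonal": 0.0, "vertical": 0.0}
        for _ in range(N_POSITIONS)
    ],
    "letter_layers": [
        {letter: 0.0 for letter in "RKDW"} for _ in range(N_POSITIONS)
    ],
    # a single word layer shared by every position's letter layer
    "word_layer": {"WORD": 0.0, "WORK": 0.0, "WEAK": 0.0},
}
```

Because every position's letter layer points at the same `word_layer` dictionary, evidence from all four positions accumulates on the same word nodes.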

The word "WORD" is presented, which in the third position activates the features for the letter "R".
Here are the activations for the upper horizontal bar feature. (Note that activations spread for all features simultaneously, but are shown one at a time here.)

Now we add the activations for the diagonal.
Now we add the remaining two features.

Once the activations have spread, the net effect on each of the letters is calculated.
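That net-effect calculation can be sketched as a sum of excitation and inhibition: each active feature excites the letters that contain it and inhibits the letters that do not. The weight values (+1/-1) and the letter-to-feature table below are hypothetical, chosen only to illustrate the bookkeeping.

```python
# Hypothetical letter inventories: which visual features each letter
# contains. Illustrative only, not the model's actual feature set.
LETTER_FEATURES = {
    "R": {"upper_horizontal", "vertical", "diagonal", "loop"},
    "P": {"upper_horizontal", "vertical", "loop"},
    "K": {"vertical", "diagonal"},
    "B": {"upper_horizontal", "vertical", "loop", "lower_horizontal"},
}

def net_effect(active_features, letter_features=LETTER_FEATURES):
    """Sum the excitation (+1) or inhibition (-1) each active feature
    sends to each letter: +1 if the letter contains the feature,
    -1 if it does not."""
    return {
        letter: sum(1 if f in feats else -1 for f in active_features)
        for letter, feats in letter_features.items()
    }

# Present the features of "R": "R" gets uniform support, while letters
# that lack some of the active features get mixed or zero net input.
effects = net_effect({"upper_horizontal", "vertical", "diagonal", "loop"})
```

Here `effects["R"]` comes out highest because every active feature matches, while a letter like "K" collects as much inhibition as excitation.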


Red means low activation, white means medium, and green means high.

Not shown is where activation spreads from the letters in each of the four positions up to the word layer. Suppose the net effect of that process is that the "WORD" node is activated and the other word nodes are neutral, as shown here.

Activation then spreads back down from the word layer to the letter layer. Calculating the impact of those activations, there is now even more overwhelming support for "R", and strong evidence against the other letters.

Note that activation would spread back up to the word layer, back down to the letter layer, and back and forth until all the values settle into a steady state.
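This back-and-forth settling can be sketched as a simple relaxation loop: bottom-up feature evidence is held fixed while letters and words repeatedly excite consistent units and inhibit inconsistent ones until the values stop changing. The update rule, rates, and the tiny letter/word sets are simplified placeholders, not the published model's parameters.

```python
def settle(bottom_up, letter_in_word, rate=0.2, iters=200):
    """Toy interactive-activation settling for one letter position.
    bottom_up: fixed feature evidence for each candidate letter.
    letter_in_word: which letter each word has in this position."""
    letters = {l: 0.0 for l in bottom_up}
    words = {w: 0.0 for w in letter_in_word}
    clamp = lambda x: max(-1.0, min(1.0, x))
    for _ in range(iters):
        # letters -> words: a word is excited by its own letter in this
        # position and inhibited by the competing letters
        new_words = {
            w: clamp(words[w] + rate * sum(
                letters[l] if l == letter_in_word[w] else -letters[l]
                for l in letters))
            for w in words
        }
        # words -> letters, added on top of the fixed feature evidence
        new_letters = {
            l: clamp(letters[l] + rate * (bottom_up[l] + sum(
                words[w] if letter_in_word[w] == l else -words[w]
                for w in words)))
            for l in letters
        }
        letters, words = new_letters, new_words
    return letters, words

# Feature evidence favors "R" in the third position; top-down feedback
# from "WORD"/"WORK" then reinforces it even further.
letters, words = settle(
    bottom_up={"R": 1.0, "A": -0.5, "O": -0.5},
    letter_in_word={"WORD": "R", "WORK": "R", "WEAK": "A"},
)
```

After settling, "R" and the words containing it saturate near their maximum activations, while the inconsistent letters and words are driven down, which is the steady state the notes describe.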