Letter-Position Encoding

The Problem
- How is letter position encoded along the lexical route during visual word recognition? That is, given the ability to recognize letters, how is a location-invariant representation of letter order computed?
- This is an important question because developmental dyslexics do not visually process letter strings the way normal readers do. To understand what goes wrong in dyslexia, we need to understand normal orthographic processing in detail; then perhaps we can devise more efficient remediation strategies for developmental dyslexia.

Assumptions
- Input: a retinotopic representation, split across the hemispheres.
- Output: a lexical representation (the orthographic word form, OWF).

[Diagram: the retinal input F O R M in V1 (LVF/RH + RVF/LH) is transformed through unknown intermediate stages ("???") into the OWF "FORM" in the left anterior fusiform.]

Approach
- Work backwards: What kind of orthographic representation activates the lexical level? How is that representation activated? And so on, until reaching V1.

How is the OWF Activated?
- Relevant data come from masked priming in lexical-decision experiments (Jonathan Grainger & colleagues).
- Most informative experiment: seven-letter targets, e.g., BLANKET. Primes (digits denote the target positions of the prime's letters):
    1357 : BAKT
    1537 : BKAT
    dddd : GNPR (unrelated control)
- Results: 1357 primes give faster responses than dddd; 1537 primes are no faster than dddd.

Implications
- The prelexical representation encodes the relative order of letters.

Solution: Open-Bigrams
- Neurons that encode a pair of letters occurring in a particular order.
- The activation of an open-bigram encodes the separation of its letters:
    no intervening letters: 1.0
    1 intervening letter:   0.7
    2 intervening letters:  0.5
    more than 2:            0.0
- E.g., the stimulus FORM activates the open-bigrams FO, OR, RM (1.0); FR, OM (0.7); FM (0.5).
- The weight from each open-bigram unit to each word unit equals the open-bigram's activation for that word.
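The separation-to-activation scheme above can be sketched in a few lines of code. This is a minimal illustration; the handling of a repeated bigram (keep its strongest activation) is my assumption, since the notes do not cover repeated letters:

```python
# Open-bigram activations, using the separation weights from the notes:
# adjacent letters -> 1.0, one intervening letter -> 0.7, two -> 0.5, more -> 0.0.
SEPARATION_WEIGHTS = {0: 1.0, 1: 0.7, 2: 0.5}

def open_bigrams(word):
    """Map each ordered letter pair in `word` to its open-bigram activation."""
    acts = {}
    for i in range(len(word)):
        for j in range(i + 1, len(word)):
            sep = j - i - 1  # number of intervening letters
            w = SEPARATION_WEIGHTS.get(sep, 0.0)
            if w > 0.0:
                # If a bigram recurs (repeated letter pair), keep the strongest
                # activation; this tie-breaking rule is an assumption.
                bg = word[i] + word[j]
                acts[bg] = max(acts.get(bg, 0.0), w)
    return acts

print(open_bigrams("FORM"))
# FO, OR, RM at 1.0; FR, OM at 0.7; FM at 0.5 (as in the notes)
```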
Open-Bigrams: Input to BLANKET
- Prime BAKT has bigram activations BA, AK, KT = 1.0; BK, AT = 0.7; BT = 0.5. The bigrams shared with BLANKET (BA, AK, KT) each have connection weight 0.7.
    Input to BLANKET = 3 * 1.0 * 0.7 = 2.10
- Prime BKAT has bigram activations BK, KA, AT = 1.0; BA, KT = 0.7; BT = 0.5. The bigrams shared with BLANKET (BA, KT) each have connection weight 0.7.
    Input to BLANKET = 2 * 0.7 * 0.7 = 0.98
- Input to target 1234567: from prime 1357 = 2.10; from prime 1537 = 0.98 (47%).
- Input to target 12345: from prime 12345 = 5.97; from prime 13245 = 4.75 (80%).

Open-Bigrams: Assessment
- Explains the priming pattern.
- Explains why HROSE is perceived as similar to HORSE.
- Becoming generally accepted (Grainger, Dehaene, and others).
- Recall the evidence for IT neurons that encode the relative position of pairs of features within an object. Open-bigrams are assumed to be an adaptation of this type of representation to letter strings.

How are Open-Bigrams Activated?
- Presumably by single-letter representations. Are those representations retinotopic? Position-specific? Abstract (location- and position-independent)?
- Evidence comes from priming in alphabetic-decision experiments (Peressotti & Grainger, 1995).
    Stimulus: a three-character string, e.g., GBH or JP%.
    Task: Is the string comprised only of letters?
    Primes (digits denote the order of the target's letters): 213, 321, 132, 312, ddd.
    Results (RTs): 213, 321, 132, and 312 are all faster than ddd.
- Interpretation: abstract letter units.

Abstract Letter Units
- There must be a way to dynamically bind position information to abstract letter units. Two possibilities:
    I) an activation gradient: all letters fire simultaneously, with activation decreasing across positions;
    II) serial encoding: letters fire one after another (~10 ms/letter).

[Diagram: for the string FROM, (I) shows simultaneous letter activations decreasing from F to M; (II) shows F, R, O, M firing successively over time.]

How to Decide? Look at Perceptual Patterns
- A letter string is briefly presented (< 200 ms). The task is to identify the letters, or to say whether a specified target letter is in the string.
- Look at accuracy across string positions: if there is an activation gradient, accuracy should decrease monotonically across the string.

Perceptual Patterns
- Typical pattern for exposures < 100 ms: accuracy decreases across positions 1-5. This supports an activation gradient.
- Typical pattern for exposures > 100 ms: accuracy decreases across positions 1-4 but rises again at the final letter. This is inconsistent with an activation gradient.
- What to make of this?
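The BAKT/BKAT input computations above can be reproduced with a short sketch. The bigram generator and the rule "input = sum, over shared bigrams, of prime activation times target weight" follow the notes; the function names are mine:

```python
# Separation weights from the notes: adjacent -> 1.0, one apart -> 0.7,
# two apart -> 0.5, more -> 0.0.
SEPARATION_WEIGHTS = {0: 1.0, 1: 0.7, 2: 0.5}

def open_bigrams(word):
    """Open-bigram activations for `word`."""
    acts = {}
    for i in range(len(word)):
        for j in range(i + 1, len(word)):
            w = SEPARATION_WEIGHTS.get(j - i - 1, 0.0)
            if w > 0.0:
                bg = word[i] + word[j]
                acts[bg] = max(acts.get(bg, 0.0), w)
    return acts

def input_to_word(prime, target):
    """Sum, over bigrams shared by prime and target, of
    (prime bigram activation) * (bigram-to-word weight), where the
    weight equals the target's own bigram activation."""
    p, t = open_bigrams(prime), open_bigrams(target)
    return sum(act * t[bg] for bg, act in p.items() if bg in t)

print(round(input_to_word("BAKT", "BLANKET"), 2))  # 2.1
print(round(input_to_word("BKAT", "BLANKET"), 2))  # 0.98
# Abstract 5-letter case from the notes (target 12345, transposed prime 13245):
print(round(input_to_word("ABCDE", "ABCDE"), 2))   # 5.97
print(round(input_to_word("ACBDE", "ABCDE"), 2))   # 4.75
```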
Interpretation: Serial Encoding
[Diagram: letter firings over time. At a 75 ms exposure, letter 1 fires most often and each later letter fires progressively less (1 1 1 1 1, 2 2 2, 3 3 3, 4 4, 5). At a 150 ms exposure, letters 1-4 again fire in decreasing amounts, but letter 5, still active at stimulus offset, fires many times (1 1 1 1 1 1, 2 2 2 2, 3 3 3 3, 4 4, 5 5 5 5 5), producing the elevated final-letter accuracy.]

Letters to Open-Bigrams
- Serial letter encoding: the open-bigram XY is activated when letter X fires before letter Y.
- The activation of XY decreases as the time between X's and Y's firings increases.

How NOT to Test for Seriality
- Look for a length effect on RTs in lexical decision: if none, conclude parallel; if present, conclude serial.
- Why not? This rests on the assumption that the time it takes the lexical network to settle after the final letter fires is independent of length. E.g., what if the lexical network settles faster after the E in CANDLE than after the N in CAN? If longer words have shorter settling times, there could be NO length effect despite a serial encoding.

How to Test for Seriality
- Temporally manipulate the stimulus: does time have a different effect across string positions?
- For example, present a string for 60 ms, then replace it with another string for 30 ms, and have subjects report the letters they perceived.
- Under serial processing, subjects should report the initial letters of the first string and the final letters of the second string. This has indeed been observed (Harcum & Nice, 1975).

Serial Letter Encoding
- Very controversial; most people deny it, for three reasons:
    1) The lack of a length effect for central presentation.
       But a length effect IS observed in some situations (e.g., LVF presentation). Serial encoding explains both the presence and the absence of the length effect with a single mechanism; otherwise, both serial and parallel mechanisms are required. We have experimentally shown how to create or abolish the length effect (Whitney & Lavidor, 2004).
    2) The intuition that visual processing is parallel. But intuition is unreliable.
    3) The influence of the Interactive-Activation model. This is irrelevant to the question.

How is Serial Encoding Induced?
- We won't go into the details; the account is based on a general oscillatory mechanism proposed by others (Hopfield, 1995; Lisman & Idiart, 1995).
- Basic ideas:
    The representation at the next lower level (the feature layer) is retinotopic.
    There is an activation gradient across the feature layer.
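The letters-to-open-bigrams step described above (XY fires when X precedes Y, more weakly as the lag grows) can be sketched as follows. The ~10 ms letter period comes from the notes; the lag-to-activation table is my assumption, chosen so that the temporal scheme reproduces the spatial separation weights:

```python
# Serial encoding sketch: each letter fires ~10 ms after the previous one.
LETTER_PERIOD_MS = 10.0

# Assumed decay of bigram activation with firing lag; values chosen to match
# the separation weights (1.0 / 0.7 / 0.5 / 0.0) from the notes.
LAG_TO_ACTIVATION = {10.0: 1.0, 20.0: 0.7, 30.0: 0.5}

def bigrams_from_firing(word):
    """Open-bigram activations derived from serial letter firing times."""
    fire_time = {i: (i + 1) * LETTER_PERIOD_MS for i in range(len(word))}
    acts = {}
    for i in range(len(word)):
        for j in range(len(word)):
            lag = fire_time[j] - fire_time[i]  # positive only if i fires first
            w = LAG_TO_ACTIVATION.get(lag, 0.0)
            if w > 0.0:
                acts[word[i] + word[j]] = w
    return acts

# The temporal scheme yields the same activations as the spatial separation rule:
print(bigrams_from_firing("FORM"))
# FO, OR, RM at 1.0; FR, OM at 0.7; FM at 0.5
```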
- The activation level arriving from the feature layer determines the timing of firing in the letter layer.

[Diagram: model architecture for the stimulus BIRD. Bilateral V4 feature units (sub-letter or whole-letter) carry a locational gradient over retinal location (LVF/RH + RVF/LH); letter units B, I, R, D in the left posterior fusiform fire serially over time; open-bigram units BI, IR, RD, BR, ID, BD reside in the left mid-fusiform.]

- Space (the retinotopic feature representation) is thereby mapped onto time (the serial letter representation), yielding location invariance.

What is the Activation Pattern in V1/V2?
- Cortical magnification in retinotopic areas: for a given amount of visual space, the number of neurons representing that space decreases as distance from the center of the fovea increases. The reduction is noticeable even 0.25° from the center of the fovea.
- Hence activation (the total amount of neural activity) decreases as eccentricity increases.

[Diagram: proposed activation patterns for BIRD. In bilateral V1/V2, edge-level activation across location models visual acuity (peaked at fixation). Learned, string-specific processing converts this into the locational gradient over the bilateral V4 feature units.]

Edge-to-Feature Layers
[Diagram: for the stimulus CASTLE, retinotopic edge units (LVF/RH + RVF/LH) project to feature units for C, A, S, T, L, E.]

Creation of the Locational Gradient
- RH: stronger bottom-up excitation brings the first letter to a high activation level.
- RH: left-to-right inhibition inverts the acuity gradient.
- RH-to-LH inhibition "joins" the two hemispheric gradients.

How might formation of the locational gradient be learned?
- Initially, top-down attention imposes the locational gradient (e.g., on a stimulus like "The cat").
- The visual system then learns to form the gradient bottom-up, via the proposed edge-to-feature processing in the model; the attention gradient is then no longer required.

What Goes Wrong in Dyslexia?
- The attention gradient requires covert allocation of attention (a focus of attention away from the fixation point).
- Dyslexics are known to have difficulty with covert attention, especially in the LVF.
- Proposal: dyslexics cannot form the attention gradient, leading to abnormal orthographic analysis, and hence to abnormal phonological and lexical processing.

Intriguing Data
- Results from Dubois et al. (2007).
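The "Creation of the Locational Gradient" steps can be illustrated with a toy computation for a single hemifield: a first-letter boost (the stronger bottom-up excitation) plus inhibition proportional to activation accumulated to the left inverts an acuity-shaped profile into a decreasing gradient. All numbers (the profile, boost, and inhibition strength) are illustrative assumptions, not parameters from the model:

```python
# Toy sketch of locational-gradient formation; all numbers are illustrative.
FIRST_LETTER_BOOST = 0.5   # assumed extra bottom-up excitation (RH)
INHIBITION = 0.2           # assumed left-to-right inhibition strength (RH)

def locational_gradient(acuity_profile):
    """Invert an acuity-shaped activation profile (peaked near fixation)
    into a left-to-right decreasing locational gradient."""
    out = []
    left_total = 0.0
    for pos, a in enumerate(acuity_profile):
        if pos == 0:
            a += FIRST_LETTER_BOOST  # bring the first letter to a high level
        g = max(a - INHIBITION * left_total, 0.0)
        out.append(g)
        left_total += g  # inhibition driven by activation already to the left
    return out

# Acuity-like profile for a centrally fixated 5-letter string:
acuity = [0.6, 0.9, 1.0, 0.9, 0.6]
grad = locational_gradient(acuity)
print([round(g, 3) for g in grad])
# Strictly decreasing left to right, as required for serial readout:
assert all(grad[k] > grad[k + 1] for k in range(len(grad) - 1))
```

This covers only one hemifield; the RH-to-LH inhibition that joins the two hemispheric gradients is omitted from the sketch.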
- Subjects: 7th-grade French students; one dyslexic (MT) and seven controls.

Position x VF Interaction in Trigram Identification (Theory)
[Plot: predicted bottom-up excitation for trigram positions 1, 2, and 3 as a function of retinal location, from -4 (LVF/RH) to +4 (RVF/LH), in letter widths.]

Trigram Data (Controls)
[Plot: accuracy (50-100%) for trigram positions 1 and 3 across retinal locations -3 to +3.]
- All controls show a strong effect of string position in the LVF (an effect of at least 20 percentage points at each location).

Trigram Data (MT)
[Plot: accuracy (50-100%) for trigram positions 1 and 3 across retinal locations -3 to +3, for the dyslexic subject MT.]
- More research is required!

For the Test
- Three questions, one each on the nature of:
    the open-bigram representation;
    the letter-layer representation;
    the feature-to-letter transformation.
- Here "nature of" means: a specification of the representation, OR its relationship to visual object processing, OR the computational effect of the proposed processing.
This note was uploaded on 07/29/2008 for the course NEUROSCIEN 70 taught by Professor Whitney during the Spring '08 term at Johns Hopkins.