
THE PSYCHOLOGY OF LANGUAGE

Finally, the extreme view of cognitive linguistics (Langacker, 1987) dispenses with formal linguistic rules altogether, focusing instead upon the communicative conventions of particular languages.

CHAPTER SIX

Semantics

INTRODUCTION

How do we represent the meaning of words? How is our knowledge of the world organised? These are issues concerned with the study of meaning, or semantics. In the previous chapter we saw how the sentence processing mechanism parses sentences to construct a representation of the syntactic relationships between words. Important as this stage might be, it is only an intermediate step towards the real goal of comprehension: constructing a representation of the meaning of the sentence. In this chapter we shall look at how the meanings of individual words are represented, and in the next at how we combine these meanings to form a representation of the meaning of the whole sentence and beyond.

Our discussion of non-semantic reading in Chapter 4 showed that words and their meanings can be dissociated. There is further intuitive evidence to support this dissociation (Hirsh-Pasek, Reeves, & Golinkoff, 1993). First, we can translate words from one language to another. Furthermore, not every word meaning is represented by a simple, single word in every language (see Chapter 10 for further discussion of this). Second, there is an imperfect mapping between words and their meanings, such that some words have more than one meaning (ambiguity) while some words have the same meaning (synonymy). Third, the meaning of words to some extent depends upon the context: the word "big" means different things in the phrases "the big ant" and "the big rocket".

Tulving (1972) drew what is now regarded as a fundamental distinction between episodic and semantic memory.
Episodic memory is our memory for events and particular episodes; semantic memory is, in simple terms, our general knowledge. Hence my knowledge that the capital of France is Paris is stored in semantic memory; my memory of learning as a child in a geography lesson at school when the Eiffel Tower was built is an instance of an episodic memory. Semantic memory develops from or is abstracted from episodes which may be repeated many times. I cannot now recall when I learnt the name of the capital of France, but clearly I must have been exposed to it at least once.

We have already seen that our mental dictionary has been given the name lexicon, and similarly our store of semantic knowledge can be called our mental encyclopaedia. Clearly there is a close relationship between the two, both in development and in the developed system, but they must also be separable for the reasons given above. Neuropsychology reveals important dissociations in this respect. We have seen that words and their meanings can be dissociated; but we must be wary of confusing a loss of semantic information with the inability to access or use that information. This problem is particularly important when we consider semantic neuropsychological deficits.

The notion of meaning is closely bound to that of categorisation. Concepts are very closely related to meaning. A concept refers to a mental representation that determines how things are related or categorised. It enables us to group things together, so that instances of a category all have something in common. Thus concepts somehow specify category membership. All words have an underlying concept, but not all concepts are labelled by a word. We have a word "dog" which we can use about certain things in the world, but not others. There are two fundamental questions about this. We can say that the philosophical question is: how does the concept of "dog" relate to the members of the category dog?
The psychological question is: how is the meaning of "dog" represented, and how do we pick out instances of dogs in the environment? As just hinted, we could in principle have a word, say "brog", to refer to brown dogs. That we do not have such a term is probably because this is not a particularly useful concept in this domain.

Rosch (1978) pointed out that categorisation is not arbitrary, but determined by two important features of our cognitive system. First, the categories we form are determined in part by the way in which we perceive the structure of the world. Perceptual features are tied together because they form objects and have a shared function. How the categories we form are determined by biological factors is an important topic, about which little is known. We shall return to this with the example of colour in our discussion of the relationship between language and thought in Chapter 10.

Second, the structure of categories might be determined by a principle known as cognitive economy. This states that semantic memory is organised so as to avoid excessive duplication. Of course we cannot be too economical, as we often need to make distinctions between members of some categories more than others. Rosch proposed that this compromise resulted in a basic level of categorisation, which tends to be the default level at which we categorise and think unless there is particular reason to do otherwise. Unless given reason to do otherwise, we use the basic level of "chairs", rather than the lower level of "armchairs" or the higher level of "furniture".

It should be obvious that the study of meaning therefore necessitates capturing the way in which words refer to things that are all members of the same category and have something in common, yet are different from non-members.
(Of course something can belong to two categories at once: we can have a category labelled by the word "ghost", and another by the word "invisible", and indeed we can join the two to form the category of invisible ghosts labelled by the words "invisible ghosts".)

There are two issues here. First, what distinguishes items of one category from items of another? Second, how are hierarchical relationships between categories to be captured? There are category relationships between words. For example, the basic level category "dog" has a large number of superordinate category levels above it (such as "mammal", "animal", "animate thing", and "object") and subordinates (such as "terrier", "rottweiler", and "alsatian"; these are said to be category co-ordinates of each other). Hierarchical relationships between categories are one clear way in which words can be related in meaning, but there are other ways that are equally important. Some words refer to properties of things referred to by other words (e.g. "dog" and "paw"). Some words (antonyms) are opposites in meaning (e.g. "hot" and "cold"). We can attempt to define many words: for example, we might offer the definition "unmarried man" for "bachelor". Another fundamental issue for semantics concerns how we should capture all these relationships.

It should by now be clear that semantics is in many ways the interface between language and the rest of perception and cognition. This relationship is made explicit in the work of Jackendoff (1983), who proposed a theory of the connection between semantics and other cognitive, perceptual, and motor processes. From these considerations, he proposed two constraints on a general theory of semantics. The grammatical constraint says that we should prefer a semantic theory that explains otherwise arbitrary generalisations about syntax and the lexicon.
Both some AI theories and theories based on logic (in particular a form of logic known as predicate calculus) fail this constraint: to work at all, they both have to make up entities that do not correspond to anything real. The cognitive constraint says that there is a level of representation where semantics must interface with other psychological representations, such as those derived from perception.

In this chapter we will focus upon two main topics. First, how do we represent the meaning of words? In particular, how does a theory of meaning deal with the issues we have just raised? Second, what does the neuropsychology of meaning tell us about its representation and its relationship with the encyclopaedia? We will consider the development of meaning, and how it is related to exposure to specific episodes, in Chapter 12.

CLASSICAL APPROACHES

It is useful to distinguish immediately between a word's denotation and its connotation. The denotation of a word is its core, essential meaning. The connotations of a word are all of its secondary implications, or emotional or evaluative associations. For example, the denotation of the word "dog" is its core meaning: it is the relation between the word and the class of objects to which it can refer. The connotations of "dog" might be "nice", "frightening", or "smelly". Put another way, everyone agrees on the denotation, but the connotations differ from person to person. Here we are primarily concerned with denotation, although the distinction can become quite hazy.

Ask a person on the street what the meaning of "dog" is, and they might well point to one. This theory of meaning, that words mean what they refer to, is one of the oldest, and is called the referential theory. There are two major problems with this lay theory, however. First, it is not at all clear how such a theory treats abstract concepts. How can you point to "truth", let alone point to the meaning of a word such as "whomsoever"?
Second, there is a dissociation between a word and the things to which it can refer. Consider the words "Hesperus" (Greek for "The Evening Star") and "Phosphorus" (Greek for "The Morning Star"). They have the same reference or extension in our universe, namely the planet Venus, but they have different senses or intensions. The ancients did not know that they were the same thing; so even though the words "Hesperus" and "Phosphorus" actually refer to the same object (the planet Venus), the words have different senses: "Hesperus" was used to refer to a planet in the evening sky, and "Phosphorus" was used to refer to a planet in the morning sky. This distinction was made explicit in the work of Frege (1892/1952), who distinguished between the sense (often called the intension) of a word and its reference (often called its extension). The intension is its abstract specification or meaning, which determines how a word is related in meaning to other words and specifies the properties an object must have to be a member of the class, while the extension is what it stands for in the world; that is, the objects picked out by that intension.

These notions can be extended from words or descriptive phrases to expressions or sentences. Frege took the extension of a sentence to be its truth value (which is simply whether it is true or not), and its intension to be the thought it expresses. Although this approach was originally developed for formal languages, such as those of mathematics and computing languages, its application has been extended to natural language. The importance of formal approaches to meaning for psycholinguistics is not clear. Although they help refine what meaning might be, they appear to say little about how we represent or compute it.

SEMANTIC NETWORKS

One of the most influential of all processing approaches to meaning is based on the idea that the meaning of a word is embedded within a network of other meanings. In a semantic network, knowledge is given meaning only by the way in which it relates to other knowledge.
Some of the earliest theories of meaning, from Aristotle to the behaviourists, viewed meaning as deriving from a word's associations. From infancy, we are exposed to many episodes involving the word "dog". For the behaviourists, the meaning of the word "dog" was simply the sum of all our associations to the word: it obtains its meaning by its place in a network of associations. The meaning of "dog" might involve an association with "barks", "four legs", "furry", and so on. It soon became apparent that association in itself was insufficiently powerful to be able to capture all aspects of meaning. There is no structure in an associative network: no relationship between words, no hierarchy of information, and no cognitive economy. In a semantic network, this additional power is obtained by making the connections between items do something; they are not merely associations representing contiguity of frequent co-occurrence, but themselves have a semantic value. That is, in a semantic network the links have meaning.

The Collins and Quillian semantic network model

Perhaps the best known example of a semantic network is that of Collins and Quillian (1969). The idea arose from an attempt to develop a teachable language comprehender to assist machine translation between languages. A semantic network is particularly useful for representing information about natural kind terms. These are words that refer to naturally occurring categories and their members, such as types of animal or metal or precious stone. The scheme attributes fundamental importance to their inherently hierarchical nature: for example, a bald eagle is a type of eagle, an eagle is a type of bird of prey, a bird of prey is a type of bird, and a bird is a type of animal. This hierarchical format suggests a straightforward way to implement cognitive economy.
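The hierarchical ISA structure and the cognitive economy principle just described can be sketched as a toy program. This is an illustrative sketch only, not Collins and Quillian's actual system (their Teachable Language Comprehender): the particular nodes, attributes, and the `verify` helper are invented for the example, with the number of ISA links traversed standing in for the model's predicted reaction time.

```python
# Toy hierarchical semantic network in the spirit of Collins and Quillian (1969).
# Each node has an optional ISA parent; attributes obey cognitive economy,
# being stored once at the highest node for which they hold of everything
# below ("has wings" lives at "bird", not at "robin" or "eagle").

isa = {
    "robin": "bird",
    "eagle": "bird",
    "fish": "animal",
    "bird": "animal",
}

attributes = {
    "animal": {"breathes", "eats"},
    "bird": {"has wings", "lays eggs", "can fly"},
    "fish": {"swims"},
    "robin": {"has red breast"},
    "eagle": {"has large wingspan"},
}

def verify(subject, predicate):
    """Verify a simple statement by travelling up the ISA chain from the
    subject's node. Returns (found, levels_travelled); the level count
    mirrors the model's prediction that reaction time grows with the
    number of links that must be traversed."""
    node, level = subject, 0
    while node is not None:
        # A predicate may be a class name ("bird") or a stored attribute.
        if predicate == node or predicate in attributes.get(node, set()):
            return True, level
        node = isa.get(node)
        level += 1
    return False, level

print(verify("robin", "has red breast"))  # (True, 0): at the "robin" node itself
print(verify("robin", "has wings"))       # (True, 1): one level up, at "bird"
print(verify("robin", "breathes"))        # (True, 2): two levels up, at "animal"
print(verify("robin", "swims"))           # (False, 3): not found on the chain
```

Under this sketch, class-inclusion statements behave the same way: verifying "a robin is a bird" costs one link, "a robin is an animal" two, which is the linear pattern the sentence verification experiments discussed below were designed to test.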
If you store the information that birds have wings at the level of bird, you do not need to repeat it at the level of particular instances (e.g. eagles, bald eagles, and robins). An example of a fragment of such a network is shown in Fig. 6.1. In the network, nodes are connected by links which specify the relationship between the linked nodes; the most common link is an ISA link, which means that the lower level node "is a" type of the higher level node. Attributes are stored at the highest possible node at which they are true of all lower nodes in the network.

The sentence verification task. One of the most commonly used tasks in semantic memory research is that of sentence verification. Subjects are presented with simple "facts" and have to press one button if the sentence is true, another if it is false. The reaction time is an index of how difficult the decision was. Collins and Quillian (1969) presented subjects with sentences such as (1) to (4).

1. A robin is a robin.
2. A robin is a bird.
3. A robin is an animal.
4. A robin is a fish.

Sentence (4) is of course false. Sentence (1) is trivially true, but it obviously still takes subjects some time to respond "yes"; clearly they have to read the sentence and initiate a response, but it does provide a baseline measure. The response time to (1) is less than to (2), which in turn is less than that to (3). Furthermore, the difference between the reaction times is about the same; that is, there is a linear relationship.

[FIG. 6.1. A fragment of a hierarchical semantic network, with attributes such as "has wings", "lays eggs", "flies", "has red breast", "swims", "cannot fly", "farm animal", and "pink skin" stored at the appropriate nodes.]

How does the model account for these results? According to this model, subjects produce responses by starting off from the node in the network given by the subject of the sentence (here "robin") and travelling through the network until they find the information they need. Verification should be fastest when the attribute is stored at the "robin" node itself, slower when it is stored one level above "robin", and slowest when it is stored two levels above "robin". Sentences (5) to (7) test attributes stored at the "robin", "bird", and "animal" levels respectively.

5. A robin has a red breast.
6.
A robin has wings.
7. A robin has lungs.

These data from early sentence verification experiments therefore supported the Collins and Quillian model.

Problems with the Collins and Quillian model. A number of problems for this model soon emerged. First, clearly not all information is easily represented in hierarchical form. What is the hierarchical relationship between "truth", "justice", and "law", for example? A second problem is that the materials in the sentence verification task that appear to support the hierarchical model confound semantic distance with what is called conjoint frequency. This is exemplified by the example of the words "bird" and "robin"; these words appear together in the language far more (they are used in the same sentence, for example) than do "bird" and "animal". Conjoint frequency is a measure of how frequently two words co-occur. When you control for conjoint frequency, the linear relationship between semantic distance and verification time disappears (Conrad, 1972; Wilkins, 1971); in particular, hierarchical effects can no longer be found for verifying statements about attributes ("a canary has lungs"), although they persist for class inclusion ("a canary is an animal"). This suggests that the early results arose because the sentences used in the verification task contained words which are more closely associated. Another possible confound in the original sentence verification experiments is category size: the class of "animals" is by definition much larger than the class of "birds", so perhaps this is why it takes longer to search (Landauer & Freedman, 1968). However, these experiments did not properly control for category size and semantic distance (see Rips, Shoben, & Smith, 1973; Smith, Shoben, & Rips, 1974).

Third, the hierarchical model makes some incorrect predictions. We find that a sentence such as (8) is verified much faster than (9), even though "animal" is higher in the hierarchy than "mammal" (Rips et al., 1973).

8. A cow is an animal.
9. A cow is a mammal.

We do not reject all untrue statements equally slowly.
Sentence (10) is rejected faster than (11), even though both are equally false (Schaeffer & Wallace, 1969, 1970; Wilkins, 1971). This is a relatedness effect: the more related two things are, the harder it is to reject a false statement connecting them, even if they are not ultimately from the same class.

10. A pine is a chair.
11. A pine is a flower.

Neither are all true statements involving the same semantic distance responded to equally quickly. Sentence (12) is verified faster than (13), even though both involve only one semantic link (Rips et al., 1973), and a "robin" is judged to be a more typical bird than a "penguin" or an "ostrich" (Rosch, 1973). This is called the prototypicality effect.

12. A robin is a bird.
13. A penguin is a bird.

In summary, there are too many problematical findings from sentence verification experiments to accept the hierarchical network model in its original form. We shall see that of these troublesome findings, the prototypicality effect is particularly important.

Revisions to the semantic network model. Collins and Loftus (1975) proposed a revision of the model based upon the idea of spreading activation. The structure of the network became more complex, with the links between nodes varying in strength or distance (see Fig. 6.2). Hence "penguin" is more distant from "bird" than is "robin". The structure is no longer primarily hierarchical, although hierarchical relationships still form parts of the network. Access and priming in the network occur through a mechanism of spreading activation. The concepts of activation travelling along links of different strengths, and of many simple units connected together in complex ways, are of course important concepts in connectionist models.

SEMANTIC FEATURES

A different approach to semantic memory views the meaning of a word as encoded not by the position of a word in a network of meaning, but by its decomposition into smaller units of meaning called semantic features.
This works very well for some simple domains where there is a clear relationship between the terms: one such domain, much studied by anthropologists, is that of kinship terms. A simplified example is shown in Table 6.1. Here the meanings of the four words "mother", "father", "son", and "daughter" can be captured by combinations of the three features "human", "male" or "female", and "older" or "younger". We could provide a hierarchical arrangement of these features (e.g. human -> young and old; young -> male or female, and old -> male or female) but this would either be totally unprincipled (there is no reason ...

[FIG. 6.2. Example of a spreading activation semantic network. (Based on Collins & Loftus, 1975.)]
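The kinship analysis just described lends itself to a minimal sketch of feature decomposition. The `features` table and the two helper functions below are illustrative assumptions rather than anything from the chapter; they simply encode each of the four words as a bundle of the three features from Table 6.1.

```python
# Feature decomposition of four kinship terms, following the simplified
# scheme of Table 6.1: the meaning of each word is a bundle of features.

features = {
    "mother":   frozenset({"human", "female", "older"}),
    "father":   frozenset({"human", "male", "older"}),
    "daughter": frozenset({"human", "female", "younger"}),
    "son":      frozenset({"human", "male", "younger"}),
}

def word_for(*wanted):
    """Return the word whose feature bundle exactly matches the given
    features, or None if no word lexicalises that combination."""
    target = frozenset(wanted)
    matches = [word for word, bundle in features.items() if bundle == target]
    return matches[0] if matches else None

def shared(word_a, word_b):
    """Features two words have in common: a crude measure of similarity
    in meaning under a feature theory."""
    return features[word_a] & features[word_b]

print(word_for("human", "male", "older"))    # father
print(sorted(shared("mother", "daughter")))  # ['female', 'human']
print(word_for("human", "older"))            # None: no single word for "parent of
                                             # unspecified sex" in this toy lexicon
```

Note how the sketch also captures the earlier point that not every concept is labelled by a word: a feature combination with no matching bundle (like the invented "brog" for brown dogs) simply has no lexical entry.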