The structure of sounds and their sign language equivalents: Phonology
In order to have sentences, one must have words, and words – at least in spoken language – are pronounced as a series of sounds. What about the signs of sign language? Do they have a level of substructure like that of the spoken word? Since spoken and signed languages are produced and perceived by different physical systems – oral / aural and manual / visual – one might expect to find the least amount of similarity across the two modalities at this level of analysis. Yet here, too, there is much common ground. In 1960, William Stokoe published a monograph in which he demonstrated that the words of American Sign Language are not holistic gestures but rather are analyzable as a combination of three meaningless yet linguistically significant categories: handshapes, locations, and movements. That is, by changing some feature of any one of those three categories, themselves meaningless, one could change the meaning of the sign. For example, by changing only the configuration of the hand, the signs DECIDE and PERSON are distinguished. In these two signs, the locations and movements are the same; only the hand configuration is different. Similar pairs exist that are distinguished only by their locations or only by their movements. The example in figure 22.2 is analogous to the English pair pan, tan, in which the first sound of each word – p and t – is different. The sounds are themselves meaningless, but they are linguistically significant because they make a difference in meaning when put in a word. In the sign language pair DECIDE, PERSON, the hand configurations are likewise meaningless, yet they too make a difference in meaning.
The other formational elements – locations and movements – can, like hand configurations, independently make a difference in meaning, though they are themselves meaningless. This finding was of supreme importance. Ever since its discovery, it has no longer been possible to assume, as most people previously had, that signs are fundamentally different from spoken words – that they are simple iconic gestures with no substructure. Rather, Stokoe showed that ASL is characterized by a defining feature of language in general: duality of patterning. This duality holds between the meaningful level (consisting of morphemes, words, phrases, and sentences) and the meaningless level, which in spoken languages is the level of the sounds that make up the meaningful expressions. The meaningless elements of spoken language are linguistically significant (i.e., they independently make a difference in meaning); they obey constraints on their combination within morphemes and words; and they may be systematically altered in different contexts. This is the domain of phonology. The lists of handshapes, locations, and movements are the formational elements of sign language phonology, comparable to the lists of consonants and vowels in spoken language. We will now show that sign language phonology is also characterized by constraints on the combination of these elements, and by systematic changes in “pronunciation” according to context.
All languages have constraints on the cooccurrence of sounds in syllables and words. For example, English does not allow the sequences *sr or *chl at the beginning of a syllable or word (although other languages do permit such combinations). Sign languages, too, have constraints on the combination of elements at this same level of structure. For example, only one group of fingers may characterize the handshape within any sign. While either the finger group 5 (all fingers) or the group V (index plus middle finger) may occur in a sign, a sequence of the two shapes, *5-V, is prohibited in the native signs of ASL and other sign languages. Similarly, all languages have assimilation processes, in which sounds borrow some or all features of neighboring sounds. For example, in the English compound words greenback and beanbag, the sound [n] often borrows (assimilates) the place of articulation “lips” from the [b] that follows it: gree[m]back, bea[m]bag. In many common ASL compounds, part of the hand configuration may similarly assimilate from one member of the compound to the other. The example here (figure 22.3) is the compound meaning BELIEVE, made from the two words THINK and MARRY. Just as the [n] borrows one of the features of [b] (the “lips” feature) in the English example above, in the ASL compound the hand configuration of THINK borrows a feature – orientation – from the following sign in the compound, MARRY. That is, rather than being oriented toward the head as in the citation form of THINK, the dominant, signing hand in the compound BELIEVE is oriented toward the palm of the other hand, as in the sign MARRY.
The phonology of sign languages has been shown to be similar to that of spoken languages at even more surprising levels of analysis. For example, it has been demonstrated that the phonological elements of ASL words are not all simultaneously organized, as Stokoe had claimed, but rather have significant sequential structure, just as spoken languages have one sound after another. A sign language equivalent of the syllable has even been argued for.
An aspect of language structure that involves both phonology and syntax is prosody. Prosody involves rhythm, to separate the parts of a sentence; prominence, to emphasize selected elements; and intonation, to communicate other important information, such as the discourse function of the sentence – e.g., whether an utterance is a plain declarative sentence or a question. Recent work argues that sign languages have the equivalent of prosody. While spoken languages use the rise and fall of the pitch of the voice, volume, and pauses to achieve these effects, sign languages employ facial expressions, body postures, and rhythmic devices in similar ways and for similar functions. Examples are the Israeli Sign Language facial expressions for yes / no questions and for information assumed to be shared by the signer and addressee, shown in figure 22.4.
Sign language facial “intonation” is different from the facial expressions used by hearing people in their communication, which are affective and neither mandatory nor systematic. Rather, sign language facial expressions are like the intonational pitch patterns of spoken language: both tonal melodies and facial melodies are grammaticalized, i.e., fixed and systematic. For example, the intonational melody used in spoken language to ask a question requiring an answer of “yes” or “no” is systematically different from the one used to make a declarative statement. The same is true of the facial intonations for these two types of sentences in sign language. In the next subsection, what is perhaps the most central aspect of language is examined: the word.