Emergence of grammar in a new sign language

Grant Proposal


We are presented with a rare opportunity to study the emergence of grammar in a new and isolated sign language. The first signers of the language of the Abu Shara Bedouins in Israel were born about 70 years ago. Because of the family and social patterns in this 3,500-member community, there are now more than 80 deaf people in two generations, ranging from middle age through infancy. Because a person’s language stabilizes with age, the living signers of the language represent a number of points in its history. This fact allows us to observe the development of the language almost since its inception. The proposed research aims to document the development of grammar in Abu Shara Bedouin Sign Language (henceforth ABSL), and through it, the nature of the development of grammar in sign languages and human language more broadly. Our work joins a growing body of literature on new sign languages around the world, including in Japan (Osugi, Supalla and Webb 1999), in Bali (Branson, Miller, and Marsaja 1996), and in Nicaragua (Senghas 1995; Kegl, Senghas and Coppola 1999; Senghas, Coppola, Newport and Supalla 1997), which seeks to understand the roots of human language. Previous work on established sign languages has shown that they share many fundamental properties of spoken language, thus demonstrating that many properties of languages are independent of the mode of communication. Furthermore, established sign languages share a number of quite specific structures, and these sign-language-universal structures are remarkably complex, despite the relative youth of the languages. In our previous work, we have argued that these common structures emerge in sign languages because of the iconic, visual, and bimanual advantages of the sign modality. Still, evidence from Nicaraguan Sign Language (Kegl, Senghas et al. 1999) and from our preliminary findings in Abu Shara indicates that complex grammatical structure is not born overnight.

The members of our team are first and foremost linguists with strong backgrounds in descriptive, theoretical, and typological linguistics. Our primary goal is to produce a grammatical description of ABSL that will be of interest to theoretical and typological linguists, whose overarching goal is the understanding of human language. Because ABSL is an emergent language, we are particularly interested in the ways in which its structure is growing in grammatical complexity, an issue that has also been central to our research as a team (Aronoff, Meir, & Sandler, 2000, henceforth AMS; Aronoff, Meir, Padden, & Sandler, in press, henceforth AMPS). We will focus especially on those grammatical structures that are known to be common to established sign languages and for which previous research has produced well-understood measures of development.

We will concentrate on specific properties that are typically found in sign languages and that are rooted in the medium of expression: (a) the use of space and location to encode grammatical roles and relations; (b) the use of the body and particular handshapes with motion to represent classes of referents, their action and interaction; and (c) an interpretive system of rhythm and facial expression. In established sign languages, these properties are manifested by specific grammatical structures: a verb agreement system with particular properties (Padden 1988; Meir 1998); a subsystem of classifier constructions (Supalla 1985); and prosodic constituents with facial ‘intonation’ (Nespor & Sandler 1999). As the constituents of prosody are known to reflect syntactic constituency and complexity, we aim to use the more salient prosodic patterning as a point of entry for analyzing words and sentences in ABSL. Another aim of our project is to track the emergence of a lexicon. Prosody will help us here as well: criteria for phonological words (Brentari 1998; Sandler 1999a) will help us determine when gestural elements become lexicalized.

Our preliminary work on ABSL shows that though these particular grammatical systems seem ideally suited to visual-gestural languages, they begin life as an unsystematic amalgam and, in fact, may not even be present in a new sign language. Our aim is to describe a chronology of how such grammars develop. Though the community has been stable for over 200 years, it, and especially its sign language, is vulnerable to outside influences, lending urgency to our effort to describe the language.

As a service to the community, we intend to make a dictionary of the language, which will double as a means of recording the lexicon, including word categories and combinatorial structures at the morphological level such as compounding and affixation.


Herodotus tells the story of the Egyptian king Psammetichos’s effort to determine what the first language was. He had newborn twins placed in the custody of a deaf shepherd on an uninhabited island. Later, he returned to discover that the children’s first recognizable word was ‘bekos’, the Phrygian word for bread, and so concluded that Phrygian was the first language. The tale of Psammetichos has long been one of the best remembered of Herodotus’s stories because it strikes a nerve. Our own research covers a similar case: we have uncovered an autochthonous sign language that has developed over the last seventy or so years in an isolated and homogeneous community with a high incidence of profound prelingual deafness. Because of the unusual features of this new sign language, we believe it contributes uniquely to the current work on creoles and new sign languages.

History of the Abu Shara community and their language. The Abu Shara Bedouin group was founded about 200 years ago in the Negev region of present-day Israel. Originally fellahin ‘peasants’ from Egypt who worked for traditional Bedouins as laborers, the Abu Shara now function autonomously and are regarded by outsiders as Bedouin. The group is now in its seventh generation and contains about 3,500 members, all of whom reside together in a single community closed to outsiders. Consanguineous marriage has been the norm in the group since its third generation. Such marriage patterns are common in the area and lead to very strong group-internal bonds and group-external exclusion. Tellingly, the Abu Shara still view themselves as a single large family, though one now subdivided into subfamilies.

Within the past three generations, approximately 80 individuals with congenital deafness have been born into the community, all of them descendants of two of the founders’ five adult sons. All deaf individuals show profound prelingual sensorineural hearing loss at all frequencies, have an otherwise normal phenotype, and are of normal intelligence. Scott et al. (1995) identify the deafness as (recessive) DFNB1 and show that it has a locus on chromosome 13q12 similar to the locus of several other forms of nonsyndromic deafness.

Kisch (2000) has done a detailed anthropological study of deafness in the Abu Shara community, showing that the deaf members of the community are fully integrated into its social structure and are not shunned or stigmatized. Kisch was the first to report that the deaf members of the community and a significant fraction of its hearing members communicate by means of a sign language. Neither Kisch nor anyone else, however, had attempted to analyze the language before our team’s first systematic work on the language in 2002.

Our research team has now made several visits to the Abu Shara community and has developed a working relationship with a number of its deaf and hearing members. Preliminary findings have revealed a pervasive, robust sign language used across generations of deaf and hearing members of the community, which we have labeled Abu Shara Bedouin Sign Language (ABSL). Among new sign languages, ABSL is unique in two respects: it is used in a stable language community with many deaf and hearing signers, and younger signers are born into a native-like environment with numerous adult models of the language available to them.

Age of sign languages and linguistic complexity. Languages produced by the hands and body and perceived by the eyes bear a striking resemblance to the languages that are used more widely, those that are spoken and heard. A body of research that is large relative to its recency clearly demonstrates that this is so (see Emmorey 2002; Emmorey & Lane 2000; Meier et al. 2002; Sandler & Lillo-Martin, to appear, for recent collections).

At the same time, some aspects of sign language structure appear to be modality specific. Two central properties of sign language structure underlie the similarities found across sign languages as well as differences from spoken languages: iconic motivation and simultaneous structuring (both exemplified below). By comparing universal with particular aspects in the conventionalization of a nascent sign language, we hope to gain fresh insight into the very nature of grammar.

All known sign languages are young, as human languages go, each having had an opportunity to develop only when the necessary conditions presented themselves, i.e., when a stable community of deaf people was able to meet regularly over time, often beginning at a school for the deaf. Yet all established sign languages that have been studied have complex grammars. How do these grammars arise? What is the course of their development? How do random and simple forms accrue systematicity and complexity? How much of this development can be attributed to universals of language structure, and how much to the manual-visual modality? What can the occurrence of a new language in a normal family and community setting tell us about the essential nature of human language and the way in which its grammar emerges? These questions motivate the proposed research.

Creoles. It has often been noted that sign languages resemble creole languages in their history and structure (Bickerton 1977, Fischer 1978, Feldman, Goldin-Meadow and Gleitman 1978, Gee & Goodhart 1985, 1988). Historically, individual creoles and sign languages are new: the oldest known sign language (French Sign Language) is little more than two centuries old and the oldest known spoken creoles about five hundred years old. Structurally, both types show remarkably little affixation, presumably because insufficient time has passed to allow for the development of affixes (AMPS). Researchers have sometimes seen creoles as prototypes that provide a special window into the nature of language, precisely because of their newness (Bickerton 1981; DeGraff 1999). But the very social factors that lead to the creation of creoles--the coming together of people who do not share a common culture and language--make it difficult to draw such conclusions directly: creoles do not develop under normal social circumstances, so one must be concerned that the social discontinuities that lead to the creation of pidgin and creole languages may also play a large part in determining their particular structure. As it happens, all previously studied fully developed sign languages are similar to creoles in their social origin: all originated under conditions of social discontinuity, usually among children brought together in a school setting. An additional factor militating against the ‘original’ nature of creoles is the fact that no creole is ever truly autochthonous (pidgins and creoles are also called contact languages), which is why other researchers have emphasized the role of substrate and superstrate languages in the development of individual creoles. Sign languages might seem immune from such external influences, but their school-based origins might mean that this is not so. 
Both these factors (the discontinuity of social structure and the influence of other languages) are absent in truly autochthonous languages, those that develop de novo in an established community without outside contact.

Such languages are exceedingly rare. The only previously documented case is that of Martha’s Vineyard Sign Language (Groce 1985), but that language disappeared before it could be studied in any detail. ABSL thus presents a remarkable opportunity to study a new language that has grown inside a very stable, longstanding community. The closed nature of the society itself makes it highly unlikely that there has been any significant outside influence on the language; the homogeneity of the society makes it an ideal breeding ground for any language. ABSL is thus as close to Psammetichos’s experiment as any language is likely to come.

Home Sign. Unlike creoles and pidgins, which involve contact between existing languages, home sign systems used by deaf children are apparently created spontaneously, without input from either a spoken or a sign language. In their studies of young deaf children who created their own gestures to communicate with the adults around them, Goldin-Meadow and her colleagues found that once a child’s gestural repertoire developed, the child consistently used the same set of handshapes and movements to form gestures. Further, the child quickly developed combinatorial structures, combining components of gestures to build meaning (Goldin-Meadow, McNeill et al. 1996).

Goldin-Meadow & Mylander (1998) find that when home sign systems are compared across different deaf children (who have no contact with one another), there is a similar syntax underlying their gestural strings. Typically, the gesture representing the action is preceded by a gesture for the patient, and following the action is the recipient (patient-act-recipient). The fact that the syntax is similar among home sign systems is indicative of the presence of another resilient property of language – hierarchical structure, or the expression of thematic roles in a basic syntax.

Home sign systems, however, lack what natural sign languages share: a complex morphology and syntax. Though the expressive power of their gestures is increased by their combinatorial properties, home signers have a smaller repertoire of handshapes and movements than is found in natural sign languages. Further, their syntax is basic, often not explicitly expressing thematic roles other than patient and recipient. At least two factors constrain home sign: 1) the lack of input from a natural language inhibits the development of fragile, language-specific, or complex language structures, and 2) iconicity: because the communicative power of home sign depends on the transparency of its gestures, the gestures must remain comprehensible to others. The latter factor may play a crucial role in limiting signers’ ability to constrain and conventionalize the phonology and morphology of their system, or to build a more complex morphology. To do so would make their signing more arbitrary and less iconic, reducing its communicative power. ABSL is not constrained by either of these considerations. Many of the hearing signers with whom the deaf signers communicate were exposed to ABSL from a young age. We therefore expect to see a more complex syntax with a wider set of thematic roles; in addition, we should see the emergence of complex morphology, including a repertoire of conventionalized components used in combination with one another.

Aspects of sign language grammar.

Argument structure. All languages have ways of encoding the hierarchical relationship holding among the nominals involved in a particular event, i.e., the argument structure of verbs. This structure is reflected in syntactic mechanisms, such as word order, and/or morphological devices, like verb agreement and case marking. Sign languages are no exception. It has been argued that ASL, for example, has a basic word order of SVO (Liddell 1980; Padden 1988; Lillo-Martin 1991). All sign languages studied to date have a class of verbs that inflect for agreement. In sentences containing agreement verbs, the specific role of arguments of the verb is marked by the direction of the path movement of the verb (toward the R-locus associated with that argument, or away from that R-locus) and the facing of the hands (details below). Some sign languages have auxiliary-like elements that co-occur mainly with non-agreeing verbs, whose only role is to indicate the grammatical function of the arguments (Taiwan Sign Language, Smith 1990; Sign Language of the Netherlands, Bos 1994; Sign Language of Japan, Fischer 1996). Meir (in press) describes an object pronominal form in ISL, which can appear only in the object position of certain types of verbs. This form differs from the general pointing pronominal form, which is not restricted to any syntactic position. It has also been argued that ASL uses non-manual markers, such as eye gaze or body tilt, to indicate grammatical relations (Bahan 1996; Neidle et al. 2001).

Though argument structure is fundamental to any human language, grammatical marking of this structure is often redundant, as the relationship between the arguments and the verb may be inferred from the semantics of the verb and the properties of the arguments, coupled with contextual clues and general knowledge. And indeed languages vary widely in the degree and type of grammatical marking of argument structure.

What should be expected of young languages? As morphological marking takes time to develop and conventionalize, it is reasonable to expect young languages to rely heavily on semantic and contextual clues at first. Grammatical devices might emerge only in cases of ambiguity, e.g., when a verb takes two animate arguments. In such cases, young languages are predicted to rely mainly on word order, since inflectional morphology takes time to develop (AMS). The literature on spoken pidgins (e.g., Hymes 1971) confirms that young pidgins rely heavily on word order to express basic syntactic relations, developing morphological devices over the course of several generations.

However, sign languages seem to differ from young spoken languages in this respect; they are all relatively young, yet all have grammatical ways of marking argument structure. AMS suggest that sign languages may develop certain complex morphological structures much more quickly than spoken languages, since they can represent in a non-arbitrary manner grammatical relations that are based on visuo-spatial concepts. This leaves us with two conflicting expectations concerning the development of argument structure in a new sign language. On the one hand, as young languages, they are predicted to rely on syntactic devices, i.e., word order. On the other hand, as visual languages, they might start to develop some kind of morphological marking quite early.

Senghas, Coppola, Newport, & Supalla (1997) studied the development of argument structure in the first two generations of signers of the sign language that emerged at a deaf school in Nicaragua in the 1980s. According to their findings, the first generation hardly used any grammatical devices to encode argument structure. Furthermore, verbs typically occurred with only one argument. Transitive actions involving two animate arguments were expressed by two verbs. For example, a videotape of a man pushing a woman was described in sign as "MAN PUSH WOMAN FALL" (ibid., p. 555). In addition, directional movements on the verbs were not used consistently. The second generation of signers differed from the first in that they used the same two-verb sentences but introduced an additional word order, in which the two verbs are adjacent to each other, possibly a precursor to a serial verb construction. Additionally, second-generation signers used directional movements in a much more consistent manner. As explained below, consistent use of space is essential for the development of verb agreement in a sign language. These findings indicate that it takes more than two generations for a grammatical device to emerge, but that the potential for a grammatical system is already apparent in the second cohort of signers.

The Abu Shara situation is similar to that of Nicaragua in that we can observe more than one generation at the same time, but it differs from that of Nicaragua in important ways. First, there are several Abu Shara families with three generations of deaf signers, and all signers have native-like exposure to the language. This means that the language is transmitted naturally, from infancy, in the environment of a sign language community – conditions that may result in a different course of development from that of the sign language of Nicaragua. Second, ABSL is older: the first Abu Shara signers were born 70 years ago, while the first Nicaraguan cohort arrived at the school about 25 years ago. Working with ABSL, then, it will be possible to track the emergence of argument structure incrementally over a longer diachronic span than has been possible in past studies. Furthermore, we will be able to compare our findings with those of the sign language of Nicaragua. Such a comparison will enable us to determine whether linguistic complexity in visuo-spatial languages takes a single road or multiple roads of development.

Verb agreement. Verb agreement is one of the ways in which argument structure is encoded in a language, through the grammatical marking of properties of one or more of its arguments on a verb. Like verb agreement in spoken languages, sign language verb agreement is a grammatical system, as it involves systematic encoding of syntactic and thematic roles. Padden (1988) showed that ASL verb agreement is also different from that of spoken languages, in that the language has a three-way classification of verbs, according to their agreement patterns: plain, spatial and agreement verbs.

Verb agreement in sign languages takes the following form: the beginning and ending points of the agreeing verb are associated with the points in space established for the arguments of the verb. In sign languages, nominals in a clause are associated with discrete locations in space, called ‘R(eferential)-loci’. This association is achieved by pointing to, or directing the gaze towards, a specific point in space. These R-loci are used for anaphoric and pronominal reference for the nominals associated with them, and are therefore regarded as the visual manifestation of the pronominal features of the nominals in question (see e.g., Klima & Bellugi 1979; Lillo-Martin & Klima 1990; Meier 1990; Janis 1992; and Bahan 1996).

In addition to pronominal signs, verbs that inflect for agreement (the so-called 'agreement verbs') also make use of the system of R-loci: the direction of the path movement of the verb is determined by the R-loci of the verb’s arguments. In agreement verbs, the beginning and end points are determined by the R-loci of their grammatical arguments, and the facing of the hands is towards the syntactic object. The ISL verb HELP, for example, moves from the location associated with its subject argument towards the location associated with its object argument.

The two other classes of verbs behave differently with respect to verb agreement. Plain verbs have invariant beginning and end points; in particular, the direction of the path movement of these verbs is not determined by the R-loci of their arguments. Spatial verbs are those whose beginning and end points are determined by spatial referents, that is, locations and not subjects or objects. The locations encoded by verbs in this class are interpreted analogically and literally, and not as representing abstract grammatical arguments (Padden 1988). Subsequent research on many sign languages has revealed an important similarity: all of them have verb agreement, and all exhibit this tripartite division of verbs into the same categories. However, we know of no diachronic studies that describe how this grammatical system arises in a new language, and how it develops across several generations.

Classifier structures. Because they seem ideally suited to the manual-visual modality, signs that describe objects by their physical or semantic characteristics are common in sign languages, including home sign systems. Such signs, called classifier constructions, consist of an inventory of handshapes that represent a “class” or set of objects, together with rule-governed combinations of these handshapes with movements and locations as they depict one or more objects located in and moving through space. Emmorey (2002) groups classifier handshapes into four types: 1) whole entity classifiers, which represent objects by semantic, not descriptive, features, such as VEHICLE, STATIONARY-OBJECT; 2) handling and instrument classifiers, which she describes as involving “an outside agent…interpreted as causing the motion” (p. 76) or action, e.g., HOLD-HAMMER, TOOTHBRUSH; 3) limb classifiers, which represent an animate object’s limbs, e.g., TWO-LEGGED, FOUR-LEGGED; and 4) extension classifiers, whose handshapes describe the extent of an object’s shape and outline, such as SHAPE-OF-CAR-HOOD. A second set of signs, described by Engberg-Pedersen (1993) as “referent projection,” uses not the hands but the body of the signer as a representational object. Instead of using the hands to show the relative size or shape of an object or its other characteristics, in referent projection the signer uses the body as a map of animate motion. For example, to depict a bird flying, the arms and hands flap as if winged; to depict a cat grooming itself, the limbs and paws of the cat are “projected” onto the arms and hands of the signer, who acts as if licking his own hands. Both sets of structures are common to the sign languages thus far described.

Sign languages specify by combinatory rules how handshapes are combined with movements in either sequential or simultaneous structure. In ASL, while two “extension” handshapes can combine simultaneously, that is, both hands having the same or different handshapes contacting each other, an entity classifier handshape cannot occur in the same kind of structure with classifier handshapes of other types; e.g., the handshape for VEHICLE cannot have a handshape describing a round object (e.g., a tire) on top of it. The constraint is described as a conflict of “scale”: because entity classifiers group objects by semantic, not physical, features, a handshape that represents a physical feature of an object, e.g., roundedness, cannot be combined with them (Emmorey 2002). However, the description can be sequentially structured: first a classifier structure with an entity classifier handshape, followed by a structure with an extension handshape or a combination of extension handshapes, e.g., VEHICLE-MOVE-STRAIGHT. ROUND-OBJECT-ON-TOP-FLAT-SURFACE, ‘A car with a tire on top of it drove down the road.’ Thus, the structuring of classifiers with respect to one another in either sequential or simultaneous form is not a matter of individual performative style, but is strictly rule-governed. Emmorey (2002) also notes that the possibilities for movements that depict actual motion differ from those for movements that describe the extension or shape of an object. To describe the length of a pipe, the hands begin together to show the size of the pipe, and then move in opposite directions to show its extension. To show that the pipe is itself actually moving in space, say on a conveyor belt in a factory, the hands move together in parallel, not opposite, fashion. Classifier constructions vary in the combinatory possibilities of handshapes with one another, and of handshapes with types of movements, creating a complex system of representation.
The exploration of such forms in a new sign language should reveal how such structuring emerges.

Researchers have noticed the similarity between many lexical items such as WRITE or SWEEP and classifier constructions, and some (e.g., McDonald 1982) have suggested that classifier constructions are the source of lexemes in sign languages. However, Padden (1988) and Senghas (2000) have argued that while classifier structures may share handshapes and even movements with agreement verbs, the two systems are in fact distinct. Padden (1988) shows that while classifier movements can vary in path between two points, agreement verbs can only have straight path movements. Senghas (2000) finds that among Nicaraguan signers, the management of spatial perspective, e.g., rotating signs to match the perspective of the viewer, is not found in verb agreement, suggesting that as languages grammaticize, they differentiate between argument structure and spatial location.

This previous research has shown that, while classifier constructions are pervasive in sign languages, there are systematic differences between languages in the ways in which the classifiers are incorporated into the grammar. We expect that ABSL will provide an invaluable perspective on the growth and nature of these constructions.

Prosody. Building on a theory of Ohala (1984), Gussenhoven (1999) proposes that (discrete) linguistic intonation systems are phonologized out of a pool of innate (and gradient) nonlinguistic patterns, and that both are synchronically available in language. A study by Campbell et al. (1999) presents a similar picture in what is hypothesized to be the corresponding system in sign language, facial expression. They show that the yes-no and wh facial expressions commonly found across sign languages are similar to universal facial expressions that denote surprise and puzzlement, respectively, suggesting that sign language linguistic facial expressions are grammaticized from a pool of universal nonlinguistic facial expressions. It has been shown that ASL uses both grammatical and affective facial expression (Baker & Padden 1978; Liddell 1980); that the two can be formally distinguished (Baker-Shenk 1983); that they are controlled by different areas of the brain (Corina, Bellugi, & Reilly 1999); and that they are acquired differently by children (Reilly, McIntire, & Bellugi 1991). A new sign language is pristine terrain for exploring the emergence of grammatical ‘intonation’ in a human language.

Prosody involves rhythm and stress as well as intonation. Rhythmic chunking of prosodic constituents has been shown to be related to the syntactic structure of sentences (Selkirk 1984; Nespor & Vogel 1986; Pierrehumbert 1980; Hayes & Lahiri 1991). These prosodic constituents, such as the phonological phrase and the intonational phrase, are arranged in a hierarchy, and each is associated with corresponding syntactic constituents, like phrases and clauses. Similar rhythmic patterns and constituents have been found in sign language (Wilbur 1999, 2000, for ASL; Nespor & Sandler 1999; Sandler 1999b, for ISL).

We intend to exploit this demonstrated relationship between prosody and syntax in our investigation of ABSL. Without prosody, there is a considerable amount of indeterminacy in interpreting strings of words. For example, in the study of argument structure in Nicaraguan Sign Language mentioned above (Senghas, Coppola, Newport, & Supalla 1997), various strings were observed in the two generations, such as MAN PUSH WOMAN GET-PUSHED, and MAN WOMAN PUSH FALL. With context, it is possible to interpret which arguments are related to which verbs, as the authors did. However, it is very difficult to determine what the sentence structure is, in the absence of a prosodic analysis. For example, the second sentence might be translated as, ‘There were a man and a woman there. The man pushed the woman. The woman fell down.’ Other possibilities are ‘The man pushed the woman down,’ or, ‘The man pushed the woman, who fell down.’ It is likely that the syntactic structure of such strings can be determined unambiguously with the help of a prosodic analysis, using what we call “the prosodic envelope” as a clue.


In the planned research project we will assume the background outlined in Section B, cognizant of differences that have been discovered across established sign languages, and incorporating what we have learned from preliminary studies, to which we now turn.

Verb agreement. Comparing verb agreement in ISL with that of ASL and other sign languages, Meir (1998b, 2002) argues that the classification of verbs into plain, spatial and agreement verbs is predictable, and need not be listed as an idiosyncratic property of each verb. The classification is semantically determined: verbs denoting motion in space will turn out to be spatial verbs. Verbs denoting transfer are agreement verbs, and plain verbs are defined negatively, as denoting neither transfer nor motion. Most apparent counter-examples to these generalizations are easily explained on phonological grounds: some verbs denoting transfer fail to inflect for agreement because of constraints imposed by their phonological structure.

All sign languages investigated so far have verb agreement, and they resemble each other in both the morphological instantiation of agreement and the meaning of the members of each class. This is not to say that all sign languages have identical agreement systems. Fischer and Osugi (2000), for example, found that in Nihon Syuwa (the Sign Language of Japan), an “indexical classifier”, articulated in neutral space by the nondominant hand, marks the locus of agreement. Nevertheless, in all sign languages that we know of, the tripartite classification of verbs still holds, as does the spatial and simultaneous nature of their instantiation. Furthermore, the kernels of verb agreement have been found in sign systems that are not fully developed sign languages, such as home sign, the signing systems developed by deaf children raised in an oral environment without exposure to any sign language (Goldin-Meadow 1993); the very young sign language that has evolved in Nicaragua (Senghas 1995; Senghas and Coppola 2001); International Sign (Supalla & Webb 1995); and the signing of deaf children exposed only to Manually Coded English (S. Supalla 1990).

Verb agreement and argument structure in ABSL. Our preliminary analysis of videotaped signed narratives in ABSL suggests that signers rely heavily on context for the interpretation of arguments. There are very few overt nouns relating to animate/human referents. The agents of the actions are not mentioned explicitly in most cases, and are understood from the context. There are hardly any pronouns in the older signers' narratives. Younger signers, however, show increasing use of pronouns. In the older signers in particular, many utterances consist of a mimetic sign that depicts an event, but is devoid of any syntactic or other cues to its lexical category or grammatical role. Many verbal utterances have no overt arguments.

Where arguments are signed overtly, there is generally only one overt argument per verb. Nevertheless, there are exceptions to the general pattern we have discerned so far. One signer in our preliminary study signed some utterances with two explicit animate nominals. For example, in the sentence, WOMAN BABY TAKE-TO-BREAST SUCK (‘The woman took the baby to her breast (and the baby) suckled.’), the sequence WOMAN BABY TAKE-TO-BREAST is interpreted as one clause with two animate arguments, and SUCK as a coordinate clause in the same sentence. We arrive at this interpretation from an analysis of the prosody (cf. below), which indicates that WOMAN and BABY are arguments of TAKE-TO-BREAST within the same constituent, and that SUCK is a closely related constituent and not a new sentence. Such utterances provide the first glimmer of an emerging sentence grammar.

There are some pointers to follow with respect to the emergence of agreement mechanisms. ABSL signers hardly ever localize referents, and never localize non-present referents. They do sometimes localize locations, however (“men’s wedding tent here, women’s tent there”), along the side-to-side axis. Signs connoting verbs of transfer like GIVE and TAKE do not refer to R-loci, but rather exploit the signer-forward axis regardless of the identity of the verb’s arguments.

This seems to show a marked distinction between what might correspond to spatial verbs and verbs denoting transfer, i.e., agreement verbs. If future research yields similar results, it would lend support to the hypothesis that agreement verbs evolve from plain verbs and not from spatial verbs. The verbs expected to evolve into agreement verbs do not start out with the locative agreement of spatial verbs and abstract away from it, but instead adopt a different and nonanalogic axis at the outset. Such results may indicate that the Locative Hypothesis (e.g., Anderson 1971) which regards spatial relations as more basic than other grammatical relations, and as a template on which other grammatical relations are built, may provide a model of language structure that is accurate synchronically, but not necessarily diachronically. This (still tentative) finding would be surprising in a visual language, but compatible in an interesting way with other initial findings described below.

Classifier structures. Though sign languages share the classifier structures described in Section B, they vary with respect to their repertoires of handshapes and structures. Furthermore, they vary in the combinatorial possibilities for handshapes with movements (see Emmorey, 2002). With respect to categories of classifier handshapes, our comparative research (AMPS) has shown that while Israeli Sign Language seems to have classifier handshapes of each type described by Emmorey (2002), it has fewer whole entity classifier handshapes than does ASL. The ASL entity classifier VEHICLE represents vehicles of many kinds: trains, cars, bicycles, subway trains, and boats. ISL has no comparable classifier; instead, either “extension” or “instrument” classifier handshapes are used for each type of vehicle. We have suggested that the relative paucity of whole entity classifiers in ISL may be related to its younger age, since such forms appear to be more abstract, and may require more generations to conventionalize.

A further difference between ASL and ISL signers is the latter’s more frequent use of referent projections where ASL signers would normally use classifier handshapes. When describing a dog making a nest for itself in a field of tall grass, ASL signers use limb classifiers to describe the dog circling an area of grass, then lying down among the reeds. ISL signers, in contrast, use the body of the signer, specifically the arms and the hands as paws, to show walking in a circle, then the body is lowered with the arms and hands in front to show a dog in repose. In their study of Nicaraguan signers, Kegl et al. (1999) report that the first cohort uses referent projections more than the second does. In AMPS, we argue that use of the body is more basic in sign language development, and, in time, as the language becomes more conventionalized, there is increased use of the hand classifiers as more abstract symbols. Specifically, we argue that use of classifier handshapes in space requires conventionalization, not only of handshape (i.e., which handshapes for which set of objects) but also of rules for how objects are positioned or organized with respect to one another, e.g. front vs. back, left vs. right -- more generally, the management of perspective.

Classifiers in ABSL. Our preliminary observations of ABSL show greater use of referent projection in describing the motion of animate objects in space than is the case for either ASL or ISL, suggesting that the new sign language has yet to make the transition to the heavily spatialized and rich classifier systems found in older sign languages.

Using picture stimuli from the Max Planck Institute (MPI) for Psycholinguistics, we elicited structures from ABSL signers that appear similar to classifier structures in older sign languages. Like ASL and ISL signers, ABSL signers use descriptive handshapes that mimic the size and shape characteristics of the objects. But their inventory of handshapes is smaller. Their handshapes seem to be generally much more unmarked, showing overall shape rather than smaller detail of shape. Further, the combinatory structures they use are markedly different from those of ASL. In response to a picture of three wooden rings arranged on a table, one with a thin wooden peg inside, another with a small square of wood and a third with a larger square, ABSL signers appear not to combine two handshapes to depict locative relationships of two objects, as ASL signers typically would.

ASL grammar takes advantage of the availability of two articulators, the two hands, and of the potential for iconic representation. An ASL description of this display would use one hand to depict a ring and another to depict the wooden peg inside, then move on to the next ring, again with the nondominant hand showing the size of the ring and the small wooden square arranged inside depicted by the dominant hand. ABSL signers instead organize the representation sequentially and with little locative description. Their structures appear to be a general description of the ground in which they indicate that the number of rings is three, then how they are arranged from left to right, and then they depict the order of objects inside the rings. ASL signers’ strategy of using the two hands to show locative relationships between two different objects, and the two objects’ positioning in space, is apparently absent in ABSL.

This leads to a very compelling conclusion. Bimanual and iconic advantages of the sign modality converge on a grammatical system with spatial and simultaneous properties – but this convergence takes time. It would appear that showing two descriptive handshapes simultaneously is an obvious way to show how the two objects occupy the same space together, but in fact, such representation must be grammaticized. While ABSL is more complex than home sign, its grammatical system is still unfolding. We wish to learn how its grammar ‘happens,’ and to provide an account of its developing structure.

The conclusion that even iconic aspects of sign language structure take generations to evolve is particularly interesting in the light of our previous work on simultaneous morphology in sign languages (AMS, AMPS). That work sought to solve the known puzzle presented by the complexity of sign language morphology. As all known sign languages are quite young, like creole languages, it is surprising that the former have complex morphology (e.g., Gee & Goodhart 1985, 1988), while the latter do not (McWhorter 1997). The explanation we found relies on a comparison between simultaneous morphology, such as verb agreement and classifier constructions, and sequential affixation in sign languages. Our study of the ASL –ZERO suffix and a group of sense prefixes in ISL led us to notice clusters of properties that characterize each type of morphology, summarized in Table 1.


Simultaneous morphology                Sequential morphology

related to spatial cognition           arbitrary
productive; no lexical exceptions      less productive to unproductive
no individual variation                individual variation
semantically coherent                  less semantically coherent
universal across sign languages        tied to specific sign languages
morphologically underspecified         grammaticized from free words
many processes                         limited number of processes

Table 1. Two types of sign language morphology

We argued in AMS and AMPS that the simultaneous type of morphology arises quickly in all sign languages despite its complexity because it is iconically motivated and related to visuo-spatial cognition. Sign languages are particularly good at representing iconic shapes and relations in space, so they do so at a relatively early stage of their development. Sequential affixation that arises through grammaticization of adjacent words is a much more idiosyncratic and arbitrary process, and the development of rich and systematic morphology of that type takes a long time, even hundreds of years judging by spoken languages. This explains why sequential morphology is much rarer, less productive, and more language-particular across sign languages than the simultaneous kind. In terms of their sequential morphology, sign languages behave like any other young languages. As expected, ASL has more affixes than ISL (Sandler & Lillo-Martin, to appear).

Yet our preliminary work on ABSL indicates that even sign language universal morphology does not appear with catastrophic abruptness. We explain this in the following way: iconically motivated signs are no less symbolic than arbitrary ones, and the emergence of a grammar that uses those symbols systematically and grammatically is therefore a cognitively complex process. As such, it takes time -- if a good deal less time than the fully arbitrary kind. We speculate that the sign modality may provide optimal conditions for tracking the emergence of some aspects of grammar, because the iconically motivated representation system it exploits makes the representation of certain grammatical structures easier and likely to develop more quickly. But even these structures develop gradually, and we believe that we have caught this new and isolated language at the right moment for tracing the emergence of simultaneous, iconically motivated morphology. Whether the language is old enough to have accrued sequential affixation as well is an open question.

Prosody. Another important area in which to search for the emergence of grammatical complexity and systematicity is the structure of propositions. As mentioned, it is difficult to grasp this level of structure by simply glossing strings of signs. We intend to study prosody in its own right, and also to use it as a means of approaching syntactic structure.

The prosodic component of the grammar contains its own primitives and rules for their combination. At the same time, this system is closely tied to other major components: syntax, semantics, phonology, and pragmatics. Many features of prosodic form are directly observable and measurable – features such as rhythmic chunking and intonational patterns. This makes them more accessible than features of other components, like sentence embedding or the form of interrogative sentences in the syntactic component. As prosody is linked to these other grammatical structures, it can be exploited as a tool for their analysis.

We will use as a guide three levels of the prosodic hierarchy (Selkirk 1984; Nespor & Vogel 1986) found in ISL: the phonological word, the phonological phrase, and the intonational phrase (Nespor & Sandler 1999; Sandler 1999a,b). Each of these is linked to syntactic constituency. We also note that the very presence of such a hierarchy implies a considerable degree of complexity. Each level is linked to morpho-syntactic and syntactic structures, also arranged in a hierarchy. Mature languages have words linked within phrases, phrases within clauses, and clauses within complex sentences. We describe work on the prosody of ISL to demonstrate what has been discovered about prosody in one established sign language and its relation to other grammatical components, and then go on to preliminary findings in the emerging sign language, ABSL.

The phonological word in sign languages has a strong tendency to be monosyllabic, i.e., to be characterized by a single movement (e.g., Coulter 1982; Sandler 1993, 1999a). In addition, phonological words are specified for a single handshape (selected finger group). Even morphologically complex words and host-clitic forms tend toward this structure (Sandler 1999a). This constrained form distinguishes lexical words from classifier constructions in established sign languages (AMPS), and from gestures in a newly developing sign language like ABSL. The latter two forms have no such phonological constraints, and yet they are potential precursors to words. We predict that words will emerge across generations in ABSL, starting out as amorphous gestural elements, and eventually conforming to the phonological constraints on words known to be typical in established sign languages. In tandem with standard morphosyntactic criteria for wordhood, this method will enable us to track the emergence of a lexicon in this new language.

Nespor & Vogel (1986) propose that the phonological phrase can be projected from syntactic constituents at the level of the phrase (e.g., NP, AdjP, etc.) according to an algorithm. Phonological phrases are cued by rhythm and intonation (see below). In addition, Nespor & Vogel demonstrated that phonological rules in a variety of languages have the phonological phrase as their domain of application. That is, certain rules that apply across word boundaries, such as French liaison, cannot apply between words in two separate phonological phrases. In the sentence les enfants [sont^allés]P // à l’école ‘The children went to school,’ there is liaison (^) between sont and allés (i.e., the [t] is pronounced) within the same phonological phrase, but not between allés and à (//), where the final consonant is not pronounced, because a phonological phrase boundary intervenes. As the French example illustrates, phonological phrasing reflects syntactic phrases such as noun phrases and adjective phrases, although it is not fully isomorphic with them.
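The phrase-as-domain idea can be pictured with a small sketch. The list-of-phrases representation and the function name below are our own illustrative assumptions, not a claim about any existing analysis tool; the point is simply that a cross-word rule like liaison is licensed only for adjacent words inside the same phonological phrase.

```python
# Toy sketch: a cross-word rule (here, French liaison) may apply only
# when two adjacent words fall inside the same phonological phrase (P).
# The bracketing follows the liaison example in the text; the
# representation itself is purely illustrative.

sentence = [["les", "enfants"], ["sont", "allés"], ["à", "l'école"]]

def rule_licensed(p_phrases, left, right):
    """True iff `left` immediately precedes `right` within one phonological phrase."""
    for phrase in p_phrases:
        for a, b in zip(phrase, phrase[1:]):
            if (a, b) == (left, right):
                return True
    return False

print(rule_licensed(sentence, "sont", "allés"))  # → True: liaison possible
print(rule_licensed(sentence, "allés", "à"))     # → False: blocked at the P boundary
```

An analogous domain restriction governs the spreading rule discussed below for ISL, where spread likewise halts at the phonological phrase boundary.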

In a study of ISL, Nespor and Sandler (1999) found that phonological phrases were marked rhythmically by holding the hand(s) in position at the end of the phrase, by relaxing them and pausing briefly, or by reiterating the last sign of the phrase. They also found that the phonological phrase provides the boundaries for a phonological rule that spreads across words: Nondominant Hand Spread (NHS). The nondominant hand, triggered by its participation in a two-handed sign, appears in the signal with the handshape and location of the sign in which it originates before and/or after that sign, up to the phonological phrase boundary, but not beyond. The example below is from the ISL sentence [[MALE-THERE]P]I [[I PERSUADE STUDY]P]I, ‘I persuaded him to study.’ In the second phonological phrase (P), the nondominant hand (fist shape) that is part of the lexical specification of the sign PERSUADE spreads through STUDY.


Figure (2) Nondominant Hand Spread within a phonological phrase

This spread does not exceed the phonological phrase boundary. Both the rhythmicity of phonological phrasing and the boundaries of NHS provide evidence for the existence of this constituent in ISL. (See Brentari & Crossley, in press, for similar results in ASL.)

Where phonological phrasing exists, we see evidence of syntactic complexity: more than one word is grouped together in a phrase, and the phrases in turn are hierarchically subordinate to a higher structure, such as the sentence. The relation between syntactic phrases and phonological phrases is indirect but demonstrably real (see Nespor & Sandler 1999), so that the phonological phrase reflects syntactic constituency.

The next level up is the Intonational Phrase (IP), so named because it is the primary domain of intonational tunes in spoken language (Beckman & Pierrehumbert 1986). An IP may comprise more than one phonological phrase within a long sentence, and it normally sets off syntactic constituents that are in some sense independent of the rest of the sentence, such as topics, parentheticals, nonrestrictive relative clauses, and extraposed elements (Nespor & Vogel 1986). IPs are in turn subordinate to the sentence, mirroring the typically complex hierarchical structure of syntax in human language.

The ISL study found clear cues for IPs: a change in head or body position, optional eye blink, and across-the-board changes in facial expression. Similar cues have been reported for ASL by Wilbur (1999, 2000). The suggestion that facial expression in sign language is like intonation in spoken language (Reilly, McIntire, & Bellugi 1991; Wilbur 1996) is strongly supported in the ISL study, both by the distribution of facial expression, and by its function. In ISL, small changes in facial expression sometimes occur in a new phonological phrase, and across-the-board changes occur in a new intonational phrase. The sentences in the corpus were strongly Topic-Comment in structure, and these two pragmatic constituents formed two IPs. The example below shows clearly different head and body postures as well as different facial expressions in two intonational phrases in ISL, from the sentence:

[[book-there ] P [he write ] P ] I [[interesting] P ] I (‘The book he wrote is interesting.’).


Figure (3) Different body postures and facial expressions in different intonational phrases

The sign language literature indicates that topic-comment structure is very common across sign languages (and in discourse-oriented languages generally; Givon 1979). Therefore, it makes sense to approach the task of discovering sentence structure and constituency in a new sign language by looking for the division of a discourse into intonational phrases, and then seeking some kind of relation – such as topic and comment, or noun phrase and relative clause – between them. Our preliminary results, outlined below, suggest that this will be a fruitful direction to pursue in tracking the differences in grammatical complexity across Abu Shara generations.

The melody of the prosodic system is intonation, conveying the illocutionary force of utterances, nuances of meaning, and certain types of relations among constituents. The melodic intonation of sign languages is visually manifested in facial expression (termed superarticulation in Sandler 1999b). For example, the ISL superarticulatory arrays for wh-questions (furrowed brows) and for shared information (squint) are shown in Figure (4a,b).

Figure (4a) wh-question (b) shared information (c) wh and shared

Complexity within the ‘tunes’ of sign language is achieved simultaneously, unlike spoken language tunes, which gain complexity in sequential fashion. For example, in ISL, when the linguistic facial expressions for a wh-question and for shared information (furrowed brows and contracted lower eyelids, respectively), co-occur in a phrase, they are superimposed upon one another simultaneously (Nespor & Sandler 1999; Sandler 1999b), as shown in (4c) above. Of course, such complexity of form conveys complexity of content, and it is therefore expected to develop and conventionalize gradually.

Like spoken language intonational patterns, sign language superarticulation is likely to have evolved from paralinguistic expressions, as explained in Section B. In the proposed study, we will inventory the superarticulations of the language and their distribution in signing across the Abu Shara generations, as an indication of grammaticalization and complexity within the intonational component.

The prosody sharply reflects both the discreteness of the constituents within the hierarchy and the links between them. Constituents are separated rhythmically by various cues, and each is marked by its own head or body posture and facial expression. At the same time, their connectedness is also encoded in the prosody. For example, brow raise is used in ISL to indicate continuation (as in conditional clauses), and eye squint designates certain constituents (often relative clauses) as shared information with respect to the rest of the material conveyed in the sentence. Crucially, only a language with complex syntax need avail itself of such detailed and systematic chunking and linking devices. Rigorous analysis of the relationship between facial expression and prosodic constituents will provide a key to the development of constituent structure in ABSL.

A reasonable scenario is one in which a language begins with utterances consisting of a single word-like gesture. It then progresses to a stage where words begin to have conventional form and meaning, and where more than one such word is conjoined within an utterance. At this point, there are two levels in the prosodic hierarchy: the (phonological) word and the intonational phrase. We predict the next stage to involve linking two or more phrasal units to a higher structure in the same proposition, i.e., for the intermediate level of the phonological phrase to arise between the IP and the word. We then have the potential prosodic envelope for complex sentences with more than one clause. We will use methods developed for studying the prosody of other sign languages in our analysis of Abu Shara Bedouin SL, and, in the process, gain access to the emergence of syntactic structuring.

Prosody in ABSL. As ABSL is a new sign language, we have a privileged opportunity to observe the emergence of an intonational grammar in its own right, hypothesized in Section B. to emerge from universal, nonlinguistic expressions. Our preliminary research with ABSL shows that there is significant individual variation in the amount and distribution of facial expression in the signing of the first target generation. The second target generation is more animated, but we do not yet know how much of their facial expression is systematic. While emotional facial expression that conveys the attitude of the signer toward the events in his/her narrative is apparent, the degree of grammaticization of an intonational system is still unknown. An exception is a characteristic ABSL expression noticed in several signers under age 30: an eye squint with mouth tautening and widening that tends to co-occur with time expressions that are manually signed at the beginning of a discourse segment.

Figure (5) ABSL facial expression indicating temporal frame of reference

Using the prosodic envelope as a key to syntactic structuring is showing itself to be a promising strategy. The first indication of this is signing rate, a variable used in the Nicaragua study (Kegl et al. 1999), which reported that the second cohort (74 signs per minute) signs considerably faster than the first (59 signs per minute). We timed a minute of signing from the middle of a narrative of one fluent and animated Abu Shara signer from the older generation (age 40) and one from the younger generation (age 17). We compared our results to the rate of an ISL signer from a previously recorded narrative. The young Abu Shara signer signed significantly faster than the older signer: 126 signs per minute versus 76 signs per minute. The ISL signer signed 160 signs in one minute. While it is clearly too early for firm conclusions, it is tempting to speculate that the similarity between the average speed of the second Nicaraguan cohort and that of the older ABSL signer reflects the fact that both are second-generation signers; the 40-year-old ABSL signer is the son of a first-generation deaf man, now deceased.

In advance of morphological and syntactic analyses, which await complete glossing of the data and close work with consultants from the village, it is still possible to get an indirect measure of syntactic complexity by looking at the way information is packaged prosodically. Of relevance here is the constituent known as the intonational phrase, described above. It is not surprising that the prosodic cues setting off this constituent in ISL and ASL are very salient, as they mark off the highest prosodic constituent within a sentence. Hypothesizing that salient cues such as change in head/body position and facial expression may also mark IPs in other sign languages, we counted the number of expressions that were so marked in ABSL in signers from two generations, and compared the results to those of the ISL signer. More importantly, we counted the number of signs in each IP. If a sign language has syntactic complexity, then it is expected that an IP will consist of several signs, which must necessarily have some syntactic relation to one another.

Here are the results. The older ABSL signer had 45 IPs in one minute of signing and an average of fewer than two signs per IP. The younger ABSL signer had 29 IPs and an average of 4.5 signs per IP. The ISL signer had 40 IPs in a minute, with 4.5 signs per IP.
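Both measures reduce to simple ratios over a timed stretch of narrative. As a minimal sketch (the function name and data layout are our own, purely illustrative), using the raw one-minute counts reported above:

```python
# Compute the two prosodic measures used in the text from raw counts:
# signing rate (signs per minute) and mean signs per intonational phrase (IP).

def prosodic_measures(total_signs, num_ips, minutes=1.0):
    """Return (signs per minute, mean signs per IP) for a timed stretch."""
    return total_signs / minutes, total_signs / num_ips

# One-minute counts reported in the text: (total signs, number of IPs).
counts = {
    "ABSL, older signer (age 40)":   (76, 45),
    "ABSL, younger signer (age 17)": (126, 29),
    "ISL signer":                    (160, 40),
}

for signer, (signs, ips) in counts.items():
    rate, per_ip = prosodic_measures(signs, ips)
    print(f"{signer}: {rate:.0f} signs/min, {per_ip:.1f} signs per IP")
```

Averages computed this way land close to the figures quoted above; small discrepancies reflect rounding in the reported values.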

The older ABSL signer very often had only a single sign in an intonational phrase. In addition, many of these signs are pantomimic in nature, extending in time, space, and rhythm analogically with the event being described. The impression is of a series of broadly depicted events, each described with one sign or gesture (maximum two). The younger signer had far fewer IPs in a minute of narrative, despite the fact that his signing, it will be recalled, is much faster. This was possible because many IPs had several signs – indicating that an event within an IP involves a series of words that have some functional relation to each other. These differences further motivate the scenario sketched above, which will be tested rigorously across the ABSL generations.

The ISL signer had many more IPs because her signing was much faster, but a similar number of signs per IP on average as the younger ABSL signer. We note, however, that there was another difference in addition to speed between the young ABSL signer and the ISL signer. The ISL signer’s narrative was very rhythmic; most of the IPs consisted of about four signs. The young ABSL signer’s signing was characterized by the opposite pattern. Although the average number of signs per IP was about the same as was the case for the ISL signer, the actual phrases varied greatly in length. Many had only one or two signs, while other IPs contained as many as seven signs. Although it is premature to draw firm conclusions from this picture, it may suggest less regularity in the internal syntactic structure of ABSL compared to ISL – the latter, a language of similar chronological age but developed under very different circumstances.

ABSL Lexicon. A central goal of our study of ABSL since the beginning of our research has been the preparation of a dictionary or lexicon. We have found that Abu Shara signers understand readily the value of a dictionary to the community and that they respond easily to the main task involved: naming pictures presented on a laptop computer. Indeed, this task has been a very successful introduction to our research for almost all our consultants. A dictionary serves a number of social, practical, and scientific purposes. We outline here its scientific value, and turn to elicitation methods in Section D.

The most striking evidence that sign languages are languages in the strict sense of the word and not some other form of communication is the fact that all known developed sign languages are organized primarily around discrete words, which can be easily recognized, not in terms of meaning, but rather by their ‘phonological’ structure: across sign languages, words tend to be composed of a single prosodic unit, sometimes called a syllable (Coulter 1982; Sandler 1999a). This sort of structure has been accounted for by positing ‘monosyllabic’ Location-Movement-Location templates to which signs generally conform, whether they are morphologically simple or complex (Sandler 1989, 1993, 1999a).

Nonetheless, the words of a new sign language must evolve in large part from gestures, many of them iconically motivated. Unlike words, gestures are gradient rather than discrete, are often not conventionalized, and may incorporate referential and analogical properties in an amalgam that defies classification in any particular lexical category. ABSL provides us with a rare opportunity to investigate the development of sign language words from gestures, since the array of signers that we have access to represents a continuum in the development of the language since near its inception. We expect to see that some vocabulary items are phrases or compounds, which in later generations may become lexicalized in a simplification process (Supalla, 2002). Classifier constructions may provide a source for lexical words in ABSL as in other sign languages -- for example, the lexicalization of descriptions of domestic animals such as horse, cow, and goat. In this case, the source handshapes and aspects of movement have become conventionalized, but the constructions are not yet lexemes. Their systematic combination, constrained by phonological word requirements, and the assignment of lexical category to these forms will complete the lexicalization process. We will be looking for lexicalization from both sources -- phrases or compounds, and classifier constructions -- in our study.

We believe that our preliminary research, taken together with our initial findings in the target population, puts us in a solid position to conduct a detailed and rigorous investigation of the emergence of a grammatical system in the language of Abu Shara.


Design. We will design videotaped materials for elicitation of short sentences reflecting functions usually expressed in sign languages through localization in space, verb agreement, classifier constructions, and prosodic markers. We will also present longer stimuli, such as short animated cartoons, to elicit discourse-length corpora. Photographs of simple objects in specific arrangements with one another will be used to elicit classifier handshapes and combinatorial structures. An additional aspect of our project is the compilation of a dictionary of ABSL as our contribution to the community. The benefit to us will be to amass a list of vocabulary items across generations, and to characterize the development of the lexicon. We will elicit words through pictures from various sources, including pictures of man-made objects, structures, plants, animals, and people in the village. Data will be collected during the biannual visits of the American investigators and, in between, by the Israeli investigators. The data will be digitized onto DVDs and shared by all investigators.

Each of the investigators and consultants brings different and complementary expertise to the project, and each will analyze data accordingly, while interacting with and contributing to all aspects of the project. Between visits, we will communicate by email, post, and videoconferencing, for which all four universities are well equipped. In the village, we will work with one or two local organizers, who will locate individuals according to our requirements (e.g., according to age), and arrange for us to meet with specific subjects during our visits.

We will begin by designing materials to elicit the specific structures we wish to study, grouped according to the “Projects” below. We will then collect data on videotape using these materials for the oldest signers (hearing siblings and spouses of the first generation of now-deceased deaf signers), older signers (in their 40s), and the youngest children able to respond to our materials (before they reach school). We will then move on to document the diachronic spectrum by videotaping deaf and hearing signers of all intervening ages. We will videotape about 20 of the approximately 80 deaf people in the village.

The first signers in the village were born about 70 years ago and the next generation of deaf people (the first for which we have data) are now in their 40s. Due to the way households are organized in the community – in this Bedouin community, one man may have 2-4 wives and each wife typically has several children – the clear distinction between generations begins to break down. We make crisp divisions at the two ends of the spectrum, isolating the oldest living group of signers, those in their 40s, and the youngest, those under school age, from the rest. All the intervening ages must be considered along a continuum, since all live in large extended families.

We will also collect data from hearing signers in deaf families for comparison. Hearing signers have access to spoken and written language, which the deaf signers in the village do not. At the same time, deaf signers rely solely on visual reception for all linguistic communication, which hearing signers do not. We will begin with two older people who are expected to know the sign language of their deceased deaf family members well: a brother and a wife of a deaf man.

In the last year of the project, we intend to run our materials with ISL signers. A good deal is known about this language (Meir & Sandler, in press), which is the same chronological age as ABSL, but formed as a true creole with input from other sign languages and developed in a community now numbering 8,000, with its own culture and institutions. These comparisons are expected to carry over into future research. A tentative timetable follows:

Year 1 Develop elicitation materials and conduct pilot studies across generations

Year 2 Oldest and youngest signers

Year 3 Signers of intervening ages

Year 4 Youngest signers after entering school

Year 5 Complete coding and analysis. Compare the structure of ABSL with that of ISL

Project 1: Argument structure and verb agreement. In Section B, we established the relation between verb agreement and argument structure. In studying the verb agreement system of ABSL, we will investigate how such a complex grammatical system might emerge and evolve. We suggest three hypotheses with respect to the emergence of agreement verbs. Hypothesis (1) states that the agreement system with all three verb classes, iconically motivated as it is, is born spontaneously and immediately in any sign language. If this is not the case, then we must assume that the system derives from a simpler one that includes only plain and spatial verbs, giving rise to two other hypotheses. Hypothesis (2): agreement verbs develop diachronically from spatial verbs, by adding marking for the function of transfer to the locative path of those verbs. Hypothesis (3): agreement verbs evolve from plain uninflected verbs. Plain verbs are simpler than the agreement verbs because they show no inflection at all. Spatial verbs may be simpler than agreement verbs because they represent iconically based locative notions and not grammatical relations. In terms of the Locative Hypothesis, spatial relations are linguistically more basic in that they serve as structural templates for other expressions (Anderson 1971). Spatial verbs also seem to be less linguistic in their use of space, in that they use space analogically (e.g., Cogill-Koez, 2000).

We now look at these hypotheses and the predictions they make in the context of our preliminary data from ABSL. First, we may safely surmise based on the Nicaragua study, and from our preliminary investigation of ABSL, that the agreement system described above, though possibly inevitable in a sign language, does not burst forth overnight. It is therefore possible to eliminate hypothesis (1) as unexplanatory, with no further ado.

Hypothesis (2), that agreement verbs evolve from spatial verbs, is motivated by the fact that all sign languages exploit the spatial medium to encode visuo-spatial concepts, explaining the presence of similar verb agreement systems in all established sign languages that have been studied (AMS, AMPS). It is also compatible with the Locative Hypothesis (Anderson 1971), which regards spatial relations as more basic than other grammatical relations, and as a template on which other grammatical relations are built. In this view, agreement verbs develop from spatial verbs, by grammaticizing the iconic use of shapes and their motion and location in space. In other words, sign languages develop by way of lexicalization of classifier predicates. They begin with descriptive and spatialized cores, which, through a process of conventionalization, become "frozen" as lexicalized verbs, adjectives and nouns. So the spatialized CARRY-BY-HAND later becomes lexicalized as GIVE. Similarly, PEEL-BANANA becomes BANANA. A suggestion along these lines has been made by Janis 1992: "Non-locative verbs [i.e., agreement verbs/AMPS] are lexicalized forms of particular classifier predicates" (p. 263). This course of development is compatible with Meir’s (1998) analysis, showing that agreement verbs and spatial verbs share a basic semantic structure, but that agreement verbs contain an additional morpheme: TRANSFER. This additional element of complexity encodes the grammatical relations between the subject and object arguments (and not between their spatial thematic roles). Following this line of thought, it is possible to regard agreement verbs as more grammaticized than spatial verbs, as the latter encode only analogical locations, while agreement verbs encode grammatical roles like subject and object.

This hypothesis makes the following predictions:

1. The basic distinction in the initial stages of the language is between spatial verbs and plain verbs. As the language matures, the class of spatial verbs will divide into two classes, one indicating actual movement in space, the other, transfer.

2. Some characteristics of verb agreement (e.g., encoding a distinction between first and second person) should occur early in the life of a language. This is because the signer and the addressee are situated at different locations, a distinction encoded by spatial verbs.

3. Verbs of transfer (Jackendoff 1990) (e.g., GIVE) should initially behave like spatial verbs (e.g., CARRY-BY-HAND). That is, the locations that they encode should be interpreted literally, and not as representative of grammatical roles like subject and object (Padden 1988). As the language matures and as R-loci are used for non-spatial referents as well, verbs of transfer will develop more grammatical behavior; that is, they will move between R-loci associated with the verb's grammatical arguments rather than between R-loci representing actual locations.

4. Verbs of abstract transfer (TEACH, ASK, SHOW, HELP), which have no spatial motivation, will initially behave like plain verbs and not like spatial verbs. After a class of concrete verbs of transfer arises (GIVE, SEND, etc.), some abstract verbs of transfer will begin to acquire the grammatical behavior patterns of agreement verbs.

The third hypothesis is that agreement verbs evolve from plain verbs, by developing a grammatical system of inflectional affixes. This happens by modulating the path movement of plain verbs with respect to locations in space associated with grammatical arguments. Such a scenario is compatible with the original classification suggested by Padden, which regards agreement and spatial verbs as two different entities. This hypothesis makes different predictions concerning the initial stage of the system as well as its course of development:

1. In the initial stages, the distinction will be between spatial verbs and plain verbs; all verbs of transfer, concrete and abstract, will pattern like plain verbs.

2. As the language matures, the class of plain verbs will split into two classes: plain verbs and agreement verbs.

3. Agreement verbs need not resemble spatial verbs in form. For example, it might be that agreement verbs use different portions or axes of the signing space than spatial verbs.

Methods and materials. Our goal is to study the diachronic development of grammatical complexity in the verb agreement system and argument structure of ABSL, and to test the three hypotheses outlined above. To that end, we will design materials for eliciting different types of verbs (according to the tripartite morphological and semantic classification of Padden 1988 and Meir 1998), different types of argument structures, and different types of referents (1st, 2nd and 3rd person). These materials include video clips and still photos for eliciting different types of verbs and argument structures. In designing the video clips, we intend to follow the design of video clips described in Senghas et al. (1997, 552-553) for eliciting different types of verbs. These clips showed the same three people in the same spatial relations to each other, performing different types of actions. The actions were intended to elicit verbs from four different verb classes: verbs with one animate argument (SLEEP, CRY), verbs with two arguments, one animate and one inanimate (PULL, TEAR), verbs with two animate arguments (PUSH, LOOK-AT), and verbs with three arguments, including both regular and backwards verbs (GIVE, SHOW, and TAKE). To these classes we add two more: verbs with one inanimate argument (FALL, BREAK), and verbs denoting motion and spatial relations (PUT, HAND-TO, MOVE(transitive)).

The data elicited by these clips will be analyzed according to the following parameters: (a) consistent use of space when referring to the referents. (b) word order. (c) number of explicit arguments per verb. (d) use of space in the forms of verbs (that is, (i) whether the path movement of the verb encodes the grammatical/thematic roles of the referents; and (ii) whether different verbs show different patterns of the use of space). (e) use of non-manual markers, such as direction of head, shoulders, torso and eye-gaze, to refer to referents.
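To make parameters (a)-(e) concrete, the record coded for each elicited clause might be sketched as follows. This is an illustrative data structure only, with hypothetical field names and a hypothetical example response; it is not the project's actual coding instrument.

```python
# Illustrative sketch of one coding record for an elicited clause,
# following parameters (a)-(e) above. Field names and the sample
# values are hypothetical, for exposition only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClauseCoding:
    gloss: str                        # rough gloss of the elicited clause
    consistent_space: bool            # (a) consistent use of space for referents
    word_order: str                   # (b) e.g., "SOV"
    explicit_arguments: int           # (c) number of overt arguments per verb
    path_encodes_roles: bool          # (d-i) path movement encodes referents' roles
    spatial_pattern: str              # (d-ii) the verb's pattern of use of space
    nonmanual_markers: List[str] = field(default_factory=list)  # (e) head, gaze, torso

# Hypothetical record for a response to a three-argument (GIVE-type) clip,
# in which the verb's path is read literally, as with spatial verbs:
record = ClauseCoding(
    gloss="MAN WOMAN BOOK GIVE",
    consistent_space=True,
    word_order="SOV",
    explicit_arguments=3,
    path_encodes_roles=False,
    spatial_pattern="analogic",
    nonmanual_markers=["eye-gaze toward addressee locus"],
)
```

Aggregating such records across signers of different ages would then let each parameter be tracked along the diachronic continuum described above.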

In addition to the clips, which are intended to elicit sentences with 3rd person arguments, we will develop materials to elicit sentences with 1st and 2nd person referents. This is of particular importance for two reasons: first, verb agreement forms are more salient when one of the arguments is 1st person; hence, we might get clearer results in such cases. Second, the R-loci for 1st and 2nd person pronouns are not dependent on a system of establishing R-loci for non-present referents, since the signer and the addressee are present in the discourse situation by definition. Therefore, a sign language might have inflected forms of verbs for 1st and 2nd person referents before it acquires agreement with 3rd person referents. The materials that will elicit these forms will include video clips or still photos of the subjects themselves performing various actions (from the different classes mentioned above) with one of the investigators. The subjects will then look at these clips or photos together with the investigator (preferably in a subsequent visit), and will describe in signing the events documented in them. These data will be analyzed according to the parameters mentioned above, and compared with the same verbs with 3rd person referents.

Project 2: Classifier handshapes and combinatory structure. We find in our preliminary investigations of ABSL that despite their apparent iconicity, the use of classifier-type structures in space to show movement and location of objects is still unfolding in the new language. In the first part of our investigation, we propose to conduct elicitations to determine the inventory of handshapes in the language, i.e., whether ABSL signers have classifier handshapes in each of the four categories described in Emmorey (2002). One hypothesis is that the system develops “at once,” that is, all four categories (semantic, extension, limb, and handling or instrumental) develop roughly at the same time, and then over time, more handshape distinctions are added to each category. In other words, ABSL signers will show all four types of classifier handshapes because they are general to visual-gestural languages; however, because the language is young, there will be fewer exemplars in each category. The claim is that the categories represent the common dimensions of representation -- semantic vs. descriptive, animate vs. inanimate, body vs. limb, and handling vs. instrument – and that they will surface at about the same time.

An alternative hypothesis is that certain classifier handshapes or structures will appear before others. In other words, as the language develops, it adds not only more exemplars but more categories of representation. Following this hypothesis, we predict greater use of the body over the hands to depict human movement of objects. We have found from our preliminary work that older signers used the body to show the manner of movement, followed by a directional gesture to show the direction of movement, but younger signers showed more evidence of using hands to show the limbs walking in a forward movement, even if they did not specify the path of movement. Under this hypothesis, we predict that there is a body-hand hierarchy, with the expansion of categories and addition of more handshapes following the development of the grammar. As the most fundamental representational object, the human body is its own map: it depicts upper and lower, top and bottom, left and right, front and back. Mapping representation directly on the body allows signers to draw from general human principles to make meaning. As the grammar develops, perspective and planes of representation are conventionalized and the hands are increasingly used in space as components of the classifier system. Since the hands do not obviously represent front or back, left or right, the grammar of the language will define these dimensions.

We also predict that abstract and conventionalized categories will follow descriptive ones; entity classifiers will appear in the inventory after extension and instrument/handling classifier handshapes. Entity classifiers, such as VEHICLE in ASL, conventionalize the notion of “land-locked vehicles,” and stand in contrast to other vehicles of different properties, e.g., AIRPLANE, or “airborne vehicles.” The category of VEHICLE in ASL shows no iconicity in shape or visual characteristics, and covers submarines, wheelchairs, boats, cars as well as motorcycles and subway trains. Our investigation of ISL shows that the relatively younger language lacks any classifier comparable to ASL’s VEHICLE, and indeed has very few entity classifiers. It should be noted too that different established sign languages have different inventories of entity classifiers, which follows from the essentially abstract nature of this category of classifiers. The claim, then, concerns the preponderance of one classifier category over another.

In the second part of our investigation, we focus on combinatory constraints in classifier structures, in particular, the constraints on movement and handshape combinations. We have found that ABSL signers often do not use two hands together simultaneously to show positioning of two objects relative to one another. Instead, they use sequences of handshapes and then pointing to show that the two objects are located in the same space. In ASL and other established sign languages, there is extensive use of extension classifiers simultaneously in both hands, to show multiple properties of objects, e.g., that a cup has a toothbrush resting on top of it. From our preliminary ABSL data, we have learned that the use of handshapes in simultaneous combinations is not iconically motivated, but develops through a process of conventionalization. Furthermore, there are constraints on which forms can combine in this fashion: in ASL and other sign languages, entity classifiers cannot be combined simultaneously with extension classifiers. Over the life of the language, we expect to see more conventionalization of movement and handshape combinations, and more conventionalization of the bimanual potential of sign languages to depict locational and positional relationships between objects.

Methods and materials. In our preliminary investigations, we used video clips from the Max Planck Institute for Psycholinguistics, which depicted objects moving in space, or individuals manipulating objects in space. We also used still pictures of objects placed on a surface in various arrangements with one another. Because we need a wider range of actions and objects in order to elicit from each of the classifier categories, we will develop our own video clips and still photographs in order to elicit forms of different combinatory possibilities and using different classifier handshapes. The video clips will show humans engaged in actions that involve changes in position and location: sitting behind, in front of and next to an object; walking up to a table, walking past a table and walking behind a table; humans meeting each other, walking past each other, walking behind another. Additional video clips will show humans engaged in actions with objects: handling, carrying, and transferring objects. To focus on bimanual possibilities, all clips will show pairs of objects: two humans, a human and an inanimate object, and two inanimate objects. After viewing the clips, we will ask signers to describe the action to another ABSL signer. The still photos will use objects common in the Bedouin environment arranged on a table surface with one another in order to elicit descriptions that would typically involve classifier handshapes. To elicit combinations of classifier handshapes, pairs of ABSL signers will be seated next to each other, separated by a blind, each viewing a matched array of pictures. The pictures will vary in one dimension, such as plane or location, and the signer viewing the description must select a picture from his or her own array that matches the description. We have successfully used this technique in our earlier investigation, and we find that ABSL signers understand quickly what the task requires.

Project 3: Prosodic structure. We are interested in using prosodic structure as a preliminary diagnostic for morphosyntactic constituents, because prosody is known to be systematically related to such constituents, if not strictly isomorphic with them (Nespor & Vogel, 1986). The phonological word typically has clear-cut characteristics in sign languages, which can distinguish the word from gesture. Similarly, the saliently marked intonational phrase is expected to cue clause-level constituents, and the more subtly delineated phonological phrase to cue constituency at the level of the syntactic phrase. We also wish to trace the emergence of an intonational (superarticulation) system from emotional facial displays.

Methods and materials. The Word. We will compare signs elicited with individual pictures to the same signs produced in short utterances, in order to ascertain the form of the word. This analysis will proceed together with other tests for wordhood included in the Lexicon Project. The phonological phrase and the intonational phrase. Using short utterances elicited for the other projects as well as narratives based on animated cartoons, we will use the coding methods and categories developed for research on the prosody of ISL and Sign Language of the Netherlands in order to parse strings into prosodic constituents. These analyses will be combined with morphological and syntactic analyses to determine how grammatical complexity emerges. A coding page has the gloss of an utterance at the top, with lines underneath for coding all nonmanual and manual behavior (simplified example in 1 below). Individual lines for each facial articulator (brows, eyes, mouth, etc.), for body and head position, for the rhythmic behavior of the hands, and for assimilation and spreading effects are all included. The facial articulations are coded using Ekman and Friesen’s Facial Action Coding System. Two Israeli research assistants have been certified in the use of this system and have considerable experience in using it. For each facial articulator, the number of the action unit is included and its extent vis-à-vis the glossed words at the top of the page is indicated with lines.

Example 1. ‘The book he wrote is interesting.’

             [[book-there ]P [he write ]P ]I    [[interesting ]P ]I

brows        up (1b,2b)----------------------   down (4c)---------
eyes         squint (6b)-                       squint (5b)-------
mouth        ‘O’ (25)----------                 down (15b,17c)----
head         tilt (52b)----------------------   forward (57)------
mouthing     ‘book’--------                     ‘interesting’-----
torso        lean left-----------------------   forward right-----
hold                                            =
reiteration  -1 x 3                             x 4
speed        slow
size         big                                big

The system will enable us to determine whether facial expressions are being conventionalized, systematized, and combined into complex superarticulation patterns. It will also provide a tool for testing the predictions made in Section C concerning the emergence of complexity in the utterances of Abu Shara Bedouin Sign Language.
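A coding page of the kind shown in Example 1 can be thought of as a set of parallel tiers, each holding labeled spans aligned with the glossed words. The sketch below illustrates that idea only; the tier names, the span convention, and the lookup function are our illustrative assumptions, not the project's actual coding software (the action-unit numbers follow Ekman and Friesen's system, as in Example 1).

```python
# Illustrative model of the coding page in Example 1: each tier maps to a
# list of (label, start_word, end_word) spans, where the word indices point
# into the glossed utterance and end_word is inclusive. Tier names and the
# span convention are assumptions for exposition.
utterance = ["book-there", "he", "write", "interesting"]

coding_page = {
    "brows":    [("up (1b,2b)", 0, 2), ("down (4c)", 3, 3)],
    "eyes":     [("squint (6b)", 0, 0), ("squint (5b)", 3, 3)],
    "mouth":    [("'O' (25)", 0, 2), ("down (15b,17c)", 3, 3)],
    "head":     [("tilt (52b)", 0, 2), ("forward (57)", 3, 3)],
    "mouthing": [("'book'", 0, 0), ("'interesting'", 3, 3)],
    "torso":    [("lean left", 0, 2), ("forward right", 3, 3)],
}

def articulations_at(word_index):
    """Return all nonmanual values co-occurring with a given glossed word."""
    return {tier: label
            for tier, spans in coding_page.items()
            for (label, start, end) in spans
            if start <= word_index <= end}

# Tiers that change together at the same point (here, between 'write' and
# 'interesting') are candidate cues for an intonational-phrase boundary.
boundary_changes = {tier for tier in coding_page
                    if articulations_at(2).get(tier) != articulations_at(3).get(tier)}
```

Representing the page this way would make it straightforward to ask, for any stretch of signing, which articulations co-occur and where several tiers change at once, which is exactly the kind of evidence relevant to detecting conventionalized prosodic boundaries.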

Project 4: Compilation of lexicon. The dictionary project is a good vehicle for the study of words in isolation. We have already seen significant individual variation in the form and number of signs used in the older generation of signers to name pictures. We will compare these utterances to those of younger signers in order to attempt to determine what it means to say that a ‘word’ exists in the language. Candidates for ABSL words will be analyzed using prosodic characteristics of words in sign languages. Other lexical properties such as argument structure will be analyzed according to principles developed in Sections B and C. As part of the dictionary project, we will also elicit sentential contexts for each entry, which is expected to give information about lexical category and other syntactic and semantic properties.

We will keep close track of lexical variation, especially the extent of such variation. Abu Shara is a very close-knit community. Even within the group, individuals tend to associate most with members of their own extended family, women almost exclusively so. We therefore expect any lexical variation to correlate not just with the age of the participant, but also with social networks (which quite closely parallel genealogy, which is well mapped out). The extent and manner of variation will provide a fine-grained measure of how a language spreads through such networks.

Methods and materials. The fact that ABSL has emerged in such a stable culture means that we can rely on that culture in preparing stimuli for dictionary elicitation. The initial set of stimuli consists of culturally appropriate photo images (displayed on a laptop) of items from the 100-word Swadesh list, adapted for use with sign languages. The people in these images, for example, are dressed in traditional Arab clothing; the animals are those familiar to the group. Other stimuli will be similarly appropriate.

We will use standard tests for lexical categoryhood, such as different types of modification. We have found instances in which signers combine two signs to describe an item in a picture. We will explore the possibility that such expressions are compounds, investigating whether such combinations have rigid order, whether they can be interrupted by other words, and whether they have idiosyncrasies of form or meaning.

Summary of Research Design and Methods

Our goal is to learn how a human community creates language de novo, and to trace the early development of a new language across generations. The target language is a sign language, leading to the expectation that certain types of grammatical structure will develop early, boosted by iconic motivation and visual representation. We will focus on those structures – e.g., verb agreement and classifier constructions – while documenting and analyzing other grammatical patterns as we encounter them. That the target forms are universal across sign languages makes them no less linguistic, and studying their structure and development will clearly be significant for our understanding of linguistics and linguistic universals in general. We will also trace the emergence of the lexicon in ABSL. Finally, we will exploit the ‘prosodic envelope’ as a point of entry into the system. Documenting the grammaticalization of facial expression will reveal how linguistic intonation emerges from unsystematic emotional displays. By investigating the use of nonmanual markers to delineate prosodic constituents, we expect to gain insight into the structure of propositions in the language at different stages of its development.