CRL is a thriving research organization spanning all aspects of language. We cannot do justice to all the research conducted by CRL members. Below is a summary of several of the major research projects funded through grants administered by CRL.
The goal of all language scholars is to uncover the fundamental nature of human language – e.g., what makes a communication scheme a language? What are the core elements of a language? What are language universals? One way of doing this is to observe the emergence of a new language arising without influence from other languages. This is very hard to do, because humans have been using language for tens of thousands of years, and there are no truly new languages spoken today. Sign languages are the only natural languages that can still be caught in the act of being born – if one is fortunate enough to be at the right place at the right time.
The CRL research team headed by Carol Padden at UCSD (Dept. of Communication) and Wendy Sandler (University of Haifa) has been witnessing just such an instance of language emergence. Mark Aronoff, SUNY Stony Brook, and Irit Meir, University of Haifa, are co-investigators on the project. With support from the National Institute on Deafness and Other Communication Disorders (NIDCD), these researchers have been investigating a new sign language that arose in isolation in a Bedouin village in Israel with a very high proportion of deaf people. Fortunately, this research team has caught the language in time to document its characteristics and structure just one generation after it first appeared. Of special interest is determining what sorts of information can be communicated in such a young language and chronicling the conventionalized means that have developed for conveying ideas.
The CRL research team has found that in the space of just one generation, a language has been born which conveys a wide range of information important to any community: information about social relations and activities, home construction methods, fertility, national insurance, and even folk remedies now out of use. They have also discovered that this new language has quickly developed a grammatical structure – that is, a means for encoding the relations between the do-er of an action, the action itself, and the recipient of the action. Al-Sayyid Bedouin Sign Language (ABSL) encodes this information through the order of words in a sentence, which is: (S)ubject, then (O)bject, and then (V)erb (for example, mother daughter feed, meaning ‘The mother fed the daughter’). Importantly, this SOV order differs from that of any of the other languages in the area – Arabic, Hebrew, or Israeli Sign Language. In the coming years, this research team will be exploring the roots of this new structure, its other characteristics, and how the language continues to develop and change with each new generation of signers.
Using this novel approach, CRL investigators have identified a fundamental trait of human language: the capacity to develop systematic syntactic structure very early in the emergence of a communication system.
Linguistic Anthropologist and Anthropology Professor John B. Haviland concentrates on Tzotzil (Mayan) speaking peasant corn farmers from Zinacantán in Chiapas, Mexico, and on speakers of Guugu Yimithirr (Paman), especially at the Hopevale Aboriginal Community, near Cooktown in northern Queensland, Australia. He is a certified legal interpreter for Tzotzil, and he founded and directs UCSD's Linguistic Anthropology Laboratory. A current project focuses on how and why conventionalized "words" seem to form in an emerging sign language in Zinacantán.
Spontaneously created sign languages provide a natural laboratory for exploring the human language capacity, allowing us the only possible glimpse of how language can be created without direct "linguistic input." How do words (elements whose conventionality and abstractness allow them to be emancipated from particular contexts of use) come into existence, and why? What sorts of paradigmatic semantic, metasemantic, and pragmatic categories accrue to systems of linguistic communication? How do syntax, morphology, and phonology emerge as "design features" of language?
A miniature "speech community" among speakers of Tzotzil (Mayan) in the township of Zinacantán in Chiapas, Mexico provides an opportunity to explore such questions about the nature, origins, and evolution of language. The research analyzes the first generation of a newly evolved manual communication system created by three deaf siblings and their two hearing age-mates in a relatively isolated community of Mayan Indians. A hearing infant, son of the eldest deaf speaker and now just over three years of age, is simultaneously learning sign and spoken Tzotzil. A central part of the research is detailed longitudinal documentation of this child's bilingual acquisition of Tzotzil and Zinacantec Family Homesign (ZFHS)—the beginning, and possibly also the end, of a second generation of speakers.
The study involves weekly video recordings of the bilingual signing child over two years that include the expected period of his most explosive language growth. It combines recording of the child with continued documentation of caregiver ZFHS and spoken Tzotzil, using both spontaneous interaction and semi-structured experimental techniques. By linking a description of ZFHS as evidenced by adult usage to the infant's socialization into language through interactions with both deaf and hearing caregivers, the study intends to contribute to scientific understanding of how human communicative needs recruit, transform, and structure complementary modalities to fashion language itself. (Support from NSF "RAPID" grant in linguistics.)
It is well established that deaf children without access to language from an early age are at risk for delays in language, literacy, and other academic outcomes. Over 90% of deaf children are born to hearing parents, and thus their early linguistic environment is highly atypical and frequently impoverished. However, little is yet known about how sign language is processed, and how processing ability may affect vocabulary and other linguistic skills in deaf children and adults. Amy Lieberman received funding from NIDCD to investigate processing of American Sign Language (ASL) through the development of a novel paradigm. The paradigm is an adaptation of the looking-while-listening (LWL) paradigm that has been used extensively with hearing individuals from infancy to adulthood and has yielded important revelations about processing speed and efficiency in a range of populations. The goal is to obtain data on real-time processing of sign language by deaf children and adults, and to reveal possible differences in processing between individuals who have been exposed to ASL from birth and those who acquire ASL at later ages. This is approached by measuring accuracy and reaction time (RT) in the real-time sign processing task in two groups of deaf adults (native and late learners) and two groups of deaf children (native and non-native signers). Understanding how sign language is processed by native and non-native learners will reveal important insights about language processing in sign, effects of age of acquisition and quality of input on lexical processing, and the relationship between processing measures and language proficiency.
This project investigates the means by which humans achieve real-time linguistic understanding from the spoken and written word. Roger Levy has funding from NICHD to investigate how the two fundamental processes of word recognition and grammatical analysis—commonly understood as independent of one another—are in fact deeply intertwined, how they jointly recruit the two key information sources of sensory input and linguistic knowledge, and how they guide not only moment-by-moment understanding but even detailed patterns of eye movements in reading. This work lays the foundation for deeper understanding and improved treatment of both language disorders and age-related changes in reading and spoken language comprehension, which can arise as a consequence of processing breakdowns involving either or both of these two key information sources.
CRL has offered formal interdisciplinary training opportunities to graduate students and postdoctoral researchers of language at UCSD since 1993. This training program, funded by the National Institute on Deafness and Other Communication Disorders (5 T32 DC00041-12), was directed by Professor Elizabeth Bates from 1993 to 2003, and by Professor Marta Kutas from 2004 to the present. Six predoctoral fellowships are awarded each year to graduate students from the departments of Cognitive Science, Psychology, Linguistics, and the SDSU-UCSD Joint Doctoral Program (JDP) in Language and Communicative Disorders. Typically, these fellowships have been used both to recruit highly qualified incoming students and to reward continuing graduates for excellence in scholarship and research activities. On average, two have gone to Cognitive Science, two to the JDP, and one each to Psychology and Linguistics. In addition, two postdoctoral fellowships (mentored by an affiliated faculty member, participating or consulting) are awarded each year following a competitive, nationwide search; most postdoctoral fellows receive a second year of support, contingent on their standing.
We have funded seven post-doctoral fellows in the past five years. Of these, three now have faculty-equivalent positions, three have lecturer or adjunct positions at universities, and one is continuing in the program. We have trained 27 graduate students over the past five years, of whom eight have graduated or left the program. Of these eight, three are currently postdoctoral researchers, two have faculty positions, one is working in a staff research position at UCSD, one is working in the health care industry, and one is working in the Office of Graduate Studies at UCSD. Looking back over the past 10 years of the CRL training program, 35% of our trainees who are no longer in the program currently hold faculty positions at other universities.
The CRL training program emphasizes new technologies and new theoretical frameworks in the cognitive science and neuroscience of language processing (e.g., advances in neural imaging, electrophysiological and behavioral studies of real-time language processing, computer simulations of language learning and breakdown). The program integrates the expertise, ideas, populations and technologies that are available in abundance across this community, and places them at the disposal of young scientists interested in the mental and neural mechanisms that underlie language learning, language use and language disorders.
To date there have been six components to this interdisciplinary approach to language research, each headed by a member of the executive committee but represented by many CRL faculty (these will be re-organized in the upcoming renewal to NIDCD due to new faculty arrivals, several untimely deaths among the participating faculty, and changes in research strengths in the community):
1. Communication Disorders includes the studies of adult aphasia and childhood language disorders (B. Wulfeck)
2. Psycholinguistics includes language processing in normal (monolingual and bilingual) populations and in language-disordered populations (adult aphasia, childhood specific language impairment or SLI) using converging evidence from real-time behavioral, event-related brain potential (ERP), and functional magnetic resonance imaging (fMRI) techniques, as well as lesion studies (D. Swinney)
3. Multilingual & Comparative Language Studies, a new component, emphasizes multilingualism (processing, learning, disorders), loss and relearning of "heritage" languages in immigrant populations, and comparative studies across typologically distinct language groups (M. Polinsky)
4. Neural Network Studies of Language emphasizes simulations of language learning and language breakdown under a range of different assumptions about the structure of the system and the context of learning and loss (J. Elman)
5. Electrophysiological Studies. This neural imaging technique (and its magnetic counterpart) is used to study language comprehension and production in normal children and adults, and in children and adults with neurological impairments and/or behaviorally defined communication disorders; it is complementary to fMRI (M. Kutas)
6. Functional Magnetic Resonance Imaging (fMRI). This emerging imaging technique is being used by a wide variety of training faculty in numerous studies of higher cognitive processes in humans and other primates (M. Sereno)
All pre-doctoral and post-doctoral students specialize in (at least) two of the six areas (a major and a minor), and receive some exposure to all six areas through laboratory rotations, coursework, and numerous CRL-sponsored meetings. Courses and laboratory rotations are offered by a larger faculty of scientists at UCSD, San Diego State University, and the Salk Institute for Biological Studies. All trainees meet weekly during the academic year, on Tuesdays from 4:00 to 5:30 pm, for the CRL seminar series; trainees are expected to give a research talk at this weekly meeting at least once during their training tenure, although most do so more often. Fellows (and other students) use this meeting to get to know faculty from departments other than their own (and vice versa), to learn about new techniques and new data from the different laboratories in the community, to practice poster presentations and talks for upcoming meetings, and to start discussion groups and collaborations. In the past two years (2004-2006), this meeting has been preceded by a half-hour social gathering (tea time).
All trainees also receive instruction in the responsible conduct of research by taking one of the several courses available at UCSD (e.g. COG SCI 241, Ethics and Academic Survival Skills).
This project, funded by the NIDCD and smaller grants from the McDonnell Foundation and NATO, has involved several investigators at UCSD and abroad. Elizabeth Bates was project director for 20 years with Nina Dronkers taking over in 2004. Other investigators who have worked on this multidisciplinary, international effort include: Beverly Wulfeck (CRL & SDSU), Mark Appelbaum (UCSD Psychology), Mark Kritchevsky (UCSD Neurosciences), Rick Buxton, Larry Frank, and Eric Wong (UCSD Radiology), Fred Dick (CRL), Marty Sereno (Cognitive Science), Luigi Pizzamiglio (Fdn. Sta. Lucia, Rome, Italy), Stefano Cappa and Daniela Perani (Univ. of Milan, Italy), Ovid Tzeng and Daisy Hung (National Yang Ming University, Taiwan), Angela Friederici (Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany), Boicho Kokinov (New Bulgarian University), Anna Szekeley (University of Budapest, Hungary), and Antonella Devescovi and Simonetta D’amico (the Universita La Sapienza, Rome, Italy), among other students and postdoctoral fellows.
This five-year cycle of the “Cross-Linguistic Studies in Aphasia” grant has examined language processing and language breakdown in neurologically normal speakers and in brain-injured aphasic speakers of three different languages (English, Italian, and Chinese). These languages differ dramatically in their linguistic structure and thus provide an opportunity to evaluate which features are universal and which are specific to a particular language. By examining the effect of brain injury on speakers of these particular languages, the researchers also aim to determine which brain-language relationships are universal. These studies with aphasic patients are complemented by functional magnetic resonance imaging (fMRI) investigations with neurologically normal speakers of the same three languages.
Despite the sad loss of our ORU director, mentor, friend, and project leader Elizabeth Bates midway through this funding cycle, this work has continued and flourished through the commitment and tenacity of the research group and its grounding and integration with CRL. Indeed, UCSD students, post-docs, faculty, and invited international collaborators have converged at CRL to produce several significant works, some of which are highlighted below.
Inferences about the functional organization of the mental lexicon (the dictionaries in our heads), often made from the time it takes individuals to read words and name pictures, have revealed the importance of factors such as frequency of word usage and word length. Despite the fact that there are 5,000 or so languages in the world, many with very different characteristics, most of this work has been done in English. CRL researchers appreciate these differences and realize that generalizability demands that similar data be collected across the world’s languages. This was a primary motivation for the International Picture Naming Project (IPNP). Based on 520 object pictures and 275 action pictures, the IPNP provides timed picture-naming data in seven different languages, internationally accessible from a CRL website [http://crl.ucsd.edu/~aszekely/ipnp/]. Analyses of these norms provide invaluable and unparalleled information about what features influence the accuracy and speed of lexical retrieval across languages. CRL researchers have demonstrated that picture-naming data can be obtained (with and without sentence context) from children as young as 3 and from aphasic patients who are capable of single-word speech. The impact of this invaluable resource, provided by CRL to language researchers worldwide free of charge, will be evident for years to come. The IPNP models for language researchers a novel cross-language approach to the study of language processing.
In the same spirit, JDPLCD graduate Analia Arevalo and colleagues have completed the gesture norming project, in which nouns and verbs from the CRL IPNP have been coded based on participants’ gestural representations of them. Variables such as ‘manipulability’ have proved helpful in examining the relationship between meaning and sensorimotor properties, an important feature of the embodiment hypothesis, according to which concrete bodily experiences play a major role in how people think about abstract concepts. This in turn has led to specific predictions about what areas of the brain might be involved in understanding nouns, verbs, metaphors, objects, movements, gestures, and so forth.
Timed word reading and timed auditory word repetition norms have also been collected for the IPNP stimuli, with the aim of identifying the factors that influence these tasks equally and differentially. All of these have at times been used as a basis for inferences about the functional organization of semantic memory (i.e., the knowledge people have about people, places, and things, including words). From this normed set, Arevalo developed a three-modality “mini-battery” (picture naming, word reading, auditory word repetition) with which she assessed aphasic patients. Her analysis yielded a more specific characterization of aphasic patients than simply that they have language problems or word-processing difficulties: aphasic patients with frontal lesions were found to have significantly greater difficulty (relative to healthy controls) processing items involving hand imagery. An fMRI analog to the mini-battery, in English and in Italian, is under analysis.
Just which areas of the brain, if any, are essential for and/or specific to language processing? The answers to these questions are hotly contested in neurolinguistics. CRL researchers have been conducting behavioral and fMRI studies of non-verbal processing in search of neural substrates that are shared with language. CRL graduates Fred Dick and Ayse Saygin have published lesion data from aphasic patients as well as complementary fMRI from healthy adults showing substantial overlap in brain regions involved in processing environmental sounds and their linguistic equivalents. This supports the “shared resources” view according to which language is superimposed on more ancient sensori-motor systems that continue to subserve their non-linguistic functions. In related work, Saygin and colleagues have examined the neural mechanisms for the comprehension of linguistic and gestural actions and have explored brain areas involved in the lower-level perceptual processing of actions. This work reveals that lesions in the frontal analog of the human “mirror neuron system” are associated with deficits in non-linguistic action understanding, as well as with language deficits.
Clearly, injury to certain brain areas leads to various patterns of deficits in language processing. However, the specific inferences warranted from such deficits are likely to be very different if similar patterns can be elicited from healthy adults under pressure. And indeed, healthy adults subjected to various degrees and types of stimulus degradation and/or cognitive overload do show similar patterns of deficits. CRL research shows it is time to rethink brain areas and their functions!
Based on the datasets from aphasic patients in three languages, CRL researchers have analyzed the Italian data to show a “continuous symptom space” alternative to traditional aphasia taxonomies, in which each patient is a point in a multi-dimensional symptom space. By using canonical correlation and regression, these researchers have obtained valuable information typically lost by placing patients into all-or-none categories.
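The idea of treating each patient as a point in a continuous symptom space can be illustrated with a minimal sketch. The data and the function name below are hypothetical, and a simple principal-components decomposition stands in for the canonical correlation and regression analyses used in the actual studies:

```python
# Minimal illustration (hypothetical data; PCA stands in for the canonical
# correlation / regression analyses of the actual studies). Each patient is a
# row of symptom-severity scores; projecting onto the principal axes places
# each patient in a continuous symptom space instead of a discrete category.
import numpy as np

def symptom_space(scores):
    """scores: (n_patients, n_measures) symptom-severity matrix.
    Returns each patient's coordinates on the principal axes."""
    centered = scores - scores.mean(axis=0)
    # SVD of the centered data: rows of vt are the principal axes,
    # and u * s gives each patient's coordinates along them.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u * s

# Four hypothetical patients scored on fluency, comprehension, and repetition
scores = np.array([[3., 1., 2.],
                   [1., 3., 1.],
                   [2., 2., 3.],
                   [0., 1., 0.]])
coords = symptom_space(scores)  # one row of continuous coordinates per patient
```

Distances between patients in this space, rather than category labels, can then be related to lesion measures, which is the kind of information that all-or-none taxonomies discard.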
An interdisciplinary team of CRL researchers has developed a novel imaging technique, voxel-based lesion-symptom mapping (VLSM), that produces colored maps based on statistical values that “light up” the relationship between the severity of a behavioral deficit and the voxels (similar to pixels in computer images) in the brain that contribute the most to that deficit. VLSM combines patients’ reconstructed MRI scans with their behavioral data to test brain-behavior relationships at each 1x1x1 mm voxel. This is an important breakthrough tool since these lesion-symptom maps can be compared directly with those from functional imaging, thereby facilitating communication between the lesion and functional-imaging literatures.
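The voxelwise computation at the heart of VLSM can be sketched as follows. This is an illustrative toy version with hypothetical data and function names, not the CRL implementation: for each voxel, patients are divided into lesioned and spared groups and a t-test compares their behavioral scores.

```python
# Illustrative sketch of the core VLSM computation (hypothetical data and
# function names; not the CRL implementation). For each voxel, patients are
# split into lesioned vs. spared groups and a Welch t-test compares their
# behavioral scores; the resulting t-map highlights voxels where damage is
# associated with worse performance.
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

def vlsm_t_map(lesion_masks, behavior, min_n=2):
    """lesion_masks: (n_patients, n_voxels) binary array, 1 = voxel lesioned.
    behavior: (n_patients,) behavioral scores (higher = better performance).
    Returns (n_voxels,) t-values; NaN where either group is too small."""
    n_patients, n_voxels = lesion_masks.shape
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = behavior[lesion_masks[:, v] == 1]
        spared = behavior[lesion_masks[:, v] == 0]
        if len(lesioned) >= min_n and len(spared) >= min_n:
            # positive t: spared patients outperform lesioned patients
            t_map[v] = welch_t(spared, lesioned)
    return t_map
```

In practice, each patient contributes a lesion mask reconstructed in a common brain space, and the resulting t-map is thresholded with a correction for the very large number of voxelwise tests.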
VLSM is relatively easy to perform and offers a vast improvement in our ability to uncover brain-behavior relationships. The Voxel-based Lesion-Symptom Mapping algorithms are freely available online at http://crl.ucsd.edu/vlsm. Laboratories worldwide are applying this procedure to their patient groups. A CRL graduate, Ayse Saygin, established a CRL web site with links to papers using VLSM [http://crl.ucsd.edu/~saygin/vlsmpapers.html], including but not limited to CRL publications on sentence comprehension, speech production, grammaticality judgments, comprehension of actions and environmental sounds, verbal fluency, arithmetic, and executive functions. Dronkers and colleagues have received funding for two new grants based on this methodology. In ongoing work by CRL researchers, VLSM is being extended to the comparative study of aphasic patients in three structurally different languages (English, Italian, and Chinese).
The Project in Cognitive and Neural Development (PCND), created in 1990 as a project of the Center for Research in Language, is the institutional home for the NIH program project, “The Center for the Neural Bases of Language and Learning”. Funding is provided by a grant from the National Institute of Neurological Disorders and Stroke. The collaborative group of core researchers, who first came together in the late 1980s at the Center, has continued to work, learn, and produce research results for almost 25 years. The successful evolution of the multidisciplinary Center’s studies, informed by what “worked” in each funding period, has resulted in a well-structured, efficient, and productive research center with highly experienced staff and researchers.
The PCND, directed by Prof. Elizabeth Bates until 2003, is currently headed by Doris Trauner, M.D., with subcontracts at the Salk Institute for Biological Studies and SDSU. Collaborators on this project in the past 5 years include: Mark Appelbaum, Psychology UCSD; Angela Ballantyne, Neurosciences UCSD; Elizabeth Bates, Cognitive Science UCSD; Ursula Bellugi, Psychology UCSD and Salk; Rick Buxton, Radiology UCSD; Rita Ceponiene, CRL UCSD; Karen Dobkins, Psychology UCSD; Theresa Doyle, Salk; Marta Kutas, Cognitive Science UCSD; Pam Moses, Center for Human Development UCSD; Ruth Nass, New York University; Judy Reilly, SDSU and Ctr for Human Development/CRL UCSD; Marty Sereno, Cognitive Science UCSD; Joan Stiles, Cognitive Science UCSD; Doris Trauner, Neurosciences UCSD; Jeanne Townsend, Neurosciences UCSD; Beverly Wulfeck, SDSU & CRL UCSD.
The scientific mission of PCND is to investigate the brain bases of language, cognition, perception, memory, and communication from birth through adolescence. The studies focus on typically developing children, as well as children with language impairments, early brain injury, Williams Syndrome, Down Syndrome or autism spectrum disorders using a wide range of methods. Behavioral methods range from naturalistic observations (mother and infant playing with toys on the floor) to computer-based studies that measure real-time processing of linguistic and non-linguistic information. Parallel brain imaging studies (fMRI), event-related brain potentials (ERP) and magnetic resonance imaging (MRI) are also yielding exciting new findings regarding the interaction between experience and brain development. Since its inception, PCND has tested and processed data for over a thousand subjects, most with multiple data points and including many with data from all the imaging modalities.
Over the past five years, these CRL researchers have sought profiles of association and dissociation across behavioral domains; collected cross-sectional and longitudinal data from 7-year-olds to adolescents; conducted "on-line" studies of the temporal microstructure of language, attention, and cognition; attempted to link these behavioral profiles and developmental trajectories to specific indices of brain structure and brain function; and explored the nature and limits of neural specialization for language and other aspects of cognition, as well as the alternative forms of organization in the mental and neural processes responsible for language and cognition after early damage.
In brief, PCND researchers find that (1) every clinical population has its own pattern of associations and dissociations across behavioral domains. Children with Williams Syndrome, for example, demonstrate relative strengths in several aspects of language and in face processing, but are markedly impaired in visual-spatial skills. Some children with language impairment (LI), by contrast, have poorly functioning language skills throughout childhood and adolescence together with subtle deficits in the non-verbal domain. (2) Cross-sectional and longitudinal studies indicate that these patterns of abilities and deficits seem to change over time: Children with focal lesions, for example, may have language skills that are delayed early on, but fall within the normal range by the time they are 8 years old. At the same time, these same children continue to demonstrate subtle but persistent visual-spatial deficits well into adolescence. (3) While most school-age children with LI “know” their grammar, they nonetheless find it difficult to use that knowledge efficiently, in “off-line” situations (e.g., the ability to tell a coherent and well-formed story) and in “on-line” language processing tasks. (4) Different brain areas seem to be active in adults, typically developing children, and children with LI performing the same language task. (5) A most remarkable and important finding in the PCND studies of children with focal brain lesions is that the developing brain is highly plastic: intellectual function, language, and other non-verbal cognitive skills (e.g., facial recognition) seem to be within the normal range despite lesions which in adult strokes would severely impair these functions.
PCND investigators also have funding from NIDCD (April 2004-March 2009) to conduct longitudinal studies of the relationship between early brain injury and language development from birth to age 5 using state-of-the-art methods for structural imaging and lesion-symptom mapping.
These investigators, along with some new colleagues, have submitted an exciting renewal proposal to NINDS that includes a Neuro-Imaging Core using state-of-the-art neural imaging techniques pioneered by Prof. Anders Dale (Radiology). Also joining is Prof. Halgren, a recognized leader in the field of magnetoencephalography and intracranial electrophysiological recordings.
The general goal of the newly proposed studies is to investigate the inter-relationships among neuroanatomic structures, neurocognitive systems, and processing in normal and abnormal development. One series of experiments examines how early levels of processing (sensory-perceptual functions, attention, working memory) give rise to higher-order social, linguistic, and visuospatial proficiencies (or the lack thereof). In so doing, the neural underpinnings of distinctive cognitive profiles can be identified. This is especially important (i.e., of clinical relevance) given that children with diverse developmental disorders may present with similar behavioral profiles. Both remediation and intervention are best undertaken with knowledge of exactly what the core deficits are (i.e., sensory, attentional, neural, etc.).
Maria Polinsky (Professor of Linguistics) has worked in two major areas during the past five years; the first area was discussed under 1.1 above.
M. Polinsky and students are also highly respected for their work on the mental representation of language under incomplete acquisition—in healthy subjects who switch from their home childhood language to the dominant language in the society, thus failing to achieve complete mastery of what starts out as L1 (so-called “heritage languages”). Two common views on such subjects’ knowledge of language are (1) that they retain a random collection of “language chunks” learned because they came in early and were very frequent, and (2) that adult incomplete learners are “frozen” at the interruption stage. Polinsky’s research with several languages (Russian, Korean, and Vietnamese) shows that both of these conceptions are wrong. Neither the frequency of chunks nor the age of acquisition has a direct bearing on the end-state grammar of an incomplete learner. Instead, this grammar emerges as a rule-governed system, albeit one very different from the grammar of a fully acquired language. These findings have led to several experimental projects on categorization, gender (this subproject was also linked to the picture naming project and the aphasia project), passives, and complex structures. The current research has just been funded by a Title VI grant at UCLA, in which Professor Polinsky is responsible for the subproject “Grammar of Heritage Languages” (2006-2009). The first stage of this project will include two intensive workshops on heritage languages, to be conducted in summer 2007. Na-Young Kwon, a graduate student in linguistics, has worked closely with Professor Polinsky in developing testing tools for assessing heritage speaker proficiency; part of that research was funded by UCSD course development grants, a subgrant from the Language Institute at UCLA (2005), and NSF.
The heritage language research program has developed in close collaboration with practical work in the teaching of heritage languages to UCSD students through the Heritage Language Program (HLP) in the Department of Linguistics. In 2004-2006, Prof. Polinsky and her students held regular workshops for HLP instructors on the structure of heritage languages.
One of the crucial questions in understanding the structure of human language has to do with correlations between individual structural properties. Once such correlations are empirically established, they need to be accounted for in a principled manner. In a project funded by NSF (2001-2006), Maria Polinsky has been investigating this general question with respect to a family of complex sentence structures such as control. In the English sentence Kim wants to travel, a single noun (Kim) is understood to be the subject of both wants and the subordinate verb travel, but appears only once, as the subject of the main verb wants. Theoretical linguists have long been interested in phenomena that involve such “missing” elements across sentences. The challenge has been to constrain the theory enough to allow for such missing elements and at the same time to account for all the variation in such constructions across languages. Polinsky and her students (Kertz, Fukuda, Kwon) approach a range of phenomena in which the subject of one verb is also understood as the subject of another verb.
Until recently, linguistic theory required that the subject of the main verb be expressed and the subject of the subordinate verb remain unexpressed. Polinsky and collaborators have discovered and analyzed a previously unknown pattern of the opposite type, which would resemble a sentence like [___ wants [Kim to travel]], which she calls “backward control”. She argues for the existence of this phenomenon in minority endangered languages of the northeast Caucasus, on which she has done extensive fieldwork. She has developed a theoretical account that can handle this unusual phenomenon, forcing a major reassessment of existing syntactic theory. Predictions regarding the types of languages where this phenomenon can be found follow from the theory, and these have led this group of CRL researchers to more extensive fieldwork in several other languages. These empirical results have significantly advanced the theoretical account of complex structures and also allowed Polinsky and her students to design several experiments (based on Korean and Japanese). The project has resulted in a continually expanding database of complex structures hosted on a dedicated server at CRL (http://accent.ucsd.edu/). Polinsky’s group is currently preparing a new proposal to NSF building on the results of this project, to be submitted in January 2007.
Jeff Elman (Cognitive Science) and co-investigators Ken McRae (University of Western Ontario) and Mary Hare (Bowling Green State University) have received funding from NIMH to research the role of probabilistic usage in guiding expectancies during language comprehension; the role of verb meaning in determining structurally based expectancies; the organization and structure of semantic memory; and the role of event knowledge. What makes this research remarkable and unique is that the same group of researchers employs three different research paradigms – corpus analysis, experimental psycholinguistics, and computational modeling – to address these issues. The corpus analysis provides information (statistics) about what actual language users do, thereby providing guidance in stimulus development for the experimental studies. The experimental data in turn provide testbeds for computational models that make concrete and explicit what the relevant factors are and how they interact to influence on-line word and sentence processing. The bottom line is a well-grounded and testable theory of language processing. This work highlights the CRL perspective of observing what language users actually do and think rather than merely assuming it, as is often the case. Analyses of large corpora, for example, provided estimates of the frequency of occurrence of certain grammatical structures: the frequency of usage of each of 17 possible complement structures following verbs; the frequency of three broad sets of syntactic structures (subject and object clefts, subject and object relative clauses, and active and passive sentences); and the frequency of occurrence of various word orders (SV, OV, VO, VS, etc.). With these statistics this CRL group was able to predict (and test) some of the factors that were important in determining actual usage.
For example, it was discovered and then empirically shown that even when the use of the complementizer “that” is optional (He admitted the doctor was lying / He admitted that the doctor was lying), many different factors, including verb sense (admit: ‘let in’ vs. ‘concede’), influence whether or not “that” is actually used. Overall, this CRL group is breaking new psycholinguistic ground with their three-pronged methodological approach and with their questioning of long-held assumptions about what factors are important for how language is understood. Indeed, they have even questioned the canonical view of what a word is – viewing it as an active player in meaning construction rather than a passive object for storage, retrieval, and integration. The repercussions of this new take on the word will be felt for years to come. This CRL group’s work also emphasizes the obvious but hitherto under-appreciated role of events (agents, patients, actions, locations) in the functional organization of semantic memory and its online deployment during the construction of sentence meaning.
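The kind of corpus statistic described above – how often the optional complementizer “that” appears after a verb like admit – can be sketched in a few lines of code. The snippet below is a minimal illustration, not the team’s actual analysis tools: the corpus is a handful of invented sentences, and the heuristic simply inspects the word immediately following the verb.

```python
import re

# Toy corpus of invented examples; the real studies used large text corpora.
corpus = [
    "He admitted that the doctor was lying.",
    "He admitted the doctor was lying.",
    "She admitted that she had been wrong.",
    "They admitted the error was theirs.",
    "He admitted the patient to the ward.",  # 'let in' sense, not a clause
]

def that_usage(sentences, verb="admitted"):
    """Count how often 'that' immediately follows the verb.

    A crude heuristic: a real analysis would also need to separate
    verb senses (e.g., admit 'let in' vs. 'concede'), since sense
    is one of the factors shown to influence 'that'-omission.
    """
    with_that = without_that = 0
    for s in sentences:
        match = re.search(rf"\b{verb}\s+(\w+)", s.lower())
        if not match:
            continue
        if match.group(1) == "that":
            with_that += 1
        else:
            without_that += 1
    return with_that, without_that

used, omitted = that_usage(corpus)
print(f"'that' used: {used}, omitted: {omitted}")  # 'that' used: 2, omitted: 3
```

Note that the last sentence (the ‘let in’ sense of admit) is counted as an omission even though no clausal complement is present – exactly the kind of confound that makes verb-sense disambiguation necessary in the real analyses.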
Both Professor Carol Padden and Professor Rachel Mayberry are affiliates in the Visual Language and Visual Learning (VL2) Center, one of six Science of Learning Centers funded by the National Science Foundation. VL2, centered at Gallaudet University, Washington, D.C., brings together deaf and hearing researchers and educators from a variety of disciplines and institutions to study how language and literacy develop in deaf individuals.
VL2’s mission is to gain a greater understanding of the conditions that influence the acquisition of language and knowledge through the visual modality.
The knowledge gained will help improve education for deaf students and contribute to the understanding of how learning occurs through the visual pathway for all individuals, deaf and hearing. Professor Padden’s VL2 study is “The Role of Gesture in Learning”; Professor Mayberry’s is “Early Literacy Interactions between Deaf Mothers and Their Deaf Children.”
HOW DO GESTURES AFFECT LEARNING?
Among hearing, non-signing children, gesture plays a significant role in learning, although it is not fully understood why gesture has this “benefit.” One reason may be that when speech and gesture are produced together, information is conveyed in two distinct dimensions: a discrete representational format (speech) and a continuous, imagistic representational format (gesture). Sign languages have been shown to use both discrete and imagistic representational formats (Liddell, 1988). If the “benefit” of gesture is that it adds a spatial and imagistic element to problem solving and other cognitive tasks, then signers should show a clear benefit from using sign language. Alternatively, gesture may have a benefit because it divides the cognitive load between the manual and the oral modalities. If this were the case, then the benefit of gesture found in hearing children would not accrue to signers, because they use principally one modality.
This study by Professor Padden and Post-doctoral Researcher Melissa Herzig will evaluate the two hypotheses with deaf children who are native signers of American Sign Language by repeating studies carried out with hearing children.
Carol Padden, UCSD, and Wendy Sandler, University of Haifa, Israel, Co-PIs
Rita Ceponiene, PI; Doris Trauner, Co-PI; Jeanne Townsend & Amy Spilkin, Co-Investigators
Rachel Mayberry, PI